9,800 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to generate histograms using YugabyteDB (PostgreSQL-compatible)
This provides an example of how to generate frequency histograms using YugabyteDB.
Disambiguation
Step2: Define the query to compute the histogram
Step3: Fetch the histogram data into a pandas dataframe
Step4: Histogram plotting
The first plot is a histogram with the event counts (number of events per bin).
The second plot is a histogram of the event frequencies (number of events per bin normalized by the sum of the events). | Python Code:
# connect to PostgreSQL using psycopg2
# !pip install psycopg2-binary
import psycopg2
# Connect to an existing database and create the test table
with psycopg2.connect("dbname=yugabyte user=yugabyte host=localhost port=5433") as yb_conn:
cur = yb_conn.cursor()
# use this drop statement if you need to recreate the table
# cur.execute("DROP TABLE data")
cur.execute("CREATE TABLE data as select random()*100 random_value from generate_series(1, 100);")
Explanation: How to generate histograms using YugabyteDB (PostgreSQL-compatible)
This provides an example of how to generate frequency histograms using YugabyteDB.
Disambiguation: we refer here to computing histograms of table data, rather than histograms of the columns statistics used by the cost based optimizer.
Author and contacts: [email protected]
Setup and prerequisites
This is how you can set up a YugabyteDB instance for testing using a Docker image:
docker run --name yb -p5433:5433 -p7000:7000 yugabytedb/yugabyte:latest yugabyted start --daemon=false
connect to YugabyteDB:
- DB is on port 5433
- WebUI is on port 7000
This is how you set up a YugabyteDB test cluster:
docker network create -d bridge yb
docker run -d --name yb0 --hostname yb0 --net=yb -p5433:5433 -p7000:7000 yugabytedb/yugabyte:latest yugabyted start --daemon=false --listen yb0 --tserver_flags="ysql_enable_auth=false"
wait and check logs
docker logs -f yb0
start node 2 and 3 (follow the pattern for more nodes)
docker run -d --name yb1 --hostname yb1 --net=yb -p5434:5433 yugabytedb/yugabyte:latest yugabyted start --daemon=false --listen yb1 --tserver_flags="ysql_enable_auth=false" --join yb0
docker run -d --name yb2 --hostname yb2 --net=yb -p5435:5433 yugabytedb/yugabyte:latest yugabyted start --daemon=false --listen yb2 --tserver_flags="ysql_enable_auth=false" --join yb0
connect to YugabyteDB:
- Connect to any node, for example yb0 on port 5433
- WebUI is on port 7000
Install the Python library for connecting to PostgreSQL
pip install psycopg2-binary
Create the test table
End of explanation
table_name = "data" # table or temporary view containing the data
value_col = "random_value" # column name on which to compute the histogram
min = -20 # min: minimum value in the histogram
max = 90 # maximum value in the histogram
bins = 11 # number of histogram buckets to compute
step = (max - min) / bins
query = f"""
with hist as (
select
width_bucket({value_col}, {min}, {max}, {bins}) as bucket,
count(*) as cnt
from {table_name}
group by bucket
),
buckets as (
select generate_series as bucket from generate_series(1,{bins})
)
select
bucket, {min} + (bucket - 0.5) * {step} as value,
coalesce(cnt, 0) as count
from hist right outer join buckets using(bucket)
order by bucket
"""
Explanation: Define the query to compute the histogram
End of explanation
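As a quick sanity check of the bucket arithmetic used in the query above, the bucket index for a single value can be reproduced in plain Python. This assumes width_bucket(x, low, high, n) maps values in [low, high) onto n equal-width buckets numbered 1 to n (with 0 and n+1 reserved for out-of-range values), which is how PostgreSQL-compatible databases document it; the value 42 below is just an illustrative sample.
# Reproduce, for one value, the bucket that width_bucket(42, -20, 90, 11) assigns in the query above
low, high, n_buckets = -20, 90, 11
x = 42                                          # any value inside [low, high)
bucket_width = (high - low) / n_buckets         # 10.0 for these settings
bucket = int((x - low) // bucket_width) + 1     # -> 7, matching width_bucket's 1-based numbering
print(bucket)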
import pandas as pd
# query YugabyteDB using yb_conn and put the result into a pandas DataFrame
with psycopg2.connect("dbname=yugabyte user=yugabyte host=localhost port=5433") as yb_conn:
hist_pandasDF = pd.read_sql(query, con=yb_conn)
# Description
#
# bucket: the bucket number, ranging from 1 to bins (inclusive)
# value: midpoint value of the given bucket
# count: number of values in the bucket
hist_pandasDF
# Optionally normalize the event count into a frequency
# dividing by the total number of events
hist_pandasDF["frequency"] = hist_pandasDF["count"] / sum(hist_pandasDF["count"])
hist_pandasDF
Explanation: Fetch the histogram data into a pandas dataframe
End of explanation
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
f, ax = plt.subplots()
# histogram data
x = hist_pandasDF["value"]
y = hist_pandasDF["count"]
# bar plot
ax.bar(x, y, width = 3.0, color='red')
ax.set_xlabel("Bucket values")
ax.set_ylabel("Event count")
ax.set_title("Distribution of event counts")
# Label for the resonances spectrum peaks
txt_opts = {'horizontalalignment': 'center',
'verticalalignment': 'center',
'transform': ax.transAxes}
plt.show()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
f, ax = plt.subplots()
# histogram data
x = hist_pandasDF["value"]
y = hist_pandasDF["frequency"]
# bar plot
ax.bar(x, y, width = 3.0, color='blue')
ax.set_xlabel("Bucket values")
ax.set_ylabel("Event frequency")
ax.set_title("Distribution of event frequencies")
# Label for the resonances spectrum peaks
txt_opts = {'horizontalalignment': 'center',
'verticalalignment': 'center',
'transform': ax.transAxes}
plt.show()
Explanation: Histogram plotting
The first plot is a histogram with the event counts (number of events per bin).
The second plot is a histogram of the event frequencies (number of events per bin normalized by the sum of the events).
End of explanation |
9,801 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 3
Step1: We can now import the deepchem package to play with.
Step2: MoleculeNet Overview
In the last two tutorials we loaded the Delaney dataset of molecular solubilities. Let's load it one more time.
Step3: Notice that the loader function we invoke dc.molnet.load_delaney lives in the dc.molnet submodule of MoleculeNet loaders. Let's take a look at the full collection of loaders available for us
Step4: The set of MoleculeNet loaders is actively maintained by the DeepChem community and we work on adding new datasets to the collection. Let's see how many datasets there are in MoleculeNet today
Step5: MoleculeNet Dataset Categories
There's a lot of different datasets in MoleculeNet. Let's do a quick overview of the different types of datasets available. We'll break datasets into different categories and list loaders which belong to those categories. More details on each of these datasets can be found at https
Step6: We have one task in this dataset which corresponds to the measured log solubility in mol/L. Let's now take a look at datasets
Step7: As we mentioned previously, we see that datasets is a tuple of 3 datasets. Let's split them out.
Step8: Let's peek into one of the datapoints in the train dataset.
Step9: Note that this is a dc.feat.mol_graphs.ConvMol object produced by dc.feat.ConvMolFeaturizer. We'll say more about how to control choice of featurization shortly. Finally let's take a look at the transformers field
Step10: So we see that one transformer was applied, the dc.trans.NormalizationTransformer.
After reading through this description so far, you may be wondering what choices are made under the hood. As we've briefly mentioned previously, datasets can be processed with different choices of "featurizers". Can we control the choice of featurization here? In addition, how was the source dataset split into train/valid/test as three different datasets?
You can use the 'featurizer' and 'splitter' keyword arguments and pass in different strings. Common possible choices for 'featurizer' are 'ECFP', 'GraphConv', 'Weave' and 'smiles2img' corresponding to the dc.feat.CircularFingerprint, dc.feat.ConvMolFeaturizer, dc.feat.WeaveFeaturizer and dc.feat.SmilesToImage featurizers. Common possible choices for 'splitter' are None, 'index', 'random', 'scaffold' and 'stratified' corresponding to no split, dc.splits.IndexSplitter, dc.splits.RandomSplitter, dc.splits.SingletaskStratifiedSplitter. We haven't talked much about splitters yet, but intuitively they're a way to partition a dataset based on different criteria. We'll say more in a future tutorial.
Instead of a string, you also can pass in any Featurizer or Splitter object. This is very useful when, for example, a Featurizer has constructor arguments you can use to customize its behavior. | Python Code:
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e
!pip install --pre deepchem
Explanation: Tutorial 3: An Introduction To MoleculeNet
One of the most powerful features of DeepChem is that it comes "batteries included" with datasets to use. The DeepChem developer community maintains the MoleculeNet [1] suite of datasets which maintains a large collection of different scientific datasets for use in machine learning applications. The original MoleculeNet suite had 17 datasets mostly focused on molecular properties. Over the last several years, MoleculeNet has evolved into a broader collection of scientific datasets to facilitate the broad use and development of scientific machine learning tools.
These datasets are integrated with the rest of the DeepChem suite so you can conveniently access them through functions in the dc.molnet submodule. You've already seen a few examples of these loaders as you've worked through the tutorial series. The full documentation for the MoleculeNet suite is available in our docs [2].
[1] Wu, Zhenqin, et al. "MoleculeNet: a benchmark for molecular machine learning." Chemical science 9.2 (2018): 513-530.
[2] https://deepchem.readthedocs.io/en/latest/moleculenet.html
Colab
This tutorial and the rest in this sequence can be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Setup
To run DeepChem within Colab, you'll need to run the following installation commands. This will take about 5 minutes to run to completion and install your environment. You can of course run this tutorial locally if you prefer. In that case, don't run these cells since they will download and install Anaconda on your local machine.
End of explanation
import deepchem as dc
dc.__version__
Explanation: We can now import the deepchem package to play with.
End of explanation
tasks, datasets, transformers = dc.molnet.load_delaney(featurizer='GraphConv', splitter='random')
Explanation: MoleculeNet Overview
In the last two tutorials we loaded the Delaney dataset of molecular solubilities. Let's load it one more time.
End of explanation
[method for method in dir(dc.molnet) if "load_" in method ]
Explanation: Notice that the loader function we invoke dc.molnet.load_delaney lives in the dc.molnet submodule of MoleculeNet loaders. Let's take a look at the full collection of loaders available for us
End of explanation
len([method for method in dir(dc.molnet) if "load_" in method ])
Explanation: The set of MoleculeNet loaders is actively maintained by the DeepChem community and we work on adding new datasets to the collection. Let's see how many datasets there are in MoleculeNet today
End of explanation
tasks
Explanation: MoleculeNet Dataset Categories
There are a lot of different datasets in MoleculeNet. Let's do a quick overview of the different types of datasets available. We'll break the datasets into different categories and list the loaders which belong to those categories. More details on each of these datasets can be found at https://deepchem.readthedocs.io/en/latest/moleculenet.html. The original MoleculeNet paper [1] provides details about a subset of these datasets. We've marked these datasets as "V1" below. All remaining datasets are "V2" and not documented in the older paper.
Quantum Mechanical Datasets
MoleculeNet's quantum mechanical datasets contain various quantum mechanical property prediction tasks. The current set of quantum mechanical datasets includes QM7, QM7b, QM8, QM9. The associated loaders are
dc.molnet.load_qm7: V1
dc.molnet.load_qm7b_from_mat: V1
dc.molnet.load_qm8: V1
dc.molnet.load_qm9: V1
Physical Chemistry Datasets
The physical chemistry dataset collection contain a variety of tasks for predicting various physical properties of molecules.
dc.molnet.load_delaney: V1. This dataset is also referred to as ESOL in the original paper.
dc.molnet.load_sampl: V1. This dataset is also referred to as FreeSolv in the original paper.
dc.molnet.load_lipo: V1. This dataset is also referred to as Lipophilicity in the original paper.
dc.molnet.load_thermosol: V2.
dc.molnet.load_hppb: V2.
dc.molnet.load_hopv: V2. This dataset is drawn from a recent publication [3]
Chemical Reaction Datasets
These datasets hold chemical reaction datasets for use in computational retrosynthesis / forward synthesis.
dc.molnet.load_uspto
Biochemical/Biophysical Datasets
These datasets are drawn from various biochemical/biophysical datasets that measure things like the binding affinity of compounds to proteins.
dc.molnet.load_pcba: V1
dc.molnet.load_nci: V2.
dc.molnet.load_muv: V1
dc.molnet.load_hiv: V1
dc.molnet.load_ppb: V2.
dc.molnet.load_bace_classification: V1. This loader loads the classification task for the BACE dataset from the original MoleculeNet paper.
dc.molnet.load_bace_regression: V1. This loader loads the regression task for the BACE dataset from the original MoleculeNet paper.
dc.molnet.load_kaggle: V2. This dataset is from Merck's drug discovery kaggle contest and is described in [4].
dc.molnet.load_factors: V2. This dataset is from [4].
dc.molnet.load_uv: V2. This dataset is from [4].
dc.molnet.load_kinase: V2. This dataset is from [4].
Molecular Catalog Datasets
These datasets provide molecular datasets which have no associated properties beyond the raw SMILES formula or structure. These types of datasets are useful for generative modeling tasks.
dc.molnet.load_zinc15: V2
dc.molnet.load_chembl: V2
dc.molnet.load_chembl25: V2
Physiology Datasets
These datasets measure physiological properties of how molecules interact with human patients.
dc.molnet.load_bbbp: V1
dc.molnet.load_tox21: V1
dc.molnet.load_toxcast: V1
dc.molnet.load_sider: V1
dc.molnet.load_clintox: V1
dc.molnet.load_clearance: V2.
Structural Biology Datasets
These datasets contain 3D structures of macromolecules along with associated properties.
dc.molnet.load_pdbbind: V1
Microscopy Datasets
These datasets contain microscopy image datasets, typically of cell lines. These datasets were not in the original MoleculeNet paper.
dc.molnet.load_bbbc001: V2
dc.molnet.load_bbbc002: V2
dc.molnet.load_cell_counting: V2
Materials Properties Datasets
These datasets compute properties of various materials.
dc.molnet.load_bandgap: V2
dc.molnet.load_perovskite: V2
dc.molnet.load_mp_formation_energy: V2
dc.molnet.load_mp_metallicity: V2
[3] Lopez, Steven A., et al. "The Harvard organic photovoltaic dataset." Scientific data 3.1 (2016): 1-7.
[4] Ramsundar, Bharath, et al. "Is multitask deep learning practical for pharma?." Journal of chemical information and modeling 57.8 (2017): 2068-2076.
MoleculeNet Loaders Explained
All MoleculeNet loader functions take the form dc.molnet.load_X. Loader functions return a tuple of arguments (tasks, datasets, transformers). Let's walk through each of these return values and explain what we get:
tasks: This is a list of task-names. Many datasets in MoleculeNet are "multitask". That is, a given datapoint has multiple labels associated with it. These correspond to different measurements or values associated with this datapoint.
datasets: This field is a tuple of three dc.data.Dataset objects (train, valid, test). These correspond to the training, validation, and test set for this MoleculeNet dataset.
transformers: This field is a list of dc.trans.Transformer objects which were applied to this dataset during processing.
This is abstract so let's take a look at each of these fields for the dc.molnet.load_delaney function we invoked above. Let's start with tasks.
End of explanation
datasets
Explanation: We have one task in this dataset which corresponds to the measured log solubility in mol/L. Let's now take a look at datasets:
End of explanation
train, valid, test = datasets
train
valid
test
Explanation: As we mentioned previously, we see that datasets is a tuple of 3 datasets. Let's split them out.
End of explanation
train.X[0]
Explanation: Let's peek into one of the datapoints in the train dataset.
End of explanation
transformers
Explanation: Note that this is a dc.feat.mol_graphs.ConvMol object produced by dc.feat.ConvMolFeaturizer. We'll say more about how to control choice of featurization shortly. Finally let's take a look at the transformers field:
End of explanation
tasks, datasets, transformers = dc.molnet.load_delaney(featurizer="ECFP", splitter="scaffold")
(train, valid, test) = datasets
train
train.X[0]
Explanation: So we see that one transformer was applied, the dc.trans.NormalizationTransformer.
After reading through this description so far, you may be wondering what choices are made under the hood. As we've briefly mentioned previously, datasets can be processed with different choices of "featurizers". Can we control the choice of featurization here? In addition, how was the source dataset split into train/valid/test as three different datasets?
You can use the 'featurizer' and 'splitter' keyword arguments and pass in different strings. Common possible choices for 'featurizer' are 'ECFP', 'GraphConv', 'Weave' and 'smiles2img' corresponding to the dc.feat.CircularFingerprint, dc.feat.ConvMolFeaturizer, dc.feat.WeaveFeaturizer and dc.feat.SmilesToImage featurizers. Common possible choices for 'splitter' are None, 'index', 'random', 'scaffold' and 'stratified' corresponding to no split, dc.splits.IndexSplitter, dc.splits.RandomSplitter, dc.splits.SingletaskStratifiedSplitter. We haven't talked much about splitters yet, but intuitively they're a way to partition a dataset based on different criteria. We'll say more in a future tutorial.
Instead of a string, you also can pass in any Featurizer or Splitter object. This is very useful when, for example, a Featurizer has constructor arguments you can use to customize its behavior.
End of explanation |
9,802 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Electric Machinery Fundamentals 5th edition
Chapter 6 (Code examples)
Example 6-5 (d)
Creates a plot of the torque-speed curve of the induction motor as depicted in Figure 6-23.
Note
Step1: First, initialize the values needed in this program.
Step2: Calculate the Thevenin voltage and impedance from Equations 7-41a
Step3: Now calculate the torque-speed characteristic for many slips between 0 and 1.
Step4: Calculate torque for original rotor resistance using
Step5: Calculate torque for doubled rotor resistance
Step6: Plot the torque-speed curve | Python Code:
%pylab notebook
Explanation: Electric Machinery Fundamentals 5th edition
Chapter 6 (Code examples)
Example 6-5 (d)
Creates a plot of the torque-speed curve of the induction motor as depicted in Figure 6-23.
Note: You should first click on "Cell → Run All" in order that the plots get generated.
Import the PyLab namespace (provides a set of useful commands and constants like Pi)
End of explanation
r1 = 0.641 # Stator resistance
x1 = 1.106 # Stator reactance
r2 = 0.332 # Rotor resistance
x2 = 0.464 # Rotor reactance
xm = 26.3 # Magnetization branch reactance
v_phase = 460 / sqrt(3) # Phase voltage
n_sync = 1800 # Synchronous speed (r/min)
w_sync = n_sync * 2*pi/60 # Synchronous speed (rad/s)
Explanation: First, initialize the values needed in this program.
End of explanation
v_th = v_phase * ( xm / sqrt(r1**2 + (x1 + xm)**2) )
z_th = ((1j*xm) * (r1 + 1j*x1)) / (r1 + 1j*(x1 + xm))
r_th = real(z_th)
x_th = imag(z_th)
Explanation: Calculate the Thevenin voltage and impedance from Equations 7-41a:
$$ V_{TH} = V_\phi \frac{X_M}{\sqrt{R_1^2 + (X_1 + X_M)^2}} $$
and 7-43:
$$ Z_{TH} = \frac{jX_m (R_1 + jX_1)}{R_1 + j(X_1 + X_M)} $$
End of explanation
s = linspace(0, 1, 50) # Slip
s[0] = 0.001 # avoid divide-by-zero problems
nm = (1 - s) * n_sync # mechanical speed
Explanation: Now calculate the torque-speed characteristic for many slips between 0 and 1.
End of explanation
t_ind1 = ((3 * v_th**2 * r2/s) /
(w_sync * ((r_th + r2/s)**2 + (x_th + x2)**2)))
Explanation: Calculate torque for original rotor resistance using:
$$ \tau_\text{ind} = \frac{3 V_{TH}^2 R_2 / s}{\omega_\text{sync}[(R_{TH} + R_2/s)^2 + (X_{TH} + X_2)^2]} $$
End of explanation
t_ind2 = ((3 * v_th**2 * 2*r2/s) /
(w_sync * ((r_th + 2*r2/s)**2 + (x_th + x2)**2)))
Explanation: Calculate torque for doubled rotor resistance:
End of explanation
rc('text', usetex=True) # enable LaTeX commands for plot
plot(nm, t_ind2,'k--',
nm, t_ind1,'b',
lw=2)
xlabel(r'$\mathbf{n_{m}}\ [rpm]$')
ylabel(r'$\mathbf{\tau_{ind}}\ [Nm]$')
title ('Induction motor torque-speed characteristic')
legend ((r'Doubled $R_{2}$','Original $R_{2}$'), loc = 3);
grid()
Explanation: Plot the torque-speed curve:
End of explanation |
9,803 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step2: Assignment 3b
Step3: Encoding issues with txt files
For Windows users, the file “AnnaKarenina.txt” gets the encoding cp1252.
In order to open the file, you have to add encoding='utf-8', i.e.,
python
a_path = 'some path on your computer.txt'
with open(a_path, mode='r', encoding='utf-8')
Step4: 2.b) Store the function in the Python module utils.py. Import it in analyze.py.
Edit analyze.py so that
Step5: Please note that Exercises 3 and 4, respectively, are designed to be difficult. You will have to combine what you have learnt so far to complete these exercises.
Exercise 3
Let's compare the books based on the statistics. Create a dictionary stats2book_with_highest_value in analyze.py with four keys | Python Code:
%%capture
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip
!unzip Data.zip -d ../
!unzip images.zip -d ./
!unzip Extra_Material.zip -d ../
!rm Data.zip
!rm Extra_Material.zip
!rm images.zip
Explanation: <a href="https://colab.research.google.com/github/cltl/python-for-text-analysis/blob/colab/Assignments-colab/ASSIGNMENT_3b.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
# Downloading data - you get this for free :-)
import requests
import os
def download_book(url):
"""Download book given a url to a book in .txt format and return it as a string."""
text_request = requests.get(url)
text = text_request.text
return text
book_urls = dict()
book_urls['HuckFinn'] = 'http://www.gutenberg.org/cache/epub/7100/pg7100.txt'
book_urls['Macbeth'] = 'http://www.gutenberg.org/cache/epub/1533/pg1533.txt'
book_urls['AnnaKarenina'] = 'http://www.gutenberg.org/files/1399/1399-0.txt'
if not os.path.isdir('../Data/books/'):
os.mkdir('../Data/books/')
for name, url in book_urls.items():
text = download_book(url)
with open('../Data/books/'+name+'.txt', 'w', encoding='utf-8') as outfile:
outfile.write(text)
Explanation: Assignment 3b: Writing your own nlp program
Due: Friday the 1st of October 2021 14:30.
Please submit your assignment (notebooks of parts 3a and 3b + additional files) as a single .zip file using Canvas (Assignments --> Assignment 3)
Please name your zip file with the following naming convention: ASSIGNMENT_3_FIRSTNAME_LASTNAME.zip
IMPORTANTE NOTE:
* The students who follow the Bachelor version of this course, i.e., the course Introduction to Python for Humanities and Social Sciences (L_AABAALG075) as part of the minor Digital Humanities, do not have to do Exercises 3 and 4 of Assignment 3b
* The other students, i.e., who follow the Master version of course, which is Programming in Python for Text Analysis (L_AAMPLIN021), are required to do Exercises 3 and 4 of Assignment 3b
If you have questions about this topic, please contact us ([email protected]). Questions and answers will be collected on Piazza, so please check if your question has already been answered first.
In this part of the assignment, we will carry out our own little text analysis project. The goal is to gain some insights into longer texts without having to read them all in detail.
This part of the assignment builds on some notions that have been revised in part A of the assignment. Please feel free to go back to part A and reuse your code whenever possible.
The goals of this part are:
divide a problem into smaller sub-problems and test code using small examples
doing text analysis and writing results to a file
combining small functions into bigger functions
Tip: The assignment is split into four steps, which are divided into smaller steps. Instead of doing everything step by step, we highly recommend you read all sub-steps of a step first and then start coding. In many cases, the sub-steps are there to help you split the problem into manageable sub-problems, but it is still good to keep the overall goal in mind.
Preparation: Data collection
In the directory ../Data/books/, you should find three .txt files. If not, you can use the following cell to download them. Also, feel free to look at the code to learn how to download .txt files from the web.
We defined a function called download_book which downloads a book in .txt format. Then we define a dictionary with names and urls. We loop through the dictionary, download each book and write it to a file stored in the directory books in the current working directory. You don't need to do anything - just run the cell and the files will be downloaded to your computer.
End of explanation
import nltk
nltk.download('punkt')
from nltk.tokenize import sent_tokenize, word_tokenize
text = 'Python is a programming language. It was created by Guido van Rossum.'
for sent in sent_tokenize(text):
print('SENTENCE', sent)
tokens = word_tokenize(sent)
print('TOKENS', tokens)
Explanation: Encoding issues with txt files
For Windows users, the file “AnnaKarenina.txt” gets the encoding cp1252.
In order to open the file, you have to add encoding='utf-8', i.e.,
python
a_path = 'some path on your computer.txt'
with open(a_path, mode='r', encoding='utf-8'):
# process file
Exercise 1
Was the download successful? Let's start writing code! Please create the following two Python modules:
Python module analyze.py This module you will call from the command line
Python module utils.py This module will contain your helper functions
More precisely, please create two files, analyze.py and utils.py, which are both placed in the same directory as this notebook. The two files are empty at this stage of the assignment.
1.a) Write a function called get_paths and store it in the Python module utils.py
The function get_paths:
* takes one positional parameter called input_folder
* the function stores all paths to .txt files in the input_folder in a list
* the function returns a list of strings, i.e., each string is a file path
Once you've created the function and stored it in utils.py:
* Import the function into analyze.py, using from utils import get_paths
* Call the function inside analyze.py (input_folder="../Data/books")
* Assign the output of the function to a variable and print this variable.
* call analyze.py from the command line to test it
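If you want to check your approach, one possible minimal sketch of get_paths is shown below; it uses the glob module and is just an illustration of the idea, not the only acceptable solution.
import glob
import os

def get_paths(input_folder):
    """Return a list of paths to all .txt files in input_folder."""
    return glob.glob(os.path.join(input_folder, '*.txt'))

print(get_paths('../Data/books'))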
Exercise 2
2.a) Let's get a little bit of an overview of what we can find in each text. Write a function called get_basic_stats.
The function get_basic_stats:
* has one positional parameter called txt_path which is the path to a txt file
* reads the content of the txt file into a string
* Computes the following statistics:
* The number of sentences
* The number of tokens
* The size of the vocabulary used (i.e. unique tokens)
* the number of chapters/acts:
* count occurrences of 'CHAPTER' in HuckFinn.txt
* count occurrences of 'Chapter ' (with the space) in AnnaKarenina.txt
* count occurrences of 'ACT' in Macbeth.txt
* return a dictionary with four key:value pairs, one for each statistic described above:
* num_sents
* num_tokens
* vocab_size
* num_chapters_or_acts
In order to compute the statistics, you need to perform sentence splitting and tokenization. Here is an example snippet.
End of explanation
import os
basename = os.path.basename('../Data/books/HuckFinn.txt')
book = basename.strip('.txt')
print(book)
Explanation: 2.b) Store the function in the Python module utils.py. Import it in analyze.py.
Edit analyze.py so that:
* you first call the function get_paths
* create an empty dictionary called book2stats, i.e., book2stats = {}
* Loop over the list of txt files (the output from get_paths) and call the function get_basic_stats on each file
* print the output of calling the function get_basic_stats on each file.
* update the dictionary book2stats with each iteration of the for loop.
Tip: book2stats is a dictionary mapping a book name (the key), e.g., ‘AnnaKarenina’, to a dictionary (the value) (the output from get_basic_stats)
Tip: please use the following code snippet to obtain the base name of a file path:
End of explanation
import operator
token2freq = {'a': 1000, 'the': 100, 'cow' : 5}
for token, freq in sorted(token2freq.items(),
key=operator.itemgetter(1),
reverse=True):
print(token, freq)
Explanation: Please note that Exercises 3 and 4, respectively, are designed to be difficult. You will have to combine what you have learnt so far to complete these exercises.
Exercise 3
Let's compare the books based on the statistics. Create a dictionary stats2book_with_highest_value in analyze.py with four keys:
* num_sents
* num_tokens
* vocab_size
* num_chapters_or_acts
The values are not the frequencies, but the book that has the highest value for the statistic. Make use of the book2stats dictionary to accomplish this.
Exercise 4
4.a) The statistics above already provide some insights, but we want to know a bit more about what the books are about. To do this, we want to get the 30 most frequent tokens of each book. Edit the function get_basic_stats to add one more key:value pair:
* the key is top_30_tokens
* the value is a list of the 30 most frequent words in the text.
4.b) Write the top 30 tokens (one on each line) for each file to disk using the naming top_30_[FILENAME]:
* top_30_AnnaKarenina.txt
* top_30_HuckFinn.txt
* top_30_Macbeth.txt
Example of file ("the" and "and" may not be the most frequent tokens, these are just examples):
the
and
..
The following code snippet can help you with obtaining the top 30 occurring tokens. The goal is to call the function you updated in Exercise 4a, i.e., get_basic_stats, in the file analyze.py. This also makes it possible to write the top 30 tokens to files.
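An alternative, equally valid sketch uses collections.Counter; the tokens list below is a made-up toy example, not part of the assignment data.
from collections import Counter

tokens = ['the', 'and', 'the', 'cow', 'the', 'and']   # toy example only
top_30_tokens = [token for token, freq in Counter(tokens).most_common(30)]
print(top_30_tokens)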
End of explanation |
9,804 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Columns to remove
Step1: Run decision tree
Step2: Check variance inflation factors
If the VIF is equal to 1 there is no multicollinearity among factors, but if the VIF is greater than 1, the predictors may be moderately correlated. The output above shows that the VIF for the Publication and Years factors are about 1.5, which indicates some correlation, but not enough to be overly concerned about. A VIF between 5 and 10 indicates high correlation that may be problematic. And if the VIF goes above 10, you can assume that the regression coefficients are poorly estimated due to multicollinearity.
Step3: Nested cross validation
Often we want to tune the parameters of a model (for example, C in a support vector machine). That is, we want to find the value of a parameter that minimizes our loss function. The best way to do this is cross validation
Step4: Create Inner Cross Validation (For Parameter Tuning)
This is our inner cross validation. We will use this to hunt for the best parameters for C, the penalty for misclassifying a data point. GridSearchCV will conduct steps 1-6 listed at the top of this tutorial.
Step5: The code below isn't necessary for parameter tuning using nested cross validation, however to demonstrate that our inner cross validation grid search can find the best value for the parameter C, we will run it once here
Step6: Create Outer Cross Validation (For Model Evaluation)
With our inner cross validation constructed, we can use cross_val_score to evaluate the model with a second (outer) cross validation.
The code below splits the data into three folds, running the inner cross validation on two of the folds (merged together) and then evaluating the model on the third fold. This is repeated three times so that every fold is used for testing once. | Python Code:
plt.scatter(train['avg_blocktime_6'], train['avg_blocktime_60'])
Explanation: Columns to remove:
Possible leakage: avg_price_6, avg_price_60, avg_gasUsed_b_6, avg_gasUsed_t_6, avg_gasUsed_b_60, avg_gasUsed_t_60, avg_price_60
Poor performance: gasLimit_t, gasUsed_t, newContract, avg_uncle_count_6, avg_txcnt_second_6, type_enc, avg_uncle_count_60
Candidate features:
avg_difficulty_60, r2=0.99
avg_tx_count_60, r2=0.5
avg_blocktime_6,
avg_tx_count_6, r2=0.99
avg_difficulty_6,
avg_blocktime_60
avg_txcnt_second_60, r2=0.12
mv, r2=0.75
End of explanation
# select features
sub_cols = ['gasLimit_t',
'gasUsed_t',
'newContract',
'avg_blocktime_6',
'avg_tx_count_6',
'avg_uncle_count_6',
'avg_difficulty_6',
'avg_txcnt_second_6',
'avg_gasUsed_t_6',
'avg_price_6',
'avg_blocktime_60',
'avg_gasUsed_b_60',
'avg_uncle_count_60',
'avg_difficulty_60',
'avg_txcnt_second_60',
'avg_gasUsed_t_60',
'avg_price_60',
'mv',
'type_enc']
sub_train = train[sub_cols]
X = sub_train.values
y = y_label
X_train, X_test, y_train, y_test = train_test_split(X, y)
matrix_rank(X), len(sub_cols)
X.shape, y.shape
Explanation: Run decision tree
End of explanation
for i, col in enumerate(sub_train.columns):
print('VIF col {}: {}'.format(col,variance_inflation_factor(X,i)))
dt = tree.DecisionTreeRegressor()
dt.fit(X_train, y_train)
y_pred = dt.predict(X_test)
r2_score(y_test, y_pred)
mean_squared_error(y_test, y_pred)
print('Mean CV r2_score: {}'.format(np.mean(cross_val_score(dt, X_test, y_test, scoring='r2', cv=3))))
plt.scatter(y_test, y_pred)
#plt.xlim(0,1000)
#plt.ylim(0,1000)
plt.xlabel('y_test')
plt.ylabel('y_pred')
Explanation: Check variance inflation factors
If the VIF is equal to 1 there is no multicollinearity among factors, but if the VIF is greater than 1, the predictors may be moderately correlated. The output above shows that the VIF for the Publication and Years factors are about 1.5, which indicates some correlation, but not enough to be overly concerned about. A VIF between 5 and 10 indicates high correlation that may be problematic. And if the VIF goes above 10, you can assume that the regression coefficients are poorly estimated due to multicollinearity.
End of explanation
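For reference, the VIF of feature j equals 1 / (1 - R_j^2), where R_j^2 is the explained variance obtained when regressing column j on all the other columns. The rough sketch below computes that same quantity by hand, reusing the feature matrix X defined above; it is only an illustration of the definition, not a replacement for the helper used in the cell above.
import numpy as np
from sklearn.linear_model import LinearRegression

def vif_manual(X, j):
    others = np.delete(X, j, axis=1)                 # every column except j
    model = LinearRegression().fit(others, X[:, j])
    r_squared = model.score(others, X[:, j])
    return 1.0 / (1.0 - r_squared)                   # large when column j is well explained by the rest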
# Create a scaler object
#sc = StandardScaler()
# Fit the scaler to the feature data and transform
#X_std = sc.fit_transform(X)
Explanation: Nested cross validation
Often we want to tune the parameters of a model (for example, C in a support vector machine). That is, we want to find the value of a parameter that minimizes our loss function. The best way to do this is cross validation:
Set the parameter you want to tune to some value.
Split your data into K 'folds' (sections).
Train your model using K-1 folds using the parameter value.
Test your model on the remaining fold.
Repeat steps 3 and 4 so that every fold is the test data once.
Repeat steps 1 to 5 for every possible value of the parameter.
Report the parameter that produced the best result.
However, as Cawley and Talbot point out in their 2010 paper, since we used the test set to both select the values of the parameter and evaluate the model, we risk optimistically biasing our model evaluations. For this reason, if a test set is used to select model parameters, then we need a different test set to get an unbiased evaluation of that selected model.
One way to overcome this problem is to have nested cross validations. First, an inner cross validation is used to tune the parameters and select the best model. Second, an outer cross validation is used to evaluate the model selected by the inner cross validation.
Standardize data
End of explanation
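The commented-out cells below follow this recipe step by step. As a compact, hedged sketch of the same nested pattern for the decision tree used earlier (reusing the X and y defined above; the max_depth grid and cv=3 are illustrative choices):
import numpy as np
from sklearn import tree
from sklearn.model_selection import GridSearchCV, cross_val_score

inner_cv = GridSearchCV(estimator=tree.DecisionTreeRegressor(),
                        param_grid={'max_depth': np.arange(1, 7)})   # inner loop: tune max_depth
outer_scores = cross_val_score(inner_cv, X, y, scoring='r2', cv=3)   # outer loop: evaluate the tuned model
print(outer_scores.mean())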
# Create a list of 10 candidate values for the C parameter
#max_depth_candidates = dict(max_depth=np.arange(1, 7, 1))
# Create a gridsearch object with the decision tree regressor and the max_depth value candidates
#reg = GridSearchCV(estimator=tree.DecisionTreeRegressor(), param_grid=max_depth_candidates)
Explanation: Create Inner Cross Validation (For Parameter Tuning)
This is our inner cross validation. We will use this to hunt for the best parameters for C, the penalty for misclassifying a data point. GridSearchCV will conduct steps 1-6 listed at the top of this tutorial.
End of explanation
# Fit the cross validated grid search on the data
#reg.fit(X_std, y)
# Show the best value for C
#reg.best_estimator_.max_depth
Explanation: The code below isn't necessary for parameter tuning using nested cross validation, however to demonstrate that our inner cross validation grid search can find the best value for the parameter C, we will run it once here:
End of explanation
#print('Mean CV r2_score: {}'.format(np.mean(cross_val_score(reg, X_std, y, scoring='r2', cv=3))))
Explanation: Create Outer Cross Validation (For Model Evaluation)
With our inner cross validation constructed, we can use cross_val_score to evaluate the model with a second (outer) cross validation.
The code below splits the data into three folds, running the inner cross validation on two of the folds (merged together) and then evaluating the model on the third fold. This is repeated three times so that every fold is used for testing once.
End of explanation |
9,805 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load from here
Step1: https
Step2: Manual overwrites | Python Code:
dc=pd.read_csv('dcg.csv')
Explanation: Load from here
End of explanation
pop2=pd.read_csv('worldcities.csv')
pop2
def ccc(c,country):
if c=='Moscow region (oblast)': return 'Moscow'
if c=='Rostov-on-Don': return 'Rostov-na-Donu'
if c=='Gdansk, Gdynia and Sopot': return 'Gdansk+Gdynia'
if c=='Nizhny Novgorod': return 'Nizhniy Novgorod'
if c=="Kazan": return "Kazan'"
if c=="Birmingham-Wolverhampton": return "Birmingham+Wolverhampton"
if c=='Frankfurt am Main': return 'Frankfurt'
if c=='Reggio Calabria': return 'Reggio di Calabria'
if c=='Leeds-Bradford': return 'Leeds+Bradford'
if c=='Newcastle-Sunderland': return 'Newcastle+Sunderland'
if c=='Freiburg': return 'Freiburg im Breisgau'
if c=='Ruhr region west': return 'Duisburg+Dusseldorf+Bonn+Cologne'
if c=='Ruhr region east': return 'Essen+Dortmund+Wuppertal'
if c=='Cape Coral-Fort Myers': return 'Cape Coral+Fort Myers'
if c=='Oxnard-Thousand Oaks-Ventura': return 'Oxnard+Thousand Oaks'
if c=='Dallas-Fort Worth': return 'Dallas+Fort Worth'
if c=='Katowice urban area': return 'Katowice'
if c=='Gothenburg': return 'Goteborg'
if c=='Odessa': return 'Odesa'
if c=='Kitchener-Waterloo': return 'Kitchener'
if c=='Omaha-Council Bluffs': return 'Omaha+Council Bluffs'
if c=='Greensboro-High Point': return 'Greensboro+High Point'
if c=='Den Bosch': return "'s-Hertogenbosch"
if c=='Hull': return 'Kingston upon Hull'
if c=='Swansea': return 'Abertawe'
if c=='Newcastle':
if country=='United Kingdom': return 'Newcastle upon Tyne'
return 'Newcastle'
if c=='Seville': return 'Sevilla'
if c=='Ghent': return 'Gent'
return c
def cnc(c):
if c=='United States of America': return 'United States'
return c
pops={}
missing=[]
for i in dc.index:
city=dc.loc[i]['City']
country0=dc.loc[i]['Country']
country=cnc(country0)
if country not in pop2['country'].unique():
print(country)
pop2a=pop2[pop2['country']==country]
index=city+', '+country0
for c in ccc(city,country).split('+'):
c=ccc(c,country)
if c in ['San Sebastian']:
print(city,c)
missing.append(city+', '+country)
elif c in pop2a['city'].values:
if index not in pops:pops[index]=0
pops[index]+=pop2a.drop_duplicates('city').set_index('city').loc[c]['population']
elif c in pop2['city_ascii'].values:
if index not in pops:pops[index]=0
pops[index]+=pop2a.drop_duplicates('city_ascii').set_index('city_ascii').loc[c]['population']
elif c in pop2['admin_name'].values:
if index not in pops:pops[index]=0
pops[index]+=pop2a.drop_duplicates('admin_name').set_index('admin_name').loc[c]['population']
else:
print(city,c,index)
missing.append(index)
fixed={}
for c in missing:
i=c.split(',')
url3='http://population.city/'+i[1].lower().strip()+'/'+i[0].lower().strip()+'/'
#print(url3)
response = requests.get(url3)
soup = BeautifulSoup(response.content)
em=soup.findAll('em')
if em:
fixed[c]=float(em[0].text[:-1].replace(' ',''))
print('OK',c)
else:
print('ERROR',c)
pops.update(fixed)
#manual from Wiki
pops.update({
'Palma de Mallorca, Spain':393256.0,
'Las Palmas, Spain':384315.0,
'Reggio Emilia, Italy':172326.0,
'Padua, Italy':214000.0,
'Almere, Netherlands':207904.0,
'San Sebastian, Spain':186095.0
})
#manual from Wiki for unflagged
pops.update({
'Preston, United Kingdom':141251.0,
'Birmingham-Wolverhampton, United Kingdom': 1157579.0+259376.0,
'Mersin, Turkey':1038940.0
})
Explanation: https://simplemaps.com/data/world-cities
http://worldpopulationreview.com/world-cities/
https://population.un.org/wup/Download/ -> Urban Agglomerations
End of explanation
pops['New Delhi, India']=20268785.0
dp=pd.DataFrame(pops,index=['pop']).T
dc['cc']=dc['City']+', '+dc['Country']
dcp=dc.set_index('cc').join(dp).reset_index()
len(dcp)
dcp[np.isnan(dcp['pop'])]
len(dcp)
#drop manual duplicates
dcp=dcp[~(dcp['City'].isin(['Duisburg','Dusseldorf','Bonn','Cologne']))]
len(dcp)
dcr=dcp[dcp['Country'].isin(['Austria','Belgium','Bulgaria','Croatia','Cyprus','Czechia','Denmark',
'Estonia','Finland','France','Germany','Greece','Hungary','Ireland','Italy',
'Latvia','Lithuania','Luxembourg','Malta','Netherlands','Poland',
'Portugal','Romania','Slovakia','Slovenia','Spain','Sweden'])]\
.sort_values('Rank by filter').reset_index()
dcr['eu']=dcr.index+1
dcw=dcp.join(dcr.set_index('City')[['eu']],on='City').fillna(0)
dcx=dcw.set_index('cc').T.to_dict()
dcx['Washington, United States of America']['lat']=38.930378
dcx['Washington, United States of America']['lon']=-77.057839
dcx['Ruhr region west, Germany']['lat']=51.347795
dcx['Ruhr region west, Germany']['lon']=6.699458
dcx['Ruhr region east, Germany']['lat']=51.484191
dcx['Ruhr region east, Germany']['lon']=7.457383
dcx['Quebec, Canada']['lat']=46.799437
dcx['Quebec, Canada']['lon']=-71.264797
pd.DataFrame(dcx).T.to_csv('dcx.csv')
Explanation: Manual overwrites
End of explanation |
9,806 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Polynomial Regression and Overfitting
Step1: In this notebook we want to discuss both <em style="color
Step2: Let us plot the data. We will use colors to distinguish between x1 and x2. The pairs $(x_1, y)$ are plotted in blue, while the pairs $(x_2, y)$ are plotted in yellow.
Step3: We want to split the data into a <em style="color
Step4: We will split the data at a ratio of $4
Step5: In order to build a <em style="color
Step6: We use the function linear_regression to build a model for our data. Note that this function uses only the <em style="color
Step7: Notice that the explained variance is a lot worse on the test set. Let's plot the linear model. The coefficients are stored in M.intercept_ and M.coef_.
Step8: In order to improve the explained variance of our model, we extend it with polynomial features, i.e. we add features like $x_1^2$ and $x_1\cdot x_2$ etc.
Step9: Let us fit this quadratic model.
Step10: The accuracy on the training set and on the test set have both increased. Let us plot the model.
Step11: Plotting the regression curve starts to get tedious.
Step12: Obviously, the quadratic curve is a much better match than the linear model. Let's try to use higher order polynomials.
$\texttt{polynomial}(n)$ creates a polynomial in the variables a and b that contains all terms of the form $\Theta[k] \cdot a^i \cdot b^j$ where $i+j \leq n$.
Step13: Let's check this out for $n=4$.
Step14: The function $\texttt{polynomial_vector}(n, M)$ takes a number $n$ and a model $M$. It returns a pair of vectors that can be used to plot the nonlinear model.
Step15: The function $\texttt{plot_nth_degree_polynomial}(n)$ creates a polynomial regression model of degree $n$. It plots both the data and the polynomial model.
Step16: Let us test this for the polynomial regression model of degree $4$.
Step17: This seems to be working and the explained variance has improved. Let's try to use even higher order polynomials. Hopefully, we can get a $100\%$ explained variance.
Step18: It turns out that we can get $100\%$ of explained variance, but only for the training set. The explained variance of the test set has decreased and apparently the curve is starting to get wiggly.
Ridge Regression
Step19: Let's try to use a polynomial of degree 6 but without regularization.
Step20: This looks like the model that we had found before. Let us try to add a bit of regularization.
Step21: Now the model is much smoother and the explained variance has also increased considerably on the test set. | Python Code:
import numpy as np
import sklearn.linear_model as lm
Explanation: Polynomial Regression and Overfitting
End of explanation
np.random.seed(42)
N = 20 # number of data points
X1 = np.array([k for k in range(N)])
X2 = np.array([k + 0.2 * (np.random.rand() - 0.5) for k in range(N)])
Y = np.sqrt(X1) # Y is the square root of X1
X1 = np.reshape(X1, (N, 1)) # turn X1 into an N x 1 matrix
X2 = np.reshape(X2, (N, 1)) # turn X2 into an N x 1 matrix
X = np.hstack([X1, X2]) # combine X1 and X2 into an N x 2 matrix
X
Explanation: In this notebook we want to discuss both <em style="color:blue;">polynomial regression</em> and <em style="color:blue;">overfitting</em>.
One possible reason causing overfitting is a correlation between features. Let us create a dataset with two feature x1 and x2 that are more or less the same. Actually, x2 is x1 plus some random noise. The dependent variable y is the square root of x1.
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(15, 12))
sns.set(style='whitegrid')
plt.title('A Regression Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.xticks(np.arange(0.0, N, step=1.0))
plt.yticks(np.arange(0.0, np.sqrt(N) + 1, step=0.25))
plt.scatter(X1, Y, color='b')
plt.scatter(X2, Y, color='y')
Explanation: Let us plot the data. We will use colors to distinguish between x1 and x2. The pairs $(x_1, y)$ are plotted in blue, while the pairs $(x_2, y)$ are plotted in yellow.
End of explanation
from sklearn.model_selection import train_test_split
Explanation: We want to split the data into a <em style="color:blue;">training set</em> and a <em style="color:blue;">test set</em>.
The <em style="color:blue;">training set</em> will be used to compute the parameters of our model, while the
<em style="color:blue;">testing set</em> is only used to check the <em style="color:blue;">accuracy</em>. SciKit-Learn has a predefined method
sklearn.model_selection.train_test_split that can be used to randomly split data into a training set and a test set.
End of explanation
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=1)
X_test
Explanation: We will split the data at a ratio of $4:1$, i.e. $80\%$ of the data will be used for training, while the remaining $20\%$ is used to test the accuracy.
End of explanation
def linear_regression(X_train, Y_train, X_test, Y_test):
M = lm.LinearRegression()
M.fit(X_train, Y_train)
train_score = M.score(X_train, Y_train)
test_score = M.score(X_test , Y_test)
return M, train_score, test_score
Explanation: In order to build a <em style="color:blue;">linear regression</em> model, we import the module linear_model from SciKit-Learn.
The function $\texttt{linear_regression}(\texttt{X_train}, \texttt{Y_train}, \texttt{X_test}, \texttt{Y_test})$ takes a feature matrix $\texttt{X_train}$ and a corresponding vector $\texttt{Y_train}$ and computes a linear regression model $M$ that best fits these data. Then, the explained variance of the model is computed both for the training set and for the test set.
End of explanation
M, train_score, test_score = linear_regression(X_train, Y_train, X_test, Y_test)
train_score, test_score
Explanation: We use the function linear_regression to build a model for our data. Note that this function uses only the <em style="color:blue;">training data</em> to create the model. The <em style="color:blue;">test data</em> is only used for evaluating the model.
End of explanation
ϑ0 = M.intercept_
ϑ1, ϑ2 = M.coef_
plt.figure(figsize=(15, 10))
sns.set(style='whitegrid')
plt.title('A Regression Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.xticks(np.arange(0.0, N + 1, step=1.0))
plt.yticks(np.arange(0.0, np.sqrt(N) + 1, step=0.25))
plt.scatter(X_train[:,0], Y_train, color='b')
plt.scatter(X_test [:,0], Y_test , color='g')
plt.plot([0, N], [ϑ0, ϑ0 + (ϑ1 + ϑ2) * N], c='r')
#plt.savefig('sqrt-linear.pdf')
Explanation: Notice that the explained variance is a lot worse on the test set. Let's plot the linear model. The coefficients are stored in M.intercept_ and M.coef_.
End of explanation
from sklearn.preprocessing import PolynomialFeatures
quadratic = PolynomialFeatures(2, include_bias=False)
X_train_quadratic = quadratic.fit_transform(X_train)
X_test_quadratic = quadratic.fit_transform(X_test)
quadratic.get_feature_names(['x1', 'x2'])
X_test_quadratic
Explanation: In order to improve the explained variance of our model, we extend it with polynomial features, i.e. we add features like $x_1^2$ and $x_1\cdot x_2$ etc.
End of explanation
M, train_score, test_score = linear_regression(X_train_quadratic, Y_train, X_test_quadratic, Y_test)
train_score, test_score
Explanation: Let us fit this quadratic model.
End of explanation
ϑ0 = M.intercept_
ϑ1, ϑ2, ϑ3, ϑ4, ϑ5 = M.coef_
Explanation: The accuracy on the training set and on the test set have both increased. Let us plot the model.
End of explanation
a = np.arange(0.0, N+1, 0.01)
b = ϑ0 + (ϑ1 + ϑ2 ) * a + (ϑ3 + ϑ4 + ϑ5) * a**2
plt.figure(figsize=(15, 8))
sns.set(style='darkgrid')
plt.title('A Regression Problem: Second Order Terms included')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.xticks(np.arange(0.0, N + 1, step=1.0))
plt.yticks(np.arange(0.0, np.sqrt(N) + 1, step=0.25))
plt.scatter(X_train[:,0], Y_train, color='b')
plt.scatter(X_test [:,0], Y_test , color='g')
plt.plot(a, b, c='r')
#plt.savefig('sqrt-quadratic.pdf')
Explanation: Plotting the regression curve starts to get tedious.
End of explanation
def polynomial(n):
sum = ' Θ[0]'
cnt = 0
for k in range(1, n+1):
for i in range(0, k+1):
cnt += 1
sum += f' + Θ[{cnt}] * a**{k-i} * b**{i}'
if k < n:
sum += '\\\n'
return sum
Explanation: Obviously, the quadratic curve is a much better match than the linear model. Let's try to use higher order polynomials.
$\texttt{polynomial}(n)$ creates a polynomial in the variables a and b that contains all terms of the form $\Theta[k] \cdot a^i \cdot b^j$ where $i+j \leq n$.
End of explanation
print(polynomial(4))
Explanation: Let's check this out for $n=4$.
End of explanation
def polynomial_vector(n, M):
Θ = [M.intercept_] + list(M.coef_)
a = np.reshape(X1, (N, ))
b = np.reshape(X2, (N, ))
return 0.5*(a + b), eval(polynomial(n))
Explanation: The function $\texttt{polynomial_vector}(n, M)$ takes a number $n$ and a model $M$. It returns a pair of vectors that can be used to plot the nonlinear model.
End of explanation
def plot_nth_degree_polynomial(n):
poly = PolynomialFeatures(n, include_bias=False)
X_train_poly = poly.fit_transform(X_train)
X_test_poly = poly.fit_transform(X_test)
M, s1, s2 = linear_regression(X_train_poly, Y_train, X_test_poly, Y_test)
print('The explained variance on the training set is:', s1)
print('The explained variance on the test set is:', s2)
a, b = polynomial_vector(n, M)
plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('A Regression Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.xticks(np.arange(0.0, N + 1, step=1.0))
plt.yticks(np.arange(0.0, 2*np.sqrt(N), step=0.25))
plt.scatter(X_train[:,0], Y_train, color='b')
plt.scatter(X_test [:,0], Y_test , color='g')
plt.plot(a, b, c='r')
#plt.savefig('sqrt-' + str(n) + '.pdf')
Explanation: The function $\texttt{plot_nth_degree_polynomial}(n)$ creates a polynomial regression model of degree $n$. It plots both the data and the polynomial model.
End of explanation
plot_nth_degree_polynomial(4)
Explanation: Let us test this for the polynomial regression model of degree $4$.
End of explanation
plot_nth_degree_polynomial(6)
Explanation: This seems to be working and the explained variance has improved. Let's try to use even higher order polynomials. Hopefully, we can get a $100\%$ explained variance.
End of explanation
def ridge_regression(X_train, Y_train, X_test, Y_test, alpha):
M = lm.Ridge(alpha, solver='svd')
M.fit(X_train, Y_train)
train_score = M.score(X_train, Y_train)
test_score = M.score(X_test , Y_test)
return M, train_score, test_score
def plot_nth_degree_polynomial_ridge(n, alpha):
poly = PolynomialFeatures(n, include_bias=False)
X_train_poly = poly.fit_transform(X_train)
X_test_poly = poly.fit_transform(X_test)
M, s1, s2 = ridge_regression(X_train_poly, Y_train, X_test_poly, Y_test, alpha)
print('The explained variance on the training set is:', s1)
print('The explained variance on the test set is:', s2)
a, b = polynomial_vector(n, M)
plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('A Regression Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.xticks(np.arange(0.0, N + 1, step=1.0))
plt.yticks(np.arange(0.0, 2*np.sqrt(N), step=0.25))
plt.scatter(X_train[:,0], Y_train, color='b')
plt.scatter(X_test [:,0], Y_test , color='g')
plt.plot(a, b, c='r')
#plt.savefig('sqrt-' + str(n) + 'ridge.pdf')
Explanation: It turns out that we can get $100\%$ of explained variance, but only for the training set. The explained variance of the test set has decreased and apparently the curve is starting to get wiggly.
Ridge Regression
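For reference, ridge regression fits the same linear model as before but adds an $L_2$ penalty on the coefficients: scikit-learn's Ridge minimizes (up to its internal conventions)
$$ \|Y - X\theta\|_2^2 + \alpha \cdot \|\theta\|_2^2, $$
so larger values of alpha shrink the coefficients towards zero and produce smoother regression curves, while alpha = 0 reduces to ordinary least squares.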
End of explanation
plot_nth_degree_polynomial_ridge(6, 0.0)
Explanation: Let's try to use a polynomial of degree 6 but without regularization.
End of explanation
plot_nth_degree_polynomial_ridge(6, 0.05)
Explanation: This looks like the model that we had found before. Let us try to add a bit of regularization.
End of explanation
plot_nth_degree_polynomial_ridge(6, 100000)
Explanation: Now the model is much smoother and the explained variance has also increased considerably on the test set.
End of explanation |
9,807 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Brian part 1
Step1: In Python, the notation ''' is used to begin and end a multi-line string. So the equations are just a string with one line per equation. The equations are formatted with standard mathematical notation, with one addition. At the end of a line you write
Step2: First off, ignore that start_scope() at the top of the cell. You'll see that in each cell in this tutorial where we run a simulation. All it does is make sure that any Brian objects created before the function is called aren't included in the next run of the simulation.
So, what has happened here? Well, the command run(100*ms) runs the simulation for 100 ms. We can see that this has worked by printing the value of the variable v before and after the simulation.
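For concreteness, here is a minimal sketch of the kind of cell this paragraph describes, written against the Brian 2 API; the 10 ms time constant and the 100 ms run come from the surrounding text, and the names used (start_scope, NeuronGroup, run) are the Brian functions the tutorial refers to.
from brian2 import *

start_scope()
tau = 10*ms
eqs = '''
dv/dt = (1-v)/tau : 1
'''
G = NeuronGroup(1, eqs, method='exact')
print('Before v = %s' % G.v[0])   # variables start at 0
run(100*ms)
print('After v = %s' % G.v[0])    # v has relaxed towards 1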
Step3: By default, all variables start with the value 0. Since the differential equation is dv/dt=(1-v)/tau we would expect after a while that v would tend towards the value 1, which is just what we see. Specifically, we'd expect v to have the value 1-exp(-t/tau). Let's see if that's right.
Step4: Good news, the simulation gives the value we'd expect!
Now let's take a look at a graph of how the variable v evolves over time using StateMonitor().
Step5: This time we only ran the simulation for 30 ms so that we can see the behaviour better. It looks like it's behaving as expected, but let's just check that analytically by plotting the expected behaviour on top.
Step6: As you can see, the blue (Brian) and dashed red (analytic solution) lines coincide.
In this example, we used the object StateMonitor object. This is used to record the values of a neuron variable while the simulation runs. The first two arguments are the group to record from, and the variable you want to record from. We also specify record=0. This means that we record all values for neuron 0. We have to specify which neurons we want to record because in large simulations with many neurons it usually uses up too much RAM to record the values of all neurons.
Now try modifying the equations and parameters and see what happens in the cell below.
Step7: Adding spikes
So far we haven't done anything neuronal, just played around with differential equations. Now let's start adding spiking behaviour.
Step8: We've added two new keywords to the NeuronGroup declaration
Step9: The SpikeMonitor object takes the group whose spikes you want to record as its argument and stores the spike times in the variable t. Let's plot those spikes on top of the other figure to see that it's getting it right.
Step10: Here we've used the axvline command from matplotlib to draw a red, dashed vertical line at the time of each spike recorded by the SpikeMonitor.
Now try changing the strings for threshold and reset in the cell above to see what happens.
Refractoriness
A common feature of neuron models is refractoriness. This means that after the neuron fires a spike it becomes refractory for a certain duration and cannot fire another spike until this period is over. Here's how we do that in Brian using (unless refractory).
Step11: As you can see in this figure, after the first spike, v stays at 0 for around 5 ms before it resumes its normal behaviour. To do this, we've done two things. Firstly, we've added the keyword refractory=5*ms to the NeuronGroup declaration. On its own, this only means that the neuron cannot spike in this period (see below), but doesn't change how v behaves. In order to make v stay constant during the refractory period, we have to add (unless refractory) to the end of the definition of v in the differential equations. What this means is that the differential equation determines the behaviour of v unless it's refractory in which case it is switched off.
Here's what would happen if we didn't include (unless refractory). Note that we've also decreased the value of tau and increased the length of the refractory period to make the behaviour clearer.
Step12: So what's going on here? The behaviour for the first spike is the same
Step13: This shows a few changes. Firstly, we've got a new variable N determining the number of neurons. Secondly, we added the statement G.v = 'rand()' before the run. What this does is initialise each neuron with a different uniform random value between 0 and 1. We've done this just so each neuron will do something a bit different. The other big change is how we plot the data in the end.
As well as the variable spikemon.t with the times of all the spikes, we've also used the variable spikemon.i which gives the corresponding neuron index for each spike, and plotted a single black dot with time on the x-axis and neuron index on the y-value. This is the standard "raster plot" used in neuroscience.
Parameters
To make these multiple neurons do something more interesting, let's introduce per-neuron parameters that don't have a differential equation attached to them.
Step14: The line v0
Step15: That's the same figure as in the previous section but with some noise added. Note how the curve has changed shape | Python Code:
from brian2 import *
%matplotlib inline
tau = 10*ms
eqs = '''
dv/dt = (1-v)/tau : 1
'''
Explanation: Introduction to Brian part 1: Neurons
Adapted from the brian2 tutorial
All Brian scripts start with the following. If you're trying this notebook out in IPython, you should start by running this cell.
Later we'll do some plotting in the notebook, so we activate inline plotting in the IPython notebook by doing this:
Units system
Brian has a system for using quantities with physical dimensions:
All of the basic SI units can be used (volt, amp, etc.) along with all the standard prefixes (m=milli, p=pico, etc.), as well as a few special abbreviations like mV for millivolt, pF for picofarad, etc.
Also note that combinations of units will work as expected:
And if you try to do something wrong like adding amps and volts, what happens?
If you haven't seen an error message in Python before, it can look a bit overwhelming, but it's actually quite simple and it's important to know how to read these because you'll probably see them quite often.
You should start at the bottom and work up. The last line gives the error type DimensionMismatchError along with a more specific message (in this case, you were trying to add together two quantities with different SI units, which is impossible).
Working upwards, each of the sections starts with a filename (e.g. C:\Users\Dan\...) with possibly the name of a function, and then a few lines surrounding the line where the error occurred (which is identified with an arrow).
The last of these sections shows the place in the function where the error actually happened. The section above it shows the function that called that function, and so on until the first section will be the script that you actually run. This sequence of sections is called a traceback, and is helpful in debugging.
If you see a traceback, what you want to do is start at the bottom and scan up the sections until you find your own file because that's most likely where the problem is. (Of course, your code might be correct and Brian may have a bug in which case, please let us know on the email support list.)
A simple model
Let's start by defining a simple neuron model. In Brian, all models are defined by systems of differential equations. Here's a simple example of what that looks like:
$$ \frac{dv}{dt} = \frac{(1-v)}{\tau}$$
Assume that $\tau = 10 ms$
End of explanation
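The unit arithmetic described above can be tried out directly. A minimal sketch, assuming the standard from brian2 import * setup that every Brian script starts with; the particular quantities are arbitrary.
from brian2 import *
print(20*volt)
print(1000*amp)
print(20*volt * 2*amp)      # combinations of units work as expected (volt * amp = watt)
print(10*mV + 20*mV)        # quantities with the same dimensions can be added
try:
    1*amp + 1*volt          # different dimensions cannot be added
except Exception as err:    # Brian raises a DimensionMismatchError here
    print('Error:', err)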
start_scope()
G = NeuronGroup(1, eqs)
Explanation: In Python, the notation ''' is used to begin and end a multi-line string. So the equations are just a string with one line per equation. The equations are formatted with standard mathematical notation, with one addition. At the end of a line you write : unit where unit is the SI unit of that variable.
Now let's use this definition to create a neuron.
In Brian, you only create groups of neurons, using the class NeuronGroup. The first two arguments when you create one of these objects are the number of neurons (in this case, 1) and the defining differential equations.
Let's see what happens if we didn't put the variable $\tau$ in the equation:
An error is raised, but why? The reason is that the differential equation is now dimensionally inconsistent. The left hand side dv/dt has units of 1/second but the right hand side 1-v is dimensionless. People often find this behaviour of Brian confusing because this sort of equation is very common in mathematics. However, for quantities with physical dimensions it is incorrect because the results would change depending on the unit you measured it in. For time, if you measured it in seconds the same equation would behave differently to how it would if you measured time in milliseconds. To avoid this, we insist that you always specify dimensionally consistent equations.
Now let's go back to the good equations and actually run the simulation.
End of explanation
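The failing case discussed above (leaving tau out of the equation) can be reproduced explicitly. A hedged sketch; depending on the Brian version the dimensionality error is raised when the group is created or when the simulation is run, so both steps sit inside the try block.
start_scope()
bad_eqs = '''
dv/dt = 1-v : 1
'''
try:
    G_bad = NeuronGroup(1, bad_eqs)
    run(10*ms)
except Exception as err:
    print('Brian complains, as expected:', err)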
print('Before v =', G.v[0])
run(100*ms)
print('After v =', G.v[0])
Explanation: First off, ignore that start_scope() at the top of the cell. You'll see that in each cell in this tutorial where we run a simulation. All it does is make sure that any Brian objects created before the function is called aren't included in the next run of the simulation.
So, what has happened here? Well, the command run(100*ms) runs the simulation for 100 ms. We can see that this has worked by printing the value of the variable v before and after the simulation.
End of explanation
print('Expected value of v =', 1-exp(-100*ms/tau))
Explanation: By default, all variables start with the value 0. Since the differential equation is dv/dt=(1-v)/tau we would expect after a while that v would tend towards the value 1, which is just what we see. Specifically, we'd expect v to have the value 1-exp(-t/tau). Let's see if that's right.
End of explanation
start_scope()
xlabel('Time (ms)')
ylabel('v');
Explanation: Good news, the simulation gives the value we'd expect!
Now let's take a look at a graph of how the variable v evolves over time using StateMonitor().
End of explanation
start_scope()
xlabel('Time (ms)')
ylabel('v')
legend(loc='best')
Explanation: This time we only ran the simulation for 30 ms so that we can see the behaviour better. It looks like it's behaving as expected, but let's just check that analytically by plotting the expected behaviour on top.
End of explanation
start_scope()
tau = 10*ms
eqs = '''
dv/dt = (sin(2*pi*100*Hz*t)-v)/tau : 1
'''
G = NeuronGroup(1, eqs, method='euler') # TODO: we shouldn't have to specify euler here
M = StateMonitor(G, 'v', record=0)
G.v = 5 # initial value
run(60*ms)
plot(M.t/ms, M.v[0])
xlabel('Time (ms)')
ylabel('v');
Explanation: As you can see, the blue (Brian) and dashed red (analytic solution) lines coincide.
In this example, we used the object StateMonitor object. This is used to record the values of a neuron variable while the simulation runs. The first two arguments are the group to record from, and the variable you want to record from. We also specify record=0. This means that we record all values for neuron 0. We have to specify which neurons we want to record because in large simulations with many neurons it usually uses up too much RAM to record the values of all neurons.
Now try modifying the equations and parameters and see what happens in the cell below.
End of explanation
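The two plotting cells above leave out the recording itself. A minimal sketch of the missing pieces, reusing tau and eqs from the first cell and the same bare plotting helpers the skeleton cells already rely on; the 30 ms run length follows the text and the colours follow the description of the figure.
start_scope()
G = NeuronGroup(1, eqs)
M = StateMonitor(G, 'v', record=0)   # record variable v of neuron 0
run(30*ms)
plot(M.t/ms, M.v[0], 'b', label='Brian')
plot(M.t/ms, 1 - exp(-M.t/tau), 'r--', label='analytic')
xlabel('Time (ms)')
ylabel('v')
legend(loc='best')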
start_scope()
xlabel('Time (ms)')
ylabel('v');
Explanation: Adding spikes
So far we haven't done anything neuronal, just played around with differential equations. Now let's start adding spiking behaviour.
End of explanation
start_scope()
print('Spike times:', )
Explanation: We've added two new keywords to the NeuronGroup declaration: threshold='v>0.8' and reset='v = 0'. What this means is that when v>0.8 we fire a spike, and immediately reset v = 0 after the spike. We can put any expression and series of statements as these strings.
As you can see, at the beginning the behaviour is the same as before until v crosses the threshold v>0.8 at which point you see it reset to 0. You can't see it in this figure, but internally Brian has registered this event as a spike. Let's have a look at that.
End of explanation
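The spiking cells above omit the model construction. A hedged sketch of what the text describes, with the threshold and reset strings plus a SpikeMonitor; the 50 ms run length is an arbitrary choice.
start_scope()
tau = 10*ms
eqs = '''
dv/dt = (1-v)/tau : 1
'''
G = NeuronGroup(1, eqs, threshold='v>0.8', reset='v = 0')
statemon = StateMonitor(G, 'v', record=0)
spikemon = SpikeMonitor(G)
run(50*ms)
print('Spike times:', spikemon.t[:])
plot(statemon.t/ms, statemon.v[0])
for t in spikemon.t:
    axvline(t/ms, ls='--', c='r', lw=3)   # mark each recorded spike
xlabel('Time (ms)')
ylabel('v')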
start_scope()
#axvline(t/ms, ls='--', c='r', lw=3)
xlabel('Time (ms)')
ylabel('v');
Explanation: The SpikeMonitor object takes the group whose spikes you want to record as its argument and stores the spike times in the variable t. Let's plot those spikes on top of the other figure to see that it's getting it right.
End of explanation
start_scope()
xlabel('Time (ms)')
ylabel('v');
Explanation: Here we've used the axvline command from matplotlib to draw a red, dashed vertical line at the time of each spike recorded by the SpikeMonitor.
Now try changing the strings for threshold and reset in the cell above to see what happens.
Refractoriness
A common feature of neuron models is refractoriness. This means that after the neuron fires a spike it becomes refractory for a certain duration and cannot fire another spike until this period is over. Here's how we do that in Brian using (unless refractory).
End of explanation
start_scope()
tau = 5*ms
axhline(0.8, ls=':', c='g', lw=3)
xlabel('Time (ms)')
ylabel('v')
print("Spike times:",)
Explanation: As you can see in this figure, after the first spike, v stays at 0 for around 5 ms before it resumes its normal behaviour. To do this, we've done two things. Firstly, we've added the keyword refractory=5*ms to the NeuronGroup declaration. On its own, this only means that the neuron cannot spike in this period (see below), but doesn't change how v behaves. In order to make v stay constant during the refractory period, we have to add (unless refractory) to the end of the definition of v in the differential equations. What this means is that the differential equation determines the behaviour of v unless it's refractory in which case it is switched off.
Here's what would happen if we didn't include (unless refractory). Note that we've also decreased the value of tau and increased the length of the refractory period to make the behaviour clearer.
End of explanation
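For completeness, a sketch of the refractory version the text describes. Note both the refractory=5*ms keyword and the (unless refractory) flag on the equation; the run length is an arbitrary choice.
start_scope()
tau = 10*ms
eqs = '''
dv/dt = (1-v)/tau : 1 (unless refractory)
'''
G = NeuronGroup(1, eqs, threshold='v>0.8', reset='v = 0', refractory=5*ms)
statemon = StateMonitor(G, 'v', record=0)
run(50*ms)
plot(statemon.t/ms, statemon.v[0])
xlabel('Time (ms)')
ylabel('v')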
start_scope()
N = 100
xlabel('Time (ms)')
ylabel('Neuron index');
Explanation: So what's going on here? The behaviour for the first spike is the same: v rises to 0.8 and then the neuron fires a spike at time 8 ms before immediately resetting to 0. Since the refractory period is now 15 ms this means that the neuron won't be able to spike again until time 8 + 15 = 23 ms. Immediately after the first spike, the value of v now instantly starts to rise because we didn't specify (unless refractory) in the definition of dv/dt. However, once it reaches the value 0.8 (the dashed green line) at time roughly 8 ms it doesn't fire a spike even though the threshold is v>0.8. This is because the neuron is still refractory until time 23 ms, at which point it fires a spike.
Note that you can do more complicated and interesting things with refractoriness. See the full documentation for more details about how it works.
Multiple neurons
So far we've only been working with a single neuron. Let's do something interesting with multiple neurons.
End of explanation
start_scope()
N = 100
tau = 10*ms
v0_max = 3.
duration = 1000*ms
figure(figsize=(12,4))
subplot(121)
#plot(M.t/ms, M.i, '.k')
xlabel('Time (ms)')
ylabel('Neuron index')
subplot(122)
#plot(G.v0, M.count/duration)
xlabel('v0')
ylabel('Firing rate (sp/s)')
Explanation: This shows a few changes. Firstly, we've got a new variable N determining the number of neurons. Secondly, we added the statement G.v = 'rand()' before the run. What this does is initialise each neuron with a different uniform random value between 0 and 1. We've done this just so each neuron will do something a bit different. The other big change is how we plot the data in the end.
As well as the variable spikemon.t with the times of all the spikes, we've also used the variable spikemon.i which gives the corresponding neuron index for each spike, and plotted a single black dot with time on the x-axis and neuron index on the y-value. This is the standard "raster plot" used in neuroscience.
Parameters
To make these multiple neurons do something more interesting, let's introduce per-neuron parameters that don't have a differential equation attached to them.
End of explanation
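The raster-plot cell further above (the one labelled 'Neuron index') omits the group and the monitor. A hedged sketch of the missing pieces following the description: N neurons, a different random initial value per neuron, and spikemon.i plotted against spikemon.t. The drive towards 2 and the threshold at 1 are assumptions chosen so that every neuron fires.
start_scope()
N = 100
tau = 10*ms
eqs = '''
dv/dt = (2-v)/tau : 1
'''
G = NeuronGroup(N, eqs, threshold='v>1', reset='v=0')
G.v = 'rand()'                     # a different uniform random start value for each neuron
spikemon = SpikeMonitor(G)
run(50*ms)
plot(spikemon.t/ms, spikemon.i, '.k')
xlabel('Time (ms)')
ylabel('Neuron index')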
start_scope()
N = 100
tau = 10*ms
v0_max = 3.
duration = 1000*ms
sigma = 0.2
run(duration)
figure(figsize=(12,4))
subplot(121)
#plot(M.t/ms, M.i, '.k')
xlabel('Time (ms)')
ylabel('Neuron index')
subplot(122)
#plot(G.v0, M.count/duration)
xlabel('v0')
ylabel('Firing rate (sp/s)')
Explanation: The line v0 : 1 declares a new per-neuron parameter v0 with units 1 (i.e. dimensionless).
The line G.v0 = 'i*v0_max/(N-1)' initialises the value of v0 for each neuron varying from 0 up to v0_max. The symbol i when it appears in strings like this refers to the neuron index.
So in this example, we're driving the neuron towards the value v0 exponentially, and when v crosses the threshold v>1 the neuron fires a spike and resets. The effect is that the rate at which it fires spikes will be related to the value of v0. For v0<1 it will never fire a spike, and as v0 gets larger it will fire spikes at a higher rate. The right hand plot shows the firing rate as a function of the value of v0. This is the I-f curve of this neuron model.
Note that in the plot we've used the count variable of the SpikeMonitor: this is an array of the number of spikes each neuron in the group fired. Dividing this by the duration of the run gives the firing rate.
Stochastic neurons
Often when making models of neurons, we include a random element to model the effect of various forms of neural noise. In Brian, we can do this by using the symbol xi in differential equations. Strictly speaking, this symbol is a "stochastic differential" but you can sort of think of it as just a Gaussian random variable with mean 0 and standard deviation 1. We do have to take into account the way stochastic differentials scale with time, which is why we multiply it by tau**-0.5 in the equations below (see a textbook on stochastic differential equations for more details).
End of explanation
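Likewise, the parameter cell above (the one with v0_max and the two subplots) leaves out the model. A hedged sketch of the missing pieces following the description: a per-neuron parameter v0 : 1 initialised with 'i*v0_max/(N-1)', a drive towards v0, a threshold at v>1, and the firing rate taken as spikemon.count divided by the duration.
start_scope()
N = 100
tau = 10*ms
v0_max = 3.
duration = 1000*ms
eqs = '''
dv/dt = (v0-v)/tau : 1
v0 : 1
'''
G = NeuronGroup(N, eqs, threshold='v>1', reset='v=0')
spikemon = SpikeMonitor(G)
G.v0 = 'i*v0_max/(N-1)'            # ramp the drive from 0 up to v0_max across neurons
run(duration)
figure(figsize=(12,4))
subplot(121)
plot(spikemon.t/ms, spikemon.i, '.k')
xlabel('Time (ms)')
ylabel('Neuron index')
subplot(122)
plot(G.v0, spikemon.count/duration)
xlabel('v0')
ylabel('Firing rate (sp/s)')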
start_scope()
N = 1000
tau = 10*ms
vr = -70*mV
vt0 = -50*mV
delta_vt0 = 5*mV
tau_t = 100*ms
sigma = 0.5*(vt0-vr)
v_drive = 2*(vt0-vr)
duration = 100*ms
eqs = '''
dv/dt = (v_drive+vr-v)/tau + sigma*xi*tau**-0.5 : volt
dvt/dt = (vt0-vt)/tau_t : volt
'''
reset = '''
v = vr
vt += delta_vt0
'''
G = NeuronGroup(N, eqs, threshold='v>vt', reset=reset, refractory=5*ms)
spikemon = SpikeMonitor(G)
G.v = 'rand()*(vt0-vr)+vr'
G.vt = vt0
run(duration)
_ = hist(spikemon.t/ms, 100, histtype='stepfilled', facecolor='k', weights=ones(len(spikemon))/(N*defaultclock.dt))
xlabel('Time (ms)')
ylabel('Instantaneous firing rate (sp/s)')
Explanation: That's the same figure as in the previous section but with some noise added. Note how the curve has changed shape: instead of a sharp jump from firing at rate 0 to firing at a positive rate, it now increases in a sigmoidal fashion. This is because no matter how small the driving force the randomness may cause it to fire a spike.
End of tutorial
That's the end of this part of the tutorial. The cell below has another example. See if you can work out what it is doing and why. Try adding a StateMonitor to record the values of the variables for one of the neurons to help you understand it.
You could also try out the things you've learned in this cell.
Once you're done with that you can move on to the next tutorial on Synapses.
End of explanation |
9,808 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example notebook
Step1: Create a structure analysis object
If you include a 'mapfile' then we will use locus information to subsample just a single SNP from each locus so that the resulting data file will meet the expectations of structure that SNPs are "unlinked". If you create multiple replicates files using different random seeds then different SNPs will be selected in each rep.
Step2: Set params for the structure analysis
These values are used to generate the "mainparams" and "extraparams" files for structure.
Step3: By default the 'header' of the str file is empty
Step4: You can fill it in by filling the header attribute lists
popdata is the a priori population assignment of an individual to a population. Assignments should be non-zero integers (e.g., 1, 2, 3). Zero is reserved to mean that there is no a priori assignment. popflag indicates whether or not to use the population assignment in the analysis (1) or to leave it to be inferred (0). So in the example below seven samples have assigned populations (popflag=1), and six samples will have their population assignments inferred (popflag=0). The popdata information will only be used for the seven individuals with assigned pops.
Step5: Write the input files and run STRUCTURE
This will write the .str file (and subsample SNPs if you included a mapfile) with the header information included, and it will write a mainparams and extraparams file with the parameter settings that we entered above.
Step6: Or, submit jobs directly to the cluster
If you start an ipcluster instance (see other tutorials) you can submit structure jobs directly to the cluster and easily collect the results, like below.
Step7: Collect results and plot | Python Code:
# conda install ipyrad -c ipyrad
# conda install structure clumpp -c ipyrad
# conda install toytree -c eaton-lab
import ipyrad.analysis as ipa
import toyplot
Explanation: Example notebook: Structure with pop assignments
This notebook shows how to use the ipyrad.analysis toolkit to generate structure input files that use population information.
Required software
End of explanation
s = ipa.structure(
name="test",
workdir="analysis-structure",
data="analysis-ipyrad/ped_min10_outfiles/ped_min10.str",
mapfile="analysis-ipyrad/ped_min10_outfiles/ped_min10.snps.map",
)
Explanation: Create a structure analysis object
If you include a 'mapfile' then we will use locus information to subsample just a single SNP from each locus so that the resulting data file will meet the expectations of structure that SNPs are "unlinked". If you create multiple replicates files using different random seeds then different SNPs will be selected in each rep.
End of explanation
## set run parameters (you probably want to run >10X this long)
s.mainparams.burnin = 1000
s.mainparams.numreps = 5000
## tell structure to expect popdata & popflag
s.mainparams.popdata = 1
s.mainparams.popflag = 1
## print all mainparams
s.mainparams
## tell structure to use popinfo
s.extraparams.usepopinfo = 1
## print all other extraparams
s.extraparams
Explanation: Set params for the structure analysis
These values are used to generate the "mainparams" and "extraparams" files for structure.
End of explanation
s.header
Explanation: By default the 'header' of the str file is empty
End of explanation
## assign popdata and popflag by entering a list of values
s.popdata = [1, 3, 1, 2, 3, 2, 3, 3, 3, 3, 3, 1, 1]
s.popflag = [1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
## print the header information
s.header
Explanation: You can fill it in by filling the header attribute lists
popdata is the a priori population assignment of an individual to a population. Assignments should be non-zero integers (e.g., 1, 2, 3). Zero is reserved to mean that there is no a priori assignment. popflag indicates whether or not to use the population assignment in the analysis (1) or to leave it to be inferred (0). So in the example below seven samples have assigned populations (popflag=1), and six samples will have their population assignments inferred (popflag=0). The popdata information will only be used for the seven individuals with assigned pops.
End of explanation
## writes three input files and then call structure yourself
s.write_structure_files(kpop=3)
Explanation: Write the input files and run STRUCTURE
This will write the .str file (and subsample SNPs if you included a mapfile) with the header information included, and it will write a mainparams and extraparams file with the parameter settings that we entered above.
End of explanation
## connect to a running ipcluster instance
import ipyparallel as ipp
ipyclient = ipp.Client()
print "connected to {} cores".format(len(ipyclient))
## submit job replicates to ipyclient
s.run(kpop=3, nreps=3, ipyclient=ipyclient)
## wait for jobs to finish
ipyclient.wait()
Explanation: Or, submit jobs directly to the cluster
If you start an ipcluster instance (see other tutorials) you can submit structure jobs directly to the cluster and easily collect the results, like below.
End of explanation
## get table of summarized results
table = s.get_clumpp_table(3)
## reorder table by membership in groups
table.sort_values(by=[0, 1, 2], inplace=True)
## build barplot
canvas = toyplot.Canvas(width=500, height=250)
axes = canvas.cartesian(bounds=("10%", "90%", "10%", "45%"))
axes.bars(table)
## add labels to x-axis
ticklabels = [i for i in table.index.tolist()]
axes.x.ticks.locator = toyplot.locator.Explicit(labels=ticklabels)
axes.x.ticks.labels.angle = -60
axes.x.ticks.show = True
axes.x.ticks.labels.offset = 10
axes.x.ticks.labels.style = {"font-size": "12px"}
Explanation: Collect results and plot
End of explanation |
9,809 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Logistic Regression with PyMC3
This is a reproduction with a few slight alterations of Bayesian Log Reg by J. Benjamin Cook
Author
Step1: The Adult Data Set is commonly used to benchmark machine learning algorithms. The goal is to use demographic features, or variables, to predict whether an individual makes more than \$50,000 per year. The data set is almost 20 years old, and therefore, not perfect for determining the probability that I will make more than \$50K, but it is a nice, simple dataset that can be used to showcase a few benefits of using Bayesian logistic regression over its frequentist counterpart.
My motivation for reproducing this piece of work was to learn how to use odds ratios in Bayesian regression.
Step2: Scrubbing and cleaning
We need to remove any null entries in Income.
And we also want to restrict this study to the United States.
Step3: Exploring the data
Let us get a feel for the parameters.
* We see that age is a tailed distribution. Certainly not Gaussian!
* We don't see much of a correlation between many of the features, with the exception of Age and Age2.
* Hours worked has some interesting behaviour. How would one describe this distribution?
Step4: We see here not many strong correlations. The highest is 0.30 according to this plot. We see a weak-correlation between hours and income
(which is logical), we see a slighty stronger correlation between education and income (which is the kind of question we are answering).
The model
We will use a simple model, which assumes that the probability of making more than $50K
is a function of age, years of education and hours worked per week. We will use PyMC3
do inference.
In Bayesian statistics, we treat everything as a random variable and we want to know the posterior probability distribution of the parameters
(in this case the regression coefficients)
By Bayes' theorem, the posterior is proportional to the likelihood times the prior: $$p(\theta | D) = \frac{p(D|\theta)p(\theta)}{p(D)}$$
Because the denominator is a notoriously difficult integral, $p(D) = \int p(D | \theta) p(\theta) d \theta $ we would prefer to skip computing it. Fortunately, if we draw examples from the parameter space, with probability proportional to the height of the posterior at any given point, we end up with an empirical distribution that converges to the posterior as the number of samples approaches infinity.
What this means in practice is that we only need to worry about the numerator.
Getting back to logistic regression, we need to specify a prior and a likelihood in order to draw samples from the posterior. We could use sociological knowledge about the effects of age and education on income, but instead, let's use the default prior specification for GLM coefficients that PyMC3 gives us, which is $p(θ)=N(0,10^{12}I)$. This is a very vague prior that will let the data speak for themselves.
The likelihood is the product of n Bernoulli trials, $\prod_{i=1}^{n} p_{i}^{y_{i}} (1 - p_{i})^{1-y_{i}}$,
where $p_i = \frac{1}{1 + e^{-z_i}}$,
$z_{i} = \beta_{0} + \beta_{1}(age)_{i} + \beta_{2}(age)^{2}_{i} + \beta_{3}(educ)_{i} + \beta_{4}(hours)_{i}$ and $y_{i} = 1$ if income is greater than 50K and $y_{i} = 0$ otherwise.
With the math out of the way we can get back to the data. Here I use PyMC3 to draw samples from the posterior. The sampling algorithm used is NUTS, which is a form of Hamiltonian Monte Carlo, in which parameteres are tuned automatically. Notice, that we get to borrow the syntax of specifying GLM's from R, very convenient! I use a convenience function from above to plot the trace infromation from the first 1000 parameters.
Step5: Some results
One of the major benefits that makes Bayesian data analysis worth the extra computational effort in many circumstances is that we can be explicit about our uncertainty. Maximum likelihood returns a number, but how certain can we be that we found the right number? Instead, Bayesian inference returns a distribution over parameter values.
I'll use seaborn to look at the distribution of some of these factors.
Step6: So how do age and education affect the probability of making more than $$50K?$ To answer this question, we can show how the probability of making more than $50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models
Step7: Each curve shows how the probability of earning more than $ 50K$ changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than $50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values.
Step8: Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics
Step9: Model selection
The Deviance Information Criterion (DIC) is a fairly unsophisticated method for comparing the deviance of the likelihood across the sample traces of a model run. However, this simplicity apparently yields quite good results in a variety of cases. We'll run the model with a few changes to see what effect higher order terms have on this model.
One question that was immediately asked was what effect does age have on the model, and why should it be age^2 versus age? We'll use the DIC to answer this question.
Step10: There isn't a lot of difference between these models in terms of DIC. So our choice is fine in the model above, and there isn't much to be gained for going up to age^3 for example.
Next we look at WAIC. Which is another model selection technique. | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import pymc3 as pm
import matplotlib.pyplot as plt
import seaborn
import warnings
warnings.filterwarnings('ignore')
from collections import OrderedDict
from time import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.optimize import fmin_powell
from scipy import integrate
import theano as thno
import theano.tensor as T
def run_models(df, upper_order=5):
'''
Convenience function:
Fit a range of pymc3 models of increasing polynomial complexity.
Suggest limit to max order 5 since calculation time is exponential.
'''
models, traces = OrderedDict(), OrderedDict()
for k in range(1,upper_order+1):
nm = 'k{}'.format(k)
fml = create_poly_modelspec(k)
with pm.Model() as models[nm]:
print('\nRunning: {}'.format(nm))
pm.glm.glm(fml, df, family=pm.glm.families.Normal())
start_MAP = pm.find_MAP(fmin=fmin_powell, disp=False)
traces[nm] = pm.sample(2000, start=start_MAP, step=pm.NUTS(), progressbar=True)
return models, traces
def plot_traces(traces, retain=1000):
'''
Convenience function:
Plot traces with overlaid means and values
'''
ax = pm.traceplot(traces[-retain:], figsize=(12,len(traces.varnames)*1.5),
lines={k: v['mean'] for k, v in pm.df_summary(traces[-retain:]).iterrows()})
for i, mn in enumerate(pm.df_summary(traces[-retain:])['mean']):
ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data'
,xytext=(5,10), textcoords='offset points', rotation=90
,va='bottom', fontsize='large', color='#AA0022')
def create_poly_modelspec(k=1):
'''
Convenience function:
Create a polynomial modelspec string for patsy
'''
return ('income ~ educ + hours + age ' + ' '.join(['+ np.power(age,{})'.format(j)
for j in range(2,k+1)])).strip()
Explanation: Bayesian Logistic Regression with PyMC3
This is a reproduction with a few slight alterations of Bayesian Log Reg by J. Benjamin Cook
Author: Peadar Coyle and J. Benjamin Cook
How likely am I to make more than $50,000 US Dollars?
Exploration of model selection techniques too - I use DIC and WAIC to select the best model.
The convenience functions are all taken from Jon Sedar's work.
This example also has some explorations of the features so serves as a good example of Exploratory Data Analysis and how that can guide the model creation/ model selection process.
End of explanation
data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", header=None, names=['age', 'workclass', 'fnlwgt',
'education-categorical', 'educ',
'marital-status', 'occupation',
'relationship', 'race', 'sex',
'captial-gain', 'capital-loss',
'hours', 'native-country',
'income'])
data
Explanation: The Adult Data Set is commonly used to benchmark machine learning algorithms. The goal is to use demographic features, or variables, to predict whether an individual makes more than \$50,000 per year. The data set is almost 20 years old, and therefore, not perfect for determining the probability that I will make more than \$50K, but it is a nice, simple dataset that can be used to showcase a few benefits of using Bayesian logistic regression over its frequentist counterpart.
My motivation for reproducing this piece of work was to learn how to use odds ratios in Bayesian regression.
End of explanation
data = data[~pd.isnull(data['income'])]
data = data[data['native-country']==" United-States"]
income = 1 * (data['income'] == " >50K")
age2 = np.square(data['age'])
data = data[['age', 'educ', 'hours']]
data['age2'] = age2
data['income'] = income
income.value_counts()
Explanation: Scrubbing and cleaning
We need to remove any null entries in Income.
And we also want to restrict this study to the United States.
End of explanation
g = seaborn.pairplot(data)
# Compute the correlation matrix
corr = data.corr()
# Generate a mask for the upper triangle
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = seaborn.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
seaborn.heatmap(corr, mask=mask, cmap=cmap, vmax=.3,
linewidths=.5, cbar_kws={"shrink": .5}, ax=ax)
Explanation: Exploring the data
Let us get a feel for the parameters.
* We see that age is a tailed distribution. Certainly not Gaussian!
* We don't see much of a correlation between many of the features, with the exception of Age and Age2.
* Hours worked has some interesting behaviour. How would one describe this distribution?
End of explanation
with pm.Model() as logistic_model:
pm.glm.glm('income ~ age + age2 + educ + hours', data, family=pm.glm.families.Binomial())
trace_logistic_model = pm.sample(2000, pm.NUTS(), progressbar=True)
plot_traces(trace_logistic_model, retain=1000)
Explanation: We see here not many strong correlations. The highest is 0.30 according to this plot. We see a weak correlation between hours and income
(which is logical), and we see a slightly stronger correlation between education and income (which is the kind of question we are answering).
The model
We will use a simple model, which assumes that the probability of making more than $50K
is a function of age, years of education and hours worked per week. We will use PyMC3
to do inference.
In Bayesian statistics, we treat everything as a random variable and we want to know the posterior probability distribution of the parameters
(in this case the regression coefficients)
By Bayes' theorem, the posterior is proportional to the likelihood times the prior: $$p(\theta | D) = \frac{p(D|\theta)p(\theta)}{p(D)}$$
Because the denominator is a notoriously difficult integral, $p(D) = \int p(D | \theta) p(\theta) d \theta $ we would prefer to skip computing it. Fortunately, if we draw examples from the parameter space, with probability proportional to the height of the posterior at any given point, we end up with an empirical distribution that converges to the posterior as the number of samples approaches infinity.
What this means in practice is that we only need to worry about the numerator.
Getting back to logistic regression, we need to specify a prior and a likelihood in order to draw samples from the posterior. We could use sociological knowledge about the effects of age and education on income, but instead, let's use the default prior specification for GLM coefficients that PyMC3 gives us, which is $p(θ)=N(0,10^{12}I)$. This is a very vague prior that will let the data speak for themselves.
The likelihood is the product of n Bernoulli trials, $\prod_{i=1}^{n} p_{i}^{y_{i}} (1 - p_{i})^{1-y_{i}}$,
where $p_i = \frac{1}{1 + e^{-z_i}}$,
$z_{i} = \beta_{0} + \beta_{1}(age)_{i} + \beta_{2}(age)^{2}_{i} + \beta_{3}(educ)_{i} + \beta_{4}(hours)_{i}$ and $y_{i} = 1$ if income is greater than 50K and $y_{i} = 0$ otherwise.
With the math out of the way we can get back to the data. Here I use PyMC3 to draw samples from the posterior. The sampling algorithm used is NUTS, which is a form of Hamiltonian Monte Carlo in which parameters are tuned automatically. Notice that we get to borrow the syntax of specifying GLMs from R, which is very convenient! I use a convenience function from above to plot the trace information from the first 1000 parameters.
End of explanation
plt.figure(figsize=(9,7))
trace = trace_logistic_model[1000:]
seaborn.jointplot(trace['age'], trace['educ'], kind="hex", color="#4CB391")
plt.xlabel("beta_age")
plt.ylabel("beta_educ")
plt.show()
Explanation: Some results
One of the major benefits that makes Bayesian data analysis worth the extra computational effort in many circumstances is that we can be explicit about our uncertainty. Maximum likelihood returns a number, but how certain can we be that we found the right number? Instead, Bayesian inference returns a distribution over parameter values.
I'll use seaborn to look at the distribution of some of these factors.
End of explanation
# Linear model with hours == 50 and educ == 12
lm = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*12 +
samples['hours']*50)))
# Linear model with hours == 50 and educ == 16
lm2 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*16 +
samples['hours']*50)))
# Linear model with hours == 50 and educ == 19
lm3 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*19 +
samples['hours']*50)))
Explanation: So how do age and education affect the probability of making more than \$50K? To answer this question, we can show how the probability of making more than \$50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models: one with educ == 12 (finished high school), one with educ == 16 (finished undergrad) and one with educ == 19 (three years of grad school).
End of explanation
# Plot the posterior predictive distributions of P(income > $50K) vs. age
pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm, samples=100, color="blue", alpha=.15)
pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm2, samples=100, color="green", alpha=.15)
pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm3, samples=100, color="red", alpha=.15)
import matplotlib.lines as mlines
blue_line = mlines.Line2D(['lm'], [], color='b', label='High School Education')
green_line = mlines.Line2D(['lm2'], [], color='g', label='Bachelors')
red_line = mlines.Line2D(['lm3'], [], color='r', label='Grad School')
plt.legend(handles=[blue_line, green_line, red_line], loc='lower right')
plt.ylabel("P(Income > $50K)")
plt.xlabel("Age")
plt.show()
b = trace['educ']
plt.hist(np.exp(b), bins=20, normed=True)
plt.xlabel("Odds Ratio")
plt.show()
Explanation: Each curve shows how the probability of earning more than \$50K changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than \$50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values.
End of explanation
lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5)
print("P(%.3f < O.R. < %.3f) = 0.95"%(np.exp(3*lb),np.exp(3*ub)))
Explanation: Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics: we get to interpret credibility intervals the way we've always wanted to interpret them. We are 95% confident that the odds ratio lies within our interval!
End of explanation
models_lin, traces_lin = run_models(data, 4)
dfdic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin'])
dfdic.index.name = 'model'
for nm in dfdic.index:
dfdic.loc[nm, 'lin'] = pm.stats.dic(traces_lin[nm],models_lin[nm])
dfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly', value_name='dic')
g = seaborn.factorplot(x='model', y='dic', col='poly', hue='poly', data=dfdic, kind='bar', size=6)
Explanation: Model selection
The Deviance Information Criterion (DIC) is a fairly unsophisticated method for comparing the deviance of the likelihood across the sample traces of a model run. However, this simplicity apparently yields quite good results in a variety of cases. We'll run the model with a few changes to see what effect higher order terms have on this model.
One question that was immediately asked was what effect does age have on the model, and why should it be age^2 versus age? We'll use the DIC to answer this question.
End of explanation
dfdic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin'])
dfdic.index.name = 'model'
for nm in dfdic.index:
dfdic.loc[nm, 'lin'] = pm.stats.waic(traces_lin[nm],models_lin[nm])
dfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly', value_name='waic')
g = seaborn.factorplot(x='model', y='waic', col='poly', hue='poly', data=dfdic, kind='bar', size=6)
Explanation: There isn't a lot of difference between these models in terms of DIC. So our choice is fine in the model above, and there isn't much to be gained for going up to age^3 for example.
Next we look at WAIC, which is another model selection technique.
End of explanation |
9,810 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transport through a barrier
All systems so far had a flat potential. In this notebook, we will change this.
Basic setup (like in the previous notebook)
Step1: "MOSFET" toy model
Let's construct a system like the wire of the previous notebook, except that the scattering region will have a higher (but still flat) potential.
The leads are unchanged, so we take the following function directly from the previous notebook.
Step2: The next one is slightly modified
Step3: The following function also needs to be modified slightly
Step4: Let's put the above functions to some use
Step6: Your turn!
Play with the parameters (width of the barrier, voltage range, etc.) and see how this affects the results.
Quantum point contact
What will happen if we open a hole in the potential barrier? This time we will try to use a more realistic potential.
The following cell defines the potential that we are going to use.
Step7: The function below is almost like make_wire_with_flat_potential. The difference is that it can be tuned to use a general potential using its first parameter.
Step8: Let's construct the system and plot the potential. The lambda expression is used to fix the gate voltage to a particular value.
Step9: To get an idea of the involved energy scales it's useful to plot the band structure
Step10: Finally, let's plot transmission as a function of the gate voltage. We see the hallmark of the QPC | Python Code:
import numpy as np
import kwant
%run matplotlib_setup.ipy
from matplotlib import pyplot
lat = kwant.lattice.square()
Explanation: Transport through a barrier
All systems so far had a flat potential. In this notebook, we will change this.
Basic setup (like in the previous notebook):
End of explanation
def make_lead_x(W=10, t=1):
syst = kwant.Builder(kwant.TranslationalSymmetry([-1, 0]))
syst[(lat(0, y) for y in range(W))] = 4 * t
syst[lat.neighbors()] = -t
return syst
Explanation: "MOSFET" toy model
Let's construct a system like the wire of the previous notebook, except that the scattering region will have a higher (but still flat) potential.
The leads are unchanged, so we take the following function directly from the previous notebook.
End of explanation
def make_wire_with_flat_potential(W=10, L=2, t=1):
def onsite(s, V):
return (4 - V) * t
# Construct the scattering region.
sr = kwant.Builder()
sr[(lat(x, y) for x in range(L) for y in range(W))] = onsite
sr[lat.neighbors()] = -t
# Build and attach lead from both sides.
lead = make_lead_x(W, t)
sr.attach_lead(lead)
sr.attach_lead(lead.reversed())
return sr
Explanation: The next one is slightly modified: the onsite value is now no longer a constant but a value function.
Kwant’s value functions are used when there are parameters to the Hamiltonian. Kwant will call them only when a calculation is requested, at the last possible moment. The onsite function always takes one argument (the site), and an arbitrary number of parameters that are supplied by the user. Here, there is only one such parameter: V.
End of explanation
def plot_transmission(syst, energy, params):
# Compute conductance
trans = []
for param in params:
smatrix = kwant.smatrix(syst, energy, args=[param])
trans.append(smatrix.transmission(1, 0))
pyplot.plot(params, trans)
Explanation: The following function also needs to be modified slightly: instead of plotting transmission as a function of energy, it plots transmission as the function of the Hamiltonian parameter, for a fixed energy.
Observe how args=[param] is used to pass the user-specified Hamiltonian parameters to kwant.smatrix. args must be a sequence of parameters, even if it's only a single one in this case.
End of explanation
_syst = make_wire_with_flat_potential()
kwant.plot(_syst)
_syst = _syst.finalized()
kwant.plotter.bands(_syst.leads[0])
plot_transmission(_syst, 1, np.linspace(-2, 0, 51))
Explanation: Let's put the above functions to some use:
End of explanation
from math import atan2, pi, sqrt
def rectangular_gate_pot(distance, left, right, bottom, top):
"""Compute the potential of a rectangular gate.
The gate hovers at the given distance over the plane where the
potential is evaluated.
Based on J. Appl. Phys. 77, 4504 (1995)
http://dx.doi.org/10.1063/1.359446
"""
d, l, r, b, t = distance, left, right, bottom, top
def g(u, v):
return atan2(u * v, d * sqrt(u**2 + v**2 + d**2)) / (2 * pi)
def func(x, y, voltage):
return voltage * (g(x-l, y-b) + g(x-l, t-y) +
g(r-x, y-b) + g(r-x, t-y))
return func
_gate1 = rectangular_gate_pot(10, 20, 50, -50, 15)
_gate2 = rectangular_gate_pot(10, 20, 50, 25, 90)
def qpc_potential(site, V):
x, y = site.pos
return _gate1(x, y, V) + _gate2(x, y, V)
Explanation: Your turn!
Play with the parameters (width of the barrier, voltage range, etc.) and see how this affects the results.
Quantum point contact
What will happen if we open a hole in the potential barrier? This time we will try to use a more realistic potential.
The following cell defines the potential that we are going to use.
End of explanation
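Before building the QPC itself, here is one way to act on the "your turn" suggestion above for the flat-barrier model: rebuild the wire with a few different barrier lengths and compare the transmission curves. A minimal sketch reusing make_wire_with_flat_potential and plot_transmission from earlier; the lengths and the voltage range are arbitrary choices.
for barrier_length in [2, 6, 10]:            # arbitrary barrier lengths to compare
    syst = make_wire_with_flat_potential(W=10, L=barrier_length).finalized()
    plot_transmission(syst, 1, np.linspace(-2, 0, 51))
pyplot.xlabel('V')
pyplot.ylabel('Transmission')
pyplot.legend(['L = 2', 'L = 6', 'L = 10']);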
def make_barrier(pot, W=40, L=70, t=1):
def onsite(*args):
return 4 * t - pot(*args)
# Construct the scattering region.
sr = kwant.Builder()
sr[(lat(x, y) for x in range(L) for y in range(W))] = onsite
sr[lat.neighbors()] = -t
# Build and attach lead from both sides.
lead = make_lead_x(W, t)
sr.attach_lead(lead)
sr.attach_lead(lead.reversed())
return sr
Explanation: The function below is almost like make_wire_with_flat_potential. The difference is that it can be tuned to use a general potential using its first parameter.
End of explanation
qpc = make_barrier(qpc_potential)
kwant.plot(qpc);
kwant.plotter.map(qpc, lambda s: qpc_potential(s, 1));
Explanation: Let's construct the system and plot the potential. The lambda expression is used to fix the gate voltage to a particular value.
End of explanation
fqpc = qpc.finalized()
kwant.plotter.bands(fqpc.leads[0]);
Explanation: To get an idea of the involved energy scales it's useful to plot the band structure:
End of explanation
plot_transmission(fqpc, 0.3, np.linspace(-1, 0, 101))
Explanation: Finally, let's plot transmission as a function of the gate voltage. We see the hallmark of the QPC: conductance quantization. It's similar to what we saw in the clean wire, but now much more "realistic".
End of explanation |
9,811 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Reference
https
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
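As a quick check that we really can go backwards, the saved mean and standard deviation can be used to undo the scaling. This is a minimal sketch using the scaled_features dictionary defined above, with 'cnt' as the example column:
# Sketch: recover the original 'cnt' values from the scaled ones
mean, std = scaled_features['cnt']
unscaled_cnt = data['cnt'] * std + mean
unscaled_cnt.head()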
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1 / (1 + np.exp(-x)) # sigmoid implemented directly as a lambda
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, error)
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
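As a small sanity check of the activations discussed above, here is a standalone sketch (these helper names are illustrative and not part of the project class) showing the sigmoid, its derivative, and the fact that the output activation f(x) = x has derivative 1:
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1 - s)

# The output activation is f(x) = x, so its derivative is simply 1,
# which is why output_error_term needs no extra factor in the class above.
print(sigmoid(0.0), sigmoid_prime(0.0))  # 0.5 0.25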
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 5000
learning_rate = 1
hidden_nodes = 16
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Reference
https://www.quora.com/How-do-I-decide-the-number-of-nodes-in-a-hidden-layer-of-a-neural-network
End of explanation
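One rough way to compare settings is to train for a reduced number of iterations per candidate value and inspect the resulting validation loss. The sketch below is illustrative only; the iteration count and candidate values are arbitrary and it reuses the variables defined above.
# Sketch: compare a few hidden-node counts by validation loss (short runs)
for n_hidden in [4, 8, 16]:
    net = NeuralNetwork(N_i, n_hidden, 1, 0.5)
    for _ in range(500):
        batch = np.random.choice(train_features.index, size=128)
        X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
        net.train(X, y)
    val_loss = MSE(net.run(val_features).T, val_targets['cnt'].values)
    print(n_hidden, 'hidden nodes -> validation loss', val_loss)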
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
9,812 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiments analysis challenge on movie review
Designed by
Step1: Trainning and testing the model with cross validation.
Step2: The next cell may take some time.
Step3: Trainning the model on the complete trainning dataset.
Step4: Get the predictions.
Step5: Save the results. | Python Code:
from __future__ import division, print_function
import pandas as pd
import numpy as np
data_dir = 'data/'
# Load Original Data / contains data + labels 10 k
train = pd.read_csv("../data/train.data")#.drop('id',axis =1 )
# Your validation data / we provide also a validation dataset, contains only data : 5k
valid = pd.read_csv("../data/valid.data")#.drop('id',axis =1 )
# final submission
test = pd.read_csv("../data/test.data")#.drop('id',axis =1 )
print("train size", len(train))
print("public test size", len(valid))
print("private test size",len(test))
# creating arrays from pandas dataframe
X_train = train['review'].values
y_train = train['label'].values
X_valid = valid['review'].values
X_test = test['review'].values
print("raw text : \n", X_train[0])
print("label :", y_train[0])
print(len(X_test))
Explanation: Sentiments analysis challenge on movie review
Designed by : Il faut sauver les datas, Ryan !
Boosz Paul
Estrade Victor
Gensollen Thibaut
Rais Hadjer
Sakly Sami
Introduction
The data set is composed of an equilibred number of positive /negative movie review. It will be on the form of 3 different CSV files.
Train.csv contains 10k examples with 3 columns : id, label, review
Test_public.csv 5k examples with 3 columns : id, label, review
Test_private.csv contains 10k examples with only 2 columns : id, review
The goal is to predict the <code>predicted_label</code> column. The prediction quality is measured by the precision metrics.
Results should be a txt file or csv file with 1 column : the predicted_class {0,1} as shown in this toolkit.
You have to keep the original order of the datasets.
Fetch the data and load it in pandas
The first things to do is to dawload all the data at the website :
https://competitions.codalab.org/competitions/8131#learn_the_details-description
To rename them (remove the keys 'datasets_None_0b3a301a-be2e-4f21-8be9-dfa5c56439c4') to their original names:
train.data
valid.data
test.data
train_preprocessed.data
valid_preprocessed.data
test_preprocessed.data
and place them in a 'data/' folder.
End of explanation
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
# creating random forest classifier
rfst = RandomForestClassifier(n_estimators = 100)
# TfIdf Vectorizer with default parameters
myTfidfVect = TfidfVectorizer(stop_words='english', max_features=30000)
X_train_transformed = myTfidfVect.fit_transform(X_train)
Explanation: Trainning and testing the model with cross validation.
End of explanation
from sklearn.cross_validation import cross_val_score
scores = cross_val_score(rfst, X_train_transformed, y_train,
scoring='accuracy', cv=5)
print('accuracy :', np.mean(scores), '% +/-', np.std(scores), '%')
Explanation: The next cell may take some time.
End of explanation
rfst.fit(X_train_transformed, y_train)
print('Model trainned.')
Explanation: Trainning the model on the complete trainning dataset.
End of explanation
X_valid_transformed = myTfidfVect.transform(X_valid)
X_test_transformed = myTfidfVect.transform(X_test)
prediction_valid = rfst.predict(X_valid_transformed)
prediction_test = rfst.predict(X_test_transformed)
pd.DataFrame(prediction_valid[:5], columns=['prediction'])
Explanation: Get the predictions.
End of explanation
import os
if not os.path.isdir(os.path.join(os.getcwd(),'results')):
os.mkdir(os.path.join(os.getcwd(),'results'))
np.savetxt('results/valid.predict', prediction_valid, fmt='%d')
np.savetxt('results/test.predict', prediction_test, fmt='%d')
Explanation: Save the results.
End of explanation |
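As an optional sanity check, the saved files can be reloaded to confirm they contain a single column of 0/1 labels in the original order:
check = np.loadtxt('results/valid.predict')
print(check.shape, np.unique(check))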
9,813 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is largely based on material of the Python Scientific Lecture Notes (https
Step1: Note
Step2: Parameters
Mandatory parameters (positional arguments)
Step3: Optional parameters (keyword or named arguments)
The order of the keyword arguments does not matter, but it is good practice to use the same ordering as the function's definition
Keyword arguments are a very convenient feature for defining functions with a variable number of arguments, especially when default values are to be used in most calls to the function.
Step4: <div class="alert alert-danger">
<b>NOTE</b>
Step5: Using an mutable type in a keyword argument (and modifying it inside the function body)
Step6: Alternative to overcome this problem
Step7: Variable number of parameters
Special forms of parameters
Step9: Docstrings
Documentation about what the function does and its parameters. General convention
Step10: Functions are objects
Functions are first-class objects, which means they can be | Python Code:
def the_answer_to_the_universe():
print(42)
the_answer_to_the_universe()
Explanation: This notebook is largely based on material of the Python Scientific Lecture Notes (https://scipy-lectures.github.io/), adapted with some exercises.
Reusing code
<div class="alert alert-danger">
<b>Rule of thumb</b>: <br><br>
<ul>
<li>Sets of instructions that are called several times should be written inside **functions** for better code reusability. </li>
<li>Functions (or other bits of code) that are called from several scripts should be written inside a **module**, so that only the module is imported in the different scripts (do not copy-and-paste your functions in the different scripts!). </li>
</ul>
</div>
Functions
Function definition
Function blocks must be indented as other control-flow blocks
End of explanation
def area_square(edge):
return edge ** 2
area_square(2.3)
Explanation: Note: the syntax to define a function:
the def keyword;
is followed by the function’s name, then
the arguments of the function are given between parentheses followed by a colon.
the function body;
and return object for optionally returning values.
Return statement
Functions can optionally return values
End of explanation
def double_it(x):
return 2*x
double_it(3)
double_it()
Explanation: Parameters
Mandatory parameters (positional arguments)
End of explanation
def double_it (x=1):
return 2*x
print(double_it(3))
print(double_it())
def addition(int1=1, int2=1, int3=1):
return int1 + 2*int2 + 3*int3
print(addition(int1=1, int2=1, int3=1))
print(addition(int1=1, int3=1, int2=1)) # the order of these named arguments does not matter
Explanation: Optional parameters (keyword or named arguments)
The order of the keyword arguments does not matter, but it is good practice to use the same ordering as the function's definition
Keyword arguments are a very convenient feature for defining functions with a variable number of arguments, especially when default values are to be used in most calls to the function.
End of explanation
bigx = 10
def double_it(x=bigx):
return x * 2
bigx = 1e9
double_it()
Explanation: <div class="alert alert-danger">
<b>NOTE</b>: <br><br>
Default values are evaluated when the function is defined, not when it is called. This can be problematic when using mutable types (e.g. dictionary or list) and modifying them in the function body, since the modifications will be persistent across invocations of the function.
Using an immutable type in a keyword argument:
</div>
End of explanation
def add_to_dict(args={'a': 1, 'b': 2}):
for i in args.keys():
args[i] += 1
print(args)
add_to_dict
add_to_dict()
add_to_dict()
add_to_dict()
# the {'a': 1, 'b': 2} dictionary was created in memory at the moment the definition was evaluated
Explanation: Using a mutable type in a keyword argument (and modifying it inside the function body)
End of explanation
def add_to_dict(args=None):
if not args:
args = {'a': 1, 'b': 2}
for i in args.keys():
args[i] += 1
print(args)
add_to_dict
add_to_dict()
add_to_dict()
Explanation: Alternative to overcome this problem:
End of explanation
def variable_args(*args, **kwargs):
print('args is', args)
print('kwargs is', kwargs)
variable_args('one', 'two', x=1, y=2, z=3)
Explanation: Variable number of parameters
Special forms of parameters:
*args: any number of positional arguments packed into a tuple
**kwargs: any number of keyword arguments packed into a dictionary
End of explanation
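A common use of this feature is to write thin wrappers that forward whatever arguments they receive to another function; the sketch below is a small illustration (the wrapper name is made up for the example).
def log_call(func, *args, **kwargs):
    print('calling', func.__name__, 'with', args, kwargs)
    return func(*args, **kwargs)   # unpack the collected arguments again

log_call(variable_args, 'one', 'two', x=1)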
def funcname(params):
"""Concise one-line sentence describing the function.
Extended summary which can contain multiple paragraphs.
"""
# function body
pass
funcname?
Explanation: Docstrings
Documentation about what the function does and its parameters. General convention:
End of explanation
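The docstring is attached to the function object itself, so it can also be inspected programmatically:
print(funcname.__doc__)
help(funcname)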
va = variable_args
va('three', x=1, y=2)
Explanation: Functions are objects
Functions are first-class objects, which means they can be:
assigned to a variable
an item in a list (or any collection)
passed as an argument to another function.
End of explanation |
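The other two points can be illustrated just as easily; a minimal sketch (the helper apply_twice is only for demonstration):
functions = [double_it, variable_args]   # functions stored as items in a list
print(functions[0](5))

def apply_twice(f, value):               # a function passed as an argument
    return f(f(value))

print(apply_twice(double_it, 3))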
9,814 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sockets can be configured to act as a server and listen for incoming messages, or connect to other applications as a client. After both ends of a TCP/IP socket are connected, communication is bi-directional.
Echo Server
This sample program, based on the one in the standard library documentation, receives incoming messages and echos them back to the sender. It starts by creating a TCP/IP socket, then bind() is used to associate the socket with the server address. In this case, the address is localhost, referring to the current server, and the port number is 10000.
Step1: Calling listen() puts the socket into server mode, and accept() waits for an incoming connection. The integer argument is the number of connections the system should queue up in the background before rejecting new clients. This example only expects to work with one connection at a time.
accept() returns an open connection between the server and client, along with the address of the client. The connection is actually a different socket on another port (assigned by the kernel). Data is read from the connection with recv() and transmitted with sendall().
When communication with a client is finished, the connection needs to be cleaned up using close(). This example uses a try
Step3: Client and Server Together
The client and server should be run in separate terminal windows, so they can communicate with each other. The server output shows the incoming connection and data, as well as the response sent back to the client.
Easy Client Coonnections | Python Code:
# %load socket_echo_server.py
import socket
import sys
# Create a TCP/IP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Bind the socket to the port
server_address = ('localhost', 10000)
print('starting up on {} port {}'.format(*server_address))
sock.bind(server_address)
# Listen for incoming connections
sock.listen(1)
while True:
# Wait for a connection
print('waiting for a connection')
connection, client_address = sock.accept()
try:
print('connection from', client_address)
# Receive the data in small chunks and retransmit it
while True:
data = connection.recv(16)
print('received {!r}'.format(data))
if data:
print('sending data back to the client')
connection.sendall(data)
else:
print('no data from', client_address)
break
finally:
# Clean up the connection
connection.close()
Explanation: Sockets can be configured to act as a server and listen for incoming messages, or connect to other applications as a client. After both ends of a TCP/IP socket are connected, communication is bi-directional.
Echo Server
This sample program, based on the one in the standard library documentation, receives incoming messages and echos them back to the sender. It starts by creating a TCP/IP socket, then bind() is used to associate the socket with the server address. In this case, the address is localhost, referring to the current server, and the port number is 10000.
End of explanation
# %load socket_echo_client.py
import socket
import sys
# Create a TCP/IP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Connect the socket to the port where the server is listening
server_address = ('localhost', 10000)
print('connecting to {} port {}'.format(*server_address))
sock.connect(server_address)
try:
# Send data
message = b'This is the message. It will be repeated.'
print('sending {!r}'.format(message))
sock.sendall(message)
# Look for the response
amount_received = 0
amount_expected = len(message)
while amount_received < amount_expected:
data = sock.recv(16)
amount_received += len(data)
print('received {!r}'.format(data))
finally:
print('closing socket')
sock.close()
Explanation: Calling listen() puts the socket into server mode, and accept() waits for an incoming connection. The integer argument is the number of connections the system should queue up in the background before rejecting new clients. This example only expects to work with one connection at a time.
accept() returns an open connection between the server and client, along with the address of the client. The connection is actually a different socket on another port (assigned by the kernel). Data is read from the connection with recv() and transmitted with sendall().
When communication with a client is finished, the connection needs to be cleaned up using close(). This example uses a try:finally block to ensure that close() is always called, even in the event of an error.
Echo Client
The client program sets up its socket differently from the way a server does. Instead of binding to a port and listening, it uses connect() to attach the socket directly to the remote address.
End of explanation
# %load socket_echo_client_easy.py
import socket
import sys
def get_constants(prefix):
"""Create a dictionary mapping socket module
constants to their names.
"""
return {
getattr(socket, n): n
for n in dir(socket)
if n.startswith(prefix)
}
families = get_constants('AF_')
types = get_constants('SOCK_')
protocols = get_constants('IPPROTO_')
# Create a TCP/IP socket
sock = socket.create_connection(('localhost', 10000))
print('Family :', families[sock.family])
print('Type :', types[sock.type])
print('Protocol:', protocols[sock.proto])
print()
try:
# Send data
message = b'This is the message. It will be repeated.'
print('sending {!r}'.format(message))
sock.sendall(message)
amount_received = 0
amount_expected = len(message)
while amount_received < amount_expected:
data = sock.recv(16)
amount_received += len(data)
print('received {!r}'.format(data))
finally:
print('closing socket')
sock.close()
Explanation: Client and Server Together
The client and server should be run in separate terminal windows, so they can communicate with each other. The server output shows the incoming connection and data, as well as the response sent back to the client.
Easy Client Coonnections
End of explanation |
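Socket objects also support the context-manager protocol, so the explicit try/finally with close() can be replaced by a with block. A brief sketch, assuming the same echo server is still listening on port 10000:
import socket

with socket.create_connection(('localhost', 10000)) as sock:
    sock.sendall(b'ping')
    print(sock.recv(16))
# the socket is closed automatically when the block exits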
9,815 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Compiler for a Fragment of C
This file shows how a simple compiler for a fragment of the programming language C can be implemented using Ply.
Specification of the Scanner
Step1: The token Number specifies a natural number.
Step2: Below, we define the tokens for operator symbols consisting of more than one character.
Step3: Our version of C allows both single-line comments and multi-line comments.
- The regular expression //[^\n]* recognizes single-line comments.
A single line comment starts with // and extends to the end of the line.
- The regular expression \/\*([^*]|\*+[^*/])*\*+\/ recognizes multi-line comments.
Multi-line comments start with the string /* and end with the string */.
Between these strings, the string */ must not occur.
* \/\* matches the opening /*.
* [^*]|\*+[^*/] matches any character that is different from the character * as
well as non-empty sequences of *s that are not followed by a /.
* \*+/ matches any non-empty sequence of *s that is followed by a /.
The expression needs to be this complicated in order to match multi-line comments of the
following form
Step4: The keywords 'int', 'if', 'else', 'while', 'return' have to be dealt with separately as they are syntactical identical to identifiers. The dictionary Keywords shown below maps every keyword to its token type.
Step5: When an identifier is read, we first have to check whether the identifier is one of our keywords. If so, we assign the corresponding token type that is stored in the dictionary Keywords. Otherwise, the token type is set to ID.
Step6: Operators consisting of a single character do not need an associated token type.
Step7: White space, i.e. space characters, tabulators, and carriage returns are ignored.
Step8: Syntactically, newline characters are ignored. However, we still need to keep track of them in order to know the current line number, which is used for error messages.
Step9: Given a token, the function find_colum returns the column where token starts. This is possible, because every token contains a reference to the current lexer as token.lexer and this lexer in turn stores the string that is given to it via the reference lexer.lexdata. Furthermore, token.lexpos is the number of characters that precede token.
Step10: The function t_error is called for any token t that can not be scanned by the lexer. In this case, t.value[0] is the first character that is not be recognized by the scanner. This character is discarded. After that, scanning proceeeds as if nothing has happened.
Step11: The next assignment is necessary to make the lexer think that the code given above is part of some file.
Step12: The function test_scanner(file_name) takes the name of a file as its argument. This file is opened and displayed.
The function returns a list of all the tokens that are recognized.
Step13: Specification of the Parser
We will use the following grammar to specify the language that our compiler can translate
Step14: The start variable of our grammar is program.
Step15: Below, the precedence declarations for the tokens 'IF' and 'ELSE' are needed to solve the dangling else problem.
Step16: In the grammar rule
$$ \texttt{stmnt} \rightarrow \texttt{'if'}\;\texttt{'('}\; \texttt{bool_expr}\; \texttt{')'}\; \texttt{stmnt}$$
the rightmost token is ')'. However, this token does not have a precedence. Therefore, the grammar rule
does not have a precedence either. Hence, we manually assign the precedence of the token IF to this rule via
the keyword %prec. This way, the shift/reduce conflict resulting from the dangling-else ambiguity is resolved.
Step17: Setting the optional argument write_tables to False is required to prevent an obscure bug where the parser generator tries to read an empty parse table.
Step18: As we have used precedence declarations to resolve all shift/reduce conflicts, the action table contains no conflict.
Step19: The notebook AST-2-Dot.ipynb provides the function tuple2dot. This function can be used to visualize the abstract syntax tree that is generated by the function yacc.parse.
Step20: The function parse takes a file_name as ist sole argument. The file is read and parsed.
The resulting parse tree is visualized using graphviz. It is important to reset the
attribute lineno of the scanner, for otherwise error messages will not have the correct line numbers.
Step21: The function indent is used to indent the generated assembler commands by preceding them with 8 space characters.
Step22: The method compile_expr(expr, st, class_name) takes three arguments
Step23: The following is a test of the function compile_expr.
Step24: The variable label_counter is a global counter that is used to create unique label names.
Every call of new_label creates a new, unique label.
Step25: The method compile_bool(expr, st, class_name) takes three arguments
Step26: Below is a test for the function compile_bool.
Step27: The method compile_stmnt(stmnt, st, class_name) takes three arguments
Step28: The input to compile_program(file_name) is the name of a C file. This file assumed to have the ending .c.
This file is compiled and the resulting assembler file is written into a file with the same name but the ending .jas.
Step29: The file Primes.c is a simple C program that computes the prime numbers in a naive way.
Step30: Next, we generate Java byte code using
jasmin.
Step31: Finally, we run the generated byte code. | Python Code:
import ply.lex as lex
tokens = [ 'NUMBER', 'ID', 'EQ', 'NE', 'LE', 'GE', 'AND', 'OR',
'INT', 'IF', 'ELSE', 'WHILE', 'RETURN'
]
Explanation: A Simple Compiler for a Fragment of C
This file shows how a simple compiler for a fragment of the programming language C can be implemented using Ply.
Specification of the Scanner
End of explanation
t_NUMBER = r'0|[1-9][0-9]*'
Explanation: The token Number specifies a natural number.
End of explanation
t_EQ = r'=='
t_NE = r'!='
t_LE = r'<='
t_GE = r'>='
t_AND = r'&&'
t_OR = r'\|\|'
Explanation: Below, we define the tokens for operator symbols consisting of more than one character.
End of explanation
def t_COMMENT(t):
r'//[^\n]*|\/\*([^*]|\*+[^*/])*\*+\/'
t.lexer.lineno += t.value.count('\n')
pass
Explanation: Our version of C allows both single-line comments and multi-line comments.
- The regular expression //[^\n]* recognizes single-line comments.
A single line comment starts with // and extends to the end of the line.
- The regular expression \/\*([^*]|\*+[^*/])*\*+\/ recognizes multi-line comments.
Multi-line comments start with the string /* and end with the string */.
Between these strings, the string */ must not occur.
* \/\* matches the opening /*.
* [^*]|\*+[^*/] matches any character that is different from the character * as
well as non-empty sequences of *s that are not followed by a /.
* \*+/ matches any non-empty sequence of *s that is followed by a /.
The expression needs to be this complicated in order to match multi-line comments of the
following form:
/*** abc *** xyz ***/
End of explanation
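To convince ourselves that this regular expression really accepts such comments, it can be tested directly with the re module (a small standalone check, not part of the scanner itself):
import re

comment_re = r'//[^\n]*|\/\*([^*]|\*+[^*/])*\*+\/'
print(re.fullmatch(comment_re, '/*** abc *** xyz ***/') is not None)    # True
print(re.fullmatch(comment_re, '// a single-line comment') is not None) # True
print(re.fullmatch(comment_re, '/* unterminated') is not None)          # False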
Keywords = { 'int' : 'INT',
'if' : 'IF',
'else' : 'ELSE',
'while' : 'WHILE',
'return': 'RETURN'
}
Explanation: The keywords 'int', 'if', 'else', 'while', 'return' have to be dealt with separately as they are syntactical identical to identifiers. The dictionary Keywords shown below maps every keyword to its token type.
End of explanation
def t_ID(t):
r'[a-zA-Z][a-zA-Z0-9_]*'
t.type = Keywords.get(t.value, 'ID')
return t
Explanation: When an identifier is read, we first have to check whether the identifier is one of our keywords. If so, we assign the corresponding token type that is stored in the dictionary Keywords. Otherwise, the token type is set to ID.
End of explanation
literals = ['+', '-', '*', '/', '%', '(', ')', '{', '}', ';', '=', '<', '>', '!', ',']
Explanation: Operators consisting of a single character do not need an associated token type.
End of explanation
t_ignore = ' \t\r'
Explanation: White space, i.e. space characters, tabulators, and carriage returns are ignored.
End of explanation
def t_newline(t):
r'\n'
t.lexer.lineno += 1
return
Explanation: Syntactically, newline characters are ignored. However, we still need to keep track of them in order to know the current line number, which is used for error messages.
End of explanation
def find_column(token):
program = token.lexer.lexdata # the complete string presented to the scanner
line_start = program.rfind('\n', 0, token.lexpos) + 1
return (token.lexpos - line_start) + 1
Explanation: Given a token, the function find_colum returns the column where token starts. This is possible, because every token contains a reference to the current lexer as token.lexer and this lexer in turn stores the string that is given to it via the reference lexer.lexdata. Furthermore, token.lexpos is the number of characters that precede token.
End of explanation
def t_error(t):
column = find_column(t)
print(f"Illegal character '{t.value[0]}' in line {t.lineno}, column {column}.")
t.lexer.skip(1)
Explanation: The function t_error is called for any token t that can not be scanned by the lexer. In this case, t.value[0] is the first character that is not be recognized by the scanner. This character is discarded. After that, scanning proceeeds as if nothing has happened.
End of explanation
__file__ = 'main'
lexer = lex.lex()
Explanation: The next assignment is necessary to make the lexer think that the code given above is part of some file.
End of explanation
def test_scanner(file_name):
with open(file_name, 'r') as handle:
program = handle.read()
print(program)
lexer.input(program)
lexer.lineno = 1 # reset line number
return [t for t in lexer] # start scanning and collect all tokens
for t in test_scanner('Examples/MySum.c'):
print(t)
Explanation: The function test_scanner(file_name) takes the name of a file as its argument. This file is opened and displayed.
The function returns a list of all the tokens that are recognized.
End of explanation
import ply.yacc as yacc
Explanation: Specification of the Parser
We will use the following grammar to specify the language that our compiler can translate:
```
program
: function
| function program
function
: INT ID '(' param_list ')' '{' decl_list stmnt_list '}'
param_list
: / epsilon /
| INT ID
| INT ID ',' ne_param_list
ne_param_list
: INT ID
| INT ID ',' ne_param_list
decl_list
: / epsilon /
| INT ID ';' decl_list
stmnt_list
: / epsilon /
| stmnt stmnt_list
stmnt
: IF '(' bool_expr ')' stmnt
| IF '(' bool_expr ')' stmnt ELSE stmnt
| WHILE '(' bool_expr ')' stmnt
| '{' stmnt_list '}'
| ID '=' expr ';'
| RETURN expr ';'
| expr ';'
bool_expr
: bool_expr '||' bool_expr
| bool_expr '&&' bool_expr
| '!' bool_expr
| '(' bool_expr ')'
| expr '==' expr
| expr '!=' expr
| expr '<=' expr
| expr '>=' expr
| expr '<' expr
| expr '>' expr
expr: expr '+' expr
| expr '-' expr
| expr '*' expr
| expr '/' expr
| expr '%' expr
| '(' expr ')'
| NUMBER
| ID
| ID '(' expr_list ')'
expr_list
: / epsilon /
| expr
| expr ',' ne_expr_list
ne_expr_list
: expr
| expr ',' ne_expr_list
```
We will use precedence declarations to resolve the ambiguity that is inherent in this grammar.
End of explanation
start = 'program'
Explanation: The start variable of our grammar is program.
End of explanation
precedence = (
('nonassoc', 'IF'),
('nonassoc', 'ELSE'),
('left', 'OR'),
('left', 'AND'),
('right', '!'),
('nonassoc', 'EQ', 'NE', 'LE', 'GE', '<', '>'),
('left', '+', '-'),
('left', '*', '/', '%')
)
def p_program_one(p):
"program : function"
p[0] = ('program', p[1])
def p_program_more(p):
"program : function program"
p[0] = ('program', p[1]) + p[2][1:]
def p_function(p):
"function : INT ID '(' param_list ')' '{' decl_list stmnt_list '}'"
p[0] = ('fct', p[2], p[4], p[7], p[8])
def p_param_list_empty(p):
"param_list :"
p[0] = ('.', )
def p_param_list_one(p):
"param_list : INT ID"
p[0] = ('.', p[2])
def p_param_list_more(p):
"param_list : INT ID ',' ne_param_list"
p[0] = ('.', p[2]) + p[4][1:]
def p_ne_param_list_one(p):
"ne_param_list : INT ID"
p[0] = ('.', p[2])
def p_ne_param_list_more(p):
"ne_param_list : INT ID ',' ne_param_list"
p[0] = ('.', p[2]) + p[4][1:]
def p_decl_list_one(p):
"decl_list :"
p[0] = ('.',)
def p_decl_list_more(p):
"decl_list : INT ID ';' decl_list"
p[0] = ('.', p[2]) + p[4][1:]
def p_stmnt_list_one(p):
"stmnt_list :"
p[0] = ('.',)
def p_stmnt_list_more(p):
"stmnt_list : stmnt stmnt_list"
p[0] = ('.', p[1]) + p[2][1:]
Explanation: Below, the precedence declarations for the tokens 'IF' and 'ELSE' are needed to solve the dangling else problem.
End of explanation
def p_stmnt_if(p):
"stmnt : IF '(' bool_expr ')' stmnt %prec IF"
p[0] = ('if', p[3], p[5])
def p_stmnt_if_else(p):
"stmnt : IF '(' bool_expr ')' stmnt ELSE stmnt"
p[0] = ('if-else', p[3], p[5], p[7])
def p_stmnt_while(p):
"stmnt : WHILE '(' bool_expr ')' stmnt"
p[0] = ('while', p[3], p[5])
def p_stmnt_block(p):
"stmnt : '{' stmnt_list '}'"
p[0] = p[2]
def p_stmnt_assign(p):
"stmnt : ID '=' expr ';'"
p[0] = ('=', p[1], p[3])
def p_stmnt_return(p):
"stmnt : RETURN expr ';'"
p[0] = ('return', p[2])
def p_stmnt_expr(p):
"stmnt : expr ';'"
p[0] = p[1]
def p_bool_expr_or(p):
"bool_expr : bool_expr OR bool_expr"
p[0] = ('||', p[1], p[3])
def p_bool_expr_and(p):
"bool_expr : bool_expr AND bool_expr"
p[0] = ('&&', p[1], p[3])
def p_bool_expr_neg(p):
"bool_expr : '!' bool_expr"
p[0] = ('!', p[2])
def p_bool_expr_paren(p):
"bool_expr : '(' bool_expr ')'"
p[0] = p[2]
def p_bool_expr_eq(p):
"bool_expr : expr EQ expr"
p[0] = ('==', p[1], p[3])
def p_bool_expr_ne(p):
"bool_expr : expr NE expr"
p[0] = ('!=', p[1], p[3])
def p_bool_expr_le(p):
"bool_expr : expr LE expr"
p[0] = ('<=', p[1], p[3])
def p_bool_expr_ge(p):
"bool_expr : expr GE expr"
p[0] = ('>=', p[1], p[3])
def p_bool_expr_lt(p):
"bool_expr : expr '<' expr"
p[0] = ('<', p[1], p[3])
def p_bool_expr_gt(p):
"bool_expr : expr '>' expr"
p[0] = ('>', p[1], p[3])
def p_expr_plus(p):
"expr : expr '+' expr"
p[0] = ('+', p[1], p[3])
def p_expr_minus(p):
"expr : expr '-' expr"
p[0] = ('-', p[1], p[3])
def p_expr_times(p):
"expr : expr '*' expr"
p[0] = ('*', p[1], p[3])
def p_expr_divide(p):
"expr : expr '/' expr"
p[0] = ('/', p[1], p[3])
def p_expr_modulo(p):
"expr : expr '%' expr"
p[0] = ('%', p[1], p[3])
def p_expr_group(p):
"expr : '(' expr ')'"
p[0] = p[2]
def p_expr_number(p):
"expr : NUMBER"
p[0] = ('Number', p[1])
def p_expr_id(p):
"expr : ID"
p[0] = p[1]
def p_expr_fct_call(p):
"expr : ID '(' expr_list ')'"
p[0] = ('call', p[1]) + p[3][1:]
def p_expr_list_empty(p):
"expr_list :"
p[0] = ('.',)
def p_expr_list_one(p):
"expr_list : expr"
p[0] = ('.', p[1])
def p_expr_list_more(p):
"expr_list : expr ',' ne_expr_list"
p[0] = ('.', p[1]) + p[3][1:]
def p_ne_expr_list_one(p):
"ne_expr_list : expr"
p[0] = ('.', p[1])
def p_ne_expr_list_more(p):
"ne_expr_list : expr ',' ne_expr_list"
p[0] = ('.', p[1]) + p[3][1:]
def p_error(p):
column = find_column(p)
if p:
print(f'Syntax error at token "{p.value}" in line {p.lineno}, column {column}.')
else:
print('Syntax error at end of input.')
Explanation: In the grammar rule
$$ \texttt{stmnt} \rightarrow \texttt{'if'}\;\texttt{'('}\; \texttt{bool_expr}\; \texttt{')'}\; \texttt{stmnt}$$
the rightmost token is ')'. However, this token does not have a precedence. Therefore, the grammar rule
does not have a precedence either. Hence, we manually assign the precedence of the token IF to this rule via
the keyword %prec. This way, the shift/reduce conflict resulting from the dangling-else ambiguity is resolved.
End of explanation
parser = yacc.yacc(write_tables=False, debug=True)
Explanation: Setting the optional argument write_tables to False is required to prevent an obscure bug where the parser generator tries to read an empty parse table.
End of explanation
!type parser.out
!cat parser.out
Explanation: As we have used precedence declarations to resolve all shift/reduce conflicts, the action table contains no conflict.
End of explanation
%run ../ANTLR4-Python/AST-2-Dot.ipynb
Explanation: The notebook AST-2-Dot.ipynb provides the function tuple2dot. This function can be used to visualize the abstract syntax tree that is generated by the function yacc.parse.
End of explanation
def parse(file_name):
lexer.lineno = 1
with open(file_name, 'r') as handle:
program = handle.read()
ast = yacc.parse(program)
print(ast)
return tuple2dot(ast)
!type Examples\MySum.c
!cat -n Examples/MySum.c
parse('Examples/MySum.c')
Explanation: The function parse takes a file_name as ist sole argument. The file is read and parsed.
The resulting parse tree is visualized using graphviz. It is important to reset the
attribute lineno of the scanner, for otherwise error messages will not have the correct line numbers.
End of explanation
def indent(s):
return ' ' * 8 + s
Explanation: The function indent is used to indent the generated assembler commands by preceding them with 8 space characters.
End of explanation
def compile_expr(expr, st, class_name):
match expr:
case str(expr):
Cmd = indent(f'iload {st[expr]}')
return [Cmd], 1
case 'Number', n:
Cmd = indent(f'ldc {n}')
return [Cmd], 1
case ('+' | '-' | '*' | '/' | '%') as op, lhs, rhs:
L1, sz1 = compile_expr(lhs, st, class_name)
L2, sz2 = compile_expr(rhs, st, class_name)
OpToCmd = { '+': 'iadd', '-': 'isub', '*': 'imul', '/': 'idiv', '%': 'irem' }
Cmd = indent(OpToCmd[op])
return L1 + L2 + [Cmd], max(sz1, 1 + sz2)
case 'call', 'println', *args:
CmdLst = [indent('getstatic java/lang/System/out Ljava/io/PrintStream;')]
stck_size = 0
cnt = 0
for arg in args:
L, sz_arg = compile_expr(arg, st, class_name)
stck_size = max(stck_size, cnt + 1 + sz_arg)
CmdLst += L
cnt += 1
CmdLst += [indent(f'invokevirtual java/io/PrintStream/println({"I"*cnt})V')]
return CmdLst, stck_size
case 'call', f, *args:
CmdLst = []
stck_size = 0
cnt = 0
for arg in args:
L, sz_arg = compile_expr(arg, st, class_name)
stck_size = max(stck_size, cnt + sz_arg)
CmdLst += L
cnt += 1
CmdLst += [indent(f'invokestatic {class_name}/{f}({"I"*cnt})I')]
return CmdLst, max(stck_size, 1)
case _:
assert False, f'Error in compile_expr({expr}, {st}, {class_name})'
Explanation: The method compile_expr(expr, st, class_name) takes three arguments:
- expr is an abstract syntax tree that represents an expression.
This abstract syntax tree is in turn represented as a nested tuple.
- st is short for symbol table. This is a dictionary that maps variable
names to natural numbers. Given a variable x, the number st[x] specifies
the location where the variable x is stored on the stack with respect to the
local stack frame.
- class_name is the name of the class that is to be generated.
The function returns a pair of the form (cmds, size).
- cmds is a list of assembler commands,
- size is the maximum size of the stack that is needed.
End of explanation
expr = ('call', 'println', 'x', ('call', 'sum', ('+', 'x', ('*', 'y', ('Number','2')))))
st = { 'x': 0, 'y': 1}
compile_expr(expr, st, 'Sum')
Explanation: The following is a test of the function compile_expr.
End of explanation
label_counter = 0
def new_label():
global label_counter
label_counter += 1
return 'l' + str(label_counter)
Explanation: The variable label_counter is a global counter that is used to create unique label names.
Every call of new_label creates a new, unique label.
End of explanation
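A quick demonstration of the counter-based labels (the exact numbers depend on how often new_label has already been called):
print(new_label(), new_label(), new_label())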
def compile_bool(expr, st, class_name):
match expr:
case ('==' | '!=' | '<=' | '>=' | '<' | '>') as op, lhs, rhs:
OpToCmd = { '==': 'if_icmpeq',
'!=': 'if_icmpne',
'<=': 'if_icmple',
'>=': 'if_icmpge',
'<' : 'if_icmplt',
'>' : 'if_icmpgt'
}
L1, sz1 = compile_expr(lhs, st, class_name)
L2, sz2 = compile_expr(rhs, st, class_name)
true_label = new_label()
next_label = new_label()
CmdLst = L1 + L2
cmd = OpToCmd[op]
CmdLst += [indent(cmd + ' ' + true_label)]
CmdLst += [indent('bipush 0')]
CmdLst += [indent('goto ' + next_label)]
CmdLst += [' ' * 4 + true_label + ':']
CmdLst += [indent('bipush 1')]
CmdLst += [' ' * 4 + next_label + ':']
return CmdLst, max(sz1, 1 + sz2)
case ('&&' | '||') as op, lhs, rhs:
OpToCmd = { '&&': 'iand', '||': 'ior' }
L1, sz1 = compile_bool(lhs, st, class_name)
L2, sz2 = compile_bool(rhs, st, class_name)
cmd = OpToCmd[op]
CmdLst = L1 + L2 + [indent(cmd)]
return CmdLst, max(sz1, 1 + sz2)
case '!', arg:
L, sz = compile_expr(arg, st, class_name)
CmdLst = [indent('bipush 1')] + L + [indent('isub')]
return CmdLst, sz + 1
case _:
assert False, f'Error in compile_bool({expr}, {st}, {class_name})'
Explanation: The method compile_bool(expr, st, class_name) takes three arguments:
- expr is an abstract syntax tree that represents a Boolean expression.
This abstract syntax tree is in turn represented as a nested tuple.
- st is short for symbol table. This is a dictionary that maps variable
names to natural numbers. Given a variable x, the number st[x] specifies
the location where the variable x is stored on the stack with respect to the
local stack frame.
- class_name is the name of the class that is to be generated.
The function returns a pair of the form (cmds, size).
- cmds is a list of assembler commands,
- size is the maximum size of the stack that is needed.
End of explanation
expr = ('==', 'x', ('Number', '0'))
st = { 'x': 0, 'y': 1}
compile_bool(expr, st, 'Sum')
Explanation: Below is a test for the function compile_bool.
End of explanation
def compile_stmnt(stmnt, st, class_name):
match stmnt:
case '=', var, expr:
CmdLst, sz = compile_expr(expr, st, class_name)
CmdLst += [indent(f'istore {st[var]}')]
return CmdLst, sz
case 'if', expr, sub_stmnt:
L1, sz1 = compile_bool(expr, st, class_name)
L2, sz2 = compile_stmnt(sub_stmnt, st, class_name)
else_label = new_label()
lbl_stmnt = ' ' * 4 + else_label + ':'
CmdLst = L1 + [indent(f'ifeq {else_label}')] + L2 + [lbl_stmnt]
return CmdLst, max(sz1, sz2)
case 'if-else', expr, then_stmnt, else_stmnt:
L1, sz1 = compile_bool(expr, st, class_name)
L2, sz2 = compile_stmnt(then_stmnt, st, class_name)
L3, sz3 = compile_stmnt(else_stmnt, st, class_name)
else_label = new_label()
next_label = new_label()
if_stmnt = indent(f'ifeq {else_label}')
else_stmnt = ' ' * 4 + else_label + ':'
next_stmnt = ' ' * 4 + next_label + ':'
goto_stmnt = indent(f'goto {next_label}')
CmdLst = L1 + [if_stmnt] + L2 + [goto_stmnt, else_stmnt] + L3 + [next_stmnt]
return CmdLst, max(sz1, sz2, sz3)
case 'while', expr, body_stmnt:
L1, sz1 = compile_bool(expr, st, class_name)
L2, sz2 = compile_stmnt(body_stmnt, st, class_name)
loop_label = new_label()
next_label = new_label()
if_stmnt = indent(f'ifeq {next_label}')
loop_stmnt = ' ' * 4 + loop_label + ':'
next_stmnt = ' ' * 4 + next_label + ':'
goto_stmnt = indent(f'goto {loop_label}')
CmdLst = [loop_stmnt] + L1 + [if_stmnt] + L2 + [goto_stmnt, next_stmnt]
return CmdLst, max(sz1, sz2)
case 'return', expr:
CmdLst, sz = compile_expr(expr, st, class_name)
CmdLst += [indent('ireturn')]
return CmdLst, sz
case '.', *stmnt_lst:
CmdLst = []
size = 0
for s in stmnt_lst:
L, sz = compile_stmnt(s, st, class_name)
CmdLst += L
size = max(size, sz)
return CmdLst, size
case _: # it must be an expression statement
return compile_expr(stmnt, st, class_name)
stmnt = ('if', ('==', ('/', 'x', 'y'), ('Number', '0')), ('=', 'x', 'y'))
compile_stmnt(stmnt, st, 'Sum')
stmnt = ('if-else', ('<', 'x', 'y'), ('=', 'x', 'y'), ('=', 'y', 'x'))
compile_stmnt(stmnt, st, 'Sum')
stmnt = ('while', ('<', 'x', 'y'), ('=', 'x', ('+', 'x', ('Number', '1'))))
compile_stmnt(stmnt, st, 'Sum')
stmnt = ('.', ('=', 'x', 'y'), ('.', ('=', 'x', ('Number', '1')), ('=', 'y', 'x')))
compile_stmnt(stmnt, st, 'Sum')
def compile_fct(fct_def, class_name):
global label_counter
label_counter = 0
_, name, parameters, variables, stmnts = fct_def
_, *parameters = parameters
_, *variables = variables
_, *stmnts = stmnts
m = len(parameters)
n = len(variables)
st = {}
cnt = 0
for var in parameters + variables:
st[var] = cnt
cnt += 1
CmdLst = []
size = 0
for stmnt in stmnts:
L, sz = compile_stmnt(stmnt, st, class_name)
CmdLst += L
size = max(size, sz)
limit_locals = f'.limit locals {m+n}'
limit_stack = f'.limit stack {size}'
return_stmnt = indent('return')
if name != 'main':
method = f'.method public static {name}({"I"*m})I'
CmdLst = [method, limit_locals, limit_stack] + CmdLst + ['.end method']
return CmdLst, sz
else:
method = '.method public static main([Ljava/lang/String;)V'
CmdLst = [method, limit_locals, limit_stack] + CmdLst + [return_stmnt, '.end method']
return CmdLst, sz
f = ('fct', 'sum', ('.', 'x'), ('.', 'y', 'z'), ('.', ('return', 'x')))
compile_fct(f, 'Sum')
import os
file = "~/Dropbox/Kurse/Formal-Languages/Ply/Examples/Test.c"
print(os.path.dirname(file))
print(os.path.basename(file))
print(os.path.join('abc', 'xyz.c'))
Explanation: The method compile_stmnt(stmnt, st, class_name) takes three arguments:
- stmnt is an abstract syntax tree that represents a statement.
This abstract syntax tree is in turn represented as a nested tuple.
- st is short for symbol table. This is a dictionary that maps variable
names to natural numbers. Given a variable x, the number st[x] specifies
the location where the variable x is stored on the stack with respect to the
local stack frame.
- class_name is the name of the class that is to be generated.
The function returns a pair of the form (cmds, size).
- cmds is a list of assembler commands,
- size is the maximum size of the stack that is needed.
End of explanation
def compile_program(file_name):
directory = os.path.dirname(file_name)
base = os.path.basename(file_name)
base = base[:-2]
outfile = os.path.join(directory, base + '.jas')
with open(file_name, 'r') as handle:
program = handle.read()
lexer.lineno = 1
ast = yacc.parse(program)
_, *fct_lst = ast
CmdLst = []
for fct in fct_lst:
L, _ = compile_fct(fct, base)
CmdLst += L + ['\n']
with open(outfile, 'w') as handle:
handle.write('.class public ' + base + '\n');
handle.write('.super java/lang/Object\n\n');
handle.write('.method public <init>()V\n');
handle.write(' aload 0\n');
handle.write(' invokenonvirtual java/lang/Object/<init>()V\n');
handle.write(' return\n');
handle.write('.end method\n\n');
for cmd in CmdLst:
handle.write(cmd + '\n')
%cd Examples
Explanation: The input to compile_program(file_name) is the name of a C file. This file assumed to have the ending .c.
This file is compiled and the resulting assembler file is written into a file with the same name but the ending .jas.
End of explanation
!type Primes.c
!cat Primes.c
compile_program('Primes.c')
!dir
!ls -l
!type Primes.jas
!cat Primes.jas
Explanation: The file Primes.c is a simple C program that computes the prime numbers in a naive way.
End of explanation
!jasmin Primes.jas
Explanation: Next, we generate Java byte code using jasmin.
End of explanation
!java Primes
!del *.jas *.class
!rm *.jas *.class
!dir
!ls -al
Explanation: Finally, we run the generated byte code.
End of explanation |
9,816 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quiz 2
Intelligent Systems 2016-1
After solving all the questions in the exam save your notebook with the name username.ipynb and submit it to
Step3: 1. (1.7)
Implement an MDP that solves question 2 of HW4 from [AI-edX]
Step4: 2. (1.7)
Implement an MDP corresponding to the one in question 3 of HW4 from [AI-edX]. Modify the value iteration algorithm to consider rewards that depend on $(s, a, s')$, where $s$, $a$, and $s'$ correspond to the current state, the action and the next state respectively.
Step5: 3. (1.6)
Apply Q-learning to calculate an optimal policy for the LinearMDP in question 1. | Python Code:
from mdp import *
from rl import *
Explanation: Quiz 2
Intelligent Systems 2016-1
After solving all the questions in the exam save your notebook with the name username.ipynb and submit it to: https://www.dropbox.com/request/0Eh9d2PvQMdAyJviK4Nl
End of explanation
class LinearMDP(GridMDP):
A two-dimensional grid MDP, as in [Figure 17.1]. All you have to do is
specify the grid as a list of lists of rewards; use None for an obstacle
(unreachable state). Also, you should specify the terminal states.
An action is an (x, y) unit vector; e.g. (1, 0) means move east.
def __init__(self, grid, terminals, init=(0, 0), gamma=.9):
GridMDP.__init__(self, grid=grid, terminals=terminals, init=init, gamma=gamma)
def T(self, state, action):
This function must return a list of tuples (p, state') where p is the probability
of going to state'
# Your code here #
def calculate_v_star(rew_a, gamma):
'''
This function must create an instance of LinearMDP that corresponds to the
MDP in question 2 of HW4 and use it to calculate the expected reward for each state.
The function receives as parameter the reward for state 'a', which in the example
corresponds to 10, but here is variable, and the value of gamma.
The reward for state 'e' is 1. The function must return a dictionary with the V* values.
'''
# Your code here #
mdp =
v_star =
##################
return v_star
Explanation: 1. (1.7)
Implement an MDP that solves question 2 of HW4 from [AI-edX]
End of explanation
class ExtendedMDP(MDP):
def __init__(self, transition_matrix, rewards, terminals, init, gamma=.9):
# All possible actions.
actlist = []
for state in transition_matrix.keys():
actlist.extend(transition_matrix[state])
actlist = list(set(actlist))
MDP.__init__(self, init, actlist, terminals=terminals, gamma=gamma)
self.t = transition_matrix
self.reward = rewards
for state in self.t:
self.states.add(state)
def T(self, state, action):
return [(prob, new_state) for new_state, prob
in self.t[state][action].items()]
def R(self, state, action, statep):
"Returns a numeric reward for the combination the current state, the action and the next state."
return self.reward[state][action][statep]
def ext_value_iteration(mdp, epsilon=0.001):
"Solving an MDP by value iteration. Uses a reward function that depends on s, a and s1 "
U1 = {s: 0 for s in mdp.states}
R, T, gamma = mdp.R, mdp.T, mdp.gamma
while True:
U = U1.copy()
delta = 0
for s in mdp.states:
# Your code here #
U1[s] =
##################
delta = max(delta, abs(U1[s] - U[s]))
if delta < epsilon * (1 - gamma) / gamma:
return U
def ext_calculate_v_star(gamma):
'''
This function must create an instance of ExtendedMDP that corresponds to the
MDP in question 3 of HW4 and use it to calculate the expected reward for each state.
The function receives as parameter the value of gamma.
The function must return a dictionary with the V* values.
'''
# Your code here #
t =
rewards =
##################
emdp = ExtendedMDP(t, rewards, [], None, gamma=gamma)
v_star = ext_value_iteration(emdp, epsilon = 0.00001)
return v_star
Explanation: 2. (1.7)
Implement an MDP corresponding to the one in question 3 of HW4 from [AI-edX]. Modify the value iteration algorithm to consider rewards that depend on $(s, a, s')$, where $s$, $a$, and $s'$ correspond to the current state, the action and the next state respectively.
End of explanation
def calculate_policy(rew_a, gamma):
'''
This function must create an instance of LinearMDP that corresponds to the
MDP in question 2 of HW4 and use it to calculate an optimal policy by applying
Q-learning. The function receives as parameter the reward for state 'a',
which in the example corresponds to 10, but here is variable, and the value of gamma.
The reward for state 'e' is 1. The function must return a dictionary with an action for
each state.
'''
# Your code here #
mdp =
##################
q_agent = QLearningAgent(mdp, Ne=5, Rplus=10,
alpha=lambda n: 60./(59+n))
for i in range(200):
run_single_trial(q_agent,mdp)
U = defaultdict(lambda: -1000.) # Very Large Negative Value for Comparison see below.
policy = {}
# Your code here #
##################
return policy
Explanation: 3. (1.6)
Apply Q-learning to calculate an optimal policy for the LinearMDP in question 1.
End of explanation |
9,817 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 3
Imports
Step1: Using interact for animation with data
A soliton is a constant-velocity wave that maintains its shape as it propagates. Solitons arise from non-linear wave equations, such as the Korteweg–de Vries equation, which has the following analytical solution
Step2: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays
Step3: Compute a 2d NumPy array called phi
Step5: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
Step6: Use interact to animate the plot_soliton_data function versus time. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from math import sqrt
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 3
Imports
End of explanation
def soliton(x, t, c, a):
z = (0.5) * c * (((1) / (np.cosh(((sqrt(c)) / 2) * (x - c * t - a))))**2)
return z
assert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))
Explanation: Using interact for animation with data
A soliton is a constant-velocity wave that maintains its shape as it propagates. Solitons arise from non-linear wave equations, such as the Korteweg–de Vries equation, which has the following analytical solution:
$$
\phi(x,t) = \frac{1}{2} c \mathrm{sech}^2 \left[ \frac{\sqrt{c}}{2} \left(x - ct - a \right) \right]
$$
The constant c is the velocity and the constant a is the initial location of the soliton.
Define a soliton(x, t, c, a) function that computes the value of the soliton wave for the given arguments. Your function should work when the position x or t is a NumPy array, in which case it should return a NumPy array itself.
End of explanation
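As a quick sanity check of the formula (a sketch; the specific numbers are just an illustration): at the peak, where x = c*t + a, the sech argument is zero, so the wave height reduces to c/2.
# With c = 2.0, t = 3.0, a = 1.0 the peak sits at x = c*t + a = 7.0,
# so the soliton should evaluate to c/2 = 1.0 there.
peak = soliton(np.array([7.0]), 3.0, 2.0, 1.0)
print(peak)  # expected: [1.]
assert np.allclose(peak, 1.0)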
tmin = 0.0
tmax = 10.0
tpoints = 100
t = np.linspace(tmin, tmax, tpoints)
xmin = 0.0
xmax = 10.0
xpoints = 200
x = np.linspace(xmin, xmax, xpoints)
c = 1.0
a = 0.0
Explanation: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays:
End of explanation
phi = np.zeros((xpoints, tpoints), dtype=float)
for i in range(xpoints):
    for j in range(tpoints):
        phi[i, j] = soliton(x[i], t[j], c, a)
assert phi.shape==(xpoints, tpoints)
assert phi.ndim==2
assert phi.dtype==np.dtype(float)
assert phi[0,0]==soliton(x[0],t[0],c,a)
Explanation: Compute a 2d NumPy array called phi:
It should have a dtype of float.
It should have a shape of (xpoints, tpoints).
phi[i,j] should contain the value $\phi(x[i],t[j])$.
End of explanation
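An equivalent vectorized construction is sketched below; it uses NumPy broadcasting instead of explicit loops and assumes the soliton function above accepts array-valued arguments.
# Build a (xpoints, tpoints) grid in one call: x varies down the rows,
# t varies across the columns.
X, T = np.meshgrid(x, t, indexing='ij')
phi_vectorized = soliton(X, T, c, a)
assert phi_vectorized.shape == (xpoints, tpoints)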
def plot_soliton_data(i=0):
    """Plot the soliton data at t[i] versus x."""
    plt.plot(x, phi[:, i], color='steelblue')
    plt.xlabel('x')
    plt.ylabel(r'$\phi(x, t)$')
    plt.title('Soliton at t = {:.2f}'.format(t[i]))
    plt.ylim(0.0, 0.6)
plot_soliton_data(0)
assert True # leave this for grading the plot_soliton_data function
Explanation: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
End of explanation
interact(plot_soliton_data, i=(0, tpoints - 1))
assert True # leave this for grading the interact with plot_soliton_data cell
Explanation: Use interact to animate the plot_soliton_data function versus time.
End of explanation |
9,818 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook is a revised version of notebook from Sara Robinson and Ivan Chueng
E2E ML on GCP
Step1: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Step2: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs
Step3: Get your project number
Now that the project ID is set, you get your corresponding project number.
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step11: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
Step12: Download and prepare the prebuilt GloVe embeddings
The GloVe embeddings consists of a set of pre-trained embeddings. The embeddings are split into a "train" and "test" splits.
You create a Vertex AI Matching Engine index from the "train" split, and use the embedding vectors in the "test" split as query vectors to test the index.
Note
Step13: Load the embeddings into memory
Load the GloVe embeddings into memory from a HDF5 storage format.
Step14: Save the train split in JSONL format
Next, you store the embeddings from the train split as a JSONL formatted file. Each embedding is stored as
Step15: Store the JSONL formatted embeddings in Cloud Storage
Next, you upload the training data to your Cloud Storage bucket.
Step16: Create Matching Engine Index
Next, you create the index for your embeddings. Currently, two indexing algorithms are supported
Step17: Update the Index
Next, you update the index with a new embedding -- i.e., insertion.
Create update delta file
First, you make a JSONL file with the embeddings to update. You use synthetic data -- in this case, all zeros -- for existing embedding with id of 0. You then upload the JSONL file to a Cloud Storage location.
Step18: Update the index
Next, you use the method update_embeddings() to incrementally update the index, with the following parameters
Step19: Setup VPC peering network
To use a Matching Engine Index, you setup a VPC peering network between your project and the Vertex AI Matching Engine service project. This eliminates additional hops in network traffic and allows using efficient gRPC protocol.
Learn more about VPC peering.
IMPORTANT
Step20: Create the VPC connection
Next, create the connection for VPC peering.
Note
Step21: Check the status of your peering connections.
Step22: Construct the full network name
You need to have the full network resource name when you subsequently create an Matching Engine Index Endpoint resource for VPC peering.
Step23: Create an IndexEndpoint with VPC Network
Next, you create a Matching Engine Index Endpoint, similar to the concept of creating a Private Endpoint for prediction with a peer-to-peer network.
To create the Index Endpoint resource, you call the method create() with the following parameters
Step24: Deploy the Matching Engine Index to the Index Endpoint resource
Next, deploy your index to the Index Endpoint using the method deploy_index() with the following parameters
Step25: Create and execute an online query
Now that your index is deployed, you can make queries.
First, you construct a vector query using synthetic data, to use as the example to return matches for.
Next, you make the matching request using the method match(), with the following parameters
Step26: Create brute force index for calibration
The brute force index uses a naive brute force method to find the nearest neighbors. This method uses a linear search and thus not efficient for large scale indexes. We recommend using the brute force index for calibrating the approximate nearest neighbor (ANN) index for recall, or for mission critical matches.
Create the brute force index
Now create the brute force index using the method create_brute_force_index().
To ensure an apples to apples comparison, the distanceMeasureType and featureNormType, dimensions of the brute force index should match those of the production indices being tuned.
Step27: Update the index
For apples to apples comparison, you perform the same incremental update to the brute force index as you did for the Tree AH index.
Step28: Deploy the brute force index to the IndexEndpoint resource
Next, you deploy the brute force index to the same IndexEndpoint.
Note
Step29: Calibration
Now your ready to do calibration. The production version of the index uses an approxiamation method, which means it may have less than perfect recall when compared to the slower exact match (brute force) method.
Get test results for both indexes
First, using the GloVe test embeddings, you make the identical request to both indexes.
Step30: Compute Recall
Finally, you determine from the results the percentage of exact matches are recalled from the production index. You can subsequently use this information to tune the deployment of the production index.
Step31: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
You can also manually delete resources that you created by running the following code. | Python Code:
import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install --upgrade google-cloud-aiplatform {USER_FLAG} -q
! pip3 install -U grpcio-tools {USER_FLAG} -q
! pip3 install -U h5py {USER_FLAG} -q
Explanation: Notebook is a revised version of notebook from Sara Robinson and Ivan Chueng
E2E ML on GCP: MLOps stage 6 : serving: get started with Vertex AI Matching Engine
<table align="left">
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage6/get_started_with_matching_engine.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/communityml_ops/stage6/get_started_with_matching_engine.ipynbb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage6/get_started_with_matching_engine.ipynbb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Run in Vertex Workbench
</a>
</td>
</table>
Overview
This tutorial demonstrates how to use the Vertex AI Matching Engine service, an approximate nearest neighbor (ANN) index and matching service for vectors (i.e., embeddings) that offers high scalability and low latency.
The service is built upon Approximate Nearest Neighbor (ANN) technology developed by Google Research.
There are several levels of using this service.
no code
Demonstrated in this tutorial. The user brings their own embeddings for indexing and querying.
low code
The user constructs embeddings using a Vertex AI pre-built algorithm: Swivel and TwoTowers.
high code
The user configures how the serving binary generates embeddings from the model for indexing and querying, using Vertex AI Explanations by Examples.
Learn more about Vertex AI Matching Engine
Embeddings
The prebuilt embeddings used in this tutorial come from the GloVe dataset.
"GloVe is an unsupervised learning algorithm for obtaining vector representations for words. Training is performed on aggregated global word-word co-occurrence statistics from a corpus, and the resulting representations showcase interesting linear substructures of the word vector space."
Objective
In this notebook, you will learn how to create an Approximate Nearest Neighbor (ANN) index and query against it.
The steps performed include:
Create ANN Index.
Create an IndexEndpoint with VPC Network
Deploy ANN Index
Perform online query
Deploy brute force Index.
Perform calibration between ANN and brute force index.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the packages required for executing this notebook.
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
Enable the Service Networking API.
Enable the Cloud DNS API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
shell_output = ! gcloud projects list --filter="PROJECT_ID:'{PROJECT_ID}'" --format='value(PROJECT_NUMBER)'
PROJECT_NUMBER = shell_output[0]
print("Project Number:", PROJECT_NUMBER)
Explanation: Get your project number
Now that the project ID is set, you get your corresponding project number.
End of explanation
REGION = "[your-region]" # @param {type: "string"}
if REGION == "[your-region]":
REGION = "us-central1"
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = False
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
"DL_ANACONDA_HOME"
):
if "google.colab" in sys.modules:
IS_COLAB = True
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
BUCKET_URI = f"gs://{BUCKET_NAME}"
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
BUCKET_URI = "gs://" + BUCKET_NAME
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_URI
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_URI
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import google.cloud.aiplatform as aiplatform
import h5py
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)
Explanation: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
End of explanation
! gsutil cp gs://cloud-samples-data/vertex-ai/matching_engine/glove-100-angular.hdf5 .
Explanation: Download and prepare the prebuilt GloVe embeddings
The GloVe embeddings consist of a set of pre-trained embeddings, split into "train" and "test" splits.
You create a Vertex AI Matching Engine index from the "train" split, and use the embedding vectors in the "test" split as query vectors to test the index.
Note: While the data split uses the term "train", these are pre-trained embeddings and thus are ready to be indexed for search. The terms "train" and "test" split are used just to be consistent with usual machine learning terminology.
End of explanation
h5 = h5py.File("glove-100-angular.hdf5", "r")
train = h5["train"]
test = h5["test"]
print(train)
Explanation: Load the embeddings into memory
Load the GloVe embeddings into memory from a HDF5 storage format.
End of explanation
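Before indexing, it can help to confirm what was loaded; below is a small sketch (the exact shapes depend on the downloaded file).
# h5py datasets expose shape and dtype without loading everything into RAM.
print("train:", train.shape, train.dtype)
print("test: ", test.shape, test.dtype)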
with open("glove100.json", "w") as f:
for i in range(len(train)):
f.write('{"id":"' + str(i) + '",')
f.write('"embedding":[' + ",".join(str(x) for x in train[i]) + "]}")
f.write("\n")
Explanation: Save the train split in JSONL format
Next, you store the embeddings from the train split as a JSONL formatted file. Each embedding is stored as:
{ 'id': .., 'embedding': [ ... ] }
The format of the embeddings for the index can be in either CSV, JSON, or Avro format.
Learn more about Embedding Formats for Indexing
End of explanation
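For illustration, each line written above looks like the record below; this is a sketch with a shortened, made-up embedding, whereas real records carry all 100 dimensions.
import json
example_record = {"id": "0", "embedding": [0.12, -0.07, 0.33]}  # truncated for readability
print(json.dumps(example_record))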
EMBEDDINGS_INITIAL_URI = f"{BUCKET_URI}/matching_engine/initial/"
! gsutil cp glove100.json {EMBEDDINGS_INITIAL_URI}
Explanation: Store the JSONL formatted embeddings in Cloud Storage
Next, you upload the training data to your Cloud Storage bucket.
End of explanation
DIMENSIONS = 100
DISPLAY_NAME = "glove_100_1"
tree_ah_index = aiplatform.MatchingEngineIndex.create_tree_ah_index(
display_name=DISPLAY_NAME,
contents_delta_uri=EMBEDDINGS_INITIAL_URI,
dimensions=DIMENSIONS,
approximate_neighbors_count=150,
distance_measure_type="DOT_PRODUCT_DISTANCE",
description="Glove 100 ANN index",
labels={"label_name": "label_value"},
# TreeAH specific parameters
leaf_node_embedding_count=500,
leaf_nodes_to_search_percent=7,
)
INDEX_RESOURCE_NAME = tree_ah_index.resource_name
print(INDEX_RESOURCE_NAME)
Explanation: Create Matching Engine Index
Next, you create the index for your embeddings. Currently, two indexing algorithms are supported:
create_tree_ah_index(): Shallow tree + Asymmetric hashing.
create_brute_force_index(): Linear search.
In this tutorial, you use the create_tree_ah_index()for production scale. The method is called with the following parameters:
display_name: A human readable name for the index.
contents_delta_uri: A Cloud Storage location for the embeddings, which are either to be inserted, updated or deleted.
dimensions: The number of dimensions of the input vector
approximate_neighbors_count: (for Tree AH) The default number of neighbors to find via approximate search before exact reordering is performed. Exact reordering is a procedure where results returned by an approximate search algorithm are reordered via a more expensive distance computation.
distance_measure_type: The distance measure used in nearest neighbor search.
SQUARED_L2_DISTANCE: Euclidean (L2) Distance
L1_DISTANCE: Manhattan (L1) Distance
COSINE_DISTANCE: Cosine Distance. Defined as 1 - cosine similarity.
DOT_PRODUCT_DISTANCE: Default value. Defined as a negative of the dot product.
description: A human readble description of the index.
labels: User metadata in the form of a dictionary.
leaf_node_embedding_count: Number of embeddings on each leaf node. The default value is 1000 if not set.
leaf_nodes_to_search_percent: The default percentage of leaf nodes that any query may be searched. Must be in range 1-100, inclusive. The default value is 10 (means 10%) if not set.
This may take upto 30 minutes.
Learn more about Configuring Matching Engine Indexes.
End of explanation
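Index creation can take a while; if the notebook is restarted, you can reattach to an already created index instead of building a new one. A sketch, assuming INDEX_RESOURCE_NAME was noted from an earlier run:
# Reattach to an existing index by its full resource name
# (projects/.../locations/.../indexes/...), avoiding a second creation.
existing_index = aiplatform.MatchingEngineIndex(index_name=INDEX_RESOURCE_NAME)
print(existing_index.display_name)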
with open("glove100_incremental.json", "w") as f:
f.write(
'{"id":"0","embedding":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]}\n'
)
EMBEDDINGS_UPDATE_URI = f"{BUCKET_URI}/matching-engine/incremental/"
! gsutil cp glove100_incremental.json {EMBEDDINGS_UPDATE_URI}
Explanation: Update the Index
Next, you update the index with a new embedding -- i.e., insertion.
Create update delta file
First, you make a JSONL file with the embeddings to update. You use synthetic data -- in this case, all zeros -- for existing embedding with id of 0. You then upload the JSONL file to a Cloud Storage location.
End of explanation
tree_ah_index = tree_ah_index.update_embeddings(
contents_delta_uri=EMBEDDINGS_UPDATE_URI,
)
INDEX_RESOURCE_NAME = tree_ah_index.resource_name
print(INDEX_RESOURCE_NAME)
Explanation: Update the index
Next, you use the method update_embeddings() to incrementally update the index, with the following parameters:
contents_delta_uri: A Cloud Storage location for the embeddings, which are either to be inserted or updated.
Optionally, the parameter is_complete_overwrite will replace the entire index.
End of explanation
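The cell above applies an incremental (delta) update. If you instead want to rebuild the index contents from scratch using is_complete_overwrite, a guarded sketch looks like this (kept behind a flag so it does not trigger another long-running rebuild by accident):
# Set to True only if you want to replace the entire index contents
# instead of applying incremental deltas.
REBUILD_INDEX = False
if REBUILD_INDEX:
    tree_ah_index = tree_ah_index.update_embeddings(
        contents_delta_uri=EMBEDDINGS_INITIAL_URI,
        is_complete_overwrite=True,
    )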
# This is for display only; you can name the range anything.
PEERING_RANGE_NAME = "vertex-ai-prediction-peering-range"
NETWORK = "default"
# NOTE: `prefix-length=16` means a CIDR block with mask /16 will be
# reserved for use by Google services, such as Vertex AI.
! gcloud compute addresses create $PEERING_RANGE_NAME \
--global \
--prefix-length=16 \
--description="peering range for Google service" \
--network=$NETWORK \
--purpose=VPC_PEERING
Explanation: Setup VPC peering network
To use a Matching Engine Index, you set up a VPC peering network between your project and the Vertex AI Matching Engine service project. This eliminates additional hops in network traffic and allows using the efficient gRPC protocol.
Learn more about VPC peering.
IMPORTANT: you can only set up one VPC peering to servicenetworking.googleapis.com per project.
Create VPC peering for default network
For simplicity, we set up VPC peering to the default network. You can create a different network for your project.
If you set up VPC peering with any other network, make sure that the network already exists and that your VM is running on that network.
End of explanation
! gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--network=$NETWORK \
--ranges=$PEERING_RANGE_NAME \
--project=$PROJECT_ID
Explanation: Create the VPC connection
Next, create the connection for VPC peering.
Note: If you get a PERMISSION DENIED, you may not have the necessary role 'Compute Network Admin' set for your default service account. In the Cloud Console, do the following steps.
Go to IAM & Admin
Find your service account.
Click edit icon.
Select Add Another Role.
Enter 'Compute Network Admin'.
Select Save
End of explanation
! gcloud compute networks peerings list --network $NETWORK
Explanation: Check the status of your peering connections.
End of explanation
full_network_name = f"projects/{PROJECT_NUMBER}/global/networks/{NETWORK}"
Explanation: Construct the full network name
You need to have the full network resource name when you subsequently create an Matching Engine Index Endpoint resource for VPC peering.
End of explanation
index_endpoint = aiplatform.MatchingEngineIndexEndpoint.create(
display_name="index_endpoint_for_demo",
description="index endpoint description",
network=full_network_name,
)
INDEX_ENDPOINT_NAME = index_endpoint.resource_name
print(INDEX_ENDPOINT_NAME)
Explanation: Create an IndexEndpoint with VPC Network
Next, you create a Matching Engine Index Endpoint, similar to the concept of creating a Private Endpoint for prediction with a peer-to-peer network.
To create the Index Endpoint resource, you call the method create() with the following parameters:
display_name: A human readable name for the Index Endpoint.
description: A description for the Index Endpoint.
network: The VPC network resource name.
End of explanation
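As with the index, endpoint creation only needs to happen once; on a later run you can reattach to it. A sketch, assuming INDEX_ENDPOINT_NAME from a previous session:
# Reattach to an existing Index Endpoint by its full resource name.
existing_endpoint = aiplatform.MatchingEngineIndexEndpoint(
    index_endpoint_name=INDEX_ENDPOINT_NAME
)
print(existing_endpoint.display_name)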
DEPLOYED_INDEX_ID = "tree_ah_glove_deployed_" + TIMESTAMP
MIN_NODES = 1
MAX_NODES = 2
DEPLOY_COMPUTE = "n1-standard-16"
index_endpoint.deploy_index(
display_name="deployed_index_for_demo",
index=tree_ah_index,
deployed_index_id=DEPLOYED_INDEX_ID,
# machine_type=DEPLOY_COMPUTE,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
print(index_endpoint.deployed_indexes)
Explanation: Deploy the Matching Engine Index to the Index Endpoint resource
Next, deploy your index to the Index Endpoint using the method deploy_index() with the following parameters:
display_name: A human readable name for the deployed index.
index: Your index.
deployed_index_id: A user assigned identifier for the deployed index.
machine_type: (optional) The VM instance type.
min_replica_count: (optional) Minimum number of VM instances for auto-scaling.
max_replica_count: (optional) Maximum number of VM instances for auto-scaling.
Learn more about Machine resources for Index Endpoint
End of explanation
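If you later need to free the serving resources for this index without deleting the endpoint, the deployment can be rolled back. A sketch (the undeploy call is left commented out because the deployed index is still used below):
# List what is currently deployed, then undeploy a single index by its id.
for deployed in index_endpoint.deployed_indexes:
    print(deployed.id)
# index_endpoint.undeploy_index(deployed_index_id=DEPLOYED_INDEX_ID)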
# The number of nearest neighbors to be retrieved from database for each query.
NUM_NEIGHBOURS = 10
# Test query
queries = [
[
-0.11333,
0.48402,
0.090771,
-0.22439,
0.034206,
-0.55831,
0.041849,
-0.53573,
0.18809,
-0.58722,
0.015313,
-0.014555,
0.80842,
-0.038519,
0.75348,
0.70502,
-0.17863,
0.3222,
0.67575,
0.67198,
0.26044,
0.4187,
-0.34122,
0.2286,
-0.53529,
1.2582,
-0.091543,
0.19716,
-0.037454,
-0.3336,
0.31399,
0.36488,
0.71263,
0.1307,
-0.24654,
-0.52445,
-0.036091,
0.55068,
0.10017,
0.48095,
0.71104,
-0.053462,
0.22325,
0.30917,
-0.39926,
0.036634,
-0.35431,
-0.42795,
0.46444,
0.25586,
0.68257,
-0.20821,
0.38433,
0.055773,
-0.2539,
-0.20804,
0.52522,
-0.11399,
-0.3253,
-0.44104,
0.17528,
0.62255,
0.50237,
-0.7607,
-0.071786,
0.0080131,
-0.13286,
0.50097,
0.18824,
-0.54722,
-0.42664,
0.4292,
0.14877,
-0.0072514,
-0.16484,
-0.059798,
0.9895,
-0.61738,
0.054169,
0.48424,
-0.35084,
-0.27053,
0.37829,
0.11503,
-0.39613,
0.24266,
0.39147,
-0.075256,
0.65093,
-0.20822,
-0.17456,
0.53571,
-0.16537,
0.13582,
-0.56016,
0.016964,
0.1277,
0.94071,
-0.22608,
-0.021106,
],
[
-0.99544,
-2.3651,
-0.24332,
-1.0321,
0.42052,
-1.1817,
-0.16451,
-1.683,
0.49673,
-0.27258,
-0.025397,
0.34188,
1.5523,
1.3532,
0.33297,
-0.0056677,
-0.76525,
0.49587,
1.2211,
0.83394,
-0.20031,
-0.59657,
0.38485,
-0.23487,
-1.0725,
0.95856,
0.16161,
-1.2496,
1.6751,
0.73899,
0.051347,
-0.42702,
0.16257,
-0.16772,
0.40146,
0.29837,
0.96204,
-0.36232,
-0.47848,
0.78278,
0.14834,
1.3407,
0.47834,
-0.39083,
-1.037,
-0.24643,
-0.75841,
0.7669,
-0.37363,
0.52741,
0.018563,
-0.51301,
0.97674,
0.55232,
1.1584,
0.73715,
1.3055,
-0.44743,
-0.15961,
0.85006,
-0.34092,
-0.67667,
0.2317,
1.5582,
1.2308,
-0.62213,
-0.032801,
0.1206,
-0.25899,
-0.02756,
-0.52814,
-0.93523,
0.58434,
-0.24799,
0.37692,
0.86527,
0.069626,
1.3096,
0.29975,
-1.3651,
-0.32048,
-0.13741,
0.33329,
-1.9113,
-0.60222,
-0.23921,
0.12664,
-0.47961,
-0.89531,
0.62054,
0.40869,
-0.08503,
0.6413,
-0.84044,
-0.74325,
-0.19426,
0.098722,
0.32648,
-0.67621,
-0.62692,
],
]
matches = index_endpoint.match(
deployed_index_id=DEPLOYED_INDEX_ID, queries=queries, num_neighbors=NUM_NEIGHBOURS
)
for instance in matches:
print("INSTANCE")
for match in instance:
print(match)
Explanation: Create and execute an online query
Now that your index is deployed, you can make queries.
First, you construct a vector query using synthetic data, to use as the example to return matches for.
Next, you make the matching request using the method match(), with the following parameters:
deployed_index_id: The identifier of the deployed index.
queries: A list of queries (instances).
num_neighbors: The number of closest matches to return.
End of explanation
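The same call works with real query vectors; for example, here is a sketch that queries the deployed index with the first embedding from the GloVe test split instead of the synthetic data above.
# Query the deployed index with an actual GloVe test vector.
real_matches = index_endpoint.match(
    deployed_index_id=DEPLOYED_INDEX_ID,
    queries=[test[0].tolist()],
    num_neighbors=NUM_NEIGHBOURS,
)
for match in real_matches[0]:
    print(match.id, match.distance)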
brute_force_index = aiplatform.MatchingEngineIndex.create_brute_force_index(
display_name=DISPLAY_NAME,
contents_delta_uri=EMBEDDINGS_INITIAL_URI,
dimensions=DIMENSIONS,
distance_measure_type="DOT_PRODUCT_DISTANCE",
description="Glove 100 index (brute force)",
labels={"label_name": "label_value"},
)
INDEX_BRUTE_FORCE_RESOURCE_NAME = brute_force_index.resource_name
print(INDEX_BRUTE_FORCE_RESOURCE_NAME)
Explanation: Create brute force index for calibration
The brute force index uses a naive brute force method to find the nearest neighbors. This method uses a linear search and is thus not efficient for large-scale indexes. We recommend using the brute force index for calibrating the approximate nearest neighbor (ANN) index for recall, or for mission critical matches.
Create the brute force index
Now create the brute force index using the method create_brute_force_index().
To ensure an apples-to-apples comparison, the distanceMeasureType, featureNormType, and dimensions of the brute force index should match those of the production indices being tuned.
End of explanation
brute_force_index = brute_force_index.update_embeddings(
    contents_delta_uri=EMBEDDINGS_UPDATE_URI
)
Explanation: Update the index
For an apples-to-apples comparison, you perform the same incremental update to the brute force index as you did for the Tree AH index.
End of explanation
DEPLOYED_BRUTE_FORCE_INDEX_ID = "glove_brute_force_deployed_" + TIMESTAMP
index_endpoint.deploy_index(
index=brute_force_index, deployed_index_id=DEPLOYED_BRUTE_FORCE_INDEX_ID
)
Explanation: Deploy the brute force index to the IndexEndpoint resource
Next, you deploy the brute force index to the same IndexEndpoint.
Note: You can deploy multiple indexes to the same Index Endpoint resource.
End of explanation
prod_matches = index_endpoint.match(
deployed_index_id=DEPLOYED_INDEX_ID,
queries=list(test),
num_neighbors=NUM_NEIGHBOURS,
)
exact_matches = index_endpoint.match(
deployed_index_id=DEPLOYED_BRUTE_FORCE_INDEX_ID,
queries=list(test),
num_neighbors=NUM_NEIGHBOURS,
)
Explanation: Calibration
Now you're ready to do calibration. The production version of the index uses an approximation method, which means it may have less than perfect recall when compared to the slower exact match (brute force) method.
Get test results for both indexes
First, using the GloVe test embeddings, you make the identical request to both indexes.
End of explanation
# Calculate recall by determining how many neighbors were correctly retrieved as compared to the brute-force option.
correct_neighbors = 0
for tree_ah_neighbors, brute_force_neighbors in zip(prod_matches, exact_matches):
tree_ah_neighbor_ids = [neighbor.id for neighbor in tree_ah_neighbors]
brute_force_neighbor_ids = [neighbor.id for neighbor in brute_force_neighbors]
correct_neighbors += len(
set(tree_ah_neighbor_ids).intersection(brute_force_neighbor_ids)
)
recall = correct_neighbors / (len(test) * NUM_NEIGHBOURS)
print("Recall: {}".format(recall))
Explanation: Compute Recall
Finally, you determine from the results the percentage of exact matches that are recalled from the production index. You can subsequently use this information to tune the deployment of the production index.
End of explanation
# Force undeployment of indexes and delete endpoint
try:
index_endpoint.delete(force=True)
except Exception as e:
print(e)
# Delete indexes
try:
tree_ah_index.delete()
brute_force_index.delete()
except Exception as e:
print(e)
delete_bucket = False
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -rf {BUCKET_URI}
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
You can also manually delete resources that you created by running the following code.
End of explanation |
9,819 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Resit Assignment part A
Deadline
Step1: Please make sure you can load the English spaCy model
Step2: Exercise 1
Step3: Please test your function using the following function call
Step4: Exercise 2
Step6: Exercise 3
Step7: Please test you function by running the following cell
Step8: Exercise 4
Step9: tip 2
Step10: tip 3
Step11: tip 4
Step13: Define a function called extract_statistics that has the following parameters
Step14: Exercise 5
Step16: Define a function called process_all_txt_files that has the following parameters
Step17: Exercise 6 | Python Code:
import spacy
Explanation: Resit Assignment part A
Deadline: Tuesday, November 30, 2021 before 17:00
Please name your files:
ASSIGNMENT-RESIT-A.ipynb
utils.py (from part B)
raw_text_to_coll.py (from part B)
Please name your zip file as follows: RESIT-ASSIGNMENT.zip and upload it via Canvas (Resit Assignment).
- Please submit your assignment on Canvas: Resit Assignment
- If you have questions about this topic, please use the Python Teacher mailing list ([email protected]).
Note that we currently only check this mailing list once a day. We have given a week extra time, so please start timely.
Answers to general questions will be covered on Piazza (https://piazza.com/class/kt1o9ir48ph50c), so please check if your question has already been answered.
All of the covered chapters are important to this assignment. However, please pay special attention to:
- Chapter 10 - Dictionaries
- Chapter 11 - Functions and scope
* Chapter 14 - Reading and writing text files
* Chapter 15 - Off to analyzing text
- Chapter 17 - Data Formats II (JSON)
- Chapter 19 - More about Natural Language Processing Tools (spaCy)
In this assignment:
* we are going to process the texts in ../Data/Dreams/*txt
* for each file, we are going to determine:
* the number of characters
* the number of sentences
* the number of words
* the longest word
* the longest sentence
Note
This notebook should be placed in the same folder as the other Assignments!
Loading spaCy
Please make sure that spaCy is installed on your computer
End of explanation
nlp = spacy.load('en_core_web_sm')
Explanation: Please make sure you can load the English spaCy model:
End of explanation
# your code here
Explanation: Exercise 1: get paths
Define a function called get_paths that has the following parameter:
* input_folder: a string
The function:
* stores all paths to .txt files in the input_folder in a list
* returns a list of strings, i.e., each string is a file path
End of explanation
paths = get_paths(input_folder='../Data/Dreams')
print(paths)
Explanation: Please test your function using the following function call
End of explanation
# your code here
Explanation: Exercise 2: load text
Define a function called load_text that has the following parameter:
* txt_path: a string
The function:
* opens the txt_path for reading and loads the contents of the file as a string
* returns a string, i.e., the content of the file
End of explanation
def return_the_longest(list_of_strings):
given a list of strings, return the longest string
if multiple strings have the same length, return one of them.
:param str list_of_strings: a list of strings
Explanation: Exercise 3: return the longest
Define a function called return_the_longest that has the following parameter:
* list_of_strings: a list of strings
The function:
* returns the string with the highest number of characters. If multiple strings have the same length, return one of them.
End of explanation
a_list_of_strings = ["this", "is", "a", "sentence"]
longest_string = return_the_longest(a_list_of_strings)
error_message = f'the longest string should be "sentence", you provided {longest_string}'
assert longest_string == 'sentence', error_message
Explanation: Please test your function by running the following cell:
End of explanation
a_text = 'this is one sentence. this is another.'
doc = nlp(a_text)
Explanation: Exercise 4: extract statistics
We are going to use spaCy to extract statistics from Vickie's dreams! Here are a few tips below about how to use spaCy:
tip 1: process text with spaCy
End of explanation
num_chars = len(doc.text)
print(num_chars)
Explanation: tip 2: the number of characters is the length of the document
End of explanation
for sent in doc.sents:
sent = sent.text
print(sent)
Explanation: tip 3: loop through the sentences of a document
End of explanation
for token in doc:
word = token.text
print(word)
Explanation: tip 4: loop through the words of a document
End of explanation
def extract_statistics(nlp, txt_path):
given a txt_path
-use the load_text function to load the text
-process the text using spaCy
:param nlp: loaded spaCy model (result of calling spacy.load('en_core_web_sm'))
:param str txt_path: path to txt file
:rtype: dict
:return: a dictionary with the following keys:
-"num_sents" : the number of sentences
-"num_chars" : the number of characters
-"num_tokens" : the number of words
-"longest_sent" : the longest sentence
-"longest_word" : the longest word
stats = extract_statistics(nlp, txt_path=paths[0])
stats
Explanation: Define a function called extract_statistics that has the following parameters:
* nlp: the result of calling spacy.load('en_core_web_sm')
* txt_path: path to a txt file, e.g., '../Data/Dreams/vickie8.txt'
The function:
* loads the content of the file using the function load_text
* processes the content of the file using nlp(content) (see tip 1 of this exercise)
The function returns a dictionary with five keys:
* num_sents: the number of sentences in the document
* num_chars: the number of characters in the document
* num_tokens: the number of words in the document
* longest_sent: the longest sentence in the document
* Please make a list with all the sentences and call the function return_the_longest to retrieve the longest sentence
* longest_word: the longest word in the document
* Please make a list with all the words and call the function return_the_longest to retrieve the longest word
Test the function on one of the files from Vickie's dreams.
End of explanation
import os
basename = os.path.basename('../Data/Dreams/vickie1.txt')[:-4]
print(basename)
Explanation: Exercise 5: process all txt files
tip 1: how to obtain the basename of a file
End of explanation
def process_all_txt_files(nlp, input_folder):
given a list of txt_paths
-process each with the extract_statistics function
:param nlp: loaded spaCy model (result of calling spacy.load('en_core_web_sm'))
:param list txt_paths: list of paths to txt files
:rtype: dict
:return: dictionary mapping:
-basename -> output of extract_statistics function
basename_to_stats = process_all_txt_files(nlp, input_folder='../Data/Dreams')
basename_to_stats
Explanation: Define a function called process_all_txt_files that has the following parameters:
* nlp: the result of calling spacy.load('en_core_web_sm')
* input_folder: a string (we will test it using '../Data/Dreams')
The function:
* obtains a list of txt paths using the function get_paths with input_folder as an argument
* loops through the txt paths one by one
* for each iteration, the extract_statistics function is called with txt_path as an argument
The function returns a dictionary:
* the keys are the basenames of the txt files (see tip 1 of this exercise)
* the values are the output of calling the function extract_statistics for a specific file
Test your function using '../Data/Dreams' as a value for the parameter input_folder.
End of explanation
import json
for basename, stats in basename_to_stats.items():
pass
Explanation: Exercise 6: write to disk
In this exercise, you are going to write our results to our computer.
Please loop through basename_to_stats and create one JSON file for each dream.
the path is f'{basename}.json', i.e., 'vickie1.json', 'vickie2.json', etc. (please write them to the same folder as this notebook)
the content of each JSON file is each value of basename_to_stats
End of explanation |
9,820 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Making predictions
Load Model
This notebook loads a model previously trained in 2_keras.ipynb or 3_eager.ipynb from earlier in the TensorFlow Basics workshop.
Note
Step2: Live Predictions
Step3: TensorFlow.js
Read about basic concepts in TensorFlow.js
Step4: Convert Model
We can convert the Keras model into TensorFlow.js format using the Python package tensorflowjs.
Read more about importing Keras models
Step5: Predict in JS
1. write index.html
This is essentially the same drawing code as above in "Live Predictions", additionally some code to load the exported Javascript model for calling model.predict().
Step6: 2. A static web server
Serving both index.html and the converted model.
Step7: 3. Port forwarding
Via a ngrok tunnel from the local machine to the internet. | Python Code:
# In Jupyter, you would need to install TF 2 via !pip.
%tensorflow_version 2.x
## Load models from Drive (Colab only).
models_path = '/content/gdrive/My Drive/amld_data/models'
data_path = '/content/gdrive/My Drive/amld_data/zoo_img'
## Or load models from local machine.
# models_path = './amld_models'
# data_path = './amld_data'
## Or load models from GCS (Colab only).
# models_path = 'gs://amld-datasets/models'
# data_path = 'gs://amld-datasets/zoo_img_small'
if models_path.startswith('/content/gdrive/'):
from google.colab import drive
drive.mount('/content/gdrive')
if models_path.startswith('gs://'):
# Keras doesn't read directly from GCS -> download.
from google.colab import auth
import os
os.makedirs('./amld_models', exist_ok=True)
auth.authenticate_user()
!gsutil cp -r "$models_path"/\* ./amld_models
models_path = './amld_models'
!ls -lh "$models_path"
import json, os
import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf
# Tested with TensorFlow 2.1.0
print('version={}, CUDA={}, GPU={}, TPU={}'.format(
tf.__version__, tf.test.is_built_with_cuda(),
# GPU attached? Note that you can "Runtime/Change runtime type..." in Colab.
len(tf.config.list_physical_devices('GPU')) > 0,
# TPU accessible? (only works on Colab)
'COLAB_TPU_ADDR' in os.environ))
# Load the label names from the dataset.
labels = [label.strip() for label in
tf.io.gfile.GFile('{}/labels.txt'.format(data_path))]
print('\n'.join(['%2d: %s' % (i, label) for i, label in enumerate(labels)]))
# Load model from 2_keras.ipynb
model = tf.keras.models.load_model(os.path.join(models_path, 'linear.h5'))
model.summary()
Explanation: Making predictions
Load Model
This notebook loads a model previously trained in 2_keras.ipynb or 3_eager.ipynb from earlier in the TensorFlow Basics workshop.
Note: The code in this notebook is quite Colab-specific and won't work with plain Jupyter.
End of explanation
from google.colab import output
import IPython
def predict(img_64):
Get Predictions for provided image.
Args:
img_64: Raw image data (dtype int).
Returns:
A JSON object with the value for `result` being a text representation of the
top predictions.
# Reshape image into batch with single image (extra dimension "1").
preds = model.predict(np.array(img_64, float).reshape([1, 64, 64]))
# Get top three predictions (reverse argsort).
top3 = (-preds[0]).argsort()[:3]
# Return both probability and prediction label name.
result = '\n'.join(['%.3f: %s' % (preds[0, i], labels[i]) for i in top3])
return IPython.display.JSON(dict(result=result))
output.register_callback('amld.predict', predict)
%%html
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
<canvas width="256" height="256" id="canvas" style="border:1px solid black"></canvas><br />
<button id="clear">clear</button><br />
<pre id="output"></pre>
<script>
let upscaleFactor = 4, halfPenSize = 2
let canvas = document.getElementById('canvas')
let output = document.getElementById('output')
let ctx = canvas.getContext('2d')
let img_64 = new Uint8Array(64*64)
let dragging = false
let timeout
let predict = () => {
google.colab.kernel.invokeFunction('amld.predict', [Array.from(img_64)], {}).then(
obj => output.textContent = obj.data['application/json'].result)
}
const getPos = e => {
let x = e.offsetX, y = e.offsetY
if (e.touches) {
const rect = canvas.getBoundingClientRect()
x = e.touches[0].clientX - rect.left
y = e.touches[0].clientY - rect.left
}
return {
x: Math.floor((x - 2*halfPenSize*upscaleFactor/2)/upscaleFactor),
y: Math.floor((y - 2*halfPenSize*upscaleFactor/2)/upscaleFactor),
}
}
const handler = e => {
const { x, y } = getPos(e)
ctx.fillStyle = 'black'
ctx.fillRect(x*upscaleFactor, y*upscaleFactor,
2*halfPenSize*upscaleFactor, 2*halfPenSize*upscaleFactor)
for (let yy = y - halfPenSize; yy < y + halfPenSize; yy++)
for (let xx = x - halfPenSize; xx < x + halfPenSize; xx++)
img_64[64*Math.min(63, Math.max(0, yy)) + Math.min(63, Math.max(0, xx))] = 1
clearTimeout(timeout)
timeout = setTimeout(predict, 500)
}
canvas.addEventListener('touchstart', e => {dragging=true; handler(e)})
canvas.addEventListener('touchmove', e => {e.preventDefault(); dragging && handler(e)})
canvas.addEventListener('touchend', () => dragging=false)
canvas.addEventListener('mousedown', e => {dragging=true; handler(e)})
canvas.addEventListener('mousemove', e => {dragging && handler(e)})
canvas.addEventListener('mouseup', () => dragging=false)
canvas.addEventListener('mouseleave', () => dragging=false)
document.getElementById('clear').addEventListener('click', () => {
ctx.fillStyle = 'white'
ctx.fillRect(0, 0, 64*upscaleFactor, 64*upscaleFactor)
output.textContent = ''
img_64 = new Uint8Array(64*64)
})
</script>
# YOUR ACTION REQUIRED:
# Load another model from 2_keras.ipynb and observe:
# - Do you get better/worse predictions?
# - Do you feel a difference in latency?
# - Can you figure out by how the model "thinks" by providing similar images
# that yield different predictions, or different images that yield the same
# picture?
Explanation: Live Predictions
End of explanation
# Getting the data of a tensor in TensorFlow.js: Use the async .data() method
# to show the output in the "output" element.
# See output in javascript console (e.g. Chrome developer tools).
# For convenience, you can also use the following Codepen:
# https://codepen.io/amld-tensorflow-basics/pen/OJPagyN
%%html
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/[email protected]/dist/tf.min.js"></script>
<pre id="output"></pre>
<script>
let output = document.getElementById('output')
let t = tf.tensor([1, 2, 3])
output.textContent = t
// YOUR ACTION REQUIRED:
// Use "t.data()" to append the tensor's data values to "output.textContent".
# Get top 3 predictions using TensorFlow Eager.
preds = tf.constant([0.1, 0.5, 0.2, 0.0])
topk = tf.math.top_k(preds, 3)
for idx, value in zip(topk.indices.numpy(), topk.values.numpy()):
print('idx', idx, 'value', value)
# Implement the same top 3 functionality in TensorFlow.js, showing the output
# in the "output" element.
# See https://js.tensorflow.org/api/latest/index.html#topk
%%html
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/[email protected]/dist/tf.min.js"></script>
<pre id="output"></pre>
<script>
let output = document.getElementById('output')
let preds = tf.tensor([0.1, 0.5, 0.2, 0.0])
// YOUR ACTION REQUIRED:
// Use tf.topk() to get top 3 predictions in "preds" and append both the
// index and the value of these predictions to "output".
Explanation: TensorFlow.js
Read about basic concepts in TensorFlow.js:
https://js.tensorflow.org/tutorials/core-concepts.html
If you find the Colab %%html way cumbersome for exploring the JS API, give Codepen a try by clicking the "Try TensorFlow.js" button on https://js.tensorflow.org/
Basics
End of explanation
# (Never mind the incompatible package complaints - it just works fine.)
!pip install -q tensorflowjs
# Specify directory where to store model.
tfjs_model_path = './tfjs/model'
!mkdir -p "$tfjs_model_path"
import tensorflowjs as tfjs
# Convert model
tf.keras.backend.clear_session() # Clean up variable names before exporting.
# (You can safely ignore the H5pyDeprecationWarning here...)
model = tf.keras.models.load_model(os.path.join(models_path, 'linear.h5'))
tfjs.converters.save_keras_model(model, tfjs_model_path)
!ls -lh "$tfjs_model_path"
import json
# You can copy this into the JavaScript code in the next cell if you load a
# model trained on a custom dataset (code below assumes dataset="zoo").
print(json.dumps(labels))
Explanation: Convert Model
We can convert the Keras model into TensorFlow.js format using the Python package tensorflowjs.
Read more about importing Keras models:
https://js.tensorflow.org/tutorials/import-keras.html
End of explanation
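As an optional sanity check (an addition, not part of the original notebook), you can peek at what the converter wrote: save_keras_model() emits a model.json holding the topology plus a weights manifest, alongside one or more binary weight shards.
```python
# Optional check of the exported artifacts; assumes the conversion above succeeded.
import os, json
with open(os.path.join(tfjs_model_path, 'model.json')) as f:
    manifest = json.load(f)
print(sorted(manifest.keys()))      # expect keys such as 'modelTopology' and 'weightsManifest'
print(os.listdir(tfjs_model_path))  # model.json plus *.bin weight shard files
```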
with open('./tfjs/index.html', 'w') as f:
f.write('''
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
<canvas width="256" height="256" id="canvas" style="border:1px solid black"></canvas><br />
<button id="clear">clear</button><br />
<pre id="output"></pre>
<script>
let upscaleFactor = 4, halfPenSize = 2
let canvas = document.getElementById('canvas')
let output = document.getElementById('output')
let ctx = canvas.getContext('2d')
let img_64 = new Uint8Array(64*64)
let dragging = false
let timeout
let predict = () => {
google.colab.kernel.invokeFunction('amld.predict', [Array.from(img_64)], {}).then(
obj => output.textContent = obj.data['application/json'].result)
}
const getPos = e => {
let x = e.offsetX, y = e.offsetY
if (e.touches) {
const rect = canvas.getBoundingClientRect()
x = e.touches[0].clientX - rect.left
y = e.touches[0].clientY - rect.top
}
return {
x: Math.floor((x - 2*halfPenSize*upscaleFactor/2)/upscaleFactor),
y: Math.floor((y - 2*halfPenSize*upscaleFactor/2)/upscaleFactor),
}
}
const handler = e => {
const { x, y } = getPos(e)
ctx.fillStyle = 'black'
ctx.fillRect(x*upscaleFactor, y*upscaleFactor,
2*halfPenSize*upscaleFactor, 2*halfPenSize*upscaleFactor)
for (let yy = y - halfPenSize; yy < y + halfPenSize; yy++)
for (let xx = x - halfPenSize; xx < x + halfPenSize; xx++)
img_64[64*Math.min(63, Math.max(0, yy)) + Math.min(63, Math.max(0, xx))] = 1
clearTimeout(timeout)
timeout = setTimeout(predict, 500)
}
canvas.addEventListener('touchstart', e => {dragging=true; handler(e)})
canvas.addEventListener('touchmove', e => {e.preventDefault(); dragging && handler(e)})
canvas.addEventListener('touchend', () => dragging=false)
canvas.addEventListener('mousedown', e => {dragging=true; handler(e)})
canvas.addEventListener('mousemove', e => {dragging && handler(e)})
canvas.addEventListener('mouseup', () => dragging=false)
canvas.addEventListener('mouseleave', () => dragging=false)
document.getElementById('clear').addEventListener('click', () => {
ctx.fillStyle = 'white'
ctx.fillRect(0, 0, 64*upscaleFactor, 64*upscaleFactor)
output.textContent = ''
img_64 = new Uint8Array(64*64)
})
</script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/[email protected]/dist/tf.min.js"></script>
<script>
const labels = %s
const modelPath = './model/model.json'
let model = null
tf.loadLayersModel(modelPath)
.then(response => model = response)
.catch(error => output.textContent = 'ERROR : ' + error.message)
predict = () => {
const preds = model.predict(tf.tensor(img_64).reshape([1, 64, -1]))
const { values, indices } = tf.topk(preds, 3)
Promise.all([values.data(), indices.data()]).then(data => {
const [ values, indices ] = data
output.textContent = ''
values.forEach((v, i) => output.textContent += `${labels[indices[i]]} : ${v.toFixed(3)}\n`)
})
}
</script>''' % json.dumps(labels))
Explanation: Predict in JS
1. write index.html
This is essentially the same drawing code as above in "Live Predictions", plus some code that loads the exported TensorFlow.js model and calls model.predict().
End of explanation
# Download ngrok for tunneling.
!if [ ! -f ./ngrok ]; then \
wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip; \
unzip -o ngrok-stable-linux-amd64.zip; \
fi
# Then start a mini web server at a random port.
import random
port = random.randint(1000, 2**16)
!pkill ngrok
!kill $(ps x | grep -v grep | grep http.server | awk '{print $1}') 2>/dev/null
get_ipython().system_raw(
'cd ./tfjs && python3 -m http.server {} &'
.format(port)
)
# And, forward the port using ngrok.
get_ipython().system_raw('./ngrok http {} &'.format(port))
Explanation: 2. A static web server
Serving both index.html and the converted model.
End of explanation
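Before tunnelling, it can help to confirm that the little server actually responds; this check is an addition and assumes the `port` variable from the cell above.
```python
# Optional: confirm the local server serves index.html before exposing it via ngrok.
import time, urllib.request
time.sleep(1)  # give http.server a moment to start
resp = urllib.request.urlopen('http://localhost:{}/index.html'.format(port))
print(resp.status, len(resp.read()), 'bytes')
```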
# Get the public address from localhost:4040 (ngrok's web interface).
import time, urllib
time.sleep(1) # Give ngrok time to startup.
ngrok_data = json.load(
urllib.request.urlopen('http://localhost:4040/api/tunnels'))
ngrok_data['tunnels'][0]['public_url']
# You can connect to this external address using your mobile phone!
# Once the page is loaded you can turn on flight mode and verify that
# predictions are really generated on-device. :-)
!pip install -q qrcode
import qrcode
qrcode.make(ngrok_data['tunnels'][0]['public_url'])
Explanation: 3. Port forwarding
Via an ngrok tunnel from the local machine to the internet.
End of explanation |
9,821 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What's going on with ROCAUC for Binary Classification?
We've identified a bug in ROCAUC
Step1: Binary Classification with 1D Coefficients or Feature Importances
When the function has 1D coefficients, we don't seem to have a problem
Step2: Looks good; everything works!
Binary Classification with Multidimensional Coefficients or Feature Importances
What about classification with estimators that have multidimensional coefficients? Thanks to ZJ Poh for identifying these in this PR.
Step3: Some of these generate the IndexError
Step4: so what's going on here?
The Shape of y_pred
It looks like all of the classifiers that trigger the IndexError during binary classification with ROCAUC are ones that have only a decision_function and for which y_pred.shape is (n_samples,).
Classifiers that Raise the IndexError with Binary Classification & ROCAUC
LinearSVC()
SVC()
SGDClassifier()
PassiveAggressiveClassifier()
RidgeClassifier()
RidgeClassifierCV()
Step5: Classifiers that Currently Work with Binary Classification & ROCAUC
The classifiers with decision functions and y_pred.shape (n_samples, ) that do work with ROCAUC for binary classification seem to work because they also have predict_proba
Step6: Sklearn Documentation
AdaBoostClassifier()
QuadraticDiscriminantAnalysis()
LogisticRegression()
LogisticRegressionCV() | Python Code:
%matplotlib inline
import os
import sys
# Modify the path
sys.path.append("..")
import pandas as pd
import yellowbrick as yb
import matplotlib.pyplot as plt
from yellowbrick.classifier import ROCAUC
from sklearn.model_selection import train_test_split
occupancy = pd.read_csv('data/occupancy/occupancy.csv')
features = [
"temperature", "relative humidity", "light", "C02", "humidity"
]
classes = ["unoccupied", "occupied"]
X = occupancy[features]
y = occupancy['occupancy']
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
Explanation: What's going on with ROCAUC for Binary Classification?
We've identified a bug in ROCAUC:
```
ERROR: Test ROCAUC with a binary classifier
Traceback (most recent call last):
File "/Users/benjamin/Repos/ddl/yellowbrick/tests/test_classifier/test_rocauc.py", line 110, in test_binary_rocauc
s = visualizer.score(X_test, y_test)
File "/Users/benjamin/Repos/ddl/yellowbrick/yellowbrick/classifier/rocauc.py", line 171, in score
self.fpr[i], self.tpr[i], _ = roc_curve(y, y_pred[:,i], pos_label=c)
IndexError: too many indices for array
```
Let's see if we can figure out where it's getting triggered.
End of explanation
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
classifiers = [
AdaBoostClassifier(),
MLPClassifier(),
DecisionTreeClassifier(),
QuadraticDiscriminantAnalysis(),
DecisionTreeClassifier(),
RandomForestClassifier(),
]
for classifier in classifiers:
oz = ROCAUC(classifier)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
g = oz.show()
Explanation: Binary Classification with 1D Coefficients or Feature Importances
When the function has 1D coefficients, we don't seem to have a problem
End of explanation
from sklearn.svm import LinearSVC, NuSVC, SVC
from sklearn.linear_model import SGDClassifier
from sklearn.naive_bayes import BernoulliNB, MultinomialNB
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.linear_model import RidgeClassifier, RidgeClassifierCV
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
Explanation: Looks good; everything works!
Binary Classification with Multidimensional Coefficients or Feature Importances
What about classification with estimators that have multidimensional coefficients? Thanks to ZJ Poh for identifying these in this PR.
End of explanation
classifiers = [
BernoulliNB(),
MultinomialNB(),
LogisticRegression(),
LogisticRegressionCV()
]
for classifier in classifiers:
oz = ROCAUC(classifier)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
g = oz.show()
oz = ROCAUC(LinearSVC())
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = ROCAUC(SVC())
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = ROCAUC(SGDClassifier())
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = ROCAUC(PassiveAggressiveClassifier())
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = ROCAUC(RidgeClassifier())
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = ROCAUC(RidgeClassifierCV())
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
Explanation: Some of these generate the IndexError: too many indices for array error, but not all!
These are the ones that seem to work: BernoulliNB(), MultinomialNB(), LogisticRegression(), and LogisticRegressionCV().
End of explanation
attrs = (
'predict_proba',
'decision_function',
)
failing_classifiers = [
LinearSVC(),
SVC(),
SGDClassifier(),
PassiveAggressiveClassifier(),
RidgeClassifier(),
RidgeClassifierCV()
]
def profile(classifiers):
for classifier in classifiers:
classifier.fit(X_train, y_train)
# Return the first resolved function
for attr in attrs:
try:
method = getattr(classifier, attr, None)
if method:
y_pred = method(X_test)
except AttributeError:
continue
print("y_pred shape for {} is {}.".format(
classifier.__class__.__name__, y_pred.shape)
)
print(y_pred)
profile(failing_classifiers)
Explanation: so what's going on here?
The Shape of y_pred
It looks like all of the classifiers that trigger the IndexError during binary classification with ROCAUC are ones that have only a decision_function and for which y_pred.shape is (n_samples,).
Classifiers that Raise the IndexError with Binary Classification & ROCAUC
LinearSVC()
SVC()
SGDClassifier()
PassiveAggressiveClassifier()
RidgeClassifier()
RidgeClassifierCV()
End of explanation
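One way to see (and work around) the failure mode described above is to branch on the dimensionality of the scores before calling roc_curve. This is only an illustrative sketch of a possible fix, not the patch that eventually landed in yellowbrick.
```python
# Illustrative sketch: handle 1-D decision_function scores and 2-D predict_proba output alike.
from sklearn.metrics import roc_curve, auc

def binary_roc(y_true, y_pred):
    if y_pred.ndim == 1:        # decision_function: shape (n_samples,)
        scores = y_pred
    else:                       # predict_proba: shape (n_samples, 2)
        scores = y_pred[:, 1]
    fpr, tpr, _ = roc_curve(y_true, scores)
    return fpr, tpr, auc(fpr, tpr)
```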
working_classifiers_decision = [
AdaBoostClassifier(),
QuadraticDiscriminantAnalysis(),
LogisticRegression(),
LogisticRegressionCV()
]
profile(working_classifiers_decision)
Explanation: Classifiers that Currently Work with Binary Classification & ROCAUC
The classifiers with decision functions and y_pred.shape (n_samples, ) that do work with ROCAUC for binary classification seem to work because they also have predict_proba:
End of explanation
working_classifiers_proba = [
MLPClassifier(),
DecisionTreeClassifier(),
RandomForestClassifier(),
BernoulliNB(),
MultinomialNB()
]
profile(working_classifiers_proba)
Explanation: Sklearn Documentation
AdaBoostClassifier()
QuadraticDiscriminantAnalysis()
LogisticRegression()
LogisticRegressionCV()
End of explanation |
9,822 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Large-scale multi-label text classification
Author
Step1: Perform exploratory data analysis
In this section, we first load the dataset into a pandas dataframe and then perform
some basic exploratory data analysis (EDA).
Step2: Our text features are present in the summaries column and their corresponding labels
are in terms. As you can notice, there are multiple categories associated with a
particular entry.
Step3: Real-world data is noisy. One of the most commonly observed sources of noise is data
duplication. Here we notice that our initial dataset has about 13k duplicate entries.
Step4: Before proceeding further, we drop these entries.
Step5: As observed above, out of 3,157 unique combinations of terms, 2,321 entries have the
lowest occurrence. To prepare our train, validation, and test sets with
stratification, we need to drop
these terms.
Step6: Convert the string labels to lists of strings
The initial labels are represented as raw strings. Here we make them List[str] for a
more compact representation.
Step7: Use stratified splits because of class imbalance
The dataset has a
class imbalance problem.
So, to have a fair evaluation result, we need to ensure the datasets are sampled with
stratification. To know more about different strategies to deal with the class imbalance
problem, you can follow
this tutorial.
For an end-to-end demonstration of classification with imbalanced data, refer to
Imbalanced classification
Step9: Multi-label binarization
Now we preprocess our labels using the
StringLookup
layer.
Step10: Here we are separating the individual unique classes available from the label
pool and then using this information to represent a given label set with 0's and 1's.
Below is an example.
Step11: Data preprocessing and tf.data.Dataset objects
We first get percentile estimates of the sequence lengths. The purpose will be clear in a
moment.
Step12: Notice that 50% of the abstracts have a length of 154 (you may get a different number
based on the split). So, any number close to that value is a good enough approximate for the
maximum sequence length.
Now, we implement utilities to prepare our datasets.
Step13: Now we can prepare the tf.data.Dataset objects.
Step14: Dataset preview
Step15: Vectorization
Before we feed the data to our model, we need to vectorize it (represent it in a numerical form).
For that purpose, we will use the
TextVectorization layer.
It can operate as a part of your main model so that the model is excluded from the core
preprocessing logic. This greatly reduces the chances of training / serving skew during inference.
We first calculate the number of unique words present in the abstracts.
Step16: We now create our vectorization layer and map() to the tf.data.Datasets created
earlier.
Step17: A batch of raw text will first go through the TextVectorization layer and it will
generate their integer representations. Internally, the TextVectorization layer will
first create bi-grams out of the sequences and then represent them using
TF-IDF. The output representations will then
be passed to the shallow model responsible for text classification.
To learn more about other possible configurations with TextVectorizer, please consult
the
official documentation.
Note
Step18: Train the model
We will train our model using the binary crossentropy loss. This is because the labels
are not disjoint. For a given abstract, we may have multiple categories. So, we will
divide the prediction task into a series of multiple binary classification problems. This
is also why we kept the activation function of the classification layer in our model to
sigmoid. Researchers have used other combinations of loss function and activation
function as well. For example, in
Exploring the Limits of Weakly Supervised Pretraining,
Mahajan et al. used the softmax activation function and cross-entropy loss to train
their models.
Step19: While training, we notice an initial sharp fall in the loss followed by a gradual decay.
Evaluate the model
Step20: The trained model gives us an evaluation accuracy of ~87%.
Inference
An important feature of the
preprocessing layers provided by Keras
is that they can be included inside a tf.keras.Model. We will export an inference model
by including the text_vectorization layer on top of shallow_mlp_model. This will
allow our inference model to directly operate on raw strings.
Note that during training it is always preferable to use these preprocessing
layers as a part of the data input pipeline rather than the model to avoid
surfacing bottlenecks for the hardware accelerators. This also allows for
asynchronous data processing. | Python Code:
from tensorflow.keras import layers
from tensorflow import keras
import tensorflow as tf
from sklearn.model_selection import train_test_split
from ast import literal_eval
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
Explanation: Large-scale multi-label text classification
Author: Sayak Paul, Soumik Rakshit<br>
Date created: 2020/09/25<br>
Last modified: 2020/12/23<br>
Description: Implementing a large-scale multi-label text classification model.
Introduction
In this example, we will build a multi-label text classifier to predict the subject areas
of arXiv papers from their abstract bodies. This type of classifier can be useful for
conference submission portals like OpenReview. Given a paper
abstract, the portal could provide suggestions for which areas the paper would
best belong to.
The dataset was collected using the
arXiv Python library
that provides a wrapper around the
original arXiv API.
To learn more about the data collection process, please refer to
this notebook.
Additionally, you can also find the dataset on
Kaggle.
Imports
End of explanation
arxiv_data = pd.read_csv(
"https://github.com/soumik12345/multi-label-text-classification/releases/download/v0.2/arxiv_data.csv"
)
arxiv_data.head()
Explanation: Perform exploratory data analysis
In this section, we first load the dataset into a pandas dataframe and then perform
some basic exploratory data analysis (EDA).
End of explanation
print(f"There are {len(arxiv_data)} rows in the dataset.")
Explanation: Our text features are present in the summaries column and their corresponding labels
are in terms. As you can notice, there are multiple categories associated with a
particular entry.
End of explanation
total_duplicate_titles = sum(arxiv_data["titles"].duplicated())
print(f"There are {total_duplicate_titles} duplicate titles.")
Explanation: Real-world data is noisy. One of the most commonly observed sources of noise is data
duplication. Here we notice that our initial dataset has about 13k duplicate entries.
End of explanation
arxiv_data = arxiv_data[~arxiv_data["titles"].duplicated()]
print(f"There are {len(arxiv_data)} rows in the deduplicated dataset.")
# There are some terms with occurrence as low as 1.
print(sum(arxiv_data["terms"].value_counts() == 1))
# How many unique terms?
print(arxiv_data["terms"].nunique())
Explanation: Before proceeding further, we drop these entries.
End of explanation
# Filtering the rare terms.
arxiv_data_filtered = arxiv_data.groupby("terms").filter(lambda x: len(x) > 1)
arxiv_data_filtered.shape
Explanation: As observed above, out of 3,157 unique combinations of terms, 2,321 entries have the
lowest occurrence. To prepare our train, validation, and test sets with
stratification, we need to drop
these terms.
End of explanation
arxiv_data_filtered["terms"] = arxiv_data_filtered["terms"].apply(
lambda x: literal_eval(x)
)
arxiv_data_filtered["terms"].values[:5]
Explanation: Convert the string labels to lists of strings
The initial labels are represented as raw strings. Here we make them List[str] for a
more compact representation.
End of explanation
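For intuition, literal_eval simply parses the stored string into a real Python list; a toy example with a made-up label string:
```python
# Toy illustration of the conversion above; the label string here is invented.
from ast import literal_eval
raw = "['cs.CV', 'cs.LG']"
parsed = literal_eval(raw)
print(type(raw).__name__, type(parsed).__name__, parsed)  # str list ['cs.CV', 'cs.LG']
```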
test_split = 0.1
# Initial train and test split.
train_df, test_df = train_test_split(
arxiv_data_filtered,
test_size=test_split,
stratify=arxiv_data_filtered["terms"].values,
)
# Splitting the test set further into validation
# and new test sets.
val_df = test_df.sample(frac=0.5)
test_df.drop(val_df.index, inplace=True)
print(f"Number of rows in training set: {len(train_df)}")
print(f"Number of rows in validation set: {len(val_df)}")
print(f"Number of rows in test set: {len(test_df)}")
Explanation: Use stratified splits because of class imbalance
The dataset has a
class imbalance problem.
So, to have a fair evaluation result, we need to ensure the datasets are sampled with
stratification. To know more about different strategies to deal with the class imbalance
problem, you can follow
this tutorial.
For an end-to-end demonstration of classification with imbalanced data, refer to
Imbalanced classification: credit card fraud detection.
End of explanation
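This is also why the rare label combinations were filtered out first: a stratified split cannot be formed when a class has a single member. A tiny, self-contained demonstration (unrelated to the arXiv data itself):
```python
# Minimal demo: stratification fails when a class has fewer than 2 members.
from sklearn.model_selection import train_test_split
try:
    train_test_split([1, 2, 3], test_size=0.5, stratify=["a", "a", "b"])  # "b" occurs once
except ValueError as err:
    print(err)
```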
terms = tf.ragged.constant(train_df["terms"].values)
lookup = tf.keras.layers.StringLookup(output_mode="multi_hot")
lookup.adapt(terms)
vocab = lookup.get_vocabulary()
def invert_multi_hot(encoded_labels):
Reverse a single multi-hot encoded label to a tuple of vocab terms.
hot_indices = np.argwhere(encoded_labels == 1.0)[..., 0]
return np.take(vocab, hot_indices)
print("Vocabulary:\n")
print(vocab)
Explanation: Multi-label binarization
Now we preprocess our labels using the
StringLookup
layer.
End of explanation
sample_label = train_df["terms"].iloc[0]
print(f"Original label: {sample_label}")
label_binarized = lookup([sample_label])
print(f"Label-binarized representation: {label_binarized}")
Explanation: Here we are separating the individual unique classes available from the label
pool and then using this information to represent a given label set with 0's and 1's.
Below is an example.
End of explanation
train_df["summaries"].apply(lambda x: len(x.split(" "))).describe()
Explanation: Data preprocessing and tf.data.Dataset objects
We first get percentile estimates of the sequence lengths. The purpose will be clear in a
moment.
End of explanation
max_seqlen = 150
batch_size = 128
padding_token = "<pad>"
auto = tf.data.AUTOTUNE
def make_dataset(dataframe, is_train=True):
labels = tf.ragged.constant(dataframe["terms"].values)
label_binarized = lookup(labels).numpy()
dataset = tf.data.Dataset.from_tensor_slices(
(dataframe["summaries"].values, label_binarized)
)
dataset = dataset.shuffle(batch_size * 10) if is_train else dataset
return dataset.batch(batch_size)
Explanation: Notice that 50% of the abstracts have a length of 154 (you may get a different number
based on the split). So, any number close to that value is a good enough approximation for the
maximum sequence length.
Now, we implement utilities to prepare our datasets.
End of explanation
train_dataset = make_dataset(train_df, is_train=True)
validation_dataset = make_dataset(val_df, is_train=False)
test_dataset = make_dataset(test_df, is_train=False)
Explanation: Now we can prepare the tf.data.Dataset objects.
End of explanation
text_batch, label_batch = next(iter(train_dataset))
for i, text in enumerate(text_batch[:5]):
label = label_batch[i].numpy()[None, ...]
print(f"Abstract: {text}")
print(f"Label(s): {invert_multi_hot(label[0])}")
print(" ")
Explanation: Dataset preview
End of explanation
# Source: https://stackoverflow.com/a/18937309/7636462
vocabulary = set()
train_df["summaries"].str.lower().str.split().apply(vocabulary.update)
vocabulary_size = len(vocabulary)
print(vocabulary_size)
Explanation: Vectorization
Before we feed the data to our model, we need to vectorize it (represent it in a numerical form).
For that purpose, we will use the
TextVectorization layer.
It can operate as a part of your main model so that the model is excluded from the core
preprocessing logic. This greatly reduces the chances of training / serving skew during inference.
We first calculate the number of unique words present in the abstracts.
End of explanation
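If you want to see what the layer produces before wiring it into the pipeline, here is a tiny stand-alone example on a made-up two-sentence corpus; the outputs are dense TF-IDF weights over unigrams and bigrams.
```python
# Stand-alone toy example of TextVectorization in "tf_idf" mode; the corpus is invented.
toy_corpus = tf.constant(["graph neural networks", "neural text classification"])
toy_vectorizer = layers.TextVectorization(max_tokens=20, ngrams=2, output_mode="tf_idf")
with tf.device("/CPU:0"):
    toy_vectorizer.adapt(toy_corpus)
print(toy_vectorizer(tf.constant(["neural networks"])))  # one dense TF-IDF vector per string
```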
text_vectorizer = layers.TextVectorization(
max_tokens=vocabulary_size, ngrams=2, output_mode="tf_idf"
)
# `TextVectorization` layer needs to be adapted as per the vocabulary from our
# training set.
with tf.device("/CPU:0"):
text_vectorizer.adapt(train_dataset.map(lambda text, label: text))
train_dataset = train_dataset.map(
lambda text, label: (text_vectorizer(text), label), num_parallel_calls=auto
).prefetch(auto)
validation_dataset = validation_dataset.map(
lambda text, label: (text_vectorizer(text), label), num_parallel_calls=auto
).prefetch(auto)
test_dataset = test_dataset.map(
lambda text, label: (text_vectorizer(text), label), num_parallel_calls=auto
).prefetch(auto)
Explanation: We now create our vectorization layer and map() to the tf.data.Datasets created
earlier.
End of explanation
def make_model():
shallow_mlp_model = keras.Sequential(
[
layers.Dense(512, activation="relu"),
layers.Dense(256, activation="relu"),
layers.Dense(lookup.vocabulary_size(), activation="sigmoid"),
] # More on why "sigmoid" has been used here in a moment.
)
return shallow_mlp_model
Explanation: A batch of raw text will first go through the TextVectorization layer and it will
generate their integer representations. Internally, the TextVectorization layer will
first create bi-grams out of the sequences and then represent them using
TF-IDF. The output representations will then
be passed to the shallow model responsible for text classification.
To learn more about other possible configurations with TextVectorizer, please consult
the
official documentation.
Note: Setting the max_tokens argument to a pre-calculated vocabulary size is
not a requirement.
Create a text classification model
We will keep our model simple -- it will be a small stack of fully-connected layers with
ReLU as the non-linearity.
End of explanation
epochs = 20
shallow_mlp_model = make_model()
shallow_mlp_model.compile(
loss="binary_crossentropy", optimizer="adam", metrics=["categorical_accuracy"]
)
history = shallow_mlp_model.fit(
train_dataset, validation_data=validation_dataset, epochs=epochs
)
def plot_result(item):
plt.plot(history.history[item], label=item)
plt.plot(history.history["val_" + item], label="val_" + item)
plt.xlabel("Epochs")
plt.ylabel(item)
plt.title("Train and Validation {} Over Epochs".format(item), fontsize=14)
plt.legend()
plt.grid()
plt.show()
plot_result("loss")
plot_result("categorical_accuracy")
Explanation: Train the model
We will train our model using the binary crossentropy loss. This is because the labels
are not disjoint. For a given abstract, we may have multiple categories. So, we will
divide the prediction task into a series of multiple binary classification problems. This
is also why we kept the activation function of the classification layer in our model to
sigmoid. Researchers have used other combinations of loss function and activation
function as well. For example, in
Exploring the Limits of Weakly Supervised Pretraining,
Mahajan et al. used the softmax activation function and cross-entropy loss to train
their models.
End of explanation
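To make the "series of binary classification problems" framing concrete, here is a small worked example with invented numbers: the loss is just the element-wise binary crossentropy averaged over the label dimension.
```python
# Invented numbers, only to illustrate multi-label binary crossentropy.
y_true = tf.constant([[1.0, 0.0, 1.0]])   # multi-hot target over 3 labels
y_prob = tf.constant([[0.9, 0.2, 0.6]])   # sigmoid outputs of the classification layer
bce = keras.losses.BinaryCrossentropy()
print(float(bce(y_true, y_prob)))         # mean of -log(0.9), -log(0.8) and -log(0.6), about 0.28
```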
_, categorical_acc = shallow_mlp_model.evaluate(test_dataset)
print(f"Categorical accuracy on the test set: {round(categorical_acc * 100, 2)}%.")
Explanation: While training, we notice an initial sharp fall in the loss followed by a gradual decay.
Evaluate the model
End of explanation
# Create a model for inference.
model_for_inference = keras.Sequential([text_vectorizer, shallow_mlp_model])
# Create a small dataset just for demoing inference.
inference_dataset = make_dataset(test_df.sample(100), is_train=False)
text_batch, label_batch = next(iter(inference_dataset))
predicted_probabilities = model_for_inference.predict(text_batch)
# Perform inference.
for i, text in enumerate(text_batch[:5]):
label = label_batch[i].numpy()[None, ...]
print(f"Abstract: {text}")
print(f"Label(s): {invert_multi_hot(label[0])}")
predicted_proba = [proba for proba in predicted_probabilities[i]]
top_3_labels = [
x
for _, x in sorted(
zip(predicted_probabilities[i], lookup.get_vocabulary()),
key=lambda pair: pair[0],
reverse=True,
)
][:3]
print(f"Predicted Label(s): ({', '.join([label for label in top_3_labels])})")
print(" ")
Explanation: The trained model gives us an evaluation accuracy of ~87%.
Inference
An important feature of the
preprocessing layers provided by Keras
is that they can be included inside a tf.keras.Model. We will export an inference model
by including the text_vectorization layer on top of shallow_mlp_model. This will
allow our inference model to directly operate on raw strings.
Note that during training it is always preferable to use these preprocessing
layers as a part of the data input pipeline rather than the model to avoid
surfacing bottlenecks for the hardware accelerators. This also allows for
asynchronous data processing.
End of explanation |
9,823 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Objective
Predict the survival of Titanic passengers using a K-Means algorithm.
Data Analysis
Data Import
Step1: Selection of Features
Step2: Cleaning Data
Step6: Experiment Heueristics (Design)
Evaluation Function Declarations
F1 score to be used to evaluate algorithm results.
Step7: Representation
Step8: Experiment
Step9: The K-Means split up the passengers into two groups 0 and 1 but it's not clear which of these represents Surivived and Non-Survived. The assumption is made that whichever group has the higher mean fare is the survival group. Depending on which group is the survival group the True Positives/False Positives calculations are slighty different.
Step10: Conclusions
The K-Means algorithm predicts ~78% correct results.
Performing the Above Experiment For Kaggle | Python Code:
import pandas
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans
from pprint import pprint
TITANIC_TRAIN = 'train.csv'
TITANIC_TEST = 'test.csv'
# t_df refers to titanic_dataframe
t_df = pandas.read_csv(TITANIC_TRAIN, header=0)
Explanation: Objective
Predict the survival of Titanic passengers using a K-Means algorithm.
Data Analysis
Data Import
End of explanation
t_df.drop(['Name', 'Ticket', 'Cabin', 'Embarked', 'Sex'], axis=1, inplace=True)
t_df.info()
t_df.head(1)
Explanation: Selection of Features
End of explanation
t_df.Age.fillna(np.mean(t_df.Age), inplace=True)
t_df.info()
Explanation: Cleaning Data
End of explanation
def precision(tp, fp):
Determine The Precision of Algorithm
return tp / (tp + fp)
def recall(tp, fn):
Determine The Recall of Algorithm
return tp / (tp + fn)
def f1_score(tp, fn, fp):
Return the F1 score of an algorithm
pre = precision(tp, fp)
rec = recall(tp, fn)
return (2 * ((pre * rec) / (pre + rec)))
Explanation: Experiment Heuristics (Design)
Evaluation Function Declarations
F1 score to be used to evaluate algorithm results.
End of explanation
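A quick sanity check of the helpers with made-up counts: 8 true positives, 2 false negatives and 2 false positives give precision = recall = F1 = 0.8.
```python
# Made-up counts, just to sanity-check the evaluation helpers defined above.
print(precision(tp=8, fp=2))        # 0.8
print(recall(tp=8, fn=2))           # 0.8
print(f1_score(tp=8, fn=2, fp=2))   # 0.8
```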
train, test = train_test_split(t_df, test_size = 0.2)
y = np.array(train['Survived'])
x = np.array(train[['Pclass', 'Age', 'SibSp', 'Parch', 'Fare']])
train_fares = []
for i in range(len(x)):
train_fares.append(x[i][-1])
Explanation: Representation
End of explanation
k = 2
kmeans = KMeans(n_clusters=k)
results = kmeans.fit_predict(x)
Explanation: Experiment
End of explanation
tp = 0
fp = 0
fn = 0
one_fare = []
zero_fare = []
for i in range(len(results)):
if results[i] == 1:
one_fare.append(train_fares[i])
elif results[i] == 0:
zero_fare.append(train_fares[i])
one_mean_fare = np.mean(one_fare)
print("Mean Fare of Group One: {}".format(one_mean_fare))
zero_mean_fare = np.mean(zero_fare)
print("Mean Fare of Group Zero: {}".format(zero_mean_fare))
if one_mean_fare > zero_mean_fare:
for i in range(len(results)):
diff = y[i] - results[i]
if diff == 1:
fp += 1
elif diff == 0:
tp += 1
else:
fn += 1
else:
for i in range(len(results)):
diff = y[i] - results[i]
if diff == 1:
fn += 1
elif diff == 0:
tp += 1
else:
fp += 1
print("True Positives: " + str(tp))
print("False Positives: " + str(fp))
print("False Negative: " + str(fn))
f1 = f1_score(tp, fn, fp)
print("F1 Score: " + str(f1))
Explanation: The K-Means algorithm split the passengers into two groups, 0 and 1, but it's not clear which of these represents Survived and Non-Survived. The assumption is made that whichever group has the higher mean fare is the survival group. Depending on which group is the survival group, the True Positive/False Positive calculations are slightly different.
End of explanation
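The fare heuristic is one way to decide which cluster means "survived". An alternative sketch (an addition, not from the original notebook) aligns each cluster with the majority ground-truth label inside it, which avoids the branching above.
```python
# Alternative sketch: map each cluster to the majority Survived value within it.
import numpy as np

cluster_to_label = {c: int(round(np.mean(y[results == c]))) for c in np.unique(results)}
predicted = np.array([cluster_to_label[c] for c in results])
print("Accuracy:", np.mean(predicted == y))
```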
test_df = pandas.read_csv(TITANIC_TEST, header=0)
test_df.drop(['Name', 'Ticket', 'Cabin', 'Embarked', 'Sex'], axis=1, inplace=True)
test_df.Age.fillna(np.mean(test_df.Age), inplace=True)
test_df.Fare.fillna(np.mean(test_df.Fare), inplace=True)
x = np.array(test_df[['Pclass', 'Age', 'SibSp', 'Parch', 'Fare']])
kmeans = KMeans(n_clusters=k)
results = kmeans.fit_predict(x)
s1 = pandas.Series(np.array(test_df.PassengerId), name='PassengerId')
s2 = pandas.Series(results, name='Survived')
kaggle_result = pandas.concat([s1,s2], axis=1)
kaggle_result.to_csv('titanic_day2.csv', index=False)
Explanation: Conclusions
The K-Means algorithm predicts ~78% correct results.
Performing the Above Experiment For Kaggle
End of explanation |
9,824 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
Step1: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
Step2: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
Step3: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
Step4: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
Step5: Network Inputs
Here, just creating some placeholders like normal.
Step6: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the archicture used in the original DCGAN paper
Step7: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've build before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Note
Step9: Model Loss
Calculating the loss like before, nothing new here.
Step11: Optimizers
Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.
Step12: Building the model
Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
Step13: Here is a function for displaying generated images.
Step14: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an errror without it because of the tf.control_dependencies block we created in model_opt.
Step15: Hyperparameters
GANs are very senstive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
Exercise | Python Code:
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
Explanation: Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in the 2015 paper by Radford et al. and has seen impressive results in generating new images; you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
Explanation: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
End of explanation
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
End of explanation
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
End of explanation
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
idx = np.arange(len(dataset.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), self.scaler(y)
Explanation: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
Explanation: Network Inputs
Here, just creating some placeholders like normal.
End of explanation
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x
# Output layer, 32x32x3
logits =
out = tf.tanh(logits)
return out
Explanation: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:
Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.
Exercise: Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one.
End of explanation
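One possible solution to the exercise is sketched below; the 512→256→128 filter depths are a design choice, not a requirement, and the layout simply follows the transposed convolution > batch norm > leaky ReLU scheme described above.
```python
# One possible completion of the generator exercise (depths are a design choice).
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
    with tf.variable_scope('generator', reuse=reuse):
        # First fully connected layer, reshaped to a deep and narrow 4x4x512 block
        x1 = tf.layers.dense(z, 4 * 4 * 512)
        x1 = tf.reshape(x1, (-1, 4, 4, 512))
        x1 = tf.layers.batch_normalization(x1, training=training)
        x1 = tf.maximum(alpha * x1, x1)
        # 4x4x512 -> 8x8x256
        x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')
        x2 = tf.layers.batch_normalization(x2, training=training)
        x2 = tf.maximum(alpha * x2, x2)
        # 8x8x256 -> 16x16x128
        x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')
        x3 = tf.layers.batch_normalization(x3, training=training)
        x3 = tf.maximum(alpha * x3, x3)
        # Output layer: 16x16x128 -> 32x32x3, no batch norm, tanh activation
        logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
        out = tf.tanh(logits)
        return out
```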
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
x =
logits =
out =
return out, logits
Explanation: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The inputs to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately.
Exercise: Build the convolutional network for the discriminator. The input is a 32x32x3 image, the output is a sigmoid plus the logits. Again, use Leaky ReLU activations and batch normalization on all the layers except the first.
End of explanation
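Correspondingly, one possible discriminator for the exercise (again only a sketch; the 64→128→256 filter progression is a choice):
```python
# One possible completion of the discriminator exercise (filter depths are a choice).
def discriminator(x, reuse=False, alpha=0.2):
    with tf.variable_scope('discriminator', reuse=reuse):
        # Input layer 32x32x3 -> 16x16x64; no batch norm on the first layer
        x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')
        x1 = tf.maximum(alpha * x1, x1)
        # 16x16x64 -> 8x8x128
        x2 = tf.layers.conv2d(x1, 128, 5, strides=2, padding='same')
        x2 = tf.layers.batch_normalization(x2, training=True)
        x2 = tf.maximum(alpha * x2, x2)
        # 8x8x128 -> 4x4x256
        x3 = tf.layers.conv2d(x2, 256, 5, strides=2, padding='same')
        x3 = tf.layers.batch_normalization(x3, training=True)
        x3 = tf.maximum(alpha * x3, x3)
        # Flatten, then a single logit and its sigmoid
        flat = tf.reshape(x3, (-1, 4 * 4 * 256))
        logits = tf.layers.dense(flat, 1)
        out = tf.sigmoid(logits)
        return out, logits
```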
def model_loss(input_real, input_z, output_dim, alpha=0.2):
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
Explanation: Model Loss
Calculating the loss like before, nothing new here.
End of explanation
def model_opt(d_loss, g_loss, learning_rate, beta1):
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
Explanation: Optimizers
Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.
End of explanation
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=0.2)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, 0.5)
Explanation: Building the model
Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
End of explanation
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
Explanation: Here is a function for displaying generated images.
End of explanation
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
# At the end of each epoch, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
Explanation: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an errror without it because of the tf.control_dependencies block we created in model_opt.
End of explanation
real_size = (32,32,3)
z_size = 100
learning_rate = 0.001
batch_size = 64
epochs = 1
alpha = 0.01
beta1 = 0.9
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
# Load the data and train the network here
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
Explanation: Hyperparameters
GANs are very senstive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
Exercise: Find hyperparameters to train this GAN. The values found in the DCGAN paper work well, or you can experiment on your own. In general, you want the discriminator loss to be around 0.3, this means it is correctly classifying images as fake or real about 50% of the time.
End of explanation |
9,825 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Einops tutorial, part 1
Step1: Load a batch of images to play with
Step2: Composition of axes
transposition is very common and useful, but let's move to other capabilities provided by einops
Step3: Decomposition of axis
Step4: Order of axes matters
Step5: Meet einops.reduce
In einops-land you don't need to guess what happened
python
x.mean(-1)
Because you write what the operation does
python
reduce(x, 'b h w c -> b h w', 'mean')
if axis is not present in the output — you guessed it — axis was reduced.
Step6: Stack and concatenate
Step7: Addition or removal of axes
You can write 1 to create a new axis of length 1. Similarly you can remove such axis.
There is also a synonym () that you can use. That's a composition of zero axes and it also has a unit length.
Step8: Repeating elements
Third operation we introduce is repeat
Step9: Note
Step10: Fancy examples in random order
(a.k.a. mad designer gallery) | Python Code:
# Examples are given for numpy. This code also sets up ipython/jupyter
# so that numpy arrays in the output are displayed as images
import numpy
from utils import display_np_arrays_as_images
display_np_arrays_as_images()
Explanation: Einops tutorial, part 1: basics
<!-- <img src='http://arogozhnikov.github.io/images/einops/einops_logo_350x350.png' height="80" /> -->
Welcome to einops-land!
We don't write
python
y = x.transpose(0, 2, 3, 1)
We write comprehensible code
python
y = rearrange(x, 'b c h w -> b h w c')
einops supports widely used tensor packages (such as numpy, pytorch, chainer, gluon, tensorflow), and extends them.
What's in this tutorial?
fundamentals: reordering, composition and decomposition of axes
operations: rearrange, reduce, repeat
how much you can do with a single operation!
Preparations
End of explanation
ims = numpy.load('./resources/test_images.npy', allow_pickle=False)
# There are 6 images of shape 96x96 with 3 color channels packed into tensor
print(ims.shape, ims.dtype)
# display the first image (whole 4d tensor can't be rendered)
ims[0]
# second image in a batch
ims[1]
# we'll use three operations
from einops import rearrange, reduce, repeat
# rearrange, as its name suggests, rearranges elements
# below we swapped height and width.
# In other words, transposed first two axes (dimensions)
rearrange(ims[0], 'h w c -> w h c')
Explanation: Load a batch of images to play with
End of explanation
# einops allows seamlessly composing batch and height to a new height dimension
# We just rendered all images by collapsing to 3d tensor!
rearrange(ims, 'b h w c -> (b h) w c')
# or compose a new dimension of batch and width
rearrange(ims, 'b h w c -> h (b w) c')
# resulting dimensions are computed very simply
# length of newly composed axis is a product of components
# [6, 96, 96, 3] -> [96, (6 * 96), 3]
rearrange(ims, 'b h w c -> h (b w) c').shape
# we can compose more than two axes.
# let's flatten 4d array into 1d, resulting array has as many elements as the original
rearrange(ims, 'b h w c -> (b h w c)').shape
Explanation: Composition of axes
transposition is very common and useful, but let's move to other capabilities provided by einops
End of explanation
# decomposition is the inverse process - represent an axis as a combination of new axes
# several decompositions possible, so b1=2 is to decompose 6 to b1=2 and b2=3
rearrange(ims, '(b1 b2) h w c -> b1 b2 h w c ', b1=2).shape
# finally, combine composition and decomposition:
rearrange(ims, '(b1 b2) h w c -> (b1 h) (b2 w) c ', b1=2)
# slightly different composition: b1 is merged with width, b2 with height
# ... so letters are ordered by w then by h
rearrange(ims, '(b1 b2) h w c -> (b2 h) (b1 w) c ', b1=2)
# move part of width dimension to height.
# we should call this width-to-height as image width shrunk by 2 and height doubled.
# but all pixels are the same!
# Can you write reverse operation (height-to-width)?
rearrange(ims, 'b h (w w2) c -> (h w2) (b w) c', w2=2)
Explanation: Decomposition of axis
End of explanation
# compare with the next example
rearrange(ims, 'b h w c -> h (b w) c')
# order of axes in composition is different
# rule is just as for digits in the number: leftmost digit is the most significant,
# while neighboring numbers differ in the rightmost axis.
# you can also think of this as lexicographic sort
rearrange(ims, 'b h w c -> h (w b) c')
# what if b1 and b2 are reordered before composing to width?
rearrange(ims, '(b1 b2) h w c -> h (b1 b2 w) c ', b1=2) # produces 'einops'
rearrange(ims, '(b1 b2) h w c -> h (b2 b1 w) c ', b1=2) # produces 'eoipns'
Explanation: Order of axes matters
End of explanation
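For illustration, the same ordering rule can be checked on a tiny 1d array of the numbers 0..5 (a small aside using plain numpy):
```python
import numpy
from einops import rearrange

x = numpy.arange(6)
# the leftmost axis in a composition is the most significant one
print(rearrange(x, '(a b) -> a b', a=2))  # rows [0 1 2] and [3 4 5]
print(rearrange(x, '(b a) -> a b', a=2))  # rows [0 2 4] and [1 3 5]
```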
# average over batch
reduce(ims, 'b h w c -> h w c', 'mean')
# the previous is identical to familiar:
ims.mean(axis=0)
# but is so much more readable
# Example of reducing of several axes
# besides mean, there are also min, max, sum, prod
reduce(ims, 'b h w c -> h w', 'min')
# this is mean-pooling with 2x2 kernel
# image is split into 2x2 patches, each patch is averaged
reduce(ims, 'b (h h2) (w w2) c -> h (b w) c', 'mean', h2=2, w2=2)
# max-pooling is similar
# result is not as smooth as for mean-pooling
reduce(ims, 'b (h h2) (w w2) c -> h (b w) c', 'max', h2=2, w2=2)
# yet another example. Can you compute result shape?
reduce(ims, '(b1 b2) h w c -> (b2 h) (b1 w)', 'mean', b1=2)
Explanation: Meet einops.reduce
In einops-land you don't need to guess what happened
python
x.mean(-1)
Because you write what the operation does
python
reduce(x, 'b h w c -> b h w', 'mean')
if axis is not present in the output — you guessed it — axis was reduced.
End of explanation
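As a quick numeric check of that statement, the einops pattern and the plain numpy reduction agree on a random 4d array (shape chosen to match the images used here):
```python
import numpy
from einops import reduce

x = numpy.random.rand(6, 96, 96, 3)  # b h w c
assert numpy.allclose(reduce(x, 'b h w c -> b h w', 'mean'), x.mean(-1))
```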
# rearrange can also take care of lists of arrays with the same shape
x = list(ims)
print(type(x), 'with', len(x), 'tensors of shape', x[0].shape)
# that's how we can stack inputs
# "list axis" becomes first ("b" in this case), and we left it there
rearrange(x, 'b h w c -> b h w c').shape
# but new axis can appear in the other place:
rearrange(x, 'b h w c -> h w c b').shape
# that's equivalent to numpy stacking, but written more explicitly
numpy.array_equal(rearrange(x, 'b h w c -> h w c b'), numpy.stack(x, axis=3))
# ... or we can concatenate along axes
rearrange(x, 'b h w c -> h (b w) c').shape
# which is equivalent to concatenation
numpy.array_equal(rearrange(x, 'b h w c -> h (b w) c'), numpy.concatenate(x, axis=1))
Explanation: Stack and concatenate
End of explanation
x = rearrange(ims, 'b h w c -> b 1 h w 1 c') # functionality of numpy.expand_dims
print(x.shape)
print(rearrange(x, 'b 1 h w 1 c -> b h w c').shape) # functionality of numpy.squeeze
# compute max in each image individually, then show a difference
x = reduce(ims, 'b h w c -> b () () c', 'max') - ims
rearrange(x, 'b h w c -> h (b w) c')
Explanation: Addition or removal of axes
You can write 1 to create a new axis of length 1. Similarly you can remove such an axis.
There is also a synonym () that you can use. That's a composition of zero axes and it also has a unit length.
End of explanation
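For completeness, the unit axes 1 and () reproduce what numpy.expand_dims does, which is easy to confirm on a small array:
```python
import numpy
from einops import rearrange

x = numpy.random.rand(3, 4)
assert numpy.array_equal(rearrange(x, 'h w -> 1 h w'), numpy.expand_dims(x, 0))
assert numpy.array_equal(rearrange(x, 'h w -> () h w'), numpy.expand_dims(x, 0))
assert numpy.array_equal(rearrange(x, 'h w -> h w 1'), x[:, :, numpy.newaxis])
```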
# repeat along a new axis. New axis can be placed anywhere
repeat(ims[0], 'h w c -> h new_axis w c', new_axis=5).shape
# shortcut
repeat(ims[0], 'h w c -> h 5 w c').shape
# repeat along w (existing axis)
repeat(ims[0], 'h w c -> h (repeat w) c', repeat=3)
# repeat along two existing axes
repeat(ims[0], 'h w c -> (2 h) (2 w) c')
# order of axes matters as usual - you can repeat each element (pixel) 3 times
# by changing order in parenthesis
repeat(ims[0], 'h w c -> h (w repeat) c', repeat=3)
Explanation: Repeating elements
Third operation we introduce is repeat
End of explanation
repeated = repeat(ims, 'b h w c -> b h new_axis w c', new_axis=2)
reduced = reduce(repeated, 'b h new_axis w c -> b h w c', 'min')
assert numpy.array_equal(ims, reduced)
Explanation: Note: the repeat operation covers functionality identical to numpy.repeat and numpy.tile, and actually more than that.
Reduce ⇆ repeat
reduce and repeat are like opposites of each other: the first one reduces the number of elements, the second one increases it.
In the following example each image is repeated first, then we reduce over the new axis to get back the original tensor. Notice that the operation patterns are the "reverse" of each other
End of explanation
# interweaving pixels of different pictures
# all letters are observable
rearrange(ims, '(b1 b2) h w c -> (h b1) (w b2) c ', b1=2)
# interweaving along vertical for couples of images
rearrange(ims, '(b1 b2) h w c -> (h b1) (b2 w) c', b1=2)
# interweaving lines for couples of images
# exercise: achieve the same result without einops in your favourite framework
reduce(ims, '(b1 b2) h w c -> h (b2 w) c', 'max', b1=2)
# color can be also composed into dimension
# ... while image is downsampled
reduce(ims, 'b (h 2) (w 2) c -> (c h) (b w)', 'mean')
# disproportionate resize
reduce(ims, 'b (h 4) (w 3) c -> (h) (b w)', 'mean')
# split each image in two halves, compute mean of the two
reduce(ims, 'b (h1 h2) w c -> h2 (b w)', 'mean', h1=2)
# split in small patches and transpose each patch
rearrange(ims, 'b (h1 h2) (w1 w2) c -> (h1 w2) (b w1 h2) c', h2=8, w2=8)
# stop me someone!
rearrange(ims, 'b (h1 h2 h3) (w1 w2 w3) c -> (h1 w2 h3) (b w1 h2 w3) c', h2=2, w2=2, w3=2, h3=2)
rearrange(ims, '(b1 b2) (h1 h2) (w1 w2) c -> (h1 b1 h2) (w1 b2 w2) c', h1=3, w1=3, b2=3)
# patterns can be arbitrarily complicated
reduce(ims, '(b1 b2) (h1 h2 h3) (w1 w2 w3) c -> (h1 w1 h3) (b1 w2 h2 w3 b2) c', 'mean',
h2=2, w1=2, w3=2, h3=2, b2=2)
# subtract background in each image individually and normalize
# pay attention to () - this is composition of 0 axis, a dummy axis with 1 element.
im2 = reduce(ims, 'b h w c -> b () () c', 'max') - ims
im2 /= reduce(im2, 'b h w c -> b () () c', 'max')
rearrange(im2, 'b h w c -> h (b w) c')
# pixelate: first downscale by averaging, then upscale back using the same pattern
averaged = reduce(ims, 'b (h h2) (w w2) c -> b h w c', 'mean', h2=6, w2=8)
repeat(averaged, 'b h w c -> (h h2) (b w w2) c', h2=6, w2=8)
rearrange(ims, 'b h w c -> w (b h) c')
# let's bring color dimension as part of horizontal axis
# at the same time horizontal axis is downsampled by 2x
reduce(ims, 'b (h h2) (w w2) c -> (h w2) (b w c)', 'mean', h2=3, w2=3)
Explanation: Fancy examples in random order
(a.k.a. mad designer gallery)
End of explanation |
9,826 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ZIKA CLASSIFICATION MODEL
IMPORTS
Step1: LOAD DATA
Step2: ALGORITHMS
Step3: TRAIN MODELS
Step4: OPTIMIZE N PRINCIPAL COMPONENTS
Step5: SAVE MODELS
Step6: RUN MODEL | Python Code:
# Algorithms
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
# Metrics
from sklearn.metrics import confusion_matrix, roc_curve, auc, accuracy_score
from sklearn.metrics import classification_report, precision_recall_curve
# Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline, make_union
from sklearn.decomposition import TruncatedSVD
from modules.transformers import *
# Visuals
# from modules.custom_plot import plot_confusion_matrix
import matplotlib.pyplot as plt
% matplotlib inline
# Miscellaneous
from sklearn.cross_validation import train_test_split
from glob import glob
import pandas as pd
import numpy as np
import pickle
import re
Explanation: ZIKA CLASSIFICATION MODEL
IMPORTS
End of explanation
df = pd.read_csv('data/161207_ZikaLabels.csv')
df.dropna(axis=0,inplace=True) #drop NaNs or else NaNs would confuse the algorithms
X = df.diagnosisRAW
y = df.zika
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2,random_state=42)
class_mapping = {label:indx for indx,label in enumerate(np.unique(df.zika))}
encoded_y_test = y_test.map(class_mapping)
N = float(len(encoded_y_test[encoded_y_test==0]))
P = float(len(encoded_y_test[encoded_y_test==1]))
baseline_PR = P/(P+N)
Explanation: LOAD DATA
End of explanation
algorithms = {}
algorithms['Gradient_Boost'] = GradientBoostingClassifier(random_state=42)
algorithms['Logistic_Regression'] = LogisticRegression(random_state=42)
algorithms['Random_Forest'] = RandomForestClassifier(random_state=42)
algorithms['Gauss_Naive_Bayes'] = GaussianNB()
Explanation: ALGORITHMS
End of explanation
# Latent Semantic Analysis (LSA)
lsa = make_pipeline(TfidfVectorizer(),TruncatedSVD(n_components=500))
# n_components = 100-500 is the general range for LSA applications and depends on size of corpus
# https://medium.com/@adi_enasoaie/easy-lsi-pipeline-using-scikit-learn-a073f2484408#.j50q4rwnz
# Feature Extractions
feature_union = make_union(lsa,ZikaCounterTransformer(), SentimentTransformer())
for name,algorithm in algorithms.items():
# Data Pipeline
pipeline = make_pipeline(AsciiTransformer(),
LowerCaseTransformer(),
RemoveSymsTransformer(),
RemoveStopWordsTransformer(),
feature_union,
algorithm)
# Train Model
model = pipeline.fit(X_train,y_train)
# Make Predictions
y_pred = model.predict(X_test)
y_pred_probs = model.predict_proba(X_test)
# Metrics (for model evaluation)
cnf_matrix = confusion_matrix(y_test, y_pred)
score = accuracy_score(y_test,y_pred) # accuracy = (correct preds)/(num samples) = (TP+TN)/(TP+TN+FP+FN)
precision,recall,threshold_PR = precision_recall_curve(encoded_y_test,y_pred_probs[:,1],pos_label=1)
fpr,tpr,threshold_ROC = roc_curve(encoded_y_test,y_pred_probs[:,1],pos_label=1)
AUC = auc(fpr,tpr)
print '#'*90
print '\t MODEL:{} \t ACCURACY:{} \t AUC:{}'.format(name, score, AUC)
print '#'*90
print classification_report(y_test,y_pred)
# Plot Figures
fig,axs = plt.subplots(nrows=1,ncols=3)
fig.set_figwidth(15)
fig.set_figheight(5)
# PR Curve
ax = axs[0]
ax.plot(recall,precision)
ax.plot(np.linspace(0,1,len(recall)),[baseline_PR]*len(recall),'--r')
ax.set_title('Precision-Recall ({})'.format(name))
ax.set_xlabel('Recall')
ax.set_ylabel('Precision')
ax.legend(['PR Curve','PR Baseline'])
ax.grid(True)
# ROC Curve
ax = axs[1]
ax.plot(fpr,tpr)
ax.plot(np.linspace(0,1,len(fpr)),np.linspace(0,1,len(fpr)),'--r')
ax.set_title('ROC Curve ({})'.format(name))
ax.set_xlabel('False Positive Rate')
ax.set_ylabel('True Positive Rate')
ax.legend(['ROC Curve','ROC Baseline'])
ax.grid(True)
# Plot Confusion Matrix
ax = axs[2]
class_names = sorted(df.zika.unique())
plot_confusion_matrix(cnf_matrix, classes=class_names,title='Confusion Matrix ({})'.format(name) )
plt.show()
print
Explanation: TRAIN MODELS
End of explanation
# Initiate Figures
fig,axs = plt.subplots(nrows=2,ncols=2)
fig.set_figwidth(15)
fig.set_figheight(15)
labels = [] # model names
for name,algorithm in algorithms.items():
labels.append(name)
scores = [] # accuracy score = (TP+TN)/(TP+TN+FP+FN)
aucs = [] # area under ROC curve
tprs = [] # true positive rate = sensitivity = TP/P = TP/(TP+FN)
tnrs = [] # true negative rate = specificity = TN/N = TN/(TN+FP)
n_components = [] # number of principal components
for n in range(10,501,10):
# Data Pipeline
lsa = make_pipeline(TfidfVectorizer(),TruncatedSVD(n_components=n))
feature_union = make_union(lsa,ZikaCounterTransformer(), SentimentTransformer())
pipeline = make_pipeline(AsciiTransformer(),
LowerCaseTransformer(),
RemoveSymsTransformer(),
RemoveStopWordsTransformer(),
feature_union,
algorithm)
# Train Model
model = pipeline.fit(X_train,y_train)
# Make Predictions
y_pred = model.predict(X_test)
y_pred_probs = model.predict_proba(X_test)
# Metrics (for model evaluation)
score = accuracy_score(y_test,y_pred) # accuracy = (correct preds)/(num samples) = (TP+TN)/(TP+TN+FP+FN)
precision,recall,threshold_PR = precision_recall_curve(encoded_y_test,y_pred_probs[:,1],pos_label=1)
fpr,tpr,threshold_ROC = roc_curve(encoded_y_test,y_pred_probs[:,1],pos_label=1)
AUC = auc(fpr,tpr)
cnf_matrix = confusion_matrix(y_test, y_pred)
TP = float(cnf_matrix[1][1])
FN = float(cnf_matrix[1][0])
TN = float(cnf_matrix[0][0])
FP = float(cnf_matrix[0][1])
TPR = TP/(TP+FN)
TNR = TN/(TN+FP)
# Save Data
scores.append(score)
aucs.append(AUC)
tprs.append(TPR)
tnrs.append(TNR)
n_components.append(n)
ax = axs[0,0]
ax.plot(n_components,scores,'--o')
ax = axs[0,1]
ax.plot(n_components,aucs,'--o')
ax = axs[1,0]
ax.plot(n_components,tprs,'--o')
ax = axs[1,1]
ax.plot(n_components,tnrs,'--o')
# Scores VS n_components
ax = axs[0,0]
ax.set_title('Accuracy Score')
ax.set_xlabel('n_component')
ax.set_ylabel('Score')
ax.legend(labels, loc='best')
ax.grid(True)
# AUCs VS n_components
ax = axs[0,1]
ax.plot(n_components,aucs)
ax.set_title('Area Under Curve')
ax.set_xlabel('n_component')
ax.set_ylabel('AUC')
ax.legend(labels, loc='best')
ax.grid(True)
# True Positive Rate VS n_components
ax = axs[1,0]
ax.plot(n_components,tprs)
ax.set_title('True Positive Rate')
ax.set_xlabel('n_component')
ax.set_ylabel('TPR')
ax.legend(labels, loc='best')
ax.grid(True)
# True Negative Rate VS n_components
ax = axs[1,1]
ax.plot(n_components,tnrs)
ax.set_title('True Negative Rate')
ax.set_xlabel('n_component')
ax.set_ylabel('TNR')
ax.legend(labels, loc='best')
ax.grid(True)
Explanation: OPTIMIZE N PRINCIPAL COMPONENTS
End of explanation
for name,algorithm in algorithms.items():
lsa = make_pipeline(TfidfVectorizer(),TruncatedSVD(n_components=500))
feature_union = make_union(lsa,ZikaCounterTransformer(), SentimentTransformer())
pipeline = make_pipeline(AsciiTransformer(),
LowerCaseTransformer(),
RemoveSymsTransformer(),
RemoveStopWordsTransformer(),
feature_union,
algorithm)
model = pipeline.fit(X_train,y_train)
# with open('models/MODEL_{}.plk'.format(name),'wb') as f:
# pickle.dump(model,f)
Explanation: SAVE MODELS
End of explanation
def run_model(model,text):
print 'Text input: \"{}\"'.format(text)
print 'Prediction: {}'.format(model.predict(text)[0])
print
# print 'Probability the model thinks you do NOT have the Zika Virus (FALSE): {}'.format(model.predict_proba(text)[0][0],4)
# print 'Probability the model thinks you DO have the Zika Virus (TRUE): {}'.format(model.predict_proba(text)[0][1],4)
sent1 = 'Eu tenho o vírus zika' # I have the zika virus
sent2 = 'Eu nao tenho o virus zika' # I do not have the zika virus
sent3 = 'Tenho febre, erupções cutâneas, dores nas articulações e olhos vermelhos.' # I have a fever, rash, joint pain, and red eyes.
sent4 = 'Estou completamente saudável' # I am completely healthy
sentences = [sent1,sent2,sent3,sent4]
file_names = glob('models/MODEL_*.plk')
for name in file_names:
with open(name,'rb') as f:
model = pickle.load(f)
print '#'*90
print '\t \t \t {}'.format(name)
print '#'*90
for sent in sentences:
run_model(model,sent)
Explanation: RUN MODEL
End of explanation |
9,827 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute source power spectral density (PSD) in a label
Returns an STC file containing the PSD (in dB) of each of the sources
within a label.
Step1: Set parameters
Step2: View PSD of sources in label | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, compute_source_psd
print(__doc__)
Explanation: Compute source power spectral density (PSD) in a label
Returns an STC file containing the PSD (in dB) of each of the sources
within a label.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_label = data_path + '/MEG/sample/labels/Aud-lh.label'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, verbose=False)
events = mne.find_events(raw, stim_channel='STI 014')
inverse_operator = read_inverse_operator(fname_inv)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, exclude='bads')
tmin, tmax = 0, 120 # use the first 120s of data
fmin, fmax = 4, 100 # look at frequencies between 4 and 100Hz
n_fft = 2048 # the FFT size (n_fft). Ideally a power of 2
label = mne.read_label(fname_label)
stc = compute_source_psd(raw, inverse_operator, lambda2=1. / 9., method="dSPM",
tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax,
pick_ori="normal", n_fft=n_fft, label=label,
dB=True)
stc.save('psd_dSPM')
Explanation: Set parameters
End of explanation
plt.plot(stc.times, stc.data.T)
plt.xlabel('Frequency (Hz)')
plt.ylabel('PSD (dB)')
plt.title('Source Power Spectrum (PSD)')
plt.show()
Explanation: View PSD of sources in label
End of explanation |
9,828 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Minimal Waveguide Example
In this example we show how to calculate the stationary paraxial field for a slab and a cylindrical waveguide using PyPropagate.
Step1: Setting up the propagators
We begin by creating the settings for the propagator. We set a simulation box of size $1 \mu \text{m} \times 1 \mu \text{m} \times 0.5 \text{mm}$ and $1000 \cdot 1000 \cdot 1000$ voxels and set initial and boundary conditions for a monochromatic plane wave with $12 \text{keV}$ photon energy.
Note that we could also set the $x_\text{min}$, $x_\text{max}$, $N_x$, ... boundaries individually.
Step2: Our waveguide will consist of a vacuum core with a germanium cladding. We can automatically look up the refractive index for a compound material by providing its chemical formula.
Step3: For a slab waveguide we will use a one dimensional and for a circular waveguide a two dimensional propagator. Since the one dimensional propagator assumes the values for $y = 0$ are valid everywhere we can define both waveguides with one formula. As the waveguide radius we choose $25\text{nm}$.
Step4: Slab Waveguide
We now calculate the stationary solution for the slab waveguide using the one dimensional finite differences propagator.
Step5: Cylindrical Waveguide
With the same settings we can calculate the solution for a cylindrical waveguide by using a cylindrically symmetrical finite differences propagator. | Python Code:
from pypropagate import *
%matplotlib inline
Explanation: Minimal Waveguide Example
In this example we show how to calculate the stationary paraxial field for a slab and a cylindrical waveguide using PyPropagate.
End of explanation
settings = presets.settings.create_paraxial_wave_equation_settings()
settings.simulation_box.set((0.25*units.um,0.25*units.um,0.25*units.mm),(1000,1000,1000))
presets.boundaries.set_plane_wave_initial_conditions(settings)
settings.wave_equation.set_energy(12*units.keV)
Explanation: Setting up the propagators
We begin by creating the settings for the propagator. We set a simulation box of size $1 \mu \text{m} \times 1 \mu \text{m} \times 0.5 \text{mm}$ and $1000 \cdot 1000 \cdot 1000$ voxels and set initial and boundary conditions for a monochromatic plane wave with $12 \text{keV}$ photon energy.
Note that we could also set the $x_\text{min}$, $x_\text{max}$, $N_x$, ... boundaries individually.
End of explanation
nVa = 1
nGe = presets.medium.create_material('Ge',settings)
Explanation: Our waveguide will consist of a vacuum core with a germanium cladding. We can automatically look up the refractive index for a compound material by providing its chemical formula.
End of explanation
s = settings.symbols
waveguide_radius = 25*units.nm
settings.wave_equation.n = pc.piecewise((nVa,pc.sqrt(s.x**2+s.y**2) <= waveguide_radius),(nGe,True))
settings.get_numeric(s.n)
Explanation: For a slab waveguide we will use a one dimensional and for a circular waveguide a two dimensional propagator. Since the one dimensional propagator assumes the values for $y = 0$ are valid everywhere we can define both waveguides with one formula. As the waveguide radius we choose $25\text{nm}$.
End of explanation
propagator = propagators.FiniteDifferences2D(settings)
field = propagator.run_slice()[-2*waveguide_radius:2*waveguide_radius]
plot(field,figsize = (13,5));
Explanation: Slab Waveguide
We now calculate the stationary solution for the slab waveguide using the one dimensional finite differences propagator.
End of explanation
propagator = propagators.FiniteDifferencesCS(settings)
field = propagator.run_slice()[-2*waveguide_radius:2*waveguide_radius]
plot(field,figsize = (13,5));
Explanation: Cylindrical Waveguide
With the same settings we can calculate the solution for a cylindrical waveguide by using a cylindrically symmetrical finite differences propagator.
End of explanation |
9,829 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 9
Step1: The Solow model with exogenous population growth
Now, let's suppose that production is a function of the supply of labor $L_t$
Step2: An alternative approach
Suppose that we wanted to simulate the Solow model with different parameter values so that we could compare the simulations. Since we'd be doing the same basic steps multiple times using different numbers, it would make sense to define a function so that we could avoid repetition.
The code below defines a function called solow_example() that simulates the Solow model with exogenous labor growth. solow_example() takes as arguments the parameters of the Solow model $A$, $\alpha$, $\delta$, $s$, and $n$; the initial values $K_0$ and $L_0$; and the number of simulation periods $T$. solow_example() returns a Pandas DataFrame with computed values for aggregate and per worker quantities.
Step3: With solow_example() defined, we can redo the previous exercise quickly
Step4: solow_example() can be used to perform multiple simulations. For example, suppose we want to see the effect of having two different initial values of capital | Python Code:
# Initialize parameters for the simulation (A, s, T, delta, alpha, K0)
K0 = 20
T= 100
A= 10
alpha = 0.35
delta = 0.1
s = 0.15
# Initialize a variable called capital as a (T+1)x1 array of zeros and set first value to K0
capital = np.zeros(T+1)
capital[0] = K0
# Compute all capital values by iterating over t from 0 through T
for t in np.arange(T):
capital[t+1] = s*A*capital[t]**alpha + (1-delta)*capital[t]
# Print the value of capital at dates 0 and T
print('capital(t=0):',capital[0])
print('capital(t=T):',capital[-1])
# Store the simulated capital data in a pandas DataFrame called data
data = pd.DataFrame({'capital':capital})
# Print the first five rows of the DataFrame
print(data.head())
# Create columns in the DataFrame to store computed values of the other endogenous variables
data['output'] = data['capital']**alpha
data['consumption'] = (1-s)*data['output']
data['investment'] = data['output'] - data['consumption']
# Print the first row of the DataFrame
print(data.iloc[0])
# Print the last row of the DataFrame
print(data.iloc[-1])
# Create a 2x2 grid of plots of capital, output, consumption, and investment
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(2,2,1)
ax.plot(data['capital'],lw=3)
ax.grid()
ax.set_title('Capital')
ax = fig.add_subplot(2,2,2)
ax.plot(data['output'],lw=3)
ax.grid()
ax.set_title('Output')
ax = fig.add_subplot(2,2,3)
ax.plot(data['consumption'],lw=3)
ax.grid()
ax.set_title('Consumption')
ax = fig.add_subplot(2,2,4)
ax.plot(data['investment'],lw=3)
ax.grid()
ax.set_title('Investment')
Explanation: Class 9: The Solow growth model
The Solow growth model is at the core of modern theories of growth and business cycles. The Solow model is a model of exogenous growth: long-run growth arises in the model as a consequence of exogenous growth in the labor supply and total factor productivity. The Solow model, like many other macroeconomic models, is a time series model.
The Solow model without exogenous growth
For the moment, let's disregard population and total factor productivity growth and assume that equilibrium in a closed economy is described by the following four equations:
\begin{align}
Y_t & = A K_t^{\alpha} \tag{1}\\
C_t & = (1-s)Y_t \tag{2}\\
Y_t & = C_t + I_t \tag{3}\\
K_{t+1} & = I_t + ( 1- \delta)K_t \tag{4}
\end{align}
Equation (1) is the production function. Equation (2) is the consumption function where $s$ denotes the exogenously given saving rate. Equation (3) is the aggregate market clearing condition. Finally, Equation (4) is the capital evolution equation specifying that capital in year $t+1$ is the sum of newly created capital $I_t$ and the capital stock from year $t$ that has not depreciated $(1-\delta)K_t$.
Combine Equations (1) through (4) to eliminate $C_t$, $I_t$, and $Y_t$ and obtain a single-variable recurrence relation for $K_{t+1}$:
\begin{align}
K_{t+1} & = sAK_t^{\alpha} + ( 1- \delta)K_t \tag{5}
\end{align}
Given an initial value for capital $K_0 >0$, iterate on Equation (5) to compute the value of the capital stock at some future date $T$. Furthermore, the values of consumption, output, and investment at date $T$ can also be computed using Equations (1) through (3).
Simulation
Simulate the Solow growth model for $t=0\ldots 100$. For the simulation, assume the following values of the parameters:
\begin{align}
A & = 10\\
\alpha & = 0.35\\
s & = 0.15\\
\delta & = 0.1
\end{align}
Furthermore, suppose that the initial value of capital is:
\begin{align}
K_0 & = 20
\end{align}
End of explanation
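A useful cross-check on the simulation (assuming the simulation cell above has been run so that capital is defined): setting $K_{t+1}=K_t=K^{*}$ in Equation (5) gives the steady state $K^{*} = (sA/\delta)^{1/(1-\alpha)}$, and the simulated series should be close to it by $t=T$:
```python
A, alpha, delta, s = 10, 0.35, 0.1, 0.15
K_star = (s*A/delta)**(1/(1 - alpha))
print(K_star)        # roughly 64.5
print(capital[-1])   # simulated capital at t = T, should be close to K_star
```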
# Initialize parameters for the simulation (A, s, T, delta, alpha, n, K0, L0)
K0 = 20
L0 = 1
T= 100
A= 10
alpha = 0.35
delta = 0.1
s = 0.15
n = 0.01
# Initialize a variable called labor as a (T+1)x1 array of zeros and set first value to L0
labor = np.zeros(T+1)
labor[0] = L0
# Compute all labor values by iterating over t from 0 through T
for t in np.arange(T):
labor[t+1] = (1+n)*labor[t]
# Plot the simulated labor series
plt.plot(labor,lw=3)
plt.grid()
plt.title('Labor')
# Initialize a variable called capital as a (T+1)x1 array of zeros and set first value to K0
capital = np.zeros(T+1)
capital[0] = K0
# Compute all capital values by iterating over t from 0 through T
for t in np.arange(T):
capital[t+1] = s*A*capital[t]**alpha*labor[t]**(1-alpha) + (1-delta)*capital[t]
# Plot the simulated capital series
plt.plot(capital,lw=3)
plt.grid()
plt.title('Capital')
# Store the simulated capital data in a pandas DataFrame called data_labor
data_labor = pd.DataFrame({'capital':capital,'labor':labor})
# Print the first five rows of the data_labor
print(data_labor.head())
# Create columns in the DataFrame to store computed values of the other endogenous variables
data_labor['output'] = data_labor['capital']**alpha*data_labor['labor']**(1-alpha)
data_labor['consumption'] = (1-s)*data_labor['output']
data_labor['investment'] = data_labor['output'] - data_labor['consumption']
# Print the first five rows of data_labor
print(data_labor.iloc[0])
# Create columns in the DataFrame to store capital per worker, output per worker, consumption per worker, and investment per worker
data_labor['capital_pw'] = data_labor['capital']/data_labor['labor']
data_labor['output_pw'] = data_labor['output']/data_labor['labor']
data_labor['consumption_pw'] = data_labor['consumption']/data_labor['labor']
data_labor['investment_pw'] = data_labor['investment']/data_labor['labor']
# Print the first five rows of data_labor
print(data_labor.head())
# Create a 2x2 grid of plots of capital, output, consumption, and investment
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(2,2,1)
ax.plot(data_labor['capital'],lw=3)
ax.grid()
ax.set_title('Capital')
ax = fig.add_subplot(2,2,2)
ax.plot(data_labor['output'],lw=3)
ax.grid()
ax.set_title('Output')
ax = fig.add_subplot(2,2,3)
ax.plot(data_labor['consumption'],lw=3)
ax.grid()
ax.set_title('Consumption')
ax = fig.add_subplot(2,2,4)
ax.plot(data_labor['investment'],lw=3)
ax.grid()
ax.set_title('Investment')
# Create a 2x2 grid of plots of capital per worker, outputper worker, consumption per worker, and investment per worker
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(2,2,1)
ax.plot(data_labor['capital_pw'],lw=3)
ax.grid()
ax.set_title('Capital per worker')
ax = fig.add_subplot(2,2,2)
ax.plot(data_labor['output_pw'],lw=3)
ax.grid()
ax.set_title('Output per worker')
ax = fig.add_subplot(2,2,3)
ax.plot(data_labor['consumption_pw'],lw=3)
ax.grid()
ax.set_title('Consumption per worker')
ax = fig.add_subplot(2,2,4)
ax.plot(data_labor['investment_pw'],lw=3)
ax.grid()
ax.set_title('Investment per worker')
Explanation: The Solow model with exogenous population growth
Now, let's suppose that production is a function of the supply of labor $L_t$:
\begin{align}
Y_t & = AK_t^{\alpha} L_t^{1-\alpha}\tag{6}
\end{align}
The supply of labor grows at an exogenously determined rate $n$ and so its value is determined recursively by a first-order difference equation:
\begin{align}
L_{t+1} & = (1+n) L_t \tag{7}
\end{align}
The rest of the economy is characterized by the same equations as before:
\begin{align}
C_t & = (1-s)Y_t \tag{8}\\
Y_t & = C_t + I_t \tag{9}\\
K_{t+1} & = I_t + ( 1- \delta)K_t \tag{10}
\end{align}
Combine Equations (6), (8), (9), and (10) to eliminate $C_t$, $I_t$, and $Y_t$ and obtain a recurrence relation specifying $K_{t+1}$ as a function of $K_t$ and $L_t$:
\begin{align}
K_{t+1} & = sAK_t^{\alpha}L_t^{1-\alpha} + ( 1- \delta)K_t \tag{11}
\end{align}
Given initial values for capital and labor, Equations (7) and (11) can be iterated on to compute the values of the capital stock and labor supply at some future date $T$. Furthermore, the values of consumption, output, and investment at date $T$ can also be computed using Equations (6), (8), (9), and (10).
Simulation
Simulate the Solow growth model with exogenous labor growth for $t=0\ldots 100$. For the simulation, assume the following values of the parameters:
\begin{align}
A & = 10\\
\alpha & = 0.35\\
s & = 0.15\\
\delta & = 0.1\\
n & = 0.01
\end{align}
Furthermore, suppose that the initial values of capital and labor are:
\begin{align}
K_0 & = 20\\
L_0 & = 1
\end{align}
End of explanation
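Analogously (and again assuming the simulation cell above has been run), the per-worker capital stock $k_t = K_t/L_t$ implied by Equations (7) and (11) settles at $k^{*} = (sA/(n+\delta))^{1/(1-\alpha)}$, which can be compared with the last simulated value:
```python
A, alpha, delta, s, n = 10, 0.35, 0.1, 0.15, 0.01
k_star = (s*A/(n + delta))**(1/(1 - alpha))
print(k_star)                              # roughly 56
print(data_labor['capital_pw'].iloc[-1])   # should be close to k_star
```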
def solow_example(A,alpha,delta,s,n,K0,L0,T):
'''Returns DataFrame with simulated values for a Solow model with labor growth and constant TFP'''
# Initialize a variable called capital as a (T+1)x1 array of zeros and set first value to K0
capital = np.zeros(T+1)
capital[0] = K0
# Initialize a variable called labor as a (T+1)x1 array of zeros and set first value to L0
labor = np.zeros(T+1)
labor[0] = L0
# Compute all capital and labor values by iterating over t from 0 through T
for t in np.arange(T):
labor[t+1] = (1+n)*labor[t]
capital[t+1] = s*A*capital[t]**alpha*labor[t]**(1-alpha) + (1-delta)*capital[t]
# Store the simulated capital df in a pandas DataFrame called data
df = pd.DataFrame({'capital':capital,'labor':labor})
# Create columns in the DataFrame to store computed values of the other endogenous variables
df['output'] = df['capital']**alpha*df['labor']**(1-alpha)
df['consumption'] = (1-s)*df['output']
df['investment'] = df['output'] - df['consumption']
# Create columns in the DataFrame to store capital per worker, output per worker, consumption per worker, and investment per worker
df['capital_pw'] = df['capital']/df['labor']
df['output_pw'] = df['output']/df['labor']
df['consumption_pw'] = df['consumption']/df['labor']
df['investment_pw'] = df['investment']/df['labor']
return df
Explanation: An alternative approach
Suppose that we wanted to simulate the Solow model with different parameter values so that we could compare the simulations. Since we'd be doing the same basic steps multiple times using different numbers, it would make sense to define a function so that we could avoid repetition.
The code below defines a function called solow_example() that simulates the Solow model with exogenous labor growth. solow_example() takes as arguments the parameters of the Solow model $A$, $\alpha$, $\delta$, $s$, and $n$; the initial values $K_0$ and $L_0$; and the number of simulation periods $T$. solow_example() returns a Pandas DataFrame with computed values for aggregate and per worker quantities.
End of explanation
# Create the DataFrame with simulated values
df = solow_example(A=10,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=20,L0=1,T=100)
# Create a 2x2 grid of plots of the capital per worker, outputper worker, consumption per worker, and investment per worker
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(2,2,1)
ax.plot(df['capital_pw'],lw=3)
ax.grid()
ax.set_title('Capital per worker')
ax = fig.add_subplot(2,2,2)
ax.plot(df['output_pw'],lw=3)
ax.grid()
ax.set_title('Output per worker')
ax = fig.add_subplot(2,2,3)
ax.plot(df['consumption_pw'],lw=3)
ax.grid()
ax.set_title('Consumption per worker')
ax = fig.add_subplot(2,2,4)
ax.plot(df['investment_pw'],lw=3)
ax.grid()
ax.set_title('Investment per worker')
Explanation: With solow_example() defined, we can redo the previous exercise quickly:
End of explanation
df1 = solow_example(A=10,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=20,L0=1,T=100)
df2 = solow_example(A=10,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=10,L0=1,T=100)
# Create a 2x2 grid of plots of the capital per worker, outputper worker, consumption per worker, and investment per worker
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(2,2,1)
ax.plot(df1['capital_pw'],lw=3)
ax.plot(df2['capital_pw'],lw=3)
ax.grid()
ax.set_title('Capital per worker')
ax = fig.add_subplot(2,2,2)
ax.plot(df1['output_pw'],lw=3)
ax.plot(df2['output_pw'],lw=3)
ax.grid()
ax.set_title('Output per worker')
ax = fig.add_subplot(2,2,3)
ax.plot(df1['consumption_pw'],lw=3)
ax.plot(df2['consumption_pw'],lw=3)
ax.grid()
ax.set_title('Consumption per worker')
ax = fig.add_subplot(2,2,4)
ax.plot(df1['investment_pw'],lw=3,label='$k_0=20$')
ax.plot(df2['investment_pw'],lw=3,label='$k_0=10$')
ax.grid()
ax.set_title('Investment per worker')
ax.legend(loc='lower right')
Explanation: solow_example() can be used to perform multiple simulations. For example, suppose we want to see the effect of having two different initial values of capital: $k_0 = 20$ and $k_0'=10$.
End of explanation |
9,830 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Language Basics, IPython, and Jupyter Notebooks
Step1: The Python Interpreter
```python
$ python
Python 3.6.0 | packaged by conda-forge | (default, Jan 13 2017, 23
Step2: from numpy.random import randn
data = {i
Step5: python
def append_element(some_list, element)
Step6: Attributes and methods
```python
In [1]
Step7: Duck typing
Step8: if not isinstance(x, list) and isiterable(x)
Step9: Mutable and immutable objects
Step10: Scalar Types
Numeric types
Step12: Strings
a = 'one way of writing a string'
b = "another way"
Step13: Bytes and Unicode
Step14: Booleans
Step15: Type casting
Step16: None
Step17: def add_and_maybe_multiply(a, b, c=None)
Step18: Dates and times
Step19: Control Flow
if, elif, and else
if x < 0
Step20: for loops
for value in collection
Step21: for a, b, c in iterator
Step22: seq = [1, 2, 3, 4]
for i in range(len(seq)) | Python Code:
import numpy as np
np.random.seed(12345)
np.set_printoptions(precision=4, suppress=True)
Explanation: Python Language Basics, IPython, and Jupyter Notebooks
End of explanation
import numpy as np
data = {i : np.random.randn() for i in range(7)}
data
Explanation: The Python Interpreter
```python
$ python
Python 3.6.0 | packaged by conda-forge | (default, Jan 13 2017, 23:17:12)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux
Type "help", "copyright", "credits" or "license" for more information.
a = 5
print(a)
5
```
python
print('Hello world')
python
$ python hello_world.py
Hello world
```shell
$ ipython
Python 3.6.0 | packaged by conda-forge | (default, Jan 13 2017, 23:17:12)
Type "copyright", "credits" or "license" for more information.
IPython 5.1.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: %run hello_world.py
Hello world
In [2]:
```
IPython Basics
Running the IPython Shell
$
End of explanation
a = [1, 2, 3]
b = a
a.append(4)
b
Explanation: from numpy.random import randn
data = {i : randn() for i in range(7)}
print(data)
{0: -1.5948255432744511, 1: 0.10569006472787983, 2: 1.972367135977295,
3: 0.15455217573074576, 4: -0.24058577449429575, 5: -1.2904897053651216,
6: 0.3308507317325902}
Running the Jupyter Notebook
shell
$ jupyter notebook
[I 15:20:52.739 NotebookApp] Serving notebooks from local directory:
/home/wesm/code/pydata-book
[I 15:20:52.739 NotebookApp] 0 active kernels
[I 15:20:52.739 NotebookApp] The Jupyter Notebook is running at:
http://localhost:8888/
[I 15:20:52.740 NotebookApp] Use Control-C to stop this server and shut down
all kernels (twice to skip confirmation).
Created new window in existing browser session.
Tab Completion
```
In [1]: an_apple = 27
In [2]: an_example = 42
In [3]: an
```
```
In [3]: b = [1, 2, 3]
In [4]: b.
```
```
In [1]: import datetime
In [2]: datetime.
```
In [7]: datasets/movielens/
Introspection
```
In [8]: b = [1, 2, 3]
In [9]: b?
Type: list
String Form:[1, 2, 3]
Length: 3
Docstring:
list() -> new empty list
list(iterable) -> new list initialized from iterable's items
In [10]: print?
Docstring:
print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)
Prints the values to a stream, or to sys.stdout by default.
Optional keyword arguments:
file: a file-like object (stream); defaults to the current sys.stdout.
sep: string inserted between values, default a space.
end: string appended after the last value, default a newline.
flush: whether to forcibly flush the stream.
Type: builtin_function_or_method
```
```python
def add_numbers(a, b):
    """
    Add two numbers together

    Returns
    -------
    the_sum : type of arguments
    """
    return a + b
```
```python
In [11]: add_numbers?
Signature: add_numbers(a, b)
Docstring:
Add two numbers together
Returns
the_sum : type of arguments
File: <ipython-input-9-6a548a216e27>
Type: function
```
```python
In [12]: add_numbers??
Signature: add_numbers(a, b)
Source:
def add_numbers(a, b):
    """
    Add two numbers together

    Returns
    -------
    the_sum : type of arguments
    """
    return a + b
File: <ipython-input-9-6a548a216e27>
Type: function
```
python
In [13]: np.*load*?
np.__loader__
np.load
np.loads
np.loadtxt
np.pkgload
The %run Command
```python
def f(x, y, z):
return (x + y) / z
a = 5
b = 6
c = 7.5
result = f(a, b, c)
```
python
In [14]: %run ipython_script_test.py
```python
In [15]: c
Out [15]: 7.5
In [16]: result
Out[16]: 1.4666666666666666
```
```python
%load ipython_script_test.py
def f(x, y, z):
return (x + y) / z
a = 5
b = 6
c = 7.5
result = f(a, b, c)
```
Interrupting running code
Executing Code from the Clipboard
```python
x = 5
y = 7
if x > 5:
x += 1
y = 8
```
```python
In [17]: %paste
x = 5
y = 7
if x > 5:
x += 1
y = 8
-- End pasted text --
```
python
In [18]: %cpaste
Pasting code; enter '--' alone on the line to stop or use Ctrl-D.
:x = 5
:y = 7
:if x > 5:
: x += 1
:
: y = 8
:--
Terminal Keyboard Shortcuts
About Magic Commands
```python
In [20]: a = np.random.randn(100, 100)
In [20]: %timeit np.dot(a, a)
10000 loops, best of 3: 20.9 µs per loop
```
```python
In [21]: %debug?
Docstring:
::
%debug [--breakpoint FILE:LINE] [statement [statement ...]]
Activate the interactive debugger.
This magic command support two ways of activating debugger.
One is to activate debugger before executing code. This way, you
can set a break point, to step through the code from the point.
You can use this mode by giving statements to execute and optionally
a breakpoint.
The other one is to activate debugger in post-mortem mode. You can
activate this mode simply running %debug without any argument.
If an exception has just occurred, this lets you inspect its stack
frames interactively. Note that this will always work only on the last
traceback that occurred, so you must call this quickly after an
exception that you wish to inspect has fired, because if another one
occurs, it clobbers the previous one.
If you want IPython to automatically do this on every exception, see
the %pdb magic for more details.
positional arguments:
statement Code to run in debugger. You can omit this in cell
magic mode.
optional arguments:
--breakpoint <FILE:LINE>, -b <FILE:LINE>
Set break point at LINE in FILE.
```
```python
In [22]: %pwd
Out[22]: '/home/wesm/code/pydata-book'
In [23]: foo = %pwd
In [24]: foo
Out[24]: '/home/wesm/code/pydata-book'
```
Matplotlib Integration
python
In [26]: %matplotlib
Using matplotlib backend: Qt4Agg
python
In [26]: %matplotlib inline
Python Language Basics
Language Semantics
Indentation, not braces
python
for x in array:
if x < pivot:
less.append(x)
else:
greater.append(x)
python
a = 5; b = 6; c = 7
Everything is an object
Comments
python
results = []
for line in file_handle:
# keep the empty lines for now
# if len(line) == 0:
# continue
results.append(line.replace('foo', 'bar'))
python
print("Reached this line") # Simple status report
Function and object method calls
result = f(x, y, z)
g()
obj.some_method(x, y, z)
python
result = f(a, b, c, d=5, e='foo')
Variables and argument passing
End of explanation
a = 5
type(a)
a = 'foo'
type(a)
'5' + 5
a = 4.5
b = 2
# String formatting, to be visited later
print('a is {0}, b is {1}'.format(type(a), type(b)))
a / b
a = 5
isinstance(a, int)
a = 5; b = 4.5
isinstance(a, (int, float))
isinstance(b, (int, float))
Explanation: python
def append_element(some_list, element):
some_list.append(element)
```python
In [27]: data = [1, 2, 3]
In [28]: append_element(data, 4)
In [29]: data
Out[29]: [1, 2, 3, 4]
```
Dynamic references, strong types
End of explanation
a = 'foo'
getattr(a, 'split')
Explanation: Attributes and methods
```python
In [1]: a = 'foo'
In [2]: a.<Press Tab>
a.capitalize a.format a.isupper a.rindex a.strip
a.center a.index a.join a.rjust a.swapcase
a.count a.isalnum a.ljust a.rpartition a.title
a.decode a.isalpha a.lower a.rsplit a.translate
a.encode a.isdigit a.lstrip a.rstrip a.upper
a.endswith a.islower a.partition a.split a.zfill
a.expandtabs a.isspace a.replace a.splitlines
a.find a.istitle a.rfind a.startswith
```
End of explanation
def isiterable(obj):
try:
iter(obj)
return True
except TypeError: # not iterable
return False
isiterable('a string')
isiterable([1, 2, 3])
isiterable(5)
Explanation: Duck typing
End of explanation
5 - 7
12 + 21.5
5 <= 2
a = [1, 2, 3]
b = a
c = list(a)
a is b
a is not c
a == c
a = None
a is None
Explanation: if not isinstance(x, list) and isiterable(x):
x = list(x)
Imports
```python
some_module.py
PI = 3.14159
def f(x):
return x + 2
def g(a, b):
return a + b
```
import some_module
result = some_module.f(5)
pi = some_module.PI
from some_module import f, g, PI
result = g(5, PI)
import some_module as sm
from some_module import PI as pi, g as gf
r1 = sm.f(pi)
r2 = gf(6, pi)
Binary operators and comparisons
End of explanation
a_list = ['foo', 2, [4, 5]]
a_list[2] = (3, 4)
a_list
a_tuple = (3, 5, (4, 5))
a_tuple[1] = 'four'
Explanation: Mutable and immutable objects
End of explanation
ival = 17239871
ival ** 6
fval = 7.243
fval2 = 6.78e-5
3 / 2
3 // 2
Explanation: Scalar Types
Numeric types
End of explanation
c = """
This is a longer string that
spans multiple lines
"""
c.count('\n')
a = 'this is a string'
a[10] = 'f'
b = a.replace('string', 'longer string')
b
a
a = 5.6
s = str(a)
print(s)
s = 'python'
list(s)
s[:3]
s = '12\\34'
print(s)
s = r'this\has\no\special\characters'
s
a = 'this is the first half '
b = 'and this is the second half'
a + b
template = '{0:.2f} {1:s} are worth US${2:d}'
template.format(4.5560, 'Argentine Pesos', 1)
Explanation: Strings
a = 'one way of writing a string'
b = "another way"
End of explanation
val = "español"
val
val_utf8 = val.encode('utf-8')
val_utf8
type(val_utf8)
val_utf8.decode('utf-8')
val.encode('latin1')
val.encode('utf-16')
val.encode('utf-16le')
bytes_val = b'this is bytes'
bytes_val
decoded = bytes_val.decode('utf8')
decoded # this is str (Unicode) now
Explanation: Bytes and Unicode
End of explanation
True and True
False or True
Explanation: Booleans
End of explanation
s = '3.14159'
fval = float(s)
type(fval)
int(fval)
bool(fval)
bool(0)
Explanation: Type casting
End of explanation
a = None
a is None
b = 5
b is not None
Explanation: None
End of explanation
type(None)
Explanation: def add_and_maybe_multiply(a, b, c=None):
result = a + b
if c is not None:
result = result * c
return result
End of explanation
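The function sketched in the text above, written out as a runnable cell with two example calls (the argument values are arbitrary):
```python
def add_and_maybe_multiply(a, b, c=None):
    result = a + b
    if c is not None:
        result = result * c
    return result

print(add_and_maybe_multiply(1, 2))     # 3
print(add_and_maybe_multiply(1, 2, 3))  # 9
```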
from datetime import datetime, date, time
dt = datetime(2011, 10, 29, 20, 30, 21)
dt.day
dt.minute
dt.date()
dt.time()
dt.strftime('%m/%d/%Y %H:%M')
datetime.strptime('20091031', '%Y%m%d')
dt.replace(minute=0, second=0)
dt2 = datetime(2011, 11, 15, 22, 30)
delta = dt2 - dt
delta
type(delta)
dt
dt + delta
Explanation: Dates and times
End of explanation
a = 5; b = 7
c = 8; d = 4
if a < b or c > d:
print('Made it')
4 > 3 > 2 > 1
Explanation: Control Flow
if, elif, and else
if x < 0:
print('It's negative')
if x < 0:
print('It's negative')
elif x == 0:
print('Equal to zero')
elif 0 < x < 5:
print('Positive but smaller than 5')
else:
print('Positive and larger than or equal to 5')
End of explanation
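The if/elif chain from the text as a runnable cell; the value of x is just an arbitrary choice to exercise one branch:
```python
x = -3
if x < 0:
    print("It's negative")
elif x == 0:
    print('Equal to zero')
elif 0 < x < 5:
    print('Positive but smaller than 5')
else:
    print('Positive and larger than or equal to 5')
```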
for i in range(4):
for j in range(4):
if j > i:
break
print((i, j))
Explanation: for loops
for value in collection:
# do something with value
sequence = [1, 2, None, 4, None, 5]
total = 0
for value in sequence:
if value is None:
continue
total += value
sequence = [1, 2, 0, 4, 6, 5, 2, 1]
total_until_5 = 0
for value in sequence:
if value == 5:
break
total_until_5 += value
End of explanation
range(10)
list(range(10))
list(range(0, 20, 2))
list(range(5, 0, -1))
Explanation: for a, b, c in iterator:
# do something
while loops
x = 256
total = 0
while x > 0:
if total > 500:
break
total += x
x = x // 2
pass
if x < 0:
print('negative!')
elif x == 0:
# TODO: put something smart here
pass
else:
print('positive!')
range
End of explanation
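The while loop sketched above as a runnable cell; it keeps halving x and stops once total exceeds 500:
```python
x = 256
total = 0
while x > 0:
    if total > 500:
        break
    total += x
    x = x // 2
print(total)  # 504
```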
x = 5
'Non-negative' if x >= 0 else 'Negative'
Explanation: seq = [1, 2, 3, 4]
for i in range(len(seq)):
val = seq[i]
sum = 0
for i in range(100000):
# % is the modulo operator
if i % 3 == 0 or i % 5 == 0:
sum += i
Ternary expressions
value = true-expr if condition else false-expr
End of explanation |
9,831 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Boolean Generator
This notebook will show how to use the boolean generator to generate a boolean combinational function. The function that is implemented is a 2-input XOR.
Step 1
Step1: Step 2
Step2: Step 3
Step3: Find the On-board pushbuttons and LEDs
Step 4
Step4: Verify the operation of the XOR function
| PB0 | PB3 | LD2 |
|
Step5: Step 6
Step6: Stop the boolean generator | Python Code:
from pynq.overlays.logictools import LogicToolsOverlay
logictools_olay = LogicToolsOverlay('logictools.bit')
Explanation: Boolean Generator
This notebook will show how to use the boolean generator to generate a boolean combinational function. The function that is implemented is a 2-input XOR.
Step 1: Download the logictools overlay
End of explanation
function = ['LD2 = PB3 ^ PB0']
Explanation: Step 2: Specify the boolean function of a 2-input XOR
The logic is applied to the on-board pushbuttons and LED, pushbuttons PB0 and PB3 are set as inputs and LED LD2 is set as an output
End of explanation
boolean_generator = logictools_olay.boolean_generator
boolean_generator.setup(function)
Explanation: Step 3: Instantiate and setup of the boolean generator object.
The logic function defined in the previous step is setup using the setup() method
End of explanation
boolean_generator.run()
Explanation: Find the On-board pushbuttons and LEDs
Step 4: Run the boolean generator verify operation
End of explanation
boolean_generator.stop()
Explanation: Verify the operation of the XOR function
| PB0 | PB3 | LD2 |
|:---:|:---:|:---:|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
Step 5: Stop the boolean generator
End of explanation
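Before checking the buttons on the board, the truth table above can also be verified in plain Python with the ^ (XOR) operator:
```python
for pb0 in (0, 1):
    for pb3 in (0, 1):
        print(pb0, pb3, pb0 ^ pb3)
```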
from pynq.overlays.logictools import LogicToolsOverlay
logictools_olay = LogicToolsOverlay('logictools.bit')
boolean_generator = logictools_olay.boolean_generator
function = {'XOR_gate': 'LD2 = PB3 ^ PB0'}
boolean_generator.setup(function)
boolean_generator.run()
Explanation: Step 6: Re-run the entire boolean function generation in a single cell
Note: The boolean expression format can be list or dict. We had used a list in the example above. We will now use a dict.
<font color="DodgerBlue">Alternative format:</font>
python
function = {'XOR_gate': 'LD2 = PB3 ^ PB0'}
End of explanation
boolean_generator.stop()
Explanation: Stop the boolean generator
End of explanation |
9,832 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pcygni-Profile Calculator Tool Tutorial
This brief tutorial should give you a basic overview of the main features and capabilities of the Python P-Cygni Line profile calculator, which is based on the Elementary Supernova Model (ES) of Jefferey and Branch 1990.
Installation
Obtaining the tool
Head over to github and either clone my repository (https
Step1: Import the line profile calculator module
Step2: Create an instance of the Line Calculator, for now with the default parameters (check the source code for the default parameters)
Step3: Calculate and illustrate line profile
Step4: Advanced Uses
In the remaining part of this tutorial, we have a close look at the line calculator and investigate a few phenomena in more detail. As a preparation, we import astropy
Step5: Now, we have a look at the parameters of the line profile calculator. The following code line just shows all keyword arguments of the calculator and their default values
Step6: t
Step7: The stronger the line, the deeper the absorption troughs and the stronger the emission peak becomes. At a certain point, the line "saturates", i.e. the profile does not become more prominent with increasing line strength, since all photons are already scattered.
Increasing the ejecta size
Step8: Changing the size of the photosphere
Step9: Detaching the line forming region from the photosphere
Finally, we investigate what happens when the line does not form throughout the entire envelope but only in a detached shell within the ejecta. | Python Code:
import matplotlib.pyplot as plt
Explanation: Pcygni-Profile Calculator Tool Tutorial
This brief tutorial should give you a basic overview of the main features and capabilities of the Python P-Cygni Line profile calculator, which is based on the Elementary Supernova Model (ES) of Jefferey and Branch 1990.
Installation
Obtaining the tool
Head over to github and either clone my repository (https://github.com/unoebauer/public-astro-tools) or download the Python file directly.
Requisites
Python 2.7 and the following packages:
numpy
scipy
astropy
matplotlib
(recommended) ipython
These are all standard Python packages and you should be able to install these with the package manager of your favourite distribution. Alternatively, you can use Anaconda/Miniconda. For this, you can use the requirements file shipped with the github repository:
conda-env create -f pcygni_env.yml
(Optional) Running this tutorial as a jupyter notebook
If you want to interactively use this jupyter notebook, you have to install jupyter as well (it is part of the requirements file and will be installed automatically when setting up the anaconda environment). Then you can start this notebook with:
jupyter notebook pcygni_tutorial.ipynb
Basic usage
The following Python code snippets demonstrate the basic use of the tool. Just execute the following lines in a python/ipython shell (ipython is preferable) in the directory in which the Python tool is located:
End of explanation
import pcygni_profile as pcp
Explanation: Import the line profile calculator module
End of explanation
profcalc = pcp.PcygniCalculator()
Explanation: Create an instance of the Line Calculator, for now with the default parameters (check the source code for the default parameters)
End of explanation
fig = profcalc.show_line_profile()
Explanation: Calculate and illustrate line profile
End of explanation
import astropy.units as units
import astropy.constants as csts
Explanation: Advanced Uses
In the remaining part of this tutorial, we have a close look at the line calculator and investigate a few phenomena in more detail. As a preparation, we import astropy
End of explanation
tmp = pcp.PcygniCalculator(t=3000 * units.s, vmax=0.01 * csts.c, vphot=0.001 * csts.c, tauref=1,
vref=5e7 * units.cm/units.s, ve=5e7 * units.cm/units.s,
lam0=1215.7 * units.AA, vdet_min=None, vdet_max=None)
Explanation: Now, we have a look at the parameters of the line profile calculator. The following code line just shows all keyword arguments of the calculator and their default values:
End of explanation
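If you prefer to query the defaults from the interpreter instead of opening the source file, the built-in help shows the constructor signature and docstring (the exact output depends on the installed version):
```python
help(pcp.PcygniCalculator.__init__)
```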
fig = plt.figure()
ax = fig.add_subplot(111)
for t in [1, 10, 100, 1000, 10000]:
tmp = pcp.PcygniCalculator(tauref=t)
x, y = tmp.calc_profile_Flam(npoints=500)
ax.plot(x.to("AA"), y, label=r"$\tau_{{\mathrm{{ref}}}} = {:f}$".format(t))
ax.set_xlabel(r"$\lambda$ [$\mathrm{\AA}$]")
ax.set_ylabel(r"$F_{\lambda}/F^{\mathrm{phot}}_{\lambda}$")
ax.legend(frameon=False)
Explanation: t: time since explosion (default 3000 secs)
vmax: velocity at the outer ejecta edge (with t, can be turned into r) (1% c)
vphot: velocity, i.e. location, of the photosphere (0.1% c)
vref: reference velocity, used in the density law (500 km/s)
ve: another parameter for the density law (500 km/s)
tauref: reference optical depth of the line transition (at vref) (1)
lam0: rest frame natural wavelength of the line transition (1215.7 Angstrom)
vdet_min: inner location of the detached line-formation shell; if None, will be set to vphot (None)
vdet_max: outer location of the detached line-formation shell; if None, will be set to vmax (None)
Note that you have to supply astropy quantities (i.e. numbers with units) for all these parameters (except for the reference optical depth).
Varying the Line Strength - Line Saturation
To start, we investigate the effect of increasing the line strength.
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
vmax = 0.01 * csts.c
for v in [0.5 * vmax, vmax, 2 * vmax]:
tmp = pcp.PcygniCalculator(tauref=1000, vmax=v)
x, y = tmp.calc_profile_Flam(npoints=500)
ax.plot(x.to("AA"), y, label=r"$v_{{\mathrm{{max}}}} = {:f}\,c$".format((v / csts.c).to("")))
ax.set_xlabel(r"$\lambda$ [$\mathrm{\AA}$]")
ax.set_ylabel(r"$F_{\lambda} / F^{\mathrm{phot}}_{\lambda}$")
ax.legend(frameon=False)
Explanation: The stronger the line, the deeper the absorption troughs and the stronger the emission peak becomes. At a certain point, the line "saturates", i.e. the profile does not become more prominent with increasing line strength, since all photons are already scattered.
Increasing the ejecta size
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
vphot = 0.001 * csts.c
for v in [0.5 * vphot, vphot, 2 * vphot]:
tmp = pcp.PcygniCalculator(tauref=1000, vphot=v)
x, y = tmp.calc_profile_Flam(npoints=500)
ax.plot(x.to("AA"), y, label=r"$v_{{\mathrm{{phot}}}} = {:f}\,c$".format((v / csts.c).to("")))
ax.set_xlabel(r"$\lambda$ [$\mathrm{\AA}$]")
ax.set_ylabel(r"$F_{\lambda} / F^{\mathrm{phot}}_{\lambda}$")
ax.legend(frameon=False)
Explanation: Changing the size of the photosphere
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
tmp = pcp.PcygniCalculator(tauref=1000)
x, y = tmp.calc_profile_Flam(npoints=500)
ax.plot(x.to("AA"), y, label=r"no detached line-formation shell")
tmp = pcp.PcygniCalculator(tauref=1000, vdet_min=0.0025 * csts.c, vdet_max=0.0075 * csts.c)
x, y = tmp.calc_profile_Flam(npoints=500)
ax.plot(x.to("AA"), y, label=r"detached line-formation shell")
ax.set_xlabel(r"$\lambda$ [$\mathrm{\AA}$]")
ax.set_ylabel(r"$F_{\lambda} / F^{\mathrm{phot}}_{\lambda}$")
ax.legend(frameon=False)
Explanation: Detaching the line forming region from the photosphere
Finally, we investigate what happens when the line does not form throughout the entire envelope but only in a detached shell within the ejecta.
End of explanation |
9,833 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solutions to
Tutorial "Algorithmic Methods for Network Analysis with NetworKit" (Part 4)
Determining Important Nodes (cont'd)
Betweenness Centrality
If you interpret the Facebook graph as web link graph in the previous Q&A session, the obvious ranking choice is the PageRank. Note that today it is only one of many aspects modern web search engines consider to rank web pages. However, we were looking for the eigenvector centrality as MIT8 is a social network (both and possibly others can be applicable, though).
In applications where the flow of goods, vehicles, information, etc. via shortest paths in a network plays a major role, betweenness centrality is an interesting centrality index. It is also widely used for social networks. Its drawback is its rather high running time, which makes its use problematic for really large networks. But in many applications we do not need to consider the exact betweenness values nor an exact ranking. An approximation is often good enough.
Q&A Session #7
In the PGP network, compute the 15 nodes with the highest (exact) betweenness values and order them accordingly in a ranking.
Answer
Step1: Community Detection
This section demonstrates the community detection capabilities of NetworKit. Community detection is concerned with identifying groups of nodes which are significantly more densely connected to each other than to the rest of the network.
Code for community detection is contained in the community module. The module provides a top-level function to quickly perform community detection with a suitable algorithm and print some stats about the result.
Step2: The default setting uses the parallel Louvain method (PLM) as underlying algorithm. The function prints some statistics and returns the partition object representing the communities in the network as an assignment of node to community label. PLM yields a high-quality solution at reasonably fast running times. Let us now apply other algorithms. To this end, one specifies the algorithm directly in the call.
Step3: The visualization module, which is based on external code for graph drawing, provides a function which visualizes the community graph for a community detection solution
Step4: Q&A Session 8
Run PLMR as well. What are the main differences between the three algorithms PLM, PLMR, and PLP in terms of the solutions they compute and the time they need for this computation?
Answer | Python Code:
from networkit import *
%matplotlib inline
cd ~/Documents/workspace/NetworKit
G = readGraph("input/PGPgiantcompo.graph", Format.METIS)
# Code for 7-1)
# exact computation
bc = centrality.Betweenness(G, True)
%time bc.run()
bc.ranking()[:15]
# Code for 7-2)
# approximate computation
bca = centrality.ApproxBetweenness(G, 0.05)
%time bca.run()
bca.ranking()[:15]
Explanation: Solutions to
Tutorial "Algorithmic Methods for Network Analysis with NetworKit" (Part 4)
Determining Important Nodes (cont'd)
Betweenness Centrality
If you interpret the Facebook graph as web link graph in the previous Q&A session, the obvious ranking choice is the PageRank. Note that today it is only one of many aspects modern web search engines consider to rank web pages. However, we were looking for the eigenvector centrality as MIT8 is a social network (both and possibly others can be applicable, though).
In applications where the flow of goods, vehicles, information, etc. via shortest paths in a network plays a major role, betweenness centrality is an interesting centrality index. It is also widely used for social networks. Its drawback is its rather high running time, which makes its use problematic for really large networks. But in many applications we do not need to consider the exact betweenness values nor an exact ranking. An approximation is often good enough.
Q&A Session #7
In the PGP network, compute the 15 nodes with the highest (exact) betweenness values and order them accordingly in a ranking.
Answer:
Perform the same as in 1) with one difference: Instead of using the algorithm for computing exact betweenness values, use the RK approximation algorithm. Use also values different from the default ones for the parameters $\delta$ and $\epsilon$. What effects do you see in comparison to the ranking based on exact values? What about running time (you can use %time preceding a call to get its CPU time)? And how do the parameter settings affect these effects?
Answer:
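A possible sketch for this part (a starting point only - here the second positional argument is assumed to be the error bound epsilon, as in the call above; parameter names and positions can differ between NetworKit versions):
bca_loose = centrality.ApproxBetweenness(G, 0.2)
%time bca_loose.run()
bca_loose.ranking()[:15]
A larger epsilon (and, analogously, a larger delta) means fewer sampled shortest paths, so the computation finishes faster but the top-15 ranking can deviate more from the exact one; tightening the parameters brings the ranking closer to the exact result at the cost of running time.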
End of explanation
community.detectCommunities(G)
Explanation: Community Detection
This section demonstrates the community detection capabilities of NetworKit. Community detection is concerned with identifying groups of nodes which are significantly more densely connected to each other than to the rest of the network.
Code for community detection is contained in the community module. The module provides a top-level function to quickly perform community detection with a suitable algorithm and print some stats about the result.
End of explanation
community.detectCommunities(G, algo=community.PLP(G))
Explanation: The default setting uses the parallel Louvain method (PLM) as underlying algorithm. The function prints some statistics and returns the partition object representing the communities in the network as an assignment of node to community label. PLM yields a high-quality solution at reasonably fast running times. Let us now apply other algorithms. To this end, one specifies the algorithm directly in the call.
End of explanation
communities = _
viztasks.drawCommunityGraph(G, communities)
Explanation: The visualization module, which is based on external code for graph drawing, provides a function which visualizes the community graph for a community detection solution: Communities are contracted into single nodes whose size corresponds to the community size. For problems with hundreds or thousands of communities, this may take a while.
End of explanation
# Code for 8-1) and 8-2)
community.detectCommunities(G, algo=community.PLM(G,True))
communities = _
viztasks.drawCommunityGraph(G, communities)
Explanation: Q&A Session 8
Run PLMR as well. What are the main differences between the three algorithms PLM, PLMR, and PLP in terms of the solutions they compute and the time they need for this computation?
Answer:
Visualize the three results. Can you see aspects of your answer to 1) in the figure as well? Do the figures lead to other insights?
Answer:
End of explanation |
9,834 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Semi-supervision and domain adaptation with AdaMatch
Author
Step1: Before we proceed, let's review a few preliminary concepts underlying this example.
Preliminaries
In semi-supervised learning (SSL), we use a small amount of labeled data to
train models on a bigger unlabeled dataset. Popular semi-supervised learning methods
for computer vision include FixMatch,
MixMatch,
Noisy Student Training, etc. You can refer to
this example to get an idea
of what a standard SSL workflow looks like.
In unsupervised domain adaptation, we have access to a source labeled dataset and
a target unlabeled dataset. Then the task is to learn a model that can generalize well
to the target dataset. The source and the target datasets vary in terms of distribution.
The following figure provides an illustration of this idea. In the present example, we use the
MNIST dataset as the source dataset, while the target dataset is
SVHN, which consists of images of house
numbers. Both datasets have various varying factors in terms of texture, viewpoint,
appearance, etc.
Step2: Prepare the data
Step3: Define constants and hyperparameters
Step4: Data augmentation utilities
A standard element of SSL algorithms is to feed weakly and strongly augmented versions of
the same images to the learning model to make its predictions consistent. For strong
augmentation, RandAugment is a standard choice. For
weak augmentation, we will use horizontal flipping and random cropping.
Step5: Data loading utilities
Step6: _w and _s suffixes denote weak and strong respectively.
Step7: Here's what a single image batch looks like
Step8: Subclassed model for AdaMatch training
The figure below presents the overall workflow of AdaMatch (taken from the
original paper)
Step9: The authors introduce three improvements in the paper
Step10: We can now instantiate a Wide ResNet model like so. Note that the purpose of using a
Wide ResNet here is to keep the implementation as close to the original one
as possible.
Step11: Instantiate AdaMatch model and compile it
Step12: Model training
Step13: Evaluation on the target and source test sets
Step14: With more training, this score improves. When this same network is trained with
standard classification objective, it yields an accuracy of 7.20% which is
significantly lower than what we got with AdaMatch. You can check out
this notebook
to learn more about the hyperparameters and other experimental details. | Python Code:
!pip install -q tf-models-official
Explanation: Semi-supervision and domain adaptation with AdaMatch
Author: Sayak Paul<br>
Date created: 2021/06/19<br>
Last modified: 2021/06/19<br>
Description: Unifying semi-supervised learning and unsupervised domain adaptation with AdaMatch.
Introduction
In this example, we will implement the AdaMatch algorithm, proposed in
AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation
by Berthelot et al. It sets a new state-of-the-art in unsupervised domain adaptation (as of
June 2021). AdaMatch is particularly interesting because it
unifies semi-supervised learning (SSL) and unsupervised domain adaptation
(UDA) under one framework. It thereby provides a way to perform semi-supervised domain
adaptation (SSDA).
This example requires TensorFlow 2.5 or higher, as well as TensorFlow Models, which can
be installed using the following command:
End of explanation
import tensorflow as tf
tf.random.set_seed(42)
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from official.vision.image_classification.augment import RandAugment
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
Explanation: Before we proceed, let's review a few preliminary concepts underlying this example.
Preliminaries
In semi-supervised learning (SSL), we use a small amount of labeled data to
train models on a bigger unlabeled dataset. Popular semi-supervised learning methods
for computer vision include FixMatch,
MixMatch,
Noisy Student Training, etc. You can refer to
this example to get an idea
of what a standard SSL workflow looks like.
In unsupervised domain adaptation, we have access to a source labeled dataset and
a target unlabeled dataset. Then the task is to learn a model that can generalize well
to the target dataset. The source and the target datasets vary in terms of distribution.
The following figure provides an illustration of this idea. In the present example, we use the
MNIST dataset as the source dataset, while the target dataset is
SVHN, which consists of images of house
numbers. Both datasets have various varying factors in terms of texture, viewpoint,
appearance, etc.: their domains, or distributions, are different from one
another.
Popular domain adaptation algorithms in deep learning include
Deep CORAL,
Moment Matching, etc.
Setup
End of explanation
# MNIST
(
(mnist_x_train, mnist_y_train),
(mnist_x_test, mnist_y_test),
) = keras.datasets.mnist.load_data()
# Add a channel dimension
mnist_x_train = tf.expand_dims(mnist_x_train, -1)
mnist_x_test = tf.expand_dims(mnist_x_test, -1)
# Convert the labels to one-hot encoded vectors
mnist_y_train = tf.one_hot(mnist_y_train, 10).numpy()
# SVHN
svhn_train, svhn_test = tfds.load(
"svhn_cropped", split=["train", "test"], as_supervised=True
)
Explanation: Prepare the data
End of explanation
RESIZE_TO = 32
SOURCE_BATCH_SIZE = 64
TARGET_BATCH_SIZE = 3 * SOURCE_BATCH_SIZE # Reference: Section 3.2
EPOCHS = 10
STEPS_PER_EPOCH = len(mnist_x_train) // SOURCE_BATCH_SIZE
TOTAL_STEPS = EPOCHS * STEPS_PER_EPOCH
AUTO = tf.data.AUTOTUNE
LEARNING_RATE = 0.03
WEIGHT_DECAY = 0.0005
INIT = "he_normal"
DEPTH = 28
WIDTH_MULT = 2
Explanation: Define constants and hyperparameters
End of explanation
# Initialize `RandAugment` object with 2 layers of
# augmentation transforms and strength of 5.
augmenter = RandAugment(num_layers=2, magnitude=5)
def weak_augment(image, source=True):
if image.dtype != tf.float32:
image = tf.cast(image, tf.float32)
# MNIST images are grayscale, this is why we first convert them to
# RGB images.
if source:
image = tf.image.resize_with_pad(image, RESIZE_TO, RESIZE_TO)
image = tf.tile(image, [1, 1, 3])
image = tf.image.random_flip_left_right(image)
image = tf.image.random_crop(image, (RESIZE_TO, RESIZE_TO, 3))
return image
def strong_augment(image, source=True):
if image.dtype != tf.float32:
image = tf.cast(image, tf.float32)
if source:
image = tf.image.resize_with_pad(image, RESIZE_TO, RESIZE_TO)
image = tf.tile(image, [1, 1, 3])
image = augmenter.distort(image)
return image
Explanation: Data augmentation utilities
A standard element of SSL algorithms is to feed weakly and strongly augmented versions of
the same images to the learning model to make its predictions consistent. For strong
augmentation, RandAugment is a standard choice. For
weak augmentation, we will use horizontal flipping and random cropping.
End of explanation
def create_individual_ds(ds, aug_func, source=True):
if source:
batch_size = SOURCE_BATCH_SIZE
else:
# During training 3x more target unlabeled samples are shown
# to the model in AdaMatch (Section 3.2 of the paper).
batch_size = TARGET_BATCH_SIZE
ds = ds.shuffle(batch_size * 10, seed=42)
if source:
ds = ds.map(lambda x, y: (aug_func(x), y), num_parallel_calls=AUTO)
else:
ds = ds.map(lambda x, y: (aug_func(x, False), y), num_parallel_calls=AUTO)
ds = ds.batch(batch_size).prefetch(AUTO)
return ds
Explanation: Data loading utilities
End of explanation
source_ds = tf.data.Dataset.from_tensor_slices((mnist_x_train, mnist_y_train))
source_ds_w = create_individual_ds(source_ds, weak_augment)
source_ds_s = create_individual_ds(source_ds, strong_augment)
final_source_ds = tf.data.Dataset.zip((source_ds_w, source_ds_s))
target_ds_w = create_individual_ds(svhn_train, weak_augment, source=False)
target_ds_s = create_individual_ds(svhn_train, strong_augment, source=False)
final_target_ds = tf.data.Dataset.zip((target_ds_w, target_ds_s))
Explanation: _w and _s suffixes denote weak and strong respectively.
End of explanation
def compute_loss_source(source_labels, logits_source_w, logits_source_s):
loss_func = keras.losses.CategoricalCrossentropy(from_logits=True)
# First compute the losses between original source labels and
# predictions made on the weakly and strongly augmented versions
# of the same images.
w_loss = loss_func(source_labels, logits_source_w)
s_loss = loss_func(source_labels, logits_source_s)
return w_loss + s_loss
def compute_loss_target(target_pseudo_labels_w, logits_target_s, mask):
loss_func = keras.losses.CategoricalCrossentropy(from_logits=True, reduction="none")
target_pseudo_labels_w = tf.stop_gradient(target_pseudo_labels_w)
# For calculating loss for the target samples, we treat the pseudo labels
# as the ground-truth. These are not considered during backpropagation
# which is a standard SSL practice.
target_loss = loss_func(target_pseudo_labels_w, logits_target_s)
# More on `mask` later.
mask = tf.cast(mask, target_loss.dtype)
target_loss *= mask
return tf.reduce_mean(target_loss, 0)
Explanation: Here's what a single image batch looks like:
Loss computation utilities
End of explanation
class AdaMatch(keras.Model):
def __init__(self, model, total_steps, tau=0.9):
super(AdaMatch, self).__init__()
self.model = model
self.tau = tau # Denotes the confidence threshold
self.loss_tracker = tf.keras.metrics.Mean(name="loss")
self.total_steps = total_steps
self.current_step = tf.Variable(0, dtype="int64")
@property
def metrics(self):
return [self.loss_tracker]
# This is a warmup schedule to update the weight of the
# loss contributed by the target unlabeled samples. More
# on this in the text.
def compute_mu(self):
pi = tf.constant(np.pi, dtype="float32")
step = tf.cast(self.current_step, dtype="float32")
return 0.5 - tf.cos(tf.math.minimum(pi, (2 * pi * step) / self.total_steps)) / 2
def train_step(self, data):
## Unpack and organize the data ##
source_ds, target_ds = data
(source_w, source_labels), (source_s, _) = source_ds
(
(target_w, _),
(target_s, _),
) = target_ds # Notice that we are NOT using any labels here.
combined_images = tf.concat([source_w, source_s, target_w, target_s], 0)
combined_source = tf.concat([source_w, source_s], 0)
total_source = tf.shape(combined_source)[0]
total_target = tf.shape(tf.concat([target_w, target_s], 0))[0]
with tf.GradientTape() as tape:
## Forward passes ##
combined_logits = self.model(combined_images, training=True)
z_d_prime_source = self.model(
combined_source, training=False
) # No BatchNorm update.
z_prime_source = combined_logits[:total_source]
## 1. Random logit interpolation for the source images ##
lambd = tf.random.uniform((total_source, 10), 0, 1)
final_source_logits = (lambd * z_prime_source) + (
(1 - lambd) * z_d_prime_source
)
## 2. Distribution alignment (only consider weakly augmented images) ##
# Compute softmax for logits of the WEAKLY augmented SOURCE images.
y_hat_source_w = tf.nn.softmax(final_source_logits[: tf.shape(source_w)[0]])
# Extract logits for the WEAKLY augmented TARGET images and compute softmax.
logits_target = combined_logits[total_source:]
logits_target_w = logits_target[: tf.shape(target_w)[0]]
y_hat_target_w = tf.nn.softmax(logits_target_w)
# Align the target label distribution to that of the source.
expectation_ratio = tf.reduce_mean(y_hat_source_w) / tf.reduce_mean(
y_hat_target_w
)
y_tilde_target_w = tf.math.l2_normalize(
y_hat_target_w * expectation_ratio, 1
)
## 3. Relative confidence thresholding ##
row_wise_max = tf.reduce_max(y_hat_source_w, axis=-1)
final_sum = tf.reduce_mean(row_wise_max, 0)
c_tau = self.tau * final_sum
mask = tf.reduce_max(y_tilde_target_w, axis=-1) >= c_tau
## Compute losses (pay attention to the indexing) ##
source_loss = compute_loss_source(
source_labels,
final_source_logits[: tf.shape(source_w)[0]],
final_source_logits[tf.shape(source_w)[0] :],
)
target_loss = compute_loss_target(
y_tilde_target_w, logits_target[tf.shape(target_w)[0] :], mask
)
t = self.compute_mu() # Compute weight for the target loss
total_loss = source_loss + (t * target_loss)
self.current_step.assign_add(
1
) # Update current training step for the scheduler
gradients = tape.gradient(total_loss, self.model.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.model.trainable_variables))
self.loss_tracker.update_state(total_loss)
return {"loss": self.loss_tracker.result()}
Explanation: Subclassed model for AdaMatch training
The figure below presents the overall workflow of AdaMatch (taken from the
original paper):
Here's a brief step-by-step breakdown of the workflow:
We first retrieve the weakly and strongly augmented pairs of images from the source and
target datasets.
We prepare two concatenated copies:
i. One where both pairs are concatenated.
ii. One where only the source data image pair is concatenated.
We run two forward passes through the model:
i. The first forward pass uses the concatenated copy obtained from 2.i. In
this forward pass, the Batch Normalization statistics
are updated.
ii. In the second forward pass, we only use the concatenated copy obtained from 2.ii.
Batch Normalization layers are run in inference mode.
The respective logits are computed for both the forward passes.
The logits go through a series of transformations, introduced in the paper (which
we will discuss shortly).
We compute the loss and update the gradients of the underlying model.
End of explanation
def wide_basic(x, n_input_plane, n_output_plane, stride):
conv_params = [[3, 3, stride, "same"], [3, 3, (1, 1), "same"]]
n_bottleneck_plane = n_output_plane
# Residual block
for i, v in enumerate(conv_params):
if i == 0:
if n_input_plane != n_output_plane:
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
convs = x
else:
convs = layers.BatchNormalization()(x)
convs = layers.Activation("relu")(convs)
convs = layers.Conv2D(
n_bottleneck_plane,
(v[0], v[1]),
strides=v[2],
padding=v[3],
kernel_initializer=INIT,
kernel_regularizer=regularizers.l2(WEIGHT_DECAY),
use_bias=False,
)(convs)
else:
convs = layers.BatchNormalization()(convs)
convs = layers.Activation("relu")(convs)
convs = layers.Conv2D(
n_bottleneck_plane,
(v[0], v[1]),
strides=v[2],
padding=v[3],
kernel_initializer=INIT,
kernel_regularizer=regularizers.l2(WEIGHT_DECAY),
use_bias=False,
)(convs)
# Shortcut connection: identity function or 1x1 convolutional
# (depends on difference between input & output shape - this
# corresponds to whether we are using the first block in each
# group; see `block_series()`).
if n_input_plane != n_output_plane:
shortcut = layers.Conv2D(
n_output_plane,
(1, 1),
strides=stride,
padding="same",
kernel_initializer=INIT,
kernel_regularizer=regularizers.l2(WEIGHT_DECAY),
use_bias=False,
)(x)
else:
shortcut = x
return layers.Add()([convs, shortcut])
# Stacking residual units on the same stage
def block_series(x, n_input_plane, n_output_plane, count, stride):
x = wide_basic(x, n_input_plane, n_output_plane, stride)
for i in range(2, int(count + 1)):
x = wide_basic(x, n_output_plane, n_output_plane, stride=1)
return x
def get_network(image_size=32, num_classes=10):
n = (DEPTH - 4) / 6
n_stages = [16, 16 * WIDTH_MULT, 32 * WIDTH_MULT, 64 * WIDTH_MULT]
inputs = keras.Input(shape=(image_size, image_size, 3))
x = layers.Rescaling(scale=1.0 / 255)(inputs)
conv1 = layers.Conv2D(
n_stages[0],
(3, 3),
strides=1,
padding="same",
kernel_initializer=INIT,
kernel_regularizer=regularizers.l2(WEIGHT_DECAY),
use_bias=False,
)(x)
## Add wide residual blocks ##
conv2 = block_series(
conv1,
n_input_plane=n_stages[0],
n_output_plane=n_stages[1],
count=n,
stride=(1, 1),
) # Stage 1
conv3 = block_series(
conv2,
n_input_plane=n_stages[1],
n_output_plane=n_stages[2],
count=n,
stride=(2, 2),
) # Stage 2
conv4 = block_series(
conv3,
n_input_plane=n_stages[2],
n_output_plane=n_stages[3],
count=n,
stride=(2, 2),
) # Stage 3
batch_norm = layers.BatchNormalization()(conv4)
relu = layers.Activation("relu")(batch_norm)
# Classifier
trunk_outputs = layers.GlobalAveragePooling2D()(relu)
outputs = layers.Dense(
num_classes, kernel_regularizer=regularizers.l2(WEIGHT_DECAY)
)(trunk_outputs)
return keras.Model(inputs, outputs)
Explanation: The authors introduce three improvements in the paper:
In AdaMatch, we perform two forward passes, and only one of them is responsible for
updating the Batch Normalization statistics. This is done to account for distribution
shifts in the target dataset. In the other forward pass, we only use the source sample,
and the Batch Normalization layers are run in inference mode. Logits for the source
samples (weakly and strongly augmented versions) from these two passes are slightly
different from one another because of how Batch Normalization layers are run. Final
logits for the source samples are computed by linearly interpolating between these two
different pairs of logits. This induces a form of consistency regularization. This step
is referred to as random logit interpolation.
Distribution alignment is used to align the source and target label distributions.
This further helps the underlying model learn domain-invariant representations. In case
of unsupervised domain adaptation, we don't have access to any labels of the target
dataset. This is why pseudo labels are generated from the underlying model.
The underlying model generates pseudo-labels for the target samples. It's likely that
the model would make faulty predictions. Those can propagate back as we make progress in
the training, and hurt the overall performance. To compensate for that, we filter the
high-confidence predictions based on a threshold (hence the use of mask inside
compute_loss_target()). In AdaMatch, this threshold is relatively adjusted which is why
it is called relative confidence thresholding.
For more details on these methods and to know how each of them contributes, please refer to
the paper.
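As a small standalone numeric illustration of the relative threshold (the values below are made up purely for demonstration and mirror the c_tau / mask computation inside train_step()):
import tensorflow as tf
tau = 0.9
y_hat_source_w = tf.constant([[0.7, 0.2, 0.1], [0.5, 0.3, 0.2]])          # softmax over weakly augmented source logits
y_tilde_target_w = tf.constant([[0.90, 0.05, 0.05], [0.40, 0.35, 0.25]])  # aligned target pseudo-label distributions
c_tau = tau * tf.reduce_mean(tf.reduce_max(y_hat_source_w, axis=-1))      # 0.9 * mean([0.7, 0.5]) = 0.54
mask = tf.reduce_max(y_tilde_target_w, axis=-1) >= c_tau                  # [True, False]
Only target samples whose aligned pseudo-label confidence clears this source-derived threshold contribute to the target loss.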
About compute_mu():
Rather than using a fixed scalar quantity, a varying scalar is used in AdaMatch. It
denotes the weight of the loss contributed by the target samples. Visually, the weight
scheduler looks like so:
This scheduler increases the weight of the target domain loss from 0 to 1 for the first
half of the training. Then it keeps that weight at 1 for the second half of the training.
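To make the shape of this warmup concrete, here is a tiny standalone sketch (an illustration only, not part of the training code) that evaluates the same cosine expression used in compute_mu():
import numpy as np
total_steps = 1000
steps = np.arange(total_steps, dtype="float32")
mu = 0.5 - np.cos(np.minimum(np.pi, (2 * np.pi * steps) / total_steps)) / 2
# mu[0] ~ 0.0, mu[250] ~ 0.5, mu[500] ~ 1.0, and it stays at 1.0 for the rest of training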
Instantiate a Wide-ResNet-28-2
The authors use a WideResNet-28-2 for the dataset
pairs we are using in this example. Most of the following code has been referred from
this script. Note
that the following model has a scaling layer inside it that scales the pixel values to
[0, 1].
End of explanation
wrn_model = get_network()
print(f"Model has {wrn_model.count_params()/1e6} Million parameters.")
Explanation: We can now instantiate a Wide ResNet model like so. Note that the purpose of using a
Wide ResNet here is to keep the implementation as close to the original one
as possible.
End of explanation
reduce_lr = keras.optimizers.schedules.CosineDecay(LEARNING_RATE, TOTAL_STEPS, 0.25)
optimizer = keras.optimizers.Adam(reduce_lr)
adamatch_trainer = AdaMatch(model=wrn_model, total_steps=TOTAL_STEPS)
adamatch_trainer.compile(optimizer=optimizer)
Explanation: Instantiate AdaMatch model and compile it
End of explanation
total_ds = tf.data.Dataset.zip((final_source_ds, final_target_ds))
adamatch_trainer.fit(total_ds, epochs=EPOCHS)
Explanation: Model training
End of explanation
# Compile the AdaMatch model to yield accuracy.
adamatch_trained_model = adamatch_trainer.model
adamatch_trained_model.compile(metrics=keras.metrics.SparseCategoricalAccuracy())
# Score on the target test set.
svhn_test = svhn_test.batch(TARGET_BATCH_SIZE).prefetch(AUTO)
_, accuracy = adamatch_trained_model.evaluate(svhn_test)
print(f"Accuracy on target test set: {accuracy * 100:.2f}%")
Explanation: Evaluation on the target and source test sets
End of explanation
# Utility function for preprocessing the source test set.
def prepare_test_ds_source(image, label):
image = tf.image.resize_with_pad(image, RESIZE_TO, RESIZE_TO)
image = tf.tile(image, [1, 1, 3])
return image, label
source_test_ds = tf.data.Dataset.from_tensor_slices((mnist_x_test, mnist_y_test))
source_test_ds = (
source_test_ds.map(prepare_test_ds_source, num_parallel_calls=AUTO)
.batch(TARGET_BATCH_SIZE)
.prefetch(AUTO)
)
# Evaluation on the source test set.
_, accuracy = adamatch_trained_model.evaluate(source_test_ds)
print(f"Accuracy on source test set: {accuracy * 100:.2f}%")
Explanation: With more training, this score improves. When this same network is trained with
standard classification objective, it yields an accuracy of 7.20% which is
significantly lower than what we got with AdaMatch. You can check out
this notebook
to learn more about the hyperparameters and other experimental details.
End of explanation |
9,835 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SQL Bootcamp
Sarah Beckett-Hile | NYU Stern School of Business | March 2015
Today's plan
SQL, the tool of business
Relational Databases
Why can't I do this in Excel?
Setting up this course
Basic Clauses
About SQL
SQL = "Structured Query Language" (pronounced "S-Q-L" or "sequel")
Database language of choice for most businesses
The software optimized for storing relational databases that you access with SQL varies. Relational Database Management Systems (RDBMS) include MySQL, Microsoft SQL Server, Oracle, and SQLite. We will be working with SQLite.
Relational Databases have multiple tables. Visualize it like an Excel file
Step1: TO GET STARTED, CLICK "CELL" IN THE MENU BAR ABOVE, THEN SELECT "RUN ALL"
Step2: Structure and Formatting Query Basics
Step3: These are the names of the tables in our mini SQLite database
Step4: Rewrite the query to look at the other tables
Step5: Different RDBMS have different datatypes available
Step6: Write a query to select all columns from the car_table
Step7: SELECT COLUMN
Step8: Write a query to select model_id and model from the car_table
Step9: One more quick note on the basics of SELECT - technically you can SELECT a value without using FROM to specify a table. You could just tell the query exactly what you want to see in the result-set. If it's a number, you can write the exact number. If you are using various characters, put them in quotes.
See the query below as an example
Step10: SELECT DISTINCT VALUES IN COLUMNS
Step11: Use DISTINCT to select unique values from the salesman_id column in sales_table. Delete DISTINCT and rerun to see the effect.
Step12: WHERE
SELECT
column_a
FROM
table_name
WHERE
column_a = x # filters the result-set to rows where column_a's value is exactly x
A few more options for the where clause
Step13: Rewrite the query to return rows where payment_type is NOT cash, and the model_id is either 31 or 36
- Extra
Step14: Using BETWEEN, rewrite the query to return rows where the revenue was between 24,000 and 25,000
Step15: WHERE column LIKE
Step16: Be careful with LIKE though - it can't deal with extra characters or misspellings
Step17: LIKE and % will also return too much if you're not specific enough. This returns both 'cash' and 'finance' because both have a 'c' with some letters before or after
Step18: You can use different wildcards besides % to get more specific. An underscore is a substitute for a single letter or character, rather than any number. The query below uses 3 underscores after c to get 'cash'
Step19: Say you can't remember the model of the car you're trying to look up. You know it's "out"...something. Outcast? Outstanding? Write a query to return the model_id and model from the car_table and use LIKE to help you search
Step20: ORDER BY
SELECT
column_a
FROM
table_name
WHERE # optional
column_a = x
ORDER BY # sorts the result-set by column_a
column_a DESC # DESC is optional. It sorts results in descending order (100->1) instead of ascending (1->100)
Without an ORDER BY clause, the default result-set will be ordered by however it appears in the database
By default, ORDER BY will sort values in ascending order (A→Z, 1→100). Add DESC to order results in desceding order instead (Z→A, 100→1)
More on ORDER BY
Step21: Rewrite the query above to look at the sticker_price of cars from the car_table in descending order
Step22: LIMIT
SELECT
column_a
FROM
table_name
WHERE
columna_a = x # optional
ORDER BY
column_a # optional
LIMIT # Limits the result-set to N rows
N
LIMIT just limits the number of rows in your result set
More on LIMIT
Step23: The query below limits the number of rows to 5 results. Change it to 10 to get a quick sense of what we're doing here
Step24: ALIASES
SELECT
T.column_a AS alias_a # creates a nickname for column_a, and states that it's from table_name (whose alias is T)
FROM
table_name AS T # creates a nickname for table_name
WHERE
alias_a = z # refer to an alias in the WHERE clause
ORDER BY
alias_a # refer to an alias in the ORDER BY clause
Aliases are optional, but save you time and make column headers cleaner
AS isn't necessary to create an alias, but it is commonly used
The convention is to use "AS" in the "SELECT" clause, but not in the "FROM" clause.
More on Aliases
Step25: You can use an alias in the ORDER BY and WHERE clauses now. Write a query to
Step26: You can also assign an alias to a table, and use the alias to tell SQL which table the column is coming from. This isn't of much use when you're only using one table, but it will come in handy when you start using multiple tables.
Below,the sales_table has the alias "S". Read "S.model_id" as "the model_id column from S, which is the sales_table"
Change the S to another letter in the FROM clause and run. Why did you hit an error? What can you do to fix it?
Step27: JOINS
SELECT
*
FROM
table_x
JOIN table_y # use JOIN to add the second table
ON table_x.column_a = table_y.column_a # use ON to specify which columns correspond on each table
Joining tables is the most fundamental and useful part about relational databases
Use columns on different tables with corresponding values to join the two tables
The format "table_x.column_a" can be read as "column_a from table_x"; it tells SQL the table where it can find that column
More on JOINS
Step28: Now the first few rows of the car_table
Step29: These tables are related. There's a column named "model_id" in the sales_table and a "model_id" in the car_table - but the column names don't need to be the same, what's important is that the values in the sales_table's model_id column correspond to the values in the car_table's model_id column.
You can join these tables by using these columns as keys.
Step30: Write a query to join the cust_table to the sales_table, using the customer_id columns in both tables as the key
Step31: Rewrite the query from above, but instead of selecting all columns, specify just the customer gender and the revenue
Step32: Rewrite the query from above, but this time select the customer_id, gender, and revenue
Step33: A column with the name customer_id appears in both the cust_table and the sales_table. SQL doesn't know which one you want to see. You have to tell it from which table you want the customer_id.
This can be important when columns in different tables have the same names but totally unrelated values.
Look at the sales_table again
Step34: Above, there's a column called "id".
Now look at the salesman_table again
Step35: There's a column named "id" in the salesman_table too. However, it doesn't look like those IDs correspond to the sales_table IDs. In fact, it's the salesman_id column in the sales_table that corresponds to the id column in the salesman_table. More often than not, your tables will use different names for corresponding columns, and will have columns with identical names that don't correspond at all.
Write a query to join the salesman_table with the sales_table (select all columns using an asterisk)
Step36: Practice applying this "table_x.column_a" format to all columns in the SELECT clause when you are joining multiple tables, since multiple tables frequenty use the same column names even when they don't correspond.
It's common to use single-letter aliases for tables to make queries shorter. Take a look at the query below and make sure you understand what's going on with the table aliases. It's the same query that you wrote earlier, but with aliases to help identify the columns
Step37: Join the sales_table (assign it the alias S) and salesman_table (alias SM) again.
- Select the id and salesman_id column from the sales_table
- Also, select the id column from the salesman_table
- Optional
Step38: Different Types of Joins
There are different types of joins you can do according to your needs. Here's a helpful way to visualize your options
Step39: So far, we've just done a simple join, also called an "inner join". To illustrate different types of joins, we're going to use a different "database" for the following lesson. First, let's take a look at each one
Step40: Notice that the Owner_Name columns on each table have some corresponding values (Michael, Gilbert, May and Elizabeth and Donna are in both tables), but they both also have values that don't overlap.
JOINS or INNER JOINS
SELECT
*
FROM
table_x X
JOIN table_y Y ON X.column_a = Y.column_a # Returns rows when values match on both tables.
This is what we used in the initial example. Simple joins, (also called Inner Joins) will combine tables only where there are corresponding values on both tables.
Write a query below to join the Cat_Table and Dog_Table using the same method we've used before
Step41: Notice that the result-set only includes the names that are in both tables. Think of inner joins as being the overlapping parts of a Venn Diagram. So, essentially we're looking at results only where the pet owner has both a cat and a dog.
LEFT JOINS or LEFT OUTER JOINS
SELECT
*
FROM
table_x X
LEFT JOIN table_y Y ON X.column_a = Y.column_a # Returns all rows from 1st table, rows that match from 2nd
LEFT JOINS will return all rows from the first table, but only rows from the second table if a value matches on the key column.
Rewrite your query from above, but instead of "JOIN", write "LEFT JOIN"
Step42: This time, you're seeing everything from the Dog_Table, but only results from the Cat_Table IF the owner also has a dog.
OUTER JOINS or FULL OUTER JOINS
Step43: Essentially, in Venn Diagram terms, and outer join lets you see all contents of both circles. This join will let you see all pet owners, regardless of whether the own only a cat or only a dog
Using the "WHERE" Clause to Join Tables
SELECT
*
FROM
table_x X
JOIN table y Y
WHERE
X.column_a = Y.column_a # tells SQL the key for the join
Some people prefer to use the WHERE clause to specify the key for a join
Fine if the query is short, but SUPER messy when the query is complex
We won't use this moving forward, but it's good to see it in case you run across someone else's code and you need to make sense of it
When it's simple, it's not so bad
Step44: When the query is longer, this method is messy. Suddenly it's harder to parse out which parts of the "WHERE" clause are actual filters, and which parts are just facilitating the join.
Note that we've covered all of these clauses and expressions by now, try to parse out what's going on
Step45: OPERATORS
ADDING / SUBSTRACTING / MULTIPLYING / DIVIDING
SELECT
column_a + column_b # adds the values in column_a to the values in columns_b
FROM
table_name
Use the standard formats for add, subtract, mutiply, and divide
Step46: Rewrite the query above to return gross margin instead of gross profit. Rename the alias as well. Limit it to 5 results
Step47: CONCATENATING
Step48: Here we'll use SQLite and use the concatenating operator || to combine words/values in different columns
Step49: Use || to pull the make and model from the car_table and make it appear in this format
Step50: FUNCTIONS
Step51: Rewrite the query to return the average cost of goods for a car in the car table. Try rounding it to cents.
- If you can't remember the name of the column for cost of goods in the car_table, remember you can use "SELECT * FROM car_table LIMIT 1" to see the first row of all columns, or you can use "PRAGMA TABLE_INFO(car_table)"
Step52: Using COUNT(*) will return the number of rows in any given table. Rewrite the query to return the number of rows in the car_table
Step53: You can apply functions on top of other operators. Below is the sum of gross profits
Step54: Write a query to show the average difference between the sticker_price (in car_table) and the revenue.
If you want a challenge, try to join cust_table and limit the query to only look at transactions where the customer's age is over 35
Step55: GROUP_CONCAT
SELECT
GROUP_CONCAT(column_a, '[some character separating items]')
FROM
table_x
This function is useful to return comma-separated lists of the values in a column
Step56: Use GROUP_CONCAT to return a comma-separated list of last names from the salesman_table
Step57: GROUP BY
Step58: Rewrite the query above to return the average gross profit (revenue - cogs) per make (remember that "make" is in the car_table)
Extra things to try
Step59: Write a query to make a comma-separated list of models for each car maker
Step60: GROUP BY, when used with joins and functions, can help you quickly see trends in your data. Parse out what's going on here
Step61: You can also use GROUP BY with multiple columns to segment out the results further
Step62: Rewrite the query to find the total revenue grouped by each salesperson's first_name and by the customer's gender (gender column in cust_table)
- For an extra challenge, use the concatenating operator to use the salesperson's full name instead
- Add COUNT(S.id) to the SELECT clause to see the number of transactions in each group
Step63: "HAVING" in GROUP BY statements
Step64: Rewrite the query above to look at average revenue per model, and using HAVING to filter your result-set to only include models whose average revenue is less than 18,000
Step65: HAVING vs WHERE
Step66: All model_ids are returned, but the averages are all much lower than they should be. That's because the query first drops all rows that have revenue greater than 18000, and then averages the remaining rows.
When you use HAVING, SQL follows these steps instead (this query should look like the one you wrote in the last challenge)
Step67: HAVING & WHERE in the same query
Step68: Write a query with the following criteria
Step69: ROLLUP
SELECT
column_a,
SUM(column_b)
FROM
table_x
GROUP BY
ROLLUP(column_a) # adds up all groups' values in a single final row
Rollup, used with GROUP BY, provides subtotals and totals for your groups
Useful for quick analysis
Varies by RDBMS
Step70: Because SQLite doesn't support ROLLUP, the query below is just intended to illustrate how ROLLUP would work. Don't worry about understanding the query itself, just get familiar with what's going on in the result-set
Step71: Conditional Expressions
Step72: Starting with a simple example, here we'll use CASE WHEN to create a new column on the sales_table
Step73: CASE WHEN gives you the value "Revenue is more MORE 20,000" when revenue in that same row is greater than 20,000. Otherwise, it has no value.
Now let's add a level
Step74: Now to deal with the blank spaces. You can assign an "ELSE" value to catch anything that's not included in the prior expressions
Step75: You can use values from another column as well. Remember this query from the GROUP BY lesson? It's often helpful to look at information broken out by multiple groups, but it's not especially easy to digest
Step76: Look at what's going on in that query without the AVG( ) function and the GROUP BY clause
Step77: The result-set above is essentially what SQL is working with right before it separates the rows into groups and averages the revenue within those groups.
Now, we're going to use some CASE WHEN statements to change this a little
Step78: Now let's add back the ROUND() and AVG() functions and the GROUP BY statement
Step79: CASE WHEN makes this same information a lot easier to read by letting you pivot the result set a little.
Write a query using CASE WHEN to look at total revenue per gender, grouped by each car model
Step80: CASE WHEN also lets you create new groups. Start by looking at the cust_table grouped by age - remember that COUNT(***) tells you how many rows are in each group (which is the same as telling you the number of customers in each group)
Step81: When you want to segment your results, but there are too many different values for GROUP BY to be helpful, use CASE WHEN to make your own groups. GROUP BY the column you created with CASE WHEN to look at your newly created segments.
Step82: Ta-DA! Useful customer segments!
Try to break up the "Customers" column into 2 columns - one for male and one for female. Keep the age segments intact.
- Note that COUNT(*) cannot be wrapped around a CASE WHEN expression the way that other functions can. Try to think of a different way to get a count.
- Extra challenge
Step83: NESTING
Nested queries allow you to put a query within a query
Depending on your needs, you might put a nested query in the SELECT clause, the FROM clause, or the WHERE clause
Consider the following query. We're using a nested query in the SELECT clause to see the sum of all revenue in the sales_table, and then using it again to what percentage of total revenue can be attributed to each Car_Model.
Step84: Write a query to look at the model name and COGs for each car in car_table, then use a nested query to also look at the average COGs off all car models in a third column
- Extra Challenge
Step85: UNION & UNION ALL
SELECT
column_a
FROM
table_x
UNION # or UNION ALL
SELECT
column_b
FROM
table_y
UNION allows you to run a 2nd query (or 3rd or 4th), the results will be ordered by default with the results of the first query
UNION ALL ensures that the results in the result set appear in order that the queries are written
The number of columns in each query must be the same in order for UNION & UNION ALL to work
Starting with something simple (and a little nonsensical), UNION basically lets you run two entirely separate queries. Technically, they could have nothing to do with each other
Step86: Some things to note
Step87: Consider the issue we had before, where SQLite didn't support WITH ROUNDUP. We used this query as a workaround. Does it make sense now?
Step88: Optimization
Step89: DON'T use an asterisk unless you absolutely have to
Step90: DO use LIKE on small tables and in simple queries
Step91: DON'T use LIKE on large tables or when using JOINs
Step92: If you want to look at average revenue for car models that are like "%undra", run the LIKE query on the small table (car_table) first to figure out exacly what you're looking for, then use that information to search for the data you need from the sales_table
DO dip your toe in by starting with a small data set
Use WHERE to only view a few days of data at first. If the query runs quickly, add a few days at a time. If it starts to run slowly, run just a few days at a time and paste results into excel to combine results (or use Python...ask me later!!!).
The query below won't work because SQLite doesn't recognize dates, but remember these concepts when working with other RDBMS
Step93: DO use a UNION to look at result-sets that aren't mutually exclusive
Let's say you were interested in seeing all Toyotas as well as cars with COGs of more than 13000. Write a query for the first group, then a query for the second group, and unite them with UNION. The result set won't show you repeats - if a row matches both result sets, it will only display once.
Step94: DON'T use OR when a UNION will generate the same results
Note that we'll get the same results as above, but this query could run MUCH slower on a large table. It's tempting to use OR because it's faster to write, but unless you're dealing with very small tables, avoid the temptation. In 5 years of doing business analytics with SQL, I never used OR once. It's slow. Use a UNION.
Step95: DON'T use negative filters when a positive filter is possible
Let's say you want to look at cars made by Toyota and Honda, but you don't care about Subaru. It might be tempting to use a negative filter
Step96: On a big table, this will run much more slowly than if you use a positive filter. Try this instead - it might require a little extra typing, but it will run much faster
Step97: Wrapping Up
Step98: Add on the average amount of revenue made per sale
Step99: Make it easier to compare the average revenue of Jared's sales to the average revenue of per sale overall by adding a column to see by what percent each salesperson's sales are more or less than average
Step100: So maybe Jared is just selling cheaper cars.
Let's go further and compare the sale price of each car against the sticker price to see how low Jared was willing to negotiate with customers. Sticker price is in another table, but again, that's no problem with SQL
Step101: Looks like Jared is letting customers negotiate prices down much more than his peers.
But is this a real problem? How much is each salesperson contributing to our gross profits?
Step102: SQL really lets you dig.
Some other quick examples - we could do a gender breakdown of customers per car model and add a total at the bottom
Step103: Easily create age groups and see how aggressively each group negotiates (judged by the difference between the actual sale amount and the sticker price) | Python Code:
# check to see if support code is there
import os
print('List of files in working directory:')
[print(file) for file in os.listdir()]
file = 'SQL_support_code.py'
if not os.path.isfile(file):
raise Exception('***** Program halted, file missing *****')
Explanation: SQL Bootcamp
Sarah Beckett-Hile | NYU Stern School of Business | March 2015
Today's plan
SQL, the tool of business
Relational Databases
Why can't I do this in Excel?
Setting up this course
Basic Clauses
About SQL
SQL = "Structured Query Language" (pronounced "S-Q-L" or "sequel")
Database language of choice for most businesses
The software optimized for storing relational databases that you access with SQL varies. Relational Database Management Systems (RDBMS) include MySQL, Microsoft SQL Server, Oracle, and SQLite. We will be working with SQLite.
Relational Databases have multiple tables. Visualize it like an Excel file:
Database = a single Excel file/workbook
Table = a single worksheet in the same Excel file
SQL lets you perform four basic functions: C.R.U.D. = Create, Read, Update, Delete
"Read" is all you'll need for business analytics
Additional reading: http://www.w3schools.com/sql/sql_intro.asp
Find examples of queries for business analysis at the bottom of this lesson page
About this file
We'll use SQL in Python, specifically an IPython Notebook
No need to know what that means, but be sure you have SQL_support_code.py saved in the same folder as this file.
Download if you haven't already: https://www.dropbox.com/s/dacxdvkk11tyr4n/SQL_support_code.py?dl=0
All SQL queries are in red
If you get stumped on a challenge, there are cheats at the bottom of a challenge cell. You'll see something like "#print(cheat1)". Delete the hash and run the cell (SHIFT-RETURN). Once you've figured it out, replace the hash, and try again.
End of explanation
from SQL_support_code import *
Explanation: TO GET STARTED, CLICK "CELL" IN THE MENU BAR ABOVE, THEN SELECT "RUN ALL"
End of explanation
describe_differences
Explanation: Structure and Formatting Query Basics:
Indentations and Returns:
Mostly arbitrary in SQL
Usually for readability
Capitalization:
Convention to put keywords (functions, clauses) in CAPS
Consistency is best
Order of Clauses:
Very strict
Not all clauses need to be present in a query, but when they are present, then they must be in the correct order
Below are the major clauses that we are going to cover. Use this list as reference if you are getting errors with your queries - there's a chance you just have the clauses in the wrong order:
SELECT
FROM
JOIN...ON
WHERE
GROUP BY
UNION
ORDER BY
LIMIT
Reading a table's structure:
PRAGMA TABLE_INFO(table_name)
Running this will let you see the column heads and data types of any table.
The SQL query above only works for SQLite, which is what we're using here. If you're interested in knowing the equivalent versions for other RDBMS options, see the table below.
End of explanation
run('''
PRAGMA TABLE_INFO(sales_table)
''')
Explanation: These are the names of the tables in our mini SQLite database:
sales_table
car_table
salesman_table
cust_table
Start by looking at the columns and their data types in the sales_table.
End of explanation
run('''
PRAGMA TABLE_INFO(sales_table)
''')
#print(describe_cheat)
Explanation: Rewrite the query to look at the other tables:
End of explanation
run('''
SELECT
*
FROM
sales_table
''')
Explanation: Different RDBMS have different datatypes available:
- Oracle: http://docs.oracle.com/cd/B10501_01/appdev.920/a96624/03_types.htm
- MySQL:
- Numeric: http://dev.mysql.com/doc/refman/5.0/en/numeric-type-overview.html
- Date/time: http://dev.mysql.com/doc/refman/5.0/en/date-and-time-type-overview.html
- String/text: http://dev.mysql.com/doc/refman/5.0/en/string-type-overview.html
- SQLite: https://www.sqlite.org/datatype3.html
- Microsoft: http://msdn.microsoft.com/en-us/library/ms187752.aspx
SELECT & FROM:
Basically every "read" query will contain a SELECT and FROM clause
In the SELECT clause, you tell SQL which columns you want to see
In the FROM clause, you tell SQL the table where those columns are located
More on SELECT: http://www.w3schools.com/sql/sql_select.asp
SELECT * (ALL COLUMNS)
SELECT # specifies which columns you want to see
* # asterisk returns all columns
FROM # specifies the table or tables where these columns can be found
table_name
Use an asterisk to tell SQL to return all columns from the table:
End of explanation
run('''
SELECT NULL
''')
#print(select_cheat1)
Explanation: Write a query to select all columns from the car_table:
End of explanation
run('''
SELECT
model_id,
revenue
FROM
sales_table
''')
Explanation: SELECT COLUMN:
SELECT
column_a, # comma-separate multiple columns
column_b
FROM
table_name
Instead of using an asterisk for "all columns", you can specify a particular column or columns:
End of explanation
run('''
SELECT NULL
''')
#print(select_cheat2)
Explanation: Write a query to select model_id and model from the car_table:
End of explanation
run('''
SELECT
4,
5,
7,
'various characters or text'
''')
Explanation: One more quick note on the basics of SELECT - technically you can SELECT a value without using FROM to specify a table. You could just tell the query exactly what you want to see in the result-set. If it's a number, you can write the exact number. If you are using various characters, put them in quotes.
See the query below as an example:
End of explanation
run('''
SELECT
DISTINCT model_id
FROM
sales_table
''')
Explanation: SELECT DISTINCT VALUES IN COLUMNS:
SELECT
DISTINCT column_a # returns a list of each unique value in column_a
FROM
table_name
Use DISTINCT to return unique values from a column
More on DISTINCT: http://www.w3schools.com/sql/sql_distinct.asp
The query below pulls each distinct value from the model_id column in the sales_table, so each value is only listed one time:
End of explanation
run('''
SELECT NULL
''')
#print(select_cheat3)
Explanation: Use DISTINCT to select unique values from the salesman_id column in sales_table. Delete DISTINCT and rerun to see the effect.
End of explanation
run('''
SELECT
*
FROM
sales_table
WHERE
payment_type = 'cash'
AND model_id = 46
''')
Explanation: WHERE
SELECT
column_a
FROM
table_name
WHERE
column_a = x # filters the result-set to rows where column_a's value is exactly x
A few more options for the where clause:
WHERE column_a = 'some_text' # put text in quotations. CAPITALIZATION IS IMPORTANT
WHERE column_a != x # filters the result-set to rows where column_a's value DOES NOT EQUAL x
WHERE column_a < x # filters the result-set to rows where column_a's value is less than x
WHERE columna_a <= x # filters the result-set to rows where column_a's value is less than or equal to x
WHERE column_a IN (x, y) # column_a's value can be EITHER x OR y
WHERE column_a NOT IN (x, y) # column_a's value can be NEITHER x NOR y
WHERE column_a BETWEEN x AND y # BETWEEN lets you specify a range
WHERE column_a = x AND column_b = y # AND lets you add more filters
WHERE column_a = x OR column_b = y # OR will include results that fulfill either criteria
WHERE (column_a = x AND column_b = y) OR (column_c = z) # use parentheses to create complex AND/OR statements
WHERE allows you to filter the result-set to only include rows matching specific values/criteria. If the value/criteria is text, remember to put it in single or double quotation marks
More on WHERE: http://www.w3schools.com/sql/sql_where.asp
Below, WHERE filters out any rows that don't match the criteria. The result-set will only contain rows where the payment type is cash AND where the model_id is 46:
End of explanation
run('''
SELECT NULL
''')
#print(where_cheat1)
Explanation: Rewrite the query to return rows where payment_type is NOT cash, and the model_id is either 31 or 36
- Extra: Try changing 'cash' to 'Cash' to see what happens.
End of explanation
run('''
SELECT NULL
''')
#print(where_cheat2)
Explanation: Using BETWEEN, rewrite the query to return rows where the revenue was between 24,000 and 25,000:
End of explanation
run('''
SELECT
*
FROM
sales_table
WHERE
payment_type LIKE 'Cas%'
''').head()
Explanation: WHERE column LIKE:
SELECT
column_a
FROM
table_name
WHERE
column_a LIKE '%text or number%' # Filters the result_set to rows where that text or value can be found, with % standing in as a wildcard
LIKE lets you avoid issues with capitalization in quotes, and you can use % as a wildcard to stand in for any character
Useful if you have an idea of what text you're looking for, but you are not sure of the spelling or you want all results that contatin those letters
More on LIKE: http://www.w3schools.com/sql/sql_like.asp
More on wildcards: http://www.w3schools.com/sql/sql_wildcards.asp
Note that you don't have to use the whole word "cash" when you use LIKE, and that the capital "C" now doesn't cause a problem:
End of explanation
run('''
SELECT
*
FROM
sales_table
WHERE
payment_type LIKE 'ces%'
LIMIT 5
''')
Explanation: Be careful with LIKE though - it can't deal with extra characters or misspellings:
End of explanation
run('''
SELECT
*
FROM
sales_table
WHERE
payment_type LIKE '%c%'
LIMIT 5
''')
Explanation: LIKE and % will also return too much if you're not specific enough. This returns both 'cash' and 'finance' because both have a 'c' with some letters before or after:
End of explanation
run('''
SELECT
*
FROM
sales_table
WHERE
payment_type LIKE 'c___'
LIMIT 5
''')
Explanation: You can use different wildcards besides % to get more specific. An underscore is a substitute for a single letter or character, rather than any number. The query below uses 3 underscores after c to get 'cash':
End of explanation
run('''
SELECT NULL
''')
#print(where_cheat3)
Explanation: Say you can't remember the model of the car you're trying to look up. You know it's "out"...something. Outcast? Outstanding? Write a query to return the model_id and model from the car_table and use LIKE to help you search:
End of explanation
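One way to search (a sketch — any wildcard pattern that matches the start of the model name will do):
run('''
SELECT
model_id,
model
FROM
car_table
WHERE
model LIKE 'out%'
''')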
run('''
SELECT
*
FROM
sales_table
ORDER BY
revenue DESC
LIMIT 5
''')
Explanation: ORDER BY
SELECT
column_a
FROM
table_name
WHERE # optional
column_a = x
ORDER BY # sorts the result-set by column_a
column_a DESC # DESC is optional. It sorts results in descending order (100->1) instead of ascending (1->100)
Without an ORDER BY clause, the default result-set will be ordered by however it appears in the database
By default, ORDER BY will sort values in ascending order (A→Z, 1→100). Add DESC to order results in descending order instead (Z→A, 100→1)
More on ORDER BY: http://www.w3schools.com/sql/sql_orderby.asp
The query below orders the result-set by revenue amount; because of DESC, the largest amount is listed first:
End of explanation
run('''
SELECT NULL
''')
#print(order_cheat)
Explanation: Rewrite the query above to look at the sticker_price of cars from the car_table in descending order:
End of explanation
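A sketch of one possible answer:
run('''
SELECT
*
FROM
car_table
ORDER BY
sticker_price DESC
''')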
limit_differences
Explanation: LIMIT
SELECT
column_a
FROM
table_name
WHERE
columna_a = x # optional
ORDER BY
column_a # optional
LIMIT # Limits the result-set to N rows
N
LIMIT just limits the number of rows in your result set
More on LIMIT: http://www.w3schools.com/sql/sql_top.asp
The ability to limit results varies by RDBMS. Below you can see the different ways to do this:
End of explanation
run('''
SELECT
*
FROM
sales_table
LIMIT 5
''')
Explanation: The query below limits the number of rows to 5 results. Change it to 10 to get a quick sense of what we're doing here:
End of explanation
run('''
SELECT
model_id AS Model_of_car,
revenue AS Rev_per_car
FROM
sales_table
''')
Explanation: ALIASES
SELECT
T.column_a AS alias_a # creates a nickname for column_a, and states that it's from table_name (whose alias is T)
FROM
table_name AS T # creates a nickname for table_name
WHERE
alias_a = z # refer to an alias in the WHERE clause
ORDER BY
alias_a # refer to an alias in the ORDER BY clause
Aliases are optional, but save you time and make column headers cleaner
AS isn't necessary to create an alias, but it is commonly used
The convention is to use "AS" in the "SELECT" clause, but not in the "FROM" clause.
More on Aliases: http://www.w3schools.com/sql/sql_alias.asp
Change the aliases for model_id and revenue, or add extra columns to see how they work:
End of explanation
run('''
SELECT NULL
''')
#print(alias_cheat)
Explanation: You can use an alias in the ORDER BY and WHERE clauses now. Write a query to:
- pull the model_id and revenue for each transaction
- give model_id the alias "Model"
- give revenue the alias "Rev"
- limit the results to only include rows where the model_id id 36, use the alias in the WHERE clause
- order the results by revenue in descending order, use the alias in the ORDER BY clause
- Run the query
THEN:
- Try giving model_id the alias "ID" and use it in the WHERE clause, then rerun the query. What do you think is causing the error?
End of explanation
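A sketch of one possible answer. It leans on SQLite resolving output-column aliases in the WHERE clause, as described above; if your engine rejects that, use the original column name there instead:
run('''
SELECT
model_id AS Model,
revenue AS Rev
FROM
sales_table
WHERE
Model = 36
ORDER BY
Rev DESC
''')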
run('''
SELECT
S.model_id,
S.revenue
FROM
sales_table AS S
LIMIT 5
''')
Explanation: You can also assign an alias to a table, and use the alias to tell SQL which table the column is coming from. This isn't of much use when you're only using one table, but it will come in handy when you start using multiple tables.
Below, the sales_table has the alias "S". Read "S.model_id" as "the model_id column from S, which is the sales_table"
Change the S to another letter in the FROM clause and run. Why did you hit an error? What can you do to fix it?
End of explanation
run('''
SELECT
*
FROM
sales_table
LIMIT 5
''')
Explanation: JOINS
SELECT
*
FROM
table_x
JOIN table_y # use JOIN to add the second table
ON table_x.column_a = table_y.column_a # use ON to specify which columns correspond on each table
Joining tables is the most fundamental and useful part about relational databases
Use columns on different tables with corresponding values to join the two tables
The format "table_x.column_a" can be read as "column_a from table_x"; it tells SQL the table where it can find that column
More on JOINS: http://www.w3schools.com/sql/sql_join.asp
Start by looking at the first few rows of sales_table again:
End of explanation
run('''
SELECT
*
FROM
car_table
LIMIT 5
''')
Explanation: Now the first few rows of the car_table:
End of explanation
run('''
SELECT
*
FROM
sales_table
JOIN car_table ON sales_table.model_id = car_table.model_id
LIMIT 10
''')
Explanation: These tables are related. There's a column named "model_id" in the sales_table and a "model_id" in the car_table - but the column names don't need to be the same, what's important is that the values in the sales_table's model_id column correspond to the values in the car_table's model_id column.
You can join these tables by using these columns as keys.
End of explanation
run('''
SELECT NULL
''')
#print(join_cheat1)
Explanation: Write a query to join the cust_table to the sales_table, using the customer_id columns in both tables as the key:
End of explanation
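One possible answer (a sketch — join_cheat1 may differ slightly):
run('''
SELECT
*
FROM
sales_table
JOIN cust_table ON sales_table.customer_id = cust_table.customer_id
LIMIT 10
''')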
run('''
SELECT NULL
''')
#print(join_cheat2)
Explanation: Rewrite the query from above, but instead of selecting all columns, specify just the customer gender and the revenue:
End of explanation
run('''
SELECT NULL
''')
#print(join_cheat3)
Explanation: Rewrite the query from above, but this time select the customer_id, gender, and revenue:
- You'll probably hit an error at first. Try to use what you've learned about this structure "table_x.column_a" to fix the issue. Why do you think you need to use this?
End of explanation
run('''
SELECT
*
FROM
sales_table
LIMIT 5
''')
Explanation: A column with the name customer_id appears in both the cust_table and the sales_table. SQL doesn't know which one you want to see. You have to tell it from which table you want the customer_id.
This can be important when columns in different tables have the same names but totally unrelated values.
Look at the sales_table again:
End of explanation
run('''
SELECT
*
FROM
salesman_table
LIMIT 5
''')
Explanation: Above, there's a column called "id".
Now look at the salesman_table again:
End of explanation
run('''
SELECT NULL
''')
#print(join_cheat4)
Explanation: There's a column named "id" in the salesman_table too. However, it doesn't look like those IDs correspond to the sales_table IDs. In fact, it's the salesman_id column in the sales_table that corresponds to the id column in the salesman_table. More often than not, your tables will use different names for corresponding columns, and will have columns with identical names that don't correspond at all.
Write a query to join the salesman_table with the sales_table (select all columns using an asterisk)
End of explanation
run('''
SELECT
S.customer_id,
C.gender,
S.revenue
FROM
sales_table AS S
JOIN cust_table AS C on S.customer_id = C.customer_id
''')
Explanation: Practice applying this "table_x.column_a" format to all columns in the SELECT clause when you are joining multiple tables, since multiple tables frequently use the same column names even when they don't correspond.
It's common to use single-letter aliases for tables to make queries shorter. Take a look at the query below and make sure you understand what's going on with the table aliases. It's the same query that you wrote earlier, but with aliases to help identify the columns
End of explanation
run('''
SELECT NULL
''')
#print(join_cheat5)
Explanation: Join the sales_table (assign it the alias S) and salesman_table (alias SM) again.
- Select the id and salesman_id column from the sales_table
- Also, select the id column from the salesman_table
- Optional: assign aliases to the columns in the SELECT clause to make the result-set easier to read
End of explanation
join_differences
Explanation: Different Types of Joins
There are different types of joins you can do according to your needs. Here's a helpful way to visualize your options: http://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins
However, not all types of joins are compatible with SQLite and MySQL. The table below breaks down compatibility:
End of explanation
run('''
SELECT
*
FROM
Dog_Table
''')
run('''
SELECT
*
FROM
Cat_Table
''')
Explanation: So far, we've just done a simple join, also called an "inner join". To illustrate different types of joins, we're going to use a different "database" for the following lesson. First, let's take a look at each one:
End of explanation
run('''
SELECT NULL
''')
#print(inner_join_cheat)
Explanation: Notice that the Owner_Name columns on each table have some corresponding values (Michael, Gilbert, May and Elizabeth and Donna are in both tables), but they both also have values that don't overlap.
JOINS or INNER JOINS
SELECT
*
FROM
table_x X
JOIN table_y Y ON X.column_a = Y.column_a # Returns rows when values match on both tables.
This is what we used in the initial example. Simple joins (also called Inner Joins) will combine tables only where there are corresponding values on both tables.
Write a query below to join the Cat_Table and Dog_Table using the same method we've used before:
End of explanation
run('''
SELECT NULL
''')
#print(left_join_cheat)
Explanation: Notice that the result-set only includes the names that are in both tables. Think of inner joins as being the overlapping parts of a Venn Diagram. So, essentially we're looking at results only where the pet owner has both a cat and a dog.
LEFT JOINS or LEFT OUTER JOINS
SELECT
*
FROM
table_x X
LEFT JOIN table_y Y ON X.column_a = Y.column_a # Returns all rows from 1st table, rows that match from 2nd
LEFT JOINS will return all rows from the first table, but only rows from the second table if a value matches on the key column.
Rewrite your query from above, but instead of "JOIN", write "LEFT JOIN":
End of explanation
run('''
SELECT
C.Owner_Name,
Cat_Name,
Dog_Name
FROM
Cat_Table C
LEFT JOIN Dog_Table D ON D.Owner_Name = C.Owner_Name
UNION ALL
SELECT
D.Owner_Name,
' ',
Dog_Name
FROM
Dog_Table D
WHERE
Owner_Name NOT IN (SELECT Owner_Name from Cat_Table)
''')
Explanation: This time, you're seeing everything from the Dog_Table, but only results from the Cat_Table IF the owner also has a dog.
OUTER JOINS or FULL OUTER JOINS:
SELECT
*
FROM
table_x X
OUTER JOIN table_y Y ON X.column_a = Y.column_a # Returns all rows, regardless of whether values match
Outer joins include ALL rows from both tables, even if the values on the key columns don't match up.
SQLite doesn't support this, so the query below is a workaround to show you the visual effect of an outer join
This provides a great workaround for MySQL: http://stackoverflow.com/questions/4796872/full-outer-join-in-mysql
For now, this query won't totally make sense, just pay attention to the results so you can visualize an outer join:
End of explanation
run('''
SELECT
C.model,
S.revenue
FROM
sales_table S, car_table C
WHERE
S.model_id = C.model_id
LIMIT 5
''')
Explanation: Essentially, in Venn Diagram terms, an outer join lets you see all contents of both circles. This join will let you see all pet owners, regardless of whether they own only a cat or only a dog
Using the "WHERE" Clause to Join Tables
SELECT
*
FROM
table_x X
JOIN table y Y
WHERE
X.column_a = Y.column_a # tells SQL the key for the join
Some people prefer to use the WHERE clause to specify the key for a join
Fine if the query is short, but SUPER messy when the query is complex
We won't use this moving forward, but it's good to see it in case you run across someone else's code and you need to make sense of it
When it's simple, it's not so bad:
End of explanation
run('''
SELECT
C.make,
C.model,
S.revenue,
CUST.gender,
SM.first_name
FROM
sales_table S
JOIN car_table C
JOIN salesman_table SM
JOIN cust_table CUST
WHERE
S.customer_id = CUST.customer_id
AND S.model_id = C.model_id
AND S.salesman_id = SM.id
AND (C.model in ('Tundra', 'Camry', 'Corolla') OR C.make = 'Subaru')
AND S.revenue between 17000 and 22000
AND CUST.gender = 'female'
AND SM.first_name NOT IN ('Kathleen', 'Samantha')
LIMIT 5
''')
Explanation: When the query is longer, this method is messy. Suddenly it's harder to parse out which parts of the "WHERE" clause are actual filters, and which parts are just facilitating the join.
Note that we've covered all of these clauses and expressions by now, try to parse out what's going on:
End of explanation
run('''
SELECT
S.id,
C.model,
S.revenue,
C.cogs,
S.revenue - C.cogs AS gross_profit
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
LIMIT 5
''')
Explanation: OPERATORS
ADDING / SUBSTRACTING / MULTIPLYING / DIVIDING
SELECT
column_a + column_b # adds the values in column_a to the values in columns_b
FROM
table_name
Use the standard formats for add, subtract, mutiply, and divide: + - * /
The query below subtracts cogs (from the car_table) from revenue (from the sales_table) to show us the gross_profit per transaction
End of explanation
run('''
SELECT NULL
''')
#print(operator_cheat)
Explanation: Rewrite the query above to return gross margin instead of gross profit. Rename the alias as well. Limit it to 5 results
End of explanation
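A sketch of one possible answer. Gross margin here is gross profit divided by revenue; if revenue were stored as an integer you would multiply by 1.0 first to avoid integer division:
run('''
SELECT
S.id,
C.model,
(S.revenue - C.cogs) / S.revenue AS gross_margin
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
LIMIT 5
''')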
concat_differences
Explanation: CONCATENATING:
Concatenating varies by RDBMS:
End of explanation
run('''
SELECT
last_name,
first_name,
last_name || ', ' || first_name AS full_name
FROM
salesman_table
''')
Explanation: Here we'll use SQLite and use the concatenating operator || to combine words/values in different columns:
End of explanation
run('''
SELECT NULL
''')
#print(concat_cheat)
Explanation: Use || to pull the make and model from the car_table and make it appear in this format: "Model (Make)"
- give it an alias to clean up the column header, otherwise it'll look pretty messy
End of explanation
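One possible answer (a sketch):
run('''
SELECT
model || ' (' || make || ')' AS model_and_make
FROM
car_table
''')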
run('''
SELECT
SUM(revenue) AS Total_Revenue
FROM
sales_table
''')
Explanation: FUNCTIONS:
SELECT
SUM(column_a), # sums up the values in column_a
AVG(column_a), # averages the values in column_a
ROUND(AVG(column_a), 2), # rounds the averaged values in column_a to 2 digits
COUNT(column_a), # counts the number of rows in column_a
MAX(column_a), # returns the maximum value in column_a
MIN(column_a), # returns the minimum value in column_a
GROUP_CONCAT(column_a) # returns a comma separated list of all values in column_a
FROM
table_name
Functions can be applied to columns to help analyze data
You can find more than just these basic few in the link below, or just Google what you're looking to do - there's a lot of help available on forums
More on functions: http://www.w3schools.com/sql/sql_functions.asp
The function below will sum up everything in the revenue column. Note that now we only get one row:
End of explanation
run('''
SELECT NULL
''')
#print(avg_cheat)
Explanation: Rewrite the query to return the average cost of goods for a car in the car table. Try rounding it to cents.
- If you can't remember the name of the column for cost of goods in the car_table, remember you can use "SELECT * FROM car_table LIMIT 1" to see the first row of all columns, or you can use "PRAGMA TABLE_INFO(car_table)"
End of explanation
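A sketch of one possible answer:
run('''
SELECT
ROUND(AVG(cogs), 2) AS avg_cogs
FROM
car_table
''')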
run('''
SELECT NULL
''')
#print(count_cheat)
Explanation: Using COUNT(*) will return the number of rows in any given table. Rewrite the query to return the number of rows in the car_table:
- After you've run the query, try changing it by adding "WHERE make = 'Subaru'" and see what happens
End of explanation
run('''
SELECT
'$ ' || SUM(S.revenue - C.cogs) total_gross_profit
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
''')
Explanation: You can apply functions on top of other operators. Below is the sum of gross profits:
End of explanation
run('''
SELECT NULL
''')
#print(avg_cheat2)
Explanation: Write a query to show the average difference between the sticker_price (in car_table) and the revenue.
If you want a challenge, try to join cust_table and limit the query to only look at transactions where the customer's age is over 35
End of explanation
run('''
SELECT
GROUP_CONCAT(model, ', ') as Car_Models
FROM
car_table
''')
Explanation: GROUP_CONCAT
SELECT
GROUP_CONCAT(column_a, '[some character separating items]')
FROM
table_x
This function is useful to return comma-separated lists of the values in a column
End of explanation
run('''
SELECT NULL
''')
#print(concat_cheat)
Explanation: Use GROUP_CONCAT to return a comma-separated list of last names from the salesman_table:
End of explanation
run('''
SELECT
C.model AS Car_Model,
SUM(revenue) AS Total_Revenue
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
GROUP BY
Car_Model
''')
Explanation: GROUP BY:
SELECT
column_a,
SUM(column_b) # sums up the values in column_b
FROM
table_name
GROUP BY # creates one group for each unique value in column_a
column_a
Creates a group for each unique value in the column you specify
Extremely helpful when you're using functions - it segments out results
More on GROUP BY: http://www.w3schools.com/sql/sql_groupby.asp
The query below creates a group for each unique value in the car_table's model column, then sums up the revenue for each group. Note that you can use an alias in the GROUP BY clause.
End of explanation
run('''
SELECT NULL
''')
#print(group_cheat)
Explanation: Rewrite the query above to return the average gross profit (revenue - cogs) per make (remember that "make" is in the car_table)
Extra things to try:
- Round average revenue to two decimal points
- Order the results by gross profit in descending order
- Rename the make column as "Car_Maker" and use the alias in the GROUP BY clause
- Rename gross profit column as "Avg_Gross_Profit" and use the alias in the ORDER BY clause
- Join the salesman_table and filter results to only look at revenue where first_name is Michael
- After you've gotten the query to run with all of these adjustments, think about the risks involved with adding something in the WHERE clause that doesn't show up in the SELECT clause. Think about a potential solution to these risks.
End of explanation
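A sketch covering the main parts of the challenge (the optional salesman filter is left out here):
run('''
SELECT
C.make AS Car_Maker,
ROUND(AVG(S.revenue - C.cogs), 2) AS Avg_Gross_Profit
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
GROUP BY
Car_Maker
ORDER BY
Avg_Gross_Profit DESC
''')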
run('''
SELECT NULL
''')
#print(group_cheat1)
Explanation: Write a query to make a comma-separated list of models for each car maker:
End of explanation
run('''
SELECT
C.model AS Car_Model,
MIN(S.revenue) || ' - ' || MAX(S.revenue) AS Min_to_Max_Sale,
MAX(S.revenue) - MIN(S.revenue) AS Range,
ROUND(AVG(S.revenue), 2) AS Average_Sale
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
GROUP BY
Car_Model
ORDER BY
Average_Sale DESC
''')
Explanation: GROUP BY, when used with joins and functions, can help you quickly see trends in your data. Parse out what's going on here:
End of explanation
run('''
SELECT
C.make AS car_maker,
payment_type,
ROUND(AVG(revenue)) as avg_revenue
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
GROUP BY
C.Make,
payment_type
''')
Explanation: You can also use GROUP BY with multiple columns to segment out the results further:
End of explanation
run('''
SELECT NULL
''')
#print(group_cheat2)
Explanation: Rewrite the query to find the total revenue grouped by each salesperson's first_name and by the customer's gender (gender column in cust_table)
- For an extra challenge, use the concatenating operator to use the salesperson's full name instead
- Add COUNT(S.id) to the SELECT clause to see the number of transactions in each group
End of explanation
run('''
SELECT
C.Make as Car_Maker,
SUM(revenue) as Total_Revenue
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
GROUP BY
Car_Maker HAVING Total_Revenue > 500000
''')
Explanation: "HAVING" in GROUP BY statements:
SELECT
column_a,
SUM(column_b) AS alias_b
FROM
table_name
GROUP BY
column_a HAVING alias_b > x # only includes groups in column_a when the sum of column_b is greater than x
If you've applied a function to a column and want to filter to only show results meeting a particular criteria, use HAVING in your GROUP BY clause.
More on HAVING: http://www.w3schools.com/sql/sql_having.asp
The query below will sum up all the revenue for each car maker, but it will only show you results for car makers whose total revenue is greater than 500,000:
End of explanation
run('''
SELECT NULL
''')
#print(having_cheat)
Explanation: Rewrite the query above to look at average revenue per model, and using HAVING to filter your result-set to only include models whose average revenue is less than 18,000:
End of explanation
run('''
SELECT
C.model as Car_Model,
AVG(S.revenue) as Avg_Revenue
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
WHERE
S.revenue < 18000
GROUP BY
Car_Model
''')
Explanation: HAVING vs WHERE:
WHERE filters which rows will be included in the function, whereas HAVING filters what's returned after the function has been applied.
Take a look at the query below. It might look like the query you just wrote (above) if you'd tried to use WHERE instead of HAVING:
SELECT
C.model as Car_Model,
AVG(S.revenue) as Avg_Revenue
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
WHERE
S.revenue < 18000
GROUP BY
Car_Model
Find the sales_table and join it to the car_table
Pull the data from the 'model' column in car_table and 'revenue' column in sales_table
Keep only the rows where revenue is less than 18,000 (filter out everything at 18,000 or above)
Average remaining rows for each Car_Model
Even though AVG( ) appears early in the query, it's not actually applied until after the WHERE statement has filtered out the rows with 18,000 or more in revenue.
This is the result:
End of explanation
run('''
SELECT
C.model as Car_Model,
AVG(S.revenue) as Avg_Revenue
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
GROUP BY
Car_Model HAVING Avg_Revenue < 18000
''')
Explanation: All model_ids are returned, but the averages are all much lower than they should be. That's because the query first drops all rows that have revenue greater than 18000, and then averages the remaining rows.
When you use HAVING, SQL follows these steps instead (this query should look like the one you wrote in the last challenge):
SELECT
C.model as Car_Model,
AVG(S.revenue) as Avg_Revenue
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
GROUP BY
Car_Model HAVING Avg_Revenue < 18000
Find the sales_table and join it to the car_table (same as before)
Pull the data from the 'model' column in car_table and 'revenue' column in sales_table (same as before)
Average the rows for each Car_Model
Return only the Car_Models whose averages are less than 18,000
And as you can see, there's a big difference in these results and the results of the query that used "WHERE" instead of HAVING:
End of explanation
run('''
SELECT
C.model as Car_Model,
AVG(S.revenue) as Avg_Revenue
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
WHERE
C.make = 'Toyota'
GROUP BY
Car_Model HAVING Avg_Revenue < 18000
''')
Explanation: HAVING & WHERE in the same query:
Sometimes, you will want to use WHERE and HAVING in the same query
Just be aware of the order of the steps that SQL takes
Rule of thumb: if you're applying a function to a column, you probably don't want that column in there WHERE clause
This query is only looking at Toyota models whose average revenue is less than 18,000, using WHERE to limit the results to Toyotas, and HAVING to limit the results by average revenue:
End of explanation
run('''
SELECT NULL
''')
#print(having_where_cheat)
Explanation: Write a query with the following criteria:
- SELECT clause:
- salesman's last name and average revenue, rounded to the nearest cent
- FROM clause:
- sales_table joined with the salesman_table and the cust_table
- WHERE clause:
- only female customers
- GROUP BY clause:
- only salespeople whose average revenue was greater than 20,000
So, in plain English, we want to see salespeople whose average revenue for female customers is greater than 20,000
End of explanation
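A sketch of one possible answer:
run('''
SELECT
SM.last_name AS Salesperson,
ROUND(AVG(S.revenue), 2) AS Avg_Revenue
FROM
sales_table S
JOIN salesman_table SM ON S.salesman_id = SM.id
JOIN cust_table CUST ON S.customer_id = CUST.customer_id
WHERE
CUST.gender = 'female'
GROUP BY
Salesperson HAVING Avg_Revenue > 20000
''')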
rollup_differences
Explanation: ROLLUP
SELECT
column_a,
SUM(column_b)
FROM
table_x
GROUP BY
ROLLUP(column_a) # adds up all groups' values in a single final row
Rollup, used with GROUP BY, provides subtotals and totals for your groups
Useful for quick analysis
Varies by RDBMS
End of explanation
run('''
SELECT
C.model AS Car_Model,
SUM(S.revenue) as Sum_Revenue
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
GROUP BY C.model
UNION ALL
SELECT
'NULL',
SUM(S.revenue)
FROM
sales_table S
''')
Explanation: Because SQLite doesn't support ROLLUP, the query below is just intended to illustrate how ROLLUP would work. Don't worry about understanding the query itself, just get familiar with what's going on in the result-set:
End of explanation
conditional_differences
Explanation: Conditional Expressions: IF & CASE WHEN
SELECT
CASE WHEN column_a = x THEN some_value
WHEN column_a = y THEN some_value2
ELSE some_other_value
END some_alias # alias optional after END
FROM
table_name
Conditional expressions let you use IF/THEN logic in SQL
In SQLite, you have to use CASE WHEN, but in other RDBMS you may prefer to use IF, depending on your needs
More on CASE WHEN: http://www.dotnet-tricks.com/Tutorial/sqlserver/1MS1120313-Understanding-Case-Expression-in-SQL-Server-with-Example.html
End of explanation
run('''
SELECT
revenue,
CASE WHEN revenue > 20000 THEN 'Revenue is more than 20,000'
END Conditional_Column
FROM
sales_table
LIMIT 10
''')
Explanation: Starting with a simple example, here we'll use CASE WHEN to create a new column on the sales_table:
End of explanation
run('''
SELECT
revenue,
CASE WHEN revenue > 20000 THEN 'Revenue is MORE than 20,000'
WHEN revenue < 15000 THEN 'Revenue is LESS than 15,000'
END Conditional_Column
FROM
sales_table
LIMIT 10
''')
Explanation: CASE WHEN gives you the value "Revenue is more than 20,000" when revenue in that same row is greater than 20,000. Otherwise, it has no value.
Now let's add a level:
End of explanation
run('''
SELECT
revenue,
CASE WHEN revenue > 20000 THEN 'Revenue is MORE than 20,000'
WHEN revenue < 15000 THEN 'Revenue is LESS than 15,000'
ELSE 'NEITHER'
END Conditional_Column
FROM
sales_table
LIMIT 10
''')
Explanation: Now to deal with the blank spaces. You can assign an "ELSE" value to catch anything that's not included in the prior expressions:
End of explanation
run('''
SELECT
C.Make as car_maker,
payment_type,
ROUND(AVG(S.revenue)) as avg_revenue
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
GROUP BY
C.Make,
payment_type
''')
Explanation: You can use values from another column as well. Remember this query from the GROUP BY lesson? It's often helpful to look at information broken out by multiple groups, but it's not especially easy to digest:
End of explanation
run('''
SELECT
C.Make as Car_Maker,
payment_type,
S.revenue
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
''')
Explanation: Look at what's going on in that query without the AVG( ) function and the GROUP BY clause:
End of explanation
run('''
SELECT
C.Make as Car_Maker,
payment_type,
CASE WHEN payment_type = 'cash' THEN S.revenue END Cash_Revenue,
CASE WHEN payment_type = 'finance' THEN S.revenue END Finance_Revenue
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
''')
Explanation: The result-set above is essentially what SQL is working with right before it separates the rows into groups and averages the revenue within those groups.
Now, we're going to use some CASE WHEN statements to change this a little:
End of explanation
run('''
SELECT
C.Make as Car_Maker,
ROUND(AVG(CASE WHEN payment_type = 'cash' THEN S.revenue END)) AS Avg_Cash_Revenue,
ROUND(AVG(CASE WHEN payment_type = 'finance' THEN S.revenue END)) AS Avg_Finance_Revenue
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
GROUP BY
C.Make
''')
Explanation: Now let's add back the ROUND() and AVG() functions and the GROUP BY statement:
End of explanation
run('''
SELECT NULL
''')
#print(case_cheat)
Explanation: CASE WHEN makes this same information a lot easier to read by letting you pivot the result set a little.
Write a query using CASE WHEN to look at total revenue per gender, grouped by each car model
End of explanation
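A sketch of one possible answer (the gender values 'female' and 'male' are the ones used elsewhere in this notebook):
run('''
SELECT
C.model AS Car_Model,
SUM(CASE WHEN CUST.gender = 'female' THEN S.revenue END) AS Female_Revenue,
SUM(CASE WHEN CUST.gender = 'male' THEN S.revenue END) AS Male_Revenue
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
JOIN cust_table CUST on S.customer_id = CUST.customer_id
GROUP BY
Car_Model
''')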
run('''
SELECT
age,
COUNT(*) customers
FROM
cust_table
GROUP BY
age
''')
Explanation: CASE WHEN also lets you create new groups. Start by looking at the cust_table grouped by age - remember that COUNT(*) tells you how many rows are in each group (which is the same as telling you the number of customers in each group):
End of explanation
run('''
SELECT
CASE WHEN age BETWEEN 18 AND 24 THEN '18-24 years'
WHEN age BETWEEN 25 AND 34 THEN '25-34 years'
WHEN age BETWEEN 35 AND 44 THEN '35-44 years'
WHEN age BETWEEN 45 AND 54 THEN '45-54 years'
WHEN age BETWEEN 55 AND 64 THEN '55-64 years'
END Age_Group,
COUNT(*) as Customers
FROM
cust_table
GROUP BY
Age_Group
''')
Explanation: When you want to segment your results, but there are too many different values for GROUP BY to be helpful, use CASE WHEN to make your own groups. GROUP BY the column you created with CASE WHEN to look at your newly created segments.
End of explanation
run('''
SELECT NULL
''')
#print(case_cheat2)
Explanation: Ta-DA! Useful customer segments!
Try to break up the "Customers" column into 2 columns - one for male and one for female. Keep the age segments intact.
- Note that COUNT(*) cannot be wrapped around a CASE WHEN expression the way that other functions can. Try to think of a different way to get a count.
- Extra challenge: try to express male and female customers as a percentage of the total for each group, rounded to 2 decimal points
End of explanation
run('''
SELECT
C.model AS Car_Model,
SUM(S.revenue) AS Revenue_Per_Model,
(SELECT SUM(revenue) FROM sales_table) AS Total_Revenue,
SUM(S.revenue) / (SELECT SUM(revenue) FROM sales_table) AS Contribution_to_Revenue
FROM
sales_table S
JOIN car_table C ON C.model_id = S.model_id
GROUP BY
Car_Model
''')
Explanation: NESTING
Nested queries allow you to put a query within a query
Depending on your needs, you might put a nested query in the SELECT clause, the FROM clause, or the WHERE clause
Consider the following query. We're using a nested query in the SELECT clause to see the sum of all revenue in the sales_table, and then using it again to see what percentage of total revenue can be attributed to each Car_Model.
End of explanation
run('''
SELECT NULL
''')
#print(nest_cheat1)
Explanation: Write a query to look at the model name and COGs for each car in car_table, then use a nested query to also look at the average COGs off all car models in a third column
- Extra Challenge: add a fourth column using another nested query to return the difference between each car model's COGs and the average COGs
End of explanation
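A sketch of one possible answer, including the extra column:
run('''
SELECT
model,
cogs,
(SELECT AVG(cogs) FROM car_table) AS Avg_COGs,
cogs - (SELECT AVG(cogs) FROM car_table) AS Diff_From_Avg
FROM
car_table
''')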
run('''
SELECT
model
FROM
car_table
WHERE
model = 'Tundra'
UNION
SELECT
first_name
FROM
salesman_table
WHERE first_name = 'Jared'
''')
Explanation: UNION & UNION ALL
SELECT
column_a
FROM
table_x
UNION # or UNION ALL
SELECT
column_b
FROM
table_y
UNION allows you to run a 2nd query (or 3rd or 4th), the results will be ordered by default with the results of the first query
UNION ALL ensures that the results in the result set appear in the order that the queries are written
The number of columns in each query must be the same in order for UNION & UNION ALL to work
Starting with something simple (and a little nonsensical), UNION basically lets you run two entirely separate queries. Technically, they could have nothing to do with each other:
End of explanation
run('''
SELECT NULL
''')
#print(union_cheat1)
Explanation: Some things to note:
- Although these queries and their results are unrelated, the column header is dictated by the query that appears first
- Even though the query for "Tundra" is first, "Tundra" is second in the results. UNION will sort all results according to the normal default rules, ascending order.
- Replace UNION with UNION ALL and run the query again. What changes?
Use UNION to join two queries. The first should have two columns: car model and COGs per car. The second query should show you the average COGs for all the car models, rounded to cents. You want the average COGs to appear in the last row.
- Remember that united queries need to have the same number of columns.
End of explanation
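A sketch of one possible answer. UNION ALL is used here instead of UNION so that the average stays in the last row (plain UNION would re-sort the combined results, as noted above):
run('''
SELECT
model,
cogs
FROM
car_table
UNION ALL
SELECT
'Average:',
ROUND(AVG(cogs), 2)
FROM
car_table
''')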
run('''
SELECT
C.model AS Car_Model,
SUM(S.revenue) as Sum_Revenue
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
GROUP BY C.model
UNION ALL
SELECT
'NULL',
SUM(S.revenue)
FROM
sales_table S
''')
Explanation: Consider the issue we had before, where SQLite didn't support ROLLUP. We used this query as a workaround. Does it make sense now?
End of explanation
run('''
SELECT
date,
revenue
FROM
sales_table
''').head()
Explanation: Optimization:
Non-optimized queries can cause a lot of problems because tables frequently have thousands or millions of rows:
If you haven't optimized your query, it might:
Take several minutes (or even hours) to return the information you're requesting
Crash your computer
Muck up the server's processes, and you'll face the wrath of your company's system administrators once they figure out that you are the reason why the whole system has slowed down and everyone is sending them angry emails (this will probably happen to you no matter what. It's a rite of passage).
Find a few more useful optimization tips here: http://hungred.com/useful-information/ways-optimize-sql-queries/
Some of these seem strange, because we're going to tell you NOT to do a bunch of things that you've learned how to do. Stick to this principle: if you're dealing with a small table, you can break a few of these rules. The larger the table, the fewer rules you can break.
DO name specific columns in the SELECT CLAUSE:
End of explanation
run('''
SELECT
*
FROM
sales_table
''').head()
Explanation: DON'T use an asterisk unless you absolutely have to:
This can put a lot of strain on servers. Only use if you know for certain that your using a small table
End of explanation
run('''
SELECT
model_id, model
FROM
car_table
WHERE
model LIKE '%undra'
''')
Explanation: DO use LIKE on small tables and in simple queries:
LIKE is helpful if you know where to find something but you can't quite remember what it's called. Try to use a wildcard sparingly - don't use 2 when 1 will suffice:
End of explanation
run('''
SELECT
C.model,
AVG(revenue)
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
WHERE
C.model LIKE '%undra'
''')
Explanation: DON'T use LIKE on large tables or when using JOINs:
End of explanation
run('''
SELECT
revenue,
date
FROM
sales_table
WHERE
date = '1/1/14'
''')
Explanation: If you want to look at average revenue for car models that are like "%undra", run the LIKE query on the small table (car_table) first to figure out exactly what you're looking for, then use that information to search for the data you need from the sales_table
DO dip your toe in by starting with a small data set
Use WHERE to only view a few days of data at first. If the query runs quickly, add a few days at a time. If it starts to run slowly, run just a few days at a time and paste results into excel to combine results (or use Python...ask me later!!!).
The query below won't work because SQLite doesn't recognize dates, but remember these concepts when working with other RDBMS
End of explanation
run('''
SELECT
make, model, cogs
FROM
car_table
WHERE
make = 'Toyota'
UNION
SELECT
make, model, cogs
FROM
car_table
WHERE
cogs > 13000
''')
Explanation: DO use a UNION to look at result-sets that aren't mutually exclusive
Let's say you were interested in seeing all Toyotas as well as cars with COGs of more than 13000. Write a query for the first group, then a query for the second group, and unite them with UNION. The result set won't show you repeats - if a row matches both result sets, it will only display once.
End of explanation
run('''
SELECT
make, model, cogs
FROM
car_table
WHERE
make = 'Toyota' OR cogs > 13000
''')
Explanation: DON'T use OR when a UNION will generate the same results
Note that we'll get the same results as above, but this query could run MUCH slower on a large table. It's tempting to use OR because it's faster to write, but unless you're dealing with very small tables, avoid the temptation. In 5 years of doing business analytics with SQL, I never used OR once. It's slow. Use a UNION.
End of explanation
run('''
SELECT
*
FROM
car_table
WHERE
make != 'Subaru'
''')
Explanation: DON'T use negative filters when a positive filter is possible
Let's say you want to look at cars made by Toyota and Honda, but you don't care about Subaru. It might be tempting to use a negative filter:
End of explanation
run('''
SELECT
*
FROM
car_table
WHERE
make in ('Toyota', 'Honda')
''')
Explanation: On a big table, this will run much more slowly than if you use a positive filter. Try this instead - it might require a little extra typing, but it will run much faster:
End of explanation
run('''
SELECT
first_name || ' ' || last_name as Salesperson,
COUNT(*) as Cars_Sold
FROM
sales_table S
JOIN salesman_table M ON S.salesman_id = M.id
GROUP BY
Salesperson
ORDER BY
Cars_Sold DESC
''')
Explanation: Wrapping Up:
Debugging:
If you run into errors when you start writing your own queries, here are some things to make sure your query has:
- The right names for columns in the SELECT clause
- Columns that can be found in the tables in the FROM clause
- Consistent use of aliases throughout (if using aliases)
- Joined tables on the corresponding column and proper aliases to indicate each table
- The correct order of clauses:
SELECT
FROM
JOIN...ON
WHERE
GROUP BY
UNION
ORDER BY
LIMIT
- Consistent use of capitalization for variables in quotes
- Functions and operators for real numbers, not integers
- The same number of columns/expressions in the SELECT clause of each query when using UNION
Gain a deeper understanding:
http://tech.pro/tutorial/1555/10-easy-steps-to-a-complete-understanding-of-sql
Practice on other databases:
http://sqlzoo.net/wiki/SELECT_.._WHERE
Sample Queries for Business Analysis:
Let's say you recently opened a car dealership, and you now have one month's worth of sales data. You want to know how your sales team is doing.
Start by looking at the number of cars each person sold last month. The names of the sales team and the list of transactions are on different tables in your database, but SQL can help you with that:
End of explanation
run('''
SELECT
first_name || ' ' || last_name as Salesperson,
COUNT(*) as Cars_Sold,
ROUND(AVG(revenue)) as Revenue_per_Sale
FROM
sales_table S
JOIN salesman_table M ON S.salesman_id = M.id
GROUP BY
Salesperson
ORDER BY
Cars_Sold DESC
''')
Explanation: Add on the average amount of revenue made per sale:
End of explanation
run('''
SELECT
first_name || ' ' || last_name as Salesperson,
COUNT(*) as Cars_Sold,
ROUND(AVG(revenue), 2) as Rev_per_Sale,
ROUND((((AVG(revenue)
- (SELECT AVG(revenue) from sales_table))
/(SELECT AVG(revenue) from sales_table))*100), 1) || ' %'
as RPS_Compared_to_Avg
FROM
sales_table S
JOIN salesman_table M ON S.salesman_id = M.id
GROUP BY
Salesperson
ORDER BY
Cars_Sold DESC
''')
Explanation: Make it easier to compare the average revenue of Jared's sales to the average revenue per sale overall by adding a column to see by what percent each salesperson's sales are more or less than average:
End of explanation
run('''
SELECT
first_name || ' ' || last_name as Salesperson,
COUNT(*) as Cars_Sold,
'$ ' || ROUND(AVG(revenue), 2) as Rev_per_Sale,
ROUND((((AVG(revenue)
- (SELECT AVG(revenue) from sales_table where salesman_id != 215))
/(SELECT AVG(revenue) from sales_table where salesman_id != 215))*100), 1) || ' %'
AS RPS_Compared_to_Avg,
ROUND((1-(SUM(revenue) / SUM(sticker_price)))*100, 1) || ' %' as Avg_Customer_Discount
FROM
sales_table S
JOIN salesman_table M ON S.salesman_id = M.id
JOIN car_table C ON S.model_id = C.model_id
GROUP BY
Salesperson
ORDER BY
Cars_Sold DESC
''')
Explanation: So maybe Jared is just selling cheaper cars.
Let's go further and compare the sale price of each car against the sticker price to see how low Jared was willing to negotiate with customers. Sticker price is in another table, but again, that's no problem with SQL:
End of explanation
run('''
SELECT
first_name || ' ' || last_name as Salesperson,
COUNT(*) as Cars_Sold,
'$ ' || ROUND(AVG(revenue), 2) as Rev_per_Sale,
ROUND((((AVG(revenue)
- (SELECT AVG(revenue) from sales_table where salesman_id != 215))
/(SELECT AVG(revenue) from sales_table where salesman_id != 215))*100), 1) || ' %'
AS RPS_Compared_to_Peers,
ROUND((1-(SUM(revenue) / SUM(sticker_price)))*100, 1) || ' %' as Avg_Customer_Discount,
ROUND(((SUM(revenue)-sum(C.cogs))
/(SELECT SUM(revenue)-sum(cogs) FROM sales_table S join car_table C on S.model_id = C.model_id))*100, 1) || ' %' as Gross_Profit_Contribution
FROM
sales_table S
JOIN salesman_table M ON S.salesman_id = M.id
JOIN car_table C ON S.model_id = C.model_id
GROUP BY
Salesperson
ORDER BY
Cars_Sold DESC
''')
Explanation: Looks like Jared is letting customers negotiate prices down much more than his peers.
But is this a real problem? How much is each salesperson contributing to our gross profits?
End of explanation
run('''
SELECT
C.model as Car_Model,
ROUND(SUM(CASE WHEN CUST.gender = 'female' THEN 1 END)/(COUNT(S.id)*1.0), 2) AS '% Female Customers',
ROUND(SUM(CASE WHEN CUST.gender = 'male' THEN 1 END)/(COUNT(S.id)*1.0), 2) AS '% Male Customers'
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
JOIN cust_table CUST on S.customer_id = CUST.customer_id
GROUP BY
Car_Model
UNION ALL
SELECT
'Total:',
ROUND(SUM(CASE WHEN CUST.gender = 'female' THEN 1 END)/(COUNT(S.id)*1.0), 2) AS '% Female Customers',
ROUND(SUM(CASE WHEN CUST.gender = 'male' THEN 1 END)/(COUNT(S.id)*1.0), 2) AS '% Male Customers'
FROM
sales_table S
JOIN cust_table CUST on S.customer_id = CUST.customer_id
''')
Explanation: SQL really lets you dig.
Some other quick examples - we could do a gender breakdown of customers per car model and add a total at the bottom:
End of explanation
run('''
SELECT
CASE WHEN age BETWEEN 18 AND 24 THEN '18-24 years'
WHEN age BETWEEN 25 AND 34 THEN '25-34 years'
WHEN age BETWEEN 35 AND 44 THEN '35-44 years'
WHEN age BETWEEN 45 AND 54 THEN '45-54 years'
WHEN age BETWEEN 55 AND 64 THEN '55-64 years'
END Age_Group,
ROUND((SUM(S.revenue)-SUM(C.sticker_price))/SUM(C.sticker_price), 2) as '% Paid Below Sticker Price'
FROM
sales_table S
JOIN car_table C on S.model_id = C.model_id
JOIN cust_table CUST on S.customer_id = CUST.customer_id
GROUP BY
Age_Group
''')
Explanation: Easily create age groups and see how aggressively each group negotiates (judged by the difference between the actual sale amount and the sticker price):
End of explanation |
9,836 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time-frequency beamforming using DICS
Compute DICS source power [1]_ in a grid of time-frequency windows.
References
.. [1] Dalal et al. Five-dimensional neuroimaging
Step1: Read raw data
Step2: Time-frequency beamforming based on DICS | Python Code:
# Author: Roman Goj <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne.event import make_fixed_length_events
from mne.datasets import sample
from mne.time_frequency import csd_fourier
from mne.beamformer import tf_dics
from mne.viz import plot_source_spectrogram
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
noise_fname = data_path + '/MEG/sample/ernoise_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
Explanation: Time-frequency beamforming using DICS
Compute DICS source power [1]_ in a grid of time-frequency windows.
References
.. [1] Dalal et al. Five-dimensional neuroimaging: Localization of the
time-frequency dynamics of cortical activity.
NeuroImage (2008) vol. 40 (4) pp. 1686-1700
End of explanation
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel
# Pick a selection of magnetometer channels. A subset of all channels was used
# to speed up the example. For a solution based on all MEG channels use
# meg=True, selection=None and add mag=4e-12 to the reject dictionary.
left_temporal_channels = mne.read_selection('Left-temporal')
picks = mne.pick_types(raw.info, meg='mag', eeg=False, eog=False,
stim=False, exclude='bads',
selection=left_temporal_channels)
raw.pick_channels([raw.ch_names[pick] for pick in picks])
reject = dict(mag=4e-12)
# Re-normalize our empty-room projectors, which should be fine after
# subselection
raw.info.normalize_proj()
# Setting time windows. Note that tmin and tmax are set so that time-frequency
# beamforming will be performed for a wider range of time points than will
# later be displayed on the final spectrogram. This ensures that all time bins
# displayed represent an average of an equal number of time windows.
tmin, tmax, tstep = -0.5, 0.75, 0.05 # s
tmin_plot, tmax_plot = -0.3, 0.5 # s
# Read epochs
event_id = 1
events = mne.read_events(event_fname)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=None, preload=True, proj=True, reject=reject)
# Read empty room noise raw data
raw_noise = mne.io.read_raw_fif(noise_fname, preload=True)
raw_noise.info['bads'] = ['MEG 2443'] # 1 bad MEG channel
raw_noise.pick_channels([raw_noise.ch_names[pick] for pick in picks])
raw_noise.info.normalize_proj()
# Create noise epochs and make sure the number of noise epochs corresponds to
# the number of data epochs
events_noise = make_fixed_length_events(raw_noise, event_id)
epochs_noise = mne.Epochs(raw_noise, events_noise, event_id, tmin_plot,
tmax_plot, baseline=None, preload=True, proj=True,
reject=reject)
epochs_noise.info.normalize_proj()
epochs_noise.apply_proj()
# then make sure the number of epochs is the same
epochs_noise = epochs_noise[:len(epochs.events)]
# Read forward operator
forward = mne.read_forward_solution(fname_fwd)
# Read label
label = mne.read_label(fname_label)
Explanation: Read raw data
End of explanation
# Setting frequency bins as in Dalal et al. 2008
freq_bins = [(4, 12), (12, 30), (30, 55), (65, 300)] # Hz
win_lengths = [0.3, 0.2, 0.15, 0.1] # s
# Then set FFTs length for each frequency range.
# Should be a power of 2 to be faster.
n_ffts = [256, 128, 128, 128]
# Subtract evoked response prior to computation?
subtract_evoked = False
# Calculating noise cross-spectral density from empty room noise for each
# frequency bin and the corresponding time window length. To calculate noise
# from the baseline period in the data, change epochs_noise to epochs
noise_csds = []
for freq_bin, win_length, n_fft in zip(freq_bins, win_lengths, n_ffts):
noise_csd = csd_fourier(epochs_noise, fmin=freq_bin[0], fmax=freq_bin[1],
tmin=-win_length, tmax=0, n_fft=n_fft)
noise_csds.append(noise_csd.sum())
# Computing DICS solutions for time-frequency windows in a label in source
# space for faster computation, use label=None for full solution
stcs = tf_dics(epochs, forward, noise_csds, tmin, tmax, tstep, win_lengths,
freq_bins=freq_bins, subtract_evoked=subtract_evoked,
n_ffts=n_ffts, reg=0.05, label=label, inversion='matrix')
# Plotting source spectrogram for source with maximum activity
# Note that tmin and tmax are set to display a time range that is smaller than
# the one for which beamforming estimates were calculated. This ensures that
# all time bins shown are a result of smoothing across an identical number of
# time windows.
plot_source_spectrogram(stcs, freq_bins, tmin=tmin_plot, tmax=tmax_plot,
source_index=None, colorbar=True)
Explanation: Time-frequency beamforming based on DICS
End of explanation |
9,837 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Steady-state space-charge-limited current with traps
This example shows how to simulate effects of a single trap level on current-voltage characteristics of a single carrier device.
Step1: Model and parameters
An electron-only device is simulated, without a contact barrier. Note that more trap levels can be included by modifying the traps= argument below. Each trap level should have a unique name.
Step2: Sweep parameters
For simplicity, the case of absent traps is modeled by putting trap level 1 eV above transport level. This makes trap states effectively unoccupied.
Step3: Result | Python Code:
%matplotlib inline
import matplotlib.pylab as plt
import oedes
import numpy as np
oedes.init_notebook() # for displaying progress bars
Explanation: Steady-state space-charge-limited current with traps
This example shows how to simulate effects of a single trap level on current-voltage characteristics of a single carrier device.
End of explanation
L = 200e-9 # device thickness, m
model = oedes.models.std.electrononly(L, traps=['trap'])
params = {
'T': 300, # K
'electrode0.workfunction': 0, # eV
'electrode1.workfunction': 0, # eV
'electron.energy': 0, # eV
'electron.mu': 1e-9, # m2/(Vs)
'electron.N0': 2.4e26, # 1/m^3
'electron.trap.energy': 0, # eV
'electron.trap.trate': 1e-22, # 1/(m^3 s)
'electron.trap.N0': 6.2e22, # 1/m^3
'electrode0.voltage': 0, # V
'electrode1.voltage': 0, # V
'epsilon_r': 3. # 1
}
Explanation: Model and parameters
An electron-only device is simulated, without a contact barrier. Note that more trap levels can be included by modifying the traps= argument below. Each trap level should have a unique name.
End of explanation
trapenergy_sweep = oedes.sweep('electron.trap.energy',np.asarray([-0.45, -0.33, -0.21, 1.]))
voltage_sweep = oedes.sweep('electrode0.voltage', np.logspace(-3, np.log10(20.), 100))
Explanation: Sweep parameters
For simplicity, the case of absent traps is modeled by putting trap level 1 eV above transport level. This makes trap states effectively unoccupied.
End of explanation
c=oedes.context(model)
for tdepth,ct in c.sweep(params, trapenergy_sweep):
for _ in ct.sweep(ct.params, voltage_sweep):
pass
v,j = ct.teval(voltage_sweep.parameter_name,'J')
oedes.testing.store(j, rtol=1e-3) # for automatic testing
if tdepth < 0:
label = 'no traps'
else:
label = 'trap depth %s eV' % tdepth
plt.plot(v,j,label=label)
plt.xscale('log')
plt.yscale('log')
plt.xlabel('V')
plt.ylabel(r'$\mathrm{A/m^2}$')
plt.legend(loc=0,frameon=False);
Explanation: Result
End of explanation |
9,838 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 STYLE="background
Step1: <h4 style="border-bottom
Step2: <h4 style="border-bottom
Step3: <h2 STYLE="background
Step4: As the figure above shows, if you play a roulette wheel where odd and even come up with equal probability 10 times, the probability that odd and even each come up the same number of times (5 times each) is only about 25% (roughly 2,500 out of 10,000 trials). You may find that surprisingly low.
<h4 style="border-bottom
Step5: Using the same kind of calculation, this time let's compute the probability that odd comes up 60 or more times in 100 plays of the roulette wheel.
Step6: We found that when a fair roulette wheel (odd and even equally likely) is played 100 times, the probability that odd comes up 60 or more times purely by chance is below 5%. In other words, if a wheel produces odd 60 or more times out of 100, it is reasonable to suspect that the wheel is rigged.
This p is called the p-value (significance probability).
Null hypothesis: the wheel is not rigged (odd and even come up with equal probability).
Alternative hypothesis: the wheel is rigged.
Since p < 0.05, the null hypothesis can be rejected at the 5% significance level.
That is, the wheel is likely to be rigged.
<h4 style="padding
Step7: <h4 style="border-bottom
Step8: <h4 style="padding
Step9: <h2 STYLE="background
Step10: What is the probability that a random number drawn from the standard normal distribution is 2 or greater? Let's compute it.
Step11: <h4 style="border-bottom
Step12: <h4 style="padding | Python Code:
# 乱数を扱うためのライブラリをインポートする。
import random
sample_size = 10 # 乱数発生回数
# 一様乱数を dist に格納する (distribution : 分布)
dist = [random.random() for i in range(sample_size)]
# dist の中身を確認する。
dist
# 図やグラフを図示するためのライブラリをインポートする。
import matplotlib.pyplot as plt
%matplotlib inline
# ヒストグラムを描く。
plt.hist(dist)
plt.grid()
plt.show()
Explanation: <h1 STYLE="background: #c2edff;padding: 0.5em;">Step 1. Distributions</h1>
<ol>
<li><a href="#1">Random numbers and the uniform distribution</a>
<li><a href="#2">The binomial distribution</a>
<li><a href="#3">The normal distribution</a>
</ol>
<h4 style="border-bottom: solid 1px black;">Goal of Step 1</h4>
Draw a histogram of random numbers that follow a normal distribution.
<img src="fig/seikibunpu.png">
<h2 STYLE="background: #c2edff;padding: 0.5em;"><a name="1">1.1 Random numbers and the uniform distribution</a></h2>
<h4 style="border-bottom: solid 1px black;">Plotting the distribution of uniform random numbers</h4>
First, let's generate uniform random numbers and plot their distribution.
End of explanation
sample_size = 100 # 乱数発生回数
# 一様乱数を dist に格納する
dist = [random.random() for i in range(sample_size)]
# ヒストグラムを描く。
plt.hist(dist)
plt.grid()
plt.show()
sample_size = 1000 # 乱数発生回数
# 一様乱数を dist に格納する
dist = [random.random() for i in range(sample_size)]
# ヒストグラムを描く。
plt.hist(dist)
plt.grid()
plt.show()
sample_size = 10000 # 乱数発生回数
# 一様乱数を dist に格納する
dist = [random.random() for i in range(sample_size)]
# ヒストグラムを描く。
plt.hist(dist)
plt.grid()
plt.show()
sample_size = 100000 # 乱数発生回数
# 一様乱数を dist に格納する
dist = [random.random() for i in range(sample_size)]
# ヒストグラムを描く。
plt.hist(dist)
plt.grid()
plt.show()
Explanation: <h4 style="border-bottom: solid 1px black;">Increasing the number of random draws</h4>
As the number of random draws increases, the histogram approaches the "ideal" shape of the distribution.
End of explanation
sample_size = 100000 # 乱数発生回数
# 一様乱数を dist に格納する
dist = [random.random() for i in range(sample_size)]
# ヒストグラムを描く。
plt.hist(dist, bins=100) # binを多くする
plt.grid()
plt.show()
Explanation: <h4 style="border-bottom: solid 1px black;">Increasing the number of bins</h4>
The boxes used to sort items into are called bins. When drawing a histogram, the picture changes depending on how many bins the data are sorted into. With more bins you can see finer detail in the shape of the distribution, but of course fewer data points fall into each individual bin.
End of explanation
# 数値計算のライブラリをインポートする。
import numpy as np
sample_size = 10000 # 乱数発生回数
# 確率pで奇数が出る(確率1-pで偶数が出る)ルーレットをn回プレイしたときに、
# 奇数が出る回数の分布
dist = [np.random.binomial(n=10, p=0.5) for i in range(sample_size)]
# ヒストグラムを描く。
plt.hist(dist, bins=100)
plt.grid()
plt.show()
Explanation: <h2 STYLE="background: #c2edff;padding: 0.5em;"><a name="2">1.2 The binomial distribution</a></h2>
np.random.binomial(n, p) returns the number of odd results when you play a roulette wheel n times, where odd comes up with probability p (and even with probability 1-p). This kind of distribution is called a binomial distribution.
<h4 style="border-bottom: solid 1px black;">A binomial distribution with equal probabilities</h4>
Play a roulette wheel where odd and even are equally likely 10 times and count how many times odd comes up. Repeat this 10,000 times. What is the probability that odd and even come up the same number of times (5 times each)?
End of explanation
sample_size = 10000 # 乱数発生回数
# 確率pで奇数が出る(確率1-pで偶数が出る)ルーレットをn回プレイしたときに、
# 奇数が出る回数の分布
dist = [np.random.binomial(n=100, p=0.5) for i in range(sample_size)]
# ヒストグラムを描く。
plt.hist(dist, bins=100)
plt.grid()
plt.show()
Explanation: As the figure above shows, if you play a roulette wheel where odd and even come up with equal probability 10 times, the probability that odd and even each come up the same number of times (5 times each) is only about 25% (roughly 2,500 out of 10,000 trials). You may find that surprisingly low.
<h4 style="border-bottom: solid 1px black;">Is the roulette wheel rigged?</h4>
You were watching another customer play roulette at a casino. Odd seemed to come up suspiciously often, and you started to wonder whether the wheel was rigged. If it is not rigged, odd and even should come up with equal probability. Yet this wheel produced odd 60 times out of 100. Is the wheel rigged?
When a fair wheel is played 100 times, how likely is it that odd comes up 60 or more times? Let's start by drawing the distribution.
End of explanation
sample_size = 10000 # 乱数発生回数
# 確率pで奇数が出る(確率1-pで偶数が出る)ルーレットをn回プレイしたときに、
# 奇数が出る回数の分布
dist = [np.random.binomial(n=100, p=0.5) for i in range(sample_size)]
p = sum([1 for n in dist if n >= 60]) / sample_size
print("p値: %(p)s " %locals())
Explanation: Using the same kind of calculation, this time let's compute the probability that odd comes up 60 or more times in 100 plays of the roulette wheel.
End of explanation
# Exercise 1.1
Explanation: We found that when a fair roulette wheel (odd and even equally likely) is played 100 times, the probability that odd comes up 60 or more times purely by chance is below 5%. In other words, if a wheel produces odd 60 or more times out of 100, it is reasonable to suspect that the wheel is rigged.
This p is called the p-value (significance probability).
Null hypothesis: the wheel is not rigged (odd and even come up with equal probability).
Alternative hypothesis: the wheel is rigged.
Since p < 0.05, the null hypothesis can be rejected at the 5% significance level.
That is, the wheel is likely to be rigged.
<h4 style="padding: 0.25em 0.5em;color: #494949;background: transparent;border-left: solid 5px #7db4e6;">Exercise 1.1</h4>
A wheel that produces odd 60 or more times out of 100 is worth suspecting. Now suppose odd came up 6 or more times out of only 10 spins. <u>The proportion of odd results is the same 60%</u>, but can you call this wheel rigged? Compute the p-value and answer.
End of explanation
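A possible sketch for Exercise 1.1 (not the official answer): repeat the simulation with 10 spins per trial and count how often 6 or more of them come up odd.
# Simulate 10,000 trials of 10 fair spins and estimate the p-value for "6 or more odd"
sample_size = 10000
dist = [np.random.binomial(n=10, p=0.5) for i in range(sample_size)]
p = sum([1 for n in dist if n >= 6]) / sample_size
print("p-value: %(p)s " % locals())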
sample_size = 10000 # 乱数発生回数
# 確率pで奇数が出る(確率1-pで偶数が出る)ルーレットをn回プレイしたときに、
# 奇数が出る回数の分布
dist = [np.random.binomial(n=20, p=0.05) for i in range(sample_size)]
# ヒストグラムを描く。
plt.hist(dist, bins=100)
plt.grid()
plt.show()
Explanation: <h4 style="border-bottom: solid 1px black;">A binomial distribution with unequal probabilities</h4>
Suppose 5% of a population is estimated to have contracted an infectious disease. If 20 people are sampled at random from that population, how many infected people will the sample contain? That count also follows a binomial distribution. Let's draw it.
End of explanation
# Exercise 1.2
Explanation: <h4 style="padding: 0.25em 0.5em;color: #494949;background: transparent;border-left: solid 5px #7db4e6;">Exercise 1.2</h4>
Suppose 5% of a population is estimated to have contracted an infectious disease. When 100 people were sampled at random from that population, 10 or more of them turned out to be infected.
(1) Roughly estimate the probability that this happens by chance.
(2) How should this result be interpreted?
End of explanation
sample_size = 10000 # 乱数発生回数
dist = [random.normalvariate(mu=0, sigma=1) for i in range(sample_size)]
# ヒストグラムを描く。
plt.hist(dist, bins=100)
plt.grid()
plt.show()
Explanation: <h2 STYLE="background: #c2edff;padding: 0.5em;"><a name="3">1.3 正規分布</a></h2>
random.normalvariate(mu, sigma) は正規分布に従う乱数を発生させる関数です(mu は平均で、sigma は標準偏差)。
<h4 style="border-bottom: solid 1px black;">標準正規分布</h4>
平均0、標準偏差1の正規分布を「標準正規分布」と言います。標準正規分布を描いてみましょう。
End of explanation
sample_size = 10000 # number of random draws
dist = [random.normalvariate(mu=0, sigma=1) for i in range(sample_size)]
p = sum([1 for n in dist if n >= 2]) / sample_size
print("p value: %(p)s " %locals())
Explanation: What is the probability that a random number drawn from the standard normal distribution is 2 or greater? Let's compute it.
End of explanation
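For reference, the simulated value can be compared with the exact upper-tail area of the standard normal distribution; this sketch assumes scipy is available.
# Exact tail area P(Z >= 2) of the standard normal distribution, about 0.0228,
# as a cross-check on the simulation above (assumes scipy is installed).
from scipy import stats
print("exact p value:", stats.norm.sf(2))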
sample_size = 10000 # number of random draws
# Distribution of exam scores drawn from a normal distribution
# with mean 50 and standard deviation 10
dist = [random.normalvariate(mu=50, sigma=10) for i in range(sample_size)]
# Draw a histogram.
plt.hist(dist, bins=100)
plt.grid()
plt.show()
Explanation: <h4 style="border-bottom: solid 1px black;">偏差値</h4>
大学受験模試などでよく使われる「偏差値」は、平均50、標準偏差10の正規分布に従うという仮定をおいています。分布を描いてみましょう。ここで、縦軸は「学生数」をイメージしてください。
End of explanation
# Exercise 1.3
Explanation: <h4 style="padding: 0.25em 0.5em;color: #494949;background: transparent;border-left: solid 5px #7db4e6;">Exercise 1.3</h4>
Out of 10,000 students, how many are estimated to have a standardized score of 70 or higher?
End of explanation |
9,839 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Manuscript7 - Compute percent of significant information transfers FROM source regions
Analysis for Fig. 7
Master code for Ito et al., 2017
Takuya Ito ([email protected])
Step1: 0.0 Basic parameters
Step2: 1.0 Run Region-to-region information transfer mapping
Due to computational constraints, all region-to-region activity flow mapping procedures and the RSA analyses were run on the supercomputer using MATLAB scripts in ./SupercomputerScripts/Fig6_RegionToRegionInformationTransferMapping/
2.0 Construct information transfer mapping matrix
Step3: 2.1 Visualize Information transfer mapping matrices (Threshold and Unthresholded)
Step4: 2.2 Compute the regions with the most information transfers TO and FROM
Step5: 3.0 Compute FWE-corrected results (as opposed to FDR)
Step6: 3.1 Visualize information transfer mapping matrices (FWE-Threshold and Unthresholded)
Step7: 3.2 Compute the regions with the most information transfers TO and FROM | Python Code:
import sys
sys.path.append('utils/')
import numpy as np
import loadGlasser as lg
import scipy.stats as stats
import matplotlib.pyplot as plt
import statsmodels.sandbox.stats.multicomp as mc
import sys
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import nibabel as nib
import os
import permutationTesting as pt
from matplotlib.colors import Normalize
from matplotlib.colors import LogNorm
class MidpointNormalize(Normalize):
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
# I'm ignoring masked values and all kinds of edge cases to make a
# simple example...
x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
return np.ma.masked_array(np.interp(value, x, y))
class MidpointNormalizeLog(LogNorm):
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
# I'm ignoring masked values and all kinds of edge cases to make a
# simple example...
x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
return np.ma.masked_array(np.interp(value, x, y))
class MidpointNormalize2(Normalize):
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
# I'm ignoring masked values and all kinds of edge cases to make a
# simple example...
t1 = (self.midpoint - self.vmin)/2.0
t2 = (self.vmax - self.midpoint)/30.0 + self.midpoint
x, y = [self.vmin, t1, self.midpoint, t2, self.vmax], [0, 0.25, .5, .75, 1.0]
return np.ma.masked_array(np.interp(value, x, y))
Explanation: Manuscript7 - Compute percent of significant information transfers FROM source regions
Analysis for Fig. 7
Master code for Ito et al., 2017
Takuya Ito ([email protected])
End of explanation
# Set basic parameters
basedir = '/projects2/ModalityControl2/'
datadir = basedir + 'data/'
resultsdir = datadir + 'resultsMaster/'
runLength = 4648
subjNums = ['032', '033', '037', '038', '039', '045',
'013', '014', '016', '017', '018', '021',
'023', '024', '025', '026', '027', '031',
'035', '046', '042', '028', '048', '053',
'040', '049', '057', '062', '050', '030', '047', '034']
glasserparcels = lg.loadGlasserParcels()
networkdef = lg.loadGlasserNetworks()
# Define the main networks (in main manuscript)
networkmappings = {'fpn':7, 'vis':1, 'smn':2, 'con':3, 'dmn':6, 'aud':8, 'aud2':9, 'dan':11}
# Force aud2 key to be the same as aud1
# aud2_ind = np.where(networkdef==networkmappings['aud2'])[0]
# networkdef[aud2_ind] = networkmappings['aud1']
# Merge aud1 and aud2
# networkmappings = {'fpn':7, 'vis':1, 'smn':2, 'con':3, 'dmn':6, 'aud':8, 'dan':11}
nParcels = 360
# Import network reordering
networkorder = np.asarray(sorted(range(len(networkdef)), key=lambda k: networkdef[k]))
order = networkorder
order.shape = (len(networkorder),1)
# Construct xticklabels and xticks for plotting figures
networks = networkmappings.keys()
xticks = {}
reorderednetworkaffil = networkdef[order]
for net in networks:
netNum = networkmappings[net]
netind = np.where(reorderednetworkaffil==netNum)[0]
tick = np.max(netind)
xticks[tick] = net
# Load in Glasser parcels in their native format (vertex formula)
glasserfilename = '/projects/AnalysisTools/ParcelsGlasser2016/archive/Q1-Q6_RelatedParcellation210.LR.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii'
glasser2 = nib.load('/projects/AnalysisTools/ParcelsGlasser2016/archive/Q1-Q6_RelatedParcellation210.LR.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii')
glasser2 = np.squeeze(glasser2.get_data())
Explanation: 0.0 Basic parameters
End of explanation
## Load in NM3 Data
ruledims = ['logic','sensory','motor']
datadir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript6andS2and7_RegionToRegionITE/'
# Load in RSA matrices
rsaMats = {}
df_stats = {}
for ruledim in ruledims:
rsaMats[ruledim] = np.zeros((nParcels,nParcels,len(subjNums)))
df_stats[ruledim] = {}
scount = 0
for subj in subjNums:
filename = datadir +subj+'_' + ruledim + '_RegionToRegionActFlowGlasserParcels.csv'
rsaMats[ruledim][:,:,scount] = np.loadtxt(filename, delimiter=',')
scount += 1
## Compute Group Stats
for ruledim in ruledims:
## Compute group statistics
# Compute average across subjects
df_stats[ruledim]['avgrho'] = np.mean(rsaMats[ruledim],axis=2)
# Compute t-test for each pairwise connection
t = np.zeros((nParcels,nParcels))
p = np.zeros((nParcels,nParcels))
for i in range(nParcels):
for j in range(nParcels):
t[i,j], p[i,j] = stats.ttest_1samp(rsaMats[ruledim][i,j,:], 0)
# One-sided t-test so...
if t[i,j] > 0:
p[i,j] = p[i,j]/2.0
else:
p[i,j] = 1.0-(p[i,j]/2.0)
df_stats[ruledim]['t'] = t
df_stats[ruledim]['p'] = p
## Run multiple corrections
triu_ind = np.triu_indices(nParcels,k=1)
tril_ind = np.tril_indices(nParcels,k=-1)
tmpq = []
tmpq.extend(df_stats[ruledim]['p'][triu_ind])
tmpq.extend(df_stats[ruledim]['p'][tril_ind])
# only run FDR correction on non-NaN values
ind_nans = np.isnan(tmpq)
ind_nonnan = np.where(ind_nans==False)[0]
tmpq = np.asarray(tmpq)
tmpq2 = mc.fdrcorrection0(tmpq[ind_nonnan])[1]
tmpq[ind_nonnan] = tmpq2
qmat = np.zeros((nParcels,nParcels))
qmat[triu_ind] = tmpq[0:len(triu_ind[0])]
qmat[tril_ind] = tmpq[len(tril_ind[0]):]
df_stats[ruledim]['q'] = qmat
np.fill_diagonal(df_stats[ruledim]['q'],1)
Explanation: 1.0 Run Region-to-region information transfer mapping
Due to computational constraints, all region-to-region activity flow mapping procedures and the RSA analyses were run on the supercomputer using MATLAB scripts in ./SupercomputerScripts/Fig6_RegionToRegionInformationTransferMapping/
2.0 Construct information transfer mapping matrix
End of explanation
# Visualize Unthresholded and thresholded side-by-side
order = networkorder
order.shape = (len(networkorder),1)
for ruledim in ruledims:
# Unthresholded t-stat map
plt.figure(figsize=(12,10))
plt.subplot(121)
# First visualize unthresholded
mat = df_stats[ruledim]['t'][order,order.T]
ind = np.isnan(mat)
mat[ind] = 0
pos = mat > 0
mat = np.multiply(pos,mat)
norm = MidpointNormalize(midpoint=0)
plt.imshow(mat,origin='lower',vmin=0, norm=norm, interpolation='none',cmap='seismic')
plt.colorbar(fraction=0.046)
plt.title('Unthresholded T-stat Map\nInformation Transfer Estimates\n' + ruledim,
fontsize=16,y=1.04)
plt.xlabel('Target Regions',fontsize=12)
plt.ylabel('Source Regions', fontsize=12)
plt.xticks(xticks.keys(),xticks.values(), rotation=-45)
plt.yticks(xticks.keys(),xticks.values())
plt.grid(linewidth=1)
# plt.tight_layout()
# Thresholded T-stat map
plt.subplot(122)
# First visualize unthresholded
mat = df_stats[ruledim]['t']
thresh = df_stats[ruledim]['q'] < 0.05
mat = np.multiply(mat,thresh)
mat = mat[order,order.T]
ind = np.isnan(mat)
mat[ind]=0
pos = mat > 0
mat = np.multiply(pos,mat)
norm = MidpointNormalize(midpoint=0)
plt.imshow(mat,origin='lower',norm=norm,vmin = 0,interpolation='none',cmap='seismic')
plt.colorbar(fraction=0.046)
plt.title('FDR-Thresholded T-stat Map\nInformation Transfer Estimates\n ' + ruledim,
fontsize=16, y=1.04)
plt.xlabel('Target Regions',fontsize=12)
plt.ylabel('Source Regions', fontsize=12)
plt.xticks(xticks.keys(),xticks.values(), rotation=-45)
plt.yticks(xticks.keys(),xticks.values())
plt.grid(linewidth=1)
plt.tight_layout()
# plt.savefig('Fig4b_Connectome_ActFlowRSA_TstatMap_MatchVMismatch_' + ruledim + '.pdf')
Explanation: 2.1 Visualize Information transfer mapping matrices (Threshold and Unthresholded)
End of explanation
networks = networkmappings.keys()
regions_actflowTO = {}
regions_actflowFROM = {}
for ruledim in ruledims:
thresh = df_stats[ruledim]['q'] < 0.05
regions_actflowFROM[ruledim] = np.nanmean(thresh,axis=1)*100.0
regions_actflowTO[ruledim] = np.nanmean(thresh,axis=0)*100.0
# # Convert to dataframe
# plt.figure()
# plt.bar(np.arange(nParcels),regions_actflow[ruledim],align='center')
# plt.title('Percent of Significant ActFlow FROM each region', fontsize=16)
# plt.ylabel('Percent of Significant ActFlow\nTo Other Regions', fontsize=12)
# plt.xlabel('Regions', fontsize=12)
# plt.tight_layout()
# Save these arrays to a file
savearrayTO = np.zeros((len(glasser2),len(ruledims)+1))
savearrayFROM = np.zeros((len(glasser2),len(ruledims)+1))
rulecount = 0
for ruledim in ruledims:
for roi in range(1,nParcels+1):
parcel_ind = np.where(glasser2==roi)[0]
# Compute map of all rule dimension for rule general actflow
if rulecount < 3:
savearrayTO[parcel_ind,rulecount] = regions_actflowTO[ruledim][roi-1].astype('double')
savearrayFROM[parcel_ind,rulecount] = regions_actflowFROM[ruledim][roi-1].astype('double')
rulecount += 1
to_avg = savearrayTO[:,0:3] > 0
# Create conjunction map
to_avg = np.mean(to_avg,axis=1)
to_avg = (to_avg == 1)
savearrayTO[:,3] = to_avg
from_avg = savearrayFROM[:,0:3] > 0
from_avg = np.mean(from_avg,axis=1)
from_avg = (from_avg == 1)
savearrayFROM[:,3] = from_avg
outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript6andS2and7_RegionToRegionITE/'
filename = 'PercentOfRegionsSignificantActFlowFROM_FDR.csv'
np.savetxt(outdir + filename,savearrayFROM,fmt='%s')
wb_file = 'PercentOfRegionsSignificantActFlowFROM_FDR.dscalar.nii'
wb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'
os.system(wb_command)
outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript6andS2and7_RegionToRegionITE/'
filename = 'PercentOfRegionsSignificantActFlowTO_FDR.csv'
np.savetxt(outdir + filename,savearrayTO,fmt='%s')
wb_file = 'PercentOfRegionsSignificantActFlowTO_FDR.dscalar.nii'
wb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'
os.system(wb_command)
Explanation: 2.2 Compute the regions with the most information transfers TO and FROM
End of explanation
## Load in NM3 Data
ruledims = ['logic','sensory','motor']
datadir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript6andS2and7_RegionToRegionITE/'
# Load in RSA matrices
iteMats = {}
df_stats = {}
for ruledim in ruledims:
iteMats[ruledim] = np.zeros((nParcels,nParcels,len(subjNums)))
df_stats[ruledim] = {}
scount = 0
for subj in subjNums:
filename = datadir +subj+'_' + ruledim + '_RegionToRegionActFlowGlasserParcels.csv'
iteMats[ruledim][:,:,scount] = np.loadtxt(filename, delimiter=',')
scount += 1
pt = reload(pt)
fwe_Ts = np.zeros((nParcels,nParcels,len(ruledims)))
fwe_Ps = np.zeros((nParcels,nParcels,len(ruledims)))
# Obtain indices for multiple comparisons
indices = np.ones((nParcels,nParcels))
np.fill_diagonal(indices,0)
notnan_ind = np.isnan(iteMats['logic'][:,:,0])==False
indices = np.multiply(indices,notnan_ind)
flatten_ind = np.where(indices==1)
rulecount = 0
for ruledim in ruledims:
# tmpcor = np.arctanh(corrMats[ruledim][flatten_ind[0],flatten_ind[1],:])
# tmperr = np.arctanh(errMats[ruledim][flatten_ind[0],flatten_ind[1],:])
t, p = pt.permutationFWE(iteMats[ruledim][flatten_ind[0],flatten_ind[1],:], permutations=1000, nproc=15)
fwe_Ts[flatten_ind[0],flatten_ind[1],rulecount] = t
fwe_Ps[flatten_ind[0],flatten_ind[1],rulecount] = 1.0 - p
rulecount += 1
Explanation: 3.0 Compute FWE-corrected results (as opposed to FDR)
End of explanation
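To make the contrast between the two correction schemes concrete, here is a small illustrative sketch (not part of the original analysis) applying the FDR correction used in Section 2 and a simple Bonferroni-style family-wise correction to a toy set of p-values.
# Illustrative only (not part of the original analysis): FDR vs. a simple
# Bonferroni family-wise correction on a toy set of p-values.
toy_p = np.array([0.001, 0.008, 0.02, 0.04, 0.10])
fdr_q = mc.fdrcorrection0(toy_p)[1]           # Benjamini-Hochberg adjusted p-values
bonf_p = np.minimum(toy_p * len(toy_p), 1.0)  # Bonferroni-adjusted p-values
print('FDR-adjusted p-values: ' + str(fdr_q))
print('Bonferroni-adjusted p-values: ' + str(bonf_p))
# Family-wise control (Bonferroni, or the permutation max-statistic approach used in
# this section) is more conservative than FDR, so fewer connections survive thresholding.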
pthresh = .05
# Visualize FWER-corrected T-statistic map
order = networkorder
order.shape = (len(networkorder),1)
rulecount = 0
for ruledim in ruledims:
# Thresholded T-stat map
plt.figure()
# First visualize unthresholded
mat = fwe_Ts[:,:,rulecount]
thresh = fwe_Ps[:,:,rulecount] < pthresh
mat = np.multiply(mat,thresh)
mat = mat[order,order.T]
ind = np.isnan(mat)
mat[ind]=0
pos = mat > 0
mat = np.multiply(pos,mat)
norm = MidpointNormalize(midpoint=0)
plt.imshow(mat,origin='lower',norm=norm,vmin = 0,interpolation='none',cmap='seismic')
plt.colorbar(fraction=0.046)
plt.title('FWE-corrected T-statistic Map\nInformation Transfer Estimates\n ' + ruledim,
fontsize=16, y=1.04)
plt.xlabel('Target Regions',fontsize=12)
plt.ylabel('Source Regions', fontsize=12)
plt.xticks(xticks.keys(),xticks.values(), rotation=-45)
plt.yticks(xticks.keys(),xticks.values())
plt.grid(linewidth=1)
plt.tight_layout()
# plt.savefig('Fig6_RegionITE_TstatMap' + ruledim + '_FWER.pdf')
rulecount += 1
Explanation: 3.1 Visualize information transfer mapping matrices (FWE-Threshold and Unthresholded)
End of explanation
networks = networkmappings.keys()
regions_actflowTO = {}
regions_actflowFROM = {}
rulecount = 0
for ruledim in ruledims:
thresh = fwe_Ps[:,:,rulecount] > pthresh
regions_actflowFROM[ruledim] = np.nanmean(thresh,axis=1)*100.0
regions_actflowTO[ruledim] = np.nanmean(thresh,axis=0)*100.0
rulecount += 1
# Save these arrays to a file
savearrayTO = np.zeros((len(glasser2),len(ruledims)+1))
savearrayFROM = np.zeros((len(glasser2),len(ruledims)+1))
rulecount = 0
for ruledim in ruledims:
for roi in range(1,nParcels+1):
parcel_ind = np.where(glasser2==roi)[0]
# Compute map of all rule dimension for rule general actflow
if rulecount < 3:
savearrayTO[parcel_ind,rulecount] = regions_actflowTO[ruledim][roi-1].astype('double')
savearrayFROM[parcel_ind,rulecount] = regions_actflowFROM[ruledim][roi-1].astype('double')
rulecount += 1
to_avg = savearrayTO[:,0:3] > 0
# Create conjunction map
to_avg = np.mean(to_avg,axis=1)
to_avg = (to_avg == 1)
savearrayTO[:,3] = to_avg
from_avg = savearrayFROM[:,0:3] > 0
from_avg = np.mean(from_avg,axis=1)
from_avg = (from_avg == 1)
savearrayFROM[:,3] = from_avg
outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript6andS2and7_RegionToRegionITE/'
filename = 'PercentOfRegionsSignificantActFlowFROM_FWER.csv'
np.savetxt(outdir + filename,savearrayFROM,fmt='%s')
wb_file = 'PercentOfRegionsSignificantActFlowFROM_FWER.dscalar.nii'
wb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'
os.system(wb_command)
outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript6andS2and7_RegionToRegionITE/'
filename = 'PercentOfRegionsSignificantActFlowTO_FWER.csv'
np.savetxt(outdir + filename,savearrayTO,fmt='%s')
wb_file = 'PercentOfRegionsSignificantActFlowTO_FWER.dscalar.nii'
wb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'
os.system(wb_command)
Explanation: 3.2 Compute the regions with the most information transfers TO and FROM
End of explanation |
9,840 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression
In this example we will see how to classify images as horses or people using logistic regression. The tutorial builds upon the concepts introduced in the Objax basics tutorial. Consider reading that tutorial first, or you can always go back.
Imports
First, we import the modules we will use in our code.
Step2: Loading the Dataset
Next, we will load the "horses_or_humans" dataset from TensorFlow DataSets.
The prepare method downscales the image by 3x to reduce training time, flattens each image to a vector, and rescales each pixel value to [-1,1].
Step3: Visualizing the Data
Let's see a couple of the images in the dataset and their corresponding labels. Note that label 0 corresponds to a horse, while label 1 corresponds to a human.
Step4: Model Definition
objax.nn.Linear(ndim, 1) is a linear neural unit with ndim inputs and a single output. Given input $\mathbf{X}$, the output is equal to $\mathbf{W}\mathbf{X} + \mathbf{b}$ where $\mathbf{W}, \mathbf{b}$ are the model's parameters. These parameters are available through model.vars()
Step5: Model Inference
Now that we have defined the model, we can use it to classify images. To do so, we call the model with an image from the train dataset we previously prepared. Notice that we use the image of a human we previously visualized.
We get the output of the model by calling model(). We then apply the sigmoid activation function and round the output. Activation outputs lower than or equal to 0.5 are rounded to zero (i.e., horses) whereas outputs larger than 0.5 are rounded to one (i.e., humans).
Step6: Considering that we initialized the model with random weights, it should not come as a surprise that the model may misclassify a human as a horse.
Optimizer and Loss Function
In this example we use the objax.optimizer.SGD optimizer. Next, we define the loss function we will use to optimize the network. In this case we use the cross entropy loss function. Note that we use objax.functional.loss.sigmoid_cross_entropy_logits because we perform binary classification.
Step7: Back Propagation and Gradient Descent
objax.GradValues calculates the gradient of loss wrt model.vars(). If you want to learn more about gradients read the Understanding Gradients in-depth topic.
The train_op function implements the core of backward propagation and gradient descent. First, we calculate the gradient g and then pass it to the optimizer which updates the model’s weights.
Step8: Training and Evaluation Loop
For each of the training epochs we process all the training data, contained in the train dictionary, in batches of batch size. At the end of each epoch we compute the classification accuracy by comparing the model’s predictions over the test data to the ground truth labels.
Step9: Model Inference After Training
Now that the network is trained we can retry classification example above | Python Code:
%pip --quiet install objax
import matplotlib.pyplot as plt
import os
import numpy as np
import tensorflow_datasets as tfds
import objax
from objax.util import EasyDict
Explanation: Logistic Regression
In this example we will see how to classify images as horses or people using logistic regression. The tutorial builds upon the concepts introduced in the Objax basics tutorial. Consider reading that tutorial first, or you can always go back.
Imports
First, we import the modules we will use in our code.
End of explanation
# Data: train has 1027 images - test has 256 images
# Each image is 300 x 300 x 3 bytes
DATA_DIR = os.path.join(os.environ['HOME'], 'TFDS')
data = tfds.as_numpy(tfds.load(name='horses_or_humans', batch_size=-1, data_dir=DATA_DIR))
def prepare(x, downscale=3):
Normalize images to [-1, 1] and downscale them to 100x100x3 (for faster training) and flatten them.
s = x.shape
x = x.astype('f').reshape((s[0], s[1] // downscale, downscale, s[2] // downscale, downscale, s[3]))
return x.mean((2, 4)).reshape((s[0], -1)) * (1 / 127.5) - 1
train = EasyDict(image=prepare(data['train']['image']), label=data['train']['label'])
test = EasyDict(image=prepare(data['test']['image']), label=data['test']['label'])
ndim = train.image.shape[-1]
del data
Explanation: Loading the Dataset
Next, we will load the "horses_or_humans" dataset from TensorFlow DataSets.
The prepare method downscales the image by 3x to reduce training time, flattens each image to a vector, and rescales each pixel value to [-1,1].
End of explanation
#sample image of a horse.
horse_image = np.reshape(train.image[0], [100,100,3])
plt.imshow(horse_image)
print("label for horse_image:", train.label[0])
#sample image of a human.
human_image = np.reshape(train.image[9], [100, 100, 3])
plt.imshow(human_image)
print("label for human_image:", train.label[9])
Explanation: Visualizing the Data
Let's see a couple of the images in the dataset and their corresponding labels. Note that label 0 corresponds to a horse, while label 1 corresponds to a human.
End of explanation
# Settings
lr = 0.0001 # learning rate
batch = 256
epochs = 20
model = objax.nn.Linear(ndim, 1)
print(model.vars())
Explanation: Model Definition
objax.nn.Linear(ndim, 1) is a linear neural unit with ndim inputs and a single output. Given input $\mathbf{X}$, the output is equal to $\mathbf{W}\mathbf{X} + \mathbf{b}$ where $\mathbf{W}, \mathbf{b}$ are the model's parameters. These parameters are available through model.vars()
End of explanation
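To connect the formula above to the data, here is a small NumPy-only sketch of the same affine map (illustrative; calling model(x) performs this computation on the model's own trainable variables).
# Illustrative sketch: a linear unit maps a batch X of shape (N, ndim) to
# X @ w + b. With zero weights and bias the output is all zeros, but the
# shapes show what model(x) produces.
w_demo = np.zeros((ndim, 1))
b_demo = np.zeros(1)
y_demo = train.image[:2] @ w_demo + b_demo
print(y_demo.shape)  # (2, 1)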
# This is an image of a human.
print(np.round(objax.functional.sigmoid(model(train.image[9]))))
Explanation: Model Inference
Now that we have defined the model, we can use it to classify images. To do so, we call the model with an image from the train dataset we previously prepared. Notice that we use the image of a human we previously visualized.
We get the output of the model by calling model(). We then apply the sigmoid activation function and round the output. Activation outputs lower than or equal to 0.5 are rounded to zero (i.e., horses) whereas outputs larger than 0.5 are rounded to one (i.e., humans).
End of explanation
opt = objax.optimizer.SGD(model.vars())
# Cross Entropy Loss
def loss(x, label):
return objax.functional.loss.sigmoid_cross_entropy_logits(model(x)[:, 0], label).mean()
Explanation: Considering that we initialized the model with random weights, it should not come as a surprise that the model may misclassify a human as a horse.
Optimizer and Loss Function
In this example we use the objax.optimizer.SGD optimizer. Next, we define the loss function we will use to optimize the network. In this case we use the cross entropy loss function. Note that we use objax.functional.loss.sigmoid_cross_entropy_logits because we perform binary classification.
End of explanation
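As an aside, the quantity that sigmoid_cross_entropy_logits computes can be written out in plain NumPy; the sketch below is only for intuition and is not part of the training code.
# For intuition only: numerically stable sigmoid cross-entropy with logits z and
# labels y is max(z, 0) - z*y + log(1 + exp(-|z|)).
def manual_sigmoid_xe(z, y):
    return np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z)))
print(manual_sigmoid_xe(np.array([2.0, -1.0]), np.array([1.0, 0.0])))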
gv = objax.GradValues(loss, model.vars())
def train_op(x, label):
g, v = gv(x, label) # returns gradients, loss
opt(lr, g)
return v
# This line is optional: it is compiling the code to make it faster.
train_op = objax.Jit(train_op, gv.vars() + opt.vars())
Explanation: Back Propagation and Gradient Descent
objax.GradValues calculates the gradient of loss wrt model.vars(). If you want to learn more about gradients read the Understanding Gradients in-depth topic.
The train_op function implements the core of backward propagation and gradient descent. First, we calculate the gradient g and then pass it to the optimizer which updates the model’s weights.
End of explanation
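For intuition, the update performed by opt(lr, g) is ordinary stochastic gradient descent; the toy NumPy step below (not Objax code) shows the rule applied to a single parameter vector.
# Toy illustration of one SGD step (plain NumPy, not Objax): each parameter is
# moved against its gradient, scaled by the learning rate, i.e. w <- w - lr * g.
w_example = np.array([0.5, -0.3])
g_example = np.array([0.1, -0.2])
print(w_example - lr * g_example)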
for epoch in range(epochs):
# Train
avg_loss = 0
# randomly shuffle training data
shuffle_idx = np.random.permutation(train.image.shape[0])
for it in range(0, train.image.shape[0], batch):
sel = shuffle_idx[it: it + batch]
avg_loss += float(train_op(train.image[sel], train.label[sel])[0]) * len(sel)
avg_loss /= it + batch
# Eval
accuracy = 0
for it in range(0, test.image.shape[0], batch):
x, y = test.image[it: it + batch], test.label[it: it + batch]
accuracy += (np.round(objax.functional.sigmoid(model(x)))[:, 0] == y).sum()
accuracy /= test.image.shape[0]
print('Epoch %04d Loss %.2f Accuracy %.2f' % (epoch + 1, avg_loss, 100 * accuracy))
Explanation: Training and Evaluation Loop
For each of the training epochs we process all the training data, contained in the train dictionary, in batches of batch size. At the end of each epoch we compute the classification accuracy by comparing the model’s predictions over the test data to the ground truth labels.
End of explanation
print(np.round(objax.functional.sigmoid(model(train.image[9]))))
Explanation: Model Inference After Training
Now that the network is trained we can retry classification example above:
End of explanation |
9,841 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling and Simulation in Python
Chapter 1
Copyright 2020 Allen Downey
License
Step1: The first time you run this on a new installation of Python, it might produce a warning message in pink. That's probably ok, but if you get a message that says modsim.py depends on Python 3.7 features, that means you have an older version of Python, and some features in modsim.py won't work correctly.
If you need a newer version of Python, I recommend installing Anaconda. You'll find more information in the preface of the book.
You can find out what version of Python and Jupyter you have by running the following cells.
Step2: Configuring Jupyter
The following cell
Step3: The penny myth
The following cells contain code from the beginning of Chapter 1.
modsim defines UNITS, which contains variables representing pretty much every unit you've ever heard of. It uses Pint, which is a Python library that provides tools for computing with units.
The following lines create new variables named meter and second.
Step4: To find out what other units are defined, type UNITS. (including the period) in the next cell and then press TAB. You should see a pop-up menu with a list of units.
Create a variable named a and give it the value of acceleration due to gravity.
Step5: Create t and give it the value 4 seconds.
Step6: Compute the distance a penny would fall after t seconds with constant acceleration a. Notice that the units of the result are correct.
Step7: Exercise
Step8: Exercise
Step9: The error messages you get from Python are big and scary, but if you read them carefully, they contain a lot of useful information.
Start from the bottom and read up.
The last line usually tells you what type of error happened, and sometimes additional information.
The previous lines are a "traceback" of what was happening when the error occurred. The first section of the traceback shows the code you wrote. The following sections are often from Python libraries.
In this example, you should get a DimensionalityError, which is defined by Pint to indicate that you have violated a rule of dimensional analysis
Step10: Compute the time it would take a penny to fall, assuming constant acceleration.
$ a t^2 / 2 = h $
$ t = \sqrt{2 h / a}$
Step11: Given t, we can compute the velocity of the penny when it lands.
$v = a t$
Step12: We can convert from one set of units to another like this
Step13: Exercise
Step14: Exercise | Python Code:
try:
import pint
except ImportError:
!pip install pint
import pint
try:
from modsim import *
except ImportError:
!pip install modsimpy
from modsim import *
Explanation: Modeling and Simulation in Python
Chapter 1
Copyright 2020 Allen Downey
License: Creative Commons Attribution 4.0 International
Jupyter
Welcome to Modeling and Simulation, welcome to Python, and welcome to Jupyter.
This is a Jupyter notebook, which is a development environment where you can write and run Python code. Each notebook is divided into cells. Each cell contains either text (like this cell) or Python code.
Selecting and running cells
To select a cell, click in the left margin next to the cell. You should see a blue frame surrounding the selected cell.
To edit a code cell, click inside the cell. You should see a green frame around the selected cell, and you should see a cursor inside the cell.
To edit a text cell, double-click inside the cell. Again, you should see a green frame around the selected cell, and you should see a cursor inside the cell.
To run a cell, hold down SHIFT and press ENTER.
If you run a text cell, Jupyter formats the text and displays the result.
If you run a code cell, Jupyter runs the Python code in the cell and displays the result, if any.
To try it out, edit this cell, change some of the text, and then press SHIFT-ENTER to format it.
Adding and removing cells
You can add and remove cells from a notebook using the buttons in the toolbar and the items in the menu, both of which you should see at the top of this notebook.
Try the following exercises:
From the Insert menu select "Insert cell below" to add a cell below this one. By default, you get a code cell, as you can see in the pulldown menu that says "Code".
In the new cell, add a print statement like print('Hello'), and run it.
Add another cell, select the new cell, and then click on the pulldown menu that says "Code" and select "Markdown". This makes the new cell a text cell.
In the new cell, type some text, and then run it.
Use the arrow buttons in the toolbar to move cells up and down.
Use the cut, copy, and paste buttons to delete, add, and move cells.
As you make changes, Jupyter saves your notebook automatically, but if you want to make sure, you can press the save button, which looks like a floppy disk from the 1990s.
Finally, when you are done with a notebook, select "Close and Halt" from the File menu.
Using the notebooks
The notebooks for each chapter contain the code from the chapter along with additional examples, explanatory text, and exercises. I recommend you
Read the chapter first to understand the concepts and vocabulary,
Run the notebook to review what you learned and see it in action, and then
Attempt the exercises.
If you try to work through the notebooks without reading the book, you're gonna have a bad time. The notebooks contain some explanatory text, but it is probably not enough to make sense if you have not read the book. If you are working through a notebook and you get stuck, you might want to re-read (or read!) the corresponding section of the book.
Installing modules
These notebooks use standard Python modules like NumPy and SciPy. I assume you already have them installed in your environment.
They also use two less common modules: Pint, which provides units, and modsim, which contains code I wrote specifically for this book.
The following cells check whether you have these modules already and tries to install them if you don't.
End of explanation
!python --version
!jupyter-notebook --version
Explanation: The first time you run this on a new installation of Python, it might produce a warning message in pink. That's probably ok, but if you get a message that says modsim.py depends on Python 3.7 features, that means you have an older version of Python, and some features in modsim.py won't work correctly.
If you need a newer version of Python, I recommend installing Anaconda. You'll find more information in the preface of the book.
You can find out what version of Python and Jupyter you have by running the following cells.
End of explanation
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
Explanation: Configuring Jupyter
The following cell:
Uses a Jupyter "magic command" to specify whether figures should appear in the notebook, or pop up in a new window.
Configures Jupyter to display some values that would otherwise be invisible.
Select the following cell and press SHIFT-ENTER to run it.
End of explanation
meter = UNITS.meter
second = UNITS.second
Explanation: The penny myth
The following cells contain code from the beginning of Chapter 1.
modsim defines UNITS, which contains variables representing pretty much every unit you've ever heard of. It uses Pint, which is a Python library that provides tools for computing with units.
The following lines create new variables named meter and second.
End of explanation
a = 9.8 * meter / second**2
Explanation: To find out what other units are defined, type UNITS. (including the period) in the next cell and then press TAB. You should see a pop-up menu with a list of units.
Create a variable named a and give it the value of acceleration due to gravity.
End of explanation
t = 4 * second
Explanation: Create t and give it the value 4 seconds.
End of explanation
a * t**2 / 2
Explanation: Compute the distance a penny would fall after t seconds with constant acceleration a. Notice that the units of the result are correct.
End of explanation
# Solution
a * t
Explanation: Exercise: Compute the velocity of the penny after t seconds. Check that the units of the result are correct.
End of explanation
# Solution
# a + t
Explanation: Exercise: Why would it be nonsensical to add a and t? What happens if you try?
End of explanation
h = 381 * meter
Explanation: The error messages you get from Python are big and scary, but if you read them carefully, they contain a lot of useful information.
Start from the bottom and read up.
The last line usually tells you what type of error happened, and sometimes additional information.
The previous lines are a "traceback" of what was happening when the error occurred. The first section of the traceback shows the code you wrote. The following sections are often from Python libraries.
In this example, you should get a DimensionalityError, which is defined by Pint to indicate that you have violated a rule of dimensional analysis: you cannot add quantities with different dimensions.
Before you go on, you might want to delete the erroneous code so the notebook can run without errors.
Falling pennies
Now let's solve the falling penny problem.
Set h to the height of the Empire State Building:
End of explanation
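If you would rather see the error without interrupting the notebook, one option (a sketch, not part of the book's code) is to catch Pint's exception explicitly; the exception class is assumed here to be exposed as pint.DimensionalityError.
# A sketch (not part of the book's code): catching Pint's dimensionality error so
# the notebook keeps running. The class name pint.DimensionalityError is assumed.
import pint
try:
    a + t
except pint.DimensionalityError as error:
    print('Cannot add quantities with different dimensions:', error)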
t = sqrt(2 * h / a)
Explanation: Compute the time it would take a penny to fall, assuming constant acceleration.
$ a t^2 / 2 = h $
$ t = \sqrt{2 h / a}$
End of explanation
v = a * t
Explanation: Given t, we can compute the velocity of the penny when it lands.
$v = a t$
End of explanation
mile = UNITS.mile
hour = UNITS.hour
v.to(mile/hour)
Explanation: We can convert from one set of units to another like this:
End of explanation
# Solution
foot = UNITS.foot
pole_height = 10 * foot
h + pole_height
# Solution
pole_height + h
Explanation: Exercise: Suppose you bring a 10 foot pole to the top of the Empire State Building and use it to drop the penny from h plus 10 feet.
Define a variable named foot that contains the unit foot provided by UNITS. Define a variable named pole_height and give it the value 10 feet.
What happens if you add h, which is in units of meters, to pole_height, which is in units of feet? What happens if you write the addition the other way around?
End of explanation
# Solution
v_terminal = 18 * meter / second
# Solution
t1 = v_terminal / a
print('Time to reach terminal velocity', t1)
# Solution
h1 = a * t1**2 / 2
print('Height fallen in t1', h1)
# Solution
t2 = (h - h1) / v_terminal
print('Time to fall remaining distance', t2)
# Solution
t_total = t1 + t2
print('Total falling time', t_total)
Explanation: Exercise: In reality, air resistance limits the velocity of the penny. At about 18 m/s, the force of air resistance equals the force of gravity and the penny stops accelerating.
As a simplification, let's assume that the acceleration of the penny is a until the penny reaches 18 m/s, and then 0 afterwards. What is the total time for the penny to fall 381 m?
You can break this question into three parts:
How long until the penny reaches 18 m/s with constant acceleration a.
How far would the penny fall during that time?
How long to fall the remaining distance with constant velocity 18 m/s?
Suggestion: Assign each intermediate result to a variable with a meaningful name. And assign units to all quantities!
End of explanation |
9,842 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read and dump calin and ACTL header and event
calin/examples/iact_data/read and dump calin and ACTL raw header and event from zfits file.ipynb - Stephen Fegan - 2017-03-09
Copyright 2017, Stephen Fegan sfegan@llr.in2p3.fr
Laboratoire Leprince-Ringuet, CNRS/IN2P3, Ecole Polytechnique, Institut Polytechnique de Paris
This file is part of "calin". "calin" is free software
Step1: 1 - Create NectarCam data source
The serialized raw data is included with the calin data structure only if the include_serialized_raw_data option is set in the decoder configuration, as shown below.
Step2: 2 - Read header and print JSON string
Step3: 3 - Extract serialized ACTL header, deserialize it and print JSON string
Step4: 4 - Read first calin event and print JSON string
Step5: 5 - Plot waveforms for this event
Step6: 6 - Extract serialized ACTL event, deserialize it and print JSON string
Step7: 7 - Use Python JSON decoder to convert string to Python structure
Print keys of the JSON dictionary
Step8: 8 - Convert samples data to integer array
The bytes array from the ACTL Protobuf structure is encoded as a base64 string in the JSON; for details see the mapping guide below | Python Code:
%pylab inline
import calin.iact_data.raw_actl_event_data_source
import calin.iact_data.telescope_data_source
import json
import struct
import base64
Explanation: Read and dump calin and ACTL header and event
calin/examples/iact_data/read and dump calin and ACTL raw header and event from zfits file.ipynb - Stephen Fegan - 2017-03-09
Copyright 2017, Stephen Fegan sfegan@llr.in2p3.fr
Laboratoire Leprince-Ringuet, CNRS/IN2P3, Ecole Polytechnique, Institut Polytechnique de Paris
This file is part of "calin". "calin" is free software: you can redistribute it and/or modify it under the
terms of the GNU General Public License version 2 or later, as published by
the Free Software Foundation. "calin" is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
Introduction
calin provides some level of access to the raw ACTL header and event protobuf structures. In Python it is possible to retrieve the header and any desired event from ZFits but calin does not provide simple access to the fields in the protobuf in the usual way. However the messages can be serialized to JSON and then accessed through the Python JSON packages if desired. This is only really intended for debugging.
This example shows how both the calin and ACTL data structures can be retrieved at the same time.
End of explanation
decoder_cfg = calin.iact_data.telescope_data_source.NectarCamZFITSDataSource.default_decoder_config()
decoder_cfg.set_include_serialized_raw_data(True)
src = calin.iact_data.telescope_data_source.NectarCamZFITSDataSource(
'/CTA/cta.cppm.in2p3.fr/NectarCAM/20161207/Run0268.1.fits.fz', decoder_cfg)
Explanation: 1 - Create NectarCam data source
The serialized raw data is included with the calin data structure only if the include_serialized_raw_data option is set in the decoder configuration, as shown below.
End of explanation
calin_header = src.get_run_configuration()
print(calin_header.SerializeAsJSON())
Explanation: 2 - Read header and print JSON string
End of explanation
cta_serialized_header = calin_header.serialized_raw_header()
cta_header = calin.iact_data.raw_actl_event_data_source.CameraRunHeader()
cta_header.ParseFromString(cta_serialized_header)
cta_str_header = cta_header.SerializeAsJSON();
print(cta_str_header)
Explanation: 3 - Extract serialized ACTL header, deserialize it and print JSON string
End of explanation
calin_event = src.simple_get_next()
calin_str_event = calin_event.SerializeAsJSON();
print(calin_str_event)
Explanation: 4 - Read first calin event and print JSON string
End of explanation
high_gain_wfs = calin_event.high_gain_image().camera_waveforms();
for ichan, chan_id in enumerate(high_gain_wfs.channel_id()):
plot(high_gain_wfs.waveform(ichan).samples())
xlabel('Sample number')
ylabel('Sample amplitude [DC]')
Explanation: 5 - Plot waveforms for this event
End of explanation
cta_serialized_event = calin_event.serialized_raw_event()
cta_event = calin.iact_data.raw_actl_event_data_source.CameraEvent()
cta_event.ParseFromString(cta_serialized_event)
cta_str_event = cta_event.SerializeAsJSON();
print(cta_str_event)
Explanation: 6 - Extract serialized ACTL event, deserialize it and print JSON string
End of explanation
json_event = json.loads(cta_str_event)
json_event.keys()
Explanation: 7 - Use Python JSON decoder to convert string to Python structure
Print keys of the JSON dictionary
End of explanation
nsamples = json_event['hiGain']['waveforms']['numSamples']
samples_data = base64.b64decode(json_event['hiGain']['waveforms']['samples']['data'])
samples = frombuffer(samples_data,dtype='int16').reshape([-1,nsamples])
plot(samples.transpose())
xlabel('Sample number')
ylabel('Sample amplitude [DC]')
Explanation: 8 - Convert samples data to integer array
The bytes array from the ACTL Protobuf structure is encoded as a base64 string in the JSON; for details see the mapping guide below:
https://developers.google.com/protocol-buffers/docs/proto3#json
End of explanation |
9,843 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nearest neighbor classification
Arguably the simplest classification method.
We are given example input vectors $x_i$ and corresponding class labels $c_i$ for $i=1,\dots, N$.
The collection of pairs ${x_i, c_i}$ for $i=1\dots N$ is called a data set.
Just store the dataset and, for a new observed point $x$, find its nearest neighbor $i^*$ and report $c_{i^*}$
$$
i^* = \arg\min_{i=1\dots N} D(x_i, x)
$$
KNN
Step1: The choice of the distance function (divergence) can be important. In practice, a popular choice is the Euclidean distance but this is by no means the only one.
Step2: Equal distance contours
Step3: http | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pylab as plt
df = pd.read_csv(u'data/iris.txt',sep=' ')
df
X = np.hstack([
np.matrix(df.sl).T,
np.matrix(df.sw).T,
np.matrix(df.pl).T,
np.matrix(df.pw).T])
print X[:5] # sample view
c = np.matrix(df.c).T
print c[:5]
Explanation: Nearest neighbor classification
Arguably the simplest classification method.
We are given example input vectors $x_i$ and corresponding class labels $c_i$ for $i=1,\dots, N$.
The collection of pairs ${x_i, c_i}$ for $i=1\dots N$ is called a data set.
Just store the dataset and, for a new observed point $x$, find its nearest neighbor $i^*$ and report $c_{i^*}$
$$
i^* = \arg\min_{i=1\dots N} D(x_i, x)
$$
KNN: K nearest neighbors
Find the $k$ nearest neighbors and take a majority vote among them.
End of explanation
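The cells below work through the 1-nearest-neighbour special case in detail; for completeness, a hedged sketch of the k>1 case with simple majority voting could look like the following.
# Sketch of k-NN with majority voting (illustration only; the cells below
# implement and evaluate the k=1 special case).
from collections import Counter
def knn_predict_sketch(A, labels, x, k=3):
    # squared Euclidean distance from x to every row of A
    d = np.sum(np.square(np.asarray(A) - np.asarray(x)), axis=1)
    nearest_k = np.argsort(d)[:k]
    votes = Counter(np.asarray(labels).ravel()[nearest_k])
    return votes.most_common(1)[0][0]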
def Divergence(x,y,p=2.):
e = np.array(x) - np.array(y)
if np.isscalar(p):
return np.sum(np.abs(e)**p)
else:
return np.sum(np.matrix(e)*p*np.matrix(e).T)
Divergence([0,0],[1,1],p=2)
W = np.matrix(np.diag([2,1]))
Divergence([0,0],[1,1],p=W)
W = np.matrix([[2,1],[1,2]])
Divergence([0,0],[1,1],p=W)
Explanation: The choice of the distance function (divergence) can be important. In practice, a popular choice is the Euclidean distance but this is by no means the only one.
End of explanation
%run plot_normballs.py
def nearest(A,x, p=2):
'''A: NxD data matrix, N - number of samples, D - the number of features
x: test vector
returns the distance and index of the the nearest neigbor
'''
N = A.shape[0]
d = np.zeros((N,1))
md = np.inf
for i in range(N):
d[i] = Divergence(A[i,:], x, p)
if d[i]<md:
md = d[i]
min_idx = i
return min_idx
def predict(A, c, X, p=2):
L = X.shape[0]
return [np.asscalar(c[nearest(A, X[i,:], p=p)]) for i in range(L)]
x_test = np.mat('[3.3, 2.5,5.5,1.7]')
#d, idx = distance(X, x_test, p=2)
cc = predict(X, c, x_test)
print(cc)
#float(c[idx])
def leave_one_out(A, c, p=2):
N = A.shape[0]
correct = 0
for j in range(N):
md = np.inf
for i in range(N):
if i != j:
d = Divergence(A[i,:], A[j,:], p=p)
if d<md:
md = d
min_idx = i
if c[min_idx] == c[j]:
correct += 1
accuracy = 1.*correct/N
return accuracy
leave_one_out(X, c, p=np.diag([1,1,1,1]))
Explanation: Equal distance contours
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets
n_neighbors = 3
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] + 0.02*np.random.randn(150,2) # we only take the first two features. We could
# avoid this ugly slicing by using a two-dim dataset
y = iris.target
h = .02 # step size in the mesh
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
weights='uniform'
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(X, y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(figsize=(8,8))
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (k = %i, weights = '%s')"
% (n_neighbors, weights))
plt.axis('equal')
plt.show()
Explanation: http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html
End of explanation |
9,844 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Writing Cython
In this notebook, we'll take a look at how to implement a simple function using Cython. The operation we'll implement is the first-order diff, which takes in an array of length $n$
Step1: Below is a simple implementation using pure Python (no NumPy). The %timeit magic command lets us see how long it takes the function to run on the 10,000-element array defined above.
Step2: Now use the exact same function body but add the %%cython magic at the top of the code cell. How much of a difference does simply pre-compiling make?
Step3: So it didn't make much of a difference. That's because Cython really shines when you specify data types. We do this by annotating the variables used in the function with cdef <type> .... Let's see how much this improves things.
Note
Step4: That made a huge difference! There are a couple more things we can do to speed up our diff implementation, including disabling some safety checks. The combination of disabling bounds checking (making sure you don't try to access an index of an array that doesn't exist) and disabling wraparound (disabling use of negative indices) can really improve things when we are sure neither condition will occur. Let's try that.
Step5: Finally, let's see how NumPy's diff performs for comparison. | Python Code:
import numpy as np
x = np.random.randn(10000)
Explanation: Writing Cython
In this notebook, we'll take a look at how to implement a simple function using Cython. The operation we'll implement is the first-order diff, which takes in an array of length $n$:
$$\mathbf{x} = \begin{bmatrix} x_1 \ x_2 \ \vdots \ x_n\end{bmatrix}$$
and returns the following:
$$\mathbf{y} = \begin{bmatrix} x_2 - x_1 \ x_3 - x_2 \ \vdots \ x_n - x_{n-1} \end{bmatrix}$$
First we'll import everything we'll need and generate some data to work with.
End of explanation
def py_diff(x):
n = x.size
y = np.zeros(n-1)
for i in range(n-1):
y[i] = x[i+1] - x[i]
return y
%timeit py_diff(x)
Explanation: Below is a simple implementation using pure Python (no NumPy). The %timeit magic command lets us see how long it takes the function to run on the 10,000-element array defined above.
End of explanation
%load_ext cython
%%cython
import numpy as np
def cy_diff_naive(x):
n = x.size
y = np.zeros(n-1)
for i in range(n-1):
y[i] = x[i+1] - x[i]
return y
%timeit cy_diff_naive(x)
Explanation: Now use the exact same function body but add the %%cython magic at the top of the code cell. How much of a difference does simply pre-compiling make?
End of explanation
%%cython
import numpy as np
def cy_diff(double[::1] x):
cdef int n = x.size
cdef double[::1] y = np.zeros(n-1)
cdef int i
for i in range(n-1):
y[i] = x[i+1] - x[i]
return y
%timeit cy_diff(x)
Explanation: So it didn't make much of a difference. That's because Cython really shines when you specify data types. We do this by annotating the variables used in the function with cdef <type> .... Let's see how much this improves things.
Note: array types (like for the input arg x) can be declared using the memoryview syntax double[::1] or using np.ndarray[cnp.float64_t, ndim=1].
End of explanation
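The note above mentions np.ndarray[...] typing as an alternative to the memoryview syntax. A hedged sketch of that variant is shown below; it needs cimport numpy as cnp, and depending on your setup the NumPy headers may need to be on the include path for the %%cython cell to compile.
%%cython
# Sketch of the alternative buffer typing mentioned in the note above; timings
# should be comparable to cy_diff. Requires the NumPy C headers at compile time.
import numpy as np
cimport numpy as cnp
def cy_diff_buf(cnp.ndarray[cnp.float64_t, ndim=1] x):
    cdef int n = x.size
    cdef cnp.ndarray[cnp.float64_t, ndim=1] y = np.zeros(n-1)
    cdef int i
    for i in range(n-1):
        y[i] = x[i+1] - x[i]
    return y
%timeit cy_diff_buf(x)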
%%cython
from cython import wraparound, boundscheck
import numpy as np
@boundscheck(False)
@wraparound(False)
def cy_diff2(double[::1] x):
cdef int n = x.size
cdef double[::1] y = np.zeros(n-1)
cdef int i
for i in range(n-1):
y[i] = x[i+1] - x[i]
return y
%timeit cy_diff2(x)
Explanation: That made a huge difference! There are a couple more things we can do to speed up our diff implementation, including disabling some safety checks. The combination of disabling bounds checking (making sure you don't try to access an index of an array that doesn't exist) and disabling wraparound (disabling use of negative indices) can really improve things when we are sure neither condition will occur. Let's try that.
End of explanation
def np_diff(x):
return np.diff(x)
%timeit np_diff(x)
Explanation: Finally, let's see how NumPy's diff performs for comparison.
End of explanation |
9,845 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Item-Based Collaborative Filtering
As before, we'll start by importing the MovieLens 100K data set into a pandas DataFrame
Step1: Now we'll pivot this table to construct a nice matrix of users and the movies they rated. NaN indicates missing data, or movies that a given user did not watch
Step2: Now the magic happens - pandas has a built-in corr() method that will compute a correlation score for every column pair in the matrix! This gives us a correlation score between every pair of movies (where at least one user rated both movies - otherwise NaN's will show up.) That's amazing!
Step3: However, we want to avoid spurious results that happened from just a handful of users that happened to rate the same pair of movies. In order to restrict our results to movies that lots of people rated together - and also give us more popular results that are more easily recognizable - we'll use the min_periods argument to throw out results where fewer than 100 users rated a given movie pair
Step4: Now let's produce some movie recommendations for user ID 0, who I manually added to the data set as a test case. This guy really likes Star Wars and The Empire Strikes Back, but hated Gone with the Wind. I'll extract his ratings from the userRatings DataFrame, and use dropna() to get rid of missing data (leaving me only with a Series of the movies I actually rated
Step5: Now, let's go through each movie I rated one at a time, and build up a list of possible recommendations based on the movies similar to the ones I rated.
So for each movie I rated, I'll retrieve the list of similar movies from our correlation matrix. I'll then scale those correlation scores by how well I rated the movie they are similar to, so movies similar to ones I liked count more than movies similar to ones I hated
Step6: This is starting to look like something useful! Note that some of the same movies came up more than once, because they were similar to more than one movie I rated. We'll use groupby() to add together the scores from movies that show up more than once, so they'll count more
Step7: The last thing we have to do is filter out movies I've already rated, as recommending a movie I've already watched isn't helpful | Python Code:
import pandas as pd
r_cols = ['user_id', 'movie_id', 'rating']
ratings = pd.read_csv('e:/sundog-consult/udemy/datascience/ml-100k/u.data', sep='\t', names=r_cols, usecols=range(3), encoding="ISO-8859-1")
m_cols = ['movie_id', 'title']
movies = pd.read_csv('e:/sundog-consult/udemy/datascience/ml-100k/u.item', sep='|', names=m_cols, usecols=range(2), encoding="ISO-8859-1")
ratings = pd.merge(movies, ratings)
ratings.head()
Explanation: Item-Based Collaborative Filtering
As before, we'll start by importing the MovieLens 100K data set into a pandas DataFrame:
End of explanation
userRatings = ratings.pivot_table(index=['user_id'],columns=['title'],values='rating')
userRatings.head()
Explanation: Now we'll pivot this table to construct a nice matrix of users and the movies they rated. NaN indicates missing data, or movies that a given user did not watch:
End of explanation
corrMatrix = userRatings.corr()
corrMatrix.head()
Explanation: Now the magic happens - pandas has a built-in corr() method that will compute a correlation score for every column pair in the matrix! This gives us a correlation score between every pair of movies (where at least one user rated both movies - otherwise NaN's will show up.) That's amazing!
End of explanation
corrMatrix = userRatings.corr(method='pearson', min_periods=100)
corrMatrix.head()
Explanation: However, we want to avoid spurious results that happened from just a handful of users that happened to rate the same pair of movies. In order to restrict our results to movies that lots of people rated together - and also give us more popular results that are more easily recognizable - we'll use the min_periods argument to throw out results where fewer than 100 users rated a given movie pair:
End of explanation
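To see what min_periods is protecting against, you can count how many users co-rated a particular pair of titles; the two titles below are just placeholders, so substitute column names that actually exist in userRatings.
# Sketch: number of users who rated both movies in a pair. min_periods=100 drops
# pairs where this count is under 100. The titles are placeholders - substitute
# real column names from userRatings.
movieA = 'Star Wars (1977)'
movieB = 'Empire Strikes Back, The (1980)'
bothRated = userRatings[movieA].notnull() & userRatings[movieB].notnull()
print bothRated.sum()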
myRatings = userRatings.loc[0].dropna()
myRatings
Explanation: Now let's produce some movie recommendations for user ID 0, whom I manually added to the data set as a test case. This guy really likes Star Wars and The Empire Strikes Back, but hated Gone with the Wind. I'll extract his ratings from the userRatings DataFrame, and use dropna() to get rid of missing data (leaving me only with a Series of the movies I actually rated):
End of explanation
simCandidates = pd.Series()
for i in range(0, len(myRatings.index)):
print "Adding sims for " + myRatings.index[i] + "..."
# Retrieve similar movies to this one that I rated
sims = corrMatrix[myRatings.index[i]].dropna()
# Now scale its similarity by how well I rated this movie
sims = sims.map(lambda x: x * myRatings[i])
# Add the score to the list of similarity candidates
simCandidates = simCandidates.append(sims)
#Glance at our results so far:
print "sorting..."
simCandidates.sort_values(inplace = True, ascending = False)
print simCandidates.head(10)
Explanation: Now, let's go through each movie I rated one at a time, and build up a list of possible recommendations based on the movies similar to the ones I rated.
So for each movie I rated, I'll retrieve the list of similar movies from our correlation matrix. I'll then scale those correlation scores by how well I rated the movie they are similar to, so movies similar to ones I liked count more than movies similar to ones I hated:
End of explanation
simCandidates = simCandidates.groupby(simCandidates.index).sum()
simCandidates.sort_values(inplace = True, ascending = False)
simCandidates.head(10)
Explanation: This is starting to look like something useful! Note that some of the same movies came up more than once, because they were similar to more than one movie I rated. We'll use groupby() to add together the scores from movies that show up more than once, so they'll count more:
End of explanation
filteredSims = simCandidates.drop(myRatings.index)
filteredSims.head(10)
Explanation: The last thing we have to do is filter out movies I've already rated, as recommending a movie I've already watched isn't helpful:
End of explanation |
9,846 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Evoked data structure
Step1: Creating Evoked objects from Epochs
Step2: You may have noticed that MNE informed us that "baseline correction" has been
applied. This happened automatically by during creation of the
~mne.Epochs object, but may also be initiated (or disabled!) manually
Step3: Basic visualization of Evoked objects
We can visualize the average evoked response for left-auditory stimuli using
the
Step4: Like the plot() methods for
Step5: To select based on time in seconds, the
Step6: Similarities among the core data structures
Step7: Notice that
Step8: If you want to load only some of the conditions present in a .fif file,
Step9: Above, when we created an
Step10: This can be remedied by either passing a baseline parameter to
Step11: Notice that
Step12: This approach will weight each epoch equally and create a single
Step13: However, this may not always be the case; if for statistical reasons it is
important to average the same number of epochs from different conditions,
you can use
Step14: Note that the nave attribute of the resulting ~mne.Evoked object will
reflect the effective number of averages, and depends on both the nave
attributes of the contributing ~mne.Evoked objects and the weights at
which they are combined. Keeping track of effective nave is important for
inverse imaging, because nave is used to scale the noise covariance
estimate (which in turn affects the magnitude of estimated source activity).
See minimum_norm_estimates for more information (especially the
whitening_and_scaling section). Note that mne.grand_average does
not adjust nave to reflect effective number of averaged epochs; rather
it simply sets nave to the number of evokeds that were averaged
together. For this reason, it is best to use mne.combine_evoked rather than
mne.grand_average if you intend to perform inverse imaging on the resulting | Python Code:
import os
import mne
Explanation: The Evoked data structure: evoked/averaged data
This tutorial covers the basics of creating and working with :term:evoked
data. It introduces the :class:~mne.Evoked data structure in detail,
including how to load, query, subselect, export, and plot data from an
:class:~mne.Evoked object. For info on creating an :class:~mne.Evoked
object from (possibly simulated) data in a :class:NumPy array
<numpy.ndarray>, see tut-creating-data-structures.
As usual we'll start by importing the modules we need:
End of explanation
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
events = mne.find_events(raw, stim_channel='STI 014')
# we'll skip the "face" and "buttonpress" conditions, to save memory:
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4}
epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7, event_id=event_dict,
preload=True)
evoked = epochs['auditory/left'].average()
del raw # reduce memory usage
Explanation: Creating Evoked objects from Epochs
:class:~mne.Evoked objects typically store an EEG or MEG signal that has
been averaged over multiple :term:epochs, which is a common technique for
estimating stimulus-evoked activity. The data in an :class:~mne.Evoked
object are stored in an :class:array <numpy.ndarray> of shape
(n_channels, n_times) (in contrast to an :class:~mne.Epochs object,
which stores data of shape (n_epochs, n_channels, n_times)). Thus to
create an :class:~mne.Evoked object, we'll start by epoching some raw data,
and then averaging together all the epochs from one condition:
End of explanation
print(f'Epochs baseline: {epochs.baseline}')
print(f'Evoked baseline: {evoked.baseline}')
Explanation: You may have noticed that MNE informed us that "baseline correction" has been
applied. This happened automatically during creation of the
~mne.Epochs object, but may also be initiated (or disabled!) manually:
We will discuss this in more detail later.
The information about the baseline period of ~mne.Epochs is transferred to
derived ~mne.Evoked objects to maintain provenance as you process your
data:
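For example, a minimal sketch of toggling this manually (the Epochs call is commented out because raw was deleted above; apply_baseline works on the evoked object we already have):
# epochs_no_bl = mne.Epochs(raw, events, event_id=event_dict, tmin=-0.3, tmax=0.7,
#                           baseline=None, preload=True)  # disable baseline correction
evoked.apply_baseline((None, 0))  # (re)apply it manually later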
End of explanation
evoked.plot()
Explanation: Basic visualization of Evoked objects
We can visualize the average evoked response for left-auditory stimuli using
the :meth:~mne.Evoked.plot method, which yields a butterfly plot of each
channel type:
End of explanation
print(evoked.data[:2, :3]) # first 2 channels, first 3 timepoints
Explanation: Like the plot() methods for :meth:Raw <mne.io.Raw.plot> and
:meth:Epochs <mne.Epochs.plot> objects,
:meth:evoked.plot() <mne.Evoked.plot> has many parameters for customizing
the plot output, such as color-coding channel traces by scalp location, or
plotting the :term:global field power alongside the channel traces.
See tut-visualize-evoked for more information about visualizing
:class:~mne.Evoked objects.
Subselecting Evoked data
.. sidebar:: Evokeds are not memory-mapped
:class:~mne.Evoked objects use a :attr:~mne.Evoked.data attribute
rather than a :meth:~mne.Epochs.get_data method; this reflects the fact
that the data in :class:~mne.Evoked objects are always loaded into
memory, never memory-mapped_ from their location on disk (because they
are typically much smaller than :class:~mne.io.Raw or
:class:~mne.Epochs objects).
Unlike :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects do not support selection by square-bracket
indexing. Instead, data can be subselected by indexing the
:attr:~mne.Evoked.data attribute:
End of explanation
evoked_eeg = evoked.copy().pick_types(meg=False, eeg=True)
print(evoked_eeg.ch_names)
new_order = ['EEG 002', 'MEG 2521', 'EEG 003']
evoked_subset = evoked.copy().reorder_channels(new_order)
print(evoked_subset.ch_names)
Explanation: To select based on time in seconds, the :meth:~mne.Evoked.time_as_index
method can be useful, although beware that depending on the sampling
frequency, the number of samples in a span of given duration may not always
be the same (see the time-as-index section of the
tutorial about Raw data <tut-raw-class> for details).
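For instance (a short sketch using the evoked object from above):
print(evoked.time_as_index(0.1))         # sample index closest to t = 0.1 s
print(evoked.time_as_index([0.1, 0.2]))  # several times at once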
Selecting, dropping, and reordering channels
By default, when creating :class:~mne.Evoked data from an
:class:~mne.Epochs object, only the "data" channels will be retained:
eog, ecg, stim, and misc channel types will be dropped. You
can control which channel types are retained via the picks parameter of
:meth:epochs.average() <mne.Epochs.average>, by passing 'all' to
retain all channels, or by passing a list of integers, channel names, or
channel types. See the documentation of :meth:~mne.Epochs.average for
details.
If you've already created the :class:~mne.Evoked object, you can use the
:meth:~mne.Evoked.pick, :meth:~mne.Evoked.pick_channels,
:meth:~mne.Evoked.pick_types, and :meth:~mne.Evoked.drop_channels methods
to modify which channels are included in an :class:~mne.Evoked object.
You can also use :meth:~mne.Evoked.reorder_channels for this purpose; any
channel names not provided to :meth:~mne.Evoked.reorder_channels will be
dropped. Note that channel selection methods modify the object in-place, so
in interactive/exploratory sessions you may want to create a
:meth:~mne.Evoked.copy first.
End of explanation
sample_data_evk_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis-ave.fif')
evokeds_list = mne.read_evokeds(sample_data_evk_file, verbose=False)
print(evokeds_list)
print(type(evokeds_list))
Explanation: Similarities among the core data structures
:class:~mne.Evoked objects have many similarities with :class:~mne.io.Raw
and :class:~mne.Epochs objects, including:
They can be loaded from and saved to disk in .fif format, and their
data can be exported to a :class:NumPy array <numpy.ndarray> (but through
the :attr:~mne.Evoked.data attribute, not through a get_data()
method). :class:Pandas DataFrame <pandas.DataFrame> export is also
available through the :meth:~mne.Evoked.to_data_frame method.
You can change the name or type of a channel using
:meth:evoked.rename_channels() <mne.Evoked.rename_channels> or
:meth:evoked.set_channel_types() <mne.Evoked.set_channel_types>.
Both methods take :class:dictionaries <dict> where the keys are existing
channel names, and the values are the new name (or type) for that channel.
Existing channels that are not in the dictionary will be unchanged.
:term:SSP projector <projector> manipulation is possible through
:meth:~mne.Evoked.add_proj, :meth:~mne.Evoked.del_proj, and
:meth:~mne.Evoked.plot_projs_topomap methods, and the
:attr:~mne.Evoked.proj attribute. See tut-artifact-ssp for more
information on SSP.
Like :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects have :meth:~mne.Evoked.copy,
:meth:~mne.Evoked.crop, :meth:~mne.Evoked.time_as_index,
:meth:~mne.Evoked.filter, and :meth:~mne.Evoked.resample methods.
Like :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects have evoked.times,
:attr:evoked.ch_names <mne.Evoked.ch_names>, and :class:info <mne.Info>
attributes.
Loading and saving Evoked data
Single :class:~mne.Evoked objects can be saved to disk with the
:meth:evoked.save() <mne.Evoked.save> method. One difference between
:class:~mne.Evoked objects and the other data structures is that multiple
:class:~mne.Evoked objects can be saved into a single .fif file, using
:func:mne.write_evokeds. The example data <sample-dataset>
includes just such a .fif file: the data have already been epoched and
averaged, and the file contains separate :class:~mne.Evoked objects for
each experimental condition:
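For example (a sketch; the output file names are arbitrary, and MNE expects evoked file names to end in -ave.fif):
evoked.save('audvis_left_auditory-ave.fif')               # a single Evoked object
mne.write_evokeds('audvis_conditions-ave.fif', [evoked])  # one or more Evoked objects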
End of explanation
for evok in evokeds_list:
print(evok.comment)
Explanation: Notice that :func:mne.read_evokeds returned a :class:list of
:class:~mne.Evoked objects, and each one has an evoked.comment
attribute describing the experimental condition that was averaged to
generate the estimate:
End of explanation
right_vis = mne.read_evokeds(sample_data_evk_file, condition='Right visual')
print(right_vis)
print(type(right_vis))
Explanation: If you want to load only some of the conditions present in a .fif file,
:func:~mne.read_evokeds has a condition parameter, which takes either a
string (matched against the comment attribute of the evoked objects on disk),
or an integer selecting the :class:~mne.Evoked object based on the order
it's stored in the file. Passing lists of integers or strings is also
possible. If only one object is selected, the :class:~mne.Evoked object
will be returned directly (rather than a length-one list containing it):
End of explanation
evokeds_list[0].plot(picks='eeg')
Explanation: Above, when we created an :class:~mne.Evoked object by averaging epochs,
baseline correction was applied by default when we extracted epochs from the
~mne.io.Raw object (the default baseline period is (None, 0),
which ensured zero mean for times before the stimulus event). In contrast, if
we plot the first :class:~mne.Evoked object in the list that was loaded
from disk, we'll see that the data have not been baseline-corrected:
End of explanation
# Original baseline (none set).
print(f'Baseline after loading: {evokeds_list[0].baseline}')
# Apply a custom baseline correction.
evokeds_list[0].apply_baseline((None, 0))
print(f'Baseline after calling apply_baseline(): {evokeds_list[0].baseline}')
# Visualize the evoked response.
evokeds_list[0].plot(picks='eeg')
Explanation: This can be remedied by either passing a baseline parameter to
:func:mne.read_evokeds, or by applying baseline correction after loading,
as shown here:
End of explanation
left_right_aud = epochs['auditory'].average()
print(left_right_aud)
Explanation: Notice that :meth:~mne.Evoked.apply_baseline operated in-place. Similarly,
:class:~mne.Evoked objects may have been saved to disk with or without
:term:projectors <projector> applied; you can pass proj=True to the
:func:~mne.read_evokeds function, or use the :meth:~mne.Evoked.apply_proj
method after loading.
Combining Evoked objects
One way to pool data across multiple conditions when estimating evoked
responses is to do so prior to averaging (recall that MNE-Python can select
based on partial matching of /-separated epoch labels; see
tut-section-subselect-epochs for more info):
End of explanation
left_aud = epochs['auditory/left'].average()
right_aud = epochs['auditory/right'].average()
print([evok.nave for evok in (left_aud, right_aud)])
Explanation: This approach will weight each epoch equally and create a single
:class:~mne.Evoked object. Notice that the printed representation includes
(average, N=145), indicating that the :class:~mne.Evoked object was
created by averaging across 145 epochs. In this case, the event types were
fairly close in number:
End of explanation
left_right_aud = mne.combine_evoked([left_aud, right_aud], weights='nave')
assert left_right_aud.nave == left_aud.nave + right_aud.nave
Explanation: However, this may not always be the case; if for statistical reasons it is
important to average the same number of epochs from different conditions,
you can use :meth:~mne.Epochs.equalize_event_counts prior to averaging.
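For instance (a sketch; equalize_event_counts modifies the Epochs object in place, so a copy is used here):
epochs_eq = epochs.copy()
epochs_eq.equalize_event_counts(['auditory/left', 'auditory/right'])
print([len(epochs_eq[cond]) for cond in ('auditory/left', 'auditory/right')])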
Another approach to pooling across conditions is to create separate
:class:~mne.Evoked objects for each condition, and combine them afterward.
This can be accomplished by the function :func:mne.combine_evoked, which
computes a weighted sum of the :class:~mne.Evoked objects given to it. The
weights can be manually specified as a list or array of float values, or can
be specified using the keyword 'equal' (weight each ~mne.Evoked object
by $\frac{1}{N}$, where $N$ is the number of ~mne.Evoked
objects given) or the keyword 'nave' (weight each ~mne.Evoked object
proportional to the number of epochs averaged together to create it):
End of explanation
for ix, trial in enumerate(epochs[:3].iter_evoked()):
channel, latency, value = trial.get_peak(ch_type='eeg',
return_amplitude=True)
latency = int(round(latency * 1e3)) # convert to milliseconds
value = int(round(value * 1e6)) # convert to µV
print('Trial {}: peak of {} µV at {} ms in channel {}'
.format(ix, value, latency, channel))
Explanation: Note that the nave attribute of the resulting ~mne.Evoked object will
reflect the effective number of averages, and depends on both the nave
attributes of the contributing ~mne.Evoked objects and the weights at
which they are combined. Keeping track of effective nave is important for
inverse imaging, because nave is used to scale the noise covariance
estimate (which in turn affects the magnitude of estimated source activity).
See minimum_norm_estimates for more information (especially the
whitening_and_scaling section). Note that mne.grand_average does
not adjust nave to reflect effective number of averaged epochs; rather
it simply sets nave to the number of evokeds that were averaged
together. For this reason, it is best to use mne.combine_evoked rather than
mne.grand_average if you intend to perform inverse imaging on the resulting
:class:~mne.Evoked object.
Other uses of Evoked objects
Although the most common use of :class:~mne.Evoked objects is to store
averages of epoched data, there are a couple other uses worth noting here.
First, the method :meth:epochs.standard_error() <mne.Epochs.standard_error>
will create an :class:~mne.Evoked object (just like
:meth:epochs.average() <mne.Epochs.average> does), but the data in the
:class:~mne.Evoked object will be the standard error across epochs instead
of the average. To indicate this difference, :class:~mne.Evoked objects
have a :attr:~mne.Evoked.kind attribute that takes values 'average' or
'standard error' as appropriate.
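For example (a sketch using the epochs from above):
evoked_sem = epochs['auditory/left'].standard_error()
print(evoked_sem.kind)  # 'standard error'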
Another use of :class:~mne.Evoked objects is to represent a single trial
or epoch of data, usually when looping through epochs. This can be easily
accomplished with the :meth:epochs.iter_evoked() <mne.Epochs.iter_evoked>
method, and can be useful for applications where you want to do something
that is only possible for :class:~mne.Evoked objects. For example, here
we use the :meth:~mne.Evoked.get_peak method (which isn't available for
:class:~mne.Epochs objects) to get the peak response in each trial:
End of explanation |
9,847 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Least squares problems
We sometimes wish to solve problems of the form
$$
\boldsymbol{A} \boldsymbol{x} = \boldsymbol{b}
$$
where $\boldsymbol{A}$ is a $m \times n$ matrix. If $m > n$, in general no solution to the problem exists. This is a typical of an over-determined problem - we have more equations than unknowns. A classical example is when trying to fit an $k$th-order polynomial to $p > k + 1$ data points - the degree of the polynomial is not high enough to construct an interpolating polynomial.
In this notebook we assume that $\boldsymbol{A}$ is full rank, i.e. the columns of $\boldsymbol{A}$ are linearly independent. We will look at the case when $\boldsymbol{A}$ is not full rank later.
Before computing least-squares problems, we start with examples of polynomial interpolation.
Note
Step1: Next, we will sample the Runge function at points and fit a polynomial to these evaluation points. If we sample the function at $n$ points we can fit a polynomial of degree $n - 1$
Step2: Solving for the coefficients
Step3: NumPy has a function poly1d to turn the coefficients into a polynomial object, and it can display a representation of the polynomial
Step4: To plot the fitted polynomial, we evaluate it at at 200 points
Step5: Note how the polynomial fitting function oscillates near the ends of the interval. We might think that a richer, higher-order degree polynomial would provide a better fit
Step6: However, we see that the oscillations near the ends of the internal become worse with a higher degree polynomial.
By wrapping this problem in a function we can make it interactive
Step7: There two issue to consider here. The first is that the polynmial clearly fluctuates near the ends of the interval. This is know as the Runge effect. A second issue that is less immediately obvious is that the Vandermonde matrix is very poorly conditioned.
Conditioning of the Vandermonde matrix
We compute the Vandermone matrix for increasing polynimial degree, and see below that the condition number of the Vandermonde matrix can become extremely large for high polynomial degrees.
Step8: Orthogonal polynomials
In the preceding, we worked with a monomial basis
Step9: Comparing to the mononomal basis
Step10: we see that the Legendre polynomial and the mononomials appear very different, depsite both spanning (being a basis) for the same space. Note how the higher order mononomial terms are indistinguishable near zero, whereas the Legendre polynomials are clearly distinct from each other.
Legendre polynomials of degree up to and including $n$ span the same space as $1, x, x^{2},\ldots, x^{n}$, so we can express any polynomial of degree $n$ as
$$
f = \alpha_{n} P_{n}(x) + \alpha_{n-1} P_{n-1}(x) + \ldots + \alpha_{0} P_{0}(x)
$$
To find the ${ \alpha_{n} }$ coefficients can construct a generalised Vandermonde matrix $\boldsymbol{A}$ and solve $\boldsymbol{A} \boldsymbol{\alpha} = \boldsymbol{y}p$, where
$$
\boldsymbol{A} = \begin{bmatrix}
P{n}(x_{0}) & P_{n-1}(x_{0}) & \ldots & P_2(x_{0}) & P_1(x_{0}) & P_0(x_{0})
\
P_{n}(x_{1}) & P_{n-1}(x_{1}) & \ldots & P_2(x_{1}) & P_1(x_{1}) & P_0(x_{1})
\
\vdots & \vdots & \vdots & \ldots & \vdots
\
P_{n}(x_{n}) & P_{n-1}(x_{n}) & \ldots & P_2(x_{n}) & P_1(x_{n}) & P_0(x_{n})
\end{bmatrix}
$$
If we use the Legendre Vandermonde matrix to compute the coeffecients, in exact arithmetic we would compute the same polynomial as with the mononomial basis. However, comparing the condition number for the Vandermonde matrices
Step11: Clearly the condition number for the Legendre case is dramatically smaller.
Non-equispaced interpolaton points
We have seen the Runge effect where interpolating polynomials can exhibit large oscillations close to the ends of the domain, with the effect becoming more pronounced as the polynomial degree is increased. A well-known approach to reducing the oscillations is to use evaluation points that clustered towards the ends of the domain. In particular, the roots of orthogonal polynomials can be particularly good evaluation points.
To test, we will interpolate the Runge function at the roots of Legendre polynomials.
Step12: The polynomial that interpolates at points that are clustered near the ends of the interval exhibits very limited oscillation, whereas the oscillations for the equispaced case are very large.
Interpolating the sine graph
We consider now interpolating points takes from the sine graph.
Step13: We can see from the graph that the polynomial closely ressambles the sine function in this case, despite the high degree of the polynomial.
However, the picture changes if we introduce a very small amount of noise to the sine graph
Step14: We see the Runge effect again, with large oscillations towards the ends of the domain.
Least-squares fitting
We will now looking at fitting a polynomial of degree $k < n + 1$ to points on the sine graph. The degree of the polynomial is not high enough to interpolate all points, so we will compute a best-fit in the least-squares sense.
We have seen in lectures that solving the least squares solution involves solving
$$
\boldsymbol{A}^{T}\boldsymbol{A} \boldsymbol{c} = \boldsymbol{A}^{T} \boldsymbol{y}
$$
If we want ot fit a $5$th-order polynomial to 20 data points, $\boldsymbol{A}$ is the $20 \times 6$ matrix
Step15: and then solve $$\boldsymbol{A}^{T}\boldsymbol{A} \boldsymbol{c} = \boldsymbol{A}^{T} \boldsymbol{y}$$ and create a NumPy polynomial from the coefficients
Step16: Plotting the polynomial
Step17: To explore polynomial orders, we will create an interactive plot with a slider for the polynomial degree. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
# Use seaborn to style the plots and use accessible colors
import seaborn as sns
sns.set()
sns.set_palette("colorblind")
import numpy as np
N = 100
x = np.linspace(-1, 1, N)
def runge(x):
return 1 /(25 * (x**2) + 1)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.title('Runge function')
plt.plot(x, runge(x),'-');
Explanation: Least squares problems
We sometimes wish to solve problems of the form
$$
\boldsymbol{A} \boldsymbol{x} = \boldsymbol{b}
$$
where $\boldsymbol{A}$ is an $m \times n$ matrix. If $m > n$, in general no solution to the problem exists. This is typical of an over-determined problem - we have more equations than unknowns. A classical example is when trying to fit a $k$th-order polynomial to $p > k + 1$ data points - the degree of the polynomial is not high enough to construct an interpolating polynomial.
In this notebook we assume that $\boldsymbol{A}$ is full rank, i.e. the columns of $\boldsymbol{A}$ are linearly independent. We will look at the case when $\boldsymbol{A}$ is not full rank later.
Before computing least-squares problems, we start with examples of polynomial interpolation.
Note: This notebook uses interactive widgets to interactively explore various effects. The widget sliders will be be visiable through nbviewer.
Polynomial interpolation
Polynomial interpolation involves fitting a $n$th-order polynomial to values at $n + 1$ data points.
Interpolating the Runge function
We will investigate interpolating the Runge function
$$
y = \frac{1}{1 + 25 x^{2}}
$$
on the interval $[-1, 1]$:
End of explanation
n_p = 5
x_p = np.linspace(-1, 1, n_p)
A = np.vander(x_p, n_p)
print(x_p)
print(A)
Explanation: Next, we will sample the Runge function at points and fit a polynomial to these evaluation points. If we sample the function at $n$ points we can fit a polynomial of degree $n - 1$:
$$
f = c_{n-1} x^{n-1} + c_{n-2} x^{n-2} + \ldots + c_{1} x + c_{0}.
$$
We can find the polynomial coefficients $c_{i}$ by solving $\boldsymbol{A} \boldsymbol{c} = \boldsymbol{y}_p$, where $\boldsymbol{A}$ is the Vandermonde matrix:
$$
\boldsymbol{A} = \begin{bmatrix}
x_{1}^{n-1} & x_{1}^{n-2} & \ldots & x_{1}^{2} & x_{1} & 1
\
x_{2}^{n-1} & x_{2}^{n-2} & \ldots & x_{2}^{2} & x_{2} & 1
\
\vdots & \vdots & \vdots & \ldots & \vdots
\
x_{n}^{n-1} & x_{n}^{n-2} & \ldots & x_{n}^{2} & x_{n} & 1
\end{bmatrix}
$$
and the vector $\boldsymbol{c}$ contains the unknown polynomial coefficients
$$
\boldsymbol{c} = \begin{bmatrix}
c_{n-1} & c_{n-2} & \ldots & c_{0}
\end{bmatrix}^{T}
$$
and the vector $\boldsymbol{y}_p$ contains the points $y(x_{i})$ that we wish to fit.
Note: the ordering in each row of the Vandermonde matrix above is reversed with respect to what you will find in most books. We do this because the default NumPy function for generating the Vandermonde matrix uses the above ordering.
Using the NumPy built-in function to generate the Vandermonde matrix for $n_p$ points (polynomial degree $n_{p} -1)$:
End of explanation
y_p = runge(x_p)
c = np.linalg.solve(A, y_p)
Explanation: Solving for the coefficients:
End of explanation
p = np.poly1d(c)
print(p)
Explanation: NumPy has a function poly1d to turn the coefficients into a polynomial object, and it can display a representation of the polynomial:
End of explanation
# Create an array of 200 equally spaced points on [-1, 1]
x_fit = np.linspace(-1, 1, 200)
# Evaluate the polynomial at the points
y_fit = p(x_fit)
# Plot the interpolating polynomial and the sample points
plt.xlabel('$x$')
plt.ylabel('$f$')
plt.title('Points of the Runge function interpolated by a polynomial')
plot = plt.plot(x_p, y_p, 'o', label='points')
plot = plt.plot(x_fit, y_fit,'-', label='interpolate')
plot = plt.plot(x, runge(x), '--', label='exact')
plt.legend();
Explanation: To plot the fitted polynomial, we evaluate it at 200 points:
End of explanation
n_p = 13
x_p = np.linspace(-1, 1, n_p)
A = np.vander(x_p, n_p)
y_p = runge(x_p)
c = np.linalg.solve(A, y_p)
p = np.poly1d(c)
y_fit = p(x_fit)
# Plot the interpolating polynomial and the sample points
plt.xlabel('$x$')
plt.ylabel('$f$')
plt.title('Points of the Runge function interpolated by a polynomial')
plot = plt.plot(x_p, y_p, 'o', label='points')
plot = plt.plot(x_fit, y_fit,'-', label='interpolate')
plot = plt.plot(x, runge(x), '--', label='exact')
plt.legend();
Explanation: Note how the polynomial fitting function oscillates near the ends of the interval. We might think that a richer, higher-degree polynomial would provide a better fit:
End of explanation
from ipywidgets import widgets
from ipywidgets import interact
@interact(order=(0, 19))
def plot(order):
x_p = np.linspace(-1, 1, order + 1)
A = np.vander(x_p, order + 1)
y_p = runge(x_p)
c = np.linalg.solve(A, y_p)
p = np.poly1d(c)
y_fit = p(x_fit)
# Plot the interpolating polynomial and the sample points
plt.xlabel('$x$')
plt.ylabel('$f$')
plt.title('Points of the Runge function interpolated by a polynomial')
plot = plt.plot(x_p, y_p, 'o', label='points')
plot = plt.plot(x_fit, y_fit,'-', label='interpolate ' + str(order) )
plot = plt.plot(x, runge(x), '--', label='exact')
plt.legend()
Explanation: However, we see that the oscillations near the ends of the interval become worse with a higher-degree polynomial.
By wrapping this problem in a function we can make it interactive:
End of explanation
n_p = 13
x_p = np.linspace(-1, 1, n_p)
A = np.vander(x_p, n_p)
print(f"Condition number of the Vandermonde matrix: {np.linalg.cond(A, 2):e}")
n_p = 20
x_p = np.linspace(-1, 1, n_p)
A = np.vander(x_p, n_p)
print(f"Condition number of the Vandermonde matrix: {np.linalg.cond(A, 2):e}")
n_p = 30
x_p = np.linspace(-1, 1, n_p)
A = np.vander(x_p, n_p)
print(f"Condition number of the Vandermonde matrix: {np.linalg.cond(A, 2):e}")
n_p = 30
x_p = np.linspace(-10, 10, n_p)
A = np.vander(x_p, n_p)
print(f"Condition number of the Vandermonde matrix: {np.linalg.cond(A, 2):e}")
Explanation: There are two issues to consider here. The first is that the polynomial clearly fluctuates near the ends of the interval. This is known as the Runge effect. A second issue that is less immediately obvious is that the Vandermonde matrix is very poorly conditioned.
Conditioning of the Vandermonde matrix
We compute the Vandermonde matrix for increasing polynomial degree, and see below that the condition number of the Vandermonde matrix can become extremely large for high polynomial degrees.
End of explanation
for order in range(10):
x, y = np.polynomial.legendre.Legendre.basis(order, [-1, 1]).linspace(100)
plt.plot(x, y, label="P_{}".format(order))
plt.grid(True)
plt.legend();
plt.savefig("legendre.pdf")
Explanation: Orthogonal polynomials
In the preceding, we worked with a monomial basis:
$$
1, \ x, \ x^{2}, \ldots , \ x^{n-1}
$$
and considered polynomials of the form
$$
f = c_{n-1} x^{n-1} + c_{n-2} x^{n-2} + \ldots + c_{1} x + c_{0}.
$$
where we would pick (or solver for in the case of interpolation) the coefficients ${c_{i}}$.
There are, however, alternatives to the monomial basis with remarkably rich and fascinating properties. We will consider Legendre polynomials on the internal $[-1, 1]$. There are various expressions for computing Legendre polynomials. One expressions for computing the Legendre polynomial of degree $n$, $P_{n}$, is:
$$
(n+1) P_{n+1}(x) = (2n+1) x P_{n}(x) - n P_{n-1}(x)
$$
where $P_{0} = 1$ and $P_{1} = x$. The special feature of Legendre polynomials is that:
$$
\int_{-1}^{1} P_{m}(x) P_{n}(x) \, dx = 0 \quad {\rm if} \ m \ne n,
$$
i.e. Legendre polynomials of different degree are orthogonal to each other.
Plotting the Legendre polynomials up to $P_{9}$:
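As a quick numerical check of the orthogonality property (a sketch using NumPy's Legendre utilities, with np imported as above):
from numpy.polynomial.legendre import leggauss, legval
x_q, w_q = leggauss(20)          # Gauss-Legendre quadrature points and weights
P2 = legval(x_q, [0, 0, 1])      # P_2 evaluated at the quadrature points
P3 = legval(x_q, [0, 0, 0, 1])   # P_3
print(np.sum(w_q * P2 * P3))     # ~0, since P_2 and P_3 are orthogonal
print(np.sum(w_q * P2 * P2))     # 2/(2*2 + 1) = 0.4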
End of explanation
x_p = np.linspace(-1, 1, 100)
for order in range(10):
y_p = x_p**order
plt.plot(x_p, y_p, label="m_{}".format(order))
plt.savefig("mononomial.pdf")
Explanation: Comparing to the monomial basis:
End of explanation
N = 20
x_p = np.linspace(-1, 1, N)
A = np.vander(x_p, N)
print(f"Condition number of the Vandermonde matrix (mononomial): {np.linalg.cond(A, 2):e}")
A = np.polynomial.legendre.legvander(x_p, N)
print(f"Condition number of the Vandermonde matrix (Legendre): {np.linalg.cond(A, 2):e}")
Explanation: we see that the Legendre polynomials and the monomials appear very different, despite both spanning (being a basis for) the same space. Note how the higher-order monomial terms are indistinguishable near zero, whereas the Legendre polynomials are clearly distinct from each other.
Legendre polynomials of degree up to and including $n$ span the same space as $1, x, x^{2},\ldots, x^{n}$, so we can express any polynomial of degree $n$ as
$$
f = \alpha_{n} P_{n}(x) + \alpha_{n-1} P_{n-1}(x) + \ldots + \alpha_{0} P_{0}(x)
$$
To find the $\{ \alpha_{n} \}$ coefficients we can construct a generalised Vandermonde matrix $\boldsymbol{A}$ and solve $\boldsymbol{A} \boldsymbol{\alpha} = \boldsymbol{y}_p$, where
$$
\boldsymbol{A} = \begin{bmatrix}
P_{n}(x_{0}) & P_{n-1}(x_{0}) & \ldots & P_2(x_{0}) & P_1(x_{0}) & P_0(x_{0})
\
P_{n}(x_{1}) & P_{n-1}(x_{1}) & \ldots & P_2(x_{1}) & P_1(x_{1}) & P_0(x_{1})
\
\vdots & \vdots & \vdots & \ldots & \vdots
\
P_{n}(x_{n}) & P_{n-1}(x_{n}) & \ldots & P_2(x_{n}) & P_1(x_{n}) & P_0(x_{n})
\end{bmatrix}
$$
If we use the Legendre Vandermonde matrix to compute the coefficients, in exact arithmetic we would compute the same polynomial as with the monomial basis. However, comparing the condition number for the Vandermonde matrices:
End of explanation
@interact(N=(0, 30))
def plot(N):
# Get the roots of the Legendre polynomials
x_p = np.linspace(-1, 1, N)
x_p = np.polynomial.legendre.leggauss(N)[0]
print(f"Number of interpolation points: {len(x_p)} ")
# Evaluate the Runge function
y_p = runge(x_p)
# Use NumPy function to compute the polynomial
from numpy.polynomial import Legendre as P
p = P.fit(x_p, y_p, N);
x_e = np.linspace(-1, 1, N)
# Evaluate the Runge function
y_e = runge(x_e)
# Use NumPy function to compute the polynomial
from numpy.polynomial import Legendre as P
p_e = P.fit(x_e, y_e, N);
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.plot(np.linspace(-1, 1, 100), runge( np.linspace(-1, 1, 100)),'--', label='Runge function');
plt.plot(x_p, np.zeros(len(x_p)),'o', label="Legendre roots");
plt.plot(*p.linspace(200), '-', label='interpolation (clustered)');
plt.plot(*p_e.linspace(200), '-', label='interpolation (equispaced)');
plt.plot(x_e, np.zeros(len(x_e)),'x', label="Equispaced points");
plt.ylim(-0.2, 1.5)
plt.legend();
Explanation: Clearly the condition number for the Legendre case is dramatically smaller.
Non-equispaced interpolation points
We have seen the Runge effect where interpolating polynomials can exhibit large oscillations close to the ends of the domain, with the effect becoming more pronounced as the polynomial degree is increased. A well-known approach to reducing the oscillations is to use evaluation points that are clustered towards the ends of the domain. In particular, the roots of orthogonal polynomials can be particularly good evaluation points.
To test, we will interpolate the Runge function at the roots of Legendre polynomials.
End of explanation
N = 20
def sine(x):
return np.sin(2 * np.pi * x)
x_p = np.linspace(-1, 1, N)
y_p = sine(x_p)
from numpy.polynomial import Polynomial as P
y_fit = P.fit(x_p, sine(x_p), N);
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.title('Points on a sine graph')
plt.plot(x_p, y_p,'ro');
plt.plot(*y_fit.linspace(200),'-', label='interpolated');
x = np.linspace(-1, 1, 200)
plt.plot(x, sine(x),'--', label='sin(x)');
plt.legend();
Explanation: The polynomial that interpolates at points that are clustered near the ends of the interval exhibits very limited oscillation, whereas the oscillations for the equispaced case are very large.
Interpolating the sine graph
We now consider interpolating points taken from the sine graph.
End of explanation
N = 20
def noisy_sine(x, noise):
return np.sin(2 * np.pi * x) + np.random.uniform(-noise/2.0, noise/2.0, len(x))
x_p = np.linspace(-1, 1, N)
y_p = sine(x_p)
from numpy.polynomial import Polynomial as P
y_fit = P.fit(x_p, noisy_sine(x_p, 0.01), N);
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.title('Points on a sine graph')
plt.plot(x_p, y_p,'x',label='interpolation points (noise)');
plt.plot(*y_fit.linspace(200),'-', label='interpolated');
x = np.linspace(-1, 1, 200)
plt.plot(x, sine(x),'--', label='sin(x)');
plt.ylim(-1.2, 1.2)
plt.legend();
Explanation: We can see from the graph that the polynomial closely resembles the sine function in this case, despite the high degree of the polynomial.
However, the picture changes if we introduce a very small amount of noise to the sine graph:
End of explanation
N = 20
x_p = np.linspace(-1, 1, N)
A = np.vander(x_p, 6)
Explanation: We see the Runge effect again, with large oscillations towards the ends of the domain.
Least-squares fitting
We will now look at fitting a polynomial of degree $k < n + 1$ to points on the sine graph. The degree of the polynomial is not high enough to interpolate all points, so we will compute a best-fit in the least-squares sense.
We have seen in lectures that solving the least squares solution involves solving
$$
\boldsymbol{A}^{T}\boldsymbol{A} \boldsymbol{c} = \boldsymbol{A}^{T} \boldsymbol{y}
$$
If we want to fit a $5$th-order polynomial to 20 data points, $\boldsymbol{A}$ is the $20 \times 6$ matrix:
$$
\boldsymbol{A} = \begin{bmatrix}
x_{1}^{5} & x_{1}^{4} & \ldots & x_{1}^{2} & x_{1} & 1
\
x_{2}^{5} & x_{2}^{4} & \ldots & x_{2}^{2} & x_{2} & 1
\
\vdots & \vdots & \vdots & \ldots & \vdots
\
\vdots & \vdots & \vdots & \ldots & \vdots
\
x_{20}^{5} & x_{20}^{4} & \ldots & x_{20}^{2} & x_{20} & 1
\end{bmatrix}
$$
and $\boldsymbol{c}$ contains the $6$ polynomial coefficients
$$
\boldsymbol{c}
= \begin{bmatrix}
c_{5} & c_{4} & c_{3} & c_{2} & c_{1} & c_{0}
\end{bmatrix}
$$
and $\boldsymbol{y}$ contains the 20 points we want to fit.
Fitting points on the Runge function
Let's try fitting a lower-order polynomial to the 20 data points without noise. We start with a polynomial of degree 5 (six coefficients). We first create the Vandermonde matrix:
End of explanation
ATA = (A.T).dot(A)
y_p = runge(x_p)
c_ls = np.linalg.solve(ATA, (A.T).dot(y_p))
p_ls = np.poly1d(c_ls)
print(p_ls)
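# A quick cross-check (sketch): np.linalg.lstsq solves the same least-squares
# problem directly and should reproduce the coefficients up to round-off
c_check = np.linalg.lstsq(A, y_p, rcond=None)[0]
print(np.poly1d(c_check))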
Explanation: and then solve $$\boldsymbol{A}^{T}\boldsymbol{A} \boldsymbol{c} = \boldsymbol{A}^{T} \boldsymbol{y}$$ and create a NumPy polynomial from the coefficients:
End of explanation
# Evaluate polynomial at some points
y_ls = p_ls(x_fit)
# Plot
plt.xlabel('$x$')
plt.ylabel('$f$')
# plt.ylim(-1.1, 1.1)
plt.title('Least-squares fit of 20 points')
plt.plot(x_p, y_p, 'o', x_fit, y_ls,'-');
Explanation: Plotting the polynomial:
End of explanation
@interact(order=(0, 19))
def plot(order):
# Create Vandermonde matrix
A = np.vander(x_p, order + 1)
ATA = (A.T).dot(A)
c_ls = np.linalg.solve(ATA, (A.T).dot(y_p))
p_ls = np.poly1d(c_ls)
# Evaluate polynomial at some points
y_ls = p_ls(x_fit)
# Plot
plt.xlabel('$x$')
plt.ylabel('$f$')
plt.ylim(-1.2, 1.2)
plt.title('Least-squares fit of 20 points on the Runge graph with a ${}$th-order polynomial'.format(order))
plt.plot(x_p, y_p, 'o', x_fit, y_ls,'-')
Explanation: To explore polynomial orders, we will create an interactive plot with a slider for the polynomial degree.
End of explanation |
9,848 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Theano example
Step1: The model
Logistic regression is a probabilistic, linear classifier. It is parametrized
by a weight matrix $W$ and a bias vector $b$. Classification is
done by projecting an input vector onto a set of hyperplanes, each of which
corresponds to a class. The distance from the input to a hyperplane reflects
the probability that the input is a member of the corresponding class.
Mathematically, the probability that an input vector $x$ is a member of a
class $i$, a value of a stochastic variable $Y$, can be written as
Step2: Now, we can build a symbolic expression for the matrix of class-membership probability (p_y_given_x), and for the class whose probability is maximal (y_pred).
Step3: Defining a loss function
Learning optimal model parameters involves minimizing a loss function. In the
case of multi-class logistic regression, it is very common to use the negative
log-likelihood as the loss. This is equivalent to maximizing the likelihood of the
data set $\cal{D}$ under the model parameterized by $\theta$. Let
us first start by defining the likelihood $\cal{L}$ and loss
$\ell$
Step4: Training procedure
This notebook will use the method of stochastic gradient descent with mini-batches (MSGD) to find values of W and b that minimize the loss.
We can let Theano compute symbolic expressions for the gradient of the loss wrt W and b.
Step5: g_W and g_b are symbolic variables, which can be used as part of a computation graph. In particular, let us define the expressions for one step of gradient descent for W and b, for a fixed learning rate.
Step6: We can then define update expressions, or pairs of (shared variable, expression for its update), that we will use when compiling the Theano function. The updates will be performed each time the function gets called.
The following function, train_model, returns the loss on the current minibatch, then changes the values of the shared variables according to the update rules. It needs to be passed x and y as inputs, but not the shared variables, which are implicit inputs.
The entire learning algorithm thus consists in looping over all examples in the dataset, considering all the examples in one minibatch at a time, and repeatedly calling the train_model function.
Step7: Testing the model
When testing the model, we are interested in the number of misclassified examples (and not only in the likelihood). Here, we build a symbolic expression for retrieving the number of misclassified examples in a minibatch.
This will also be useful to apply on the validation and testing sets, in order to monitor the progress of the model during training, and to do early stopping.
Step8: Training the model
Here is the main training loop of the algorithm
Step9: Visualization
You can visualize the columns of W, which correspond to the separation hyperplanes for each class. | Python Code:
import os
import requests
import gzip
import six
from six.moves import cPickle
if not os.path.exists('mnist.pkl.gz'):
r = requests.get('http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz')
with open('mnist.pkl.gz', 'wb') as data_file:
data_file.write(r.content)
with gzip.open('mnist.pkl.gz', 'rb') as data_file:
if six.PY3:
train_set, valid_set, test_set = cPickle.load(data_file, encoding='latin1')
else:
train_set, valid_set, test_set = cPickle.load(data_file)
train_set_x, train_set_y = train_set
valid_set_x, valid_set_y = valid_set
test_set_x, test_set_y = test_set
Explanation: Theano example: logistic regression
This notebook is inspired by the tutorial on logistic regression on deeplearning.net.
In this notebook, we show how Theano can be used to implement the most basic classifier: the logistic regression. We start off with a quick primer of the model, which serves both as a refresher but also to anchor the notation and show how mathematical expressions are mapped onto Theano graphs.
In the deepest of machine learning traditions, this tutorial will tackle the exciting problem of MNIST digit classification.
Get the data
This Friday, Vincent Dumoulin will present how to use Fuel, a framework to automatically manage and handle datasets.
In the mean time, let's just download a pre-packaged version of MNIST, and load each split of the dataset as NumPy ndarrays.
End of explanation
import numpy
import theano
from theano import tensor
# Size of the data
n_in = 28 * 28
# Number of classes
n_out = 10
x = tensor.matrix('x')
W = theano.shared(value=numpy.zeros((n_in, n_out), dtype=theano.config.floatX),
name='W',
borrow=True)
b = theano.shared(value=numpy.zeros((n_out,), dtype=theano.config.floatX),
name='b',
borrow=True)
Explanation: The model
Logistic regression is a probabilistic, linear classifier. It is parametrized
by a weight matrix $W$ and a bias vector $b$. Classification is
done by projecting an input vector onto a set of hyperplanes, each of which
corresponds to a class. The distance from the input to a hyperplane reflects
the probability that the input is a member of the corresponding class.
Mathematically, the probability that an input vector $x$ is a member of a
class $i$, a value of a stochastic variable $Y$, can be written as:
$$P(Y=i|x, W,b) = softmax_i(W x + b) = \frac {e^{W_i x + b_i}} {\sum_j e^{W_j x + b_j}}$$
The model's prediction $y_{pred}$ is the class whose probability is maximal, specifically:
$$ y_{pred} = {\rm argmax}_i P(Y=i|x,W,b)$$
Now, let us define our input variables. First, we need to define the dimension of our tensors:
- n_in is the length of each training vector,
- n_out is the number of classes.
Our variables will be:
- x is a matrix, where each row contains a different example of the dataset. Its shape is (batch_size, n_in), but batch_size does not have to be specified in advance, and can change during training.
- W is a shared matrix, of shape (n_in, n_out), initialized with zeros. Column k of W represents the separation hyperplane for class k.
- b is a shared vector, of length n_out, initialized with zeros. Element k of b represents the free parameter of hyperplane k.
End of explanation
p_y_given_x = tensor.nnet.softmax(tensor.dot(x, W) + b)
y_pred = tensor.argmax(p_y_given_x, axis=1)
Explanation: Now, we can build a symbolic expression for the matrix of class-membership probability (p_y_given_x), and for the class whose probability is maximal (y_pred).
End of explanation
y = tensor.lvector('y')
log_prob = tensor.log(p_y_given_x)
log_likelihood = log_prob[tensor.arange(y.shape[0]), y]
loss = - log_likelihood.mean()
Explanation: Defining a loss function
Learning optimal model parameters involves minimizing a loss function. In the
case of multi-class logistic regression, it is very common to use the negative
log-likelihood as the loss. This is equivalent to maximizing the likelihood of the
data set $\cal{D}$ under the model parameterized by $\theta$. Let
us first start by defining the likelihood $\cal{L}$ and loss
$\ell$:
$$\mathcal{L} (\theta=\{W,b\}, \mathcal{D}) =
\sum_{i=0}^{|\mathcal{D}|} \log(P(Y=y^{(i)}|x^{(i)}, W,b)) \\
\ell (\theta=\{W,b\}, \mathcal{D}) = - \mathcal{L} (\theta=\{W,b\}, \mathcal{D})
$$
Again, we will express those expressions using Theano. We have one additional input, the actual target class y:
- y is an input vector of integers, of length batch_size (which will have to match the length of x at runtime). The length of y can be symbolically expressed by y.shape[0].
- log_prob is a (batch_size, n_out) matrix containing the log probabilities of class membership for each example.
- arange(y.shape[0]) is a symbolic vector which will contain [0,1,2,... batch_size-1]
- log_likelihood is a vector containing the log probability of the target, for each example.
- loss is the mean of the negative log_likelihood over the examples in the minibatch.
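As a quick intuition for the advanced indexing above, a plain-NumPy sketch of the same idea (not Theano code):
example_probs = numpy.array([[0.1, 0.7, 0.2],
                             [0.3, 0.3, 0.4]])
example_targets = numpy.array([1, 2])
# picks the log-probability of the correct class for each row
print(numpy.log(example_probs)[numpy.arange(2), example_targets])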
End of explanation
g_W, g_b = theano.grad(cost=loss, wrt=[W, b])
Explanation: Training procedure
This notebook will use the method of stochastic gradient descent with mini-batches (MSGD) to find values of W and b that minimize the loss.
We can let Theano compute symbolic expressions for the gradient of the loss wrt W and b.
End of explanation
learning_rate = numpy.float32(0.13)
new_W = W - learning_rate * g_W
new_b = b - learning_rate * g_b
Explanation: g_W and g_b are symbolic variables, which can be used as part of a computation graph. In particular, let us define the expressions for one step of gradient descent for W and b, for a fixed learning rate.
End of explanation
train_model = theano.function(inputs=[x, y],
outputs=loss,
updates=[(W, new_W),
(b, new_b)])
Explanation: We can then define update expressions, or pairs of (shared variable, expression for its update), that we will use when compiling the Theano function. The updates will be performed each time the function gets called.
The following function, train_model, returns the loss on the current minibatch, then changes the values of the shared variables according to the update rules. It needs to be passed x and y as inputs, but not the shared variables, which are implicit inputs.
The entire learning algorithm thus consists in looping over all examples in the dataset, considering all the examples in one minibatch at a time, and repeatedly calling the train_model function.
End of explanation
misclass_nb = tensor.neq(y_pred, y)
misclass_rate = misclass_nb.mean()
test_model = theano.function(inputs=[x, y],
outputs=misclass_rate)
Explanation: Testing the model
When testing the model, we are interested in the number of misclassified examples (and not only in the likelihood). Here, we build a symbolic expression for retrieving the number of misclassified examples in a minibatch.
This will also be useful to apply on the validation and testing sets, in order to monitor the progress of the model during training, and to do early stopping.
End of explanation
## Define a couple of helper variables and functions for the optimization
batch_size = 500
# compute number of minibatches for training, validation and testing
n_train_batches = train_set_x.shape[0] // batch_size
n_valid_batches = valid_set_x.shape[0] // batch_size
n_test_batches = test_set_x.shape[0] // batch_size
def get_minibatch(i, dataset_x, dataset_y):
start_idx = i * batch_size
end_idx = (i + 1) * batch_size
batch_x = dataset_x[start_idx:end_idx]
batch_y = dataset_y[start_idx:end_idx]
return (batch_x, batch_y)
## early-stopping parameters
# maximum number of epochs
n_epochs = 1000
# look as this many examples regardless
patience = 5000
# wait this much longer when a new best is found
patience_increase = 2
# a relative improvement of this much is considered significant
improvement_threshold = 0.995
# go through this many minibatches before checking the network on the validation set;
# in this case we check every epoch
validation_frequency = min(n_train_batches, patience / 2)
import timeit
from six.moves import xrange
best_validation_loss = numpy.inf
test_score = 0.
start_time = timeit.default_timer()
done_looping = False
epoch = 0
while (epoch < n_epochs) and (not done_looping):
epoch = epoch + 1
for minibatch_index in xrange(n_train_batches):
minibatch_x, minibatch_y = get_minibatch(minibatch_index, train_set_x, train_set_y)
minibatch_avg_cost = train_model(minibatch_x, minibatch_y)
# iteration number
iter = (epoch - 1) * n_train_batches + minibatch_index
if (iter + 1) % validation_frequency == 0:
# compute zero-one loss on validation set
validation_losses = []
for i in xrange(n_valid_batches):
valid_xi, valid_yi = get_minibatch(i, valid_set_x, valid_set_y)
validation_losses.append(test_model(valid_xi, valid_yi))
this_validation_loss = numpy.mean(validation_losses)
print('epoch %i, minibatch %i/%i, validation error %f %%' %
(epoch,
minibatch_index + 1,
n_train_batches,
this_validation_loss * 100.))
# if we got the best validation score until now
if this_validation_loss < best_validation_loss:
# improve patience if loss improvement is good enough
if this_validation_loss < best_validation_loss * improvement_threshold:
patience = max(patience, iter * patience_increase)
best_validation_loss = this_validation_loss
# test it on the test set
test_losses = []
for i in xrange(n_test_batches):
test_xi, test_yi = get_minibatch(i, test_set_x, test_set_y)
test_losses.append(test_model(test_xi, test_yi))
test_score = numpy.mean(test_losses)
print(' epoch %i, minibatch %i/%i, test error of best model %f %%' %
(epoch,
minibatch_index + 1,
n_train_batches,
test_score * 100.))
# save the best parameters
numpy.savez('best_model.npz', W=W.get_value(), b=b.get_value())
if patience <= iter:
done_looping = True
break
end_time = timeit.default_timer()
print('Optimization complete with best validation score of %f %%, '
'with test performance %f %%' %
(best_validation_loss * 100., test_score * 100.))
print('The code ran for %d epochs, with %f epochs/sec' %
(epoch, 1. * epoch / (end_time - start_time)))
Explanation: Training the model
Here is the main training loop of the algorithm:
- For each epoch, or pass through the training set
- split the training set in minibatches, and call train_model on each minibatch
- split the validation set in minibatches, and call test_model on each minibatch to measure the misclassification rate
- if the misclassification rate has not improved in a while, stop training
- Measure performance on the test set
The early stopping procedure is what decides whether the performance has improved enough. There are many variants, and we will not go into the details of this one here.
We first need to define a few parameters for the training loop and the early stopping procedure.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from utils import tile_raster_images
plt.clf()
# Increase the size of the figure
plt.gcf().set_size_inches(15, 10)
plot_data = tile_raster_images(W.get_value(borrow=True).T,
img_shape=(28, 28), tile_shape=(2, 5), tile_spacing=(1, 1))
plt.imshow(plot_data, cmap='Greys', interpolation='none')
plt.axis('off')
Explanation: Visualization
You can visualize the columns of W, which correspond to the separation hyperplanes for each class.
End of explanation |
9,849 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<h1>Introduction to Jet Images and Computer Vision</h1>
<h3>Michela Paganini - Yale University</h3>
<h4>High Energy Phenomenology, Experiment and Cosmology Seminar Series</h4>
<img src='http
Step1: Jets at the LHC
<img src="http
Step2: You can compute the quantities above directly from the images
Step3: Looking at Physics features
Step4: Jet Image Classification
We can now try to use various techniques to classify the jet images into signal (i.e. originating from boosted W bosons) and background (QCD).
We will start with a classic feature-based classifier, which will use properties of the jet such as mass, tau_21, and delta_R (known to have good discriminative power) to separate the two classes.
Then, we will construct different networks that operate directly at the pixel level and compare them all.
Simple feature-based classifier
Data processing
Follow the procedure from yesterday to create your matrix of features X. Shuffle its entries, split them into train, test, and validation set, and scale them to zero mean and unit standard deviation.
Step5: Model
Build a simple keras model made of fully-connected (Dense) layers. Remember the steps
Step6: The command below trains the model. However, in the interest of time, I will not train the network on the spot. I will instead load in pre-trained weights from a training I performed last night.
Step7: If you were to actually run the training, you would be able to visualize its history. Keras saves the entire training history, keeping track of whatever metric you specify (here accuracy and loss).
Step8: Evaluate on test set
Step9: Convolutional Neural Network
We can now instead try to learn a model directly on the pixel space, instead of summarizing the information into engineered features such as mass, tau_21 and delta_R.
Step10: Feel free to try to train it at home! For this tutorial, we will just load in pre-trained weights.
Step11: Locally-Connected Neural Network
Step12: Feel free to try to train it at home! For this tutorial, we will just load in pre-trained weights.
Step13: Fully-Connected network
Step14: Plot ROC Curves
A standard way to visualize the tradeoff between low false positive rate (FPR) and high true positive rate (TPR) is by plotting them on a ROC curve. In Physics, people like to plot the TPR on the x-axis, and 1/FPR on the y-axis.
Step15: What is the network learning?
Step16: You can now visualize what each network picks up on, at least to first order. These correlation plots tells us whether a specific pixel being strongly activated is a good indicator of that jet image belonging to a class or the other. Red represents the signal (boosted W from W'-->WZ), blue represents the background (QCD).
Step17: You can also look at how the output of each classifier is correlated with known quantities that are known to be discriminative, such as the ones used in the baseline classifier above (mass, tau_21, delta_R). This will inform us as to whether the network has 'learned' to internally calculate a representation that is close to these variables, thus eliminating our need to come up with these features ourselves.
Step18: Mass
Step19: Tau_21
Step20: Delta R
Step21: Sometimes, however, it is not a good idea to have your network learn the mass of a jet and use that to classify jets. In that case, in fact, the network will successfully suppress all jets outside of the signal-like mass window and sculpt the mass of the background to look like a signal peak.
What we would like to be able to do, instead, is to have a classifier that is insensitive to mass, and that reduced the background across the entire mass spectrum.
For reference, see
Step23: Define new custom loss functions not included among the standard keras ones
Step24: Build individual models
Step25: Build connected model with only generator trainable
Step26: Build connected model with only discriminator trainable
Step27: Load pre-trained weights
Step28: Now that we have a trained GAN, we can see if it actually works and what it produces. We can now get rid of the critic (discriminator) and focus only on the part that we really care about, the generator. Let's execute a forward pass on the generator.
Step29: Let's look at some GAN-generated jet images!
Step30: We can also look at the difference between the average generated image and the average real image to identify parts of the image that are not well-captured by the GAN and might need improvement. Green pixels are more strongly activated, on average, in fake images, while purple pixels are more strongly activated in background images. | Python Code:
import os
from keras.utils.data_utils import get_file
# Info for downloading the dataset from Zenodo
MD5_HASH = 'f9b11c46b6a0ff928bec2eccf865ecf0'
DATAFILE = 'jet-images_Mass60-100_pT250-300_R1.25_Pix25.hdf5'
URL_TEMPLATE = 'https://zenodo.org/record/{record}/files/{filename}'
print('[INFO] MD5 verification: {}'.format(MD5_HASH))
datafile = get_file(
fname=DATAFILE,
origin=URL_TEMPLATE.format(record=269622, filename=DATAFILE),
md5_hash=MD5_HASH
)
Explanation: <center>
<h1>Introduction to Jet Images and Computer Vision</h1>
<h3>Michela Paganini - Yale University</h3>
<h4>High Energy Phenomenology, Experiment and Cosmology Seminar Series</h4>
<img src='http://www.edustart.org/wp-content/uploads/2014/03/MIST-Vert-Aggie-Maroon-page-001-300x200.jpg'>
</center>
Material:
1. Ben Nachman's plenary talk at ACAT 2017 (<a href="https://indico.cern.ch/event/567550/contributions/2656471/attachments/1510207/2354761/Nachman_ACAT.pdf">slides</a>)
1. My talk on Generative Adversarial Networks for jet images at the 2017 IML Workshop (<a href="http://cds.cern.ch/record/2256878?ln=en">video</a>)
1. Jannicke Pearkes's talk on boosted top tagging with jet images at the 2017 IML Workshop (<a href="http://cds.cern.ch/record/2256876?ln=en">video</a>)
1. Michael Kagan's overview talk at LHCP 2017 (<a href="https://cds.cern.ch/record/2267879?ln=en">slides</a>)
1. ATLAS PUB Note on quark vs gluon tagging with jet images(<a href="https://cds.cern.ch/record/2275641/files/ATL-PHYS-PUB-2017-017.pdf">note</a>)
1. Lynn Huynh's summer report on jet image pre-processing (<a href="https://cds.cern.ch/record/2209127/files/Lynn_Huynh_Report.pdf">write-up</a>)
1. Ben Nachman's talk at DataScience@LHC 2015 (<a href="http://cds.cern.ch/record/2069153">video</a>)
Dataset
<a href="https://zenodo.org/record/269622#.WgZFPRNSyRs"><img src="images/zenodo.jpg"></a>
Although the dataset was released in conjunction with the arXiv publication of our work on Generative Adversarial Networks for jet images, it was previously used in the original "Jet Images -- Deep Learning Edition" work on jet image classification. Feel free to explore the dataset and use it for any project you have in mind (please cite the dataset and relevant publications explaining its generation!)
Download dataset from Zenodo
End of explanation
import h5py
import numpy as np
import os
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from matplotlib.colors import LogNorm, Normalize
%matplotlib inline
# number of images to load
nb_points = 800000
# open hdf5 data file
d = h5py.File(datafile, 'r')
# content of the dataset
d.items()
# extract a random subset of samples
ix = list(range(d['image'].shape[0]))  # get indices (as a list so they can be shuffled in place)
np.random.shuffle(ix) # shuffle them
ix = ix[:nb_points] # select out nb_points
# extract data from dataset
images, labels = d['image'][:][ix], d['signal'][:][ix]
mass = d['jet_mass'][:][ix]
delta_R = d['jet_delta_R'][:][ix]
tau_21 = d['tau_21'][:][ix]
Explanation: Jets at the LHC
<img src="http://cms.web.cern.ch/sites/cms.web.cern.ch/files/styles/large/public/field/image/jets_v1.png?itok=ULcYw1lS">
Jets are the observable result of quarks and gluons scattering at high energy. A collimated stream of
protons and other hadrons forms in the direction of the initiating quark or gluon. Clusters of such
particles are called jets.
Jet Images
Mature field of research! (image courtesy of B.P.Nachman)
<img src="./images/graph.jpg" width="800">
<a href="https://arxiv.org/abs/1709.04464"><img src="./images/jet.jpg" width="600" align="right" style="border:5px solid black"></a>
What is a jet image?
<img src="./images/jet_image.jpg" width="300">
A jet image is a two-dimensional representation of the radiation pattern within a jet: the distribution of the locations and energies of the jet’s constituent particles. The jet image consists of a regular grid of pixels in η×φ.
Advantages of this data format include: easy visual inspection, fixed-length representation, suitable for application of computer vision techniques.
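As a rough illustration (a hedged sketch, not the pipeline actually used to produce this dataset), pixelating a jet amounts to histogramming the transverse momenta of its constituents on the (η, φ) grid; the function name and arguments below are assumptions for illustration only.
import numpy as np
def pixelate(etas, phis, pts, nb_pix=25, r=1.25):
    # sum the pT of all constituents falling into each 0.1 x 0.1 cell
    edges = np.linspace(-r, r, nb_pix + 1)
    image, _, _ = np.histogram2d(phis, etas, bins=[edges, edges], weights=pts)
    return image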
Pre-processing
In the dataset we will be using today:
The finite granularity of a calorimeter is simulated with a regular 0.1×0.1 grid in η and φ. The energy of each calorimeter cell is given by the sum of the energies of all particles incident on the cell. Cells with positive energy are assigned to jets using the anti-kt clustering algorithm with a radius parameter of R = 1.0 via the software package FastJet 3.2.1.
To mitigate the contribution from the underlying event, jets are trimmed by re-clustering the constituents into R = 0.3 kt subjets and dropping those which have less than 5% of the transverse momentum of the parent jet. Trimming also reduces the impact of pileup: multiple proton-proton collisions occurring in the same event as the hard-scatter process. Jet images are formed by translating the η and φ of all constituents of a given jet so that its highest pT subjet is centered at the origin.
A rectangular grid of η × φ ∈ [−1.25, 1.25] × [−1.25, 1.25] with 0.1 × 0.1 pixels centered at the origin
forms the basis of the jet image. The intensity of each pixel is the pT corresponding to the energy
and pseudorapidity of the constituent calorimeter cell, pT = E_cell/ cosh(η_cell). The radiation pattern
is symmetric about the origin of the jet image and so the images are rotated. The subjet with the
second highest pT (or, in its absence, the direction of the first principal component) is placed at an
angle of −π/2 with respect to the η − φ axes. Finally, a parity transform about the vertical axis is
applied if the left side of the image has more energy than the right side.
<div align="right">
<i>Learning Particle Physics by Example: Location-Aware Generative Adversarial Networks for Physics Synthesis</i> <br>
[arXiv:1701.05927](https://arxiv.org/pdf/1701.05927.pdf)
</div>
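The last two pre-processing steps can be sketched as follows (a schematic, not the original preprocessing code; the rotation sign convention and the angle alpha of the second subjet are assumptions here):
import numpy as np
from scipy.ndimage import rotate

def rotate_and_flip(image, alpha):
    # rotate so the second-leading subjet (assumed to sit at angle alpha) ends up at -pi/2;
    # the overall sign depends on the image orientation convention
    rotated = rotate(image, np.degrees(-(alpha + np.pi / 2)), reshape=False, order=1)
    # parity flip about the vertical axis if the left half carries more energy than the right
    half = rotated.shape[1] // 2
    if rotated[:, :half].sum() > rotated[:, half:].sum():
        rotated = np.fliplr(rotated)
    return rotated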
References:
* Section 3 of arXiv:1511.05190
* <a href="https://link.springer.com/article/10.1007/s41781-017-0004-6#Sec16">Appendix B</a> of arXiv:1701.05927
Uniqueness with respect to natural images in ML literature
Sparse (low occupancy)
High dynamic range (pixel intensity represents pT of particles and spans several orders of magnitude)
Pixel activations and positions are physically meaningful
Small variations can drastically modify physical properties of a jet
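The first two properties are easy to check directly on the images loaded in the accompanying code (a quick illustration, not part of the original notebook):
occupancy = (images > 0).mean()          # average fraction of non-zero pixels
active = images[images > 0]
print('mean occupancy = {:.3f}'.format(occupancy))
print('pixel pT spans {:.2e} -- {:.2e} GeV'.format(active.min(), active.max()))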
Hands-on tutorial
End of explanation
def plot_jet_image(content, output_fname=None, vmin=1e-6, vmax=300, title=''):
'''
Function to help you visualize a jet image on a log scale
Args:
-----
content : numpy array of dimensions 25x25, first arg to imshow,
content of the image
e.g.: images.mean(axis=0) --> the average image
output_fname : string, name of the output file where the plot will be
saved.
vmin : (default = 1e-6) float, lower bound of the pixel intensity
scale before saturation
vmax : (default = 300) float, upper bound of the pixel intensity
scale before saturation
title : (default = '') string, title of the plot, to be displayed
on top of the image
'''
fig, ax = plt.subplots(figsize=(7, 6))
extent = [-1.25, 1.25, -1.25, 1.25]
im = ax.imshow(content, interpolation='nearest',
norm=LogNorm(vmin=vmin, vmax=vmax), extent=extent)
cbar = plt.colorbar(im, fraction=0.05, pad=0.05)
cbar.set_label(r'Pixel $p_T$ (GeV)', y=0.85)
plt.xlabel(r'[Transformed] Pseudorapidity $(\eta)$')
plt.ylabel(r'[Transformed] Azimuthal Angle $(\phi)$')
plt.title(title)
    if output_fname is not None:
plt.savefig(output_fname)
def plot_diff_jet_image(content, output_fname=None, extr=None, title='',
cmap=matplotlib.cm.seismic):
'''
Function to help you visualize the difference between two sets of jet
images on a linear scale
Args:
-----
content : numpy array of dimensions 25x25, first arg to imshow,
content of the image
e.g.: sig_images.mean(axis=0) - bkg_images.mean(axis=0)
output_fname : string, name of the output file where the plot will be
saved.
extr : (default = None) float, magnitude of the upper and lower
bounds of the pixel intensity scale before saturation (symmetric
around 0)
title : (default = '') string, title of the plot, to be displayed on
top of the image
cmap : (default = matplotlib.cm.PRGn_r) matplotlib colormap, ideally
white in the middle
'''
fig, ax = plt.subplots(figsize=(6, 6))
extent = [-1.25, 1.25, -1.25, 1.25]
    if extr is None:
extr = max(abs(content.min()), abs(content.max()))
im = ax.imshow(
content,
interpolation='nearest',
norm=Normalize(vmin=-extr, vmax=+extr), extent=extent,
cmap=cmap
)
plt.colorbar(im, fraction=0.05, pad=0.05)
plt.xlabel(r'[Transformed] Pseudorapidity $(\eta)$')
plt.ylabel(r'[Transformed] Azimuthal Angle $(\phi)$')
plt.title(title)
if output_fname:
plt.savefig(output_fname)
# visualize a jet image
plot_jet_image(images[0])
# visualize the average jet image
plot_jet_image(images.mean(axis=0))
# visualize the difference between the average signal and the average background image
plot_diff_jet_image(
images[labels == 1].mean(axis=0) - images[labels == 0].mean(axis=0)
)
Explanation: You can compute the quantities above directly from the images:
\begin{align}
&p_\text{T}^2(I) =\left(\sum_{i=0}^{N} I_i\cos(\phi_i)\right)^2+\left(\sum_{i=0}^{N} I_i\sin(\phi_i)\right)^2
\label{eq:pt}
\
&m^2(I) = \left(\sum_{i=0}^{N} I_i\right)^2-p_\text{T}^2(I)-\left(\sum_{i=0}^{N} I_i\sinh(\eta_i)\right)^2
\label{eq:m}
\
&\tau_{21}(I)=\frac{\tau_2(I)}{\tau_1(I)},
\label{eq:tau21}
\end{align}
where:
\begin{equation}
\tau_{n}(I)\propto\sum_{i=0}^{N} I_i \min_{1\leq a\leq n}\left{\sqrt{\left(\eta_i-\eta_a\right)^2+\left(\phi_i-\phi_a\right)^2}\right}
\end{equation}
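A minimal sketch (not from the original notebook) of evaluating the pT and mass formulas on a single 25x25 image; the assignment of array axes to (η, φ) and the use of pixel-centre coordinates are assumptions here, and the helper name is ours.
import numpy as np
coords = np.linspace(-1.25, 1.25, 25)
eta_grid, phi_grid = np.meshgrid(coords, coords)

def image_pt_mass(I):
    px = np.sum(I * np.cos(phi_grid))
    py = np.sum(I * np.sin(phi_grid))
    pz = np.sum(I * np.sinh(eta_grid))
    pt2 = px ** 2 + py ** 2
    m2 = np.sum(I) ** 2 - pt2 - pz ** 2
    return np.sqrt(pt2), np.sqrt(np.clip(m2, 0, None))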
End of explanation
def plot_physics_feature(feature_name, feature, labels, bins=None, output_fname=None):
# if bins are not defined when function is called, define them here
if not bins:
bins = np.linspace(feature.min(), feature.max(), 50)
fig, ax = plt.subplots(figsize=(6, 6))
    _ = plt.hist(feature[labels == 1], bins=bins, histtype='step',
                 label=r"Signal ($W' \rightarrow WZ$)",
                 density=True, color='red')
    _ = plt.hist(feature[labels == 0], bins=bins, histtype='step',
                 label=r'Background (QCD dijets)', density=True, color='blue')
plt.xlabel(r'Discretized {} of Jet Image'.format(feature_name))
plt.ylabel(r'Units normalized to unit area')
plt.legend()
if output_fname:
plt.savefig(output_fname)
plot_physics_feature(r'$m$', mass, labels)
plot_physics_feature(r'$\Delta R$', delta_R, labels)
plot_physics_feature(r'$\tau_{2,1}$', tau_21, labels)
Explanation: Looking at Physics features
End of explanation
features = np.stack((mass, tau_21, delta_R)).T # What we called X yesterday
features
from sklearn.model_selection import train_test_split
# 80% train+validate, 20% test
images_train, images_test, \
labels_train, labels_test, \
features_train, features_test \
= train_test_split(images, labels, features,
test_size=0.2)
# 64% train, 16% validate
images_train, images_val, \
labels_train, labels_val, \
features_train, features_val \
= train_test_split(images_train, labels_train, features_train,
test_size=0.2)
print ('{} training samples\n{} validation samples\n{} testing samples'
.format(images_train.shape[0], images_val.shape[0], images_test.shape[0]))
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
features_train = scaler.fit_transform(features_train)
features_val = scaler.transform(features_val)
features_test = scaler.transform(features_test)
Explanation: Jet Image Classification
We can now try to use various techniques to classify the jet images into signal (i.e. originating from boosted W bosons) and background (QCD).
We will start with a classic feature-based classifier, which will use properties of the jet such as mass, tau_21, and delta_R (known to have good discriminative power) to separate the two classes.
Then, we will construct different networks that operate directly at the pixel level and compare them all.
Simple feature-based classifier
Data processing
Follow the procedure from yesterday to create your matrix of features X. Shuffle its entries, split them into train, test, and validation set, and scale them to zero mean and unit standard deviation.
End of explanation
from keras.layers import Input, Dense, Dropout
from keras.models import Model
from keras.callbacks import EarlyStopping, ModelCheckpoint
x = Input(shape=(features_train.shape[1], ))
h = Dense(64, activation='relu')(x)
h = Dense(64, activation='relu')(h)
h = Dense(64, activation='relu')(h)
y = Dense(1, activation='sigmoid')(h)
baseline_model = Model(x, y)
baseline_model.compile('adam', 'binary_crossentropy', metrics=['acc'])
Explanation: Model
Build a simple keras model made of fully-connected (Dense) layers. Remember the steps:
1. Define the symbolic graph by connecting layers
1. Define an optimizer and a loss function to minimize
1. Train ('fit') the model to the training dataset, monitoring whether the validation loss continues to decrease
1. Stop the training automatically when the validation loss stops going down
1. Evaluate performance on test set
Recall activation functions: Rectified Linear Unit (relu) vs. Leaky Rectified Linear Unit
<img src="https://cdn-images-1.medium.com/max/1600/1*A_Bzn0CjUgOXtPCJKnKLqA.jpeg">
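For reference, the two activations in plain numpy (0.3 is Keras' default leak slope; the helper names are ours):
def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.3):
    return np.where(x > 0, x, alpha * x)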
End of explanation
# baseline_model.fit(
# features_train, labels_train, # X and y
# epochs=200,
# batch_size=128,
# validation_data=(features_val, labels_val), # validation X and y
# callbacks=[
# EarlyStopping(verbose=True, patience=15, monitor='val_loss'),
# ModelCheckpoint('./models/baseline-model.h5', monitor='val_loss',
# verbose=True, save_best_only=True)
# ]
# )
Explanation: The command below trains the model. However, in the interest of time, I will not train the network on the spot. I will instead load in pre-trained weights from a training I performed last night.
End of explanation
history = baseline_model.history.history
history.keys()
# accuracy plot
plt.plot(100 * np.array(history['acc']), label='training')
plt.plot(100 * np.array(history['val_acc']), label='validation')
plt.xlim(0)
plt.xlabel('epoch')
plt.ylabel('accuracy %')
plt.legend(loc='lower right', fontsize=20)
plt.show()
# loss plot
plt.plot(100 * np.array(history['loss']), label='training')
plt.plot(100 * np.array(history['val_loss']), label='validation')
plt.xlim(0)
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(loc='upper right', fontsize=20)
# the line indicates the epoch corresponding to the best performance on the validation set
# plt.vlines(np.argmin(history['val_loss']), 45, 56, linestyle='dashed', linewidth=0.5)
plt.show()
print('Loss estimate on unseen examples (from validation set) = {0:.3f}'.format(np.min(history['val_loss'])))
Explanation: If you were to actually run the training, you would be able to visualize its history. Keras saves the entire training history, keeping track of whatever metric you specify (here accuracy and loss).
End of explanation
baseline_model.load_weights('./models/baseline-model.h5')
yhat_baseline = baseline_model.predict(features_test, batch_size=512)
bins = np.linspace(0, 1, 20)
_ = plt.hist(yhat_baseline[labels_test==1],
histtype='stepfilled', alpha=0.5, color='red', label=r"Signal ($W' \rightarrow WZ$)", bins=bins)
_ = plt.hist(yhat_baseline[labels_test==0],
histtype='stepfilled', alpha=0.5, color='blue', label=r'Background (QCD dijets)', bins=bins)
plt.legend(loc='upper center')
plt.xlabel('P(signal) assigned by the baseline model')
Explanation: Evaluate on test set
End of explanation
from keras.layers import Conv2D, Flatten, LeakyReLU
# add channel dimension (1 for grayscale)
images_train = np.expand_dims(images_train, -1)
images_test = np.expand_dims(images_test, -1)
images_val = np.expand_dims(images_val, -1)
x = Input(shape=(images_train.shape[1:]))
h = Conv2D(32, kernel_size=7, strides=1)(x)
h = LeakyReLU()(h)
h = Dropout(0.2)(h)
h = Conv2D(64, kernel_size=7, strides=1)(h)
h = LeakyReLU()(h)
h = Dropout(0.2)(h)
h = Conv2D(128, kernel_size=5, strides=1)(h)
h = LeakyReLU()(h)
h = Dropout(0.2)(h)
h = Conv2D(256, kernel_size=5, strides=1)(h)
h = LeakyReLU()(h)
h = Flatten()(h)
h = Dropout(0.2)(h)
y = Dense(1, activation='sigmoid')(h)
cnn_model = Model(x, y)
cnn_model.compile('adam', 'binary_crossentropy', metrics=['acc'])
cnn_model.summary()
Explanation: Convolutional Neural Network
We can now instead try to learn a model directly on the pixel space, instead of summarizing the information into engineered features such as mass, tau_21 and delta_R.
End of explanation
# cnn_model.fit(
# images_train, labels_train,
# epochs=100,
# batch_size=512,
# validation_data=(images_val, labels_val),
# callbacks=[
# EarlyStopping(verbose=True, patience=30, monitor='val_loss'),
# ModelCheckpoint('./models/cnn-model.h5', monitor='val_loss',
# verbose=True, save_best_only=True)
# ]
# )
cnn_history = cnn_model.history.history
# accuracy plot
plt.plot(100 * np.array(cnn_history['acc']), label='training')
plt.plot(100 * np.array(cnn_history['val_acc']), label='validation')
plt.xlim(0)
plt.xlabel('epoch')
plt.ylabel('accuracy %')
plt.legend(loc='lower right', fontsize=20)
plt.show()
# loss plot
plt.plot(100 * np.array(cnn_history['loss']), label='training')
plt.plot(100 * np.array(cnn_history['val_loss']), label='validation')
plt.xlim(0)
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(loc='upper right', fontsize=20)
# the line indicates the epoch corresponding to the best performance on the validation set
plt.vlines(np.argmin(cnn_history['val_loss']), 43, 56, linestyle='dashed', linewidth=0.5)
plt.show()
print('Loss estimate on unseen examples (from validation set) = {0:.3f}'.format(np.min(cnn_history['val_loss'])))
cnn_model.load_weights('models/cnn-model.h5')
yhat_cnn = cnn_model.predict(images_test, batch_size=512, verbose=True)
Explanation: Feel free to try to train it at home! For this tutorial, we will just load in pre-trained weights.
End of explanation
from keras.layers import LocallyConnected2D , MaxPool2D, Flatten
x = Input(shape=(images_train.shape[1:]))
h = LocallyConnected2D(32, kernel_size=9, strides=2)(x)
h = LeakyReLU()(h)
h = Dropout(0.2)(h)
h = LocallyConnected2D(32, kernel_size=5, strides=1)(h)
h = LeakyReLU()(h)
h = Dropout(0.2)(h)
h = LocallyConnected2D(64, kernel_size=3, strides=1)(h)
h = LeakyReLU()(h)
h = Dropout(0.2)(h)
h = LocallyConnected2D(64, kernel_size=3, strides=1)(h)
h = LeakyReLU()(h)
h = Flatten()(h)
h = Dropout(0.2)(h)
y = Dense(1, activation='sigmoid')(h)
lcn_model = Model(x, y)
lcn_model.compile('adam', 'binary_crossentropy', metrics=['acc'])
lcn_model.summary()
Explanation: Locally-Connected Neural Network
End of explanation
# lcn_model.fit(
# images_train, labels_train,
# epochs=100,
# batch_size=256,
# validation_data=(images_val, labels_val),
# callbacks=[
# EarlyStopping(verbose=True, patience=30, monitor='val_loss'),
# ModelCheckpoint('./models/lcn-model.h5', monitor='val_loss',
# verbose=True, save_best_only=True)
# ]
# )
lcn_history = lcn_model.history.history
# accuracy plot
plt.plot(100 * np.array(lcn_history['acc']), label='training')
plt.plot(100 * np.array(lcn_history['val_acc']), label='validation')
plt.xlim(0)
plt.xlabel('epoch')
plt.ylabel('accuracy %')
plt.legend(loc='lower right', fontsize=20)
plt.show()
# loss plot
plt.plot(100 * np.array(lcn_history['loss']), label='training')
plt.plot(100 * np.array(lcn_history['val_loss']), label='validation')
plt.xlim(0)
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(loc='upper right', fontsize=20)
# the line indicates the epoch corresponding to the best performance on the validation set
plt.vlines(np.argmin(lcn_history['val_loss']), 43, 56, linestyle='dashed', linewidth=0.5)
plt.show()
print('Loss estimate on unseen examples (from validation set) = {0:.3f}'.format(np.min(lcn_history['val_loss'])))
lcn_model.load_weights('models/lcn-model.h5')
yhat_lcn = lcn_model.predict(images_test, batch_size=512)
Explanation: Feel free to try to train it at home! For this tutorial, we will just load in pre-trained weights.
End of explanation
x = Input(shape=(images_train.shape[1:]))
h = Flatten()(x)
h = Dense(25 ** 2, kernel_initializer='he_normal')(h)
h = LeakyReLU()(h)
h = Dropout(0.2)(h)
h = Dense(512, kernel_initializer='he_normal')(h)
h = LeakyReLU()(h)
h = Dropout(0.2)(h)
h = Dense(256, kernel_initializer='he_normal')(h)
h = LeakyReLU()(h)
h = Dropout(0.2)(h)
h = Dense(128, kernel_initializer='he_normal')(h)
h = LeakyReLU()(h)
h = Dropout(0.2)(h)
y = Dense(1, activation='sigmoid')(h)
dense_model = Model(x, y)
dense_model.compile('adam', 'binary_crossentropy', metrics=['acc'])
# dense_model.fit(
# images_train, labels_train,
# epochs=100,
# batch_size=256,
# validation_data=(images_val, labels_val),
# callbacks=[
# EarlyStopping(verbose=True, patience=30, monitor='val_loss'),
# ModelCheckpoint('./models/dense-model.h5', monitor='val_loss',
# verbose=True, save_best_only=True)
# ]
# )
dense_history = dense_model.history.history
# accuracy plot
plt.plot(100 * np.array(dense_history['acc']), label='training')
plt.plot(100 * np.array(dense_history['val_acc']), label='validation')
plt.xlim(0)
plt.xlabel('epoch')
plt.ylabel('accuracy %')
plt.legend(loc='lower right', fontsize=20)
plt.show()
# loss plot
plt.plot(100 * np.array(dense_history['loss']), label='training')
plt.plot(100 * np.array(dense_history['val_loss']), label='validation')
plt.xlim(0)
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(loc='upper right', fontsize=20)
# the line indicates the epoch corresponding to the best performance on the validation set
plt.vlines(np.argmin(dense_history['val_loss']), 43, 56, linestyle='dashed', linewidth=0.5)
plt.show()
print('Loss estimate on unseen examples (from validation set) = {0:.3f}'.format(np.min(dense_history['val_loss'])))
dense_model.load_weights('models/dense-model.h5')
yhat_dense = dense_model.predict(images_test, batch_size=512, verbose=True)
Explanation: Fully-Connected network
End of explanation
from sklearn.metrics import roc_curve
fpr_cnn, tpr_cnn, _ = roc_curve(labels_test, yhat_cnn)
fpr_lcn, tpr_lcn, _ = roc_curve(labels_test, yhat_lcn)
fpr_dense, tpr_dense, _ = roc_curve(labels_test, yhat_dense)
fpr_baseline, tpr_baseline, _ = roc_curve(labels_test, yhat_baseline)
plt.figure(figsize=(10,10))
plt.grid(b = True, which = 'minor')
plt.grid(b = True, which = 'major')
_ = plt.plot(tpr_cnn, 1./fpr_cnn, label='CNN')
_ = plt.plot(tpr_lcn, 1./fpr_lcn, label='LCN')
_ = plt.plot(tpr_dense, 1./fpr_dense, label='FCN')
_ = plt.plot(tpr_baseline, 1./fpr_baseline, label='Baseline')
plt.legend()
plt.xlim((0.1, 0.9))
plt.ylim((1, 1000))
plt.yscale('log')
Explanation: Plot ROC Curves
A standard way to visualize the tradeoff between low false positive rate (FPR) and high true positive rate (TPR) is by plotting them on a ROC curve. In Physics, people like to plot the TPR on the x-axis, and 1/FPR on the y-axis.
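Each curve can also be summarized by its area under the ROC curve (AUC); this is a small addition, not in the original notebook, using the same test-set predictions:
from sklearn.metrics import roc_auc_score
for name, yhat in [('CNN', yhat_cnn), ('LCN', yhat_lcn),
                   ('FCN', yhat_dense), ('Baseline', yhat_baseline)]:
    print('{} AUC = {:.3f}'.format(name, roc_auc_score(labels_test, yhat.ravel())))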
End of explanation
def get_correlations(images, disc_output):
'''
calculate linear correlation between each pixel and the output of the classifier
to see what pixels are more indicative of a specific class.
'''
import pandas as pd
# -- find the total number of pixels per image, here 25 x 25
n_pixels = np.prod(images.shape[1:3])
# -- add the pixels as columns to a dataframe
df = pd.DataFrame(
{i : np.squeeze(images).reshape(-1, n_pixels)[:, i] for i in range(n_pixels)}
)
# -- add a column to the end of the dataframe for the discriminator's output
df['disc_output'] = disc_output
# -- pandas offers an easy solution to calculate correlations
# (even though it's slow because it also calculates the correlation between each pixel and every other pixel)
correlations = df.corr().values[:-1, -1]
return correlations
def plot_correlations(correlations, extent, title='', img_dim=(25, 25), cmap=plt.cm.seismic):
'''
call the function about and then plot the correlations in image format
'''
max_mag = max(
abs(np.min(correlations[np.isfinite(correlations)])),
abs(np.max(correlations[np.isfinite(correlations)])),
) # highest correlation value (abs value), to make the plot look nice and on a reasonable scale
f, ax = plt.subplots(figsize=(6, 6))
im = ax.imshow(
correlations.reshape(img_dim),
interpolation='nearest',
norm=Normalize(vmin=-max_mag, vmax=max_mag),
extent=extent,
cmap=cmap
)
plt.colorbar(im, fraction=0.05, pad=0.05)
plt.xlabel(r'[Transformed] Pseudorapidity $(\eta)$')
plt.ylabel(r'[Transformed] Azimuthal Angle $(\phi)$')
plt.title(title)
# plt.savefig(os.path.join('..', outdir, outname))
Explanation: What is the network learning?
End of explanation
plot_correlations(
get_correlations(images_test[:10000], yhat_cnn[:10000]),
extent=[-1.25, 1.25, -1.25, 1.25],
title='Correlation between pixels \n and the CNN prediction'
)
plot_correlations(
get_correlations(images_test[:10000], yhat_lcn[:10000]),
extent=[-1.25, 1.25, -1.25, 1.25],
title='Correlation between pixels \n and the LCN prediction'
)
plot_correlations(
get_correlations(images_test[:10000], yhat_dense[:10000]),
extent=[-1.25, 1.25, -1.25, 1.25],
title='Correlation between pixels \n and the FCN prediction'
)
plot_correlations(
get_correlations(images_test[:10000], yhat_baseline[:10000]),
extent=[-1.25, 1.25, -1.25, 1.25],
title='Correlation between pixels \n and the Baseline prediction'
)
Explanation: You can now visualize what each network picks up on, at least to first order. These correlation plots tell us whether a specific pixel being strongly activated is a good indicator of that jet image belonging to one class or the other. Red represents the signal (boosted W from W'-->WZ), blue represents the background (QCD).
End of explanation
def plot_output_vs_kin(kin, output, xlabel, ylabel, nbins=30):
'''
Plot one output of the discriminator network vs. one of the 1D physics variables that describe jets
Args:
-----
kin : numpy array, kinematic property (such as mass or pT) associated with each image. I.e.: discrete_mass(np.squeeze(generated_images))
output : numpy array, one of the 2 outputs of the discriminator, evaluated on the same images that `kin` refers to
xlabel : string, x-axis label that describes the meaning of `kin`
ylabel : string, y-axis label that describes the meaning og `output`
nbins : (default = 30) number of bins to use to represent the distributions in a discretized way
'''
# set the colormap
plt.set_cmap('jet')
# draw a 2d histogram of the discriminator's output versus the kinematic variable of choice (mass, pT, etc.)
h, binx, biny, _ = plt.hist2d(kin, output.reshape(-1,), bins=nbins)
plt.clf() # we don't want to plot this 2D histogram, we want to normalize it per bin first
    # normalize the histogram such that the entries in each column add up to 1, so that the intensity
    # of each cell corresponds to the fraction of the jets in a given mass (or pT) bin that get assigned a given classifier output
for i in range(nbins):
h[i, :] = h[i, :] / float(np.sum(h[i, :]))
# plot the normalized histogram as an image
f, ax2 = plt.subplots(figsize=(6, 6))
im = ax2.imshow(
np.flipud(h.T),
interpolation='nearest',
norm=LogNorm(),
extent=[binx.min(), binx.max(), biny.min(), biny.max()],
aspect="auto"
)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
# add a custom colorbar
cax = f.add_axes([0.93, 0.1, 0.03, 0.8])
plt.colorbar(im, cax = cax)
plt.set_cmap('viridis')
Explanation: You can also look at how the output of each classifier is correlated with quantities that are known to be discriminative, such as the ones used in the baseline classifier above (mass, tau_21, delta_R). This will inform us as to whether the network has 'learned' to internally calculate a representation that is close to these variables, thus eliminating our need to come up with these features ourselves.
End of explanation
plot_output_vs_kin(
scaler.inverse_transform(features_test)[:, 0], # mass
yhat_cnn,
xlabel='Discrete jet image mass (GeV)',
ylabel='P(signal)',
)
plot_output_vs_kin(
scaler.inverse_transform(features_test)[:, 0], # mass
yhat_lcn,
xlabel='Discrete jet image mass (GeV)',
ylabel='P(signal)',
)
plot_output_vs_kin(
scaler.inverse_transform(features_test)[:, 0], # mass
yhat_dense,
xlabel='Discrete jet image mass (GeV)',
ylabel='P(signal)',
)
plot_output_vs_kin(
scaler.inverse_transform(features_test)[:, 0], # mass
yhat_baseline,
xlabel='Discrete jet image mass (GeV)',
ylabel='P(signal)',
)
Explanation: Mass
End of explanation
plot_output_vs_kin(
scaler.inverse_transform(features_test)[:, 1], # tau21
yhat_cnn,
xlabel=r'Discrete jet $\tau_{21}$',
ylabel='P(signal)',
)
plot_output_vs_kin(
scaler.inverse_transform(features_test)[:, 1], # tau21
yhat_lcn,
xlabel=r'Discrete jet $\tau_{21}$',
ylabel='P(signal)',
)
plot_output_vs_kin(
scaler.inverse_transform(features_test)[:, 1], # tau21
yhat_dense,
xlabel=r'Discrete jet $\tau_{21}$',
ylabel='P(signal)',
)
plot_output_vs_kin(
scaler.inverse_transform(features_test)[:, 1], # tau21
yhat_baseline,
xlabel=r'Discrete jet $\tau_{21}$',
ylabel='P(signal)',
)
Explanation: Tau_21
End of explanation
plot_output_vs_kin(
scaler.inverse_transform(features_test)[:, 2], # deltaR
yhat_cnn,
xlabel=r'Discrete jet $\Delta R$',
ylabel='P(signal)',
)
plot_output_vs_kin(
scaler.inverse_transform(features_test)[:, 2], # deltaR
yhat_lcn,
xlabel=r'Discrete jet $\Delta R$',
ylabel='P(signal)',
)
plot_output_vs_kin(
scaler.inverse_transform(features_test)[:, 2], # deltaR
yhat_dense,
xlabel=r'Discrete jet $\Delta R$',
ylabel='P(signal)',
)
plot_output_vs_kin(
scaler.inverse_transform(features_test)[:, 2], # deltaR
yhat_baseline,
xlabel=r'Discrete jet $\Delta R$',
ylabel='P(signal)',
)
Explanation: Delta R
End of explanation
from keras.layers import Input, Dense, Reshape, Flatten
from keras.layers.merge import _Merge
from keras.layers.convolutional import Convolution2D, Conv2DTranspose
from keras.layers.normalization import BatchNormalization
from keras.layers.advanced_activations import LeakyReLU
from keras.optimizers import Adam
from keras import backend as K
BATCH_SIZE = 100
# The training ratio is the number of discriminator updates per generator
# update. The paper uses 5.
TRAINING_RATIO = 5
GRADIENT_PENALTY_WEIGHT = 10 # As per the paper
Explanation: Sometimes, however, it is not a good idea to have your network learn the mass of a jet and use that to classify jets. In that case, in fact, the network will successfully suppress all jets outside of the signal-like mass window and sculpt the mass of the background to look like a signal peak.
What we would like to be able to do, instead, is to have a classifier that is insensitive to mass, and that reduces the background across the entire mass spectrum.
For reference, see: C. Shimmin et al., <a href="https://arxiv.org/abs/1703.03507">Decorrelated Jet Substructure Tagging using Adversarial Neural Networks</a>.
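A schematic sketch of that idea (not the method used in this notebook, and only loosely modelled on the cited paper): a classifier is trained jointly against an adversary that tries to predict the jet mass from the classifier output. The trade-off weight LAMBDA and the mass discretization n_mass_bins are assumed values for illustration.
from keras.layers import Input, Dense, Flatten
from keras.models import Model
from keras.optimizers import Adam

LAMBDA = 10.0       # assumed trade-off between classification and decorrelation
n_mass_bins = 20    # assumed discretization of the jet mass (adversary targets are one-hot bins)

# classifier: jet image -> P(signal)
clf_in = Input(shape=(25, 25, 1))
h = Dense(64, activation='relu')(Flatten()(clf_in))
clf_out = Dense(1, activation='sigmoid')(h)
classifier = Model(clf_in, clf_out)

# adversary: classifier output -> (discretized) jet mass
adv_in = Input(shape=(1,))
a = Dense(64, activation='relu')(adv_in)
adv_out = Dense(n_mass_bins, activation='softmax')(a)
adversary = Model(adv_in, adv_out)

# combined model: the classifier is rewarded for making the adversary fail.
# In practice one alternates updates of the adversary (with it trainable) and of
# this combined model (adversary frozen), much like the generator/critic loop below.
adversary.trainable = False
combined = Model(clf_in, [clf_out, adversary(clf_out)])
combined.compile(optimizer=Adam(1e-4),
                 loss=['binary_crossentropy', 'categorical_crossentropy'],
                 loss_weights=[1.0, -LAMBDA])  # minus sign: maximize the adversary's loss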
Training a GAN (<a href="https://arxiv.org/pdf/1704.00028.pdf">WGAN-GP</a>) on jet images
WGAN-GP = a type of GAN that minimizes the Wasserstein distance between the target and generated distributions and enforces the Lipschitz constraint by penalizing the norm of the gradient instead of clipping weights.
End of explanation
def wasserstein_loss(y_true, y_pred):
return K.mean(y_true * y_pred)
def gradient_penalty_loss(y_true, y_pred, averaged_samples,
gradient_penalty_weight):
gradients = K.gradients(K.sum(y_pred), averaged_samples)
gradient_l2_norm = K.sqrt(K.sum(K.square(gradients)))
gradient_penalty = gradient_penalty_weight * K.square(1 - gradient_l2_norm)
return gradient_penalty
def make_generator():
Creates a generator model that takes a 100-dimensional latent prior and
converts to size 25 x 25 x 1
z = Input(shape=(100, ))
x = Dense(1024, input_dim=100)(z)
x = LeakyReLU()(x)
x = Dense(128 * 7 * 7)(x)
x = BatchNormalization()(x)
x = LeakyReLU()(x)
x = Reshape((7, 7, 128))(x)
x = Conv2DTranspose(128, (5, 5), strides=2, padding='same')(x)
x = BatchNormalization(axis=-1)(x)
x = LeakyReLU()(x)
x = Convolution2D(64, (5, 5), padding='same')(x)
x = BatchNormalization(axis=-1)(x)
x = LeakyReLU()(x)
x = Conv2DTranspose(64, (5, 5), strides=2, padding='same')(x)
x = BatchNormalization(axis=-1)(x)
x = LeakyReLU()(x)
y = Convolution2D(1, (4, 4), padding='valid', activation='relu')(x)
return Model(z, y)
def make_discriminator():
x = Input(shape=(25, 25, 1))
h = Convolution2D(64, (5, 5), padding='same')(x)
h = LeakyReLU()(h)
h = Convolution2D(128, (5, 5), kernel_initializer='he_normal',
strides=2)(h)
h = LeakyReLU()(h)
h = Convolution2D(256, (5, 5), kernel_initializer='he_normal',
padding='same', strides=2)(h)
h = LeakyReLU()(h)
h = Flatten()(h)
h = Dense(1024, kernel_initializer='he_normal')(h)
h = LeakyReLU()(h)
y = Dense(1, kernel_initializer='he_normal')(h)
return Model(x, y)
Explanation: Define new custom loss functions not included among the standard keras ones:
End of explanation
generator = make_generator()
discriminator = make_discriminator()
Explanation: Build individual models:
End of explanation
discriminator.trainable = False
z = Input(shape=(100, ))
generator_model = Model(z, discriminator(generator(z)))
# We use the Adam parameters from Gulrajani et al.
generator_model.compile(optimizer=Adam(0.0001, beta_1=0.5, beta_2=0.9),
loss=wasserstein_loss)
Explanation: Build connected model with only generator trainable:
End of explanation
discriminator.trainable = True
generator.trainable = False
class RandomWeightedAverage(_Merge): # used for gradient norm penalty
def _merge_function(self, inputs):
weights = K.random_uniform((K.shape(inputs[0])[0], 1, 1, 1))
return (weights * inputs[0]) + ((1 - weights) * inputs[1])
real_samples = Input(shape=(25, 25, 1))
z = Input(shape=(100,))
fake_samples = generator(z)
critic_out_fake = discriminator(fake_samples)
critic_out_real = discriminator(real_samples)
# generate weighted-averages of real and generated
# samples, to use for the gradient norm penalty.
averaged_samples = RandomWeightedAverage()([real_samples, fake_samples])
# running them thru critic to get the gradient norm for the GP loss.
averaged_samples_out = discriminator(averaged_samples)
# The gradient penalty loss function requires the input averaged samples
def gp_loss(y_true, y_pred):
return gradient_penalty_loss(
y_true, y_pred,
averaged_samples=averaged_samples,
gradient_penalty_weight=GRADIENT_PENALTY_WEIGHT
)
discriminator_model = Model(
inputs=[real_samples, z],
outputs=[critic_out_real, critic_out_fake, averaged_samples_out]
)
# We use the Adam parameters from Gulrajani et al.
discriminator_model.compile(
optimizer=Adam(0.0001, beta_1=0.5, beta_2=0.9),
loss=[wasserstein_loss, wasserstein_loss, gp_loss]
)
# positive_y is the label vector for real samples, with value 1.
# negative_y is the label vector for generated samples, with value -1.
# dummy_y vector is passed to the gradient_penalty loss function and is
# not used.
positive_y = np.ones((BATCH_SIZE, 1), dtype=np.float32)
negative_y = -positive_y
dummy_y = np.zeros((BATCH_SIZE, 1), dtype=np.float32)
# do a little bit of scaling for stability
X_train = np.expand_dims(np.squeeze(images_train[:30000]) / 100, -1)
overall_disc_loss = []
for epoch in range(200): # train for 200 iterations
# at each epoch, shuffle the training set to get new samples
np.random.shuffle(X_train)
print "Epoch: ", epoch
print "Number of batches: ", int(X_train.shape[0] // BATCH_SIZE)
discriminator_loss = []
generator_loss = []
# we'll need this many samples per critic update
critic_nb_samples = BATCH_SIZE * TRAINING_RATIO
# loop through batches
for i in range(int(X_train.shape[0] // (BATCH_SIZE * TRAINING_RATIO))):
X_critic = X_train[i * critic_nb_samples:(i + 1) * critic_nb_samples]
# critic gets trained 5 times more per iteration than the generator
for j in range(TRAINING_RATIO):
X_minibatch = X_critic[j * BATCH_SIZE:(j + 1) * BATCH_SIZE]
# generate new input noise
noise = np.random.rand(BATCH_SIZE, 100).astype(np.float32)
# train the discriminator (or critic)
disc_loss = discriminator_model.train_on_batch(
[X_minibatch, noise],
[positive_y, negative_y, dummy_y]
)
discriminator_loss.append(disc_loss)
critic_score = np.array(discriminator_loss)[:, 0]
if i % 10 == 0:
            print('critic score =', critic_score.mean())
overall_disc_loss.extend(critic_score.tolist())
# train the generator
gen_loss = generator_model.train_on_batch(
np.random.rand(BATCH_SIZE, 100),
positive_y
)
generator_loss.append(gen_loss)
# discriminator.save_weights('./models/wgan-discriminator.h5')
# generator.save_weights('./models/wgan-generator.h5')
Explanation: Build connected model with only discriminator trainable:
End of explanation
discriminator.load_weights('./models/wgan-discriminator.h5')
generator.load_weights('./models/wgan-generator.h5')
Explanation: Load pre-trained weights:
End of explanation
# input noise that will be transformed into jet images
noise = np.random.rand(1000, 100).astype(np.float32)
# produce some jet images from the generator!
fake_jets = generator.predict(noise, batch_size=BATCH_SIZE, verbose=True)
# rescale energies and remove redundant dimension for grayscale channel
fake_jets = np.squeeze(fake_jets * 100)
Explanation: Now that we have a trained GAN, we can see if it actually works and what it produces. We can now get rid of the critic (discriminator) and focus only on the part that we really care about, the generator. Let's execute a forward pass on the generator.
End of explanation
plot_jet_image(fake_jets.mean(0))
Explanation: Let's look at some GAN-generated jet images!
End of explanation
plot_diff_jet_image(fake_jets.mean(0) - images.mean(0), cmap='PRGn')
Explanation: We can also look at the difference between the average generated image and the average real image to identify parts of the image that are not well-captured by the GAN and might need improvement. Green pixels are more strongly activated, on average, in fake images, while purple pixels are more strongly activated in background images.
End of explanation |
9,850 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building a LAS file from scratch
Step1: Step 1
Create some fake data, and make some of the values at the bottom NULL (numpy.nan). Note that of course every curve in a LAS file is recorded against a reference/index, either depth or time, so we create that array too.
Step2: Step 2
Create an empty LASFile object and review its header section
Step3: Let's add some information to the header
Step4: Next, let's make a new item in the ~Parameters section for the operator. To do this we need to make a new HeaderItem
Step5: And finally, add some free text to the ~Other section
Step6: Step 3
Add the curves to the LAS file using the add_curve method
Step7: Step 4
Now let's write out two files
Step8: and let's see if that worked | Python Code:
import lasio
import datetime
import numpy
import os
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Building a LAS file from scratch
End of explanation
depths = numpy.arange(10, 50, 0.5)
fake_curve = numpy.random.random(len(depths))
fake_curve[-10:] = numpy.nan # Add some null values at the bottom
plt.plot(depths, fake_curve)
Explanation: Step 1
Create some fake data, and make some of the values at the bottom NULL (numpy.nan). Note that of course every curve in a LAS file is recorded against a reference/index, either depth or time, so we create that array too.
End of explanation
l = lasio.LASFile()
l.header
Explanation: Step 2
Create an empty LASFile object and review its header section
End of explanation
l.well.DATE = str(datetime.datetime.today())
Explanation: Let's add some information to the header:
the date
the operator (in the Parameter section)
a description of the file in the Other section.
First, let's change the date.
End of explanation
l.params['ENGI'] = lasio.HeaderItem("ENGI", "", "[email protected]", "Creator of this file...")
Explanation: Next, let's make a new item in the ~Parameters section for the operator. To do this we need to make a new HeaderItem:
End of explanation
l.other = "Example of how to create a LAS file from scratch using lasio"
Explanation: And finally, add some free text to the ~Other section:
End of explanation
l.add_curve('DEPT', depths, unit='m')
l.add_curve('FAKE_CURVE', fake_curve, descr='fake curve')
Explanation: Step 3
Add the curves to the LAS file using the add_curve method:
End of explanation
fn = "scratch_example_v2.las"
with open(fn, mode="w") as f: # Write LAS file to disk
l.write(f)
Explanation: Step 4
Now let's write out two files: one according to the LAS file specification version 1.2, and one according to 2.0. Note that by default an empty LASFile object is version 2.0.
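For the 1.2-flavoured file, here is a short sketch, assuming lasio's write() accepts a version keyword as described in its documentation:
with open("scratch_example_v1.2.las", mode="w") as f:
    l.write(f, version=1.2)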
End of explanation
with open(fn, mode="r") as f: # Show the result...
print(f.read())
plt.plot(l['DEPT'], l['FAKE_CURVE'])
os.remove(fn)
Explanation: and let's see if that worked
End of explanation |
9,851 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Development notebook
Step1: Photo-count detection
Theory
Stochastic master equation in Milburn's formulation
$\displaystyle d\rho(t) = dN(t) \mathcal{G}[a] \rho(t) - dt \gamma \mathcal{H}[\frac{1}{2}a^\dagger a] \rho(t)$
where
$\displaystyle \mathcal{G}[A] \rho = \frac{A\rho A^\dagger}{\mathrm{Tr}[A\rho A^\dagger]} - \rho$
$\displaystyle \mathcal{H}[A] \rho = A\rho + \rho A^\dagger - \mathrm{Tr}[A\rho + \rho A^\dagger] \rho $
and $dN(t)$ is a Poisson distributed increment with $E[dN(t)] = \gamma \langle a^\dagger a\rangle (t)dt$.
Formulation in QuTiP
In QuTiP we write the stochastic master equation in the form (in the interaction picture, with no deterministic dissipation)
Step2: Solve using stochastic master equation
$\displaystyle D_{1}[a, \rho] = -\gamma \frac{1}{2}\left( a^\dagger a\rho + \rho a^\dagger a - \mathrm{Tr}[a^\dagger a\rho + \rho a^\dagger a] \right)
\rightarrow - \frac{1}{2}({A^\dagger A}_L + {A^\dagger A}_R)\rho_v + \mathrm{E}[({A^\dagger A}_L + {A^\dagger A}_R)\rho_v]$
$\displaystyle D_{2}[A, \rho(t)] = \frac{A\rho A^\dagger}{\mathrm{Tr}[A\rho A^\dagger]} - \rho
\rightarrow \frac{A_LA^\dagger_R \rho_v}{\mathrm{E}[A_LA^\dagger_R \rho_v]} - \rho_v$
Using QuTiP built-in photo-current detection functions for $D_1$ and $D_2$
Step3: Solve problem again, with the same noise as the previous run
photocurrentmesolve does not take custom noise, but you can set the seed.
Step4: Homodyne detection
Step5: Theory
Stochastic master equation for homodyne in Milburn's formulation
$\displaystyle d\rho(t) = -i[H, \rho(t)]dt + \gamma\mathcal{D}[a]\rho(t) dt + dW(t) \sqrt{\gamma} \mathcal{H}[a] \rho(t)$
where $\mathcal{D}$ is the standard Lindblad dissipator superoperator, and $\mathcal{H}$ is defined as above,
and $dW(t)$ is a normal distributed increment with $E[dW(t)] = \sqrt{dt}$.
In QuTiP format we have
Step6: $\displaystyle D_{2}[A]\rho(t) = \sqrt{\gamma} \mathcal{H}[a]\rho(t)
= A\rho + \rho A^\dagger - \mathrm{Tr}[A\rho + \rho A^\dagger] \rho
\rightarrow (A_L + A_R^\dagger)\rho_v - \mathrm{Tr}[(A_L + A_R^\dagger)\rho_v] \rho_v$
Step7: Using QuTiP built-in homodyne detection functions for $D_1$ and $D_2$
Step8: Solve problem again, this time with a specified noise (from previous run)
Step9: Heterodyne detection
Step10: Stochastic master equation for heterodyne in Milburn's formulation
$\displaystyle d\rho(t) = -i[H, \rho(t)]dt + \gamma\mathcal{D}[a]\rho(t) dt + \frac{1}{\sqrt{2}} dW_1(t) \sqrt{\gamma} \mathcal{H}[a] \rho(t) + \frac{1}{\sqrt{2}} dW_2(t) \sqrt{\gamma} \mathcal{H}[-ia] \rho(t)$
where $\mathcal{D}$ is the standard Lindblad dissipator superoperator, and $\mathcal{H}$ is defined as above,
and $dW_i(t)$ is a normal distributed increment with $E[dW_i(t)] = \sqrt{dt}$.
In QuTiP format we have
Step11: $D_{2}^{(1)}[A]\rho = \frac{1}{\sqrt{2}} \sqrt{\gamma} \mathcal{H}[a] \rho =
\frac{1}{\sqrt{2}} \mathcal{H}[A] \rho =
\frac{1}{\sqrt{2}}(A\rho + \rho A^\dagger - \mathrm{Tr}[A\rho + \rho A^\dagger] \rho)
\rightarrow \frac{1}{\sqrt{2}} \left{(A_L + A_R^\dagger)\rho_v - \mathrm{Tr}[(A_L + A_R^\dagger)\rho_v] \rho_v\right}$
$D_{2}^{(2)}[A]\rho = \frac{1}{\sqrt{2}} \sqrt{\gamma} \mathcal{H}[-ia] \rho
= \frac{1}{\sqrt{2}} \mathcal{H}[-iA] \rho =
\frac{-i}{\sqrt{2}}(A\rho - \rho A^\dagger - \mathrm{Tr}[A\rho - \rho A^\dagger] \rho)
\rightarrow \frac{-i}{\sqrt{2}} \left{(A_L - A_R^\dagger)\rho_v - \mathrm{Tr}[(A_L - A_R^\dagger)\rho_v] \rho_v\right}$
Step12: Using QuTiP built-in heterodyne detection functions for $D_1$ and $D_2$
Step13: Solve problem again, this time with a specified noise (from previous run)
Step14: Software version | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from qutip import *
Explanation: Development notebook: Tests for QuTiP's stochastic master equation solver
Copyright (C) 2011 and later, Paul D. Nation & Robert J. Johansson
In this notebook we test the qutip stochastic master equation solver (smesolve) with a few textbook examples taken from the book Quantum Optics, by Walls and Milburn, section 6.7.
End of explanation
N = 10
w0 = 0.5 * 2 * np.pi
times = np.linspace(0, 15, 150)
dt = times[1] - times[0]
gamma = 0.25
A = 2.5
ntraj = 50
nsubsteps = 50
a = destroy(N)
x = a + a.dag()
H = w0 * a.dag() * a
#rho0 = coherent(N, 5)
rho0 = fock(N, 5)
c_ops = [np.sqrt(gamma) * a]
e_ops = [a.dag() * a, x]
result_ref = mesolve(H, rho0, times, c_ops, e_ops)
plot_expectation_values(result_ref);
Explanation: Photo-count detection
Theory
Stochastic master equation in Milburn's formulation
$\displaystyle d\rho(t) = dN(t) \mathcal{G}[a] \rho(t) - dt \gamma \mathcal{H}[\frac{1}{2}a^\dagger a] \rho(t)$
where
$\displaystyle \mathcal{G}[A] \rho = \frac{A\rho A^\dagger}{\mathrm{Tr}[A\rho A^\dagger]} - \rho$
$\displaystyle \mathcal{H}[A] \rho = A\rho + \rho A^\dagger - \mathrm{Tr}[A\rho + \rho A^\dagger] \rho $
and $dN(t)$ is a Poisson distributed increment with $E[dN(t)] = \gamma \langle a^\dagger a\rangle (t)dt$.
Formulation in QuTiP
In QuTiP we write the stochastic master equation in the form (in the interaction picture, with no deterministic dissipation):
$\displaystyle d\rho(t) = D_{1}[A]\rho(t) dt + D_{2}[A]\rho(t) dW$
where $A = \sqrt{\gamma} a$, so we can identify
$\displaystyle D_{1}[A]\rho(t) = - \frac{1}{2}\gamma \mathcal{H}[a^\dagger a] \rho(t)
= -\gamma \frac{1}{2}\left( a^\dagger a\rho + \rho a^\dagger a - \mathrm{Tr}[a^\dagger a\rho + \rho a^\dagger a] \rho \right)
= -\frac{1}{2}\left( A^\dagger A\rho + \rho A^\dagger A - \mathrm{Tr}[A^\dagger A\rho + \rho A^\dagger A] \rho \right)$
$\displaystyle D_{2}[A]\rho(t) = \mathcal{G}[a] \rho = \frac{A\rho A^\dagger}{\mathrm{Tr}[A\rho A^\dagger]} - \rho$
and
$dW = dN(t)$
and $A = \sqrt{\gamma} a$ is the collapse operator including the rate of the process as a coefficient in the operator.
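As an illustration (not part of the original notebook), the left/right multiplication superoperators entering $D_1$ and $D_2$ can be assembled directly with QuTiP's spre, spost and sprepost; here A_c stands for the collapse operator $\sqrt{\gamma}a$ defined in the accompanying code cell.
A_c = c_ops[0]
n_sum_sop = spre(A_c.dag() * A_c) + spost(A_c.dag() * A_c)  # (A^dag A)_L + (A^dag A)_R
G_jump_sop = sprepost(A_c, A_c.dag())                       # A_L A^dag_R, the jump part of G[a]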
Reference solution: deterministic master equation
End of explanation
result = photocurrent_mesolve(H, rho0, times, c_ops=[], sc_ops=c_ops, e_ops=e_ops,
ntraj=ntraj, nsubsteps=nsubsteps,
store_measurement=True, noise=1234)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.step(times, dt * m.real)
Explanation: Solve using stochastic master equation
$\displaystyle D_{1}[a, \rho] = -\gamma \frac{1}{2}\left( a^\dagger a\rho + \rho a^\dagger a - \mathrm{Tr}[a^\dagger a\rho + \rho a^\dagger a] \right)
\rightarrow - \frac{1}{2}({A^\dagger A}_L + {A^\dagger A}_R)\rho_v + \mathrm{E}[({A^\dagger A}_L + {A^\dagger A}_R)\rho_v]$
$\displaystyle D_{2}[A, \rho(t)] = \frac{A\rho A^\dagger}{\mathrm{Tr}[A\rho A^\dagger]} - \rho
\rightarrow \frac{A_LA^\dagger_R \rho_v}{\mathrm{E}[A_LA^\dagger_R \rho_v]} - \rho_v$
Using QuTiP built-in photo-current detection functions for $D_1$ and $D_2$
End of explanation
result = photocurrent_mesolve(H, rho0, times, c_ops=[], sc_ops=c_ops, e_ops=e_ops,
ntraj=ntraj, nsubsteps=nsubsteps, store_measurement=True, noise=1234)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.step(times, dt * m.real)
Explanation: Solve problem again, with the same noise as the previous run
photocurrentmesolve does not take custom noise, but you can set the seed.
End of explanation
H = w0 * a.dag() * a + A * (a + a.dag())
result_ref = mesolve(H, rho0, times, c_ops, e_ops)
Explanation: Homodyne detection
End of explanation
L = liouvillian(H, c_ops=c_ops).data
def d1_rho_func(t, rho_vec):
return cy.spmv(L, rho_vec)
Explanation: Theory
Stochastic master equation for homodyne in Milburn's formulation
$\displaystyle d\rho(t) = -i[H, \rho(t)]dt + \gamma\mathcal{D}[a]\rho(t) dt + dW(t) \sqrt{\gamma} \mathcal{H}[a] \rho(t)$
where $\mathcal{D}$ is the standard Lindblad dissipator superoperator, and $\mathcal{H}$ is defined as above,
and $dW(t)$ is a Wiener increment, normally distributed with zero mean and variance $dt$ (so $E[dW(t)^2] = dt$).
In QuTiP format we have:
$\displaystyle d\rho(t) = -i[H, \rho(t)]dt + D_{1}[A]\rho(t) dt + D_{2}[A]\rho(t) dW$
where $A = \sqrt{\gamma} a$, so we can identify
$\displaystyle D_{1}[A]\rho(t) = \gamma \mathcal{D}[a]\rho(t) = \mathcal{D}[A]\rho(t)$
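Equivalently (an illustrative one-liner, not in the original notebook), the dissipative part of the drift is just QuTiP's Lindblad dissipator superoperator for $A$; the full deterministic drift adds the commutator term $-i[H,\cdot]$.
D1_sop = lindblad_dissipator(c_ops[0])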
End of explanation
n_sum = spre(c_ops[0]) + spost(c_ops[0].dag())
n_sum_data = n_sum.data
def d2_rho_func(t, rho_vec):
e1 = cy.cy_expect_rho_vec(n_sum_data, rho_vec, False)
out = np.zeros((1,len(rho_vec)),dtype=complex)
out += cy.spmv(n_sum_data, rho_vec) - e1 * rho_vec
return out
result = general_stochastic(ket2dm(rho0), times, d1=d1_rho_func, d2=d2_rho_func,
e_ops=[spre(op) for op in e_ops], ntraj=ntraj, solver="platen",
m_ops=[spre(a + a.dag())], dW_factors=[1/np.sqrt(gamma)],
nsubsteps=nsubsteps, store_measurement=True, map_func=parallel_map)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.plot(times, m[:, 0].real, 'b', alpha=0.025)
plt.plot(times, result_ref.expect[1], 'k', lw=2);
plt.ylim(-15, 15)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real, 'r', lw=2);
Explanation: $\displaystyle D_{2}[A]\rho(t) = \sqrt{\gamma} \mathcal{H}[a]\rho(t)
= A\rho + \rho A^\dagger - \mathrm{Tr}[A\rho + \rho A^\dagger] \rho
\rightarrow (A_L + A_R^\dagger)\rho_v - \mathrm{Tr}[(A_L + A_R^\dagger)\rho_v] \rho_v$
End of explanation
result = smesolve(H, rho0, times, [], c_ops, e_ops, ntraj=ntraj, nsubsteps=nsubsteps, solver="pc-euler",
method='homodyne', store_measurement=True)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.plot(times, m[:, 0].real / np.sqrt(gamma), 'b', alpha=0.025)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real / np.sqrt(gamma), 'r', lw=2);
plt.plot(times, result_ref.expect[1], 'k', lw=2)
Explanation: Using QuTiP built-in homodyne detection functions for $D_1$ and $D_2$
End of explanation
result = smesolve(H, rho0, times, [], c_ops, e_ops, ntraj=ntraj, nsubsteps=nsubsteps, solver="pc-euler",
method='homodyne', store_measurement=True, noise=result.noise)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.plot(times, m[:, 0].real / np.sqrt(gamma), 'b', alpha=0.025)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real / np.sqrt(gamma), 'r', lw=2);
plt.plot(times, result_ref.expect[1], 'k', lw=2)
Explanation: Solve problem again, this time with a specified noise (from previous run)
End of explanation
e_ops = [a.dag() * a, a + a.dag(), -1j * (a - a.dag())]
result_ref = mesolve(H, rho0, times, c_ops, e_ops)
Explanation: Heterodyne detection
End of explanation
#def d1_rho_func(A, rho_vec):
# return A[7] * rho_vec
L = liouvillian(H, c_ops=c_ops).data
def d1_rho_func(t, rho_vec):
return cy.spmv(L, rho_vec)
Explanation: Stochastic master equation for heterodyne in Milburn's formulation
$\displaystyle d\rho(t) = -i[H, \rho(t)]dt + \gamma\mathcal{D}[a]\rho(t) dt + \frac{1}{\sqrt{2}} dW_1(t) \sqrt{\gamma} \mathcal{H}[a] \rho(t) + \frac{1}{\sqrt{2}} dW_2(t) \sqrt{\gamma} \mathcal{H}[-ia] \rho(t)$
where $\mathcal{D}$ is the standard Lindblad dissipator superoperator, and $\mathcal{H}$ is defined as above,
and the $dW_i(t)$ are independent Wiener increments, normally distributed with zero mean and variance $dt$ (so $E[dW_i(t)^2] = dt$).
In QuTiP format we have:
$\displaystyle d\rho(t) = -i[H, \rho(t)]dt + D_{1}[A]\rho(t) dt + D_{2}^{(1)}[A]\rho(t) dW_1 + D_{2}^{(2)}[A]\rho(t) dW_2$
where $A = \sqrt{\gamma} a$, so we can identify
$\displaystyle D_{1}[A]\rho = \gamma \mathcal{D}[a]\rho = \mathcal{D}[A]\rho$
End of explanation
n_sump = spre(c_ops[0]) + spost(c_ops[0].dag())
n_sump_data = n_sump.data/np.sqrt(2)
n_summ = spre(c_ops[0]) - spost(c_ops[0].dag())
n_summ_data = -1.0j*n_summ.data/np.sqrt(2)
def d2_rho_func(t, rho_vec):
out = np.zeros((2,len(rho_vec)),dtype=complex)
e1 = cy.cy_expect_rho_vec(n_sump_data, rho_vec, False)
out[0,:] += cy.spmv(n_sump_data, rho_vec) - e1 * rho_vec
e1 = cy.cy_expect_rho_vec(n_summ_data, rho_vec, False)
out[1,:] += cy.spmv(n_summ_data, rho_vec) - e1 * rho_vec
return out
#def d2_rho_func(t, rho_vec):
# e1 = cy.cy_expect_rho_vec(n_sum_data, rho_vec, False)
# out = np.zeros((1,len(rho_vec)),dtype=complex)
# out += cy.spmv(n_sum_data, rho_vec) - e1 * rho_vec
# return out
result = general_stochastic(ket2dm(rho0), times, d1=d1_rho_func, d2=d2_rho_func,
e_ops=[spre(op) for op in e_ops], solver="platen", # order=1
ntraj=ntraj, nsubsteps=nsubsteps, len_d2=2,
m_ops=[spre(a + a.dag()), (-1j)*spre(a - a.dag())],
dW_factors=[2/np.sqrt(gamma), 2/np.sqrt(gamma)],
store_measurement=True, map_func=parallel_map)
plot_expectation_values([result, result_ref])
for m in result.measurement:
plt.plot(times, m[:, 0].real, 'r', alpha=0.025)
plt.plot(times, m[:, 1].real, 'b', alpha=0.025)
plt.ylim(-20, 20)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real, 'r', lw=2);
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,1].real, 'b', lw=2);
plt.plot(times, result_ref.expect[1], 'k', lw=2);
plt.plot(times, result_ref.expect[2], 'k', lw=2);
Explanation: $D_{2}^{(1)}[A]\rho = \frac{1}{\sqrt{2}} \sqrt{\gamma} \mathcal{H}[a] \rho =
\frac{1}{\sqrt{2}} \mathcal{H}[A] \rho =
\frac{1}{\sqrt{2}}(A\rho + \rho A^\dagger - \mathrm{Tr}[A\rho + \rho A^\dagger] \rho)
\rightarrow \frac{1}{\sqrt{2}} \left{(A_L + A_R^\dagger)\rho_v - \mathrm{Tr}[(A_L + A_R^\dagger)\rho_v] \rho_v\right}$
$D_{2}^{(2)}[A]\rho = \frac{1}{\sqrt{2}} \sqrt{\gamma} \mathcal{H}[-ia] \rho
= \frac{1}{\sqrt{2}} \mathcal{H}[-iA] \rho =
\frac{-i}{\sqrt{2}}(A\rho - \rho A^\dagger - \mathrm{Tr}[A\rho - \rho A^\dagger] \rho)
\rightarrow \frac{-i}{\sqrt{2}} \left{(A_L - A_R^\dagger)\rho_v - \mathrm{Tr}[(A_L - A_R^\dagger)\rho_v] \rho_v\right}$
End of explanation
result = smesolve(H, rho0, times, [], c_ops, e_ops, ntraj=ntraj, nsubsteps=nsubsteps, solver="milstein", # order=1
method='heterodyne', store_measurement=True)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.plot(times, m[:, 0, 0].real / np.sqrt(gamma), 'r', alpha=0.025)
plt.plot(times, m[:, 0, 1].real / np.sqrt(gamma), 'b', alpha=0.025)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0,0].real / np.sqrt(gamma), 'r', lw=2);
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0,1].real / np.sqrt(gamma), 'b', lw=2);
plt.plot(times, result_ref.expect[1], 'k', lw=2);
plt.plot(times, result_ref.expect[2], 'k', lw=2);
Explanation: Using QuTiP built-in heterodyne detection functions for $D_1$ and $D_2$
End of explanation
result = smesolve(H, rho0, times, [], c_ops, e_ops, ntraj=ntraj, nsubsteps=nsubsteps, solver="milstein", # order=1
method='heterodyne', store_measurement=True, noise=result.noise)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.plot(times, m[:, 0, 0].real / np.sqrt(gamma), 'r', alpha=0.025)
plt.plot(times, m[:, 0, 1].real / np.sqrt(gamma), 'b', alpha=0.025)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0,0].real / np.sqrt(gamma), 'r', lw=2);
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0,1].real / np.sqrt(gamma), 'b', lw=2);
plt.plot(times, result_ref.expect[1], 'k', lw=2);
plt.plot(times, result_ref.expect[2], 'k', lw=2);
plt.axis('tight')
plt.ylim(-25, 25);
Explanation: Solve problem again, this time with a specified noise (from previous run)
End of explanation
from qutip.ipynbtools import version_table
version_table()
Explanation: Software version
End of explanation |
9,852 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: Check Point
What is the dimension of the ascii character vector space?
What is the support of 'pizza'?
Step2: The dot product and the norm
Convince yourself that this dot product here corresponds to dotting vectors.
With numpy arrays, that is numpy.dot.
Step3: An Example
Step4: The angles between vectors
This function finds the cosine_similarity in character-space between two char2vec vectors.
Step5: How far are our examples from one another?
Step6: Totally Optional Exercises
If the cosine_similarity=0, what do you know about the words?
If the cosine_similarity=1, are the words the same?
Can the cosine_similarity be negative?
Is the cosine_similarity symmetric?
In the definition of dot, above, why are these three definitions of the variable domain equivalent?
domain = string.ascii_letters
domain = support(v).union(w)
domain = support(v).intersection(w)
Define a distance by taking the inverse cosine of cosine_similarity.
Show this now actually computes the angle.
Given 3 words, does this notion of distance satisfy the triangle inequality? Modify the code above to show these three words do.
Advanced
Find a sequence of words w_1, w_2, ..., for which the sequence norm(w_1), norm(w_2), ... is unbounded.
Find two sequences of words whose pair-wise cosine_similarity is arbitrarily close to 1. i.e. Find word sequences a=a_1,a_2,... and b=b_1,b_2,... so that a_n and b_n make arbitrarily tiny angles.
This means that our vector space (with this distance) is not topologically discrete.
State a reasonable condition so that the vector space is discrete.
Step7: Forget about sparsity! (a dense implementation)
The above example used a Python Counter, which is a defaultdict(int), to implement sparse vectors. That is critically important if your domain is e.g. the English language. However, in our case the domain is just characters, so the sparsity is unnecessary. Here is a non-sparse, i.e. dense, implementation.
Step8: More exercises
Hey, vector spaces are over a field. Which field is this vector space over?
Pick a unicode encoding. What is its dimension?
Find an alternative basis for ascii character space. Can you think of a situation where it might be more useful than string.ascii_letters? (*)
Implement cosine_similarity_dense. It looks an awful lot like cosine_similarity, doesn't it?
Re-implement cosine_similarity using sklearn.CountVectorizer.
What is an advantage of the sparse implementation?
What is an advantage of the dense implementation? | Python Code:
from collections import Counter
import math
def char2vec(word):
# Counts each of the characters in word.
# We use a dictionary instead of a sparse matrix to describe the characters,
# however the concept is identical.
return Counter(word)
def support(v):
# The support of a vector over a basis is the subset of basis elements with
# non-zero components.
return set(v)
# Note: We could have written this simpler: char2vec = Counter; support = set;
Explanation: Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Character Space - The ascii_letters vector space
Inspired by the char2vec colab.
The character vector space
We begin by defining the character vector space which has the ascii_letters as its basis. Why ascii? It's arbitrary, and having only __ dimensions keeps things simple. We could have taken any of the unicode variations, but depending on the size of the character space we may want to pursue some optional optimizations, which are also discussed below.
There are 2 functions which compute: <br>
1) the vector representation of the word, i.e. the counts of the characters in the word <br>
2) the support: the set of unique characters
End of explanation
import string
vector_space = string.ascii_letters
print(f"1. The dimension of vector_space is {len(vector_space)}, since there is one \n independent vector per character and that's all of them.\n")
print(f'2. The support of pizza is {support(char2vec("pizza"))}.')
Explanation: Check Point
What is the dimension of the ascii character vector space?
What is the support of 'pizza'?
End of explanation
import string
def dot(v, w):
domain = string.ascii_letters
# Note that this computation is equivalent to each of the following
# optimizations. Exercise: Why?
# domain = support(v).union(w)
# domain = support(v).intersection(w)
#
# This domain here bears a passing resemblance to integration doesn't it.
return sum(v[ch] * w[ch] for ch in domain)
def norm(v):
return math.sqrt(dot(v, v))
Explanation: The dot product and the norm
Convince yourself that this dot product here corresponds to dotting vectors.
With numpy arrays, that is numpy.dot.
End of explanation
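# (Added check, not part of the original colab) A small sketch verifying that the sparse,
# Counter-based dot above agrees with numpy.dot on dense vectors laid out over the
# string.ascii_letters basis. The helper name to_dense_demo is illustrative only.
import numpy as np
def to_dense_demo(v):
    # Lay the sparse Counter out over the ascii_letters basis as a dense array.
    return np.array([v[ch] for ch in string.ascii_letters], dtype=float)
_v, _w = char2vec("pizza"), char2vec("piazza")
assert dot(_v, _w) == to_dense_demo(_v).dot(to_dense_demo(_w))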
# Our sample "words"
wordlist = [
'TheQuickBrownFoxJumpsOverTheLazyDog',
'TheQuickWhiteFoxJumpsOverTheLazyDog',
'SupermanJumpsOverTheTallBuilding',
]
# For each of our words, print its vector and some information about it.
for word in wordlist:
print(word)
print(" vector: ", char2vec(word))
print(" support: ", support(char2vec(word)))
print(" norm:", norm(char2vec(word)), end="\n\n")
Explanation: An Example
End of explanation
def cosine_similarity(v, w):
return dot(v, w) / norm(v) / norm(w)
Explanation: The angles between vectors
This function finds the cosine_similarity in character-space between two char2vec vectors.
End of explanation
from itertools import combinations
# Find the cosine similarity between each of the 3 vectors created above
# Similar sentences will have higher scores, ranging from 0-1.
for x, y in combinations(wordlist, 2):
print ("Cosine Similarity between", x, "and", y,"=",
cosine_similarity(char2vec(x),char2vec(y)), end="\n\n")
Explanation: How far are our examples from one another?
End of explanation
def similarity(x,y):
return cosine_similarity(char2vec(x), char2vec(y))
print("similarity of", "able", "elba", ": ", similarity("able", "elba"))
print("similarity of", "gabe", "juno", ": ", similarity("gabe", "juno"))
print("similarity of", "piz...za", "piz...zza", ": ", similarity("pi" + "z"*10**3 + "a", "pi" + "z"*(10**3 +1) + "a"))
Explanation: Totally Optional Exercises
If the cosine_similarity=0, what do you know about the words?
If the cosine_similarity=1, are the words the same?
Can the cosine_similarity be negative?
Is the cosine_similarity symmetric?
In the definition of dot, above, why are these three definitions of the variable domain equivalent?
domain = string.ascii_letters
domain = support(v).union(w)
domain = support(v).intersection(w)
Define a distance by taking the inverse cosine of cosine_similarity.
Show this now actually computes the angle.
Given 3 words, does this notion of distance satisfy the triangle inequality? Modify the code above to show these three words do.
Advanced
Find a sequence of words w_1, w_2, ..., for which the sequence norm(w_1), norm(w_2), ... is unbounded.
Find two sequences of words whose pair-wise cosine_similarity is arbitrarily close to 1. i.e. Find word sequences a=a_1,a_2,... and b=b_1,b_2,... so that a_n and b_n make arbitrarily tiny angles.
This means that our vector space (with this distance) is not topologically discrete.
State a reasonable condition so that the vector space is discrete.
End of explanation
import string
import numpy as np
vector_space = string.ascii_letters
def char2index(ch):
return vector_space.index(ch)
def index2char(index):
return vector_space[index]
def char2vec_dense(word):
out = np.zeros(len(vector_space))
for ch in word:
out[char2index(ch)] += 1
return out
def support_dense(v):
return "".join(index2char(i) for i in v.nonzero()[0])
# Now the dot really is np.dot
def dot_dense(v, w):
return v.dot(w)
def norm_dense(v):
return dot_dense(v, v)**.5
# For each of our words, print its vector and some information about it.
for word in wordlist:
print(word)
print(" vector: ", char2vec_dense(word))
print(" support: ", support_dense(char2vec_dense(word)))
print(" norm:", norm_dense(char2vec_dense(word)), end="\n\n")
Explanation: Forget about sparsity! (a dense implementation)
The above example used a Python Counter, which is a defaultdict(int), to implement sparse vectors. That is critically important if your domain is e.g. the English language. However, in our case the domain is just characters, so the sparsity is unnecessary. Here is a non-sparse, i.e. dense, implementation.
End of explanation
# This function looks an awful lot like cosine_similarity, doesn't it.
def cosine_similarity_dense(v, w):
return dot_dense(v, w) / norm_dense(v) / norm_dense(w)
Explanation: More exercises
Hey, vector spaces are over a field. Which field is this vector space over?
Pick a unicode encoding. What is its dimension?
Find an alternative basis for ascii character space. Can you think of a situation where it might be more useful than string.ascii_letters? (*)
Implement cosine_similarity_dense. It looks an awful lot like cosine_similarity, doesn't it?
Re-implement cosine_similarity using sklearn.CountVectorizer.
What is an advantage of the sparse implementation?
What is an advantage of the dense implementation?
End of explanation |
9,853 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recurrent Neural Networks for Vietnamese Name Entity Recognition
Step1: This is the second part of the Recurrent Neural Network Tutorial. The first part is here.
In this part we will implement a full Recurrent Neural Network from scratch using Python and optimize our implementation using Theano, a library to perform operations on a GPU. The full code is available on Github. I will skip over some boilerplate code that is not essential to understanding Recurrent Neural Networks, but all of that is also on Github.
Language Modeling
Our goal is to build a Language Model using a Recurrent Neural Network. Here's what that means. Let's say we have a sentence of $m$ words. A Language Model allows us to predict the probability of observing the sentence (in a given dataset) as
Step2: Here's an actual training example from our text
Step3: Building the RNN
For a general overview of RNNs take a look at the first part of the tutorial.
Let's get concrete and see what the RNN for our language model looks like. The input $x$ will be a sequence of words (just like the example printed above) and each $x_t$ is a single word. But there's one more thing
Step4: Above, word_dim is the size of our vocabulary, and hidden_dim is the size of our hidden layer (we can pick it). Don't worry about the bptt_truncate parameter for now, we'll explain what that is later.
Forward Propagation
Next, let's implement the forward propagation (predicting word probabilities) defined by our equations above
Step5: We not only return the calculated outputs, but also the hidden states. We will use them later to calculate the gradients, and by returning them here we avoid duplicate computation. Each $o_t$ is a vector of probabilities representing the words in our vocabulary, but sometimes, for example when evaluating our model, all we want is the next word with the highest probability. We call this function predict
Step6: Let's try our newly implemented methods and see an example output
Step7: For each word in the sentence (45 above), our model made 8000 predictions representing probabilities of the next word. Note that because we initialized $U,V,W$ to random values these predictions are completely random right now. The following gives the indices of the highest probability predictions for each word
Step8: Calculating the Loss
To train our network we need a way to measure the errors it makes. We call this the loss function $L$, and our goal is to find the parameters $U,V$ and $W$ that minimize the loss function for our training data. A common choice for the loss function is the cross-entropy loss. If we have $N$ training examples (words in our text) and $C$ classes (the size of our vocabulary) then the loss with respect to our predictions $o$ and the true labels $y$ is given by
Step9: Let's take a step back and think about what the loss should be for random predictions. That will give us a baseline and make sure our implementation is correct. We have $C$ words in our vocabulary, so each word should be (on average) predicted with probability $1/C$, which would yield a loss of $L = -\frac{1}{N} N \log\frac{1}{C} = \log C$
Step10: Pretty close! Keep in mind that evaluating the loss on the full dataset is an expensive operation and can take hours if you have a lot of data!
Training the RNN with SGD and Backpropagation Through Time (BPTT)
Remember that we want to find the parameters $U,V$ and $W$ that minimize the total loss on the training data. The most common way to do this is SGD, Stochastic Gradient Descent. The idea behind SGD is pretty simple. We iterate over all our training examples and during each iteration we nudge the parameters into a direction that reduces the error. These directions are given by the gradients on the loss
Step11: Gradient Checking
Whenever you implement backpropagation it is a good idea to also implement gradient checking, which is a way of verifying that your implementation is correct. The idea behind gradient checking is that the derivative of a parameter is equal to the slope at the point, which we can approximate by slightly changing the parameter and then dividing by the change
Step12: SGD Implementation
Now that we are able to calculate the gradients for our parameters we can implement SGD. I like to do this in two steps
Step13: Done! Let's try to get a sense of how long it would take to train our network
Step14: Uh-oh, bad news. One step of SGD takes approximately 350 milliseconds on my laptop. We have about 80,000 examples in our training data, so one epoch (iteration over the whole data set) would take several hours. Multiple epochs would take days, or even weeks! And we're still working with a small dataset compared to what's being used by many of the companies and researchers out there. What now?
Fortunately there are many ways to speed up our code. We could stick with the same model and make our code run faster, or we could modify our model to be less computationally expensive, or both. Researchers have identified many ways to make models less computationally expensive, for example by using a hierarchical softmax or adding projection layers to avoid the large matrix multiplications (see also here or here). But I want to keep our model simple and go the first route
Step15: Good, it seems like our implementation is at least doing something useful and decreasing the loss, just like we wanted.
Training our Network with Theano and the GPU
I have previously written a tutorial on Theano, and since all our logic will stay exactly the same I won't go through optimized code here again. I defined an RNNTheano class that replaces the numpy calculations with corresponding calculations in Theano. Just like the rest of this post, the code is also available on Github.
Step16: This time, one SGD step takes 70ms on my Mac (without GPU) and 23ms on a g2.2xlarge Amazon EC2 instance with GPU. That's a 15x improvement over our initial implementation and means we can train our model in hours/days instead of weeks. There are still a vast number of optimizations we could make, but we're good enough for now.
To help you avoid spending days training a model I have pre-trained a Theano model with a hidden layer dimensionality of 50 and a vocabulary size of 8000. I trained it for 50 epochs in about 20 hours. The loss was still decreasing and training longer would probably have resulted in a better model, but I was running out of time and wanted to publish this post. Feel free to try it out yourself and train for longer. You can find the model parameters in data/trained-model-theano.npz in the Github repository and load them using the load_model_parameters_theano method
Step17: Generating Text
Now that we have our model we can ask it to generate new text for us! Let's implement a helper function to generate new sentences
Step18: A few selected (censored) sentences. I added capitalization.
Anyway, to the city scene you're an idiot teenager.
What ? ! ! ! ! ignore!
Screw fitness, you're saying | Python Code:
import csv
import itertools
import operator
import numpy as np
import nltk
import sys
from datetime import datetime
from utils import *
import matplotlib.pyplot as plt
%matplotlib inline
# Download NLTK model data (you need to do this once)
nltk.download("book")
Explanation: Recurrent Neural Networks for Vietnamese Name Entity Recognition
End of explanation
vocabulary_size = 8000
unknown_token = "UNKNOWN_TOKEN"
sentence_start_token = "SENTENCE_START"
sentence_end_token = "SENTENCE_END"
# Read the data and append SENTENCE_START and SENTENCE_END tokens
print "Reading CSV file..."
with open('data/reddit-comments-2015-08.csv', 'rb') as f:
reader = csv.reader(f, skipinitialspace=True)
reader.next()
# Split full comments into sentences
sentences = itertools.chain(*[nltk.sent_tokenize(x[0].decode('utf-8').lower()) for x in reader])
# Append SENTENCE_START and SENTENCE_END
sentences = ["%s %s %s" % (sentence_start_token, x, sentence_end_token) for x in sentences]
print "Parsed %d sentences." % (len(sentences))
# Tokenize the sentences into words
tokenized_sentences = [nltk.word_tokenize(sent) for sent in sentences]
# Count the word frequencies
word_freq = nltk.FreqDist(itertools.chain(*tokenized_sentences))
print "Found %d unique words tokens." % len(word_freq.items())
# Get the most common words and build index_to_word and word_to_index vectors
vocab = word_freq.most_common(vocabulary_size-1)
index_to_word = [x[0] for x in vocab]
index_to_word.append(unknown_token)
word_to_index = dict([(w,i) for i,w in enumerate(index_to_word)])
print "Using vocabulary size %d." % vocabulary_size
print "The least frequent word in our vocabulary is '%s' and appeared %d times." % (vocab[-1][0], vocab[-1][1])
# Replace all words not in our vocabulary with the unknown token
for i, sent in enumerate(tokenized_sentences):
tokenized_sentences[i] = [w if w in word_to_index else unknown_token for w in sent]
print "\nExample sentence: '%s'" % sentences[0]
print "\nExample sentence after Pre-processing: '%s'" % tokenized_sentences[0]
# Create the training data
X_train = np.asarray([[word_to_index[w] for w in sent[:-1]] for sent in tokenized_sentences])
y_train = np.asarray([[word_to_index[w] for w in sent[1:]] for sent in tokenized_sentences])
Explanation: This is the second part of the Recurrent Neural Network Tutorial. The first part is here.
In this part we will implement a full Recurrent Neural Network from scratch using Python and optimize our implementation using Theano, a library to perform operations on a GPU. The full code is available on Github. I will skip over some boilerplate code that is not essential to understanding Recurrent Neural Networks, but all of that is also on Github.
Language Modeling
Our goal is to build a Language Model using a Recurrent Neural Network. Here's what that means. Let's say we have a sentence of $m$ words. A Language Model allows us to predict the probability of observing the sentence (in a given dataset) as:
$
\begin{aligned}
P(w_1,...,w_m) = \prod_{i=1}^{m}P(w_i \mid w_1,..., w_{i-1})
\end{aligned}
$
In words, the probability of a sentence is the product of the probabilities of each word given the words that came before it (a tiny numeric sketch of this factorization follows at the end of this explanation). So, the probability of the sentence "He went to buy some chocolate" would be the probability of "chocolate" given "He went to buy some", multiplied by the probability of "some" given "He went to buy", and so on.
Why is that useful? Why would we want to assign a probability to observing a sentence?
First, such a model can be used as a scoring mechanism. For example, a Machine Translation system typically generates multiple candidates for an input sentence. You could use a language model to pick the most probable sentence. Intuitively, the most probable sentence is likely to be grammatically correct. Similar scoring happens in speech recognition systems.
But solving the Language Modeling problem also has a cool side effect. Because we can predict the probability of a word given the preceding words, we are able to generate new text. It's a generative model. Given an existing sequence of words we sample a next word from the predicted probabilities, and repeat the process until we have a full sentence. Andrej Karpathy has a great post that demonstrates what language models are capable of. His models are trained on single characters as opposed to full words, and can generate anything from Shakespeare to Linux Code.
Note that in the above equation the probability of each word is conditioned on all previous words. In practice, many models have a hard time representing such long-term dependencies due to computational or memory constraints. They are typically limited to looking at only a few of the previous words. RNNs can, in theory, capture such long-term dependencies, but in practice it's a bit more complex. We'll explore that in a later post.
Training Data and Preprocessing
To train our language model we need text to learn from. Fortunately we don't need any labels to train a language model, just raw text. I downloaded 15,000 longish reddit comments from a dataset available on Google's BigQuery. Text generated by our model will sound like reddit commenters (hopefully)! But as with most Machine Learning projects we first need to do some pre-processing to get our data into the right format.
1. Tokenize Text
We have raw text, but we want to make predictions on a per-word basis. This means we must tokenize our comments into sentences, and sentences into words. We could just split each of the comments by spaces, but that wouldn't handle punctuation properly. The sentence "He left!" should be 3 tokens: "He", "left", "!". We'll use NLTK's word_tokenize and sent_tokenize methods, which do most of the hard work for us.
2. Remove infrequent words
Most words in our text will only appear one or two times. It's a good idea to remove these infrequent words. Having a huge vocabulary will make our model slow to train (we'll talk about why that is later), and because we don't have a lot of contextual examples for such words we wouldn't be able to learn how to use them correctly anyway. That's quite similar to how humans learn. To really understand how to appropriately use a word you need to have seen it in different contexts.
In our code we limit our vocabulary to the vocabulary_size most common words (which I set to 8000, but feel free to change it). We replace all words not included in our vocabulary by UNKNOWN_TOKEN. For example, if we don't include the word "nonlinearities" in our vocabulary, the sentence "nonlinearities are important in neural networks" becomes "UNKNOWN_TOKEN are important in neural networks". The word UNKNOWN_TOKEN will become part of our vocabulary and we will predict it just like any other word. When we generate new text we can replace UNKNOWN_TOKEN again, for example by taking a randomly sampled word not in our vocabulary, or we could just generate sentences until we get one that doesn't contain an unknown token.
3. Prepend special start and end tokens
We also want to learn which words tend to start and end a sentence. To do this we prepend a special SENTENCE_START token, and append a special SENTENCE_END token to each sentence. This allows us to ask: Given that the first token is SENTENCE_START, what is the likely next word (the actual first word of the sentence)?
4. Build training data matrices
The input to our Recurrent Neural Networks are vectors, not strings. So we create a mapping between words and indices, index_to_word, and word_to_index. For example, the word "friendly" may be at index 2001. A training example $x$ may look like [0, 179, 341, 416], where 0 corresponds to SENTENCE_START. The corresponding label $y$ would be [179, 341, 416, 1]. Remember that our goal is to predict the next word, so y is just the x vector shifted by one position with the last element being the SENTENCE_END token. In other words, the correct prediction for word 179 above would be 341, the actual next word.
End of explanation
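# (Added sketch, not part of the original post) A tiny numeric illustration of the
# language-model factorization above: a sentence's probability is the product of the
# per-word conditional probabilities P(w_i | w_1,...,w_{i-1}). The values are made up.
word_probs = [0.1, 0.05, 0.2, 0.3]
sentence_prob = np.prod(word_probs)              # P(w_1,...,w_m)
sentence_log_prob = np.sum(np.log(word_probs))   # log space avoids numerical underflow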
# Print a training data example
x_example, y_example = X_train[17], y_train[17]
print "x:\n%s\n%s" % (" ".join([index_to_word[x] for x in x_example]), x_example)
print "\ny:\n%s\n%s" % (" ".join([index_to_word[x] for x in y_example]), y_example)
Explanation: Here's an actual training example from our text:
End of explanation
class RNNNumpy:
def __init__(self, word_dim, hidden_dim=100, bptt_truncate=4):
# Assign instance variables
self.word_dim = word_dim
self.hidden_dim = hidden_dim
self.bptt_truncate = bptt_truncate
# Randomly initialize the network parameters
self.U = np.random.uniform(-np.sqrt(1./word_dim), np.sqrt(1./word_dim), (hidden_dim, word_dim))
self.V = np.random.uniform(-np.sqrt(1./hidden_dim), np.sqrt(1./hidden_dim), (word_dim, hidden_dim))
self.W = np.random.uniform(-np.sqrt(1./hidden_dim), np.sqrt(1./hidden_dim), (hidden_dim, hidden_dim))
Explanation: Building the RNN
For a general overview of RNNs take a look at the first part of the tutorial.
Let's get concrete and see what the RNN for our language model looks like. The input $x$ will be a sequence of words (just like the example printed above) and each $x_t$ is a single word. But there's one more thing: Because of how matrix multiplication works we can't simply use a word index (like 36) as an input. Instead, we represent each word as a one-hot vector of size vocabulary_size. For example, the word with index 36 would be the vector of all 0's and a 1 at position 36. So, each $x_t$ will become a vector, and $x$ will be a matrix, with each row representing a word. We'll perform this transformation in our Neural Network code instead of doing it in the pre-processing. The output of our network $o$ has a similar format. Each $o_t$ is a vector of vocabulary_size elements, and each element represents the probability of that word being the next word in the sentence.
Let's recap the equations for the RNN from the first part of the tutorial:
$
\begin{aligned}
s_t &= \tanh(Ux_t + Ws_{t-1}) \\
o_t &= \mathrm{softmax}(Vs_t)
\end{aligned}
$
I always find it useful to write down the dimensions of the matrices and vectors. Let's assume we pick a vocabulary size $C = 8000$ and a hidden layer size $H = 100$. You can think of the hidden layer size as the "memory" of our network. Making it bigger allows us to learn more complex patterns, but also results in additional computation. Then we have:
$
\begin{aligned}
x_t & \in \mathbb{R}^{8000} \\
o_t & \in \mathbb{R}^{8000} \\
s_t & \in \mathbb{R}^{100} \\
U & \in \mathbb{R}^{100 \times 8000} \\
V & \in \mathbb{R}^{8000 \times 100} \\
W & \in \mathbb{R}^{100 \times 100} \\
\end{aligned}
$
This is valuable information. Remember that $U,V$ and $W$ are the parameters of our network we want to learn from data. Thus, we need to learn a total of $2HC + H^2$ parameters. In the case of $C=8000$ and $H=100$ that's 1,610,000. The dimensions also tell us the bottleneck of our model. Note that because $x_t$ is a one-hot vector, multiplying it with $U$ is essentially the same as selecting a column of U, so we don't need to perform the full multiplication. Then, the biggest matrix multiplication in our network is $Vs_t$. That's why we want to keep our vocabulary size small if possible.
Armed with this, it's time to start our implementation.
Initialization
We start by declaring an RNN class and initializing our parameters. I'm calling this class RNNNumpy because we will implement a Theano version later. Initializing the parameters $U,V$ and $W$ is a bit tricky. We can't just initialize them to 0's because that would result in symmetric calculations in all our layers. We must initialize them randomly. Because proper initialization seems to have an impact on training results there has been a lot of research in this area. It turns out that the best initialization depends on the activation function ($\tanh$ in our case) and one recommended approach is to initialize the weights randomly in the interval $\left[-\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}\right]$ where $n$ is the number of incoming connections from the previous layer. This may sound overly complicated, but don't worry too much about it. As long as you initialize your parameters to small random values it typically works out fine.
End of explanation
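# (Added check, not part of the original post) Quick verification of two claims above:
# the parameter count 2*H*C + H^2, and that multiplying U by a one-hot vector simply
# selects a column of U. The variable names below are illustrative only.
C, H = 8000, 100
assert 2 * H * C + H * H == 1610000
U_demo = np.random.uniform(-1, 1, (H, C))
one_hot = np.zeros(C)
one_hot[36] = 1
assert np.allclose(U_demo.dot(one_hot), U_demo[:, 36])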
def forward_propagation(self, x):
# The total number of time steps
T = len(x)
# During forward propagation we save all hidden states in s because we need them later.
# We add one additional element for the initial hidden, which we set to 0
s = np.zeros((T + 1, self.hidden_dim))
s[-1] = np.zeros(self.hidden_dim)
# The outputs at each time step. Again, we save them for later.
o = np.zeros((T, self.word_dim))
# For each time step...
for t in np.arange(T):
# Note that we are indexing U by x[t]. This is the same as multiplying U with a one-hot vector.
s[t] = np.tanh(self.U[:,x[t]] + self.W.dot(s[t-1]))
o[t] = softmax(self.V.dot(s[t]))
return [o, s]
RNNNumpy.forward_propagation = forward_propagation
Explanation: Above, word_dim is the size of our vocabulary, and hidden_dim is the size of our hidden layer (we can pick it). Don't worry about the bptt_truncate parameter for now, we'll explain what that is later.
Forward Propagation
Next, let's implement the forward propagation (predicting word probabilities) defined by our equations above:
End of explanation
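# (Added note, not part of the original post) The softmax used above is not defined in
# this notebook; it comes in via `from utils import *`. A minimal reference
# implementation for a 1-D input would look like this (kept commented out so the utils
# version is not shadowed):
# def softmax(x):
#     e = np.exp(x - np.max(x))  # subtract the max for numerical stability
#     return e / e.sum()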
def predict(self, x):
# Perform forward propagation and return index of the highest score
o, s = self.forward_propagation(x)
return np.argmax(o, axis=1)
RNNNumpy.predict = predict
Explanation: We not only return the calculated outputs, but also the hidden states. We will use them later to calculate the gradients, and by returning them here we avoid duplicate computation. Each $o_t$ is a vector of probabilities representing the words in our vocabulary, but sometimes, for example when evaluating our model, all we want is the next word with the highest probability. We call this function predict:
End of explanation
np.random.seed(10)
model = RNNNumpy(vocabulary_size)
o, s = model.forward_propagation(X_train[10])
print o.shape
print o
Explanation: Let's try our newly implemented methods and see an example output:
End of explanation
predictions = model.predict(X_train[10])
print predictions.shape
print predictions
Explanation: For each word in the sentence (45 above), our model made 8000 predictions representing probabilities of the next word. Note that because we initialized $U,V,W$ to random values these predictions are completely random right now. The following gives the indices of the highest probability predictions for each word:
End of explanation
def calculate_total_loss(self, x, y):
L = 0
# For each sentence...
for i in np.arange(len(y)):
o, s = self.forward_propagation(x[i])
# We only care about our prediction of the "correct" words
correct_word_predictions = o[np.arange(len(y[i])), y[i]]
# Add to the loss based on how off we were
L += -1 * np.sum(np.log(correct_word_predictions))
return L
def calculate_loss(self, x, y):
# Divide the total loss by the number of training examples
N = np.sum((len(y_i) for y_i in y))
return self.calculate_total_loss(x,y)/N
RNNNumpy.calculate_total_loss = calculate_total_loss
RNNNumpy.calculate_loss = calculate_loss
Explanation: Calculating the Loss
To train our network we need a way to measure the errors it makes. We call this the loss function $L$, and our goal is to find the parameters $U,V$ and $W$ that minimize the loss function for our training data. A common choice for the loss function is the cross-entropy loss. If we have $N$ training examples (words in our text) and $C$ classes (the size of our vocabulary) then the loss with respect to our predictions $o$ and the true labels $y$ is given by:
$
\begin{aligned}
L(y,o) = - \frac{1}{N} \sum_{n \in N} y_{n} \log o_{n}
\end{aligned}
$
The formula looks a bit complicated, but all it really does is sum over our training examples and add to the loss based on how off our predictions are. The further apart $y$ (the correct words) and $o$ (our predictions) are, the greater the loss will be. We implement the function calculate_loss:
End of explanation
# Limit to 1000 examples to save time
print "Expected Loss for random predictions: %f" % np.log(vocabulary_size)
print "Actual loss: %f" % model.calculate_loss(X_train[:1000], y_train[:1000])
Explanation: Let's take a step back and think about what the loss should be for random predictions. That will give us a baseline and make sure our implementation is correct. We have $C$ words in our vocabulary, so each word should be (on average) predicted with probability $1/C$, which would yield a loss of $L = -\frac{1}{N} N \log\frac{1}{C} = \log C$:
End of explanation
def bptt(self, x, y):
T = len(y)
# Perform forward propagation
o, s = self.forward_propagation(x)
# We accumulate the gradients in these variables
dLdU = np.zeros(self.U.shape)
dLdV = np.zeros(self.V.shape)
dLdW = np.zeros(self.W.shape)
delta_o = o
delta_o[np.arange(len(y)), y] -= 1.
# For each output backwards...
for t in np.arange(T)[::-1]:
dLdV += np.outer(delta_o[t], s[t].T)
# Initial delta calculation
delta_t = self.V.T.dot(delta_o[t]) * (1 - (s[t] ** 2))
# Backpropagation through time (for at most self.bptt_truncate steps)
for bptt_step in np.arange(max(0, t-self.bptt_truncate), t+1)[::-1]:
# print "Backpropagation step t=%d bptt step=%d " % (t, bptt_step)
dLdW += np.outer(delta_t, s[bptt_step-1])
dLdU[:,x[bptt_step]] += delta_t
# Update delta for next step
delta_t = self.W.T.dot(delta_t) * (1 - s[bptt_step-1] ** 2)
return [dLdU, dLdV, dLdW]
RNNNumpy.bptt = bptt
Explanation: Pretty close! Keep in mind that evaluating the loss on the full dataset is an expensive operation and can take hours if you have a lot of data!
Training the RNN with SGD and Backpropagation Through Time (BPTT)
Remember that we want to find the parameters $U,V$ and $W$ that minimize the total loss on the training data. The most common way to do this is SGD, Stochastic Gradient Descent. The idea behind SGD is pretty simple. We iterate over all our training examples and during each iteration we nudge the parameters into a direction that reduces the error. These directions are given by the gradients on the loss: $\frac{\partial L}{\partial U}, \frac{\partial L}{\partial V}, \frac{\partial L}{\partial W}$. SGD also needs a learning rate, which defines how big of a step we want to make in each iteration. SGD is the most popular optimization method not only for Neural Networks, but also for many other Machine Learning algorithms. As such there has been a lot of research on how to optimize SGD using batching, parallelism and adaptive learning rates. Even though the basic idea is simple, implementing SGD in a really efficient way can become very complex. If you want to learn more about SGD this is a good place to start. Due to its popularity there are a wealth of tutorials floating around the web, and I don't want to duplicate them here. I'll implement a simple version of SGD that should be understandable even without a background in optimization.
But how do we calculate those gradients we mentioned above? In a traditional Neural Network we do this through the backpropagation algorithm. In RNNs we use a slightly modified version of this algorithm called Backpropagation Through Time (BPTT). Because the parameters are shared by all time steps in the network, the gradient at each output depends not only on the calculations of the current time step, but also the previous time steps. If you know calculus, it really is just applying the chain rule. The next part of the tutorial will be all about BPTT, so I won't go into detailed derivation here. For a general introduction to backpropagation check out this and this post. For now you can treat BPTT as a black box. It takes as input a training example $(x,y)$ and returns the gradients $\frac{\partial L}{\partial U}, \frac{\partial L}{\partial V}, \frac{\partial L}{\partial W}$.
End of explanation
def gradient_check(self, x, y, h=0.001, error_threshold=0.01):
# Calculate the gradients using backpropagation. We want to check whether these are correct.
bptt_gradients = model.bptt(x, y)
# List of all parameters we want to check.
model_parameters = ['U', 'V', 'W']
# Gradient check for each parameter
for pidx, pname in enumerate(model_parameters):
# Get the actual parameter value from the mode, e.g. model.W
parameter = operator.attrgetter(pname)(self)
print "Performing gradient check for parameter %s with size %d." % (pname, np.prod(parameter.shape))
# Iterate over each element of the parameter matrix, e.g. (0,0), (0,1), ...
it = np.nditer(parameter, flags=['multi_index'], op_flags=['readwrite'])
while not it.finished:
ix = it.multi_index
# Save the original value so we can reset it later
original_value = parameter[ix]
# Estimate the gradient using (f(x+h) - f(x-h))/(2*h)
parameter[ix] = original_value + h
gradplus = model.calculate_total_loss([x],[y])
parameter[ix] = original_value - h
gradminus = model.calculate_total_loss([x],[y])
estimated_gradient = (gradplus - gradminus)/(2*h)
# Reset parameter to original value
parameter[ix] = original_value
# The gradient for this parameter calculated using backpropagation
backprop_gradient = bptt_gradients[pidx][ix]
# Calculate the relative error: (|x - y|/(|x| + |y|))
relative_error = np.abs(backprop_gradient - estimated_gradient)/(np.abs(backprop_gradient) + np.abs(estimated_gradient))
# If the error is too large, fail the gradient check
if relative_error > error_threshold:
print "Gradient Check ERROR: parameter=%s ix=%s" % (pname, ix)
print "+h Loss: %f" % gradplus
print "-h Loss: %f" % gradminus
print "Estimated_gradient: %f" % estimated_gradient
print "Backpropagation gradient: %f" % backprop_gradient
print "Relative Error: %f" % relative_error
return
it.iternext()
print "Gradient check for parameter %s passed." % (pname)
RNNNumpy.gradient_check = gradient_check
# To avoid performing millions of expensive calculations we use a smaller vocabulary size for checking.
grad_check_vocab_size = 100
np.random.seed(10)
model = RNNNumpy(grad_check_vocab_size, 10, bptt_truncate=1000)
model.gradient_check([0,1,2,3], [1,2,3,4])
Explanation: Gradient Checking
Whenever you implement backpropagation it is a good idea to also implement gradient checking, which is a way of verifying that your implementation is correct. The idea behind gradient checking is that the derivative of a parameter is equal to the slope at the point, which we can approximate by slightly changing the parameter and then dividing by the change:
$
\begin{aligned}
\frac{\partial L}{\partial \theta} \approx \lim_{h \to 0} \frac{J(\theta + h) - J(\theta -h)}{2h}
\end{aligned}
$
We then compare the gradient we calculated using backpropagation to the gradient we estimated with the method above. If there's no large difference we are good. The approximation needs to calculate the total loss for every parameter, so gradient checking is very expensive (remember, we had more than a million parameters in the example above). So it's a good idea to perform it on a model with a smaller vocabulary.
End of explanation
# Performs one step of SGD.
def numpy_sdg_step(self, x, y, learning_rate):
# Calculate the gradients
dLdU, dLdV, dLdW = self.bptt(x, y)
# Change parameters according to gradients and learning rate
self.U -= learning_rate * dLdU
self.V -= learning_rate * dLdV
self.W -= learning_rate * dLdW
RNNNumpy.sgd_step = numpy_sdg_step
# Outer SGD Loop
# - model: The RNN model instance
# - X_train: The training data set
# - y_train: The training data labels
# - learning_rate: Initial learning rate for SGD
# - nepoch: Number of times to iterate through the complete dataset
# - evaluate_loss_after: Evaluate the loss after this many epochs
def train_with_sgd(model, X_train, y_train, learning_rate=0.005, nepoch=100, evaluate_loss_after=5):
# We keep track of the losses so we can plot them later
losses = []
num_examples_seen = 0
for epoch in range(nepoch):
# Optionally evaluate the loss
if (epoch % evaluate_loss_after == 0):
loss = model.calculate_loss(X_train, y_train)
losses.append((num_examples_seen, loss))
time = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
print "%s: Loss after num_examples_seen=%d epoch=%d: %f" % (time, num_examples_seen, epoch, loss)
# Adjust the learning rate if loss increases
if (len(losses) > 1 and losses[-1][1] > losses[-2][1]):
learning_rate = learning_rate * 0.5
print "Setting learning rate to %f" % learning_rate
sys.stdout.flush()
# For each training example...
for i in range(len(y_train)):
# One SGD step
model.sgd_step(X_train[i], y_train[i], learning_rate)
num_examples_seen += 1
Explanation: SGD Implementation
Now that we are able to calculate the gradients for our parameters we can implement SGD. I like to do this in two steps: 1. A function sgd_step that calculates the gradients and performs the updates for one batch. 2. An outer loop that iterates through the training set and adjusts the learning rate.
End of explanation
np.random.seed(10)
model = RNNNumpy(vocabulary_size)
%timeit model.sgd_step(X_train[10], y_train[10], 0.005)
Explanation: Done! Let's try to get a sense of how long it would take to train our network:
End of explanation
np.random.seed(10)
# Train on a small subset of the data to see what happens
model = RNNNumpy(vocabulary_size)
losses = train_with_sgd(model, X_train[:100], y_train[:100], nepoch=10, evaluate_loss_after=1)
Explanation: Uh-oh, bad news. One step of SGD takes approximately 350 milliseconds on my laptop. We have about 80,000 examples in our training data, so one epoch (iteration over the whole data set) would take several hours. Multiple epochs would take days, or even weeks! And we're still working with a small dataset compared to what's being used by many of the companies and researchers out there. What now?
Fortunately there are many ways to speed up our code. We could stick with the same model and make our code run faster, or we could modify our model to be less computationally expensive, or both. Researchers have identified many ways to make models less computationally expensive, for example by using a hierarchical softmax or adding projection layers to avoid the large matrix multiplications (see also here or here). But I want to keep our model simple and go the first route: Make our implementation run faster using a GPU. Before doing that though, let's just try to run SGD with a small dataset and check if the loss actually decreases:
End of explanation
from rnn_theano import RNNTheano, gradient_check_theano
np.random.seed(10)
# To avoid performing millions of expensive calculations we use a smaller vocabulary size for checking.
grad_check_vocab_size = 5
model = RNNTheano(grad_check_vocab_size, 10)
gradient_check_theano(model, [0,1,2,3], [1,2,3,4])
np.random.seed(10)
model = RNNTheano(vocabulary_size)
%timeit model.sgd_step(X_train[10], y_train[10], 0.005)
Explanation: Good, it seems like our implementation is at least doing something useful and decreasing the loss, just like we wanted.
Training our Network with Theano and the GPU
I have previously written a tutorial on Theano, and since all our logic will stay exactly the same I won't go through optimized code here again. I defined an RNNTheano class that replaces the numpy calculations with corresponding calculations in Theano. Just like the rest of this post, the code is also available on Github.
End of explanation
from utils import load_model_parameters_theano, save_model_parameters_theano
model = RNNTheano(vocabulary_size, hidden_dim=50)
# losses = train_with_sgd(model, X_train, y_train, nepoch=50)
# save_model_parameters_theano('./data/trained-model-theano.npz', model)
load_model_parameters_theano('./data/trained-model-theano.npz', model)
Explanation: This time, one SGD step takes 70ms on my Mac (without GPU) and 23ms on a g2.2xlarge Amazon EC2 instance with GPU. That's a 15x improvement over our initial implementation and means we can train our model in hours/days instead of weeks. There are still a vast number of optimizations we could make, but we're good enough for now.
To help you avoid spending days training a model I have pre-trained a Theano model with a hidden layer dimensionality of 50 and a vocabulary size of 8000. I trained it for 50 epochs in about 20 hours. The loss was still decreasing and training longer would probably have resulted in a better model, but I was running out of time and wanted to publish this post. Feel free to try it out yourself and train for longer. You can find the model parameters in data/trained-model-theano.npz in the Github repository and load them using the load_model_parameters_theano method:
End of explanation
def generate_sentence(model):
# We start the sentence with the start token
new_sentence = [word_to_index[sentence_start_token]]
# Repeat until we get an end token
while not new_sentence[-1] == word_to_index[sentence_end_token]:
next_word_probs = model.forward_propagation(new_sentence)
sampled_word = word_to_index[unknown_token]
# We don't want to sample unknown words
while sampled_word == word_to_index[unknown_token]:
samples = np.random.multinomial(1, next_word_probs[-1])
sampled_word = np.argmax(samples)
new_sentence.append(sampled_word)
sentence_str = [index_to_word[x] for x in new_sentence[1:-1]]
return sentence_str
num_sentences = 10
senten_min_length = 7
for i in range(num_sentences):
sent = []
# We want long sentences, not sentences with one or two words
while len(sent) < senten_min_length:
sent = generate_sentence(model)
print " ".join(sent)
Explanation: Generating Text
Now that we have our model we can ask it to generate new text for us! Let's implement a helper function to generate new sentences:
End of explanation
%run train.py
%run NER.py
%run train.py
Explanation: A few selected (censored) sentences. I added capitalization.
Anyway, to the city scene you're an idiot teenager.
What ? ! ! ! ! ignore!
Screw fitness, you're saying: https
Thanks for the advice to keep my thoughts around girls.
Yep, please disappear with the terrible generation.
Looking at the generated sentences there are a few interesting things to note. The model successfully learns syntax. It properly places commas (usually before and's and or's) and ends sentences with punctuation. Sometimes it mimics internet speech such as multiple exclamation marks or smileys.
However, the vast majority of generated sentences don't make sense or have grammatical errors. One reason could be that we did not train our network long enough (or didn't use enough training data). That may be true, but it's most likely not the main reason. Our vanilla RNN can't generate meaningful text because it's unable to learn dependencies between words that are several steps apart. That's also why RNNs failed to gain popularity when they were first invented. They were beautiful in theory but didn't work well in practice, and we didn't immediately understand why.
Fortunately, the difficulties in training RNNs are much better understood now. In the next part of this tutorial we will explore the Backpropagation Through Time (BPTT) algorithm in more detail and demonstrate what's called the vanishing gradient problem. This will motivate our move to more sophisticated RNN models, such as LSTMs, which are the current state of the art for many tasks in NLP (and can generate much better reddit comments!). Everything you learned in this tutorial also applies to LSTMs and other RNN models, so don't feel discouraged if the results for a vanilla RNN are worse than you expected.
End of explanation |
9,854 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading the output from sinsin2dtex.cu
We go from a flattened std
Step1: Quick aside on Wireframe plots in matplotlib
cf. mplot3d tutorial, matplotlib
Step2: EY | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import csv
ld = [ 1., 1.]
WIDTH = 640
HEIGHT = 640
print WIDTH*HEIGHT
hd = [ld[0]/(float(WIDTH)), ld[1]/(float(HEIGHT)) ]
with open('sinsin2dtex_result.csv','r') as csvfile_result:
plot_results = csv.reader(csvfile_result, delimiter=',')
result_list = list( list(rec) for rec in plot_results )
with open('sinsin2dtex_ogref.csv','r') as csvfile_ogref:
plot_ogref = csv.reader(csvfile_ogref, delimiter=',')
ogref_list = list( list(rec) for rec in plot_ogref )
csvfile_result.close()
csvfile_ogref.close()
result_list = [[float(ele) for ele in row] for row in result_list]
ogref_list = [[float(ele) for ele in row] for row in ogref_list]
# sanity check
print len(result_list); print len(result_list[0]);
print result_list[ len(result_list)/4][ len(result_list[0])/4 : len(result_list[0])/4+22];
print len(ogref_list); print len(ogref_list[0]);
print ogref_list[ len(ogref_list)/4][ len(ogref_list[0])/4 : len(ogref_list[0])/4+22]
Explanation: Reading the output from sinsin2dtex.cu
We go from a flattened std::vector (C++, representing 2-dimensional data) to a .csv file.
End of explanation
from mpl_toolkits.mplot3d import axes3d
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111,projection='3d')
X, Y, Z = axes3d.get_test_data(0.05)
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)
plt.show()
fig
print type(X), type(Y), type(Z); print len(X), len(Y), len(Z); print X.shape, Y.shape, Z.shape;
X
Y
Z
X[0][0:10]
Explanation: Quick aside on Wireframe plots in matplotlib
cf. mplot3d tutorial, matplotlib
End of explanation
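# (Added aside, not part of the original notebook) np.meshgrid builds the same coordinate
# grids as the nested list comprehensions in the next cell, and is typically much faster:
# X_mesh[j, i] == i*hd[0] and Y_mesh[j, i] == j*hd[1], matching X_sinsin/Y_sinsin below.
X_mesh, Y_mesh = np.meshgrid(np.arange(WIDTH) * hd[0], np.arange(HEIGHT) * hd[1])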
X_sinsin = np.array( [[i*hd[0] for i in range(WIDTH)] for j in range(HEIGHT)] )
Y_sinsin = np.array( [[j*hd[1] for i in range(WIDTH)] for j in range(HEIGHT)] )
Z_sinsinresult = np.array( [[result_list[i][j] for i in range(WIDTH)] for j in range(HEIGHT)] )
Z_sinsinogref = np.array( [[ogref_list[i][j] for i in range(WIDTH)] for j in range(HEIGHT)] )
fig02 = plt.figure()
ax02 = fig02.add_subplot(111,projection='3d')
ax02.plot_wireframe(X_sinsin, Y_sinsin, Z_sinsinresult )
plt.show()
fig02
fig03 = plt.figure()
ax03 = fig03.add_subplot(111,projection='3d')
ax03.plot_wireframe(X_sinsin, Y_sinsin, Z_sinsinogref )
plt.show()
fig03
Explanation: EY: As far as I can surmise or infer, the 2-dimensional Python arrays for X, Y, Z of the wireframe plot work like this: imagine a 2-dimensional grid; on top of each grid point is the x-coordinate, then the y-coordinate, and then the z-coordinate. Thus you have 2-dimensional arrays for each.
Making X,Y,Z axes for mplot3d from the .csv files
End of explanation |
9,855 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Maze Solver
In this notebook, we write a maze solver by solving the Poisson equation with two Dirichlet boundary conditions imposed on the two faces that correspond to the start and end of the maze, respectively.
The logic is pretty simple
Step1: Load maze samples
Step2: Approach A
Step3: Convert the maze into a Cubic network
Step4: Solve the Poisson equation ($\nabla^2 \phi = 0$) on the maze
Step5: Follow the gradient!
Step6: Approach B
Step7: Solve the Poisson equation ($\nabla^2 \phi = 0$) on the extracted network
Step8: Follow the gradient! | Python Code:
# Install the required pmeal packages in the current Jupyter kernel
import sys
try:
import openpnm as op
except:
!{sys.executable} -m pip install openpnm
import openpnm as op
try:
import porespy as ps
except:
!{sys.executable} -m pip install porespy
import porespy as ps
import requests
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
import porespy as ps
import openpnm as op
from openpnm.utils import tic, toc
from PIL import Image
from io import BytesIO
%config InlineBackend.figure_formats = ['svg']
ws = op.Workspace()
ws.settings["loglevel"] = 60
Explanation: Maze Solver
In this notebook, we write a maze solver by solving the Poisson equation with two Dirichlet boundary conditions imposed on the two faces that correspond to the start and end of the maze, respectively.
The logic is pretty simple: once we have the solution, we just need to start off from one face and follow the gradient. Since the gradient in the dead ends is essentially zero, following the nonzero gradient should guide us toward the other side of the maze (a generic sketch of such a walk follows this explanation).
We implement two different approaches:
Direct numerical simulation
Here, we first convert the image into a Cubic network, trim the pores that correspond to the walls, and finally run a basic OhmicConduction (or FickianDiffusion) on the resulting trimmed network.
Network extraction
Here, we first use the SNOW algorithm to extract the equivalent network of the maze. Note that the nodes in the equivalent network will not exactly give us the corners of the maze, but at least it gives us a rough idea, enough for solving the maze! Then, like the first approach, we run a basic OhmicConduction on the extracted network. The advantage of this approach is that it's way faster due to much fewer unknowns.
Note: Inspired by this post by Jeremy Theler https://www.linkedin.com/posts/jeremytheler_how-to-solve-a-maze-without-ai-use-laplaces-activity-6831291311832760320-x9d5
End of explanation
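# (Added sketch, not part of the original notebook) A generic, OpenPNM-independent
# illustration of "follow the gradient": greedily walk toward lower potential on a 2D
# field, treating NaN entries as walls. The function below is a hypothetical helper for
# illustration only; it could be applied to a pixel potential field with 0/1 Dirichlet faces.
def greedy_descent(phi, start, max_steps=100000):
    path = [start]
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(max_steps):
        r, c = path[-1]
        neighbors = [(r + dr, c + dc) for dr, dc in moves
                     if 0 <= r + dr < phi.shape[0] and 0 <= c + dc < phi.shape[1]
                     and not np.isnan(phi[r + dr, c + dc])]
        if not neighbors:
            break
        nxt = min(neighbors, key=lambda rc: phi[rc])
        if phi[nxt] >= phi[r, c]:
            break  # reached the low-potential face (or a local minimum)
        path.append(nxt)
    return path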
im_size = 'medium'
if im_size == 'small':
url = 'https://imgur.com/ZLbV4eh.png'
elif im_size == 'medium':
url = 'https://imgur.com/A3Jx8SJ.png'
else:
url = 'https://imgur.com/FLJ21e5.png'
response = requests.get(url)
img = Image.open(BytesIO(response.content))
im = np.array(img.getdata()).reshape(img.size[0], img.size[1], 4)[:, :, 0]
im = im == 255
Nx, Ny, = im.shape
fig, ax = plt.subplots(figsize=(5, 5))
ax.imshow(im, cmap='Blues', interpolation="none")
ax.axis("off");
Explanation: Load maze samples
End of explanation
# Structuring element for thickening walls
strel = np.array([[1, 1, 1],
[1, 1, 1],
[1, 1, 1]])
# Save some computation by thickening the walls
def thicken_wall(im):
return ~ndimage.morphology.binary_dilation(~im, structure=strel)
for _ in range(5):
im = thicken_wall(im)
fig, ax = plt.subplots(figsize=(5, 5))
ax.imshow(im, cmap='Blues', interpolation="none")
ax.axis("off")
Explanation: Approach A: Direct numerical simulation
Thicken the walls to reduce number of unknowns
End of explanation
# Get top and bottom boundaries
BP_top = np.zeros_like(im)
BP_bot = np.zeros_like(im)
BP_top[0, :] = True
BP_bot[-1, :] = True
BP_top *= im
BP_bot *= im
# Make a cubic network with the same dimensions as the image and assign the props
net = op.network.Cubic(shape=[Nx, Ny, 1])
net['pore.index'] = np.arange(0, net.Np)
net['pore.BP_top'] = BP_top.flatten()
net['pore.BP_bot'] = BP_bot.flatten()
# Trim wall pores
op.topotools.trim(network=net, pores=~im.flatten())
Explanation: Convert the maze into a Cubic network
End of explanation
# Set up a dummy phase and apply uniform arbitrary conductance
phase = op.phases.GenericPhase(network=net)
phase['throat.electrical_conductance'] = 1.0
# Run algorithm
alg = op.algorithms.OhmicConduction(network=net, phase=phase)
alg.set_value_BC(pores=net.pores('BP_top'), values=0.0)
alg.set_value_BC(pores=net.pores('BP_bot'), values=1.0)
tic()
alg.run()
dt = toc(quiet=True);
print(f'Solve time: {dt:.3f} s')
Explanation: Solve the Poisson equation ($\nabla^2 \phi = 0$) on the maze
End of explanation
# Calculate flux in throats and show in pores
# Note: No need to calculate pore.rate as it auto interpolates from throat values
phase['throat.rate'] = alg.rate(throats=net.Ts, mode='single')
rate_im = np.ones([Nx, Ny]).flatten() * np.nan
rate_im[net['pore.index']] = phase['pore.rate']
rate_im = rate_im.reshape([Nx, Ny])
# Plot the maze solution
fig, ax = plt.subplots(figsize=(5, 5))
ax.imshow(rate_im, cmap='jet', interpolation="none")
ax.axis("off");
Explanation: Follow the gradient!
End of explanation
# We need to pass image transpose since matrix xy coords is inverted
# i.e., row is x and col is y, whereas in Cartesian convention, it's the opposite.
out = ps.networks.snow2(im.T)
proj = op.io.PoreSpy.import_data(out.network)
net = proj.network
Explanation: Approach B: Network extraction
Network extraction using SNOW algorithm
End of explanation
# Set up a dummy phase and apply uniform arbitrary conductance
phase = op.phases.GenericPhase(network=net)
phase['throat.electrical_conductance'] = 1.0
# Run algorithm
alg = op.algorithms.OhmicConduction(network=net, phase=phase)
alg.set_value_BC(pores=net.pores('ymin'), values=0.0)
alg.set_value_BC(pores=net.pores('ymax'), values=1.0)
tic()
alg.run()
dt = toc(quiet=True);
print(f'Solve time: {dt:.3f} s')
Explanation: Solve the Poisson equation ($\nabla^2 \phi = 0$) on the extracted network
End of explanation
# Get throat rate values
phase['throat.rate'] = alg.rate(throats=net.Ts, mode='single')
# Plot the maze solution (i.e., throat rates!)
fig, ax = plt.subplots(figsize=(5, 5))
op.topotools.plot_connections(net, ax=ax,
color_by=phase["throat.rate"],
linewidth=2, cmap="Wistia")
ax.imshow(im, interpolation="none", cmap='Blues');
ax.axis("off");
Explanation: Follow the gradient!
End of explanation |
9,856 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Features one by one
1.1. Quantitative
Histogram and boxplot
Step1: 1.2. Categorical
countplot
Step2: 2. Взаимодействия признаков
2.1. Количественный с количественным
pairplot, scatterplot, корреляции, heatmap
Step3: 2.2. Количественный с категориальным
boxplot, violinplot
Step4: 2.3. Категориальный с категориальным
countplot
Step5: 3. Прочее
Manifold learning, один из представителей – t-SNE | Python Code:
df['Total day minutes'].hist();
sns.boxplot(df['Total day minutes']);
df.hist();
Explanation: 1. Features one at a time
1.1. Quantitative features
Histogram and boxplot
End of explanation
df['State'].value_counts().head()
df['Churn'].value_counts()
sns.countplot(df['Churn']);
sns.countplot(df['State']);
sns.countplot(df[df['State'].\
isin(df['State'].value_counts().head().index)]['State']);
Explanation: 1.2. Categorical features
countplot
End of explanation
feat = [f for f in df.columns if 'charge' in f]
df[feat].hist();
sns.pairplot(df[feat]);
df['Churn'].map({False: 'blue', True: 'orange'}).head()
df[~df['Churn']].head()
plt.scatter(df[df['Churn']]['Total eve charge'],
df[df['Churn']]['Total intl charge'],
color='orange', label='churn');
plt.scatter(df[~df['Churn']]['Total eve charge'],
df[~df['Churn']]['Total intl charge'],
color='blue', label='loyal');
plt.xlabel('Evening charges');
plt.ylabel('International charges');
plt.title('Distribution of charges for loyal vs. churned customers');
plt.legend();
sns.heatmap(df.corr());
df.drop(feat, axis=1, inplace=True)
sns.heatmap(df.corr());
Explanation: 2. Feature interactions
2.1. Quantitative vs. quantitative
pairplot, scatterplot, correlations, heatmap
End of explanation
sns.boxplot(x='Churn', y='Total day minutes', data=df);
sns.boxplot(x='State', y='Total day minutes', data=df);
sns.violinplot(x='Churn', y='Total day minutes', data=df);
df.groupby('International plan')['Total day minutes'].mean()
sns.boxplot(x='International plan', y='Total day minutes', data=df);
Explanation: 2.2. Quantitative vs. categorical
boxplot, violinplot
End of explanation
pd.crosstab(df['Churn'], df['International plan'])
sns.countplot(x='International plan', hue='Churn', data=df);
sns.countplot(x='Customer service calls', hue='Churn', data=df);
Explanation: 2.3. Categorical vs. categorical
countplot
End of explanation
from sklearn.manifold import TSNE
tsne = TSNE(random_state=0)
df2 = df.drop(['State', 'Churn'], axis=1)
df2['International plan'] = df2['International plan'].map({'Yes': 1,
'No': 0})
df2['Voice mail plan'] = df2['Voice mail plan'].map({'Yes': 1,
'No': 0})
df2.info()
%%time
tsne.fit(df2)
plt.scatter(tsne.embedding_[df['Churn'].values, 0],
tsne.embedding_[df['Churn'].values, 1],
color='orange', alpha=.7);
plt.scatter(tsne.embedding_[~df['Churn'].values, 0],
tsne.embedding_[~df['Churn'].values, 1],
color='blue', alpha=.7);
Explanation: 3. Miscellaneous
Manifold learning; one well-known representative is t-SNE
End of explanation |
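As a usage note for the t-SNE step above: t-SNE is sensitive to feature scales, so standardizing the numeric columns first often gives a cleaner embedding. A minimal sketch (the StandardScaler step is an addition, not part of the original notebook):
from sklearn.preprocessing import StandardScaler
# Standardize features before embedding; df2 is the numeric frame built above.
X_scaled = StandardScaler().fit_transform(df2)
tsne_scaled = TSNE(random_state=0)
tsne_scaled.fit(X_scaled)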
9,857 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
https://www.kaggle.com/sudalairajkumar/quora-question-pairs/keras-starter-script-with-word-embeddings
Step1: Now read the train and test questions into list of questions.
Step2: Use the Keras tokenizer to tokenize the text and then pad the sentences to 30 words
Step3: Now let us create the embedding matrix where each row corresponds to a word.
Step4: Now it's time to build the model. Let us specify the model architecture. The first layer is the embedding layer.
Step5: In the embedding layer, 'trainable' is set to False so that the word embeddings are not updated during backpropagation.
The neural net architecture is as follows
Step6: Model training and predictions | Python Code:
import os
import csv
import codecs
import numpy as np
import pandas as pd
np.random.seed(1337)
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils.np_utils import to_categorical
from keras.layers import Dense, Input, Flatten, merge, LSTM, Lambda, Dropout
from keras.layers import Conv1D, MaxPooling1D, Embedding
from keras.models import Model
from keras.layers.wrappers import TimeDistributed, Bidirectional
from keras.layers.normalization import BatchNormalization
from keras import backend as K
import sys
BASE_DIR = 'data/'
GLOVE_DIR = '/home/mageswarand/dataset/glove/'
TRAIN_DATA_FILE = BASE_DIR + 'train.csv'
TEST_DATA_FILE = BASE_DIR + 'test.csv'
MAX_SEQUENCE_LENGTH = 30
MAX_NB_WORDS = 200000
EMBEDDING_DIM = 300
VALIDATION_SPLIT = 0.01
print('Indexing word vectors.')
embeddings_index = {}
f = codecs.open(os.path.join(GLOVE_DIR, 'glove.840B.300d.txt'), encoding='utf-8')
for line in f:
values = line.split(' ')
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print('Found %s word vectors.' % len(embeddings_index))
Explanation: https://www.kaggle.com/sudalairajkumar/quora-question-pairs/keras-starter-script-with-word-embeddings
End of explanation
print('Processing text dataset')
texts_1 = []
texts_2 = []
labels = [] # list of label ids
with codecs.open(TRAIN_DATA_FILE, encoding='utf-8') as f:
reader = csv.reader(f, delimiter=',')
header = next(reader)
for values in reader:
texts_1.append(values[3])
texts_2.append(values[4])
labels.append(int(values[5]))
print('Found %s texts.' % len(texts_1))
test_texts_1 = []
test_texts_2 = []
test_labels = [] # list of label ids
with codecs.open(TEST_DATA_FILE, encoding='utf-8') as f:
reader = csv.reader(f, delimiter=',')
header = next(reader)
for values in reader:
test_texts_1.append(values[1])
test_texts_2.append(values[2])
test_labels.append(values[0])
print('Found %s texts.' % len(test_texts_1))
Explanation: Now read the train and test questions into list of questions.
End of explanation
tokenizer = Tokenizer(nb_words=MAX_NB_WORDS)
tokenizer.fit_on_texts(texts_1 + texts_2 + test_texts_1 + test_texts_2)
sequences_1 = tokenizer.texts_to_sequences(texts_1)
sequences_2 = tokenizer.texts_to_sequences(texts_2)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
test_sequences_1 = tokenizer.texts_to_sequences(test_texts_1)
test_sequences_2 = tokenizer.texts_to_sequences(test_texts_2)
data_1 = pad_sequences(sequences_1, maxlen=MAX_SEQUENCE_LENGTH)
data_2 = pad_sequences(sequences_2, maxlen=MAX_SEQUENCE_LENGTH)
labels = np.array(labels)
print('Shape of data tensor:', data_1.shape)
print('Shape of label tensor:', labels.shape)
test_data_1 = pad_sequences(test_sequences_1, maxlen=MAX_SEQUENCE_LENGTH)
test_data_2 = pad_sequences(test_sequences_2, maxlen=MAX_SEQUENCE_LENGTH)
test_labels = np.array(test_labels)
del test_sequences_1
del test_sequences_2
del sequences_1
del sequences_2
import gc
gc.collect()
Explanation: Using the Keras tokenizer to tokenize the text and then padding the sentences to 30 words
End of explanation
print('Preparing embedding matrix.')
# prepare embedding matrix
nb_words = min(MAX_NB_WORDS, len(word_index))
embedding_matrix = np.zeros((nb_words, EMBEDDING_DIM))
for word, i in word_index.items():
if i >= nb_words:
continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
print('Null word embeddings: %d' % np.sum(np.sum(embedding_matrix, axis=1) == 0))
Explanation: Now let us create the embedding matrix where each row corresponds to a word.
End of explanation
embedding_layer = Embedding(nb_words,
EMBEDDING_DIM,
weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
trainable=False)
Explanation: Now it's time to build the model. Let us specify the model architecture. The first layer is the embedding layer.
End of explanation
# Model Architecture #
sequence_1_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences_1 = embedding_layer(sequence_1_input)
x1 = Conv1D(128, 3, activation='relu')(embedded_sequences_1)
x1 = MaxPooling1D(10)(x1)
x1 = Flatten()(x1)
x1 = Dense(64, activation='relu')(x1)
x1 = Dropout(0.2)(x1)
sequence_2_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences_2 = embedding_layer(sequence_2_input)
y1 = Conv1D(128, 3, activation='relu')(embedded_sequences_2)
y1 = MaxPooling1D(10)(y1)
y1 = Flatten()(y1)
y1 = Dense(64, activation='relu')(y1)
y1 = Dropout(0.2)(y1)
merged = merge([x1,y1], mode='concat')
merged = BatchNormalization()(merged)
merged = Dense(64, activation='relu')(merged)
merged = Dropout(0.2)(merged)
merged = BatchNormalization()(merged)
preds = Dense(1, activation='sigmoid')(merged)
model = Model(input=[sequence_1_input,sequence_2_input], output=preds)
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['acc'])
Explanation: In the embedding layer, 'trainable' is set to False so that the word embeddings are not updated during backpropagation.
The neural net architecture is as follows:
The word embeddings of each question are passed to a 1-dimensional convolution layer followed by max pooling
It is followed by one dense layer for each of the two questions
The outputs from both the dense layers are merged together
It is followed by a dense layer
Final layer is a sigmoid layer
End of explanation
# pass
model.fit([data_1,data_2], labels, validation_split=VALIDATION_SPLIT, nb_epoch=1, batch_size=1024, shuffle=True)
preds = model.predict([test_data_1, test_data_2])
print(preds.shape)
out_df = pd.DataFrame({"test_id":test_labels, "is_duplicate":preds.ravel()})
out_df.to_csv("test_predictions.csv", index=False)
Explanation: Model training and predictions:
Uncomment the cell below and run it locally, as it exceeds the time limits here.
End of explanation |
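If you do run training locally, a quick sanity check of the competition metric can be done with scikit-learn's log loss. This is only a rough sketch on a slice of the training data (not a proper held-out split), and it assumes model, data_1, data_2 and labels from the cells above:
from sklearn.metrics import log_loss
# Predict duplicate probabilities for a small slice and score them.
sample_preds = model.predict([data_1[:10000], data_2[:10000]], batch_size=1024)
print('log loss on sample: %.4f' % log_loss(labels[:10000], sample_preds))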
9,858 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
This tutorial will show you how to use the Table-Cleaner validation framework.
First, let's import the necessary modules. My personal style is to abbreviate
the scientific python libraries with two letters. This avoids namespace cluttering
while still being reasonably short.
Step1: IPython.display provides us with the means to display Python objects in a "rich" way, especially useful for tables.
Introduction
Validating tabular data, especially from CSV or Excel files is a very common task in data science and even generic programming. Many times this data isn't "clean" enough for further processing. Writing custom code to transform or clean up this kind of data quickly gets out of hand.
Table-Cleaner is a framework to generalize this cleaning process.
Basic Example
First, let's create a DataFrame with messy data.
Step2: This dataframe contains several columns. Some of the cells don't look much like the other cells in the same column. For Example we have numbers in the email and name columns and strings in the number columns.
Looking at the dtypes assigned to the dataframe columns reveals a further issue with this mess
Step3: All columns are referred to as "object", which means they are saved as individual Python objects, rather than strings, integers or floats. This can make further processing inefficient, but also error prone, because different Python objects may not work with certain dataframe functionality.
Let's define a cleaner
Step4: The TableCleaner constructor takes a dictionary for its first argument. This dictionary contains a mapping from column names to validator instances.
The tc.String instance validates every input to a string. Because most Python objects have some way of being represented as a string, this will usually work. Additionally, it can impose restrictions on minimum and maximum string length.
The tc.Int instance tries to turn the input into integer objects. This usually only works with numbers, or strings which look like integers. Here, also, minimum and maximum values can be optionally specified.
The cleaner object can now validate the input dataframe like this
Step5: The validate method returns a tuple containing the validated output dataframe
and a dataframe containing the verdicts on the individual cells.
Step6: The DataFrame only contains completely valid rows, because the default behavior is to delete any rows containing an error. See below on how to use missing values instead.
The datatypes for the "x" column is now int64 instead of object. "y" is now float64. Pandas uses the dtype system specified in numpy, and numpy references strings as "object". The main reason for this is that numeric data is usually stored in a contiguous way, meaning every value has the same "width" of bytes in memory. Strings, not so much. Their size varies. So arrays containing strings have to reference a string object with a pointer. Then the array of pointers is contiguous with a fixed number of bytes per pointer.
The "active" column is validated as a boolean field. There is a dtype called bool, but it only allows True and False. If there are missing values, the column reverts to "object".
Step7: So far, we have ensured only valid data is in the output table. But Table Cleaner can do more
Step8: In this case there is only one row per cell, or one per row and column. Except for the last row, where there are two warnings/errors for the Email column. In the current set of built-in validators this arises very rarely. Just keep in mind not to sum the errors up naively and call it the "number of invalid data points".
Let's filter the verdicts by validity
Step9: As this is an ordinary DataFrame, we can do all the known shenanigans to it, for example
Step10: This functionality is the main reason why Table Cleaner was initially written. In reproducible datascience, it is important not only to validate input data, but also be aware of, analyze and present the errors present in the data.
The framework laid out in this project aims to provide this capability. It's still in its infancy, and the API may well be changed.
Markup Frames
Let's bring some color into our tables. First, define some CSS styles for the notebook, like so
Step11: The MarkupFrame class is subclassed from Pandas' DataFrame class and is used to manipulate and render cell-specific markup. It behaves almost exactly the same as a DataFrame.
It can be created from a validation like this
Step12: Note that we put in the initial_df table, because the verdicts always relate to the original dataframe, not the output, which has possibly been altered and shortened during the validation process.
Now watch this
Step13: Booleans
The trouble with Booleans
Boolean values are either True or False. In Pandas, and data science in general, things are a bit more tricky. There is a third state, which Pandas would refer to as a missing value. Numpy's Bool dtype does not support missing values though.
Step14: What's happening there is that many Python objects have a way of being interpreted as either True or False. An empty list, empty strings, and None, are all considered false, for example.
Now, let's try that in Pandas
Step15: The dtype is not "bool". Instead Pandas refers to the individual Python object, and thus dtype must be "object". We can make it bool, though
Step16: Notice how np.NaN, which is normally interpreted as a missing value, has been converted to True?
If you try to index something with this sequence, this is what happens
Step17: Let's take a look at how to bring some sanity into this issue with Table Cleaner. First, define a messy DataFrame, with columns that are identical
Step18: Now create a cleaner which validates each column differently
Step19: Note that I used "delete=False" to keep rows with invalid data, while still converting available values. Then this dataframe has the same shape as MarkupFrame.from_validation expects. "allow_nan" defaults to True and controls whether or not missing values are considered an error.
Step20: Tables coming from external sources, especially spreadsheet data is notorious for having all sorts of ways to indicate booleans or missing values. The Bool validator takes three arguments to handle these cases
Step21: Email validation
Email validation is a subject unto itself. Some frameworks offer validation by simple regular expressions, which sometimes isn't enough. Other libraries or programs go so far as to ask the corresponding mail server if it knows a particular address.
In almost all generic use cases, you expect email addresses to adhere to a very specific form, meaning a username "at" a particular globally identifiable domain name. It is assumed that every computer in the world can resolve this domain name to the same physical server. Email standards and most email servers, however, don't require "fully qualified domain names" or even globally resolvable domains. "root@localhost" is a perfectly valid email address, but completely useless in most circumstances where you want to collect or use email addresses.
TableCleaner's Email validator class is based on Django's validation method. | Python Code:
import numpy as np
import pandas as pd
from IPython import display
import table_cleaner as tc
Explanation: Tutorial
This tutorial will show you how to use the Table-Cleaner validation framework.
First, let's import the necessary modules. My personal style is to abbreviate
the scientific python libraries with two letters. This avoids namespace cluttering
while still being reasonably short.
End of explanation
initial_df = pd.DataFrame(dict(name=["Alice", "Bob", "Wilhelm Alexander", 1, "Mary", "Andy"],
email=["[email protected]", "[email protected]", "blub", 4, "[email protected]",
"andy k@example .com"],
x=[0,3.2,"5","hello", -3,11,],
y=[0.2,3.2,1.3,"hello",-3.0,11.0],
active=["Y", None, "T", "false", "no", "T"]
))
display.display(initial_df)
Explanation: IPython.display provides us with the means to display Python objects in a "rich" way, especially useful for tables.
Introduction
Validating tabular data, especially from CSV or Excel files is a very common task in data science and even generic programming. Many times this data isn't "clean" enough for further processing. Writing custom code to transform or clean up this kind of data quickly gets out of hand.
Table-Cleaner is a framework to generalize this cleaning process.
Basic Example
First, let's create a DataFrame with messy data.
End of explanation
initial_df.dtypes
Explanation: This dataframe contains several columns. Some of the cells don't look much like the other cells in the same column. For Example we have numbers in the email and name columns and strings in the number columns.
Looking at the dtypes assigned to the dataframe columns reveals a further issue with this mess:
End of explanation
cleaner = tc.TableCleaner({'name': tc.String(min_length=2, max_length=10),
'email': tc.Email(),
'x': tc.Int(min_value=0, max_value=10),
'y': tc.Float64(min_value=0, max_value=10),
'active': tc.Bool(),
})
Explanation: All columns are referred to as "object", which means they are saved as individual Python objects, rather than strings, integers or floats. This can make further processing inefficient, but also error prone, because different Python objects may not work with certain dataframe functionality.
Let's define a cleaner:
End of explanation
output, verdicts = cleaner.validate(initial_df)
Explanation: The TableCleaner constructor takes a dictionary for its first argument. This dictionary contains a mapping from column names to validator instances.
The tc.String instance validates every input to a string. Because most Python objects have some way of being represented as a string, this will usually work. Additionally, it can impose restrictions on minimum and maximum string length.
The tc.Int instance tries to turn the input into integer objects. This usually only works with numbers, or strings which look like integers. Here, also, minimum and maximum values can be optionally specified.
The cleaner object can now validate the input dataframe like this:
End of explanation
display.display(output)
Explanation: The validate method returns a tuple containing the validated output dataframe
and a dataframe containing the verdicts on the individual cells.
End of explanation
output.dtypes
Explanation: The DataFrame only contains completely valid rows, because the default behavior is to delete any rows containing an error. See below on how to use missing values instead.
The datatypes for the "x" column is now int64 instead of object. "y" is now float64. Pandas uses the dtype system specified in numpy, and numpy references strings as "object". The main reason for this is that numeric data is usually stored in a contiguous way, meaning every value has the same "width" of bytes in memory. Strings, not so much. Their size varies. So arrays containing strings have to reference a string object with a pointer. Then the array of pointers is contiguous with a fixed number of bytes per pointer.
The "active" column is validated as a boolean field. There is a dtype called bool, but it only allows True and False. If there are missing values, the column reverts to "object".
End of explanation
verdicts
Explanation: So far, we have ensured only valid data is in the output table. But Table Cleaner can do more: The errors themselves can be treated as data:
End of explanation
errors = verdicts[~verdicts.valid]
display.display(errors)
Explanation: In this case there is only one row per cell, or one per row and column. Except for the last row, where there are two warnings/errors for the Email column. In the current set of built-in validators this arises very rarely. Just keep in mind not to sum the errors up naively and call it the "number of invalid data points".
Let's filter the verdicts by validity:
End of explanation
errors.groupby(["column", "reason"])["counter",].count()
Explanation: As this is an ordinary DataFrame, we can do all the known shenanigans to it, for example:
End of explanation
%%html
<style>
.tc-cell-invalid {
background-color: #ff8080
}
.tc-highlight {
color: red;
font-weight: bold;
    border: 3px solid black; /* was "margin: 3px solid black", which is not valid CSS */
background-color: #b0b0b0;
}
.tc-green {
background-color: #80ff80
}
.tc-blue {
background-color: #8080ff;
}
</style>
Explanation: This functionality is the main reason why Table Cleaner was initially written. In reproducible data science, it is important not only to validate input data, but also to be aware of, analyze and present the errors present in the data.
The framework laid out in this project aims to provide this capability. It's still in its infancy, and the API may well be changed.
Markup Frames
Let's bring some color into our tables. First, define some CSS styles for the notebook, like so:
End of explanation
mdf = tc.MarkupFrame.from_validation(initial_df, verdicts)
mdf
Explanation: The MarkupFrame class is subclassed from Pandas' DataFrame class and is used to manipulate and render cell-specific markup. It behaves almost exactly the same as a DataFrame.
It can be created from a validation like this:
End of explanation
mdf.x[1] += "tc-highlight"
mdf.y += "tc-green"
mdf.ix[0, :] += "tc-blue"
mdf
Explanation: Note that we put in the initial_df table, because the verdicts always relate to the original dataframe, not the output, which has possibly been altered and shortened during the validation process.
Now watch this:
End of explanation
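A small caveat about the markup cell above: the .ix indexer has been removed in recent pandas releases, so on newer versions the row-wise line would need .loc instead. A hedged equivalent:
# Same row-wise markup as above, written with .loc for pandas versions without .ix
mdf.loc[0, :] += "tc-blue"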
np.bool(None)
Explanation: Booleans
The trouble with Booleans
Boolean values are either True or False. In Pandas, and data science in general, things are a bit more tricky. There is a third state, which Pandas would refer to as a missing value. Numpy's Bool dtype does not support missing values though.
End of explanation
bools = pd.Series([True, False, None, np.NaN])
bools
Explanation: What's happening there is that many Python objects have a way of being interpreted as either True or False. An empty list, empty strings, and None, are all considered false, for example.
Now, let's try that in Pandas:
End of explanation
bools.astype(bool)
Explanation: The dtype is not "bool". Instead Pandas refers to the individual Python object, and thus dtype must be "object". We can make it bool, though:
End of explanation
original = pd.Series(range(3))
original[bools]
Explanation: Notice how np.NaN, which is normally interpreted as a missing value, has been converted to True?
If you try to index something with this sequence, this is what happens:
End of explanation
bools = [True, False, None, np.NaN]
bool_df = pd.DataFrame(dict(a=bools, b=bools, c=bools, d=bools))
bool_df
Explanation: Let's take a look at how to bring some sanity into this issue with Table Cleaner. First, define a messy DataFrame, with columns that are identical:
End of explanation
bool_cleaner = tc.TableCleaner(dict(a=tc.Bool(),
b=tc.Bool(true_values=[True], false_values=[False], allow_nan=False),
c=tc.Bool(true_values=[True], false_values=[False, None], allow_nan=False),
d=tc.Bool(true_values=[True], false_values=[False, np.nan],
nan_values=[None], allow_nan=False)))
bool_output, bool_verdicts = bool_cleaner.validate(bool_df, delete=False)
tc.MarkupFrame.from_validation(bool_output, bool_verdicts)
Explanation: Now create a cleaner which validates each column differently:
End of explanation
bool_verdicts[~bool_verdicts.valid]
Explanation: Note that I used "delete=False" to keep rows with invalid data, while still converting available values. Then this dataframe has the same shape as MarkupFrame.from_validation expects. "allow_nan" defaults to True and controls whether or not missing values are considered an error.
End of explanation
messy_bools_column =["T","t","on","yes", "No", "F"]
messy_bools = pd.DataFrame(dict(a=messy_bools_column, b=messy_bools_column))
bool_cleaner2 = tc.TableCleaner(dict(a=tc.Bool(),
b=tc.Bool(true_values=["T"], false_values=["F"], allow_nan=False),
))
bool_output2, bool_verdicts2 = bool_cleaner2.validate(messy_bools, delete=False)
tc.MarkupFrame.from_validation(bool_output2, bool_verdicts2)
Explanation: Tables coming from external sources, especially spreadsheet data is notorious for having all sorts of ways to indicate booleans or missing values. The Bool validator takes three arguments to handle these cases: true_values, false_values and nan_values.
End of explanation
messy_emails =["[email protected]", "[email protected]", "chris", "delta@localhost", "ernest@[email protected]", "fridolin@dev_server"]
email_df = pd.DataFrame(dict(email=messy_emails))
email_cleaner = tc.TableCleaner(dict(email=tc.Email()))
email_output, email_verdicts = email_cleaner.validate(email_df, delete=False)
tc.MarkupFrame.from_validation(email_output, email_verdicts)
Explanation: Email validation
Email validation is a subject unto itself. Some frameworks offer validation by simple regular expressions, which sometimes isn't enough. Other libraries or programs go so far as to ask the corresponding mail server if it knows a particular address.
In almost all generic use cases, you expect email addresses to adhere to a very specific form, meaning a username "at" a particular globally identifiable domain name. It is assumed that every computer in the world can resolve this domain name to the same physical server. Email standards and most email servers, however, don't require "fully qualified domain names" or even globally resolvable domains. "root@localhost" is a perfectly valid email address, but completely useless in most circumstances where you want to collect or use email addresses.
TableCleaner's Email validator class is based on Django's validation method.
End of explanation |
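To see exactly which of the messy addresses were rejected and why, the verdicts can be filtered the same way as earlier. A short sketch using the email_verdicts frame from the cell above:
# Keep only the failed checks for the email column
email_errors = email_verdicts[~email_verdicts.valid]
display.display(email_errors)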
9,859 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pandas-validator example
This is an example of pandas-validator in English.
Step1: Series Validator
Step2: DataFrame Validator
The DataFrameValidator class can validate a pandas DataFrame object.
It can be defined easily, much like a Django model definition. | Python Code:
# Please install this package using following command.
# $ pip install pandas-validator
import pandas_validator as pv
import pandas as pd
import numpy as np
Explanation: pandas-validator example
This is an example of pandas-validator in English.
End of explanation
# Create validator's instance
validator = pv.IntegerSeriesValidator(min_value=0, max_value=10)
series = pd.Series([0, 3, 6, 9]) # This series is valid.
print(validator.is_valid(series))
series = pd.Series([0, 4, 8, 12]) # This series is invalid because it includes the number 12.
print(validator.is_valid(series))
Explanation: Series Validator
End of explanation
# Define validator
class SampleDataFrameValidator(pv.DataFrameValidator):
row_num = 5
column_num = 2
label1 = pv.IntegerColumnValidator('label1', min_value=0, max_value=10)
label2 = pv.FloatColumnValidator('label2', min_value=0, max_value=10)
# Create validator's instance
validator = SampleDataFrameValidator()
df = pd.DataFrame({'label1': [0, 1, 2, 3, 4], 'label2': [5.0, 6.0, 7.0, 8.0, 9.0]}) # This data frame is valid.
print(validator.is_valid(df))
df = pd.DataFrame({'label1': [11, 12, 13, 14, 15], 'label2': [5.0, 6.0, 7.0, 8.0, 9.0]}) # This data frame is invalid.
print(validator.is_valid(df))
df = pd.DataFrame({'label1': [0, 1, 2], 'label2': [5.0, 6.0, 7.0]}) # This data frame is invalid.
print(validator.is_valid(df))
Explanation: DataFrame Validator
The DataFrameValidator class can validate a pandas DataFrame object.
It can be defined easily, much like a Django model definition.
End of explanation |
9,860 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Test
https://www.tensorflow.org/get_started/
Step1: The Computational Graph
You might think of TensorFlow Core programs as consisting of two discrete sections
Step2: Notice that printing the nodes does not output the values 3.0 and 4.0 as you might expect. Instead, they are nodes that, when evaluated, would produce 3.0 and 4.0, respectively. To actually evaluate the nodes, we must run the computational graph within a session. A <b>session</b> encapsulates the control and state of the TensorFlow runtime.
The following code creates a Session object and then invokes its run method to run enough of the computational graph to evaluate <i>node1</i> and <i>node2</i>. By running the computational graph in a session as follows
Step3: We can build more complicated computations by combining Tensor nodes with operations (Operations are also nodes.). For example, we can add our two constant nodes and produce a new graph as follows
Step4: TensorFlow provides a utility called TensorBoard that can display a picture of the computational graph. Here is a screenshot showing how TensorBoard visualizes the graph https
Step5: The preceding three lines are a bit like a function or a lambda in which we define two input parameters (a and b) and then an operation on them. We can evaluate this graph with multiple inputs by using the feed_dict parameter to specify Tensors that provide concrete values to these placeholders
Step6: In TensorBoard, the graph looks like this
Step7: The preceding computational graph would look as follows in TensorBoard
Step8: Constants are initialized when you call tf.constant, and their value can never change. By contrast, variables are not initialized when you call tf.Variable. To initialize all the variables in a TensorFlow program, you must explicitly call a special operation as follows
Step9: It is important to realize init is a handle to the TensorFlow sub-graph that initializes all the global variables. Until we call sess.run, the variables are uninitialized.
Since x is a placeholder, we can evaluate linear_model for several values of x simultaneously as follows
Step10: We've created a model, but we don't know how good it is yet. To evaluate the model on training data, we need a y placeholder to provide the desired values, and we need to write a loss function.
A loss function measures how far apart the current model is from the provided data. We'll use a standard loss model for linear regression, which sums the squares of the deltas between the current model and the provided data. linear_model - y creates a vector where each element is the corresponding example's error delta. We call tf.square to square that error. Then, we sum all the squared errors to create a single scalar that abstracts the error of all examples using tf.reduce_sum
Step11: We could improve this manually by reassigning the values of W and b to the perfect values of -1 and 1. A variable is initialized to the value provided to tf.Variable but can be changed using operations like tf.assign. For example, W=-1 and b=1 are the optimal parameters for our model. We can change W and b accordingly | Python Code:
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
Explanation: TensorFlow Test
https://www.tensorflow.org/get_started/
End of explanation
node1 = tf.constant(3.0, tf.float32)
node2 = tf.constant(4.0) # also tf.float32 implicitly
print(node1, node2)
Explanation: The Computational Graph
You might think of TensorFlow Core programs as consisting of two discrete sections:
Building the <b>computational graph.</b>
Running the computational graph.
A computational graph is a series of TensorFlow operations arranged into a graph of nodes. Let's build a simple computational graph. Each node takes zero or more tensors as inputs and produces a tensor as an output. One type of node is a constant. Like all TensorFlow constants, it takes no inputs, and it outputs a value it stores internally. We can create two floating point Tensors <i>node1</i> and <i>node2</i> as follows:
End of explanation
sess = tf.Session()
print(sess.run([node1, node2]))
Explanation: Notice that printing the nodes does not output the values 3.0 and 4.0 as you might expect. Instead, they are nodes that, when evaluated, would produce 3.0 and 4.0, respectively. To actually evaluate the nodes, we must run the computational graph within a session. A <b>session</b> encapsulates the control and state of the TensorFlow runtime.
The following code creates a Session object and then invokes its run method to run enough of the computational graph to evaluate <i>node1</i> and <i>node2</i>. By running the computational graph in a session as follows:
End of explanation
node3 = tf.add(node1, node2)
print("node3: ", node3)
print("sess.run(node3): ",sess.run(node3))
Explanation: We can build more complicated computations by combining Tensor nodes with operations (Operations are also nodes.). For example, we can add our two constant nodes and produce a new graph as follows:
End of explanation
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b # + provides a shortcut for tf.add(a, b)
Explanation: TensorFlow provides a utility called TensorBoard that can display a picture of the computational graph. Here is a screenshot showing how TensorBoard visualizes the graph https://www.tensorflow.org/images/getting_started_add.png.
As it stands, this graph is not especially interesting because it always produces a constant result. A graph can be parameterized to accept external inputs, known as placeholders. A placeholder is a promise to provide a value later.
End of explanation
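The TensorBoard pictures referenced above are produced by writing the graph to an event file. A minimal sketch for the TF 1.x API used in this notebook (the log directory name is an arbitrary choice):
# Write the current graph so `tensorboard --logdir /tmp/tf_getting_started` can render it
writer = tf.summary.FileWriter('/tmp/tf_getting_started', sess.graph)
writer.flush()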
print(sess.run(adder_node, {a: 3, b:4.5}))
print(sess.run(adder_node, {a: [1,3], b: [2, 4]}))
Explanation: The preceding three lines are a bit like a function or a lambda in which we define two input parameters (a and b) and then an operation on them. We can evaluate this graph with multiple inputs by using the feed_dict parameter to specify Tensors that provide concrete values to these placeholders:
End of explanation
add_and_triple = adder_node * 3.
print(sess.run(add_and_triple, {a: 3, b:4.5}))
Explanation: In TensorBoard, the graph looks like this: https://www.tensorflow.org/images/getting_started_adder.png.
We can make the computational graph more complex by adding another operation. For example,
End of explanation
W = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b
Explanation: The preceding computational graph would look as follows in TensorBoard: https://www.tensorflow.org/images/getting_started_triple.png
In machine learning we will typically want a model that can take arbitrary inputs, such as the one above. To make the model trainable, we need to be able to modify the graph to get new outputs with the same input. Variables allow us to add trainable parameters to a graph. They are constructed with a type and initial value:
End of explanation
init = tf.global_variables_initializer()
sess.run(init)
Explanation: Constants are initialized when you call tf.constant, and their value can never change. By contrast, variables are not initialized when you call tf.Variable. To initialize all the variables in a TensorFlow program, you must explicitly call a special operation as follows:
End of explanation
print(sess.run(linear_model, {x:[1,2,3,4]}))
Explanation: It is important to realize init is a handle to the TensorFlow sub-graph that initializes all the global variables. Until we call sess.run, the variables are uninitialized.
Since x is a placeholder, we can evaluate linear_model for several values of x simultaneously as follows:
End of explanation
y = tf.placeholder(tf.float32)
squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)
print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))
Explanation: We've created a model, but we don't know how good it is yet. To evaluate the model on training data, we need a y placeholder to provide the desired values, and we need to write a loss function.
A loss function measures how far apart the current model is from the provided data. We'll use a standard loss model for linear regression, which sums the squares of the deltas between the current model and the provided data. linear_model - y creates a vector where each element is the corresponding example's error delta. We call tf.square to square that error. Then, we sum all the squared errors to create a single scalar that abstracts the error of all examples using tf.reduce_sum:
End of explanation
fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
sess.run([fixW, fixb])
print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))
Explanation: We could improve this manually by reassigning the values of W and b to the perfect values of -1 and 1. A variable is initialized to the value provided to tf.Variable but can be changed using operations like tf.assign. For example, W=-1 and b=1 are the optimal parameters for our model. We can change W and b accordingly:
End of explanation |
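Instead of guessing W=-1 and b=1 by hand, the same loss can be minimized automatically. A minimal sketch using the TF 1.x gradient descent optimizer (the learning rate and step count are arbitrary choices):
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
sess.run(init)  # reset W and b to their initial (incorrect) values
for _ in range(1000):
    sess.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})
print(sess.run([W, b]))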
9,861 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parsing and cleaning tweets
This notebook is a slight modification of @wwymak's word2vec notebook, with different tokenization, and a way to iterate over tweets linked to their named user
wwymak's iterator and helper functions
Step1: My gensim tinkering
Tasks
Step2: And now we save everything for later analysis | Python Code:
import gensim
import os
import numpy as np
import itertools
import json
import re
import pymoji
import importlib
from nltk.tokenize import TweetTokenizer
from gensim import corpora
import string
from nltk.corpus import stopwords
from six import iteritems
import csv
tokenizer = TweetTokenizer()
def keep_retweets(tweets_objs_arr):
return [x["text"] for x in tweets_objs_arr if x['retweet'] != 'N'], [x["name"] for x in tweets_objs_arr if x['retweet'] != 'N'], [x["followers"] for x in tweets_objs_arr if x['retweet'] != 'N']
def convert_emojis(tweets_arr):
return [pymoji.replaceEmojiAlt(x, trailingSpaces=1) for x in tweets_arr]
def tokenize_tweets(tweets_arr):
result = []
for x in tweets_arr:
try:
tokenized = tokenizer.tokenize(x)
result.append([x.lower() for x in tokenized if x not in string.punctuation])
except:
pass
# print(x)
return result
class Tweets(object):
def __init__(self, dirname):
self.dirname = dirname
def __iter__(self):
for root, directories, filenames in os.walk(self.dirname):
for filename in filenames:
if(filename.endswith('json')):
print(root + filename)
with open(os.path.join(root,filename), 'r') as f:
data = json.load(f)
data_parsed_step1, user_names, followers = keep_retweets(data)
data_parsed_step2 = convert_emojis(data_parsed_step1)
data_parsed_step3 = tokenize_tweets(data_parsed_step2)
for data, name, follower in zip(data_parsed_step3, user_names, followers):
yield name, data, follower
#model = gensim.models.Word2Vec(sentences, workers=2, window=5, sg = 1, size = 100, max_vocab_size = 2 * 10000000)
#model.save('tweets_word2vec_2017_1_size100_window5')
#print('done')
#print(time.time() - start_time)
Explanation: Parsing and cleaning tweets
This notebook is a slight modification of @wwymak's word2vec notebook, with different tokenization, and a way to iterate over tweets linked to their named user
wwymak's iterator and helper functions
End of explanation
# building the dictionary first, from the iterator
sentences = Tweets('/media/henripal/hd1/data/2017/1/') # a memory-friendly iterator
dictionary = corpora.Dictionary((tweet for _, tweet, _ in sentences))
# here we use the downloaded stopwords from nltk and create the list
# of stop ids using the hash defined above
stop = set(stopwords.words('english'))
stop_ids = [dictionary.token2id[stopword] for stopword in stop if stopword in dictionary.token2id]
# and this is the items we don't want - that appear less than 20 times
# hardcoded numbers FTW
low_freq_ids = [tokenid for tokenid, docfreq in iteritems(dictionary.dfs) if docfreq <1500]
# finally we filter the dictionary and compactify
dictionary.filter_tokens(stop_ids + low_freq_ids)
dictionary.compactify() # remove gaps in id sequence after words that were removed
print(dictionary)
# reinitializing the iterator to get more stuff
sentences = Tweets('/media/henripal/hd1/data/2017/1/')
corpus = []
name_to_follower = {}
names = []
for name, tweet, follower in sentences:
corpus.append(tweet)
names.append(name)
name_to_follower[name] = follower
Explanation: My gensim tinkering
Tasks:
- build the gensim dictionary
- build the bow matrix using this dictionary (sparse matrix so memory friendly)
- save the names and the dictionary for later use
End of explanation
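The task list above mentions a bag-of-words matrix, but the corpus collected so far still holds raw token lists. A small sketch of the conversion using the dictionary built earlier:
# Convert each tokenized tweet into a sparse bag-of-words vector
bow_corpus = [dictionary.doc2bow(tweet) for tweet in corpus]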
with open('/media/henripal/hd1/data/name_to_follower.csv', 'w') as csv_file:
writer = csv.writer(csv_file)
for key, value in name_to_follower.items():
writer.writerow([key, value])
with open('/media/henripal/hd1/data/corpus_names.csv', 'w') as csv_file:
writer = csv.writer(csv_file)
writer.writerow(names)
# now we save the sparse bow corpus matrix using matrix market format
# (MmCorpus expects bag-of-words vectors, so we serialize bow_corpus rather than the raw token lists)
corpora.MmCorpus.serialize('/media/henripal/hd1/data/corp.mm', bow_corpus)
# and we save the dictionary as a text file
dictionary.save('/media/henripal/hd1/data/dict')
Explanation: And now we save everything for later analysis
End of explanation |
9,862 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keras versus Poisonous Mushrooms
This example demonstrates building a simple dense neural network using Keras. The example uses Agaricus Lepiota training data to detect poisonous mushrooms.
Step1: Feature extraction
If we wanted to use all the features in the training set then we would need to map each out. The LabelEncoder converts T/F data to 1 and 0. The LabelBinarizer converts categorical data to one hot encoding.
If we wanted to use all the features in the training set then we would need to map each out
Step2: Now lets transform the textual data to a vector...
The transformed data should have 26 features. The break down is as follows
Step3: Before we train the neural network, let's split the data into training and test datasets.
Step4: Model Definition
We will create a simple three layer neural network. The network contains two dense layers and a dropout layer (to avoid overfitting).
Layer 1
Step5: Model Training
Model Compile
This step configures the model for training with the following settings
Step6: Keras Callbacks
Keras provides callbacks as a means to instrument internal state. In this example, we will write a tensorflow event log. The event log enables a tensorboard visualization of the translated model. The event log also captures key metrics during training.
Note
Step7: Model Evaluation
Step8: Save/Restore the Model
Keras provides methods to save the models architecture as yaml or json.
Step9: We also need to save the parameters, or weights, learned from training.
Step10: Model Restore
We'll load the definition and parameters...
Step11: Let's run some predictions on the newly instantiated model.
Step13: Confusion Matrix | Python Code:
from pandas import read_csv
srooms_df = read_csv('../data/agaricus-lepiota.data.csv')
srooms_df.head()
Explanation: Keras versus Poisonous Mushrooms
This example demonstrates building a simple dense neural network using Keras. The example uses Agaricus Lepiota training data to detect poisonous mushrooms.
End of explanation
from sklearn_pandas import DataFrameMapper
import sklearn
import numpy as np
mappings = ([
('edibility', sklearn.preprocessing.LabelEncoder()),
('odor', sklearn.preprocessing.LabelBinarizer()),
('habitat', sklearn.preprocessing.LabelBinarizer()),
('spore-print-color', sklearn.preprocessing.LabelBinarizer())
])
mapper = DataFrameMapper(mappings)
srooms_np = mapper.fit_transform(srooms_df.copy())
Explanation: Feature extraction
If we wanted to use all the features in the training set then we would need to map each out. The LabelEncoder converts T/F data to 1 and 0. The LabelBinarizer converts categorical data to one hot encoding.
If we wanted to use all the features in the training set then we would need to map each out:
```
column_names = srooms_df.axes[1]
def get_mapping(name):
if(name == 'edibility' or name == 'gill-attachment'):
return (name, sklearn.preprocessing.LabelEncoder())
else:
return (name, sklearn.preprocessing.LabelBinarizer())
mappings = list(map(lambda name: get_mapping(name), column_names)
```
We will use a subset of features to make it interesting. Are there simple rules or a handful of features that can be used to test edibility? Lets try a few.
End of explanation
print(srooms_np.shape)
print("First sample: {}".format(srooms_np[0]))
print(" edibility (poisonous): {}".format(srooms_np[0][0]))
print("  odor (pungent): {}".format(srooms_np[0][1:10]))
print(" habitat (urban): {}".format(srooms_np[0][10:17]))
print(" spore-print-color (black): {}".format(srooms_np[0][17:]))
Explanation: Now lets transform the textual data to a vector...
The transformed data should have 26 features. The break down is as follows:
* Edibility (0 = edible, 1 = poisonous)
* odor (9 features):
[almond=a, creosote=c, foul=f, anise=l, musty=m, none=n, pungent=p, spicy=s, fishy=y]
* habitat (7 features):
[woods=d, grasses=g, leaves=l, meadows=m, paths=p, urban=u, waste=w]
* spore-print-color (9 features):
[buff=b, chocolate=h, black=k, brown=n, orange=o, green=r, purple=u, white=w, yellow=y]
End of explanation
from sklearn.model_selection import train_test_split
train, test = train_test_split(srooms_np, test_size = 0.2, random_state=7)
train_labels = train[:,0:1]
train_data = train[:,1:]
test_labels = test[:,0:1]
test_data = test[:,1:]
print('training data dims: {}, label dims: {}'.format(train_data.shape,train_labels.shape))
print('test data dims: {}, label dims: {}'.format(test_data.shape,test_labels.shape))
Explanation: Before we train the neural network, let's split the data into training and test datasets.
End of explanation
from keras.models import Sequential
from keras.layers import Dense, Dropout
model = Sequential()
model.add(Dense(20, activation='relu', input_dim=25))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.summary()
Explanation: Model Definition
We will create a simple three layer neural network. The network contains two dense layers and a dropout layer (to avoid overfitting).
Layer 1: Dense Layer
A dense layer applies an activation function to the output of $W \cdot x + b$. If the dense layer only had three inputs and outputs, then the dense layer looks like this...
Under the covers, keras represents the layer's weights as a matrix. The inputs, outputs, and biases are vectors...
$$
\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}
=
relu
\begin{pmatrix}
\begin{bmatrix}
W_{1,1} & W_{1,2} & W_{1,3} \\
W_{2,1} & W_{2,2} & W_{2,3} \\
W_{3,1} & W_{3,2} & W_{3,3}
\end{bmatrix}
\cdot
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
+
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}
\end{pmatrix}
$$
If this operation were decomposed further, it would look like this...
$$
\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}
=
\begin{bmatrix}
relu(W_{1,1} x_1 + W_{1,2} x_2 + W_{1,3} x_3 + b_1) \\
relu(W_{2,1} x_1 + W_{2,2} x_2 + W_{2,3} x_3 + b_2) \\
relu(W_{3,1} x_1 + W_{3,2} x_2 + W_{3,3} x_3 + b_3)
\end{bmatrix}
$$
The Rectified Linear Unit (ReLU) is defined as relu(x) = max(0, x): negative pre-activations are clipped to zero.
Layer 2: Dropout
The dropout layer prevents overfitting by randomly dropping inputs to the next layer.
Layer 3: Dense Layer
This layer acts like the first one, except this layer applies a sigmoid activation function. The output is the probability a mushroom is poisonous. If a sample represents a small probability of poisoning, we'll want to know!
$$y = sigmoid(W \cdot x + b)$$
Putting It Together
Fortunately, we don't need to worry about defining the parameters (the weights and biases) in Keras. We just define the layers in a sequence...
End of explanation
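The matrix form above can be checked directly with numpy. This is just an illustrative sketch with made-up weights, not part of the Keras model:
# One dense layer by hand: y = relu(W.x + b)
W_demo = np.array([[0.2, -0.5, 0.1],
                   [0.4, 0.3, -0.2],
                   [-0.1, 0.6, 0.5]])
x_demo = np.array([1.0, 2.0, 3.0])
b_demo = np.array([0.1, 0.0, -0.3])
y_demo = np.maximum(0.0, W_demo.dot(x_demo) + b_demo)
print(y_demo)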
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
Explanation: Model Training
Model Compile
This step configures the model for training with the following settings:
* An optimizer (updates the model weights based on the loss gradients)
* A loss function
* Metrics to track during training
End of explanation
from keras.callbacks import TensorBoard
tensor_board = TensorBoard(log_dir='./logs/keras_srooms', histogram_freq=1)
model.fit(train_data, train_labels, epochs=10, batch_size=32, callbacks=[tensor_board])
Explanation: Keras Callbacks
Keras provides callbacks as a means to instrument internal state. In this example, we will write a tensorflow event log. The event log enables a tensorboard visualization of the translated model. The event log also captures key metrics during training.
Note: This step is completely optional and depends on the backend engine.
End of explanation
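TensorBoard is only one of the available callbacks; early stopping is another commonly used one. A hedged sketch (the monitored metric and patience are arbitrary choices):
from keras.callbacks import EarlyStopping
# Stop training when the monitored loss stops improving for 2 consecutive epochs
early_stop = EarlyStopping(monitor='loss', patience=2)
# model.fit(..., callbacks=[tensor_board, early_stop])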
score = model.evaluate(test_data, test_labels, batch_size=1625)
print(score)
Explanation: Model Evaluation
End of explanation
print(model.to_yaml())
definition = model.to_yaml()
Explanation: Save/Restore the Model
Keras provides methods to save the models architecture as yaml or json.
End of explanation
model.save_weights('/tmp/srmooms.hdf5')
Explanation: We also need to save the parameters, or weights, learned from training.
End of explanation
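As an aside, Keras can also save the architecture and weights together in a single HDF5 file; a minimal sketch (the file path is an arbitrary choice):
# Save architecture + weights + optimizer state in one file
model.save('/tmp/srooms_full.h5')
# ...and restore it later with:
# from keras.models import load_model
# restored = load_model('/tmp/srooms_full.h5')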
from keras.models import model_from_yaml
new_model = model_from_yaml(definition)
new_model.load_weights('/tmp/srmooms.hdf5')
Explanation: Model Restore
We'll load the definition and parameters...
End of explanation
predictions = new_model.predict(test_data[0:25]).round()
for i in range(25):
if predictions[i]:
print('Test sample {} is poisonous.'.format(i))
Explanation: Let's run some predictions on the newly instantiated model.
End of explanation
predictions = new_model.predict(test_data).round()
labels = test_labels[:,0]
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(labels,predictions)
import matplotlib.pyplot as plt
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
    """This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plot_confusion_matrix(cm,['edible','poisonous'])
plt.show()
Explanation: Confusion Matrix
End of explanation |
9,863 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sommer 2017
HS 5
Step1: Bioscan GmbH is planning an access control system. The following database has already been developed for this and filled with test data.
a) List of all buildings with their rooms, sorted in ascending order by building name and room type
Step2: b) List of all data stored in the Zugang (access) table
together with the associated person data
Note
Step3: c) Number of rooms whose access control uses the feature "check fingerprint" or the feature "check iris"
Step4: d) List of the access data of Max Müller.
Note
Step5: e) List of all persons from the postal code (PLZ) area 5000 to 5999 | Python Code:
%load_ext sql
%sql mysql://steinam:steinam@localhost/sommer_2017
Explanation: Sommer 2017
HS 5
End of explanation
%%sql
select G.*, R.*
from `gebaeude` G
left join Raum R on G.`GebID` = R.`GebID`
order by G.`Bezeichnung`, R.`Typ`
Explanation: Bioscan GmbH is planning an access control system. The following database has already been developed for this and filled with test data.
a) List of all buildings with their rooms, sorted in ascending order by building name and room type
End of explanation
%%sql
-- b
-- According to the stored data, the output expected in the chamber exam cannot be produced
select p.*, Z.*
from Zugang Z
left join Person P on P.`PersID` = Z.`PersID`
Explanation: b) List of all data stored in the Zugang (access) table
together with the associated person data
Note: The presentation does not match the expected result (editorial error in the chamber exam (amateurs :-))
End of explanation
%%sql
Select M.Merkmal, COUNT(R.RaumID) as AnzahlRaueme
from Raum R left join Merkmal m on M.`MerkID` = R.`MerkID`
group by M.`Merkmal`
Explanation: c) Number of rooms whose access control uses the feature "check fingerprint" or the feature "check iris"
End of explanation
%%sql
select P.`Nachname`, P.`Vorname`, Z.RaumID, Z.`ZeitVon`, Z.`ZeitBis`from Zugang Z
left join Person P
on P.`PersID` = Z.`PersID`
where P.`Nachname` = 'Müller' and P.`Vorname` = 'Max'
Explanation: d) List of the access data of Max Müller.
Note: Only the name, not the PersID, is known
End of explanation
%%sql
Select P.*
from Person P
where P.`PLZ` like '5%'
Explanation: e) List of all persons from the postal code (PLZ) area 5000 to 5999
End of explanation |
9,864 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: DDSP Synths and Effects
This notebook demonstrates the use of several of the Synths and Effects Processors in the DDSP library. While the core functions are also directly accessible through ddsp.core, using Processors is the preferred API for end-2-end training.
As demonstrated in the 0_processors.ipynb tutorial, Processors contain the necessary nonlinearities and preprocessing in their get_controls() method to convert generic neural network outputs into valid processor controls, which are then converted to signal by get_signal(). The two methods are called in series by __call__().
While each processor is capable of a wide range of expression, we focus on simple examples here for clarity.
Step2: Synths
Synthesizers, located in ddsp.synths, take network outputs and produce a signal (usually used as audio).
Harmonic
The harmonic synthesizer models a sound as a linear combination of harmonic sinusoids. Amplitude envelopes are generated with 50% overlapping hann windows. The final audio is cropped to n_samples.
Inputs
Step3: Filtered Noise
The filtered noise synthesizer is a subtractive synthesizer that shapes white noise with a series of time-varying filter banks.
Inputs
Step4: Wavetable
The wavetable synthesizer generates audio through interpolative lookup from small chunks of waveforms (wavetables) provided by the network. In principle, it is very similar to the Harmonic synth, but with a parameterization in the waveform domain and generation using linear interpolation vs. cumulative summation of sinusoid phases.
Inputs
Step5: Effects
Effects, located in ddsp.effects are different in that they take network outputs to transform a given audio signal. Some effects, such as Reverb, optionally have trainable parameters of their own.
Reverb
There are several types of reverberation processors in ddsp.
Reverb
ExpDecayReverb
FilteredNoiseReverb
Unlike other processors, reverbs also have the option to treat the impulse response as a 'trainable' variable, and not require it from network outputs. This is helpful for instance if the room environment is the same for the whole dataset. To make the reverb trainable, just pass the kwarg trainable=True to the constructor
Step6: FIR Filter
Linear time-varying finite impulse response (LTV-FIR) filters are a broad class of filters that can vary over time.
Step8: ModDelay
Variable length delay lines create an instantaneous pitch shift that can be useful in a variety of time modulation effects such as vibrato, chorus, and flanging. | Python Code:
# Copyright 2021 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: <a href="https://colab.research.google.com/github/magenta/ddsp/blob/main/ddsp/colab/tutorials/1_synths_and_effects.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2021 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
#@title Install and import dependencies
%tensorflow_version 2.x
!pip install -qU ddsp
# Ignore a bunch of deprecation warnings
import warnings
warnings.filterwarnings("ignore")
import ddsp
import ddsp.training
from ddsp.colab.colab_utils import (play, record, specplot, upload,
DEFAULT_SAMPLE_RATE)
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
sample_rate = DEFAULT_SAMPLE_RATE # 16000
Explanation: DDSP Synths and Effects
This notebook demonstrates the use of several of the Synths and Effects Processors in the DDSP library. While the core functions are also directly accessible through ddsp.core, using Processors is the preferred API for end-2-end training.
As demonstrated in the 0_processors.ipynb tutorial, Processors contain the necessary nonlinearities and preprocessing in their get_controls() method to convert generic neural network outputs into valid processor controls, which are then converted to signal by get_signal(). The two methods are called in series by __call__().
While each processor is capable of a wide range of expression, we focus on simple examples here for clarity.
End of explanation
n_frames = 1000
hop_size = 64
n_samples = n_frames * hop_size
# Amplitude [batch, n_frames, 1].
# Make amplitude linearly decay over time.
amps = np.linspace(1.0, -3.0, n_frames)
amps = amps[np.newaxis, :, np.newaxis]
# Harmonic Distribution [batch, n_frames, n_harmonics].
# Make harmonics decrease linearly with frequency.
n_harmonics = 20
harmonic_distribution = np.ones([n_frames, 1]) * np.linspace(1.0, -1.0, n_harmonics)[np.newaxis, :]
harmonic_distribution = harmonic_distribution[np.newaxis, :, :]
# Fundamental frequency in Hz [batch, n_frames, 1].
f0_hz = 440.0 * np.ones([1, n_frames, 1])
# Create synthesizer object.
harmonic_synth = ddsp.synths.Harmonic(n_samples=n_samples,
scale_fn=ddsp.core.exp_sigmoid,
sample_rate=sample_rate)
# Generate some audio.
audio = harmonic_synth(amps, harmonic_distribution, f0_hz)
# Listen.
play(audio)
specplot(audio)
Explanation: Synths
Synthesizers, located in ddsp.synths, take network outputs and produce a signal (usually used as audio).
Harmonic
The harmonic synthesizer models a sound as a linear combination of harmonic sinusoids. Amplitude envelopes are generated with 50% overlapping hann windows. The final audio is cropped to n_samples.
Inputs:
* amplitudes: Amplitude envelope of the synthesizer output.
* harmonic_distribution: Normalized amplitudes of each harmonic.
* frequencies: Frequency in Hz of base oscillator.
End of explanation
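As a side note on the Processor API mentioned earlier, calling the harmonic synthesizer above is equivalent to running its two stages explicitly. A minimal sketch, reusing the variables defined above (method names follow the ddsp Processor convention, so treat this as illustrative rather than definitive):
controls = harmonic_synth.get_controls(amps, harmonic_distribution, f0_hz)
print(controls.keys())  # the validated / normalized control dictionary
audio_from_stages = harmonic_synth.get_signal(**controls)  # same result as harmonic_synth(...)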
n_frames = 250
n_frequencies = 1000
n_samples = 64000
# Bandpass filters, [n_batch, n_frames, n_frequencies].
magnitudes = [tf.sin(tf.linspace(0.0, w, n_frequencies)) for w in np.linspace(8.0, 80.0, n_frames)]
magnitudes = 0.5 * tf.stack(magnitudes)**4.0
magnitudes = magnitudes[tf.newaxis, :, :]
# Create synthesizer object.
filtered_noise_synth = ddsp.synths.FilteredNoise(n_samples=n_samples,
scale_fn=None)
# Generate some audio.
audio = filtered_noise_synth(magnitudes)
# Listen.
play(audio)
specplot(audio)
Explanation: Filtered Noise
The filtered noise synthesizer is a subtractive synthesizer that shapes white noise with a series of time-varying filter banks.
Inputs:
* magnitudes: Amplitude envelope of each filter bank (linearly spaced from 0Hz to the Nyquist frequency).
End of explanation
n_samples = 64000
n_wavetable = 2048
n_frames = 100
# Amplitude [batch, n_frames, 1].
amps = tf.linspace(0.5, 1e-3, n_frames)[tf.newaxis, :, tf.newaxis]
# Fundamental frequency in Hz [batch, n_frames, 1].
f0_hz = 110 * tf.linspace(1.5, 1, n_frames)[tf.newaxis, :, tf.newaxis]
# Wavetables [batch, n_frames, n_wavetable].
# Sin wave
wavetable_sin = tf.sin(tf.linspace(0.0, 2.0 * np.pi, n_wavetable))
wavetable_sin = wavetable_sin[tf.newaxis, tf.newaxis, :]
# Square wave
wavetable_square = tf.cast(wavetable_sin > 0.0, tf.float32) * 2.0 - 1.0
# Combine them and upsample to n_frames.
wavetables = tf.concat([wavetable_square, wavetable_sin], axis=1)
wavetables = ddsp.core.resample(wavetables, n_frames)
# Create synthesizer object.
wavetable_synth = ddsp.synths.Wavetable(n_samples=n_samples,
sample_rate=sample_rate,
scale_fn=None)
# Generate some audio.
audio = wavetable_synth(amps, wavetables, f0_hz)
# Listen, notice the aliasing artifacts from linear interpolation.
play(audio)
specplot(audio)
Explanation: Wavetable
The wavetable synthesizer generates audio through interpolative lookup from small chunks of waveforms (wavetables) provided by the network. In principle, it is very similar to the Harmonic synth, but with a parameterization in the waveform domain and generation using linear interpolation vs. cumulative summation of sinusoid phases.
Inputs:
* amplitudes: Amplitude envelope of the synthesizer output.
* wavetables: A series of wavetables that are interpolated to cover n_samples.
* frequencies: Frequency in Hz of base oscillator.
End of explanation
#@markdown Record or Upload Audio
record_or_upload = "Record" #@param ["Record", "Upload (.mp3 or .wav)"]
record_seconds = 5#@param {type:"number", min:1, max:10, step:1}
if record_or_upload == "Record":
audio = record(seconds=record_seconds)
else:
  # Load audio sample here (.mp3 or .wav file)
# Just use the first file.
filenames, audios = upload()
audio = audios[0]
# Add batch dimension
audio = audio[np.newaxis, :]
# Listen.
specplot(audio)
play(audio)
# Let's just do a simple exponential decay reverb.
reverb = ddsp.effects.ExpDecayReverb(reverb_length=48000)
gain = [[-2.0]]
decay = [[2.0]]
# gain: Linear gain of impulse response. Scaled by self._gain_scale_fn.
# decay: Exponential decay coefficient. The final impulse response is
# exp(-(2 + exp(decay)) * time) where time goes from 0 to 1.0 over the
# reverb_length samples.
audio_out = reverb(audio, gain, decay)
# Listen.
specplot(audio_out)
play(audio_out)
# Just the filtered noise reverb can be quite expressive.
reverb = ddsp.effects.FilteredNoiseReverb(reverb_length=48000,
scale_fn=None)
# Rising gaussian filtered band pass.
n_frames = 1000
n_frequencies = 100
frequencies = np.linspace(0, sample_rate / 2.0, n_frequencies)
center_frequency = 4000.0 * np.linspace(0, 1.0, n_frames)
width = 500.0
gauss = lambda x, mu: 2.0 * np.pi * width**-2.0 * np.exp(- ((x - mu) / width)**2.0)
# Actually make the magnitudes.
magnitudes = np.array([gauss(frequencies, cf) for cf in center_frequency])
magnitudes = magnitudes[np.newaxis, ...]
magnitudes /= magnitudes.sum(axis=-1, keepdims=True) * 5
# Apply the reverb.
audio_out = reverb(audio, magnitudes)
# Listen.
specplot(audio_out)
play(audio_out)
plt.matshow(np.rot90(magnitudes[0]), aspect='auto')
plt.title('Impulse Response Frequency Response')
plt.xlabel('Time')
plt.ylabel('Frequency')
plt.xticks([])
_ = plt.yticks([])
Explanation: Effects
Effects, located in ddsp.effects are different in that they take network outputs to transform a given audio signal. Some effects, such as Reverb, optionally have trainable parameters of their own.
Reverb
There are several types of reverberation processors in ddsp.
Reverb
ExpDecayReverb
FilteredNoiseReverb
Unlike other processors, reverbs also have the option to treat the impulse response as a 'trainable' variable, and not require it from network outputs. This is helpful for instance if the room environment is the same for the whole dataset. To make the reverb trainable, just pass the kwarg trainable=True to the constructor
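A minimal sketch of that trainable variant (constructor arguments assumed from the ddsp effects API; when trainable, the reverb is called with the audio only):
trainable_reverb = ddsp.effects.ExpDecayReverb(trainable=True, reverb_length=48000)
wet_audio = trainable_reverb(audio)  # impulse response parameters are now model variables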
End of explanation
#@markdown Record or Upload Audio
record_or_upload = "Record" #@param ["Record", "Upload (.mp3 or .wav)"]
record_seconds = 5#@param {type:"number", min:1, max:10, step:1}
if record_or_upload == "Record":
audio = record(seconds=record_seconds)
else:
  # Load audio sample here (.mp3 or .wav file)
# Just use the first file.
filenames, audios = upload()
audio = audios[0]
# Add batch dimension
audio = audio[np.newaxis, :]
# Listen.
specplot(audio)
play(audio)
# Let's make an oscillating gaussian bandpass filter.
fir_filter = ddsp.effects.FIRFilter(scale_fn=None)
# Make up some oscillating gaussians.
n_seconds = audio.size / sample_rate
frame_rate = 100 # Hz
n_frames = int(n_seconds * frame_rate)
n_samples = int(n_frames * sample_rate / frame_rate)
audio_trimmed = audio[:, :n_samples]
n_frequencies = 1000
frequencies = np.linspace(0, sample_rate / 2.0, n_frequencies)
lfo_rate = 0.5 # Hz
n_cycles = n_seconds * lfo_rate
center_frequency = 1000 + 500 * np.sin(np.linspace(0, 2.0*np.pi*n_cycles, n_frames))
width = 500.0
gauss = lambda x, mu: 2.0 * np.pi * width**-2.0 * np.exp(- ((x - mu) / width)**2.0)
# Actually make the magnitudes.
magnitudes = np.array([gauss(frequencies, cf) for cf in center_frequency])
magnitudes = magnitudes[np.newaxis, ...]
magnitudes /= magnitudes.max(axis=-1, keepdims=True)
# Filter.
audio_out = fir_filter(audio_trimmed, magnitudes)
# Listen.
play(audio_out)
specplot(audio_out)
_ = plt.matshow(np.rot90(magnitudes[0]), aspect='auto')
plt.title('Frequency Response')
plt.xlabel('Time')
plt.ylabel('Frequency')
plt.xticks([])
_ = plt.yticks([])
Explanation: FIR Filter
Linear time-varying finite impulse response (LTV-FIR) filters are a broad class of filters that can vary over time.
End of explanation
#@markdown Record or Upload Audio
record_or_upload = "Record" #@param ["Record", "Upload (.mp3 or .wav)"]
record_seconds = 5#@param {type:"number", min:1, max:10, step:1}
if record_or_upload == "Record":
audio = record(seconds=record_seconds)
else:
  # Load audio sample here (.mp3 or .wav file)
# Just use the first file.
filenames, audios = upload()
audio = audios[0]
# Add batch dimension
audio = audio[np.newaxis, :]
# Listen.
specplot(audio)
play(audio)
def sin_phase(mod_rate):
  """Helper function."""
n_samples = audio.size
n_seconds = n_samples / sample_rate
phase = tf.sin(tf.linspace(0.0, mod_rate * n_seconds * 2.0 * np.pi, n_samples))
return phase[tf.newaxis, :, tf.newaxis]
def modulate_audio(audio, center_ms, depth_ms, mod_rate):
mod_delay = ddsp.effects.ModDelay(center_ms=center_ms,
depth_ms=depth_ms,
gain_scale_fn=None,
phase_scale_fn=None)
phase = sin_phase(mod_rate) # Hz
gain = 1.0 * np.ones_like(audio)[..., np.newaxis]
audio_out = 0.5 * mod_delay(audio, gain, phase)
# Listen.
play(audio_out)
specplot(audio_out)
# Three different effects.
print('Flanger')
modulate_audio(audio, center_ms=0.75, depth_ms=0.75, mod_rate=0.25)
print('Chorus')
modulate_audio(audio, center_ms=25.0, depth_ms=1.0, mod_rate=2.0)
print('Vibrato')
modulate_audio(audio, center_ms=25.0, depth_ms=12.5, mod_rate=5.0)
Explanation: ModDelay
Variable length delay lines create an instantaneous pitch shift that can be useful in a variety of time modulation effects such as vibrato, chorus, and flanging.
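Concretely, with a time-varying delay d(t) the output is y(t) = x(t - d(t)), and the instantaneous pitch is scaled by roughly 1 - d'(t); the small, slow modulations used for flanger and chorus give subtle shifts, while the deeper and faster modulation used for vibrato gives an audible pitch wobble.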
End of explanation |
9,865 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
03 - Sequence Model Approach
The more 'classical' approach to solving this problem
Train a model that can take any number of 'steps'
Makes a prediction on next step based on previous steps
Learn from full tracks
For test tracks, predict what the next step's values will be
Step1: Load up and prep the datasets
Step2: Construct the training data and targets
For each track
Choose a number N between 8 and 24
That track will have 6 kinematics for N blocks
The target variable will be the 6 kinematic variables for the N+1th detector block
This will cause variable length sequences
Apply pad_sequences to prepend with zeros appropriately
Training Dataset
Step4: Validation Dataset
Step6: Multi-layer GRU Model with LReLU
Step7: Calculate the score on my predictions
Scoring code provided by Thomas Britton
Each kinematic has different weight
Step8: Visualize the predictions vs true
You can slice and dice the stats however you want, but it helps to be able to see your predictions at work.
Running history of me tinkering around
I didn't arrive at this construction from the start.
Many different changes and tweaks
Step9: Early Conclusions
GRU > LSTM
LeakyReLU > ReLU
adam > rmsprop
dropout 0.25 > dropout 0.5 > no dropout
Step10: Try CNN LSTM
Step11: Enough tinkering around
Formalize this into some scripts
Make predictions on competition test data | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, LeakyReLU, Dropout, ReLU, GRU, TimeDistributed, Conv2D, MaxPooling2D, Flatten
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.callbacks import EarlyStopping
from jlab import load_test_data, get_test_detector_plane
Explanation: 03 - Sequence Model Approach
The more 'classical' approach to solving this problem
Train a model that can take any number of 'steps'
Makes a prediction on next step based on previous steps
Learn from full tracks
For test tracks, predict what the next step's values will be
End of explanation
X_train = pd.read_csv('MLchallenge2_training.csv')
X_test = load_test_data('test_in.csv')
eval_planes = get_test_detector_plane(X_test)
# Also, load our truth values
y_true = pd.read_csv('test_prediction.csv', names=['x', 'y', 'px', 'py', 'pz'],
header=None)
X_test.head()
y_true.head()
Explanation: Load up and prep the datasets
End of explanation
N_SAMPLES = len(X_train)
N_DETECTORS = 25
N_KINEMATICS = 6
SHAPE = (N_SAMPLES, N_DETECTORS-1, N_KINEMATICS)
X_train_list = []
y_train_array = np.ndarray(shape=(N_SAMPLES, N_KINEMATICS-1))
for ix in range(N_SAMPLES):
seq_len = np.random.choice(range(8, 25))
track = X_train.iloc[ix].values.reshape(N_DETECTORS, N_KINEMATICS)
X_train_list.append(track[0:seq_len])
# Store the kinematics of the next in the sequence
# Ignore the 3rd one, which is z
y_train_array[ix] = track[seq_len][[0,1,3,4,5]]
for track in X_train_list[:10]:
print(len(track))
X_train_list = pad_sequences(X_train_list, dtype=float)
for track in X_train_list[:10]:
print(len(track))
X_train_array = np.array(X_train_list)
X_train_array.shape
y_train_array.shape
Explanation: Construct the training data and targets
For each track
Choose a number N between 8 and 24
That track will have 6 kinematics for N blocks
The target variable will be the 6 kinematic variables for the N+1th detector block
This will cause variable length sequences
Apply pad_sequences to prepend with zeros appropriately
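As a quick illustration of that padding behaviour (toy values only, not part of the original data), pad_sequences prepends zero timesteps so every track reaches the longest length, as shown below:
toy_tracks = [[[1, 1], [2, 2]],            # 2 timesteps
              [[3, 3], [4, 4], [5, 5]]]    # 3 timesteps
print(pad_sequences(toy_tracks, dtype=float)[0])  # -> [[0. 0.] [1. 1.] [2. 2.]]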
Training Dataset
End of explanation
N_TEST_SAMPLES = len(X_test)
y_test_array = y_true.values
X_test_list = []
for ix in range(N_TEST_SAMPLES):
seq_len = get_test_detector_plane(X_test.iloc[ix])
track = X_test.iloc[ix].values.reshape(N_DETECTORS, N_KINEMATICS)
X_test_list.append(track[0:seq_len])
X_test_list = pad_sequences(X_test_list, dtype=float)
X_test_array = np.array(X_test_list)
X_test_array.shape
y_test_array.shape
y_true.values.shape
import pandas as pd
import numpy as np
from math import floor
from tensorflow.keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
data = pd.read_csv('MLchallenge2_training.csv')
# Z values are constant -- what are they?
Z_VALS = data[['z'] + [f'z{i}' for i in range(1, 25)]].loc[0].values
# Z-distance from one timestep to another is set; calculate it
Z_DIST = [Z_VALS[i+1] - Z_VALS[i] for i in range(0, 24)] + [0.0]
# Number of timesteps
N_DETECTORS = 25
# Provided number of kinematics
N_KINEMATICS = 6
# Number of features after engineering them all
N_FEATURES = 13
def get_detector_meta(kin_array, det_id):
# Is there a large gap after this detector?
# 0 is for padded timesteps
# 1 is for No, 2 is for Yes
mind_the_gap = int(det_id % 6 == 0) + 1
# Detector group: 1 (origin), 2, 3, 4, or 5
det_grp = floor((det_id-1) / 6) + 2
# Detectors numbered 1-6 (origin is 6)
# (Which one in the group of six is it?)
det_rank = ((det_id-1) % 6) + 1
# Distance to the next detector?
z_dist = Z_DIST[det_id]
# Transverse momentum (x-y component)
pt = np.sqrt(np.square(kin_array[3]) + np.square(kin_array[4]))
# Total momentum
p_tot = np.sqrt(np.square(kin_array[3])
+ np.square(kin_array[4])
+ np.square(kin_array[5]))
# Put all the calculated features together
det_meta = np.array([det_id, mind_the_gap, det_grp, det_rank,
z_dist, pt, p_tot])
# Return detector data plus calculated features
return np.concatenate([kin_array, det_meta], axis=None)
def tracks_to_time_series(X):
    """Convert a training dataframe to a multivariate time-series training set.

    Pivots each track to a series of timesteps, then randomly truncates them
    to be identical to the provided test set. The step after the truncated
    step is saved as the target.
    Truncated sequences are front-padded with zeros.

    Parameters
    ----------
    X : pandas.DataFrame

    Returns
    -------
    (numpy.ndarray, numpy.ndarray)
        Tuple of the training data and labels
    """
X_ts_list = []
n_samples = len(X)
y_array = np.ndarray(shape=(n_samples, N_KINEMATICS-1))
for ix in range(n_samples):
# Randomly choose how many detectors the track went through
track_len = np.random.choice(range(8, 25))
# Reshape into ts-like
track = X.iloc[ix].values.reshape(N_DETECTORS, N_KINEMATICS)
#eng_track = np.zeros(shape=(N_DETECTORS, N_FEATURES))
#for i in range(0, N_DETECTORS):
# eng_track[i] = get_detector_meta(track[i], i)
# Truncate the track to only N detectors
X_ts_list.append(track[0:track_len])
# Store the kinematics of the next in the sequence
# Ignore the 3rd one, which is z
y_array[ix] = track[track_len][[0,1,3,4,5]]
# Pad the training sequence
X_ts_list = pad_sequences(X_ts_list, dtype=float)
X_ts_array = np.array(X_ts_list)
return X_ts_array, y_array
X, y = tracks_to_time_series(data)
X[3]
y[3]
X_train, X_test, y_train, y_test = train_test_split(X, y)
len(X_train), len(X_test)
Explanation: Validation Dataset
End of explanation
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GRU, Dense, LeakyReLU, Dropout
from tensorflow.keras.callbacks import EarlyStopping
import joblib
def lrelu(x):
return LeakyReLU()(x)
def gru_model(gru_units=35, dense_units=100,
dropout_rate=0.25):
    """Model definition.

    Three layers of Gated Recurrent Units (GRUs), utilizing
    LeakyReLU activations, finally passing GRU block output
    to a dense layer, passing its output to the final output
    layer, with a touch of dropout in between.
    Bon appetit.

    Parameters
    ----------
    gru_units : int
    dense_units : int
    dropout_rate : float

    Returns
    -------
    tensorflow.keras.models.Sequential
    """
model = Sequential()
model.add(GRU(gru_units, activation=lrelu,
input_shape=(N_DETECTORS-1, N_KINEMATICS),
return_sequences=True))
model.add(GRU(gru_units, activation=lrelu,
return_sequences=True))
model.add(GRU(gru_units, activation=lrelu))
model.add(Dense(dense_units, activation=lrelu))
model.add(Dropout(dropout_rate))
model.add(Dense(N_KINEMATICS-1))
model.compile(loss='mse', optimizer='adam')
return model
model = gru_model()
model.summary()
from tensorflow.keras.utils import plot_model
plot_model(model, to_file='gru_model.png', show_shapes=True)
es = EarlyStopping(monitor='val_loss', mode='min',
patience=5, restore_best_weights=True)
history = model.fit(
x=X_train,
y=y_train,
validation_data=(X_test, y_test),
callbacks=[es],
epochs=50,
)
model.save("gru_model.h5")
joblib.dump(history.history, "gru_model.history")
history = joblib.load("dannowitz_jlab2_model_20191031.history")
import matplotlib.pyplot as plt
# Plot training & validation loss values
plt.plot(history['loss'])
plt.plot(history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper right')
plt.show()
Explanation: Multi-layer GRU Model with LReLU
End of explanation
pred = pd.read_csv('data/submission/dannowitz_jlab2_submission_20191112.csv', header=None)
truth = pd.read_csv('data/ANSWERS.csv', header=None)
# Calculate square root of the mean squared error
# Then apply weights and sum them all up
sq_error = (truth - pred).applymap(np.square)
mse = sq_error.sum() / len(truth)
rmse = np.sqrt(mse)
rms_weighted = rmse / [0.03, 0.03, 0.01, 0.01, 0.011]
score = rms_weighted.sum()
score
Explanation: Calculate the score on my predictions
Scoring code provided by Thomas Britton
Each kinematic has different weight
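Written out, the score computed in the cell above is score = sum over k in (x, y, px, py, pz) of (1 / w_k) * sqrt( (1/N) * sum_i (truth_ik - pred_ik)^2 ), with weights w = (0.03, 0.03, 0.01, 0.01, 0.011), so the momentum components carry the heaviest weight.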
End of explanation
def lstm_model():
model = Sequential()
model.add(LSTM(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS)))
model.add(Dense(100, activation=LeakyReLU()))
model.add(Dropout(0.25))
model.add(Dense(N_KINEMATICS-1, activation='linear'))
model.compile(loss='mse', optimizer='adam')
return model
model = lstm_model()
model.summary()
history = model.fit(x=X_train_array, y=y_train_array, validation_data=(X_test_array, y_test_array), epochs=5)
history = model.fit(x=X_train_array, y=y_train_array,
validation_data=(X_test_array, y_test_array),
epochs=50, use_multiprocessing=True)
model = lstm_model()
es = EarlyStopping(monitor='val_loss', mode='min')
history = model.fit(x=X_train_array, y=y_train_array,
validation_data=(X_test_array, y_test_array),
callbacks=[es], epochs=20, use_multiprocessing=True)
model.save("lstm100-dense100-dropout025-epochs20-early-stopping.h5")
def lstm_model_lin():
model = Sequential()
model.add(LSTM(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS)))
model.add(Dense(100, activation=LeakyReLU()))
model.add(Dropout(0.25))
model.add(Dense(N_KINEMATICS-1, activation='linear'))
model.compile(loss='mse', optimizer='adam')
return model
lin_act_model = lstm_model_lin()
es = EarlyStopping(monitor='val_loss', mode='min')
history = lin_act_model.fit(x=X_train_array[:10000], y=y_train_array[:10000],
validation_data=(X_test_array, y_test_array),
callbacks=[es], epochs=20, use_multiprocessing=True)
def lstm_model_adam():
model = Sequential()
model.add(LSTM(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS)))
model.add(Dense(100, activation=LeakyReLU()))
model.add(Dropout(0.25))
model.add(Dense(N_KINEMATICS-1))
model.compile(loss='mse', optimizer='adam')
return model
adam_model = lstm_model_adam()
es = EarlyStopping(monitor='val_loss', mode='min')
history = adam_model.fit(x=X_train_array[:10000], y=y_train_array[:10000],
validation_data=(X_test_array, y_test_array),
callbacks=[es], epochs=20, use_multiprocessing=True)
def lstm_model_dropout50():
model = Sequential()
model.add(LSTM(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS)))
model.add(Dense(100, activation=LeakyReLU()))
model.add(Dropout(0.50))
model.add(Dense(N_KINEMATICS-1))
model.compile(loss='mse', optimizer='adam')
return model
dropout50_model = lstm_model_dropout50()
es = EarlyStopping(monitor='val_loss', mode='min')
history = dropout50_model.fit(x=X_train_array[:10000], y=y_train_array[:10000],
validation_data=(X_test_array, y_test_array),
callbacks=[es], epochs=20, use_multiprocessing=True)
def lstm_model_nodropout():
model = Sequential()
model.add(LSTM(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS)))
model.add(Dense(100, activation=LeakyReLU()))
model.add(Dense(N_KINEMATICS-1))
model.compile(loss='mse', optimizer='adam')
return model
nodropout_model = lstm_model_nodropout()
es = EarlyStopping(monitor='val_loss', mode='min')
history = nodropout_model.fit(x=X_train_array[:10000], y=y_train_array[:10000],
validation_data=(X_test_array, y_test_array),
callbacks=[es], epochs=20, use_multiprocessing=True)
def lstm_model_relu():
model = Sequential()
model.add(LSTM(200, activation='relu', input_shape=(N_DETECTORS-1, N_KINEMATICS)))
model.add(Dense(100, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(N_KINEMATICS-1))
model.compile(loss='mse', optimizer='adam')
return model
relu_model = lstm_model_relu()
es = EarlyStopping(monitor='val_loss', mode='min')
history = relu_model.fit(x=X_train_array[:10000], y=y_train_array[:10000],
validation_data=(X_test_array, y_test_array),
callbacks=[es], epochs=20, use_multiprocessing=True)
def model_gru():
model = Sequential()
model.add(GRU(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS)))
model.add(Dense(100, activation=LeakyReLU()))
model.add(Dropout(0.25))
model.add(Dense(N_KINEMATICS-1))
model.compile(loss='mse', optimizer='adam')
return model
gru_model = model_gru()
es = EarlyStopping(monitor='val_loss', mode='min')
history = gru_model.fit(x=X_train_array[:10000], y=y_train_array[:10000],
validation_data=(X_test_array, y_test_array),
callbacks=[es], epochs=20, use_multiprocessing=True)
Explanation: Visualize the predictions vs true
You can slice and dice the stats however you want, but it helps to be able to see your predictions at work.
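A possible quick look along those lines (a hedged sketch, not the original plotting code) is a scatter of predicted vs. true values for a single kinematic, e.g. x:
fig, ax = plt.subplots()
ax.scatter(truth[0], pred[0], s=2, alpha=0.3)  # column 0 holds x
ax.set_xlabel('true x')
ax.set_ylabel('predicted x')
plt.show()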
Running history of me tinkering around
I didn't arrive at this construction from the start.
Many different changes and tweaks
End of explanation
def model_v2():
model = Sequential()
model.add(GRU(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS)))
model.add(Dense(100, activation=LeakyReLU()))
model.add(Dropout(0.25))
model.add(Dense(N_KINEMATICS-1))
model.compile(loss='mse', optimizer='adam')
return model
v2_model = model_v2()
es = EarlyStopping(monitor='val_loss', mode='min')
history = v2_model.fit(x=X_train_array, y=y_train_array,
validation_data=(X_test_array, y_test_array),
callbacks=[es], epochs=8, use_multiprocessing=True)
def model_v2_deep():
model = Sequential()
model.add(GRU(30, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS),
return_sequences=True))
model.add(GRU(30, activation=LeakyReLU(), return_sequences=True))
model.add(GRU(30, activation=LeakyReLU()))
model.add(Dense(100, activation=LeakyReLU()))
model.add(Dropout(0.25))
model.add(Dense(N_KINEMATICS-1))
model.compile(loss='mse', optimizer='adam')
return model
v2_model_deep = model_v2_deep()
v2_model_deep.summary()
es = EarlyStopping(monitor='val_loss', mode='min', patience=2, restore_best_weights=True)
history = v2_model_deep.fit(x=X_train_array, y=y_train_array,
validation_data=(X_test_array, y_test_array),
callbacks=[es],
epochs=8, use_multiprocessing=True)
def model_v2_dbl_gru():
model = Sequential()
model.add(GRU(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS),
return_sequences=True))
model.add(GRU(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS)))
model.add(Dense(100, activation=LeakyReLU()))
model.add(Dropout(0.25))
model.add(Dense(N_KINEMATICS-1))
model.compile(loss='mse', optimizer='adam')
return model
v2_model_dbl_gru = model_v2_dbl_gru()
es = EarlyStopping(monitor='val_loss', mode='min')
history = v2_model_dbl_gru.fit(x=X_train_array[:20000], y=y_train_array[:20000],
validation_data=(X_test_array, y_test_array),
#callbacks=[es],
epochs=10, use_multiprocessing=True)
def model_v2_2x_dropout():
model = Sequential()
model.add(GRU(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS)))
model.add(Dropout(0.25))
model.add(Dense(100, activation=LeakyReLU()))
model.add(Dropout(0.25))
model.add(Dense(N_KINEMATICS-1))
model.compile(loss='mse', optimizer='adam')
return model
v2_model_dbl_dropout = model_v2_2x_dropout()
es = EarlyStopping(monitor='val_loss', mode='min')
history = v2_model_dbl_dropout.fit(x=X_train_array[:20000], y=y_train_array[:20000],
validation_data=(X_test_array, y_test_array),
callbacks=[es], epochs=20, use_multiprocessing=True)
def model_v2_big_gru():
model = Sequential()
model.add(GRU(400, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS)))
model.add(Dense(100, activation=LeakyReLU()))
model.add(Dropout(0.25))
model.add(Dense(N_KINEMATICS-1))
model.compile(loss='mse', optimizer='adam')
return model
v2_model_big_gru = model_v2_big_gru()
es = EarlyStopping(monitor='val_loss', mode='min')
history = v2_model_big_gru.fit(x=X_train_array[:20000], y=y_train_array[:20000],
validation_data=(X_test_array, y_test_array),
#callbacks=[es],
epochs=10, use_multiprocessing=True)
v2_model_big_gru.fit(x=X_train_array[:20000], y=y_train_array[:20000],
validation_data=(X_test_array, y_test_array),
#callbacks=[es],
epochs=15, use_multiprocessing=True, initial_epoch=10)
Explanation: Early Conclusions
GRU > LSTM
LeakyReLU > ReLU
adam > rmsprop
dropout 0.25 > dropout 0.5 > no dropout
End of explanation
X_train_array.shape
from tensorflow.keras.layers import Conv1D, MaxPooling1D  # 1-D conv layers used below
def cnn_gru():
    model = Sequential()
    model.add(Conv1D(filters=5, kernel_size=2, strides=1, input_shape=(N_DETECTORS-1, N_KINEMATICS)))
#model.add(MaxPooling1D())
model.add(GRU(200, activation=LeakyReLU()))
model.add(Dense(100, activation=LeakyReLU()))
model.add(Dropout(0.25))
model.add(Dense(N_KINEMATICS-1))
model.compile(loss='mse', optimizer='adam')
return model
cnn_model = cnn_gru()
cnn_model.summary()
#es = EarlyStopping(monitor='val_loss', mode='min')
history = cnn_model.fit(x=X_train_array[:20000], y=y_train_array[:20000],
validation_data=(X_test_array, y_test_array),
epochs=10, use_multiprocessing=True)
history.history
Explanation: Try CNN LSTM
End of explanation
from train import train
from predict import predict
model = train(frac=1.00, filename="dannowitz_jlab2_model", epochs=100, ret_model=True)
preds = predict(model_filename="dannowitz_jlab2_model.h5",
data_filename="test_in (1).csv",
output_filename="danowitz_jlab2_submission.csv")
Explanation: Enough tinkering around
Formalize this into some scripts
Make predictions on competition test data
End of explanation |
9,866 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
======================================================================
Time-frequency on simulated data (Multitaper vs. Morlet vs. Stockwell)
======================================================================
This example demonstrates the different time-frequency estimation methods
on simulated data. It shows the time-frequency resolution trade-off
and the problem of estimation variance. In addition it highlights
alternative functions for generating TFRs without averaging across
trials, or by operating on numpy arrays.
Step1: Simulate data
We'll simulate data with a known spectro-temporal structure.
Step2: Calculate a time-frequency representation (TFR)
Below we'll demonstrate the output of several TFR functions in MNE
Step3: (1) Least smoothing (most variance/background fluctuations).
Step4: (2) Less frequency smoothing, more time smoothing.
Step5: (3) Less time smoothing, more frequency smoothing.
Step6: Stockwell (S) transform
Stockwell uses a Gaussian window to balance temporal and spectral resolution.
Importantly, frequency bands are phase-normalized, hence strictly comparable
with regard to timing, and, the input signal can be recoverd from the
transform in a lossless way if we disregard numerical errors. In this case,
we control the spectral / temporal resolution by specifying different widths
of the gaussian window using the width parameter.
Step7: Morlet Wavelets
Finally, show the TFR using morlet wavelets, which are a sinusoidal wave
with a gaussian envelope. We can control the balance between spectral and
temporal resolution with the n_cycles parameter, which defines the
number of cycles to include in the window.
Step8: Calculating a TFR without averaging over epochs
It is also possible to calculate a TFR without averaging across trials.
We can do this by using average=False. In this case, an instance of
Step9: Operating on arrays
MNE also has versions of the functions above which operate on numpy arrays
instead of MNE objects. They expect inputs of the shape
(n_epochs, n_channels, n_times). They will also return a numpy array
of shape (n_epochs, n_channels, n_freqs, n_times). | Python Code:
# Authors: Hari Bharadwaj <[email protected]>
# Denis Engemann <[email protected]>
# Chris Holdgraf <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
from matplotlib import pyplot as plt
from mne import create_info, EpochsArray
from mne.baseline import rescale
from mne.time_frequency import (tfr_multitaper, tfr_stockwell, tfr_morlet,
tfr_array_morlet)
print(__doc__)
Explanation: ======================================================================
Time-frequency on simulated data (Multitaper vs. Morlet vs. Stockwell)
======================================================================
This example demonstrates the different time-frequency estimation methods
on simulated data. It shows the time-frequency resolution trade-off
and the problem of estimation variance. In addition it highlights
alternative functions for generating TFRs without averaging across
trials, or by operating on numpy arrays.
End of explanation
sfreq = 1000.0
ch_names = ['SIM0001', 'SIM0002']
ch_types = ['grad', 'grad']
info = create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types)
n_times = 1024 # Just over 1 second epochs
n_epochs = 40
seed = 42
rng = np.random.RandomState(seed)
noise = rng.randn(n_epochs, len(ch_names), n_times)
# Add a 50 Hz sinusoidal burst to the noise and ramp it.
t = np.arange(n_times, dtype=float) / sfreq  # (np.float is deprecated in recent NumPy)
signal = np.sin(np.pi * 2. * 50. * t) # 50 Hz sinusoid signal
signal[np.logical_or(t < 0.45, t > 0.55)] = 0. # Hard windowing
on_time = np.logical_and(t >= 0.45, t <= 0.55)
signal[on_time] *= np.hanning(on_time.sum()) # Ramping
data = noise + signal
reject = dict(grad=4000)
events = np.empty((n_epochs, 3), dtype=int)
first_event_sample = 100
event_id = dict(sin50hz=1)
for k in range(n_epochs):
events[k, :] = first_event_sample + k * n_times, 0, event_id['sin50hz']
epochs = EpochsArray(data=data, info=info, events=events, event_id=event_id,
reject=reject)
epochs.average().plot()
Explanation: Simulate data
We'll simulate data with a known spectro-temporal structure.
End of explanation
freqs = np.arange(5., 100., 3.)
vmin, vmax = -3., 3. # Define our color limits.
Explanation: Calculate a time-frequency representation (TFR)
Below we'll demonstrate the output of several TFR functions in MNE:
:func:mne.time_frequency.tfr_multitaper
:func:mne.time_frequency.tfr_stockwell
:func:mne.time_frequency.tfr_morlet
Multitaper transform
First we'll use the multitaper method for calculating the TFR.
This creates several orthogonal tapering windows in the TFR estimation,
which reduces variance. We'll also show some of the parameters that can be
tweaked (e.g., time_bandwidth) that will result in different multitaper
properties, and thus a different TFR. You can trade time resolution or
frequency resolution or both in order to get a reduction in variance.
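(As a rule of thumb taken from the MNE multitaper documentation, the number of DPSS tapers used is floor(time_bandwidth - 1), so the settings below with time_bandwidth = 2.0, 4.0 and 8.0 correspond to 1, 3 and 7 tapers, as noted in the code comments.)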
End of explanation
n_cycles = freqs / 2.
time_bandwidth = 2.0 # Least possible frequency-smoothing (1 taper)
power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles,
time_bandwidth=time_bandwidth, return_itc=False)
# Plot results. Baseline correct based on first 100 ms.
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax,
title='Sim: Least smoothing, most variance')
Explanation: (1) Least smoothing (most variance/background fluctuations).
End of explanation
n_cycles = freqs # Increase time-window length to 1 second.
time_bandwidth = 4.0 # Same frequency-smoothing as (1) 3 tapers.
power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles,
time_bandwidth=time_bandwidth, return_itc=False)
# Plot results. Baseline correct based on first 100 ms.
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax,
title='Sim: Less frequency smoothing, more time smoothing')
Explanation: (2) Less frequency smoothing, more time smoothing.
End of explanation
n_cycles = freqs / 2.
time_bandwidth = 8.0 # Same time-smoothing as (1), 7 tapers.
power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles,
time_bandwidth=time_bandwidth, return_itc=False)
# Plot results. Baseline correct based on first 100 ms.
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax,
title='Sim: Less time smoothing, more frequency smoothing')
Explanation: (3) Less time smoothing, more frequency smoothing.
End of explanation
fig, axs = plt.subplots(1, 3, figsize=(15, 5), sharey=True)
fmin, fmax = freqs[[0, -1]]
for width, ax in zip((0.2, .7, 3.0), axs):
power = tfr_stockwell(epochs, fmin=fmin, fmax=fmax, width=width)
power.plot([0], baseline=(0., 0.1), mode='mean', axes=ax, show=False,
colorbar=False)
ax.set_title('Sim: Using S transform, width = {:0.1f}'.format(width))
plt.tight_layout()
Explanation: Stockwell (S) transform
Stockwell uses a Gaussian window to balance temporal and spectral resolution.
Importantly, frequency bands are phase-normalized, hence strictly comparable
with regard to timing, and the input signal can be recovered from the
transform in a lossless way if we disregard numerical errors. In this case,
we control the spectral / temporal resolution by specifying different widths
of the gaussian window using the width parameter.
End of explanation
fig, axs = plt.subplots(1, 3, figsize=(15, 5), sharey=True)
all_n_cycles = [1, 3, freqs / 2.]
for n_cycles, ax in zip(all_n_cycles, axs):
power = tfr_morlet(epochs, freqs=freqs,
n_cycles=n_cycles, return_itc=False)
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax,
axes=ax, show=False, colorbar=False)
n_cycles = 'scaled by freqs' if not isinstance(n_cycles, int) else n_cycles
ax.set_title('Sim: Using Morlet wavelet, n_cycles = %s' % n_cycles)
plt.tight_layout()
Explanation: Morlet Wavelets
Finally, show the TFR using morlet wavelets, which are a sinusoidal wave
with a gaussian envelope. We can control the balance between spectral and
temporal resolution with the n_cycles parameter, which defines the
number of cycles to include in the window.
End of explanation
n_cycles = freqs / 2.
power = tfr_morlet(epochs, freqs=freqs,
n_cycles=n_cycles, return_itc=False, average=False)
print(type(power))
avgpower = power.average()
avgpower.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax,
title='Using Morlet wavelets and EpochsTFR', show=False)
Explanation: Calculating a TFR without averaging over epochs
It is also possible to calculate a TFR without averaging across trials.
We can do this by using average=False. In this case, an instance of
:class:mne.time_frequency.EpochsTFR is returned.
End of explanation
power = tfr_array_morlet(epochs.get_data(), sfreq=epochs.info['sfreq'],
freqs=freqs, n_cycles=n_cycles,
output='avg_power')
# Baseline the output
rescale(power, epochs.times, (0., 0.1), mode='mean', copy=False)
fig, ax = plt.subplots()
mesh = ax.pcolormesh(epochs.times * 1000, freqs, power[0],
cmap='RdBu_r', vmin=vmin, vmax=vmax)
ax.set_title('TFR calculated on a numpy array')
ax.set(ylim=freqs[[0, -1]], xlabel='Time (ms)')
fig.colorbar(mesh)
plt.tight_layout()
plt.show()
Explanation: Operating on arrays
MNE also has versions of the functions above which operate on numpy arrays
instead of MNE objects. They expect inputs of the shape
(n_epochs, n_channels, n_times). They will also return a numpy array
of shape (n_epochs, n_channels, n_freqs, n_times).
End of explanation |
9,867 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Task 1
List the 10 largest tracks (by size in bytes) of the ROCK genre and the MPEG format
Step1: Task 2
List the names of all artists, their songs and the titles of their albums for all Rock-genre tracks purchased by Microsoft employees. | Python Code:
%%sql
SELECT t.*
FROM tracks t
INNER JOIN genres g
ON t.genreid = g.genreid
INNER JOIN media_types m
ON m.mediatypeid = t.mediatypeid
WHERE g.name = 'Rock'
AND m.name LIKE 'MPEG%'
ORDER BY t.bytes desc
limit 10
Explanation: Task 1
List the 10 largest tracks (by size in bytes) of the ROCK genre and the MPEG format
End of explanation
%%sql
SELECT distinct ar.name, t.name, a.title
FROM tracks t
INNER JOIN albums a
ON a.albumid = t.albumid
INNER JOIN artists ar
ON a.artistid = ar.artistid
INNER JOIN invoice_items i
ON i.trackid = t.trackid
INNER JOIN invoices ii
on ii.invoiceid = i.invoiceid
INNER JOIN customers c
ON ii.customerid = c.customerid
INNER JOIN genres g
ON g.genreid = t.genreid
WHERE c.company like '%Microsoft%'
AND g.name = 'Rock'
Explanation: Task 2
List the names of all artists, their songs and the titles of their albums for all Rock-genre tracks purchased by Microsoft employees.
End of explanation |
9,868 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial for flexx.react - reactive programming
Also see http
Step1: First, we create two input signals
Step2: Input signals can be called with an argument to set their value
Step3: Now we create a new signal to react to changes in our input signals
Step4: Signals produce new values, thereby transforming or combining the upstream signals. Let's create another signal to react to the "name" signal
Step5: So if we change either of the inputs ...
Step6: Observations
Step7: Dynamism
We subclass Collection and define reactions to signals of signals. Any change in a collections' ref or in the name of that ref will invoke an update of show_ref_name. Similary, show_index is updated when the list of items changes, or any of the names of the items
Step8: Lazy evaluation
By default react uses a push approach, which is useful in GUI's. In other situation, a pull approach might be more appropriate.
Step9: Signal history
Signals store their current value as well as the previous value. (The timestamps are also stored, though these are not yet available via the public API.) | Python Code:
from flexx import react
Explanation: Tutorial for flexx.react - reactive programming
Also see http://flexx.readthedocs.org/en/latest/react/
Where classic event-driven programming is about reacting to things that happen, RP is about staying up to date with changing signals. Signals are objects that have a value which changes over time. Signals are fed with either user input or other (upstream) signal values. In that way you create a pipeline that is always kept up-to date; it defines how information flows through your application.
Introduction
End of explanation
@react.input
def first_name(n='John'):
assert isinstance(n, str) # validation
return n.capitalize() # normalization
@react.input
def last_name(n='Doe'):
assert isinstance(n, str)
return n.capitalize()
Explanation: First, we create two input signals:
End of explanation
first_name() # get signal value
first_name('jane') # set signal value (for input signals)
first_name()
Explanation: Input signals can be called with an argument to set their value:
End of explanation
@react.connect('first_name', 'last_name')
def name(first, last):
return '%s %s' % (first, last)
Explanation: Now we create a new signal to react to changes in our input signals:
End of explanation
@react.connect('name')
def greet(n):
print('hello %s!' % n)
Explanation: Signals produce new values, thereby transforming or combining the upstream signals. Let's create another signal to react to the "name" signal:
End of explanation
first_name('Guido')
last_name('van Rossum')
Explanation: So if we change either of the inputs ...
End of explanation
class Item(react.HasSignals):
@react.input
def name(n):
return str(n)
class Collection(react.HasSignals):
@react.input
def items(items):
assert all([isinstance(i, Item) for i in items])
return tuple(list(items))
@react.input
def ref(i):
assert isinstance(i, Item)
return i
itemA, itemB, itemC, itemD = Item(name='A'), Item(name='B'), Item(name='C'), Item(name='D')
C1 = Collection(items=(itemA, itemB))
C2 = Collection(items=(itemC, itemD))
itemB.name()
C1.items()
Explanation: Observations:
The upstream signal (i.e. source) is specified at the callback function
The callback function is transformed into a signal
Signals produce new signal values, so you can create a stream/pipeline
Creating a pipeline provides a nice mechanism for caching values that take long to compute
Multiple upstream signals can be specified
It provides a nice integral way for user-provided data, as an alternative to properties or traits
The HasSignals class
Signals can also be specified at a class:
End of explanation
class Collection2(Collection):
@react.connect('ref.name')
def show_ref_name(name):
print('The ref is %s' % name)
@react.connect('items.*.name')
def show_index(*names):
print('index: '+ ', '.join(names))
itemA, itemB, itemC, itemD = Item(name='A'), Item(name='B'), Item(name='C'), Item(name='D')
C1 = Collection2(items=(itemA, itemB))
C2 = Collection2(items=(itemC, ))
C1.ref(itemA)
C1.ref(itemD)
itemD.name('D-renamed')
C2.items([itemC, itemD])
itemC.name('C-renamed')
Explanation: Dynamism
We subclass Collection and define reactions to signals of signals. Any change in a collection's ref or in the name of that ref will invoke an update of show_ref_name. Similarly, show_index is updated when the list of items changes, or any of the names of the items
End of explanation
@react.input
def foo(v):
return str(v)
@react.lazy('foo')
def bar(v):
print('update bar')
return v * 10 # imagine that this is an expensive operation
foo('hello') # Does not trigger bar
foo('heya')
foo('hi')
bar() # this is where bar gets updated
bar() # foo has not changed; cached value is returned
Explanation: Lazy evaluation
By default react uses a push approach, which is useful in GUIs. In other situations, a pull approach might be more appropriate.
End of explanation
@react.input
def some_value(v=0):
return float(v)
some_value(0) # init
@react.connect('some_value')
def show_diff(s):
print('diff: ', s - some_value.last_value) # note: we might rename this to previous_value
some_value(10)
some_value(12)
Explanation: Signal history
Signals store their current value as well as the previous value. (The timestamps are also stored, though these are not yet available via the public API.)
End of explanation |
9,869 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preamble
Step1: Notebook Environment
Step2: Example
Step3: Closed-form KL divergence between diagonal Gaussians
Step4: Monte Carlo estimation
The KL divergence is an expectation of log density ratios over distribution p. We can approximate it with Monte Carlo samples. | Python Code:
%matplotlib notebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import norm
from keras import backend as K
from keras.layers import (Input, Activation, Dense, Lambda, Layer,
add, multiply)
from keras.models import Model, Sequential
from keras.callbacks import TerminateOnNaN
from keras.datasets import mnist
from tqdm import tnrange
import tensorflow as tf
Explanation: Preamble
End of explanation
plt.style.use('seaborn-notebook')
sns.set_context('notebook')
np.set_printoptions(precision=2,
edgeitems=3,
linewidth=80,
suppress=True)
'TensorFlow version: ' + K.tf.__version__
sess = tf.InteractiveSession()
Explanation: Notebook Environment
End of explanation
D = 2
q_mu = np.float32([ 1., 4.])
p_mu = np.float32([-3., 2.])
q_sigma = np.ones(D).astype('float32')
p_sigma = 2.5*np.ones(D).astype('float32')
q = tf.distributions.Normal(loc=q_mu, scale=q_sigma)
p = tf.distributions.Normal(loc=p_mu, scale=p_sigma)
fig, ax = plt.subplots(figsize=(5, 5))
ax.scatter(*p.sample(sample_shape=(500,)).eval().T,
s=8., alpha=.8, label='p samples')
ax.scatter(*q.sample(sample_shape=(500,)).eval().T,
s=8.,alpha=.8, label='q samples')
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.legend()
plt.show()
Explanation: Example: Diagonal Gaussians
End of explanation
def kl_divergence_gaussians(q_mu, q_sigma, p_mu, p_sigma):
r = q_mu - p_mu
return np.sum(np.log(p_sigma) - np.log(q_sigma)
- .5 * (1. - (q_sigma**2 + r**2) / p_sigma**2),
axis=-1)
kl_true = kl_divergence_gaussians(q_mu, q_sigma, p_mu, p_sigma)
kl_true
tf.reduce_sum(tf.distributions.kl_divergence(
tf.distributions.Normal(loc=q_mu, scale=q_sigma),
tf.distributions.Normal(loc=p_mu, scale=p_sigma)), axis=-1).eval()
Explanation: Closed-form KL divergence between diagonal Gaussians
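For diagonal Gaussians q = N(q_mu, diag(q_sigma^2)) and p = N(p_mu, diag(p_sigma^2)), the closed form implemented above is KL(q||p) = sum_i [ log(p_sigma_i / q_sigma_i) + (q_sigma_i^2 + (q_mu_i - p_mu_i)^2) / (2 * p_sigma_i^2) - 1/2 ], which is what kl_divergence_gaussians computes term by term.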
End of explanation
mc_samples = 10000
def log_density_ratio_gaussians(z, q_mu, q_sigma, p_mu, p_sigma):
r_p = (z - p_mu) / p_sigma
r_q = (z - q_mu) / q_sigma
return np.sum(np.log(p_sigma) - np.log(q_sigma) +
.5 * (r_p**2 - r_q**2), axis=-1)
mc_estimates = pd.Series(
log_density_ratio_gaussians(
q.sample(sample_shape=(mc_samples,)).eval(),
q_mu, q_sigma,
p_mu, p_sigma
)
)
# cumulative mean of the MC estimates
mean = mc_estimates.expanding().mean()
golden_size = lambda width: (width, 2. * width / (1 + np.sqrt(5)))
fig, ax = plt.subplots(figsize=golden_size(6))
mean.plot(ax=ax, label='Estimated')
ax.axhline(y=kl_true, color='r', linewidth=2., label='True')
ax.set_xlabel('Monte Carlo samples')
ax.set_ylabel('KL Divergence')
ax.set_ylim(.95*kl_true, 1.05*kl_true)
ax.legend()
plt.show()
fig, ax = plt.subplots(figsize=golden_size(6))
ax.plot(np.square(mean - kl_true))
ax.set_xlabel('Monte Carlo samples')
ax.set_ylabel('Squared error')
ax.set_ylim(-0.01, 1.)
plt.show()
mean.index.name = 'samples'
mean_df = pd.DataFrame(mean.rename('mc_estimate'))
g = sns.JointGrid(x='samples', y='mc_estimate',
data=mean_df.reset_index())
g = g.plot_joint(plt.plot)
g = g.plot_marginals(sns.kdeplot, shade=True)
g.ax_marg_x.clear()
g.ax_marg_x.set_xticks([])
g.ax_marg_x.set_yticks([])
g.set_axis_labels('Monte Carlo samples', 'KL Divergence')
Explanation: Monte Carlo estimation
The KL divergence KL(q||p) is an expectation of the log density ratio log q(z) - log p(z) over distribution q (the distribution we sample from above). We can approximate it with Monte Carlo samples.
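That is, KL(q||p) = E_{z~q}[log q(z) - log p(z)] ≈ (1/N) * sum_n [log q(z_n) - log p(z_n)] with z_n drawn from q, which is exactly what the cells above compute with N = 10000 samples before taking the running mean.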
End of explanation |
9,870 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Combining geosocial and financial data to understand retail performance
Geosocial data is location-based social media data that can be interpreted and analyzed as part of any location-oriented business decision.
In this notebook, Spatial.ai geosocial data is combined with Mastercard financial data to quantify how much of retail performance can be explained by geosocial behavior. We compare the cities of Chicago and Los Angeles to show how retail performance is driven by different social segments depending on the city.
Note this use case leverages premium datasets from CARTO Data Observatory.
The notebook is organized in the following sections
Step1: 0.2. Set CARTO default credentials
In order to be able to use the Data Observatory via CARTOframes, you need to set your CARTO account credentials first.
Please, visit the Authentication guide for further detail.
Step2: Note about credentials
For security reasons, we recommend storing your credentials in an external file to prevent publishing them by accident when sharing your notebooks. You can get more information in the section Setting your credentials of the Authentication guide.
<a id='section1'></a>
1. Download data from the Data Observatory
In this section, we'll download the two datasets we're interested in and combine them into a single dataframe.
For more information on how to access Data Observatory datasets using CARTOframes visit the Guides or take a look at the <a href='https
Step3: The following function downloads the latest Spatial.ai Geosocial Segments data for the specified bounding box.
Step4: The following function downloads the latest Mastercard Geographic Insights data for the specified bounding box.
Note the data is also filtered by geo_type. There are three types of geo_types depending if the indices shown represent a comparison with regards to country (c), province (p) or metropolitan area (m).
Step5: The following function loads the three datasets and merges them into a single one.
Note we can merge the dataframes because both datasets are defined at the census block group level, so the geoid's are common to both of them.
Step6: Download data into dataframes
Here we'll load the data for Chicago and LA.
Step7: <a id='section2'></a>
2. Analysis
Once we have the data ready for analysis, we will start by selecting a target variable out of the Mastercard financial variables. This is the variable we are interested in explaining. We'll select the variable txn_amt which is the total transaction amount by census block group. In addition, we can select an industry. Mastercard provides data for 5 different industries
Step8: <a id='section21'></a>
2.1 Top performers. Location visualization.
Here we identify the top 5% performers and visualize them.
An interesting insight here is how in Chicago top performers tend to concentrate in the downtown area, while in LA they are more spread throughout the city.
Step9: <a id='section22'></a>
2.2 Characterization of top performers
In this section, we'll identify what social segments characterize top performers in Chicago and LA and compare the two cities.
Characterization based on social segments
<b>Main insights</b>
Step10: We'll use the following function to make sure social segments have same color in both cities for an easier comparison.
Note we'll use CARTO's bold palette colors. Here you can explore all our palettes.
Step11: Characterization based on indices
<b>Main insights</b>
Step12: <a id='section23'></a>
2.3 Correlation analysis
In this section we'll carry out a deeper analysis on how geosocial data can help explain retail performance. We'll compare the most important features for Chicago and LA.
Social segments
Here we calculate the correlation coefficient between every social segment and the total transaction amount for restaurants. This allows us to identify which segments have a strongest impact and the differences between both cities.
This analysis provides very interesting insights. We can see how the social segment "sites to see" impacts positively in Chicago, while it barely impacts in LA.
Step13: Indices
Now, we'll calculate the correlation coefficient between every social index and the metric total transaction amount for restaurants. This allows us to identify which indices have a strongest impact and the differences between both cities.
It is interesting to see the different correlation coefficients for older affinity and discount affinity in Chicago and LA. | Python Code:
import geopandas as gpd
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from cartoframes.auth import set_default_credentials
from cartoframes.data.observatory import *
from cartoframes.viz import *
from shapely import wkt
pd.set_option('display.max_columns', None)
sns.set_style('whitegrid')
%matplotlib inline
Explanation: Combining geosocial and financial data to understand retail performance
Geosocial data is location-based social media data that can be interpreted and analyzed as part of any location-oriented business decision.
In this notebook, Spatial.ai geosocial data is combined with Mastercard financial data to quantify how much of retail performance can be explained by geosocial behavior. We compare the cities of Chicago and Los Angeles to show how retail performance is driven by different social segments depending on the city.
Note this use case leverages premium datasets from CARTO Data Observatory.
The notebook is organized in the following sections:
1. Download data from the Data Observatory
2. Analyzing geosocial drivers
- Identify where top performers are
- Characterization of top performers
- Correlation analysis
0. Setup
0.1. Import packages
End of explanation
from cartoframes.auth import set_default_credentials
set_default_credentials('creds.json')
Explanation: 0.2. Set CARTO default credentials
In order to be able to use the Data Observatory via CARTOframes, you need to set your CARTO account credentials first.
Please, visit the Authentication guide for further detail.
End of explanation
LA_CBG_PATH = 'https://libs.cartocdn.com/cartoframes/samples/la_cbg.csv'
LA_BB = '-118.673619,33.553967,-117.997960,34.360425'
CHICACO_CBG_PATH = 'https://libs.cartocdn.com/cartoframes/samples/chicago_cbg.csv'
CHICAGO_BB = '-88.638285,41.434892,-87.487468,42.502873'
def read_cbg(city):
cbg_list = pd.DataFrame()
bbox = ''
if city == 'la':
cbg_list = pd.read_csv(LA_CBG_PATH, dtype={'geoid':str})['geoid'].tolist()
bbox = LA_BB
elif city == 'chicago':
cbg_list = pd.read_csv(CHICACO_CBG_PATH, dtype={'geoid':str})['geoid'].tolist()
bbox = CHICAGO_BB
return cbg_list, bbox
Explanation: Note about credentials
For security reasons, we recommend storing your credentials in an external file to prevent publishing them by accident when sharing your notebooks. You can get more information in the section Setting your credentials of the Authentication guide.
<a id='section1'></a>
1. Download data from the Data Observatory
In this section, we'll download the two datasets we're interested in and combine them into a single dataframe.
For more information on how to access Data Observatory datasets using CARTOframes visit the Guides or take a look at the <a href='https://carto.com/developers/cartoframes/examples/#example-access-premium-data-from-the-data-observatory' target='_blank'>Access Premium Data</a> template.
Data loading functions
The following function loads the census block group ids for the city of interest. This is used afterwards to filter the data downloaded from the Data Observatory.
End of explanation
def download_social(bbox, cbg_list):
dataset = Dataset.get('spa_geosocial_s_d5dc42ae')
sql_query = f"SELECT * FROM $dataset$ WHERE CAST(do_date AS date) >= (SELECT MAX(CAST(do_date AS date)) FROM $dataset$) AND ST_IntersectsBox(geom, {bbox})"
social = dataset.to_dataframe(sql_query=sql_query)
social = social[social['geoid'].isin(cbg_list)]
social.drop(columns=['do_label', 'do_area', 'do_perimeter', 'do_num_vertices'], inplace=True)
return social
Explanation: The following function downloads the latest Spatial.ai Geosocial Segments data for the specified bounding box.
End of explanation
def download_mastercard(bbox, cbg_list):
dataset = Dataset.get('mc_geographic__7980c5c3')
sql_query = f"SELECT * FROM $dataset$ WHERE geo_type = 'm' AND CAST(do_date AS date) >= (SELECT MAX(CAST(do_date AS date)) FROM $dataset$) AND ST_IntersectsBox(geom, {bbox})"
mrli = dataset.to_dataframe(sql_query=sql_query)
mrli = mrli[mrli['geoid'].isin(cbg_list)]
mrli.drop(columns=['do_label', 'do_area', 'do_perimeter', 'do_num_vertices'], inplace=True)
mrli = mrli.sort_values(['geoid', 'industry', 'segment', 'geo_type', 'do_date']).reset_index(drop=True)
return mrli
Explanation: The following function downloads the latest Mastercard Geographic Insights data for the specified bounding box.
Note the data is also filtered by geo_type. There are three geo_type values, depending on whether the indices shown represent a comparison with regard to the country (c), the province (p) or the metropolitan area (m).
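As a small hedged illustration of that filter (not part of the original notebook), the same download could be parameterized by comparison level; the only assumption beyond the code above is that the geo_type letter is swapped into the WHERE clause.
def download_mastercard_by_geo(bbox, cbg_list, geo_type='m'):
    # geo_type: 'c' = country, 'p' = province, 'm' = metropolitan area
    dataset = Dataset.get('mc_geographic__7980c5c3')
    sql_query = f"SELECT * FROM $dataset$ WHERE geo_type = '{geo_type}' AND CAST(do_date AS date) >= (SELECT MAX(CAST(do_date AS date)) FROM $dataset$) AND ST_IntersectsBox(geom, {bbox})"
    mrli = dataset.to_dataframe(sql_query=sql_query)
    return mrli[mrli['geoid'].isin(cbg_list)]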
End of explanation
def download_data(city):
cbg_list, bbox = read_cbg(city)
social = download_social(bbox, cbg_list)
social.drop(columns='geom', inplace=True)
mrli = download_mastercard(bbox, cbg_list)
mrli = mrli[(mrli['segment'] == 'o')]
mrli = mrli.merge(social, on='geoid')
mrli = gpd.GeoDataFrame(mrli, crs='epsg:4326')
return mrli
Explanation: The following function loads the three datasets and merges them into a single one.
Note we can merge the dataframes because both datasets are defined at the census block group level, so the geoid's are common to both of them.
End of explanation
chicago = download_data('chicago')
chicago.head(3)
la = download_data('la')
la.head(3)
Explanation: Download data into dataframes
Here we'll load the data for Chicago and LA.
End of explanation
target_var = 'txn_amt' # Total transaction amount
industry = 'eap' # Eating places
ch_ret = chicago[(chicago['industry'] == industry) & (chicago['comparison_level'] == 'DMA')]
la_ret = la[(la['industry'] == industry) & (la['comparison_level'] == 'DMA')]
Explanation: <a id='section2'></a>
2. Analysis
Once we have the data ready for analysis, we will start by selecting a target variable out of the Mastercard financial variables. This is the variable we are interested in explaining. We'll select the variable txn_amt which is the total transaction amount by census block group. In addition, we can select an industry. Mastercard provides data for 5 different industries: eating places, groceries, apparel, automotive fuel, and accomodation. They also provide the industry retail which is the sum of all the latter. For this use case, we'll select eating places, eap.
End of explanation
def classify_finance(value, thres_95):
if value >= thres_95:
return 'Top 5%'
else:
return 'Rest'
ch_ret.loc[:, f'{target_var}_class'] = list(map(classify_finance, ch_ret[target_var], [ch_ret[target_var].quantile(0.95)]*ch_ret.shape[0]))
la_ret.loc[:, f'{target_var}_class'] = list(map(classify_finance, la_ret[target_var], [la_ret[target_var].quantile(0.95)]*la_ret.shape[0]))
Layout([Map(Layer(ch_ret[ch_ret[f'{target_var}_class'] != 'Rest'],
geom_col='geom',
style=color_category_style(f'{target_var}_class',
cat=['Top 5%'],
palette=['#009B9E']),
legends=color_category_legend('Performers'))),
Map(Layer(la_ret[la_ret[f'{target_var}_class'] != 'Rest'],
geom_col='geom',
style=color_category_style(f'{target_var}_class',
cat=['Top 5%'],
palette=['#009B9E']),
legends=color_category_legend('Performers')))],
map_height=420
)
Explanation: <a id='section21'></a>
2.1 Top performers. Location visualization.
Here we identify the top 5% performers and visualize them.
An interesting insight here is how in Chicago top performers tend to concentrate in the downtown area, while in LA they are more spread throughout the city.
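For intuition on the Top 5% split used in this section, here is a tiny self-contained sketch of the same quantile rule on synthetic numbers (purely illustrative, independent of the CARTO data):
import numpy as np
import pandas as pd
demo = pd.Series(np.random.lognormal(mean=10, sigma=1, size=1000), name='txn_amt')
thres_95 = demo.quantile(0.95)
demo_class = np.where(demo >= thres_95, 'Top 5%', 'Rest')
print(pd.Series(demo_class).value_counts())  # roughly 50 of the 1000 rows land in 'Top 5%'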
End of explanation
social_segment_columns = ch_ret.columns[17:-22]
top_ch_ret = ch_ret[ch_ret[f'{target_var}_class'] == 'Top 5%']
top_ch_ret = top_ch_ret[social_segment_columns].describe().transpose()
top_ch_ret.sort_values(['50%'], ascending=False, inplace=True)
top_la_ret = la_ret[la_ret[f'{target_var}_class'] == 'Top 5%']
top_la_ret = top_la_ret[social_segment_columns].describe().transpose()
top_la_ret.sort_values(['50%'], ascending=False, inplace=True)
Explanation: <a id='section22'></a>
2.2 Characterization of top performers
In this section, we'll identify what social segments characterize top performers in Chicago and LA and compare the two cities.
Characterization based on social segments
<b>Main insights</b>:
- Both cities share 4 out of 7 of their most important social segments (although not in the same order). These segments are related to food and drinks, and lgbtq culture.
- Regarding the social segments which are different from one city to the other, while in Chicago Asian food and culture, whiskey business, and sweet treats are on the top 7 segments, in LA it is film lovers, fitness fashion, and heartfelt sharing. This shows two very different cities in terms of social behavior patterns.
End of explanation
def palette(segments, color_dict):
return [color_dict[segment] for segment in segments]
unique_segments = np.unique(top_ch_ret.head(7).index.tolist() + top_la_ret.head(7).index.tolist())
palette_c=['#7F3C8D','#11A579','#3969AC','#F2B701','#E73F74','#80BA5A','#E68310','#008695','#CF1C90','#f97b72', '#4b4b8f', '#A5AA99']
color_dict = dict(zip(unique_segments, palette_c))
def plot_top_segments(df, ax, color_dict, title):
    sns.barplot(x='50%', y='social_segment', data=df.head(7).reset_index().rename(columns={'index':'social_segment'}),
                alpha=0.96, ax=ax, palette=palette(df.head(7).index.tolist(), color_dict))
    ax.set_title(title, fontsize=18, fontweight='bold')
    ax.set_xlabel('')
    ax.set_ylabel('')
    ax.tick_params(labelsize=15)
fig, axs = plt.subplots(1, 2, figsize=(18, 6))
plot_top_segments(top_ch_ret, axs[0], color_dict, 'Chicago')
plot_top_segments(top_la_ret, axs[1], color_dict, 'LA')
fig.tight_layout()
Explanation: We'll use the following function to make sure social segments have same color in both cities for an easier comparison.
Note we'll use CARTO's bold palette colors. Here you can explore all our palettes.
End of explanation
social_indices_columns = ch_ret.columns[-22:-3]
top_ch_ret_ix = ch_ret[ch_ret[f'{target_var}_class'] == 'Top 5%']
top_ch_ret_ix = top_ch_ret_ix[social_indices_columns].describe().transpose()
top_ch_ret_ix.sort_values(['50%'], ascending=False, inplace=True)
top_la_ret_ix = la_ret[la_ret[f'{target_var}_class'] == 'Top 5%']
top_la_ret_ix = top_la_ret_ix[social_indices_columns].describe().transpose()
top_la_ret_ix.sort_values(['50%'], ascending=False, inplace=True)
unique_segments = np.unique(top_ch_ret_ix.head(7).index.tolist() + top_la_ret_ix.head(7).index.tolist())
palette_c=['#7F3C8D','#11A579','#3969AC','#F2B701','#E73F74','#80BA5A','#E68310','#008695','#CF1C90','#f97b72']
color_dict = dict(zip(unique_segments, palette_c))
def plot_top_indices(df, ax, color_dict, title):
    sns.barplot(x='50%', y='index', data=df.head(7).reset_index(),
                alpha=0.96, ax=ax, palette=palette(df.head(7).index.tolist(), color_dict))
    ax.set_title(title, fontsize=18, fontweight='bold')
    ax.set_xlabel('')
    ax.set_ylabel('')
    ax.tick_params(labelsize=11)
fig, axs = plt.subplots(1, 2, figsize=(18, 6))
plot_top_indices(top_ch_ret_ix, axs[0], color_dict, 'Chicago')
plot_top_indices(top_la_ret_ix, axs[1], color_dict, 'LA')
fig.tight_layout()
Explanation: Characterization based on indices
<b>Main insights</b>:
- Both cities share 4 out of the 7 most impactful social indices: Breakfast+brunch, coffee, foodie, and high end affinity.
- In Chicago top eating places performers are located where late night, fashion, and entertainment affinity are high, whereas in LA they are located where politically liberal, personal care, and organic+local affinity are high.
End of explanation
def calculate_corrcoef(target_var, df, columns):
corr_coefs = []
for ssegment in columns:
corr_aux = df[~df[ssegment].isnull()]
corr_coefs.append(np.corrcoef(corr_aux[target_var], corr_aux[ssegment])[0][1])
corr_df_ch = pd.DataFrame(data={'social_segment':columns, 'corr_coef':corr_coefs})
corr_df_ch['corr_coef_abs'] = np.abs(corr_df_ch['corr_coef'])
corr_df_ch.sort_values('corr_coef_abs', ascending=False, inplace=True)
return corr_df_ch
corr_df_ch = calculate_corrcoef(target_var, ch_ret, social_segment_columns)
corr_df_la = calculate_corrcoef(target_var, la_ret, social_segment_columns)
corr_df_ch['city'] = 'Chicago'
corr_df_la['city'] = 'LA'
corr_df = pd.concat([corr_df_ch, corr_df_la], ignore_index=True)
plt.figure(figsize=(18, 6))
sns.barplot(x='social_segment', y='corr_coef', hue='city', data=corr_df, alpha=0.99)
plt.xticks(rotation=90)
plt.xlabel('')
plt.title(f"Correlation strength of social segments with target var '{target_var}' - industry '{industry}'",
fontsize=15, fontweight='light', pad=15)
plt.tight_layout()
Explanation: <a id='section23'></a>
2.3 Correlation analysis
In this section we'll carry out a deeper analysis on how geosocial data can help explain retail performance. We'll compare the most important features for Chicago and LA.
Social segments
Here we calculate the correlation coefficient between every social segment and the total transaction amount for restaurants. This allows us to identify which segments have the strongest impact and the differences between both cities.
This analysis provides very interesting insights. We can see how the social segment "sites to see" impacts positively in Chicago, while it barely impacts in LA.
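An equivalent, more compact way to obtain the same per-column correlations is pandas' corrwith, which also handles missing values pairwise; this is only an alternative sketch to the explicit loop in calculate_corrcoef, not a replacement used elsewhere in the notebook.
corr_alt = ch_ret[social_segment_columns].corrwith(ch_ret[target_var]).rename('corr_coef').reset_index()
corr_alt['corr_coef_abs'] = corr_alt['corr_coef'].abs()
corr_alt = corr_alt.sort_values('corr_coef_abs', ascending=False)
corr_alt.head()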
End of explanation
corr_df_ch_ix = calculate_corrcoef(target_var, ch_ret, social_indices_columns)
corr_df_la_ix = calculate_corrcoef(target_var, la_ret, social_indices_columns)
corr_df_ch_ix['city'] = 'Chicago'
corr_df_la_ix['city'] = 'LA'
corr_df_ix = pd.concat([corr_df_ch_ix, corr_df_la_ix], ignore_index=True)
plt.figure(figsize=(10, 6))
sns.barplot(y='social_segment', x='corr_coef', hue='city', data=corr_df_ix, alpha=0.95)
plt.xticks(rotation=90)
plt.ylabel('')
plt.yticks(fontsize=12)
plt.title(f"Correlation strength of indices with target var '{target_var}' - industry '{industry}'",
fontsize=14, fontweight='light', pad=15)
plt.tight_layout()
Explanation: Indices
Now, we'll calculate the correlation coefficient between every social index and the metric total transaction amount for restaurants. This allows us to identify which indices have the strongest impact and the differences between both cities.
It is interesting to see the different correlation coefficients for older affinity and discount affinity in Chicago and LA.
End of explanation |
9,871 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The difference between PNG/JPEG and SVG: for something like a scatter plot with very many points, the SVG can end up larger than the PNG.
Step1: scale
Step2: When the condition number is suspicious (above 10,000), it points to a scaling problem or to dependent (collinear) variables. After the scaling step it drops below 1,000.
Step3: Why did the outliers appear? As shown above, there are distorted data points.
There are several possible reasons for such records. So what do we do? We simply drop them.
Store the data with those rows dropped as df2.
Step4: LSTAT looks odd (nonlinear).
Taking the log removes the need for higher-order terms.
CRIM has a similar shape, so taking the log improves it as well.
The same goes for DIS.
Step5: There is multicollinearity.
Step6: In result4 the adjusted R-squared improved over the previous model. That it improved even though several variables were dropped is meaningful.
# sns.pairplot(df_all, diag_kind="kde", kind="reg")
# plt.show()
sns.jointplot("RM", "MEDV", data=df)
plt.show()
import statsmodels.api as sm
model = sm.OLS(df.ix[:, -1], df.ix[:, :-1])
result = model.fit()
print(result.summary())
Explanation: The difference between PNG/JPEG and SVG: for something like a scatter plot with very many points, the SVG can end up larger than the PNG.
End of explanation
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler(with_mean=False)
X_scaled = scaler.fit_transform(X)
X_scaled
dfX0 = pd.DataFrame(X_scaled, columns=names)
dfX = sm.add_constant(dfX0)
dfy = pd.DataFrame(y, columns=["MEDV"])
df = pd.concat([dfX, dfy], axis=1)
df.tail(2)
model = sm.OLS(df.ix[:, -1], df.ix[:, :-1])
result = model.fit()
print(result.summary())
Explanation: scale
End of explanation
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(df.ix[:, :-1], df.ix[:, -1])
model.intercept_, model.coef_
sns.distplot(result.resid)
plt.show();
sns.distplot(df.MEDV)
df.count()
Explanation: When the condition number is suspicious (above 10,000), it points to a scaling problem or to dependent (collinear) variables. After the scaling step above it drops below 1,000.
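To check this directly instead of reading it off the OLS summary, the condition number of the design matrix can be computed by hand; a minimal sketch reusing the scaled df built above (this is roughly the quantity statsmodels reports as Cond. No.):
import numpy as np
X_design = df.ix[:, :-1].values  # scaled regressors including the constant column
print(np.linalg.cond(X_design))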
End of explanation
df2 = df.drop(df[df.MEDV >= df.MEDV.max()].index)
df2.count()
model2 = sm.OLS(df2.ix[:, -1], df2.ix[:, :-1])
result2 = model2.fit()
print(result2.summary())
# For the Boston house prices, scaling and removing the outliers raised R-squared to 0.778 and also reduced the JB statistic.
model_anova = sm.OLS.from_formula("MEDV ~ CHAS", data=df2)
result_anova = model_anova.fit()
table_anova = sm.stats.anova_lm(result_anova)
table_anova
model1 = LinearRegression()
model1.fit(df.ix[:, :-1], df.ix[:, -1])
model1.intercept_, model1.coef_
model2 = LinearRegression()
model2.fit(df2.ix[:, :-1], df2.ix[:, -1])
model2.intercept_, model2.coef_
from sklearn.cross_validation import cross_val_score
score = cross_val_score(model1, df.ix[:, :-1], df.ix[:, -1], cv=5)
score, score.mean(), score.std()
score2 = cross_val_score(model2, df2.ix[:, :-1], df2.ix[:, -1], cv=5)
score2, score2.mean(), score2.std()
Explanation: Why did the outliers appear? As shown above, there are distorted data points.
There are several possible reasons for such records. So what do we do? We simply drop them.
Store the data with those rows dropped as df2.
End of explanation
#Log transform
df3 = df2.drop(["CRIM", "DIS", "LSTAT", "MEDV"], axis=1)
df3["LOGCRIM"] = np.log(df2.CRIM)
df3["LOGDIS"] = np.log(df2.DIS)
df3["LOGLSTAT"] = np.log(df2.LSTAT)
df3["MEDV"] = df2.MEDV
sns.jointplot("CRIM", "MEDV", data=df2)
sns.jointplot("LOGCRIM", "MEDV", data=df3)
sns.jointplot("DIS", "MEDV", data=df2)
sns.jointplot("LOGDIS", "MEDV", data=df3)
sns.jointplot("LSTAT", "MEDV", data=df2)
sns.jointplot("LOGLSTAT", "MEDV", data=df3)
model3 = sm.OLS(df3.ix[:, -1], df3.ix[:, :-1])
result3 = model3.fit()
print(result3.summary())
score3 = cross_val_score(LinearRegression(), df3.ix[:, :-1], df3.ix[:, -1], cv=5)
score3, score3.mean(), score3.std()
#Multicolinearity
sns.heatmap(np.corrcoef(df3.T))
Explanation: LSTAT looks odd (nonlinear).
Taking the log removes the need for higher-order terms.
CRIM has a similar shape, so taking the log improves it as well.
The same goes for DIS.
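One quick numeric check that motivates these log transforms is column skewness before and after transforming; a small hedged sketch using df2 from above:
from scipy.stats import skew
for col in ["CRIM", "DIS", "LSTAT"]:
    print(col, skew(df2[col]), "-> log:", skew(np.log(df2[col])))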
End of explanation
df4 = df3.drop(["ZN", "INDUS", "AGE", "LOGCRIM", "RAD", "TAX"], axis=1)
model4 = sm.OLS(df4.ix[:, -1], df4.ix[:, :-1])
result4 = model4.fit()
print(result4.summary())
Explanation: There is multicollinearity.
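A common way to quantify the multicollinearity seen in the heatmap is the variance inflation factor; a minimal sketch with statsmodels (values far above ~10 usually flag problematic columns; the constant's VIF can be ignored):
from statsmodels.stats.outliers_influence import variance_inflation_factor
X_vif = df3.ix[:, :-1]
vif = pd.Series([variance_inflation_factor(X_vif.values, i) for i in range(X_vif.shape[1])], index=X_vif.columns)
print(vif.sort_values(ascending=False))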
End of explanation
model4 = LinearRegression()
model4.fit(df4.ix[:, :-1], df4.ix[:, -1])
model4.intercept_, model4.coef_
score4 = cross_val_score(LinearRegression(), df4.ix[:, :-1], df4.ix[:, -1], cv=5)
score4, score4.mean(), score4.std()
sns.heatmap(np.corrcoef(df4.T), xticklabels=df4.columns, yticklabels=df4.columns, annot=True)
Explanation: In result4 the adjusted R-squared improved over the previous model. That it improved even though several variables were dropped is meaningful.
End of explanation |
9,872 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook shows how to use an index file.<br/>
This example uses the index file from the Mediterranean Sea region (INSITU_MED_NRT_OBSERVATIONS_013_035) corresponding to the latest data.<br/>
If you download the same file, the results will be slightly different from what is shown here.
Step1: To read the index file (comma separated values), we will try with the genfromtxt function.
Step2: Map of observations
Step3: We import the modules necessary for the plot.
Step4: We create the projection, centered on the Mediterranean Sea in this case.
Step5: And we create a plot showing all the data locations.
Step6: Selection of a data file based on coordinates
Let's assume we want to have the list of files corresponding to measurements off the northern coast of Libya.<br/>
We define a rectangular box containing the data
Step7: then we look for the observations within this box
Step8: The generation of the file list is direct
Step9: According to the file names, we have 7 profiling drifters available in the area. <br/>
To check, we replot the data only in the selected box | Python Code:
indexfile = "datafiles/index_latest.txt"
Explanation: This notebook shows how to use an index file.<br/>
This example uses the index file from the Mediterranean Sea region (INSITU_MED_NRT_OBSERVATIONS_013_035) corresponding to the latest data.<br/>
If you download the same file, the results will be slightly different from what is shown here.
End of explanation
import numpy as np
dataindex = np.genfromtxt(indexfile, skip_header=6, unpack=True, delimiter=',', dtype=None, \
names=['catalog_id', 'file_name', 'geospatial_lat_min', 'geospatial_lat_max',
'geospatial_lon_min', 'geospatial_lon_max',
'time_coverage_start', 'time_coverage_end',
'provider', 'date_update', 'data_mode', 'parameters'])
Explanation: To read the index file (comma separated values), we will try with the genfromtxt function.
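For comparison, the same index file can also be loaded with pandas, which often makes the later filtering steps shorter; a sketch assuming the same six header lines and the column names used above:
import pandas as pd
colnames = ['catalog_id', 'file_name', 'geospatial_lat_min', 'geospatial_lat_max',
            'geospatial_lon_min', 'geospatial_lon_max',
            'time_coverage_start', 'time_coverage_end',
            'provider', 'date_update', 'data_mode', 'parameters']
df_index = pd.read_csv(indexfile, skiprows=6, names=colnames)
print(df_index.shape)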
End of explanation
lon_min = dataindex['geospatial_lon_min']
lon_max = dataindex['geospatial_lon_max']
lat_min = dataindex['geospatial_lat_min']
lat_max = dataindex['geospatial_lat_max']
Explanation: Map of observations
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
Explanation: We import the modules necessary for the plot.
End of explanation
m = Basemap(projection='merc', llcrnrlat=30., urcrnrlat=46.,
llcrnrlon=-10, urcrnrlon=40., lat_ts=38., resolution='l')
lonmean, latmean = 0.5*(lon_min + lon_max), 0.5*(lat_min + lat_max)
lon2plot, lat2plot = m(lonmean, latmean)
Explanation: We create the projection, centered on the Mediterranean Sea in this case.
End of explanation
fig = plt.figure(figsize=(10,8))
m.plot(lon2plot, lat2plot, 'ko', markersize=2)
m.drawcoastlines(linewidth=0.5, zorder=3)
m.fillcontinents(zorder=2)
m.drawparallels(np.arange(-90.,91.,2.), labels=[1,0,0,0], linewidth=0.5, zorder=1)
m.drawmeridians(np.arange(-180.,181.,3.), labels=[0,0,1,0], linewidth=0.5, zorder=1)
plt.show()
Explanation: And we create a plot showing all the data locations.
End of explanation
box = [12, 15, 32, 34]
Explanation: Selection of a data file based on coordinates
Let's assume we want to have the list of files corresponding to measurements off the northern coast of Libya.<br/>
We define a rectangular box containing the data:
End of explanation
import numpy as np
goodcoordinates = np.where( (lonmean>=box[0]) & (lonmean<=box[1]) & (latmean>=box[2]) & (latmean<=box[3]))
print goodcoordinates
Explanation: then we look for the observations within this box:
End of explanation
goodfilelist = dataindex['file_name'][goodcoordinates]
print goodfilelist
Explanation: The generation of the file list is direct:
End of explanation
m2 = Basemap(projection='merc', llcrnrlat=32., urcrnrlat=34.,
llcrnrlon=12, urcrnrlon=15., lat_ts=38., resolution='h')
lon2plot, lat2plot = m2(lonmean[goodcoordinates], latmean[goodcoordinates])
fig = plt.figure(figsize=(10,8))
m2.plot(lon2plot, lat2plot, 'ko', markersize=4)
m2.drawcoastlines(linewidth=0.5, zorder=3)
m2.fillcontinents(zorder=2)
m2.drawparallels(np.arange(-90.,91.,0.5), labels=[1,0,0,0], linewidth=0.5, zorder=1)
m2.drawmeridians(np.arange(-180.,181.,0.5), labels=[0,0,1,0], linewidth=0.5, zorder=1)
plt.show()
Explanation: According to the file names, we have 7 profiling drifters available in the area. <br/>
To check, we replot the data only in the selected box:
End of explanation |
9,873 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
scipy.spatial
scipy.spatial can compute triangulations, Voronoi diagrams, and convex hulls of a set of points, by leveraging the Qhull library.
Moreover, it contains KDTree implementations for nearest-neighbor point queries, and utilities for distance computations in various metrics.
Triangulations (qhull)
Step1: KDtree
Allows very fast point to point searches.
Step2: Compare this to the brute-force version
At what point does it make sense to use kdTree and not brute-force distance tests ?
The brute force method takes a fixed time per sample point and a fixed cost associated with the N-neighbour distance computation (but this can be vectorised efficiently). | Python Code:
%matplotlib inline
import numpy as np
from scipy.spatial import Delaunay, ConvexHull, Voronoi
import matplotlib.pyplot as plt
points = np.random.rand(30, 2) # 30 random points in 2-D
tri = Delaunay(points)
hull = ConvexHull(points)
voronoi = Voronoi(points)
print "Neighbour triangles\n",tri.neighbors[0:5]
print "Simplices\n", tri.simplices[0:5]
print "Points\n", points[tri.simplices[0:5]]
from scipy.spatial import delaunay_plot_2d
delaunay_plot_2d(tri)
pass
from scipy.spatial import convex_hull_plot_2d
convex_hull_plot_2d(hull)
pass
from scipy.spatial import voronoi_plot_2d
voronoi_plot_2d(voronoi)
pass
Explanation: scipy.spatial
scipy.spatial can compute triangulations, Voronoi diagrams, and convex hulls of a set of points, by leveraging the Qhull library.
Moreover, it contains KDTree implementations for nearest-neighbor point queries, and utilities for distance computations in various metrics.
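The distance utilities mentioned here are not exercised below, so as a small added illustration, scipy.spatial.distance computes pairwise and cross distances directly (using the points array defined above):
from scipy.spatial import distance
d_condensed = distance.pdist(points, metric='euclidean')    # condensed pairwise distances
d_matrix = distance.squareform(d_condensed)                 # full 30 x 30 distance matrix
d_cross = distance.cdist(points[:5], points, metric='cityblock')
print(d_matrix.shape, d_cross.shape)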
Triangulations (qhull)
End of explanation
from scipy.spatial import KDTree, cKDTree
tree = cKDTree(points)
print tree.data
%%timeit
tree.query((0.5,0.5))
test_points = np.random.rand(1000, 2) # 1000 random points in 2-D
%%timeit
tree.query(test_points)
more_points = np.random.rand(10000, 2) # 1000 random points in 2-D
big_tree = KDTree(more_points)
%%timeit
KDTree(more_points)
%%timeit
big_tree.query(test_points)
Explanation: KDtree
Allows very fast point to point searches.
End of explanation
# Brute force version
def brute_force_distance(pts, spt):
d = pts - spt
d2 = d**2
distances2 = np.einsum('ij->i',d2)
nearest = np.argsort(distances2)[0]
return np.sqrt(distances2[nearest]), nearest
# print np.einsum('ij->i',distances2)
print brute_force_distance(more_points, (0.0,0.0))
print big_tree.query((0.0,0.0))
%%timeit
brute_force_distance(points, (0.5,0.5))
brute_force_distance(points, (0.0,0.0))
brute_force_distance(points, (0.25,0.25))
%%timeit
tree.query(np.array([(0.5,0.5), (0.0,0.0), (0.25,0.25)]))
%%timeit
brute_force_distance(more_points, (0.5,0.5))
# brute_force_distance(more_points, (0.0,0.0))
# brute_force_distance(more_points, (0.25,0.25))
%%timeit
big_tree.query(np.array([(0.5,0.5), (0.0,0.0), (0.25,0.25)]))
Explanation: Compare this to the brute-force version
At what point does it make sense to use kdTree and not brute-force distance tests ?
The brute force method takes a fixed time per sample point and a fixed cost associated with the N-neighbour distance computation (but this can be vectorised efficiently).
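To locate the crossover empirically on a given machine, one can time both approaches while the number of reference points grows; a rough, hedged sketch (numbers vary with hardware, and the KDTree build cost matters if the tree is only queried once):
import time
for n in (100, 1000, 10000, 100000):
    pts = np.random.rand(n, 2)
    t0 = time.time(); brute_force_distance(pts, (0.5, 0.5)); t_brute = time.time() - t0
    t0 = time.time(); tree_n = cKDTree(pts); t_build = time.time() - t0
    t0 = time.time(); tree_n.query((0.5, 0.5)); t_query = time.time() - t0
    print("n=%d brute=%.5fs build=%.5fs query=%.5fs" % (n, t_brute, t_build, t_query))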
End of explanation |
9,874 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing PHOEBE 2 vs PHOEBE Legacy
NOTE
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Adding Datasets and Compute Options
Step3: Let's add compute options for phoebe using both the new (marching) method for creating meshes as well as the WD method which imitates the format of the mesh used within legacy.
Step4: Now we add compute options for the 'legacy' backend.
Step5: And set the two RV datasets to use the correct methods (for both compute options)
Step6: Let's use the external atmospheres available for both phoebe1 and phoebe2
Step7: Let's make sure both 'phoebe1' and 'phoebe2wd' use the same value for gridsize
Step8: Let's also disable other special effect such as heating, gravity, and light-time effects.
Step9: Finally, let's compute all of our models
Step10: Plotting
Light Curve
Step11: Now let's plot the residuals between these two models
Step12: Dynamical RVs
Since the dynamical RVs don't depend on the mesh, there should be no difference between the 'phoebe2marching' and 'phoebe2wd' synthetic models. Here we'll just choose one to plot.
Step13: And also plot the residuals of both the primary and secondary RVs (notice the scale on the y-axis)
Step14: Numerical (flux-weighted) RVs | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
Explanation: Comparing PHOEBE 2 vs PHOEBE Legacy
NOTE: PHOEBE 1.0 legacy is an alternate backend and is not installed with PHOEBE 2.0. In order to run this backend, you'll need to have PHOEBE 1.0 installed and manually build the python bindings in the phoebe-py directory.
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u
import numpy as np
import matplotlib.pyplot as plt
phoebe.devel_on() # needed to use WD-style meshing, which isn't fully supported yet
logger = phoebe.logger()
b = phoebe.default_binary()
b['q'] = 0.7
b['requiv@secondary'] = 0.7
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rvdyn')
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rvnum')
Explanation: Adding Datasets and Compute Options
End of explanation
b.add_compute(compute='phoebe2marching', irrad_method='none', mesh_method='marching')
b.add_compute(compute='phoebe2wd', irrad_method='none', mesh_method='wd', eclipse_method='graham')
Explanation: Let's add compute options for phoebe using both the new (marching) method for creating meshes as well as the WD method which imitates the format of the mesh used within legacy.
End of explanation
b.add_compute('legacy', compute='phoebe1', irrad_method='none')
Explanation: Now we add compute options for the 'legacy' backend.
End of explanation
b.set_value_all('rv_method', dataset='rvdyn', value='dynamical')
b.set_value_all('rv_method', dataset='rvnum', value='flux-weighted')
Explanation: And set the two RV datasets to use the correct methods (for both compute options)
End of explanation
b.set_value_all('atm', 'extern_planckint')
Explanation: Let's use the external atmospheres available for both phoebe1 and phoebe2
End of explanation
b.set_value_all('gridsize', 30)
Explanation: Let's make sure both 'phoebe1' and 'phoebe2wd' use the same value for gridsize
End of explanation
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.,0.])
b.set_value_all('rv_grav', False)
b.set_value_all('ltte', False)
Explanation: Let's also disable other special effect such as heating, gravity, and light-time effects.
End of explanation
b.run_compute(compute='phoebe2marching', model='phoebe2marchingmodel')
b.run_compute(compute='phoebe2wd', model='phoebe2wdmodel')
b.run_compute(compute='phoebe1', model='phoebe1model')
Explanation: Finally, let's compute all of our models
End of explanation
colors = {'phoebe2marchingmodel': 'g', 'phoebe2wdmodel': 'b', 'phoebe1model': 'r'}
afig, mplfig = b['lc01'].plot(c=colors, legend=True, show=True)
Explanation: Plotting
Light Curve
End of explanation
artist, = plt.plot(b.get_value('fluxes@lc01@phoebe2marchingmodel') - b.get_value('fluxes@lc01@phoebe1model'), 'g-')
artist, = plt.plot(b.get_value('fluxes@lc01@phoebe2wdmodel') - b.get_value('fluxes@lc01@phoebe1model'), 'b-')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-0.003, 0.003)
Explanation: Now let's plot the residuals between these two models
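A quick numeric summary can complement the plot; this sketch reuses the exact get_value calls from above and simply reports the largest absolute flux difference for each mesh method:
resid_marching = b.get_value('fluxes@lc01@phoebe2marchingmodel') - b.get_value('fluxes@lc01@phoebe1model')
resid_wd = b.get_value('fluxes@lc01@phoebe2wdmodel') - b.get_value('fluxes@lc01@phoebe1model')
print('max |resid| marching vs legacy:', abs(resid_marching).max())
print('max |resid| wd vs legacy:', abs(resid_wd).max())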
End of explanation
afig, mplfig = b.filter(dataset='rvdyn', model=['phoebe2wdmodel', 'phoebe1model']).plot(c=colors, legend=True, show=True)
Explanation: Dynamical RVs
Since the dynamical RVs don't depend on the mesh, there should be no difference between the 'phoebe2marching' and 'phoebe2wd' synthetic models. Here we'll just choose one to plot.
End of explanation
artist, = plt.plot(b.get_value('rvs@rvdyn@primary@phoebe2wdmodel') - b.get_value('rvs@rvdyn@primary@phoebe1model'), color='b', ls=':')
artist, = plt.plot(b.get_value('rvs@rvdyn@secondary@phoebe2wdmodel') - b.get_value('rvs@rvdyn@secondary@phoebe1model'), color='b', ls='-.')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-1.5e-12, 1.5e-12)
Explanation: And also plot the residuals of both the primary and secondary RVs (notice the scale on the y-axis)
End of explanation
afig, mplfig = b.filter(dataset='rvnum').plot(c=colors, show=True)
artist, = plt.plot(b.get_value('rvs@rvnum@primary@phoebe2marchingmodel', ) - b.get_value('rvs@rvnum@primary@phoebe1model'), color='g', ls=':')
artist, = plt.plot(b.get_value('rvs@rvnum@secondary@phoebe2marchingmodel') - b.get_value('rvs@rvnum@secondary@phoebe1model'), color='g', ls='-.')
artist, = plt.plot(b.get_value('rvs@rvnum@primary@phoebe2wdmodel', ) - b.get_value('rvs@rvnum@primary@phoebe1model'), color='b', ls=':')
artist, = plt.plot(b.get_value('rvs@rvnum@secondary@phoebe2wdmodel') - b.get_value('rvs@rvnum@secondary@phoebe1model'), color='b', ls='-.')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-1e-2, 1e-2)
Explanation: Numerical (flux-weighted) RVs
End of explanation |
9,875 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
New York University
Applied Data Science 2016 Final Project
Measuring household income under Redatam in CensusData
2. Merge Individual to Household Data
Project Description
Step1: DATA HANDLING
Step2: REGRESSION MODEL
Predicted Income
Step3: Merge subset of individuals (with job and income) with total individuals
We create a merged dataset combining a subset of individuals (with job and income) with the total individuals. We then account for people with no income (kids or unemployed) and assign them a "0" income. The resulting dataset will succesfully match every individual with their household.
Step4: New variables for individuals
We create a set of new variables (re-arrangement of previous ones) set in a way we create will add value to our prediction
Step5: Create Pivot Dataset
We create a pivot dataset in order to merge it later with the household dataset. With this we will have all individual information relative to a particular household.
Step6: Create Intermediate Dataset
We save our dataset into a csv file in order to eliminate the double index, and then after some data handling we export our final dataset.
Step7: GET HOUSEHOLD DATA
Variables description
Step8: DATA CLEANING
Step9: Ordinal Order Transformation
In order to have positive coefficients we changed the order in which ordinal variables were displayed. We only do this for 'CookingCombustible' (which shows what combustible is used in the household) and for the others we use dummy variables.
Step10: LOAD INDIVIDUAL DATA
Step11: MERGE INDIVIDUAL WITH HOUSEHOLD | Python Code:
# helper functions
import getEPH
import categorize
import createVariables
import schoolYears
import make_dummy
import functionsForModels
# libraries
import pandas as pd
import numpy as np
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
import seaborn as sns
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from statsmodels.iolib.table import (SimpleTable, default_txt_fmt)
np.random.seed(1024)
%matplotlib inline
getEPH.getEPHdbf('t310')
Explanation: New York University
Applied Data Science 2016 Final Project
Measuring household income under Redatam in CensusData
2. Merge Individual to Household Data
Project Description: Lorem ipsum
Members:
- Felipe Gonzales
- Ilan Reinstein
- Fernando Melchor
- Nicolas Metallo
Sources:
- http://dlab-geo.github.io/geocoding-geopy/slides/index.html#2
- https://gist.github.com/rgdonohue/c4beedd3ca47d29aef01
- http://darribas.org/gds_scipy16/ipynb_md/07_spatial_clustering.html
- https://glenbambrick.com/2016/01/09/csv-to-shapefile-with-pyshp/
- http://statsmodels.sourceforge.net/devel/examples/generated/example_wls.html
LIBRARIES
End of explanation
data1 = pd.read_csv('data/cleanDatat310.csv')
data2 = categorize.categorize(data1)
data3 = schoolYears.schoolYears(data2)
data4 = createVariables.createVariables(data3)
Explanation: DATA HANDLING
End of explanation
jobsAndIncome = (data4.activity == 1) & (data4.P21 > 1) # we only consider people who are working and have income
headAndSpouse = (data4.familyRelation == 1)|(data4.familyRelation == 2) #
dataParaModelo = data4.copy().loc[jobsAndIncome,:]
variablesOfInterest = ['age',
'age2',
'female',
'education',
'education2']
model = functionsForModels.runModel(dataset = dataParaModelo, income = 'lnIncome', variables = variablesOfInterest)
X = sm.add_constant(dataParaModelo.copy().loc[:,variablesOfInterest].values)
dataParaModelo['predictedLnIncome'] = model.predict(X)
Explanation: REGRESSION MODEL
Predicted Income:
Using the model generated in the previous notebook (Model # 2 - Alternative) we create a new variable called predicted income with its output. The ultimate goal is to merge this data with household data.
End of explanation
paraMerge = dataParaModelo.loc[:,['CODUSU', 'NRO_HOGAR', 'COMPONENTE','predictedLnIncome']]
paraMerge.head()
data = pd.merge(left = data4 , right = paraMerge, on = ['CODUSU', 'NRO_HOGAR', 'COMPONENTE'], how = 'left')
data.predictedLnIncome[data.predictedLnIncome.isnull()] = 0
Explanation: Merge subset of individuals (with job and income) with total individuals
We create a merged dataset combining a subset of individuals (with job and income) with the total individuals. We then account for people with no income (kids or unemployed) and assign them a "0" income. The resulting dataset will succesfully match every individual with their household.
End of explanation
# Variables related to Occupation
data['job'] = (data.activity==1).astype(int)
data['noJob'] = (data.activity!=1).astype(int)
data['schoolAndJob'] = data.job * data.education
cantidadActivos = data.job.groupby(by=data['id']).sum() # Number of people working in the household
cantidadInactivos = data.noJob.groupby(by=data['id']).sum() # Number of people not working in the household
schoolAndJob = data.schoolAndJob.groupby(by=data['id']).sum() # Sum of total schooling years in the household
dfJobsAndEduc = pd.merge(left = schoolAndJob.to_frame() ,
right = cantidadInactivos.to_frame(),
left_index = True,
right_index = True)
dfJobsAndEduc = pd.merge(left = dfJobsAndEduc ,
right = cantidadActivos.to_frame(),
left_index = True,
right_index = True)
Explanation: New variables for individuals
We create a set of new variables (re-arrangement of previous ones) set in a way we create will add value to our prediction
End of explanation
cleanData = data.copy().loc[(headAndSpouse),
['id',
'AGLOMERADO',
'familyRelation',
'age',
'age2',
'female',
'education',
'education2',
'primary',
'secondary',
'university',
'P21',
'P47T',
'lnIncome',
u'lnIncomeT',
'predictedLnIncome',
'job',
'DECCFR',
'DECIFR',
'maritalStatus',
'reading',
'placeOfBirth',
]]
cleanData.head()
pivot = cleanData.pivot(index='id', columns='familyRelation')
pivot.head()
Explanation: Create Pivot Dataset
We create a pivot dataset in order to merge it later with the household dataset. With this we will have all individual information relative to a particular household.
End of explanation
pivot.to_csv('data/pivotInd.csv')
dataN = pd.read_csv('data/pivotInd.csv', names = ['id','AGLO1','AGLO2','headAge','spouseAge','headAge2','spouseAge2',
'headFemale','spouseFemale','headEduc','spouseEduc',
'headEduc2','spouseEduc2','headPrimary','spousePrimary',
'headSecondary','spouseSecondary','headUniversity','spouseUniversity',
'headP21','spouseP21','headP47T','spouseP47T',
'headLnIncome','spouseLnIncome','headLnIncomeT','spouseLnIncomeT',
'headPredictedLnIncome','spousePredictedLnIncome','headJob','spouseJob',
'headDECCFR','spouseDECCFR','headDECIFR','spouseDECIFR',
'headMaritalStatus','spouseMaritalStatus',
'headReading','spouseReading','headPlaceOfBirth','spouseplaceOfBirth',
],skiprows = 3)
dfJobsAndEduc['id'] = dfJobsAndEduc.index
dfJobsAndEduc['id'] = dfJobsAndEduc['id'].astype(int)
dataFinalCSV = pd.merge(left = dfJobsAndEduc ,
right = dataN,
left_on = 'id',
right_on = 'id')
dataFinalCSV.to_csv('data/pivotInd.csv',index=False)
dataFinalCSV.head()
Explanation: Create Intermediate Dataset
We save our dataset into a csv file in order to eliminate the double index, and then after some data handling we export our final dataset.
End of explanation
getEPH.getEPHdbf('t310')
hog = pd.read_csv('data/cleanDataHouseholdt310.csv')
print hog.shape
hog.head()
Explanation: GET HOUSEHOLD DATA
Variables description:
- HomeType
- FloorMaterial
- RoofMaterial
- RoofCoat
- Water
- WaterType
- Toilet
- ToiletLocation
- ToiletType
End of explanation
def remove9(df,variables):
for var in variables:
df[var].replace(to_replace=[9], value=[np.nan] , inplace=True, axis=None)
def remove0(df,variables):
for var in variables:
df[var].replace(to_replace=[0], value=[np.nan] , inplace=True, axis=None)
def remove99(df,variables):
for var in variables:
df[var].replace(to_replace=[99], value=[np.nan] , inplace=True, axis=None)
hog2 = hog.copy()
remove9(df = hog2, variables = ['FloorMaterial','RoofMaterial','RoofCoat','Water','WaterType','Toilet','ToiletLocation',
'ToiletType','Sewer','DumpSites','Flooding','EmergencyLoc','CookingCombustible',
'BathroomUse'])
remove0(df = hog2, variables = ['FloorMaterial','RoofMaterial','RoofCoat','Water','WaterType','Toilet','ToiletLocation',
'ToiletType','Sewer','DumpSites','Flooding','EmergencyLoc','Ownership','CookingCombustible',
'BathroomUse'])
remove99(df = hog2, variables = ['Ownership'])
Explanation: DATA CLEANING
End of explanation
variables = ['CookingCombustible']
for var in variables:
print hog2[var].value_counts()
plt.scatter(hog2[var], hog2.TotalHouseHoldIncome)
plt.show()
hog2['CookingRec'] = np.nan
hog2['CookingRec'][hog2.CookingCombustible == 4] = 1
hog2['CookingRec'][hog2.CookingCombustible == 3] = 1
hog2['CookingRec'][hog2.CookingCombustible == 2] = 2
hog2['CookingRec'][hog2.CookingCombustible == 1] = 3
variables = ['CookingRec']
for var in variables:
print hog2[var].value_counts()
plt.scatter(hog2[var], hog2.TotalHouseHoldIncome)
plt.show()
hog2['WaterRec'] = (hog2.Water == 1).astype(int)
hog2['OwnershipRec'] = ((hog2.Ownership == 1) | (hog2.Ownership == 3)).astype(int)
hog2['Hacinamiento'] = hog2.HouseMembers * 1.0 / hog2.SleepingRooms
hog2['id'] = (hog2.CODUSU.astype(str) + hog2.NRO_HOGAR.astype(str))
hog2['TotalHouseHoldIncome'].replace(to_replace=[0], value=[1] , inplace=True, axis=None)
hog2['lnHouseIncome'] = np.log(hog2['TotalHouseHoldIncome'])
sinCuartosParaDormir = (hog2.SleepingRooms == 0)
hogReducida = hog2.copy().drop(['CODUSU','NRO_HOGAR','REGION','HomeTypeesp','FloorMaterialesp',
'WaterTypeesp','Ownershipesp','CookingCombustibleesp','DomesticService1',
'DomesticService2', 'DomesticService3','DomesticService4', 'DomesticService5',
'DomesticService6'],axis = 1)
Explanation: Ordinal Order Transformation
In order to have positive coefficients we changed the order in which ordinal variables were displayed. We only do this for 'CookingCombustible' (which shows what combustible is used in the household) and for the others we use dummy variables.
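For the variables handled with dummies rather than reordering, the usual pandas pattern is get_dummies; this is only an illustrative sketch (the choice of HomeType here is arbitrary, not taken from the original analysis):
dummies = pd.get_dummies(hog2['HomeType'], prefix='HomeType')
hog2_with_dummies = pd.concat([hog2, dummies], axis=1)
print(dummies.columns.tolist())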
End of explanation
# Load data previously exported
ind = pd.read_csv('data/pivotInd.csv')
ind['id'] = ind['id'].astype(str)
ind.drop(['AGLO2'],axis =1,inplace=True)
ind['sumPredicted'] = ind.headPredictedLnIncome + ind.spousePredictedLnIncome
Explanation: LOAD INDIVIDUAL DATA
End of explanation
hogReducida = hogReducida.copy().loc[~sinCuartosParaDormir,:]
# check before merge
print 'filas hog:',hogReducida.shape[0]
print 'filas ind:',ind.shape[0]
print 'cantidad de ind en hog:', sum(ind['id'].sort_values().isin(hogReducida['id'].sort_values()))
dataUnida = pd.merge(left=hogReducida, right=ind, on='id',how='left')
dataUnida.to_csv('data/dataFinalParaModelo.csv',index=False)
Explanation: MERGE INDIVIDUAL WITH HOUSEHOLD
End of explanation |
9,876 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Gathering system data
Goals
Step4: If you want to stream command output, use subprocess.Popen and check carefully subprocess documentation!
Step7: Parsing /proc
Linux /proc filesystem is a cool place to get data
In the next example we'll see how to get | Python Code:
import psutil
import glob
import sys
import subprocess
#
# Our code is p3-ready
#
from __future__ import print_function, unicode_literals
def grep(needle, fpath):
A simple grep implementation
goal: open() is iterable and doesn't
need splitlines()
goal: comprehension can filter lists
return [x for x in open(fpath) if needle in x]
# Do we have localhost?
print(grep("localhost", "/etc/hosts"))
#The psutil module is very nice
import psutil
#Works on Windows, Linux and MacOS
psutil.cpu_percent()
#And its output is very easy to manage
ret = psutil.disk_io_counters()
print(ret)
# Exercise: Which other informations
# does psutil provide?
# Use this cell and the tab-completion jupyter functionalities.
# Exercise
def multiplatform_vmstat(count):
# Write a vmstat-like function printing every second:
# - cpu usage%
# - bytes read and written in the given interval
# Hint: use psutil and time.sleep(1)
# Hint: use this cell or try on ipython and *then* write the function
# using %edit vmstat.py
for i in range(count):
raise NotImplementedError
print(cpu_usage, bytes_rw)
multiplatform_vmstat(5)
!python -c "from solutions import multiplatform_vmstat;multiplatform_vmstat(3)"
#
# subprocess
#
# The check_output function returns the command stdout
from subprocess import check_output
# It takes a *list* as an argument!
out = check_output("ping -c5 www.google.com".split())
# and returns a string
print(out)
print(type(out))
Explanation: Gathering system data
Goals:
- Gathering System Data with multiplatform and platform-dependent tools
- Get infos from files, /proc, /sys
- Capture command output
- Use psutil to get IO, CPU and memory data
- Parse files with a strategy
Non-goals for this lesson:
- use with, yield or pipes
Modules
End of explanation
def sh(cmd, shell=False, timeout=0):
"Returns an iterable output of a command string
checking...
from sys import version_info as python_version
if python_version < (3, 3): # ..before using..
if timeout:
raise ValueError("Timeout not supported until Python 3.3")
output = check_output(cmd.split(), shell=shell)
else:
output = check_output(cmd.split(), shell=shell, timeout=timeout)
return output.splitlines()
# Exercise:
# implement a multiplatform pgrep-like function.
def pgrep(program):
A multiplatform pgrep-like function.
Prints a list of processes executing 'program'
@param program - eg firefox, explorer.exe
Hint: use subprocess, os and list-comprehension
eg. items = [x for x in a_list if 'firefox' in x]
raise NotImplementedError
pgrep('firefox')
from solutions import pgrep as sol_pgrep
sol_pgrep("firefox")
Explanation: If you want to stream command output, use subprocess.Popen and check carefully subprocess documentation!
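Since Popen is only mentioned here and not demonstrated, a minimal hedged sketch of streaming a command's output line by line:
from subprocess import Popen, PIPE
proc = Popen("ping -c3 www.google.com".split(), stdout=PIPE)
for line in iter(proc.stdout.readline, b''):
    print(line.rstrip())
proc.wait()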
End of explanation
# Parsing /proc - 1
def linux_threads(pid):
Retrieving data from /proc
from glob import glob
# glob emulates shell expansion of * and ?
# Change to /proc the base path if you run on linux machine
path = "proc/{}/task/*/status".format(pid)
# pick a set of fields to gather
t_info = ('Pid', 'Tgid', 'voluntary') # this is a tuple!
for t in glob(path):
# ... and use comprehension to get
# intersting data.
t_info = [x
for x in open(t)
if x.startswith(t_info)] # startswith accepts tuples!
print(t_info)
# If you're on linux try linux_threads
pid_of_init = 1 # or systemd ?
linux_threads(pid_of_init)
# On linux /proc/diskstats is the source of I/O infos
disk_l = grep("vda1", "proc/diskstats")
print(''.join(disk_l))
# To gather that data we put the header in a multiline string
from solutions import diskstats_headers as headers
print(*headers, sep='\n')
#Take the 1st entry (sda), split the data...
disk_info = disk_l[0].split()
# ... and tie them with the header
ret = zip(headers, disk_info)
# On py3 we need to iterate over the generators
print(list(ret))
# Try to mangle ret
print('\n'.join(str(x) for x in ret))
# Exercise: trasform ret in a dict.
# We can create a reusable commodity class with
from collections import namedtuple
# using the imported `headers` as attributes
# like the one provided by psutil
DiskStats = namedtuple('DiskStat', headers)
# ... and disk_info as values
dstat = DiskStats(*disk_info)
print(dstat.device, dstat.writes_ms)
# Exercise
# Write the following function
def linux_diskstats(partition):
Print every second I/O information from /proc/diskstats
@param: partition - eg sda1 or vdx1
Hint: use the above `grep` function
Hint: use zip, time.sleep, print() and *magic
diskstats_headers = ('reads reads_merged reads_sectors reads_ms'
' writes writes_merged writes_sectors writes_ms'
' io_in_progress io_ms_weight').split()
while True:
raise NotImplementedError
print(values, sep="\t")
!python -c "from solutions import linux_diskstats;linux_diskstats('vda1')"
# Using check_output with split() doesn't always work
from os import makedirs
makedirs('/tmp/course/b l a n k s') # , exist_ok=True) this on py3
check_output('ls "/tmp/course/b l a n k s"'.split())
# You can use
from shlex import split
# and
cmd = split('dir -a "/tmp/course/b l a n k s"')
check_output(cmd)
Explanation: Parsing /proc
Linux /proc filesystem is a cool place to get data
In the next example we'll see how to get:
- thread information;
- disk statistics;
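As a portable counterpart to parsing /proc by hand, psutil exposes much of the same information; a hedged sketch (pid 1 mirrors the linux_threads example above):
import psutil
p = psutil.Process(1)
print(p.num_threads())                        # thread count without touching /proc
print(psutil.disk_io_counters(perdisk=True))  # per-disk I/O stats, cf. /proc/diskstats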
End of explanation |
9,877 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Morse Code Neural Net
I created a text file that has the entire alphabet of numerical morse code. Meaning, "." is represented by the number "0.5" and "-" is represented by "1.0". This neural net is trained through that set until it's accuracy is 100%. Then, it is "tested" by user generated input. Once the weights are able to give the training set 100%, it will have 100% accuracy for all tested data, since the inputs do not change. Seeing if the neural network can determine similar but not exact test data requires the neural network to find the best fit function for data. In this neural network, it is not finding the "best-fit" function, but rather finding an exact function that satisfies the data. This isn't a question of how accurate the neural network can predict future data, but rather a question of how much does it take for a neural network to memorize perfect data.
I will be importing the neural net base code for this neural network.
Step1: Visualize Morse Code
Notice that numbes are represented by differing numbers of components. Some letters have 4 components and some have as little as one.
Step2: Enter Morse Cord Sentence
Note, each letter is separated by a single space. And each word is separated by four spaces. To test every letter of the alphabet, use the sentence below
Step4: Morse Code to 3D Number Array
Use two functions to turn a morse code sentence into a 3D array of numbers. This first function turns the morse code into string numbers. Each dot is "0.5" and each dash is "1.0". Each letter is an array, each word is an array of arrays, and each sentence is an array of arrays of arrays.
Step7: This second function turns each string number into a float. Because our neural net needs a constant value of inputs, and morse letters have 1 to 4 components, "0.0" is appened on to the end of each letter array that has less than four components.
Step9: Create Input Array
This input array is the entire morse alphabet. Each letter has four number components that correspond to its dots and dashes. There are 4 inputs in the input layer.
Step12: Create Solution Array
There are 26 possible solutions and therefore 26 neurons in the output layer. Different letters are represented by their placement in the alphabet. A is 0. The 0th node "firing" represents an A.
Step13: Training the Neural Network
This Neural Network has a input for every output and a unique output for every input. Because of this, the neural network must be trained to 100% accuracy on the training set to get a correct translation. For this, the neural net requires 30 neurons in the hidden layer and 400 iterations with a learning rate of 0.7.
Step14: I am commenting out the cell below. This is how you would calculate weights, but for the demonstration, I will load weights from a previous training.
Step15: Assert 100% Accuracy
Step16: Translate Morse
Because the Neural Network is perfectly trained, the accuracy of the "test data" will be 100%. Giving the neural net and morse code sentence will receive a perfect translation. | Python Code:
import NeuralNetImport as NN
import numpy as np
import NNpix as npx
from IPython.display import Image
Explanation: Morse Code Neural Net
I created a text file that has the entire alphabet of numerical morse code. Meaning, "." is represented by the number "0.5" and "-" is represented by "1.0". This neural net is trained through that set until it's accuracy is 100%. Then, it is "tested" by user generated input. Once the weights are able to give the training set 100%, it will have 100% accuracy for all tested data, since the inputs do not change. Seeing if the neural network can determine similar but not exact test data requires the neural network to find the best fit function for data. In this neural network, it is not finding the "best-fit" function, but rather finding an exact function that satisfies the data. This isn't a question of how accurate the neural network can predict future data, but rather a question of how much does it take for a neural network to memorize perfect data.
I will be importing the neural net base code for this neural network.
End of explanation
npx.morse1
Explanation: Visualize Morse Code
Notice that numbers are represented by differing numbers of components. Some letters have 4 components and some have as little as one.
End of explanation
enter = input("Enter Your Morse: ")
Explanation: Enter Morse Code Sentence
Note, each letter is separated by a single space. And each word is separated by four spaces. To test every letter of the alphabet, use the sentence below:
"- .... . --.- ..- .. -.-. -.- -... .-. --- .-- -. ..-. --- -..- .--- ..- -- .--. . -.. --- ...- . .-. - .... . .-.. .- --.. -.-- -.. --- --."
FOR THIS NOTEBOOK NOT TO RAISE ERRORS, THERE MUST BE AN INPUT BELOW
End of explanation
def morse_to_num_str(morse):
Takes morse code and divides in into a 3D array, 1D for each letter, 2D for each word, and 3D for the sentence
morse = morse.replace(".", "0.5,")
morse = morse.replace("-", "1.0,")
new = list(morse)
for i in range(len(new)):
if i > 1 and new[i-1] == "," and new[i] == " ":
new[i-1] = " "
if i == (len(new)-1):
new[i] = ""
new = "".join(new)
a = new.split(" ")
for i in range(len(a)):
a[i] = a[i].split(" ")
for h in range(len(a)):
for j in range(len(a[h])):
a[h][j] = a[h][j].split(",")
return a
assert morse_to_num_str("-. -- -- ..") == [[['1.0', '0.5'], ['1.0', '1.0']], [['1.0', '1.0'], ['0.5', '0.5']]]
Explanation: Morse Code to 3D Number Array
Use two functions to turn a morse code sentence into a 3D array of numbers. This first function turns the morse code into string numbers. Each dot is "0.5" and each dash is "1.0". Each letter is an array, each word is an array of arrays, and each sentence is an array of arrays of arrays.
End of explanation
def morse_str_to_float(morse):
Turns the 3D array generated above into float
Adds 0.0 for letters without 4 elements
for i in range(len(morse)):
for j in range(len(morse[i])):
while len(morse[i][j]) != 4:
morse[i][j].append("0.0")
for k in range(len(morse[i][j])):
morse[i][j][k] = float(morse[i][j][k])
return np.array(morse)
assert np.all(morse_str_to_float([[['1.0', '0.5'], ['1.0', '1.0']], [['1.0', '1.0'], ['0.5', '0.5']]]) == np.array(([[[ 1. , 0.5, 0. , 0. ],
[ 1. , 1. , 0. , 0. ]],[[ 1. , 1. , 0. , 0. ],[ 0.5, 0.5, 0. , 0. ]]])))
Explanation: This second function turns each string number into a float. Because our neural net needs a constant value of inputs, and morse letters have 1 to 4 components, "0.0" is appened on to the end of each letter array that has less than four components.
End of explanation
The entire morse alphabet in numerical morse
all_in = np.genfromtxt("MorseTxt.txt", delimiter=",", usecols=(1,2,3,4))
Explanation: Create Input Array
This input array is the entire morse alphabet. Each letter has four number components that correspond to its dots and dashes. There are 4 inputs in the input layer.
End of explanation
The letters that correspond with all-in above
real_letters = np.genfromtxt("MorseTxt.txt", dtype=str, delimiter=",", usecols=(0))
26 element array of all the ouputs
all_out = NN.create_training_soln(np.genfromtxt("MorseTxt.txt", dtype=str, delimiter=",", usecols=(0)),26)
Explanation: Create Solution Array
There are 26 possible solutions and therefore 26 neurons in the output layer. Different letters are represented by their placement in the alphabet. A is 0. The 0th node "firing" represents an A.
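Independent of the create_training_soln helper used above, the 26-element target is plain one-hot encoding; a tiny sketch of what a single training target looks like:
letter = 'C'                      # 'A' -> index 0, 'B' -> 1, ...
target = np.zeros(26)
target[ord(letter) - ord('A')] = 1.0
print(letter, target.argmax(), target)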
End of explanation
morse_net = NN.NN_training(all_in, all_out, 4, 26, 30, 400, 0.7)
Explanation: Training the Neural Network
This Neural Network has a input for every output and a unique output for every input. Because of this, the neural network must be trained to 100% accuracy on the training set to get a correct translation. For this, the neural net requires 30 neurons in the hidden layer and 400 iterations with a learning rate of 0.7.
End of explanation
# x,y = morse_net.train()
f = np.load("MorseWeights.npz")
x = f['arr_0']
y = f['arr_1']
assert len(x) == 30
assert len(y) == 26
Explanation: I am commenting out the cell below. This is how you would calculate weights, but for the demonstration, I will load weights from a previous training.
End of explanation
morse_ask = NN.NN_ask(all_in, x, y)
comp_vals = [chr(morse_ask.get_ans()[i]+65) for i in range(26)]
assert np.all(comp_vals == real_letters)
Explanation: Assert 100% Accuracy
End of explanation
new_net = NN.NN_ask_morse(morse_str_to_float(morse_to_num_str(enter)), x, y)
ans = new_net.get_ans()
print("".join([chr(ans[i]) for i in range(len(ans))]))
Explanation: Translate Morse
Because the Neural Network is perfectly trained, the accuracy of the "test data" will be 100%. Giving the neural net and morse code sentence will receive a perfect translation.
End of explanation |
9,878 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Report counts of GO terms at various levels and depths
Reports the number of GO terms at each level and depth.
Level refers to the length of the shortest path from the top.
Depth refers to the length of the longest path from the top.
See the Gene Ontology Consorium's (GOC) advice regarding
levels and depths of a GO term
GO level and depth reporting
GO terms reported can be all GO terms in an ontology.
Or subsets of GO terms can be reported.
GO subset examples include all GO terms annotated for a species or all GO terms in a study.
Example report on full Ontology from Ontologies downloaded April 27, 2016.
```
Dep <-Depth Counts-> <-Level Counts->
Lev BP MF CC BP MF CC
00 1 1 1 1 1 1
01 24 19 24 24 19 24
02 125 132 192 223 155 336
03 950 494 501 1907 738 1143
04 1952 1465 561 4506 1815 1294
05 3376 3861 975 7002 4074 765
06 4315 1788 724 7044 1914 274
07 4646 1011 577 4948 906 60
08 4150 577 215 2017 352 6
09 3532 309 106 753 110 1
10 2386 171 24 182 40 0
11 1587 174 3 37 22 0
12 1032 70 1 1 0 0
13 418 53 0 0 0 0
14 107 17 0 0 0 0
15 33 4 0 0 0 0
16 11 0 0 0 0 0
```
1. Download Ontologies, if necessary
Step1: 2. Download Associations, if necessary
Step2: 3. Initialize GODag object
Step3: 4. Initialize Reporter class
Step4: 5. Generate depth/level report for all GO terms | Python Code:
# Get http://geneontology.org/ontology/go-basic.obo
from goatools.base import download_go_basic_obo
obo_fname = download_go_basic_obo()
Explanation: Report counts of GO terms at various levels and depths
Reports the number of GO terms at each level and depth.
Level refers to the length of the shortest path from the top.
Depth refers to the length of the longest path from the top.
See the Gene Ontology Consortium's (GOC) advice regarding
levels and depths of a GO term
GO level and depth reporting
GO terms reported can be all GO terms in an ontology.
Or subsets of GO terms can be reported.
GO subset examples include all GO terms annotated for a species or all GO terms in a study.
Example report on full Ontology from Ontologies downloaded April 27, 2016.
```
Dep <-Depth Counts-> <-Level Counts->
Lev BP MF CC BP MF CC
00 1 1 1 1 1 1
01 24 19 24 24 19 24
02 125 132 192 223 155 336
03 950 494 501 1907 738 1143
04 1952 1465 561 4506 1815 1294
05 3376 3861 975 7002 4074 765
06 4315 1788 724 7044 1914 274
07 4646 1011 577 4948 906 60
08 4150 577 215 2017 352 6
09 3532 309 106 753 110 1
10 2386 171 24 182 40 0
11 1587 174 3 37 22 0
12 1032 70 1 1 0 0
13 418 53 0 0 0 0
14 107 17 0 0 0 0
15 33 4 0 0 0 0
16 11 0 0 0 0 0
```
1. Download Ontologies, if necessary
End of explanation
# Get ftp://ftp.ncbi.nlm.nih.gov/gene/DATA/gene2go.gz
from goatools.base import download_ncbi_associations
gene2go = download_ncbi_associations()
Explanation: 2. Download Associations, if necessary
End of explanation
from goatools.obo_parser import GODag
obodag = GODag("go-basic.obo")
Explanation: 3. Initialize GODag object
End of explanation
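As a quick sanity check (a sketch; the GO ID below is just an illustrative, well-known root term), individual records in the loaded GODag expose their level and depth directly:
term = obodag['GO:0008150']  # 'biological_process' root term, used only as an example ID
print(term.name, 'level:', term.level, 'depth:', term.depth)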
from goatools.rpt_lev_depth import RptLevDepth
rptobj = RptLevDepth(obodag)
Explanation: 4. Initialize Reporter class
End of explanation
rptobj.write_summary_cnts_all()
Explanation: 5. Generate depth/level report for all GO terms
End of explanation |
9,879 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Big Data Doesn't Exist
The recent opinion piece Big Data Doesn't Exist on Tech Crunch by Slater Victoroff is an interesting discussion about the usefulness of data both big and small. Slater joins me this episode to discuss and expand on this discussion.
Slater Victoroff is CEO of indico Data Solutions, a company whose services turn raw text and image data into human insight. He, and his co-founders, studied at Olin College of Engineering where indico was born. indico was then accepted into the "Techstars Accelarator Program" in the Fall of 2014 and went on to raise $3M in seed funding. His recent essay "Big Data Doesn't Exist" received a lot of traction on TechCrunch, and I have invited Slater to join me today to discuss his perspective and touch on a few topics in the machine learning space as well.
During the interview, two noteworthy papers are mentioned and discussed. Scaling to Very Very Large Corpora for Natural Language Disambiguation by Banko and Brill, and Transfer Learning by Torrey and Shavlik. We also mentioned the ImageNet dataset and the Dogs vs. Cats dataset.
The episode winds up with a discussion of indico - Slater's company. For fun, I tried out their API for some analysis on the show notes of previous episodes of Data Skeptic. That analysis can be found at the end of the show notes.
Lastly, Slater mentioned a new project from indico called Thumbprint which performs quick analysis on twitter streams. You can see the results for @DataSkeptic or try your own.
Comments on this episode found at the bottom.
Trying indico API
What follows is my Python tinkering, trying out the indico API using the show notes from all previous episodes of Data Skeptic.
Step1: Sentiment Analysis
I almost skipped performing sentiment analysis, since I express very little sentiment in the notes. I try to keep it merely factual. Nonetheless, the plot below shows the sentiment ratings I got back, mostly positive, which seems reasonable to me. Perhaps closer to neutral would have been in line with my expectations, but my notes do tend to offer praise for the work of my guests, so I'm guessing that's what's really being detected.
Step2: But what are those low sentiment cases? What was going on in those episodes? You can see the negative sentiment episodes listed below. "Jackson Pollock Authentication Analysis" was a discussion of a response paper critical of earlier findings, so that makes perfect sense to me. A few others seem to focus around some statistical episodes where things like "error" and "fail to reject the null hypothesis" are probably mentioned, tuning some of these towards negative polarity.
Step3: Text Tags
The last feature of the indico API I looked at was text tags. Of their taxonomy, I score highest in math, which does indeed strike me as the most appropriate weighting for the show.
Step4: Keyword Extraction
The plots below are the three keywords extracted by indico when given the (often brief) show notes of every episode of Data Skeptic. I don't think I'll directly stick these in an SEM campaign without some additional steps of analysis, but these look pretty useful for finding themes, and the responses come back fast enough for use in an online algorithm, despite my offline use here. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import ConfigParser
import json
import indicoio
import requests
import xmltodict
from BeautifulSoup import BeautifulSoup
import time
import pickle
import os  # needed for the os.path.isfile check below
propertiesFile = "indico.properties"
cp = ConfigParser.ConfigParser()
cp.readfp(open(propertiesFile))
api_key = cp.get('config', 'api_key')
indicoio.config.api_key = api_key
fname = 'feed.xml'
url = 'http://dataskeptic.com/feed.rss'
if not(os.path.isfile(fname)):
print 'fetching'
r = requests.get(url)
f = open(fname, 'wb')
f.write(r.text.encode('utf-8'))
f.close()
with open(fname) as fd:
xml = xmltodict.parse(fd.read())
episodes = xml['rss']['channel']['item']
descriptions = []
descToTitle = {}
descToNum = {}
l = len(episodes)
for episode in episodes:
enclosure = episode['enclosure']
desc = episode['description']
desc = BeautifulSoup(desc).text
descriptions.append(desc)
descToTitle[desc] = episode['title']
descToNum[desc] = l
l = l - 1
responses = {}
for desc in descriptions:
resp = indicoio.analyze_text(desc, apis=['sentiment_hq', 'political', 'text_tags', 'keywords'])
responses[desc] = resp
time.sleep(.5)
pickle.dump(responses, open('cache_responses.pkl', 'wb'))
Explanation: Big Data Doesn't Exist
The recent opinion piece Big Data Doesn't Exist on Tech Crunch by Slater Victoroff is an interesting discussion about the usefulness of data both big and small. Slater joins me this episode to discuss and expand on this discussion.
Slater Victoroff is CEO of indico Data Solutions, a company whose services turn raw text and image data into human insight. He, and his co-founders, studied at Olin College of Engineering where indico was born. indico was then accepted into the "Techstars Accelarator Program" in the Fall of 2014 and went on to raise $3M in seed funding. His recent essay "Big Data Doesn't Exist" received a lot of traction on TechCrunch, and I have invited Slater to join me today to discuss his perspective and touch on a few topics in the machine learning space as well.
During the interview, two noteworthy papers are mentioned and discussed. Scaling to Very Very Large Corpora for Natural Language Disambiguation by Banko and Brill, and Transfer Learning by Torrey and Shavlik. We also mentioned the ImageNet dataset and the Dogs vs. Cats dataset.
The episode winds up with a discussion of indico - Slater's company. For fun, I tried out their API for some analysis on the show notes of previous episodes of Data Skeptic. That analysis can be found at the end of the show notes.
Lastly, Slater mentioned a new project from indico called Thumbprint which performs quick analysis on twitter streams. You can see the results for @DataSkeptic or try your own.
Comments on this episode found at the bottom.
Trying indico API
What follows is my Python tinkering, trying out the indico API using the show notes from all previous episodes of Data Skeptic.
End of explanation
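Because the responses are pickled above, later runs can skip the API calls entirely (a sketch, assuming cache_responses.pkl exists from a previous run):
import os, pickle
if os.path.isfile('cache_responses.pkl'):
    responses = pickle.load(open('cache_responses.pkl', 'rb'))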
sentiments = []
titles = []
nums = []
for desc in responses.keys():
r = responses[desc]
titles.append(descToTitle[desc])
sentiments.append(r['sentiment_hq'])
nums.append(descToNum[desc])
df = pd.DataFrame({'sentiment': sentiments, 'title': titles, 'num': nums})
df.sort('num', inplace=True)
plt.figure(figsize=(10,5))
plt.plot(df['num'], df['sentiment'], linewidth=2)
plt.plot(df['num'], np.ones(df.shape[0]) * .5)
plt.ylim(0,1)
plt.text(50,.52,"Positive sentiment")
plt.text(50,.45,"Negative sentiment")
plt.gca().xaxis.grid(False)
plt.xlabel('episode number')
plt.ylabel('sentiment')
plt.show()
Explanation: Sentiment Analysis
I almost skipped performing sentiment analysis, since I express very little sentiment in the notes. I try to keep it merely factual. Nonetheless, the plot below shows the sentiment ratings I got back, mostly positive, which seems reasonable to me. Perhaps closer to neutral would have been in line with my expectations, but my notes do tend to offer praise for the work of my guests, so I'm guessing that's what's really being detected.
End of explanation
df[df['sentiment'] < .5]
Explanation: But what are those low sentiment cases? What was going on in those episodes? You can see the negative sentiment episodes listed below. "Jackson Pollock Authentication Analysis" was a discussion of a response paper critical of earlier findings, so that makes perfect sense to me. A few others seem to focus around some statistical episodes where things like "error" and "fail to reject the null hypothesis" are probably mentioned, tuning some of these towards negative polarity.
End of explanation
values = []
for desc in responses.keys():
r = responses[desc]
items = zip(*r['text_tags'].items())
values.append(items[1])
df = pd.DataFrame(values)
df.columns = items[0]
df2 = pd.DataFrame(df.mean())
df2.columns = ['weight']
df2.sort('weight', inplace=True)
x = np.arange(df2.shape[0])
plt.figure(figsize=(5,25))
plt.barh(x, df2['weight'])
plt.yticks(x+0.4, df2.index)
plt.ylim(0, len(x))
plt.gca().yaxis.grid(False)
plt.show()
Explanation: Text Tags
The last feature of the indico API I looked at was text tags. Of their taxonomy, I score highest in math, which does indeed strike me as the most appropriate weighting for the show.
End of explanation
for desc in responses.keys():
r = responses[desc]
title = descToTitle[desc]
keywords = zip(*r['keywords'].items())
x = np.arange(len(keywords[0]))
plt.figure(figsize=(8,5))
plt.barh(x, keywords[1])
plt.yticks(x + 0.4, keywords[0])
plt.gca().yaxis.grid(False)
plt.title('#' + str(descToNum[desc]) + ': ' + title)
plt.show()
Explanation: Keyword Extraction
The plots below are the three keywords extracted by indico when given the (often brief) show notes of every episode of Data Skeptic. I don't think I'll directly stick these in an SEM campaign without some additional steps of analysis, but these look pretty useful for finding themes, and the responses come back fast enough for use in an online algorithm, despite my offline use here.
End of explanation |
9,880 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Filter
<script type="text/javascript">
localStorage.setItem('language', 'language-py')
</script>
<table align="left" style="margin-right
Step2: Examples
In the following examples, we create a pipeline with a PCollection of produce with their icon, name, and duration.
Then, we apply Filter in multiple ways to filter out produce by their duration value.
Filter accepts a function that keeps elements that return True, and filters out the remaining elements.
Example 1
Step3: <table align="left" style="margin-right
Step4: <table align="left" style="margin-right
Step5: <table align="left" style="margin-right
Step6: <table align="left" style="margin-right
Step7: <table align="left" style="margin-right | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License")
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
Explanation: <a href="https://colab.research.google.com/github/apache/beam/blob/master//Users/dcavazos/src/beam/examples/notebooks/documentation/transforms/python/elementwise/filter-py.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
<table align="left"><td><a target="_blank" href="https://beam.apache.org/documentation/transforms/python/elementwise/filter"><img src="https://beam.apache.org/images/logos/full-color/name-bottom/beam-logo-full-color-name-bottom-100.png" width="32" height="32" />View the docs</a></td></table>
End of explanation
!pip install --quiet -U apache-beam
Explanation: Filter
<script type="text/javascript">
localStorage.setItem('language', 'language-py')
</script>
<table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://beam.apache.org/releases/pydoc/current/apache_beam.transforms.core.html#apache_beam.transforms.core.Filter"><img src="https://beam.apache.org/images/logos/sdks/python.png" width="32px" height="32px" alt="Pydoc"/> Pydoc</a>
</td>
</table>
<br/><br/><br/>
Given a predicate, filter out all elements that don't satisfy that predicate.
May also be used to filter based on an inequality with a given value based
on the comparison ordering of the element.
Setup
To run a code cell, you can click the Run cell button at the top left of the cell,
or select it and press Shift+Enter.
Try modifying a code cell and re-running it to see what happens.
To learn more about Colab, see
Welcome to Colaboratory!.
First, let's install the apache-beam module.
End of explanation
import apache_beam as beam
def is_perennial(plant):
return plant['duration'] == 'perennial'
with beam.Pipeline() as pipeline:
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'},
{'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'},
{'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'},
{'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'},
])
| 'Filter perennials' >> beam.Filter(is_perennial)
| beam.Map(print)
)
Explanation: Examples
In the following examples, we create a pipeline with a PCollection of produce with their icon, name, and duration.
Then, we apply Filter in multiple ways to filter out produce by their duration value.
Filter accepts a function that keeps elements that return True, and filters out the remaining elements.
Example 1: Filtering with a function
We define a function is_perennial which returns True if the element's duration equals 'perennial', and False otherwise.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'},
{'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'},
{'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'},
{'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'},
])
| 'Filter perennials' >> beam.Filter(
lambda plant: plant['duration'] == 'perennial')
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 2: Filtering with a lambda function
We can also use lambda functions to simplify Example 1.
End of explanation
import apache_beam as beam
def has_duration(plant, duration):
return plant['duration'] == duration
with beam.Pipeline() as pipeline:
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'},
{'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'},
{'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'},
{'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'},
])
| 'Filter perennials' >> beam.Filter(has_duration, 'perennial')
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 3: Filtering with multiple arguments
You can pass functions with multiple arguments to Filter.
They are passed as additional positional arguments or keyword arguments to the function.
In this example, has_duration takes plant and duration as arguments.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
perennial = pipeline | 'Perennial' >> beam.Create(['perennial'])
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'},
{'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'},
{'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'},
{'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'},
])
| 'Filter perennials' >> beam.Filter(
lambda plant, duration: plant['duration'] == duration,
duration=beam.pvalue.AsSingleton(perennial),
)
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 4: Filtering with side inputs as singletons
If the PCollection has a single value, such as the average from another computation,
passing the PCollection as a singleton accesses that value.
In this example, we pass a PCollection the value 'perennial' as a singleton.
We then use that value to filter out perennials.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
valid_durations = pipeline | 'Valid durations' >> beam.Create([
'annual',
'biennial',
'perennial',
])
valid_plants = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'},
{'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'},
{'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'},
{'icon': '🥔', 'name': 'Potato', 'duration': 'PERENNIAL'},
])
| 'Filter valid plants' >> beam.Filter(
lambda plant, valid_durations: plant['duration'] in valid_durations,
valid_durations=beam.pvalue.AsIter(valid_durations),
)
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 5: Filtering with side inputs as iterators
If the PCollection has multiple values, pass the PCollection as an iterator.
This accesses elements lazily as they are needed,
so it is possible to iterate over large PCollections that won't fit into memory.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
keep_duration = pipeline | 'Duration filters' >> beam.Create([
('annual', False),
('biennial', False),
('perennial', True),
])
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'},
{'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'},
{'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'},
{'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'},
])
| 'Filter plants by duration' >> beam.Filter(
lambda plant, keep_duration: keep_duration[plant['duration']],
keep_duration=beam.pvalue.AsDict(keep_duration),
)
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Note: You can pass the PCollection as a list with beam.pvalue.AsList(pcollection),
but this requires that all the elements fit into memory.
Example 6: Filtering with side inputs as dictionaries
If a PCollection is small enough to fit into memory, then that PCollection can be passed as a dictionary.
Each element must be a (key, value) pair.
Note that all the elements of the PCollection must fit into memory for this.
If the PCollection won't fit into memory, use beam.pvalue.AsIter(pcollection) instead.
End of explanation |
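For completeness, a minimal sketch of the AsList variant mentioned above (not one of the original examples; the produce records are reused only for illustration):
import apache_beam as beam

with beam.Pipeline() as pipeline:
  valid_durations = pipeline | 'Valid durations' >> beam.Create(['annual', 'biennial', 'perennial'])
  valid_plants = (
      pipeline
      | 'Gardening plants' >> beam.Create([
          {'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'},
          {'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'},
      ])
      | 'Filter valid plants' >> beam.Filter(
          lambda plant, durations: plant['duration'] in durations,
          durations=beam.pvalue.AsList(valid_durations),  # materialized in memory as a plain list
      )
      | beam.Map(print)
  )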
9,881 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: 2d embeddings
Step2: 1d embeddings
Step3: Clustering in 2d
1d embedding vs size of airline
* find what is similar
* what is an outlier
Step4: Making results more stable
when you visualize latent spaces they should not change much when re-training or fitting additional data points
when working with autoencoders or embeddings there are two ways to make that happen
save model, do not retrain from scratch and only fit new data points with low learning rate
save output from embedding and keep new latent space similar by adding to the loss function | Python Code:
!pip install -q tf-nightly-gpu-2.0-preview
import tensorflow as tf
print(tf.__version__)
!curl -O https://raw.githubusercontent.com/jpatokal/openflights/master/data/routes.dat
# pd.read_csv?
import pandas as pd
df = pd.read_csv('routes.dat', quotechar="'", sep=',', encoding='utf-8', header=None, na_values='\\N',
names=['Airline', 'Airline ID', 'Source airport', 'Source airport ID', 'Destination airport', 'Destination airport ID', 'Codeshare', 'Stops', 'Equipment'])
# https://openflights.org/data.html#route
# Airline 2-letter (IATA) or 3-letter (ICAO) code of the airline.
# Airline ID Unique OpenFlights identifier for airline (see Airline).
# Source airport 3-letter (IATA) or 4-letter (ICAO) code of the source airport.
# Source airport ID Unique OpenFlights identifier for source airport (see Airport)
# Destination airport 3-letter (IATA) or 4-letter (ICAO) code of the destination airport.
# Destination airport ID Unique OpenFlights identifier for destination airport (see Airport)
# Codeshare "Y" if this flight is a codeshare (that is, not operated by Airline, but another carrier), empty otherwise.
# Stops Number of stops on this flight ("0" for direct)
# Equipment 3-letter codes for plane type(s) generally used on this flight, separated by spaces
# df[df['Stops'] == 1] gives only a dozen or so routes, so also drop it
df.drop(['Airline ID', 'Source airport ID', 'Destination airport ID', 'Codeshare', 'Equipment', 'Stops'], axis='columns', inplace=True)
len(df)
df.head()
sources = df['Source airport'].unique()
len(sources)
destinations = df['Destination airport'].unique()
len(destinations)
airlines = df['Airline'].unique()
len(airlines)
from tensorflow.keras.preprocessing.text import Tokenizer
airline_tokenizer = Tokenizer()
airline_tokenizer.fit_on_texts(df['Airline'])
import numpy as np
encoded_airlines = np.array(airline_tokenizer.texts_to_sequences(df['Airline'])).reshape(-1)
encoded_airlines
len(encoded_airlines)
routes = df[['Source airport', 'Destination airport']].apply(lambda x: ' '.join(x), axis=1)
routes.head()
routes_tokenizer = Tokenizer()
routes_tokenizer.fit_on_texts(routes)
encoded_routes = np.array(routes_tokenizer.texts_to_sequences(routes))
# should be a bit more 3400 as source and destination are from the same set
output_dim = len(routes_tokenizer.word_index) + 1
output_dim
encoded_routes[0]
len(encoded_routes)
from tensorflow.keras.utils import to_categorical
# sequence of airlines encoded as a unique number
x = encoded_airlines
# sequence of pair, src, dest encoded as a unique numbers
Y = to_categorical(encoded_routes)
# for now just the source
# Y = to_categorical(encoded_routes[:, 0])
Y[0]
Explanation: <a href="https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/tf2/embeddings.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Airline Embeddings
Basic assumption: airlines flying similar routes are similar
Data Sets
Single Flights: http://stat-computing.org/dataexpo/2009/the-data.html
Routes between airports: https://openflights.org/data.html#route
Advanced examples
autoencoders on tabular data: https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/2019_tf/autoencoders_tabular.ipynb
robust training on additional data: https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/2019_tf/autoencoders_stabilize.ipynb
End of explanation
%%time
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Flatten, GlobalAveragePooling1D, Dense, LSTM, GRU, SimpleRNN, Bidirectional, Embedding, RepeatVector
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.initializers import glorot_normal
seed = 3
input_dim = len(airlines) + 1
embedding_dim = 2
model = Sequential()
model.add(Embedding(name='embedding',
input_dim=input_dim,
output_dim=embedding_dim,
input_length=1,
embeddings_initializer=glorot_normal(seed=seed)))
# https://stackoverflow.com/questions/49295311/what-is-the-difference-between-flatten-and-globalaveragepooling2d-in-keras
# averages over all (global) embedding values
# model.add(GlobalAveragePooling1D())
model.add(Flatten())
model.add(Dense(units=50, activation='relu', bias_initializer='zeros', kernel_initializer=glorot_normal(seed=seed)))
model.add(RepeatVector(2))
model.add(SimpleRNN(units=50, return_sequences=True, bias_initializer='zeros', kernel_initializer=glorot_normal(seed=seed)))
model.add(Dense(units=output_dim, name='output', activation='softmax', bias_initializer='zeros', kernel_initializer=glorot_normal(seed=seed)))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
model.predict(np.array([x[0]])).shape
Y[0]
%%time
EPOCHS=25
BATCH_SIZE=10
history = model.fit(x, Y, epochs=EPOCHS, batch_size=BATCH_SIZE, verbose=1)
loss, accuracy = model.evaluate(x, Y, batch_size=BATCH_SIZE)
loss, accuracy
# plt.yscale('log')
plt.plot(history.history['loss'])
# plt.yscale('log')
plt.plot(history.history['accuracy'])
samples = pd.DataFrame(encoded_airlines).sample(n=200).values.reshape(-1)
# https://en.wikipedia.org/wiki/List_of_airline_codes
# https://en.wikipedia.org/wiki/List_of_largest_airlines_in_North_America
# https://en.wikipedia.org/wiki/List_of_largest_airlines_in_Europe
europe_airlines = ['LH', 'BA', 'SK', 'KL', 'AF', 'FR', 'SU', 'EW', 'TP', 'BT', 'U2']
us_airlines = ['AA', 'US', 'UA', 'WN', 'DL', 'AS', 'HA']
samples = [airline_tokenizer.word_index[airline_code.lower()] for airline_code in europe_airlines + us_airlines]
embedding_layer = model.get_layer('embedding')
embedding_model = Model(inputs=model.input, outputs=embedding_layer.output)
embeddings_2d = embedding_model.predict(samples).reshape(-1, 2)
# for printing only
# plt.figure(figsize=(20,5))
# plt.figure(dpi=600)
plt.axis('off')
plt.scatter(embeddings_2d[:, 0], embeddings_2d[:, 1])
for index, x_pos, y_pos in zip(samples, embeddings_2d[:, 0], embeddings_2d[:, 1]):
name = airline_tokenizer.index_word[index].upper()
# print(name, (x_pos, y_pos))
plt.annotate(name, (x_pos, y_pos))
Explanation: 2d embeddings
End of explanation
%%time
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Flatten, GlobalAveragePooling1D, Dense, LSTM, GRU, SimpleRNN, Bidirectional, Embedding, RepeatVector
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.initializers import glorot_normal
seed = 7
input_dim = len(airlines) + 1
embedding_dim = 1
model = Sequential()
model.add(Embedding(name='embedding',
input_dim=input_dim,
output_dim=embedding_dim,
input_length=1,
embeddings_initializer=glorot_normal(seed=seed)))
# model.add(GlobalAveragePooling1D())
model.add(Flatten())
model.add(Dense(units=50, activation='relu', bias_initializer='zeros', kernel_initializer=glorot_normal(seed=seed)))
model.add(RepeatVector(2))
model.add(SimpleRNN(units=50, return_sequences=True, bias_initializer='zeros', kernel_initializer=glorot_normal(seed=seed)))
model.add(Dense(units=output_dim, name='output', activation='softmax', bias_initializer='zeros', kernel_initializer=glorot_normal(seed=seed)))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
%%time
EPOCHS=20
BATCH_SIZE=10
history = model.fit(x, Y, epochs=EPOCHS, batch_size=BATCH_SIZE, verbose=1)
# we expect this to be substantially worse than the 2d version as the bottleneck is now much narrower
loss, accuracy = model.evaluate(x, Y, batch_size=BATCH_SIZE)
loss, accuracy
# plt.yscale('log')
plt.plot(history.history['loss'])
# plt.yscale('log')
plt.plot(history.history['accuracy'])
import numpy as np
embedding_layer = model.get_layer('embedding')
embedding_model = Model(inputs=model.input, outputs=embedding_layer.output)
embeddings_1d = embedding_model.predict(samples).reshape(-1)
# for printing only
# plt.figure(figsize=(20,5))
# plt.figure(dpi=600)
plt.axis('off')
plt.scatter(embeddings_1d, np.zeros(len(embeddings_1d)))
for index, x_pos in zip(samples, embeddings_1d):
name = airline_tokenizer.index_word[index].upper()
# print(name, (x_pos, y_pos))
plt.annotate(name, (x_pos, 0), rotation=80)
Explanation: 1d embeddings
End of explanation
# https://en.wikipedia.org/wiki/List_of_airline_codes
# https://en.wikipedia.org/wiki/List_of_largest_airlines_in_North_America
# https://www.tvlon.com/resources/airlinecodes.htm
# https://en.wikipedia.org/wiki/List_of_largest_airlines_in_Europe
airline_size = {
'LH': 130, 'BA': 105, 'SK': 30, 'KL': 101, 'AF': 101, 'FR': 129, 'SU': 56, 'EW': 24, 'TP': 16, 'BT': 4, 'U2': 88, 'AA': 204, 'US': 204, 'UA': 158, 'WN': 164, 'DL': 192, 'AS': 46, 'HA': 12
}
sample_names = [airline_tokenizer.index_word[sample].upper() for sample in samples]
sample_sizes = [airline_size[name] * 1e6 for name in sample_names]
# for printing only
# plt.figure(figsize=(20,5))
# plt.figure(dpi=600)
# plt.axis('off')
plt.scatter(embeddings_1d, sample_sizes)
for name, x_pos, y_pos in zip(sample_names, embeddings_1d, sample_sizes):
plt.annotate(name, (x_pos, y_pos))
from sklearn.preprocessing import StandardScaler
embeddings_1d_scaled = StandardScaler().fit_transform(embeddings_1d.reshape(-1, 1))
sizes_for_samples_scaled = StandardScaler().fit_transform(np.array(sample_sizes).reshape(-1, 1))
X = np.dstack((embeddings_1d_scaled.reshape(-1), sizes_for_samples_scaled.reshape(-1)))[0]
X_scaled = StandardScaler().fit_transform(X)
X_scaled
%%time
from sklearn.cluster import DBSCAN
clf = DBSCAN(eps=0.5, min_samples=2)
clf.fit(X_scaled)
clusters = clf.labels_.astype(np.int)
clusters
import matplotlib.pyplot as plt
from itertools import cycle, islice
# last color is black to properly display label -1 as noise (black)
colors = np.append(np.array(list(islice(cycle(['#AAAAFF', '#ff7f00', '#4daf4a',
'#f781bf', '#a65628', '#984ea3',
'#999999', '#e41a1c', '#dede00']),
int(max(clusters) + 1)))), ['#000000'])
# plt.figure(dpi=600)
plt.xlabel('Similarity by typical routes')
plt.ylabel('Passengers')
plt.scatter(embeddings_1d, sample_sizes, color=colors[clusters], s=200)
for name, x_pos, y_pos in zip(sample_names, embeddings_1d, sample_sizes):
plt.annotate(name, (x_pos, y_pos), fontsize=18, color='grey')
Explanation: Clustering in 2d
1d embedding vs size of airline
* find what is similar
* what is an outlier
End of explanation
# save complete model
model.save('airline-embedding-v1.h5')
del model
Explanation: Making results more stable
when you visualize latent spaces they should not change much when re-training or fitting additional data points
when working with autoencoders or embeddings there are two ways to make that happen
save model, do not retrain from scratch and only fit new data points with low learning rate
save output from embedding and keep new latent space similar by adding to the loss function
End of explanation |
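The second option above is not shown in this notebook; a minimal sketch of the idea follows (assumptions: the previous run's embedding matrix was saved beforehand with np.save under the hypothetical name 'airline-embeddings-v1.npy', and the 0.1 penalty weight is arbitrary). The point is simply to add a loss term that discourages the new embedding weights from drifting away from the saved ones.
import numpy as np
import tensorflow as tf

old_embeddings = tf.constant(np.load('airline-embeddings-v1.npy'), dtype=tf.float32)  # hypothetical file

def stability_penalty(embedding_layer, strength=0.1):
    # extra loss term penalizing squared drift of the embedding matrix from the saved one
    return lambda: strength * tf.reduce_mean(tf.square(embedding_layer.embeddings - old_embeddings))

# after rebuilding the model, attach before compile()/fit():
# model.add_loss(stability_penalty(model.get_layer('embedding')))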
9,882 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Problem Description
Amazon wants an intelligent system to process its customers' comments about its products, classifying those comments into the categories
Step1: Importing Data
Step2: Creating Dictionary
The variable "charsRemove" lists the characters to be removed from the input.
All words are converted to lower case and the unwanted characters are removed.
The variable "allWords" stores all the words to be used.
Step3: Removing Words / Features
Here we remove the words that appear less than 52% of the time as positive or as negative.
Step4: Sentence to Vector
A vector is created for each sentence using the dictionary built above.
This vector depends on the words contained in each sentence. Each word occurrence sets the associated index to 1 (incrementing the value instead was also tested, but did not work as well).
Step5: Classifier Model
Foram testados uma série de métodos para realizar a classificação dos comentários entre positivo e negativo. O melhor resultado obtido foi utilizando Ensemble, combinando | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
Explanation: 1. Problem Description
Amazon wants an intelligent system to process its customers' comments about its products, classifying those comments as positive or negative. For this it provides three datasets of labeled sentences.
2. Data
The data are organized as sentence and label, with 0 meaning negative and 1 positive. The datasets come from the following sites:
imdb.com
amazon.com
yelp.com
3. Solution
End of explanation
X_1 = pd.read_csv('amazon_cells_labelled.txt', sep="\t", usecols=[0], header=None)
Y_1 = pd.read_csv('amazon_cells_labelled.txt', sep="\t", usecols=[1], header=None)
X_2 = pd.read_csv('imdb_labelled.txt', sep="\t", usecols=[0], header=None)
Y_2 = pd.read_csv('imdb_labelled.txt', sep="\t", usecols=[1], header=None)
X_3 = pd.read_csv('yelp_labelled.txt', sep="\t", usecols=[0], header=None)
Y_3 = pd.read_csv('yelp_labelled.txt', sep="\t", usecols=[1], header=None)
X = np.concatenate((X_1.values[:,0], X_2.values[:,0]), axis=0)
Y = np.concatenate((Y_1.values[:,0], Y_2.values[:,0]), axis=0)
X = np.concatenate((X, X_3.values[:,0]), axis=0)
Y = np.concatenate((Y, Y_3.values[:,0]), axis=0)
Explanation: Importing Data
End of explanation
# Treating Sentences
allSentences = []
charsSplit = "\\`*_{}[]()>#+-.!$,:&?"
charsRemove = ".,-_;\"\'"
for x in X :
for c in charsSplit:
x = x.replace(c, ' '+ c +' ')
for c in charsRemove:
x = x.replace(c, '')
allSentences.append(x.lower() )
allWords = []
for x in allSentences :
allWords.extend(x.split(" "))
allWords = list(set(allWords))
allWords.sort()
Explanation: Creating Dictionary
The variable "charsRemove" lists the characters to be removed from the input.
All words are converted to lower case and the unwanted characters are removed.
The variable "allWords" stores all the words to be used.
End of explanation
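For comparison only (not used in this solution), scikit-learn's CountVectorizer builds an equivalent binary bag-of-words representation in a couple of lines:
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(binary=True, lowercase=True)
X_bow = vectorizer.fit_transform(allSentences)  # sparse matrix: one row per sentence, one column per word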
positiveCount = [0] * len(allWords)
negativeCount = [0] * len(allWords)
sentenceNumber = 0
for sentenceNumber, sentence in enumerate(allSentences) :
for word in sentence.split(" "):
wordIndex = allWords.index(word)
if (Y[sentenceNumber] == 1):
positiveCount[wordIndex] = positiveCount[wordIndex] + 1
else:
negativeCount[wordIndex] = negativeCount[wordIndex] + 1
allCount = np.array(positiveCount) + np.array(negativeCount)
probPositive = np.divide(positiveCount, allCount)
len(allWords)
allWordsFiltered = np.array(allWords)
len(allWordsFiltered)
index_to_remove = []
for index, prob in enumerate(probPositive):
if not((prob <= 0.48) or (prob >= 0.52)):
index_to_remove.append(index)
a = 1
allWordsFiltered = np.delete(allWordsFiltered, index_to_remove)
Explanation: Removing Words / Features
Here we remove the words that appear less than 52% of the time as positive or as negative.
End of explanation
allWords = np.array(allWordsFiltered).tolist()
X = []
for x in allSentences :
sentenceV = [0] * len(allWords)
words = x.split(" ")
for w in words :
try:
index = allWords.index(w)
sentenceV[index] = 1 #sentenceV[index] + 1
except ValueError:
pass
X.append(sentenceV)
Explanation: Sentence to Vector
A vector is created for each sentence using the dictionary built above.
This vector depends on the words contained in each sentence. Each word occurrence sets the associated index to 1 (incrementing the value instead was also tested, but did not work as well).
End of explanation
from sklearn.cross_validation import cross_val_score
cross_val_k = 10
from sklearn.naive_bayes import MultinomialNB
clf1 = MultinomialNB()
from sklearn.neighbors import KNeighborsClassifier
clf2 = KNeighborsClassifier(n_neighbors=5)
from sklearn.linear_model import LogisticRegression
clf3 = LogisticRegression(penalty='l2', C=1.0)
from sklearn.ensemble import VotingClassifier
eclf = VotingClassifier(estimators=[('mnb', clf1), ('knn', clf2), ('lr', clf3)], voting='soft', weights=[3,2,2])
accuracy = cross_val_score(eclf, X, Y, cv=cross_val_k, scoring='accuracy').mean()
print('Accuracy: ', accuracy)
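# A hedged sketch (not part of the original solution): fit the ensemble on all data and
# score a new, hypothetical review using the same dictionary-based encoding as above.
eclf.fit(X, Y)
new_review = "the battery died after two days"  # hypothetical example sentence
vec = [0] * len(allWords)
for w in new_review.lower().split(" "):
    if w in allWords:
        vec[allWords.index(w)] = 1
print('Predicted label (1 = positive, 0 = negative):', eclf.predict([vec])[0])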
Explanation: Classifier Model
A number of methods were tested to classify the comments as positive or negative. The best result was obtained with an Ensemble, combining:
- Multinomial Naive Bayes (weight 3)
- KNN (weight 2)
- Logistic Regression (weight 2)
End of explanation |
9,883 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise - Functional Programming
Q
Step1: Ans | Python Code:
names = ["Aalok", "Chandu", "Roshan", "Prashant", "Saurabh"]
for i in range(len(names)):
names[i] = hash(names[i])
print(names)
Explanation: Exercise - Functional Programming
Q: Try rewriting the code below as a map. It takes a list of real names and replaces them with code names produced using a more robust strategy.
End of explanation
secret_names = map(hash, names)
print(secret_names)
Explanation: Ans:
End of explanation |
9,884 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Onset Detection
Automatic detection of musical events in an audio signal is one of the most fundamental tasks in music information retrieval. Here, we will show how to detect an onset, the start of a musical event.
For more reading, see this tutorial on onset detection by Juan Bello.
Load the audio file simpleLoop.wav into the NumPy array x and sampling rate fs.
Step1: Plot the signal
Step2: Listen
Step3: librosa.onset.onset_detect
librosa.onset.onset_detect returns the frame indices for estimated onsets in a signal
Step4: Plot the onsets on top of a spectrogram of the audio
Step5: essentia.standard.OnsetRate
The easiest way in Essentia to detect onsets given a time-domain signal is using OnsetRate. It returns a list of onset times and the onset rate, i.e. number of onsets per second.
Step6: essentia.standard.AudioOnsetsMarker
To verify our results, we can use AudioOnsetsMarker to add a sound at the moment of each onset. | Python Code:
x, fs = librosa.load('simpleLoop.wav', sr=44100)
print x.shape
Explanation: Onset Detection
Automatic detection of musical events in an audio signal is one of the most fundamental tasks in music information retrieval. Here, we will show how to detect an onset, the start of a musical event.
For more reading, see this tutorial on onset detection by Juan Bello.
Load the audio file simpleLoop.wav into the NumPy array x and sampling rate fs.
End of explanation
librosa.display.waveplot(x, fs)
Explanation: Plot the signal:
End of explanation
from IPython.display import Audio
Audio(x, rate=fs)
Explanation: Listen:
End of explanation
onsets = librosa.onset.onset_detect(x, fs)
print onsets # frame numbers of estimated onsets
Explanation: librosa.onset.onset_detect
librosa.onset.onset_detect returns the frame indices for estimated onsets in a signal:
End of explanation
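Since these are frame indices rather than times, they can be converted with librosa.frames_to_time (a small sketch using the default hop length):
onset_times_librosa = librosa.frames_to_time(onsets, sr=fs)
print onset_times_librosa # estimated onset times in seconds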
S = librosa.stft(x)
logS = librosa.logamplitude(S)
librosa.display.specshow(logS, fs, alpha=0.75, x_axis='time')
plt.vlines(onsets, 0, logS.shape[0], color='r')
Explanation: Plot the onsets on top of a spectrogram of the audio:
End of explanation
from essentia.standard import OnsetRate
find_onsets = OnsetRate()
onset_times, onset_rate = find_onsets(x)
print onset_times
print onset_rate
Explanation: essentia.standard.OnsetRate
The easiest way in Essentia to detect onsets given a time-domain signal is using OnsetRate. It returns a list of onset times and the onset rate, i.e. number of onsets per second.
End of explanation
from essentia.standard import AudioOnsetsMarker
onsets_marker = AudioOnsetsMarker(onsets=onset_times, type='beep')
x_beeps = onsets_marker(x)
Audio(x_beeps, rate=fs)
Explanation: essentia.standard.AudioOnsetsMarker
To verify our results, we can use AudioOnsetsMarker to add a sound at the moment of each onset.
End of explanation |
9,885 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 38
Step1: This is the primary function of the webbrowser module, but it can be used as part of a script to improve web scraping. selenium is a more full featured web browser module.
Google Maps Opener
Step2: To run this as a script, we would need the full path to Python, followed by the script, followed by the address.
An easy way to skip this process is to create a shell script holding the script addresses, and passing arguments to it | Python Code:
import webbrowser
webbrowser.open('https://automatetheboringstuff.com')
Explanation: Lesson 38:
The Webbrowser Module
The webbrowser module has tools to manage a webbrowser from Python.
webbrowser.open() opens a new browser window at a url:
End of explanation
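As a side note (not part of the lesson), the standard library module also provides open_new() and open_new_tab(); a quick sketch:
import webbrowser
webbrowser.open_new_tab('https://www.python.org')  # opens in a new tab where the browser supports it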
import webbrowser, sys, pyperclip
sys.argv # Pass system arguments to program; mapit.py '870' 'Valencia' 'St.'
# Check if command line arguments were passed; useful if this existed as a .py in the path (run via mapit 'Some address')
# For Jupyter version, will just pass in arguments earlier in document
if len(sys.argv) > 1:
# Join individual arguments into one string: mapit.py '870' 'Valencia' 'St.' > mapit.py '870 Valencia St.'
address = ' '.join(sys.argv[1:])
# Skip the first argument (mapit.py), but join every slice from [1:] with ' '
else:
# Read the clipboard if no arguments found
address = pyperclip.paste()
# Example Google Map URLs
# Default: https://www.google.com/maps/place/870+Valencia+St,+San+Francisco,+CA+94110/@37.7589845,-122.4237899,17z/data=!3m1!4b1!4m2!3m1!1s0x808f7e3db2792a09:0x4fc69a2eea9fb3d3
# Test: https://www.google.com/maps/place/870+Valencia+St,+San+Francisco,+CA+94110/
# Test: https://www.google.com/maps/place/870 Valencia St
# This works, so just concatenate the default google maps url with a spaced address variable
webbrowser.open('https://www.google.com/maps/place' + address)
Explanation: This is the primary function of the webbrowser module, but it can be used as part of a script to improve web scraping. selenium is a more full featured web browser module.
Google Maps Opener:
End of explanation
#!/usr/bin/env bash
# python3 mapit.py "$@"   # "$@" forwards all command-line arguments (the original %* is Windows batch syntax)
Explanation: To run this as a script, we would need the full path to Python, followed by the script, followed by the address.
An easy way to skip this process is to create a shell script holding the script addresses, and passing arguments to it:
End of explanation |
9,886 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
03 PyTorch CPU to GPU copy
Step1: Alloocate a PyTorch Tensor on the GPU | Python Code:
% reset -f
from __future__ import print_function
from __future__ import division
import math
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import torch
import sys
print('__Python VERSION:', sys.version)
print('__pyTorch VERSION:', torch.__version__)
print('__CUDA VERSION')
from subprocess import call
# call(["nvcc", "--version"]) does not work
! nvcc --version
print('__CUDNN VERSION:', torch.backends.cudnn.version())
print('__Number CUDA Devices:', torch.cuda.device_count())
print('__Devices')
call(["nvidia-smi", "--format=csv", "--query-gpu=index,name,driver_version,memory.total,memory.used,memory.free"])
print('Active CUDA Device: GPU', torch.cuda.current_device())
print ('Available devices ', torch.cuda.device_count())
print ('Current cuda device ', torch.cuda.current_device())
Explanation: 03 PyTorch CPU to GPU copy
End of explanation
x=torch.Tensor(3,4)
if torch.cuda.is_available():
x = x.cuda()*2
print (type(x))
print (x)
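As a side note, the same CPU-to-GPU move is usually written today with the device-agnostic .to() idiom (a sketch, equivalent in effect to the cell above):
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
y = torch.randn(3, 4).to(device) * 2  # falls back to CPU transparently when no GPU is present
print(y.device)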
import numpy as np
import torch.cuda as cu
import contextlib
import time
# allocates a tensor directly on the current (default) GPU
a = torch.cuda.FloatTensor(1)
# creates a tensor on the CPU, then transfers it to the current GPU
b = torch.FloatTensor(1).cuda()
# Timing helper with CUDA synchonization
@contextlib.contextmanager
def timing(name):
cu.synchronize()
start_time = time.time()
yield
cu.synchronize()
end_time = time.time()
print ('{} {:6.3f} seconds'.format(name, end_time-start_time))
for shape in [(128**3,), (128,128**2), (128,128,128), (32,32,32,64)]:
print ('shape {}, {:.1f} MB'.format(shape, np.zeros(shape).nbytes/1024.**2))
with timing('from_numpy sent to GPU '): torch.from_numpy (np.zeros(shape)).cuda()
with timing('CPU constructor '): torch.FloatTensor(np.zeros(shape))
with timing('CPU constructor sent to GPU'): torch.FloatTensor(np.zeros(shape)).cuda()
with timing('GPU constructor '): cu. FloatTensor(np.zeros(shape))
print
Explanation: Allocate a PyTorch Tensor on the GPU
End of explanation |
9,887 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Input luminosity function
Step1: Run the simulation, save the spectra
Step2: Simulation outputs
Step3: the table of simulated quasars, including redshift, luminosity, synthetic flux/mags in nine bands, and "observed" photometry with errors included.
also includes details of the model inputs for each quasar
Step4: the distribution in g-band magnitude
Step5: color-color diagram from observed magnitudes, including errors
Step6: the list of emission lines in the model
Step7: broad CIV equivalent width, displaying the Baldwin Effect
Step8: Example spectra
for this example the wavelength cutoff is 30 micron, but the model doesn't include warm dust and thus is invalid beyond a few micron.
Step9: zoom in on the lyman alpha - CIV region
Step10: IGM absorption model (simqso.hiforest)
an example of the forest transmission spectra at R=30,000 (the native resolution for the monte carlo forest spectra) | Python Code:
M1450 = linspace(-30,-22,20)
zz = arange(0.7,3.5,0.5)
ple = bossqsos.BOSS_DR9_PLE()
lede = bossqsos.BOSS_DR9_LEDE()
for z in zz:
qlf = ple if z<2.2 else lede
plot(M1450,qlf(M1450,z),label='z=%.1f'%z)
legend(loc='lower left')
xlim(-21.8,-30.2)
xlabel("$M_{1450}$")
ylabel("log Phi")
Explanation: Input luminosity function
End of explanation
_ = bossqsos.qsoSimulation(bossqsos.simParams,saveSpectra=True)
Explanation: Run the simulation, save the spectra
End of explanation
wave,qsos = load_sim_output('boss_dr9qlf_sim','.')
Explanation: Simulation outputs
End of explanation
qsos[::40]
Explanation: the table of simulated quasars, including redshift, luminosity, synthetic flux/mags in nine bands, and "observed" photometry with errors included.
also includes details of the model inputs for each quasar: slopes is the set of broken power law slopes defining the continuum, emLines is the set of Gaussian parameters for each emission line (wave, EW, sigma) measured in the rest frame.
End of explanation
_ = hist(qsos['obsMag'][:,1],linspace(17,22,20),log=True)
Explanation: the distribution in g-band magnitude:
End of explanation
scatter(qsos['obsMag'][:,0]-qsos['obsMag'][:,1],qsos['obsMag'][:,1]-qsos['obsMag'][:,2],
c=qsos['z'],cmap=cm.autumn_r,alpha=0.7)
colorbar()
xlabel('u-g')
ylabel('g-r')
xlim(-0.75,3)
ylim(-0.5,1.5)
Explanation: color-color diagram from observed magnitudes, including errors:
End of explanation
qsodatahdr = fits.getheader('boss_dr9qlf_sim.fits',1)
for i,n in enumerate(qsodatahdr['LINENAME'].split(',')):
print('%d:%s, '% (i,n,),end=" ")
print()
Explanation: the list of emission lines in the model:
End of explanation
scatter(qsos['absMag'],qsos['emLines'][:,13,1],c=qsos['z'],cmap=cm.autumn_r)
colorbar()
xlabel("$M_{1450}$")
ylabel("CIV equivalent width $\AA$")
Explanation: broad CIV equivalent width, displaying the Baldwin Effect:
End of explanation
figure(figsize=(14,4))
plot(wave/1e4,qsos['spec'][0])
yscale('log')
xlabel('wave [micron]')
Explanation: Example spectra
for this example the wavelength cutoff is 30 micron, but the model doesn't include warm dust and thus is invalid beyond a few micron.
End of explanation
figure(figsize=(14,4))
plot(wave,qsos['spec'][20])
xlim(3500,7500)
title('$z=%.3f$'%qsos['z'][20])
Explanation: zoom in on the lyman alpha - CIV region:
End of explanation
# XXX WARNING -- an ugly hack is needed here. Internally, a table of Voigt profiles is generated
# at startup in order to speed the forest spectra generation. This table is defined in terms of
# the wave dispersion the first time a simulation is run. Here we are changing the wavelength
# model, and thus before executing the next cells you must restart the kernel and execute only
# the first cell.
np.random.seed(12345)
wave = buildWaveGrid(dict(waveRange=(3500,4800),SpecDispersion=30000))
forest = hiforest.IGMTransmissionGrid(wave,WP11_model,1)
T = forest.next_spec(0,2.9)
figure(figsize=(14,4))
plot(wave,T)
figure(figsize=(14,4))
plot(wave,T)
xlim(4300,4800)
Explanation: IGM absorption model (simqso.hiforest)
an example of the forest transmission spectra at R=30,000 (the native resolution for the monte carlo forest spectra):
End of explanation |
9,888 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prepare catalogs
Step1: Prioritize
```
x = base catalog on Dropbox
; CORRECT FOR EXTINCTION
r = x.r - x.extinction_r
i = x.i - x.extinction_i
g = x.g - x.extinction_g
; DEFINE GRI CRITERIA, ERRORS ARE SUBTRACTED IN QUADRATURE
cgr = (g - r) - 2.*sqrt(x.g_err^2 + x.r_err^2)
cri = (r - i) - 2.*sqrt(x.r_err^2 + x.i_err^2)
; SELECT TARGETS
qtarget = where(x.rhost_kpc le 300 and $ ; INSIDE RVIR
x.fibermag_r le 23 and $ ; FIBERMAG CUT
x.remove eq -1 and $ ; SHRED LIST
x.phot_sg eq 'GALAXY'and $ ; GALAXIES ONLY
cgr le 0.85 and cri le 0.55 and $ ; GRI CUT
x.zquality lt 3) ; NO SPECTRA
```
The scheme below is
Step2: Note
Step3: Some special objects
Step4: http
Step5: The above object was put as SAT==2, but is at very close z to AnaK...? r~=17.6 in DECALS. Marla says this is because it's too far from the host, so not worth special-izing
Step6: Risa suggests
Step7: This is not actually quite on the center-of-light in DECALS... but it might be an HII region?
Marla wants
Step12: That one seems good
Risa's eye-balled selections
Step13: Make the master catalogs
Sky positions
For some hosts we already have sky positions from the last run, so copy those over
Step14: For the remainder, generate and visually inspect them one at a time. Edit the file to remove any that are not good sky positions
Step15: Actually generate the master catalogs
Step16: Observations
We sub-sample from the master lists generated above for individual configurations.
Step18: This function parses a dump of all the image headers from 'fitsheader ccd_?/.fits'
Step19: Night 1
Generate configuration field files
Step21: Make observing log
Step22: Night 2
Note
Step24: Make observing log
Step25: Nights 3+
These nights are by the team that's doing the time-trade. They will be primarily second-half, but with a little time in the first half
Step26: NSA 145729 is being treated a bit oddly because there might be two back-to-back-ish observations. So we're making one config in advance for both the p0 and p1 configurations, and making fld's for config 2 assuming either of those.
Step27: Logs for observations
Step28: Utilities for during the night
Pull in as-generated files from remote
Note that this assumes the .lis files were generated after the configuration was made. If need be these can be re-created from the .sds files by starting up configure and exporting the allocated fibers list.
Step29: Inspect guider fields as needed
Step30: Planning
Step31: Find possible targets
Step32: Airmass chart
Step33: Notes
Step34: Note
Step35: Basically there after 2 observations of Narnia and OBrother. AnaK and Dune 1 or maybe 2, Gilg/Ody 1.
Guess
Step36: Log from last run | Python Code:
nmstoget = ('Dune', 'AnaK', 'Odyssey', 'Gilgamesh', 'OBrother', 'Narnia', 'Catch22')
hosts_to_target = [h for h in hsd.values() if h.name in nmstoget]
assert len(hosts_to_target)==len(nmstoget)
new_targets = [hosts.NSAHost(145729), hosts.NSAHost(21709)]
hosts_to_target.extend(new_targets)
# now set to the latest base catalogs
for h in hsd.values() + hosts_to_target:
h.fnsdss = 'SAGADropbox/base_catalogs/base_sql_nsa{0}.fits.gz'.format(h.nsaid)
h._cached_sdss = None
# make sure the catalogs are loaded
for h in hosts_to_target:
h.get_sdss_catalog()
hosts_to_target
# these are the already-observed objects
# we use "h" for this only as a convenience because the method adds some useful bits that aren't in the raw catalogs
spectra = h.load_and_reprocess_sdss_catalog('SAGADropbox/data/saga_spectra_clean.fits.gz')
Explanation: Prepare catalogs
End of explanation
for h in hosts_to_target:
print('On host', h.name)
cat = h.get_sdss_catalog()
r0 = cat['r'] - cat['Ar']
cutbyrawmagcut = (r0<20.75)&(cat['r']>21)
if np.sum(cutbyrawmagcut)>0:
print('WARNING: ', np.sum(cutbyrawmagcut), 'objects have rmags in selection but are extincted too much for r>21')
pris = np.ones(len(cat))
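# everyone starts at priority 1; the negative values assigned below flag objects excluded from targeting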
# remove list
tokeep = cat['REMOVE']==-1
pris[~tokeep] = -cat['REMOVE'][~tokeep] # sets the REMOVE objects to -their remove value
remmskval = np.min(pris)-1
# remove anything in the remove list online but not in the catalog as remove
pris[~targeting.remove_targets_with_remlist(cat, h, maskonly=True, verbose='warning')&(pris>-1)] = remmskval
if np.sum(pris==remmskval) > 0:
print('Removed', np.sum(pris==remmskval), 'due to online remove list. Remmsk val:', remmskval)
photgood = ((cat['r'] < 21.) & # this is to cut the numbers a bit - more stringent mag cuts below
(cat['fibermag_r']<23) &
(cat['phot_sg']=='GALAXY'))
nearish = (cat['RHOST_KPC']<450) # again, to limit the numbers
pris[~photgood|~nearish] = 0
# pris ~-100 are removed due to already-observed
nospec = cat['ZQUALITY']<3
pris[~nospec] = -100-cat['ZQUALITY'][~nospec]
brighter = r0 < 20
fainter = (r0 < 20.75)&~brighter
goodcolors = targeting.colorcut_mask(cat, {'g-r': (None, 0.85, 2), 'r-i': (None, 0.55, 2)},deredden=True)
okcolors = targeting.colorcut_mask(cat, {'g-r': (None, 1.2), 'r-i': (None, 0.7)}, deredden=False)
inside = cat['RHOST_KPC']<300
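# assign priorities from lowest to highest so that later (better) cuts overwrite earlier ones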
pris[(pris>0)&~inside&fainter&goodcolors] = np.max(pris) + 1
pris[(pris>0)&~inside&brighter&goodcolors] = np.max(pris) + 1
pris[(pris>0)&inside&fainter&okcolors] = np.max(pris) + 1
pris[(pris>0)&inside&brighter&okcolors] = np.max(pris) + 1
pris[(pris>0)&inside&fainter&goodcolors] = np.max(pris) + 1
pris[(pris>0)&inside&brighter&goodcolors] = np.max(pris) + 1
#everything left is in pri 1
# this *shouldn't* be necessary, as ZQUALITY should be in the base catalog.
# But as a sanity check we look to see if anything in the spectral catalog is still being included
spec_this_host = spectra[spectra['HOST_NSAID']==h.nsaid]
spec_this_host = spec_this_host[np.in1d(spec_this_host['OBJID'], cat['OBJID'])]
zq = cat['ZQUALITY'].copy()
for i, zqi in zip(spec_this_host['OBJID'], spec_this_host['ZQUALITY']):
zq[cat['OBJID']==i] = zqi
if np.any(pris[zq>2]>=0):
print('POSSIBLE PROBLEM: Found some objects in spectrum list that are *not* claimed '
'as having spectra in the base catalogs. Setting them to -11x:', dict(Counter(pris[pris<-110])))
pris[zq>2] = -110 - zq[zq>2]
#de-duplicate
if len(np.unique(cat['OBJID'])) != len(cat):
_, idxs = np.unique(cat['OBJID'], return_index=True)
msk = np.ones_like(cat, dtype=bool)
msk[idxs] = 0
pris[msk] = -1000
print('WARNING: some duplicate objid found. Setting', np.sum(pris==-1000), 'dupes to pri=-1000')
cat['aat_pris'] = pris
if 'aat_pris_unjiggered' in cat.colnames:
cat.remove_column('aat_pris_unjiggered')
#informational
counter = Counter(pris)
print('Rank counts:')
for k in reversed(sorted(counter)):
print(int(k), ':', counter[k])
Explanation: Prioritize
```
x = base catalog on Dropbox
; CORRECT FOR EXTINCTION
r = x.r - x.extinction_r
i = x.i - x.extinction_i
g = x.g - x.extinction_g
; DEFINE GRI CRITERIA, ERRORS ARE SUBTRACTED IN QUADRATURE
cgr = (g - r) - 2.*sqrt(x.g_err^2 + x.r_err^2)
cri = (r - i) - 2.*sqrt(x.r_err^2 + x.i_err^2)
; SELECT TARGETS
qtarget = where(x.rhost_kpc le 300 and $ ; INSIDE RVIR
x.fibermag_r le 23 and $ ; FIBERMAG CUT
x.remove eq -1 and $ ; SHRED LIST
x.phot_sg eq 'GALAXY'and $ ; GALAXIES ONLY
cgr le 0.85 and cri le 0.55 and $ ; GRI CUT
x.zquality lt 3) ; NO SPECTRA
```
The scheme below is:
7: r<300 kpc , M_r<20, "good" colorcuts
6: r<300 kpc , 20<M_r<20.75, "good" colorcuts
5: r<300 kpc , M_r<20, "ok" colorcuts
4: r<300 kpc , 20<M_r<20.75, "ok" colorcuts
3: 450>r>300 kpc , M_r<20, "good" colorcuts
2: 450>r>300 kpc , 20<M_r<20.75, " good" colorcuts
1: everything else with r<21 (*not* M_r)
End of explanation
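# Hedged sketch (not from the original notebook): the quoted IDL gri selection
# re-expressed with numpy on base-catalog columns. The extinction columns 'Ag'/'Ai'
# and the error columns 'g_err'/'r_err'/'i_err' are assumed names; the other columns
# appear elsewhere in this notebook.
def gri_target_mask(cat):
    g = cat['g'] - cat['Ag']
    r = cat['r'] - cat['Ar']
    i = cat['i'] - cat['Ai']
    cgr = (g - r) - 2.*np.sqrt(cat['g_err']**2 + cat['r_err']**2)
    cri = (r - i) - 2.*np.sqrt(cat['r_err']**2 + cat['i_err']**2)
    return ((cat['RHOST_KPC'] <= 300) & (cat['fibermag_r'] <= 23) &
            (cat['REMOVE'] == -1) & (cat['phot_sg'] == 'GALAXY') &
            (cgr <= 0.85) & (cri <= 0.55) & (cat['ZQUALITY'] < 3))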
for h in new_targets:
print(h.name)
cat = h.get_sdss_catalog()
if 'aat_pris_unjiggered' in cat.colnames:
print('Already rejiggered', h, 'skipping (but still reporting stats)...')
pris = cat['aat_pris']
print(dict(Counter(pris[pris>0])))
print(dict(Counter(pris[pris<=0])))
continue
p = cat['PROBABILITY_CLASS1']
if np.all(p<0):
print('WARNING: host', h, 'does not have probs, so not re-jiggering based on ML probs')
continue
pris = np.array(cat['aat_pris_unjiggered' if 'aat_pris_unjiggered' in cat.colnames else 'aat_pris'])
goodcolorbright = pris == 7
goodcolorfaint = pris == 6
okcolor = (pris==5) | (pris==4)
pris[okcolor] = -0.75
pris[goodcolorfaint] = 4
pris[goodcolorbright] = 5
pris[goodcolorfaint&(p>.001)] = 6
pris[goodcolorbright&(p>.001)] = 7
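# net effect: ok-color-only targets are dropped, good-color targets keep 4/5, and those with ML prob > 0.001 move up to 6/7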
print(dict(Counter(pris[pris>0])))
print(dict(Counter(pris[pris<=0])))
cat['aat_pris_unjiggered'] = cat['aat_pris']
cat['aat_pris'] = pris
Explanation: Note: the 2 objects in OBrother with the warning have r~21, and are definitely not interesting targets
Re-jigger priorities for new targets to account for ML prob
The re-jiggered scheme is:
```
7: r<300 kpc , M_r<20, "good" colorcuts, ML>.001
6: r<300 kpc , 20<M_r<20.75, "good" colorcuts, ML>.001
5: r<300 kpc , M_r<20, "good" colorcuts, ML<.001
4: r<300 kpc , 20<M_r<20.75, "good" colorcuts, ML<.001
<3 : same as above
```
End of explanation
special_objids = []
Explanation: Some special objects
End of explanation
possible_anak_sat = SkyCoord(353.7229, 1.2064, unit=u.deg)
spectrascs = SkyCoord(spectra['ra'], spectra['dec'], unit=u.deg)
row = spectra[np.argsort(possible_anak_sat.separation(spectrascs))[0]]
special_objids.append(row['OBJID'])
row
Explanation: http://legacysurvey.org/viewer?ra=353.722358394&dec=1.20743944443
End of explanation
del special_objids[special_objids.index(row['OBJID'])]
Explanation: The above object was put as SAT==2, but is at very close z to AnaK...? r~=17.6 in DECALS. Marla says this is because it's too far from the host, so not worth special-izing
End of explanation
possible_anak_sat = SkyCoord(354.286, 0.211, unit=u.deg)
anakcat = hsd['AnaK'].get_sdss_catalog()
anakscs = SkyCoord(anakcat['ra'], anakcat['dec'], unit=u.deg)
seps = possible_anak_sat.separation(anakscs)
closest = np.argsort(seps)[0]
print(seps[closest].arcsec)
row = anakcat[closest]
special_objids.append(row['OBJID'])
row
Explanation: Risa suggests: http://legacysurvey.org/viewer?ra=354.2826&dec=0.2110&zoom=15&layer=decals-dr2
End of explanation
possible_anak_sat = SkyCoord(354.527, 0.533964, unit=u.deg)
seps = possible_anak_sat.separation(anakscs)
closest = np.argsort(seps)[0]
print(seps[closest].arcsec)
row = anakcat[closest]
special_objids.append(row['OBJID'])
row
Explanation: This is not actually quite on the center-of-light in DECALS... but it might be an HII region?
Marla wants:RA = 354.527/DEC = 0.533964
End of explanation
def find_risa_objs(stringfromrisa, h):
risa_tab = Table.read(stringfromrisa, delimiter='\t', format='ascii', names=['unk', 'ra', 'dec', 'mag', 'cand'])
risa_sc = SkyCoord(risa_tab['ra'], risa_tab['dec'], unit=u.deg)
cat = h.get_sdss_catalog()
catsc = SkyCoord(cat['ra'], cat['dec'], unit=u.deg)
idx, d2d, _ = risa_sc.match_to_catalog_sky(catsc)
assertmsg = '{} matched of {}: {}'.format(np.sum(d2d < .1*u.arcsec),len(d2d), d2d[d2d > .1*u.arcsec])
assert np.all(d2d < .1*u.arcsec), assertmsg
return cat['OBJID'][idx]
risa_145729 = """
-1 224.56410 -1.0730193 19.7973 true
-1 224.59164 -1.1261357 17.9597 true
-1 224.58605 -1.1340797 19.9805 true
-1 224.65696 -1.1021396 17.4972 true
-1 224.54478 -1.1645862 19.4057 true
-1 224.50349 -1.1783027 17.7666 true
-1 224.61258 -1.2283750 20.1190 true
-1 224.66071 -1.2407656 20.3448 true
-1 224.58210 -1.2891033 20.1278 true
-1 224.68795 -0.82928863 18.7276 true
-1 224.46354 -0.85993860 20.6228 true
-1 224.43907 -1.3290346 20.2419 true
-1 224.27041 -1.0663297 19.6382 true
-1 224.92796 -1.0868430 19.9441 true
-1 224.95218 -1.2046019 20.2506 true
-1 224.98659 -1.0996963 19.2848 true
-1 224.95028 -1.2134533 19.1667 true
-1 224.56810 -0.71035594 19.5400 true
-1 224.56710 -0.71155324 18.3361 true
-1 224.63475 -0.76637428 20.5220 true
-1 224.79342 -0.82424335 19.5245 true
-1 224.26293 -1.3190454 19.7427 true
-1 224.34037 -1.3851494 19.5061 true
-1 224.67776 -1.3717603 18.9769 true
-1 224.30819 -0.89372642 19.5476 true
-1 224.95888 -1.0081097 19.6524 true
-1 225.01145 -1.2106150 19.6745 true
-1 224.27946 -0.80572367 19.0886 true
-1 224.44473 -0.64135326 18.2103 true
-1 224.59702 -0.60626247 19.4427 true
-1 224.98059 -1.2775413 20.7447 true
-1 224.25056 -1.3830278 20.6792 true
-1 224.03729 -1.0589027 20.7401 true
-1 224.94320 -0.87332390 19.9586 true
-1 224.12169 -1.2418469 18.6920 true
-1 225.09967 -1.2117895 19.8792 true
-1 224.28313 -0.67401930 20.1558 true
-1 224.18769 -0.79627184 19.9399 true
-1 224.23305 -0.67032897 20.2131 true
-1 225.00922 -0.80628957 20.5866 true
-1 224.32848 -1.5812675 18.2125 true
-1 224.27623 -1.5292467 18.4006 true
-1 224.70055 -1.6463751 18.5479 true
-1 225.06682 -1.2727903 20.5982 true
-1 224.89664 -1.5217602 19.0338 true
-1 225.02588 -1.4044669 20.2629 true
-1 224.98083 -1.4368200 20.4261 true
-1 225.07035 -0.95870445 19.6174 true
-1 224.14144 -0.71374995 20.3682 true
-1 224.18156 -0.65458846 19.8804 true
-1 224.03748 -0.86010033 20.2428 true
-1 224.29784 -1.5985485 19.0072 true
-1 224.30080 -1.5957333 20.6291 true
-1 224.65269 -1.6814901 20.2405 true
-1 224.18598 -1.4982058 19.6720 true
-1 225.18215 -0.98714751 20.2422 true
"""[1:-1]
special_objids.extend(find_risa_objs(risa_145729, [h for h in hosts_to_target if h.nsaid==145729][0]))
risa_obrother = """
1 335.97281 -3.4295662 17.7332 true
1 335.91314 -3.5404510 19.7043 true
-1 335.77781 -3.4459616 20.1615 true
-1 335.81490 -3.6025596 20.5123 true
1 336.04145 -3.2204987 20.5081 true
-1 336.09493 -3.4649021 19.7341 true
1 335.99401 -3.7007769 20.5931 true
1 336.12273 -3.5101925 20.4868 true
1 335.72556 -3.1674595 20.6372 true
1 335.84376 -3.0261104 20.7444 true
-1 336.23396 -3.1875586 20.7117 true
1 335.55249 -3.6052065 20.0624 true
1 335.65592 -3.6558837 20.5213 true
-1 335.57983 -3.6963397 19.3788 true
-1 336.30042 -3.4636766 19.8654 true
"""[1:-1]
special_objids.extend(find_risa_objs(risa_obrother, hsd['OBrother']))
risa_catch22 = """
-1 348.62149 4.5071708 17.7932 true
-1 348.73347 4.5865011 19.4766 true
-1 348.67493 4.6123235 20.3472 true
-1 348.72361 4.4495699 20.6729 true
-1 348.72881 4.4323062 19.4804 true
-1 348.55899 4.5900220 20.2576 true
-1 348.64485 4.4040044 17.6392 true
-1 348.59640 4.3492465 20.6181 true
-1 348.68132 4.3095517 20.6868 true
-1 348.68817 4.3020869 20.7035 true
-1 348.89822 4.4892740 18.9281 true
-1 348.43132 4.7245873 19.3515 true
-1 348.51966 4.7464873 19.3880 true
-1 348.39920 4.5666321 18.2252 true
-1 348.99115 4.5918658 20.7471 true
-1 348.31622 4.3290159 19.6409 true
-1 348.87290 4.8064919 18.9324 true
-1 348.63961 4.2011104 20.5785 true
-1 348.98746 4.3261837 18.3678 true
-1 348.40085 4.8681252 17.4557 true
-1 348.64976 4.8775596 18.4048 true
-1 348.82739 4.9078081 19.3681 true
-1 348.68761 4.9226928 17.8673 true
-1 348.85846 4.8346772 19.6157 true
"""[1:-1]
special_objids.extend(find_risa_objs(risa_catch22, hsd['Catch22']))
risa_narnia = """
"""[1:-1]
special_objids.extend(find_risa_objs(risa_narnia, hsd['Narnia']))
for soid in special_objids:
for h in hosts_to_target:
cat = h.get_sdss_catalog()
if soid in cat['OBJID']:
print('Found oid', soid, 'in', h.name, 'so setting to pri=9')
pris = cat['aat_pris']
if np.any(pris[np.in1d(cat['OBJID'], soid)]<1):
print('WARNING: ', np.sum(pris[np.in1d(cat['OBJID'], soid)]<1), 'special objects in',
h.name, 'are pri<0... skipping')
cat['aat_pris'][np.in1d(cat['OBJID'], soid)&(pris>0)] = 9
break
else:
print('Could not find oid', soid, 'anywhere')
Explanation: That one seems good
Risa's eye-balled selections
End of explanation
# commented because it should only be run once
!ls aat_targets_jun2015/*_sky.dat
#!cp aat_targets_jun2015/*_sky.dat aat_targets_jul2016/
Explanation: Make the master catalogs
Sky positions
For some hosts we already have sky positions from the last run, so copy those over
End of explanation
#Identify sky regions for each host and write out to separate files -
from os.path import exists
for h in hosts_to_target:
outfn = 'aat_targets_jul2016/' + h.name.replace(' ','_') + '_sky.dat'
if exists(outfn):
print(outfn, 'exists, not overwriting')
else:
try:
h.get_usnob_catalog()
usnocat = None
except IOError:
# this is currently hanging as the usno server is down...
#print('Downloading USNO B1 catalog for', h.name)
#h.usnob_environs_query(dl=True)
#usnocat = None
usnocat = False
aat.select_sky_positions(h, nsky=100, outfn=outfn, rad=1*u.deg, usnocat=usnocat)
aat.imagelist_fld_targets(outfn, ttype='sky', n=np.inf)
!subl $outfn
raw_input('Wrote ' + outfn + '. Press enter to continue onto the next one once '
'you remove bad entries from this file.')
Explanation: For the remainder, generate and visually inspect them one at a time. Edit the file to remove any that are not good sky positions
End of explanation
for h in hosts_to_target:
cat = h.get_sdss_catalog()
pris = cat['aat_pris']
guides = aat.select_guide_stars_sdss(cat)
calibs = aat.select_flux_stars(cat, onlyoutside=300*u.kpc)
skyradec = 'aat_targets_jul2016/{0}_sky.dat'.format(h.name)
aat.produce_master_fld(h, datetime.date(2016, 7, 28), cat[pris>0], pris=pris[pris>0].astype(int),
guidestars=guides, fluxstars=calibs,skyradec=skyradec,
outfn='aat_targets_jul2016/{}_master.fld'.format(h.name),
randomizeorder=True, inclhost=False)
Explanation: Actually generate the master catalogs
End of explanation
def do_subsampling(h, finum, n_in_pri, **kwargs):
assert h in hosts_to_target
n_in_pri = dict(n_in_pri) #copy just in case
for i in range(9):
if i+1 not in n_in_pri:
n_in_pri[i+1] = np.inf
fnbase = 'aat_targets_jul2016/' + h.name
fnmaster = fnbase + '_master.fld'
fnconfig = kwargs.pop('fnconfig', fnbase + '_{0}.fld'.format(finum))
print('Writing', fnconfig, 'from master', fnmaster)
kwargs.setdefault('nflux', 5)
kwargs.setdefault('nguides', 30)
kwargs.setdefault('fieldname', str(finum))
if 'listorem' not in kwargs:
listorem = []
for i in range(1, finum):
globpat = fnbase + '_' + str(i) + '*.lis'
listorem.extend(glob(globpat))
assertmsg = 'Only got ' + str(len(listorem)) +' .lis file(s):'+str(listorem)+', expected ' + str(finum-1)
assert len(listorem) == (finum-1), assertmsg
kwargs['listorem'] = listorem
aat.subsample_from_master_fld(fnmaster, fnconfig, n_in_pri, **kwargs)
return fnconfig
Explanation: Observations
We sub-sample from the master lists generated above for individual configurations.
End of explanation
def make_logtab(logs, skip_res=[]):
"""Generates a table with the AAT observing logs.
``logs`` can be a filename or list of lines from fitsheader
"""
logtab = Table()
logtab['num'] = [1]
logtab['ccd'] = [2]
logtab['UTDATE'] = ['2015:06:19']
logtab['UTSTART'] = ['08:56:19']
logtab['OBJECT'] = ['a'*100]
logtab['TOTALEXP'] = ['a'*20]
logtab['RUNCMD'] = ['a'*20]
logtab['GRATID'] = ['a'*20]
logtab['SOURCE'] = ['plate 0 ']
logtab['MEANRA'] = [1.1]
logtab['MEANDEC'] = [1.1]
logtab['CFG_FILE'] = ['Odyssey_1p0xxxxxxxxxx.sds']
logtab = logtab[1:]
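# the dummy row above only pins down the column dtypes; drop it so the log starts empty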
def add_logcol(accumulated_lines):
l = accumulated_lines[0]
hdunum = int(l.split()[2])
hdupath = l.split()[4][:-1]
if hdunum != 0:
return
for rex in skip_res:
if re.match(rex, hdupath):
print('Skipping', hdupath, 'because it has', rex)
return
items = {'num': int(hdupath.split('/')[-1].split('.')[0][5:]),
'ccd': int(hdupath.split('/')[-2][4:])}
for l in accumulated_lines[1:]:
if '=' in l:
nm, data = l.split('=')
nm = nm.strip()
data = data.split('/')[:2]
if len(data)> 1:
comment = data[1]
data = data[0].replace("'", '').strip()
if nm in logtab.colnames:
items[nm] = data
logtab.add_row(items)
accumulated_lines = None
if isinstance(logs, basestring):
with open(logs) as f:
loglines = list(f)
else:
loglines = logs
for l in loglines:
if l.startswith('# HDU'):
if accumulated_lines is not None:
add_logcol(accumulated_lines)
accumulated_lines = [l]
elif l.strip() == '':
continue
elif accumulated_lines is not None:
accumulated_lines.append(l)
return logtab
sshtarget = 'visitor3@aatlxa'
Explanation: This function parses a dump of all the image headers from 'fitsheader ccd_?/.fits'
End of explanation
fnconfig = do_subsampling(hsd['Dune'], 1, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['Odyssey'], 1, {1:200, 2:200, 3:300})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['OBrother'], 1, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['Catch22'], 1, {1:100, 2:50, 3:100})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['AnaK'], 1, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
Explanation: Night 1
Generate configuration field files
End of explanation
# use this to get the log data from the remote machine
rem_fitsheader_loc = '/home/visitor3/miniconda2/bin/fitsheader'
datadir = '/data_lxy/aatobs/OptDet_data/160728'
fitsheader_output = !ssh $sshtarget "$rem_fitsheader_loc $datadir/ccd_?/*.fits"
# use this to use the local file to make the log
fitsheader_output = !fitsheader aat_data/28jul2016/ccd_?/*.fits
logtab = make_logtab(fitsheader_output, skip_res=['.*temp.*'])
logtab[logtab['ccd']==1].show_in_notebook(display_length=20)
comments_jul28 = """
* At start of night weather clear, but high humidity and possible fog
* The Dune_1 observations have the blue side centered 20 angstroms further to the red than the remaining
observations. The initial setup provides a bit more overlap but after inspecting a bit we concluded that it
would be better to tryto make sure 3727 falls into the image at the blue side
* start of Dune_1 : seeing ~1.5-2"
* start of Odyssey_1 : seeing ~2-3"
* start of OBrother_1: seeing ~4-5"
* near end of OBrother_1: seeing ~2.5". Sudden improvement in weather, humidity fell/fog disappeared,
* start of Catch22_1: seeing ~3"
* start of AnaK_1: seeing ~1.9"
"""[1:-1]
display.Markdown(comments_jul28)
logfn = 'aat_data/28jul2016/log_jul28'
logtab.write(logfn,format='ascii')
with open(logfn, 'a') as f:
f.write('\nComments:\n')
f.write(comments_jul28)
Explanation: Make observing log
End of explanation
fnconfig = do_subsampling(hsd['Dune'], 2, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['Gilgamesh'], 1, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['OBrother'], 2, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['Catch22'], 2, {1:50, 2:50, 3:200})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['Narnia'], 1, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
Explanation: Night 2
Note: master catalogs changed due to Risa's custom targets, so some added pri=9 objects appear even in repeats
End of explanation
# use this to use the local file to make the log
fitsheader_output = !fitsheader aat_data/29jul2016/ccd_?/*.fits
logtab = make_logtab(fitsheader_output, skip_res=['.*temp.*'])
logtab[logtab['ccd']==1].show_in_notebook(display_length=20)
comments_jul29 = """
* start of Dune_2 : seeing ~1.3"
* start of Gilgamesh_1 : seeing ~1.6"
* start of Odyssey_1a : seeing ~1.4"
* start of OBrother_2 : seeing ~1.7"
* temperature rose dramatically near end of OBrother_2 - might lead to variable seeing
* start of Catch22_2: seeing ~2.3"
* start of Narnia_1: seeing ~1.5"
"""[1:-1]
display.Markdown(comments_jul29)
logfn = 'aat_data/29jul2016/log_jul29'
logtab.write(logfn,format='ascii')
with open(logfn, 'a') as f:
f.write('\nComments:\n')
f.write(comments_jul29)
Explanation: Make observing log
End of explanation
hs = [h for h in hosts_to_target if h.nsaid==145729]
assert len(hs) == 1
fnconfig = do_subsampling(hs[0], 1, {1:25, 2:25, 3:50, 4:100})
!scp $fnconfig "$sshtarget:configure/"
Explanation: Nights 3+
These nights are by the team that's doing the time-trade. They will be primarily second-half, but with a little time in the first half
End of explanation
hs = [h for h in hosts_to_target if h.nsaid==145729]
assert len(hs) == 1
fnconfig = do_subsampling(hs[0], 2, {1:25, 2:25, 3:50, 4:100},
listorem=['aat_targets_jul2016/NSA145729_1_p0.lis'],
fnconfig='aat_targets_jul2016/NSA145729_2_after1p0.fld')
!scp $fnconfig "$sshtarget:configure/"
hs = [h for h in hosts_to_target if h.nsaid==145729]
assert len(hs) == 1
fnconfig = do_subsampling(hs[0], 2, {1:25, 2:25, 3:50, 4:100},
listorem=['aat_targets_jul2016/NSA145729_1_p1.lis'],
fnconfig='aat_targets_jul2016/NSA145729_2_after1p1.fld')
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['Catch22'], 3, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
fnconfig = do_subsampling(hsd['Narnia'], 2, {1:100, 2:100, 3:300})
!scp $fnconfig "$sshtarget:configure/"
hs = [h for h in hosts_to_target if h.nsaid==21709]
assert len(hs) == 1
fnconfig = do_subsampling(hs[0], 1, {1:25, 2:25, 3:50, 4:100})
!scp $fnconfig "$sshtarget:configure/"
Explanation: NSA 145729 is being treated a bit oddly because there might be two back-to-back-ish observations. So we're making one config in advance for both the p0 and p1 configurations, and making fld's for config 2 assuming either of those.
End of explanation
# fitsheader run on the output downloaded to another computer
with open('/Users/erik/tmp/fitsheader_jul30') as f:
fitsheader_output = list(f)
logtab = make_logtab(fitsheader_output, skip_res=['.*temp.*'])
logtab[logtab['ccd']==1].show_in_notebook(display_length=20)
logfn = 'aat_data/30jul2016/log_jul30'
logtab.write(logfn,format='ascii')
Explanation: Logs for observations
End of explanation
!scp "$sshtarget:configure/*.sds" aat_targets_jul2016/
# *_* because otherwise the collision matrix list comes through too
!scp "$sshtarget:configure/*_*.lis" aat_targets_jul2016/
Explanation: Utilities for during the night
Pull in as-generated files from remote
Note that this assumes the .lis files were generated after the configuration was made. If need be these can be re-created from the .sds files by starting up configure and exporting the allocated fibers list.
End of explanation
listab, lisscs, lisheader = aat.load_lis_file('aat_targets_jul2016/Narnia_1_p1.lis')
guidemsk = listab['codes']=='F'
names = ['{0[ids]}_f#={0[fibnums]}'.format(row) for row in listab[guidemsk]]
print(targeting.sampled_imagelist(lisscs[guidemsk], None, names=names))
Explanation: Inspect guider fields as needed
End of explanation
import astroplan
Explanation: Planning
End of explanation
#if online
ufo = urllib2.urlopen('https://docs.google.com/spreadsheet/ccc?key=1b3k2eyFjHFDtmHce1xi6JKuj3ATOWYduTBFftx5oPp8&output=csv')
hosttab = QTable.read(ufo.read(), format='csv')
ufo.close()
#if offline
#hosttab = Table.read('SAGADropbox/hosts/host_catalog_flag0.csv')
hostscs = SkyCoord(u.Quantity(hosttab['RA'], u.deg),
u.Quantity(hosttab['Dec'], u.deg),
distance=u.Quantity(hosttab['distance'], u.Mpc))
allspec = Table.read('/Users/erik/Dropbox/SAGA/data/allspectaken_v5.fits.gz')
sagaobsed = allspec[~((allspec['TELNAME']=='NSA')|
(allspec['TELNAME']=='SDSS')|
(allspec['TELNAME']=='GAMA'))]
sagaobsed_nsaids = np.unique(sagaobsed['HOST_NSAID'])
#UTC time from 8:35-19:35 is AAT 18 deg window
nighttimes = Time('2016-7-28 8:45:00') + np.arange(12)*u.hour
aao = EarthLocation(lon='149:3:57.9', lat='-31:16:37.3')
aao_frame = AltAz(obstime=nighttimes, location=aao)
sunaao = get_sun(nighttimes).transform_to(aao_frame)
np.max(sunaao.alt.value)
seczs = []
for sc in hostscs:
az = sc.transform_to(aao_frame)
seczs.append(az.secz)
seczs = np.array(seczs)
hrsvis = np.sum((1<=seczs)&(seczs<1.75),axis=1)
visenough = hrsvis>2
aaoobs = astroplan.Observer(aao, 'Australia/NSW')
midnight = aaoobs.midnight(Time('2016-7-28'))
Explanation: Find possible targets
End of explanation
up_times = {}
hoststoshow = hosttab[visenough]
hosts_to_target_nsaids = []
for h in hosts_to_target:
hosts_to_target_nsaids.append(h.nsaid)
if h.nsaid not in hoststoshow['NSAID']:
print('adding', h)
hoststoshow.add_row(None)
hoststoshow['NSAID'][-1] = h.nsaid
hoststoshow['RA'][-1] = h.ra
hoststoshow['Dec'][-1] = h.dec
with open('aat_targets_jul2016/aattargs_iobserve.dat', 'w') as f:
for host in hoststoshow:
already_obs = host['NSAID'] in sagaobsed_nsaids
name = 'NSA'+str(host['NSAID'])
for nm, val in hsd.items():
if val.nsaid == host['NSAID']:
name = nm
if nm.startswith('NSA'):
name = name+'_obsed'
break
f.write(name.replace(' ','_'))
if already_obs:
f.write('-observed')
f.write(' ')
f.write(str(host['RA']) + ' ')
f.write(str(host['Dec']) + '\n')
targ = astroplan.FixedTarget(SkyCoord(host['RA'], host['Dec'], unit=u.deg), name)
tpl = (name, host['NSAID'], host['RA'], host['Dec'])
transit = aaoobs.target_meridian_transit_time(midnight, targ)
up_times[transit.jd] = tpl
timestoplot = transit + np.linspace(-6, 6, 100)*u.hour
taa = aaoobs.altaz(timestoplot, targ)
msk = taa.secz >=1
color = 'g' if already_obs else 'k'
color = 'r' if host['NSAID'] in hosts_to_target_nsaids else color
plt.plot(timestoplot.plot_date[msk], taa.secz[msk], c=color)
plt.text(transit.plot_date, aaoobs.altaz(transit, targ).secz, name, ha='center', color=color)
t0 = aaoobs.sun_rise_time(midnight, 'previous')
t1 = aaoobs.sun_set_time(midnight, 'previous')
t2 = aaoobs.twilight_evening_civil(midnight, 'previous')
t3 = aaoobs.twilight_evening_nautical(midnight, 'previous')
t4 = aaoobs.twilight_evening_astronomical(midnight, 'previous')
t5 = aaoobs.twilight_morning_astronomical(midnight, 'next')
t6 = aaoobs.twilight_morning_nautical(midnight, 'next')
t7 = aaoobs.twilight_morning_civil(midnight, 'next')
t8 = aaoobs.sun_rise_time(midnight, 'next')
t9 = aaoobs.sun_set_time(midnight, 'next')
plt.fill_between([t0.plot_date,t1.plot_date],1,3, lw=0, facecolor='y', alpha=.9)
plt.fill_between([t1.plot_date,t2.plot_date],1,3, lw=0, facecolor='y', alpha=.75)
plt.fill_between([t2.plot_date,t3.plot_date],1,3, lw=0, facecolor='y', alpha=.5)
plt.fill_between([t3.plot_date,t4.plot_date],1,3, lw=0, facecolor='y', alpha=.25)
plt.fill_between([t5.plot_date,t6.plot_date],1,3, lw=0, facecolor='y', alpha=.25)
plt.fill_between([t6.plot_date,t7.plot_date],1,3, lw=0, facecolor='y', alpha=.5)
plt.fill_between([t7.plot_date,t8.plot_date],1,3, lw=0, facecolor='y', alpha=.75)
plt.fill_between([t8.plot_date,t9.plot_date],1,3, lw=0, facecolor='y', alpha=.9)
plt.axvline(midnight.plot_date, ls=':', c='k')
plt.gca().xaxis_date(aaoobs.timezone)
plt.xlim(t1.plot_date-.05, t8.plot_date+.05)
plt.axhline(1/np.cos(65*u.deg), c='k', ls='--') #~AAT limit
plt.ylim(2.5,1)
plt.tight_layout()
#NSA urls
for jd in sorted(up_times):
name, nsaid, ra, dec = up_times[jd]
print(name, 'http://www.nsatlas.org/getAtlas.html?search=nsaid&nsaID={}&submit_form=Submit'.format(nsaid))
# DECALS URLs
for jd in sorted(up_times):
name, nsaid, ra, dec = up_times[jd]
print(name, 'http://legacysurvey.org/viewer?ra={}&dec={}&zoom=8'.format(ra, dec))
Explanation: Airmass chart
End of explanation
sagaobsed[sagaobsed['HOST_NSAID']==150307].show_in_notebook()
Explanation: Notes:
* NSA165082: in DECALS DR2, only z-band, looks like a group, elliptical
* NSA145398: in DECALS DR2, z-band and a bit of g and r, elliptical, otherwise good
* NSA145729: in DECALS DR2, z-band and some r, OK but somewhat near SDSS edge?
* NSA145879: in DECALS DR2, z-band and some g and r, elliptical, otherwise good
* NSA166141: in DECALS DR2, z-band and some g and r, otherwise good
* NSA149977: in DECALS DR2, z-band and some g and r, otherwise good
* NSA150578: in DECALS DR2, only z-band, otherwise good
* NSA153017: in DECALS DR2, only z-band, otherwise good
* Bandamanna: in DECALS DR2, only z-band, otherwise good
* NSA127226: otherwise good
* NSA129237: in DECALS DR2, only z-band, otherwise good
* NSA129387: otherwise good
* NSA130133: elliptical? otherwise good
* NSA130625: S0? otherwise good
* NSA131531: near SDSS edge, otherwise good
End of explanation
for h in hosts_to_target:
fnbase = 'aat_targets_jul2016/test_runs/' + h.name
finum = 1
fnmaster = fnbase.replace('test_runs/', '') + '_master.fld'
fnconfig = fnbase + '_test_{0}.fld'.format(finum)
print('Writing', fnconfig, 'from master', fnmaster)
listorem = [fnbase + '_test_' + str(i) + '.lis' for i in range(1, finum)]
aat.subsample_from_master_fld(fnmaster, fnconfig,
{1:30, 2:30, 3:30, 4:np.inf, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux
nflux=5, nguides=30,
fieldname=str(finum), listorem=listorem)
print('')
for h in hosts_to_target:
fnbase = 'aat_targets_jul2016/test_runs/' + h.name
finum = 2
fnmaster = fnbase.replace('test_runs/', '') + '_master.fld'
fnconfig = fnbase + '_test_{0}.fld'.format(finum)
print('Writing', fnconfig, 'from master', fnmaster)
listorem = [fnbase + '_test_' + str(i) + '.lis' for i in range(1, finum)]
aat.subsample_from_master_fld(fnmaster, fnconfig,
{1:30, 2:30, 3:30, 4:np.inf, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux
nflux=5, nguides=30,
fieldname=str(finum), listorem=listorem)
print('')
for h in hosts_to_target:
fnbase = 'aat_targets_jul2016/test_runs/' + h.name
finum = 3
fnmaster = fnbase.replace('test_runs/', '') + '_master.fld'
fnconfig = fnbase + '_test_{0}.fld'.format(finum)
print('Writing', fnconfig, 'from master', fnmaster)
listorem = [fnbase + '_test_' + str(i) + '.lis' for i in range(1, finum)]
aat.subsample_from_master_fld(fnmaster, fnconfig,
{1:30, 2:30, 3:30, 4:np.inf, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux
nflux=5, nguides=30,
fieldname=str(finum), listorem=listorem)
print('')
Explanation: Note: NSA150307 was called "Iliad" when it was observed on WIYN
Mock sets of configurations
End of explanation
h = hsd['Catch22']
fnbase = 'aat_targets_jul2016/test_runs/' + h.name
finum = 2
fnmaster = fnbase.replace('test_runs/', '') + '_master.fld'
fnconfig = fnbase + '_test_{0}.fld'.format(finum)
print('Writing', fnconfig, 'from master', fnmaster)
listorem = [fnbase + '_test_' + str(i) + '.lis' for i in range(1, finum)]
aat.subsample_from_master_fld(fnmaster, fnconfig,
{1:30, 2:30, 3:30, 4:np.inf, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux
nflux=5, nguides=30,
fieldname=str(finum), listorem=listorem)
h = hsd['Catch22']
fnbase = 'aat_targets_jul2016/test_runs/' + h.name
finum = 3
fnmaster = fnbase.replace('test_runs/', '') + '_master.fld'
fnconfig = fnbase + '_test_{0}.fld'.format(finum)
print('Writing', fnconfig, 'from master', fnmaster)
listorem = [fnbase + '_test_' + str(i) + '.lis' for i in range(1, finum)]
aat.subsample_from_master_fld(fnmaster, fnconfig,
{1:30, 2:30, 3:30, 4:np.inf, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux
nflux=5, nguides=30,
fieldname=str(finum), listorem=listorem)
h = hsd['Catch22']
fnbase = 'aat_targets_jul2016/test_runs/' + h.name
finum = 4
fnmaster = fnbase.replace('test_runs/', '') + '_master.fld'
fnconfig = fnbase + '_test_{0}.fld'.format(finum)
print('Writing', fnconfig, 'from master', fnmaster)
listorem = [fnbase + '_test_' + str(i) + '.lis' for i in range(1, finum)]
aat.subsample_from_master_fld(fnmaster, fnconfig,
{1:30, 2:30, 3:30, 4:np.inf, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux
nflux=5, nguides=30,
fieldname=str(finum), listorem=listorem)
Explanation: Basically there after 2 observations of Narnia and OBrother. AnaK and Dune 1 or maybe 2, Gilg/Ody 1.
Guess: Catch 22 requires 2 or 3
NSA 145729 and maybe 165082?
Catch22 added later
End of explanation
logtab2015 = make_logtab('aat_targets_jul2016/alljun_2015_aaomega_headers')
logtab2015[(logtab2015['ccd']==1)].show_in_notebook(display_length=20)
Explanation: Log from last run
End of explanation |
9,889 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
检索,查询数据
这一节学习如何检索pandas数据。
Step1: Python和Numpy的索引操作符[]和属性操作符‘.’能够快速检索pandas数据。
然而,这两种方式的效率在pandas中可能不是最优的,我们推荐使用专门优化过的pandas数据检索方法。而这些方法则是本节要介绍的。
多种索引方式
pandas支持三种不同的索引方式:
* .loc 是基本的基于label的,当然也可以和一个boolean数组一起使用。
* .iloc 是基本的基于整数位置(从0到axis的length-1)的,当然也可以和一个boolean数组一起使用。当提供检索的index越界时会有IndexError错误,注意切片索引(slice index)允许越界。
* .ix 支持基于label和整数位置混合的数据获取方式。默认是基本label的. .ix是最常用的方式,它支持所有.loc和.iloc的输入。如果提供的是纯label或纯整数索引,我们建议使用.loc或 .iloc。
以 .loc为例看一下使用方式:
对象类型 | Indexers
Series | s.loc[indexer]
DataFrame | df.loc[row_indexer, column_indexer]
Panel | p.loc[item_indexer, major_indexer, minor_indexer]
最基本的索引和选择
最基本的选择数据方式就是使用[]操作符进行索引,
对象类型 | Selection | 返回值类型
Series | series[label],这里的label是index名 | 常数
DataFrame| frame[colname],使用列名 | Series对象,相应的colname那一列
Panel | panel[itemname] | DataFrame对象,相应的itemname那一个
下面用示例展示一下
Step2: 我们使用最基本的[]操作符
Step3: Series使用index索引
Step4: 也可以给[]传递一个column name组成的的list,形如df[[col1,col2]], 如果给出的某个列名不存在,会报错
Step5: 通过属性访问 把column作为DataFrame对象的属性
可以直接把Series的index、DataFrame中的column、Panel中的item作为这些对象的属性使用,然后直接访问相应的index、column、item
Step6: 注意:使用属性和[] 有一点区别:
如果要新建一个column,只能使用[]
毕竟属性的含义就是现在存在的!不存在的列名当然不是属性了
You can use attribute access to modify an existing element of a Series or column of a DataFrame, but be careful; if you try to use attribute access to create a new column, it fails silently, creating a new attribute rather than a new column.
使用属性要注意的:
* 如果一个已经存在的函数和列名相同,则不存在相应的属性哦
* 总而言之,属性的适用范围要比[]小
切片范围 Slicing ranges
可以使用 [] 还有.iloc切片,这里先介绍使用[]
对于Series来说,使用[]进行切片就像ndarray一样,
Step7: []不但可以检索,也可以赋值
Step8: 对于DataFrame对象来说,[]操作符按照行进行切片,非常有用。
Step9: 使用Label进行检索
警告:
.loc要求检索时输入必须严格遵守index的类型,一旦输入类型不对,将会引起TypeError。
Step10: 输入string进行检索没问题
Step11: 细心地你一定发现了,index='20160104'那一行也被检索出来了,没错,loc检索时范围是闭集合[start,end].
整型可以作为label检索,这是没问题的,不过要记住此时整型表示的是label而不是index中的下标!
.loc操作是检索时的基本操作,以下输入格式都是合法的:
* 一个label,比如:5、'a'. 记住这里的5表示的是index中的一个label而不是index中的一个下标。
* label组成的列表或者数组比如['a','b','c']
* 切片,比如'a'
Step12: loc同样支持赋值操作
Step13: 再来看看DataFramed的例子
Step14: 使用切片检索
Step15: 使用布尔数组检索
Step16: 得到DataFrame中的某一个值, 等同于df1.get_value('a','A')
Step17: 根据下标进行检索 Selection By Position
pandas提供了一系列的方法实现基于整型的检索。语义和python、numpy切片几乎一样。下标同样都是从0开始,并且进行的是半闭半开的区间检索[start,end)。如果输入 非整型label当做下标进行检索会引起IndexError。
.iloc的合法输入包括:
* 一个整数,比如5
* 整数组成的列表或者数组,比如[4,3,0]
* 整型表示的切片,比如1
Step18: iloc同样也可以进行赋值
Step19: DataFrame的示例
Step20: 进行行和列的检索
Step21: 注意下面两个例子的区别:
Step22: 如果切片检索时输入的范围越界,没关系,只要pandas版本>=v0.14.0, 就能如同Python/Numpy那样正确处理。
注意:仅限于 切片检索
Step23: 上面说到,这种优雅处理越界的能力仅限于输入全是切片,如果输入是越界的 列表或者整数,则会引起IndexError
Step24: 输入有切片,有整数,如果越界同样不能处理
Step25: 选择随机样本 Selecting Random Samples
使用sample()方法能够从行或者列中进行随机选择,适用对象包括Series、DataFrame和Panel。sample()方法默认对行进行随机选择,输入可以是整数或者小数。
Step26: 也可以输入小数,则会随机选择N*frac个样本, 结果进行四舍五入
Step27: sample()默认进行的无放回抽样,可以利用replace=True参数进行可放回抽样。
Step28: 默认情况下,每一行/列都被等可能的采样,如果你想为每一行赋予一个被抽样选择的权重,可以利用weights参数实现。
注意:如果weights中各概率相加和不等于1,pandas会先对weights进行归一化,强制转为概率和为1!
Step29: 注意:由于sample默认进行的是无放回抽样,所以输入必须n<=行数,除非进行可放回抽样。
Step30: 如果是对DataFrame对象进行有权重采样,一个简单 的方法是新增一列用于表示每一行的权重
Step31: 对列进行采样, axis=1
Step32: 我们也可以使用random_state参数 为sample内部的随机数生成器提供种子数。
Step33: 注意下面两个示例,输出是相同的,因为使用了相同的种子数
Step34: 使用赋值的方式扩充对象 Setting With Enlargement
用.loc/.ix/[]对不存在的键值进行赋值时,将会导致在对象中添加新的元素,它的键即为赋值时不存在的键。
对于Series来说,这是一种有效的添加操作。
Step35: DataFrame可以在行或者列上扩充数据
Step36: 标量值的快速获取和赋值
如果仅仅想获取一个元素,使用[]未免太繁重了。pandas提供了快速获取一个元素的方法:at和iat. 适用于Series、DataFrame和Panel。
如果loc方法,at方法的合法输入是label,iat的合法输入是整型。
Step37: 也可以进行赋值操作
Step38: 布尔检索 Boolean indexing
另一种常用的操作是使用布尔向量过滤数据。运算符有三个
Step39: DataFrame示例:
Step40: 利用列表解析和map方法能够产生更加复杂的选择标准。
Step41: 结合loc、iloc等方法可以检索多个坐标下的数据.
Step42: 使用isin方法检索 Indexing with isin
isin(is in)
对于Series对象来说,使用isin方法时传入一个列表,isin方法会返回一个布尔向量。布尔向量元素为1的前提是列表元素在Series对象中存在。看起来比较拗口,还是看例子吧:
Step43: Index对象中也有isin方法.
Step44: DataFrame同样有isin方法,参数是数组或字典。二者的区别看例子吧:
Step45: 输入一个字典的情形:
Step46: 结合isin方法和any() all()可以对DataFrame进行快速查询。比如选择每一列都符合标准的行
Step47: where()方法 The where() Method and Masking
使用布尔向量对Series对象查询时通常返回的是对象的子集。如果想要返回的shape和原对象相同,可以使用where方法。
使用布尔向量对DataFrame对象查询返回的shape和原对象相同,这是因为底层用的where方法实现。
Step48: 使用where方法
Step49: where方法还有一个可选的other参数,作用是替换返回结果中是False的值,并不会改变原对象。
Step50: 你可能想基于某种判断条件来赋值。一种直观的方法是:
Step51: 默认情况下,where方法并不会修改原始对象,它返回的是一个修改过的原始对象副本,如果你想直接修改原始对象,方法是将inplace参数设置为True
Step52: 对齐
where方法会将输入的布尔条件对齐,因此允许部分检索时的赋值。
Step53: mask
Step54: query()方法 The query() Method (Experimental)
DataFrame对象拥有query方法,允许使用表达式检索。
比如,检索列'b'的值介于列‘a’和‘c’之间的行。
注意: 需要安装numexptr。
Step55: MultiIndex query() 语法
对于DataFrame对象,可以使用MultiIndex,如同操作列名一样。
Step56: 如果index没有名字,可以给他们命名
Step57: ilevl_0意思是 0级index。
query() 用例 query() Use Cases
一个使用query()的情景是面对DataFrame对象组成的集合,并且这些对象有共同的的列名,则可以利用query方法对这个集合进行统一检索。
Step58: Python中query和pandas中query语法比较 query() Python versus pandas Syntax Comparison
Step59: query()可以去掉圆括号, 也可以用and 代替&运算符
Step60: in 和not in 运算符 The in and not in operators
query()也支持Python中的in和not in运算符,实际上是底层调用isin
Step61: ==和列表对象一起使用 Special use of the == operator with list objects
可以使用==/!=将列表和列名直接进行比较,等价于使用in/not in.
三种方法功能等价: ==/!= VS in/not in VS isin()/~isin()
Step62: 布尔运算符 Boolean Operators
可以使用not或者~对布尔表达式进行取非。
Step63: 表达式任意复杂都没关系。
Step64: query()的性能
DataFrame.query()底层使用numexptr,所以速度要比Python快,特别时当DataFrame对象非常大时。
重复数据的确定和删除 Duplicate Data
如果你想确定和去掉DataFrame对象中重复的行,pandas提供了两个方法:duplicated和drop_duplicates. 两个方法的参数都是列名。
* duplicated 返回一个布尔向量,长度等于行数,表示每一行是否重复
* drop_duplicates 则删除重复的行
默认情况下,首次遇到的行被认为是唯一的,以后遇到内容相同的行都被认为是重复的。不过两个方法都有一个keep参数来确定目标行是否被保留。
* keep='first'(默认):标记/去掉重复行除了第一次出现的那一行
* keep='last'
Step65: 可以传递列名组成的列表
Step66: 也可以检查index值是否重复来去掉重复行,方法是Index.duplicated然后使用切片操作(因为调用Index.duplicated会返回布尔向量)。keep参数同上。
Step67: 形似字典的get()方法
Serires, DataFrame和Panel都有一个get方法来得到一个默认值。
Step68: select()方法 The select() Method
Series, DataFrame和Panel都有select()方法来检索数据,这个方法作为保留手段通常其他方法都不管用的时候才使用。select接受一个函数(在label上进行操作)作为输入返回一个布尔值。
Step69: lookup()方法 The lookup()方法
输入行label和列label,得到一个numpy数组,这就是lookup方法的功能。
Step70: Index对象 Index objects
pandas中的Index类和它的子类可以被当做一个序列可重复集合(ordered multiset),允许数据重复。然而,如果你想把一个有重复值Index对象转型为一个集合这是不可以的。创建Index最简单的方法就是通过传递一个列表或者其他序列创建。
Step71: 还可以个Index命名
Step72: 返回视图VS返回副本 Returning a view versus a copy
当对pandas对象赋值时,一定要注意避免链式索引(chained indexing)。看下面的例子:
Step73: 比较下面两种访问方式:
Step74: 上面两种方法返回的结果抖一下,那么应该使用哪种方法呢?答案是我们更推荐大家使用方法二。
dfmi['one']选择了第一级列然后返回一个DataFrame对象,然后另一个Python操作dfmi_with_one['second']根据'second'检索出了一个Series。对pandas来说,这两个操作是独立、有序执行的。而.loc方法传入一个元组(slice(None),('one','second')),pandas把这当作一个事件执行,所以执行速度更快。
为什么使用链式索引赋值为报错?
刚才谈到不推荐使用链式索引是出于性能的考虑。接下来从赋值角度谈一下不推荐使用链式索引。首先,思考Python怎么解释执行下面的代码?
Step75: 但下面的代码解释后结果却不一样:
Step76: 看到__getitem__了吗?除了最简单的情况,我们很难预测他到底返回的是视图还是副本(哲依赖于数组的内存布局,这是pandas没有硬性要求的),因此不推荐使用链式索引赋值!
而dfmi.loc.__setitem__直接对dfmi进行操作。
有时候明明没有使用链式索引,也会引起SettingWithCopy警告,这是Pandas设计的bug~
Step77: 链式索引中顺序也很重要
此外,在链式表达式中,不同的顺序也可能导致不同的结果。这里的顺序指的是检索时行和列的顺序。
Step78: 正确的方式是:老老实实使用.loc | Python Code:
import numpy as np
import pandas as pd
Explanation: Indexing and selecting data
This section covers how to index and query data in pandas.
End of explanation
dates = pd.date_range('1/1/2000', periods=8)
dates
df = pd.DataFrame(np.random.randn(8,4), index=dates, columns=list('ABCD'))
df
panel = pd.Panel({'one':df, 'two':df-df.mean()})
panel
Explanation: Python's and NumPy's indexing operator [] and attribute operator '.' provide quick access to pandas data structures.
However, they may not be the most efficient way to access data in pandas, so the specially optimized pandas indexing methods are recommended instead; those methods are what this section introduces.
Different choices for indexing
pandas supports three different kinds of indexing:
* .loc is primarily label based, but may also be used with a boolean array.
* .iloc is primarily integer-position based (from 0 to length-1 of the axis), but may also be used with a boolean array. It raises IndexError when a requested index is out of bounds, except that slice indexers are allowed to go out of bounds.
* .ix supports mixed label-based and integer-position-based access. It defaults to label based. .ix is the most commonly used accessor and accepts all the inputs of .loc and .iloc; for purely label-based or purely integer-based indexing, .loc or .iloc is recommended instead.
Taking .loc as an example, the usage pattern is:
Object type | Indexers
Series | s.loc[indexer]
DataFrame | df.loc[row_indexer, column_indexer]
Panel | p.loc[item_indexer, major_indexer, minor_indexer]
Basics of indexing and selection
The most basic way to select data is indexing with the [] operator:
Object type | Selection | Return value type
Series | series[label], where label is an index label | scalar value
DataFrame | frame[colname], using a column name | Series corresponding to that column
Panel | panel[itemname] | DataFrame corresponding to that item
The examples below demonstrate this.
End of explanation
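# Quick illustrative example (assumed, not part of the original tutorial):
# label-based vs. position-based access on a small Series.
s_demo = pd.Series([10, 20, 30], index=['a', 'b', 'c'])
s_demo.loc['b']    # by label -> 20
s_demo.iloc[1]     # by integer position -> 20
s_demo['b']        # plain [] also accepts the label here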
s = df['A'] #使用列名
s#返回的是 Series
Explanation: 我们使用最基本的[]操作符
End of explanation
s[dates[5]] #使用index名
panel['two']
Explanation: Series使用index索引
End of explanation
df
df[['B', 'A']] = df[['A', 'B']]
df
Explanation: 也可以给[]传递一个column name组成的的list,形如df[[col1,col2]], 如果给出的某个列名不存在,会报错
End of explanation
sa = pd.Series([1,2,3],index=list('abc'))
dfa = df.copy()
sa
sa.b #直接把index作为属性
dfa
dfa.A
panel.one
sa
sa.a = 5
sa
sa
dfa.A=list(range(len(dfa.index))) # ok if A already exists
dfa
dfa['A'] = list(range(len(dfa.index))) # use this form to create a new column
dfa
Explanation: 通过属性访问 把column作为DataFrame对象的属性
可以直接把Series的index、DataFrame中的column、Panel中的item作为这些对象的属性使用,然后直接访问相应的index、column、item
End of explanation
s
s[:5]
s[::2]
s[::-1]
Explanation: Note: there is a difference between attribute access and []:
To create a new column you can only use [].
After all, an attribute refers to something that already exists; a column name that does not exist yet is not an attribute.
You can use attribute access to modify an existing element of a Series or column of a DataFrame, but be careful; if you try to use attribute access to create a new column, it fails silently, creating a new attribute rather than a new column.
Things to keep in mind with attribute access:
* If a column name clashes with an existing method name, the corresponding attribute is not available.
* In short, attribute access covers fewer cases than [].
Slicing ranges
Both [] and .iloc can be used for slicing; [] is introduced first here.
For a Series, slicing with [] behaves just like it does on an ndarray:
End of explanation
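# Demo of the "fails silently" caveat above (assumed example):
dfa2 = pd.DataFrame({'A': [1, 2, 3]})
dfa2.Z = [7, 8, 9]         # sets a plain Python attribute; no 'Z' column is created
'Z' in dfa2.columns        # False
dfa2['Z'] = [7, 8, 9]      # [] is the way to actually add the new column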
s2 = s.copy()
s2[:5]=0 #赋值
s2
Explanation: []不但可以检索,也可以赋值
End of explanation
df[:3]
df[::-1]
Explanation: 对于DataFrame对象来说,[]操作符按照行进行切片,非常有用。
End of explanation
df1 = pd.DataFrame(np.random.rand(5,4), columns=list('ABCD'), index=pd.date_range('20160101',periods=5))
df1
df1.loc[2:3]
Explanation: 使用Label进行检索
警告:
.loc要求检索时输入必须严格遵守index的类型,一旦输入类型不对,将会引起TypeError。
End of explanation
df1.loc['20160102':'20160104']
Explanation: 输入string进行检索没问题
End of explanation
s1 = pd.Series(np.random.randn(6), index=list('abcdef'))
s1
s1.loc['c':]
s1.loc['b']
Explanation: 细心地你一定发现了,index='20160104'那一行也被检索出来了,没错,loc检索时范围是闭集合[start,end].
整型可以作为label检索,这是没问题的,不过要记住此时整型表示的是label而不是index中的下标!
.loc操作是检索时的基本操作,以下输入格式都是合法的:
* 一个label,比如:5、'a'. 记住这里的5表示的是index中的一个label而不是index中的一个下标。
* label组成的列表或者数组比如['a','b','c']
* 切片,比如'a':'f'.注意loc中切片范围是闭集合!
* 布尔数组
End of explanation
s1.loc['c':]=0
s1
Explanation: loc同样支持赋值操作
End of explanation
df1 = pd.DataFrame(np.random.randn(6,4), index=list('abcdef'),columns=list('ABCD'))
df1
df1.loc[['a','b','c','d'],:]
df1.loc[['a','b','c','d']] #可以省略 ':'
Explanation: 再来看看DataFramed的例子
End of explanation
df1.loc['d':,'A':'C'] #注意是闭集合
df1.loc['a']
Explanation: 使用切片检索
End of explanation
df1.loc['a']>0
df1.loc[:,df1.loc['a']>0]
Explanation: 使用布尔数组检索
End of explanation
df1.loc['a','A']
df1.get_value('a','A')
Explanation: 得到DataFrame中的某一个值, 等同于df1.get_value('a','A')
End of explanation
s1 = pd.Series(np.random.randn(5),index=list(range(0,10,2)))
s1
s1.iloc[:3] #注意检索是半闭半开区间
s1.iloc[3]
Explanation: 根据下标进行检索 Selection By Position
pandas提供了一系列的方法实现基于整型的检索。语义和python、numpy切片几乎一样。下标同样都是从0开始,并且进行的是半闭半开的区间检索[start,end)。如果输入 非整型label当做下标进行检索会引起IndexError。
.iloc的合法输入包括:
* 一个整数,比如5
* 整数组成的列表或者数组,比如[4,3,0]
* 整型表示的切片,比如1:7
* 布尔数组
看一下Series使用iloc检索的示例:
End of explanation
s1.iloc[:3]=0
s1
Explanation: iloc同样也可以进行赋值
End of explanation
df1 = pd.DataFrame(np.random.randn(6,4),index=list(range(0,12,2)), columns=list(range(0,8,2)))
df1
df1.iloc[:3]
Explanation: DataFrame的示例:
End of explanation
df1.iloc[1:5,2:4]
df1.iloc[[1,3,5],[1,2]]
df1.iloc[1:3,:]
df1.iloc[:,1:3]
df1.iloc[1,1]#只检索一个元素
Explanation: 进行行和列的检索
End of explanation
df1.iloc[1]
df1.iloc[1:2]
Explanation: 注意下面两个例子的区别:
End of explanation
x = list('abcdef')
x
x[4:10] #这里x的长度是6
x[8:10]
s = pd.Series(x)
s
s.iloc[4:10]
s.iloc[8:10]
df1 = pd.DataFrame(np.random.randn(5,2), columns=list('AB'))
df1
df1.iloc[:,2:3]
df1.iloc[:,1:3]
df1.iloc[4:6]
Explanation: 如果切片检索时输入的范围越界,没关系,只要pandas版本>=v0.14.0, 就能如同Python/Numpy那样正确处理。
注意:仅限于 切片检索
End of explanation
df1.iloc[[4,5,6]]
Explanation: 上面说到,这种优雅处理越界的能力仅限于输入全是切片,如果输入是越界的 列表或者整数,则会引起IndexError
End of explanation
df1.iloc[:,4]
Explanation: 输入有切片,有整数,如果越界同样不能处理
End of explanation
s = pd.Series([0,1,2,3,4,5])
s
s.sample()
s.sample(n=6)
s.sample(3) #直接输入整数即可
Explanation: 选择随机样本 Selecting Random Samples
使用sample()方法能够从行或者列中进行随机选择,适用对象包括Series、DataFrame和Panel。sample()方法默认对行进行随机选择,输入可以是整数或者小数。
End of explanation
s.sample(frac=0.5)
s.sample(0.5) #必须输入frac=0.5
s.sample(frac=0.8) #6*0.8=4.8
s.sample(frac=0.7)# 6*0.7=4.2
Explanation: 也可以输入小数,则会随机选择N*frac个样本, 结果进行四舍五入
End of explanation
s
s.sample(n=6,replace=False)
s.sample(6,replace=True)
Explanation: sample()默认进行的无放回抽样,可以利用replace=True参数进行可放回抽样。
End of explanation
s = pd.Series([0,1,2,3,4,5])
s
example_weights=[0,0,0.2,0.2,0.2,0.4]
s.sample(n=3,weights=example_weights)
example_weights2 = [0.5, 0, 0, 0, 0, 0]
s.sample(n=1, weights=example_weights2)
s.sample(n=2, weights=example_weights2) #n>1 会报错,
Explanation: 默认情况下,每一行/列都被等可能的采样,如果你想为每一行赋予一个被抽样选择的权重,可以利用weights参数实现。
注意:如果weights中各概率相加和不等于1,pandas会先对weights进行归一化,强制转为概率和为1!
End of explanation
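# Hedged demo of the normalization note above: weights that do not sum to 1
# are renormalized by pandas before sampling.
s.sample(n=3, weights=[1, 1, 2, 2, 2, 4])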
s
s.sample(7) #7不行
s.sample(7,replace=True)
Explanation: 注意:由于sample默认进行的是无放回抽样,所以输入必须n<=行数,除非进行可放回抽样。
End of explanation
df2 = pd.DataFrame({'col1':[9,8,7,6], 'weight_column':[0.5, 0.4, 0.1, 0]})
df2
df2.sample(n=3,weights='weight_column')
Explanation: 如果是对DataFrame对象进行有权重采样,一个简单 的方法是新增一列用于表示每一行的权重
End of explanation
df3 = pd.DataFrame({'col1':[1,2,3], 'clo2':[2,3,4]})
df3
df3.sample(1,axis=1)
Explanation: 对列进行采样, axis=1
End of explanation
df4 = pd.DataFrame({'col1':[1,2,3], 'clo2':[2,3,4]})
df4
Explanation: 我们也可以使用random_state参数 为sample内部的随机数生成器提供种子数。
End of explanation
df4.sample(n=2, random_state=2)
df4.sample(n=2,random_state=2)
df4.sample(n=2,random_state=3)
Explanation: 注意下面两个示例,输出是相同的,因为使用了相同的种子数
End of explanation
se = pd.Series([1,2,3])
se
se[5]=5
se
Explanation: 使用赋值的方式扩充对象 Setting With Enlargement
用.loc/.ix/[]对不存在的键值进行赋值时,将会导致在对象中添加新的元素,它的键即为赋值时不存在的键。
对于Series来说,这是一种有效的添加操作。
End of explanation
dfi = pd.DataFrame(np.arange(6).reshape(3,2),columns=['A','B'])
dfi
dfi.loc[:,'C']=dfi.loc[:,'A'] #对列进行扩充
dfi
dfi.loc[3]=5 #对行进行扩充
dfi
Explanation: DataFrame可以在行或者列上扩充数据
End of explanation
s.iat[5]
df.at[dates[5],'A']
df.iat[3,0]
Explanation: 标量值的快速获取和赋值
如果仅仅想获取一个元素,使用[]未免太繁重了。pandas提供了快速获取一个元素的方法:at和iat. 适用于Series、DataFrame和Panel。
如果loc方法,at方法的合法输入是label,iat的合法输入是整型。
End of explanation
df.at[dates[-1]+1,0]=7
df
Explanation: 也可以进行赋值操作
End of explanation
s = pd.Series(range(-3, 4))
s
s[s>0]
s[(s<-1) | (s>0.5)]
s[~(s<0)]
Explanation: 布尔检索 Boolean indexing
另一种常用的操作是使用布尔向量过滤数据。运算符有三个:|(or), &(and), ~(not)。
注意:运算符的操作数要在圆括号内。
使用布尔向量检索Series的操作方式和numpy ndarray一样。
End of explanation
df[df['A'] > 0]
Explanation: DataFrame示例:
End of explanation
df2 = pd.DataFrame({'a' : ['one', 'one', 'two', 'three', 'two', 'one', 'six'],
'b' : ['x', 'y', 'y', 'x', 'y', 'x', 'x'],
'c' : np.random.randn(7)})
df2
criterion = df2['a'].map(lambda x:x.startswith('t'))
df2[criterion]
df2[[x.startswith('t') for x in df2['a']]]
df2[criterion & (df2['b'] == 'x')]
Explanation: 利用列表解析和map方法能够产生更加复杂的选择标准。
End of explanation
df2.loc[criterion & (df2['b'] == 'x'), 'b':'c']
Explanation: 结合loc、iloc等方法可以检索多个坐标下的数据.
End of explanation
s = pd.Series(np.arange(5), index=np.arange(5)[::-1],dtype='int64')
s
s.isin([2,4,6])
s[s.isin([2,4,6])]
Explanation: 使用isin方法检索 Indexing with isin
isin(is in)
对于Series对象来说,使用isin方法时传入一个列表,isin方法会返回一个布尔向量。布尔向量元素为1的前提是列表元素在Series对象中存在。看起来比较拗口,还是看例子吧:
End of explanation
s[s.index.isin([2,4,6])]
s[[2,4,6]]
Explanation: Index对象中也有isin方法.
End of explanation
df = pd.DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'],
'ids2':['a', 'n', 'c', 'n']})
df
values=['a', 'b', 1, 3]
df.isin(values)
Explanation: DataFrame同样有isin方法,参数是数组或字典。二者的区别看例子吧:
End of explanation
values = {'ids': ['a', 'b'], 'vals': [1, 3]}
df.isin(values)
Explanation: 输入一个字典的情形:
End of explanation
values = {'ids': ['a', 'b'], 'ids2': ['a', 'c'], 'vals': [1, 3]}
row_mark = df.isin(values).all(1)
df[row_mark]
row_mark = df.isin(values).any(1)
df[row_mark]
Explanation: 结合isin方法和any() all()可以对DataFrame进行快速查询。比如选择每一列都符合标准的行:
End of explanation
s[s>0]
Explanation: where()方法 The where() Method and Masking
使用布尔向量对Series对象查询时通常返回的是对象的子集。如果想要返回的shape和原对象相同,可以使用where方法。
使用布尔向量对DataFrame对象查询返回的shape和原对象相同,这是因为底层用的where方法实现。
End of explanation
s.where(s>0)
df[df<0]
df.where(df<0)
Explanation: 使用where方法
End of explanation
df.where(df<0, 2)
df
df.where(df<0, df) #将df作为other的参数值
Explanation: where方法还有一个可选的other参数,作用是替换返回结果中是False的值,并不会改变原对象。
End of explanation
s2 = s.copy()
s2
s2[s2<0]=0
s2
Explanation: 你可能想基于某种判断条件来赋值。一种直观的方法是:
End of explanation
df = pd.DataFrame(np.random.randn(6,5), index=list('abcdef'), columns=list('ABCDE'))
df_orig = df.copy()
df_orig.where(df < 0, -df, inplace=True);
df_orig
Explanation: 默认情况下,where方法并不会修改原始对象,它返回的是一个修改过的原始对象副本,如果你想直接修改原始对象,方法是将inplace参数设置为True
End of explanation
df2 = df.copy()
df2[df2[1:4] >0]=3
df2
df2 = df.copy()
df2.where(df2>0, df2['A'], axis='index')
Explanation: 对齐
where方法会将输入的布尔条件对齐,因此允许部分检索时的赋值。
End of explanation
s.mask(s>=0)
df.mask(df >= 0)
Explanation: mask
End of explanation
n = 10
df = pd.DataFrame(np.random.randn(n, 3), columns=list('abc'))
df
df[(df.a<df.b) & (df.b<df.c)]
df.query('(a < b) & (b < c)') #
Explanation: query()方法 The query() Method (Experimental)
DataFrame对象拥有query方法,允许使用表达式检索。
比如,检索列'b'的值介于列‘a’和‘c’之间的行。
注意: 需要安装numexptr。
End of explanation
n = 10
colors = np.random.choice(['red', 'green'], size=n)
foods = np.random.choice(['eggs', 'ham'], size=n)
colors
foods
index = pd.MultiIndex.from_arrays([colors, foods], names=['color', 'food'])
df = pd.DataFrame(np.random.randn(n,2), index=index)
df
df.query('color == "red"')
Explanation: MultiIndex query() 语法
对于DataFrame对象,可以使用MultiIndex,如同操作列名一样。
End of explanation
df.index.names = [None, None]
df
df.query('ilevel_0 == "red"')
Explanation: 如果index没有名字,可以给他们命名
End of explanation
df = pd.DataFrame(np.random.randn(n, 3), columns=list('abc'))
df
df2 = pd.DataFrame(np.random.randn(n+2, 3), columns=df.columns)
df2
expr = '0.0 <= a <= c <= 0.5'
map(lambda frame: frame.query(expr), [df, df2])
Explanation: ilevl_0意思是 0级index。
query() 用例 query() Use Cases
一个使用query()的情景是面对DataFrame对象组成的集合,并且这些对象有共同的的列名,则可以利用query方法对这个集合进行统一检索。
End of explanation
df = pd.DataFrame(np.random.randint(n, size=(n, 3)), columns=list('abc'))
df
df.query('(a<b) &(b<c)')
df[(df.a < df.b) & (df.b < df.c)]
Explanation: Python中query和pandas中query语法比较 query() Python versus pandas Syntax Comparison
End of explanation
df.query('a < b & b < c')
df.query('a<b and b<c')
Explanation: query()可以去掉圆括号, 也可以用and 代替&运算符
End of explanation
df = pd.DataFrame({'a': list('aabbccddeeff'), 'b': list('aaaabbbbcccc'),
'c': np.random.randint(5, size=12),
'd': np.random.randint(9, size=12)})
df
df.query('a in b')
df[df.a.isin(df.b)]
df[~df.a.isin(df.b)]
df.query('a in b and c < d') #更复杂的例子
df[df.b.isin(df.a) & (df.c < df.d)] #Python语法
Explanation: in 和not in 运算符 The in and not in operators
query()也支持Python中的in和not in运算符,实际上是底层调用isin
End of explanation
df.query('b==["a", "b", "c"]')
df[df.b.isin(["a", "b", "c"])] #Python语法
df.query('c == [1, 2]')
df.query('c != [1, 2]')
df.query('[1, 2] in c') #使用in
df.query('[1, 2] not in c')
df[df.c.isin([1, 2])] #Python语法
Explanation: ==和列表对象一起使用 Special use of the == operator with list objects
可以使用==/!=将列表和列名直接进行比较,等价于使用in/not in.
三种方法功能等价: ==/!= VS in/not in VS isin()/~isin()
End of explanation
df = pd.DataFrame(np.random.randn(n, 3), columns=list('abc'))
df
df['bools']=np.random.randn(len(df))>0.5
df
df.query('bools')
df.query('not bools')
df.query('not bools') == df[~df.bools]
Explanation: 布尔运算符 Boolean Operators
可以使用not或者~对布尔表达式进行取非。
End of explanation
shorter = df.query('a<b<c and (not bools) or bools>2')
shorter
longer = df[(df.a < df.b) & (df.b < df.c) & (~df.bools) | (df.bools > 2)]
longer
shorter == longer
Explanation: 表达式任意复杂都没关系。
End of explanation
df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'two', 'two', 'three', 'four'],
'b': ['x', 'y', 'x', 'y', 'x', 'x', 'x'],
'c': np.random.randn(7)})
df2
df2.duplicated('a') #只观察列a的值是否重复
df2.duplicated('a', keep='last')
df2.drop_duplicates('a')
df2.drop_duplicates('a', keep='last')
df2.drop_duplicates('a', keep=False)
Explanation: query()的性能
DataFrame.query()底层使用numexptr,所以速度要比Python快,特别时当DataFrame对象非常大时。
重复数据的确定和删除 Duplicate Data
如果你想确定和去掉DataFrame对象中重复的行,pandas提供了两个方法:duplicated和drop_duplicates. 两个方法的参数都是列名。
* duplicated 返回一个布尔向量,长度等于行数,表示每一行是否重复
* drop_duplicates 则删除重复的行
默认情况下,首次遇到的行被认为是唯一的,以后遇到内容相同的行都被认为是重复的。不过两个方法都有一个keep参数来确定目标行是否被保留。
* keep='first'(默认):标记/去掉重复行除了第一次出现的那一行
* keep='last': 标记/去掉重复行除了最后一次出现的那一行
* keep=False: 标记/去掉所有重复的行
End of explanation
df2.duplicated(['a', 'b']) #此时列a和b两个元素构成每一个检索的基本单位,
df2
Explanation: 可以传递列名组成的列表
End of explanation
df3 = pd.DataFrame({'a': np.arange(6),
'b': np.random.randn(6)},
index=['a', 'a', 'b', 'c', 'b', 'a'])
df3
df3.index.duplicated() #布尔表达式
df3[~df3.index.duplicated()]
df3[~df3.index.duplicated(keep='last')]
df3[~df3.index.duplicated(keep=False)]
Explanation: 也可以检查index值是否重复来去掉重复行,方法是Index.duplicated然后使用切片操作(因为调用Index.duplicated会返回布尔向量)。keep参数同上。
End of explanation
s = pd.Series([1,2,3], index=['a', 'b', 'c'])
s
s.get('a')
s.get('x', default=-1)
s.get('b')
Explanation: 形似字典的get()方法
Serires, DataFrame和Panel都有一个get方法来得到一个默认值。
End of explanation
df = pd.DataFrame(np.random.randn(10, 3), columns=list('ABC'))
df.select(lambda x: x=='A', axis=1)
Explanation: select()方法 The select() Method
Series, DataFrame和Panel都有select()方法来检索数据,这个方法作为保留手段通常其他方法都不管用的时候才使用。select接受一个函数(在label上进行操作)作为输入返回一个布尔值。
End of explanation
dflookup = pd.DataFrame(np.random.randn(20, 4), columns=list('ABCD'))
dflookup
dflookup.lookup(list(range(0,10,2)), ['B','C','A','B','D'])
Explanation: lookup()方法 The lookup()方法
输入行label和列label,得到一个numpy数组,这就是lookup方法的功能。
End of explanation
index = pd.Index(['e', 'd', 'a', 'b'])
index
'd' in index
Explanation: Index对象 Index objects
pandas中的Index类和它的子类可以被当做一个序列可重复集合(ordered multiset),允许数据重复。然而,如果你想把一个有重复值Index对象转型为一个集合这是不可以的。创建Index最简单的方法就是通过传递一个列表或者其他序列创建。
End of explanation
index = pd.Index(['e', 'd', 'a', 'b'], name='something')
index.name
index = pd.Index(list(range(5)), name='rows')
columns = pd.Index(['A', 'B', 'C'], name='cols')
df = pd.DataFrame(np.random.randn(5, 3), index=index, columns=columns)
df
df['A']
Explanation: 还可以个Index命名
End of explanation
dfmi = pd.DataFrame([list('abcd'),
list('efgh'),
list('ijkl'),
list('mnop')],
columns=pd.MultiIndex.from_product([['one','two'],
['first','second']]))
dfmi
Explanation: 返回视图VS返回副本 Returning a view versus a copy
当对pandas对象赋值时,一定要注意避免链式索引(chained indexing)。看下面的例子:
End of explanation
dfmi['one']['second']
dfmi.loc[:,('one','second')]
Explanation: 比较下面两种访问方式:
End of explanation
dfmi.loc[:,('one','second')]=value
# under the hood this becomes
dfmi.loc.__setitem__((slice(None), ('one', 'second')), value)
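# ('value' here is a stand-in for whatever object is being assigned)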
Explanation: The two access methods above return the same result, so which should you use? The answer is that the second form is recommended.
dfmi['one'] selects the first level of the columns and returns a DataFrame; the separate Python operation dfmi_with_one['second'] then selects the Series labelled 'second'. To pandas these are two independent, sequential operations. The .loc call instead receives a single tuple (slice(None), ('one','second')), which pandas handles as one event, so it runs faster.
Why does assignment via chained indexing trigger a warning?
So far chained indexing has been discouraged for performance reasons. Now consider it from the assignment point of view. First, think about how Python interprets the following code.
End of explanation
dfmi['one']['second'] = value
# becomes
dfmi.__getitem__('one').__setitem__('second', value)
Explanation: 但下面的代码解释后结果却不一样:
End of explanation
def do_something(df):
foo = df[['bar', 'baz']] # Is foo a view? A copy? Nobody knows!
# ... many lines here ...
foo['quux'] = value # We don't know whether this will modify df or not!
return foo
Explanation: 看到__getitem__了吗?除了最简单的情况,我们很难预测他到底返回的是视图还是副本(哲依赖于数组的内存布局,这是pandas没有硬性要求的),因此不推荐使用链式索引赋值!
而dfmi.loc.__setitem__直接对dfmi进行操作。
有时候明明没有使用链式索引,也会引起SettingWithCopy警告,这是Pandas设计的bug~
End of explanation
dfb = pd.DataFrame({'a' : ['one', 'one', 'two',
'three', 'two', 'one', 'six'],
'c' : np.arange(7)})
dfb
dfb['c'][dfb.a.str.startswith('o')] = 42 # raises a SettingWithCopyWarning, but still gives the right result here
pd.set_option('mode.chained_assignment','warn')
dfb[dfb.a.str.startswith('o')]['c'] = 42 # this actually assigns to a copy!
Explanation: 链式索引中顺序也很重要
此外,在链式表达式中,不同的顺序也可能导致不同的结果。这里的顺序指的是检索时行和列的顺序。
End of explanation
dfc = pd.DataFrame({'A':['aaa','bbb','ccc'],'B':[1,2,3]})
dfc
dfc.loc[0,'A'] = 11
dfc
Explanation: 正确的方式是:老老实实使用.loc
End of explanation |
9,890 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Titanic
In this notebook, I explore the titanic data set provided by Kaggle, to try and predict survival rates.
To process the data, certain missing values were replaced with medians or averages, depending on the variable, but split by class since survival rates varied between them. New variables were added, to explain, for instance, whether the person was a minor, or accompanied, and these were shown to improve results, although only slightly.
Several models were tested
Step1: Pre-processing Data
First, we clean off the data, by checking for missing values, and imputing them as necessary.
Step2: Let's check the DataFrame for missing values
Step3: From train.csv, at least two variables have missing values
Step4: Age has some missing values, and its distribution seems to vary by class, so to impute them we opt for using the median from each passenger class.
Step5: Fare
Step6: While Class 1 fare prices peak at 500, Class 2 prices peak slightly above 70, and Class 3 prices below that. As with Age, we opt for the medium as a replacement value for the missing ones.
Step7: Embarked
Step8: The variable Embarked also has one missing value. We're going to replace it with the most common value for its class as well.
Now the data is ready for some exploratory analysis.
Exploration
Step9: A passenger's class appears to be one of the highest influencers in determining survival rates. More people from Class 3 died than any other class; this could be a given, since most of the passengers were from Class 3
Step10: The ship's voyage was Belfast, Southampton, Cherbourg, and Queenstown, with New York as the final destination. Around 80% of the crew members were men from Southampton[1], and most passengers did board from that port as well, which explains the higher numbers of people from Southampton compared to the other ports.
Survival rates for women are consistently higher than for men. However, there are noticeable differences between survival rates within each sex when measured against the port of embarkation and passenger's class, hinting that both might be useful predictors.
To use them in numeric algorithms, we'll convert them into numeric form, and we create new variables for that.
Step11: Adults with siblings or spouses are more likely to have survived than those without, and conversely, children without siblings are more likely to have survived than those with.
Step12: While survival rates for women are higher, we consider that there were close to twice as many men on the ship compared to women.
Step13: However, the total survival rate for females is around 17.8% of the whole ship, while for men it is less than half of that, i.e. roughly 8.33%. This means that the higher number of female survivors is not solely due to there being fewer females than males on board.
Step14: The survival rate among females is 50%, and 12.93% among males. This assumes that the data provided in train.csv is representative of the ship's total data.
For some of the algorithms we're going to use, applying dimensionality reduction can help reduce complexity, and improve performance in terms of speed.
Dimensionality Reduction
Step15: From this vector, the first 9 components are significant
Step16: Prediction
We'll try to build a random forest ensemble to try and predict which passengers survived the titanic ship wreck. The ensembles will comprise several decision trees, which in sklearn are based on an optimised version of the CART algorithm, as of this writing.
Not all variables in the data set are necessary for our model, so we should drop some of them.
PassengerId is just an identifier, as are Ticket and Name, and none would add value to our model. Cabin is the location of a passenger's cabin. Perhaps in a location-based model it would be useful, provided that we knew the positions of each cabin, and assumed passengers in higher cabins had higher chances of survival, for instance. But we don't have that information, so we discard it.
Since our test data has no outcome variable, we're going to split the training data into three
Step17: Now, we can build a classifier, and run predictions. The classifiers in sklearn work with numpy arrays, so we need to get the numpy array that corresponds to our DataFrame, and that means we must convert our categorical variables into integers.
Let's make sure we don't have any missing data
Step18: And fit/train the model
Random Forest
Original Data Based Random Forest
Step19: Testing performance
Step20: Validation performance
Step21: PCA Based Random Forest
Step22: Testing performance
Step23: Validation performance
Step24: SVM
Original Data Based SVM
Step25: Testing Performance
Step26: Validation Performance
Step27: PCA Based SVM
Step28: Neural Network
Step29: Logistic Regression
Step30: Save results | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import sklearn
from sklearn.ensemble import (RandomForestClassifier, RandomForestRegressor)
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.learning_curve import validation_curve
import sknn.mlp
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import seaborn as sns
import scipy.stats as stats
import requests
import os.path
import tempfile
import uuid
import random
import csv
pylab.rcParams['figure.figsize'] = (16, 12) # that's default image size for this interactive session
plt.figure(figsize=(16, 12))
sns.set_style('whitegrid')
def plot_validation_curve(train_scores, test_scores, param_range, log=False, title=''):
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.title(title)
plt.xlabel('$\gamma$')
plt.ylabel('Score')
plt.ylim(0.0, 1.1)
if log:
plt.semilogx(param_range, train_scores_mean, label='Training score', color='r')
else:
plt.plot(param_range, train_scores_mean, label='Training score', color='r')
plt.fill_between(param_range, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2, color='r')
if log:
plt.semilogx(param_range, test_scores_mean, label='Cross-validation score', color='g')
else:
plt.plot(param_range, test_scores_mean, label='Cross-validation score', color='g')
plt.fill_between(param_range, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.2, color='g')
plt.legend(loc='best')
plt.show()
Explanation: Titanic
In this notebook, I explore the titanic data set provided by Kaggle, to try and predict survival rates.
To process the data, certain missing values were replaced with medians or averages, depending on the variable, but split by class since survival rates varied between them. New variables were added, to explain, for instance, whether the person was a minor, or accompanied, and these were shown to improve results, although only slightly.
Several models were tested: Random Forests, SVM, Neural Networks, and Logistic Regression. Best score obtained on Kaggle was 0.78947, with an SVM based model.
End of explanation
np.random.seed(7)
train_file = 'train.csv'
assess_file = 'test.csv'
source_df = pd.read_csv(train_file, header=0)
assess_df = pd.read_csv(assess_file, header=0)
composite_df = pd.concat([source_df, assess_df])
composite_df.head()
Explanation: Pre-processing Data
First, we clean the data by checking for missing values and imputing them as necessary.
End of explanation
source_df.apply(lambda x: sum(pd.isnull(x)))
assess_df.apply(lambda x: sum(pd.isnull(x)))
Explanation: Let's check the DataFrame for missing values
End of explanation
for key, grp in composite_df.groupby(['Pclass']):
print('Class[{}]: Mode({}), Median({})'.format(key, grp.Age.dropna().mode()[0],grp.Age.dropna().median()))
for df in (source_df, assess_df):
for key, grp in df.groupby(['Pclass']):
df.loc[(df.Age.isnull()) & (df.Pclass == key), 'Age'] = composite_df.loc[composite_df.Pclass == key, 'Age'].dropna().mode()[0]
_, axes = plt.subplots(2, 2)
quadrant = [(0,0),(0,1),(1,0),(1,1)]
p = composite_df.Age.plot.hist(stacked=True, bins=20, ax=axes[0,0])
p.set_title('Age Distribution')
for idx, (key, grp) in enumerate(composite_df.groupby(['Pclass'])):
p = grp.Age.plot.hist(stacked=False, bins=20, label=key, ax=axes[quadrant[idx+1][0], quadrant[idx+1][1]])
p.set_title('Age Dist, Class {0}'.format(key))
plt.show()
Explanation: From train.csv, at least two variables have missing values: Age, and Embarked. In the test.csv data, one entry of Fare, and 86 of Age also have missing values. The entry Cabin has several missing values, for most of the records, but since we won't be using it in our model, we need not worry about this.
Before choosing a strategy for imputing them, we look at the distribution of the variables Age, and Fare.
Age
End of explanation
for df in (source_df, assess_df, composite_df):
df['NewAge'] = df.Age.apply(lambda x: x**(0.5) if x else x)
_, axes = plt.subplots(2, 2)
quadrant = [(0,0),(0,1),(1,0),(1,1)]
p = composite_df.NewAge.plot.hist(stacked=True, bins=20, ax=axes[0,0])
p.set_title('Age Distribution')
for idx, (key, grp) in enumerate(composite_df.groupby(['Pclass'])):
p = grp.NewAge.plot.hist(stacked=False, bins=20, label=key, ax=axes[quadrant[idx+1][0], quadrant[idx+1][1]])
p.set_title('NewAge Dist, Class {0}'.format(key))
plt.show()
Explanation: Age has some missing values, and its distribution seems to vary by class, so to impute them we opt for using the median from each passenger class.
End of explanation
for key, grp in composite_df.groupby(['Pclass']):
print('Class[{}]: Mode({}), Median({})'.format(key, grp.Fare.dropna().mode()[0],grp.Fare.dropna().median()))
for df in (source_df, assess_df):
for key, grp in df.groupby(['Pclass']):
df.loc[(df.Fare.isnull()) & (df.Pclass == key), 'Fare'] = composite_df.loc[composite_df.Pclass == key, 'Fare'].dropna().mode()[0]
_, axes = plt.subplots(2, 2)
quadrant = [(0,0),(0,1),(1,0),(1,1)]
p = composite_df.Fare.plot.hist(stacked=True, bins=20, ax=axes[0,0])
p.set_title('Fare Distribution')
for idx, (key, grp) in enumerate(composite_df.groupby(['Pclass'])):
p = grp.Fare.plot.hist(stacked=False, bins=20, label=key, ax=axes[quadrant[idx+1][0], quadrant[idx+1][1]])
p.set_title('Fare Dist, Class {0}'.format(key))
plt.show()
Explanation: Fare
End of explanation
for df in (source_df, assess_df, composite_df):
df['NewFare'] = df.Fare.apply(lambda x: np.log(np.log(x)) if x else x)
_, axes = plt.subplots(2, 2)
quadrant = [(0,0),(0,1),(1,0),(1,1)]
p = composite_df.NewFare.plot.hist(stacked=True, bins=20, ax=axes[0,0])
p.set_title('Fare Distribution')
for idx, (key, grp) in enumerate(composite_df.groupby(['Pclass'])):
p = grp.NewFare.plot.hist(stacked=False, bins=20, label=key, ax=axes[quadrant[idx+1][0], quadrant[idx+1][1]])
p.set_title('NewFare Dist, Class {0}'.format(key))
plt.show()
composite_df.head()
Explanation: While Class 1 fare prices peak at 500, Class 2 prices peak slightly above 70, and Class 3 prices below that. As with Age, we opt for the median as a replacement value for the missing ones.
End of explanation
for key, grp in composite_df.groupby(['Pclass']):
print('Class[{}]: Mode({})'.format(key, grp.Embarked.astype(str).mode()[0]))
for df in (source_df, assess_df):
for key, grp in df.groupby(['Pclass']):
df.loc[(df.Embarked.isnull()) & (df.Pclass == key), 'Embarked'] = composite_df.loc[composite_df.Pclass == key, 'Embarked'].dropna().mode()[0]
Explanation: Embarked
End of explanation
for df in (source_df, assess_df):
for col in df.keys():
if col in ('Fare', 'Age'):
df[col] = df[col].apply(pd.to_numeric)
elif col in ('Survived', 'Pclass', 'SibSp', 'Parch'):
df[col] = df[col].astype(np.int64)
sns.factorplot(x='Pclass', y='Survived', hue='Sex', data=source_df, kind="bar", palette="muted")
sns.factorplot(x='Embarked', y='Survived', hue='Sex', data=source_df, kind="bar", palette="muted")
Explanation: The variable Embarked also has one missing value. We're going to replace it with the most common value for its class as well.
Now the data is ready for some exploratory analysis.
Exploration
End of explanation
source_df.groupby('Pclass').size()
Explanation: A passenger's class appears to be one of the highest influencers in determining survival rates. More people from Class 3 died than any other class; this could be a given, since most of the passengers were from Class 3
End of explanation
testing_passenger_ids = assess_df['PassengerId']
fitnessdf = source_df.drop(['Name', 'Ticket', 'Cabin', 'PassengerId'], axis=1)
assessmentdf = assess_df.drop(['Name', 'Ticket', 'Cabin', 'PassengerId'], axis=1)
for df in (fitnessdf, assessmentdf):
partners = df.SibSp + df.Parch
df['FamilySize'] = partners
df['FamilySize'] *= 1/(df['FamilySize'].max())
df['IsYoung'] = df.Age.apply(lambda age: 1 if age < 18 else 0)
df['LifeStage'] = df.Age.apply(lambda age: 1 if age < 2 else 2 if age < 12 else 3 if age < 18 else 4 if age < 45 else 5)
df['WithSibSp'] = df.SibSp.apply(lambda sibsp: 1 if sibsp > 0 else 0)
df['WithParch'] = df.Parch.apply(lambda parch: 1 if parch > 0 else 0)
df['Embarked'] = df.Embarked.map({'S': 1, 'C': 2, 'Q': 3})
for key in df.Embarked.unique():
df['{}.{}'.format('Embarked', key)] = df.Embarked.apply(lambda port: 1 if port == key else 0)
df.Sex = df.Sex.apply(lambda sex: 1 if sex == 'female' else 0)
df.drop(['Age', 'Fare'], axis=1, inplace=True)
fitnessdf.head()
sns.factorplot(x='LifeStage', y='Survived', hue='WithParch', data=fitnessdf, kind="bar", palette="muted")
sns.factorplot(x='IsYoung', y='Survived', hue='WithParch', data=fitnessdf, kind="bar", palette="muted")
sns.factorplot(x='IsYoung', y='Survived', hue='WithSibSp', data=fitnessdf, kind='bar', palette='muted')
sns.factorplot(x='LifeStage', y='Survived', hue='WithSibSp', data=fitnessdf, kind='bar', palette='muted')
Explanation: The ship's voyage was Belfast, Southampton, Cherbourg, and Queenstown, with New York as the final destination. Around 80% of the crew members were men from Southampton[1], and most passengers did board from that port as well, which explains the higher numbers of people from Southampton compared to the other ports.
Survival rates for women are consistently higher than for men. However, there are noticeable differences between survival rates within each sex when measured against the port of embarkation and passenger's class, hinting that both might be useful predictors.
To use them in numeric algorithms, we'll convert them into numeric form, and we create new variables for that.
End of explanation
for df in (fitnessdf, assessmentdf):
df.drop(['WithParch', 'LifeStage'], axis=1, inplace=True)
Explanation: Adults with siblings or spouses are more likely to have survived than those without, and conversely, children without siblings are more likely to have survived than those with.
End of explanation
ratiof = sum(composite_df.Sex == 'female')/len(composite_df)*100
ratiom = sum(composite_df.Sex == 'male')/len(composite_df)*100
print('Female: {:.2f}%, Male: {:.2f}%'.format(ratiof, ratiom))
Explanation: While survival rates for women are higher, we consider that there were close to twice as many men on the ship compared to women.
End of explanation
ratiof = sum((composite_df.Sex == 'female') & (composite_df.Survived == 1))/len(composite_df)*100
ratiom = sum((composite_df.Sex == 'male') & (composite_df.Survived == 1))/len(composite_df)*100
print('Total survival rates')
print('Female: {:.2f}%, Male: {:.2f}%'.format(ratiof, ratiom))
ratiof = sum((composite_df.Sex == 'female') & (composite_df.Survived == 1))/sum(composite_df.Sex == 'female')*100
ratiom = sum((composite_df.Sex == 'male') & (composite_df.Survived == 1))/sum(composite_df.Sex == 'male')*100
print('In-sex survival rates')
print('Female: {:.2f}%, Male: {:.2f}%'.format(ratiof, ratiom))
Explanation: However, the total survival rate for females is around 17.8% of the whole ship, while for men it is less than half of that, i.e. roughly 8.33%. This means that the higher number of female survivors is not solely due to there being fewer females than males on board.
End of explanation
from sklearn.decomposition import PCA
pca, reduced = {}, {}
fitnessval, target = fitnessdf.values[:, 1:], fitnessdf.values[:, 0]
gpca = PCA(whiten=True)
greduced = gpca.fit_transform(fitnessval)
gpca.explained_variance_ratio_
Explanation: The survival rate among females is 50%, and 12.93% among males. This assumes that the data provided in train.csv is representative of the ship's total data.
For some of the algorithms we're going to use, applying dimensionality reduction can help reduce complexity, and improve performance in terms of speed.
Dimensionality Reduction
End of explanation
pca_nfactors = 9
Explanation: From this vector, the first 9 components are significant
End of explanation
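As a cross-check on eyeballing the spectrum, a short sketch (not in the original notebook) can derive the component count from a cumulative explained-variance threshold; the 0.95 cutoff below is an arbitrary illustrative choice.
# gpca was fit above; pick the smallest number of components whose cumulative variance ratio reaches 95%
cum_var = np.cumsum(gpca.explained_variance_ratio_)
pca_nfactors_auto = int(np.searchsorted(cum_var, 0.95) + 1)
print(pca_nfactors_auto, cum_var[pca_nfactors_auto - 1])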
n = len(fitnessdf)
training_indeces = np.random.choice(range(0, n), size=int(n*.65), replace=False)
testing_indeces = np.random.choice([x for x in range(0, n) if x not in training_indeces], size=int(n*.20), replace=False)
validation_indeces = [x for x in range(0, n) if x not in np.concatenate((training_indeces, testing_indeces))]
traindf = fitnessdf.loc[training_indeces][:]
testdf = fitnessdf.loc[testing_indeces][:]
validatedf = fitnessdf.loc[validation_indeces][:]
dataframes = dict(train=traindf, test=testdf, validate=validatedf, assess=assessmentdf)
print('Train[{:.2f}%], Test[{:.2f}%], Validate[{:.2f}%]'.format(len(traindf)/n*100, len(testdf)/n*100, len(validatedf)/n*100))
dataframes['train'].head()
Explanation: Prediction
We'll try to build a random forest ensemble to try and predict which passengers survived the titanic ship wreck. The ensembles will comprise several decision trees, which in sklearn are based on an optimised version of the CART algorithm, as of this writing.
Not all variables in the data set are necessary for our model, so we should drop some of them.
PassengerId is just an identifier, as are Ticket and Name, and none would add value to our model. Cabin is the location of a passenger's cabin. Perhaps in a location-based model it would be useful, provided that we knew the positions of each cabin, and assumed passengers in higher cabins had higher chances of survival, for instance. But we don't have that information, so we discard it.
Since our test data has no outcome variable, we're going to split the training data into three: training (65%), testing (20%), validation (15%). The first to train the model, the second to tune parameters for performance, and the third to run a blind test to verify our model's generalisation. This leaves us with less data for each stage (training, testing, validation), but hopefully it'll yield better results.
End of explanation
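As an aside on the design choice, the same three-way split could be produced with a newer scikit-learn by chaining train_test_split twice; this is only a sketch of the alternative, not what the notebook actually runs, and the random_state value is arbitrary.
from sklearn.model_selection import train_test_split
# 65% train, then split the remaining 35% into roughly 20%/15% of the original data
train_part, rest = train_test_split(fitnessdf, train_size=0.65, random_state=7)
test_part, validate_part = train_test_split(rest, train_size=20/35, random_state=7)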
for key, df in dataframes.items():
print('{} missing values count:'.format(key), sum(sum(pd.isnull(df[col])) for col in df.keys()))
Explanation: Now, we can build a classifier, and run predictions. The classifiers in sklearn work with numpy arrays, so we need to get the numpy array that corresponds to our DataFrame, and that means we must convert our categorical variables into integers.
Let's make sure we don't have any missing data
End of explanation
param_range = np.arange(1, 15, 1)
train_scores, test_scores = validation_curve(
RandomForestClassifier(),
traindf.values[0::,1::],
traindf.values[0::,0],
param_name="max_depth",
param_range = param_range,
cv=10,
scoring="accuracy",
n_jobs=-1)
plot_validation_curve(train_scores, test_scores, param_range, log=False, title='RF: Max Depth')
randforest = RandomForestClassifier(n_estimators=11, n_jobs=-1, max_depth=6)
randforest = randforest.fit(traindf.values[0::,1::], traindf.values[0::,0])
randforest.score(traindf.values[0::,1::], traindf.values[0::,0])
Explanation: And fit/train the model
Random Forest
Original Data Based Random Forest
End of explanation
randforest.score(testdf.values[0::,1::], testdf.values[0::,0])
Explanation: Testing performance
End of explanation
randforest.score(validatedf.values[0::,1::], validatedf.values[0::,0])
Explanation: Validation performance
End of explanation
param_range = np.arange(1, 50, 1)
train_scores, test_scores = validation_curve(
RandomForestClassifier(n_estimators=10),
greduced[training_indeces,:pca_nfactors],
traindf.values[0::,0],
param_name="min_samples_split",
param_range = param_range,
cv=10,
scoring="accuracy",
n_jobs=-1)
plot_validation_curve(train_scores, test_scores, param_range, log=False, title='RF/PCA: Min Sample Split')
randforestRed = RandomForestClassifier(n_estimators=10, n_jobs=-1, min_samples_split=30)
randforestRed = randforestRed.fit(greduced[training_indeces,:pca_nfactors], traindf.values[0::,0])
randforestRed.score(greduced[training_indeces,:pca_nfactors], traindf.values[0::,0])
Explanation: PCA Based Random Forest
End of explanation
randforestRed.score(greduced[testing_indeces,:pca_nfactors], testdf.values[0::,0])
Explanation: Testing performance
End of explanation
randforestRed.score(greduced[validation_indeces,:pca_nfactors], validatedf.values[0::,0])
Explanation: Validation performance
End of explanation
param_range = np.logspace(-6, 3, 10)
train_scores, test_scores = validation_curve(
svm.SVC(kernel='rbf', probability=True),
traindf.values[:,1::],
traindf.values[0::,0],
param_name='gamma',
param_range = param_range,
cv=10,
scoring="accuracy",
n_jobs=-1)
plot_validation_curve(train_scores, test_scores, param_range, title='SVC: Gamma Validation Curve', log=True)
from sklearn import svm
smvc = svm.SVC(kernel='rbf', gamma=10**-1, probability=True)
smvc.fit(traindf.values[0::,1::], traindf.values[0::,0])
smvc.score(traindf.values[0::,1::], traindf.values[0::,0])
Explanation: SVM
Original Data Based SVM
End of explanation
smvc.score(testdf.values[0::,1::], testdf.values[0::,0])
Explanation: Testing Performance
End of explanation
smvc.score(validatedf.values[0::,1::], validatedf.values[0::,0])
Explanation: Validation Performance
End of explanation
param_range = np.logspace(-6, 6, 10)
train_scores, test_scores = validation_curve(
svm.SVC(probability=True),
greduced[training_indeces,:pca_nfactors],
traindf.values[0::,0],
param_name='gamma',
param_range = param_range,
cv=10,
scoring='accuracy',
n_jobs=-1)
plot_validation_curve(train_scores, test_scores, param_range, title='SVC/Reduced: Gamma Validation Curve', log=True)
from sklearn import svm
smvcRed = svm.SVC(gamma=10**-1)
smvcRed.fit(greduced[training_indeces,:pca_nfactors], traindf.values[0::,0])
smvcRed.score(greduced[training_indeces,:pca_nfactors], traindf.values[0::,0])
smvcRed.score(greduced[testing_indeces,:pca_nfactors], testdf.values[0::,0])
smvcRed.score(greduced[validation_indeces,:pca_nfactors], validatedf.values[0::,0])
Explanation: PCA Based SVM
End of explanation
nn = sknn.mlp.Classifier(
layers=[
sknn.mlp.Layer("Tanh", units=10),
sknn.mlp.Layer("Tanh", units=10),
sknn.mlp.Layer("Softmax")],
learning_rate=0.01,
n_iter=300)
nn.fit(traindf.values[0::,1::], traindf.values[0::,0])
nn.score(traindf.values[0::,1::], traindf.values[0::,0])
nn.score(testdf.values[0::,1::], testdf.values[0::,0])
nn.score(validatedf.values[0::,1::], validatedf.values[0::,0])
Explanation: Neural Network
End of explanation
logreg = LogisticRegression(solver='lbfgs', tol=10**-8, n_jobs=-1)
logreg.fit(traindf.values[0::,1::], traindf.values[0::,0])
logreg.score(traindf.values[0::,1::], traindf.values[0::,0])
logreg.score(testdf.values[0::,1::], testdf.values[0::,0])
logreg.score(validatedf.values[0::,1::], validatedf.values[0::,0])
Explanation: Logistic Regression
End of explanation
models = {'rf': randforest, 'rfpca': randforestRed, 'svm': smvc, 'svmpca': smvcRed, 'nn': nn, 'logreg': logreg}
for k, m in models.items():
if k.endswith('pca'):
pca = PCA(whiten=True)
data = pca.fit_transform(assessmentdf.values[:,:])
data = data[:,:pca_nfactors]
else:
data = assessmentdf.values[:,:]
predictions = m.predict(data)
if k == 'nn':
predictions = [int(x[0]) for x in predictions]
else:
predictions = predictions.astype(int)
with open('predictions-{}.csv'.format(k), 'w') as fp:
writer = csv.writer(fp)
writer.writerow(['PassengerId', 'Survived'])
for idx, val in zip(testing_passenger_ids, predictions):
writer.writerow([idx, val])
Explanation: Save results
End of explanation |
9,891 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Estimating the effect of a Member Rewards program
An example on how DoWhy can be used to estimate the effect of a subscription or a rewards program for customers.
Suppose that a website has a membership rewards program where customers receive additional benefits if they sign up. How do we know if the program is effective? Here the relevant causal question is
Step2: The importance of time
Time plays a crucial role in modeling this problem.
Rewards signup can affect the future transactions, but not those that happened before it. In fact, the transactions prior to the rewards signup can be assumed to cause the rewards signup decision. Therefore we split up the variables for each user
Step3: More generally, we can include any activity data for the customer in the above graph. All prior- and post-activity data will occupy the same place (and have the same edges) as the Amount spent node (prior and post respectively).
II. Identifying the causal effect
For the sake of this example, let us assume that unobserved confounding does not play a big part.
Step4: Based on the graph, DoWhy determines that the signup month and amount spent in the pre-treatment months (signup_month, pre_spend) needs to be conditioned on.
III. Estimating the effect
We now estimate the effect based on the backdoor estimand, setting the target units to "att".
Step5: The analysis tells us the Average Treatment Effect on the Treated (ATT). That is, the average effect on total spend for the customers that signed up for the Rewards Program in month i=3 (compared to the case where they had not signed up). We can similarly calculate the effects for customers who signed up in any other month by changing the value of i(line 2 above) and then rerunning the analysis.
Note that the estimation suffers from left and right-censoring.
1. Left-censoring | Python Code:
# Creating some simulated data for our example
import pandas as pd
import numpy as np
num_users = 10000
num_months = 12
signup_months = np.random.choice(np.arange(1, num_months), num_users) * np.random.randint(0,2, size=num_users) # signup_months == 0 means customer did not sign up
df = pd.DataFrame({
'user_id': np.repeat(np.arange(num_users), num_months),
'signup_month': np.repeat(signup_months, num_months), # signup month == 0 means customer did not sign up
'month': np.tile(np.arange(1, num_months+1), num_users), # months are from 1 to 12
'spend': np.random.poisson(500, num_users*num_months) #np.random.beta(a=2, b=5, size=num_users * num_months)*1000 # centered at 500
})
# A customer is in the treatment group if and only if they signed up
df["treatment"] = df["signup_month"]>0
# Simulating an effect of month (monotonically decreasing--customers buy less later in the year)
df["spend"] = df["spend"] - df["month"]*10
# Simulating a simple treatment effect of 100
after_signup = (df["signup_month"] < df["month"]) & (df["treatment"])
df.loc[after_signup,"spend"] = df[after_signup]["spend"] + 100
df
Explanation: Estimating the effect of a Member Rewards program
An example on how DoWhy can be used to estimate the effect of a subscription or a rewards program for customers.
Suppose that a website has a membership rewards program where customers receive additional benefits if they sign up. How do we know if the program is effective? Here the relevant causal question is:
What is the impact of offering the membership rewards program on total sales?
And the equivalent counterfactual question is,
If the current members had not signed up for the program, how much less would they have spent on the website?
In formal language, we are interested in the Average Treatment Effect on the Treated (ATT).
I. Formulating the causal model
Suppose that the rewards program was introduced in January 2019. The outcome variable is the total spends at the end of the year.
We have data on all monthly transactions of every user and on the time of signup for those who chose to signup for the rewards program. Here's what the data looks like.
End of explanation
import dowhy
# Setting the signup month (for ease of analysis)
i = 3
causal_graph = """digraph {
treatment[label="Program Signup in month i"];
pre_spends;
post_spends;
Z->treatment;
pre_spends -> treatment;
treatment->post_spends;
signup_month->post_spends;
signup_month->treatment;
}"""
# Post-process the data based on the graph and the month of the treatment (signup)
# For each customer, determine their average monthly spend before and after month i
df_i_signupmonth = (
df[df.signup_month.isin([0, i])]
.groupby(["user_id", "signup_month", "treatment"])
.apply(
lambda x: pd.Series(
{
"pre_spends": x.loc[x.month < i, "spend"].mean(),
"post_spends": x.loc[x.month > i, "spend"].mean(),
}
)
)
.reset_index()
)
print(df_i_signupmonth)
model = dowhy.CausalModel(data=df_i_signupmonth,
graph=causal_graph.replace("\n", " "),
treatment="treatment",
outcome="post_spends")
model.view_model()
from IPython.display import Image, display
display(Image(filename="causal_model.png"))
Explanation: The importance of time
Time plays a crucial role in modeling this problem.
Rewards signup can affect the future transactions, but not those that happened before it. In fact, the transactions prior to the rewards signup can be assumed to cause the rewards signup decision. Therefore we split up the variables for each user:
Activity prior to the treatment (assumed a cause of the treatment)
Activity after the treatment (is the outcome of applying treatment)
Of course, many important variables that affect signup and total spend are missing (e.g., the type of products bought, length of a user's account, geography, etc.). This is a critical assumption in the analysis, one that needs to be tested later using refutation tests.
Below is the causal graph for a user who signed up in month i=3. The analysis will be similar for any i.
End of explanation
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
print(identified_estimand)
Explanation: More generally, we can include any activity data for the customer in the above graph. All prior- and post-activity data will occupy the same place (and have the same edges) as the Amount spent node (prior and post respectively).
II. Identifying the causal effect
For the sake of this example, let us assume that unobserved confounding does not play a big part.
End of explanation
estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.propensity_score_matching",
target_units="att")
print(estimate)
Explanation: Based on the graph, DoWhy determines that the signup month and amount spent in the pre-treatment months (signup_month, pre_spend) needs to be conditioned on.
III. Estimating the effect
We now estimate the effect based on the backdoor estimand, setting the target units to "att".
End of explanation
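Since the same pipeline applies to any signup month, a sketch of repeating the estimation for several values of the month is shown below; it is an illustrative extension rather than part of the original analysis, it reuses the preprocessing from above, and it assumes dowhy's estimate object exposes a .value attribute.
att_by_month = {}
for month in range(2, num_months - 1):  # skip the extremes to limit left/right censoring
    df_m = (
        df[df.signup_month.isin([0, month])]
        .groupby(["user_id", "signup_month", "treatment"])
        .apply(lambda x: pd.Series({
            "pre_spends": x.loc[x.month < month, "spend"].mean(),
            "post_spends": x.loc[x.month > month, "spend"].mean(),
        }))
        .reset_index()
    )
    m = dowhy.CausalModel(data=df_m, graph=causal_graph.replace("\n", " "),
                          treatment="treatment", outcome="post_spends")
    est = m.estimate_effect(m.identify_effect(proceed_when_unidentifiable=True),
                            method_name="backdoor.propensity_score_matching",
                            target_units="att")
    att_by_month[month] = est.value
print(att_by_month)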
refutation = model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter",
placebo_type="permute", num_simulations=20)
print(refutation)
Explanation: The analysis tells us the Average Treatment Effect on the Treated (ATT). That is, the average effect on total spend for the customers that signed up for the Rewards Program in month i=3 (compared to the case where they had not signed up). We can similarly calculate the effects for customers who signed up in any other month by changing the value of i(line 2 above) and then rerunning the analysis.
Note that the estimation suffers from left and right-censoring.
1. Left-censoring: If a customer signs up in the first month, we do not have enough transaction history to match them to similar customers who did not sign up (and thus apply the backdoor identified estimand).
2. Right-censoring: If a customer signs up in the last month, we do not have enough future (post-treatment) transactions to estimate the outcome after signup.
Thus, even if the effect of signup was the same across all months, the estimated effects may be different by month of signup, due to lack of data (and thus high variance in estimated pre-treatment or post-treatment transactions activity).
IV. Refuting the estimate
We refute the estimate using the placebo treatment refuter. This refuter substitutes the treatment by an independent random variable and checks whether our estimate now goes to zero (it should!).
End of explanation |
9,892 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vectorized Operations
not necessary to write loops for element-by-element operations
pandas' Series objects can be passed to MOST NumPy functions
documentation
Step1: add Series without loop
Step2: Series within arithmetic expression
Step3: Series used as argument to NumPy function
Step4: A key difference between Series and ndarray is that operations between Series automatically align the data based on
label. Thus, you can write computations without giving consideration to whether the Series involved have the same labels.
Step5: Apply Python functions on an element-by-element basis
Step6: Vectorized string methods
Series is equipped with a set of string processing methods that make it easy to operate on each element of the array. Perhaps most importantly, these methods exclude missing/NA values automatically. | Python Code:
import pandas as pd
import numpy as np
my_dictionary = {'a' : 45., 'b' : -19.5, 'c' : 4444}
my_series = pd.Series(my_dictionary)
my_series
Explanation: Vectorized Operations
not necessary to write loops for element-by-element operations
pandas' Series objects can be passed to MOST NumPy functions
documentation: http://pandas.pydata.org/pandas-docs/stable/basics.html
End of explanation
my_series + my_series
Explanation: add Series without loop
End of explanation
3 * my_series + 5
Explanation: Series within arithmetic expression
End of explanation
np.exp(my_series)
Explanation: Series used as argument to NumPy function
End of explanation
my_series[1:]
my_series[:-1]
my_series[1:] + my_series[:-1]
Explanation: A key difference between Series and ndarray is that operations between Series automatically align the data based on
label. Thus, you can write computations without giving consideration to whether the Series involved have the same labels.
End of explanation
def multiply_by_ten(input_element):
return input_element * 10.0
my_series.map(multiply_by_ten)
Explanation: Apply Python functions on an element-by-element basis
End of explanation
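A small aside that is not in the original notebook: Series.apply accepts the same kind of elementwise callable as map, so either spelling works here.
my_series.apply(multiply_by_ten)
# or inline with a lambda
my_series.apply(lambda x: x * 10.0)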
series_of_strings = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
series_of_strings.str.lower()
Explanation: Vectorized string methods
Series is equipped with a set of string processing methods that make it easy to operate on each element of the array. Perhaps most importantly, these methods exclude missing/NA values automatically.
End of explanation |
9,893 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
automaton.has_bounded_lag
Check if the transducer has bounded lag, i.e. that the difference of length between the input and output words is bounded, for every word accepted.
It is a pre-condition for transducer synchronization.
Preconditions
Step1: This automaton has a bounded lag
Step2: This transducer, however, doesn't have a bounded lag
Step3: In the case of more than 2 tapes, has_bounded_lag checks that every tape has a bounded lag compared to the first one (incidentally, if that is the case, it will ensure that every tape has a bounded lag with respect to every other). This transducer has a bounded lag if you only consider the first 2 tapes, but the third tape doesn't. | Python Code:
import vcsn
ctx = vcsn.context("lat<lan_char(ab), lan_char(xy)>, b")
ctx
a = ctx.expression(r"'a,x''b,y'*'a,\e'").automaton()
a
Explanation: automaton.has_bounded_lag
Check if the transducer has bounded lag, i.e. that the difference of length between the input and output words is bounded, for every word accepted.
It is a pre-condition for transducer synchronization.
Preconditions:
The automaton has at least 2 tapes
Examples
End of explanation
a.has_bounded_lag()
b = ctx.expression(r"(\e|x)(a|\e)*(b|y)").automaton()
b
Explanation: This automaton has a bounded lag: there is at most a difference of 1 between the length of the input and the length of the output (e.g., $abba \rightarrow xyy$).
End of explanation
b.has_bounded_lag()
ctx_3 = vcsn.context("lat<lan_char(ab), lan_char(jk), lan_char(xy)>, b")
c = ctx_3.expression(r"(a|j|x)(b|k|\e)*").automaton()
c
Explanation: This transducer, however, doesn't have a bounded lag: there can be an arbitrarily large difference between the input and the output. For example, $ab \rightarrow xy$, but $aaaaaaaaab \rightarrow xy$.
End of explanation
c.has_bounded_lag()
Explanation: In the case of more than 2 tapes, has_bounded_lag checks that every tape has a bounded lag compared to the first one (incidentally, if that is the case, it will ensure that every tape has a bounded lag with respect to every other). This transducer has a bounded lag if you only consider the first 2 tapes, but the third tape doesn't.
End of explanation |
9,894 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GlobalAveragePooling2D
[pooling.GlobalAveragePooling2D.0] input 6x6x3, data_format='channels_last'
Step1: [pooling.GlobalAveragePooling2D.1] input 3x6x6, data_format='channels_first'
Step2: [pooling.GlobalAveragePooling2D.2] input 5x3x2, data_format='channels_last'
Step3: export for Keras.js tests | Python Code:
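The cells below rely on a setup cell from the original keras-js test notebook that is not included in this excerpt; a hedged reconstruction is sketched here, and the exact format_decimal helper is an assumption rather than the verified original.
import json
import numpy as np
from keras.models import Model
from keras.layers import Input, GlobalAveragePooling2D

DATA = {}  # collected test fixtures, dumped to JSON at the end

def format_decimal(values, places=6):
    # round for compact, reproducible JSON output
    return [round(float(v), places) for v in values]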
data_in_shape = (6, 6, 3)
L = GlobalAveragePooling2D(data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(270)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.GlobalAveragePooling2D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: GlobalAveragePooling2D
[pooling.GlobalAveragePooling2D.0] input 6x6x3, data_format='channels_last'
End of explanation
data_in_shape = (3, 6, 6)
L = GlobalAveragePooling2D(data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(271)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.GlobalAveragePooling2D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.GlobalAveragePooling2D.1] input 3x6x6, data_format='channels_first'
End of explanation
data_in_shape = (5, 3, 2)
L = GlobalAveragePooling2D(data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(272)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.GlobalAveragePooling2D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.GlobalAveragePooling2D.2] input 5x3x2, data_format='channels_last'
End of explanation
import os
filename = '../../../test/data/layers/pooling/GlobalAveragePooling2D.json'
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
json.dump(DATA, f)
print(json.dumps(DATA))
Explanation: export for Keras.js tests
End of explanation |
9,895 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Following on from Guide to the Sequential Model
10 May 2017 - WH Nixalo
Getting started with the Keras Sequential model
The Sequential model is a linear stack of layers.
You can create a Sequential model by passing a list of layer instances to the constructor
Step1: You can also simply add layers via the .add() method
Step2: Specifying the input shape
The first layer in a Sequential model needs to receive info about its input shape -- & only the first because the following layers can do automatic shape inference.
Pass an input_shape arg to the first layer. Tuple of ints or None, where None indicates any (+) int may be expected.
2D layers, like Dense (aka. Fully-Connected/Linear) via argument input_dim; and some 3D temporal layers via input_dim and input_length.
To specify a fixed batch size for inputs (useful for stateful RecNets), pass a batch_size argument. If you pass both batch_size=32 & input_shape=(6,8), it'll expect every batch of inputs to have batch shape (32, 6, 8)
Both of the below are strictly equivalent
Step3: Compilation
Before training a model, the learning process must be configured via the compile method. It has 3 parameters
Step4: Training
Keras models are trained on NumPy arrays of input data & labels. You'll usually use the fit function to train a model. Fit Documentation
Step5: Keras Examples
Step6: MLP for binary classification
Step7: VGG-like ConvNet
Step8: Sequence classification with LSTM
Step9: Sequence classification with 1D convolutions
Step10: Stacked LSTM for sequence classification
Step11: Stacked LSTM model, rendered "stateful" | Python Code:
from keras.models import Sequential
from keras.layers import Dense, Activation
model = Sequential([Dense(32, input_shape=(784,)),
Activation('relu'),
Dense(10),
Activation('softmax'),])
Explanation: Following on from Guide to the Sequential Model
10 May 2017 - WH Nixalo
Getting started with the Keras Sequential model
The Sequential model is a linear stack of layers.
You can create a Sequential model by passing a list of layer instances to the constructor:
End of explanation
model = Sequential()
model.add(Dense(32, input_dim=784))
model.add(Activation('relu'))
Explanation: You can also simply add layers via the .add() method:
End of explanation
model = Sequential()
model.add(Dense(32, input_shape=(784,)))
model = Sequential()
model.add(Dense(32, input_dim=784))
Explanation: Specifying the input shape
The first layer in a Sequential model needs to receive info about its input shape -- & only the first because the following layers can do automatic shape inference.
Pass an input_shape arg to the first layer. Tuple of ints or None, where None indicates any (+) int may be expected.
2D layers, like Dense (aka. Fully-Connected/Linear) via argument input_dim; and some 3D temporal layers via input_dim and input_length.
To specify a fixed batch size for inputs (useful for stateful RecNets), pass a batch_size argument. If you pass both batch_size=32 & input_shape=(6,8), it'll expect every batch of inputs to have batch shape (32, 6, 8)
Both of the below are strictly equivalent:
End of explanation
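A small sketch of the fixed-batch-size variant mentioned above (not one of the original cells): passing batch_size together with input_shape pins the expected batch shape.
model = Sequential()
# every batch fed to this model must now have shape (32, 6, 8)
model.add(Dense(32, batch_size=32, input_shape=(6, 8)))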
# For a multi-class classification problem
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
# For binary classification
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
# For mean squared error regression
model.compile(optimizer='rmsprop',
loss='mse')
# For custom metrics
import keras.backend as K
def mean_pred(y_true, y_pred):
return K.mean(y_pred)
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy', mean_pred])
Explanation: Compilation
Before training a model, the learning process must be configured via the compile method. It has 3 parameters:
* Optimizer. Either string identifier of an existing optimizer (rmsprop, adagrad, etc), or an instance of the Optimizer class. See: Optimizers
* Loss Function. String identifier of an existing loss fn (categorical_crossentropy, mse, etc), or an objective function. See: Losses
* List of Metrics. For any classification problem you'll want to set this to metrics=['accuracy']. Metric: string identifier of existing metric, or custom metric function.
End of explanation
# For a single-input model with 2 classes (binary classification):
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=100))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
# Generate dummy data
import numpy as np
data = np.random.random((1000, 100))
labels = np.random.randint(2, size=(1000, 1))
# Train the model, iterating on the data in batches of 32 samples
model.fit(data, labels, epochs=10, batch_size=32)
# For a single-input model with 10 classes (categorical classification):
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=100))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
# Generate dummy data
import numpy as np
data = np.random.random((1000, 100))
labels = np.random.randint(10, size=(1000, 1))
# Convert labels to categorical one-hot encoding
import keras
one_hot_labels = keras.utils.to_categorical(labels, num_classes=10)
# Train the model, iterating on the data in batches of 32 samples
model.fit(data, one_hot_labels, epochs=10, batch_size=32)
Explanation: Training
Keras models are trained on NumPy arrays of input data & labels. You'll usually use the fit function to train a model. Fit Documentation
End of explanation
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD
# Generate dummy data
import numpy as np
x_train = np.random.random((1000, 20))
y_train = keras.utils.to_categorical(np.random.randint(10, size=(1000, 1)), num_classes=10)
x_test = np.random.random((100, 20))
y_test = keras.utils.to_categorical(np.random.randint(10, size=(100, 1)), num_classes=10)
model = Sequential()
# Dense(64) is a fully-connected layer with 64 hidden nodes
# in the first layer, specify the expected input data shape
# here: 20-dimensional vectors.
model.add(Dense(64, activation='relu', input_dim=20))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
optimizer=sgd,
metrics=['accuracy'])
model.fit(x_train, y_train,
epochs=20,
batch_size=128)
score = model.evaluate(x_test, y_test, batch_size=128)
Explanation: Keras Examples:
Github Folder
Multilayer Perceptron (MLP) for multi-class softmax classification:
End of explanation
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
# Generate dummy data
x_train = np.random.random((1000, 20))
y_train = np.random.randint(2, size=(1000, 1))
x_test = np.random.random((100, 20))
y_test = np.random.randint(2, size=(100, 1))
model = Sequential()
model.add(Dense(64, input_dim=20, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model.fit(x_train, y_train,
epochs=20,
batch_size=128)
score = model.evaluate(x_test, y_test, batch_size=128)
Explanation: MLP for binary classification:
End of explanation
# <...>
Explanation: VGG-like ConvNet:
End of explanation
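The cell above was left as a placeholder; a sketch of the VGG-like convnet that this heading refers to, adapted from the Keras documentation example, might look as follows (treat the exact layer sizes and dummy data as illustrative).
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from keras.optimizers import SGD

# Dummy data: 100 RGB images of 100x100 pixels, 10 classes
x_train = np.random.random((100, 100, 100, 3))
y_train = keras.utils.to_categorical(np.random.randint(10, size=(100, 1)), num_classes=10)

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(100, 100, 3)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
model.fit(x_train, y_train, batch_size=32, epochs=10)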
# <...>
Explanation: Sequence classification with LSTM:
End of explanation
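Again a placeholder in the original; a hedged sketch of the LSTM sequence classifier from the Keras documentation follows, with max_features standing in for an assumed vocabulary size.
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, LSTM

max_features = 1024  # assumed vocabulary size

model = Sequential()
model.add(Embedding(max_features, output_dim=256))
model.add(LSTM(128))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
# model.fit(x_train, y_train, batch_size=16, epochs=10) would follow once padded sequences are available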
# <...>
Explanation: Sequence classification with 1D convolutions:
End of explanation
# <...>
Explanation: Stacked LSTM for sequence classification
End of explanation
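One more sketch for the placeholder above, following the stacked-LSTM example in the Keras documentation; data_dim and timesteps are illustrative values, not something taken from this notebook.
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

data_dim = 16
timesteps = 8
num_classes = 10

model = Sequential()
# return_sequences=True keeps the whole sequence so the next LSTM still receives 3D input
model.add(LSTM(32, return_sequences=True, input_shape=(timesteps, data_dim)))
model.add(LSTM(32, return_sequences=True))
model.add(LSTM(32))  # the last LSTM returns only its final output
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])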
# <...>
Explanation: Stacked LSTM model, rendered "stateful"
End of explanation |
9,896 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Get The Data
You can get the data on Kaggle's site.
Step2: Data Cleaning
Step3: Sex
Here we convert the gender labels (male, female) into a dummy variable (1, 0).
Step4: Embarked
Step5: Social Class
Step6: Impute Missing Values
A number of values of the Age feature are missing and will prevent the random forest from training. To get around this, we will fill in the missing values with the mean value of age (a useful fiction).
Age
Step7: Fare
Step8: Search For Optimum Parameters
Step9: Retrain The Random Forest With The Optimum Parameters
Step10: Create The Kaggle Submission | Python Code:
import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score
import csv as csv
Explanation: Title: Titanic Competition With Random Forest
Slug: titanic_competition_with_random_forest
Summary: Python code to make a submission to the titanic competition using a random forest.
Date: 2016-12-29 00:01
Category: Machine Learning
Tags: Trees And Forests
Authors: Chris Albon
This was my first attempt at a Kaggle submission and conducted mostly to understand the Kaggle competition process.
Preliminaries
End of explanation
# Load the data
train = pd.read_csv('data/train.csv')
test = pd.read_csv('data/test.csv')
Explanation: Get The Data
You can get the data on Kaggle's site.
End of explanation
# Create a list of the features we will eventually want for our model
features = ['Age', 'SibSp','Parch','Fare','male','embarked_Q','embarked_S','Pclass_2', 'Pclass_3']
Explanation: Data Cleaning
End of explanation
# Create an encoder
sex_encoder = preprocessing.LabelEncoder()
# Fit the encoder to the train data so it knows that male = 1
sex_encoder.fit(train['Sex'])
# Apply the encoder to the training data
train['male'] = sex_encoder.transform(train['Sex'])
# Apply the encoder to the training data
test['male'] = sex_encoder.transform(test['Sex'])
Explanation: Sex
Here we convert the gender labels (male, female) into a dummy variable (1, 0).
End of explanation
# Convert the Embarked training feature into dummies using one-hot
# and leave one first category to prevent perfect collinearity
train_embarked_dummied = pd.get_dummies(train["Embarked"], prefix='embarked', drop_first=True)
# Convert the Embarked test feature into dummies using one-hot
# and leave one first category to prevent perfect collinearity
test_embarked_dummied = pd.get_dummies(test["Embarked"], prefix='embarked', drop_first=True)
# Concatenate the dataframe of dummies with the main dataframes
train = pd.concat([train, train_embarked_dummied], axis=1)
test = pd.concat([test, test_embarked_dummied], axis=1)
Explanation: Embarked
End of explanation
# Convert the Pclass training feature into dummies using one-hot
# and leave one first category to prevent perfect collinearity
train_Pclass_dummied = pd.get_dummies(train["Pclass"], prefix='Pclass', drop_first=True)
# Convert the Pclass test feature into dummies using one-hot
# and leave one first category to prevent perfect collinearity
test_Pclass_dummied = pd.get_dummies(test["Pclass"], prefix='Pclass', drop_first=True)
# Concatenate the dataframe of dummies with the main dataframes
train = pd.concat([train, train_Pclass_dummied], axis=1)
test = pd.concat([test, test_Pclass_dummied], axis=1)
Explanation: Social Class
End of explanation
# Create an imputer object
age_imputer = preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0)
# Fit the imputer object on the training data
age_imputer.fit(train['Age'].reshape(-1, 1))
# Apply the imputer object to the training and test data
train['Age'] = age_imputer.transform(train['Age'].reshape(-1, 1))
test['Age'] = age_imputer.transform(test['Age'].reshape(-1, 1))
Explanation: Impute Missing Values
A number of values of the Age feature are missing and will prevent the random forest from training. To get around this, we will fill in the missing values with the mean value of age (a useful fiction).
Age
End of explanation
# Create an imputer object
fare_imputer = preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0)
# Fit the imputer object on the training data
fare_imputer.fit(train['Fare'].reshape(-1, 1))
# Apply the imputer object to the training and test data
train['Fare'] = fare_imputer.transform(train['Fare'].reshape(-1, 1))
test['Fare'] = fare_imputer.transform(test['Fare'].reshape(-1, 1))
Explanation: Fare
End of explanation
# Create a dictionary containing all the candidate values of the parameters
parameter_grid = dict(n_estimators=list(range(1, 5001, 1000)),
criterion=['gini','entropy'],
max_features=list(range(1, len(features), 2)),
max_depth= [None] + list(range(5, 25, 1)))
# Create a random forest object
random_forest = RandomForestClassifier(random_state=0, n_jobs=-1)
# Create a gridsearch object with 5-fold cross validation, and uses all cores (n_jobs=-1)
clf = GridSearchCV(estimator=random_forest, param_grid=parameter_grid, cv=5, verbose=1, n_jobs=-1)
# Nest the gridsearchCV in a 3-fold CV for model evaluation
cv_scores = cross_val_score(clf, train[features], train['Survived'])
# Print results
print('Accuracy scores:', cv_scores)
print('Mean of score:', np.mean(cv_scores))
print('Variance of scores:', np.var(cv_scores))
Explanation: Search For Optimum Parameters
End of explanation
# Retrain the model on the whole dataset
clf.fit(train[features], train['Survived'])
# Predict who survived in the test dataset
predictions = clf.predict(test[features])
Explanation: Retrain The Random Forest With The Optimum Parameters
End of explanation
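A brief aside not in the original post: once clf has been refit on the full training set, GridSearchCV's standard attributes report the winning configuration.
# Inspect the hyper-parameters and mean CV accuracy selected by the grid search
print(clf.best_params_)
print(clf.best_score_)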
# Grab the passenger IDs
ids = test['PassengerId'].values
# Create a csv
submission_file = open("submission.csv", "w")
# Write to that csv
open_file_object = csv.writer(submission_file)
# Write the header of the csv
open_file_object.writerow(["PassengerId","Survived"])
# Write the rows of the csv
open_file_object.writerows(zip(ids, predictions))
# Close the file
submission_file.close()
Explanation: Create The Kaggle Submission
End of explanation |
9,897 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table>
<tr align=left><td><img align=left src="./images/CC-BY.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Marc Spiegelman</td>
</table>
Step1: Principal Component/EOF analysis
GOAL
Step2: EOF analysis
While each profile lives in an 80 dimensional space, we would like to see if we can classify the variability in fewer components. To begin we form a de-meaned data matrix $X$ where each row is a profile.
Step3: Applying the SVD
We now use the SVD to factor the data matrix as $X = U\Sigma V^T$
Step4: And begin by looking at the spectrum of singular values $\Sigma$. Defining the variance as $\Sigma^2$ then we can also calculate the cumulative contribution to the total variance as
$$
g_k = \frac{\sum_{i=0}^k \sigma_i^2}{\sum_{i=0}^n \sigma_i^2}
$$
Plotting both $\Sigma$ and $g$ shows that $\sim$ 80% of the total variance can be explained by the first 4-5 Components
Step5: Plotting the first 4 Singular Vectors in $V$ shows them to reflect some commonly occurring patterns in the data
Step6: For example, the first EOF pattern is primarily a symmetric pattern with an axial high surrounded by two off-axis troughs (or an axial low with two flanking highs; the EOF's are just unit vector bases for the row-space and can be added with any positive or negative coefficient). The second EOF is broader and all of one sign, while the third EOF encodes asymmetry.
Reconstruction
Using the SVD we can also decompose each profile into a weighted linear combination of EOF's i.e.
$$
X = U\Sigma V^T = C V^T
$$
where $C = U\Sigma$ is a matrix of coefficients that describes how each data row is decomposed into the relevant basis vectors. We can then produce a rank-k truncated representation of the data by
$$
X_k = C_k V_k^T
$$
where $C_k$ is the first $k$ columns of $C$ and $V_k$ is the first $k$ EOF's.
Here we show the original data and the reconstructed data using the first 5 EOF's
Step7: And we can consider a few reconstructed profiles compared with the original data
Step8: projection of data onto a subspace
We can also use the Principal Components to look at the projection of the data onto a lower dimensional space as the coefficients $C$, are simply the coordinates of our data along each principal component. For example we can view the data in the 2-Dimensional space defined by the first 2 EOF's by simply plotting C_1 against C_2. | Python Code:
%matplotlib inline
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
import csv
Explanation: <table>
<tr align=left><td><img align=left src="./images/CC-BY.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Marc Spiegelman</td>
</table>
End of explanation
# read the data from the csv file
data = np.genfromtxt('m80.csv', delimiter='')
data_mean = np.mean(data,0)
# and plot out a few profiles and the mean depth.
plt.figure()
rows = [ 9,59,99]
labels = [ 'slow','medium','fast']
for i,row in enumerate(rows):
plt.plot(data[row,:],label=labels[i])
plt.plot(data_mean,'k--',label='mean')
plt.xlabel('Distance across axis (km)')
plt.ylabel('Relative Elevation (m)')
plt.legend(loc='best')
plt.title('Example cross-axis topography of mid-ocean ridges')
plt.show()
Explanation: Principal Component/EOF analysis
GOAL: Demonstrate the use of the SVD to calculate principal components or "Empirical Orthogonal Functions" in a geophysical data set. This example is modified from a paper by Chris Small (LDEO)
Small, C., 1994. A global analysis of mid-ocean ridge axial topography. Geophys J Int 116, 64–84. doi:10.1111/j.1365-246X.1994.tb02128.x
The Data
Here we will consider a set of topography profiles taken across the global mid-ocean ridge system where the Earth's tectonic plates are spreading apart.
<table>
<tr align=center><td><img align=center src="./images/World_OceanFloor_topo_green_brown_1440x720.jpg"><td>
</table>
The data consists of 156 profiles spanning a range of spreading rates. Each profile contains 80 samples, so each profile is in effect a vector in $R^{80}$.
End of explanation
plt.figure()
X = data - data_mean
plt.imshow(X)
plt.xlabel('Distance across axis (Km)')
plt.ylabel('Relative Spreading Rate')
plt.colorbar()
plt.show()
Explanation: EOF analysis
While each profile lives in an 80 dimensional space, we would like to see if we can classify the variability in fewer components. To begin we form a de-meaned data matrix $X$ where each row is a profile.
End of explanation
# now calculate the SVD of the de-meaned data matrix
U,S,Vt = la.svd(X,full_matrices=False)
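# Quick sanity check (editor's addition): with full_matrices=False the factors
# have shapes (m, n), (n,), (n, n) and should reproduce X to machine precision.
print(U.shape, S.shape, Vt.shape)
print('Factorization reproduces X:', np.allclose(X, U.dot(np.diag(S)).dot(Vt)))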
Explanation: Applying the SVD
We now use the SVD to factor the data matrix as $X = U\Sigma V^T$
End of explanation
# plot the singular values
plt.figure()
plt.semilogy(S,'bo')
plt.grid()
plt.title('Singular Values')
plt.show()
# and cumulative percent of variance
g = np.cumsum(S*S)/np.sum(S*S)
plt.figure()
plt.plot(g,'bx-')
plt.title('% cumulative percent variance explained')
plt.grid()
plt.show()
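# Cross-check (editor's addition, assuming scikit-learn is available): sklearn's PCA
# reports the same cumulative explained-variance fractions as the vector g computed above.
from sklearn.decomposition import PCA
pca = PCA().fit(data)
print('Matches sklearn PCA:', np.allclose(np.cumsum(pca.explained_variance_ratio_), g))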
Explanation: And begin by looking at the spectrum of singular values $\Sigma$. Defining the variance as $\Sigma^2$ then we can also calculate the cumulative contribution to the total variance as
$$
g_k = \frac{\sum_{i=0}^k \sigma_i^2}{\sum_{i=0}^n \sigma_i^2}
$$
Plotting both $\Sigma$ and $g$ shows that $\sim$ 80% of the total variance can be explained by the first 4-5 Components
End of explanation
plt.figure()
num_EOFs=3
for row in range(num_EOFs):
plt.plot(Vt[row,:],label='EOF{}'.format(row+1))
plt.grid()
plt.xlabel('Distance (km)')
plt.title('First {} EOFs '.format(num_EOFs))
plt.legend(loc='best')
plt.show()
Explanation: Plotting the first few singular vectors in $V$ shows that they reflect some commonly occurring patterns in the data
End of explanation
# reconstruct the data using the first 5 EOF's
k=5
Ck = np.dot(U[:,:k],np.diag(S[:k]))
Vtk = Vt[:k,:]
data_k = data_mean + np.dot(Ck,Vtk)
plt.figure()
plt.imshow(data_k)
plt.colorbar()
plt.title('reconstructed data')
plt.show()
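# How good is a rank-k approximation? (editor's addition) Relative Frobenius-norm
# error of the truncated reconstruction for a few values of k.
for kk in [1, 2, 5, 10]:
    Xk = np.dot(U[:, :kk]*S[:kk], Vt[:kk, :])
    print('k = {:2d}: relative reconstruction error = {:.3f}'.format(kk, la.norm(X - Xk)/la.norm(X)))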
Explanation: For example, the first EOF pattern is primarily a symmetric pattern with an axial high surrounded by two off-axis troughs (or an axial low with two flanking highs; the EOF's are just unit-vector bases for the row-space and can be added with any positive or negative coefficient). The second EOF is broader and all of one sign, while the third EOF encodes asymmetry.
Reconstruction
Using the SVD we can also decompose each profile into a weighted linear combination of EOF's i.e.
$$
X = U\Sigma V^T = C V^T
$$
where $C = U\Sigma$ is a matrix of coefficients that describes how each data row is decomposed into the relevant basis vectors. We can then produce a rank-$k$ truncated representation of the data by
$$
X_k = C_k V_k^T
$$
where $C_k$ is the first $k$ columns of $C$ and $V_k$ is the first $k$ EOF's.
Here we show the original data and the reconstructed data using the first 5 EOF's
End of explanation
# show the original 3 profiles and their reconstructed values using the first k EOF's
for i,row in enumerate(rows):
plt.figure()
plt.plot(data_k[row,:],label='k={}'.format(k))
plt.plot(data[row,:],label='original data')
Cstring = [ '{:3.0f}, '.format(Ck[row,i]) for i in range(k) ]
plt.title('Reconstruction profile {}:\n C_{}='.format(row,k)+''.join(Cstring))
plt.legend(loc='best')
plt.show()
Explanation: And we can consider a few reconstructed profiles compared with the original data
End of explanation
# plot the data in the plane defined by the first two principal components
plt.figure()
plt.scatter(Ck[:,0],Ck[:,1])
plt.xlabel('$V_1$')
plt.ylabel('$V_2$')
plt.grid()
plt.title('Projection onto the first two principal components')
plt.show()
# Or consider the degree of assymetry (EOF 3) as a function of spreading rate
plt.figure()
plt.plot(Ck[:,2],'bo')
plt.xlabel('Spreading rate')
plt.ylabel('$C_3$')
plt.grid()
plt.title('Degree of assymetry')
plt.show()
Explanation: projection of data onto a subspace
We can also use the Principal Components to look at the projection of the data onto a lower-dimensional space, since the coefficients $C$ are simply the coordinates of our data along each principal component. For example, we can view the data in the 2-dimensional space defined by the first 2 EOF's by simply plotting $C_1$ against $C_2$.
End of explanation |
9,898 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic regression examples
Logistic regression
Regularized logistic regression
Step1: Logistic regression
Step2: The logistic regression hypothesis
$$ h_{\theta}(x) = g(\theta^{T}x)$$
$$ g(z)=\frac{1}{1+e^{-z}} $$
Step3: In fact, the scipy package already has a function that does the same thing
Step4: Compute the partial derivatives (gradient)
$$ \frac{\delta J(\theta)}{\delta\theta_{j}} = \frac{1}{m}\sum_{i=1}^{m} ( h_\theta (x^{(i)})-y^{(i)})x^{(i)}_{j} $$
Vectorized gradient
$$ \frac{\delta J(\theta)}{\delta\theta_{j}} = \frac{1}{m} X^T(g(X\theta)-y)$$
Step5: Minimize the cost function
Step6: Let's make some predictions
Step7: Let's see how likely a student who scores 45 on exam 1 and 85 on exam 2 is to pass
Step8: Plot the decision boundary
Step9: Logistic regression with a regularization term
Step10: Let's generate some polynomial features (up to degree 6)
Step11: The regularized cost function
$$ J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\big[-y^{(i)}\, log\,( h_\theta\,(x^{(i)}))-(1-y^{(i)})\,log\,(1-h_\theta(x^{(i)}))\big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_{j}^{2}$$
Vectorized cost function (matrix form)
$$ J(\theta) = \frac{1}{m}\big((\,log\,(g(X\theta))^Ty+(\,log\,(1-g(X\theta))^T(1-y)\big) + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_{j}^{2}$$
Step12: Partial derivatives (gradient)
$$ \frac{\delta J(\theta)}{\delta\theta_{j}} = \frac{1}{m}\sum_{i=1}^{m} ( h_\theta (x^{(i)})-y^{(i)})x^{(i)}_{j} + \frac{\lambda}{m}\theta_{j}$$
Vectorized gradient
$$ \frac{\delta J(\theta)}{\delta\theta_{j}} = \frac{1}{m} X^T(g(X\theta)-y) + \frac{\lambda}{m}\theta_{j}$$
$$\text{Note that the intercept parameter } \theta_{0} \text{, which we add ourselves, is not regularized}$$ | Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy.optimize import minimize
from sklearn.preprocessing import PolynomialFeatures
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 150)
pd.set_option('display.max_seq_items', None)
sns.set_context('notebook')
sns.set_style('white')
%matplotlib inline
def load_data(file, delimeter=','):
data = np.loadtxt(file, delimiter=delimeter)
print('load_data: dimensions: ',data.shape)
print(data[1:6,:])
return data
def plot_data(data, label_x, label_y, label_pos, label_neg, axes=None):
if axes == None: axes = plt.gca()
# Get the indices of the positive and negative samples (i.e. which rows are positive and which are negative)
neg = data[:,2] == 0
pos = data[:,2] == 1
axes.scatter(data[pos][:,0], data[pos][:,1], marker='+', c='k',
s=60, linewidth=2, label=label_pos)
axes.scatter(data[neg][:,0], data[neg][:,1], marker='o', c='y',
s=60, label=label_neg)
axes.set_xlabel(label_x)
axes.set_ylabel(label_y)
axes.legend(frameon= True, fancybox = True);
Explanation: Logistic regression examples
Logistic regression
Regularized logistic regression
End of explanation
data = load_data('input/data1.txt', ',')
# Build X (with an added intercept column) and y; both are used below
X = np.c_[np.ones((data.shape[0],1)), data[:,0:2]]
y = np.c_[data[:,2]]
plot_data(data, 'Exam 1 score', 'Exam 2 score', 'Pass', 'Fail')
Explanation: Logistic regression
End of explanation
# Define the sigmoid function
def sigmoid(z):
return(1 / (1 + np.exp(-z)))
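# Quick check (editor's addition): scipy's built-in expit is the same function.
from scipy.special import expit
print(sigmoid(0), expit(0))   # both should be 0.5
print(np.allclose(sigmoid(np.array([-5., 0., 5.])), expit(np.array([-5., 0., 5.]))))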
Explanation: The logistic regression hypothesis
$$ h_{\theta}(x) = g(\theta^{T}x)$$
$$ g(z)=\frac{1}{1+e^{−z}} $$
End of explanation
# Define the cost function
def costFunction(theta, X, y):
m = y.size
h = sigmoid(X.dot(theta))
J = -1.0*(1.0/m)*(np.log(h).T.dot(y)+np.log(1-h).T.dot(1-y))
if np.isnan(J[0]):
return(np.inf)
return J[0]
Explanation: In fact, the scipy package already has a function that does the same thing:<BR>
http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.expit.html#scipy.special.expit
The cost function
$$ J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\big[-y^{(i)}\, log\,( h_\theta\,(x^{(i)}))-(1-y^{(i)})\,log\,(1-h_\theta(x^{(i)}))\big]$$
Vectorized cost function (matrix form)
$$ J(\theta) = \frac{1}{m}\big((\,log\,(g(X\theta))^Ty+(\,log\,(1-g(X\theta))^T(1-y)\big)$$
End of explanation
# Compute the gradient
def gradient(theta, X, y):
m = y.size
h = sigmoid(X.dot(theta.reshape(-1,1)))
grad =(1.0/m)*X.T.dot(h-y)
return(grad.flatten())
initial_theta = np.zeros(X.shape[1])
cost = costFunction(initial_theta, X, y)
grad = gradient(initial_theta, X, y)
print('Cost: \n', cost)
print('Grad: \n', grad)
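# Optional sanity check (editor's addition): compare the analytic gradient with a
# central finite-difference approximation at a small random theta.
eps = 1e-5
theta_test = np.random.RandomState(0).randn(X.shape[1])*0.01
num_grad = np.array([(costFunction(theta_test + eps*e, X, y) - costFunction(theta_test - eps*e, X, y))/(2*eps) for e in np.eye(X.shape[1])])
print('max |analytic - numerical| gradient difference:', np.max(np.abs(gradient(theta_test, X, y) - num_grad)))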
Explanation: Compute the partial derivatives (gradient)
$$ \frac{\delta J(\theta)}{\delta\theta_{j}} = \frac{1}{m}\sum_{i=1}^{m} ( h_\theta (x^{(i)})-y^{(i)})x^{(i)}_{j} $$
Vectorized gradient
$$ \frac{\delta J(\theta)}{\delta\theta_{j}} = \frac{1}{m} X^T(g(X\theta)-y)$$
End of explanation
res = minimize(costFunction, initial_theta, args=(X,y), jac=gradient, options={'maxiter':400})
res
Explanation: Minimize the cost function
End of explanation
def predict(theta, X, threshold=0.5):
p = sigmoid(X.dot(theta.T)) >= threshold
return(p.astype('int'))
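# A quick check of the training-set accuracy (editor's addition); res.x is the
# optimised theta found by minimize above.
p = predict(res.x, X)
print('Train accuracy: {}%'.format(100.0*np.mean(p == y.ravel())))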
Explanation: Let's make some predictions
End of explanation
sigmoid(np.array([1, 45, 85]).dot(res.x.T))
Explanation: Let's see how likely a student who scores 45 on exam 1 and 85 on exam 2 is to pass
End of explanation
plt.scatter(45, 85, s=60, c='r', marker='v', label='(45, 85)')
plot_data(data, 'Exam 1 score', 'Exam 2 score', 'Admitted', 'Not admitted')
x1_min, x1_max = X[:,1].min(), X[:,1].max(),
x2_min, x2_max = X[:,2].min(), X[:,2].max(),
xx1, xx2 = np.meshgrid(np.linspace(x1_min, x1_max), np.linspace(x2_min, x2_max))
h = sigmoid(np.c_[np.ones((xx1.ravel().shape[0],1)), xx1.ravel(), xx2.ravel()].dot(res.x))
h = h.reshape(xx1.shape)
plt.contour(xx1, xx2, h, [0.5], linewidths=1, colors='b');
Explanation: Plot the decision boundary
End of explanation
data2 = load_data('input/data2.txt', ',')
# Extract X and y
y = np.c_[data2[:,2]]
X = data2[:,0:2]
# Plot the data
plot_data(data2, 'Microchip Test 1', 'Microchip Test 2', 'y = 1', 'y = 0')
Explanation: Logistic regression with a regularization term
End of explanation
poly = PolynomialFeatures(6)
XX = poly.fit_transform(data2[:,0:2])
# Check the shape (how many features x has after the polynomial feature mapping)
XX.shape
Explanation: Let's generate some polynomial features (up to degree 6)
End of explanation
# Define the regularized cost function
def costFunctionReg(theta, reg, *args):
m = y.size
h = sigmoid(XX.dot(theta))
J = -1.0*(1.0/m)*(np.log(h).T.dot(y)+np.log(1-h).T.dot(1-y)) + (reg/(2.0*m))*np.sum(np.square(theta[1:]))
if np.isnan(J[0]):
return(np.inf)
return(J[0])
Explanation: The regularized cost function
$$ J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\big[-y^{(i)}\, log\,( h_\theta\,(x^{(i)}))-(1-y^{(i)})\,log\,(1-h_\theta(x^{(i)}))\big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_{j}^{2}$$
Vectorized cost function (matrix form)
$$ J(\theta) = \frac{1}{m}\big((\,log\,(g(X\theta))^Ty+(\,log\,(1-g(X\theta))^T(1-y)\big) + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_{j}^{2}$$
End of explanation
def gradientReg(theta, reg, *args):
m = y.size
h = sigmoid(XX.dot(theta.reshape(-1,1)))
grad = (1.0/m)*XX.T.dot(h-y) + (reg/m)*np.r_[[[0]],theta[1:].reshape(-1,1)]
return(grad.flatten())
initial_theta = np.zeros(XX.shape[1])
costFunctionReg(initial_theta, 1, XX, y)
fig, axes = plt.subplots(1,3, sharey = True, figsize=(17,5))
# Decision boundaries: let's see what happens when the regularization coefficient lambda is too small or too large
# Lambda = 0  : no regularization at all, so the model overfits
# Lambda = 1  : about right
# Lambda = 100: the regularization is far too aggressive, so essentially no decision boundary is fitted
for i, C in enumerate([0.0, 1.0, 100.0]):
# Minimize costFunctionReg
res2 = minimize(costFunctionReg, initial_theta, args=(C, XX, y), jac=gradientReg, options={'maxiter':3000})
# Accuracy
accuracy = 100.0*sum(predict(res2.x, XX) == y.ravel())/y.size
# Scatter plot of X, y
plot_data(data2, 'Microchip Test 1', 'Microchip Test 2', 'y = 1', 'y = 0', axes.flatten()[i])
# Plot the decision boundary
x1_min, x1_max = X[:,0].min(), X[:,0].max(),
x2_min, x2_max = X[:,1].min(), X[:,1].max(),
xx1, xx2 = np.meshgrid(np.linspace(x1_min, x1_max), np.linspace(x2_min, x2_max))
h = sigmoid(poly.fit_transform(np.c_[xx1.ravel(), xx2.ravel()]).dot(res2.x))
h = h.reshape(xx1.shape)
axes.flatten()[i].contour(xx1, xx2, h, [0.5], linewidths=1, colors='g');
axes.flatten()[i].set_title('Train accuracy {}% with Lambda = {}'.format(np.round(accuracy, decimals=2), C))
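# For comparison (editor's addition, assuming scikit-learn is available):
# sklearn's LogisticRegression fits the same L2-regularised model; its C parameter
# plays roughly the role of 1/lambda. The first column of XX (the bias term added
# by PolynomialFeatures) is dropped because sklearn fits its own intercept.
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C=1.0, max_iter=1000).fit(XX[:, 1:], y.ravel())
print('sklearn train accuracy: {:.1f}%'.format(100.0*clf.score(XX[:, 1:], y.ravel())))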
Explanation: Partial derivatives (gradient)
$$ \frac{\delta J(\theta)}{\delta\theta_{j}} = \frac{1}{m}\sum_{i=1}^{m} ( h_\theta (x^{(i)})-y^{(i)})x^{(i)}_{j} + \frac{\lambda}{m}\theta_{j}$$
Vectorized gradient
$$ \frac{\delta J(\theta)}{\delta\theta_{j}} = \frac{1}{m} X^T(g(X\theta)-y) + \frac{\lambda}{m}\theta_{j}$$
$$\text{Note that the intercept parameter } \theta_{0} \text{, which we add ourselves, is not regularized}$$
End of explanation |
9,899 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting started with the Keras Sequential model
The Sequential model is a linear stack of layers.
Step1: Same neural network architecture as before, but now in keras
Step2: Compilation Step
Step3: Training
Step4: Q
Step5: Q
Step6: Exercise | Python Code:
# simulate data
X, Y = backprop_make_classification()
plt.scatter(X[:, 0], X[:, 1], c=Y.argmax(1))
Explanation: Getting started with the Keras Sequential model
The Sequential model is a linear stack of layers.
End of explanation
model = Sequential()
model.add(Dense(3, input_dim=2)) # input layer is implicit
model.add(Activation('sigmoid'))
model.add(Dense(2)) # input dimensions are inferred
model.add(Activation('sigmoid'))
Explanation: Same neural network architecture as before, but now in keras:
1. Input layer has two neurons
2. Hidden layer has three neurons
3. Output layer has two neurons
End of explanation
sgd = SGD(lr=0.1)
model.compile(optimizer=sgd, loss="binary_crossentropy", metrics=['accuracy'])
Explanation: Compilation Step:
End of explanation
model.fit(X, Y)
# plot the decision boundary and check the accuracy
backprop_decision_boundary(model.predict, X, Y)
y_hat = model.predict(X)
print("Accuracy: ", accuracy_score(np.argmax(Y, axis=1), np.argmax(y_hat, axis=1)))
Explanation: Training:
End of explanation
model = Sequential()
model.add(Dense(3, input_dim=2)) # input layer is implicit
model.add(Activation('sigmoid'))
model.add(Dense(2)) # input dimensions are inferred
model.add(Activation('sigmoid'))
# Why design the NN again?
model.compile(optimizer=sgd, loss="binary_crossentropy", metrics=['accuracy'])
model.fit(X, Y, epochs=10000, verbose=0)
backprop_decision_boundary(model.predict, X, Y)
y_hat = model.predict(X)
print("Accuracy: ", accuracy_score(np.argmax(Y, axis=1), np.argmax(y_hat, axis=1)))
Explanation: Q: What went wrong?
End of explanation
model = Sequential()
model.add(Dense(3, input_dim=2)) # input layer is implicit
model.add(Activation('sigmoid'))
model.add(Dense(2)) # input dimensions are inferred
model.add(Activation('sigmoid'))
sgd.lr = 0.4
model.compile(optimizer=sgd, loss="binary_crossentropy", metrics=['accuracy'])
model.fit(X, Y, epochs=1000, verbose=0)
backprop_decision_boundary(model.predict, X, Y)
y_hat = model.predict(X)
print("Accuracy: ", accuracy_score(np.argmax(Y, axis=1), np.argmax(y_hat, axis=1)))
Explanation: Q: How do we reduce epochs?
End of explanation
digits = load_digits()
X = digits.data
X /= 255
y = digits.target
y = OneHotEncoder().fit_transform(y.reshape(-1, 1)) # What is this?
y = y.todense()
# enter code here
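# One possible solution sketch (editor's addition), following the hints below.
# Assumptions: numpy is available as np and Sequential/Dense/Activation are already
# imported (they are used earlier in this notebook); the RMSprop import path and the
# optimizer argument names vary between Keras versions.
from keras.optimizers import RMSprop
X_digits = np.asarray(X)          # hypothetical names for the converted arrays
y_digits = np.asarray(y)          # OneHotEncoder returned a matrix; make it a plain array
mnist_model = Sequential()
mnist_model.add(Dense(128, input_dim=X_digits.shape[1]))
mnist_model.add(Activation('sigmoid'))
mnist_model.add(Dense(63))
mnist_model.add(Activation('sigmoid'))
mnist_model.add(Dense(y_digits.shape[1]))
mnist_model.add(Activation('softmax'))
mnist_model.compile(optimizer=RMSprop(), loss='categorical_crossentropy', metrics=['accuracy'])
mnist_model.fit(X_digits, y_digits, epochs=50, verbose=0)
print('Train accuracy:', mnist_model.evaluate(X_digits, y_digits, verbose=0)[1])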
Explanation: Exercise: Make a neural network to classify MNIST data
Hints:
1. Two hidden layers, first of size 128, second of size 63.
2. Use "categorical_crossentropy" loss function
3. Use the RMSprop optimizer
End of explanation |