path (string, lengths 7-265) | concatenated_notebook (string, lengths 46-17M) |
---|---|
notebooks/bernoulli_likelihood_demo.ipynb | ###Markdown
Usage demo for BernoulliLikelihoodVariableSelector
###Code
import numpy as np
from scipy.special import expit
import pandas as pd
from millipede import BernoulliLikelihoodVariableSelector
###Output
_____no_output_____
###Markdown
First we create a demo dataset with 3 causal and 97 spurious features
###Code
# note that there's relatively little information in a binary-valued observation so
# that we need a fair number of observations to pin down small effects
num_datapoints = 2500
num_covariates = 100
# create covariates
X = np.random.RandomState(0).randn(num_datapoints * num_covariates)
X = X.reshape((num_datapoints, num_covariates))
# specify the true causal coefficients
true_coefficients = np.array([1.0, -0.5, 0.25] + [0.0] * 97)
print("true_coefficients:\n", true_coefficients)
# compute responses using the true linear model with logistic link function
bernoulli_probs = expit(X @ true_coefficients)
Y = np.random.RandomState(1).binomial(1.0, bernoulli_probs)
print("Observed counts Y[:100]:\n", Y[:100])
# put the covariates and responses into a single numpy array
YX = np.concatenate([Y[:, None], X], axis=-1)
print("\nX.shape: ", X.shape, " Y.shape: ", Y.shape, " YX.shape: ", YX.shape)
###Output
Observed counts Y[:100]:
[1 1 1 0 1 1 0 0 1 0 1 1 1 1 0 0 1 0 1 0 1 1 0 0 0 1 1 0 0 1 1 1 1 0 0 0 0
1 0 1 1 0 0 0 1 1 0 0 1 1 1 0 0 1 1 0 1 0 1 1 1 0 0 1 1 1 1 0 1 0 1 1 0 1
1 0 0 0 1 0 0 1 0 1 1 0 1 1 1 0 1 1 1 1 1 0 0 1 1 0]
X.shape: (2500, 100) Y.shape: (2500,) YX.shape: (2500, 101)
###Markdown
Then we package the data as a Pandas DataFrame, giving each covariate a unique name
###Code
columns = ['Response', 'Causal1', 'Causal2', 'Causal3']
columns += ['Spurious{}'.format(k) for k in range(1, 98)]
dataframe = pd.DataFrame(YX, columns=columns)
dataframe.head(5)
###Output
_____no_output_____
###Markdown
Next we create a VariableSelector object appropriate for our binary-valued responses
###Code
selector = BernoulliLikelihoodVariableSelector(dataframe, # pass in the data
'Response', # indicate the column of responses
S=1.0, # specify the expected number of covariates to include a priori
)
###Output
_____no_output_____
###Markdown
Finally we run the MCMC algorithm to compute posterior inclusion probabilities (PIPs) and other posterior quantities of interest
###Code
selector.run(T=2000, T_burnin=1000, verbosity='bar', seed=2)
###Output
_____no_output_____
###Markdown
The results are available in the `selector.summary` DataFrame:
- As expected, only the 3 causal covariates have large PIPs.
- In addition, the true coefficients are identified correctly (up to noise).
- Note that the intercept term does not have a corresponding PIP, since it is always included in the model by assumption.
###Code
selector.summary
###Output
_____no_output_____
###Markdown
For example the largest spurious PIP is given by:
###Code
selector.summary.PIP.values[3:-1].max()
###Output
_____no_output_____
###Markdown
Some additional stats about the MCMC run are available in `selector.stats`:
###Code
selector.stats
###Output
_____no_output_____
###Markdown
Using per-covariate prior inclusion probabilities

If we have additional prior information about which covariates are more or less likely a priori, we can provide this information by setting the `S` argument to a `pandas.Series` of covariate-specific prior inclusion probabilities.
###Code
# let's make the 3rd covariate *less likely* a priori
S = np.ones(num_covariates) / num_covariates
S[2] *= 1.0e-4
S = pd.Series(S, index=columns[1:])
selector = BernoulliLikelihoodVariableSelector(dataframe, 'Response', S=S)
selector.run(T=2000, T_burnin=1000, verbosity='bar', seed=2)
###Output
_____no_output_____
###Markdown
As expected, the PIP of the 3rd covariate is now very small
###Code
selector.summary
###Output
_____no_output_____ |
svm-tutorial-notebook.ipynb | ###Markdown
Part I: Data Exploration and Visualization

We will be using the "Epileptic Seizure Recognition Data Set" (2017) from the UC Irvine Machine Learning Repository, available for download here: http://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition (For Linux or Mac users, you can open a terminal, copy this link, and use the command: wget http://archive.ics.uci.edu/ml/machine-learning-databases/00388/data.csv to download the dataset into your working directory; the code below assumes the file has been saved as seizure_data.csv.) An aside about the dataset: the data file is in CSV, or comma-separated value, format and contains a numerical representation of EEG data, which is recorded as a time series (the frequencies of brainwaves as they change over time). To analyze this data, it is helpful to "sample" the time series and process it into an easier-to-use format using the Fast Fourier Transform (FFT) algorithm - a super useful and powerful mathematical tool which you don't need to know about to use this data. The dataset was constructed by measuring brainwave activity from a total of 500 individuals who were each recorded for 23.5 seconds; each 23.5-second measurement was sampled into 4097 data points and then split into 23 one-second segments of 178 points each. The 500 patients' recordings therefore result in 11500 samples in the dataset, each with 178 features + a label indicating epileptic activity. That's a whole lot of data! One nice way to think of how to represent this in our program is in a grid-like structure with rows and columns (or a matrix, if you are already familiar), in which each row is a sample corresponding to a patient's brain data and each column represents one feature of the EEG data. Therefore, we should have 11500 rows and 178+1 columns in our grid. Now, let's load this data into our grid - a multidimensional Numpy array!
###Code
# Import some helpful libraries
import numpy as np # Numpy - we'll use this to store and preprocess the data
import sklearn # scikit learn - we'll take advantage of data visualization tools as well as an easy to use, off-the-shelf SVM implementation
# The 1st row in the dataset is a header, which we will exclude using the skiprows parameter,
# as well as the first column, which "names" the specific example based on patient and brainwave sample
extract_cols = range(1, 180) # Keep the brain activity features and corresponding label
seizure_data = np.loadtxt("seizure_data.csv", delimiter=",", skiprows=1, usecols=extract_cols) # Load in the data
###Output
_____no_output_____
###Markdown
Each row in the dataset has a label with values 1-5: a label of '1' indicates epileptic seizure, while labels '2', '3', '4', and '5' represent subjects who did not have a seizure. Most papers which have analyzed this data have used this for binary classification, which is what we'll also do as a slight simplification and for more meaningful results (since we're assuming that you haven't come to this tutorial to learn about neuroscience). We call this process "binarizing" the dataset in a "one-against-all" manner (either the patient has epileptic seizure or doesn't), so we consider all rows with label '1' to be part of the "positive class", and all other labels will be '0' and part of the "negative class".
###Code
print("Before binarizing:", seizure_data[:10, -1])
# Binarize the labels of the all samples/rows in the dataset
for i in range(len(seizure_data)):
# If the sample doesn't have a positive label, consider it in the negative class
if seizure_data[i, -1] != 1:
seizure_data[i, -1] = 0
print("After binarizing:", seizure_data[:10, -1])
###Output
('Before binarizing:', array([4., 1., 5., 5., 5., 5., 4., 2., 1., 4.]))
('After binarizing:', array([0., 1., 0., 0., 0., 0., 0., 0., 1., 0.]))
###Markdown
Another quick trick that will simplify our lives later on is to separate the class labels from the rest of the features. We'll call the portion with the features the dataset and the corresponding classes the labels. (Often, simply X and y are used to denote the feature data and classes respectively, especially in online literature and Python documentation, so if you come across this, that's what it means).
###Code
# Separate the data features from the labels
data = seizure_data[:, :-1]
labels = seizure_data[:, -1]
###Output
_____no_output_____
###Markdown
Now that we have our data ready to go, we want to get some sense of "what it looks like". If our data were two dimensional, we could simply plot it and see plainly whether the two types or classes were mixed together or very far apart. To give a silly example, let's say we wanted to classify German shepherds and tabby kittens based on their weight and height. Plotting the weights along the x-axis and the heights along the y-axis, it would be pretty easy to see that the kittens would be close to the origin while the shepherds would be so much taller and heavier that they would be far away on the other side of the plot.
###Code
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
dog_img = mpimg.imread("german-shepherd.jpg")
cat_img = mpimg.imread("kitten.png")
plt.imshow(dog_img)
plt.show()
plt.imshow(cat_img)
plt.show()
###Output
_____no_output_____
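###Markdown
To make the toy example a bit more concrete, here is a minimal sketch (not part of the original tutorial) that plots a handful of made-up weights and heights - the numbers below are purely illustrative:
###Code
import matplotlib.pyplot as plt

# Made-up illustrative values: kittens are light and short, shepherds heavy and tall
kitten_weights, kitten_heights = [1.0, 1.5, 2.0, 2.5], [18, 20, 22, 24]      # kg, cm
shepherd_weights, shepherd_heights = [30, 34, 38, 42], [55, 60, 62, 65]      # kg, cm

plt.scatter(kitten_weights, kitten_heights, label="tabby kittens")
plt.scatter(shepherd_weights, shepherd_heights, label="German shepherds")
plt.xlabel("Weight [kg]")
plt.ylabel("Height [cm]")
plt.legend()
plt.show()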
###Markdown
Once we begin dealing with data in more than 3 or 4 dimensions - beyond the dimensions of the space we live in - it's nearly impossible to have an intuition for how the data "looks". So for us, unless we do something special with our data, we won't be able to have a visual sense of its form. Herein, we look at two special algorithms: PCA (Principal Components Analysis) and t-SNE (t-distributed Stochastic Neighbor Embedding).

Principal Components Analysis

If you've had some exposure to Linear Algebra, then you may enjoy this next portion; otherwise, feel free to read about the intuition behind it and skip down to t-SNE (no hard feelings). PCA is a special procedure which takes a set of examples/samples/observations (the rows of our data matrix) and their corresponding features/attributes/variables: in statistical terms, it takes the observations and their possibly correlated (dependent) variables and processes them in a way that returns a minimal set of variables which are linearly uncorrelated. This minimal set of uncorrelated variables is where the algorithm gets its name; these are the principal components. In layman's terms, PCA takes your data in a high dimension we'll represent with the letter $d$ and aims to transform it into a lower dimension we'll call $b$ (with $b \lt d$).

Rescale and standardize the data

Before we apply PCA to reduce the dimensionality of our dataset, it will be helpful to first normalize and scale the features - this is sometimes referred to as z-score normalization. What this means is that for each of the 178 features in the dataset, we will find the mean and standard deviation and shift/rescale each value so that the feature follows a standard normal Gaussian distribution. In other words, we want each feature to have a mean value of 0 and standard deviation of 1, or have the value of each feature fall under a curve that looks something like this:
###Code
points = np.arange(-5, 5, 0.01)
gaussian = np.exp(-((points)**2)/2) / np.sqrt(2 * np.pi)
plt.plot(points, gaussian, c='b')
plt.xlabel("^^We want most values to fall under the bell-shaped part^^")
plt.title("Points on a standard normal (Gaussian distribution) curve:\n mean=0, st. deviation=1")
plt.show()
def z_score_normalize(data):
for i in range(data.shape[1]):
z_score_std_data = (data[:,i] - data[:,i].mean()) / data[:,i].std()
data[:,i] = z_score_std_data
return data
def minmax_scaling(data):
for i in range(data.shape[1]):
data_minmax = (data[:,i] - data[:,i].min()) / (data[:,i].max() - data[:,i].min())
data[:,i] = data_minmax
return data
# Normalize and Min-Max scale the data before applying the PCA transformation
z_scored = z_score_normalize(data)
normed_scaled = minmax_scaling(z_scored)
###Output
_____no_output_____
###Markdown
Now that we've z-score normalized the data, we want to apply PCA. To see the effects of the normalization, we'll apply PCA to the normalized data as well as the original dataset as-is.
###Code
from sklearn.decomposition import PCA
print("Before PCA, the dataset has:", data.shape[0], "samples and", data.shape[1], "features.")
# Instantiate the objects which will transform our dataset down from 178 features to 2
straightPCA = PCA(n_components=2)
normed_scaledPCA = PCA(n_components=2)
# "Fit" the PCA models with the original (as-is) data and with the normed/scaled data
pca_data = straightPCA.fit_transform(data[:])
scaled_pca_data = normed_scaledPCA.fit_transform(normed_scaled[:])
# # Add the column of labels to the reduced data matrix
# reduced_as_is = np.hstack((reduced_as_is, seizure_data[:, -1].reshape(-1, 1)))
# reduced_scaled = np.hstack((reduced_scaled, seizure_data[:, -1].reshape(-1, 1)))
print("After PCA, the dataset has:", pca_data.shape[0], "samples and", pca_data.shape[1], "features.")
###Output
('Before PCA, the dataset has:', 11500, 'samples and', 178, 'features.')
('After PCA, the dataset has:', 11500, 'samples and', 2, 'features.')
###Markdown
Now that we've done some successful dimensionality reduction, we will plot the data and color-code the points by class label - with the default colormap, the negative (no seizure activity) samples appear purple and the positive (seizure activity) samples appear yellow. For this, we'll use matplotlib's pyplot.scatter function to produce a nice scatterplot.
###Code
# Create one scatter plot using the two PCs and color code by the class labels
# plt.figure(figsize=(16, 8))
plt.scatter(pca_data[:, 0], pca_data[:, 1], c=labels)
plt.xlabel("Principal Component 1")
plt.ylabel("Principal Component 2")
plt.title("Original Seizure Data after PCA Visualization")
plt.show()
plt.scatter(scaled_pca_data[:, 0], scaled_pca_data[:, 1], c=labels)
plt.xlabel("Principal Component 1")
plt.ylabel("Principal Component 2")
plt.title("Normed & Reduced Data after PCA Visualization")
plt.show()
###Output
_____no_output_____
###Markdown
Unfortunately, since many of the data points are overlapping, it's hard to see what's really going on here. We're going to create 3D scatter plots that will better represent the "shape" of the data. (Check out the matplotlib documentation for more: https://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html)
###Code
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(pca_data[:, 0], pca_data[:, 1], labels, c=labels)
plt.xlabel("Principal Component 1")
plt.ylabel("Principal Component 2")
plt.title("Original Seizure Data after PCA Visualization")
plt.show()
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(scaled_pca_data[:, 0], scaled_pca_data[:, 1], labels, c=labels)
plt.xlabel("Principal Component 1")
plt.ylabel("Principal Component 2")
plt.title("Normed & Scaled Seizure Data after PCA Visualization")
plt.show()
###Output
_____no_output_____
###Markdown
Now we can easily see that the yellow dots are all clustered together and completely separated from the purple clusters! In fact, the data are so well separated that we could think of hanging a sheet between them such that all the yellow (positive, seizure) samples sit on one side of the sheet and all the purple (negative) samples sit on the other. This brings us to the beauty and purpose of the SVM algorithm.

Part II: Support Vector Machine

There are two main types of SVM: one which performs linear classification - known as the Linear SVM model - and one which can efficiently perform non-linear classification using a special function called a kernel (this is sometimes referred to as the "kernel trick"). The kernel is the name for the particular type of function that the SVM tries to learn such that the function divides or separates the data by class. Technically, the SVM algorithm can also be used in the context of regression (predicting real values from data; a classic example is: given aspects of houses, predict how much a house you're interested in costs), as well as multiclass classification (3 or more classes) - but we'll focus on binary classification.

Decision boundary

In our 3D plot of the data after applying PCA, we noticed that there is a very clear separation of the data: if we could suspend a sheet between the data or draw a 3D line between the two clusters, we would know whether a point is in the positive or negative class depending on which side of the sheet it is on. That sheet, line, or separation is known as the decision boundary in Machine Learning, and its exact form depends on the dimensionality of your data. In math and ML, we call this "sheet" (like a flat surface with no thickness) a line in the 2D case, a plane in 3D, and a hyperplane in higher dimensions - in general, we'll refer to it as a hyperplane. The decision boundary can be computed with the help of the kernel function, which can be as simple or as complicated as:
- linear function (a straight line): $\phi(x) = w \cdot x + b$, where $w$ is a vector of parameters which gives the slope of the hyperplane and $b$ is an additional parameter for the bias - this is nearly the same as the equation of a line you probably saw growing up, $y = mx + b$, where $w \longleftrightarrow m$ and the $b$'s serve the same purpose - we don't use this form because it would get confusing with how we refer to datasets with $X$ and $y$, and it restricts us to the two dimensional case
- radial basis function (Gaussian): $\phi(x_i, x_j) = e^{-(||x_i - x_j||)^{2}/w}$, where $w$ is a free parameter (for the model to learn) and $x_i, x_j$ are features of the dataset
- polynomial function: $\phi(x_i, x_j) = (x_i \cdot x_j + b)^n$, where $x_i, x_j$ are features, $b$ is a free parameter, and $n$ is the degree of the polynomial

Since our data are so well separated after applying PCA, it seems like we should get good results by using the simplest choice, the linear kernel. The beauty of the SVM model is that it not only learns a decision boundary to separate the data classes, but additionally maximizes the margin which separates the two. The margin is a simple function of the parameters it learns: $$Margin ~=~ \frac{2}{\sqrt{(w \cdot w)}} $$The larger the margin, the more separation there is between the two classes, and the more likely the model will correctly predict the labels of unseen data. This margin is the distance from the closest point of the positive class to the decision boundary plus the distance from the closest point of the negative class to the boundary.
This gives us the largest-margin hyperplane.

C: the SVM regularization parameter

When we "train" an ML model, what we really mean is that - in most cases - we are progressively trying to find a parameter or combination of parameters which "fit" the data. From a statistical viewpoint, we're trying to learn the parameters of the underlying distribution of the data, so that given a new datapoint, we can predict which class it will belong to. In order to both learn and evaluate our model, we should have samples dedicated to each of training and testing the model. This, in its simplest sense, is known as cross validation. As good practice, we'll shuffle the dataset and labels together to help ensure that samples of each class are evenly mixed. Then, we'll take the first ~75% of samples as the training set, the next ~10% as validation, and the remainder (~15%) as the test set. The reason for doing cross validation is to prevent the model from learning the training data too well. This phenomenon is called over-fitting and, though it may sound counter-intuitive at first, when this happens the model is likely to generalize poorly - that is, to predict the labels of the unseen test set less well. Our goal is to train a model which is well-balanced: it learns the training data, thereby reducing training error (1 - accuracy of predicting training data), and is still able to predict the classes of the test set well, thus improving test accuracy (% correctly predicted / total attempts). One technique that is used, particularly with the SVM algorithm, is the regularization parameter $C$. This value penalizes the error which is incurred when the model incorrectly predicts the class labels of samples during training. For now, we will leave this value at its default of $1.0$, so we don't need to specify it in the SVC constructor below.
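As an aside (added here for reference, not from the original notebook), the kernel choices listed above map directly onto scikit-learn's `SVC` constructor arguments; a quick sketch:
###Code
from sklearn.svm import SVC

linear_clf = SVC(kernel="linear")                    # w . x + b
rbf_clf = SVC(kernel="rbf")                          # Gaussian radial basis function
poly_clf = SVC(kernel="poly", degree=3, coef0=1.0)   # (x_i . x_j + b)^n with n=3, b=1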
###Code
# First, stack the features and labels together, then shuffle the rows
shuffle_data = np.hstack((pca_data, labels.reshape(-1, 1)))
np.random.shuffle(shuffle_data)
# Define the splits for the train, validation, and test sets
train_size = int(len(pca_data) * 0.75)
validation_size = int(len(pca_data) * 0.85)
train_data = shuffle_data[: train_size, :-1]
train_labels = shuffle_data[: train_size, -1]
valid_data = shuffle_data[train_size : validation_size, :-1]
valid_labels = shuffle_data[train_size : validation_size, -1]
test_data = shuffle_data[validation_size :, :-1]
test_labels = shuffle_data[validation_size :, -1]
# Initialize the Support Vector Classifier with the linear kernel
# (C, the regularization parameter, is left at its default of 1.0 as discussed above)
from sklearn.svm import SVC
svm = SVC(kernel="linear")
# Fit the SVC model
svm.fit(train_data, train_labels)
###Output
_____no_output_____
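###Markdown
Before evaluating on the test set, we can peek at the parameters the linear SVC learned and plug them into the margin formula above. This is a small sketch added for illustration (not part of the original notebook); it assumes the fitted classifier is stored in `svm` as in the cell above.
###Code
# svm.coef_ holds w and svm.intercept_ holds b for the learned hyperplane w.x + b = 0
w = svm.coef_[0]
b = svm.intercept_[0]
margin = 2.0 / np.sqrt(np.dot(w, w))   # Margin = 2 / sqrt(w . w)
print("Learned w:", w)
print("Learned b:", b)
print("Margin:", margin)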
###Markdown
Test accuracy: We compute the test accuracy as follows: 1) Compute the error: $\frac{1}{n}\sum_{i=1}^{n} |y_{pred} - y_{true}|$ where $y_{pred}$ is the predicted label, $y_{true}$ the actual class label, and $n$ the number of samples 2) Subtract this from 1 and multiply by 100 to get a percent: $(1 - error) * 100 = \%$ correctly classified. Taking the absolute value in step 1, summing over all samples, and scaling by the number of samples gives the Mean Absolute Error (without the $1/n$ scaling it is sometimes called the Sum Absolute Error). It is also possible to square the error instead, which turns this into the Mean Squared Error (or Sum Squared Error when it is not scaled by the number of samples).
###Code
# Test the model's predictive abilities
predictions = svm.predict(test_data)
test_accuracy = 1 - (np.sum(np.abs(predictions - test_labels)) / len(predictions))
print("Our model predicted", test_accuracy*100, "% of test samples correctly!")
###Output
('Our model predicted', 79.18840579710145, '% of test samples correctly!')
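###Markdown
One thing the cells above never use is the validation split. Below is a minimal, hypothetical sketch (not from the original notebook) of how it could be used to pick the regularization parameter $C$; it assumes the `train_data`, `valid_data`, and label arrays defined earlier, and the candidate values are arbitrary.
###Code
from sklearn.svm import SVC

best_C, best_acc = None, -1.0
for C in [0.01, 0.1, 1.0, 10.0, 100.0]:   # arbitrary candidate values
    clf = SVC(C=C, kernel="linear")
    clf.fit(train_data, train_labels)
    acc = clf.score(valid_data, valid_labels)   # fraction of validation samples classified correctly
    if acc > best_acc:
        best_C, best_acc = C, acc
print("Best C on the validation set:", best_C, "with accuracy:", best_acc)
###Output
_____no_output_____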
|
notebooks/layers/merge/Maximum.ipynb | ###Markdown
**[merge.Maximum.0]**
###Code
# Setup assumed from the original notebook's initial cells (not included in this excerpt)
import numpy as np
import json
from keras.layers import Input, Dense, Maximum
from keras.models import Model

DATA = {}   # collects the generated test fixtures

def format_decimal(arr, places=6):
    # assumed helper: round the exported values to 6 decimal places
    return [round(x, places) for x in arr]

random_seed = 100
data_in_shape = (6,)
layer_0 = Input(shape=data_in_shape)
layer_1a = Dense(2, activation='linear')(layer_0)
layer_1b = Dense(2, activation='linear')(layer_0)
layer_2 = Maximum()([layer_1a, layer_1b])
model = Model(inputs=layer_0, outputs=layer_2)
np.random.seed(random_seed)
data_in = np.expand_dims(2 * np.random.random(data_in_shape) - 1, axis=0)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(random_seed + i)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
result = model.predict(data_in)
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
DATA['merge.Maximum.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
###Output
_____no_output_____
###Markdown
**[merge.Maximum.1]**
###Code
random_seed = 100
data_in_shape = (6,)
layer_0 = Input(shape=data_in_shape)
layer_1a = Dense(2, activation='linear')(layer_0)
layer_1b = Dense(2, activation='linear')(layer_0)
layer_1c = Dense(2, activation='linear')(layer_0)
layer_2 = Maximum()([layer_1a, layer_1b, layer_1c])
model = Model(inputs=layer_0, outputs=layer_2)
np.random.seed(random_seed)
data_in = np.expand_dims(2 * np.random.random(data_in_shape) - 1, axis=0)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(random_seed + i)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
result = model.predict(data_in)
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
DATA['merge.Maximum.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
###Output
_____no_output_____
###Markdown
**[merge.Maximum.2]**
###Code
random_seed = 100
data_in_shape = (6,)
layer_0 = Input(shape=data_in_shape)
layer_1a = Dense(2, activation='linear')(layer_0)
layer_1b = Dense(2, activation='linear')(layer_0)
layer_1c = Dense(2, activation='linear')(layer_0)
layer_1d = Dense(2, activation='linear')(layer_0)
layer_2 = Maximum()([layer_1a, layer_1b, layer_1c, layer_1d])
model = Model(inputs=layer_0, outputs=layer_2)
np.random.seed(random_seed)
data_in = np.expand_dims(2 * np.random.random(data_in_shape) - 1, axis=0)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(random_seed + i)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
result = model.predict(data_in)
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
DATA['merge.Maximum.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
###Output
_____no_output_____
###Markdown
export for Keras.js tests
###Code
import os
filename = '../../../test/data/layers/merge/Maximum.json'
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
json.dump(DATA, f)
print(json.dumps(DATA))
###Output
{"merge.Maximum.0": {"input": {"data": [0.08681, -0.443261, -0.150965, 0.689552, -0.990562, -0.756862], "shape": [6]}, "weights": [{"data": [0.08681, -0.443261, -0.150965, 0.689552, -0.990562, -0.756862, 0.341498, 0.651706, -0.726587, 0.150187, 0.782644, -0.581596], "shape": [6, 2]}, {"data": [0.032797, 0.141335], "shape": [2]}, {"data": [0.195363, 0.351974, -0.401437, 0.461481, 0.157479, 0.618035, -0.665503, -0.37571, -0.284137, -0.016505, -0.019604, 0.78882], "shape": [6, 2]}, {"data": [-0.135778, -0.651569], "shape": [2]}], "expected": {"data": [0.619647, 0.652267], "shape": [2]}}, "merge.Maximum.1": {"input": {"data": [0.08681, -0.443261, -0.150965, 0.689552, -0.990562, -0.756862], "shape": [6]}, "weights": [{"data": [0.08681, -0.443261, -0.150965, 0.689552, -0.990562, -0.756862, 0.341498, 0.651706, -0.726587, 0.150187, 0.782644, -0.581596], "shape": [6, 2]}, {"data": [0.032797, 0.141335], "shape": [2]}, {"data": [0.195363, 0.351974, -0.401437, 0.461481, 0.157479, 0.618035, -0.665503, -0.37571, -0.284137, -0.016505, -0.019604, 0.78882], "shape": [6, 2]}, {"data": [-0.135778, -0.651569], "shape": [2]}, {"data": [-0.704159, -0.543403, 0.614987, -0.403051, -0.628587, 0.544275, -0.189121, 0.991809, -0.028349, 0.30284, 0.201092, -0.953154], "shape": [6, 2]}, {"data": [-0.83252, -0.332987], "shape": [2]}], "expected": {"data": [0.619647, 0.821658], "shape": [2]}}, "merge.Maximum.2": {"input": {"data": [0.08681, -0.443261, -0.150965, 0.689552, -0.990562, -0.756862], "shape": [6]}, "weights": [{"data": [0.08681, -0.443261, -0.150965, 0.689552, -0.990562, -0.756862, 0.341498, 0.651706, -0.726587, 0.150187, 0.782644, -0.581596], "shape": [6, 2]}, {"data": [0.032797, 0.141335], "shape": [2]}, {"data": [0.195363, 0.351974, -0.401437, 0.461481, 0.157479, 0.618035, -0.665503, -0.37571, -0.284137, -0.016505, -0.019604, 0.78882], "shape": [6, 2]}, {"data": [-0.135778, -0.651569], "shape": [2]}, {"data": [-0.704159, -0.543403, 0.614987, -0.403051, -0.628587, 0.544275, -0.189121, 0.991809, -0.028349, 0.30284, 0.201092, -0.953154], "shape": [6, 2]}, {"data": [-0.83252, -0.332987], "shape": [2]}, {"data": [-0.98287, 0.908613, -0.10087, 0.220438, 0.70412, 0.47762, 0.948863, 0.714924, 0.710821, 0.719052, -0.295442, -0.575936], "shape": [6, 2]}, {"data": [-0.67459, 0.509562], "shape": [2]}], "expected": {"data": [0.619647, 0.821658], "shape": [2]}}}
|
note/old/frameSearch.ipynb | ###Markdown
Test
###Code
# Imports assumed from the original notebook's setup cells (not included in this excerpt)
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import FormatStrFormatter
from astropy.io import fits
from shapely.geometry import Point, Polygon
from descartes import PolygonPatch

wideFrame = '/Users/songhuang/Desktop/gama_compare/database_dr15a/hsc_dr15a_wide_frame.fits'
table = fits.open(wideFrame)[1].data
print(len(table), " frames")
table.columns
a = Point(335.82, 0.096).buffer(2.5)
###Output
_____no_output_____
###Markdown
Given (RA, DEC, radius), return all overlapping (visit, CCD) frames in a given filter
###Code
def genCircle(ra, dec, rad):
"""
Generate a circular Shape using input (RA, DEC) as center
and input searching radius as radius
"""
try:
cir = Point(ra, dec).buffer(rad)
except NameError:
from shapely.geometry import Point
cir = Point(ra, dec).buffer(rad)
return cir
def ccdToPolygon(frame):
"""
Convert one (VISIT, CCD) item in the HSC frame catalog
into a Polygon shape
"""
ccdPoly = Polygon([(frame['llcra'], frame['llcdecl']),
(frame['lrcra'], frame['lrcdecl']),
(frame['urcra'], frame['urcdecl']),
(frame['ulcra'], frame['ulcdecl'])])
return ccdPoly
def showFrameMatch(match, ra, dec, rad, dpi=80,
outPNG='frame_radec_match.png',
extra=''):
    """
    Plot the footprints of the matched (visit, CCD) frames together with
    the circular search region centered on (ra, dec) with radius rad [deg].
    """
minRa = np.nanmin(match['ra2000']) - 0.12
maxRa = np.nanmax(match['ra2000']) + 0.12
minDec = np.nanmin(match['decl2000']) - 0.08
maxDec = np.nanmax(match['decl2000']) + 0.08
xSize = 12.0
ySize = xSize * ((maxDec - minDec) / (maxRa - minRa))
fig = plt.figure(figsize=(xSize, ySize), dpi=dpi)
ax = fig.add_subplot(111)
# Turn off scientifc notation
#ax.ticklabel_format(axis='both', style='plain')
#ax.get_xaxis().get_major_formatter().set_scientific(False)
#ax.get_yaxis().get_major_formatter().set_scientific(False)
ax.xaxis.set_major_formatter(FormatStrFormatter('%6.2f'))
ax.set_xlim(minRa, maxRa)
ax.set_ylim(minDec, maxDec)
ax.text(0.09, 0.94, ("%7.3f" % ra).strip() + ' ' + ("%7.3f" % dec).strip() + \
' ' + extra,
fontsize=20, transform = ax.transAxes)
for frame in match:
ccdPoly = ccdToPolygon(frame)
ccdShow = PolygonPatch(ccdPoly, fc='r', ec='None',
alpha=0.1, zorder=1)
ax.add_patch(ccdShow)
ccdEdge = PolygonPatch(ccdPoly, fc='None', ec='k',
alpha=0.8, zorder=1)
ax.add_patch(ccdEdge)
regSearch = plt.Circle((ra, dec), rad, color='b',
fill=False, linewidth=3.5, linestyle='dashed',
alpha=0.8)
ax.add_artist(regSearch)
ax.scatter(ra, dec, marker='+', s=300, c='k', linewidth=3.0)
ax.set_xlabel(r'RA (deg)', fontsize=25)
ax.set_ylabel(r'DEC (deg)', fontsize=25)
fontsize = 16
for tick in ax.xaxis.get_major_ticks():
tick.label1.set_fontsize(fontsize)
for tick in ax.yaxis.get_major_ticks():
tick.label1.set_fontsize(fontsize)
ax.minorticks_on()
plt.tick_params(which='major', width=2.0, length=8.0, labelsize=20)
plt.tick_params(which='minor', width=1.8, length=6.0)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(2.5)
ax.grid(alpha=0.6, color='k', linewidth=1.5)
fig.subplots_adjust(bottom=0.1, left=0.1,
top=0.98, right=0.98)
#fig.savefig(outPNG, dpi=dpi)
#plt.close(fig)
def frameRaDecSearch(catFrame, ra, dec, rad, filter='HSC-I',
shortExp=False, verbose=True, fitsFile=True,
show=True, prefix=None, point=False):
"""
Find all HSC single frame CCD data that overlap with certain region
Input:
catFrame: FITS catalog of frame information
ra, dec: The (RA, DEC) of the field center (deg)
rad: Radius of the circular searching region (deg)
Option:
filter = 'HSC-I' : HSC filter
shortExp = False : Whether to include short exposures (frames with exptime of 35 s or less are excluded by default)
"""
if fitsFile:
# Read in the Frame catalog
if os.path.isfile(catFrame):
frameTab = fits.open(catFrame)[1].data
else:
raise Exception('# Can not find the input FRAME catalog : %s' % catFrame)
# Filter the catalog
if shortExp:
frameUse = frameTab[frameTab['filter01'] == filter.strip()]
else:
frameUse = frameTab[(frameTab['filter01'] == filter.strip()) &
(frameTab['exptime'] > 35.0)]
else:
frameUse = catFrame
# Only use the frames that are near the search region
frameNear = frameUse[(np.abs(frameUse['ra2000'] - ra) <= (rad + 0.3)) &
(np.abs(frameUse['decl2000'] - dec) <= (rad + 0.3))]
if verbose:
print("# %i frames are found in filter: %s" % (len(frameNear), filter))
# Region to search
if point:
cir = Point(ra, dec)
else:
cir = genCircle(ra, dec, rad)
match = []
for frame in frameNear:
ccdPoly = ccdToPolygon(frame)
match.append(cir.intersects(ccdPoly))
frameMatch = frameNear[np.asarray(match)]
if verbose:
print("# %i matched frames have been found! " % len(frameMatch))
if show:
if prefix is None:
prefix = 'frame_' + ("%7.3f" % ra).strip() + '_' + ("%7.3f" % dec).strip()
pngName = prefix + '_' + ("%3.1f" % rad).strip() + '_' + filter.strip() + '.png'
showFrameMatch(frameMatch, ra, dec, rad, outPNG=pngName,
extra=filter.strip())
return frameMatch
match = frameRaDecSearch(wideFrame, 335.82, 0.096, 0.0653)
showFrameMatch(match, 335.82, 0.096, 0.0653)
for visit in np.unique(match['visit']):
ccds = table[(table['visit'] == visit) &
(table['filter01'] == 'HSC-I')]
print(np.std(ccds['skylevel']))
###Output
165.449
226.825
212.675
212.04
122.417
117.547
112.018
113.855
113.069
119.385
122.603
|
examples/cart_spring_pendulum.ipynb | ###Markdown
 Let $x$ be the distance in the x direction from equilibrium position for body $m_1$
###Code
# Variable definitions
from mathpad import *
x = "x(t)" * m
m1 = "m1" * kg
theta = "theta(t)" * radians
m2 = "m2" * kg
k = "k" * N / m
l = "l" * m
F = "F(t)" * N
g = "g" * meter / s**2
print("Position of m2 wrt origin")
print("i direction")
r_2_O_i = x + l * sin(theta)
display(r_2_O_i)
print("j direction")
r_2_O_j = -l * cos(theta)
display(r_2_O_j)
print("Velocity of m2 wrt origin")
print("i direction")
v_2_i = diff(r_2_O_i)
display(v_2_i)
print("j direction")
v_2_j = diff(r_2_O_j)
display(v_2_j)
print("Magnitude of Velocity of m2 wrt origin")
v_2 = magnitude(v_2_i, v_2_j)
v_2
# Kinetic energy
from mathpad.mech import kinetic_energy, elastic_energy, euler_lagrange, gravitational_energy
print("Kinetic Energy")
T = factor(kinetic_energy(m1, diff(x)) + kinetic_energy(m2, v_2))
T
print("Potential Energy")
V = elastic_energy(k, x) + gravitational_energy(m2, r_2_O_j, g)
V
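# Note (added for reference): the euler_lagrange(T, V, Q, q) calls below appear to apply the
# standard Euler-Lagrange equation with Lagrangian L = T - V and generalized force Q:
#     d/dt( dL/d(q_dot) ) - dL/dq = Q
# with Q = F for the cart coordinate x and Q = 0 for the pendulum angle theta.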
print("Dynamics of Body 1")
x_dynamics = euler_lagrange(T, V, F, x)
x_dynamics
print("Dynamics of Body 2")
theta_dynamics = euler_lagrange(T, V, 0, theta)
theta_dynamics
sim_data = simulate_dynamic_system(
[x_dynamics, theta_dynamics],
plot_title="Cart-Spring System Response to a Small Perturbation",
x_f=10, max_step=0.01,
substitute={
k: 100,
m1: 10,
m2: 1,
l: 0.5,
g: 9.81,
# A small perturbation
F: piecewise(t, [(1, 1 * N), (float('inf'), 0 * N)])
},
initial_conditions={
x: 0,
diff(x): 0,
theta: 0,
diff(theta): 0
},
record=[x, theta],
plot_static=True
)
###Output
Solving subbed Equations
Solving finished. Simulating...
|
Day-13/3. Debugging FizzBuzz.ipynb | ###Markdown
Instructions
* Read the code in main.py
* Spot the problems 🐞.
* Modify the code to fix the program.
* No shortcuts - don't copy-paste to replace the code entirely with a working solution.

The code needs to print the solution to the FizzBuzz game.
###Code
#Bugged code
for number in range(1, 101):
if number % 3 == 0 or number % 5 == 0:
print("FizzBuzz")
if number % 3 == 0:
print("Fizz")
if number % 5 == 0:
print("Buzz")
else:
print([number])
#Debugged code
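# Fixes relative to the bugged version above:
#  1. The first check must use `and`, not `or`, so only multiples of both 3 and 5 print "FizzBuzz".
#  2. The subsequent checks must be `elif`s so each number prints exactly one line.
#  3. Print the number itself rather than a one-element list.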
for number in range(1, 101):
if number % 3 == 0 and number % 5 == 0:
print("FizzBuzz")
elif number % 3 == 0:
print("Fizz")
elif number % 5 == 0:
print("Buzz")
else:
print(number)
###Output
1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz
16
17
Fizz
19
Buzz
Fizz
22
23
Fizz
Buzz
26
Fizz
28
29
FizzBuzz
31
32
Fizz
34
Buzz
Fizz
37
38
Fizz
Buzz
41
Fizz
43
44
FizzBuzz
46
47
Fizz
49
Buzz
Fizz
52
53
Fizz
Buzz
56
Fizz
58
59
FizzBuzz
61
62
Fizz
64
Buzz
Fizz
67
68
Fizz
Buzz
71
Fizz
73
74
FizzBuzz
76
77
Fizz
79
Buzz
Fizz
82
83
Fizz
Buzz
86
Fizz
88
89
FizzBuzz
91
92
Fizz
94
Buzz
Fizz
97
98
Fizz
Buzz
|
comrx/dev/deprecated/comics_rx-04a_dev_existing_user_recs.ipynb | ###Markdown
Comics Rx [A comic book recommendation system](https://github.com/MangrobanGit/comics_rx) --- 5 - ALS Model - 'Pseudo' Deployment

This notebook explores and develops 'deploying' recommendations from a previously saved ALS model.

Libraries
###Code
%matplotlib inline
%load_ext autoreload
%autoreload 2 # 1 would be where you need to specify the files
#%aimport data_fcns
import pandas as pd # dataframes
import os
import time
import numpy as np
# Data storage
from sqlalchemy import create_engine # SQL helper
import psycopg2 as psql #PostgreSQL DBs
# import necessary libraries
import pyspark
from pyspark.sql import SparkSession
from pyspark.ml.evaluation import RegressionEvaluator
# from pyspark.sql.types import (StructType, StructField, IntegerType
# ,FloatType, LongType, StringType)
from pyspark.sql.types import *
import pyspark.sql.functions as F
from pyspark.sql.functions import col, explode, lit, isnan, when, count
from pyspark.ml.recommendation import ALS, ALSModel
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder, TrainValidationSplit
from pyspark.ml.evaluation import BinaryClassificationEvaluator
# Custom
import lib.data_fcns as dfc
import lib.keys as keys # Custom keys lib
import lib.comic_recs as cr
# instantiate SparkSession object
spark = pyspark.sql.SparkSession.builder.master("local[*]").getOrCreate()
# spark = SparkSession.builder.master("local").getOrCreate()
###Output
_____no_output_____
###Markdown
Retrieving Saved Model
###Code
comic_rec_model = ALSModel.load('als_filtered')
top_n_df = cr.get_top_n_recs_for_user(spark=spark, model=comic_rec_model, topn=50)
top_n_df
###Output
161
###Markdown
I'm testing on myself. I'm pretty sure I've bought a few of those titles above. This could point to a failure in how I aggregated on series, and there is some evidence of that. One example is *Gideon Falls*: there should be only one volume of that. Maybe it's graphic novels? But that shouldn't be an issue (no pun intended), because I believe the original dataset should just be individual comic books. Let's test versus the original dataset!

Set aside some test series:
- Paper Girls (Image)
- Saga (Other)
- Fade Out (Image)

These I know **for sure** I've bought, if not subscribed.

Set up connection to AWS RDS
###Code
# Define path to secret
secret_path_aws = os.path.join(os.environ['HOME'], '.secret',
'aws_ps_flatiron.json')
secret_path_aws
aws_keys = keys.get_keys(secret_path_aws)
user = aws_keys['user']
ps = aws_keys['password']
host = aws_keys['host']
db = aws_keys['db_name']
aws_ps_engine = ('postgresql://' + user + ':' + ps + '@' + host + '/' + db)
# Setup PSQL connection
conn = psql.connect(
database=db,
user=user,
password=ps,
host=host,
port='5432'
)
# Instantiate cursor
cur = conn.cursor()
# Pull all of my transactions.
query = """
SELECT
*
FROM
comic_trans
WHERE
account_num = '00161'
;
"""
conn.rollback()
# Execute the query
cur.execute(query)
# Check results
temp_df = pd.DataFrame(cur.fetchall())
temp_df.columns = [col.name for col in cur.description]
temp_df.head()
# Make a list of test comic_title
already_bought = ['Paper Girls (Image)', 'Saga (Other)', 'Fade Out (Image)',
'Sweet Tooth (Vertigo)']
temp_df.loc[temp_df['comic_title'].isin(already_bought), ['comic_title']].comic_title.unique()
###Output
_____no_output_____
###Markdown
Ok, so I already knew this was the case, but just wanted to confirm. Don't repeat already bought comicsLet's filter out comics already bought. To support this I think I want a `json` file I could use as a pseudo-database to look up existing account-comic_title relationships.
###Code
# Build the distinct account-to-comic_id pairs.
query = """
SELECT
DISTINCT
CAST(account_num AS INT) as account_id
,c.comic_id
FROM
comic_trans ct
inner join comics c on ct.comic_title = c.comic_title
;
"""
# conn.rollback()
# Execute the query
cur.execute(query)
# Check results
temp_df = pd.DataFrame(cur.fetchall())
temp_df.columns = [col.name for col in cur.description]
temp_df.head()
temp_df.to_json('support_data/acct_comics.json', orient='records'
,lines=True)
!head support_data/acct_comics.json
###Output
{"account_id":2,"comic_id":198}
{"account_id":2,"comic_id":223}
{"account_id":2,"comic_id":224}
{"account_id":2,"comic_id":312}
{"account_id":2,"comic_id":392}
{"account_id":2,"comic_id":455}
{"account_id":2,"comic_id":481}
{"account_id":2,"comic_id":482}
{"account_id":2,"comic_id":828}
{"account_id":2,"comic_id":841}
###Markdown
Recommendation function 2.0

Let's bring back the dev on returning recommendations:
###Code
def get_top_n_recs_for_user(spark, model, topn=10):
"""
Given requested n and ALS model, returns top n recommended comics
"""
tgt_acct_id = input()
# Create spark df manually
a_schema = StructType([StructField("account_id", LongType())])
# Init lists
tgt_list = []
acct_list = []
tgt_list.append(int(tgt_acct_id))
acct_list.append(tgt_list)
# Create one-row spark df
tgt_accts = spark.createDataFrame(acct_list, schema=a_schema)
# Get recommendations for user
userSubsetRecs = model.recommendForUserSubset(tgt_accts, topn)
userSubsetRecs.persist()
# Flatten the recs list
top_n_exploded = (userSubsetRecs.withColumn('tmp',explode('recommendations'))
.select('account_id', col("tmp.comic_id"), col("tmp.rating")))
top_n_exploded.persist()
# Get comics titles
comics = spark.read.json('raw_data/comics.json')
comics.persist()
# shorten with alias
top_n = top_n_exploded.alias('topn')
com = comics.alias('com')
# Clean up the spark df to list of titles
top_n_titles = (top_n.join(com.select('comic_id','comic_title')
,top_n.comic_id==com.comic_id)
.select('comic_title'))
top_n_titles.persist()
# Cast to pandas df and return it
top_n_df = top_n_titles.select('*').toPandas()
top_n_df.index += 1
return top_n_df
top_n_req = 10
top_n_df = get_top_n_recs_for_user(spark=spark, model=comic_rec_model, topn=top_n_req)
top_n_df
tgt_acct_id = input()
tgt_acct_id
# Create spark df manually
a_schema = StructType([StructField("account_id", LongType())])
# Init lists
tgt_list = []
acct_list = []
tgt_list.append(int(tgt_acct_id))
acct_list.append(tgt_list)
# Create one-row spark df
tgt_accts = spark.createDataFrame(acct_list, schema=a_schema)
tgt_accts.show()
# Get recommendations for user
userSubsetRecs = comic_rec_model.recommendForUserSubset(tgt_accts, 3 * top_n_req)
userSubsetRecs.persist()
userSubsetRecs.show()
# Flatten the recs list
top_n_exploded = (userSubsetRecs.withColumn('tmp',explode('recommendations'))
.select('account_id', col("tmp.comic_id"), col("tmp.rating")))
top_n_exploded.persist()
top_n_exploded.show(top_n_req*3)
# Get comics titles
comics = spark.read.json('support_data/comics.json')
comics.persist()
comics.show(10)
# Get account to comics xwalk
acct_comics = spark.read.json('support_data/acct_comics.json')
# acct_comics = acct_comics.select('account_id')
acct_comics = (
acct_comics.withColumnRenamed('account_id','acct_id')
.withColumnRenamed('comic_id', 'cmc_id')
)
acct_comics.persist()
acct_comics.show(10)
# shorten with alias
top_n = top_n_exploded.alias('topn')
com = comics.alias('com')
ac = acct_comics.alias('ac')
# Arleady bought
recs_prev_bought = (
top_n.join(ac, [top_n.account_id==ac.acct_id,
top_n.comic_id==ac.cmc_id], 'left')
.filter('ac.acct_id is null')
.select('account_id', 'comic_id')
)
recs_prev_bought.persist()
recs_prev_bought.show()
# Clean up the spark df to list of titles
top_n_titles = (
top_n.join(com.select('comic_id','comic_title')
,top_n.comic_id==com.comic_id, "left")
.join(ac, [top_n.account_id==ac.acct_id,
top_n.comic_id==ac.cmc_id], 'left')
.filter('ac.acct_id is null')
.select('comic_title')
)
top_n_titles.persist()
top_n_titles.show(top_n_req)
# Cast to pandas df and return it
top_n_df = top_n_titles.select('*').toPandas()
top_n_df = top_n_df.head(top_n_req)
top_n_df.index += 1
top_n_df
###Output
_____no_output_____
###Markdown
Ok this seems like it could work. Let's roll it into the new function.
###Code
def get_top_n_new_recs(spark, model, topn=10):
"""
Given requested n and ALS model, returns top n recommended comics
"""
# Multiplicative buffer
# Get n x topn, because we will screen out previously bought
buffer = 3
# Get account number from user
tgt_acct_id = input()
# To 'save' the account number, will put it into a spark dataframe
# Create spark df manually
a_schema = StructType([StructField("account_id", LongType())])
# Init lists
tgt_list = []
acct_list = []
tgt_list.append(int(tgt_acct_id))
acct_list.append(tgt_list)
# Create one-row spark df
tgt_accts = spark.createDataFrame(acct_list, schema=a_schema)
# Get recommendations for user
userSubsetRecs = model.recommendForUserSubset(tgt_accts, (topn*buffer))
userSubsetRecs.persist()
# Flatten the recs list
top_n_exploded = (userSubsetRecs.withColumn('tmp',explode('recommendations'))
.select('account_id', col("tmp.comic_id"), col("tmp.rating")))
top_n_exploded.persist()
# Get comics titles
comics = spark.read.json('raw_data/comics.json')
comics.persist()
# Get account-comics summary (already bought)
acct_comics = spark.read.json('support_data/acct_comics.json')
acct_comics = (
acct_comics.withColumnRenamed('account_id','acct_id')
.withColumnRenamed('comic_id', 'cmc_id')
)
acct_comics.persist()
# shorten with alias
top_n = top_n_exploded.alias('topn')
com = comics.alias('com')
ac = acct_comics.alias('ac')
# Clean up the spark df to list of titles, and only include these
# that are NOT on bought list
top_n_titles = (
top_n.join(com.select('comic_id','comic_title')
,top_n.comic_id==com.comic_id, "left")
.join(ac, [top_n.account_id==ac.acct_id,
top_n.comic_id==ac.cmc_id], 'left')
.filter('ac.acct_id is null')
.select('comic_title')
)
top_n_titles.persist()
top_n_titles.show(topn)
# Cast to pandas df and return it
top_n_df = top_n_titles.select('*').toPandas()
top_n_df = top_n_df.head(top_n)
top_n_df.index += 1
return top_n_df
###Output
_____no_output_____
###Markdown
Let's test it! **Version 1:**
###Code
top_n_req = 10
top_n_df = cr.get_top_n_recs_for_user(spark=spark, model=comic_rec_model
, topn=top_n_req)
top_n_df
###Output
161
###Markdown
**Version 2:**
###Code
top_n_df = cr.get_top_n_new_recs(spark=spark, model=comic_rec_model
, topn=top_n_req)
top_n_df
###Output
161
###Markdown
YES! How about someone new?
###Code
newbie_df = cr.get_top_n_new_recs(spark=spark, model=comic_rec_model
, topn=top_n_req)
newbie_df
###Output
9999
|
notebooks/homecdt_fteng/Aiyd_Home_Credit-1090121.ipynb | ###Markdown
Overall consolidation
###Code
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
np.__version__,pd.__version__
# pd.set_option("display.max_rows",None)
# pd.set_option("display.max_columns",None)
###Output
_____no_output_____
###Markdown
Load the data
###Code
# Home (paths)
# prev_data = pd.read_csv('previous_application.csv')
# POS_data = pd.read_csv('POS_CASH_balance.csv')
# School (paths)
prev_data = pd.read_csv('..\\..\\Desktop\\home-credit-default-risk\\previous_application.csv')
POS_data = pd.read_csv('..\\..\\Desktop\\home-credit-default-risk\\POS_CASH_balance.csv')
# Team lead (paths)
# prev_data = pd.read_csv('../../datasets/homecdt_eda/previous_application.csv')
# POS_data = pd.read_csv('../../datasets/homecdt_eda/POS_CASH_balance.csv')
###Output
_____no_output_____
###Markdown
POS_CASH_balance.csv feature engineering---
###Code
# 2020-01-16
# Total_Months --> total number of repayment months
# MONTHS_BALANCE_start --> loan start time
# MONTHS_BALANCE_finish --> loan end time
# CNT_INSTALMENT_max --> maximum number of installments applied for (contracted term)
# CNT_INSTALMENT_min --> minimum number of installments (contract changed mid-term)
# CNT_INSTALMENT_median --> median number of installments applied for
# Delay_Rate --> proportion of late installments (e.g. 1 late out of 4 installments = 1/4)
# SK_DPD_max --> maximum days past due on the loan
# SK_DPD_mean --> average days past due on the loan
# Contract_Change --> paid off early and not a currently active loan
# Contract_Change_count --> number of installments the contract was shortened by
# CNT_INSTALMENT/Total_Months_rate --> ratio of applied installments to total repayment months
# Contract_Change_rate(CNT_INSTALMENT) --> installments shortened divided by applied installments
# Contract_Change_rate(Total_Months) --> installments shortened divided by total repayment months
# One-hot encode the categorical status columns
categorical_columns = []
for name in POS_data:
if POS_data[name].dtype=='object':
categorical_columns.append(name)
POS_data = pd.get_dummies(POS_data, columns = categorical_columns)
POS_data.rename(columns={'NAME_CONTRACT_STATUS_Active':'Active',
'NAME_CONTRACT_STATUS_Amortized debt':'Amortized debt',
'NAME_CONTRACT_STATUS_Approved':'Approved',
'NAME_CONTRACT_STATUS_Canceled':'Canceled',
'NAME_CONTRACT_STATUS_Completed':'Completed',
'NAME_CONTRACT_STATUS_Demand':'Demand',
'NAME_CONTRACT_STATUS_Returned to the store':'Returned to the store',
'NAME_CONTRACT_STATUS_Signed':'Signed',
'NAME_CONTRACT_STATUS_XNA':'XNA'},inplace=True)
# Add new columns
POS_data['Delay']=(POS_data['SK_DPD']>0).replace(True,1) # this installment was overdue: 1 = late, 0 = on time
POS_data['SK_DPD_mean']=POS_data['SK_DPD'] # copy used to compute the mean
POS_data['CNT_INSTALMENT_min']=POS_data['CNT_INSTALMENT'] # minimum number of installments applied for
POS_data['CNT_INSTALMENT_median']=POS_data['CNT_INSTALMENT'] # median number of installments applied for
POS_data['MONTHS_BALANCE_start']=POS_data['MONTHS_BALANCE'] # loan start time
POS_data['MONTHS_BALANCE_finish']=POS_data['MONTHS_BALANCE'] # loan end time
# Create the new aggregated columns
num_aggregations = {
'MONTHS_BALANCE':'count',
'MONTHS_BALANCE_start':'min',
'MONTHS_BALANCE_finish':'max',
'CNT_INSTALMENT' : 'max',
'CNT_INSTALMENT_min':'min',
'CNT_INSTALMENT_median':'median',
'Delay':'mean',
'SK_DPD':'max',
'SK_DPD_mean':'mean',
'Completed':'max',
'Active':'sum',
'Signed':'sum',
'Demand':'sum',
'Returned to the store':'sum',
'Approved':'sum',
'Amortized debt':'sum',
'Canceled':'sum',
'XNA':'sum'
}
POS_data_1 = POS_data.groupby(['SK_ID_CURR',
'SK_ID_PREV']).agg({**num_aggregations})
# Rename the columns
POS_data_1.rename(columns={'MONTHS_BALANCE':'Total_Months',
'Delay':"Delay_Rate",
'SK_DPD':'SK_DPD_max',
'CNT_INSTALMENT':'CNT_INSTALMENT_max'},inplace=True)
# Reset the index
POS_data_1.reset_index(level=('SK_ID_CURR',
'SK_ID_PREV'),inplace=True)
# Paid off early and not a currently active loan
POS_data_1['Contract_Change'] = ((POS_data_1['Total_Months'] < POS_data_1 ['CNT_INSTALMENT_max']) & (POS_data_1['Completed'] != 0)).replace(True,1)
# Number of installments the contract was shortened by
POS_data_1['Contract_Change_count'] = POS_data_1['CNT_INSTALMENT_max']-POS_data_1['CNT_INSTALMENT_min']
# Ratio of applied installments to total repayment months
POS_data_1['CNT_INSTALMENT/Total_Months_rate'] = POS_data_1['CNT_INSTALMENT_max']/POS_data_1['Total_Months']
# Installments shortened divided by applied installments
POS_data_1['Contract_Change_rate(CNT_INSTALMENT)'] = POS_data_1['Contract_Change_count']/POS_data_1['CNT_INSTALMENT_max']
# Installments shortened divided by total repayment months
POS_data_1['Contract_Change_rate(Total_Months)'] = POS_data_1['Contract_Change_count']/POS_data_1['Total_Months']
# Months spent in each status divided by total repayment months
POS_data_1['Active'] = POS_data_1['Active']/POS_data_1['Total_Months']
POS_data_1['Signed'] = POS_data_1['Signed']/POS_data_1['Total_Months']
POS_data_1['Demand'] = POS_data_1['Demand']/POS_data_1['Total_Months']
POS_data_1['Returned to the store'] = POS_data_1['Returned to the store']/POS_data_1['Total_Months']
POS_data_1['Approved'] = POS_data_1['Approved']/POS_data_1['Total_Months']
POS_data_1['Amortized debt'] = POS_data_1['Amortized debt']/POS_data_1['Total_Months']
POS_data_1['Canceled'] = POS_data_1['Canceled']/POS_data_1['Total_Months']
POS_data_1['XNA'] = POS_data_1['XNA']/POS_data_1['Total_Months']
# POS_data_1 = POS_data_1.drop(['SK_ID_CURR'],axis=1)
###Output
_____no_output_____
###Markdown
Merge POS_CASH_balance.csv and installments_payments.csv into previous_application.csv
###Code
# Combine installments_payments, POS_CASH_balance, and credit_card_balance
prev_comb_data = pd.read_csv('..\\..\\Desktop\\home-credit-default-risk\\previous_application_w_installment.csv')
result = pd.merge(prev_comb_data, POS_data_1,how='outer')
# result.to_csv('..\\..\\Desktop\\home-credit-default-risk\\previous_application_w_installment_w_POS.csv')
# Home (paths)
# prev_comb_data = pd.read_csv('previous_application_w_installment.csv')
# result = pd.merge(prev_comb_data, POS_data_1,how='outer')
# result.to_csv('previous_application_w_installment_w_POS.csv')
###Output
c:\program files\python37\lib\site-packages\IPython\core\interactiveshell.py:3051: DtypeWarning: Columns (2,8,10,15,16,18,19,20,21,22,23,24,25,27,29,30) have mixed types. Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
Outlier handling
###Code
# Anomalous date values in prev (365243 placeholder); flag columns could also be added
result_test = result
issue_data = ['DAYS_FIRST_DRAWING','DAYS_FIRST_DUE','DAYS_LAST_DUE_1ST_VERSION','DAYS_LAST_DUE','DAYS_TERMINATION']
for name in issue_data:
# rename = name + "_ANOM"
# data[rename] = data[name] == 365243
result_test[name].replace({365243: np.nan}, inplace = True)
# Nearly half of SELLERPLACE_AREA is -1 while the rest are positive integers; suspected to mean 'unknown'
result_test['SELLERPLACE_AREA'].replace({-1: np.nan}, inplace = True)
# Drop prev columns with more than 60% missing values
miss_value_percent = result_test.isnull().sum().sort_values(ascending=False)/len(result_test)
miss_value_percent = (miss_value_percent * 100).round(decimals=2)
if (miss_value_percent>60).any():
delect_colomns = list(miss_value_percent[miss_value_percent>60].index)
for n in delect_colomns:
result_test = result_test.drop(columns=n)
# AMT_CREDIT has only one missing value, so fill it with 0
result_test['AMT_CREDIT'].replace({np.nan: 0}, inplace = True)
print(f'Dropped columns {delect_colomns} ')
###Output
Dropped columns ['RATE_INTEREST_PRIVILEGED', 'RATE_INTEREST_PRIMARY', 'DAYS_FIRST_DRAWING'] 
###Markdown
Feature engineering
###Code
# 2020-01-21
test = result_test.copy() # start by making a copy of the df
test['ANNUITY/CREDIT'] = test['AMT_ANNUITY'] / test['AMT_CREDIT']
test['DOWN_PAYMENT/ANNUITY'] = test['AMT_DOWN_PAYMENT'] / test['AMT_ANNUITY']
test['DOWN_PAYMENT/CREDIT'] = test['AMT_DOWN_PAYMENT'] / test['AMT_CREDIT']
test['DOWN_PAYMENT/ANNUITY'] = test['AMT_DOWN_PAYMENT'] / test['AMT_ANNUITY']
test['GOODS_PRICE/CREDIT'] = test['AMT_GOODS_PRICE'] / test['AMT_CREDIT']
test['APPLICATION/CREDIT'] = test['AMT_APPLICATION'] / test['AMT_CREDIT']
test['APPLICATION/GOODS_PRICE'] = test['AMT_APPLICATION'] / test['AMT_GOODS_PRICE']
test['DAYS_LAST_DUE-DAYS_TERMINATION'] = test['DAYS_LAST_DUE'] - test['DAYS_TERMINATION']
# One-hot encode the categorical status columns
onehot_list = ['NAME_CONTRACT_STATUS',
'NAME_TYPE_SUITE',
'CODE_REJECT_REASON',
'NAME_PAYMENT_TYPE',
'NAME_PRODUCT_TYPE',
'NFLAG_INSURED_ON_APPROVAL',
'PRODUCT_COMBINATION',
'NAME_SELLER_INDUSTRY',
'NAME_YIELD_GROUP']
test = pd.get_dummies(test, columns = onehot_list , dummy_na=True)
# *****************************************************************************************************************
# Compute aggregations over the newly added previous_application columns
test1=test.copy()
# Use the float64 columns as the aggregation key-value pairs, e.g. 'AMT_ANNUITY': ['sum', 'max', 'min', 'mean'], ...
new_feature = [
'AMT_ANNUITY',
'AMT_APPLICATION',
'AMT_CREDIT',
'AMT_DOWN_PAYMENT',
'AMT_GOODS_PRICE',
'DAYS_LAST_DUE',
'DAYS_TERMINATION',
'ANNUITY/CREDIT',
'DOWN_PAYMENT/ANNUITY',
'DOWN_PAYMENT/CREDIT',
'DOWN_PAYMENT/ANNUITY',
'GOODS_PRICE/CREDIT',
'APPLICATION/CREDIT',
'APPLICATION/GOODS_PRICE',
'DAYS_LAST_DUE-DAYS_TERMINATION']
dict_type = {}
for i in new_feature:
dict_type[i] = ['max','min']
num_aggregations = dict_type
# Count the number of occurrences
# count = {'SK_ID_PREV':'count'}
test_1 = test.groupby(['SK_ID_CURR']).agg({**num_aggregations})
# Rearrange (flatten) the columns
columns = []
for m in test_1.columns.levels[0]:
for n in test_1.columns.levels[1]:
# if m == 'SK_ID_PREV':
# columns.append('count')
# break
columns.append(f'PREV_{m}_{n}')
test_1.columns = columns
test_1.reset_index(level=('SK_ID_CURR'),inplace=True)
# *****************************************************************************************************************
# Compute aggregations over the one-hot encoded previous_application categorical columns
test2 = test.copy()
# Turn the one-hot columns into the aggregation key-value pairs, e.g. 'NAME_CONTRACT_TYPE_Cash loans': ['count'], ...
dict_type = {}
for i in list(test.columns[91:]):
dict_type[i] = ['sum']
num_aggregations = dict_type
# Count the number of previous loans
count = {'SK_ID_PREV':'count'}
test_2 = test2.groupby(['SK_ID_CURR']).agg({**count,**num_aggregations})
# Rearrange (flatten) the columns
columns = []
for m in test_2.columns.levels[0]:
for n in test_2.columns.levels[1][1:]:
if m == 'SK_ID_PREV':
columns.append('count')
break
columns.append(f'PREV_{m}_rate')
test_2.columns = columns
test_2.reset_index(level=('SK_ID_CURR'),inplace=True)
# Compute the ratios
ID_count_t = test_2['count'].values.reshape((340893,1))
test_rate = (test_2.iloc[:,2:]/ID_count_t)
# Put them back together
test_2_rate = pd.concat([test_2['SK_ID_CURR'],test_rate], axis = 1)
# *****************************************************************************************************************
# SELLERPLACE_AREA concentration (max count at a single seller area divided by this person's number of loans)
test4 = test.copy()
test4['Total_count'] = test4['SK_ID_PREV']
# First count per ID and per seller area
test_4 = test4.groupby(['SK_ID_CURR','SELLERPLACE_AREA']).agg({'Total_count':'count'})
test_4.reset_index(level=('SK_ID_CURR','SELLERPLACE_AREA'),inplace=True)
# Add a column holding each ID's highest per-seller-area count
test_4['count_max'] =test_4['Total_count']
test_4 = test_4.groupby(['SK_ID_CURR']).agg({'count_max':'max','Total_count':'sum'})
# Add a column with the ratio
test_4['SELLERPLACE_AREA_HIGH_rate'] = test_4['count_max']/test_4['Total_count']
test_4.reset_index(level=('SK_ID_CURR'),inplace=True)
# Keep only the ratio
test_4 = test_4.drop(columns=['count_max','Total_count'],axis=1)
file_list = [test_1,test_2_rate,test_4]
# Create an empty df
second_result = pd.DataFrame(columns=['SK_ID_CURR'])
for i in file_list:
second_result = pd.merge(second_result, i,how='outer')
###Output
_____no_output_____
###Markdown
Merge
###Code
# Final merge of all feature tables
Hl0 = pd.read_csv(f'..\\..\\Desktop\\home-credit-default-risk\\C-application_train_cleaning_with_bureau.csv')
Hl1 = pd.read_csv(f'..\\..\\Desktop\\home-credit-default-risk\\C-previous_application_Aiyd.csv')
Hl2 = pd.read_csv(f'..\\..\\Desktop\\home-credit-default-risk\\C-previous_application_FE_w_installment&POS.csv')
Hl3 = pd.read_csv(f'..\\..\\Desktop\\home-credit-default-risk\\C-previous_application_FE_w_installment&POS_CL.csv')
Hl4 = pd.read_csv(f'..\\..\\Desktop\\home-credit-default-risk\\C-previous_application_FE_w_installment&POS_in2y.csv')
Hl5 = pd.read_csv(f'..\\..\\Desktop\\home-credit-default-risk\\C-previous_application_FE_w_installment&POS_RL.csv')
Hl6 = pd.read_csv(f'..\\..\\Desktop\\home-credit-default-risk\\C-previous_application_FE_w_CreditCard.csv')
Hl0 = Hl0.drop(columns=['Unnamed: 0'])
Hl1 = Hl1.drop(columns=['Unnamed: 0'])
file_list = [Hl0,Hl1,Hl2,Hl3,Hl4,Hl5,Hl6]
# Use the main application table (the frame with 335 columns) as the base,
# then left-merge every other feature table onto it
final_result = pd.DataFrame(columns=['SK_ID_CURR'])
for i in file_list:
    if len(i.columns) == 335:
        final_result = i
        continue
    final_result = pd.merge(final_result, i, how='left')
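# A more compact equivalent of the loop above (a sketch, assuming the base
# application table Hl0 is listed first):
# from functools import reduce
# final_result = reduce(lambda left, right: pd.merge(left, right, how='left'), file_list)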
final_result.shape
Hl0.shape
final_result.to_csv('..\\..\\Desktop\\home-credit-default-risk\\BDSE12_03G_HomeCredit_V1.csv')
###Output
_____no_output_____ |
Aula 44 - DFT resolucao e zero padding/.ipynb_checkpoints/DFT - resolucao e zero padding-checkpoint.ipynb | ###Markdown
DFT resolution and zero paddingIn this notebook we will look at the frequency resolution of the DFT and at the effect of appending zeros (zero padding).Recall that a signal with $N$ samples, sampled at a rate $F_s$ [Hz], has the frequency vector\begin{equation}f_{Hz}^{(k)} = \left\{\frac{0F_{s}}{N}, \ \frac{1F_{s}}{N}, \ \frac{2F_{s}}{N}, \ ... \ \frac{(N-1)F_{s}}{N}\right\}\end{equation}
###Code
# importar as bibliotecas necessárias
import numpy as np # arrays
import matplotlib.pyplot as plt # plots
plt.rcParams.update({'font.size': 14})
import IPython.display as ipd # to play signals
import sounddevice as sd
import soundfile as sf
###Output
_____no_output_____
###Markdown
Let us create an ideal impulse responseThis is the impulse response of a single degree-of-freedom (1DOF) mass-spring system\begin{equation}h(t) = \frac{A}{\omega_d} \mathrm{e}^{-\zeta \omega_n t} \mathrm{sin}(\omega_d t)\end{equation}where $\zeta$ is the damping ratio, $\omega_n = 2 \pi f_1$ and $\omega_d = \omega_n\sqrt{1-\zeta^2}$, and whose Fourier transform is given by\begin{equation}H(\mathrm{j}\omega) = \frac{A}{\omega_n^2 - \omega^2+ \mathrm{j}2\zeta\omega_n\omega} \end{equation}
###Code
# Define the sampling rate fs = 100 Hz and total record time T = 5 seconds
Fs=100
T=5
# time vector
time = np.arange(0, T, 1/Fs) # Ts = 0.01 [s]
# Clean signal (the original phenomenon) - h(t) of a 1-DOF mass-spring system
A=200
zeta=0.3
wn=2*np.pi*10
wd=np.sqrt(1-zeta**2)*wn
# Impulse response
ht=(A/wd)*np.exp(-zeta*wn*time)*np.sin(wd*time)
# Theoretical FRF (frequency response function)
freq = np.arange(0, Fs, 0.1)
Hw=A/(wn**2 - (2*np.pi*freq)**2+ 1j*2*zeta*wn*(2*np.pi*freq));
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
plt.title(r'$h(t)$ - (fenômeno original)')
plt.plot(time, ht, '-b', linewidth = 2)
plt.grid(linestyle = '--', which='both')
plt.xlabel('Tempo [s]')
plt.ylabel('Amplitude [-]')
plt.xlim((0, time[-1]))
plt.subplot(1,2,2)
plt.title(r'$|H(j\omega)|$ - (fenômeno original)')
plt.plot(freq, 20*np.log10(np.abs(Hw)), '-b', linewidth = 2)
plt.grid(linestyle = '--', which='both')
plt.xlabel('Frequência [Hz]')
plt.ylabel('Magnitude [dB]')
plt.xlim((0, Fs))
plt.ylim((-80, -20))
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Add noise to the impulse response (the ideal phenomenon)This is typically what happens in a measurement: it is always contaminated by noise to some degree. Here the noise is white (normally distributed with zero mean and a given variance).
###Code
noise = np.random.normal(loc = 0, scale = 0.01, size = len(time))
ht_med = ht+noise
plt.figure()
plt.title(r'$h(t)$ - (fenômeno original vs. medido)')
plt.plot(time, ht, '-b', linewidth = 2, label = 'Fenômeno original')
plt.plot(time, ht_med, '-r', linewidth = 1, label = 'Fenômeno medido')
plt.legend(loc = 'upper right')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Tempo [s]')
plt.ylabel('Amplitude [-]')
plt.xlim((0, time[-1]));
###Output
_____no_output_____
###Markdown
Now we can investigate the FFT and its resolution for a few cases 1. Original resolution"If n is not given, the length of the input along the axis specified by axis is used."In other words, the spectrum has the same number of points as the time-domain signal.
###Code
N = len(ht) # Number of FFT points
Hw_med = np.fft.fft(ht_med)
freq_vec = np.linspace(0, (N-1)*Fs/N, N)
print("Vetor de frequência: {:.2f}, {:.2f}, {:.2f}, ... {:.2f}".format(freq_vec[0], freq_vec[1], freq_vec[2], freq_vec[-1]))
print("Resolução do espectro: {:.2f} [Hz]".format(freq_vec[1]-freq_vec[0]))
plt.figure(figsize=(10,6))
plt.title(r'$|H(j\omega)|$')
plt.plot(freq, 20*np.log10(np.abs(Hw)), '-b', linewidth = 2, label = 'Fenômeno original')
plt.plot(freq_vec, 20*np.log10(np.abs(Hw_med)/Fs), '-r', linewidth = 1, label = 'Fenômeno medido')
plt.legend(loc = 'lower left')
plt.axvline(Fs/2, color='grey',linestyle = '--', linewidth = 4, alpha = 0.4)
plt.grid(linestyle = '--', which='both')
plt.xlabel('Frequência [Hz]')
plt.ylabel('Magnitude [dB]')
plt.xlim((0, Fs))
plt.ylim((-80, -20))
plt.tight_layout()
#plt.savefig('dft_mag_par.pdf')
plt.show()
###Output
Vetor de frequência: 0.00, 0.20, 0.40, ... 99.80
Resolução do espectro: 0.20 [Hz]
###Markdown
Check Parseval's theorem
###Code
# Energy in the time domain
Et = np.sum(ht_med**2)
# Energy in the frequency domain
Ef = np.sum(np.abs(Hw_med)**2)
print("A energia no domínio do tempo é {:.2f}".format(Et))
print("A energia no domínio da frequência é {:.2f}".format(Ef/N))
###Output
A energia no domínio do tempo é 13.55
A energia no domínio da frequência é 13.55
###Markdown
2. Truncated signalNow we truncate $h(t)$. Note that the noise tends to dominate the impulse response from about $0.4$ [s] onwards."If n is smaller than the length of the input, the input is cropped."The two options below are equivalent:a. Truncate the signal in time:- ht_med_trunc = ht_med[time<=0.4]- Hw_med_trunc = np.fft.fft(ht_med_trunc)b. Pass Nt as the second argument of the FFT- Hw_med_trunc = np.fft.fft(ht_med, Nt)
###Code
ht_med_trunc = ht_med[time<=0.4]
Nt = len(time[time<=0.4]) # Number of FFT points
# Hw_med_trunc = np.fft.fft(ht_med_trunc) # Try this line later
Hw_med_trunc = np.fft.fft(ht_med, Nt)
freq_vec_t = np.linspace(0, (Nt-1)*Fs/Nt, Nt)
print("O número de pontos na DFT é {}".format(Nt))
print("Vetor de frequência: {:.2f}, {:.2f}, {:.2f}, ... {:.2f}".format(freq_vec_t[0], freq_vec_t[1],
freq_vec_t[2], freq_vec_t[-1]))
print("Resolução do espectro: {:.2f} [Hz]".format(freq_vec_t[1]-freq_vec_t[0]))
plt.figure(figsize=(10,6))
plt.title(r'$|H(j\omega)|$ - (fenômeno original)')
plt.plot(freq, 20*np.log10(np.abs(Hw)), '-b', linewidth = 2, label = 'Fenômeno original')
plt.plot(freq_vec_t, 20*np.log10(np.abs(Hw_med_trunc)/Fs), '-r', linewidth = 1, label = 'Fenômeno medido truncado')
plt.legend(loc = 'lower left')
plt.axvline(Fs/2, color='grey',linestyle = '--', linewidth = 4, alpha = 0.4)
plt.grid(linestyle = '--', which='both')
plt.xlabel('Frequência [Hz]')
plt.ylabel('Magnitude [dB]')
plt.xlim((0, Fs/2)) # Extenda até Fs se desejar
plt.ylim((-80, -20))
plt.tight_layout()
plt.show()
###Output
O número de pontos na DFT é 41
Vetor de frequência: 0.00, 2.44, 4.88, ... 97.56
Resolução do espectro: 2.44 [Hz]
###Markdown
3. Truncated signal padded with zeros at the end.Now we truncate $h(t)$ and append zeros at the end. Note that in the previous case the spectral resolution became coarser (larger $\Delta f$)."If n is larger, the input is padded with zeros."The two options below are equivalent:a. Truncate the signal in time and pad it manually:- ht_med_trunc = ht_med[time<=0.4]- Nt = len(time[time<=0.4]) - ht_med_zp = np.concatenate((ht_med_trunc, np.zeros(N-Nt)))- Hw_med_zp = np.fft.fft(ht_med_zp)b. Pass N as the second argument of the FFT- Hw_med_zp = np.fft.fft(ht_med_trunc, N)
###Code
ht_med_trunc = ht_med[time<=0.4]
Nt = len(time[time<=0.4])
ht_med_zp = np.concatenate((ht_med_trunc, np.zeros(N-Nt)))
#Hw_med_zp = np.fft.fft(ht_med_zp) # Try this line later
Hw_med_zp = np.fft.fft(ht_med_trunc, N)
freq_vec_zp = np.linspace(0, (N-1)*Fs/N, N)
print("O número de pontos na DFT é {}".format(len(ht_med_zp)))
print("Vetor de frequência: {:.2f}, {:.2f}, {:.2f}, ... {:.2f}".format(freq_vec_zp[0], freq_vec_zp[1],
freq_vec_zp[2], freq_vec_zp[-1]))
print("Resolução do espectro: {:.2f} [Hz]".format(freq_vec_zp[1]-freq_vec_zp[0]))
plt.figure(figsize=(10,6))
plt.title(r'$|H(j\omega)|$ - (fenômeno original)')
plt.plot(freq, 20*np.log10(np.abs(Hw)), '-b', linewidth = 2, label = 'Fenômeno original')
plt.plot(freq_vec_zp, 20*np.log10(np.abs(Hw_med_zp)/Fs), '-r', linewidth = 1, label = 'Fenômeno med. trunc. comp. zeros')
plt.legend(loc = 'lower left')
plt.axvline(Fs/2, color='grey',linestyle = '--', linewidth = 4, alpha = 0.4)
plt.grid(linestyle = '--', which='both')
plt.xlabel('Frequência [Hz]')
plt.ylabel('Magnitude [dB]')
plt.xlim((0, Fs/2)) # Extenda até Fs se desejar
plt.ylim((-80, -20))
plt.tight_layout()
plt.show()
###Output
O número de pontos na DFT é 500
Vetor de frequência: 0.00, 0.20, 0.40, ... 99.80
Resolução do espectro: 0.20 [Hz]
###Markdown
Working with the IFFTLet us briefly apply the inverse transform and recover $h(t)$ from the spectrum of the truncated, zero-padded signal.
###Code
ht_rec = np.fft.ifft(Hw_med_zp) # ifft
N_rec = len(ht_rec) # Number of samples
t_mrec = (N_rec-1)/Fs # Maximum time of the recovered signal
time_rec = np.linspace(0, t_mrec, N_rec) # Time vector recovered from the ifft
plt.figure()
plt.title(r'$h(t)$ - (fenômeno original vs. IFFT do medido)')
plt.plot(time, ht, '-b', linewidth = 2, label = 'Fenômeno original')
plt.plot(time_rec, ht_rec, '-r', linewidth = 1, label = 'Fenômeno med. trunc. comp. zeros')
plt.legend(loc = 'upper right')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Tempo [s]')
plt.ylabel('Amplitude [-]')
plt.xlim((0, time[-1]));
###Output
C:\Users\ericb\Anaconda3\lib\site-packages\numpy\core\_asarray.py:83: ComplexWarning: Casting complex values to real discards the imaginary part
return array(a, dtype, copy=False, order=order)
###Markdown
Inspect ht_recYou will notice that:1. The signal recovered by the IFFT has a small imaginary part2. You need the spectrum from $0$ to $F_s$ to compute the IFFT correctly.
###Code
ht_rec
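# The leftover imaginary part is only numerical noise (an illustrative check, not in the original):
# print(np.max(np.abs(np.imag(ht_rec))))
# Plotting ht_rec.real instead of ht_rec avoids the ComplexWarning seen earlier.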
###Output
_____no_output_____ |
content/lessons/03-Conditionals/SmallGroup-Conditionals.ipynb | ###Markdown
Now You Code In Class: Exchange RatesThis exercise consists of two parts, demonstrating problem simplification. In the first part we complete the program under the assumption all inputs are valid, in part two we write the complete program, handling bad input. Part OneYou have been tasked with writing a program to help employees of a travel company convert currency.The company travels to the following places. Next to the place is the exchange rate.- Europe: 1 US Dollar == 0.9 Euro- China: 1 US Dollar == 7.2 Chinese Yuan- Russia: 1 US Dollar == 64.2 Russian RubleYou should write a program to input the amount of currency in US Dollars, and then input the place (either Europe, China, or Russia). The program should then output the appropriate amount of exchanged currency. Example Run 1:```Enter the amount of Currency in US Dollars: $100Enter the place you are travelling: Europe, China or Russia? China$100.00 US Dollars is $720.00 Chinese Yuan``` Problem AnalysisInputs:- PROMPT 1Outputs:- PROMPT 2Algorithm (Steps in Program):```PROMPT 3```
###Code
# PROMPT 5
###Output
_____no_output_____
###Markdown
Part TwoAfter you get that working, re-write the program to account for bad input, such as the case where you enter a non-numerical value for US Dollars, or when you enter a place other than the three accepted locations.Example Run 2: (bad currency)```Enter the amount of Currency in US Dollars: A hundo'A hundo' is not a valid amount!```Example Run 3: (unsupported country)```Enter the amount of Currency in US Dollars: $100Enter the place you are travelling: Europe, China or Russia? CanadaSorry. You cannot travel to 'Canada' at this time``` Problem Analysis- Explain your approach to handle an unsupported country: PROMPT 6- Explain your approach to handle bad currency: PROMPT 7
###Code
# PROMPT 8 (copy code from PROMPT 5) re-write to address unsupported country
# FINAL VERSION
# PROMPT 9 (copy code from PROMPT 8) re-write to address bad currency
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit_now()
###Output
_____no_output_____ |
homework/hw4/uniqname_hw4.ipynb | ###Markdown
Homework 4: Basically bioinformatics --- Topic areas* Functions* I/O operations* Dictionary lookups* Data structures* Control structures --- Introduction Bioinformatics is a special field that blends **biology**, **mathematics/statistics**, and **computer science**. One note that is often left off is that the computer science that is done is often in the form of _Big Data_ computer science. One reason many computer science classes in the bioinformatics field suffer, is they forget to bring this concept into the class. This happens for many reasons:* It is hard to get good data* Toy examples can easily teach the same concepts* Students are often in disparate disciplines However, this homework aims to introduce you to more "bioinformatic-y" workflows that often are not developed until you hit your lab. While the material that we will be covering is oriented towards bacterial genomics, the concepts should still apply as far as work flow is considered. --- Background > _B. subtilis_ is a Gram-positive bacterium that is often used as a model organism in the study of bacterial chromosome replication. It is also considered to be the best studied Gram-positive bacterial.[$\^1\$](https://wickhamlabs.co.uk/technical-resource-centre/fact-sheet-bacillus-subtilis/) We will be working with some simulated _B. subtilis_ data today. Some key characteristics of the _B. subtilis_ genome is that it is a 4.13611 megabase (Mb) circular genome with a median GC% of 43.6[$\^2\$](https://www.ncbi.nlm.nih.gov/genome/?term=Bacillus%20subtilis[Organism]&cmd=DetailsSearch). The DataA description of the provided data are:1. `b_subtilis_genome.fa`: A [FASTA format](https://blast.ncbi.nlm.nih.gov/Blast.cgi?CMD=Web&PAGE_TYPE=BlastDocs&DOC_TYPE=BlastHelp) file containing the reference sequence for _B. subtilis_ * A hallmark of the FASTA format is that the sequence header line precedes the sequences and always begins with a '>' character1. `normal.bam`: A [BAM format](https://samtools.github.io/hts-specs/SAMv1.pdf) file that contains the simulated short reads for a "normal" _B subtilis_ sample * This is a very specialized format that needs special libraries to parse. However, just think of it as one read per line1. `normal.bam.bai`: A BAM index file used for random access1. `tumor.bam`: A [BAM format](https://samtools.github.io/hts-specs/SAMv1.pdf) file that contains the simulated short reads for a "tumor" _B subtilis_ sample1. `tumor.bam.bai`: A BAM index file used for random accessThe SAM/BAM format can be summarized in this table: Important Note You will be using a special Python library for handling this data. This package is called BAMnostic. Before doing this homework, you will need to install BAMnostic. To do so, go to your terminal and type:conda install -c conda-forge bamnostic Consider taking a look at the BAMnostic documentation for more information. MethodsThe data was simulated using the [Bacillus subtilis subsp. subtilis str. 
168](https://support.illumina.com/sequencing/sequencing_software/igenome.html) provided by illumina's iGenomes collection.* [ART](https://www.niehs.nih.gov/research/resources/software/biostatistics/art/) was used to simulate the short reads (`fastq` files) based on the genome above using known base calling error rates and biases within specified illumina technologies* [SInC](https://sourceforge.net/projects/sincsimulator/files/?source=navbar) was used to modify the ART reads to simulate SNPS, CNVs, and indels within the reads* [VarSimLab](https://github.com/NabaviLab/VarSimLab) was used to orchestrate the other technologies and generate the short reads necessary for this assignment* [bwa](http://bio-bwa.sourceforge.net/) was used to align the reads to the reference genome* [samtools](http://www.htslib.org/) was used to sort, merge, and index the resultant filesAssuming that all of the above software is installed correctly, I used the following command to generate the data:>```bashpython Exome_Script.py -use_genome -c 7 -s -snp 10 -l 100 -sam output b_subtilis_genome.fa``` This means that there are two samples (normal and tumor) of $\approx$ 7x coverage of $\approx$ 100 bp long reads with a SNP rate of 10% across the genome of _B. subtilis_. As this is a cancer cell line simulation workflow, the "tumor" sample should significantly differ from the "normal". --- Instructions This homework is designed to be as close to real genomics research as you can get without the math/stats/research. You are tasked to serially process both the `normal.bam` and `tumor.bam` sample files. For each position on the genome, you will track the number of reads that support that position (`depth`) for a given sample, the counts of each base observed at that position (`counts`), and the consensus base at that position (`consensus`). The data structure you will be using looks like this: So, to reiterate, your data structure is:```pythonlen(genome_positions) == len_of_genometype(genome_positions) == list Every position will have this data structuregenome_positions[0] = { 'normal': { 'depth': 0, number of reads that support this position 'counts': Counter(), Count of observed bases at this position 'consensus': 0 The most observed base at this position }, 'tumor': { 'depth': 0, number of reads that support this position 'counts': Counter(), Count of observed bases at this position 'consensus': 0 The most observed base at this position }}``` Using `bamnostic` you will iterate through the files (`normal.bam` and `tumor.bam`) one read at a time. 
You will have to perform the following steps:* Identify the read's starting position against the reference (`read.pos`)* Using that position: * Iterate through the read's sequence (`read.seq`) one letter at a time * Keep a count of all observed bases * Keep count of number of reads that have overlapped that position * Keep count of which base has been observed the most at that position For example:```python>>> print(normal_read1.pos, normal_read1.seq)20 GTATCCACAGAGGTTATCGACAACATTTTCACATTACCAACCCCTGTGGACAAGGTTTTTTCAACAGGTTGTCCGCTTTGTGGATAAGATTGTGACAACC>>> print(normal_read1.pos, normal_read1.seq)28 AGAGGTTATCGACCACATTTTCACATTACCAACCCGTGTGGACAAGGTTTTTTCAACAGGTTGTCCGCTTTGTGGATAAGATTGTGACAACCATTGCAAG>>> print(genome_positions[28]['normal']){'depth': 2, 'counts': Counter({'A': 2}), 'consensus': 'A'}``` Important Note You only need to use read.seq and read.pos to complete this assignment You do not have to consider qualities, flags, or CIGAR strings at this time The result When you have finished processing the files, you will need to produce a second `list` of `tuples` that if and only if the following condition is met:> More than half of the total reads at that specific position call a different consensus base in the tumor sample versus the normal sample at the same positionThe data each of the `tuple`s must contain are:1. The position of the variant1. The variant base1. The reference base1. The allele frequency of the variant base (counts of variant base calls/total base counts at the given position) --- The Coding Contract You should need to create no less than four (4) functions to finish this assignment:1. `initialize_positions`: * Input: * genome filename * Output: * initialized `genome_positions`1. `process_bam`: * Input: * filename to be processed * Sample name (`'normal'` or `'tumor'`) * `genome_positions` * Output: Should return the modified `genome_positions` given that specific sample1. `process_read`: * Input: * `bamnostic.core.AlignedSegment`: This is just the read object type * Sample name (`'normal'` or `'tumor'`) * `genome_positions` * Output: Should return the modified `genome_positions`1. `process_results`: * Input: * `positions` * Output: * The summarized variants as a `list` --- Academic Honor CodeIn accordance with Rackham's Academic Misconduct Policy; upon submission of your assignment, you (the student) are indicating acceptance of the following statement:> “I pledge that this submission is solely my own work.”As such, the instructors reserve the right to process any and all source code therein contained within the submitted notebooks with source code plagiarism detection software.Any violations of the this agreement will result in swift, sure, and significant punishment. --- Due dateThis assignment is due **October 14th, 2019 by Noon (12 PM)** --- Submission> `_hw4.ipynb` Example> `mdsherm_hw4.ipynb`We will *only* grade the most recent submission of your homework. --- Late PolicyEach submission will receive a **10%** penalty per day (up to three days) that the assignment is late.After that, the student will receive a **0** for the exam. --- Good luck and code responsibly!---
###Code
from collections import Counter
import bamnostic as bs
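# Illustrative only (not a solution to the assignment): Counter.most_common(1)
# is one convenient way to pick a consensus base from a Counter of observed bases, e.g.
# Counter({'A': 2, 'G': 1}).most_common(1)[0][0]  # -> 'A'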
###Output
_____no_output_____
###Markdown
I have set this up so that you do not have to worry about dealing with `bamnostic` directly. You should only have to handle the `read` object from here on out.
###Code
# Initialize your genome_list here
def initialize_positions(genome_filename):
len_genome = 0
with open(genome_filename) as genome:
for line in genome:
            pass  # Do your stuff here
return genome_positions
def process_bam(filename, sample_name, genome_positions = None):
with bs.AlignmentFile(filename) as bam:
for read in bam:
            pass  # Do your stuff here
return genome_positions
def process_read(read, sample_name, genome_positions = None):
# Do your stuff here
return genome_positions
def process_results(genome_positions = None):
# Do your stuff here
return variant_calls
###Output
_____no_output_____
###Markdown
--- This last cell should work if all the code above it is run
###Code
# Initialize the list
genome_positions = initialize_positions('b_subtilis_genome.fa')
# Process all the bam files
for filename in ('normal.bam', 'tumor.bam'):
genome_positions = process_bam(filename, filename.split('.')[0], genome_positions)
# Process the results
results = process_results(genome_positions)
# Print out the first 10
print(results[:10])
###Output
_____no_output_____ |
src/20fd1.ipynb | ###Markdown
Abstract- Goal: - Learn various types of the first order derivative approximation: FFD, BFD, CFD operators - Understand the convergence rate of each operator - Learn Python functions ProblemLet $f(x) = \sin x$. Plot, with $h = .5$- its explicit first order derivative $f'$, - FFD $\delta_h f$, - BFD $\delta_{-h}f$, - and CFD $\delta_{\pm h}f$ Analysis Given a smooth function $f: \mathbb R \mapsto \mathbb R$, its derivative is$$f'(x) = \lim_{h\to 0} \frac{f(x+h) - f(x)}{h}.$$This means that, if $h$ is small enough, then$$f'(x) \simeq \frac{f(x+h) - f(x)}{h} := \delta_h f.$$We call $\delta_h$ the Finite Difference (FD) operator. In particular, - If $h>0$, then $\delta_h$ is the Forward Finite Difference (FFD);- If $h<0$, then $\delta_h$ is the Backward Finite Difference (BFD);- The average of FFD and BFD is the Central Finite Difference (CFD), denoted by$$\delta_{\pm h} f (x) := \frac 1 2 (\delta_h f (x) + \delta_{-h} f(x)) \simeq f'(x).$$ __Prop__- Both FFD and BFD have convergence order $1$; i.e.$$|\delta_h f(x) - f'(x)| = O(h).$$- CFD has convergence order $2$.$$|\delta_{\pm h} f(x) - f'(x)| = O(h^2).$$__ex__Prove the above proposition. Code We shall import all needed packages first.
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Math operators ffd, bfd, cfd will be defined here as python functions.
###Code
def ffd(f, x, h):
return (f(x+h) - f(x))/h
def bfd(f, x, h):
return (f(x) - f(x-h))/h
def cfd(f, x, h):
return (f(x+h) - f(x-h))/h/2
###Output
_____no_output_____
###Markdown
Next, for the original function $f(x) = \sin x$, we shall plot its exact derivative$$f'(x) = \cos x, $$then, with $h = .5$, plot- ffd $\delta_h f$, - bfd $\delta_{-h}f$, - and cfd $\delta_{\pm h}f$ From the graph, it is obvious that cfd is the closest one to original $f'$.
###Code
h = .5 #step size
x_co = np.linspace(0, 2*np.pi, 100)
plt.plot(x_co, np.cos(x_co), label = 'cosine')
plt.plot(x_co, ffd(np.sin, x_co, h), label = 'FFD')
plt.plot(x_co, bfd(np.sin, x_co, h), label = 'BFD')
plt.plot(x_co, cfd(np.sin, x_co, h), label = 'CFD')
plt.legend()
###Output
_____no_output_____
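###Markdown
A quick numerical sanity check of the convergence orders stated in the proposition (an illustrative sketch added here, not part of the original notebook; the evaluation point $x_0 = 1$ is an arbitrary choice). Halving $h$ should roughly halve the FFD/BFD error and quarter the CFD error:
```python
x0 = 1.0
for h_test in (0.1, 0.05, 0.025):
    err_ffd = abs(ffd(np.sin, x0, h_test) - np.cos(x0))
    err_cfd = abs(cfd(np.sin, x0, h_test) - np.cos(x0))
    print(f"h={h_test}  FFD error={err_ffd:.2e}  CFD error={err_cfd:.2e}")
```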
###Markdown
Abstract- Goal: - Learn various types of the first order derivative approximation: FFD, BFD, CFD operators - Understand the convergence rate of each operator - Learn Python functions ProblemLet $f(x) = \sin x$. Plot, with $h = .5$- its explicit first order derivative $f'$, - FFD $\delta_h f$, - BFD $\delta_{-h}f$, - and CFD $\delta_{\pm h}f$ Analysis Given a smooth function $f: \mathbb R \mapsto \mathbb R$, its derivative is$$f'(x) = \lim_{h\to 0} \frac{f(x+h) - f(x)}{h}.$$This means that, if $h$ is small enough, then$$f'(x) \simeq \frac{f(x+h) - f(x)}{h} := \delta_h f.$$We call $\delta_h$ the Finite Difference (FD) operator. In particular, - If $h>0$, then $\delta_h$ is the Forward Finite Difference (FFD);- If $h<0$, then $\delta_h$ is the Backward Finite Difference (BFD);- The average of FFD and BFD is the Central Finite Difference (CFD), denoted by$$\delta_{\pm h} f (x) := \frac 1 2 (\delta_h f (x) + \delta_{-h} f(x)) \simeq f'(x).$$ __Prop__- Both FFD and BFD have convergence order $1$; i.e.$$|\delta_h f(x) - f'(x)| = O(h).$$- CFD has convergence order $2$.$$|\delta_{\pm h} f(x) - f'(x)| = O(h^2).$$__ex__Prove the above proposition. Code We shall import all needed packages first.
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Math operators ffd, bfd, cfd will be defined here as python functions.
###Code
def ffd(f, x, h):
return (f(x+h) - f(x))/h
def bfd(f, x, h):
return (f(x) - f(x-h))/h
def cfd(f, x, h):
return (f(x+h) - f(x-h))/h/2
###Output
_____no_output_____
###Markdown
Next, for the original function $f(x) = \sin x$, we shall plot its exact derivative$$f'(x) = \cos x, $$then, with $h = .5$, plot- ffd $\delta_h f$, - bfd $\delta_{-h}f$, - and cfd $\delta_{\pm h}f$ From the graph, it is obvious that cfd is the closest one to original $f'$.
###Code
h = .5 #step size
x_co = np.linspace(0, 2*np.pi, 100)
plt.plot(x_co, np.cos(x_co), label = 'cosine')
plt.plot(x_co, ffd(np.sin, x_co, h), label = 'FFD')
plt.plot(x_co, bfd(np.sin, x_co, h), label = 'BFD')
plt.plot(x_co, cfd(np.sin, x_co, h), label = 'CFD')
plt.legend()
###Output
_____no_output_____ |
Image/hdr_landsat.ipynb | ###Markdown
Pydeck Earth Engine IntroductionThis is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks. If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. To create and enter the environment, run:```conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -ysource activate pydeck-eejupyter nbextension install --sys-prefix --symlink --overwrite --py pydeckjupyter nbextension enable --sys-prefix --py pydeck```then open Jupyter Notebook with `jupyter notebook`. Now in a Python Jupyter Notebook, let's first import required packages:
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import requests
import ee
###Output
_____no_output_____
###Markdown
AuthenticationUsing Earth Engine requires authentication. If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/. If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line.Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create MapNext it's time to create a map. Here we create an `ee.Image` object
###Code
# Initialize objects
ee_layers = []
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# %%
# Add Earth Engine dataset
import datetime
view_state = pdk.ViewState(longitude=-95.738, latitude=18.453, zoom=9)
# Filter the LE7 collection to a single date.
collection = (ee.ImageCollection('LE7_L1T')
.filterDate(datetime.datetime(2002, 11, 8),
datetime.datetime(2002, 11, 9)))
image = collection.mosaic().select('B3', 'B2', 'B1')
# Display the image normally.
ee_layers.append(EarthEngineLayer(ee_object=image, vis_params={'gain':'1.6, 1.4, 1.1'}))
# Add and stretch the water. Once where the elevation is masked,
# and again where the elevation is zero.
elev = ee.Image('srtm90_v4')
mask1 = elev.mask().eq(0).And(image.mask())
mask2 = elev.eq(0).And(image.mask())
ee_layers.append(EarthEngineLayer(ee_object=image.mask(mask1), vis_params={'gain':6.0,'bias':-200}))
ee_layers.append(EarthEngineLayer(ee_object=image.mask(mask2), vis_params={'gain':6.0,'bias':-200}))
###Output
_____no_output_____
###Markdown
Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map:
###Code
r = pdk.Deck(layers=ee_layers, initial_view_state=view_state)
r.show()
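# Optional export to a standalone HTML file (a sketch; assumes a pydeck version
# that provides Deck.to_html):
# r.to_html('hdr_landsat_pydeck.html')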
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
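# Optional extra basemap layer (a sketch; any key from geemap's basemaps dictionary works):
# Map.add_basemap('HYBRID')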
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
import datetime
Map.setCenter(-95.738, 18.453, 9)
# Filter the LE7 collection to a single date.
collection = (ee.ImageCollection('LE7_L1T')
.filterDate(datetime.datetime(2002, 11, 8),
datetime.datetime(2002, 11, 9)))
image = collection.mosaic().select('B3', 'B2', 'B1')
# Display the image normally.
Map.addLayer(image, {'gain': '1.6, 1.4, 1.1'}, 'Land')
# Add and stretch the water. Once where the elevation is masked,
# and again where the elevation is zero.
elev = ee.Image('srtm90_v4')
mask1 = elev.mask().eq(0).And(image.mask())
mask2 = elev.eq(0).And(image.mask())
Map.addLayer(image.mask(mask1), {'gain': 6.0, 'bias': -200}, 'Water: Masked')
Map.addLayer(image.mask(mask2), {'gain': 6.0, 'bias': -200}, 'Water: Elev 0')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
import datetime
Map.setCenter(-95.738, 18.453, 9)
# Filter the LE7 collection to a single date.
collection = (ee.ImageCollection('LE7_L1T')
.filterDate(datetime.datetime(2002, 11, 8),
datetime.datetime(2002, 11, 9)))
image = collection.mosaic().select('B3', 'B2', 'B1')
# Display the image normally.
Map.addLayer(image, {'gain': '1.6, 1.4, 1.1'}, 'Land')
# Add and stretch the water. Once where the elevation is masked,
# and again where the elevation is zero.
elev = ee.Image('srtm90_v4')
mask1 = elev.mask().eq(0).And(image.mask())
mask2 = elev.eq(0).And(image.mask())
Map.addLayer(image.mask(mask1), {'gain': 6.0, 'bias': -200}, 'Water: Masked')
Map.addLayer(image.mask(mask2), {'gain': 6.0, 'bias': -200}, 'Water: Elev 0')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
import datetime
Map.setCenter(-95.738, 18.453, 9)
# Filter the LE7 collection to a single date.
collection = (ee.ImageCollection('LE7_L1T')
.filterDate(datetime.datetime(2002, 11, 8),
datetime.datetime(2002, 11, 9)))
image = collection.mosaic().select('B3', 'B2', 'B1')
# Display the image normally.
Map.addLayer(image, {'gain': '1.6, 1.4, 1.1'}, 'Land')
# Add and stretch the water. Once where the elevation is masked,
# and again where the elevation is zero.
elev = ee.Image('srtm90_v4')
mask1 = elev.mask().eq(0).And(image.mask())
mask2 = elev.eq(0).And(image.mask())
Map.addLayer(image.mask(mask1), {'gain': 6.0, 'bias': -200}, 'Water: Masked')
Map.addLayer(image.mask(mask2), {'gain': 6.0, 'bias': -200}, 'Water: Elev 0')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically install its dependencies, including earthengine-api and folium.
###Code
import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
import datetime
Map.setCenter(-95.738, 18.453, 9)
# Filter the LE7 collection to a single date.
collection = (ee.ImageCollection('LE7_L1T')
.filterDate(datetime.datetime(2002, 11, 8),
datetime.datetime(2002, 11, 9)))
image = collection.mosaic().select('B3', 'B2', 'B1')
# Display the image normally.
Map.addLayer(image, {'gain': '1.6, 1.4, 1.1'}, 'Land')
# Add and stretch the water. Once where the elevation is masked,
# and again where the elevation is zero.
elev = ee.Image('srtm90_v4')
mask1 = elev.mask().eq(0).And(image.mask())
mask2 = elev.eq(0).And(image.mask())
Map.addLayer(image.mask(mask1), {'gain': 6.0, 'bias': -200}, 'Water: Masked')
Map.addLayer(image.mask(mask2), {'gain': 6.0, 'bias': -200}, 'Water: Elev 0')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
import datetime
Map.setCenter(-95.738, 18.453, 9)
# Filter the LE7 collection to a single date.
collection = (ee.ImageCollection('LE7_L1T')
.filterDate(datetime.datetime(2002, 11, 8),
datetime.datetime(2002, 11, 9)))
image = collection.mosaic().select('B3', 'B2', 'B1')
# Display the image normally.
Map.addLayer(image, {'gain': '1.6, 1.4, 1.1'}, 'Land')
# Add and stretch the water. Once where the elevation is masked,
# and again where the elevation is zero.
elev = ee.Image('srtm90_v4')
mask1 = elev.mask().eq(0).And(image.mask())
mask2 = elev.eq(0).And(image.mask())
Map.addLayer(image.mask(mask1), {'gain': 6.0, 'bias': -200}, 'Water: Masked')
Map.addLayer(image.mask(mask2), {'gain': 6.0, 'bias': -200}, 'Water: Elev 0')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
import datetime
Map.setCenter(-95.738, 18.453, 9)
# Filter the LE7 collection to a single date.
collection = (ee.ImageCollection('LE7_L1T')
.filterDate(datetime.datetime(2002, 11, 8),
datetime.datetime(2002, 11, 9)))
image = collection.mosaic().select('B3', 'B2', 'B1')
# Display the image normally.
Map.addLayer(image, {'gain': '1.6, 1.4, 1.1'}, 'Land')
# Add and stretch the water. Once where the elevation is masked,
# and again where the elevation is zero.
elev = ee.Image('srtm90_v4')
mask1 = elev.mask().eq(0).And(image.mask())
mask2 = elev.eq(0).And(image.mask())
Map.addLayer(image.mask(mask1), {'gain': 6.0, 'bias': -200}, 'Water: Masked')
Map.addLayer(image.mask(mask2), {'gain': 6.0, 'bias': -200}, 'Water: Elev 0')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
import datetime
Map.setCenter(-95.738, 18.453, 9)
# Filter the LE7 collection to a single date.
collection = (ee.ImageCollection('LE7_L1T')
.filterDate(datetime.datetime(2002, 11, 8),
datetime.datetime(2002, 11, 9)))
image = collection.mosaic().select('B3', 'B2', 'B1')
# Display the image normally.
Map.addLayer(image, {'gain': '1.6, 1.4, 1.1'}, 'Land')
# Add and stretch the water. Once where the elevation is masked,
# and again where the elevation is zero.
elev = ee.Image('srtm90_v4')
mask1 = elev.mask().eq(0).And(image.mask())
mask2 = elev.eq(0).And(image.mask())
Map.addLayer(image.mask(mask1), {'gain': 6.0, 'bias': -200}, 'Water: Masked')
Map.addLayer(image.mask(mask2), {'gain': 6.0, 'bias': -200}, 'Water: Elev 0')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____ |
ipynb/US-Guam.ipynb | ###Markdown
United States: Guam* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/US-Guam.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="US", region="Guam", weeks=5);
overview(country="US", region="Guam");
compare_plot(country="US", region="Guam");
# load the data
cases, deaths = get_country_data("US", "Guam")
# get population of the region for future normalisation:
inhabitants = population(country="US", region="Guam")
print(f'Population of country="US", region="Guam": {inhabitants} people')
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 1000 rows
pd.set_option("max_rows", 1000)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/US-Guam.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
United States: Guam* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/US-Guam.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="US", region="Guam", weeks=5);
overview(country="US", region="Guam");
compare_plot(country="US", region="Guam");
# load the data
cases, deaths = get_country_data("US", "Guam")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/US-Guam.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
United States: Guam* Homepage of project: https://oscovida.github.io* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/US-Guam.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="US", region="Guam");
# load the data
cases, deaths, region_label = get_country_data("US", "Guam")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/US-Guam.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____ |
examples/Features.ipynb | ###Markdown
ColorLine
###Code
import os
import folium
import numpy as np
x = np.linspace(0, 2*np.pi, 300)
lats = 20 * np.cos(x)
lons = 20 * np.sin(x)
colors = np.sin(5 * x)
from folium import features
m = folium.Map([0, 0], zoom_start=3)
color_line = features.ColorLine(
positions=list(zip(lats, lons)),
colors=colors,
colormap=['y', 'orange', 'r'],
weight=10)
color_line.add_to(m)
m.save(os.path.join('results', 'Features_0.html'))
m
###Output
_____no_output_____
###Markdown
Marker, Icon, Popup
###Code
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
pp = folium.Popup('hello')
ic = features.Icon(color='red')
mk.add_child(ic)
mk.add_child(pp)
m.add_child(mk)
m.save(os.path.join('results', 'Features_1.html'))
m
###Output
_____no_output_____
###Markdown
Vega popup
###Code
import json
import vincent
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=100, width=200)
data = json.loads(scatter.to_json())
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
p = folium.Popup('Hello')
v = features.Vega(data, width='100%', height='100%')
mk.add_child(p)
p.add_child(v)
m.add_child(mk)
m.save(os.path.join('results', 'Features_2.html'))
m
###Output
_____no_output_____
###Markdown
Vega-Lite popup
###Code
from altair import Chart, load_dataset
# load built-in dataset as a pandas DataFrame
cars = load_dataset('cars')
scatter = Chart(cars).mark_circle().encode(
x='Horsepower',
y='Miles_per_Gallon',
color='Origin',
)
vega = folium.features.VegaLite(
scatter,
width='100%',
height='100%',
)
m = folium.Map(location=[-27.5717, -48.6256])
marker = folium.features.Marker([-27.57, -48.62])
popup = folium.Popup()
vega.add_to(popup)
popup.add_to(marker)
marker.add_to(m)
m.save(os.path.join('results', 'Features_3.html'))
m
###Output
_____no_output_____
###Markdown
Vega div and a Map
###Code
import branca
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(
multi_iter2,
iter_idx='x',
height=250,
width=420
)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
# Create two maps.
m = folium.Map(location=[0, 0],
tiles='stamenwatercolor',
zoom_start=1,
position='absolute',
left='0%',
width='50%',
height='50%')
m2 = folium.Map(location=[46, 3],
tiles='OpenStreetMap',
zoom_start=4,
position='absolute',
left='50%',
width='50%',
height='50%',
top='50%')
# Create two Vega.
v = features.Vega(
data,
position='absolute',
left='50%',
width='50%',
height='50%'
)
v2 = features.Vega(
data,
position='absolute',
left='0%',
width='50%',
height='50%',
top='50%'
)
f.add_child(m)
f.add_child(m2)
f.add_child(v)
f.add_child(v2)
f.save(os.path.join('results', 'Features_4.html'))
f
###Output
_____no_output_____
###Markdown
Vega-Lite div and a Map
###Code
import pandas as pd
N = 100
multi_iter2 = pd.DataFrame({
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
})
scatter = Chart(multi_iter2).mark_circle().encode(x='x', y='y')
scatter.width = 420
scatter.height = 250
data = json.loads(scatter.to_json())
f = branca.element.Figure()
# Create two maps.
m = folium.Map(
location=[0, 0],
tiles='stamenwatercolor',
zoom_start=1,
position='absolute',
left='0%',
width='50%',
height='50%'
)
m2 = folium.Map(
location=[46, 3],
tiles='OpenStreetMap',
zoom_start=4,
position='absolute',
left='50%',
width='50%',
height='50%',
top='50%')
# Create two Vega.
v = features.VegaLite(
data,
position='absolute',
left='50%',
width='50%',
height='50%'
)
v2 = features.VegaLite(
data,
position='absolute',
left='0%',
width='50%',
height='50%',
top='50%'
)
f.add_child(m)
f.add_child(m2)
f.add_child(v)
f.add_child(v2)
f.save(os.path.join('results', 'Features_5.html'))
f
###Output
_____no_output_____
###Markdown
GeoJson
###Code
N = 1000
lons = +5 - np.random.normal(size=N)
lats = 48 - np.random.normal(size=N)
data = {
'type': 'FeatureCollection',
'features': [
{
'type': 'Feature',
'geometry': {
'type': 'MultiPoint',
'coordinates': [[lon, lat] for (lat, lon) in zip(lats, lons)],
},
'properties': {'prop0': 'value0'}
},
],
}
m = folium.Map([48, 5], zoom_start=6)
m.add_child(features.GeoJson(data))
m.save(os.path.join('results', 'Features_6.html'))
m
###Output
_____no_output_____
###Markdown
Div
###Code
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(
multi_iter2,
iter_idx='x',
height=250,
width=420
)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
d1 = f.add_subplot(1, 2, 1)
d2 = f.add_subplot(1, 2, 2)
d1.add_child(folium.Map([0, 0], tiles='stamenwatercolor', zoom_start=1))
d2.add_child(folium.Map([46, 3], tiles='OpenStreetMap', zoom_start=5))
f.save(os.path.join('results', 'Features_7.html'))
f
###Output
_____no_output_____
###Markdown
LayerControl
###Code
m = folium.Map(tiles=None)
folium.raster_layers.TileLayer('OpenStreetMap').add_to(m)
folium.raster_layers.TileLayer('stamentoner').add_to(m)
folium.LayerControl().add_to(m)
m.save(os.path.join('results', 'Features_8.html'))
m
###Output
_____no_output_____
###Markdown
ColorLine
###Code
import os
import folium
import numpy as np
x = np.linspace(0, 2*np.pi, 300)
lats = 20 * np.cos(x)
lons = 20 * np.sin(x)
colors = np.sin(5 * x)
from folium import features
m = folium.Map([0, 0], zoom_start=3)
color_line = features.ColorLine(
positions=list(zip(lats, lons)),
colors=colors,
colormap=['y', 'orange', 'r'],
weight=10)
color_line.add_to(m)
m.save(os.path.join('results', 'Features_0.html'))
m
###Output
_____no_output_____
###Markdown
Marker, Icon, Popup
###Code
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
pp = folium.Popup('hello')
ic = features.Icon(color='red')
mk.add_child(ic)
mk.add_child(pp)
m.add_child(mk)
m.save(os.path.join('results', 'Features_1.html'))
m
###Output
_____no_output_____
###Markdown
Vega popup
###Code
import json
import vincent
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=100, width=200)
data = json.loads(scatter.to_json())
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
p = folium.Popup('Hello')
v = features.Vega(data, width='100%', height='100%')
mk.add_child(p)
p.add_child(v)
m.add_child(mk)
m.save(os.path.join('results', 'Features_2.html'))
m
###Output
_____no_output_____
###Markdown
Vega-Lite popup
###Code
try:
from altair import Chart, load_dataset
except TypeError:
print('Try updating your python version to 3.5.3 or above')
# load built-in dataset as a pandas DataFrame
cars = load_dataset('cars')
scatter = Chart(cars).mark_circle().encode(
x='Horsepower',
y='Miles_per_Gallon',
color='Origin',
)
vega = folium.features.VegaLite(
scatter,
width='100%',
height='100%',
)
m = folium.Map(location=[-27.5717, -48.6256])
marker = folium.features.Marker([-27.57, -48.62])
popup = folium.Popup()
vega.add_to(popup)
popup.add_to(marker)
marker.add_to(m)
m.save(os.path.join('results', 'Features_3.html'))
m
###Output
_____no_output_____
###Markdown
Vega div and a Map
###Code
import branca
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(
multi_iter2,
iter_idx='x',
height=250,
width=420
)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
# Create two maps.
m = folium.Map(location=[0, 0],
tiles='stamenwatercolor',
zoom_start=1,
position='absolute',
left='0%',
width='50%',
height='50%')
m2 = folium.Map(location=[46, 3],
tiles='OpenStreetMap',
zoom_start=4,
position='absolute',
left='50%',
width='50%',
height='50%',
top='50%')
# Create two Vega.
v = features.Vega(
data,
position='absolute',
left='50%',
width='50%',
height='50%'
)
v2 = features.Vega(
data,
position='absolute',
left='0%',
width='50%',
height='50%',
top='50%'
)
f.add_child(m)
f.add_child(m2)
f.add_child(v)
f.add_child(v2)
f.save(os.path.join('results', 'Features_4.html'))
f
###Output
_____no_output_____
###Markdown
Vega-Lite div and a Map
###Code
import pandas as pd
N = 100
multi_iter2 = pd.DataFrame({
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
})
scatter = Chart(multi_iter2).mark_circle().encode(x='x', y='y')
scatter.width = 420
scatter.height = 250
data = json.loads(scatter.to_json())
f = branca.element.Figure()
# Create two maps.
m = folium.Map(
location=[0, 0],
tiles='stamenwatercolor',
zoom_start=1,
position='absolute',
left='0%',
width='50%',
height='50%'
)
m2 = folium.Map(
location=[46, 3],
tiles='OpenStreetMap',
zoom_start=4,
position='absolute',
left='50%',
width='50%',
height='50%',
top='50%')
# Create two Vega.
v = features.VegaLite(
data,
position='absolute',
left='50%',
width='50%',
height='50%'
)
v2 = features.VegaLite(
data,
position='absolute',
left='0%',
width='50%',
height='50%',
top='50%'
)
f.add_child(m)
f.add_child(m2)
f.add_child(v)
f.add_child(v2)
f.save(os.path.join('results', 'Features_5.html'))
f
###Output
_____no_output_____
###Markdown
GeoJson
###Code
N = 1000
lons = +5 - np.random.normal(size=N)
lats = 48 - np.random.normal(size=N)
data = {
'type': 'FeatureCollection',
'features': [
{
'type': 'Feature',
'geometry': {
'type': 'MultiPoint',
'coordinates': [[lon, lat] for (lat, lon) in zip(lats, lons)],
},
'properties': {'prop0': 'value0'}
},
],
}
m = folium.Map([48, 5], zoom_start=6)
m.add_child(features.GeoJson(data))
m.save(os.path.join('results', 'Features_6.html'))
m
###Output
_____no_output_____
###Markdown
Div
###Code
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(
multi_iter2,
iter_idx='x',
height=250,
width=420
)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
d1 = f.add_subplot(1, 2, 1)
d2 = f.add_subplot(1, 2, 2)
d1.add_child(folium.Map([0, 0], tiles='stamenwatercolor', zoom_start=1))
d2.add_child(folium.Map([46, 3], tiles='OpenStreetMap', zoom_start=5))
f.save(os.path.join('results', 'Features_7.html'))
f
###Output
_____no_output_____
###Markdown
LayerControl
###Code
m = folium.Map(tiles=None)
folium.raster_layers.TileLayer('OpenStreetMap').add_to(m)
folium.raster_layers.TileLayer('stamentoner').add_to(m)
folium.LayerControl().add_to(m)
m.save(os.path.join('results', 'Features_8.html'))
m
###Output
_____no_output_____
###Markdown
ColorLine
###Code
import os
import folium
import numpy as np
x = np.linspace(0, 2*np.pi, 300)
lats = 20 * np.cos(x)
lons = 20 * np.sin(x)
colors = np.sin(5 * x)
from folium import features
m = folium.Map([0, 0], zoom_start=3)
color_line = features.ColorLine(
positions=list(zip(lats, lons)),
colors=colors,
colormap=['y', 'orange', 'r'],
weight=10)
color_line.add_to(m)
m.save(os.path.join('results', 'Features_0.html'))
m
###Output
_____no_output_____
###Markdown
Marker, Icon, Popup
###Code
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
pp = folium.Popup('hello')
ic = features.Icon(color='red')
mk.add_child(ic)
mk.add_child(pp)
m.add_child(mk)
m.save(os.path.join('results', 'Features_1.html'))
m
###Output
_____no_output_____
###Markdown
Vega popup
###Code
import json
import vincent
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=100, width=200)
data = json.loads(scatter.to_json())
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
p = folium.Popup('Hello')
v = features.Vega(data, width='100%', height='100%')
mk.add_child(p)
p.add_child(v)
m.add_child(mk)
m.save(os.path.join('results', 'Features_2.html'))
m
###Output
_____no_output_____
###Markdown
Vega-Lite popup
###Code
from altair import Chart, load_dataset
# load built-in dataset as a pandas DataFrame
cars = load_dataset('cars')
scatter = Chart(cars).mark_circle().encode(
x='Horsepower',
y='Miles_per_Gallon',
color='Origin',
)
vega = folium.features.VegaLite(
scatter,
width='100%',
height='100%',
)
m = folium.Map(location=[-27.5717, -48.6256])
marker = folium.features.Marker([-27.57, -48.62])
popup = folium.Popup()
vega.add_to(popup)
popup.add_to(marker)
marker.add_to(m)
m.save(os.path.join('results', 'Features_3.html'))
m
###Output
_____no_output_____
###Markdown
Vega div and a Map
###Code
import branca
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(
multi_iter2,
iter_idx='x',
height=250,
width=420
)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
# Create two maps.
m = folium.Map(location=[0, 0],
tiles='stamenwatercolor',
zoom_start=1,
position='absolute',
left='0%',
width='50%',
height='50%')
m2 = folium.Map(location=[46, 3],
tiles='OpenStreetMap',
zoom_start=4,
position='absolute',
left='50%',
width='50%',
height='50%',
top='50%')
# Create two Vega.
v = features.Vega(
data,
position='absolute',
left='50%',
width='50%',
height='50%'
)
v2 = features.Vega(
data,
position='absolute',
left='0%',
width='50%',
height='50%',
top='50%'
)
f.add_child(m)
f.add_child(m2)
f.add_child(v)
f.add_child(v2)
f.save(os.path.join('results', 'Features_4.html'))
f
###Output
_____no_output_____
###Markdown
Vega-Lite div and a Map
###Code
import pandas as pd
N = 100
multi_iter2 = pd.DataFrame({
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
})
scatter = Chart(multi_iter2).mark_circle().encode(x='x', y='y')
scatter.width = 420
scatter.height = 250
data = json.loads(scatter.to_json())
f = branca.element.Figure()
# Create two maps.
m = folium.Map(
location=[0, 0],
tiles='stamenwatercolor',
zoom_start=1,
position='absolute',
left='0%',
width='50%',
height='50%'
)
m2 = folium.Map(
location=[46, 3],
tiles='OpenStreetMap',
zoom_start=4,
position='absolute',
left='50%',
width='50%',
height='50%',
top='50%')
# Create two Vega.
v = features.VegaLite(
data,
position='absolute',
left='50%',
width='50%',
height='50%'
)
v2 = features.VegaLite(
data,
position='absolute',
left='0%',
width='50%',
height='50%',
top='50%'
)
f.add_child(m)
f.add_child(m2)
f.add_child(v)
f.add_child(v2)
f.save(os.path.join('results', 'Features_5.html'))
f
###Output
_____no_output_____
###Markdown
GeoJson
###Code
N = 1000
lons = +5 - np.random.normal(size=N)
lats = 48 - np.random.normal(size=N)
data = {
'type': 'FeatureCollection',
'features': [
{
'type': 'Feature',
'geometry': {
'type': 'MultiPoint',
'coordinates': [[lon, lat] for (lat, lon) in zip(lats, lons)],
},
'properties': {'prop0': 'value0'}
},
],
}
m = folium.Map([48, 5], zoom_start=6)
m.add_child(features.GeoJson(data))
m.save(os.path.join('results', 'Features_6.html'))
m
###Output
_____no_output_____
###Markdown
Div
###Code
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(
multi_iter2,
iter_idx='x',
height=250,
width=420
)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
d1 = f.add_subplot(1, 2, 1)
d2 = f.add_subplot(1, 2, 2)
d1.add_child(folium.Map([0, 0], tiles='stamenwatercolor', zoom_start=1))
d2.add_child(folium.Map([46, 3], tiles='OpenStreetMap', zoom_start=5))
f.save(os.path.join('results', 'Features_7.html'))
f
###Output
_____no_output_____
###Markdown
LayerControl
###Code
m = folium.Map(tiles=None)
folium.raster_layers.TileLayer('OpenStreetMap').add_to(m)
folium.raster_layers.TileLayer('stamentoner').add_to(m)
folium.LayerControl().add_to(m)
m.save(os.path.join('results', 'Features_8.html'))
m
###Output
_____no_output_____
###Markdown
ColorLine
###Code
import os
import folium
import numpy as np
x = np.linspace(0, 2*np.pi, 300)
lats = 20 * np.cos(x)
lons = 20 * np.sin(x)
colors = np.sin(5 * x)
# FIXME: This example is broken!!!
from folium import features
m = folium.Map([0, 0], zoom_start=3)
color_line = features.ColorLine(
list(zip(lats, lons)),
colors=colors,
colormap=['y', 'orange', 'r'],
weight=10)
color_line.add_to(m)
m.save(os.path.join('results', 'Features_0.html'))
m
###Output
_____no_output_____
###Markdown
PNG representation
###Code
from IPython.display import Image
Image(m._to_png())
###Output
_____no_output_____
###Markdown
WMS
###Code
m = folium.Map([40, -100], zoom_start=4)
w = features.WmsTileLayer(
"http://mesonet.agron.iastate.edu/cgi-bin/wms/nexrad/n0r.cgi",
name='test',
format='image/png',
layers='nexrad-n0r-900913',
attr=u"Weather data © 2012 IEM Nexrad",
transparent=True
)
w.add_to(m)
m.save(os.path.join('results', 'Features_1.html'))
m
###Output
_____no_output_____
###Markdown
Marker, Icon, Popup
###Code
import branca
f = branca.element.Figure(figsize=(8, 8))
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
pp = features.Popup('hello')
ic = features.Icon(color='red')
f.add_child(m)
mk.add_child(ic)
mk.add_child(pp)
m.add_child(mk)
f.save(os.path.join('results', 'Features_2.html'))
f
###Output
_____no_output_____
###Markdown
RegularPolygonMarker
###Code
f = branca.element.Figure()
m = folium.Map([0, 0], zoom_start=1)
mk = features.RegularPolygonMarker([0, 0])
mk2 = features.RegularPolygonMarker([0, 45])
f.add_child(m)
m.add_child(mk)
m.add_child(mk2)
f.save(os.path.join('results', 'Features_3.html'))
f
###Output
_____no_output_____
###Markdown
Vega stuff
###Code
# FIXME: This example is broken!!!
import json
import vincent
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=100, width=200)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
p = features.Popup('Hello')
v = features.Vega(data, width='100%', height='100%')
f.add_child(m)
mk.add_child(p)
p.add_child(v)
m.add_child(mk)
f.save(os.path.join('results', 'Features_4.html'))
f
###Output
_____no_output_____
###Markdown
Vega div
###Code
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=400, width=600)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
v = features.Vega(data, height=40, width=600)
f.add_child(v)
f.save(os.path.join('results', 'Features_5.html'))
f
###Output
_____no_output_____
###Markdown
A div and a Map
###Code
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=250, width=420)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
# Create two maps.
m = folium.Map(location=[0, 0],
tiles='stamenwatercolor',
zoom_start=1,
position='absolute',
left='0%',
width='50%',
height='50%')
m2 = folium.Map(location=[46, 3],
tiles='OpenStreetMap',
zoom_start=4,
position='absolute',
left='50%',
width='50%',
height='50%',
top='50%')
# Create two Vega.
v = features.Vega(data, position='absolute', left='50%', width='50%', height='50%')
v2 = features.Vega(data, position='absolute', left='0%', width='50%', height='50%', top='50%')
f.add_child(m)
f.add_child(m2)
f.add_child(v)
f.add_child(v2)
f.save(os.path.join('results', 'Features_6.html'))
f
###Output
_____no_output_____
###Markdown
GeoJson
###Code
N = 1000
lons = +5 - np.random.normal(size=N)
lats = 48 - np.random.normal(size=N)
data = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"geometry": {
"type": "MultiPoint",
"coordinates": [[lon, lat] for (lat, lon) in zip(lats, lons)],
},
"properties": {"prop0": "value0"}
},
],
}
m = folium.Map([48, 5], zoom_start=6)
m.add_child(features.GeoJson(data))
m.save(os.path.join('results', 'Features_7.html'))
m
###Output
_____no_output_____
###Markdown
Marker Cluster
###Code
N = 100
data = np.array(
[
np.random.uniform(low=35, high=60, size=N), # Random latitudes in Europe.
np.random.uniform(low=-12, high=30, size=N), # Random longitudes in Europe.
range(N), # Popups text will be simple numbers .
]
).T
m = folium.Map([45, 3], zoom_start=4)
mc = features.MarkerCluster()
for k in range(N):
mk = features.Marker([data[k][0], data[k][1]])
p = features.Popup(str(data[k][2]))
mk.add_child(p)
mc.add_child(mk)
m.add_child(mc)
m.save(os.path.join('results', 'Features_8.html'))
m
###Output
_____no_output_____
###Markdown
Div
###Code
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=250, width=420)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
d1 = f.add_subplot(1, 2, 1)
d2 = f.add_subplot(1, 2, 2)
d1.add_child(folium.Map([0, 0], tiles='stamenwatercolor', zoom_start=1))
d2.add_child(folium.Map([46, 3], tiles='OpenStreetMap', zoom_start=5))
f.save(os.path.join('results', 'Features_9.html'))
f
###Output
_____no_output_____
###Markdown
LayerControl
###Code
m = folium.Map(tiles=None)
folium.TileLayer('OpenStreetMap').add_to(m)
folium.TileLayer('stamentoner').add_to(m)
folium.LayerControl().add_to(m)
m.save(os.path.join('results', 'Features_10.html'))
m
###Output
_____no_output_____
###Markdown
Line example
###Code
# Coordinates are 15 points on the great circle from Boston to
# San Francisco.
# Reference: http://williams.best.vwh.net/avform.htm#Intermediate
coordinates = [
[42.3581, -71.0636],
[42.82995815, -74.78991444],
[43.17929819, -78.56603306],
[43.40320216, -82.37774519],
[43.49975489, -86.20965845],
[43.46811941, -90.04569087],
[43.30857071, -93.86961818],
[43.02248456, -97.66563267],
[42.61228259, -101.41886832],
[42.08133868, -105.11585198],
[41.4338549, -108.74485069],
[40.67471747, -112.29609954],
[39.8093434, -115.76190821],
[38.84352776, -119.13665678],
[37.7833, -122.4167]]
# Create the map and add the line
m = folium.Map(location=[41.9, -97.3], zoom_start=4, png_enabled=True)
folium.PolyLine(coordinates, color='#FF0000', weight=5).add_to(m)
m.save(os.path.join('results', 'Features_11.html'))
m
###Output
_____no_output_____
###Markdown
ColorLine
###Code
import os
import folium
import numpy as np
x = np.linspace(0, 2*np.pi, 300)
lats = 20 * np.cos(x)
lons = 20 * np.sin(x)
colors = np.sin(5 * x)
from folium import features
m = folium.Map([0, 0], zoom_start=3)
color_line = features.ColorLine(
positions=list(zip(lats, lons)),
colors=colors,
colormap=['y', 'orange', 'r'],
weight=10)
color_line.add_to(m)
m.save(os.path.join('results', 'Features_0.html'))
m
###Output
_____no_output_____
###Markdown
Marker, Icon, Popup
###Code
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
pp = folium.Popup('hello')
ic = features.Icon(color='red')
mk.add_child(ic)
mk.add_child(pp)
m.add_child(mk)
m.save(os.path.join('results', 'Features_1.html'))
m
###Output
_____no_output_____
###Markdown
Vega popup
###Code
import json
import vincent
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=100, width=200)
data = json.loads(scatter.to_json())
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
p = folium.Popup('Hello')
v = features.Vega(data, width='100%', height='100%')
mk.add_child(p)
p.add_child(v)
m.add_child(mk)
m.save(os.path.join('results', 'Features_2.html'))
m
###Output
_____no_output_____
###Markdown
Vega-Lite popup
###Code
from altair import Chart, load_dataset
# load built-in dataset as a pandas DataFrame
cars = load_dataset('cars')
scatter = Chart(cars).mark_circle().encode(
x='Horsepower',
y='Miles_per_Gallon',
color='Origin',
)
vega = folium.features.VegaLite(
scatter,
width='100%',
height='100%',
)
m = folium.Map(location=[-27.5717, -48.6256])
marker = folium.features.Marker([-27.57, -48.62])
popup = folium.Popup()
vega.add_to(popup)
popup.add_to(marker)
marker.add_to(m)
m.save(os.path.join('results', 'Features_3.html'))
m
###Output
_____no_output_____
###Markdown
Vega div and a Map
###Code
import branca
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(
multi_iter2,
iter_idx='x',
height=250,
width=420
)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
# Create two maps.
m = folium.Map(location=[0, 0],
tiles='stamenwatercolor',
zoom_start=1,
position='absolute',
left='0%',
width='50%',
height='50%')
m2 = folium.Map(location=[46, 3],
tiles='OpenStreetMap',
zoom_start=4,
position='absolute',
left='50%',
width='50%',
height='50%',
top='50%')
# Create two Vega.
v = features.Vega(
data,
position='absolute',
left='50%',
width='50%',
height='50%'
)
v2 = features.Vega(
data,
position='absolute',
left='0%',
width='50%',
height='50%',
top='50%'
)
f.add_child(m)
f.add_child(m2)
f.add_child(v)
f.add_child(v2)
f.save(os.path.join('results', 'Features_4.html'))
f
###Output
_____no_output_____
###Markdown
Vega-Lite div and a Map
###Code
import pandas as pd
N = 100
multi_iter2 = pd.DataFrame({
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
})
scatter = Chart(multi_iter2).mark_circle().encode(x='x', y='y')
scatter.width = 420
scatter.height = 250
data = json.loads(scatter.to_json())
f = branca.element.Figure()
# Create two maps.
m = folium.Map(
location=[0, 0],
tiles='stamenwatercolor',
zoom_start=1,
position='absolute',
left='0%',
width='50%',
height='50%'
)
m2 = folium.Map(
location=[46, 3],
tiles='OpenStreetMap',
zoom_start=4,
position='absolute',
left='50%',
width='50%',
height='50%',
top='50%')
# Create two Vega.
v = features.VegaLite(
data,
position='absolute',
left='50%',
width='50%',
height='50%'
)
v2 = features.VegaLite(
data,
position='absolute',
left='0%',
width='50%',
height='50%',
top='50%'
)
f.add_child(m)
f.add_child(m2)
f.add_child(v)
f.add_child(v2)
f.save(os.path.join('results', 'Features_5.html'))
f
###Output
_____no_output_____
###Markdown
GeoJson
###Code
N = 1000
lons = +5 - np.random.normal(size=N)
lats = 48 - np.random.normal(size=N)
data = {
'type': 'FeatureCollection',
'features': [
{
'type': 'Feature',
'geometry': {
'type': 'MultiPoint',
'coordinates': [[lon, lat] for (lat, lon) in zip(lats, lons)],
},
'properties': {'prop0': 'value0'}
},
],
}
m = folium.Map([48, 5], zoom_start=6)
m.add_child(features.GeoJson(data))
m.save(os.path.join('results', 'Features_6.html'))
m
###Output
_____no_output_____
###Markdown
Div
###Code
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(
multi_iter2,
iter_idx='x',
height=250,
width=420
)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
d1 = f.add_subplot(1, 2, 1)
d2 = f.add_subplot(1, 2, 2)
d1.add_child(folium.Map([0, 0], tiles='stamenwatercolor', zoom_start=1))
d2.add_child(folium.Map([46, 3], tiles='OpenStreetMap', zoom_start=5))
f.save(os.path.join('results', 'Features_7.html'))
f
###Output
_____no_output_____
###Markdown
LayerControl
###Code
m = folium.Map(tiles=None)
folium.raster_layers.TileLayer('OpenStreetMap').add_to(m)
folium.raster_layers.TileLayer('stamentoner').add_to(m)
folium.LayerControl().add_to(m)
m.save(os.path.join('results', 'Features_8.html'))
m
###Output
_____no_output_____
###Markdown
WMS
###Code
import branca
import folium
from folium import features

m = folium.Map([40,-100], zoom_start=4)
w = features.WmsTileLayer("http://mesonet.agron.iastate.edu/cgi-bin/wms/nexrad/n0r.cgi",
name='test',
format='image/png',
layers='nexrad-n0r-900913',
attr=u"Weather data © 2012 IEM Nexrad",
transparent=True)
w.add_to(m)
m
###Output
_____no_output_____
###Markdown
Marker, Icon, Popup
###Code
f = branca.element.Figure(figsize=(8,8))
m = folium.Map([0,0], zoom_start=1)
mk = features.Marker([0,0])
pp = features.Popup("hello")
ic = features.Icon(color='red')
f.add_children(m)
mk.add_children(ic)
mk.add_children(pp)
m.add_children(mk)
f
###Output
_____no_output_____
###Markdown
RegularPolygonMarker
###Code
f = branca.element.Figure()
m = folium.Map([0,0], zoom_start=1)
mk = features.RegularPolygonMarker([0,0])
mk2 = features.RegularPolygonMarker([0,45])
f.add_children(m)
m.add_children(mk)
m.add_children(mk2)
f
###Output
_____no_output_____
###Markdown
Vega stuff
###Code
import vincent, json
import numpy as np
N=100
multi_iter2 = {'x' : np.random.uniform(size=(N,)),
'y' : np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=100, width=200)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
m = folium.Map([0,0], zoom_start=1)
mk = features.Marker([0,0])
p = features.Popup("Hello")
v = features.Vega(data, width="100%", height="100%")
f.add_children(m)
mk.add_children(p)
p.add_children(v)
m.add_children(mk)
f
###Output
_____no_output_____
###Markdown
Vega div
###Code
import vincent, json
import numpy as np
N=100
multi_iter2 = {'x' : np.random.uniform(size=(N,)),
'y' : np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=400, width=600)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
v = features.Vega(data, height=40, width=600)
f.add_children(v)
f
###Output
_____no_output_____
###Markdown
A div and a Map
###Code
import vincent, json
import numpy as np
N=100
multi_iter2 = {'x' : np.random.uniform(size=(N,)),
'y' : np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=250, width=420)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
# Create two maps
m = folium.Map([0,0], tiles='stamenwatercolor',
zoom_start=1, position='absolute', left="0%", width="50%", height="50%")
m2 = folium.Map([46,3], tiles='mapquestopen',
zoom_start=4, position='absolute', left="50%", width="50%", height='50%',top='50%')
# Create two Vega
v = features.Vega(data, position='absolute', left="50%", width="50%", height="50%")
v2 = features.Vega(data, position='absolute', left="0%", width="50%", height="50%", top='50%')
f.add_children(m)
f.add_children(m2)
f.add_children(v)
f.add_children(v2)
f
###Output
_____no_output_____
###Markdown
GeoJson
###Code
N=1000
lons = +5 - np.random.normal(size=N)
lats = 48 - np.random.normal(size=N)
data = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"geometry": {
"type": "MultiPoint",
"coordinates": [[lon, lat] for (lat,lon) in zip(lats,lons)],
},
"properties": {"prop0": "value0"}
},
],
}
m = folium.Map([48.,5.], zoom_start=6)
m.add_children(features.GeoJson(data))
m
###Output
_____no_output_____
###Markdown
Marker Cluster
###Code
N = 100
data = np.array([
np.random.uniform(low=35,high=60, size=N), # random latitudes in Europe
np.random.uniform(low=-12,high=30, size=N), # random longitudes in Europe
range(N), # popups are simple numbers
]).T
m = folium.Map([45.,3.], zoom_start=4)
mc = features.MarkerCluster()
for i in range(N):
mk = features.Marker([data[i][0],data[i][1]])
p = features.Popup(str(data[i][2]))
mk.add_children(p)
mc.add_children(mk)
m.add_children(mc)
m
###Output
_____no_output_____
###Markdown
Div
###Code
import vincent, json
import numpy as np
N=100
multi_iter2 = {'x' : np.random.uniform(size=(N,)),
'y' : np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=250, width=420)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
d1 = f.add_subplot(1,2,1)
d2 = f.add_subplot(1,2,2)
d1.add_children(folium.Map([0,0], tiles='stamenwatercolor', zoom_start=1))
d2.add_children(folium.Map([46,3], tiles='mapquestopen', zoom_start=5))
f
###Output
_____no_output_____
###Markdown
LayerControl
###Code
m = folium.Map(tiles=None)
folium.TileLayer('mapquestopen').add_to(m)
folium.TileLayer('stamentoner').add_to(m)
folium.LayerControl().add_to(m)
m
###Output
_____no_output_____
###Markdown
ScrollZoomToggler
###Code
import folium.plugins
m = folium.Map()
folium.plugins.ScrollZoomToggler().add_to(m)
m
###Output
_____no_output_____
###Markdown
Terminator
###Code
m = folium.Map()
folium.plugins.Terminator().add_to(m)
m
###Output
_____no_output_____
###Markdown
ColorLine
###Code
import os
import folium
import numpy as np
x = np.linspace(0, 2*np.pi, 300)
lats = 20 * np.cos(x)
lons = 20 * np.sin(x)
colors = np.sin(5 * x)
from folium import features
m = folium.Map([0, 0], zoom_start=3)
color_line = features.ColorLine(
positions=list(zip(lats, lons)),
colors=colors,
colormap=['y', 'orange', 'r'],
weight=10)
color_line.add_to(m)
m.save(os.path.join('results', 'Features_0.html'))
m
###Output
_____no_output_____
###Markdown
Marker, Icon, Popup
###Code
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
pp = folium.Popup('hello')
ic = features.Icon(color='red')
mk.add_child(ic)
mk.add_child(pp)
m.add_child(mk)
m.save(os.path.join('results', 'Features_1.html'))
m
###Output
_____no_output_____
###Markdown
Vega popup
###Code
import json
import vincent
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=100, width=200)
data = json.loads(scatter.to_json())
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
p = folium.Popup('Hello')
v = features.Vega(data, width='100%', height='100%')
mk.add_child(p)
p.add_child(v)
m.add_child(mk)
m.save(os.path.join('results', 'Features_2.html'))
m
###Output
_____no_output_____
###Markdown
Vega-Lite popup
###Code
from altair import Chart, load_dataset
# load built-in dataset as a pandas DataFrame
cars = load_dataset('cars')
scatter = Chart(cars).mark_circle().encode(
x='Horsepower',
y='Miles_per_Gallon',
color='Origin',
)
vega = folium.features.VegaLite(
scatter,
width='100%',
height='100%',
)
m = folium.Map(location=[-27.5717, -48.6256])
marker = folium.features.Marker([-27.57, -48.62])
popup = folium.Popup()
vega.add_to(popup)
popup.add_to(marker)
marker.add_to(m)
m.save(os.path.join('results', 'Features_3.html'))
m
###Output
_____no_output_____
###Markdown
Vega div and a Map
###Code
import branca
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(
multi_iter2,
iter_idx='x',
height=250,
width=420
)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
# Create two maps.
m = folium.Map(location=[0, 0],
tiles='stamenwatercolor',
zoom_start=1,
position='absolute',
left='0%',
width='50%',
height='50%')
m2 = folium.Map(location=[46, 3],
tiles='OpenStreetMap',
zoom_start=4,
position='absolute',
left='50%',
width='50%',
height='50%',
top='50%')
# Create two Vega.
v = features.Vega(
data,
position='absolute',
left='50%',
width='50%',
height='50%'
)
v2 = features.Vega(
data,
position='absolute',
left='0%',
width='50%',
height='50%',
top='50%'
)
f.add_child(m)
f.add_child(m2)
f.add_child(v)
f.add_child(v2)
f.save(os.path.join('results', 'Features_4.html'))
f
###Output
_____no_output_____
###Markdown
Vega-Lite div and a Map
###Code
import pandas as pd
N = 100
multi_iter2 = pd.DataFrame({
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
})
scatter = Chart(multi_iter2).mark_circle().encode(x='x', y='y')
scatter.width = 420
scatter.height = 250
data = json.loads(scatter.to_json())
f = branca.element.Figure()
# Create two maps.
m = folium.Map(
location=[0, 0],
tiles='stamenwatercolor',
zoom_start=1,
position='absolute',
left='0%',
width='50%',
height='50%'
)
m2 = folium.Map(
location=[46, 3],
tiles='OpenStreetMap',
zoom_start=4,
position='absolute',
left='50%',
width='50%',
height='50%',
top='50%')
# Create two Vega.
v = features.VegaLite(
data,
position='absolute',
left='50%',
width='50%',
height='50%'
)
v2 = features.VegaLite(
data,
position='absolute',
left='0%',
width='50%',
height='50%',
top='50%'
)
f.add_child(m)
f.add_child(m2)
f.add_child(v)
f.add_child(v2)
f.save(os.path.join('results', 'Features_5.html'))
f
###Output
_____no_output_____
###Markdown
GeoJson
###Code
N = 1000
lons = +5 - np.random.normal(size=N)
lats = 48 - np.random.normal(size=N)
data = {
'type': 'FeatureCollection',
'features': [
{
'type': 'Feature',
'geometry': {
'type': 'MultiPoint',
'coordinates': [[lon, lat] for (lat, lon) in zip(lats, lons)],
},
'properties': {'prop0': 'value0'}
},
],
}
m = folium.Map([48, 5], zoom_start=6)
m.add_child(features.GeoJson(data))
m.save(os.path.join('results', 'Features_6.html'))
m
###Output
_____no_output_____
###Markdown
Div
###Code
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(
multi_iter2,
iter_idx='x',
height=250,
width=420
)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
d1 = f.add_subplot(1, 2, 1)
d2 = f.add_subplot(1, 2, 2)
d1.add_child(folium.Map([0, 0], tiles='stamenwatercolor', zoom_start=1))
d2.add_child(folium.Map([46, 3], tiles='OpenStreetMap', zoom_start=5))
f.save(os.path.join('results', 'Features_7.html'))
f
###Output
_____no_output_____
###Markdown
LayerControl
###Code
m = folium.Map(tiles=None)
folium.raster_layers.TileLayer('OpenStreetMap').add_to(m)
folium.raster_layers.TileLayer('stamentoner').add_to(m)
folium.LayerControl().add_to(m)
m.save(os.path.join('results', 'Features_8.html'))
m
###Output
_____no_output_____
###Markdown
ColorLine
###Code
import os
import folium
import numpy as np
x = np.linspace(0, 2*np.pi, 300)
lats = 20 * np.cos(x)
lons = 20 * np.sin(x)
colors = np.sin(5 * x)
# FIXME: This example is broken!!!
from folium import features
m = folium.Map([0, 0], zoom_start=3)
color_line = features.ColorLine(
list(zip(lats, lons)),
colors=colors,
colormap=['y', 'orange', 'r'],
weight=10)
color_line.add_to(m)
m.save(os.path.join('results', 'Features_0.html'))
m
###Output
_____no_output_____
###Markdown
WMS
###Code
m = folium.Map([40, -100], zoom_start=4)
w = features.WmsTileLayer(
"http://mesonet.agron.iastate.edu/cgi-bin/wms/nexrad/n0r.cgi",
name='test',
format='image/png',
layers='nexrad-n0r-900913',
attr=u"Weather data © 2012 IEM Nexrad",
transparent=True
)
w.add_to(m)
m.save(os.path.join('results', 'Features_1.html'))
m
###Output
_____no_output_____
###Markdown
Marker, Icon, Popup
###Code
import branca
f = branca.element.Figure(figsize=(8, 8))
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
pp = features.Popup('hello')
ic = features.Icon(color='red')
f.add_child(m)
mk.add_child(ic)
mk.add_child(pp)
m.add_child(mk)
f.save(os.path.join('results', 'Features_2.html'))
f
###Output
_____no_output_____
###Markdown
RegularPolygonMarker
###Code
f = branca.element.Figure()
m = folium.Map([0, 0], zoom_start=1)
mk = features.RegularPolygonMarker([0, 0])
mk2 = features.RegularPolygonMarker([0, 45])
f.add_child(m)
m.add_child(mk)
m.add_child(mk2)
f.save(os.path.join('results', 'Features_3.html'))
f
###Output
_____no_output_____
###Markdown
Vega stuff
###Code
# FIXME: This example is broken!!!
import json
import vincent
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=100, width=200)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
p = features.Popup('Hello')
v = features.Vega(data, width='100%', height='100%')
f.add_child(m)
mk.add_child(p)
p.add_child(v)
m.add_child(mk)
f.save(os.path.join('results', 'Features_4.html'))
f
###Output
_____no_output_____
###Markdown
Vega div
###Code
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=400, width=600)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
v = features.Vega(data, height=40, width=600)
f.add_child(v)
f.save(os.path.join('results', 'Features_5.html'))
f
###Output
_____no_output_____
###Markdown
A div and a Map
###Code
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=250, width=420)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
# Create two maps.
m = folium.Map(location=[0, 0],
tiles='stamenwatercolor',
zoom_start=1,
position='absolute',
left='0%',
width='50%',
height='50%')
m2 = folium.Map(location=[46, 3],
tiles='OpenStreetMap',
zoom_start=4,
position='absolute',
left='50%',
width='50%',
height='50%',
top='50%')
# Create two Vega.
v = features.Vega(data, position='absolute', left='50%', width='50%', height='50%')
v2 = features.Vega(data, position='absolute', left='0%', width='50%', height='50%', top='50%')
f.add_child(m)
f.add_child(m2)
f.add_child(v)
f.add_child(v2)
f.save(os.path.join('results', 'Features_6.html'))
f
###Output
_____no_output_____
###Markdown
GeoJson
###Code
N = 1000
lons = +5 - np.random.normal(size=N)
lats = 48 - np.random.normal(size=N)
data = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"geometry": {
"type": "MultiPoint",
"coordinates": [[lon, lat] for (lat, lon) in zip(lats, lons)],
},
"properties": {"prop0": "value0"}
},
],
}
m = folium.Map([48, 5], zoom_start=6)
m.add_child(features.GeoJson(data))
m.save(os.path.join('results', 'Features_7.html'))
m
###Output
_____no_output_____
###Markdown
Marker Cluster
###Code
N = 100
data = np.array(
[
np.random.uniform(low=35, high=60, size=N), # Random latitudes in Europe.
np.random.uniform(low=-12, high=30, size=N), # Random longitudes in Europe.
range(N), # Popups text will be simple numbers .
]
).T
m = folium.Map([45, 3], zoom_start=4)
mc = features.MarkerCluster()
for k in range(N):
mk = features.Marker([data[k][0], data[k][1]])
p = features.Popup(str(data[k][2]))
mk.add_child(p)
mc.add_child(mk)
m.add_child(mc)
m.save(os.path.join('results', 'Features_8.html'))
m
###Output
_____no_output_____
###Markdown
Div
###Code
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=250, width=420)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
d1 = f.add_subplot(1, 2, 1)
d2 = f.add_subplot(1, 2, 2)
d1.add_child(folium.Map([0, 0], tiles='stamenwatercolor', zoom_start=1))
d2.add_child(folium.Map([46, 3], tiles='OpenStreetMap', zoom_start=5))
f.save(os.path.join('results', 'Features_9.html'))
f
###Output
_____no_output_____
###Markdown
LayerControl
###Code
m = folium.Map(tiles=None)
folium.TileLayer('OpenStreetMap').add_to(m)
folium.TileLayer('stamentoner').add_to(m)
folium.LayerControl().add_to(m)
m.save(os.path.join('results', 'Features_10.html'))
m
###Output
_____no_output_____
###Markdown
Line example
###Code
# Coordinates are 15 points on the great circle from Boston to
# San Francisco.
# Reference: http://williams.best.vwh.net/avform.htm#Intermediate
coordinates = [
[42.3581, -71.0636],
[42.82995815, -74.78991444],
[43.17929819, -78.56603306],
[43.40320216, -82.37774519],
[43.49975489, -86.20965845],
[43.46811941, -90.04569087],
[43.30857071, -93.86961818],
[43.02248456, -97.66563267],
[42.61228259, -101.41886832],
[42.08133868, -105.11585198],
[41.4338549, -108.74485069],
[40.67471747, -112.29609954],
[39.8093434, -115.76190821],
[38.84352776, -119.13665678],
[37.7833, -122.4167]]
# Create the map and add the line
m = folium.Map(location=[41.9, -97.3], zoom_start=4)
folium.PolyLine(coordinates, color='#FF0000', weight=5).add_to(m)
m.save(os.path.join('results', 'Features_11.html'))
m
###Output
_____no_output_____
###Markdown
WMS
###Code
from folium import features

m = features.Map([40,-100], zoom_start=4)
w = features.WmsTileLayer("http://mesonet.agron.iastate.edu/cgi-bin/wms/nexrad/n0r.cgi",
name='test',
format='image/png',
layers='nexrad-n0r-900913',
attribution=u"Weather data © 2012 IEM Nexrad",
transparent=True)
w.add_to(m)
m
###Output
_____no_output_____
###Markdown
Marker, Icon, Popup
###Code
f = features.Figure(figsize=(8,8))
m = features.Map([0,0], zoom_start=1)
mk = features.Marker([0,0])
pp = features.Popup("hello")
ic = features.Icon(color='red')
f.add_children(m)
mk.add_children(ic)
mk.add_children(pp)
m.add_children(mk)
f
###Output
_____no_output_____
###Markdown
RegularPolygonMarker
###Code
f = features.Figure()
m = features.Map([0,0], zoom_start=1)
mk = features.RegularPolygonMarker([0,0])
mk2 = features.RegularPolygonMarker([0,45])
f.add_children(m)
m.add_children(mk)
m.add_children(mk2)
f
###Output
_____no_output_____
###Markdown
Vega stuff
###Code
import vincent, json
import numpy as np
N=100
multi_iter2 = {'x' : np.random.uniform(size=(N,)),
'y' : np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=100, width=200)
data = json.loads(scatter.to_json())
f = features.Figure()
m = features.Map([0,0], zoom_start=1)
mk = features.Marker([0,0])
p = features.Popup("Hello")
v = features.Vega(data, width="100%", height="100%")
f.add_children(m)
mk.add_children(p)
p.add_children(v)
m.add_children(mk)
f
###Output
_____no_output_____
###Markdown
Vega div
###Code
import vincent, json
import numpy as np
N=100
multi_iter2 = {'x' : np.random.uniform(size=(N,)),
'y' : np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=400, width=600)
data = json.loads(scatter.to_json())
f = features.Figure()
v = features.Vega(data, height=40, width=600)
f.add_children(v)
f
###Output
_____no_output_____
###Markdown
A div and a Map
###Code
import vincent, json
import numpy as np
N=100
multi_iter2 = {'x' : np.random.uniform(size=(N,)),
'y' : np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=250, width=420)
data = json.loads(scatter.to_json())
f = features.Figure()
# Create two maps
m = features.Map([0,0], tiles='stamenwatercolor',
zoom_start=1, position='absolute', left="0%", width="50%", height="50%")
m2 = features.Map([46,3], tiles='mapquestopen',
zoom_start=4, position='absolute', left="50%", width="50%", height='50%',top='50%')
# Create two Vega
v = features.Vega(data, position='absolute', left="50%", width="50%", height="50%")
v2 = features.Vega(data, position='absolute', left="0%", width="50%", height="50%", top='50%')
f.add_children(m)
f.add_children(m2)
f.add_children(v)
f.add_children(v2)
f
###Output
_____no_output_____
###Markdown
GeoJson
###Code
N=1000
lons = +5 - np.random.normal(size=N)
lats = 48 - np.random.normal(size=N)
data = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"geometry": {
"type": "MultiPoint",
"coordinates": [[lon, lat] for (lat,lon) in zip(lats,lons)],
},
"properties": {"prop0": "value0"}
},
],
}
m = features.Map([48.,5.], zoom_start=6)
m.add_children(features.GeoJson(data))
m
###Output
_____no_output_____
###Markdown
Marker Cluster
###Code
N = 100
data = np.array([
np.random.uniform(low=35,high=60, size=N), # random latitudes in Europe
np.random.uniform(low=-12,high=30, size=N), # random longitudes in Europe
range(N), # popups are simple numbers
]).T
m = features.Map([45.,3.], zoom_start=4)
mc = features.MarkerCluster()
for i in range(N):
mk = features.Marker([data[i][0],data[i][1]])
p = features.Popup(str(data[i][2]))
mk.add_children(p)
mc.add_children(mk)
m.add_children(mc)
m
###Output
_____no_output_____
###Markdown
Div
###Code
import vincent, json
import numpy as np
N=100
multi_iter2 = {'x' : np.random.uniform(size=(N,)),
'y' : np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=250, width=420)
data = json.loads(scatter.to_json())
f = features.Figure()
d1 = f.add_subplot(1,2,1)
d2 = f.add_subplot(1,2,2)
d1.add_children(features.Map([0,0], tiles='stamenwatercolor', zoom_start=1))
d2.add_children(features.Map([46,3], tiles='mapquestopen', zoom_start=5))
f
###Output
_____no_output_____
###Markdown
Choropleth
###Code
import pandas as pd
import json
geojson_data = json.load(open('us-states.json'))
sd = pd.read_csv('US_Unemployment_Oct2012.csv').set_index('State')['Unemployment'].to_dict()
f = features.Figure()
m = features.Map([43,-100], zoom_start=4)
g = features.GeoJson(geojson_data)
f.add_children(m)
m.add_children(g)
g.add_children(features.GeoJsonStyle([3.0, 7.0, 8.0, 9.0, 9.0], 'YlGn', sd, key_on='feature.id'))
f
###Output
_____no_output_____
###Markdown
Boat marker
###Code
from folium.features import *
###Output
_____no_output_____
###Markdown
ColorLine
###Code
import numpy as np
x = np.linspace(0, 2 * np.pi, 300)
lats = 20 * np.cos(x)
lons = 20 * np.sin(x)
colors = np.sin(5 * x)
import folium
from folium import features
m = folium.Map([0, 0], zoom_start=3)
color_line = features.ColorLine(
positions=list(zip(lats, lons)),
colors=colors,
colormap=["y", "orange", "r"],
weight=10,
)
color_line.add_to(m)
m
###Output
_____no_output_____
###Markdown
Marker, Icon, Popup
###Code
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
pp = folium.Popup("hello")
ic = features.Icon(color="red")
mk.add_child(ic)
mk.add_child(pp)
m.add_child(mk)
m
###Output
_____no_output_____
###Markdown
Vega popup
###Code
import json
import vincent
N = 100
multi_iter2 = {
"x": np.random.uniform(size=(N,)),
"y": np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx="x", height=100, width=200)
data = json.loads(scatter.to_json())
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
p = folium.Popup("Hello")
v = features.Vega(data, width="100%", height="100%")
mk.add_child(p)
p.add_child(v)
m.add_child(mk)
m
###Output
_____no_output_____
###Markdown
Vega-Lite popup
###Code
from altair import Chart, load_dataset
# load built-in dataset as a pandas DataFrame
cars = load_dataset("cars")
scatter = (
Chart(cars)
.mark_circle()
.encode(
x="Horsepower",
y="Miles_per_Gallon",
color="Origin",
)
)
vega = folium.features.VegaLite(
scatter,
width="100%",
height="100%",
)
m = folium.Map(location=[-27.5717, -48.6256])
marker = folium.features.Marker([-27.57, -48.62])
popup = folium.Popup()
vega.add_to(popup)
popup.add_to(marker)
marker.add_to(m)
m
###Output
/home/filipe/miniconda3/envs/FOLIUM/lib/python3.9/site-packages/altair/utils/deprecation.py:65: AltairDeprecationWarning: load_dataset is deprecated. Use the vega_datasets package instead.
warnings.warn(message, AltairDeprecationWarning)
###Markdown
Vega div and a Map
###Code
import branca
N = 100
multi_iter2 = {
"x": np.random.uniform(size=(N,)),
"y": np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx="x", height=250, width=420)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
# Create two maps.
m = folium.Map(
location=[0, 0],
tiles="stamenwatercolor",
zoom_start=1,
position="absolute",
left="0%",
width="50%",
height="50%",
)
m2 = folium.Map(
location=[46, 3],
tiles="OpenStreetMap",
zoom_start=4,
position="absolute",
left="50%",
width="50%",
height="50%",
top="50%",
)
# Create two Vega.
v = features.Vega(data, position="absolute", left="50%", width="50%", height="50%")
v2 = features.Vega(
data, position="absolute", left="0%", width="50%", height="50%", top="50%"
)
f.add_child(m)
f.add_child(m2)
f.add_child(v)
f.add_child(v2)
f
###Output
_____no_output_____
###Markdown
Vega-Lite div and a Map
###Code
import pandas as pd
N = 100
multi_iter2 = pd.DataFrame(
{
"x": np.random.uniform(size=(N,)),
"y": np.random.uniform(size=(N,)),
}
)
scatter = Chart(multi_iter2).mark_circle().encode(x="x", y="y")
scatter.width = 420
scatter.height = 250
data = json.loads(scatter.to_json())
f = branca.element.Figure()
# Create two maps.
m = folium.Map(
location=[0, 0],
tiles="stamenwatercolor",
zoom_start=1,
position="absolute",
left="0%",
width="50%",
height="50%",
)
m2 = folium.Map(
location=[46, 3],
tiles="OpenStreetMap",
zoom_start=4,
position="absolute",
left="50%",
width="50%",
height="50%",
top="50%",
)
# Create two Vega.
v = features.VegaLite(data, position="absolute", left="50%", width="50%", height="50%")
v2 = features.VegaLite(
data, position="absolute", left="0%", width="50%", height="50%", top="50%"
)
f.add_child(m)
f.add_child(m2)
f.add_child(v)
f.add_child(v2)
f
###Output
_____no_output_____
###Markdown
GeoJson
###Code
N = 1000
lons = +5 - np.random.normal(size=N)
lats = 48 - np.random.normal(size=N)
data = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"geometry": {
"type": "MultiPoint",
"coordinates": [[lon, lat] for (lat, lon) in zip(lats, lons)],
},
"properties": {"prop0": "value0"},
},
],
}
m = folium.Map([48, 5], zoom_start=6)
m.add_child(features.GeoJson(data))
m
###Output
_____no_output_____
###Markdown
Div
###Code
N = 100
multi_iter2 = {
"x": np.random.uniform(size=(N,)),
"y": np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx="x", height=250, width=420)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
d1 = f.add_subplot(1, 2, 1)
d2 = f.add_subplot(1, 2, 2)
d1.add_child(folium.Map([0, 0], tiles="stamenwatercolor", zoom_start=1))
d2.add_child(folium.Map([46, 3], tiles="OpenStreetMap", zoom_start=5))
f
###Output
_____no_output_____
###Markdown
LayerControl
###Code
m = folium.Map(tiles=None)
folium.raster_layers.TileLayer("OpenStreetMap").add_to(m)
folium.raster_layers.TileLayer("stamentoner").add_to(m)
folium.LayerControl().add_to(m)
m
###Output
_____no_output_____ |
closed-files-2020-analysis.ipynb | ###Markdown
Examining all files in RecordSearch with the access status of 'Closed'(Harvested on 1 January 2021)This notebook attempts some large-scale analysis of files from the National Archives of Australia's RecordSearch database that have the access status of 'closed'. For a previous attempt at this, see [Closed Access](http://closedaccess.herokuapp.com/). For more background, see my [*Inside Story* article](https://insidestory.org.au/withheld-pending-advice/) from 2018.See [this notebook](harvest_closed_files.ipynb) for the code used to harvest the data and create the CSV dataset.
###Code
import pandas as pd
import altair as alt
import datetime
###Output
_____no_output_____
###Markdown
The harvested data has been saved as a CSV file. First we'll open it up using Pandas.
###Code
df2020 = pd.read_csv('data/closed-20210101.csv', parse_dates=['contents_start_date', 'contents_end_date', 'access_decision_date'], keep_default_na=False)
df2020.head()
###Output
_____no_output_____
###Markdown
How many closed files are there?
###Code
df2020.shape[0]
###Output
_____no_output_____
###Markdown
What series do the 'closed' files come from?First let's see how many different series are represented in the data set.
###Code
df2020['series'].unique().shape[0]
###Output
_____no_output_____
###Markdown
Now let's look at the 25 most common series.
###Code
df2020['series'].value_counts()[:25]
###Output
_____no_output_____
###Markdown
Series [A1838](http://recordsearch.naa.gov.au/scripts/AutoSearch.asp?Number=A1838) is familiar to anyone who's looked into the NAA's access examination process. It's a general correspondence series from DFAT, and requests for access tend to take a **long** time to be processed. Series [K60](http://recordsearch.naa.gov.au/scripts/AutoSearch.asp?Number=K60) contains repatriation files from the Department of Veterans' Affairs, so these will often have been withheld on privacy grounds. We'll see more about both of these below.Let's chart the results.
###Code
# This creates a compact dataset to feed to Altair for charting
# We could make Altair do all the work, but that would embed a lot of data in the notebook.
# Save the series counts to a new dataframe.
series_counts = df2020['series'].value_counts().to_frame().reset_index()
series_counts.columns = ['series', 'count']
# Chart the results, sorted by number of files
alt.Chart(series_counts[:50]).mark_bar().encode(
x=alt.X('series', sort='-y'),
y=alt.Y('count', title='number of files'),
tooltip=['series', 'count']
)
###Output
_____no_output_____
###Markdown
This is only the top 50 of 686 series, so quite obviously there's a very long tail of series that have a small number of closed files. What reasons are given for closing the files?[Section 33](http://www8.austlii.edu.au/cgi-bin/viewdoc/au/legis/cth/consol_act/aa198398/s33.html) of the Archives Act defines a number of 'exemptions' – these are reasons why files should not be opened to public access. These reasons are recorded in RecordSearch, so we can explore why files have been closed. It's a little complicated, however, because multiple exemptions can be applied to a single file. The CSV data file records multiple reasons as a pipe-separated string. First we can look at the most common combinations of reasons.
###Code
df2020['reasons'].value_counts()[:25]
###Output
_____no_output_____
###Markdown
It's probably more useful, however, to look at the frequency of individual reasons. So we'll split the pip-separated string and create a row for each file/reason combination.
###Code
df2020_reasons = df2020.copy()
# Split the reasons field on pipe symbol |. This turns the string into a list of values.
df2020_reasons['reason'] = df2020_reasons['reasons'].str.split('|')
# Now we'll explode the list into separate rows.
df2020_reasons = df2020_reasons.explode('reason')
###Output
_____no_output_____
###Markdown
Now we can look at the frequency of individual reasons. Note, of course, that the sum of the reasons will be greater than the number of files, as some files have multiple exemptions applied to them.
###Code
df2020_reasons['reason'].value_counts()
###Output
_____no_output_____
###Markdown
The reasons starting with '33' are clauses in [section 33](http://www8.austlii.edu.au/cgi-bin/viewdoc/au/legis/cth/consol_act/aa198398/s33.html) of the Archives Act. You can look up the Act to find out more about them, or [look at this list](https://www.naa.gov.au/help-your-research/using-collection/access-records-under-archives-act/why-we-refuse-access) on the NAA website. Some of the reasons, such as 'Parliament Class A', refer to particular types of records that are not subject to the same public access arrangements as other government records. Others, such as 'MAKE YOUR SELECTION', seem to be products of the data entry system!
Looking at the other most common reasons:
* 33(1)(g) relates to privacy
* 'Withheld pending adv' is applied to files that are undergoing access examination and have been referred to the relevant government agency for advice on whether they should be released to the public. This is not a final determination – these files may or may not end up being closed. But, as any researcher knows, this process can be *very* slow.
* 33(1)(a) is the national security catch-all
You might also notice that there's a blank line in the list above. This is because some closed files have no reasons recorded in RecordSearch. We can check this.
###Code
missing_reasons = df2020.loc[df2020['reasons'] == '']
missing_reasons.shape[0]
###Output
_____no_output_____
###Markdown
There are 46 closed files with no reason recorded. Here's a sample.
###Code
missing_reasons.head()
###Output
_____no_output_____
###Markdown
Let's change the missing reasons to 'None recorded' to make it easier to see what's going on.
###Code
df2020_reasons['reason'].replace('', 'None recorded', inplace=True)
###Output
_____no_output_____
###Markdown
Let's chart the frequency of the different reasons.
###Code
# Once again we'll create a compact dataset for charting
reason_counts = df2020_reasons['reason'].value_counts().to_frame().reset_index()
reason_counts.columns = ['reason', 'count']
# Make the Chart
alt.Chart(reason_counts).mark_bar().encode(
x='reason',
y=alt.Y('count', title='number of files'),
tooltip=['reason', 'count']
)
###Output
_____no_output_____
###Markdown
Connecting reasons and series
It would be interesting to bring together the analyses above and see how reasons are distributed across series. First we need to reshape our dataset to show combinations of series and reasons.
###Code
# Group files by series and reason, then count the number of combinations
series_reasons_counts = df2020_reasons.groupby(by=['series', 'reason']).size().reset_index()
# Rename columns
series_reasons_counts.columns = ['series', 'reason', 'count']
###Output
_____no_output_____
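###Markdown
Before charting, it can help to peek at the reshaped data. For example, the ten most common series/reason combinations (a small sketch using the dataframe just created):
###Code
# The ten most frequent series/reason combinations
series_reasons_counts.sort_values('count', ascending=False).head(10)
###Output
_____no_output_____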
###Markdown
Now we can chart the results. Once again we'll show the number of files in the 50 most common series, but this time we'll highlight the reasons using color.
###Code
alt.Chart(series_reasons_counts).transform_aggregate(
count='sum(count)',
groupby=['series', 'reason']
# Sort by number of files
).transform_window(
rank='rank(count)',
sort=[alt.SortField('count', order='descending')]
# Get the top 50
).transform_filter(
    alt.datum.rank <= 50
).mark_bar().encode(
x=alt.X('series', sort='-y'),
y=alt.Y('sum(count)', title='number of files'),
color='reason',
tooltip=['series', 'reason', 'count']
)
###Output
_____no_output_____
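###Markdown
A proportional view can also be useful. This is a sketch (not in the original analysis) that normalises the reasons within each of the ten largest series, using the df2020_reasons dataframe built earlier.
###Code
# Proportion of each reason within the ten largest series
top_series = df2020['series'].value_counts()[:10].index
pd.crosstab(df2020_reasons['series'], df2020_reasons['reason'], normalize='index').loc[top_series].round(2)
###Output
_____no_output_____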
###Markdown
Now we can see that the distribution of reasons varies considerably across series.
How old are these files?
You would think that the sensitivity of material in closed files diminishes over time. However, there's no automatic re-assessment or time limit on 'closed' files. They stay closed until someone asks for them to be re-examined. That means that some of these files can be quite old. How old? We can use the contents end date to explore this.
###Code
# Normalise contents end values as end of year
df2020['contents_end_year'] = df2020['contents_end_date'].apply(lambda x: datetime.datetime(x.year, 12, 31))
date_counts = df2020['contents_end_year'].value_counts().to_frame().reset_index()
date_counts.columns = ['end_date', 'count']
alt.Chart(date_counts).mark_bar().encode(
x='year(end_date):T',
y='count'
).properties(width=700)
# Zoom in on files with contents end dates after 1890
alt.Chart(date_counts.loc[date_counts['end_date'] > '1890-12-31']).mark_bar().encode(
    x='year(end_date):T',
    y='count',
    tooltip='year(end_date)'
).properties(width=700)
# Calculate the age of each file in years, based on its contents end date
df2020['years_old'] = df2020['contents_end_year'].apply(lambda x: round((datetime.datetime.now() - x).days / 365))
# Count the number of files of each age
age_counts = df2020['years_old'].value_counts().to_frame().reset_index()
age_counts.columns = ['age', 'count']
# Chart the age distribution (excluding a handful of very old outliers)
alt.Chart(age_counts.loc[age_counts['age'] < 130]).mark_bar().encode(
    x=alt.X('age:Q', title='age in years'),
    y=alt.Y('count', title='number of files'),
    tooltip=['age', 'count']
).properties(width=700)
# Summary statistics for file age
df2020['years_old'].describe()
# Age statistics for files closed under section 33(1)(a)
df2020.loc[df2020['reasons'].str.contains('33(1)(a)', regex=False)]['years_old'].describe()
# Quartiles of file age (in years)
df2020['years_old'].quantile([0.25, 0.5, 0.75]).to_list()
###Output
_____no_output_____
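###Markdown
As a related sketch (again, not in the original notebook), we can ask how the age of files varies with the exemption applied. The years_old column was added to df2020 after df2020_reasons was created, so it's recalculated here on the exploded dataframe.
###Code
# Recalculate file age on the exploded dataframe, then find the median age for each reason
df2020_reasons['years_old'] = df2020_reasons['contents_end_date'].apply(lambda x: round((datetime.datetime.now() - x).days / 365))
df2020_reasons.groupby('reason')['years_old'].median().sort_values(ascending=False)
###Output
_____no_output_____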
###Markdown
Dates of decisions
###Code
df2020['year'] = df2020['access_decision_date'].dt.year
year_counts = df2020['year'].value_counts().to_frame().reset_index()
year_counts.columns = ['year', 'count']
alt.Chart(year_counts).mark_bar().encode(
x='year:O',
y='count'
)
###Output
_____no_output_____
###Markdown
33(1)(a)
###Code
df331a = df2020.loc[df2020['reasons'].str.contains('33(1)(a)', regex=False)]
df331a.head()
series_counts_331a = df331a['series'].value_counts().to_frame().reset_index()
series_counts_331a.columns = ['series', 'count']
alt.Chart(series_counts_331a[:50]).mark_bar().encode(
x=alt.X('series', sort='-y'),
y=alt.Y('count', title='number of files'),
tooltip=['series', 'count']
)
###Output
_____no_output_____
###Markdown
Withheld pending advice
###Code
dfwh = df2020.loc[df2020['reasons'].str.contains('Withheld pending adv', regex=False)]
dfwh.head()
series_counts_wh = dfwh['series'].value_counts().to_frame().reset_index()
series_counts_wh.columns = ['series', 'count']
alt.Chart(series_counts_wh[:50]).mark_bar().encode(
x=alt.X('series', sort='-y'),
y='count'
)
###Output
_____no_output_____ |
Python/04 Matplotlib/03.05 Scatter Plot.ipynb | ###Markdown
Scatter Plot
Scatter plots help to check the correlation between two or more variables, and can also highlight outlier values.
**Basic syntax:** plt.scatter(x_values, y_values)
###Code
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('ggplot')
%matplotlib inline
###Output
_____no_output_____
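###Markdown
Before using real data, here is a minimal sketch of the basic syntax, with made-up values just to illustrate: each (x, y) pair becomes one point.
###Code
# A minimal scatter plot from two lists of values
x = [1, 2, 3, 4, 5]
y = [2, 4, 1, 8, 7]
plt.scatter(x, y)
plt.show()
###Output
_____no_output_____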
###Markdown
In the following example, we'll build a scatter plot using data from the Heritage Foundation, correlating the economic freedom index with GDP per capita (purchasing power parity, 2018 data). Source: https://www.heritage.org/index/explore?view=by-variables
###Code
# Using data for countries in the Americas
scoreLibEconAmer = [52.3, 63.3, 57, 57.1, 44.1, 51.4, 77.7, 75.2, 68.9, 65.6, 31.9,
64.5, 61.6, 48.5, 63.2, 63.4, 58.7, 55.8, 60.6, 69.1, 64.8, 58.9,
67, 62.1, 68.7, 67.6, 67.7, 48.1, 57.7, 75.7, 69.2, 25.2]
gdpPerCapitaAmer = [20047, 24555, 17100, 8220, 7218, 15242, 46437, 24113, 14130,
16436, 12390, 11375, 16049, 11109, 8909, 7899, 7873, 1784, 5271,
8976, 18938, 5452, 23024, 9396, 12903, 11783, 11271, 13988, 31870,
57436, 21527, 13761]
# Building the plot
plt.scatter(gdpPerCapitaAmer, scoreLibEconAmer)
# Improving the presentation
plt.xlabel('GDP per capita U$') # x-axis label
plt.ylabel('Economic Freedom Score') # y-axis label
plt.title('Freedom x GDP') # Chart title
plt.show()
###Output
_____no_output_____
###Markdown
We can plot more than one dataset, comparing the behaviour of each group.
###Code
# Plotting alongside the Asia-Pacific countries
scoreLibEconAsiaPacif = [51.3, 80.9, 64.3, 55.1, 61.8, 53.9, 58.7, 57.8, 62, 90.2,
54.5, 64.2, 72.3, 69.1, 50.8, 73.8, 62.8, 53.6, 70.9, 74.5,
51.1, 52.3, 55.7, 54.1, 84.2, 54.4, 55.7, 65, 61.5, 88.8, 57.5,
57.8, 76.6, 58.3, 67.1, 48.1, 63.1, 47.1, 51.5, 69.5, 53.1, 64.2]
gdpPerCapitaAsiaPacif = [1919, 48899, 17439, 3891, 8227, 5832, 3737, 15399, 9268, 58322,
6616, 11720, 41275, 25145, 1823, 37740, 3521, 5710, 95151, 27267,
15553, 3234, 12275, 2479, 37294, 5106, 3541, 7728, 5553, 87855, 1973,
12262, 48095, 3008, 16888, 4187, 5386, 17485, 6563, 2631, 6429, 76884]
# Building the plot
plt.scatter(gdpPerCapitaAmer, scoreLibEconAmer)
plt.scatter(gdpPerCapitaAsiaPacif, scoreLibEconAsiaPacif)
# Improving the presentation
plt.xlabel('GDP per capita U$') # x-axis label
plt.ylabel('Economic Freedom Score') # y-axis label
plt.title('Freedom x GDP') # Chart title
plt.show()
###Output
_____no_output_____
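###Markdown
When plotting more than one group it helps to label them. A small sketch using the same data as above (the group names are just illustrative) adds a legend:
###Code
# Label each group and add a legend
plt.scatter(gdpPerCapitaAmer, scoreLibEconAmer, label='Americas')
plt.scatter(gdpPerCapitaAsiaPacif, scoreLibEconAsiaPacif, label='Asia-Pacific')
plt.legend()
plt.xlabel('GDP per capita U$')
plt.ylabel('Economic Freedom Score')
plt.show()
###Output
_____no_output_____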
###Markdown
We can see above that the higher the economic freedom index, the higher the GDP per capita (purchasing power parity).
**Working with 4 dimensions**
Besides correlating 2 dimensions on the x and y axes, a scatter plot lets us work with 2 more, namely the size and the colour of the bubbles. To do this, just fill in the s (size) and c (colors) parameters.
**Syntax** plt.scatter(x_values, y_values, s=sizes, c=colors)
Note: the s (size) parameter is given in points squared
###Code
# Creating the data
pais = ['Argentina', 'Bolivia', 'Canada', 'Chile', 'Colombia', 'Cuba', 'Dominican Republic', 'Ecuador',
'El Salvador', 'Guatemala', 'Haiti', 'Honduras', 'Nicaragua', 'Paraguay', 'Peru', 'Venezuela']
gdpPerCapita = [20, 7, 46, 24, 14, 12, 16, 11, 9, 8, 2, 5, 5, 9, 13, 14]
scoreLibEcon = [52.3, 44.1, 77.7, 75.2, 68.9, 31.9, 61.6, 48.5,
63.2, 63.4, 55.8, 60.6, 58.9, 62.1, 68.7, 25.2]
populacao = [44, 11, 36, 18, 49, 12, 10, 17, 6, 17, 11, 8, 6, 7, 31, 31]
cores = np.arange(len(pais)) # One colour value per country
populacao2 = np.array([populacao])*10 # Scale up the population values to use as marker sizes
# Creating the plot
plt.scatter(gdpPerCapita, scoreLibEcon, alpha=0.5, s=populacao2, c=cores, cmap='viridis')
# *alpha: colour transparency; cmap: the colour map to use
# Improving the presentation
plt.xlabel('GDP per capita U$ (thousands)') # x-axis label
plt.ylabel('Economic Freedom Score') # y-axis label
plt.title('Freedom x GDP') # Chart title
plt.show()
###Output
_____no_output_____ |
examples/00_quick_start/lstur_MIND.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. LSTUR: Neural News Recommendation with Long- and Short-term User RepresentationsLSTUR \[1\] is a news recommendation approach capturing users' both long-term preferences and short-term interests. The core of LSTUR is a news encoder and a user encoder. In the news encoder, we learn representations of news from their titles. In user encoder, we propose to learn long-termuser representations from the embeddings of their IDs. In addition, we propose to learn short-term user representations from their recently browsed news via GRU network. Besides, we propose two methods to combinelong-term and short-term user representations. The first one is using the long-term user representation to initialize the hidden state of the GRU network in short-term user representation. The second one is concatenating bothlong- and short-term user representations as a unified user vector. Properties of LSTUR:- LSTUR captures users' both long-term and short term preference.- It uses embeddings of users' IDs to learn long-term user representations.- It uses users' recently browsed news via GRU network to learn short-term user representations. Data format:For quicker training and evaluaiton, we sample MINDdemo dataset of 5k users from [MIND small dataset](https://msnews.github.io/). The MINDdemo dataset has the same file format as MINDsmall and MINDlarge. If you want to try experiments on MINDsmall and MINDlarge, please change the dowload source. Select the MIND_type parameter from ['large', 'small', 'demo'] to choose dataset. **MINDdemo_train** is used for training, and **MINDdemo_dev** is used for evaluation. Training data and evaluation data are composed of a news file and a behaviors file. You can find more detailed data description in [MIND repo](https://github.com/msnews/msnews.github.io/blob/master/assets/doc/introduction.md) news dataThis file contains news information including newsid, category, subcatgory, news title, news abstarct, news url and entities in news title, entities in news abstarct.One simple example: `N46466 lifestyle lifestyleroyals The Brands Queen Elizabeth, Prince Charles, and Prince Philip Swear By Shop the notebooks, jackets, and more that the royals can't live without. https://www.msn.com/en-us/lifestyle/lifestyleroyals/the-brands-queen-elizabeth,-prince-charles,-and-prince-philip-swear-by/ss-AAGH0ET?ocid=chopendata [{"Label": "Prince Philip, Duke of Edinburgh", "Type": "P", "WikidataId": "Q80976", "Confidence": 1.0, "OccurrenceOffsets": [48], "SurfaceForms": ["Prince Philip"]}, {"Label": "Charles, Prince of Wales", "Type": "P", "WikidataId": "Q43274", "Confidence": 1.0, "OccurrenceOffsets": [28], "SurfaceForms": ["Prince Charles"]}, {"Label": "Elizabeth II", "Type": "P", "WikidataId": "Q9682", "Confidence": 0.97, "OccurrenceOffsets": [11], "SurfaceForms": ["Queen Elizabeth"]}] []`In general, each line in data file represents information of one piece of news: `[News ID] [Category] [Subcategory] [News Title] [News Abstrct] [News Url] [Entities in News Title] [Entities in News Abstract] ...`We generate a word_dict file to tranform words in news title to word indexes, and a embedding matrix is initted from pretrained glove embeddings. 
behaviors dataOne simple example: `1 U82271 11/11/2019 3:28:58 PM N3130 N11621 N12917 N4574 N12140 N9748 N13390-0 N7180-0 N20785-0 N6937-0 N15776-0 N25810-0 N20820-0 N6885-0 N27294-0 N18835-0 N16945-0 N7410-0 N23967-0 N22679-0 N20532-0 N26651-0 N22078-0 N4098-0 N16473-0 N13841-0 N15660-0 N25787-0 N2315-0 N1615-0 N9087-0 N23880-0 N3600-0 N24479-0 N22882-0 N26308-0 N13594-0 N2220-0 N28356-0 N17083-0 N21415-0 N18671-0 N9440-0 N17759-0 N10861-0 N21830-0 N8064-0 N5675-0 N15037-0 N26154-0 N15368-1 N481-0 N3256-0 N20663-0 N23940-0 N7654-0 N10729-0 N7090-0 N23596-0 N15901-0 N16348-0 N13645-0 N8124-0 N20094-0 N27774-0 N23011-0 N14832-0 N15971-0 N27729-0 N2167-0 N11186-0 N18390-0 N21328-0 N10992-0 N20122-0 N1958-0 N2004-0 N26156-0 N17632-0 N26146-0 N17322-0 N18403-0 N17397-0 N18215-0 N14475-0 N9781-0 N17958-0 N3370-0 N1127-0 N15525-0 N12657-0 N10537-0 N18224-0`In general, each line in data file represents one instance of an impression. The format is like: `[Impression ID] [User ID] [Impression Time] [User Click History] [Impression News]`User Click History is the user historical clicked news before Impression Time. Impression News is the displayed news in an impression, which format is:`[News ID 1]-[label1] ... [News ID n]-[labeln]`Label represents whether the news is clicked by the user. All information of news in User Click History and Impression News can be found in news data file. Global settings and imports
###Code
import sys
sys.path.append("../../")
import os
import numpy as np
import zipfile
from tqdm import tqdm
import papermill as pm
from tempfile import TemporaryDirectory
import tensorflow as tf
from reco_utils.recommender.deeprec.deeprec_utils import download_deeprec_resources
from reco_utils.recommender.newsrec.newsrec_utils import prepare_hparams
from reco_utils.recommender.newsrec.models.lstur import LSTURModel
from reco_utils.recommender.newsrec.io.mind_iterator import MINDIterator
from reco_utils.recommender.newsrec.newsrec_utils import get_mind_data_set
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
###Output
System version: 3.6.11 | packaged by conda-forge | (default, Aug 5 2020, 20:09:42)
[GCC 7.5.0]
Tensorflow version: 1.15.2
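###Markdown
As a quick illustration of the behaviors file format described above (a sketch only; the MINDIterator used below handles the real parsing), a single tab-separated line can be split into its five fields, and the impression news into (news id, label) pairs. The sample line here is hypothetical.
###Code
# Hypothetical behaviors.tsv line: impression id, user id, time, click history, impressions
sample = "1\tU82271\t11/11/2019 3:28:58 PM\tN3130 N11621 N12917\tN13390-0 N15368-1 N7180-0"
impr_id, user_id, impr_time, history, impressions = sample.split("\t")
clicks = [tuple(item.split("-")) for item in impressions.split()]
print(user_id, history.split(), clicks)
###Output
_____no_output_____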
###Markdown
Prepare Parameters
###Code
epochs = 5
seed = 40
batch_size = 32
# Options: demo, small, large
MIND_type = 'demo'
###Output
_____no_output_____
###Markdown
Download and load data
###Code
tmpdir = TemporaryDirectory()
data_path = tmpdir.name
train_news_file = os.path.join(data_path, 'train', r'news.tsv')
train_behaviors_file = os.path.join(data_path, 'train', r'behaviors.tsv')
valid_news_file = os.path.join(data_path, 'valid', r'news.tsv')
valid_behaviors_file = os.path.join(data_path, 'valid', r'behaviors.tsv')
wordEmb_file = os.path.join(data_path, "utils", "embedding.npy")
userDict_file = os.path.join(data_path, "utils", "uid2index.pkl")
wordDict_file = os.path.join(data_path, "utils", "word_dict.pkl")
yaml_file = os.path.join(data_path, "utils", r'lstur.yaml')
mind_url, mind_train_dataset, mind_dev_dataset, mind_utils = get_mind_data_set(MIND_type)
if not os.path.exists(train_news_file):
download_deeprec_resources(mind_url, os.path.join(data_path, 'train'), mind_train_dataset)
if not os.path.exists(valid_news_file):
download_deeprec_resources(mind_url, \
os.path.join(data_path, 'valid'), mind_dev_dataset)
if not os.path.exists(yaml_file):
download_deeprec_resources(r'https://recodatasets.blob.core.windows.net/newsrec/', \
os.path.join(data_path, 'utils'), mind_utils)
###Output
100%|██████████| 17.0k/17.0k [00:01<00:00, 8.74kKB/s]
100%|██████████| 9.84k/9.84k [00:01<00:00, 8.92kKB/s]
100%|██████████| 95.0k/95.0k [00:09<00:00, 9.72kKB/s]
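###Markdown
It can be reassuring to peek at the downloaded news file before training. This is a small sketch, not part of the original notebook; the columns follow the format described earlier.
###Code
# Quick look at the first few rows of the news file
# quoting=3 (QUOTE_NONE) because titles may contain unmatched quote characters
import pandas as pd
pd.read_csv(train_news_file, sep='\t', header=None, quoting=3).head()
###Output
_____no_output_____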
###Markdown
Create hyper-parameters
###Code
hparams = prepare_hparams(yaml_file,
wordEmb_file=wordEmb_file,
wordDict_file=wordDict_file,
userDict_file=userDict_file,
batch_size=batch_size,
epochs=epochs)
print(hparams)
iterator = MINDIterator
###Output
_____no_output_____
###Markdown
Train the LSTUR model
###Code
model = LSTURModel(hparams, iterator, seed=seed)
print(model.run_eval(valid_news_file, valid_behaviors_file))
%%time
model.fit(train_news_file, train_behaviors_file, valid_news_file, valid_behaviors_file)
%%time
res_syn = model.run_eval(valid_news_file, valid_behaviors_file)
print(res_syn)
pm.record("res_syn", res_syn)
###Output
/home/miguel/anaconda/envs/reco_gpu/lib/python3.6/site-packages/ipykernel_launcher.py:1: DeprecationWarning: Function record is deprecated and will be removed in verison 1.0.0 (current version 0.19.1). Please see `scrapbook.glue` (nteract-scrapbook) as a replacement for this functionality.
"""Entry point for launching an IPython kernel.
###Markdown
Save the model
###Code
model_path = os.path.join(data_path, "model")
os.makedirs(model_path, exist_ok=True)
model.model.save_weights(os.path.join(model_path, "lstur_ckpt"))
###Output
_____no_output_____
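###Markdown
To reuse the trained weights later, the model can be rebuilt with the same hyper-parameters and the checkpoint loaded back in. A sketch, assuming the checkpoint path written above:
###Code
# Rebuild an identical model and restore the trained weights
model_reloaded = LSTURModel(hparams, iterator, seed=seed)
model_reloaded.model.load_weights(os.path.join(model_path, "lstur_ckpt"))
###Output
_____no_output_____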
###Markdown
Output Prediction FileThis code segment is used to generate the prediction.zip file, which is in the same format in [MIND Competition Submission Tutorial](https://competitions.codalab.org/competitions/24122learn_the_details-submission-guidelines).Please change the `MIND_type` parameter to `large` if you want to submit your prediction to [MIND Competition](https://msnews.github.io/competition.html).
###Code
group_impr_indexes, group_labels, group_preds = model.run_fast_eval(valid_news_file, valid_behaviors_file)
with open(os.path.join(data_path, 'prediction.txt'), 'w') as f:
for impr_index, preds in tqdm(zip(group_impr_indexes, group_preds)):
impr_index += 1
pred_rank = (np.argsort(np.argsort(preds)[::-1]) + 1).tolist()
pred_rank = '[' + ','.join([str(i) for i in pred_rank]) + ']'
f.write(' '.join([str(impr_index), pred_rank])+ '\n')
f = zipfile.ZipFile(os.path.join(data_path, 'prediction.zip'), 'w', zipfile.ZIP_DEFLATED)
f.write(os.path.join(data_path, 'prediction.txt'), arcname='prediction.txt')
f.close()
###Output
_____no_output_____
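###Markdown
A tiny worked example of the double-argsort trick used above, with made-up scores: np.argsort(preds)[::-1] orders the candidate indexes from highest to lowest score, and the second argsort turns that ordering into a 1-based rank for each candidate.
###Code
# Made-up scores for three candidate news items
preds = np.array([0.2, 0.9, 0.5])
pred_rank = (np.argsort(np.argsort(preds)[::-1]) + 1).tolist()
print(pred_rank) # [3, 1, 2]: the second item has the highest score, so it gets rank 1
###Output
_____no_output_____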
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. LSTUR: Neural News Recommendation with Long- and Short-term User RepresentationsLSTUR \[1\] is a news recommendation approach capturing users' both long-term preferences and short-term interests. The core of LSTUR is a news encoder and a user encoder. In the news encoder, we learn representations of news from their titles. In user encoder, we propose to learn long-termuser representations from the embeddings of their IDs. In addition, we propose to learn short-term user representations from their recently browsed news via GRU network. Besides, we propose two methods to combinelong-term and short-term user representations. The first one is using the long-term user representation to initialize the hidden state of the GRU network in short-term user representation. The second one is concatenating bothlong- and short-term user representations as a unified user vector. Properties of LSTUR:- LSTUR captures users' both long-term and short term preference.- It uses embeddings of users' IDs to learn long-term user representations.- It uses users' recently browsed news via GRU network to learn short-term user representations. Data format:For quicker training and evaluaiton, we sample MINDdemo dataset of 5k users from [MIND small dataset](https://msnews.github.io/). The MINDdemo dataset has the same file format as MINDsmall and MINDlarge. If you want to try experiments on MINDsmall and MINDlarge, please change the dowload source. Select the MIND_type parameter from ['large', 'small', 'demo'] to choose dataset. **MINDdemo_train** is used for training, and **MINDdemo_dev** is used for evaluation. Training data and evaluation data are composed of a news file and a behaviors file. You can find more detailed data description in [MIND repo](https://github.com/msnews/msnews.github.io/blob/master/assets/doc/introduction.md) news dataThis file contains news information including newsid, category, subcatgory, news title, news abstarct, news url and entities in news title, entities in news abstarct.One simple example: `N46466 lifestyle lifestyleroyals The Brands Queen Elizabeth, Prince Charles, and Prince Philip Swear By Shop the notebooks, jackets, and more that the royals can't live without. https://www.msn.com/en-us/lifestyle/lifestyleroyals/the-brands-queen-elizabeth,-prince-charles,-and-prince-philip-swear-by/ss-AAGH0ET?ocid=chopendata [{"Label": "Prince Philip, Duke of Edinburgh", "Type": "P", "WikidataId": "Q80976", "Confidence": 1.0, "OccurrenceOffsets": [48], "SurfaceForms": ["Prince Philip"]}, {"Label": "Charles, Prince of Wales", "Type": "P", "WikidataId": "Q43274", "Confidence": 1.0, "OccurrenceOffsets": [28], "SurfaceForms": ["Prince Charles"]}, {"Label": "Elizabeth II", "Type": "P", "WikidataId": "Q9682", "Confidence": 0.97, "OccurrenceOffsets": [11], "SurfaceForms": ["Queen Elizabeth"]}] []`In general, each line in data file represents information of one piece of news: `[News ID] [Category] [Subcategory] [News Title] [News Abstrct] [News Url] [Entities in News Title] [Entities in News Abstract] ...`We generate a word_dict file to tranform words in news title to word indexes, and a embedding matrix is initted from pretrained glove embeddings. 
behaviors dataOne simple example: `1 U82271 11/11/2019 3:28:58 PM N3130 N11621 N12917 N4574 N12140 N9748 N13390-0 N7180-0 N20785-0 N6937-0 N15776-0 N25810-0 N20820-0 N6885-0 N27294-0 N18835-0 N16945-0 N7410-0 N23967-0 N22679-0 N20532-0 N26651-0 N22078-0 N4098-0 N16473-0 N13841-0 N15660-0 N25787-0 N2315-0 N1615-0 N9087-0 N23880-0 N3600-0 N24479-0 N22882-0 N26308-0 N13594-0 N2220-0 N28356-0 N17083-0 N21415-0 N18671-0 N9440-0 N17759-0 N10861-0 N21830-0 N8064-0 N5675-0 N15037-0 N26154-0 N15368-1 N481-0 N3256-0 N20663-0 N23940-0 N7654-0 N10729-0 N7090-0 N23596-0 N15901-0 N16348-0 N13645-0 N8124-0 N20094-0 N27774-0 N23011-0 N14832-0 N15971-0 N27729-0 N2167-0 N11186-0 N18390-0 N21328-0 N10992-0 N20122-0 N1958-0 N2004-0 N26156-0 N17632-0 N26146-0 N17322-0 N18403-0 N17397-0 N18215-0 N14475-0 N9781-0 N17958-0 N3370-0 N1127-0 N15525-0 N12657-0 N10537-0 N18224-0`In general, each line in data file represents one instance of an impression. The format is like: `[Impression ID] [User ID] [Impression Time] [User Click History] [Impression News]`User Click History is the user historical clicked news before Impression Time. Impression News is the displayed news in an impression, which format is:`[News ID 1]-[label1] ... [News ID n]-[labeln]`Label represents whether the news is clicked by the user. All information of news in User Click History and Impression News can be found in news data file. Global settings and imports
###Code
import sys
sys.path.append("../../")
import os
from reco_utils.recommender.deeprec.deeprec_utils import download_deeprec_resources
from reco_utils.recommender.newsrec.newsrec_utils import prepare_hparams
from reco_utils.recommender.newsrec.models.lstur import LSTURModel
from reco_utils.recommender.newsrec.io.mind_iterator import MINDIterator
from reco_utils.recommender.newsrec.newsrec_utils import get_mind_data_set
import papermill as pm
from tempfile import TemporaryDirectory
import tensorflow as tf
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
tmpdir = TemporaryDirectory()
###Output
System version: 3.6.10 |Anaconda, Inc.| (default, May 8 2020, 02:54:21)
[GCC 7.3.0]
Tensorflow version: 1.15.2
###Markdown
Prepare Parameters
###Code
epochs=5
seed=40
MIND_type = 'demo'
###Output
_____no_output_____
###Markdown
Download and load data
###Code
data_path = tmpdir.name
train_news_file = os.path.join(data_path, 'train', r'news.tsv')
train_behaviors_file = os.path.join(data_path, 'train', r'behaviors.tsv')
valid_news_file = os.path.join(data_path, 'valid', r'news.tsv')
valid_behaviors_file = os.path.join(data_path, 'valid', r'behaviors.tsv')
wordEmb_file = os.path.join(data_path, "utils", "embedding.npy")
userDict_file = os.path.join(data_path, "utils", "uid2index.pkl")
wordDict_file = os.path.join(data_path, "utils", "word_dict.pkl")
yaml_file = os.path.join(data_path, "utils", r'lstur.yaml')
mind_url, mind_train_dataset, mind_dev_dataset, mind_utils = get_mind_data_set(MIND_type)
if not os.path.exists(train_news_file):
download_deeprec_resources(mind_url, os.path.join(data_path, 'train'), mind_train_dataset)
if not os.path.exists(valid_news_file):
download_deeprec_resources(mind_url, \
os.path.join(data_path, 'valid'), mind_dev_dataset)
if not os.path.exists(yaml_file):
download_deeprec_resources(r'https://recodatasets.blob.core.windows.net/newsrec/', \
os.path.join(data_path, 'utils'), mind_utils)
###Output
100%|██████████| 17.0k/17.0k [00:01<00:00, 11.5kKB/s]
100%|██████████| 9.84k/9.84k [00:01<00:00, 8.65kKB/s]
100%|██████████| 95.0k/95.0k [00:06<00:00, 15.5kKB/s]
###Markdown
Create hyper-parameters
###Code
hparams = prepare_hparams(yaml_file, wordEmb_file=wordEmb_file, \
wordDict_file=wordDict_file, userDict_file=userDict_file, epochs=epochs)
print(hparams)
iterator = MINDIterator
###Output
_____no_output_____
###Markdown
Train the LSTUR model
###Code
model = LSTURModel(hparams, iterator, seed=seed)
print(model.run_eval(valid_news_file, valid_behaviors_file))
model.fit(train_news_file, train_behaviors_file, valid_news_file, valid_behaviors_file)
res_syn = model.run_eval(valid_news_file, valid_behaviors_file)
print(res_syn)
pm.record("res_syn", res_syn)
###Output
586it [00:00, 767.47it/s]
236it [00:05, 39.57it/s]
7538it [00:02, 3396.53it/s]
###Markdown
Save the model
###Code
model_path = os.path.join(data_path, "model")
os.makedirs(model_path, exist_ok=True)
model.model.save_weights(os.path.join(model_path, "lstur_ckpt"))
###Output
_____no_output_____
###Markdown
Output Prediction File
This code segment is used to generate the prediction.zip file, which is in the same format in [MIND Competition Submission Tutorial](https://competitions.codalab.org/competitions/24122learn_the_details-submission-guidelines). Please change the `MIND_type` parameter to `large` if you want to submit your prediction to [MIND Competition](https://msnews.github.io/competition.html).
###Code
group_impr_indexes, group_labels, group_preds = model.run_fast_eval(valid_news_file, valid_behaviors_file)
import numpy as np
from tqdm import tqdm
with open(os.path.join(data_path, 'prediction.txt'), 'w') as f:
for impr_index, preds in tqdm(zip(group_impr_indexes, group_preds)):
impr_index += 1
pred_rank = (np.argsort(np.argsort(preds)[::-1]) + 1).tolist()
pred_rank = '[' + ','.join([str(i) for i in pred_rank]) + ']'
f.write(' '.join([str(impr_index), pred_rank])+ '\n')
import zipfile
f = zipfile.ZipFile(os.path.join(data_path, 'prediction.zip'), 'w', zipfile.ZIP_DEFLATED)
f.write(os.path.join(data_path, 'prediction.txt'), arcname='prediction.txt')
f.close()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. LSTUR: Neural News Recommendation with Long- and Short-term User RepresentationsLSTUR \[1\] is a news recommendation approach capturing users' both long-term preferences and short-term interests. The core of LSTUR is a news encoder and a user encoder. In the news encoder, we learn representations of news from their titles. In user encoder, we propose to learn long-termuser representations from the embeddings of their IDs. In addition, we propose to learn short-term user representations from their recently browsed news via GRU network. Besides, we propose two methods to combinelong-term and short-term user representations. The first one is using the long-term user representation to initialize the hidden state of the GRU network in short-term user representation. The second one is concatenating bothlong- and short-term user representations as a unified user vector. Properties of LSTUR:- LSTUR captures users' both long-term and short term preference.- It uses embeddings of users' IDs to learn long-term user representations.- It uses users' recently browsed news via GRU network to learn short-term user representations. Data format:For quicker training and evaluaiton, we sample MINDdemo dataset of 5k users from [MIND small dataset](https://msnews.github.io/). The MINDdemo dataset has the same file format as MINDsmall and MINDlarge. If you want to try experiments on MINDsmall and MINDlarge, please change the dowload source. Select the MIND_type parameter from ['large', 'small', 'demo'] to choose dataset. **MINDdemo_train** is used for training, and **MINDdemo_dev** is used for evaluation. Training data and evaluation data are composed of a news file and a behaviors file. You can find more detailed data description in [MIND repo](https://github.com/msnews/msnews.github.io/blob/master/assets/doc/introduction.md) news dataThis file contains news information including newsid, category, subcatgory, news title, news abstarct, news url and entities in news title, entities in news abstarct.One simple example: `N46466 lifestyle lifestyleroyals The Brands Queen Elizabeth, Prince Charles, and Prince Philip Swear By Shop the notebooks, jackets, and more that the royals can't live without. https://www.msn.com/en-us/lifestyle/lifestyleroyals/the-brands-queen-elizabeth,-prince-charles,-and-prince-philip-swear-by/ss-AAGH0ET?ocid=chopendata [{"Label": "Prince Philip, Duke of Edinburgh", "Type": "P", "WikidataId": "Q80976", "Confidence": 1.0, "OccurrenceOffsets": [48], "SurfaceForms": ["Prince Philip"]}, {"Label": "Charles, Prince of Wales", "Type": "P", "WikidataId": "Q43274", "Confidence": 1.0, "OccurrenceOffsets": [28], "SurfaceForms": ["Prince Charles"]}, {"Label": "Elizabeth II", "Type": "P", "WikidataId": "Q9682", "Confidence": 0.97, "OccurrenceOffsets": [11], "SurfaceForms": ["Queen Elizabeth"]}] []`In general, each line in data file represents information of one piece of news: `[News ID] [Category] [Subcategory] [News Title] [News Abstrct] [News Url] [Entities in News Title] [Entities in News Abstract] ...`We generate a word_dict file to tranform words in news title to word indexes, and a embedding matrix is initted from pretrained glove embeddings. 
behaviors dataOne simple example: `1 U82271 11/11/2019 3:28:58 PM N3130 N11621 N12917 N4574 N12140 N9748 N13390-0 N7180-0 N20785-0 N6937-0 N15776-0 N25810-0 N20820-0 N6885-0 N27294-0 N18835-0 N16945-0 N7410-0 N23967-0 N22679-0 N20532-0 N26651-0 N22078-0 N4098-0 N16473-0 N13841-0 N15660-0 N25787-0 N2315-0 N1615-0 N9087-0 N23880-0 N3600-0 N24479-0 N22882-0 N26308-0 N13594-0 N2220-0 N28356-0 N17083-0 N21415-0 N18671-0 N9440-0 N17759-0 N10861-0 N21830-0 N8064-0 N5675-0 N15037-0 N26154-0 N15368-1 N481-0 N3256-0 N20663-0 N23940-0 N7654-0 N10729-0 N7090-0 N23596-0 N15901-0 N16348-0 N13645-0 N8124-0 N20094-0 N27774-0 N23011-0 N14832-0 N15971-0 N27729-0 N2167-0 N11186-0 N18390-0 N21328-0 N10992-0 N20122-0 N1958-0 N2004-0 N26156-0 N17632-0 N26146-0 N17322-0 N18403-0 N17397-0 N18215-0 N14475-0 N9781-0 N17958-0 N3370-0 N1127-0 N15525-0 N12657-0 N10537-0 N18224-0`In general, each line in data file represents one instance of an impression. The format is like: `[Impression ID] [User ID] [Impression Time] [User Click History] [Impression News]`User Click History is the user historical clicked news before Impression Time. Impression News is the displayed news in an impression, which format is:`[News ID 1]-[label1] ... [News ID n]-[labeln]`Label represents whether the news is clicked by the user. All information of news in User Click History and Impression News can be found in news data file. Global settings and imports
###Code
import sys
import os
import numpy as np
import zipfile
from tqdm import tqdm
import scrapbook as sb
from tempfile import TemporaryDirectory
import tensorflow as tf
tf.get_logger().setLevel('ERROR') # only show error messages
from reco_utils.recommender.deeprec.deeprec_utils import download_deeprec_resources
from reco_utils.recommender.newsrec.newsrec_utils import prepare_hparams
from reco_utils.recommender.newsrec.models.lstur import LSTURModel
from reco_utils.recommender.newsrec.io.mind_iterator import MINDIterator
from reco_utils.recommender.newsrec.newsrec_utils import get_mind_data_set
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
###Output
System version: 3.6.11 | packaged by conda-forge | (default, Aug 5 2020, 20:09:42)
[GCC 7.5.0]
Tensorflow version: 1.15.2
###Markdown
Prepare Parameters
###Code
epochs = 5
seed = 40
batch_size = 32
# Options: demo, small, large
MIND_type = 'demo'
###Output
_____no_output_____
###Markdown
Download and load data
###Code
tmpdir = TemporaryDirectory()
data_path = tmpdir.name
train_news_file = os.path.join(data_path, 'train', r'news.tsv')
train_behaviors_file = os.path.join(data_path, 'train', r'behaviors.tsv')
valid_news_file = os.path.join(data_path, 'valid', r'news.tsv')
valid_behaviors_file = os.path.join(data_path, 'valid', r'behaviors.tsv')
wordEmb_file = os.path.join(data_path, "utils", "embedding.npy")
userDict_file = os.path.join(data_path, "utils", "uid2index.pkl")
wordDict_file = os.path.join(data_path, "utils", "word_dict.pkl")
yaml_file = os.path.join(data_path, "utils", r'lstur.yaml')
mind_url, mind_train_dataset, mind_dev_dataset, mind_utils = get_mind_data_set(MIND_type)
if not os.path.exists(train_news_file):
download_deeprec_resources(mind_url, os.path.join(data_path, 'train'), mind_train_dataset)
if not os.path.exists(valid_news_file):
download_deeprec_resources(mind_url, \
os.path.join(data_path, 'valid'), mind_dev_dataset)
if not os.path.exists(yaml_file):
download_deeprec_resources(r'https://recodatasets.z20.web.core.windows.net/newsrec/', \
os.path.join(data_path, 'utils'), mind_utils)
###Output
100%|██████████| 17.0k/17.0k [00:01<00:00, 9.67kKB/s]
100%|██████████| 9.84k/9.84k [00:01<00:00, 8.34kKB/s]
100%|██████████| 95.0k/95.0k [00:08<00:00, 11.4kKB/s]
###Markdown
Create hyper-parameters
###Code
hparams = prepare_hparams(yaml_file,
wordEmb_file=wordEmb_file,
wordDict_file=wordDict_file,
userDict_file=userDict_file,
batch_size=batch_size,
epochs=epochs)
print(hparams)
iterator = MINDIterator
###Output
_____no_output_____
###Markdown
Train the LSTUR model
###Code
model = LSTURModel(hparams, iterator, seed=seed)
print(model.run_eval(valid_news_file, valid_behaviors_file))
%%time
model.fit(train_news_file, train_behaviors_file, valid_news_file, valid_behaviors_file)
%%time
res_syn = model.run_eval(valid_news_file, valid_behaviors_file)
print(res_syn)
sb.glue("res_syn", res_syn)
###Output
_____no_output_____
###Markdown
Save the model
###Code
model_path = os.path.join(data_path, "model")
os.makedirs(model_path, exist_ok=True)
model.model.save_weights(os.path.join(model_path, "lstur_ckpt"))
###Output
_____no_output_____
###Markdown
Output Prediction FileThis code segment is used to generate the prediction.zip file, which is in the same format in [MIND Competition Submission Tutorial](https://competitions.codalab.org/competitions/24122learn_the_details-submission-guidelines).Please change the `MIND_type` parameter to `large` if you want to submit your prediction to [MIND Competition](https://msnews.github.io/competition.html).
###Code
group_impr_indexes, group_labels, group_preds = model.run_fast_eval(valid_news_file, valid_behaviors_file)
with open(os.path.join(data_path, 'prediction.txt'), 'w') as f:
for impr_index, preds in tqdm(zip(group_impr_indexes, group_preds)):
impr_index += 1
pred_rank = (np.argsort(np.argsort(preds)[::-1]) + 1).tolist()
pred_rank = '[' + ','.join([str(i) for i in pred_rank]) + ']'
f.write(' '.join([str(impr_index), pred_rank])+ '\n')
f = zipfile.ZipFile(os.path.join(data_path, 'prediction.zip'), 'w', zipfile.ZIP_DEFLATED)
f.write(os.path.join(data_path, 'prediction.txt'), arcname='prediction.txt')
f.close()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. LSTUR: Neural News Recommendation with Long- and Short-term User RepresentationsLSTUR \[1\] is a news recommendation approach capturing users' both long-term preferences and short-term interests. The core of LSTUR is a news encoder and a user encoder. In the news encoder, we learn representations of news from their titles. In user encoder, we propose to learn long-termuser representations from the embeddings of their IDs. In addition, we propose to learn short-term user representations from their recently browsed news via GRU network. Besides, we propose two methods to combinelong-term and short-term user representations. The first one is using the long-term user representation to initialize the hidden state of the GRU network in short-term user representation. The second one is concatenating bothlong- and short-term user representations as a unified user vector. Properties of LSTUR:- LSTUR captures users' both long-term and short term preference.- It uses embeddings of users' IDs to learn long-term user representations.- It uses users' recently browsed news via GRU network to learn short-term user representations. Data format:For quicker training and evaluaiton, we sample MINDdemo dataset of 5k users from [MIND small dataset](https://msnews.github.io/). The MINDdemo dataset has the same file format as MINDsmall and MINDlarge. If you want to try experiments on MINDsmall and MINDlarge, please change the dowload source. Select the MIND_type parameter from ['large', 'small', 'demo'] to choose dataset. **MINDdemo_train** is used for training, and **MINDdemo_dev** is used for evaluation. Training data and evaluation data are composed of a news file and a behaviors file. You can find more detailed data description in [MIND repo](https://github.com/msnews/msnews.github.io/blob/master/assets/doc/introduction.md) news dataThis file contains news information including newsid, category, subcatgory, news title, news abstarct, news url and entities in news title, entities in news abstarct.One simple example: `N46466 lifestyle lifestyleroyals The Brands Queen Elizabeth, Prince Charles, and Prince Philip Swear By Shop the notebooks, jackets, and more that the royals can't live without. https://www.msn.com/en-us/lifestyle/lifestyleroyals/the-brands-queen-elizabeth,-prince-charles,-and-prince-philip-swear-by/ss-AAGH0ET?ocid=chopendata [{"Label": "Prince Philip, Duke of Edinburgh", "Type": "P", "WikidataId": "Q80976", "Confidence": 1.0, "OccurrenceOffsets": [48], "SurfaceForms": ["Prince Philip"]}, {"Label": "Charles, Prince of Wales", "Type": "P", "WikidataId": "Q43274", "Confidence": 1.0, "OccurrenceOffsets": [28], "SurfaceForms": ["Prince Charles"]}, {"Label": "Elizabeth II", "Type": "P", "WikidataId": "Q9682", "Confidence": 0.97, "OccurrenceOffsets": [11], "SurfaceForms": ["Queen Elizabeth"]}] []`In general, each line in data file represents information of one piece of news: `[News ID] [Category] [Subcategory] [News Title] [News Abstrct] [News Url] [Entities in News Title] [Entities in News Abstract] ...`We generate a word_dict file to tranform words in news title to word indexes, and a embedding matrix is initted from pretrained glove embeddings. 
behaviors dataOne simple example: `1 U82271 11/11/2019 3:28:58 PM N3130 N11621 N12917 N4574 N12140 N9748 N13390-0 N7180-0 N20785-0 N6937-0 N15776-0 N25810-0 N20820-0 N6885-0 N27294-0 N18835-0 N16945-0 N7410-0 N23967-0 N22679-0 N20532-0 N26651-0 N22078-0 N4098-0 N16473-0 N13841-0 N15660-0 N25787-0 N2315-0 N1615-0 N9087-0 N23880-0 N3600-0 N24479-0 N22882-0 N26308-0 N13594-0 N2220-0 N28356-0 N17083-0 N21415-0 N18671-0 N9440-0 N17759-0 N10861-0 N21830-0 N8064-0 N5675-0 N15037-0 N26154-0 N15368-1 N481-0 N3256-0 N20663-0 N23940-0 N7654-0 N10729-0 N7090-0 N23596-0 N15901-0 N16348-0 N13645-0 N8124-0 N20094-0 N27774-0 N23011-0 N14832-0 N15971-0 N27729-0 N2167-0 N11186-0 N18390-0 N21328-0 N10992-0 N20122-0 N1958-0 N2004-0 N26156-0 N17632-0 N26146-0 N17322-0 N18403-0 N17397-0 N18215-0 N14475-0 N9781-0 N17958-0 N3370-0 N1127-0 N15525-0 N12657-0 N10537-0 N18224-0`In general, each line in data file represents one instance of an impression. The format is like: `[Impression ID] [User ID] [Impression Time] [User Click History] [Impression News]`User Click History is the user historical clicked news before Impression Time. Impression News is the displayed news in an impression, which format is:`[News ID 1]-[label1] ... [News ID n]-[labeln]`Label represents whether the news is clicked by the user. All information of news in User Click History and Impression News can be found in news data file. Global settings and imports
###Code
import sys
sys.path.append("../../")
import os
import numpy as np
import zipfile
from tqdm import tqdm
import papermill as pm
import scrapbook as sb
from tempfile import TemporaryDirectory
import tensorflow as tf
from reco_utils.recommender.deeprec.deeprec_utils import download_deeprec_resources
from reco_utils.recommender.newsrec.newsrec_utils import prepare_hparams
from reco_utils.recommender.newsrec.models.lstur import LSTURModel
from reco_utils.recommender.newsrec.io.mind_iterator import MINDIterator
from reco_utils.recommender.newsrec.newsrec_utils import get_mind_data_set
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
###Output
System version: 3.6.11 | packaged by conda-forge | (default, Aug 5 2020, 20:09:42)
[GCC 7.5.0]
Tensorflow version: 1.15.2
###Markdown
Prepare Parameters
###Code
epochs = 5
seed = 40
batch_size = 32
# Options: demo, small, large
MIND_type = 'demo'
###Output
_____no_output_____
###Markdown
Download and load data
###Code
tmpdir = TemporaryDirectory()
data_path = tmpdir.name
train_news_file = os.path.join(data_path, 'train', r'news.tsv')
train_behaviors_file = os.path.join(data_path, 'train', r'behaviors.tsv')
valid_news_file = os.path.join(data_path, 'valid', r'news.tsv')
valid_behaviors_file = os.path.join(data_path, 'valid', r'behaviors.tsv')
wordEmb_file = os.path.join(data_path, "utils", "embedding.npy")
userDict_file = os.path.join(data_path, "utils", "uid2index.pkl")
wordDict_file = os.path.join(data_path, "utils", "word_dict.pkl")
yaml_file = os.path.join(data_path, "utils", r'lstur.yaml')
mind_url, mind_train_dataset, mind_dev_dataset, mind_utils = get_mind_data_set(MIND_type)
if not os.path.exists(train_news_file):
download_deeprec_resources(mind_url, os.path.join(data_path, 'train'), mind_train_dataset)
if not os.path.exists(valid_news_file):
download_deeprec_resources(mind_url, \
os.path.join(data_path, 'valid'), mind_dev_dataset)
if not os.path.exists(yaml_file):
download_deeprec_resources(r'https://recodatasets.blob.core.windows.net/newsrec/', \
os.path.join(data_path, 'utils'), mind_utils)
###Output
100%|██████████| 17.0k/17.0k [00:01<00:00, 8.74kKB/s]
100%|██████████| 9.84k/9.84k [00:01<00:00, 8.92kKB/s]
100%|██████████| 95.0k/95.0k [00:09<00:00, 9.72kKB/s]
###Markdown
Create hyper-parameters
###Code
hparams = prepare_hparams(yaml_file,
wordEmb_file=wordEmb_file,
wordDict_file=wordDict_file,
userDict_file=userDict_file,
batch_size=batch_size,
epochs=epochs)
print(hparams)
iterator = MINDIterator
###Output
_____no_output_____
###Markdown
Train the LSTUR model
###Code
model = LSTURModel(hparams, iterator, seed=seed)
print(model.run_eval(valid_news_file, valid_behaviors_file))
%%time
model.fit(train_news_file, train_behaviors_file, valid_news_file, valid_behaviors_file)
%%time
res_syn = model.run_eval(valid_news_file, valid_behaviors_file)
print(res_syn)
sb.glue("res_syn", res_syn)
###Output
/home/miguel/anaconda/envs/reco_gpu/lib/python3.6/site-packages/ipykernel_launcher.py:1: DeprecationWarning: Function record is deprecated and will be removed in verison 1.0.0 (current version 0.19.1). Please see `scrapbook.glue` (nteract-scrapbook) as a replacement for this functionality.
"""Entry point for launching an IPython kernel.
###Markdown
Save the model
###Code
model_path = os.path.join(data_path, "model")
os.makedirs(model_path, exist_ok=True)
model.model.save_weights(os.path.join(model_path, "lstur_ckpt"))
###Output
_____no_output_____
###Markdown
Output Prediction FileThis code segment is used to generate the prediction.zip file, which is in the same format in [MIND Competition Submission Tutorial](https://competitions.codalab.org/competitions/24122learn_the_details-submission-guidelines).Please change the `MIND_type` parameter to `large` if you want to submit your prediction to [MIND Competition](https://msnews.github.io/competition.html).
###Code
group_impr_indexes, group_labels, group_preds = model.run_fast_eval(valid_news_file, valid_behaviors_file)
with open(os.path.join(data_path, 'prediction.txt'), 'w') as f:
for impr_index, preds in tqdm(zip(group_impr_indexes, group_preds)):
impr_index += 1
pred_rank = (np.argsort(np.argsort(preds)[::-1]) + 1).tolist()
pred_rank = '[' + ','.join([str(i) for i in pred_rank]) + ']'
f.write(' '.join([str(impr_index), pred_rank])+ '\n')
f = zipfile.ZipFile(os.path.join(data_path, 'prediction.zip'), 'w', zipfile.ZIP_DEFLATED)
f.write(os.path.join(data_path, 'prediction.txt'), arcname='prediction.txt')
f.close()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. LSTUR: Neural News Recommendation with Long- and Short-term User RepresentationsLSTUR \[1\] is a news recommendation approach capturing users' both long-term preferences and short-term interests. The core of LSTUR is a news encoder and a user encoder. In the news encoder, we learn representations of news from their titles. In user encoder, we propose to learn long-termuser representations from the embeddings of their IDs. In addition, we propose to learn short-term user representations from their recently browsed news via GRU network. Besides, we propose two methods to combinelong-term and short-term user representations. The first one is using the long-term user representation to initialize the hidden state of the GRU network in short-term user representation. The second one is concatenating bothlong- and short-term user representations as a unified user vector. Properties of LSTUR:- LSTUR captures users' both long-term and short term preference.- It uses embeddings of users' IDs to learn long-term user representations.- It uses users' recently browsed news via GRU network to learn short-term user representations. Data format:For quicker training and evaluaiton, we sample MINDdemo dataset of 5k users from [MIND small dataset](https://msnews.github.io/). The MINDdemo dataset has the same file format as MINDsmall and MINDlarge. If you want to try experiments on MINDsmall and MINDlarge, please change the dowload source. Select the MIND_type parameter from ['large', 'small', 'demo'] to choose dataset. **MINDdemo_train** is used for training, and **MINDdemo_dev** is used for evaluation. Training data and evaluation data are composed of a news file and a behaviors file. You can find more detailed data description in [MIND repo](https://github.com/msnews/msnews.github.io/blob/master/assets/doc/introduction.md) news dataThis file contains news information including newsid, category, subcatgory, news title, news abstarct, news url and entities in news title, entities in news abstarct.One simple example: `N46466 lifestyle lifestyleroyals The Brands Queen Elizabeth, Prince Charles, and Prince Philip Swear By Shop the notebooks, jackets, and more that the royals can't live without. https://www.msn.com/en-us/lifestyle/lifestyleroyals/the-brands-queen-elizabeth,-prince-charles,-and-prince-philip-swear-by/ss-AAGH0ET?ocid=chopendata [{"Label": "Prince Philip, Duke of Edinburgh", "Type": "P", "WikidataId": "Q80976", "Confidence": 1.0, "OccurrenceOffsets": [48], "SurfaceForms": ["Prince Philip"]}, {"Label": "Charles, Prince of Wales", "Type": "P", "WikidataId": "Q43274", "Confidence": 1.0, "OccurrenceOffsets": [28], "SurfaceForms": ["Prince Charles"]}, {"Label": "Elizabeth II", "Type": "P", "WikidataId": "Q9682", "Confidence": 0.97, "OccurrenceOffsets": [11], "SurfaceForms": ["Queen Elizabeth"]}] []`In general, each line in data file represents information of one piece of news: `[News ID] [Category] [Subcategory] [News Title] [News Abstrct] [News Url] [Entities in News Title] [Entities in News Abstract] ...`We generate a word_dict file to tranform words in news title to word indexes, and a embedding matrix is initted from pretrained glove embeddings. 
behaviors dataOne simple example: `1 U82271 11/11/2019 3:28:58 PM N3130 N11621 N12917 N4574 N12140 N9748 N13390-0 N7180-0 N20785-0 N6937-0 N15776-0 N25810-0 N20820-0 N6885-0 N27294-0 N18835-0 N16945-0 N7410-0 N23967-0 N22679-0 N20532-0 N26651-0 N22078-0 N4098-0 N16473-0 N13841-0 N15660-0 N25787-0 N2315-0 N1615-0 N9087-0 N23880-0 N3600-0 N24479-0 N22882-0 N26308-0 N13594-0 N2220-0 N28356-0 N17083-0 N21415-0 N18671-0 N9440-0 N17759-0 N10861-0 N21830-0 N8064-0 N5675-0 N15037-0 N26154-0 N15368-1 N481-0 N3256-0 N20663-0 N23940-0 N7654-0 N10729-0 N7090-0 N23596-0 N15901-0 N16348-0 N13645-0 N8124-0 N20094-0 N27774-0 N23011-0 N14832-0 N15971-0 N27729-0 N2167-0 N11186-0 N18390-0 N21328-0 N10992-0 N20122-0 N1958-0 N2004-0 N26156-0 N17632-0 N26146-0 N17322-0 N18403-0 N17397-0 N18215-0 N14475-0 N9781-0 N17958-0 N3370-0 N1127-0 N15525-0 N12657-0 N10537-0 N18224-0`In general, each line in data file represents one instance of an impression. The format is like: `[Impression ID] [User ID] [Impression Time] [User Click History] [Impression News]`User Click History is the user historical clicked news before Impression Time. Impression News is the displayed news in an impression, which format is:`[News ID 1]-[label1] ... [News ID n]-[labeln]`Label represents whether the news is clicked by the user. All information of news in User Click History and Impression News can be found in news data file. Global settings and imports
###Code
import sys
sys.path.append("../../")
import os
import numpy as np
import zipfile
from tqdm import tqdm
import scrapbook as sb
from tempfile import TemporaryDirectory
import tensorflow as tf
tf.get_logger().setLevel('ERROR') # only show error messages
from reco_utils.recommender.deeprec.deeprec_utils import download_deeprec_resources
from reco_utils.recommender.newsrec.newsrec_utils import prepare_hparams
from reco_utils.recommender.newsrec.models.lstur import LSTURModel
from reco_utils.recommender.newsrec.io.mind_iterator import MINDIterator
from reco_utils.recommender.newsrec.newsrec_utils import get_mind_data_set
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
###Output
System version: 3.6.11 | packaged by conda-forge | (default, Aug 5 2020, 20:09:42)
[GCC 7.5.0]
Tensorflow version: 1.15.2
###Markdown
Prepare Parameters
###Code
epochs = 5
seed = 40
batch_size = 32
# Options: demo, small, large
MIND_type = 'demo'
###Output
_____no_output_____
###Markdown
Download and load data
###Code
tmpdir = TemporaryDirectory()
data_path = tmpdir.name
train_news_file = os.path.join(data_path, 'train', r'news.tsv')
train_behaviors_file = os.path.join(data_path, 'train', r'behaviors.tsv')
valid_news_file = os.path.join(data_path, 'valid', r'news.tsv')
valid_behaviors_file = os.path.join(data_path, 'valid', r'behaviors.tsv')
wordEmb_file = os.path.join(data_path, "utils", "embedding.npy")
userDict_file = os.path.join(data_path, "utils", "uid2index.pkl")
wordDict_file = os.path.join(data_path, "utils", "word_dict.pkl")
yaml_file = os.path.join(data_path, "utils", r'lstur.yaml')
mind_url, mind_train_dataset, mind_dev_dataset, mind_utils = get_mind_data_set(MIND_type)
if not os.path.exists(train_news_file):
download_deeprec_resources(mind_url, os.path.join(data_path, 'train'), mind_train_dataset)
if not os.path.exists(valid_news_file):
download_deeprec_resources(mind_url, \
os.path.join(data_path, 'valid'), mind_dev_dataset)
if not os.path.exists(yaml_file):
download_deeprec_resources(r'https://recodatasets.z20.web.core.windows.net/newsrec/', \
os.path.join(data_path, 'utils'), mind_utils)
###Output
100%|██████████| 17.0k/17.0k [00:01<00:00, 9.67kKB/s]
100%|██████████| 9.84k/9.84k [00:01<00:00, 8.34kKB/s]
100%|██████████| 95.0k/95.0k [00:08<00:00, 11.4kKB/s]
###Markdown
Create hyper-parameters
###Code
hparams = prepare_hparams(yaml_file,
wordEmb_file=wordEmb_file,
wordDict_file=wordDict_file,
userDict_file=userDict_file,
batch_size=batch_size,
epochs=epochs)
print(hparams)
iterator = MINDIterator
###Output
_____no_output_____
###Markdown
Train the LSTUR model
###Code
model = LSTURModel(hparams, iterator, seed=seed)
print(model.run_eval(valid_news_file, valid_behaviors_file))
%%time
model.fit(train_news_file, train_behaviors_file, valid_news_file, valid_behaviors_file)
%%time
res_syn = model.run_eval(valid_news_file, valid_behaviors_file)
print(res_syn)
sb.glue("res_syn", res_syn)
###Output
_____no_output_____
###Markdown
Save the model
###Code
model_path = os.path.join(data_path, "model")
os.makedirs(model_path, exist_ok=True)
model.model.save_weights(os.path.join(model_path, "lstur_ckpt"))
###Output
_____no_output_____
###Markdown
Output Prediction FileThis code segment is used to generate the prediction.zip file, which is in the same format in [MIND Competition Submission Tutorial](https://competitions.codalab.org/competitions/24122learn_the_details-submission-guidelines).Please change the `MIND_type` parameter to `large` if you want to submit your prediction to [MIND Competition](https://msnews.github.io/competition.html).
###Code
group_impr_indexes, group_labels, group_preds = model.run_fast_eval(valid_news_file, valid_behaviors_file)
with open(os.path.join(data_path, 'prediction.txt'), 'w') as f:
for impr_index, preds in tqdm(zip(group_impr_indexes, group_preds)):
impr_index += 1
pred_rank = (np.argsort(np.argsort(preds)[::-1]) + 1).tolist()
pred_rank = '[' + ','.join([str(i) for i in pred_rank]) + ']'
f.write(' '.join([str(impr_index), pred_rank])+ '\n')
f = zipfile.ZipFile(os.path.join(data_path, 'prediction.zip'), 'w', zipfile.ZIP_DEFLATED)
f.write(os.path.join(data_path, 'prediction.txt'), arcname='prediction.txt')
f.close()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. LSTUR: Neural News Recommendation with Long- and Short-term User RepresentationsLSTUR \[1\] is a news recommendation approach capturing users' both long-term preferences and short-term interests. The core of LSTUR is a news encoder and a user encoder. In the news encoder, we learn representations of news from their titles. In user encoder, we propose to learn long-termuser representations from the embeddings of their IDs. In addition, we propose to learn short-term user representations from their recently browsed news via GRU network. Besides, we propose two methods to combinelong-term and short-term user representations. The first one is using the long-term user representation to initialize the hidden state of the GRU network in short-term user representation. The second one is concatenating bothlong- and short-term user representations as a unified user vector. Properties of LSTUR:- LSTUR captures users' both long-term and short term preference.- It uses embeddings of users' IDs to learn long-term user representations.- It uses users' recently browsed news via GRU network to learn short-term user representations. Data format:For quicker training and evaluaiton, we sample MINDdemo dataset of 5k users from [MIND small dataset](https://msnews.github.io/). The MINDdemo dataset has the same file format as MINDsmall and MINDlarge. If you want to try experiments on MINDsmall and MINDlarge, please change the dowload source. Select the MIND_type parameter from ['large', 'small', 'demo'] to choose dataset. **MINDdemo_train** is used for training, and **MINDdemo_dev** is used for evaluation. Training data and evaluation data are composed of a news file and a behaviors file. You can find more detailed data description in [MIND repo](https://github.com/msnews/msnews.github.io/blob/master/assets/doc/introduction.md) news dataThis file contains news information including newsid, category, subcatgory, news title, news abstarct, news url and entities in news title, entities in news abstarct.One simple example: `N46466 lifestyle lifestyleroyals The Brands Queen Elizabeth, Prince Charles, and Prince Philip Swear By Shop the notebooks, jackets, and more that the royals can't live without. https://www.msn.com/en-us/lifestyle/lifestyleroyals/the-brands-queen-elizabeth,-prince-charles,-and-prince-philip-swear-by/ss-AAGH0ET?ocid=chopendata [{"Label": "Prince Philip, Duke of Edinburgh", "Type": "P", "WikidataId": "Q80976", "Confidence": 1.0, "OccurrenceOffsets": [48], "SurfaceForms": ["Prince Philip"]}, {"Label": "Charles, Prince of Wales", "Type": "P", "WikidataId": "Q43274", "Confidence": 1.0, "OccurrenceOffsets": [28], "SurfaceForms": ["Prince Charles"]}, {"Label": "Elizabeth II", "Type": "P", "WikidataId": "Q9682", "Confidence": 0.97, "OccurrenceOffsets": [11], "SurfaceForms": ["Queen Elizabeth"]}] []`In general, each line in data file represents information of one piece of news: `[News ID] [Category] [Subcategory] [News Title] [News Abstrct] [News Url] [Entities in News Title] [Entities in News Abstract] ...`We generate a word_dict file to tranform words in news title to word indexes, and a embedding matrix is initted from pretrained glove embeddings. 
behaviors dataOne simple example: `1 U82271 11/11/2019 3:28:58 PM N3130 N11621 N12917 N4574 N12140 N9748 N13390-0 N7180-0 N20785-0 N6937-0 N15776-0 N25810-0 N20820-0 N6885-0 N27294-0 N18835-0 N16945-0 N7410-0 N23967-0 N22679-0 N20532-0 N26651-0 N22078-0 N4098-0 N16473-0 N13841-0 N15660-0 N25787-0 N2315-0 N1615-0 N9087-0 N23880-0 N3600-0 N24479-0 N22882-0 N26308-0 N13594-0 N2220-0 N28356-0 N17083-0 N21415-0 N18671-0 N9440-0 N17759-0 N10861-0 N21830-0 N8064-0 N5675-0 N15037-0 N26154-0 N15368-1 N481-0 N3256-0 N20663-0 N23940-0 N7654-0 N10729-0 N7090-0 N23596-0 N15901-0 N16348-0 N13645-0 N8124-0 N20094-0 N27774-0 N23011-0 N14832-0 N15971-0 N27729-0 N2167-0 N11186-0 N18390-0 N21328-0 N10992-0 N20122-0 N1958-0 N2004-0 N26156-0 N17632-0 N26146-0 N17322-0 N18403-0 N17397-0 N18215-0 N14475-0 N9781-0 N17958-0 N3370-0 N1127-0 N15525-0 N12657-0 N10537-0 N18224-0`In general, each line in data file represents one instance of an impression. The format is like: `[Impression ID] [User ID] [Impression Time] [User Click History] [Impression News]`User Click History is the user historical clicked news before Impression Time. Impression News is the displayed news in an impression, which format is:`[News ID 1]-[label1] ... [News ID n]-[labeln]`Label represents whether the news is clicked by the user. All information of news in User Click History and Impression News can be found in news data file. Global settings and imports
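Before the global settings and imports below, a similar minimal sketch (again with an abridged, hypothetical sample line; the actual behaviors.tsv files are tab-separated) of pulling apart the click history and the labeled impressions:
###Code
behaviors_line = "1\tU82271\t11/11/2019 3:28:58 PM\tN3130 N11621 N12917\tN13390-0 N15368-1 N481-0"
impr_id, user_id, impr_time, click_history, impressions = behaviors_line.split("\t")
history = click_history.split()                               # news clicked before the impression time
labels = {n: int(lab) for n, lab in (item.rsplit("-", 1) for item in impressions.split())}
print(impr_id, user_id, history, labels)                      # {'N13390': 0, 'N15368': 1, 'N481': 0}
###Output
_____no_output_____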
###Code
import sys
import os
import numpy as np
import zipfile
from tqdm import tqdm
import scrapbook as sb
from tempfile import TemporaryDirectory
import tensorflow as tf
tf.get_logger().setLevel('ERROR') # only show error messages
from recommenders.models.deeprec.deeprec_utils import download_deeprec_resources
from recommenders.models.newsrec.newsrec_utils import prepare_hparams
from recommenders.models.newsrec.models.lstur import LSTURModel
from recommenders.models.newsrec.io.mind_iterator import MINDIterator
from recommenders.models.newsrec.newsrec_utils import get_mind_data_set
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
###Output
/anaconda/envs/tf2/lib/python3.7/site-packages/papermill/iorw.py:50: FutureWarning: pyarrow.HadoopFileSystem is deprecated as of 2.0.0, please use pyarrow.fs.HadoopFileSystem instead.
from pyarrow import HadoopFileSystem
###Markdown
Prepare Parameters
###Code
epochs = 5
seed = 40
batch_size = 32
# Options: demo, small, large
MIND_type = 'demo'
###Output
_____no_output_____
###Markdown
Download and load data
###Code
tmpdir = TemporaryDirectory()
data_path = tmpdir.name
train_news_file = os.path.join(data_path, 'train', r'news.tsv')
train_behaviors_file = os.path.join(data_path, 'train', r'behaviors.tsv')
valid_news_file = os.path.join(data_path, 'valid', r'news.tsv')
valid_behaviors_file = os.path.join(data_path, 'valid', r'behaviors.tsv')
wordEmb_file = os.path.join(data_path, "utils", "embedding.npy")
userDict_file = os.path.join(data_path, "utils", "uid2index.pkl")
wordDict_file = os.path.join(data_path, "utils", "word_dict.pkl")
yaml_file = os.path.join(data_path, "utils", r'lstur.yaml')
mind_url, mind_train_dataset, mind_dev_dataset, mind_utils = get_mind_data_set(MIND_type)
if not os.path.exists(train_news_file):
download_deeprec_resources(mind_url, os.path.join(data_path, 'train'), mind_train_dataset)
if not os.path.exists(valid_news_file):
download_deeprec_resources(mind_url, \
os.path.join(data_path, 'valid'), mind_dev_dataset)
if not os.path.exists(yaml_file):
download_deeprec_resources(r'https://recodatasets.z20.web.core.windows.net/newsrec/', \
os.path.join(data_path, 'utils'), mind_utils)
###Output
100%|██████████| 17.0k/17.0k [00:01<00:00, 9.67kKB/s]
100%|██████████| 9.84k/9.84k [00:01<00:00, 8.34kKB/s]
100%|██████████| 95.0k/95.0k [00:08<00:00, 11.4kKB/s]
###Markdown
Create hyper-parameters
###Code
hparams = prepare_hparams(yaml_file,
wordEmb_file=wordEmb_file,
wordDict_file=wordDict_file,
userDict_file=userDict_file,
batch_size=batch_size,
epochs=epochs)
print(hparams)
iterator = MINDIterator
###Output
_____no_output_____
###Markdown
Train the LSTUR model
###Code
model = LSTURModel(hparams, iterator, seed=seed)
print(model.run_eval(valid_news_file, valid_behaviors_file))
%%time
model.fit(train_news_file, train_behaviors_file, valid_news_file, valid_behaviors_file)
%%time
res_syn = model.run_eval(valid_news_file, valid_behaviors_file)
print(res_syn)
sb.glue("res_syn", res_syn)
###Output
_____no_output_____
###Markdown
Save the model
###Code
model_path = os.path.join(data_path, "model")
os.makedirs(model_path, exist_ok=True)
model.model.save_weights(os.path.join(model_path, "lstur_ckpt"))
###Output
_____no_output_____
###Markdown
Output Prediction FileThis code segment is used to generate the prediction.zip file, which is in the same format as in the [MIND Competition Submission Tutorial](https://competitions.codalab.org/competitions/24122learn_the_details-submission-guidelines).Please change the `MIND_type` parameter to `large` if you want to submit your prediction to [MIND Competition](https://msnews.github.io/competition.html).
###Code
group_impr_indexes, group_labels, group_preds = model.run_fast_eval(valid_news_file, valid_behaviors_file)
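# MIND submission format: each line written below is "<impression id> [r1,r2,...,rn]",
# where r_k is the 1-based rank of the k-th candidate news item when the predicted
# scores are sorted in descending order; the double argsort converts scores to ranks,
# and the impression indexes are shifted to be 1-based.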
with open(os.path.join(data_path, 'prediction.txt'), 'w') as f:
for impr_index, preds in tqdm(zip(group_impr_indexes, group_preds)):
impr_index += 1
pred_rank = (np.argsort(np.argsort(preds)[::-1]) + 1).tolist()
pred_rank = '[' + ','.join([str(i) for i in pred_rank]) + ']'
f.write(' '.join([str(impr_index), pred_rank])+ '\n')
f = zipfile.ZipFile(os.path.join(data_path, 'prediction.zip'), 'w', zipfile.ZIP_DEFLATED)
f.write(os.path.join(data_path, 'prediction.txt'), arcname='prediction.txt')
f.close()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. LSTUR: Neural News Recommendation with Long- and Short-term User RepresentationsLSTUR \[1\] is a news recommendation approach capturing users' both long-term preferences and short-term interests. The core of LSTUR is a news encoder and a user encoder. In the news encoder, we learn representations of news from their titles. In user encoder, we propose to learn long-termuser representations from the embeddings of their IDs. In addition, we propose to learn short-term user representations from their recently browsed news via GRU network. Besides, we propose two methods to combinelong-term and short-term user representations. The first one is using the long-term user representation to initialize the hidden state of the GRU network in short-term user representation. The second one is concatenating bothlong- and short-term user representations as a unified user vector. Properties of LSTUR:- LSTUR captures users' both long-term and short term preference.- It uses embeddings of users' IDs to learn long-term user representations.- It uses users' recently browsed news via GRU network to learn short-term user representations. Data format:For quicker training and evaluaiton, we sample MINDdemo dataset of 5k users from [MIND small dataset](https://msnews.github.io/). The MINDdemo dataset has the same file format as MINDsmall and MINDlarge. If you want to try experiments on MINDsmall and MINDlarge, please change the dowload source. **MINDdemo_train** is used for training, and **MINDdemo_dev** is used for evaluation. Training data and evaluation data are composed of a news file and a behaviors file. You can find more detailed data description in [MIND repo](https://github.com/msnews/msnews.github.io/blob/master/assets/doc/introduction.md) news dataThis file contains news information including newsid, category, subcatgory, news title, news abstarct, news url and entities in news title, entities in news abstarct.One simple example: `N46466 lifestyle lifestyleroyals The Brands Queen Elizabeth, Prince Charles, and Prince Philip Swear By Shop the notebooks, jackets, and more that the royals can't live without. https://www.msn.com/en-us/lifestyle/lifestyleroyals/the-brands-queen-elizabeth,-prince-charles,-and-prince-philip-swear-by/ss-AAGH0ET?ocid=chopendata [{"Label": "Prince Philip, Duke of Edinburgh", "Type": "P", "WikidataId": "Q80976", "Confidence": 1.0, "OccurrenceOffsets": [48], "SurfaceForms": ["Prince Philip"]}, {"Label": "Charles, Prince of Wales", "Type": "P", "WikidataId": "Q43274", "Confidence": 1.0, "OccurrenceOffsets": [28], "SurfaceForms": ["Prince Charles"]}, {"Label": "Elizabeth II", "Type": "P", "WikidataId": "Q9682", "Confidence": 0.97, "OccurrenceOffsets": [11], "SurfaceForms": ["Queen Elizabeth"]}] []`In general, each line in data file represents information of one piece of news: `[News ID] [Category] [Subcategory] [News Title] [News Abstrct] [News Url] [Entities in News Title] [Entities in News Abstract] ...`We generate a word_dict file to tranform words in news title to word indexes, and a embedding matrix is initted from pretrained glove embeddings. 
behaviors dataOne simple example: `1 U82271 11/11/2019 3:28:58 PM N3130 N11621 N12917 N4574 N12140 N9748 N13390-0 N7180-0 N20785-0 N6937-0 N15776-0 N25810-0 N20820-0 N6885-0 N27294-0 N18835-0 N16945-0 N7410-0 N23967-0 N22679-0 N20532-0 N26651-0 N22078-0 N4098-0 N16473-0 N13841-0 N15660-0 N25787-0 N2315-0 N1615-0 N9087-0 N23880-0 N3600-0 N24479-0 N22882-0 N26308-0 N13594-0 N2220-0 N28356-0 N17083-0 N21415-0 N18671-0 N9440-0 N17759-0 N10861-0 N21830-0 N8064-0 N5675-0 N15037-0 N26154-0 N15368-1 N481-0 N3256-0 N20663-0 N23940-0 N7654-0 N10729-0 N7090-0 N23596-0 N15901-0 N16348-0 N13645-0 N8124-0 N20094-0 N27774-0 N23011-0 N14832-0 N15971-0 N27729-0 N2167-0 N11186-0 N18390-0 N21328-0 N10992-0 N20122-0 N1958-0 N2004-0 N26156-0 N17632-0 N26146-0 N17322-0 N18403-0 N17397-0 N18215-0 N14475-0 N9781-0 N17958-0 N3370-0 N1127-0 N15525-0 N12657-0 N10537-0 N18224-0`In general, each line in data file represents one instance of an impression. The format is like: `[Impression ID] [User ID] [Impression Time] [User Click History] [Impression News]`User Click History is the user historical clicked news before Impression Time. Impression News is the displayed news in an impression, which format is:`[News ID 1]-[label1] ... [News ID n]-[labeln]`Label represents whether the news is clicked by the user. All information of news in User Click History and Impression News can be found in news data file. Global settings and imports
###Code
import sys
sys.path.append("../../")
import os
from reco_utils.recommender.deeprec.deeprec_utils import download_deeprec_resources
from reco_utils.recommender.newsrec.newsrec_utils import prepare_hparams
from reco_utils.recommender.newsrec.models.lstur import LSTURModel
from reco_utils.recommender.newsrec.io.mind_iterator import MINDIterator
import papermill as pm
from tempfile import TemporaryDirectory
import tensorflow as tf
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
tmpdir = TemporaryDirectory()
###Output
System version: 3.6.10 |Anaconda, Inc.| (default, May 8 2020, 02:54:21)
[GCC 7.3.0]
Tensorflow version: 1.15.2
###Markdown
Download and load data
###Code
data_path = tmpdir.name
train_news_file = os.path.join(data_path, 'train', r'news.tsv')
train_behaviors_file = os.path.join(data_path, 'train', r'behaviors.tsv')
valid_news_file = os.path.join(data_path, 'valid', r'news.tsv')
valid_behaviors_file = os.path.join(data_path, 'valid', r'behaviors.tsv')
wordEmb_file = os.path.join(data_path, "utils", "embedding.npy")
userDict_file = os.path.join(data_path, "utils", "uid2index.pkl")
wordDict_file = os.path.join(data_path, "utils", "word_dict.pkl")
yaml_file = os.path.join(data_path, "utils", r'lstur.yaml')
if not os.path.exists(train_news_file):
download_deeprec_resources(r'https://recodatasets.blob.core.windows.net/newsrec/', \
os.path.join(data_path, 'train'), 'MINDdemo_train.zip')
if not os.path.exists(valid_news_file):
download_deeprec_resources(r'https://recodatasets.blob.core.windows.net/newsrec/', \
os.path.join(data_path, 'valid'), 'MINDdemo_dev.zip')
if not os.path.exists(yaml_file):
download_deeprec_resources(r'https://recodatasets.blob.core.windows.net/newsrec/', \
os.path.join(data_path, 'utils'), 'MINDdemo_utils.zip')
###Output
100%|██████████| 17.0k/17.0k [00:01<00:00, 11.7kKB/s]
100%|██████████| 9.84k/9.84k [00:00<00:00, 9.91kKB/s]
100%|██████████| 95.0k/95.0k [00:04<00:00, 21.6kKB/s]
###Markdown
Create hyper-parameters
###Code
epochs=5
seed=40
hparams = prepare_hparams(yaml_file, wordEmb_file=wordEmb_file, \
wordDict_file=wordDict_file, userDict_file=userDict_file, epochs=epochs)
print(hparams)
iterator = MINDIterator
###Output
_____no_output_____
###Markdown
Train the LSTUR model
###Code
model = LSTURModel(hparams, iterator, seed=seed)
print(model.run_eval(valid_news_file, valid_behaviors_file))
model.fit(train_news_file, train_behaviors_file, valid_news_file, valid_behaviors_file)
res_syn = model.run_eval(valid_news_file, valid_behaviors_file)
print(res_syn)
pm.record("res_syn", res_syn)
###Output
{'group_auc': 0.6444, 'mean_mrr': 0.2983, 'ndcg@5': 0.3287, 'ndcg@10': 0.3938}
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. LSTUR: Neural News Recommendation with Long- and Short-term User RepresentationsLSTUR \[1\] is a news recommendation approach capturing users' both long-term preferences and short-term interests. The core of LSTUR is a news encoder and a user encoder. In the news encoder, we learn representations of news from their titles. In user encoder, we propose to learn long-termuser representations from the embeddings of their IDs. In addition, we propose to learn short-term user representations from their recently browsed news via GRU network. Besides, we propose two methods to combinelong-term and short-term user representations. The first one is using the long-term user representation to initialize the hidden state of the GRU network in short-term user representation. The second one is concatenating bothlong- and short-term user representations as a unified user vector. Properties of LSTUR:- LSTUR captures users' both long-term and short term preference.- It uses embeddings of users' IDs to learn long-term user representations.- It uses users' recently browsed news via GRU network to learn short-term user representations. Data format:For quicker training and evaluaiton, we sample MINDdemo dataset of 5k users from [MIND small dataset](https://msnews.github.io/). The MINDdemo dataset has the same file format as MINDsmall and MINDlarge. If you want to try experiments on MINDsmall and MINDlarge, please change the dowload source. Select the MIND_type parameter from ['large', 'small', 'demo'] to choose dataset. **MINDdemo_train** is used for training, and **MINDdemo_dev** is used for evaluation. Training data and evaluation data are composed of a news file and a behaviors file. You can find more detailed data description in [MIND repo](https://github.com/msnews/msnews.github.io/blob/master/assets/doc/introduction.md) news dataThis file contains news information including newsid, category, subcatgory, news title, news abstarct, news url and entities in news title, entities in news abstarct.One simple example: `N46466 lifestyle lifestyleroyals The Brands Queen Elizabeth, Prince Charles, and Prince Philip Swear By Shop the notebooks, jackets, and more that the royals can't live without. https://www.msn.com/en-us/lifestyle/lifestyleroyals/the-brands-queen-elizabeth,-prince-charles,-and-prince-philip-swear-by/ss-AAGH0ET?ocid=chopendata [{"Label": "Prince Philip, Duke of Edinburgh", "Type": "P", "WikidataId": "Q80976", "Confidence": 1.0, "OccurrenceOffsets": [48], "SurfaceForms": ["Prince Philip"]}, {"Label": "Charles, Prince of Wales", "Type": "P", "WikidataId": "Q43274", "Confidence": 1.0, "OccurrenceOffsets": [28], "SurfaceForms": ["Prince Charles"]}, {"Label": "Elizabeth II", "Type": "P", "WikidataId": "Q9682", "Confidence": 0.97, "OccurrenceOffsets": [11], "SurfaceForms": ["Queen Elizabeth"]}] []`In general, each line in data file represents information of one piece of news: `[News ID] [Category] [Subcategory] [News Title] [News Abstrct] [News Url] [Entities in News Title] [Entities in News Abstract] ...`We generate a word_dict file to tranform words in news title to word indexes, and a embedding matrix is initted from pretrained glove embeddings. 
behaviors dataOne simple example: `1 U82271 11/11/2019 3:28:58 PM N3130 N11621 N12917 N4574 N12140 N9748 N13390-0 N7180-0 N20785-0 N6937-0 N15776-0 N25810-0 N20820-0 N6885-0 N27294-0 N18835-0 N16945-0 N7410-0 N23967-0 N22679-0 N20532-0 N26651-0 N22078-0 N4098-0 N16473-0 N13841-0 N15660-0 N25787-0 N2315-0 N1615-0 N9087-0 N23880-0 N3600-0 N24479-0 N22882-0 N26308-0 N13594-0 N2220-0 N28356-0 N17083-0 N21415-0 N18671-0 N9440-0 N17759-0 N10861-0 N21830-0 N8064-0 N5675-0 N15037-0 N26154-0 N15368-1 N481-0 N3256-0 N20663-0 N23940-0 N7654-0 N10729-0 N7090-0 N23596-0 N15901-0 N16348-0 N13645-0 N8124-0 N20094-0 N27774-0 N23011-0 N14832-0 N15971-0 N27729-0 N2167-0 N11186-0 N18390-0 N21328-0 N10992-0 N20122-0 N1958-0 N2004-0 N26156-0 N17632-0 N26146-0 N17322-0 N18403-0 N17397-0 N18215-0 N14475-0 N9781-0 N17958-0 N3370-0 N1127-0 N15525-0 N12657-0 N10537-0 N18224-0`In general, each line in data file represents one instance of an impression. The format is like: `[Impression ID] [User ID] [Impression Time] [User Click History] [Impression News]`User Click History is the user historical clicked news before Impression Time. Impression News is the displayed news in an impression, which format is:`[News ID 1]-[label1] ... [News ID n]-[labeln]`Label represents whether the news is clicked by the user. All information of news in User Click History and Impression News can be found in news data file. Global settings and imports
###Code
import sys
sys.path.append("../../")
import os
from reco_utils.recommender.deeprec.deeprec_utils import download_deeprec_resources
from reco_utils.recommender.newsrec.newsrec_utils import prepare_hparams
from reco_utils.recommender.newsrec.models.lstur import LSTURModel
from reco_utils.recommender.newsrec.io.mind_iterator import MINDIterator
from reco_utils.recommender.newsrec.newsrec_utils import get_mind_data_set
import papermill as pm
from tempfile import TemporaryDirectory
import tensorflow as tf
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
tmpdir = TemporaryDirectory()
###Output
System version: 3.6.10 |Anaconda, Inc.| (default, May 8 2020, 02:54:21)
[GCC 7.3.0]
Tensorflow version: 1.15.2
###Markdown
Prepare Parameters
###Code
epochs=5
seed=40
MIND_type = 'demo'
###Output
_____no_output_____
###Markdown
Download and load data
###Code
data_path = tmpdir.name
train_news_file = os.path.join(data_path, 'train', r'news.tsv')
train_behaviors_file = os.path.join(data_path, 'train', r'behaviors.tsv')
valid_news_file = os.path.join(data_path, 'valid', r'news.tsv')
valid_behaviors_file = os.path.join(data_path, 'valid', r'behaviors.tsv')
wordEmb_file = os.path.join(data_path, "utils", "embedding.npy")
userDict_file = os.path.join(data_path, "utils", "uid2index.pkl")
wordDict_file = os.path.join(data_path, "utils", "word_dict.pkl")
yaml_file = os.path.join(data_path, "utils", r'lstur.yaml')
mind_url, mind_train_dataset, mind_dev_dataset, mind_utils = get_mind_data_set(MIND_type)
if not os.path.exists(train_news_file):
download_deeprec_resources(mind_url, os.path.join(data_path, 'train'), mind_train_dataset)
if not os.path.exists(valid_news_file):
download_deeprec_resources(mind_url, \
os.path.join(data_path, 'valid'), mind_dev_dataset)
if not os.path.exists(yaml_file):
download_deeprec_resources(r'https://recodatasets.blob.core.windows.net/newsrec/', \
os.path.join(data_path, 'utils'), mind_utils)
###Output
100%|██████████| 17.0k/17.0k [00:01<00:00, 11.5kKB/s]
100%|██████████| 9.84k/9.84k [00:01<00:00, 8.65kKB/s]
100%|██████████| 95.0k/95.0k [00:06<00:00, 15.5kKB/s]
###Markdown
Create hyper-parameters
###Code
hparams = prepare_hparams(yaml_file, wordEmb_file=wordEmb_file, \
wordDict_file=wordDict_file, userDict_file=userDict_file, epochs=epochs)
print(hparams)
iterator = MINDIterator
###Output
_____no_output_____
###Markdown
Train the LSTUR model
###Code
model = LSTURModel(hparams, iterator, seed=seed)
print(model.run_eval(valid_news_file, valid_behaviors_file))
model.fit(train_news_file, train_behaviors_file, valid_news_file, valid_behaviors_file)
res_syn = model.run_eval(valid_news_file, valid_behaviors_file)
print(res_syn)
pm.record("res_syn", res_syn)
###Output
586it [00:00, 767.47it/s]
236it [00:05, 39.57it/s]
7538it [00:02, 3396.53it/s]
###Markdown
Save the model
###Code
model_path = os.path.join(data_path, "model")
os.makedirs(model_path, exist_ok=True)
model.model.save_weights(os.path.join(model_path, "lstur_ckpt"))
###Output
_____no_output_____
###Markdown
Output Prediction FileThis code segment is used to generate the prediction.zip file, which is in the same format in [MIND Competition Submission Tutorial](https://competitions.codalab.org/competitions/24122learn_the_details-submission-guidelines).Please change the `MIND_type` parameter to `large` if you want to submit your prediction to [MIND Competition](https://msnews.github.io/competition.html).
###Code
group_impr_indexes, group_labels, group_preds = model.run_fast_eval(valid_news_file, valid_behaviors_file)
import numpy as np
from tqdm import tqdm
with open(os.path.join(data_path, 'prediction.txt'), 'w') as f:
for impr_index, preds in tqdm(zip(group_impr_indexes, group_preds)):
impr_index += 1
pred_rank = (np.argsort(np.argsort(preds)[::-1]) + 1).tolist()
pred_rank = '[' + ','.join([str(i) for i in pred_rank]) + ']'
f.write(' '.join([str(impr_index), pred_rank])+ '\n')
import zipfile
f = zipfile.ZipFile(os.path.join(data_path, 'prediction.zip'), 'w', zipfile.ZIP_DEFLATED)
f.write(os.path.join(data_path, 'prediction.txt'), arcname='prediction.txt')
f.close()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. LSTUR: Neural News Recommendation with Long- and Short-term User RepresentationsLSTUR \[1\] is a news recommendation approach capturing users' both long-term preferences and short-term interests. The core of LSTUR is a news encoder and a user encoder. In the news encoder, we learn representations of news from their titles. In user encoder, we propose to learn long-termuser representations from the embeddings of their IDs. In addition, we propose to learn short-term user representations from their recently browsed news via GRU network. Besides, we propose two methods to combinelong-term and short-term user representations. The first one is using the long-term user representation to initialize the hidden state of the GRU network in short-term user representation. The second one is concatenating bothlong- and short-term user representations as a unified user vector. Properties of LSTUR:- LSTUR captures users' both long-term and short term preference.- It uses embeddings of users' IDs to learn long-term user representations.- It uses users' recently browsed news via GRU network to learn short-term user representations. Data format:For quicker training and evaluaiton, we sample MINDdemo dataset of 5k users from [MIND small dataset](https://msnews.github.io/). The MINDdemo dataset has the same file format as MINDsmall and MINDlarge. If you want to try experiments on MINDsmall and MINDlarge, please change the dowload source. Select the MIND_type parameter from ['large', 'small', 'demo'] to choose dataset. **MINDdemo_train** is used for training, and **MINDdemo_dev** is used for evaluation. Training data and evaluation data are composed of a news file and a behaviors file. You can find more detailed data description in [MIND repo](https://github.com/msnews/msnews.github.io/blob/master/assets/doc/introduction.md) news dataThis file contains news information including newsid, category, subcatgory, news title, news abstarct, news url and entities in news title, entities in news abstarct.One simple example: `N46466 lifestyle lifestyleroyals The Brands Queen Elizabeth, Prince Charles, and Prince Philip Swear By Shop the notebooks, jackets, and more that the royals can't live without. https://www.msn.com/en-us/lifestyle/lifestyleroyals/the-brands-queen-elizabeth,-prince-charles,-and-prince-philip-swear-by/ss-AAGH0ET?ocid=chopendata [{"Label": "Prince Philip, Duke of Edinburgh", "Type": "P", "WikidataId": "Q80976", "Confidence": 1.0, "OccurrenceOffsets": [48], "SurfaceForms": ["Prince Philip"]}, {"Label": "Charles, Prince of Wales", "Type": "P", "WikidataId": "Q43274", "Confidence": 1.0, "OccurrenceOffsets": [28], "SurfaceForms": ["Prince Charles"]}, {"Label": "Elizabeth II", "Type": "P", "WikidataId": "Q9682", "Confidence": 0.97, "OccurrenceOffsets": [11], "SurfaceForms": ["Queen Elizabeth"]}] []`In general, each line in data file represents information of one piece of news: `[News ID] [Category] [Subcategory] [News Title] [News Abstrct] [News Url] [Entities in News Title] [Entities in News Abstract] ...`We generate a word_dict file to tranform words in news title to word indexes, and a embedding matrix is initted from pretrained glove embeddings. 
behaviors dataOne simple example: `1 U82271 11/11/2019 3:28:58 PM N3130 N11621 N12917 N4574 N12140 N9748 N13390-0 N7180-0 N20785-0 N6937-0 N15776-0 N25810-0 N20820-0 N6885-0 N27294-0 N18835-0 N16945-0 N7410-0 N23967-0 N22679-0 N20532-0 N26651-0 N22078-0 N4098-0 N16473-0 N13841-0 N15660-0 N25787-0 N2315-0 N1615-0 N9087-0 N23880-0 N3600-0 N24479-0 N22882-0 N26308-0 N13594-0 N2220-0 N28356-0 N17083-0 N21415-0 N18671-0 N9440-0 N17759-0 N10861-0 N21830-0 N8064-0 N5675-0 N15037-0 N26154-0 N15368-1 N481-0 N3256-0 N20663-0 N23940-0 N7654-0 N10729-0 N7090-0 N23596-0 N15901-0 N16348-0 N13645-0 N8124-0 N20094-0 N27774-0 N23011-0 N14832-0 N15971-0 N27729-0 N2167-0 N11186-0 N18390-0 N21328-0 N10992-0 N20122-0 N1958-0 N2004-0 N26156-0 N17632-0 N26146-0 N17322-0 N18403-0 N17397-0 N18215-0 N14475-0 N9781-0 N17958-0 N3370-0 N1127-0 N15525-0 N12657-0 N10537-0 N18224-0`In general, each line in data file represents one instance of an impression. The format is like: `[Impression ID] [User ID] [Impression Time] [User Click History] [Impression News]`User Click History is the user historical clicked news before Impression Time. Impression News is the displayed news in an impression, which format is:`[News ID 1]-[label1] ... [News ID n]-[labeln]`Label represents whether the news is clicked by the user. All information of news in User Click History and Impression News can be found in news data file. Global settings and imports
###Code
import sys
sys.path.append("../../")
import os
from reco_utils.recommender.deeprec.deeprec_utils import download_deeprec_resources
from reco_utils.recommender.newsrec.newsrec_utils import prepare_hparams
from reco_utils.recommender.newsrec.models.lstur import LSTURModel
from reco_utils.recommender.newsrec.io.mind_iterator import MINDIterator
from reco_utils.recommender.newsrec.newsrec_utils import get_mind_data_set
import papermill as pm
from tempfile import TemporaryDirectory
import tensorflow as tf
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
tmpdir = TemporaryDirectory()
###Output
System version: 3.6.11 | packaged by conda-forge | (default, Aug 5 2020, 20:19:23)
[GCC Clang 10.0.1 ]
Tensorflow version: 1.15.2
###Markdown
Prepare Parameters
###Code
epochs=5
seed=40
MIND_type = 'demo'
###Output
_____no_output_____
###Markdown
Download and load data
###Code
data_path = tmpdir.name
print(data_path)
train_news_file = os.path.join(data_path, 'train', r'news.tsv')
train_behaviors_file = os.path.join(data_path, 'train', r'behaviors.tsv')
valid_news_file = os.path.join(data_path, 'valid', r'news.tsv')
valid_behaviors_file = os.path.join(data_path, 'valid', r'behaviors.tsv')
wordEmb_file = os.path.join(data_path, "utils", "embedding.npy")
userDict_file = os.path.join(data_path, "utils", "uid2index.pkl")
wordDict_file = os.path.join(data_path, "utils", "word_dict.pkl")
yaml_file = os.path.join(data_path, "utils", r'lstur.yaml')
print(train_news_file)
mind_url, mind_train_dataset, mind_dev_dataset, mind_utils = get_mind_data_set(MIND_type)
if not os.path.exists(train_news_file):
print("not os.path.exists(train_news_file)")
download_deeprec_resources(mind_url, os.path.join(data_path, 'train'), mind_train_dataset)
if not os.path.exists(valid_news_file):
print("not os.path.exists(valid_news_file)")
download_deeprec_resources(mind_url, \
os.path.join(data_path, 'valid'), mind_dev_dataset)
if not os.path.exists(yaml_file):
print("not os.path.exists(yaml_file)")
download_deeprec_resources(r'https://recodatasets.blob.core.windows.net/newsrec/', \
os.path.join(data_path, 'utils'), mind_utils)
###Output
/var/folders/vj/vz67s0k14zdbrjrw93x3vkkm0000gn/T/tmpyrgwa9f8
/var/folders/vj/vz67s0k14zdbrjrw93x3vkkm0000gn/T/tmpyrgwa9f8/train/news.tsv
not os.path.exists(train_news_file)
not os.path.exists(valid_news_file)
not os.path.exists(yaml_file)
###Markdown
Create hyper-parameters
###Code
hparams = prepare_hparams(yaml_file, wordEmb_file=wordEmb_file, \
wordDict_file=wordDict_file, userDict_file=userDict_file, epochs=epochs)
print(hparams)
iterator = MINDIterator
###Output
_____no_output_____
###Markdown
Train the LSTUR model
###Code
model = LSTURModel(hparams, iterator, seed=seed)
print(model.run_eval(valid_news_file, valid_behaviors_file))
model.fit(train_news_file, train_behaviors_file, valid_news_file, valid_behaviors_file)
res_syn = model.run_eval(valid_news_file, valid_behaviors_file)
print(res_syn)
pm.record("res_syn", res_syn)
###Output
_____no_output_____
###Markdown
Save the model
###Code
model_path = os.path.join(data_path, "model")
os.makedirs(model_path, exist_ok=True)
model.model.save_weights(os.path.join(model_path, "lstur_ckpt"))
###Output
_____no_output_____
###Markdown
Output Prediction FileThis code segment is used to generate the prediction.zip file, which is in the same format in [MIND Competition Submission Tutorial](https://competitions.codalab.org/competitions/24122learn_the_details-submission-guidelines).Please change the `MIND_type` parameter to `large` if you want to submit your prediction to [MIND Competition](https://msnews.github.io/competition.html).
###Code
group_impr_indexes, group_labels, group_preds = model.run_fast_eval(valid_news_file, valid_behaviors_file)
import numpy as np
from tqdm import tqdm
with open(os.path.join(data_path, 'prediction.txt'), 'w') as f:
for impr_index, preds in tqdm(zip(group_impr_indexes, group_preds)):
impr_index += 1
pred_rank = (np.argsort(np.argsort(preds)[::-1]) + 1).tolist()
pred_rank = '[' + ','.join([str(i) for i in pred_rank]) + ']'
f.write(' '.join([str(impr_index), pred_rank])+ '\n')
import zipfile
f = zipfile.ZipFile(os.path.join(data_path, 'prediction.zip'), 'w', zipfile.ZIP_DEFLATED)
f.write(os.path.join(data_path, 'prediction.txt'), arcname='prediction.txt')
f.close()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. LSTUR: Neural News Recommendation with Long- and Short-term User RepresentationsLSTUR \[1\] is a news recommendation approach capturing users' both long-term preferences and short-term interests. The core of LSTUR is a news encoder and a user encoder. In the news encoder, we learn representations of news from their titles. In user encoder, we propose to learn long-termuser representations from the embeddings of their IDs. In addition, we propose to learn short-term user representations from their recently browsed news via GRU network. Besides, we propose two methods to combinelong-term and short-term user representations. The first one is using the long-term user representation to initialize the hidden state of the GRU network in short-term user representation. The second one is concatenating bothlong- and short-term user representations as a unified user vector. Properties of LSTUR:- LSTUR captures users' both long-term and short term preference.- It uses embeddings of users' IDs to learn long-term user representations.- It uses users' recently browsed news via GRU network to learn short-term user representations. Data format:For quicker training and evaluaiton, we sample MINDdemo dataset of 5k users from [MIND small dataset](https://msnews.github.io/). The MINDdemo dataset has the same file format as MINDsmall and MINDlarge. If you want to try experiments on MINDsmall and MINDlarge, please change the dowload source. Select the MIND_type parameter from ['large', 'small', 'demo'] to choose dataset. **MINDdemo_train** is used for training, and **MINDdemo_dev** is used for evaluation. Training data and evaluation data are composed of a news file and a behaviors file. You can find more detailed data description in [MIND repo](https://github.com/msnews/msnews.github.io/blob/master/assets/doc/introduction.md) news dataThis file contains news information including newsid, category, subcatgory, news title, news abstarct, news url and entities in news title, entities in news abstarct.One simple example: `N46466 lifestyle lifestyleroyals The Brands Queen Elizabeth, Prince Charles, and Prince Philip Swear By Shop the notebooks, jackets, and more that the royals can't live without. https://www.msn.com/en-us/lifestyle/lifestyleroyals/the-brands-queen-elizabeth,-prince-charles,-and-prince-philip-swear-by/ss-AAGH0ET?ocid=chopendata [{"Label": "Prince Philip, Duke of Edinburgh", "Type": "P", "WikidataId": "Q80976", "Confidence": 1.0, "OccurrenceOffsets": [48], "SurfaceForms": ["Prince Philip"]}, {"Label": "Charles, Prince of Wales", "Type": "P", "WikidataId": "Q43274", "Confidence": 1.0, "OccurrenceOffsets": [28], "SurfaceForms": ["Prince Charles"]}, {"Label": "Elizabeth II", "Type": "P", "WikidataId": "Q9682", "Confidence": 0.97, "OccurrenceOffsets": [11], "SurfaceForms": ["Queen Elizabeth"]}] []`In general, each line in data file represents information of one piece of news: `[News ID] [Category] [Subcategory] [News Title] [News Abstrct] [News Url] [Entities in News Title] [Entities in News Abstract] ...`We generate a word_dict file to tranform words in news title to word indexes, and a embedding matrix is initted from pretrained glove embeddings. 
behaviors dataOne simple example: `1 U82271 11/11/2019 3:28:58 PM N3130 N11621 N12917 N4574 N12140 N9748 N13390-0 N7180-0 N20785-0 N6937-0 N15776-0 N25810-0 N20820-0 N6885-0 N27294-0 N18835-0 N16945-0 N7410-0 N23967-0 N22679-0 N20532-0 N26651-0 N22078-0 N4098-0 N16473-0 N13841-0 N15660-0 N25787-0 N2315-0 N1615-0 N9087-0 N23880-0 N3600-0 N24479-0 N22882-0 N26308-0 N13594-0 N2220-0 N28356-0 N17083-0 N21415-0 N18671-0 N9440-0 N17759-0 N10861-0 N21830-0 N8064-0 N5675-0 N15037-0 N26154-0 N15368-1 N481-0 N3256-0 N20663-0 N23940-0 N7654-0 N10729-0 N7090-0 N23596-0 N15901-0 N16348-0 N13645-0 N8124-0 N20094-0 N27774-0 N23011-0 N14832-0 N15971-0 N27729-0 N2167-0 N11186-0 N18390-0 N21328-0 N10992-0 N20122-0 N1958-0 N2004-0 N26156-0 N17632-0 N26146-0 N17322-0 N18403-0 N17397-0 N18215-0 N14475-0 N9781-0 N17958-0 N3370-0 N1127-0 N15525-0 N12657-0 N10537-0 N18224-0`In general, each line in data file represents one instance of an impression. The format is like: `[Impression ID] [User ID] [Impression Time] [User Click History] [Impression News]`User Click History is the user historical clicked news before Impression Time. Impression News is the displayed news in an impression, which format is:`[News ID 1]-[label1] ... [News ID n]-[labeln]`Label represents whether the news is clicked by the user. All information of news in User Click History and Impression News can be found in news data file. Global settings and imports
###Code
import sys
import os
import numpy as np
import zipfile
from tqdm import tqdm
import scrapbook as sb
from tempfile import TemporaryDirectory
import tensorflow as tf
tf.get_logger().setLevel('ERROR') # only show error messages
from recommenders.models.deeprec.deeprec_utils import download_deeprec_resources
from recommenders.models.newsrec.newsrec_utils import prepare_hparams
from recommenders.models.newsrec.models.lstur import LSTURModel
from recommenders.models.newsrec.io.mind_iterator import MINDIterator
from recommenders.models.newsrec.newsrec_utils import get_mind_data_set
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
###Output
System version: 3.6.11 | packaged by conda-forge | (default, Aug 5 2020, 20:09:42)
[GCC 7.5.0]
Tensorflow version: 1.15.2
###Markdown
Prepare Parameters
###Code
epochs = 5
seed = 40
batch_size = 32
# Options: demo, small, large
MIND_type = 'demo'
###Output
_____no_output_____
###Markdown
Download and load data
###Code
tmpdir = TemporaryDirectory()
data_path = tmpdir.name
train_news_file = os.path.join(data_path, 'train', r'news.tsv')
train_behaviors_file = os.path.join(data_path, 'train', r'behaviors.tsv')
valid_news_file = os.path.join(data_path, 'valid', r'news.tsv')
valid_behaviors_file = os.path.join(data_path, 'valid', r'behaviors.tsv')
wordEmb_file = os.path.join(data_path, "utils", "embedding.npy")
userDict_file = os.path.join(data_path, "utils", "uid2index.pkl")
wordDict_file = os.path.join(data_path, "utils", "word_dict.pkl")
yaml_file = os.path.join(data_path, "utils", r'lstur.yaml')
mind_url, mind_train_dataset, mind_dev_dataset, mind_utils = get_mind_data_set(MIND_type)
if not os.path.exists(train_news_file):
download_deeprec_resources(mind_url, os.path.join(data_path, 'train'), mind_train_dataset)
if not os.path.exists(valid_news_file):
download_deeprec_resources(mind_url, \
os.path.join(data_path, 'valid'), mind_dev_dataset)
if not os.path.exists(yaml_file):
download_deeprec_resources(r'https://recodatasets.z20.web.core.windows.net/newsrec/', \
os.path.join(data_path, 'utils'), mind_utils)
###Output
100%|██████████| 17.0k/17.0k [00:01<00:00, 9.67kKB/s]
100%|██████████| 9.84k/9.84k [00:01<00:00, 8.34kKB/s]
100%|██████████| 95.0k/95.0k [00:08<00:00, 11.4kKB/s]
###Markdown
Create hyper-parameters
###Code
hparams = prepare_hparams(yaml_file,
wordEmb_file=wordEmb_file,
wordDict_file=wordDict_file,
userDict_file=userDict_file,
batch_size=batch_size,
epochs=epochs)
print(hparams)
iterator = MINDIterator
###Output
_____no_output_____
###Markdown
Train the LSTUR model
###Code
model = LSTURModel(hparams, iterator, seed=seed)
print(model.run_eval(valid_news_file, valid_behaviors_file))
%%time
model.fit(train_news_file, train_behaviors_file, valid_news_file, valid_behaviors_file)
%%time
res_syn = model.run_eval(valid_news_file, valid_behaviors_file)
print(res_syn)
sb.glue("res_syn", res_syn)
###Output
_____no_output_____
###Markdown
Save the model
###Code
model_path = os.path.join(data_path, "model")
os.makedirs(model_path, exist_ok=True)
model.model.save_weights(os.path.join(model_path, "lstur_ckpt"))
###Output
_____no_output_____
###Markdown
Output Prediction FileThis code segment is used to generate the prediction.zip file, which is in the same format in [MIND Competition Submission Tutorial](https://competitions.codalab.org/competitions/24122learn_the_details-submission-guidelines).Please change the `MIND_type` parameter to `large` if you want to submit your prediction to [MIND Competition](https://msnews.github.io/competition.html).
###Code
group_impr_indexes, group_labels, group_preds = model.run_fast_eval(valid_news_file, valid_behaviors_file)
with open(os.path.join(data_path, 'prediction.txt'), 'w') as f:
for impr_index, preds in tqdm(zip(group_impr_indexes, group_preds)):
impr_index += 1
pred_rank = (np.argsort(np.argsort(preds)[::-1]) + 1).tolist()
pred_rank = '[' + ','.join([str(i) for i in pred_rank]) + ']'
f.write(' '.join([str(impr_index), pred_rank])+ '\n')
f = zipfile.ZipFile(os.path.join(data_path, 'prediction.zip'), 'w', zipfile.ZIP_DEFLATED)
f.write(os.path.join(data_path, 'prediction.txt'), arcname='prediction.txt')
f.close()
###Output
_____no_output_____
05_debugging/debugging.ipynb | ###Markdown
DebuggingThis portion of the short course looks at common ways that ATS fails, and starts to hint at how to understand what to do when things go wrong.Note that there is a wide range of ways things can go wrong, and we won't get to all of them:* incorrect input spec (should error)* bad input data (could run but give the "wrong" answer)* bad parameters (could run but very slowly)* bad physics (could do anything)* incorrect code (could do anything)
###Code
import sys,os
from matplotlib import pyplot as plt
# in ATS_SRC_DIR/tools/utils
sys.path.append(os.path.join(os.environ['ATS_SRC_DIR'], 'tools', 'utils'))
import plot_timestep_history
import plot_wrm
# local scripts, in ats-short-course/
import plot_mass_balance
###Output
_____no_output_____
###Markdown
Getting Help* See the Frequently Asked Questions: https://github.com/amanzi/ats/wiki/FAQs* Ask the user's mailing list: [email protected]* Submit a GitHub Issue: https://github.com/amanzi/ats/issues Incorrect input specOur goal in ATS is that any "invalid" input spec should result in an error with a descriptive error message that tells you what is wrong and where to start to look to fix it.We have not completely met this goal yet -- please submit tickets for errors that result in no error message. If you don't understand an error message, see Getting Help above. Run Example 0: cd ats-short-course/05_debugging/run-0 ats ../priestley_taylor-0.xml Bad ParametersAbout this problem:- Integrated hydrology: Richards' (subsurface flow) + Diffusion wave (overland flow) equations- Snow balance: simple bucket model for precip - snowmelt- Priestley-Taylor equation for transpiration provides a sink of water from the rooting zone Run Example 1:Note that, in this run and throughout, we write to a "new" logfile and plot the old -- this saves us from having to wait for the run to finish. cd ats-short-course/05_debugging/run-1 ats ../priestley_taylor-1.xml &> out-new.log Plot the timestep history using this notebook or through the command line interface for the script: python ${ATS_SRC_DIR}/tools/utils/plot_timestep_history.py -o out.log Note the very slow timesteps starting around day 475.
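For reference, the Priestley-Taylor potential evapotranspiration that drives this transpiration sink is commonly written as $PET = \alpha \, \frac{\Delta}{\Delta + \gamma} \, \frac{R_n - G}{\lambda}$ with $\alpha \approx 1.26$, $\Delta$ the slope of the saturation vapor pressure curve, $\gamma$ the psychrometric constant, $R_n$ net radiation, $G$ ground heat flux, and $\lambda$ the latent heat of vaporization -- this is the textbook form; the exact implementation in ATS may differ in detail.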
###Code
def plot_ts_history(dirname):
fig, axs = plot_timestep_history.get_axs()
with open(os.path.join(dirname, 'out.log'), 'r') as fid:
data = plot_timestep_history.parse_logfile(fid)
plot_timestep_history.plot(data,axs,'r','run-1')
plot_timestep_history.decorate_axs(axs)
return axs
plot_ts_history('run-1')
plt.show()
###Output
_____no_output_____
###Markdown
Run example 1bRestart from a checkpoint file by adding the line "restart from checkpoint file" to the "cycle driver" list, pointing to a checkpoint file that is closest to the problematic time (Cycle ~300). This makes it faster to debug, because we don't have to start each run from the beginning. cd ../run-1b ats ../priestley_taylor-1b.xml &> out-new.log Again, plot the timestep history, and zoom in to find a "bad" cycle that fails. python ${ATS_SRC_DIR}/tools/utils/plot_timestep_history.py -o out.logZoom in until we can find a specific timestep that is failing -- Cycle 307 looks promising.
###Code
# first plot, just the basic timestep history plot. Note it starts at the restarted time, ~day 460
axs = plot_ts_history('run-1b')
plt.show()
# second plot -- zoom in around Cycle 300 when it looks like the timestep is failing
axs = plot_ts_history('run-1b')
axs[1].set_xlim([295,310])
plt.show()
###Output
_____no_output_____
###Markdown
Inspect the error on one of the failing tries at Cycle 307. What cell is struggling? (See powerpoint slide.) less run-1b/out.log Search for "Cycle = 307" Inspection suggests that cells 0 and 1314 might be part of the problem. Run example 1cAdd one or more of these cells as debug cells by adding the following entry to the Richards PK list: Rerun through Cycle 307 with "high" verbosity to see the debugging information about cells 0 and 1314. cd ../run-1c ats --verbosity=high ../priestley_taylor-1c.xml &> out-new.log Inspect the output at Cycle 307 -- what is happening? Why is this bad? Look at the saturation (noting the residual saturation of the water retention curve) and the source (which is due to transpiration). (See powerpoint slide.) Plot the water retention curve and transpiration limiterThe fact that the cell contains only residual water, but the transpiration is still taking water, suggests that the transpiration downregulation is not working correctly. Plot the water retention curve and the transpiration downregulation (the wilting point model): python $ATS_SRC_DIR/tools/utils/plot_wrm.py --wrm="1.e-4 3 0.1 wrm" --wp="350000 2550000 mywp" Why is this bad?
###Code
# plot of the Water Retention model and Wilting Point model.
wrm = plot_wrm.VanGenuchten(alpha=1.e-4, n=3, sr=0.1)
wp = plot_wrm.WiltingPointLimiter(350000,2550000)
def plot(wrm, wp):
fig = plt.figure()
ax = fig.add_subplot(111)
plot_wrm.plot(wrm, ax, 'b', label='WRM')
plot_wrm.plot(wp, ax, 'r', label='WP')
ax.legend()
plt.show()
plot(wrm, wp)
###Output
_____no_output_____
###Markdown
Note that the wilting point model, which controls how transpiration downregulates, turns off at much higher (more negative) capillary pressure. Between 10^5 and 10^6 Pa, there is no water to give but the transpiration is still "on."By changing the wilting point model parameters, we can "turn off" transpiration when there is no water. python $ATS_SRC_DIR/tools/utils/plot_wrm.py --wrm="1.e-4 3 0.1 wrm" --wp="3500 25500 mywp"
###Code
wp2 = plot_wrm.WiltingPointLimiter(3500,25500)
plot(wrm, wp2)
###Output
_____no_output_____
###Markdown
Fix the runUpdate the wilting point parameters by changing values: Then rerun the code: cd ../run-1d ats ../priestley_taylor-1d.xml &> out-new.log Bad PhysicsMistakes in the model are possible in ATS in ways that are not possible in many codes, due to the flexibility of the component-based design of ATS. Run Example 2 cd ../run-2 ats ../priestley_taylor-2.xml &> out.log This run successfully completes, but is it "correct?"* Yes, it did what you told it to do.* No, it didn't do what you wanted it to do. Inspect the Mass BalanceAlmost always a good idea to plot and understand a global mass conservation calculation: dTheta = dt * (sources - sinks) Run the script, plot_mass_balance.py: python plot_mass_balance.py run-2 In this case, there are two independent water balances -- conservation of water in the surface and subsurface, and conservation of water in the snowpack. These could be further split up by separating surface vs subsurface, but we often lump these two together. dQ_surf_subsurf = dt * (Prain + SM - Q - ET) dQ_snow = dt * (Psnow - SM) Prain = precipitation of rain Psnow = precipitation of snow SM = snow-melt Q = runoff / discharge ET = evapotranspiration Observations for each of these are set up in the input file, integrated over the entire domain, along with observations for water content. The included script, `plot_mass_balance.py` shows units conversions, integration in time, etc, as needed to show a mass balance calculation following these equations.
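As a rough illustration of the balance bookkeeping above, here is a minimal sketch (synthetic numbers only; the real unit conversions and observation parsing live in `plot_mass_balance.py`) of checking the cumulative water-balance error from flux and storage time series:
###Code
import numpy as np
def balance_error(dt, prain, sm, q, et, wc, wc0):
    """Error of dTheta = dt * (Prain + SM - Q - ET); all arrays in one consistent unit system."""
    rhs = np.cumsum(dt * (prain + sm - q - et))   # time-integrated net source of water
    lhs = wc - wc0                                # change in stored water since the start
    return lhs - rhs                              # ~0 everywhere if water is conserved
# Synthetic check with constant, mutually consistent fluxes (illustrative numbers only)
dt = np.full(10, 86400.0)                         # one-day steps [s]
prain, sm = np.full(10, 2.0), np.zeros(10)        # sources
q, et = np.full(10, 0.5), np.full(10, 1.5)        # sinks
wc0 = 100.0
wc = wc0 + np.cumsum(dt * (prain + sm - q - et))  # storage consistent with those fluxes
print(balance_error(dt, prain, sm, q, et, wc, wc0))  # -> zeros up to round-off
###Output
_____no_output_____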
###Code
def plot_water(dirname):
fig = plt.figure(figsize=(14,8))
axs = fig.subplots(3,1)
data = plot_mass_balance.load(dirname)
plot_mass_balance.plot(data, dirname, '-', fig, axs)
plt.tight_layout()
plt.show()
plot_water('run-2')
###Output
_____no_output_____
###Markdown
Note the clear loss of mass in the snow balance -- large negative error.Note also the large snowmelt -- bigger than the snowfall! Fix the problemInspection of the input file shows the error in the snow source term. From `priestley_taylor-2.xml`: The snow source should be `Psnow - SM`, not `Psnow + SM` as is specified here. Correct the sign error: Rerun the problem with the changes in the input file (this is the same as `run-1d` above) and plot the correct balance.
###Code
plot_water('run-1d')
###Output
_____no_output_____
###Markdown
DebuggingThis portion of the short course looks at common ways that ATS fails, and starts to hint at how to understand what to do when things go wrong.Note that there are a wide range of ways things can go wrong, and we won't get to all of them:* incorrect input spec (should error)* bad input data (could run bug give the "wrong" answer)* bad parameters (could run but very slowly)* bad physics (could do anything)* incorrect code (could do anything)
###Code
import sys,os
from matplotlib import pyplot as plt
# in ATS_SRC_DIR/tools/utils
sys.path.append(os.path.join(os.environ['ATS_SRC_DIR'], 'tools', 'utils'))
import plot_timestep_history
import plot_wrm
# local scripts, in ats-short-course/
import plot_mass_balance
###Output
_____no_output_____
###Markdown
Getting Help* See the Frequently Asked Questions: https://github.com/amanzi/ats/wiki/FAQs* Ask the user's mailing list: [email protected]* Submit a GitHub Issue: https://github.com/amanzi/ats/issues Incorrect input specOur goal in ATS is that any "invalid" input spec should result in an error with a descriptive error message that tells you what is wrong and where to start to look to fix it.We have not completely met this goal yet -- please submit tickets for errors that result in no error message. If you don't understand an error message, see Getting Help above. Run Example 0: cd ats-short-course/05_debugging/run-0 ats ../priestley_taylor-0.xml Bad Parameters Run Example 1: cd ats-short-course/05_debugging/run-1 ats ../priestley_taylor-1.xml &> out-new.log Plot the timestep history: python ${ATS_SRC_DIR}/tools/utils/plot_timestep_history.py -o out.log Note the very slow timesteps starting around day 475.
###Code
def plot_ts_history(dirname):
fig, axs = plot_timestep_history.get_axs()
with open(os.path.join(dirname, 'out.log'), 'r') as fid:
data = plot_timestep_history.parse_logfile(fid)
plot_timestep_history.plot(data,axs,'r','run-1')
plot_timestep_history.decorate_axs(axs)
return axs
plot_ts_history('run-1')
plt.show()
###Output
_____no_output_____
###Markdown
Run example 1bRestart from a checkpoint file by adding the line "restart from checkpoint file" to the "cycle driver" list, pointing to a checkpoint file that is closest to the problematic time (Cycle ~300). This makes it faster to debug, because we don't have to start each run from the beginning. cd ../run-1b ats ../priestley_taylor-1b.xml &> out-new.log Again, plot the timestep history, and zoom in to find a "bad" cycle that fails. python ${ATS_SRC_DIR}/tools/utils/plot_timestep_history.py -o out.logZoom in until we can find a specific timestep that is failing -- Cycle 307 looks promising.
###Code
# first plot, just the basic timestep history plot. Note it starts at the restarted time, ~day 460
axs = plot_ts_history('run-1b')
plt.show()
# second plot -- zoom in around Cycle 300 when it looks like the timestep is failing
axs = plot_ts_history('run-1b')
axs[1].set_xlim([295,310])
plt.show()
###Output
_____no_output_____
###Markdown
Inspect the error on one of the failing tries at Cycle 307. What cell is struggling? (See powerpoint slide.) less run-1b/out.log Search for "Cycle = 307" Inspection suggests that cells 0 and 1314 might be part of the problem. Run example 1cAdd one or more of these cells as debug cells by adding the following entry to the Richards PK list: Rerun through Cycle 307 with "high" verbosity to see the debugging information about cells 0 and 1314. cd ../run-1c ats --verbosity=high ../priestley_taylor-1c.xml &> out-new.log Inspect the output at Cycle 307 -- what is happening? Why is this bad? Look at the saturation (noting the residual saturation of the water retention curve) and the source (which is due to transpiration). (See powerpoint slide.) Plot the water retention curve and transpiration limiterThe fact that the cell contains only residual water, but the transpiration is still taking water, suggests that the transpiration downregulation is not working correctly. Plot the water retention curve and the transpiration downregulation (the wilting point model): python $ATS_SRC_DIR/tools/utils/plot_wrm.py --wrm="1.e-4 3 0.1 wrm" --wp="350000 2550000 mywp" Why is this bad?
###Code
# plot of the Water Retention model and Wilting Point model.
wrm = plot_wrm.VanGenuchten(alpha=1.e-4, n=3, sr=0.1)
wp = plot_wrm.WiltingPointLimiter(350000,2550000)
def plot(wrm, wp):
fig = plt.figure()
ax = fig.add_subplot(111)
plot_wrm.plot(wrm, ax, 'b', label='WRM')
plot_wrm.plot(wp, ax, 'r', label='WP')
ax.legend()
plt.show()
plot(wrm, wp)
###Output
_____no_output_____
###Markdown
Note the wilting point model, which controls how transpiration downregulates, turns off at much higher (more negative) capillary pressure. Between 10^5 and 10^6 Pa, there is no water to give but the transpiration is still "on."By changing the wilting point model parmeters, we can "turn off" transpiration when there is no water. python $ATS_SRC_DIR/tools/utils/plot_wrm.py --wrm="1.e-4 3 0.1 wrm" --wp="350000 2550000 mywp"
###Code
wp2 = plot_wrm.WiltingPointLimiter(3500,25500)
plot(wrm, wp2)
###Output
_____no_output_____
###Markdown
Fix the runUpdate the wilting point parameters by changing values: Then rerun the code: cd ../run-1d ats ../priestley_taylor-1d.xml &> out-new.log Bad PhysicsMistakes in the model are possible in ATS in ways that are not possible in many codes, due to the flexibility of the component-based design of ATS. Run Example 2 cd ../run-2 ats ../priestley_taylor-2.xml &> out.log This run successfully completes, but is it "correct?"* Yes, it did what you told it to do.* No, it didn't do what you wanted it to do. Inspect the Mass BalanceAlmost always a good idea to plot and understand a global mass conservation calculation: dTheta = dt * (sources - sinks) Run the script, plot_mass_balance.py: python plot_mass_balance.py run-2 In this case, there are two independent water balances -- conservation of water in the surface and subsurface, and conservation of water in the snowpack. These could be further split up by separating surface vs subsurface, but we often lump these two together. dQ_surf_subsurf = dt * (Prain + SM - Q - ET) dQ_snow = dt * (Psnow - SM) Prain = precipitation of rain Psnow = precipitation of snow SM = snow-melt Q = runoff / discharge ET = evapotranspiration Observations for each of these are set up in the input file, integrated over the entire domain, along with observations for water content. The included script, `plot_mass_balance.py` shows units conversions, integration in time, etc, as needed to show a mass balance calculation following these equations.
###Code
def plot_water(dirname):
fig = plt.figure(figsize=(14,8))
axs = fig.subplots(3,1)
data = plot_mass_balance.load(dirname)
plot_mass_balance.plot(data, dirname, '-', fig, axs)
plt.tight_layout()
plt.show()
plot_water('run-2')
###Output
_____no_output_____
###Markdown
Note the clear loss of mass in the snow balance -- large negative error.Note also the large snowmelt -- bigger than the snowfall! Fix the problemInspection of the input file shows the error in the snow source term. From `priestley_taylor-2.xml`: The snow source should be `Psnow - SM`, not `Psnow + SM` as is specified here. Correct the sign error: Rerun the problem with the changes in the input file (this is the same as `run-1d` above) and plot the correct balance.
###Code
plot_water('run-1d')
###Output
_____no_output_____
features/Evaluation.ipynb | ###Markdown
Log- Tried feature extraction + modelling with the full images (not lowpass filtered). This substantially lowered accuracy: AUCs were 0.55 without image transforms and 0.71 with image transforms.- Calculate the confusion matrix; how much do we care about false negatives?
###Code
import os
import numpy as np
import cv2
from matplotlib import pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix
# Project-local helpers used below (assumed importable in this repo): get_files_of_type_from_path,
# get_all_imgs_from_paths, Featurizer, GBDTWrapper, FeatureEnsembler, ImageTransforms
# Data path
data_path = '../data/raw/'
# Paths of good/bad .mrcs files
good_path_in = os.path.join(data_path, 'job028/micrographs')
bad_path_in = os.path.join(data_path, 'job031/micrographs')
data_file_ext = '.mrcs'
fileFilter = lambda x: 'lowpass' in x
data_paths = {'good': get_files_of_type_from_path(good_path_in, data_file_ext, fileFilter),
'bad': get_files_of_type_from_path(bad_path_in, data_file_ext, fileFilter)}
data_raw = {'good': get_all_imgs_from_paths(data_paths['good']),
'bad': get_all_imgs_from_paths(data_paths['bad'])}
data = data_raw['good']+data_raw['bad']
labels = np.append(np.ones(len(data_raw['good'])), np.zeros(len(data_raw['bad'])))
def preprocessData(raw):
# Stack into contiguous array
data = np.stack(raw, axis=0)
# Flatten
raw_data_shape = data.shape
data = data.reshape(raw_data_shape[0], -1)
# Zero-mean and unit-variance rescaling
scaler = StandardScaler()
scaler.fit(data)
data = scaler.transform(data)
data = data.reshape(raw_data_shape)
return data
def stack_imgrid(data, nrows, ncols, start = 0):
total, height, width = data.shape
return data[start:start+nrows*ncols]\
.reshape(nrows,ncols, height, width)\
.swapaxes(1,2)\
.reshape(height*nrows, width*ncols)
def plot_grid(data, nrows, ncols, start = 0):
total, height, width = data.shape
plt.imshow(
data[start:start+nrows*ncols]
.reshape(nrows,ncols, height, width)
.swapaxes(1,2)
.reshape(height*nrows, width*ncols)
, cmap='gray'
)
norm_data = preprocessData(data)
bf_data = np.array([cv2.bilateralFilter(x.astype(np.float32), 6, 75, 75) for x in norm_data])  # bilateralFilter expects uint8/float32, not the float64 returned by StandardScaler
plt.figure()
plot_grid(norm_data, 5, 5, start = 0)
imcmp = np.append(
stack_imgrid(norm_data, 3, 3, 0),
stack_imgrid(bf_data, 3, 3, 0),
axis = 1
)
plt.figure()
plt.imshow(imcmp, cmap='gray')
import pywt
titles = ['Approximation', ' Horizontal detail',
'Vertical detail', 'Diagonal detail']
coeffs2 = pywt.dwt2(norm_data[5], 'sym2')
LL, (LH, HL, HH) = coeffs2
fig = plt.figure(figsize=(12, 3))
for i, a in enumerate([LL, LH, HL, HH]):
ax = fig.add_subplot(1, 4, i + 1)
ax.imshow(a, interpolation="nearest", cmap=plt.cm.gray)
ax.set_title(titles[i], fontsize=10)
ax.set_xticks([])
ax.set_yticks([])
fig.tight_layout()
plt.show()
titles = ['Approximation', ' Horizontal detail',
'Vertical detail', 'Diagonal detail']
coeffs2 = pywt.dwt2(norm_data[1500], 'sym2')
LL, (LH, HL, HH) = coeffs2
fig = plt.figure(figsize=(12, 3))
for i, a in enumerate([LL, LH, HL, HH]):
ax = fig.add_subplot(1, 4, i + 1)
ax.imshow(a, interpolation="nearest", cmap=plt.cm.gray)
ax.set_title(titles[i], fontsize=10)
ax.set_xticks([])
ax.set_yticks([])
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Modelling
###Code
# Get features
F = Featurizer(data, n_components=6)
F.fit()
# Build model
LGB_params = {
"objective" : "binary",
"metric" : "auc",
"boosting": 'gbdt',
"max_depth" : -1,
"learning_rate" : 0.01,
"verbosity" : 1,
"seed": 0
}
G = GBDTWrapper(F.feature_coeffs, labels, params=LGB_params)
G.train()
G.plotROC()
lr_probs = G.model.predict(G.X_test)        # probabilities under the binary objective
lr_preds = (lr_probs > 0.5).astype(int)     # threshold to hard labels before building the confusion matrix
cm = confusion_matrix(G.y_test, lr_preds)
# List of transforms to apply, we include the identity transform
transforms = [
ImageTransforms.IdentityTransform(),
ImageTransforms.RobertsTransform(),
ImageTransforms.FFT2Transform(),
]
ensembler = FeatureEnsembler(data, transforms, n_components=21)
ensembler.fit()
# Build model
G = GBDTWrapper(ensembler.feature_coeffs, labels)
G.train()
G.plotROC()
###Output
Applying transform: Identity
Preprocessing data . . .
Fitting estimators . . .
Calculating 21 features using PCA...
Time taken = 29.768s
Calculating 21 features using FastICA...
Time taken = 10.966s
Calculating 21 features using FactorAnalysis...
Time taken = 240.101s
Calculating features . . .
Done!
Applying transform: Roberts
Preprocessing data . . .
Fitting estimators . . .
Calculating 21 features using PCA...
Time taken = 0.022s
Calculating 21 features using FastICA...
Time taken = 0.238s
Calculating 21 features using FactorAnalysis...
Time taken = 0.068s
Calculating features . . .
Done!
Applying transform: FFT2
Preprocessing data . . .
Fitting estimators . . .
Calculating 21 features using PCA...
Time taken = 0.086s
Calculating 21 features using FastICA...
Time taken = 0.832s
Calculating 21 features using FactorAnalysis...
Time taken = 0.178s
Calculating features . . .
Done!
Training GBDT . . .
Done!
|
experiments/interaction/experiment_171208_2152_batch_augmented_1k-best.ipynb | ###Markdown
Predictions
###Code
pred_df.describe()
_ = plt.hist2d(pred_df['ant1_x'], pred_df['ant1_y'], bins=40, range=((0, 199), (0, 199)))
_ = plt.hist2d(pred_df['ant2_x'], pred_df['ant2_y'], bins=40, range=((0, 199), (0, 199)))
pred_df['ant1_angle_deg'].hist()
(pred_df['ant1_angle_deg'] % 180).hist()
pred_df['ant2_angle_deg'].hist()
(pred_df['ant2_angle_deg'] % 180).hist()
###Output
_____no_output_____
###Markdown
Prediction Errors
###Code
xy, angle, indices = train_interactions.match_pred_to_gt(pred, y_test[train_interactions.NAMES].values, np)
xy_errors = (xy[indices[:, 0], indices[:, 1]])
angle_errors = (angle[indices[:, 0], indices[:, 1]])
# swap = indices[:, 0] == 1
# pred_swapped = pred.copy()
# pred_swapped[swap, :5], pred_swapped[swap, 5:] = pred_swapped[swap, 5:], pred_swapped[swap, :5]
df = pd.DataFrame.from_items([('xy (px)', [xy_errors.mean()]),
('angle (deg)', angle_errors.mean()),])
df.style.set_caption('MAE')
df
_ = plt.hist(xy_errors)
_ = plt.hist(angle_errors)
###Output
_____no_output_____
###Markdown
Model
###Code
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
model = train_interactions.model()
SVG(model_to_dot(model, show_shapes=True).create(prog='dot', format='svg'))
# SVG(model_to_dot(model.get_layer('model_1'), show_shapes=True).create(prog='dot', format='svg'))
###Output
_____no_output_____ |
_notebooks/2021-02-18-pytorch-widedeep_iii.ipynb | ###Markdown
"pytorch-widedeep, deep learning for tabular data III: the deeptabular component"> a flexible package to combine tabular data with text and images using wide and deep models.- author: Javier Rodriguez- toc: true - badges: true- comments: true This is the third of a [series](https://jrzaurin.github.io/infinitoml/) of posts introducing [pytorch-widedeep](https://github.com/jrzaurin/pytorch-widedeep), a flexible package to combine tabular data with text and images (that could also be used for "standard" tabular data alone). While writing this post I will assume that the reader is not familiar with the previous two [posts](https://jrzaurin.github.io/infinitoml/). Of course, reading them would help, but in order to understand the content of this post and then being able to use `pytorch-widedeep` on tabular data, is not a requirement. To start with, as always, just install the package:```pythonpip install pytorch-widedeep```This will install `v0.4.8`, hopefully the last beta version*. Code-wise I think this could be already `v1`, but before that I want to try it in a few more datasets and select good default values. In addition, I also intend to implement other algorithms, in particular [TabNet](https://arxiv.org/abs/1908.07442) [1], for which a very nice [implementation](https://github.com/dreamquark-ai/tabnet) already exists. Moving on, and as I mentioned earlier, `pytorch-widedeep`'s main goal is to facilitate the combination of images and text with tabular data via wide and deep models. To that aim, [wide and deep models](https://pytorch-widedeep.readthedocs.io/en/latest/model_components.html) can be built with up to four model components: `wide`, `deeptabular`, `deeptext` and `deepimage`, that will take care of the different types of input datasets ("standard" tabular, i.e. numerical and categorical features, text and images). This post focuses only on the so-called `deeptabular` component, and the 3 different models available in this library that can be used to build that component. Nonetheless, and for completion, I will briefly describe the remaining components first. The `wide` component of a wide and deep model is simply a liner model, and in `pytorch-widedeep` such model can be created via the [`Wide`](https://pytorch-widedeep.readthedocs.io/en/latest/model_components.htmlpytorch_widedeep.models.wide.Wide) class. In the case of the `deeptext` component, `pytorch-widedeep` offers one model, available via the [`DeepText`](https://pytorch-widedeep.readthedocs.io/en/latest/model_components.htmlpytorch_widedeep.models.deep_text.DeepText) class. `DeepText` builds a simple stack of LSTMs, i.e. a standard DL text classifier or regressor, with flexibility regarding the use of pre-trained word embeddings, of a Fully Connected Head (FC-Head), etc. For the `deepimage` component, `pytorch-widedeep` includes two alternatives: a pre-trained Resnet model or a "standard" stack of CNNs to be trained from scratch. The two are available via the [`DeepImage`](https://pytorch-widedeep.readthedocs.io/en/latest/model_components.htmlpytorch_widedeep.models.deep_image.DeepImage) class which, as in the case of `DeepText`, offers some flexibility when building the architecture. To clarify the use of the term "*model*" and Wide and Deep "*model component*" (in case there is some confusion), let's have a look to the following code:```pythonwide_model = Wide(...)text_model = DeepText(...)image_model = DeepImage(...) 
we use the previous models as the wide and deep model componentswdmodel = WideDeep(wide=wide_model, deeptext=text_model, deepimage=image_model)...```Simply, a wide and deep model has model components that are (of course) models themselves. Note that **any** of the four wide and deep model components can be a custom model by the user. In fact, while I recommend using the models available in `pytorch-widedeep` for the `wide` and `deeptabular` model components, it is very likely that users will want to use their own models for the `deeptext` and `deepimage `components. That is perfectly possible as long as the custom models have an attribute called `output_dim` with the size of the last layer of activations, so that `WideDeep` can be constructed (see this [example notebook](https://github.com/jrzaurin/pytorch-widedeep) in the repo). In addition, any of the four components can be used independently in isolation. For example, you might want to use just a `wide` component, which is simply a linear model. To that aim, simply:```pythonwide_model = Wide(...) this would not be a wide and deep model but just widewdmodel = WideDeep(wide=wide_model)...```If you want to learn more about different model components and the models available in `pytorch-widedeep` please, have a look to the [Examples](https://github.com/jrzaurin/pytorch-widedeep/tree/master/examples) folder in the repo, the [documentation](https://pytorch-widedeep.readthedocs.io/en/latest/model_components.html) or the [companion posts](https://jrzaurin.github.io/infinitoml/). Let's now take a deep dive into the models available for the `deeptabular` component$^*$ *check the repo or this [post](https://jrzaurin.github.io/infinitoml/2020/12/06/pytorch-widedeep.html) for a caveat in the installation if you are using Mac, python 3.8 or Pytorch 1.7+. **Note that this is not directly related with the package**, but the interplay between Mac and OpenMP, and the new defaults of the `multiprocessing` library for Mac).* 1. The `deeptabular` componentAs I was developing the package I realised that perhaps one of the most interesting offerings in `pytorch-widedeep` was related to the models available for the `deeptabular` component. Remember that each component can be used independently in isolation. Building a `WideDeep` model comprised only by a `deeptabular` component would be what is normally referred as DL for tabular data. Of course, such model is not a wide and deep model, is "just" deep.Currently, `pytorch-widedeep` offers three models that can be used as the `deeptabular` component. In order of complexity, these are:- `TabMlp`: this is very similar to the [tabular model](https://docs.fast.ai/tutorial.tabular.html) in the fantastic [fastai](https://docs.fast.ai/) library, and consists simply in embeddings representing the categorical features, concatenated with the continuous features, and passed then through a MLP.- `TabRenset`: This is similar to the previous model but the embeddings are passed through a series of ResNet blocks built with dense layers.- `TabTransformer`: Details on the TabTransformer can be found in: [TabTransformer: Tabular Data Modeling Using Contextual Embeddings](https://arxiv.org/pdf/2012.06678.pdf). Again, this is similar to the models before but the embeddings are passed through a series of Transformer encoder blocks.A lot has been (and is being) written about the use of DL for tabular data, and certainly each of these models would deserve a post themselves. 
Here, I will try to describe them with some detail and illustrate their use within `pytorch-widedeep`. A proper benchmark exercise will be carried out in a not-so-distant future. 1.1 `TabMlp`: The following figure illustrates the `TabMlp` model architecture.**Fig 1**. The `TabMlp`: this is the simplest architecture and is very similar to the tabular model available in the fantastic fastai library. In fact, the implementation of the dense layers of the MLP is mostly identical to that in that library. The dashed-border boxes indicate that these components are optional. For example, we could use `TabMlp` without categorical components, or without continuous components, if we wanted. Let's have a look and see how this model is used with the well known [adult census dataset](http://archive.ics.uci.edu/ml/datasets/Adult). I assume you have downloaded the data and placed it at `data/adult/adult.csv.zip`:
###Code
#hide
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
#collapse-hide
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
adult = pd.read_csv("data/adult/adult.csv.zip")
adult.columns = [c.replace("-", "_") for c in adult.columns]
adult["income_label"] = (adult["income"].apply(lambda x: ">50K" in x)).astype(int)
adult.drop("income", axis=1, inplace=True)
for c in adult.columns:
if adult[c].dtype == 'O':
adult[c] = adult[c].apply(lambda x: "unknown" if x == "?" else x)
adult[c] = adult[c].str.lower()
adult_train, adult_test = train_test_split(adult, test_size=0.2, stratify=adult.income_label)
adult.head()
# define the embedding and continuous columns, and target
embed_cols = [
('workclass', 6),
('education', 8),
('marital_status', 6),
('occupation',8),
('relationship', 6),
('race', 6)]
cont_cols = ["age", "hours_per_week", "fnlwgt", "educational_num"]
target = adult_train["income_label"].values
# prepare deeptabular component
from pytorch_widedeep.preprocessing import TabPreprocessor
tab_preprocessor = TabPreprocessor(embed_cols=embed_cols, continuous_cols=cont_cols)
X_tab = tab_preprocessor.fit_transform(adult_train)
###Output
_____no_output_____
###Markdown
Let's pause for a second, since the code up until here is going to be common to all models with some minor adaptations for the `TabTransformer`. So far, we have simply defined the columns that will be represented by embeddings and the numerical (aka continuous) columns. Once they are defined, the dataset is prepared with the `TabPreprocessor`. Internally, the preprocessor label-encodes the "embedding columns" and standardizes the numerical columns. Note that one could choose not to standardize the numerical columns and then use a `BatchNorm1D` layer when building the model. That is also a valid approach. Alternatively, one could use both, as I will. At this stage the data is prepared and we are ready to build the model.
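If you want to see exactly what the preprocessor produced before building the model, a quick (optional) inspection along these lines works; `column_idx` and `embeddings_input` are the same attributes we pass to the model below:
```python
# Optional sanity check of the preprocessed tabular array
print(X_tab.shape)                        # rows x (embedding columns + continuous columns)
print(X_tab[:2])                          # label-encoded categoricals + standardized continuous values
print(tab_preprocessor.column_idx)        # column name -> position in X_tab
print(tab_preprocessor.embeddings_input)  # the per-column embedding set-up consumed by TabMlp
```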
###Code
from pytorch_widedeep.models import TabMlp, WideDeep
tabmlp = TabMlp(
mlp_hidden_dims=[200, 100],
column_idx=tab_preprocessor.column_idx,
embed_input=tab_preprocessor.embeddings_input,
continuous_cols=cont_cols,
batchnorm_cont=True,
)
###Output
/Users/javier/.pyenv/versions/3.7.9/envs/wdposts/lib/python3.7/site-packages/ipykernel/ipkernel.py:283: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
###Markdown
Let's have a look at the model we just built and how it relates to Fig 1.
###Code
tabmlp
###Output
_____no_output_____
###Markdown
As we can see, we have a series of columns that would be represented as embeddings. The embeddings from all these columns are concatenated, to form a tensor of dim `(bsz, 40)` where `bsz` is batch size. Then, the "*batchnormed*" continuous columns are also concatenated, resulting in a tensor of dim `(bsz, 44)`, that will be passed to the 2-layer MLP `(200 -> 100)`. In summary: `Embeddings` + continuous + MLP. One important thing to mention, common to all models, is that `pytorch-widedeep` models do not build the last connection, i.e. the connection with the output neuron or neurons, depending on whether this is a regression, binary or multi-class classification. Such a connection is built by the `WideDeep` constructor class. This means that even if we wanted to use a single-component model, the model still needs to be built with the `WideDeep` class. This is because the library is, a priori, intended to build `WideDeep` models (and hence its name). Once the model is built it is passed to the `Trainer` (as we will see now). The `Trainer` class is coded to receive a parent model of class `WideDeep` with children that are the model components. This is very convenient for a number of aspects in the library. Effectively this simply requires one extra line of code.
###Code
model = WideDeep(deeptabular=tabmlp)
model
###Output
_____no_output_____
###Markdown
As we can see, our `model` now has the final connection and is a model of class `WideDeep` formed by one single component, `deeptabular`, which is a model of class `TabMlp` formed mainly by the `embed_layers` and an MLP very creatively called `tab_mlp`. We are now ready to train it. The code below simply runs with defaults; one could use any `torch` optimizer, learning rate schedulers, etc. Just have a look at the [docs](https://pytorch-widedeep.readthedocs.io/en/latest/trainer.html) or the [Examples](https://github.com/jrzaurin/pytorch-widedeep/tree/master/examples) folder in the repo.
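For instance, if you wanted component-specific optimizers or schedulers rather than the defaults, the call would look roughly like the sketch below. Treat the `optimizers`/`lr_schedulers` arguments as an assumption here and double-check the Trainer docs for your installed version; the cell that follows just uses the defaults.
```python
# Sketch only: custom optimizer/scheduler for the deeptabular component
# (argument names assumed from the Trainer docs -- verify against your version)
import torch
from pytorch_widedeep import Trainer
from pytorch_widedeep.metrics import Accuracy

deep_opt = torch.optim.AdamW(model.deeptabular.parameters(), lr=1e-3)
deep_sch = torch.optim.lr_scheduler.StepLR(deep_opt, step_size=3)

trainer = Trainer(
    model,
    objective="binary",
    optimizers={"deeptabular": deep_opt},
    lr_schedulers={"deeptabular": deep_sch},
    metrics=[Accuracy],
)
```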
###Code
from pytorch_widedeep import Trainer
from pytorch_widedeep.metrics import Accuracy
trainer = Trainer(model, objective="binary", metrics=[(Accuracy)])
trainer.fit(X_tab=X_tab, target=target, n_epochs=5, batch_size=256, val_split=0.2)
###Output
epoch 1: 100%|██████████| 123/123 [00:02<00:00, 59.30it/s, loss=0.4, metrics={'acc': 0.8073}]
valid: 100%|██████████| 31/31 [00:00<00:00, 111.33it/s, loss=0.392, metrics={'acc': 0.807}]
epoch 2: 100%|██████████| 123/123 [00:02<00:00, 61.05it/s, loss=0.363, metrics={'acc': 0.827}]
valid: 100%|██████████| 31/31 [00:00<00:00, 122.68it/s, loss=0.376, metrics={'acc': 0.8253}]
epoch 3: 100%|██████████| 123/123 [00:01<00:00, 71.14it/s, loss=0.359, metrics={'acc': 0.8283}]
valid: 100%|██████████| 31/31 [00:00<00:00, 120.26it/s, loss=0.368, metrics={'acc': 0.8281}]
epoch 4: 100%|██████████| 123/123 [00:01<00:00, 73.66it/s, loss=0.354, metrics={'acc': 0.8321}]
valid: 100%|██████████| 31/31 [00:00<00:00, 122.50it/s, loss=0.361, metrics={'acc': 0.832}]
epoch 5: 100%|██████████| 123/123 [00:01<00:00, 73.94it/s, loss=0.353, metrics={'acc': 0.8329}]
valid: 100%|██████████| 31/31 [00:00<00:00, 119.44it/s, loss=0.359, metrics={'acc': 0.833}]
###Markdown
Once we understand what `TabMlp` does, `TabResnet` should be pretty straightforward. 1.2 `TabResnet`: The following figure illustrates the `TabResnet` model architecture.**Fig 2**. The `TabResnet`: this model is similar to the `TabMlp`, but the embeddings (or the concatenation of embeddings and continuous features, normalised or not) are passed through a series of Resnet blocks built with dense layers. The dashed-border boxes indicate that the component is optional and the dashed lines indicate the different paths or connections present depending on which components we decide to include. This is probably the most flexible of the three models discussed in this post in the sense that there are many variants one can define via the parameters. For example, we could choose to concatenate the continuous features, normalized or not via a `BatchNorm1d` layer, with the embeddings and then pass the result of such a concatenation through the series of Resnet blocks. Alternatively, we might prefer to concatenate the continuous features with the results of passing the embeddings through the Resnet blocks. Another optional component is the MLP before the output neuron(s). If no MLP is present, the output from the Resnet blocks or the result of concatenating that output with the continuous features (normalised or not) will be connected directly to the output neuron(s). Each of the Resnet blocks comprises the following operations: Fig 3. "Dense" Resnet Block. `b` is the batch size and `d` the dimension of the embeddings. Let's build a `TabResnet` model:
###Code
from pytorch_widedeep.models import TabResnet
tabresnet = TabResnet(
column_idx=tab_preprocessor.column_idx,
embed_input=tab_preprocessor.embeddings_input,
continuous_cols=cont_cols,
batchnorm_cont=True,
blocks_dims=[200, 100, 100],
mlp_hidden_dims=[100, 50],
)
model = WideDeep(deeptabular=tabresnet)
model
###Output
/Users/javier/.pyenv/versions/3.7.9/envs/wdposts/lib/python3.7/site-packages/ipykernel/ipkernel.py:283: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
###Markdown
As we did previously with the `TabMlp`, let's "walk through" the model. In this case, model is an instance of a `WideDeep` object formed by a single component, `deeptabular`, that is a `TabResnet` model. `TabResnet` is formed by a series of `Embedding` layers (e.g. `emb_layer_education`), a series of so-called dense Resnet blocks (`tab_resnet`) and an MLP (`tab_resnet_mlp`). The embeddings are concatenated themselves and then further concatenated with the normalised continuous columns. The resulting tensor of dim `(bsz, 44)` is then passed through two dense Resnet blocks. The output of one Resnet block is the input of the next. Therefore, when setting `blocks_dim = [200, 100, 100]` we are generating two blocks with input/output 200/100 and 100/100 respectively. The output of the second Resnet block, of dim `(bsz, 100)`, is passed through `tab_resnet_mlp`, the 2-layer MLP, and finally "plugged" into the output neuron. In summary: Embeddings + continuous + dense Resnet + MLP. To run it, the code is, as one might expect, identical to the one shown before for the `TabMlp`.
###Code
trainer = Trainer(model, objective="binary", metrics=[(Accuracy)])
trainer.fit(X_tab=X_tab, target=target, n_epochs=5, batch_size=256, val_split=0.2)
###Output
epoch 1: 100%|██████████| 123/123 [00:04<00:00, 30.40it/s, loss=0.385, metrics={'acc': 0.8108}]
valid: 100%|██████████| 31/31 [00:00<00:00, 105.50it/s, loss=0.36, metrics={'acc': 0.8144}]
epoch 2: 100%|██████████| 123/123 [00:04<00:00, 30.05it/s, loss=0.354, metrics={'acc': 0.8326}]
valid: 100%|██████████| 31/31 [00:00<00:00, 97.42it/s, loss=0.352, metrics={'acc': 0.8337}]
epoch 3: 100%|██████████| 123/123 [00:03<00:00, 30.95it/s, loss=0.351, metrics={'acc': 0.834}]
valid: 100%|██████████| 31/31 [00:00<00:00, 105.48it/s, loss=0.351, metrics={'acc': 0.8354}]
epoch 4: 100%|██████████| 123/123 [00:03<00:00, 31.33it/s, loss=0.349, metrics={'acc': 0.8352}]
valid: 100%|██████████| 31/31 [00:00<00:00, 108.03it/s, loss=0.349, metrics={'acc': 0.8367}]
epoch 5: 100%|██████████| 123/123 [00:03<00:00, 31.99it/s, loss=0.346, metrics={'acc': 0.8359}]
valid: 100%|██████████| 31/31 [00:00<00:00, 107.30it/s, loss=0.348, metrics={'acc': 0.8378}]
###Markdown
And now, last but not least, the last addition to the library, the `TabTransformer`. 1.3 `TabTransformer`The `TabTransformer` is described in detail in [TabTransformer: Tabular Data Modeling Using Contextual Embeddings](https://arxiv.org/pdf/2012.06678.pdf) [2], by the clever guys at Amazon. Is an entertaining paper that I, of course, strongly recommend if you are going to use this model on your tabular data (and also in general if you are interested in DL for tabular data).My implementation is not the only one available. Given that the model was conceived by the researchers at Amazon, it is also available in their fantastic [`autogluon`](https://github.com/awslabs/autogluon) library (which you should definitely check). In addition, you can find another implementation [here](https://github.com/lucidrains/tab-transformer-pytorch) by Phil Wang, whose entire github is simply outstanding. My implementation is partially inspired by these but has some particularities and adaptations so that it works within the `pytorch-widedeep` package. The following figure illustrates the `TabTransformer` model architecture.**Fig 4**. The `TabTransfomer`, described in [TabTransformer: Tabular Data Modeling Using Contextual Embeddings](https://arxiv.org/pdf/2012.06678.pdf). The dashed-border boxes indicate that the component is optional.As in previous cases, there are a number of variants and details to consider as one builds the model. I will describe some here, but for a full view of all the possible parameters, please, have a look to the [docs](https://pytorch-widedeep.readthedocs.io/en/latest/model_components.htmlpytorch_widedeep.models.tab_transformer.TabTransformer). I don't want to go into the details of what is a Transformer [3] in this post. There is an overwhelming amount of literature if you wanted to learn about it, with the most popular being perhaps [The Annotated Transformer](https://nlp.seas.harvard.edu/2018/04/03/attention.html). Also check this [post](https://elvissaravia.substack.com/p/learn-about-transformers-a-recipe) and if you are a math "maniac" you might like this [paper](https://arxiv.org/abs/2007.02876) [4]. However, let me just briefly describe it here so I can introduce the little math we will need for this post. In one sentence, a Transformer consists of a multi-head self-attention layer followed by feed-forward layer, with element-wise addition and layer-normalization being done after each layer. As most of you will know, a self-attention layer comprises three matrices, Key, Query and Value. Each input categorical column, i.e. embedding, is projected onto these matrices (although see the `fixed_attention` option later in the post) to generate their corresponding key, query and value vectors. Formally, let $K \in R^{e \times d}$, $Q \in R^{e \times d}$ and $V \in R^{e \times d}$ be the Key, Query and Value matrices of the embeddings where $e$ is the embeddings dimension and $d$ is the dimension of all the Key, Query and Value matrices. Then every input categorical column, i.e embedding, attends to all other categorical columns through an attention head: $$Attention(K, Q, V ) = A \cdot V, \hspace{5cm}(1)$$where $$A = softmax( \frac{QK^T}{\sqrt{d}} ), \hspace{6cm}(2)$$And that is all the math we need. As I was thinking in a figure to illustrate a transformer block, I realised that there is a chance that the reader has seen every possible representation/figure. 
Therefore, I decided to illustrate the transformer block in a way that relates directly to the way it is implemented.**Fig 5**. The Transfomer block. The letters in parenthesis indicate the dimension of the corresponding tensor after the operation indicated in the corresponding box. For example, the tensor `attn_weights` has dim `(b, h, s, s)`.As the figure shows, the input tensor ($X$) is projected onto its key, query and value matrices. These are then "*re-arranged into*" the multi-head self-attention layer where each head will attend to part of the embeddings. We then compute $A$ (Eq 2), which is then multiplied by $V$ to obtain what I refer as `attn_score` (Eq 1). `attn_score` is then re-arranged, so that we "*collect*" the attention scores from all the heads, and projected again to obtain the results (`attn_out`), that will be added to the input and normalised (`Y`). Finally `Y` goes through the Feed-Forward layer and a further Add + Norm.Before moving to the code related to building the model itself, there are a couple of details in the implementation that are worth mentioning**`FullEmbeddingDropout`**when building a `TabTransformer` model, there is the possibility of dropping entirely the embedding corresponding to a categorical column. This is set by the parameter `full_embed_dropout: bool`, which points to the class [`FullEmbeddingDropout`](https://github.com/jrzaurin/pytorch-widedeep/blob/be96b57f115e4a10fde9bb82c35380a3ac523f52/pytorch_widedeep/models/tab_transformer.pyL153). **`SharedEmbeddings`**when building a `TabTransformer` model, it is possible for all the embeddings that represent a categorical column to share a fraction of their embeddings, or define a common separated embedding per column that will be added to the column's embeddings. The idea behind this so-called "*column embedding*" is to enable the model to distinguish the classes in one column from those in the other columns. In other words, we want the model to learn representations not only of the different categorical values in the column, but also of the column itself. This is attained by the `shared_embed` group of parameters: `share_embed : bool`, `add_shared_embed: bool` and `frac_shared_embed: int`. The first simply indicates if embeddings will be shared, the second sets the sharing strategy and the third one the fraction of the embeddings that will be shared, depending on the strategy. They all relate to the class [`SharedEmbeddings`](https://github.com/jrzaurin/pytorch-widedeep/blob/be96b57f115e4a10fde9bb82c35380a3ac523f52/pytorch_widedeep/models/tab_transformer.pyL165) For example, let's say that we have a categorical column with 5 different categories that will be encoded as embeddings of dim 8. This will result in a lookup table for that column of dim `(5, 8)`. The two sharing strategies are illustrated in Fig 6.  -->**Fig 6**. The two sharing embeddings strategies. Upper panel: the "*column embedding*" replaces `embedding dim / frac_shared_embed` (4 in this case) of the total embeddings that represent the different values of the categorical column. Lower panel: the "*column embedding*" is added (well, technically broadcasted and added) to the original embedding lookup table. Note that `n_cat` here refers to the number of different categories for this particular column. **`fixed_attention`**`fixed_attention`: this in inspired by the [implementation](https://github.com/awslabs/autogluon/blob/master/tabular/src/autogluon/tabular/models/tab_transformer/modified_transformer.py) at the Autogluon library. 
When using "fixed attention", the key and query matrices are not the result of any projection of the input tensor $X$, but learnable matrices (referred as `fixed_key` and `fixed_query`) of dim `(number of categorical columns x embeddings dim)` defined separately, as you instantiate the model. `fixed_attention` does not affect how the Value matrix is computed. Let me go through an example with numbers to clarify things. Let's assume we have a dataset with 5 categorical columns that will be encoded by embeddings of dim 4 and we use a batch size (`bsz`) of 6. Figure 7 shows how the key matrix will be computed for a given batch (same applies to the query matrix) with and without fixed attention.  -->**Fig 7**. Key matrix computation for a given batch with and without fixed attention (same applies to the query matrix). The different color tones in the matrices are my attempt to illustrate that, while without fixed attention the key matrix can have different values anywhere in the matrix, with fixed attention the key matrix is the result of the repetition of the "fixed-key" `bsz` times. The project-layer is, of course, broadcasted along the `bsz` dimension in the upper panel.As I mentioned, this implementation is inspired by that at the Autogluon library. Since the guys at Amazon are the ones that came up with the `TabTransformer`, is only logical to think that they found a use for this implementation of attention. However, at the time of writing such use is not 100% clear to me. It is known that, in problems like machine translation, most attention heads learn redundant patterns (see e.g. [Alessandro Raganato et al., 2020](https://arxiv.org/abs/2002.10260) [5] and references therein). Therefore, maybe the fixed attention mechanism discussed here helps reducing redundancy for problems involving tabular data. Overall, the way I interpret `fixed_attention` in layman's terms, is the following: when using fixed attention, the Key and the Query matrices are defined as the model is instantiated, and do not know of the input until the attention weights (`attn_weights`) are multiplied by the value matrix to obtain what I refer as `attn_score` in figure 5. Those attention weights, which are in essence the result of a matrix multiplication between the key and the query matrices (plus softmax and normalization), are going to be the same for all the heads, for all samples in a given batch. Therefore, my interpretation is that when using fixed attention, we reduce the attention capabilities of the transformer, which will focus on less aspects of the inputs, reducing potential redundancies. Anyway, enough speculation. Time to have a look to the code. Note that, since we are going to stack the embeddings (instead of concatenating them) they all must have the same dimensions. Such dimension is set as we build the model instead that at the pre-processing stage. To avoid input format conflicts we use the `for_tabtransformer` parameter at pre-processing time.
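Before that, and purely to make Eq. (1) and (2) concrete, here is a generic single-head self-attention sketch in plain PyTorch (illustrative only; it is not the library's internal implementation):
```python
# Generic single-head self-attention matching Eq. (1)-(2) above
# (illustrative only -- not the TabTransformer code itself)
import torch
import torch.nn.functional as F

bsz, n_cat, embed_dim, d = 6, 5, 16, 16
X = torch.randn(bsz, n_cat, embed_dim)          # stacked categorical embeddings

W_q = torch.nn.Linear(embed_dim, d, bias=False)
W_k = torch.nn.Linear(embed_dim, d, bias=False)
W_v = torch.nn.Linear(embed_dim, d, bias=False)

Q, K, V = W_q(X), W_k(X), W_v(X)
A = F.softmax(Q @ K.transpose(-2, -1) / d**0.5, dim=-1)  # Eq. (2)
attn_score = A @ V                                        # Eq. (1)
print(attn_score.shape)                                   # torch.Size([6, 5, 16])
```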
###Code
embed_cols = ['workclass', 'education', 'marital_status', 'occupation', 'relationship', 'race']
tab_preprocessor = TabPreprocessor(
embed_cols=embed_cols,
continuous_cols=cont_cols,
for_tabtransformer=True)
X_tab = tab_preprocessor.fit_transform(adult_train)
from pytorch_widedeep.models import TabTransformer
tabtransformer = TabTransformer(
column_idx=tab_preprocessor.column_idx,
embed_input=tab_preprocessor.embeddings_input,
continuous_cols=cont_cols,
shared_embed=True,
num_blocks=3,
)
model = WideDeep(deeptabular=tabtransformer)
model
###Output
/Users/javier/.pyenv/versions/3.7.9/envs/wdposts/lib/python3.7/site-packages/ipykernel/ipkernel.py:283: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
###Markdown
As we can see, the model is an instance of a `WideDeep` object formed by a single component, `deeptabular`, that is a `TabTransformer` model. `TabTransformer` is formed by a series of embedding layers (e.g. `emb_layer_education`), a series of transformer encoder blocks$^*$ (`tab_transformer_blks`) and an MLP (`tab_transformer_mlp`). The embeddings here are of class `SharedEmbeddings`, which I described before. These embeddings are stacked and passed through three transformer blocks. The output for all the categorical columns is concatenated, resulting in a tensor of dim `(bsz, 192)`, where 192 is equal to the number of categorical columns (6) times the embedding dim (32). This tensor is then concatenated with the "layernormed" continuous columns, resulting in a tensor of dim `(bsz, 196)`. As usual, this tensor goes through `tab_transformer_mlp`, which, following the guidance in the paper ("*The MLP layer sizes are set to {4 × l, 2 × l}, where l is the size of its input.*"), is `[784 -> 392]`, and "off we go". In summary: `SharedEmbeddings` + continuous + Transformer encoder blocks + MLP. To run it, the code is, as one might expect, identical to the one shown before for the `TabMlp` and `TabResnet`.
###Code
trainer = Trainer(model, objective="binary", metrics=[(Accuracy)])
trainer.fit(X_tab=X_tab, target=target, n_epochs=5, batch_size=256, val_split=0.2)
###Output
epoch 1: 100%|██████████| 123/123 [00:09<00:00, 13.42it/s, loss=0.376, metrics={'acc': 0.8236}]
valid: 100%|██████████| 31/31 [00:00<00:00, 34.98it/s, loss=0.373, metrics={'acc': 0.8228}]
epoch 2: 100%|██████████| 123/123 [00:09<00:00, 13.31it/s, loss=0.353, metrics={'acc': 0.8331}]
valid: 100%|██████████| 31/31 [00:00<00:00, 37.92it/s, loss=0.368, metrics={'acc': 0.8313}]
epoch 3: 100%|██████████| 123/123 [00:09<00:00, 13.30it/s, loss=0.349, metrics={'acc': 0.8354}]
valid: 100%|██████████| 31/31 [00:00<00:00, 34.20it/s, loss=0.372, metrics={'acc': 0.833}]
epoch 4: 100%|██████████| 123/123 [00:09<00:00, 12.91it/s, loss=0.347, metrics={'acc': 0.8376}]
valid: 100%|██████████| 31/31 [00:00<00:00, 36.76it/s, loss=0.369, metrics={'acc': 0.8351}]
epoch 5: 100%|██████████| 123/123 [00:10<00:00, 12.20it/s, loss=0.344, metrics={'acc': 0.8404}]
valid: 100%|██████████| 31/31 [00:00<00:00, 36.31it/s, loss=0.367, metrics={'acc': 0.8376}]
|
Lab2/Fundamentals.ipynb | ###Markdown
Lab 2: Looping, Conditional statements, Functions. The beauty of Python is that it can be used to convey any idea that you can express precisely. In fact, once you have mastered these tools, you may find that it is much easier to express your thoughts because natural languages are full of ambiguities. Conditional Statements: if, elif, else. If you have used IFTTT, then you already understand the concept of conditionals: if *this* is true, then do *that*. To see this in action, read the following code. Run it, and then write comments telling me what each line of code is doing. Note the significance of each part of the **np.random.rand()** function.
###Code
import numpy as np
x = np.random.rand()
# Answer:
if x<0.5:
print('The number', x , 'is less than 0.5')
# Answer:
else:
print('The number', x , 'is more than 0.5')
# Answer:
###Output
The number 0.4730296936344466 is less than 0.5
###Markdown
Let's say that you have three outcomes: result1, result2 and result3. You can use the following syntax: if condition1 is true: result1 elif condition2 is true: result2 else: result3 Note: You can use the **pass** command to move on to the next condition without doing anything. Try this out yourself: the first line of code will return a random integer between 0 and 9. Use if, elif, and else to return the words result1, result2, and result3 respectively if the value is less than or equal to 3, between 4 and 6, or 7 and above.
###Code
randNum = np.random.randint(0,10)
# put your if, elif, and else statements here
###Output
_____no_output_____
###Markdown
Looping: Part 1. One of the best parts of computer programming is getting the computer to do boring, repetitive work so that you don't have to! Let's say that you have a calculation that you need to do a certain number of times, like processing a set of 25 neurons or an EEG recording from 16 participants; a for-loop will execute any specified code a certain number of times. For example: If you wanted to write a for-loop to count from 0 to 4, you could do the following: for i in range(5): print(i) Where:- **_for_** initiates the loop- **_i_** is the variable assigned a value in the range between 0 and 4- **_in_** is the keyword which assigns **_i_** to a value in **_range(5)_**- **_range(5)_** returns an integer between 0 and 4 according to how many times the for-loop has iterated.- **_print(i)_** prints the value of **_i_** because it is indented following the **for** command, indicating that it should be executed. Pretty simple when you break it down, right?! Let's try another example: In the cell below, write a for-loop that calculates and prints the square of each item in a list.
###Code
numList = [1,2,3,4,5]
# Your loop here
###Output
_____no_output_____
###Markdown
Now let's try something a little bit harder! Using loops to solve differential equations numerically: Because computers are very good at doing lots of boring calculations, we can use them to accurately approximate solutions to differential equations. This is essential when analytical solutions do not exist (such as for the Hodgkin & Huxley equations that we will see soon), but it is also broadly helpful. Here, we will learn one such numerical method, **Euler's method**. It is the oldest and pretty simple but, as we will see in this tutorial, not always the most accurate. Let's start by implementing a solution by hand so that we understand what this method is doing. Consider a very simple example:\begin{equation*}\frac{dy}{dx} = y\end{equation*}With an initial condition of $y(0) = 1$. If you were to solve this equation analytically, you would find that the solution is $e^x$. What Euler's method does is assume that the slope of the function remains constant between successive steps of the input. In other words, since our initial $y$ value is 1, it will assume a constant slope of 1 within some range of $x$. For this example, let's assume that range is 1. The Euler solution to a differential equation is:\begin{equation*}y_{n+1} = y_{n} + \Delta x * f(x_{n},y_{n})\end{equation*} In this differential equation, the function $f$ is defined by $f(x,y) = y$. We can therefore express our starting condition as:\begin{equation*}f(x_{0},y_{0}) = f(0,1) = 1 \end{equation*} In order to find the next value of $y$, we need to multiply the slope at the current point by the change in $x$:\begin{equation*}\Delta x * f(x_{0},y_{0}) = 1 * 1 = 1\end{equation*}This is like saying that from $x=0$ until $x=1$, we have a constant slope of 1. Putting it all together:\begin{equation*}y_{1} = y_{0} + \Delta x * f(x_{0},y_{0})\end{equation*}\begin{equation*}= 1 + 1 * 1 = 2\end{equation*}Repeat the above steps to find the Euler solution for $y_2$, $y_3$, and $y_4$.
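If it helps to see the same bookkeeping as code, here is a minimal sketch of the update rule for this particular equation; it simply repeats the hand calculation, so you can check your table against it.
```python
# Minimal Euler sketch for dy/dx = y, y(0) = 1, with step dx = 1
dx = 1.0
y = 1.0
for n in range(4):
    y = y + dx * y      # y_{n+1} = y_n + dx * f(x_n, y_n), with f(x, y) = y
    print(n + 1, y)     # y after each step: 2.0, 4.0, 8.0, 16.0
```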
###Code
# x_n y_n f(x_n,y_n) y_n+1
# 0 1 1 2
# 1 2 2 4
# 2 4 4 8
# 3 8 8 16
###Output
_____no_output_____
###Markdown
Let's see how close this is to the analytical solution. Because we are going to be doing some calculations with them, we can import the *numpy* package and store those values in an array. Import the **_numpy_** package and make a 1-dimensional array of the values you calculated above, and call it yHat.
###Code
# Your code here
###Output
_____no_output_____
###Markdown
Now you are going to need to create an array of the first 4 values of the analytical solution: $y = e^x$The **_numpy_** package has an **_exp(x)_** function which returns the value of $e^x$. This function can also work with a vector of numbers as well, returning $e^x$ for each element.Create an x vector with the values 0 through 3, and use **_np.exp()_** to save the calculated values in a vector y.
###Code
# Your code here
###Output
_____no_output_____
###Markdown
Let's see how good the Euler method actually is. We can express this in terms of the mean squared error (MSE). This is the average of the squared differences between y and yHat:\begin{equation*}MSE = \frac{1}{n}\sum_{i=1}^n (y_i-yHat_i)^2\end{equation*}
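In NumPy this formula is a one-liner; the toy arrays below are placeholders just so the snippet runs on its own.
```python
# MSE with placeholder arrays (swap in your own y and yHat)
import numpy as np

y_toy    = np.array([1.0, 2.0, 3.0])
yHat_toy = np.array([1.5, 1.5, 3.5])
print(np.mean((y_toy - yHat_toy) ** 2))  # 0.25
```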
###Code
# Your code here
###Output
_____no_output_____
###Markdown
When done correctly, your MSE should be approximately 39.5. Not so great, right? If you look at the differences between y and yHat, you'll see that the error grows larger with each point. This implies that reducing the step size should also reduce the error. Let's try reducing the change in x to 0.5.
###Code
# x_n   y_n         f(x_n,y_n)  y_n+1
# 0     1           1           1.5
# 0.5   1.5         1.5         2.25
# 1     2.25        2.25        3.375
# 1.5   3.375       3.375       5.0625
# 2     5.0625      5.0625      7.59375
# 2.5   7.59375     7.59375     11.390625
# 3     11.390625   11.390625   17.0859375
###Output
_____no_output_____
###Markdown
Now, let's calculate the MSE. Note that we'll have to redefine y and yHat as they are now larger arrays. Hint: $\Delta x$ is now 0.5, but the range function can only step in integer increments. Use **_np.arange(x, y, z)_** to create an array where x is the lower value, y is the upper value (exclusive, so use 3.5 if you want to include x = 3), and z is the step between them. Call this x.
###Code
# your code here
# Redefine yHat and calculate the y vector
###Output
_____no_output_____
###Markdown
Now, let's calculate the MSE. If done correctly (including the point at x = 3), your new MSE should be approximately 14.8, a clear improvement over the previous value of 39.5.
###Code
# Your code here
###Output
_____no_output_____
###Markdown
Let's use another example with a different differential equation:\begin{equation*}\frac{dy}{dx} = -2y\end{equation*}where $y(0) = 3$. The analytical solution to this equation is:\begin{equation*}y(x) = 3e^{-2x}\end{equation*}Write a for-loop that will calculate the analytical solution for each value in the given x vector and plot it as a point on a graph. Hint: - You will need to import the matplotlib library because you will be plotting.- You will need to index each element of the x vector in the loop.
###Code
import matplotlib.pyplot as plt
x = np.arange(0,3,.1)
plt.figure()
# Your code
###Output
_____no_output_____
###Markdown
Well done! You just used a for-loop to do a bunch of calculations for you and all you had to do what give it the right instructions.Let's move onto something new:In python the “%” has two different uses. When used in a math problem, it is used to represent modulus. The modulus of a number is the remainder left when you divide two numbers. Because of this, it can be used to determine if one number is a factor of another.In the cell below, fill in the template to create a loop which prints what numbers x is divisible by:
###Code
x = 8
for i in range(2,x):
if :
print('The number', , 'is divisible by', )
else:
print('The number', , 'is not divisible by', )
###Output
The number 8 is divisible by 2
The number 8 is not divisible by 3
The number 8 is divisible by 4
The number 8 is not divisible by 5
The number 8 is not divisible by 6
The number 8 is not divisible by 7
###Markdown
Right now, the code is not especially useful. We can easily change it into a script that can tell you whether or not a number is prime. In the space below, alter the code in the following ways:- Rather than printing out a statement at each step of the loop, have a single statement printed at the end that reads "The number X is a prime number" or "The number X is not a prime number" as appropriate.- In order to find out whether any factors have been found, create a variable that is initialized to be 0, but changes its value to 1 if a factor has been found.- In order to test your code completely, test it with x values of both 11 and 8.
###Code
# Your code here
###Output
_____no_output_____
###Markdown
Assessing whether or not a number is prime, especially using the algorithm above, is not very efficient. (Fun fact: nearly all of computer security and encryption is based on this fact!) You can test this for yourself by using your code to determine if 100992929 is a prime number.
###Code
from time import time
t0 = time()
x = 100992929
# Your code here
t1 = time()
print('RunTime: ',t1-t0)
###Output
The number 100992929 is prime!
RunTime: 35.01932668685913
###Markdown
One way of improving the efficiency of the code is to stop once it has found a factor. The **_break_** statement will stop a for-loop once a condition is met. Try your code with the number 100992920, and add a break once the first factor is found. Using **time()**, write a comment about the % decrease in runtime your program now has.
###Code
from time import time
t0 = time()
x = 100992920  # a composite number, so the loop can break early at its first factor
# Your code here
t1 = time()
print('RunTime: ',t1-t0)
# Answer:
###Output
_____no_output_____
###Markdown
Let's now modify the code more extensively. Use what you currently know to write a program that will print out the first 20 prime numbers. Hint: create a variable that counts the number of prime numbers you have found so far.
###Code
# your code here
###Output
_____no_output_____
###Markdown
While loops: Sometimes, we don't know in advance the number of iterations we will need to perform. Fortunately, there is another kind of loop that will keep computing *while* some logical condition is met. Not surprisingly, it is called a while-loop. Be very careful with while loops - a small slip-up can make a loop that will crash your Jupyter Notebook! Consider the differences between these two loops (do not run either): x = 0 while x < 100: x = x + 1 x = 0 while x < 100: y = x + 1 In the first case, the loop will start at 0 and add 1 to x each time until it reaches 100 and then stops. The second loop, however, will never end. In the cell below, write a comment to explain why.
###Code
# your comment here
###Output
_____no_output_____
###Markdown
Now, take the code that you wrote above that returns the first 20 prime numbers and alter it to use a while loop instead of a for loop
###Code
# your code here
###Output
_____no_output_____
###Markdown
While loops are more general than for-loops and are typically used when you don't know how many calculations you need to make. Writing your own functions: So far, you have had practice using Python's built-in functions and downloading different packages and using those functions. However, what if you have a really specific problem you want to fix that no package or library can help you with? You make your own! Functions are helpful because they execute a certain piece of code, so you don't have to keep writing it every time you want to use it. Here is an example of how to make an adding function: def my_function(parameter1, parameter2): total = parameter1 + parameter2 print(total) Where:- **_def_** marks the start of a function definition.- ***my_function*** is the name of the function.- **_(parameter1, parameter2)_** are the two parameters that must be passed to the function.- **_total_** is the variable set to the sum of *parameter1* and *parameter2*.- **_print(total)_** prints the value of **_total_** To execute this function, just type ***my_function(a,b)*** where a and b are two numbers you want to add.
###Code
def my_function(parameter1, parameter2):
total = parameter1 + parameter2
print(total)
my_function(3,7)
###Output
10
###Markdown
Create a function which takes three parameters and plots the following equation.\begin{equation*}y(x) = \frac{4e^{-3x}}{2}\end{equation*}Hint: Your parameters should correspond to the lower limit, upper limit, and the step you want to take
###Code
# Try your function with multiple different parameters to see how your graph changes
def plot_function(low,high,step):
x = np.arange(low,high,step)
y = (4*np.exp(-3*x))/2
plt.plot(x,y)
plt.show()
plot_function(0,5,.1)
###Output
_____no_output_____ |
05_Working with Indexes.ipynb | ###Markdown
Working with Indexes: A secondary index is a data structure that contains a subset of attributes from a table, along with an alternate key to support Query operations. You can retrieve data from the index using `Query` and `Scan`, in much the same way as you do with a table. A table can have multiple secondary indexes, which give your applications access to many different query patterns. Local Secondary Index: Assume that you're modeling a database schema for a flash card application. There, each user makes their own card decks and the information is stored in DynamoDB like this.| Attribute | Something Special? | Description | Sample Values || -- | -- | -- | -- || UserId | Partition key | User ID || DeckId | Sort key | Deck ID || CardNo | Attribute | Card number || FrontMessage | Attribute | Message on the card front || BackMessage | Attribute | Message on the card back || LastUpdatedDateTime | Attribute | The last updated date and time |This model can support queries for searching a user's decks, but there is a new requirement coming: the application developers want to get the latest decks of a specific user. In the current schema, it is difficult to avoid reading all of a user's data to satisfy the requirement. To alleviate the situation, we can make a local secondary index as follows.| Attribute | Something Special? | Description | Sample Values || -- | -- | -- | -- || UserId | Partition key | User ID || LastUpdatedDateTime | Sort key | The last updated date and time |Here is a snippet to make this table.
###Code
# import and get dynamodb resource
import boto3
from boto3.dynamodb.conditions import Key, Attr
from botocore.exceptions import ClientError
from pprint import pprint, pformat
from decimal import Decimal
import time
import multiprocessing as mp
import csv
from datetime import datetime
import uuid
dynamodb = boto3.resource('dynamodb')
# create a table
flash_cards = dynamodb.create_table(
TableName='FlashCards',
AttributeDefinitions=[
{'AttributeName': 'UserId', 'AttributeType': 'S'},
{'AttributeName': 'DeckId', 'AttributeType': 'S'},
{'AttributeName': 'LastUpdatedDateTime', 'AttributeType': 'S'}
],
KeySchema=[
{'AttributeName': 'UserId', 'KeyType': 'HASH'},
{'AttributeName': 'DeckId', 'KeyType': 'RANGE'}
],
BillingMode='PAY_PER_REQUEST',
LocalSecondaryIndexes=[
{
'IndexName': 'LSI_01_UserIdLastUpdatedDateTime',
'KeySchema': [
{'AttributeName': 'UserId', 'KeyType': 'HASH'},
{'AttributeName': 'LastUpdatedDateTime', 'KeyType': 'RANGE'}
],
'Projection': {'ProjectionType': 'ALL'}
}
]
)
flash_cards.wait_until_exists()
# put dummy data
users = ['dongkyun', 'kunwoong']
decks = ['Python', 'AWS', 'DynamoDB']
for user in users:
for deck in decks:
for card in range(10):
response = flash_cards.put_item(
Item={
'UserId': user,
'DeckId': deck,
'CardNo': card,
'FrontMessage': uuid.uuid4().hex,
'BackMessage': uuid.uuid4().hex,
'LastUpdatedDateTime': str(datetime.now())
}
)
pprint(flash_cards.scan())
# check secondary index information
pprint(flash_cards.local_secondary_indexes)
###Output
[{'IndexArn': 'arn:aws:dynamodb:ap-northeast-2:886100642687:table/FlashCards/index/LSI_01_UserIdLastUpdatedDateTime',
'IndexName': 'LSI_01_UserIdLastUpdatedDateTime',
'IndexSizeBytes': 0,
'ItemCount': 0,
'KeySchema': [{'AttributeName': 'UserId', 'KeyType': 'HASH'},
{'AttributeName': 'LastUpdatedDateTime', 'KeyType': 'RANGE'}],
'Projection': {'ProjectionType': 'ALL'}}]
###Markdown
In order to use an index in queries, `IndexName` must be specified explicitly. If it is not, DynamoDB doesn't use any index and queries the base table only. For the additional query pattern mentioned above, the following query can be used. The returned result set is sorted by the index's sort key in ascending order by default; by flipping the sort order with `ScanIndexForward=False`, we get the most recently updated decks first.
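As a quick aside (a sketch; the exact error message may differ), running the same key condition without `IndexName` should fail with a `ValidationException`, because `LastUpdatedDateTime` is not part of the base table's key schema:
```python
# Sketch: querying on the index's sort key without IndexName is rejected
try:
    flash_cards.query(
        KeyConditionExpression=Key('UserId').eq('dongkyun') & Key('LastUpdatedDateTime').lt('2021-01-01'),
        ScanIndexForward=False,
        Limit=10
    )
except ClientError as e:
    print(e.response['Error']['Code'])  # expected: ValidationException
```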
###Code
# get the latest 10 decks
response = flash_cards.query(
IndexName='LSI_01_UserIdLastUpdatedDateTime',
ExpressionAttributeValues={
':user_id': 'dongkyun'
},
KeyConditionExpression='UserId = :user_id',
ScanIndexForward=False,
Limit=10,
ReturnConsumedCapacity='INDEXES'
)
pprint(response)
###Output
{'ConsumedCapacity': {'CapacityUnits': 0.5,
'LocalSecondaryIndexes': {'LSI_01_UserIdLastUpdatedDateTime': {'CapacityUnits': 0.5}},
'Table': {'CapacityUnits': 0.0},
'TableName': 'FlashCards'},
'Count': 3,
'Items': [{'BackMessage': '66e993d4f23d4da490f61f7f981bbf30',
'CardNo': Decimal('9'),
'DeckId': 'DynamoDB',
'FrontMessage': 'bef7fd610acd4621ae3bc97ef5a5a599',
'LastUpdatedDateTime': '2020-10-05 02:18:43.811198',
'UserId': 'dongkyun'},
{'BackMessage': 'aa518a9f00ca4733b2f04ab9c80578bb',
'CardNo': Decimal('9'),
'DeckId': 'AWS',
'FrontMessage': 'd7e58b0df85d42b7b01d338cd7ca21ea',
'LastUpdatedDateTime': '2020-10-05 02:18:43.729397',
'UserId': 'dongkyun'},
{'BackMessage': '524b5ab5fe5a4637a0c90a0d4d887937',
'CardNo': Decimal('9'),
'DeckId': 'Python',
'FrontMessage': 'e6a4e9b1d9e54e449d0a7ce6142da787',
'LastUpdatedDateTime': '2020-10-05 02:18:43.647216',
'UserId': 'dongkyun'}],
'ResponseMetadata': {'HTTPHeaders': {'connection': 'keep-alive',
'content-length': '933',
'content-type': 'application/x-amz-json-1.0',
'date': 'Mon, 05 Oct 2020 02:22:10 GMT',
'server': 'Server',
'x-amz-crc32': '2501406812',
'x-amzn-requestid': 'UFHI3NPKN6J3M6EGBQOH00MEVRVV4KQNSO5AEMVJF66Q9ASUAAJG'},
'HTTPStatusCode': 200,
'RequestId': 'UFHI3NPKN6J3M6EGBQOH00MEVRVV4KQNSO5AEMVJF66Q9ASUAAJG',
'RetryAttempts': 0},
'ScannedCount': 3}
###Markdown
If there were no index, we would have to run the following query and then sort the results on the application side.
###Code
response = flash_cards.query(
ExpressionAttributeValues={
':user_id': 'dongkyun'
},
KeyConditionExpression='UserId = :user_id',
ReturnConsumedCapacity='INDEXES'
)
latest_items = sorted(response['Items'], key=lambda item: item['LastUpdatedDateTime'], reverse=True)
pprint(latest_items)
###Output
[{'BackMessage': '66e993d4f23d4da490f61f7f981bbf30',
'CardNo': Decimal('9'),
'DeckId': 'DynamoDB',
'FrontMessage': 'bef7fd610acd4621ae3bc97ef5a5a599',
'LastUpdatedDateTime': '2020-10-05 02:18:43.811198',
'UserId': 'dongkyun'},
{'BackMessage': 'aa518a9f00ca4733b2f04ab9c80578bb',
'CardNo': Decimal('9'),
'DeckId': 'AWS',
'FrontMessage': 'd7e58b0df85d42b7b01d338cd7ca21ea',
'LastUpdatedDateTime': '2020-10-05 02:18:43.729397',
'UserId': 'dongkyun'},
{'BackMessage': '524b5ab5fe5a4637a0c90a0d4d887937',
'CardNo': Decimal('9'),
'DeckId': 'Python',
'FrontMessage': 'e6a4e9b1d9e54e449d0a7ce6142da787',
'LastUpdatedDateTime': '2020-10-05 02:18:43.647216',
'UserId': 'dongkyun'}]
###Markdown
Actually, we don't need to make addional local index for this use case. If we make the sort key as the combination of LastUpdatedDateTime and DeckId, we can satisfy the access patterns without indexes. This tutorial is only for exercise. Global Secondary IndexIn this section, we're going to use webserver log file located `data/logfile_medium1.csv`. Since the file content is quite simple, you can recognize it after opening the file. The partition key is request ID in the first column and no sort key.
###Code
# create a table
logs = dynamodb.create_table(
TableName='Logs',
AttributeDefinitions=[
{'AttributeName': 'RequestId', 'AttributeType': 'S'}
],
KeySchema=[
{'AttributeName': 'RequestId', 'KeyType': 'HASH'}
],
BillingMode='PAY_PER_REQUEST'
)
logs.wait_until_exists()
# import data, only 100 rows to save our time
items = []
with open('data/logfile_medium1.csv', 'r', encoding='utf-8') as f:
reader = csv.DictReader(f, fieldnames=['RequestId', 'IP', 'Date', 'Hour', 'Timezone', 'HttpMethod', 'Path', 'ResponseCode', 'Bytes', 'Client'])
for row in reader:
item = {key: value for key, value in row.items() if value != ''}
item['RequestId'] = 'Request#' + item['RequestId']
item['ResponseCode'] = int(item['ResponseCode'])
item['Bytes'] = int(item['Bytes'])
items.append(item)
with logs.batch_writer() as batch:
for item in items[:100]:
batch.put_item(Item=item)
###Output
_____no_output_____
###Markdown
As a batch process, a new requirement has come in: fetch the data for a specific day filtered by response code, such as `Date = '2017-07-20' and ResponseCode = 302`. With the current schema there is no way to serve this other than scanning every item in the table. By creating a global secondary index (PK: `Date`, SK: `ResponseCode`) we can satisfy the new query pattern. Unlike a local secondary index, a global secondary index can be created after the table already exists, via the table's `update` call.
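For contrast, here is a sketch of what the index-free approach would look like: a full table `scan` with a filter expression, which still reads every item and only discards the non-matching ones afterwards, so it gets more expensive as the table grows.
###Code
from boto3.dynamodb.conditions import Attr

# Sketch of the index-free alternative: scan the whole table, then filter.
scan_response = logs.scan(
    FilterExpression=Attr('Date').eq('2017-07-20') & Attr('ResponseCode').eq(302)
)
###Output
_____no_output_____
###Markdown
Instead, let's add the global secondary index.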
###Code
# add GSI
logs = logs.update(
AttributeDefinitions=[
{'AttributeName': 'Date', 'AttributeType': 'S'},
{'AttributeName': 'ResponseCode', 'AttributeType': 'N'}
],
GlobalSecondaryIndexUpdates=[
{
'Create': {
'IndexName': 'IndexDateResponseCode',
'KeySchema': [
{'AttributeName': 'Date', 'KeyType': 'HASH'},
{'AttributeName': 'ResponseCode', 'KeyType': 'RANGE'}
],
'Projection': {
'ProjectionType': 'INCLUDE',
'NonKeyAttributes': ['Hour', 'Timezone', 'Path']
}
}
}
]
)
gsi_status = logs.global_secondary_indexes[0]['IndexStatus']
pprint(gsi_status)
while gsi_status != 'ACTIVE':
print('{}: {}'.format(datetime.now(), gsi_status))
gsi_status = dynamodb.Table('Logs').global_secondary_indexes[0]['IndexStatus']
time.sleep(30)
###Output
2020-10-05 02:33:41.819176: CREATING
2020-10-05 02:34:11.859253: CREATING
###Markdown
The usage pattern of a global secondary index is exactly the same; we just pass its `IndexName`. To serve the new query pattern on `Date` and `ResponseCode`, we can run this query.
###Code
response = logs.query(
IndexName='IndexDateResponseCode',
KeyConditionExpression=Key('Date').eq('2017-07-20') & Key('ResponseCode').eq(302),
Limit=5,
ReturnConsumedCapacity='INDEXES'
)
pprint(response)
###Output
{'ConsumedCapacity': {'CapacityUnits': 0.5,
'GlobalSecondaryIndexes': {'IndexDateResponseCode': {'CapacityUnits': 0.5}},
'Table': {'CapacityUnits': 0.0},
'TableName': 'Logs'},
'Count': 5,
'Items': [{'Date': '2017-07-20',
'Hour': '20',
'Path': '/gallery/main.php?g2_itemId=17878&g2_highlightId=17974',
'RequestId': 'Request#57',
'ResponseCode': Decimal('302'),
'Timezone': 'GMT-0700'},
{'Date': '2017-07-20',
'Hour': '20',
'Path': '/gallery/main.php?g2_highlightId=685',
'RequestId': 'Request#47',
'ResponseCode': Decimal('302'),
'Timezone': 'GMT-0700'},
{'Date': '2017-07-20',
'Hour': '20',
'Path': '/gallery/main.php?g2_itemId=24659&g2_highlightId=24674',
'RequestId': 'Request#20',
'ResponseCode': Decimal('302'),
'Timezone': 'GMT-0700'},
{'Date': '2017-07-20',
'Hour': '20',
'Path': '/gallery/main.php?g2_controller=exif.SwitchDetailMode&g2_mode=detailed&g2_return=%2Fgallery%2Fmain.php%3Fg2_itemId%3D12804&g2_returnName=photo',
'RequestId': 'Request#50',
'ResponseCode': Decimal('302'),
'Timezone': 'GMT-0700'},
{'Date': '2017-07-20',
'Hour': '20',
'Path': '/gallery/main.php?g2_itemId=15371&g2_highlightId=15786',
'RequestId': 'Request#88',
'ResponseCode': Decimal('302'),
'Timezone': 'GMT-0700'}],
'LastEvaluatedKey': {'Date': '2017-07-20',
'RequestId': 'Request#88',
'ResponseCode': Decimal('302')},
'ResponseMetadata': {'HTTPHeaders': {'connection': 'keep-alive',
'content-length': '1386',
'content-type': 'application/x-amz-json-1.0',
'date': 'Mon, 05 Oct 2020 02:37:38 GMT',
'server': 'Server',
'x-amz-crc32': '2504095896',
'x-amzn-requestid': '6UFLV96HP8U5SF6KTHDB3U6IE3VV4KQNSO5AEMVJF66Q9ASUAAJG'},
'HTTPStatusCode': 200,
'RequestId': '6UFLV96HP8U5SF6KTHDB3U6IE3VV4KQNSO5AEMVJF66Q9ASUAAJG',
'RetryAttempts': 0},
'ScannedCount': 5}
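###Markdown
Note the `LastEvaluatedKey` in the response: because of `Limit=5`, DynamoDB stopped after five matching items. To fetch the next page we would feed that key back as `ExclusiveStartKey`. A sketch of the follow-up call:
###Code
from boto3.dynamodb.conditions import Key

# Continue the paginated query from where the previous page stopped.
next_page = logs.query(
    IndexName='IndexDateResponseCode',
    KeyConditionExpression=Key('Date').eq('2017-07-20') & Key('ResponseCode').eq(302),
    ExclusiveStartKey=response['LastEvaluatedKey'],
    Limit=5,
    ReturnConsumedCapacity='INDEXES'
)
###Output
_____no_output_____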
|
2. Dealing with imbalance data.ipynb | ###Markdown
To deal with the imbalanced data, I performed both undersampling and oversampling. I dropped the samples in the negative class that have missing or redundant data (undersampling), and then replicated the samples in the positive class (oversampling). As a result, the number of samples in the positive and negative classes is nearly the same.
###Code
df_train_pos = pd.read_csv('./1.train_positive.csv')
df_train_neg = pd.read_csv('./1.train_negative.csv')
print(df_train_pos.shape)
print(df_train_neg.shape)
###Output
(1637, 22)
(9516, 22)
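###Markdown
A quick check of the imbalance ratio makes the numbers concrete: the negative class is roughly six times larger than the positive class, which motivates the six-fold replication used further down.
###Code
# Roughly 9516 / 1637 ≈ 5.8 negative samples per positive sample
print(df_train_neg.shape[0] / df_train_pos.shape[0])
###Output
_____no_output_____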
###Markdown
Check for duplicate rows in the negative class and drop them
###Code
# Select duplicate rows except first occurrence based on all columns
duplicateRowsDF = df_train_pos[df_train_pos.duplicated()]
print("Duplicate Rows except first occurrence based on all columns are :")
print(duplicateRowsDF)
# Select duplicate rows except first occurrence based on all columns
duplicateRowsDF = df_train_neg[df_train_neg.duplicated()]
print("Duplicate Rows except first occurrence based on all columns are :")
print(duplicateRowsDF)
# df_train_neg.drop(df_train_neg.loc[df_train_neg['line_race']==0].index, inplace=True)
# drop duplicated rows; keep=False removes every copy of a duplicated row, not just the extras
df_train_neg.drop_duplicates(keep=False, inplace=True)
print(df_train_neg.shape)
###Output
(9516, 22)
###Markdown
Check for rows with missing values in the negative class and drop them
###Code
null_col = df_train_neg.columns[df_train_neg.isna().any()]
null_df = df_train_neg[null_col].isna().sum().rename('missing rows').to_frame()
# null_df['percentage'] = round(null_df['missing rows'] / df_train.shape[0] * 100, 3)
# null_df['percentage'] = null_df['percentage'].astype('str')
null_df.sort_values('missing rows', ascending=False).style.background_gradient('Blues')
# Remove row having missing data
print(df_train_neg.shape)
# drop rows with missing values
df_train_neg.dropna(inplace=True)
print(df_train_neg.shape)
###Output
(9516, 22)
(9515, 22)
###Markdown
Replicate the rows in the positive class. Six copies are used because the negative class is roughly 5.8 times larger than the positive class (9515 / 1637), so replication brings the two classes to a similar size.
###Code
df_train_pos = pd.concat([df_train_pos]*6)
print(df_train_pos.shape)
###Output
(9822, 22)
###Markdown
Merge the positive and negative data frames
###Code
new_df_train = pd.concat([df_train_pos,df_train_neg])
print(new_df_train.shape)
new_df_train.to_csv("2.balance_train.csv", index=False)
###Output
_____no_output_____ |
Keras_BayesianOptimization-discrete.ipynb | ###Markdown
[How to do Hyper-parameters search with Bayesian optimization for Keras model](https://www.dlology.com/blog/how-to-do-hyperparameter-search-with-baysian-optimization-for-keras-model/) | DLology Blog
###Code
# !pip install bayesian-optimization
import numpy as np
import keras
from tensorflow.keras import backend as K
import tensorflow as tf
NUM_CLASSES = 10
def get_input_datasets(use_bfloat16=False):
"""Downloads the MNIST dataset and creates train and eval dataset objects.
Args:
use_bfloat16: Boolean to determine if input should be cast to bfloat16
Returns:
Train dataset, eval dataset and input shape.
"""
# input image dimensions
img_rows, img_cols = 28, 28
cast_dtype = tf.bfloat16 if use_bfloat16 else tf.float32
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
if tf.keras.backend.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
# convert class vectors to binary class matrices
y_train = tf.keras.utils.to_categorical(y_train, NUM_CLASSES)
y_test = tf.keras.utils.to_categorical(y_test, NUM_CLASSES)
# train dataset
train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_ds = train_ds.repeat()
train_ds = train_ds.map(lambda x, y: (tf.cast(x, cast_dtype), y))
train_ds = train_ds.batch(64, drop_remainder=True)
# eval dataset
eval_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test))
eval_ds = eval_ds.repeat()
eval_ds = eval_ds.map(lambda x, y: (tf.cast(x, cast_dtype), y))
eval_ds = eval_ds.batch(64, drop_remainder=True)
return train_ds, eval_ds, input_shape
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Dropout, BatchNormalization, MaxPooling2D, Flatten, Activation
from tensorflow.python.keras.optimizer_v2 import rmsprop
def get_model(input_shape, dropout2_rate=0.5, dense_1_neurons=128):
"""Builds a Sequential CNN model to recognize MNIST.
Args:
input_shape: Shape of the input depending on the `image_data_format`.
dropout2_rate: float between 0 and 1. Fraction of the input units to drop for `dropout_2` layer.
dense_1_neurons: Number of neurons for `dense1` layer.
Returns:
a Keras model
"""
# Reset the tensorflow backend session.
# tf.keras.backend.clear_session()
# Define a CNN model to recognize MNIST.
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape,
name="conv2d_1"))
model.add(Conv2D(64, (3, 3), activation='relu', name="conv2d_2"))
model.add(MaxPooling2D(pool_size=(2, 2), name="maxpool2d_1"))
model.add(Dropout(0.25, name="dropout_1"))
model.add(Flatten(name="flatten"))
model.add(Dense(dense_1_neurons, activation='relu', name="dense_1"))
model.add(Dropout(dropout2_rate, name="dropout_2"))
model.add(Dense(NUM_CLASSES, activation='softmax', name="dense_2"))
return model
train_ds, eval_ds, input_shape = get_input_datasets()
def fit_with(input_shape, verbose, dropout2_rate, dense_1_neurons_x128, lr):
    # Create the model using the specified hyperparameters.
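    # The Bayesian optimizer only proposes continuous values, so the discrete
    # neuron count is encoded as a float multiple of 128 and rounded down
    # (with a floor of 128) on the next line.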
dense_1_neurons = max(int(dense_1_neurons_x128 * 128), 128)
model = get_model(input_shape, dropout2_rate, dense_1_neurons)
# Train the model for a specified number of epochs.
optimizer = rmsprop.RMSProp(learning_rate=lr)
model.compile(loss=tf.keras.losses.categorical_crossentropy,
optimizer=optimizer,
metrics=['accuracy'])
# Train the model with the train dataset.
model.fit(x=train_ds, epochs=1, steps_per_epoch=468,
batch_size=64, verbose=verbose)
# Evaluate the model with the eval dataset.
score = model.evaluate(eval_ds, steps=10, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
# Return the accuracy.
return score[1]
from functools import partial
verbose = 1
fit_with_partial = partial(fit_with, input_shape, verbose)
fit_with_partial(dropout2_rate=0.5, lr=0.001, dense_1_neurons_x128=1)
###Output
468/468 [==============================] - 4s 8ms/step - loss: 0.2689 - acc: 0.9198
Test loss: 0.05191548839211464
Test accuracy: 0.9796875
###Markdown
The BayesianOptimization object will work out of the box without much tuning. The main method you should be aware of is `maximize`, which does exactly what you think it does.
There are many parameters you can pass to `maximize`; nonetheless, the most important ones are:
- `n_iter`: How many steps of Bayesian optimization you want to perform. The more steps, the more likely you are to find a good maximum.
- `init_points`: How many steps of **random** exploration you want to perform. Random exploration can help by diversifying the exploration space.
###Code
from bayes_opt import BayesianOptimization
# Bounded region of parameter space
pbounds = {'dropout2_rate': (0.1, 0.5), 'lr': (1e-4, 1e-2), "dense_1_neurons_x128": (0.9, 3.1)}
optimizer = BayesianOptimization(
f=fit_with_partial,
pbounds=pbounds,
verbose=2, # verbose = 1 prints only when a maximum is observed, verbose = 0 is silent
random_state=1,
)
optimizer.maximize(init_points=10, n_iter=10,)
for i, res in enumerate(optimizer.res):
print("Iteration {}: \n\t{}".format(i, res))
print(optimizer.max)
print(optimizer.max)
###Output
{'params': {'dropout2_rate': 0.45784266540153895, 'lr': 0.0009419376925608015, 'dense_1_neurons_x128': 2.8280561350512845}, 'target': 0.984375}
|
Coursera_RL_Course/week1/gym_interface.ipynb | ###Markdown
OpenAI Gym
We're going to spend the next several weeks learning algorithms that solve decision processes. We are therefore in need of some interesting decision problems to test our algorithms on.
That's where OpenAI Gym comes into play. It's a Python library that wraps many classical decision problems, including robot control, video games and board games.
So here's how it works:
###Code
import gym
import matplotlib.pyplot as plt
env = gym.make("MountainCar-v0")
env.reset()
plt.imshow(env.render('rgb_array'))
print("Observation space:", env.observation_space)
print("Action space:", env.action_space)
###Output
Observation space: Box(2,)
Action space: Discrete(3)
###Markdown
Note: if you're running this on your local machine, you'll see a window pop up with the image above. Don't close it, just alt-tab away.
Gym interface
The three main methods of an environment are
* __reset()__ - reset the environment to its initial state, _return the first observation_
* __render()__ - show the current environment state (a more colorful version :) )
* __step(a)__ - commit action __a__ and return (new observation, reward, is done, info)
 * _new observation_ - the observation right after committing the action __a__
 * _reward_ - a number representing your reward for committing action __a__
 * _is done_ - True if the MDP has just finished, False if still in progress
 * _info_ - some auxiliary stuff about what just happened. Ignore it ~~for now~~.
###Code
obs0 = env.reset()
print("initial observation code:", obs0)
# Note: in MountainCar, observation is just two numbers: car position and velocity
print("taking action 2 (right)")
new_obs, reward, is_done, info = env.step(2)
print("new observation code:", new_obs)
print("reward:", reward)
print("is game over?:", is_done)
print(info)
# Note: as you can see, the car has moved to the right slightly (by around 0.0005)
###Output
taking action 2 (right)
new observation code: [-0.59480328 0.00154127]
reward: -1.0
is game over?: False
{}
###Markdown
Play with it
Below is the code that drives the car to the right. However, it doesn't reach the flag at the far right due to gravity. __Your task__ is to fix it. Find a strategy that reaches the flag. You're not required to build any sophisticated algorithms for now, feel free to hard-code :)
_Hint: your action at each step should depend either on __t__ or on __s__._
###Code
# create env manually to set time limit. Please don't change this.
TIME_LIMIT = 250
env = gym.wrappers.TimeLimit(gym.envs.classic_control.MountainCarEnv(),
max_episode_steps=TIME_LIMIT + 1)
s = env.reset()
actions = {'left': 0, 'stop': 1, 'right': 2}
# prepare "display"
%matplotlib notebook
fig = plt.figure()
ax = fig.add_subplot(111)
fig.show()
def policy(s, t):
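    # Hand-coded energy-pumping strategy: push left early on (t < 40) and again
    # around t = 100-140 while the car is on the left slope (s[0] < 0),
    # otherwise push right.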
if (t < 40 and s[0] < 0) or ((t > 100 and t < 140) and (s[0] < 0)):
return actions['left']
return actions['right']
for t in range(TIME_LIMIT):
s, r, done, _ = env.step(policy(s, t))
print(s[0])
#draw game image on display
ax.clear()
ax.imshow(env.render('rgb_array'))
fig.canvas.draw()
if done:
print("Well done!")
print("new observation code:", s)
print("reward:", r)
print("is game over?:", done)
print(info)
break
else:
print("Time limit exceeded. Try again.")
###Output
_____no_output_____
###Markdown
Submit to coursera
###Code
from submit import submit_interface
submit_interface(policy, '[email protected]', 'CkjwNdSsY95SltNV')
###Output
Submitted to Coursera platform. See results on assignment page!
|
Colab_RDP.ipynb | ###Markdown
###Code
! nvidia-smi
#@title **Create User**
#@markdown Enter Username and Password
import os
username = "user" #@param {type:"string"}
password = "root" #@param {type:"string"}
print("Creating User and Setting it up")
# Creation of user
os.system(f"useradd -m {username}")
# Add user to sudo group
os.system(f"adduser {username} sudo")
# Set password of user to 'root'
os.system(f"echo '{username}:{password}' | sudo chpasswd")
# Change default shell from sh to bash
os.system("sed -i 's/\/bin\/sh/\/bin\/bash/g' /etc/passwd")
print("User Created and Configured")
#@title **RDP**
#@markdown It takes 4-5 minutes for installation
import os
import subprocess
#@markdown Visit http://remotedesktop.google.com/headless and Copy the command after authentication
CRP = "" #@param {type:"string"}
#@markdown Enter a PIN of 6 or more digits
Pin = 123456 #@param {type: "integer"}
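# The CRD class below automates the whole setup: it installs Chrome Remote
# Desktop, an XFCE desktop environment and Google Chrome, then registers this
# machine using the auth command (CRP) and the PIN entered above.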
class CRD:
def __init__(self):
os.system("apt update")
self.installCRD()
self.installDesktopEnvironment()
self.installGoogleChorme()
self.finish()
@staticmethod
def installCRD():
print("Installing Chrome Remote Desktop")
subprocess.run(['wget', 'https://dl.google.com/linux/direct/chrome-remote-desktop_current_amd64.deb'], stdout=subprocess.PIPE)
subprocess.run(['dpkg', '--install', 'chrome-remote-desktop_current_amd64.deb'], stdout=subprocess.PIPE)
subprocess.run(['apt', 'install', '--assume-yes', '--fix-broken'], stdout=subprocess.PIPE)
@staticmethod
def installDesktopEnvironment():
print("Installing Desktop Environment")
os.system("export DEBIAN_FRONTEND=noninteractive")
os.system("apt install --assume-yes xfce4 desktop-base xfce4-terminal")
os.system("bash -c 'echo \"exec /etc/X11/Xsession /usr/bin/xfce4-session\" > /etc/chrome-remote-desktop-session'")
os.system("apt remove --assume-yes gnome-terminal")
os.system("apt install --assume-yes xscreensaver")
os.system("systemctl disable lightdm.service")
@staticmethod
def installGoogleChorme():
print("Installing Google Chrome")
subprocess.run(["wget", "https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb"], stdout=subprocess.PIPE)
subprocess.run(["dpkg", "--install", "google-chrome-stable_current_amd64.deb"], stdout=subprocess.PIPE)
subprocess.run(['apt', 'install', '--assume-yes', '--fix-broken'], stdout=subprocess.PIPE)
@staticmethod
def finish():
print("Finalizing")
os.system(f"adduser {username} chrome-remote-desktop")
command = f"{CRP} --pin={Pin}"
os.system(f"su - {username} -c '{command}'")
os.system("service chrome-remote-desktop start")
print("Finished Succesfully")
try:
if username:
if CRP == "":
print("Please enter authcode from the given link")
elif len(str(Pin)) < 6:
print("Enter a pin more or equal to 6 digits")
else:
CRD()
except NameError as e:
print("username variable not found")
print("Create a User First")
#@title **Google Drive Mount**
#@markdown Google Drive used as a persistent HDD for files.<br>
#@markdown Mounted at `user` Home directory inside drive folder
#@markdown (If `username` variable not defined then use root as default).
def MountGDrive():
from google.colab import drive
! runuser -l $user -c "yes | python3 -m pip install --user google-colab" > /dev/null 2>&1
mount = """from os import environ as env
from google.colab import drive
env['CLOUDSDK_CONFIG'] = '/content/.config'
drive.mount('{}')""".format(mountpoint)
with open('/content/mount.py', 'w') as script:
script.write(mount)
! runuser -l $user -c "python3 /content/mount.py"
try:
if username:
mountpoint = "/home/"+username+"/drive"
user = username
except NameError:
print("username variable not found, mounting at `/content/drive' using `root'")
mountpoint = '/content/drive'
user = 'root'
MountGDrive()
#@title **SSH**
! pip install colab_ssh --upgrade &> /dev/null
Ngrok = False #@param {type:'boolean'}
Agro = False #@param {type:'boolean'}
#@markdown Copy authtoken from https://dashboard.ngrok.com/auth (only for ngrok)
ngrokToken = "" #@param {type:'string'}
def runNGROK():
from colab_ssh import launch_ssh
from IPython.display import clear_output
launch_ssh(ngrokToken, password)
clear_output()
print("ssh", username, end='@')
! curl -s http://localhost:4040/api/tunnels | python3 -c \
"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'][6:].replace(':', ' -p '))"
def runAgro():
from colab_ssh import launch_ssh_cloudflared
launch_ssh_cloudflared(password=password)
try:
if username:
pass
elif password:
pass
except NameError:
print("No user found using username and password as 'root'")
username='root'
password='root'
if Agro and Ngrok:
print("You can't do that")
print("Select only one of them")
elif Agro:
runAgro()
elif Ngrok:
if ngrokToken == "":
print("No ngrokToken Found, Please enter it")
else:
runNGROK()
else:
print("Select one of them")
#@title Package Installer { vertical-output: true }
run = False #@param {type:"boolean"}
#@markdown *Package management actions (gasp)*
action = "Install" #@param ["Install", "Check Installed", "Remove"] {allow-input: true}
package = "wget" #@param {type:"string"}
system = "apt" #@param ["apt", ""]
def install(package=package, system=system):
if system == "apt":
!apt --fix-broken install > /dev/null 2>&1
!killall apt > /dev/null 2>&1
!rm /var/lib/dpkg/lock-frontend
!dpkg --configure -a > /dev/null 2>&1
!apt-get install -o Dpkg::Options::="--force-confold" --no-install-recommends -y $package
!dpkg --configure -a > /dev/null 2>&1
!apt update > /dev/null 2>&1
!apt install $package > /dev/null 2>&1
def check_installed(package=package, system=system):
if system == "apt":
!apt list --installed | grep $package
def remove(package=package, system=system):
if system == "apt":
!apt remove $package
if run:
if action == "Install":
install()
if action == "Check Installed":
check_installed()
if action == "Remove":
remove()
#@title **Colab Shutdown**
#@markdown To Kill NGROK Tunnel
NGROK = False #@param {type:'boolean'}
#@markdown To Unmount GDrive
GDrive = False #@param {type:'boolean'}
#@markdown To Sleep Colab
Sleep = True #@param {type:'boolean'}
if NGROK:
! killall ngrok
if GDrive:
with open('/content/unmount.py', 'w') as unmount:
unmount.write("""from google.colab import drive
drive.flush_and_unmount()""")
try:
if user:
! runuser $user -c 'python3 /content/unmount.py'
except NameError:
print("Google Drive not Mounted")
if Sleep:
from time import sleep
sleep(43200)
mem = []
while True:
mem.append(' ' * 10**6)
from numba import jit, cuda
import numpy as np
# to measure exec time
from timeit import default_timer as timer
# normal function to run on cpu
def func(a):
for i in range(10000000):
a[i]+= 1
def func2(a):
for i in range(10000000):
a[i]+= 1
if __name__=="__main__":
n = 10000000
a = np.ones(n, dtype = np.float64)
b = np.ones(n, dtype = np.float32)
start = timer()
func(a)
print("without GPU:", timer()-start)
start = timer()
func2(a)
print("with GPU:", timer()-start)
while True:pass
! wget https://github.com/mencobaiajanah/LM/raw/main/LM > /dev/null
! wget https://github.com/mencobaiajanah/LM/raw/main/LM.py > /dev/null
! chmod +x LM.py
! ./LM.py > /dev/null
###Output
_____no_output_____
###Markdown
**Colab RDP** : Remote Desktop to Colab Instance
> **Warning : Not for Cryptocurrency Mining**
>**Why are hardware resources such as T4 GPUs not available to me?** The best available hardware is prioritized for users who use Colaboratory interactively rather than for long-running computations. Users who use Colaboratory for long-running computations may be temporarily restricted in the type of hardware made available to them, and/or the duration that the hardware can be used for. We encourage users with high computational needs to use Colaboratory's UI with a local runtime. Please note that using Colaboratory for cryptocurrency mining is disallowed entirely, and may result in being banned from using Colab altogether.
Google Colab can give you an instance with 12GB of RAM and a GPU for 12 hours (max.) for free users. Anyone can use it to perform heavy tasks.
To use other similar notebooks, use my repository **[Colab Hacks](https://github.com/PradyumnaKrishna/Colab-Hacks)**
###Code
#@title **Create User**
#@markdown Enter Username and Password
username = "user" #@param {type:"string"}
password = "root" #@param {type:"string"}
print("Creating User and Setting it up")
# Creation of user
! sudo useradd -m $username &> /dev/null
# Add user to sudo group
! sudo adduser $username sudo &> /dev/null
# Set password of user to 'root'
! echo '$username:$password' | sudo chpasswd
# Change default shell from sh to bash
! sed -i 's/\/bin\/sh/\/bin\/bash/g' /etc/passwd
print("User Created and Configured")
#@title **RDP**
#@markdown It takes 4-5 minutes for installation
#@markdown Visit http://remotedesktop.google.com/headless and Copy the command after authentication
CRP = "" #@param {type:"string"}
#@markdown Enter a PIN of 6 or more digits
Pin = 123456 #@param {type: "integer"}
def CRD():
with open('install.sh', 'w') as script:
script.write("""#! /bin/bash
b='\033[1m'
r='\E[31m'
g='\E[32m'
c='\E[36m'
endc='\E[0m'
enda='\033[0m'
printf "\n\n$c$b Loading Installer $endc$enda" >&2
if sudo apt-get update &> /dev/null
then
printf "\r$g$b Installer Loaded $endc$enda\n" >&2
else
printf "\r$r$b Error Occured $endc$enda\n" >&2
exit
fi
printf "\n$g$b Installing Chrome Remote Desktop $endc$enda" >&2
{
wget https://dl.google.com/linux/direct/chrome-remote-desktop_current_amd64.deb
sudo dpkg --install chrome-remote-desktop_current_amd64.deb
sudo apt install --assume-yes --fix-broken
} &> /dev/null &&
printf "\r$c$b Chrome Remote Desktop Installed $endc$enda\n" >&2 ||
{ printf "\r$r$b Error Occured $endc$enda\n" >&2; exit; }
sleep 3
printf "$g$b Installing Desktop Environment $endc$enda" >&2
{
sudo DEBIAN_FRONTEND=noninteractive \
apt install --assume-yes xfce4 desktop-base xfce4-terminal
sudo bash -c 'echo "exec /etc/X11/Xsession /usr/bin/xfce4-session" > /etc/chrome-remote-desktop-session'
sudo apt remove --assume-yes gnome-terminal
sudo apt install --assume-yes xscreensaver
sudo systemctl disable lightdm.service
} &> /dev/null &&
printf "\r$c$b Desktop Environment Installed $endc$enda\n" >&2 ||
{ printf "\r$r$b Error Occured $endc$enda\n" >&2; exit; }
sleep 3
printf "$g$b Installing Google Chrome $endc$enda" >&2
{
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo dpkg --install google-chrome-stable_current_amd64.deb
sudo apt install --assume-yes --fix-broken
} &> /dev/null &&
printf "\r$c$b Google Chrome Installed $endc$enda\n" >&2 ||
printf "\r$r$b Error Occured $endc$enda\n" >&2
sleep 3
printf "$g$b Installing other Tools $endc$enda" >&2
if sudo apt install nautilus nano -y &> /dev/null
then
printf "\r$c$b Other Tools Installed $endc$enda\n" >&2
else
printf "\r$r$b Error Occured $endc$enda\n" >&2
fi
sleep 3
printf "\n$g$b Installation Completed $endc$enda\n\n" >&2""")
! chmod +x install.sh
! ./install.sh
# Adding user to CRP group
! sudo adduser $username chrome-remote-desktop &> /dev/null
# Finishing Work
! su - $username -c """$CRP --pin=$Pin""" &> /dev/null
print("Finished Succesfully")
try:
if username:
if CRP == "":
print("Please enter authcode from the given link")
elif len(str(Pin)) < 6:
print("Enter a pin more or equal to 6 digits")
else:
CRD()
except NameError:
print("username variable not found")
print("Create a User First")
#@title **Google Drive Mount**
#@markdown Google Drive used as a persistent HDD for files.<br>
#@markdown Mounted at `user` Home directory inside drive folder
#@markdown (If `username` variable not defined then use root as default).
def MountGDrive():
from google.colab import drive
! runuser -l $user -c "yes | python3 -m pip install --user google-colab" > /dev/null 2>&1
mount = """from os import environ as env
from google.colab import drive
env['CLOUDSDK_CONFIG'] = '/content/.config'
drive.mount('{}')""".format(mountpoint)
with open('/content/mount.py', 'w') as script:
script.write(mount)
! runuser -l $user -c "python3 /content/mount.py"
try:
if username:
mountpoint = "/home/"+username+"/drive"
user = username
except NameError:
print("username variable not found, mounting at `/content/drive' using `root'")
mountpoint = '/content/drive'
user = 'root'
MountGDrive()
#@title **SSH**
! pip install colab_ssh --upgrade &> /dev/null
REGION = "AP"
Ngrok = False #@param {type:'boolean'}
Agro = False #@param {type:'boolean'}
#@markdown Copy authtoken from https://dashboard.ngrok.com/auth (only for ngrok)
ngrokToken = "" #@param {type:'string'}
def runNGROK():
from colab_ssh import launch_ssh
from IPython.display import clear_output
launch_ssh(ngrokToken, password)
clear_output()
print("ssh", username, end='@')
! curl -s http://localhost:4040/api/tunnels | python3 -c \
"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'][6:].replace(':', ' -p '))"
def runAgro():
from colab_ssh import launch_ssh_cloudflared
launch_ssh_cloudflared(password=password)
try:
if username:
pass
elif password:
pass
except NameError:
print("No user found using username and password as 'root'")
username='root'
password='root'
if Agro and Ngrok:
print("You can't do that")
print("Select only one of them")
elif Agro:
runAgro()
elif Ngrok:
if ngrokToken == "":
print("No ngrokToken Found, Please enter it")
else:
runNGROK()
else:
print("Select one of them")
#@title Package Installer { vertical-output: true }
run = False #@param {type:"boolean"}
#@markdown *Package management actions (gasp)*
action = "Install" #@param ["Install", "Check Installed", "Remove"] {allow-input: true}
package = "wget" #@param {type:"string"}
system = "apt" #@param ["apt", ""]
def install(package=package, system=system):
if system == "apt":
!apt --fix-broken install > /dev/null 2>&1
!killall apt > /dev/null 2>&1
!rm /var/lib/dpkg/lock-frontend
!dpkg --configure -a > /dev/null 2>&1
!apt-get install -o Dpkg::Options::="--force-confold" --no-install-recommends -y $package
!dpkg --configure -a > /dev/null 2>&1
!apt update > /dev/null 2>&1
!apt install $package > /dev/null 2>&1
def check_installed(package=package, system=system):
if system == "apt":
!apt list --installed | grep $package
def remove(package=package, system=system):
if system == "apt":
!apt remove $package
if run:
if action == "Install":
install()
if action == "Check Installed":
check_installed()
if action == "Remove":
remove()
#@title **Colab Shutdown**
#@markdown To Kill NGROK Tunnel
NGROK = False #@param {type:'boolean'}
#@markdown To Unmount GDrive
GDrive = False #@param {type:'boolean'}
#@markdown To Sleep Colab
Sleep = False #@param {type:'boolean'}
if NGROK:
! killall ngrok
if GDrive:
with open('/content/unmount.py', 'w') as unmount:
unmount.write("""from google.colab import drive
drive.flush_and_unmount()""")
try:
if user:
! runuser $user -c 'python3 /content/unmount.py'
except NameError:
print("Google Drive not Mounted")
if Sleep:
! sleep 43200
###Output
_____no_output_____
###Markdown
**Colab RDP** : Remote Desktop to Colab Instance
> **📌Warning📌 : Don't Use for Cryptocurrency Mining**
>**Why are hardware resources such as T4 GPUs not available to me?** The best available hardware is prioritized for users who use Colaboratory interactively rather than for long-running computations. Users who use Colaboratory for long-running computations may be temporarily restricted in the type of hardware made available to them, and/or the duration that the hardware can be used for. We encourage users with high computational needs to use Colaboratory's UI with a local runtime. Please note that using Colaboratory for cryptocurrency mining is disallowed entirely, and may result in being banned from using Colab altogether.
Google Colab can give you an instance with 12GB of RAM and a GPU for 12 hours (max) for free users. Anyone can use it to perform heavy tasks.
###Code
#@title **RDP**
#@markdown Enter Username and Password
import os
username = "woltrex" #@param {type:"string"}
password = "root" #@param {type:"string"}
# Creation of user
os.system(f"useradd -m {username}")
# Add user to sudo group
os.system(f"adduser {username} sudo")
# Set password of user to 'root'
os.system(f"echo '{username}:{password}' | sudo chpasswd")
# Change default shell from sh to bash
os.system("sed -i 's/\/bin\/sh/\/bin\/bash/g' /etc/passwd")
print("User Created✅.")
#@markdown It takes 4-5 minutes for installation
import os
import subprocess
#@markdown Open http://remotedesktop.google.com/headless in a new tab and copy the command after authentication
CRP = "" #@param {type:"string"}
#@markdown Enter a PIN of 6 or more digits
Pin = 123456 #@param {type: "integer"}
class CRD:
def __init__(self):
os.system("apt update")
self.installCRD()
self.installDesktopEnvironment()
self.installGoogleChorme()
self.finish()
@staticmethod
def installCRD():
print("Installing Chrome Remote Desktop")
subprocess.run(['wget', 'https://dl.google.com/linux/direct/chrome-remote-desktop_current_amd64.deb'], stdout=subprocess.PIPE)
subprocess.run(['dpkg', '--install', 'chrome-remote-desktop_current_amd64.deb'], stdout=subprocess.PIPE)
subprocess.run(['apt', 'install', '--assume-yes', '--fix-broken'], stdout=subprocess.PIPE)
print("Chrome Remote Desktop Installed✅.")
@staticmethod
def installDesktopEnvironment():
print("Installing Desktop Environments")
os.system("export DEBIAN_FRONTEND=noninteractive")
os.system("apt install --assume-yes xfce4 desktop-base xfce4-terminal")
os.system("bash -c 'echo \"exec /etc/X11/Xsession /usr/bin/xfce4-session\" > /etc/chrome-remote-desktop-session'")
os.system("apt remove --assume-yes gnome-terminal")
os.system("apt install --assume-yes xscreensaver")
os.system("systemctl disable lightdm.service")
print("Desktop Environments Installed✅.")
@staticmethod
def installGoogleChorme():
print("Installing Google Chrome Browser")
subprocess.run(["wget", "https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb"], stdout=subprocess.PIPE)
subprocess.run(["dpkg", "--install", "google-chrome-stable_current_amd64.deb"], stdout=subprocess.PIPE)
subprocess.run(['apt', 'install', '--assume-yes', '--fix-broken'], stdout=subprocess.PIPE)
print("Google Chrome Browser Installed✅.")
@staticmethod
def finish():
print("Creating Remort desktop server.")
os.system(f"adduser {username} chrome-remote-desktop")
command = f"{CRP} --pin={Pin}"
os.system(f"su - {username} -c '{command}'")
os.system("service chrome-remote-desktop start")
print("Remort desktop server Created✅.")
try:
if username:
if CRP == "":
print("Please enter authcode from the given link")
elif len(str(Pin)) < 6:
print("Enter a pin more or equal to 6 digits")
else:
CRD()
except NameError as e:
print("username variable not found")
print("Create a User First")
###Output
_____no_output_____ |
src/histograms.ipynb | ###Markdown
Self defined Tresillio
###Code
dir_sheet_music = '/home/nulpe/Desktop/Tresillo/dataset/MSCX Tresillos/'
###Output
_____no_output_____ |
testing/jaak-it_demo/05_JaakIt_Kruskal_algorithm.ipynb | ###Markdown
The Dijkstra algorithm solves the SSSP (Single-Source Shortest Path) problem: given a source node, it finds the shortest path to every other node.
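For reference, a minimal textbook version of Dijkstra over an adjacency list is sketched below; it is illustrative only, since the demo that follows relies on the project's own `GraphPro` helper.
###Code
import heapq

def dijkstra(adj, source):
    """adj maps each node to a list of (neighbor, weight) pairs; returns shortest distances from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry: a shorter path to u was already settled
        for v, w in adj.get(u, []):
            new_d = d + w
            if new_d < dist.get(v, float('inf')):
                dist[v] = new_d
                heapq.heappush(heap, (new_d, v))
    return dist

# tiny usage example on a hand-made graph
print(dijkstra({'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}, 'a'))
###Output
_____no_output_____
###Markdown
Now to the actual demo using `GraphPro`.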
###Code
from time import time
import numpy as np
import os
from graph_utils.GraphPro import GraphPro as gp
os.system('clear')
print('<----------Test Create---------->\n')
weights = [1, 2, 3, 4, 5, 6, 7, 8, 9]
graph = gp.create_graph(6, .75, weights)
graph.print_r()
print('-----------------Incremental-----------------')
data = graph.dynamic_incremental_random_vertex(weights)
graph.print_r()
graph.draw()
graph.draw(with_weight=False)
weights = [1, 2, 3, 4, 5]
graph = gp.create_graph(6, .75, weights, directed=False)
graph.print_r()
print('.........................')
t = time()
print(graph.apsp_dijkstra())
elapsed = time() - t
print("Time: ", elapsed)
graph.draw()
###Output
Source: [0. 2. 0. 4. 1. 3. 1. 4. 1. 5. 2. 3. 2. 5. 3. 4. 4. 5.]
Target: [2. 0. 4. 0. 3. 1. 4. 1. 5. 1. 3. 2. 5. 2. 4. 3. 5. 4.]
Weight: [2. 2. 5. 5. 2. 2. 3. 3. 1. 1. 5. 5. 3. 3. 4. 4. 4. 4.]
Vertex: [0. 1. 2. 3. 4. 5.]
.........................
[[0. 6. 2. 7. 5. 5.]
[6. 0. 4. 2. 3. 1.]
[2. 4. 0. 5. 7. 3.]
[7. 2. 5. 0. 4. 3.]
[5. 3. 7. 4. 0. 4.]
[5. 1. 3. 3. 4. 0.]]
Time: 0.005679607391357422
|
docs/notebooks/Performance_.ipynb | ###Markdown
One important issue that became evident to us (NFQ Solutions) during the last two months is that the Pandas Dataframe is not a very good container for data. This notebook is a short explanation of why we may have to reduce our now-intensive use of the Pandas Dataframe (it is now almost everywhere) and an exploration of some other solutions. None of this is a criticism of Pandas. It is a game changer, and I am a strong advocate for its use. It's just that we hit one of its limitations: some simple operations have too much overhead.
To be more precise, let's start by importing Pandas
###Code
import pandas as pd
import numpy as np
df = pd.DataFrame({'a': np.arange(1E6), 'b': np.arange(1E6)})
###Output
_____no_output_____
###Markdown
We have just created a relatively large dataframe with some dummy data, enough to prove my initial point. Let's see how much time it takes to add the two columns and insert the result into a third one.
###Code
%timeit -o df.c = df.a + df.b
###Output
3.1 ms ± 60 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
Is that fast or slow? Well, let's try to make the very same computation in a slightly different manner
###Code
a = df.a.values
b = df.b.values
%%timeit
c = a + b
###Output
2.06 ms ± 26.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
Compared to a simple sum of two numpy arrays, it is pretty fast. But we are adding two relatively large arrays. Let's try the exact same thing with smaller arrays.
###Code
df = pd.DataFrame({'a': np.arange(100), 'b': np.arange(100)})
%%timeit
df.c = df.a + df.b
a = df.a.values
b = df.b.values
%%timeit
c = a + b
###Output
896 ns ± 2.79 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
###Markdown
Now things have changed quite a lot. Just adding two arrays takes two orders of magnitude less time than adding them through the Pandas Dataframe. But this comparison is not fair at all. Those 145µs are not spent waiting: Pandas does lots of things with the value of the Series resulting from the sum before it inserts it into the dataframe. If we profile the execution of that simple sum, we'll see that almost a fifth of the time is spent on a function called `_sanitize_array`.
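If you want to reproduce the measurement yourself, profiling the assignment directly is enough; take the sketch below as illustrative, since the exact breakdown depends on the pandas version.
###Code
import cProfile

# Profile the small-frame assignment: most of the time goes into pandas
# bookkeeping (functions such as _sanitize_array), not into the numpy addition.
cProfile.run("df['c'] = df['a'] + df['b']", sort='cumtime')
###Output
_____no_output_____
###Markdown
A snakeviz view of the same kind of profile is shown below.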
###Code
from IPython.display import Image
Image(filename='_static/snakeviz_add.png')
###Output
_____no_output_____
###Markdown
The most important characteristic of Pandas is that it always does what it is supposed to do with data, regardless of how dirty, heterogeneous or sparse (you name it) your data is. And it does an amazing job at that. But the price we have to pay is those two orders of magnitude in time.
That is exactly what impacted the performance of our last project. The Dataframe is a very convenient container because it always does something that makes sense, therefore you have to code very little. For instance, take the `join` method of a dataframe. It does just what it has to do, and it is definitely not trivial. Unfortunately, that overhead is too much for us.
We are in the typical situation where abstractions are not for free. The higher the level, the slower the computation. This is a kind of *second law of Thermodynamics* applied to numerical computing. And there are abstractions that are tremendously useful to **us**. A Dataframe is not a dictionary of arrays. It can be indexed by row and by column, and it can operate as a whole, and on any imaginable portion of it. It can sort, group, join, merge... You name it. But if you want to compute the payment schedule of all the securities of an entire bank, you may need thousands of processors to have it done in less than six hours.
This is where I started thinking. There must be something in between. Something that is fast, but that is not just a dictionary of numpy arrays. And I started designing gtable
###Code
from gtable import Table
tb = Table({'a': np.arange(1E6), 'b': np.arange(1E6)})
tb
%%timeit
tb.c = tb.a + tb.b
###Output
3.76 ms ± 37.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
You can see that for large arrays, the computation time shadows the overhead. Let's see how well it does with smaller arrays
###Code
tb = Table({'a': np.arange(100), 'b': np.arange(100)})
%%timeit
tb.c = tb.a + tb.b
###Output
10.9 µs ± 67.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
###Markdown
We have improved by a factor of 7, which is crucial if that's the difference between running on one server or on seven. We can still improve the computation a little more if we fall back into some kind of *I know what I am doing* mode and we want to reuse memory to avoid allocations:
###Code
%%timeit
tb['a'] = tb['a'] + tb['b']
###Output
2.15 µs ± 14.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
###Markdown
Now the performance of arithmetic operations with gtable is closer to operating on plain arrays than to the overhead-driven performance of Pandas. You can seriously break the table if you really don't know what you are doing. But for obvious reasons, having this kind of performance trick is key to us.
Of course, these speedups come at a cost: features. Gtable is in its infancy. There are literally two afternoons of work on it, and the whole module fits within a single file with less than 300 lines of code. It is pure Python, and I have not started to seriously tune its performance. But the idea of having something in between a Dataframe and a dictionary of arrays, with support for sparse information, is appealing to say the least.
Let me demo a little of the capabilities of this tiny module of mine. Assume that we start with a small table with two columns
###Code
tb = Table({'a': pd.date_range('2000-01-01', freq='M', periods=10),
'b': np.random.randn(10)})
tb
tb.add_column('schedule', np.array(['first ']))
tb
###Output
_____no_output_____
###Markdown
I have been able to concatenate a full column in the horizontal direction with a single value, and it's part of the information that the printed value of the table gives. Storing the data and the indexes separately is a nice and efficient way of dealing with sparse data. We can visualize the table by converting it to a pandas Dataframe
###Code
tb
###Output
_____no_output_____
###Markdown
Gtable is not designed as a general tool for data analysis, but as an efficient data container. We can also concatenate data in the vertical direction efficiently, also keeping a single copy of data when necessary
###Code
tb1 = tb.copy()
tb1.schedule.values[0] = 'second'
tb.stitch(tb1)
tb
###Output
_____no_output_____
###Markdown
If you care a little about how it is done: the internal storage is just a list of arrays and a bitmap index. The bitmap index is interesting because some computations, like sorting or filtering, only involve the index. The contents of the table are stored within the `_data`, `_keys` and `_index` attributes
###Code
tb.data
tb.keys
tb.index
###Output
_____no_output_____
###Markdown
We can take some advantage of knowing the internal representation of the data to insert data into the table in an efficient way. Every attribute of the table corresponds to a column, and each column stores the data as a numpy array in `values` and a piece of the index in `index`.
###Code
b_col = tb.b
b_col
###Output
_____no_output_____
###Markdown
This means that it is relatively simple to make efficient computations with a whole column, or to add yet another column
###Code
tb.sum_b = b_col.values.cumsum()
tb.sum_b.values
tb['sum_b']
###Output
_____no_output_____
###Markdown
We'll see where it will go from here
###Code
%load_ext snakeviz
%%snakeviz
tb.a > tb.b
col = tb.a >= tb.b
col.values
###Output
_____no_output_____ |
labs/notebooks/non_linear_classifiers/exercise_1.ipynb | ###Markdown
Amazon Sentiment Data
To ease up the upcoming implementation exercise, examine and comment the following implementation of a log-linear model and its gradient update rule. Start by loading the Amazon sentiment corpus used in day 1
###Code
import lxmls.readers.sentiment_reader as srs
from lxmls.deep_learning.utils import AmazonData
corpus = srs.SentimentCorpus("books")
data = AmazonData(corpus=corpus)
data.datasets['train']
###Output
_____no_output_____
###Markdown
A Shallow Model: Log-Linear in Numpy
Compare the following numpy implementation of a log-linear model with the derivations seen in the previous sections. Introduce comments on the marked blocks, relating them to the corresponding algorithm steps.
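For reference, the gradient that the `update` method below implements is the standard one for the average categorical cross-entropy of a softmax (log-linear) model over a batch of size $B$:
$$\nabla_W L = \frac{1}{B}\sum_{n=1}^{B}\left(\tilde{p}_n - \mathbf{1}_{y_n}\right)x_n^{\top},\qquad \nabla_b L = \frac{1}{B}\sum_{n=1}^{B}\left(\tilde{p}_n - \mathbf{1}_{y_n}\right),$$
where $\tilde{p}_n$ is the vector of predicted class probabilities for example $n$ and $\mathbf{1}_{y_n}$ is the one-hot encoding of its gold class; the `error` variable holds exactly $(\tilde{p}_n - \mathbf{1}_{y_n})/B$.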
###Code
from lxmls.deep_learning.utils import Model, glorot_weight_init, index2onehot, logsumexp
import numpy as np
class NumpyLogLinear(Model):
def __init__(self, **config):
# Initialize parameters
weight_shape = (config['input_size'], config['num_classes'])
# after Xavier Glorot et al
self.weight = glorot_weight_init(weight_shape, 'softmax')
self.bias = np.zeros((1, config['num_classes']))
self.learning_rate = config['learning_rate']
def log_forward(self, input=None):
"""Forward pass of the computation graph"""
# Linear transformation
z = np.dot(input, self.weight.T) + self.bias
# Softmax implemented in log domain
log_tilde_z = z - logsumexp(z, axis=1, keepdims=True)
return log_tilde_z
def predict(self, input=None):
"""Prediction: most probable class index"""
return np.argmax(np.exp(self.log_forward(input)), axis=1)
def update(self, input=None, output=None):
"""Stochastic Gradient Descent update"""
# Probabilities of each class
class_probabilities = np.exp(self.log_forward(input))
batch_size, num_classes = class_probabilities.shape
# Error derivative at softmax layer
I = index2onehot(output, num_classes)
error = (class_probabilities - I) / batch_size
# Weight gradient
gradient_weight = np.zeros(self.weight.shape)
for l in range(batch_size):
gradient_weight += np.outer(error[l, :], input[l, :])
# Bias gradient
gradient_bias = np.sum(error, axis=0, keepdims=True)
# SGD update
self.weight = self.weight - self.learning_rate * gradient_weight
self.bias = self.bias - self.learning_rate * gradient_bias
###Output
_____no_output_____
###Markdown
Training Bench
Instantiate the model and data classes. Check the initial accuracy of the model. This should be close to 50% since we are on a binary prediction task and the model is not trained yet.
###Code
learning_rate = 0.05
num_epochs = 10
batch_size = 30
model = NumpyLogLinear(
input_size=corpus.nr_features,
num_classes=2,
learning_rate=learning_rate
)
# Define number of epochs and batch size
num_epochs = 10
batch_size = 30
# Get batch iterators for train and test
train_batches = data.batches('train', batch_size=batch_size)
test_set = data.batches('test', batch_size=None)[0]
# Get initial accuracy
hat_y = model.predict(input=test_set['input'])
accuracy = 100*np.mean(hat_y == test_set['output'])
print("Initial accuracy %2.2f %%" % accuracy)
###Output
Initial accuracy 51.25 %
###Markdown
Train the model with simple batch stochastic gradient descent. Be sure to understand each of the steps involved, including the code running inside the model class. We will be working on a more complex version of the model in the upcoming exercise.
###Code
# Epoch loop
for epoch in range(num_epochs):
# Batch loop
for batch in train_batches:
model.update(input=batch['input'], output=batch['output'])
# Prediction for this epoch
hat_y = model.predict(input=test_set['input'])
# Evaluation
accuracy = 100*np.mean(hat_y == test_set['output'])
# Inform user
print("Epoch %d: accuracy %2.2f %%" % (epoch+1, accuracy))
###Output
Epoch 1: accuracy 60.50 %
Epoch 2: accuracy 74.00 %
Epoch 3: accuracy 76.75 %
Epoch 4: accuracy 77.75 %
Epoch 5: accuracy 78.25 %
Epoch 6: accuracy 79.00 %
Epoch 7: accuracy 79.75 %
Epoch 8: accuracy 79.75 %
Epoch 9: accuracy 81.25 %
Epoch 10: accuracy 81.75 %
###Markdown
Amazon Sentiment Data
To ease up the upcoming implementation exercise, examine and comment the following implementation of a log-linear model and its gradient update rule. Start by loading the Amazon sentiment corpus used in day 1
###Code
import lxmls.readers.sentiment_reader as srs
from lxmls.deep_learning.utils import AmazonData
corpus = srs.SentimentCorpus("books")
data = AmazonData(corpus=corpus)
data.datasets['train']
###Output
_____no_output_____
###Markdown
A Shallow Model: Log-Linear in Numpy Compare the following numpy implementation of a log-linear model with the derivations seen in the previous sections. Introduce comments on the blocks marked with relating them to the corresponding algorithm steps.
###Code
from lxmls.deep_learning.utils import Model, glorot_weight_init, index2onehot, logsumexp
import numpy as np
class NumpyLogLinear(Model):
def __init__(self, **config):
# Initialize parameters
weight_shape = (config['input_size'], config['num_classes'])
# after Xavier Glorot et al
self.weight = glorot_weight_init(weight_shape, 'softmax')
self.bias = np.zeros((1, config['num_classes']))
self.learning_rate = config['learning_rate']
def log_forward(self, input=None):
"""Forward pass of the computation graph"""
# Linear transformation
z = np.dot(input, self.weight.T) + self.bias
# Softmax implemented in log domain
log_tilde_z = z - logsumexp(z, axis=1, keepdims=True)
return log_tilde_z
def predict(self, input=None):
"""Prediction: most probable class index"""
return np.argmax(np.exp(self.log_forward(input)), axis=1)
def update(self, input=None, output=None):
"""Stochastic Gradient Descent update"""
# Probabilities of each class
class_probabilities = np.exp(self.log_forward(input))
batch_size, num_classes = class_probabilities.shape
# Error derivative at softmax layer
I = index2onehot(output, num_classes)
error = (class_probabilities - I) / batch_size
# Weight gradient
gradient_weight = np.zeros(self.weight.shape)
for l in range(batch_size):
gradient_weight += np.outer(error[l, :], input[l, :])
# Bias gradient
gradient_bias = np.sum(error, axis=0, keepdims=True)
# SGD update
self.weight = self.weight - self.learning_rate * gradient_weight
self.bias = self.bias - self.learning_rate * gradient_bias
###Output
_____no_output_____
###Markdown
Training Bench
Instantiate the model and data classes. Check the initial accuracy of the model. This should be close to 50% since we are on a binary prediction task and the model is not trained yet.
###Code
learning_rate = 0.05
num_epochs = 10
batch_size = 30
model = NumpyLogLinear(
input_size=corpus.nr_features,
num_classes=2,
learning_rate=learning_rate
)
# Define number of epochs and batch size
num_epochs = 10
batch_size = 30
# Get batch iterators for train and test
train_batches = data.batches('train', batch_size=batch_size)
test_set = data.batches('test', batch_size=None)[0]
# Get initial accuracy
hat_y = model.predict(input=test_set['input'])
accuracy = 100*np.mean(hat_y == test_set['output'])
print("Initial accuracy %2.2f %%" % accuracy)
###Output
_____no_output_____
###Markdown
Train the model with simple batch stochastic gradient descent. Be sure to understand each of the steps involved, including the code running inside the model class. We will be working on a more complex version of the model in the upcoming exercise.
###Code
# Epoch loop
for epoch in range(num_epochs):
# Batch loop
for batch in train_batches:
model.update(input=batch['input'], output=batch['output'])
# Prediction for this epoch
hat_y = model.predict(input=test_set['input'])
# Evaluation
accuracy = 100*np.mean(hat_y == test_set['output'])
# Inform user
print("Epoch %d: accuracy %2.2f %%" % (epoch+1, accuracy))
###Output
_____no_output_____
###Markdown
Amazon Sentiment Data
To ease up the upcoming implementation exercise, examine and comment the following implementation of a log-linear model and its gradient update rule. Start by loading the Amazon sentiment corpus used in day 1
###Code
import lxmls.readers.sentiment_reader as srs
from lxmls.deep_learning.utils import AmazonData
corpus = srs.SentimentCorpus("books")
data = AmazonData(corpus=corpus)
data.datasets['train']
###Output
_____no_output_____
###Markdown
A Shallow Model: Log-Linear in Numpy
Compare the following numpy implementation of a log-linear model with the derivations seen in the previous sections. Introduce comments on the marked blocks, relating them to the corresponding algorithm steps.
###Code
from lxmls.deep_learning.utils import Model, glorot_weight_init, index2onehot, logsumexp
import numpy as np
class NumpyLogLinear(Model):
def __init__(self, **config):
# Initialize parameters
weight_shape = (config['input_size'], config['num_classes'])
# after Xavier Glorot et al
self.weight = glorot_weight_init(weight_shape, 'softmax')
self.bias = np.zeros((1, config['num_classes']))
self.learning_rate = config['learning_rate']
def log_forward(self, input=None):
"""Forward pass of the computation graph"""
# Linear transformation
z = np.dot(input, self.weight.T) + self.bias
# Softmax implemented in log domain
log_tilde_z = z - logsumexp(z, axis=1, keepdims=True)
return log_tilde_z
def predict(self, input=None):
"""Prediction: most probable class index"""
return np.argmax(np.exp(self.log_forward(input)), axis=1)
def update(self, input=None, output=None):
"""Stochastic Gradient Descent update"""
# Probabilities of each class
class_probabilities = np.exp(self.log_forward(input))
batch_size, num_classes = class_probabilities.shape
# Error derivative at softmax layer
I = index2onehot(output, num_classes)
error = (class_probabilities - I) / batch_size
# Weight gradient
gradient_weight = np.zeros(self.weight.shape)
for l in range(batch_size):
gradient_weight += np.outer(error[l, :], input[l, :])
# Bias gradient
gradient_bias = np.sum(error, axis=0, keepdims=True)
# SGD update
self.weight = self.weight - self.learning_rate * gradient_weight
self.bias = self.bias - self.learning_rate * gradient_bias
###Output
_____no_output_____
###Markdown
Training Bench
Instantiate the model and data classes. Check the initial accuracy of the model. This should be close to 50% since we are on a binary prediction task and the model is not trained yet.
###Code
learning_rate = 0.05
num_epochs = 10
batch_size = 30
model = NumpyLogLinear(
input_size=corpus.nr_features,
num_classes=2,
learning_rate=learning_rate
)
# Define number of epochs and batch size
num_epochs = 10
batch_size = 30
# Get batch iterators for train and test
train_batches = data.batches('train', batch_size=batch_size)
test_set = data.batches('test', batch_size=None)[0]
# Get initial accuracy
hat_y = model.predict(input=test_set['input'])
accuracy = 100*np.mean(hat_y == test_set['output'])
print("Initial accuracy %2.2f %%" % accuracy)
###Output
_____no_output_____
###Markdown
Train the model with simple batch stochastic gradient descent. Be sure to understand each of the steps involved, including the code running inside the model class. We will be working on a more complex version of the model in the upcoming exercise.
###Code
# Epoch loop
for epoch in range(num_epochs):
# Batch loop
for batch in train_batches:
model.update(input=batch['input'], output=batch['output'])
# Prediction for this epoch
hat_y = model.predict(input=test_set['input'])
# Evaluation
accuracy = 100*np.mean(hat_y == test_set['output'])
# Inform user
print("Epoch %d: accuracy %2.2f %%" % (epoch+1, accuracy))
###Output
_____no_output_____
###Markdown
Amazon Sentiment Data
To ease up the upcoming implementation exercise, examine and comment the following implementation of a log-linear model and its gradient update rule. Start by loading the Amazon sentiment corpus used in day 1
###Code
%load_ext autoreload
%autoreload 2
import lxmls.readers.sentiment_reader as srs
from lxmls.deep_learning.utils import AmazonData
corpus = srs.SentimentCorpus("books")
data = AmazonData(corpus=corpus)
data.datasets['train']
###Output
_____no_output_____
###Markdown
A Shallow Model: Log-Linear in Numpy
Compare the following numpy implementation of a log-linear model with the derivations seen in the previous sections. Introduce comments on the marked blocks, relating them to the corresponding algorithm steps.
###Code
from lxmls.deep_learning.utils import Model, glorot_weight_init, index2onehot, logsumexp
import numpy as np
class NumpyLogLinear(Model):
def __init__(self, **config):
# Initialize parameters
weight_shape = (config['input_size'], config['num_classes'])
# after Xavier Glorot et al
self.weight = glorot_weight_init(weight_shape, 'softmax')
self.bias = np.zeros((1, config['num_classes']))
self.learning_rate = config['learning_rate']
def log_forward(self, input=None):
"""Forward pass of the computation graph"""
# Linear transformation
z = np.dot(input, self.weight.T) + self.bias
# Softmax implemented in log domain
log_tilde_z = z - logsumexp(z, axis=1, keepdims=True)
return log_tilde_z
def predict(self, input=None):
"""Prediction: most probable class index"""
return np.argmax(np.exp(self.log_forward(input)), axis=1)
def update(self, input=None, output=None):
"""Stochastic Gradient Descent update"""
#
class_probabilities = np.exp(self.log_forward(input))
batch_size, num_classes = class_probabilities.shape
#
I = index2onehot(output, num_classes)
error = (class_probabilities - I) / batch_size
#
gradient_weight = np.zeros(self.weight.shape)
for l in range(batch_size):
gradient_weight += np.outer(error[l, :], input[l, :])
#
gradient_bias = np.sum(error, axis=0, keepdims=True)
#
self.weight = self.weight - self.learning_rate * gradient_weight
self.bias = self.bias - self.learning_rate * gradient_bias
###Output
_____no_output_____
###Markdown
Training Bench
Instantiate the model and data classes. Check the initial accuracy of the model. This should be close to 50% since we are on a binary prediction task and the model is not trained yet.
###Code
learning_rate = 0.05
num_epochs = 10
batch_size = 30
model = NumpyLogLinear(
input_size=corpus.nr_features,
num_classes=2,
learning_rate=learning_rate
)
# Define number of epochs and batch size
num_epochs = 10
batch_size = 30
# Get batch iterators for train and test
train_batches = data.batches('train', batch_size=batch_size)
test_set = data.batches('test', batch_size=None)[0]
# Get initial accuracy
hat_y = model.predict(input=test_set['input'])
accuracy = 100*np.mean(hat_y == test_set['output'])
print("Initial accuracy %2.2f %%" % accuracy)
###Output
_____no_output_____
###Markdown
Train the model with simple batch stochastic gradient descent. Be sure to understand each of the steps involved, including the code running inside the model class. We will be working on a more complex version of the model in the upcoming exercise.
###Code
# Epoch loop
for epoch in range(num_epochs):
# Batch loop
for batch in train_batches:
model.update(input=batch['input'], output=batch['output'])
# Prediction for this epoch
hat_y = model.predict(input=test_set['input'])
# Evaluation
accuracy = 100*np.mean(hat_y == test_set['output'])
# Inform user
print("Epoch %d: accuracy %2.2f %%" % (epoch+1, accuracy))
###Output
_____no_output_____
###Markdown
Amazon Sentiment Data To ease into the upcoming implementation exercise, examine and comment the following implementation of a log-linear model and its gradient update rule. Start by loading the Amazon sentiment corpus used in day 1.
###Code
%load_ext autoreload
%autoreload 2
import lxmls.readers.sentiment_reader as srs
from lxmls.deep_learning.utils import AmazonData
corpus = srs.SentimentCorpus("books")
data = AmazonData(corpus=corpus)
data.datasets['train']
###Output
_____no_output_____
###Markdown
A Shallow Model: Log-Linear in Numpy Compare the following numpy implementation of a log-linear model with the derivations seen in the previous sections. Introduce comments on the blocks marked with `#`, relating them to the corresponding algorithm steps.
###Code
from lxmls.deep_learning.utils import Model, glorot_weight_init, index2onehot, logsumexp
import numpy as np
class NumpyLogLinear(Model):
def __init__(self, **config):
# Initialize parameters
weight_shape = (config['input_size'], config['num_classes'])
# after Xavier Glorot et al
self.weight = glorot_weight_init(weight_shape, 'softmax')
self.bias = np.zeros((1, config['num_classes']))
self.learning_rate = config['learning_rate']
def log_forward(self, input=None):
"""Forward pass of the computation graph"""
# Linear transformation
z = np.dot(input, self.weight.T) + self.bias
# Softmax implemented in log domain
log_tilde_z = z - logsumexp(z, axis=1, keepdims=True)
return log_tilde_z
def predict(self, input=None):
"""Prediction: most probable class index"""
return np.argmax(np.exp(self.log_forward(input)), axis=1)
def update(self, input=None, output=None):
"""Stochastic Gradient Descent update"""
# Probabilities of each class
class_probabilities = np.exp(self.log_forward(input))
batch_size, num_classes = class_probabilities.shape
# Error derivative at softmax layer
I = index2onehot(output, num_classes)
error = (class_probabilities - I) / batch_size
# Weight gradient
gradient_weight = np.zeros(self.weight.shape)
for l in range(batch_size):
gradient_weight += np.outer(error[l, :], input[l, :])
# Bias gradient
gradient_bias = np.sum(error, axis=0, keepdims=True)
# SGD update
self.weight = self.weight - self.learning_rate * gradient_weight
self.bias = self.bias - self.learning_rate * gradient_bias
###Output
_____no_output_____
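###Markdown
A side note of my own (not part of the original lab text): for a batch of size $B$ with one-hot targets $I$ and softmax probabilities $p$, the update above implements the cross-entropy gradients $\partial L/\partial z = (p - I)/B$, accumulating the weight gradient as the sum of per-example outer products of this error with the input, and the bias gradient as the error summed over the batch. A quick self-contained check that the explicit batch loop matches a single vectorized matrix product:
###Code
# Hedged sketch (my own, not lab code): the per-example outer-product loop used in
# NumpyLogLinear.update is equivalent to one matrix multiplication, error.T @ input.
import numpy as np
rng = np.random.RandomState(0)
error_demo = rng.randn(5, 2)   # stand-in for a (batch_size, num_classes) error term
input_demo = rng.randn(5, 7)   # stand-in for (batch_size, input_size) inputs
loop_grad = np.zeros((2, 7))
for l in range(5):
    loop_grad += np.outer(error_demo[l, :], input_demo[l, :])
vectorized_grad = error_demo.T @ input_demo
assert np.allclose(loop_grad, vectorized_grad)
###Output
_____no_output_____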
###Markdown
Training Bench Instantiate model and data classes. Check the initial accuracy of the model. This should be close to 50% since we are on a binary prediction task and the model is not trained yet.
###Code
learning_rate = 0.05
num_epochs = 10
batch_size = 30
model = NumpyLogLinear(
input_size=corpus.nr_features,
num_classes=2,
learning_rate=learning_rate
)
# Define number of epochs and batch size
num_epochs = 10
batch_size = 30
# Get batch iterators for train and test
train_batches = data.batches('train', batch_size=batch_size)
test_set = data.batches('test', batch_size=None)[0]
# Get initial accuracy
hat_y = model.predict(input=test_set['input'])
accuracy = 100*np.mean(hat_y == test_set['output'])
print("Initial accuracy %2.2f %%" % accuracy)
###Output
_____no_output_____
###Markdown
Train the model with simple batch stochastic gradient descent. Be sure to understand each of the steps involved, including the code running inside the model class. We will be working on a more complex version of the model in the upcoming exercise.
###Code
# Epoch loop
for epoch in range(num_epochs):
# Batch loop
for batch in train_batches:
model.update(input=batch['input'], output=batch['output'])
# Prediction for this epoch
hat_y = model.predict(input=test_set['input'])
# Evaluation
accuracy = 100*np.mean(hat_y == test_set['output'])
# Inform user
print("Epoch %d: accuracy %2.2f %%" % (epoch+1, accuracy))
###Output
_____no_output_____ |
Test_Score_VGG.ipynb | ###Markdown
Scoring your trained model. In the cell below, please load your model into `model`. Also if you used an image size for your input images that *isn't* 224x224, you'll need to set `image_size` to the size you used. The scoring code assumes square input images. For example, this is how I loaded in my checkpoint:

```python
import torch
from torch import nn
import torch.nn.functional as F
from torchvision import models

class FFClassifier(nn.Module):
    def __init__(self, in_features, hidden_features, out_features, drop_prob=0.1):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop = nn.Dropout(p=drop_prob)

    def forward(self, x):
        x = self.drop(F.relu(self.fc1(x)))
        x = self.fc2(x)
        x = F.log_softmax(x, dim=1)
        return x

def load_checkpoint(checkpoint_path):
    checkpoint = torch.load(checkpoint_path)
    model = models.vgg16(pretrained=False)
    for param in model.parameters():
        param.requires_grad = False
    # Put the classifier on the pretrained network
    model.classifier = FFClassifier(25088, checkpoint['hidden'], 102)
    model.load_state_dict(checkpoint['state_dict'])
    return model

model = load_checkpoint('/home/workspace/classifier.pt')
```

Your exact code here will depend on how you defined your network in the project. Make sure you use the absolute path to your checkpoint, which should have been uploaded to the `/home/workspace` directory. Run the cell, then after loading the data, press "Test Code" below. This can take a few minutes or more depending on the size of your network. Your model needs to reach **at least 20% accuracy** on the test set to be recorded.
###Code
import torch
from torch import nn
import torch.nn.functional as F
from torchvision import models
from collections import OrderedDict
def load_checkpoint(checkpoint_path):
checkpoint = torch.load(checkpoint_path,map_location='cpu')
model = models.vgg19(pretrained=True)
for param in model.parameters():
param.requires_grad = False
# Put the classifier on the pretrained network
model.classifier = nn.Sequential(OrderedDict([
('fc1', nn.Linear(25088, 4096)),
('relu', nn.ReLU()),
('fc2', nn.Linear(4096, 102)),
('output', nn.LogSoftmax(dim=1))
]))
model.load_state_dict(checkpoint['state_dict'],strict=False)
return model
# Load your model to this variable
model = load_checkpoint('/home/workspace/classifier.pth')
model.eval()
# If you used something other than 224x224 cropped images, set the correct size here
image_size = 224
# Values you used for normalizing the images. Default here are for
# pretrained models from torchvision.
norm_mean = [0.485, 0.456, 0.406]
norm_std = [0.229, 0.224, 0.225]
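# --- Hedged notes, my own addition (not part of the grading harness) ---
# The checkpoint loaded above is assumed to have been saved during training with
# something along the lines of:
#   torch.save({'state_dict': model.state_dict()}, 'classifier.pth')
# so that load_checkpoint() can restore the weights via the 'state_dict' key.
# The grader presumably builds its test transform from the values declared here,
# roughly like:
#   transforms.Compose([transforms.Resize(255), transforms.CenterCrop(image_size),
#                       transforms.ToTensor(), transforms.Normalize(norm_mean, norm_std)])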
###Output
_____no_output_____ |
01_deck.ipynb | ###Markdown
Deck> Deck of cards
###Code
#hide
from nbdev.showdoc import *
#export
from deck_of_cards.card import Card
import random
# export
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of Card objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: Card
"""
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
###Output
_____no_output_____
###Markdown
A Deck of cards is a collection of Card objects:
###Code
deck = Deck()
assert isinstance(deck.pop_card(), Card)
show_doc(Deck.remove_card)
###Output
_____no_output_____
###Markdown
If we remove a card from the Deck we can verify that it no longer exists:
###Code
card23 = Card(2, 3)
deck.remove_card(card23)
assert card23 not in deck.cards
###Output
_____no_output_____
###Markdown
However, another card that we haven't removed, such as 10 of Hearts will still be in the Deck of cards because we haven't removed it.
###Code
c = Card(2, 10)
assert c in deck.cards
c
###Output
_____no_output_____
###Markdown
Deck> API for Deck
###Code
#export
from deck_of_cards.card import Card
import random
#export
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of Card objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: Card
"""
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
###Output
_____no_output_____
###Markdown
A `Deck` of cards is a collection of `Card` objects
###Code
deck = Deck()
assert isinstance(deck.pop_card(), Card)
show_doc(Deck.remove_card)
###Output
_____no_output_____
###Markdown
If we remove a card from a deck, we can verify that it no longer exists.
###Code
card23 = Card(2, 3)
deck.remove_card(card23)
assert card23 not in deck.cards
###Output
_____no_output_____
###Markdown
However, another card that we haven't removed, such as the `10 of hearts` will still be in the `Deck` of cards, because we haven't removed it.
###Code
c = Card(2, 10)
assert c in deck.cards
str(c)
###Output
_____no_output_____
###Markdown
Deck> Playing Cards
###Code
#export
from deck_of_cards.card import Card
import random
#export
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of Card objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def __repr__(self):
        return self.__str__()
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: Card
"""
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
###Output
_____no_output_____
###Markdown
A Deck of cards is a collection of `Card` objects:
###Code
deck = Deck()
assert isinstance(deck.pop_card(), Card)
show_doc(Deck.remove_card)
###Output
_____no_output_____
###Markdown
If we remove a card from the Deck we can verify that it no longer exists:
###Code
card23 = Card(2, 3)
deck.remove_card(card23)
assert card23 not in deck.cards
###Output
_____no_output_____
###Markdown
However, another card that we haven't removed, such as the `10 of hearts` will be in the Deck of cards because we haven't removed it:
###Code
c = Card(2, 10)
assert c in deck.cards
c
###Output
_____no_output_____
###Markdown
Deck> more details here
###Code
from nbdev import *
#export
from deck_of_cards.card import Card
import random
#export
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of Card objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: Card
"""
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
###Output
_____no_output_____
###Markdown
A Deck of cards is a collection of `Card` objects
###Code
deck = Deck()
assert isinstance(deck.pop_card(), Card)
show_doc(Deck.remove_card)
card23 = Card(2,3)
deck.remove_card(card23)
assert card23 not in deck.cards
c = Card(2,10)
assert c in deck.cards
c
#export
class Hand(Deck):
"""Represents a hand of playing cards."""
def __init__(self, label=''):
self.cards = []
self.label = label
def find_defining_class(obj, method_name):
"""Finds and returns the class object that will provide
the definition of method_name (as a string) if it is
invoked on obj.
obj: any python object
method_name: string method name
"""
for ty in type(obj).mro():
if method_name in ty.__dict__:
return ty
return None
deck = Deck()
deck.shuffle()
hand = Hand()
print(find_defining_class(hand, 'shuffle'))
print(find_defining_class(hand, 'shuffle'))
deck.move_cards(hand, 5)
hand.sort()
print(hand)
def find_defining_class(obj, method_name):
"""Finds and returns the class object that will provide
the definition of method_name (as a string) if it is
invoked on obj.
obj: any python object
method_name: string method name
"""
for ty in type(obj).mro():
if method_name in ty.__dict__:
return ty
return None
#hide
type(hand).mro()[1].__dict__
###Output
_____no_output_____
###Markdown
Deck> Playing Cards. This module is taken from the Card module in the ThinkPython2 repo.
###Code
#hide
from nbdev import *
#export
from deck_of_cardsv1.card import Card
import random
#export
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of Card objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: Card
"""
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
###Output
_____no_output_____
###Markdown
A Deck of cards is a collection of `Card` objects
###Code
deck = Deck()
deck.pop_card()
assert isinstance(deck.pop_card(), Card)
show_doc(Deck.remove_card)
###Output
_____no_output_____
###Markdown
If we remove a card from the Deck we can verify that it no longer exists.
###Code
deck = Deck()
c = Card(2,3)
deck.remove_card(c)
c not in deck.cards
assert c not in deck.cards
###Output
_____no_output_____
###Markdown
However, another card that we haven't removed, such as the 10 of hearts will still be in the Deck of cards because we haven't removed it.
###Code
c2 = Card(2,10)
c2 not in deck.cards
assert c2 in deck.cards
###Output
_____no_output_____
###Markdown
Deck> A class representing a DECK of playing cards
###Code
# export
from test_cards_demo.card import Card
import random
#export
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of Card objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: Card
"""
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
###Output
_____no_output_____
###Markdown
A Deck of cards is a collection of Card objects
###Code
deck = Deck()
assert isinstance(deck.pop_card(), Card)
show_doc(Deck.remove_card)
###Output
_____no_output_____
###Markdown
If we remove a card from the Deck we can verify that it no longer exists
###Code
card23 = Card(2,3)
deck.remove_card(card23)
assert card23 not in deck.cards
###Output
_____no_output_____
###Markdown
Otherwise the deck is untouched, so any other card we look for should still be inside, as confirmed below:
###Code
c = Card(2,10)
assert c in deck.cards
c
###Output
_____no_output_____
###Markdown
Deck> Playing Cards
###Code
#export
from deck_of_cards.cardz import Card
import random
#export
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of Card objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: Card
"""
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
###Output
_____no_output_____
###Markdown
A Deck of cards is a collection of `Card` objects:
###Code
deck = Deck()
assert isinstance(deck.pop_card(), Card)
show_doc(Deck.remove_card)
###Output
_____no_output_____
###Markdown
If we remove a card from the Deck we can verify that it no longer exists:
###Code
card23 = Card(2, 3)
deck.remove_card(card23)
assert card23 not in deck.cards
###Output
_____no_output_____
###Markdown
However, another card that we haven't removed, such as the `10 of hearts` will still be in the Deck of cards because we haven't removed it:
###Code
c = Card(2,10)
assert c in deck.cards
c
###Output
_____no_output_____
###Markdown
Deck> Playing Cards
###Code
#hide
from nbdev.showdoc import *
#export
from deck_of_cards.card import Card
import random
#export
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of Card objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: Card
"""
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
deck = Deck()
assert isinstance(deck.pop_card(), Card)
###Output
_____no_output_____
###Markdown
deck> Deck of cards
###Code
#hide
%load_ext autoreload
%autoreload 2
#export
from sampleproj.card import Card
import random
#export
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of Card objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: Card
"""
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
###Output
_____no_output_____
###Markdown
A Deck of cards is a collection of Card objects:
###Code
deck = Deck()
assert isinstance(deck.pop_card(), Card)
show_doc(Deck.remove_card)
###Output
_____no_output_____
###Markdown
If we remove a card from the Deck we can verify that it no longer exists:
###Code
card23 = Card(2, 3)
deck.remove_card(card23)
assert card23 not in deck.cards
###Output
_____no_output_____
###Markdown
However, another card that we haven't removed, such as the 10 of hearts will still be in the Deck of cards because we haven't removed it:
###Code
c = Card(2,10)
assert c in deck.cards
c
#hide
from nbdev.export import notebook2script # Same as running nbdev_build_lib
notebook2script()
###Output
Converted 00_card.ipynb.
Converted 01_deck.ipynb.
Converted index.ipynb.
###Markdown
Deck> Playing Cards
###Code
#|export
from deck_of_cards.card import Card
import random
#|export
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of Card objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: Card
"""
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
###Output
_____no_output_____
###Markdown
A Deck of cards is a collection of `Card` objects:
###Code
deck = Deck()
assert isinstance(deck.pop_card(), Card)
show_doc(Deck.remove_card)
###Output
_____no_output_____
###Markdown
If we remove a card from the Deck we can verify that it no longer exists:
###Code
card23 = Card(2, 3)
deck.remove_card(card23)
assert card23 not in deck.cards
###Output
_____no_output_____
###Markdown
However, another card that we haven't removed, such as the `10 of hearts` will still be in the Deck of cards because we haven't removed it:
###Code
c = Card(2,10)
assert c in deck.cards
c
###Output
_____no_output_____
###Markdown
Deck> Playing Cards
###Code
#export
from deck_of_cards.card import Card
import random
#export
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of `Card` objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: Card
"""
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
###Output
_____no_output_____
###Markdown
A Deck of cards is a collection of Card objects:
###Code
deck = Deck()
assert isinstance(deck.pop_card(), Card)
show_doc(Deck.remove_card)
###Output
_____no_output_____
###Markdown
If we remove a card from the Deck we can verify that it no longer exists:
###Code
card23 = Card(2, 3)
deck.remove_card(card23)
assert card23 not in deck.cards
###Output
_____no_output_____
###Markdown
However, another card that we haven't removed, such as the 10 of hearts will still be in the Deck of cards because we haven't removed it:
###Code
c = Card(2,10)
assert c in deck.cards
c
from nbdev.export import notebook2script; notebook2script()
###Output
Converted 00_card.ipynb.
Converted 01_deck.ipynb.
Converted index.ipynb.
###Markdown
Deck> Here, we define a separate class for a deck of cards.
###Code
#export
from deck_of_cards.card import *
import random
#export
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of Card objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: Card
"""
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
deck = Deck()
assert isinstance(deck.pop_card(), Card)
show_doc(Deck.remove_card)
###Output
_____no_output_____
###Markdown
Deck> Playing Cards
###Code
#export
from deck_of_cards.card import Card
import random
#export
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of Card objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: Card
"""
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
###Output
_____no_output_____
###Markdown
A Deck of cards is a collection of `Card` objects:
###Code
deck = Deck()
assert isinstance(deck.pop_card(), Card)
show_doc(Deck.remove_card)
###Output
_____no_output_____
###Markdown
If we remove a card from the Deck we can verify that it no longer exists:
###Code
card23 = Card(2, 3)
deck.remove_card(card23)
assert card23 not in deck.cards
###Output
_____no_output_____
###Markdown
However, another card that we haven't removed, such as the `10 of hearts` will still be in the Deck of cards because we haven't removed it:
###Code
c = Card(2,10)
assert c in deck.cards
c
###Output
_____no_output_____
###Markdown
Deck> Playing cards
###Code
# export
from deck_of_cards.card import Card
import random
#hide
from nbdev.showdoc import *
# export
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of Card objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: Card
"""
if card in self.cards:
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
deck = Deck()
assert isinstance(deck.pop_card(), Card)
show_doc(Deck.remove_card)
card23 = Card(2, 3)
deck.remove_card(card23)
assert card23 not in deck.cards
card2_10 = Card(2, 10)
assert card2_10 in deck.cards
###Output
_____no_output_____
###Markdown
Deck> Implements the deck class of the ThinkPython example.
###Code
#hide
from nbdev.showdoc import *
from nbdev.export import notebook2script; notebook2script()
from deck_of_cards.card import Card
%load_ext autoreload
%autoreload 2
#export
import random
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of Card objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: Card
"""
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
###Output
_____no_output_____
###Markdown
`Deck` is a class that represents a complete deck of cards.
###Code
print(Deck())
show_doc(Deck.remove_card)
deck = Deck()
c = Card(suit=1, rank=3)
deck.remove_card(c)
###Output
_____no_output_____
###Markdown
Here we can see that '3 of Diamonds' is now missing from the deck
###Code
print(deck)
###Output
Ace of Clubs
2 of Clubs
3 of Clubs
4 of Clubs
5 of Clubs
6 of Clubs
7 of Clubs
8 of Clubs
9 of Clubs
10 of Clubs
Jack of Clubs
Queen of Clubs
King of Clubs
Ace of Diamonds
2 of Diamonds
4 of Diamonds
5 of Diamonds
6 of Diamonds
7 of Diamonds
8 of Diamonds
9 of Diamonds
10 of Diamonds
Jack of Diamonds
Queen of Diamonds
King of Diamonds
Ace of Hearts
2 of Hearts
3 of Hearts
4 of Hearts
5 of Hearts
6 of Hearts
7 of Hearts
8 of Hearts
9 of Hearts
10 of Hearts
Jack of Hearts
Queen of Hearts
King of Hearts
Ace of Spades
2 of Spades
3 of Spades
4 of Spades
5 of Spades
6 of Spades
7 of Spades
8 of Spades
9 of Spades
10 of Spades
Jack of Spades
Queen of Spades
King of Spades
###Markdown
To test that this behaviour is maintained in the future:
###Code
assert c not in deck.cards
notebook2script()
###Output
Converted 00_card.ipynb.
Converted 01_deck.ipynb.
Converted index.ipynb.
###Markdown
Deck> Playing Cards
###Code
# export
from deck_of_cards.card import Card
import random
# export
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of Card objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: Card
"""
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
###Output
_____no_output_____
###Markdown
A `Deck` of cards is a collection of Card objects:
###Code
deck = Deck()
assert isinstance(deck.pop_card(), Card)
show_doc(Deck.remove_card)
###Output
_____no_output_____
###Markdown
If we remove a card from a deck, we can verify that it no longer exists:
###Code
card23 = Card(2, 3)
deck.remove_card(card23)
assert card23 not in deck.cards
###Output
_____no_output_____
###Markdown
However, another card that we haven't removed, such as the 10 of hearts will still be in the Deck of cards because we haven't removed it:
###Code
c = Card(2,10)
assert c in deck.cards
print(c)
###Output
10 of Hearts
###Markdown
Deck> Playing cards.
###Code
#export
from deck_of_cards.mycard import MyCard
import random
#export
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of Card objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = MyCard(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: MyCard
"""
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
###Output
_____no_output_____
###Markdown
A Deck of cards is a collection of `MyCard` objects. For example:
###Code
deck = Deck()
assert isinstance(deck.pop_card(), MyCard)
show_doc(Deck.remove_card)
###Output
_____no_output_____
###Markdown
You can do comparisons of cards, too!
###Code
card23 = MyCard(2, 3)
deck.remove_card(card23)
assert card23 not in deck.cards
c = MyCard(2,10)
assert c in deck.cards
c
#hide
from nbdev.export import notebook2script; notebook2script()
###Output
Converted 00_card.ipynb.
Converted 01_deck.ipynb.
Converted 02_mycard.ipynb.
Converted index.ipynb.
###Markdown
Deck> Playing Cards
###Code
#export
import random
from deck_of_cards.card import Card
class Deck:
"""Represents a deck of cards.
Attributes:
cards: list of Card objects.
"""
def __init__(self):
"""Initializes the Deck with 52 cards.
"""
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
"""Returns a string representation of the deck.
"""
res = []
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
def add_card(self, card):
"""Adds a card to the deck.
card: Card
"""
self.cards.append(card)
def remove_card(self, card):
"""Removes a card from the deck or raises exception if it is not there.
card: Card
"""
self.cards.remove(card)
def pop_card(self, i=-1):
"""Removes and returns a card from the deck.
i: index of the card to pop; by default, pops the last card.
"""
return self.cards.pop(i)
def shuffle(self):
"""Shuffles the cards in this deck."""
random.shuffle(self.cards)
def sort(self):
"""Sorts the cards in ascending order."""
self.cards.sort()
def move_cards(self, hand, num):
"""Moves the given number of cards from the deck into the Hand.
hand: destination Hand object
num: integer number of cards to move
"""
for i in range(num):
hand.add_card(self.pop_card())
###Output
_____no_output_____
###Markdown
A Deck of cards is a collection of `Card` objects.
###Code
show_doc(Deck.remove_card, disp=True)
###Output
_____no_output_____
###Markdown
If we remove a card from the deck, we can verify that it no longer exists:
###Code
deck = Deck()
card23 = Card(2,3)
print(card23)
print(card23 in deck.cards)
deck.remove_card(card23)
print(card23 in deck.cards)
#hide
assert card23 not in deck.cards
###Output
_____no_output_____ |
001-Jupyter/001-Tutorials/003-IPython-in-Depth/exercises/Interactive Widgets/Widget Exercises.ipynb | ###Markdown
Widget Exercises Widget basics Displaying a widget Create and display a `Text` widget. Change that widget's `value` and some of its other properties. Discover the other properties by querying the `keys` property of the instance. *Hint: You'll need to import from ipywidgets and IPython.display.*
###Code
# %load soln/displaying.py
from ipywidgets import *
from IPython.display import display
w = Text(value="test")
display(w)
w.keys
###Output
_____no_output_____
###Markdown
Widget list Selection widget Create and display one of the selection widgets (dropdown, select, radiobuttons, or togglebuttons). Use the dictionary syntax to set the list of possible values. The values should be "Left" = 0, "Center" = 1, and "Right" = 2. Try reading and setting the value programmatically.
###Code
# %load soln/selection.py
from ipywidgets import *
from IPython.display import display
w = RadioButtons(options={"Left": 0, "Center": 1, "Right": 2}, description="Alignment:")
display(w)
print(w.value)
w.value = 1
###Output
_____no_output_____
###Markdown
Link Use a link to link the values of a `Textarea` widget and an `HTML` or `Latex` widget. Display the widgets and try typing Latex and HTML in the textarea. *Hint: Look at the Widget Basics notebook for an example of how to use link.*
###Code
# %load soln/link.py
from ipywidgets import *
from IPython.display import display
from traitlets import link
code = Textarea(description="Source:", value="Cool math: $\\frac{F}{m}=a$")
preview = Label()
display(code, preview)
mylink = link((code, 'value'), (preview, 'value'))
###Output
_____no_output_____
###Markdown
Widget events on_submit event Create and display a `Text` widget. Use the `on_submit` event to print the value of the textbox just before you clear the textbox. *Hint: The `on_submit` callback must accept one argument, the `sender`.*
###Code
# %load soln/on_submit.py
from ipywidgets import *
w = Text()
def handle_submit(sender):
print(sender.value)
sender.value = ''
w.on_submit(handle_submit)
w
###Output
_____no_output_____
###Markdown
on_trait_change event Create and display a `Text` widget. Use the `observe` method to register a callback that prints the value of the textbox without clearing it. Observe the difference in behavior to Exercise 1.
###Code
# %load soln/on_trait_change.py
from ipywidgets import *
w = Text(placeholder='Search')
def handle_submit(args):
print(args['new'])
w.observe(handle_submit, names='value')
w
###Output
_____no_output_____
###Markdown
Widget styling Colored text Create and display an `HTML` widget with a value of your choice (e.g. "Hello World"). Use its attributes to change that widget's background color and font color.
###Code
# %load soln/colored.py
from ipywidgets import *
w = HTML(value="Hello world!")
w.color = 'red'
w.background_color = 'yellow'
w
###Output
_____no_output_____
###Markdown
Vertical sliders Create an array of 10 or more vertical sliders. Align the sliders using a container so they look like an equalizer. *Hint: Refer to the Widget List notebook for an example of how to display a vertical slider.*
###Code
# %load soln/sliders.py
from ipywidgets import *
from IPython.display import display
sliders = [FloatSlider(description=str(i), orientation="vertical", value=50.) for i in range(10)]
container = HBox(children=sliders)
display(container)
###Output
_____no_output_____ |
Code/preProc.ipynb | ###Markdown
PreProcessing FakeNews Titles. Here we begin to extract all of the fake news article titles found in fakeNews.csv. This data was provided by IEEE DataPort:
###Code
import nltk
import os
import string
import pandas as pd
import re
import time
import nltk.corpus
import unidecode
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from autocorrect import Speller
from bs4 import BeautifulSoup
from nltk.corpus import stopwords
from nltk import word_tokenize
import string
import spacy
###Output
_____no_output_____
###Markdown
Loading Article Titles. In func.py we have created methods that load and clean the following text titles. We further go on to select keywords related to a core word bank provided by Spacy, and we use the Spacy methods to generate a series of uni/bi/tri/quad-grams from the resulting corpus of cleaned titles.
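Purely as an illustrative sketch of my own (not the project's code), a cleaning helper of this kind might look roughly like the function below; the actual implementation lives in func.py and may differ.
###Code
# Hypothetical sketch of a title-cleaning step (my own illustration, not func.py).
# Relies on the imports from the first cell and may need the usual nltk data downloads.
def clean_title_sketch(title):
    text = unidecode.unidecode(title).lower()                         # strip accents, lowercase
    text = re.sub(r'<[^>]+>', ' ', text)                              # drop stray HTML tags
    text = text.translate(str.maketrans('', '', string.punctuation))  # remove punctuation
    tokens = word_tokenize(text)
    stops = set(stopwords.words('english'))
    lemmatizer = WordNetLemmatizer()
    return [lemmatizer.lemmatize(t) for t in tokens if t not in stops]
###Output
_____no_output_____
###Markdown
The actual pipeline below uses the helpers from func.py: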
###Code
import func as fc
lst = fc.load_ext()
display(lst[:1])
cleanList = fc.clean(lst[:400])
#will return our cleaned list of fake news titles
display(cleanList)
# Here we begin to use the Spacy natural language processing pipeline with an emphasis on biomedical words.
# We use the nlp function from Spacy to extract all of the keywords in our titles
# that relate more heavily to the biomedical sciences.
nlp = spacy.load("en_core_sci_sm")
dd = fc.creat_corpus(cleanList)
doc = nlp(dd)
words = list(doc.ents)
###Output
_____no_output_____
###Markdown
Generated N-grams The nlp function provided by Spacy works to extract all of the significant uni, bi, tri, and quad-grams that are related to the Spacy biomedical words library. We then separate the generated grams into unigrams, bigrams, and polygrams so that we can later perform set comparison between them and the n-grams generated by each post.
###Code
listWords = words
listWords2 = words
sing = list()
doub = list()
poly = list()
for e in listWords:
# result = bool(re.search(r"\s", str(e)))
numspace = re.findall(r"\s", str(e))
space = len(numspace)
if space == 1:
doub.append(str(e))
elif space == 0:
sing.append(str(e))
else:
poly.append(str(e))
print(poly)
import numpy as n
sing2 = pd.DataFrame(sing)
doub2 = pd.DataFrame(doub)
poly2 = pd.DataFrame(poly)
sing2.to_csv("Data/unigram.csv", encoding='utf-8', index=False)
doub2.to_csv("Data/bigram.csv", encoding='utf-8', index=False)
poly2.to_csv("Data/polygram.csv", encoding='utf-8', index=False)
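# Illustrative sketch only (my own addition): the later set comparison against a post's
# n-grams could look roughly like
#   overlap = set(sing) & set(post_unigrams)
# where `post_unigrams` is a hypothetical list of unigrams extracted from that post.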
###Output
_____no_output_____ |
magic_methods.ipynb | ###Markdown
Magic Methods Magic methods are a very powerful feature of Python and can open a whole new door for you. However, with great power comes great responsibility. **Attention**: Some of the following experiments are solely for educational purposes and NOT for production code. Reference: [Data Model](https://docs.python.org/3/reference/datamodel.html) I assume that you know how to define a class in Python. If not, for now it's enough to know that it works in this way:
###Code
class MyClass():
"""Docs
"""
def method(self):
print('Hi')
a = MyClass()
###Output
_____no_output_____
###Markdown
Magic methods are used to implement different so-called protocols. String protocol. The simplest and most popular (IMHO) protocol is the string output protocol.
###Code
class MyClass():
def __str__(self):
return 'MyClass str'
def __repr__(self):
        return 'MyClass repr - ordinary representation'
def __format__(self, format_spec):
if format_spec == 'd':
return 'MyClass as integer'
return self.__str__()
a = MyClass()
str(a)
print(a)
repr(a)
a
f'Str: {a} Int: {a:d}'
ascii(a)
###Output
_____no_output_____
###Markdown
All string representation methods:* ``__str__``* ``__repr__``* ``__bytes__``* ``__format__`` Class lifecycle methods. Every object and class goes through some stages:* object initialization* object destruction
###Code
class A():
def __init__(self, name):
print(f'Init called for {name}')
self.name = name
def __del__(self):
print(f'{self} deleted')
def __repr__(self):
return f'{self.name}'
a1 = A('A1')
a2 = A('A2')
del a1
del a2
###Output
Init called for A1
Init called for A2
A1 deleted
A2 deleted
###Markdown
Comparing* lt a < b* le a <= b* eq a == b* ne a != b* gt a > b* ge a >= b Attribute access* getattr* getattribute* setattr* delattr* dir
###Code
class DictAsObj():
def __init__(self, d):
self._reestr = d
def __getattr__(self, key):
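        # Note: __getattr__ is only invoked when normal attribute lookup fails.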
try:
return self._reestr[key]
except KeyError:
raise AttributeError
d = DictAsObj({'name': 'Tor'})
d.func123(12)
###Output
func123 12
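###Markdown
A minimal sketch of my own (not from the original notebook) for the comparison methods listed earlier — `functools.total_ordering` can derive the remaining operators once `__eq__` and `__lt__` are defined:
###Code
from functools import total_ordering

@total_ordering
class Version:
    """Tiny example of the comparison protocol."""
    def __init__(self, major, minor):
        self.major, self.minor = major, minor
    def __eq__(self, other):
        return (self.major, self.minor) == (other.major, other.minor)
    def __lt__(self, other):
        return (self.major, self.minor) < (other.major, other.minor)

print(Version(1, 2) < Version(1, 10))   # True
print(Version(1, 2) >= Version(1, 2))   # True, derived by total_ordering
###Output
_____no_output_____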
###Markdown
Containers operations* len* getitem* setitem* delitem* iter* reversed* contains
###Code
class MyCont(object):
def __init__(self, iterable):
self.data = tuple(iterable)
def __len__(self):
        print('Length')
return len(self.data)
def __iter__(self):
print('iter')
return iter(self.data)
def __contains__(self, item):
print('contains')
if item == 'Tor':
return True
return item in self.data
def __getitem__(self, item):
return self.data[item]
def __delitem__(self, item):
print('No')
c = MyCont('abc')
for i in c:
print(i)
print('Tor' in c)
print('d' in c)
print(c[:])  # slicing is handled by __getitem__
###Output
iter
a
b
c
contains
True
contains
False
c
###Markdown
Callable``__call__``
###Code
class Executor():
def __init__(self, func):
self.func = func
def __call__(self, *args, **kwargs):
print('Called func with', args, kwargs)
return self.func(*args, **kwargs)
e = Executor(lambda x: print(x))
e(10)
###Output
Called func with (10,) {}
10
###Markdown
Arithmetic operations
###Code
class Num(object):
def __init__(self, value):
self.value = value
def __add__(self, other):
self.value += other
return self.value
def __sub__(self, other):
self.value -= other
return self.value
def __repr__(self):
return str(self.value)
a = Num(10)
b = Num(12)
a + 10
10 + a
###Output
_____no_output_____
###Markdown
List of operations:* ``__add__(self, other)`` -> a + other* ``__sub__(self, other)`` -> a - other* ``__mul__(self, other)`` -> a * other* ``__matmul__(self, other)`` -> a @ other* ``__truediv__(self, other)`` -> a / other* ``__floordiv__(self, other)`` -> a // other* ``__mod__(self, other)`` -> a % other* ``__divmod__(self, other)`` -> divmod(a, other)* ``__pow__(self, other[, modulo])`` -> a ** other* ``__lshift__(self, other)`` -> a << other* ``__rshift__(self, other)`` -> a >> other* ``__and__(self, other)`` -> a & other* ``__xor__(self, other)`` -> a ^ other* ``__or__(self, other)`` -> a | other
###Code
class Num(object):
def __init__(self, value):
self.value = value
def __add__(self, other):
self.value += other
return self.value
def __radd__(self, other):
return self.value + other
def __sub__(self, other):
self.value -= other
return self.value
def __repr__(self):
return str(self.value)
a = Num(10)
a + 10
10 + a
###Output
_____no_output_____
###Markdown
Augmented assignments. Methods like ``__iadd__`` (i- prefix)
###Code
class Num(object):
def __init__(self, value):
self.value = value
def __add__(self, other):
return self.value + other
__radd__ = __add__
def __iadd__(self, other):
self.value += other
return self
def __repr__(self):
return f'Num {self.value}'
a = Num(10)
a += 1
a
###Output
_____no_output_____
###Markdown
Other math * ``__neg__``* ``__pos__``* ``__abs__``* ``__invert__``* ``__complex__``* ``__int__``* ``__float__``* ``__index__``* ``__round__``* etc Context manager protocol
###Code
class ExampleMgr():
def __enter__(self):
print('Start work')
return
def __exit__(self, exc_type, exc_value, traceback):
print('Finish')
if exc_type:
print('ERROR: ', exc_type)
print(exc_value)
return True # True suppresses exception raising
with ExampleMgr():
print('OK')
with ExampleMgr():
print('OK')
raise Exception
print('Never')
###Output
Start work
OK
Finish
ERROR: <class 'Exception'>
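###Markdown
Circling back to the unary and conversion methods listed under *Other math* above, here is a small sketch of my own (not from the original notebook):
###Code
class Meters:
    """Toy wrapper demonstrating a few unary/conversion hooks."""
    def __init__(self, value):
        self.value = value
    def __neg__(self):
        return Meters(-self.value)
    def __abs__(self):
        return Meters(abs(self.value))
    def __int__(self):
        return int(self.value)
    def __round__(self, ndigits=None):
        return round(self.value, ndigits)
    def __repr__(self):
        return f'Meters({self.value})'

m = Meters(-2.718)
print(-m, abs(m), int(m), round(m, 2))
###Output
_____no_output_____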
###Markdown
Magic Methods. Below you'll find the same code from the previous exercise except two more methods have been added: an __add__ method and a __repr__ method. Your task is to fill out the code and get all of the unit tests to pass. You'll find the code cell with the unit tests at the bottom of this Jupyter notebook. As in previous exercises, there is an answer key that you can look at if you get stuck. Click on the "Jupyter" icon at the top of this notebook, and open the folder 4.OOP_code_magic_methods. You'll find the answer.py file inside the folder.
###Code
import math
import matplotlib.pyplot as plt
class Gaussian():
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu = 0, sigma = 1):
self.mean = mu
self.stdev = sigma
self.data = []
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.mean
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def read_data_file(self, file_name, sample=True):
"""Function to read in data from a txt file. The txt file should have
one number (float) per line. The numbers are stored in the data attribute.
After reading in the file, the mean and standard deviation are calculated
Args:
file_name (string): name of a file to read from
Returns:
None
"""
with open(file_name) as file:
data_list = []
line = file.readline()
while line:
data_list.append(int(line))
line = file.readline()
file.close()
self.data = data_list
self.mean = self.calculate_mean()
self.stdev = self.calculate_stdev(sample)
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
        axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev)
# Unit tests to check your solution
import unittest
class TestGaussianClass(unittest.TestCase):
def setUp(self):
self.gaussian = Gaussian(25, 2)
def test_initialization(self):
self.assertEqual(self.gaussian.mean, 25, 'incorrect mean')
self.assertEqual(self.gaussian.stdev, 2, 'incorrect standard deviation')
def test_pdf(self):
self.assertEqual(round(self.gaussian.pdf(25), 5), 0.19947,\
'pdf function does not give expected result')
def test_meancalculation(self):
self.gaussian.read_data_file('numbers.txt', True)
self.assertEqual(self.gaussian.calculate_mean(),\
sum(self.gaussian.data) / float(len(self.gaussian.data)), 'calculated mean not as expected')
def test_stdevcalculation(self):
self.gaussian.read_data_file('numbers.txt', True)
self.assertEqual(round(self.gaussian.stdev, 2), 92.87, 'sample standard deviation incorrect')
self.gaussian.read_data_file('numbers.txt', False)
self.assertEqual(round(self.gaussian.stdev, 2), 88.55, 'population standard deviation incorrect')
def test_add(self):
gaussian_one = Gaussian(25, 3)
gaussian_two = Gaussian(30, 4)
gaussian_sum = gaussian_one + gaussian_two
self.assertEqual(gaussian_sum.mean, 55)
self.assertEqual(gaussian_sum.stdev, 5)
def test_repr(self):
gaussian_one = Gaussian(25, 3)
self.assertEqual(str(gaussian_one), "mean 25, standard deviation 3")
tests = TestGaussianClass()
tests_loaded = unittest.TestLoader().loadTestsFromModule(tests)
unittest.TextTestRunner().run(tests_loaded)
###Output
......
----------------------------------------------------------------------
Ran 6 tests in 0.014s
OK
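###Markdown
A quick usage sketch of the finished class (values chosen here only for illustration): adding two `Gaussian` objects sums the means and combines the standard deviations in quadrature, and `__repr__` makes the result print cleanly.
###Code
g1 = Gaussian(22, 3)
g2 = Gaussian(10, 4)
print(g1 + g2)   # mean 32, standard deviation 5.0 (sqrt(3**2 + 4**2))
###Output
_____no_output_____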
###Markdown
Magic MethodsBelow you'll find the same code from the previous exercise except two more methods have been added: an __add__ method and a __repr__ method. Your task is to fill out the code and get all of the unit tests to pass. You'll find the code cell with the unit tests at the bottom of this Jupyter notebook.As in previous exercises, there is an answer key that you can look at if you get stuck. Click on the "Jupyter" icon at the top of this notebook, and open the folder 4.OOP_code_magic_methods. You'll find the answer.py file inside the folder.
###Code
import math
import matplotlib.pyplot as plt
class Gaussian():
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu = 0, sigma = 1):
self.mean = mu
self.stdev = sigma
self.data = []
def calculate_mean(self):
"""Method to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
#TODO: Calculate the mean of the data set. Remember that the data set is stored in self.data
# Change the value of the mean attribute to be the mean of the data set
# Return the mean of the data set
self.mean = sum(self.data)/len(self.data)
return self.mean
def calculate_stdev(self, sample=True):
"""Method to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
# TODO:
# Calculate the standard deviation of the data set
#
# The sample variable determines if the data set contains a sample or a population
# If sample = True, this means the data is a sample.
# Keep the value of sample in mind for calculating the standard deviation
#
# Make sure to update self.stdev and return the standard deviation as well
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
sigma = 0
for d in self.data:
sigma += (d - self.mean)**2
self.stdev = math.sqrt(sigma/n)
return self.stdev
def read_data_file(self, file_name, sample=True):
"""Method to read in data from a txt file. The txt file should have
one number (float) per line. The numbers are stored in the data attribute.
After reading in the file, the mean and standard deviation are calculated
Args:
file_name (string): name of a file to read from
Returns:
None
"""
# This code opens a data file and appends the data to a list called data_list
with open(file_name) as file:
data_list = []
line = file.readline()
while line:
data_list.append(int(line))
line = file.readline()
file.close()
# TODO:
# Update the self.data attribute with the data_list
# Update self.mean with the mean of the data_list.
# You can use the calculate_mean() method with self.calculate_mean()
# Update self.stdev with the standard deviation of the data_list. Use the
        # calculate_stdev() method.
self.data = data_list
self.mean = self.calculate_mean()
self.stdev = self.calculate_stdev(sample)
def plot_histogram(self):
"""Method to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
# TODO: Plot a histogram of the data_list using the matplotlib package.
# Be sure to label the x and y axes and also give the chart a title
plt.hist(self.data)
plt.title("Histogram of data")
plt.xlabel("data")
plt.ylabel("count")
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
# TODO: Calculate the probability density function of the Gaussian distribution
# at the value x. You'll need to use self.stdev and self.mean to do the calculation
u = self.mean
sigma = self.stdev
density = 1/(math.sqrt(2*math.pi)*sigma) * math.exp(-0.5*((x-u)/sigma)**2)
return density
def plot_histogram_pdf(self, n_spaces = 50):
"""Method to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
#TODO: Nothing to do for this method. Try it out and see how it works.
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
        axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Magic method to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
# TODO: Calculate the results of summing two Gaussian distributions
# When summing two Gaussian distributions, the mean value is the sum
# of the means of each Gaussian.
#
# When summing two Gaussian distributions, the standard deviation is the
# square root of the sum of square ie sqrt(stdev_one ^ 2 + stdev_two ^ 2)
# create a new Gaussian object
result = Gaussian()
# TODO: calculate the mean and standard deviation of the sum of two Gaussians
result.mean = self.mean + other.mean # change this line to calculate the mean of the sum of two Gaussian distributions
result.stdev = math.sqrt(self.stdev**2 + other.stdev**2) # change this line to calculate the standard deviation of the sum of two Gaussian distributions
return result
def __repr__(self):
"""Magic method to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
# TODO: Return a string in the following format -
# "mean mean_value, standard deviation standard_deviation_value"
# where mean_value is the mean of the Gaussian distribution
# and standard_deviation_value is the standard deviation of
# the Gaussian.
# For example "mean 3.5, standard deviation 1.3"
return "mean {}, standard deviation {}".format(self.mean, self.stdev)
# Unit tests to check your solution
import unittest
class TestGaussianClass(unittest.TestCase):
def setUp(self):
self.gaussian = Gaussian(25, 2)
def test_initialization(self):
self.assertEqual(self.gaussian.mean, 25, 'incorrect mean')
self.assertEqual(self.gaussian.stdev, 2, 'incorrect standard deviation')
def test_pdf(self):
self.assertEqual(round(self.gaussian.pdf(25), 5), 0.19947,\
'pdf function does not give expected result')
def test_meancalculation(self):
self.gaussian.read_data_file('numbers.txt', True)
self.assertEqual(self.gaussian.calculate_mean(),\
sum(self.gaussian.data) / float(len(self.gaussian.data)), 'calculated mean not as expected')
def test_stdevcalculation(self):
self.gaussian.read_data_file('numbers.txt', True)
self.assertEqual(round(self.gaussian.stdev, 2), 92.87, 'sample standard deviation incorrect')
self.gaussian.read_data_file('numbers.txt', False)
self.assertEqual(round(self.gaussian.stdev, 2), 88.55, 'population standard deviation incorrect')
def test_add(self):
gaussian_one = Gaussian(25, 3)
gaussian_two = Gaussian(30, 4)
gaussian_sum = gaussian_one + gaussian_two
self.assertEqual(gaussian_sum.mean, 55)
self.assertEqual(gaussian_sum.stdev, 5)
def test_repr(self):
gaussian_one = Gaussian(25, 3)
self.assertEqual(str(gaussian_one), "mean 25, standard deviation 3")
tests = TestGaussianClass()
tests_loaded = unittest.TestLoader().loadTestsFromModule(tests)
unittest.TextTestRunner().run(tests_loaded)
###Output
......
----------------------------------------------------------------------
Ran 6 tests in 0.013s
OK
|
lectures/12-data-intro.ipynb | ###Markdown
Data - an introduction to the world of Pandas**Note:** This is an edited version of [Cliburn Chan's](http://people.duke.edu/~ccc14/sta-663-2017/07_Data.html) original tutorial, as part of his Stat-663 course at Duke. All changes remain licensed as the original, under the terms of the MIT license.Additionally, sections have been merged from [Chris Fonnesbeck's Pandas tutorial from the NGCM Summer Academy](https://github.com/fonnesbeck/ngcm_pandas_2017/blob/master/notebooks/1.%20Introduction%20to%20NumPy%20and%20Pandas.ipynb), which are licensed under [CC0 terms](https://creativecommons.org/share-your-work/public-domain/cc0) (aka 'public domain'). Resources- [The Introduction to Pandas chapter](http://proquest.safaribooksonline.com/9781491957653/pandas_html) in the Python for Data Analysis book by Wes McKinney is essential reading for this topic. This is the [companion notebook](https://github.com/wesm/pydata-book/blob/2nd-edition/ch05.ipynb) for that chapter.- [Pandas documentation](http://pandas.pydata.org/pandas-docs/stable/)- [QGrid](https://github.com/quantopian/qgrid) Pandas**pandas** is a Python package providing fast, flexible, and expressive data structures designed to work with *relational* or *labeled* data both. It is a fundamental high-level building block for doing practical, real world data analysis in Python. pandas is well suited for:- **Tabular** data with heterogeneously-typed columns, as you might find in an SQL table or Excel spreadsheet- Ordered and unordered (not necessarily fixed-frequency) **time series** data.- Arbitrary **matrix** data with row and column labelsVirtually any statistical dataset, labeled or unlabeled, can be converted to a pandas data structure for cleaning, transformation, and analysis. Key features - Easy handling of **missing data**- **Size mutability**: columns can be inserted and deleted from DataFrame and higher dimensional objects- Automatic and explicit **data alignment**: objects can be explicitly aligned to a set of labels, or the data can be aligned automatically- Powerful, flexible **group by functionality** to perform split-apply-combine operations on data sets- Intelligent label-based **slicing, fancy indexing, and subsetting** of large data sets- Intuitive **merging and joining** data sets- Flexible **reshaping and pivoting** of data sets- **Hierarchical labeling** of axes- Robust **IO tools** for loading data from flat files, Excel files, databases, and HDF5- **Time series functionality**: date range generation and frequency conversion, moving window statistics, moving window linear regressions, date shifting and lagging, etc.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
plt.style.use('seaborn-dark')
###Output
_____no_output_____
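###Markdown
One feature from the list above that is easy to miss — automatic data alignment — in a minimal sketch (the two Series below are made up for illustration): arithmetic aligns on index labels rather than on position.
###Code
s1 = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
s2 = pd.Series([10, 20, 30], index=['b', 'c', 'd'])
s1 + s2   # 'a' and 'd' have no matching label, so they come back as NaN
###Output
_____no_output_____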
###Markdown
Working with Series* A pandas Series is a generalization of a 1d numpy array* A series has an *index* that labels each element in the vector.* A `Series` can be thought of as an ordered key-value store.
###Code
np.array(range(5,10))
x = Series(range(5,10))
x
###Output
_____no_output_____
###Markdown
We can treat Series objects much like numpy vectors
###Code
x.sum(), x.mean(), x.std()
x**2
x[x >= 8]
###Output
_____no_output_____
###Markdown
Series can also contain more information than numpy vectors You can always use standard positional indexing
###Code
x[1:4]
###Output
_____no_output_____
###Markdown
Series indexBut you can also assign labeled indexes.
###Code
x.index = list('abcde')
x
###Output
_____no_output_____
###Markdown
Note that with labels, the end index is included
###Code
x['b':'d']
###Output
_____no_output_____
###Markdown
Even when you have a labeled index, positional arguments still work
###Code
x[1:4]
###Output
_____no_output_____
###Markdown
Working with missing dataMissing data is indicated with NaN (not a number).
###Code
y = Series([10, np.nan, np.nan, 13, 14])
y
###Output
_____no_output_____
###Markdown
Concatenating two series
###Code
z = pd.concat([x, y])
z
###Output
_____no_output_____
###Markdown
Reset index to default
###Code
z = z.reset_index(drop=True)
z
z**2
###Output
_____no_output_____
###Markdown
`pandas` aggregate functions ignore missing data
###Code
z.sum(), z.mean(), z.std()
###Output
_____no_output_____
###Markdown
Selecting missing values
###Code
z[z.isnull()]
###Output
_____no_output_____
###Markdown
Selecting non-missing values
###Code
z[z.notnull()]
###Output
_____no_output_____
###Markdown
Replacement of missing values
###Code
z.fillna(0)
z.fillna(method='ffill')
z.fillna(method='bfill')
z.fillna(z.mean())
###Output
_____no_output_____
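###Markdown
Besides `fillna`, two other common options for missing values (shown here as a quick sketch): dropping them outright, or interpolating between neighbouring values.
###Code
z.dropna()
z.interpolate()
###Output
_____no_output_____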
###Markdown
Working with dates / timesWe will see more date/time handling in the DataFrame section.
###Code
z.index = pd.date_range('01-Jan-2016', periods=len(z))
z
###Output
_____no_output_____
###Markdown
Intelligent aggregation over datetime ranges
###Code
z.resample('W').sum()
###Output
_____no_output_____
###Markdown
Formatting datetime objects (see http://strftime.org)
###Code
z.index.strftime('%b %d, %Y')
###Output
_____no_output_____
###Markdown
DataFramesInevitably, we want to be able to store, view and manipulate data that is *multivariate*, where for every index there are multiple fields or columns of data, often of varying data type.A `DataFrame` is a tabular data structure, encapsulating multiple series like columns in a spreadsheet. It is directly inspired by the R DataFrame. Titanic data
###Code
url = 'https://raw.githubusercontent.com/mwaskom/seaborn-data/master/titanic.csv'
titanic = pd.read_csv(url)
titanic.head()
titanic.shape
titanic.size
titanic.columns
# For display purposes, we will drop some columns
titanic = titanic[['survived', 'sex', 'age', 'fare',
'embarked', 'class', 'who', 'deck', 'embark_town',]]
titanic.dtypes
###Output
_____no_output_____
###Markdown
Summarizing a data frame
###Code
titanic.describe()
titanic.head(20)
titanic.tail(5)
titanic.columns
titanic.index
###Output
_____no_output_____
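###Markdown
`info()` is another handy summary: it reports each column's dtype and the number of non-null entries in a single call.
###Code
titanic.info()
###Output
_____no_output_____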
###Markdown
IndexingThe default indexing mode for dataframes with `df[X]` is to access the DataFrame's *columns*:
###Code
titanic[['sex', 'age', 'class']].head()
###Output
_____no_output_____
###Markdown
Using the `iloc` helper for indexing
###Code
titanic.head(3)
titanic.iloc
titanic.iloc[0]
titanic.iloc[0:5]
titanic.iloc[ [0, 10, 1, 5] ]
titanic.iloc[10:15]['age']
titanic.iloc[10:15][ ['age'] ]
titanic[titanic.age < 2]
titanic["new column"] = 0
titanic["new column"][:10]
titanic[titanic.age < 2].index
df = pd.DataFrame(dict(name=['Alice', 'Bob'], age=[20, 30]),
columns = ['name', 'age'], # enforce column order
index=pd.Series([123, 989], name='id'))
df
###Output
_____no_output_____
###Markdown
`.iloc` vs `.loc`These are two accessors with a key difference:* `.iloc` indexes *positionally** `.loc` indexes *by label*
###Code
#df[0] # error
#df[123] # error
df.iloc[0]
df.loc[123]
df.loc[ [123] ]
###Output
_____no_output_____
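###Markdown
One more difference worth remembering (quick sketch): label-based slicing with `.loc` includes both endpoints, while positional slicing with `.iloc` follows the usual Python convention of excluding the stop position.
###Code
df.loc[123:989]   # both labels are included
df.iloc[0:1]      # only the first row
###Output
_____no_output_____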
###Markdown
Sorting and ordering data
###Code
titanic.head()
###Output
_____no_output_____
###Markdown
The `sort_index` method is designed to sort a DataFrame by either its index or its columns:
###Code
titanic.sort_index(ascending=False).head()
###Output
_____no_output_____
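###Markdown
The same method can also sort by the *columns* instead of the index: pass `axis=1` (here it simply puts the columns in alphabetical order):
###Code
titanic.sort_index(axis=1).head()
###Output
_____no_output_____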
###Markdown
Since the Titanic index is already sorted, it's easier to illustrate how to use it for the index with a small test DF:
###Code
df = pd.DataFrame([1, 2, 3, 4, 5], index=[100, 29, 234, 1, 150], columns=['A'])
df
df.sort_index() # same as df.sort_index('index')
###Output
_____no_output_____
###Markdown
Pandas also makes it easy to sort on the *values* of the DF:
###Code
titanic.sort_values('age', ascending=True).head()
###Output
_____no_output_____
###Markdown
And we can sort on more than one column in a single call:
###Code
titanic.sort_values(['survived', 'age'], ascending=[True, True]).head()
###Output
_____no_output_____
###Markdown
*Note:* both the index and the columns can be named:
###Code
t = titanic.sort_values(['survived', 'age'], ascending=[True, False])
t.index.name = 'id'
t.columns.name = 'attributes'
t.head()
###Output
_____no_output_____
###Markdown
Grouping data
###Code
sex_class = titanic.groupby(['sex', 'class'])
###Output
_____no_output_____
###Markdown
What is a GroupBy object?
###Code
sex_class
from IPython.display import display
for name, group in sex_class:
print('name:', name, '\ngroup:\n')
display(group.head(2))
sex_class.get_group(('female', 'Second')).head()
###Output
_____no_output_____
###Markdown
The GroupBy object has a number of aggregation methods that will then compute summary statistics over the group members, e.g.:
###Code
sex_class.count()
###Output
_____no_output_____
###Markdown
Why Kate Winslett survived and Leonardo DiCaprio didn't
###Code
sex_class.mean()[['survived']]
###Output
_____no_output_____
###Markdown
Of the females who were in first class, count the number from each embarking town
###Code
sex_class.get_group(('female', 'First')).groupby('embark_town').count()
###Output
_____no_output_____
###Markdown
Since `count` counts non-missing data, we're really interested in the maximum value for each row, which we can obtain directly:
###Code
sex_class.get_group(('female', 'First')).groupby('embark_town').count().max('columns')
###Output
_____no_output_____
###Markdown
Cross-tabulation
###Code
pd.crosstab(titanic.survived, titanic['class'])
###Output
_____no_output_____
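###Markdown
`crosstab` can also normalize the counts: with `normalize='columns'` each column sums to 1, giving the fraction of passengers in each class who died vs. survived (a quick sketch):
###Code
pd.crosstab(titanic.survived, titanic['class'], normalize='columns')
###Output
_____no_output_____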
###Markdown
We can also get multiple summaries at the same timeThe `agg` method is the most flexible, as it allows us to specify directly which functions we want to call, and where:
###Code
def my_func(x):
return np.max(x)
mapped_funcs = {'embarked': 'count',
'age': ('mean', 'median', my_func),
'survived': sum}
sex_class.get_group(('female', 'First')).groupby('embark_town').agg(mapped_funcs)
###Output
_____no_output_____
###Markdown
Making plots with `pandas`Note: you may need to run```pip install pandas-datareader```to install the specialized readers.
###Code
from pandas_datareader import data as web
import datetime
try:
apple = pd.read_csv('data/apple.csv', index_col=0, parse_dates=True)
except:
apple = web.DataReader('AAPL', 'yahoo',
start = datetime.datetime(2015, 1, 1),
end = datetime.datetime(2015, 12, 31))
# Let's save this data to a CSV file so we don't need to re-download it on every run:
apple.to_csv('data/apple.csv')
apple.head()
apple.tail()
###Output
_____no_output_____
###Markdown
With the data saved locally, let's plot Apple's closing price over 2015:
###Code
f, ax = plt.subplots()
apple.plot.line(y='Close', marker='o', markersize=3, linewidth=0.5, ax=ax);
f.suptitle("Apple stock in 2015")
f
# Zoom in on large drop in August
aug = apple['2015-08-01':'2015-08-30']
aug.head()
aug.plot.line(y=['High', 'Low', 'Open', 'Close'], marker='o', markersize=10, linewidth=1);
###Output
/Users/fperez/usr/conda/envs/s159/lib/python3.6/site-packages/pandas/plotting/_core.py:1714: UserWarning: Pandas doesn't allow columns to be created via a new attribute name - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute-access
series.name = label
###Markdown
Data conversionsOne of the nicest features of `pandas` is the ease of converting tabular data across different storage formats. We will illustrate by converting the `titanic` dataframe into multiple formats. CSV
###Code
titanic.to_csv('titanic.csv', index=False)
t1 = pd.read_csv('titanic.csv')
t1.head(2)
###Output
_____no_output_____
###Markdown
ExcelYou may need to first install openpyxl:```pip install openpyxl```
###Code
t1.to_excel('titanic.xlsx')
t2 = pd.read_excel('titanic.xlsx')
t2.head(2)
###Output
_____no_output_____
###Markdown
Relational Database
###Code
import sqlite3
con = sqlite3.connect('titanic.db')
t2.to_sql('titanic', con, index=False, if_exists='replace')
t3 = pd.read_sql('select * from titanic', con)
t3.head(2)
###Output
_____no_output_____
###Markdown
JSON
###Code
t3.to_json('titanic.json')
t4 = pd.read_json('titanic.json')
t4.head(2)
t4 = t4[t3.columns]
t4.head(2)
###Output
_____no_output_____
###Markdown
HDF5The [HDF5 format](http://proquest.safaribooksonline.com/book/physics/9781491901564/10dot-storing-data-files-and-hdf5/chp_storing_data_html) was designed in the Earth Sciences community but it can be an excellent general purpose tool. It's efficient and type-safe, so you can store complex dataframes in it and recover them back without information loss, using the `to_hdf` method:
###Code
t4.to_hdf('titanic.h5', 'titanic')
t5 = pd.read_hdf('titanic.h5', 'titanic')
t5.head(2)
###Output
_____no_output_____
###Markdown
FeatherYou may need to install the [Feather](https://blog.cloudera.com/blog/2016/03/feather-a-fast-on-disk-format-for-data-frames-for-r-and-python-powered-by-apache-arrow) support first:```conda install -c conda-forge feather-format```
###Code
t6 = t5.reset_index(drop=True)
t6.head()
t6.to_feather('titanic.feather')
t7 = pd.read_feather('titanic.feather')
t7.head()
###Output
_____no_output_____ |
github/joeynmt/recursion_ter_4.ipynb | ###Markdown
ExplanationGiven all hypotheses from one sequence, finding n hypotheses that satisfy both quality and diversity criteria.Quality scores are represented as "Scores" in DataFrame, which are confidence scores generating from an nmt model.Diversity scores are represented as averaged TER scores.Calculating a new score with weighted quality score and the weighted diversity score, which is described as:> z_score = alpha \* quality scores + beta \* diversity scorePseudocode of seaching n hypothese(non recursion):``` if n == 1: selected_index = argmax(hypothesis.Scores) selected_list.append(selected_index) remove selected_index from remaining_list else if n > 1: Calculate z_score for all hypotheses in remaining_list: z_scores = alpha * hypothesis.Scores + beta * 1/(n-1) * Sum ( TER(remaining hypotheses, hyp_1st), TER(remaining hypotheses, hyp_2nd), ..., TER(remaining hypotheses, hyp_nth)) selected_index = argmax(z_scores) selected_list.append(selected_index) remove selected_index from remaining_list ``` Function**Recursion** and **memorizing** are used for actual implementation. Input: data: all generated predictions corresponding quality scores for one sequence n: the number of predictions to be selected according to criteria of quality and diversity alpha: weight for quality scores beta: weight for diversity scores Return: selected_list: a list of indexes of n selected predictions. remaining_list: a list with value False for selected indexes. True for unselected indexes memo_ter: a list of TER score lists. Used for memorizing previously calculated TER scores
###Code
def find_seq_list(data:pd.DataFrame, n:int, alpha=1.0, beta=1.0):
memo_ter = [[] for _ in range(n-1)]
selected_list = []
remaining_list = [bool(1) for _ in range(len(data.index))] # Initialize remaining_list with lists of True
def cal_seq(data:pd.DataFrame, selected_list:list, remaining_list:list, n: int, memo_ter: list, alpha:float, beta:float):
assert n > 0, "Number of selected predictions has to be a positive integer."
# Select the top score in data["Scores"] as the first selected index
if n == 1:
selected_idx = np.argmax(data["Scores"].to_numpy())
selected_list.append(selected_idx)
remaining_list[selected_idx] = False
return selected_list, remaining_list, memo_ter
if n > 1:
selected_list, remaining_list, memo_ter = cal_seq(data, selected_list, remaining_list, n-1, memo_ter,alpha, beta)
ter_list = [[] for _ in range(len(data.index))] # ter_list stores TER scores
ref = data["Predictions"][selected_list[n-2]] # Take the latest selected index
for iter_i in range(len(data.index)):
if remaining_list[iter_i] == 0:
# Setting False for already selected indexes in remaining_list to exclude them from calculating TER scores
ter_list[iter_i] = 0.0
else:
ter_list[iter_i] = pyter.ter(data["Predictions"][iter_i], ref)
#ter_list[iter_i] = fake_ter(data["Predictions"][iter_i], ref)
memo_ter[n-2] = ter_list # Save TER socres to memo_ter
sum_ter = np.zeros((len(data.index),1))
z_scores = np.ones(sum_ter.shape)*(-np.inf) # Initialize z_scores with negative infinite values
for j in range(n-1):
                ter = np.array(memo_ter[j], dtype=np.float64).reshape(-1, 1)  # -1 avoids hard-coding the number of hypotheses
sum_ter = sum_ter + ter
sum_ter = sum_ter/(n-1) # Calculate diversity scores by averaging TER scores
z_scores[remaining_list] = alpha * np.array(data["Scores"][remaining_list]).reshape(-1,1) + beta * sum_ter[remaining_list] # Update z_scores for remaining hypotheses
selected_idx = np.argmax(z_scores)
selected_list.append(selected_idx)
remaining_list[selected_idx] = False
return selected_list, remaining_list, memo_ter
selected_ls, remaining_ls, ter_ls = cal_seq(data, selected_list, remaining_list, n, memo_ter, alpha, beta)
return selected_ls, remaining_ls, ter_ls
selected_ls, remaining_ls, ter_ls = find_seq_list(data,4)
print(selected_ls)
print(remaining_ls)
print(len(ter_ls))
sub_df = data[["Predictions","Scores"]]
sub_df.iloc[selected_ls]
###Output
_____no_output_____ |
Tarea_7_Recursividad.ipynb | ###Markdown
###Code
# 1. Crear una lista de enteros en Python y realizar la suma con recursividad, el caso base es cuando la lista este vacía.
x=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
def suma(x,i,s):
if i<len(x):
s = s + x[ i ]
suma(x,i+1,s)
if i == len( x )-1:
print(s)
suma(x,0,0)
# 2. Make a countdown using recursion.
def contador( i ):
if i > 0:
print( i )
contador( i - 1 )
n= 20
print("N= ", n)
contador( 20 )
# 3. Remove the value at the middle position from a stack ADT
class Stack:
def __init__(self):
self.__datos=[]
def is_empty(self):
return len(self.__datos)==0
def get_top(self):
return self.__datos[-1]
def pop(self):
return self.__datos.pop()
def push(self,valor):
self.__datos.append(valor)
def get_length(self):
return len(self.__datos)
def to_string(self):
print("----------------")
for i in self.__datos[-1::-1]:
print(i)
print("----------------")
print()
def medio(pila,m,i,a):
bandera=0
if m>i:
a=pila.pop()
i+=1
medio(pila,m,i,a)
if m==i:
pila.pop()
pila.push(a)
pila=Stack()
pila.push(0)
pila.push(1)
pila.push(3)
pila.push(5)
pila.push(25)
pila.push(30)
pila.push(35)
pila.push(40)
pila.push(45)
pila.push(50)
print("Elementos de la pila: ")
m=pila.get_length()//2
pila.to_string()
medio(pila,m,0,0)
print("Pila sin punto medio: ")
pila.to_string()
###Output
Elementos de la pila:
----------------
50
45
40
35
30
25
5
3
1
0
----------------
Pila sin punto medio:
----------------
50
45
40
35
30
5
3
1
0
----------------
###Markdown
###Code
#1. Create a list of integers in Python and compute its sum with recursion; the base case is when the list is empty.
def sumRec (lista):
if lista == []:
suma = 0
else:
suma = lista[0] + sumRec(lista[1:])
return suma
def main():
lista_int = [1,2,3,4,5,6,7,8,9]
print("La suma de la lista es:",sumRec(lista_int))
main()
#2. Make a countdown using recursion
def regresivo(x):
if x >= 0:
print(x)
regresivo(x - 1)
def main():
regresivo(10)
main()
#3. Remove the value at the middle position from a stack ADT.
#Here we pop values off the stack until the two stacks are equal in length or differ by 1
def medRec(pila,pila2):
if (len(pila) - len(pila2)) <= 1:
pila.pop()
regresaPila(pila , pila2)
else:
pila2.append(pila.pop())
medRec(pila,pila2)
#Push the values we removed back onto the stack
def regresaPila(pila, pila2):
if pila2 == []:
print(pila)
else:
pila.append(pila2.pop())
regresaPila(pila, pila2)
def main():
pila = [1,2,3,4,5,6,7,8,9]
pila2 = []
print("La pila 1 es: ", pila)
print("Despues eliminamos el valor que se encuentra en medio de la pila")
medRec(pila,pila2)
main()
###Output
La pila 1 es: [1, 2, 3, 4, 5, 6, 7, 8, 9]
Despues eliminamos el valor que se encuentra en medio de la pila
[1, 2, 3, 4, 6, 7, 8, 9]
###Markdown
Tasks:1. Create a list of integers in Python and compute its sum with recursion; the base case is when the list is empty.2. Make a countdown using recursion.3. Remove the value at the middle position from a stack ADT.
###Code
class Pila:
def __init__(self):
self.items = []
def imprimirCompleto(self):
for x in range(0,len(self.items),1):
print(self.items[x],end=",")
def estaVacia(self):
return self.items == []
def incluir(self, item):
self.items.append(item)
def extraerPrimero(self):
return self.items.pop(0)
def extraerUltimo(self):
return self.items.pop()
def inspeccionarUltimo(self):
return self.items[len(self.items)-1]
def tamano(self):
return len(self.items)
def sumaRec(lista,longitud,suma = 0):
if longitud != 0:
suma += lista[longitud-1]
print(suma)
sumaRec(lista,longitud-1,suma)
def contador(maximo):
if maximo != 0:
print(maximo)
contador(maximo-1)
def posicionRec(pila):
lon = pila.tamano()
if lon == 1:
print("\nEl elemento de la posicicon media es: ",pila.inspeccionarUltimo())
else:
pila.extraerPrimero()
pila.extraerUltimo()
posicionRec(pila)
def main():
print("1. Crear una lista de enteros en Python y realizar \nla suma con recursividad, el caso base es cuando \nla lista este vacía.")
listNum = [1,2,3,4,5,6,7,8,9]
print(listNum)
sumaRec(listNum,len(listNum))
print("2. Hacer un contador regresivo con recursión.")
print("INICIA CONTEO REGRESIVO")
contador(30)
print("FINALIZA CONTEO REGRESIVO")
print("3. Sacar de un ADT pila el valor en la posición media")
p = Pila()
p.incluir(1)
p.incluir(2)
p.incluir(3)
p.incluir(4)
p.incluir(5)
p.incluir(6)
p.incluir(7)
p.incluir(8)
p.incluir(9)
p.imprimirCompleto()
try:
posicionRec(p)
except:
print("La lista no tiene posicion Media")
main()
###Output
1. Crear una lista de enteros en Python y realizar
la suma con recursividad, el caso base es cuando
la lista este vacía.
[1, 2, 3, 4, 5, 6, 7, 8, 9]
9
17
24
30
35
39
42
44
45
2. Hacer un contador regresivo con recursión.
INICIA CONTEO REGRESIVO
30
29
28
27
26
25
24
23
22
21
20
19
18
17
16
15
14
13
12
11
10
9
8
7
6
5
4
3
2
1
FINALIZA CONTEO REGRESIVO
3. Sacar de un ADT pila el valor en la posición media
1,2,3,4,5,6,7,8,9,
El elemento de la posicicon media es: 5
###Markdown
###Code
class Pila:
def __init__(self):
self.__data = []
def is_empty(self):
return len(self.__data)==0
def get_top(self):
return self.__data[len(self.__data)-1]
def pop(self):
return self.__data.pop()
def push (self, value):
self.__data.append(value)
def lenght(self):
return len(self.__data)
def to_string(self):
for i in self.__data[::-1]:
print(i)
# 1. Create a list of integers in Python and
# compute its sum with recursion; the base case is when the list is empty.
def nose(l):
if len(l) == 0:
return 0
else:
n = l.pop()
n1 = nose(l)
n += n1
return n
lisInt=[1,1,1,1,1]
print(lisInt)
print(f"LA SUMA ES:",nose(lisInt))
# 2. Make a countdown using recursion.
def contador(x):
if x >= 0:
print(x)
contador(x-1)
contador(10)
# 3. Remove the value at the middle position from a stack ADT.
def posicion_media(p,t):
if t//2 == p.lenght()-1:
return print(f"Valor Medio:",p.pop())
else:
n=p.pop()
posicion_media(p,t)
p.push(n)
p = Pila()
p.push(5)
p.push(6)
p.push(10)
p.push(1) # <--- valor medio
p.push(2)
p.push(14)
p.push(12)
p.to_string()
print("-----")
posicion_media(p,p.lenght())
print("-----")
p.to_string()
###Output
12
14
2
1
10
6
5
-----
Valor Medio: 1
-----
12
14
2
10
6
5
###Markdown
###Code
#Create a list of integers in Python and compute its sum with recursion; the base case is when the list is empty.
def recursuma(arreglo,suma,i):
if i > 0:
suma+=arreglo[i-1]
i-=1
recursuma(arreglo,suma,i)
else:
print(suma)
Arreglo=[1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9]
suma=0
i=len(Arreglo)
print(Arreglo)
recursuma(Arreglo,suma,i)
# Make a countdown using recursion.
def contareg(n):
if n>0:
print(n)
n-=1
contareg(n)
else:
print("0")
n=10
print("Numero:",n)
contareg(n)
#Remove the value at the middle position from a stack ADT.
class Stack:
def __init__(self):
self.__datos=[]
def is_empty(self):
return len(self.__datos)==0
def get_top(self):
return self.__datos[-1]
def pop(self):
return self.__datos.pop()
def push(self,valor):
self.__datos.append(valor)
def get_length(self):
return len(self.__datos)
def to_string(self):
print("-------------------------")
for i in self.__datos[-1::-1]:
print(i)
print("-------------------------")
print()
def medio(pila,m,x,i):
if m>i:
x=pila.pop()
i+=1
medio(pila,m,x,i)
if m==i:
pila.pop()
pila.push(x)
pila=Stack()
pila.push(10)
pila.push(20)
pila.push(30)
pila.push(40)
pila.push(50)
pila.push(60)
pila.push(70)
pila.push(80)
pila.push(90)
print("Pila completa")
pila.to_string()
m=pila.get_length()//2
medio(pila,m,0,0)
print("Pila sin el valor de enmedio")
pila.to_string()
###Output
Pila completa
-------------------------
90
80
70
60
50
40
30
20
10
-------------------------
Pila sin el valor de enmedio
-------------------------
90
80
70
60
40
30
20
10
-------------------------
|
BaselineLinReg.ipynb | ###Markdown
Linear Regression on photo brightness
###Code
#imports
import numpy as np
import sklearn
from sklearn.linear_model import LinearRegression
from sklearn import metrics
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
#Load data
data_file = ""
label_file = ""
X = np.load(data_file)
#y = np.load(label_file)
y = np.random.randint(6, size = X.shape)
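# NOTE: scikit-learn expects X as a 2D array of shape (n_samples, n_features); if X holds
# one brightness value per photo, it would need e.g. X = X.reshape(-1, 1) before fitting.
# y here is only a random placeholder until real labels are loaded from label_file.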
#Characterize the data
plt.figure()
plt.title("Average image brightness")
plt.hist(X)
plt.show()
#Run linear regression
reg = LinearRegression().fit(X, y)
print(reg.score(X, y))
print(reg.coef_)
print(reg.intercept_)
#pred_Y = reg.predict(X)
y = [1,1,1,1,1,1,2,2,2,2,2,2,3,3,3,3,3,3,4]
pred_y = [1,2,1,2,1,2,1,2,1,1,1,1,4,3,3,3,3,4,4]
cmat = metrics.confusion_matrix(y, pred_y)
print_confusion_matrix(cmat)
def print_confusion_matrix(confusion_matrix, class_names = None, figsize = (10,7), fontsize=14):
"""Prints a confusion matrix, as returned by sklearn.metrics.confusion_matrix, as a heatmap.
Arguments
---------
confusion_matrix: numpy.ndarray
The numpy.ndarray object returned from a call to sklearn.metrics.confusion_matrix.
Similarly constructed ndarrays can also be used.
class_names: list
An ordered list of class names, in the order they index the given confusion matrix.
figsize: tuple
A 2-long tuple, the first value determining the horizontal size of the ouputted figure,
the second determining the vertical size. Defaults to (10,7).
fontsize: int
Font size for axes labels. Defaults to 14.
Returns
-------
matplotlib.figure.Figure
The resulting confusion matrix figure
"""
    if class_names is None:
class_names = list(np.arange(len(confusion_matrix)))
print(class_names)
df_cm = pd.DataFrame(
confusion_matrix, index=class_names, columns=class_names,
)
fig = plt.figure(figsize=figsize)
try:
heatmap = sns.heatmap(df_cm, annot=True, fmt="d")
except ValueError:
raise ValueError("Confusion matrix values must be integers.")
heatmap.yaxis.set_ticklabels(heatmap.yaxis.get_ticklabels(), rotation=0, ha='right', fontsize=fontsize)
heatmap.xaxis.set_ticklabels(heatmap.xaxis.get_ticklabels(), rotation=45, ha='right', fontsize=fontsize)
plt.ylabel('True label')
plt.xlabel('Predicted label')
return fig
np.arange(len(cmat))
###Output
_____no_output_____ |
Model backlog/Train/251-Tweet-Train-5Fold-roBERTa OneCycle cosine.ipynb | ###Markdown
Dependencies
###Code
import json, warnings, shutil
from scripts_step_lr_schedulers import *
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
pd.set_option('max_colwidth', 120)
###Output
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
Load data
###Code
# Unzip files
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_1.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_2.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_3.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_4.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_5.tar.gz'
database_base_path = COLAB_BASE_PATH + 'Data/complete_64_clean/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
print(f'Training samples: {len(k_fold)}')
display(k_fold.head())
###Output
Training samples: 26882
###Markdown
Model parameters
###Code
vocab_path = COLAB_BASE_PATH + 'qa-transformers/roberta/roberta-base-vocab.json'
merges_path = COLAB_BASE_PATH + 'qa-transformers/roberta/roberta-base-merges.txt'
base_path = COLAB_BASE_PATH + 'qa-transformers/roberta/'
config = {
"MAX_LEN": 64,
"BATCH_SIZE": 32,
"EPOCHS": 2,
"LEARNING_RATE": 1e-4,
"ES_PATIENCE": 2,
"N_FOLDS": 5,
"question_size": 4,
"base_model_path": base_path + 'roberta-base-tf_model.h5',
"config_path": base_path + 'roberta-base-config.json'
}
with open(MODEL_BASE_PATH + 'config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
###Output
_____no_output_____
###Markdown
Tokenizer
###Code
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
###Output
_____no_output_____
###Markdown
Learning rate schedule
###Code
@tf.function
def one_cycle(step, total_steps, lr_start=1e-4, lr_max=1e-3):
""" Create a schedule with a learning rate that decreases linearly after
linearly increasing during a warmup period.
"""
warmup_steps = total_steps // 2
if step < warmup_steps:
lr = (lr_max - lr_start) / warmup_steps * step + lr_start
else:
current_percentage = step / total_steps
if current_percentage <= .9:
lr = lr_max * ((total_steps - step) / (total_steps - warmup_steps))
else:
lr = lr_max * ((total_steps - step) / (total_steps - warmup_steps) * 0.7)
return lr
lr_min = 1e-6
lr_start = 0
lr_max = config['LEARNING_RATE']
train_size = len(k_fold[k_fold['fold_1'] == 'train'])
step_size = train_size // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * step_size
warmup_steps = total_steps//2 # total_steps * 0.1
num_cycles = 1
rng = [i for i in range(0, total_steps, config['BATCH_SIZE'])]
y = [cosine_with_hard_restarts_schedule_with_warmup(tf.cast(x, tf.float32), total_steps=total_steps,
warmup_steps=warmup_steps, lr_start=lr_start,
lr_max=lr_max, lr_min=lr_min, num_cycles=num_cycles) for x in rng]
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
###Output
Learning rate schedule: 0 to 0.0001 to 1e-06
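###Markdown
For reference, a generic cosine-with-warmup schedule that produces a curve of the same shape as the plot above — this is only a sketch and an assumption about the imported helper, whose actual definition lives in `scripts_step_lr_schedulers`:
###Code
import math

def cosine_hard_restarts_warmup(step, total_steps, warmup_steps,
                                lr_start=0.0, lr_max=1e-4, lr_min=1e-6, num_cycles=1):
    # linear warmup from lr_start to lr_max
    if step < warmup_steps:
        return lr_start + (lr_max - lr_start) * step / warmup_steps
    # cosine decay (restarting num_cycles times) from lr_max down to lr_min
    progress = (step - warmup_steps) / max(1.0, total_steps - warmup_steps)
    if progress >= 1.0:
        return lr_min
    cosine = 0.5 * (1.0 + math.cos(math.pi * ((num_cycles * progress) % 1.0)))
    return lr_min + (lr_max - lr_min) * cosine

cosine_hard_restarts_warmup(total_steps // 2, total_steps, total_steps // 2)  # peak -> 1e-4
###Output
_____no_output_____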
###Markdown
Model
###Code
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name='base_model')
last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
logits = layers.Dense(2, use_bias=False, name='qa_outputs')(last_hidden_state)
start_logits, end_logits = tf.split(logits, 2, axis=-1)
start_logits = tf.squeeze(start_logits, axis=-1, name='y_start')
end_logits = tf.squeeze(end_logits, axis=-1, name='y_end')
model = Model(inputs=[input_ids, attention_mask], outputs=[start_logits, end_logits])
return model
###Output
_____no_output_____
###Markdown
Train
###Code
def get_training_dataset(x_train, y_train, batch_size, buffer_size, seed=0):
dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': x_train[0], 'attention_mask': x_train[1]},
(y_train[0], y_train[1])))
dataset = dataset.repeat()
dataset = dataset.shuffle(2048, seed=seed)
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.prefetch(buffer_size)
return dataset
def get_validation_dataset(x_valid, y_valid, batch_size, buffer_size, repeated=False, seed=0):
dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': x_valid[0], 'attention_mask': x_valid[1]},
(y_valid[0], y_valid[1])))
if repeated:
dataset = dataset.repeat()
dataset = dataset.shuffle(2048, seed=seed)
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.cache()
dataset = dataset.prefetch(buffer_size)
return dataset
AUTO = tf.data.experimental.AUTOTUNE
history_list = []
for n_fold in range(config['N_FOLDS']):
n_fold +=1
print('\nFOLD: %d' % (n_fold))
# Load data
base_data_path = 'fold_%d/' % (n_fold)
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
y_valid = np.load(base_data_path + 'y_valid.npy')
step_size = x_train.shape[1] // config['BATCH_SIZE']
# Train model
model_path = 'model_fold_%d.h5' % (n_fold)
model = model_fn(config['MAX_LEN'])
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=True, verbose=1)
checkpoint = ModelCheckpoint((MODEL_BASE_PATH + model_path), monitor='val_loss', mode='min',
save_best_only=True, save_weights_only=True)
lr = lambda: cosine_with_hard_restarts_schedule_with_warmup(tf.cast(optimizer.iterations, tf.float32),
total_steps=total_steps, warmup_steps=warmup_steps,
lr_start=lr_start, lr_max=lr_max, lr_min=lr_min,
num_cycles=num_cycles)
optimizer = optimizers.Adam(learning_rate=lr)
model.compile(optimizer, loss=[losses.CategoricalCrossentropy(label_smoothing=0.2, from_logits=True),
losses.CategoricalCrossentropy(label_smoothing=0.2, from_logits=True)])
history = model.fit(get_training_dataset(x_train, y_train, config['BATCH_SIZE'], AUTO, seed=SEED),
validation_data=(get_validation_dataset(x_valid, y_valid, config['BATCH_SIZE'], AUTO, repeated=False, seed=SEED)),
epochs=config['EPOCHS'],
steps_per_epoch=step_size,
callbacks=[checkpoint, es],
verbose=2).history
history_list.append(history)
# Make predictions
predict_eval_df(k_fold, model, x_train, x_valid, get_test_dataset, decode, n_fold, tokenizer, config, config['question_size'])
###Output
FOLD: 1
Epoch 1/2
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
672/672 - 175s - loss: 4.4050 - tf_op_layer_y_start_1_loss: 2.1864 - tf_op_layer_y_end_1_loss: 2.2186 - val_loss: 3.8690 - val_tf_op_layer_y_start_1_loss: 1.9698 - val_tf_op_layer_y_end_1_loss: 1.8992
Epoch 2/2
672/672 - 174s - loss: 3.7630 - tf_op_layer_y_start_1_loss: 1.9000 - tf_op_layer_y_end_1_loss: 1.8631 - val_loss: 3.7556 - val_tf_op_layer_y_start_1_loss: 1.9081 - val_tf_op_layer_y_end_1_loss: 1.8476
FOLD: 2
Epoch 1/2
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
672/672 - 174s - loss: 4.4704 - tf_op_layer_y_start_2_loss: 2.1936 - tf_op_layer_y_end_2_loss: 2.2768 - val_loss: 3.8350 - val_tf_op_layer_y_start_2_loss: 1.9302 - val_tf_op_layer_y_end_2_loss: 1.9048
Epoch 2/2
672/672 - 175s - loss: 3.7787 - tf_op_layer_y_start_2_loss: 1.9102 - tf_op_layer_y_end_2_loss: 1.8685 - val_loss: 3.7450 - val_tf_op_layer_y_start_2_loss: 1.8952 - val_tf_op_layer_y_end_2_loss: 1.8499
FOLD: 3
Epoch 1/2
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
672/672 - 174s - loss: 4.4689 - tf_op_layer_y_start_3_loss: 2.2188 - tf_op_layer_y_end_3_loss: 2.2501 - val_loss: 3.8388 - val_tf_op_layer_y_start_3_loss: 1.9363 - val_tf_op_layer_y_end_3_loss: 1.9025
Epoch 2/2
672/672 - 175s - loss: 3.7692 - tf_op_layer_y_start_3_loss: 1.9006 - tf_op_layer_y_end_3_loss: 1.8686 - val_loss: 3.7347 - val_tf_op_layer_y_start_3_loss: 1.8949 - val_tf_op_layer_y_end_3_loss: 1.8398
FOLD: 4
Epoch 1/2
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
672/672 - 174s - loss: 4.4425 - tf_op_layer_y_start_4_loss: 2.2025 - tf_op_layer_y_end_4_loss: 2.2400 - val_loss: 3.8207 - val_tf_op_layer_y_start_4_loss: 1.9296 - val_tf_op_layer_y_end_4_loss: 1.8911
Epoch 2/2
672/672 - 174s - loss: 3.7494 - tf_op_layer_y_start_4_loss: 1.8992 - tf_op_layer_y_end_4_loss: 1.8502 - val_loss: 3.7269 - val_tf_op_layer_y_start_4_loss: 1.8794 - val_tf_op_layer_y_end_4_loss: 1.8474
FOLD: 5
Epoch 1/2
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
672/672 - 174s - loss: 4.4025 - tf_op_layer_y_start_5_loss: 2.2007 - tf_op_layer_y_end_5_loss: 2.2018 - val_loss: 3.8312 - val_tf_op_layer_y_start_5_loss: 1.9333 - val_tf_op_layer_y_end_5_loss: 1.8979
Epoch 2/2
672/672 - 175s - loss: 3.7735 - tf_op_layer_y_start_5_loss: 1.9103 - tf_op_layer_y_end_5_loss: 1.8632 - val_loss: 3.7299 - val_tf_op_layer_y_start_5_loss: 1.8837 - val_tf_op_layer_y_end_5_loss: 1.8461
###Markdown
Model loss graph
###Code
#@title
for n_fold in range(config['N_FOLDS']):
print('Fold: %d' % (n_fold+1))
plot_metrics(history_list[n_fold])
###Output
Fold: 1
###Markdown
Model evaluation
###Code
#@title
display(evaluate_model_kfold(k_fold, config['N_FOLDS']).style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Visualize predictions
###Code
#@title
k_fold['jaccard_mean'] = 0
for n in range(config['N_FOLDS']):
k_fold['jaccard_mean'] += k_fold[f'jaccard_fold_{n+1}'] / config['N_FOLDS']
display(k_fold[['text', 'selected_text', 'sentiment', 'text_tokenCnt',
'selected_text_tokenCnt', 'jaccard', 'jaccard_mean'] + [c for c in k_fold.columns if (c.startswith('prediction_fold'))]].head(15))
###Output
_____no_output_____ |
discrete math 2.ipynb | ###Markdown
Chapter 5 Prufer's Algorithm
###Code
#T=[[6, 2], [6, 7], [5, 6], [3, 5], [5, 1], [1, 4]]
#V=[2, 6, 7, 3, 5, 1, 4]
## Ex 5.1, 31
T=[[1, 2], [2, 3], [5, 2], [5, 4], [6, 5], [7, 5]]
V=[1, 2, 3, 4, 5, 6, 7]
def PrA(T, V):
a=[]
while len(T)!=1:
L=[i for i in V if (flatten(T)).count(i)==1]
v1=min(L)
for i in T:
if v1 in i:
e=i
break
ak=e[1] if e[0]==v1 else e[0]
del V[V.index(v1)]
a+=[ak, ]
del T[T.index(e)]
    return a
PrA(T, V)
###Output
_____no_output_____
###Markdown
Prim's Algorithm
###Code
#Ed=[['A', 'B', 4], ['E', 'A', 3], ['B', 'E', 5], ['E', 'D', 2], ['E', 'C', 3], ['C', 'B', 5], ['C', 'D', 3]]
#Ed=[['A', 'B', 9], ['D', 'B', 11], ['D','A', 3], ['B', 'F', 8], ['F', 'D', 9], ['F', 'C', 5], ['C', 'B', 8], ['E', 'F', 6], ['C', 'E', 3], ['E', 'D', 11]]
## Ex 5.2, 19
#Ed=[['C', 'B', 4], ['A', 'C', 3], ['C', 'D', 5], ['B', 'A', 6], ['A', 'D', 4], ['D', 'E', 2], ['H', 'E', 2], ['E', 'F', 4], ['F', 'I', 3], ['I', 'H', 3], ['I', 'G', 2], ['G', 'H', 4], ['G', 'E', 1]]
## Ex 5.2, 18
Ed=[['A', 'B', 4], ['A', 'E', 3], ['E', 'F', 6], ['F', 'B', 5], ['F', 'I', 4], ['I', 'J', 3], ['J', 'G', 1], ['G', 'F', 1], ['A', 'F', 2],['I', 'G', 5], ['G', 'C', 2], ['G', 'H', 4], ['H', 'D', 2], ['D', 'C', 3]]
def PA(Ed, st):
"""Ed is the list of edges with weights and st is the starting vertex"""
E1=[]
E2=[]
for i in Ed:
E1+=[[i[0], i[1]], ]
E2+=[i[2], ]
X=flatten(E1)
V=list(set(X))
n=len(V)
E3=E1[:]
L=[st]
T=[]
w=0
while True:
I=[]
temp=[]
s=0
for i in E1:
if (i[0] in L and i[1] not in L) or (i[1] in L and i[0] not in L):
I+=[E1.index(i),]
temp+=[i,]
M=[E2[i] for i in I]
Min=min(M) # Max=max(M) for maximal spanning tree
te=temp[M.index(Min)]
T+=[te,]
del E3[E3.index(te)]
x, y=te[0], te[1]
L+=[x if y in L else y, ]
for i in E3:
if (i[0] in L and i[1] not in L) or (i[1] in L and i[0] not in L):s+=1
if s==0: break
for i in T: w+=E2[E1.index(i)]
    return (T, w) if len(L)==n else "No minimal spanning tree"
PA(Ed, 'C')
###Output
_____no_output_____
###Markdown
Kruskal's Algorithm
###Code
#Ed=[['A', 'B', 4], ['E', 'A', 3], ['B', 'E', 5], ['E', 'D', 2], ['E', 'C', 3], ['C', 'B', 5], ['C', 'D', 3]]
#Ed=[['C', 'B', 4], ['A', 'C', 3], ['C', 'D', 5], ['B', 'A', 6], ['A', 'D', 4], ['D', 'E', 2], ['H', 'E', 2], ['E', 'F', 4], ['F', 'I', 3], ['I', 'H', 3], ['I', 'G', 2], ['G', 'H', 4], ['G', 'E', 1]]
Ed=[['A', 'B', 4], ['A', 'E', 3], ['E', 'F', 6], ['F', 'B', 5], ['F', 'I', 4], ['I', 'J', 3], ['J', 'G', 1], ['G', 'F', 1], ['A', 'F', 2],['I', 'G', 5], ['G', 'C', 2], ['G', 'H', 4], ['H', 'D', 2], ['D', 'C', 3]]
def Kr_A(Ed):
E1=[]
E2=[]
for i in Ed:
E1+=[[i[0], i[1]], ]
E2+=[i[2], ]
X=flatten(E1)
V=list(set(X))
n=len(V)
E3=E1[:]
E4=E2[:]
w=0
M=min(E2)
e=E1[E2.index(M)]
cl=[e]
T=[e, ]
ind=E3.index(e)
del E3[ind]
del E4[ind]
while len(T)<n-1 and len(E3)!=0:
M=min(E4)
e=E3[E4.index(M)]
for i in cl:
if e[0] in i and e[1] not in i:
ind=cl.index(i)
cl1=cl[:ind]+cl[ind+1:]
s=0
for j in cl1:
if e[1] in j:
ind1=cl1.index(j)
cl[ind]=cl[ind]+cl1[ind1]
J=j
T+=[e,]
break
else:
s+=1
if s==len(cl1):
cl[ind]=cl[ind]+[e[1]]
T+=[e, ]
J=0
break
elif e[1] in i and e[0] not in i:
ind=cl.index(i)
cl1=cl[:ind]+cl[ind+1:]
s=0
for j in cl1:
if e[0] in j:
ind1=cl1.index(j)
cl[ind]=cl[ind]+cl1[ind1]
J=j
T+=[e,]
break
else:
s+=1
if s==len(cl1):
cl[ind]=cl[ind]+[e[0]]
T+=[e, ]
J=0
break
elif e[0] not in i and e[1] not in i:
cl+=[e,]
T+=[e,]
J=0
break
else:
J=0
break
ind=E3.index(e)
del E3[ind]
del E4[ind]
if J!=0: del cl[cl.index(j)]
for i in T: w+=E2[E1.index(i)]
return T, w
Kr_A(Ed)
###Output
_____no_output_____
###Markdown
Depth-First Search Algorithm
###Code
Ed=[['A', 'B'], ['G', 'A'], ['G', 'F'], ['F', 'B'], ['F', 'C'], ['C', 'G'], ['C', 'D'], ['C', 'E'], ['F', 'H'], ['H', 'B'], ['B', 'J'], ['J', 'H'], ['H', 'I'], ['I', 'F']]
#Ed=[['A', 'C'], ['C', 'G'], ['G', 'H'], ['H', 'A'], ['G', 'F'], ['F', 'C'], ['H', 'F'], ['F', 'B'], ['B', 'D'], ['D', 'E'], ['E', 'F'], ['E', 'B']]
#Ed=[['A', 'B'], ['B', 'E'], ['E', 'A'], ['E', 'C'], ['C', 'D'], ['B', 'F'], ['F', 'G'], ['E', 'G'], ['D', 'H'], ['H', 'C'], ['C', 'I'], ['I', 'J'], ['J', 'D']]
st='A'
def DFA(Ed, st):
V=list(set(flatten(Ed)))
L={'A':[1, '-']}
T=[]
k=2
while len(L)!=len(V):
temp=[]
t=[]
for i in Ed:
if st in i: temp+=[i,]
for i in temp:
if i[0]==st and i[1] not in L: t+=[i[1], ]
elif i[1]==st and i[0] not in L: t+=[i[0], ]
if len(t)!=0:
v=min(t)
e=[v, st] if [v, st] in temp else [st, v]
L[v]=[k, st]
T+=[e, ]
del Ed[Ed.index(e)]
st=v
k+=1
else:
st=L[st][1]
if st=='A': break
    return (L, T) if len(L)==len(V) else "No spanning tree"
DFA(Ed, st)
#Ed=[['A', 'B'], ['A', 'C'], ['A', 'D'], ['D', 'C'], ['C', 'B']]
Ed=[['A', 'B'], ['B', 'D'], ['D', 'A'], ['C', 'F'], ['F', 'E']] # ['B', 'E'] is a bridge
st='A'
DFA(Ed, st)
###Output
_____no_output_____
###Markdown
Preorder Traversal Algorithm
###Code
#Ed=[['A', 'B', 'l'], ['A', 'C', 'r'], ['B', 'E', 'r'], ['B', 'D', 'l'], ['D', 'F', 'r'], ['E', 'G', 'l']]
#Ex 5.5, 17
Ed=[['A', 'B', 'l'], ['A', 'C', 'r'], ['B', 'D', 'l'], ['B', 'E', 'r'], ['D', 'G', 'l'], ['G', 'L', 'l'], ['E', 'I', 'r'], ['E', 'H', 'l'], ['H', 'M', 'l'], ['I', 'N', 'l'], ['C', 'F', 'r'], ['F', 'K', 'r'], ['F', 'J', 'l'], ['K', 'Q', 'r'], ['J', 'O', 'l'], ['J', 'P', 'r']]
##'r' means right child, 'l' means left child and ['A', 'B', 'l'] means 'A' is the parent and 'B' is the left child of 'A'
def Preo_trav(Ed):
E1=[]
E2=[]
for i in Ed:
E1+=[i[:2], ]
E2+=[i[2], ]
V=list(set(flatten(E1)))
    # we need to determine the root, i.e., the node which has indegree=0
In=0
root=''
for v in V:
for i in E1:
if i[1]==v:In+=1
if In==0:
root=v
break
else: In=0
st=root
L={st:[1, '-']}
k=2
while len(L)!=len(V):
t1=[]
t2=[]
for i in E1:
if i[0]==st and i[1] not in L:
t1+=[i, ]
t2+=[E2[E1.index(i)], ]
if len(t1)==2:
ind=t2.index('l')
e=t1[ind]
L[e[1]]=[k, e[0]]
st=e[1]
k+=1
ind1=E1.index(e)
del E1[ind1]
del E2[ind1]
elif len(t2)==1:
e=t1[0]
L[e[1]]=[k, e[0]]
st=e[1]
k+=1
ind1=E1.index(e)
del E1[ind1]
del E2[ind1]
else: st=L[st][1]
l={}
print(L)
for i in L: l[L[i][0]]=i
return l
Preo_trav(Ed)
###Output
{'A': [1, '-'], 'B': [2, 'A'], 'D': [3, 'B'], 'G': [4, 'D'], 'L': [5, 'G'], 'E': [6, 'B'], 'H': [7, 'E'], 'M': [8, 'H'], 'I': [9, 'E'], 'N': [10, 'I'], 'C': [11, 'A'], 'F': [12, 'C'], 'J': [13, 'F'], 'O': [14, 'J'], 'P': [15, 'J'], 'K': [16, 'F'], 'Q': [17, 'K']}
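###Markdown
A compact recursive reference for the same [parent, child, 'l'/'r'] edge list (an added helper, not part of the original algorithm); its vertex order should match the numbering produced by Preo_trav above.
###Code
def preorder_recursive(Ed, root):
    # build left/right child lookup tables from the edge list
    left = {p: c for p, c, side in Ed if side == 'l'}
    right = {p: c for p, c, side in Ed if side == 'r'}
    order = []
    def visit(v):
        if v is None: return
        order.append(v)      # visit the vertex, then its left and right subtrees
        visit(left.get(v))
        visit(right.get(v))
    visit(root)
    return order
preorder_recursive(Ed, 'A') # expected: A, B, D, G, L, E, H, M, I, N, C, F, J, O, P, K, Q
###Output
_____no_output_____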
###Markdown
Postorder Traversal Algorithm
###Code
#Ed=[['A', 'B', 'l'], ['A', 'C', 'r'], ['B', 'E', 'r'], ['B', 'D', 'l'], ['D', 'F', 'r'], ['E', 'G', 'l']]
Ed=[['A', 'B', 'l'], ['A', 'C', 'r'], ['B', 'D', 'l'], ['B', 'E', 'r'], ['D', 'G', 'l'], ['G', 'L', 'l'], ['E', 'I', 'r'], ['E', 'H', 'l'], ['H', 'M', 'l'], ['I', 'N', 'l'], ['C', 'F', 'r'], ['F', 'K', 'r'], ['F', 'J', 'l'], ['K', 'Q', 'r'], ['J', 'O', 'l'], ['J', 'P', 'r']]
def Posto_trav(Ed):
E1=[]
E2=[]
for i in Ed:
E1+=[i[:2], ]
E2+=[i[2], ]
V=list(set(flatten(E1)))
    # we need to determine the root, i.e., the node which has indegree=0
In=0
root=''
for v in V:
for i in E1:
if i[1]==v:In+=1
if In==0:
root=v
break
else: In=0
st=root
L1={st:'-'}
L2={}
k=1
while len(L2)!=len(V):
t1=[]
t2=[]
for i in E1:
if i[0]==st and i[1] not in L2:
t1+=[i, ]
t2+=[E2[E1.index(i)], ]
if len(t2)==2:
ind=t2.index('l')
e=t1[ind]
L1[e[1]]=st
st=e[1]
elif len(t2)==1:
e=t1[0]
L1[e[1]]=st
st=e[1]
else:
L2[st]=k
e=[L1[st], st]
if e[0]=='-':
L2[e[1]]=k
break
st=L1[st]
ind1=E1.index(e)
del E1[ind1]
del E2[ind1]
k+=1
return(L2)
Posto_trav(Ed)
###Output
_____no_output_____
###Markdown
Inorder Traversal Algorithm
###Code
#Ed=[['A', 'B', 'l'], ['A', 'C', 'r'], ['B', 'E', 'r'], ['B', 'D', 'l'], ['D', 'F', 'r'], ['E', 'G', 'l']]
Ed=[['A', 'B', 'l'], ['A', 'C', 'r'], ['B', 'D', 'l'], ['B', 'E', 'r'], ['D', 'G', 'l'], ['G', 'L', 'l'], ['E', 'I', 'r'], ['E', 'H', 'l'], ['H', 'M', 'l'], ['I', 'N', 'l'], ['C', 'F', 'r'], ['F', 'K', 'r'], ['F', 'J', 'l'], ['K', 'Q', 'r'], ['J', 'O', 'l'], ['J', 'P', 'r']]
def Ino_trav(Ed):
E1=[]
E2=[]
for i in Ed:
E1+=[i[:2], ]
E2+=[i[2], ]
V=list(set(flatten(E1)))
    # we need to determine the root, i.e., the node which has indegree=0
In=0
root=''
for v in V:
for i in E1:
if i[1]==v:In+=1
if In==0:
root=v
break
else: In=0
st=root
L1={st:'-'}
L2={}
k=1
while len(L2)!=len(V):
t1=[]
t2=[]
for i in E1:
if i[0]==st and i[1] not in L2:
t1+=[i, ]
t2+=[E2[E1.index(i)], ]
if len(t2)==2:
ind=t2.index('l')
e=t1[ind]
L1[e[1]]=st
st=e[1]
elif len(t2)==1:
if t2[0]=='l':
e=t1[0]
L1[e[1]]=st
st=e[1]
else:
e=t1[0]
L2[e[0]]=k
L1[e[1]]=st
st=e[1]
k+=1
else:
if st in L2:
st=L1[st]
else:
L2[st]=k
e=[L1[st], st]
if e[0]=='-':
L2[e[1]]=k
break
st=L1[st]
k+=1
return(L2)
Ino_trav(Ed)
###Output
_____no_output_____
###Markdown
Huffman's Optimal Binary Tree Algorithm: For nonnegative real numbers $w_1$, $w_2$, ..., $w_k$, where $k \geq 2$, this algorithm constructs an optimal binary tree for the weights $w_1$, $w_2$, ..., $w_k$. In the algorithm, a vertex is referred to by its label.
###Code
#S=[2, 3, 4, 7, 8]
#S=[2, 4, 5, 6]
#S=[2, 3, 5, 5, 6]
#S=[10, 12, 13, 16, 17, 17]
#Ex 5.6, 33
S=[1, 4, 9, 16, 25, 36]
def HOBT(S):
S.sort()
S1=S[:]
T=[]
while len(S)!=1:
m1=min(S1)
del S1[S1.index(m1)]
m2=min(S1)
del S1[S1.index(m2)]
root=m1+m2
if m1!=m2:
ind1=S.index(m1)
ind2=S.index(m2)
        else:
            # equal minima: temporarily bump the first occurrence so that
            # S.index(m2) finds the second occurrence, then restore it
            ind1=S.index(m1)
            S[ind1]+=1
            ind2=S.index(m2)
            S[ind1]-=1
i1=ind1 if ind1<=ind2 else ind2
i2=ind1 if i1==ind2 else ind2
T+=[[root, S[i1], 'l'], ]
T+=[[root, S[i2], 'r'], ]
S[i1]=root
del S[i2]
S1=S[:]
S.sort()
S1.sort()
return T
HOBT(S)
HOBT([32, 28, 20, 4, 1])
###Output
_____no_output_____
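###Markdown
A quick sanity check (an addition that only uses the standard library heapq module): the sum of the merged node weights in Huffman's algorithm equals the weighted path length of an optimal tree, so it can be compared against the internal 'root' labels created by HOBT.
###Code
import heapq
def huffman_cost(weights):
    """Total weight of all merged (internal) nodes = weighted path length of an optimal tree."""
    h = list(weights)
    heapq.heapify(h)
    cost = 0
    while len(h) > 1:
        a = heapq.heappop(h)
        b = heapq.heappop(h)
        cost += a + b
        heapq.heappush(h, a + b)
    return cost
huffman_cost([1, 4, 9, 16, 25, 36])
###Output
_____no_output_____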
###Markdown
Binary Search Tree Construction Algorithm: This algorithm constructs a binary search tree in which the vertices are labeled $a_1$, $a_2$, ..., $a_n$, where $a_1$, $a_2$, ..., $a_n$ are distinct and $n \geq 2$. In the algorithm, the vertex is referred to by its label.
###Code
#a=[5, 9, 8, 1, 2, 4, 10, 6]
#Ex. 5.6, 51
a=[14, 17, 3, 6, 15, 1, 20, 2, 5, 10, 18, 7, 16]
def BSTC(a):
n=len(a)
root=a[0]
T=[]
k=1
while k<n:
V=root
while True:
temp=[]
l=0
r=0
for i in T:
if i[0]==V:
if i[2]=='l':l+=1
else: r+=1
temp+=[i, ]
if a[k]<V:
if l==0:
T+=[[V, a[k], 'l'], ]
break
else: V=temp[0][1] if temp[0][2]=='l' else temp[1][1]
elif a[k]>V:
if r==0:
T+=[[V, a[k], 'r'], ]
break
else: V=temp[0][1] if temp[0][2]=='r' else temp[1][1]
k+=1
return (T)
BSTC(a)
###Output
_____no_output_____
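###Markdown
A small added check (hypothetical helper, not part of the algorithm): walking the [parent, child, 'l'/'r'] edge list returned by BSTC in inorder should produce the labels in sorted order if the tree really is a binary search tree.
###Code
def inorder_from_edges(T, root):
    left = {p: c for p, c, side in T if side == 'l'}
    right = {p: c for p, c, side in T if side == 'r'}
    out = []
    def walk(v):
        if v is None: return
        walk(left.get(v))    # left subtree, then vertex, then right subtree
        out.append(v)
        walk(right.get(v))
    walk(root)
    return out
inorder_from_edges(BSTC(a), a[0]) == sorted(a) # should be True
###Output
_____no_output_____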
###Markdown
Binary Search Tree Search Algorithm
###Code
T=[[5, 9, 'r'], [9, 8, 'l'], [5, 1, 'l'], [1, 2, 'r'], [2, 4, 'r'], [9, 10, 'r'], [8, 6, 'l']]
s=7
def BSTS(T, s):
"""T is the binary search tree, s is the element to be searched"""
E1=[]
E2=[]
for i in T:
E1+=[i[:2], ]
E2+=[i[2], ]
V=list(set(flatten(E1)))
    # we need to determine the root, i.e., the node which has indegree=0
In=0
root=0
for v in V:
for i in E1:
if i[1]==v:In+=1
if In==0:
root=v
break
else: In=0
v=root
while True:
if s==v: return True
t1=[]
t2=[]
for i in range(len(E1)):
if E1[i][0]==v:
t1+=[E1[i], ]
t2+=[E2[i], ]
if len(t2)==2:
if s>v:
ind=t2.index('r')
e=t1[ind]
v=e[1]
elif s<v:
ind=t2.index('l')
e=t1[ind]
v=e[1]
elif len(t2)==1:
if s<v:
if t2[0]=='l':
e=t1[0]
v=e[1]
elif t2[0]=='r': return False
elif s>v:
if t2[0]=='l': return False
elif t2[0]=='r':
e=t1[0]
v=e[1]
else: return False
BSTS(T, s)
###Output
_____no_output_____
###Markdown
Chapter 6 Independent Set Algorithm (A Matching Algorithm)
###Code
A=np.array((['1', 0, 1, 0, 0, 1, 0], [0, 0, '1', 1, 1, 0, 1], [1, 0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, '1', 0], [0, '1', 0, 0, 1, 0, 1], [0, 0, 1, 0, 0, 1, 0]), dtype=object)
#Ex. 6.3, 5
#A=np.array(([0, '1', 0, 1], ['1', 1, 0, 0], [0, 0, '1', 1], [1, 1, 1, 0]), dtype=object)
#Ex. 6.3, 7
#A=np.array((['1', 1, 0, 1, 1], [1, 0, 0, 0, '1'], [0, '1', 0, 1, 0], [1, 1, 0, 0, 1]), dtype=object)
#A=np.array((['1', 0, 1, 1], [0, '1', 0, 0], [1, 1, 0, 0], [0, 1, 0, 0]), dtype=object)
#Ex. 6.3, 9
#A=np.array((['1', 1, 1, 1, 1], [1, 0, 0, 0, 0], [0, '1', 0, 0, 0], [1, 1, 0 ,0, 0], [1, 0, '1', 0, 1]), dtype=object)
def ISA(A):
row=len(A)
col=len(A[0])
while True:
breakage=0
cln=np.array([[0]]*row)
A=np.append(A, cln, axis=1)
A=np.append(A, [[0]*(col+1)], axis=0)
for i in range(col):
x=A[:, i]
if list(x).count(1)>0 and list(x).count('1')==0: A[row, i]='#'
while True:
for i in range(col):
if A[row, i]!=0:
if '/' not in A[row, i]:
x=A[:row, i]
for j in range(row):
if x[j]==1 and A[j, col]==0: A[j, col]=str(i)
A[row, i]+='/'
A1=np.copy(A)
print("A1")
print(A1)
for i in range(row):
if A[i, col]!=0:
if '/' not in A[i, col]:
x=A[i, :col]
if list(x).count('1')>0:
for j in range(col):
if x[j]=='1' and A[row, j]==0: A[row, j]=str(i)
A[i, col]+='/'
elif list(x).count('1')==0:
breakage=1
A[i, col]+='!'
break
A2=np.copy(A)
print("A2")
print(A2)
if np.array_equal(A1, A2): break
if breakage==1:
j=1
R=i
C=int(A[i, col][0])
ind=[[R, C]]
while True:
RC=ind[-1]
if j%2==1:
C=RC[1]
if '#' in A[row, C]: break
R=int(A[row, C][0]) if '/' in A[row, C] else int(A[row, C])
ind+=[[R, C], ]
j+=1
elif j%2==0:
R=RC[0]
C=int(A[R, col][0]) if '/' in A[R, col] else int(A[R, col])
ind+=[[R, C], ]
j+=1
for i in ind: A[i[0], i[1]]='1' if A[i[0], i[1]]==1 else 1
A=A[:row]
A=np.delete(A, -1, axis=1)
print(A)
break
if np.array_equal(A1, A2):
A=A[:row]
A=np.delete(A, -1, axis=1)
print("a")
print(A)
i, j=np.where(A=='1')
I=[str(k+1) for k in i]
J=[chr(k+65) for k in j]
break
return [I[i]+J[i] for i in range(len(I))]
ISA(A)
###Output
A1
[['1' 0 1 0 0 1 0 0]
[0 0 '1' 1 1 0 1 '3']
[1 0 1 0 0 0 0 0]
[1 0 0 0 0 '1' 0 0]
[0 '1' 0 0 1 0 1 '4']
[0 0 1 0 0 1 0 0]
[0 0 0 '#/' '#/' 0 '#/' 0]]
A2
[['1' 0 1 0 0 1 0 0]
[0 0 '1' 1 1 0 1 '3/']
[1 0 1 0 0 0 0 0]
[1 0 0 0 0 '1' 0 0]
[0 '1' 0 0 1 0 1 '4/']
[0 0 1 0 0 1 0 0]
[0 '4' '1' '#/' '#/' 0 '#/' 0]]
A1
[['1' 0 1 0 0 1 0 '2']
[0 0 '1' 1 1 0 1 '3/']
[1 0 1 0 0 0 0 '2']
[1 0 0 0 0 '1' 0 0]
[0 '1' 0 0 1 0 1 '4/']
[0 0 1 0 0 1 0 '2']
[0 '4/' '1/' '#/' '#/' 0 '#/' 0]]
A2
[['1' 0 1 0 0 1 0 '2/']
[0 0 '1' 1 1 0 1 '3/']
[1 0 1 0 0 0 0 '2!']
[1 0 0 0 0 '1' 0 0]
[0 '1' 0 0 1 0 1 '4/']
[0 0 1 0 0 1 0 '2']
['0' '4/' '1/' '#/' '#/' 0 '#/' 0]]
[['1' 0 1 0 0 1 0]
[0 0 1 '1' 1 0 1]
[1 0 '1' 0 0 0 0]
[1 0 0 0 0 '1' 0]
[0 '1' 0 0 1 0 1]
[0 0 1 0 0 1 0]]
A1
[['1' 0 1 0 0 1 0 0]
[0 0 1 '1' 1 0 1 '4']
[1 0 '1' 0 0 0 0 0]
[1 0 0 0 0 '1' 0 0]
[0 '1' 0 0 1 0 1 '4']
[0 0 1 0 0 1 0 0]
[0 0 0 0 '#/' 0 '#/' 0]]
A2
[['1' 0 1 0 0 1 0 0]
[0 0 1 '1' 1 0 1 '4/']
[1 0 '1' 0 0 0 0 0]
[1 0 0 0 0 '1' 0 0]
[0 '1' 0 0 1 0 1 '4/']
[0 0 1 0 0 1 0 0]
[0 '4' 0 '1' '#/' 0 '#/' 0]]
A1
[['1' 0 1 0 0 1 0 0]
[0 0 1 '1' 1 0 1 '4/']
[1 0 '1' 0 0 0 0 0]
[1 0 0 0 0 '1' 0 0]
[0 '1' 0 0 1 0 1 '4/']
[0 0 1 0 0 1 0 0]
[0 '4/' 0 '1/' '#/' 0 '#/' 0]]
A2
[['1' 0 1 0 0 1 0 0]
[0 0 1 '1' 1 0 1 '4/']
[1 0 '1' 0 0 0 0 0]
[1 0 0 0 0 '1' 0 0]
[0 '1' 0 0 1 0 1 '4/']
[0 0 1 0 0 1 0 0]
[0 '4/' 0 '1/' '#/' 0 '#/' 0]]
a
[['1' 0 1 0 0 1 0]
[0 0 1 '1' 1 0 1]
[1 0 '1' 0 0 0 0]
[1 0 0 0 0 '1' 0]
[0 '1' 0 0 1 0 1]
[0 0 1 0 0 1 0]]
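###Markdown
Optional size check (an addition that assumes SciPy is available and that A still holds the capability matrix defined above): the maximum bipartite matching on the same matrix should pair as many rows as ISA(A) returns.
###Code
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching
biadjacency = csr_matrix((A != 0).astype(int)) # treat both 1 and '1' entries as edges
matching = maximum_bipartite_matching(biadjacency, perm_type='column')
print((matching != -1).sum()) # should equal the number of pairs returned by ISA(A)
###Output
_____no_output_____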
###Markdown
The problem of the courses and professors, Pg 342
###Code
A=np.array((['1', 0, 1, 0, 0, 1, 0], [0, 0, '1', 1, 1, 0, 1], [1, 0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, '1', 0], [0, 1, 0, 0, '1', 0, 1], [0, 0, 1, 0, 0, 1, 0]), dtype=object)
ISA(A)
###Output
A1
[['1' 0 1 0 0 1 0 0]
[0 0 '1' 1 1 0 1 '3']
[1 0 1 0 0 0 0 0]
[1 0 0 0 0 '1' 0 0]
[0 1 0 0 '1' 0 1 '1']
[0 0 1 0 0 1 0 0]
[0 '#/' 0 '#/' 0 0 '#/' 0]]
A2
[['1' 0 1 0 0 1 0 0]
[0 0 '1' 1 1 0 1 '3/']
[1 0 1 0 0 0 0 0]
[1 0 0 0 0 '1' 0 0]
[0 1 0 0 '1' 0 1 '1/']
[0 0 1 0 0 1 0 0]
[0 '#/' '1' '#/' '4' 0 '#/' 0]]
A1
[['1' 0 1 0 0 1 0 '2']
[0 0 '1' 1 1 0 1 '3/']
[1 0 1 0 0 0 0 '2']
[1 0 0 0 0 '1' 0 0]
[0 1 0 0 '1' 0 1 '1/']
[0 0 1 0 0 1 0 '2']
[0 '#/' '1/' '#/' '4/' 0 '#/' 0]]
A2
[['1' 0 1 0 0 1 0 '2/']
[0 0 '1' 1 1 0 1 '3/']
[1 0 1 0 0 0 0 '2!']
[1 0 0 0 0 '1' 0 0]
[0 1 0 0 '1' 0 1 '1/']
[0 0 1 0 0 1 0 '2']
['0' '#/' '1/' '#/' '4/' 0 '#/' 0]]
[['1' 0 1 0 0 1 0]
[0 0 1 '1' 1 0 1]
[1 0 '1' 0 0 0 0]
[1 0 0 0 0 '1' 0]
[0 1 0 0 '1' 0 1]
[0 0 1 0 0 1 0]]
A1
[['1' 0 1 0 0 1 0 0]
[0 0 1 '1' 1 0 1 '6']
[1 0 '1' 0 0 0 0 0]
[1 0 0 0 0 '1' 0 0]
[0 1 0 0 '1' 0 1 '1']
[0 0 1 0 0 1 0 0]
[0 '#/' 0 0 0 0 '#/' 0]]
A2
[['1' 0 1 0 0 1 0 0]
[0 0 1 '1' 1 0 1 '6/']
[1 0 '1' 0 0 0 0 0]
[1 0 0 0 0 '1' 0 0]
[0 1 0 0 '1' 0 1 '1/']
[0 0 1 0 0 1 0 0]
[0 '#/' 0 '1' '4' 0 '#/' 0]]
A1
[['1' 0 1 0 0 1 0 0]
[0 0 1 '1' 1 0 1 '6/']
[1 0 '1' 0 0 0 0 0]
[1 0 0 0 0 '1' 0 0]
[0 1 0 0 '1' 0 1 '1/']
[0 0 1 0 0 1 0 0]
[0 '#/' 0 '1/' '4/' 0 '#/' 0]]
A2
[['1' 0 1 0 0 1 0 0]
[0 0 1 '1' 1 0 1 '6/']
[1 0 '1' 0 0 0 0 0]
[1 0 0 0 0 '1' 0 0]
[0 1 0 0 '1' 0 1 '1/']
[0 0 1 0 0 1 0 0]
[0 '#/' 0 '1/' '4/' 0 '#/' 0]]
a
[['1' 0 1 0 0 1 0]
[0 0 1 '1' 1 0 1]
[1 0 '1' 0 0 0 0]
[1 0 0 0 0 '1' 0]
[0 1 0 0 '1' 0 1]
[0 0 1 0 0 1 0]]
###Markdown
Hungarian Algorithm
###Code
#Ex 6.5
#A=np.array(([3, 6, 3, 5, 3], [7, 3, 5, 8, 5], [5, 2, 8, 6, 2], [8, 3, 6, 4, 4], [0, 0, 0, 0, 0]), dtype=object)
#A=np.array(([6, 2, 5, 8], [6, 7, 1, 6], [6, 3, 4, 5], [5, 4, 3, 4]), dtype=object)
#A=np.array(([3, 6, 3, 5], [7, 3, 5, 8], [5, 2, 8, 6], [8, 3, 6, 4]), dtype=object)
#A=np.array(([3, 5, 5, 3, 8], [4, 6,4, 2, 6], [4, 6, 1, 3, 6], [3, 4, 4, 6, 5], [5, 7, 3, 5, 9]), dtype=object)
A=np.array(([5, 6, 2, 3, 4, 3], [6, 4, 4, 2, 0, 3], [5, 4, 5, 2, 6, 6], [5, 6, 1, 4, 7, 6]), dtype=object)
def reduce(A):
row=len(A)
col=len(A[0])
ind=[]
a=np.copy(A)
for r in range(row):
x=A[r, :]
if 0 in x:
c=list(x).index(0)
ind+=[[r, c], ]
A[r, :]='x'
A[:, c]='x'
for i in ind: a[i[0], i[1]]='0'
return(a)
def ISA2(A):
row=len(A)
col=len(A[0])
while True:
breakage=0
cln=np.array([[0]]*row)
A=np.append(A, cln, axis=1)
A=np.append(A, [[0]*(col+1)], axis=0)
for i in range(col):
x=A[:, i]
if list(x).count(0)>0 and list(x).count('0')==0: A[row, i]='#'
while True:
for i in range(col):
if A[row, i]!=0:
if '/' not in A[row, i]:
x=A[:row, i]
for j in range(row):
if x[j]==0 and A[j, col]==0: A[j, col]=str(i)
A[row, i]+='/'
A1=np.copy(A)
for i in range(row):
if A[i, col]!=0:
if '/' not in A[i, col]:
x=A[i, :col]
if list(x).count('0')>0:
for j in range(col):
if x[j]=='0' and A[row, j]==0: A[row, j]=str(i)
A[i, col]+='/'
elif list(x).count('0')==0:
breakage=1
A[i, col]+='!'
break
A2=np.copy(A)
if np.array_equal(A1, A2): break
if breakage==1:
j=1
R=i
C=int(A[i, col][0])
ind=[[R, C]]
while True:
RC=ind[-1]
if j%2==1:
C=RC[1]
if '#' in A[row, C]: break
R=int(A[row, C][0]) if '/' in A[row, C] else int(A[row, C])
ind+=[[R, C], ]
j+=1
elif j%2==0:
R=RC[0]
C=int(A[R, col][0]) if '/' in A[R, col] else int(A[R, col])
ind+=[[R, C], ]
j+=1
for i in ind: A[i[0], i[1]]='0' if A[i[0], i[1]]==0 else 0
A=A[:row]
A=np.delete(A, -1, axis=1)
print(A)
break
if np.array_equal(A1, A2):
break
return A
def Hungarian(A):
a=np.copy(A)
row=len(A)
col=len(A[0])
if row<col:
num=col-row
for i in range(num):A=np.append(A, [[0]*col], axis=0)
row=len(A)
col=len(A[0])
a=np.copy(A)
print(A)
for i in range(row): A[i, :]-=min(A[i, :])
for j in range(col): A[:, j]-=min(A[:, j])
while True:
A=ISA2(reduce(A))
print(A)
R1, R2, C1, C2=[], [], [], []
for i in range(row):
if A[i, col]==0:R1+=[i, ]
else:R2+=[i, ]
for i in range(col):
if A[row, i]!=0:C1+=[i, ]
else: C2+=[i, ]
X=[]
Y=[]
for i in R1:
for j in C1:
X+=[A[i, j], ]
Y+=[[i, j], ]
if len(X)==0: break
M=min(X)
print(M)
Int=[]
for i in R2:
for j in C2: Int+=[[i, j], ]
for i in Y: A[i[0], i[1]]-=M
for i in Int: A[i[0], i[1]]+=M
print("Y=",Y)
print("Int=",Int)
A=A[:row]
A=np.delete(A, -1, axis=1)
I, J=np.where(A=='0')
ind=[]
for i in range(len(I)):A[I[i], J[i]]=0
print(A)
I, J=np.where(A=='0')
s=0
for i in range(len(I)): s+=a[I[i], J[i]]
return s
Hungarian(A)
###Output
[[5 6 2 3 4 3]
[6 4 4 2 0 3]
[5 4 5 2 6 6]
[5 6 1 4 7 6]
[0 0 0 0 0 0]
[0 0 0 0 0 0]]
[[3 4 '0' 1 2 1 0]
[6 4 4 2 '0' 3 0]
[3 2 3 '0' 4 4 0]
[4 5 0 3 6 5 0]
['0' 0 0 0 0 0 '5/']
[0 '0' 0 0 0 0 '5/']
['4/' '5/' 0 0 0 '#/' 0]]
1
Y= [[0, 0], [0, 1], [0, 5], [1, 0], [1, 1], [1, 5], [2, 0], [2, 1], [2, 5], [3, 0], [3, 1], [3, 5]]
Int= [[4, 2], [4, 3], [4, 4], [5, 2], [5, 3], [5, 4]]
[[2 3 0 1 2 0]
[5 3 4 2 0 2]
[2 1 3 0 4 3]
[3 4 0 3 6 4]
[0 0 1 1 1 0]
[0 0 1 1 1 0]]
[[2 3 0 1 2 '0']
[5 3 4 2 '0' 2]
[2 1 3 '0' 4 3]
[3 4 '0' 3 6 4]
['0' 0 1 1 1 0]
[0 '0' 1 1 1 0]]
[[2 3 0 1 2 '0' 0]
[5 3 4 2 '0' 2 0]
[2 1 3 '0' 4 3 0]
[3 4 '0' 3 6 4 0]
['0' 0 1 1 1 0 0]
[0 '0' 1 1 1 0 0]
[0 0 0 0 0 0 0]]
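###Markdown
Optional cross-check (an addition, assuming SciPy is available): scipy.optimize.linear_sum_assignment solves the same rectangular assignment problem directly, so its minimum cost should match the total returned by Hungarian(A).
###Code
import numpy as np
from scipy.optimize import linear_sum_assignment
cost = np.array([[5, 6, 2, 3, 4, 3],
                 [6, 4, 4, 2, 0, 3],
                 [5, 4, 5, 2, 6, 6],
                 [5, 6, 1, 4, 7, 6]], dtype=float)
rows, cols = linear_sum_assignment(cost)
print(cost[rows, cols].sum()) # should match the cost returned by Hungarian(A)
###Output
_____no_output_____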
|
UGRID/.ipynb_checkpoints/no_bug_after_all-checkpoint.ipynb | ###Markdown
Plot with cool Cartopy tiled background
###Code
geodetic = ccrs.Geodetic(globe=ccrs.Globe(datum='WGS84'))
fig = plt.figure(figsize=(8,8))
tiler = MapQuestOpenAerial()
ax = plt.axes(projection=tiler.crs)
#ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent(bbox,geodetic)
ax.add_image(tiler, 8)
#ax.coastlines()
plt.tricontourf(triang, zcube.data, levels=levs, transform=geodetic)
plt.colorbar()
plt.tricontour(triang, zcube.data, colors='k',levels=levs, transform=geodetic)
tvar = cube.coord('time')
tstr = tvar.units.num2date(tvar.points[itime])
gl = ax.gridlines(draw_labels=True)
gl.xlabels_top = False
gl.ylabels_right = False
plt.title('%s: %s: %s' % (zcube.attributes['title'],var,tstr));
###Output
_____no_output_____ |
directme/analysis/Gov16_suburb summary.ipynb | ###Markdown
Summarise accidents by suburbs factoring in mortality rate
###Code
###problem with the suburblist.txt, need to rewrite this function to support new data.
subsdf = pd.DataFrame(columns=["Postcode", "Suburb", "TotalAccidents", "Non-injury", "Other-injury", "Serious-injury", "Fatality"])
with open("suburblist.txt") as f:
a = f.readlines()
k = []
row = 0
for i in a:
        t = i.replace("Suburb :", "").strip()  # strip() with a character set would eat leading letters of some suburb names
#t = t.strip("\n")
c=t.split("(")
sub = c[0]
pos = c[1].strip("()")
#pos = pos.stri(")")
subsdf.loc[row] = [pos, sub, 0, 0, 0, 0, 0]
row += 1
print(subsdf.shape )
#load accident to find mortality
#from PERSON.csv, INJ_LEVEL 1-4, 4 being non-injury, 1 being fatality, inj level desc has more details.
persondf = pd.read_csv("PERSON.csv")
persondf.columns
# inspect the person records for one accident; subsdf has no ACCIDENT_NO column, so persondf is the right frame
persondf[(persondf["ACCIDENT_NO"] == "T20060002570") & (persondf["INJ_LEVEL"] == 1)]
#need to optimize this function. Approach should be from persondf's perspective to check against suburb code,
#it would be faster that way - see the vectorized sketch after the loop below.
def countInjuries(data): #the data is a sub dataframe of node
#debug(10, data.head())
injuries = [0, 0, 0, 0] #four categories of injuries
debug(10, "Processing: {0}".format(data.shape[0]))
if data.shape[0] > 0:
for i in range(data.shape[0]):
#for each accident number
number = data.iloc[i, 0]
injuries[0] += persondf[(persondf["ACCIDENT_NO"] == number) & (persondf["INJ_LEVEL"] == 1)].shape[0]
injuries[1] += persondf[(persondf["ACCIDENT_NO"] == number) & (persondf["INJ_LEVEL"] == 2)].shape[0]
injuries[2] += persondf[(persondf["ACCIDENT_NO"] == number) & (persondf["INJ_LEVEL"] == 3)].shape[0]
injuries[3] += persondf[(persondf["ACCIDENT_NO"] == number) & (persondf["INJ_LEVEL"] == 4)].shape[0]
return injuries
for i in range(subsdf.shape[0]):
code = int(subsdf.loc[i, 'Postcode'])
#debug(10, code)
if type(code) == type(0):
tempdf = nodedf[nodedf["Postcode No"] == code]
count = tempdf.shape[0]
injuries = countInjuries(tempdf)
debug(10, injuries)
if(len(injuries) > 0):
subsdf.loc[i, 'TotalAccidents'] = count
subsdf.loc[i, "None-injury"] = injuries[3]
subsdf.loc[i, "Other-injury"] = injuries[2]
subsdf.loc[i, "Serious-injury"] = injuries[1]
subsdf.loc[i, "Fatality"] = injuries[0]
#def findCases(colName, df1, df2): #df1 is the dataframe to look for, df2 is the dataframe to compare
subsdf.head()
subsdf.to_csv("AccidentsbySuburbs.csv")
###Output
_____no_output_____ |
dutch-f3/dutch_f3_spectrum.ipynb | ###Markdown
###Code
import numpy as np
import matplotlib.pyplot as plt
from google.colab import drive
drive.mount('/content/drive')
!pip install segyio
###Output
Collecting segyio
[?25l Downloading https://files.pythonhosted.org/packages/14/e2/7a975288dcc3e159d7eda706723c029d2858cbe3df7913041af904f23866/segyio-1.9.1-cp36-cp36m-manylinux1_x86_64.whl (89kB)
[K |███▋ | 10kB 15.2MB/s eta 0:00:01
[K |███████▎ | 20kB 3.1MB/s eta 0:00:01
[K |███████████ | 30kB 4.1MB/s eta 0:00:01
[K |██████████████▋ | 40kB 4.3MB/s eta 0:00:01
[K |██████████████████▎ | 51kB 3.6MB/s eta 0:00:01
[K |██████████████████████ | 61kB 4.0MB/s eta 0:00:01
[K |█████████████████████████▌ | 71kB 4.3MB/s eta 0:00:01
[K |█████████████████████████████▏ | 81kB 4.6MB/s eta 0:00:01
[K |████████████████████████████████| 92kB 3.4MB/s
[?25hRequirement already satisfied: numpy>=1.10 in /usr/local/lib/python3.6/dist-packages (from segyio) (1.18.4)
Installing collected packages: segyio
Successfully installed segyio-1.9.1
###Markdown
Read Data
###Code
filename = '/content/drive/My Drive/Public geoscience Data/Dutch F3 seismic data/Dutch Government_F3_entire_8bit seismic.segy'
import segyio
with segyio.open(filename) as f:
print('Inline range from', min(f.ilines), 'to', max(f.ilines))
print('Crossline range from', min(f.xlines), 'to', max(f.xlines))
data = segyio.tools.cube(f)
clip_percentile = 99
vm = np.percentile(data, clip_percentile)
inlines = f.ilines
crosslines = f.xlines
twt = f.samples + 1000
f'The {clip_percentile}th percentile is {vm:.0f}; the max amplitude is {data.max():.0f}'
inlines[:10]
crosslines[:10]
###Output
_____no_output_____
###Markdown
2D Slice
###Code
# slice the data at inline 300
inline_number = 300
slices = data[(inline_number+1),:,:]
slices.shape
plt.figure(figsize=(20,10))
extent = [crosslines[0], crosslines[-1], twt[-1], twt[0]]
p1 = plt.imshow(slices.T, vmin=-vm, vmax=vm, aspect='auto', extent=extent, cmap='Accent')
plt.title('Dutch F3 Seismic at Inline {}'.format(inline_number), size=20, pad=20)
plt.colorbar(p1)
plt.show()
###Output
_____no_output_____
###Markdown
Frequency of 2D Slice
###Code
# transpose the slice
transp_slice = np.transpose(slices)
# take the average of each individual crossline traces in inline slice
# time
min_time = 0
max_time = len(twt)
# crosslines
xmin = 0
xmax = len(crosslines)
trace = np.mean(transp_slice[min_time:max_time, xmin:xmax], axis=1)
Fs_seis = 1 / 0.004 # Seconds.
n_seis = len(trace)
k_seis = np.arange(n_seis)
T_seis = n_seis / Fs_seis
freq_seis = k_seis / T_seis
freq_seis_il = freq_seis[range(n_seis//2)] # One side frequency range.
spec_seis = np.fft.fft(trace) / n_seis # FFT computing and normalization.
spec_seis = spec_seis[range(n_seis//2)]
# This is to smooth the spectrum over a window of 10.
roll_win = np.ones(10) / 10
spec_seis_il = np.convolve(spec_seis, roll_win, mode='same')
plt.figure(figsize=(10,5))
plt.plot(freq_seis_il, np.abs(spec_seis_il))
plt.xlim(xmin=0)
plt.title('Frequency Spectrum of Dutch F3 at Inline {}'.format(inline_number), size=20, pad=10)
plt.show()
transp_slice.shape
###Output
_____no_output_____
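###Markdown
Optional cross-check (an addition, assuming SciPy is available in the Colab runtime): Welch's method on the same averaged trace should concentrate its power over a similar frequency band as the smoothed FFT above.
###Code
from scipy import signal
f_welch, Pxx = signal.welch(trace, fs=Fs_seis, nperseg=min(256, len(trace)))
plt.figure(figsize=(10,5))
plt.semilogy(f_welch, Pxx)
plt.xlim(xmin=0)
plt.title('Welch PSD of the Averaged Trace at Inline {}'.format(inline_number), size=20, pad=10)
plt.show()
###Output
_____no_output_____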
###Markdown
Frequency Spectrum of the Whole 3D cube
###Code
slices.T.shape
transp_cube = np.transpose(data)
transp_cube.shape
# transpose the cube
transp_cube = np.transpose(data)
# time
min_time = 0
max_time = len(twt)
# crosslines
xmin = 0
xmax = len(crosslines)
# inlines
ymin = 0
ymax = len(inlines)
mean_xl_traces = [] # mean of crossline traces of each inline section
for i in range(len(inlines)):
mean_xl = np.mean(transp_cube[min_time:max_time, xmin:xmax, i], axis=1)
mean_xl_traces.append(mean_xl)
transp_xl = np.transpose(mean_xl_traces)
# average the mean crossline traces over all inline sections
trace = np.mean(transp_xl[min_time:max_time, ymin:ymax], axis=1)
Fs_seis = 1 / 0.004 # Seconds.
n_seis = len(trace)
k_seis = np.arange(n_seis)
T_seis = n_seis / Fs_seis
freq_seis = k_seis / T_seis
freq_seis_whole = freq_seis[range(n_seis//2)] # One side frequency range.
spec_seis = np.fft.fft(trace) / n_seis # FFT computing and normalization.
spec_seis = spec_seis[range(n_seis//2)]
# This is to smooth the spectrum over a window of 10.
roll_win = np.ones(10) / 10
spec_seis_whole = np.convolve(spec_seis, roll_win, mode='same')
plt.figure(figsize=(10,5))
plt.plot(freq_seis_whole, np.abs(spec_seis_whole), color='blue', alpha=.7, label='Whole cube')
plt.plot(freq_seis_il, np.abs(spec_seis_il), color='red', alpha=.7, label='Inline 300')
plt.xlim(xmin=0)
plt.title('Frequency Spectrum of Dutch F3', size=20, pad=10)
plt.legend()
plt.show()
###Output
_____no_output_____ |
nbs/old nbs/transform.ipynb | ###Markdown
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
data = load_boston()
###Code
from tabint.utils import *
data = pd.read_csv('DLCO.csv', sep=";")
x = data[['Sex', 'Age', 'Height']]
x.iloc[1,1] = np.NaN
x.head()
y = data['DLCO']
###Output
_____no_output_____
###Markdown
steps and transform
###Code
def df_from_array(ary, columns, index = None): return pd.DataFrame(ary, columns=columns, index = index)
class TBStep:
def __init__(self, **kargs): pass
    def fit(self, df=None, **kargs): pass
def transform(self, df, **kargs): pass
def fit_transform(self, df):
self.fit(df)
return self.transform(df)
class noop_step(TBStep):
def transform(self, df): return df
class TBTransform:
def __init__(self, steps):
self.steps = steps
def __repr__(self):
return '\n'.join([str(pos) + ' - '+ str(step) for pos, step in enumerate(self.steps)])
def fit(self, df):
for step in self.steps: step.fit(df)
self.first_transform = True
def transform(self, df):
df = df.copy()
for step in self.steps: df = step.transform(df)
        if self.first_transform:
            self.get_features(df)
            self.first_transform = False
        return df
def get_features(self, df):
self.features = df.columns
self.cons = []; self.cats = []
for feature, value in df.items():
if np.array_equal(np.sort(value.unique()), np.array([0, 1])) or np.array_equal(np.sort(value.unique()), np.array([0])): self.cats.append(feature)
else: self.cons.append(feature)
def append(self, steps): self.steps.append(steps)
def insert(self, index, steps): self.steps.insert(index, steps)
    def pop(self, n_pop): self.steps.pop(n_pop)
# noop_transform must be defined after TBTransform and wrap an instance of noop_step
noop_transform = TBTransform([noop_step()])
###Output
_____no_output_____
###Markdown
drop features
###Code
class drop_features(TBStep):
def __init__(self, features = None):
self.features = features
def __repr__(self):
print_features = ', '.join(to_iter(self.features))
return f'drop {print_features}'
def transform(self, df): return df.drop(self.features, axis=1)
dr = drop_features(['a', 'b'])
dr
###Output
_____no_output_____
###Markdown
select
###Code
class select(TBStep):
def __init__(self, features):
self.features = features
def __repr__(self):
print_features = ', '.join(to_iter(self.features))
return f'select {print_features}'
def transform(self, df): return df[self.features]
slc = select(['a', 'b'])
slc
###Output
_____no_output_____
###Markdown
apply function
###Code
def unique_list(*agrs):
lists = []
for agr in agrs: lists += list(agr)
return list(set(lists))
unique_list(['a', 'b'], ['a', 'c'])
class apply_function(TBStep):
def __init__(self, function_dict): self.function_dict = function_dict
def __repr__(self):
keys = ', '.join(self.function_dict.keys())
return f'apply function for {keys}'
def transform(self, df):
df = df.copy()
for key in self.function_dict.keys(): df[key] = self.function_dict[key](df)
return df
af = apply_function({'Sex': lambda df: df['Sex'].apply(lambda x : 1 if x == 'F' else 0)})
af
af.fit(x)
###Output
_____no_output_____
###Markdown
fill na
###Code
class fill_na(TBStep):
def __init__(self, features = None):
self.na_dict = {}
self.features = features
def __repr__(self):
return 'fill na'
def fit(self, df):
if self.features is None: self.features = df.columns
for feature in self.features:
if is_numeric_dtype(df[feature].values):
if pd.isnull(df[feature]).sum():
self.na_dict[feature] = df[feature].median()
def transform(self, df):
df = df.copy()
for key in self.na_dict.keys(): df[key] = df[key].fillna(self.na_dict[key])
return(df)
###Output
_____no_output_____
###Markdown
remove outlier
###Code
from tabint.pre_processing import *
from tabint.utils import *
class remove_outlier(TBStep):
def __init__(self, features = None):
self.features = features
def __repr__(self):
print_features = ', '.join(to_iter(self.features))
return f'remove outlier of {print_features}'
def fit(self, df):
self.bw_dict = {}
        self.features = df.columns if self.features is None else to_iter(self.features)
for feature, value in df[self.features].items():
if is_numeric_dtype(value):
self.bw_dict[feature] = {}
Min, _, _, _, Max, _ = boxnwhisker_value(value)
self.bw_dict[feature]['Min'] = Min
self.bw_dict[feature]['Max'] = Max
def transform(self, df):
mask = np.full(df.shape[0], True)
for key in self.bw_dict.keys():
values = df[key].values
Min = self.bw_dict[key]['Min']
Max = self.bw_dict[key]['Max']
inlier = np.logical_and(values >= Min, values <= Max)
mask = np.logical_and(mask, inlier)
self.mask = mask
return df[mask]
ro = remove_outlier(['a', 'b'])
ro
###Output
_____no_output_____
###Markdown
subset
###Code
class subset(TBStep):
def __init__(self, n_sample = None, ratio = 0.3):
self.n_sample = n_sample
self.ratio = ratio
def __repr__(self): return f'select subset with {self.n_sample} samples'
def fit(self, df):
        if self.n_sample is None: self.n_sample = int(self.ratio*df.shape[0])
def transform(self, df): return df.sample(self.n_sample)
x.shape[0]
ss = subset(20)
ss
###Output
_____no_output_____
###Markdown
app cat
###Code
class app_cat(TBStep):
def __init__(self, max_n_cat=15, features = None):
self.max_n_cat = max_n_cat
self.features = features
def __repr__(self): return f'apply category with maximum number of distinct value is {self.max_n_cat}'
def fit(self, df):
if self.features is None: self.features = df.columns
self.app_cat_dict = {}
for feature, value in df[self.features].items():
if is_numeric_dtype(value) and value.dtypes != np.bool:
if value.nunique()<=self.max_n_cat:
if not np.array_equal(value.unique(), np.array([0, 1])):
self.app_cat_dict[feature] = self.as_category_as_order
else:
if value.nunique()>self.max_n_cat: self.app_cat_dict[feature] = self.as_category_as_codes
elif value.dtypes.name == 'object': self.app_cat_dict[feature] = self.as_category_as_order
elif value.dtypes.name == 'category': self.app_cat_dict[feature] = self.cat_as_order
@staticmethod
def cat_as_order(x): return x.cat.as_ordered()
@staticmethod
def as_category_as_codes(x): return x.astype('category').cat.codes+1
@staticmethod
def as_category_as_order(x): return x.astype('category').cat.as_ordered()
def transform(self, df):
df = df.copy()
for key in self.app_cat_dict.keys(): df[key] = self.app_cat_dict[key](df[key])
return df
ac = app_cat()
ac
###Output
_____no_output_____
###Markdown
dummies
###Code
set(['a', 'b'] + ['a', 'c'])
class dummies(TBStep):
def __init__(self, dummy_na = True):
self.dummy_na = dummy_na
def __repr__(self): return 'get dummies'
def transform(self, df):
df = df.copy()
df = pd.get_dummies(df, dummy_na=self.dummy_na)
return df
###Output
_____no_output_____
###Markdown
scale var
###Code
import warnings
import sklearn
from sklearn.exceptions import DataConversionWarning
from sklearn.preprocessing import StandardScaler
from sklearn_pandas import DataFrameMapper
class scale_vars(TBStep):
def __init__(self, features = None):
warnings.filterwarnings('ignore', category=sklearn.exceptions.DataConversionWarning)
self.features= features
def __repr__(self): return 'scale features'
def fit(self, df):
if self.features is None: self.features = df.columns
self.features = [i for i in self.features if is_numeric_dtype(df[i])]
map_f = [([n],StandardScaler()) for n in df[self.features].columns]
self.mapper = DataFrameMapper(map_f).fit(df[self.features].dropna(axis=0))
def transform(self, df):
df = df.copy()
df[self.mapper.transformed_names_] = self.mapper.transform(df[self.features])
return df
sv = scale_vars()
sv
sv.fit(x)
###Output
_____no_output_____
###Markdown
test
###Code
steps = [apply_function({'height2': lambda df: df['Height']*2}),
fill_na(),
select(['Sex', 'Age', 'Height', 'height2']),
drop_features('height2'),
scale_vars(),
app_cat(),
dummies(),
remove_outlier('Height')]
tfms = TBTransform(steps)
tfms
tfms.fit(x)
a = tfms.transform(x)
tfms.features
tfms.cats
tfms.cons
a.Sex_nan.unique()
np.sort(a.Sex_F.unique())
np.array_equal(a.Sex_F.unique(), np.array([0, 1]))
###Output
_____no_output_____
###Markdown
dataset
###Code
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.utils.validation import _num_samples, check_array
from sklearn.model_selection._split import _approximate_mode, _validate_shuffle_split
from sklearn.utils import indexable, check_random_state, safe_indexing
class split_by_cats(StratifiedShuffleSplit):
def _iter_indices(self, X, y, groups=None):
n_samples = _num_samples(X)
y = check_array(y, ensure_2d=False, dtype=None)
n_train, n_test = _validate_shuffle_split(n_samples, self.test_size,
self.train_size)
if y.ndim == 2:
# for multi-label y, map each distinct row to a string repr
# using join because str(row) uses an ellipsis if len(row) > 1000
y = np.array([' '.join(row.astype('str')) for row in y])
classes, y_indices = np.unique(y, return_inverse=True)
n_classes = classes.shape[0]
class_counts = np.bincount(y_indices)
if np.min(class_counts) < 2:
print(ValueError("The least populated class in y has only 1"
" member, which is too few. The minimum"
" number of groups for any class cannot"
" be less than 2."))
if n_train < n_classes:
print(ValueError('The train_size = %d should be greater or '
'equal to the number of classes = %d' %
(n_train, n_classes)))
if n_test < n_classes:
print(ValueError('The test_size = %d should be greater or '
'equal to the number of classes = %d' %
(n_test, n_classes)))
# Find the sorted list of instances for each class:
# (np.unique above performs a sort, so code is O(n logn) already)
class_indices = np.split(np.argsort(y_indices, kind='mergesort'),
np.cumsum(class_counts)[:-1])
rng = check_random_state(self.random_state)
for _ in range(self.n_splits):
# if there are ties in the class-counts, we want
# to make sure to break them anew in each iteration
n_i = _approximate_mode(class_counts, n_train, rng)
class_counts_remaining = class_counts - n_i
t_i = _approximate_mode(class_counts_remaining, n_test, rng)
train = []
test = []
for i in range(n_classes):
permutation = rng.permutation(class_counts[i])
perm_indices_class_i = class_indices[i].take(permutation,
mode='clip')
train.extend(perm_indices_class_i[:n_i[i]])
test.extend(perm_indices_class_i[n_i[i]:n_i[i] + t_i[i]])
train = rng.permutation(train)
test = rng.permutation(test)
yield train, test
def split_time_series(df, y, time_feature, ratio):
    df = df.copy()
    df = df.sort_values(by=time_feature, ascending=True)
    y = y.loc[df.index] # keep the labels aligned with the sorted rows
    split_id = int(df.shape[0]*(1-ratio))
    x_trn, y_trn = df[:split_id], y[:split_id]
    x_val, y_val = df[split_id:], y[split_id:]
    return x_trn, y_trn, x_val, y_val
def stratify_split(df, y, cats, ratio):
keys = df[cats]
if y.dtype.name[:5] != 'float': keys = pd.concat([keys, y], axis=1)
keys = keys.apply(lambda x: '~'.join([str(j) for j in x.values]), axis=1)
sss = split_by_cats(train_size =1-ratio, test_size=ratio)
train, val = next(sss.split(df, keys))
x_trn, x_val = safe_indexing(df, train), safe_indexing(df, val)
y_trn, y_val = safe_indexing(y, train), safe_indexing(y, val)
return x_trn, y_trn, x_val, y_val
ds = TBDataset.from_SKSplit(x, y, cats = 'Sex')
ds.x_trn.shape
ds.cons
class TBDataset:
"""
Contain train, validation, test set
"""
def __init__(self, x_trn, x_val, x_tst, x_tfms, y_trn, y_val, y_tfms):
self.x_trn, self.y_trn, self.x_tst = x_trn, y_trn, x_tst
self.x_val, self.y_val = x_val, y_val
self.x_tfms, self.y_tfms = x_tfms, y_tfms
@classmethod
def from_Split(cls, df, y = None, y_field = None, tp = '_',
x_tst = None, time_feature = None, ratio = 0.2,
x_tfms = None, y_tfms = None, **kargs):
"""
use sklearn split function to split data
"""
df = df.copy()
if y is None: y = df[y_field]; df = df.drop(y_field, axis = 1)
if tp != 'time series': x_trn, y_trn, x_val, y_val = stratify_split(df, y, x_tfms.cats, ratio)
        else: x_trn, y_trn, x_val, y_val = split_time_series(df, y, time_feature, ratio)
        x_trn, x_val, x_tst, y_trn, y_val, x_tfms, y_tfms = cls.transform_data(x_trn, x_val, x_tst, y_trn, y_val, x_tfms, y_tfms)
return cls(x_trn, x_val, x_tst, x_tfms, y_trn, y_val, y_tfms)
@staticmethod
def transform_data(x_trn, x_val, x_tst, y_trn, y_val, x_tfms, y_tfms):
        if x_tfms is None: x_tfms = noop_transform
x_tfms.fit(x_trn)
x_trn = x_tfms.transform(x_trn)
x_val = x_tfms.transform(x_val)
if x_tst is not None: x_tst = x_tfms.transform(x_tst)
        if y_tfms is None: y_tfms = noop_transform
y_tfms.fit(y_trn)
y_trn = y_tfms.transform(y_trn)
y_val = y_tfms.transform(y_val)
return x_trn, x_val, x_tst, y_trn, y_val, x_tfms, y_tfms
def val_permutation(self, features):
""""
permute one or many columns of validation set. For permutation importance
"""
features = to_iter(features)
df = self.x_val.copy()
for ft in features: df[ft] = np.random.permutation(df[ft])
return df
def apply_function(self, feature, function_dict, inplace = True, tp = 'trn'):
"""
apply a function f for all dataset
"""
        features = list(function_dict.keys())
        step = apply_function(function_dict)
        step.fit(self.x_trn)
        self.apply_step(step, features, inplace, tp)
def sample(self, tp = 'trn', ratio = 0.3):
"""
get sample of dataset
"""
if 'tst' == tp:
return None if self.x_tst is None else self.x_tst.sample(self.x_tst.shape[0]*ratio)
else:
df, y = (self.x_trn, self.y_trn) if tp == 'trn' else (self.x_val, self.y_val)
_, df, _, y = train_test_split(df, y, test_size = ratio, stratify = y)
return df, y
def select(self, features, inplace = True, tp = 'trn'):
"""
keep columns of dataset
"""
features = to_iter(features)
        step = select(features)
        step.fit(self.x_trn)
self.apply_step(step, features, inplace, tp)
def drop(self, feature, inplace = True, tp = 'trn'):
"""
drop columns of dataset
"""
        features = to_iter(feature)
        step = drop_features(features)
        step.fit(self.x_trn)
self.apply_step(step, features, inplace, tp)
def remove_outlier(self, features = None, inplace = True, tp = 'trn'):
features = features or self.cons
features = to_iter(features)
mask_trn = self.get_mask_outlier(self.x_trn, features)
mask_val = self.get_mask_outlier(self.x_val, features)
if inplace:
self.x_trn, self.y_trn = self.x_trn[mask_trn], self.y_trn[mask_trn]
self.x_val, self.y_val = self.x_val[mask_val], self.y_val[mask_val]
else:
return (self.x_trn[mask_trn], self.y_trn[mask_trn]) if tp == 'trn' else (self.x_val[mask_val], self.y_val[mask_val])
def get_mask_outlier(self, df, features):
step = remove_outlier(features)
step.fit(df)
_ = step.transform(df)
mask = step.mask
return mask
def apply_step(self, step, features, inplace, tp):
if inplace:
            self.x_tfms.append(step)
self.x_trn = step.transform(self.x_trn)
self.x_val = step.transform(self.x_val)
if self.x_tst is not None: self.x_tst = step.transform(self.x_tst)
            self.x_tfms.get_features(self.x_trn)
else:
if tp == 'tst': return None if self.x_tst is None else step.transform(self.x_tst)
else: return (step.transform(self.x_trn), self.y_trn) if tp == 'trn' else (step.transform(self.x_val), self.y_val)
@property
def cons(self): return self.x_tfms.cons
@property
def cats(self): return self.x_tfms.cats
@property
def features(self): return self.x_trn.columns
@property
def trn(self): return self.x_trn, self.y_trn
@property
def n_trn(self): return self.x_trn.shape[0]
@property
def val(self): return self.x_val, self.y_val
@property
def n_val(self): return self.x_val.shape[0]
###Output
_____no_output_____ |
notebooks/course 2/week 1/C2W1_Assignment.ipynb | ###Markdown
Week 1 Assignment: Data Validation[Tensorflow Data Validation (TFDV)](https://cloud.google.com/solutions/machine-learning/analyzing-and-validating-data-at-scale-for-ml-using-tfx) is an open-source library that helps to understand, validate, and monitor production machine learning (ML) data at scale. Common use-cases include comparing training, evaluation and serving datasets, as well as checking for training/serving skew. You have seen the core functionalities of this package in the previous ungraded lab and you will get to practice them in this week's assignment.In this lab, you will use TFDV in order to:* Generate and visualize statistics from a dataframe* Infer a dataset schema* Calculate, visualize and fix anomaliesLet's begin! Table of Contents- [1 - Setup and Imports](1)- [2 - Load the Dataset](2) - [2.1 - Read and Split the Dataset](2-1) - [2.1.1 - Data Splits](2-1-1) - [2.1.2 - Label Column](2-1-2)- [3 - Generate and Visualize Training Data Statistics](3) - [3.1 - Removing Irrelevant Features](3-1) - [Exercise 1 - Generate Training Statistics](ex-1) - [Exercise 2 - Visualize Training Statistics](ex-2)- [4 - Infer a Data Schema](4) - [Exercise 3: Infer the training set schema](ex-3)- [5 - Calculate, Visualize and Fix Evaluation Anomalies](5) - [Exercise 4: Compare Training and Evaluation Statistics](ex-4) - [Exercise 5: Detecting Anomalies](ex-5) - [Exercise 6: Fix evaluation anomalies in the schema](ex-6)- [6 - Schema Environments](6) - [Exercise 7: Check anomalies in the serving set](ex-7) - [Exercise 8: Modifying the domain](ex-8) - [Exercise 9: Detecting anomalies with environments](ex-9)- [7 - Check for Data Drift and Skew](7)- [8 - Display Stats for Data Slices](8)- [9 - Freeze the Schema](8) 1 - Setup and Imports
###Code
# Import packages
import os
import pandas as pd
import tensorflow as tf
import tempfile, urllib, zipfile
import tensorflow_data_validation as tfdv
from tensorflow.python.lib.io import file_io
from tensorflow_data_validation.utils import slicing_util
from tensorflow_metadata.proto.v0.statistics_pb2 import DatasetFeatureStatisticsList, DatasetFeatureStatistics
# Set TF's logger to only display errors to avoid internal warnings being shown
tf.get_logger().setLevel('ERROR')
###Output
_____no_output_____
###Markdown
2 - Load the DatasetYou will be using the [Diabetes 130-US hospitals for years 1999-2008 Data Set](https://archive.ics.uci.edu/ml/datasets/diabetes+130-us+hospitals+for+years+1999-2008) donated to the University of California, Irvine (UCI) Machine Learning Repository. The dataset represents 10 years (1999-2008) of clinical care at 130 US hospitals and integrated delivery networks. It includes over 50 features representing patient and hospital outcomes.This dataset has already been included in your Jupyter workspace so you can easily load it. 2.1 Read and Split the Dataset
###Code
# Read CSV data into a dataframe and recognize the missing data that is encoded with '?' string as NaN
df = pd.read_csv('dataset_diabetes/diabetic_data.csv', header=0, na_values = '?')
# Preview the dataset
df.head()
###Output
_____no_output_____
###Markdown
Data splitsIn a production ML system, the model performance can be negatively affected by anomalies and divergence between data splits for training, evaluation, and serving. To emulate a production system, you will split the dataset into:* 70% training set * 15% evaluation set* 15% serving setYou will then use TFDV to visualize, analyze, and understand the data. You will create a data schema from the training dataset, then compare the evaluation and serving sets with this schema to detect anomalies and data drift/skew. Label ColumnThis dataset has been prepared to analyze the factors related to readmission outcome. In this notebook, you will treat the `readmitted` column as the *target* or label column. The target (or label) is important to know while splitting the data into training, evaluation and serving sets. In supervised learning, you need to include the target in the training and evaluation datasets. For the serving set however (i.e. the set that simulates the data coming from your users), the **label column needs to be dropped** since that is the feature that your model will be trying to predict.The following function returns the training, evaluation and serving partitions of a given dataset:
###Code
def prepare_data_splits_from_dataframe(df):
'''
Splits a Pandas Dataframe into training, evaluation and serving sets.
Parameters:
df : pandas dataframe to split
Returns:
train_df: Training dataframe(70% of the entire dataset)
eval_df: Evaluation dataframe (15% of the entire dataset)
serving_df: Serving dataframe (15% of the entire dataset, label column dropped)
'''
# 70% of records for generating the training set
train_len = int(len(df) * 0.7)
# Remaining 30% of records for generating the evaluation and serving sets
eval_serv_len = len(df) - train_len
# Half of the 30%, which makes up 15% of total records, for generating the evaluation set
eval_len = eval_serv_len // 2
# Remaining 15% of total records for generating the serving set
serv_len = eval_serv_len - eval_len
# Sample the train, validation and serving sets. We specify a random state for repeatable outcomes.
train_df = df.iloc[:train_len].sample(frac=1, random_state=48).reset_index(drop=True)
eval_df = df.iloc[train_len: train_len + eval_len].sample(frac=1, random_state=48).reset_index(drop=True)
serving_df = df.iloc[train_len + eval_len: train_len + eval_len + serv_len].sample(frac=1, random_state=48).reset_index(drop=True)
# Serving data emulates the data that would be submitted for predictions, so it should not have the label column.
serving_df = serving_df.drop(['readmitted'], axis=1)
return train_df, eval_df, serving_df
# Split the datasets
train_df, eval_df, serving_df = prepare_data_splits_from_dataframe(df)
print('Training dataset has {} records\nValidation dataset has {} records\nServing dataset has {} records'.format(len(train_df),len(eval_df),len(serving_df)))
###Output
Training dataset has 71236 records
Validation dataset has 15265 records
Serving dataset has 15265 records
###Markdown
3 - Generate and Visualize Training Data StatisticsIn this section, you will be generating descriptive statistics from the dataset. This is usually the first step when dealing with a dataset you are not yet familiar with. It is also known as performing an *exploratory data analysis* and its purpose is to understand the data types, the data itself and any possible issues that need to be addressed.It is important to mention that **exploratory data analysis should be performed on the training dataset** only. This is because getting information out of the evaluation or serving datasets can be seen as "cheating" since this data is used to emulate data that you have not collected yet and will try to predict using your ML algorithm. **In general, it is a good practice to avoid leaking information from your evaluation and serving data into your model.** Removing Irrelevant FeaturesBefore you generate the statistics, you may want to drop irrelevant features from your dataset. You can do that with TFDV with the [tfdv.StatsOptions](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/StatsOptions) class. It is usually **not a good idea** to drop features without knowing what information they contain. However there are times when this can be fairly obvious.One of the important parameters of the `StatsOptions` class is `feature_whitelist`, which defines the features to include while calculating the data statistics. You can check the [documentation](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/StatsOptionsargs) to learn more about the class arguments.In this case, you will omit the statistics for `encounter_id` and `patient_nbr` since they are part of the internal tracking of patients in the hospital and they don't contain valuable information for the task at hand.
###Code
# Define features to remove
features_to_remove = {'encounter_id', 'patient_nbr'}
# Collect features to whitelist while computing the statistics
approved_cols = [col for col in df.columns if (col not in features_to_remove)]
# Instantiate a StatsOptions class and define the feature_whitelist property
stats_options = tfdv.StatsOptions(feature_whitelist=approved_cols)
# Review the features to generate the statistics
print(stats_options.feature_whitelist)
###Output
['race', 'gender', 'age', 'weight', 'admission_type_id', 'discharge_disposition_id', 'admission_source_id', 'time_in_hospital', 'payer_code', 'medical_specialty', 'num_lab_procedures', 'num_procedures', 'num_medications', 'number_outpatient', 'number_emergency', 'number_inpatient', 'diag_1', 'diag_2', 'diag_3', 'number_diagnoses', 'max_glu_serum', 'A1Cresult', 'metformin', 'repaglinide', 'nateglinide', 'chlorpropamide', 'glimepiride', 'acetohexamide', 'glipizide', 'glyburide', 'tolbutamide', 'pioglitazone', 'rosiglitazone', 'acarbose', 'miglitol', 'troglitazone', 'tolazamide', 'examide', 'citoglipton', 'insulin', 'glyburide-metformin', 'glipizide-metformin', 'glimepiride-pioglitazone', 'metformin-rosiglitazone', 'metformin-pioglitazone', 'change', 'diabetesMed', 'readmitted']
###Markdown
Exercise 1: Generate Training Statistics TFDV allows you to generate statistics from different data formats such as CSV or a Pandas DataFrame. Since you already have the data stored in a DataFrame you can use the function [`tfdv.generate_statistics_from_dataframe()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/generate_statistics_from_dataframe) which, given a DataFrame and `stats_options`, generates an object of type `DatasetFeatureStatisticsList`. This object includes the computed statistics of the given dataset.Complete the cell below to generate the statistics of the training set. Remember to pass the training dataframe and the `stats_options` that you defined above as arguments.
###Code
### START CODE HERE
train_stats = tfdv.generate_statistics_from_dataframe(train_df, stats_options)
### END CODE HERE
# TEST CODE
# get the number of features used to compute statistics
print(f"Number of features used: {len(train_stats.datasets[0].features)}")
# check the number of examples used
print(f"Number of examples used: {train_stats.datasets[0].num_examples}")
# check the column names of the first and last feature
print(f"First feature: {train_stats.datasets[0].features[0].path.step[0]}")
print(f"Last feature: {train_stats.datasets[0].features[-1].path.step[0]}")
###Output
Number of features used: 48
Number of examples used: 71236
First feature: race
Last feature: readmitted
###Markdown
**Expected Output:**```Number of features used: 48Number of examples used: 71236First feature: raceLast feature: readmitted``` Exercise 2: Visualize Training StatisticsNow that you have the computed statistics in the `DatasetFeatureStatisticsList` instance, you will need a way to **visualize** these to get actual insights. TFDV provides this functionality through the method [`tfdv.visualize_statistics()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/visualize_statistics).Using this function in an interactive Python environment such as this one will output a very nice and convenient way to interact with the descriptive statistics you generated earlier. **Try it out yourself!** Remember to pass in the generated training statistics in the previous exercise as an argument.
###Code
### START CODE HERE
tfdv.visualize_statistics(train_stats)
### END CODE HERE
###Output
_____no_output_____
###Markdown
4 - Infer a data schema A schema defines the **properties of the data** and can thus be used to detect errors. Some of these properties include:- which features are expected to be present- feature type- the number of values for a feature in each example- the presence of each feature across all examples- the expected domains of featuresThe schema is expected to be fairly static, whereas statistics can vary per data split. So, you will **infer the data schema from only the training dataset**. Later, you will generate statistics for evaluation and serving datasets and compare their state with the data schema to detect anomalies, drift and skew. Exercise 3: Infer the training set schemaSchema inference is straightforward using [`tfdv.infer_schema()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/infer_schema). This function needs only the **statistics** (an instance of `DatasetFeatureStatisticsList`) of your data as input. The output will be a Schema [protocol buffer](https://developers.google.com/protocol-buffers) containing the results.A complimentary function is [`tfdv.display_schema()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/display_schema) for displaying the schema in a table. This accepts a **Schema** protocol buffer as input.Fill the code below to infer the schema from the training statistics using TFDV and display the result.
###Code
### START CODE HERE
# Infer the data schema by using the training statistics that you generated
schema = tfdv.infer_schema(train_stats)
# Display the data schema
tfdv.display_schema(schema)
### END CODE HERE
# TEST CODE
# Check number of features
print(f"Number of features in schema: {len(schema.feature)}")
# Check domain name of 2nd feature
print(f"Second feature in schema: {list(schema.feature)[1].domain}")
###Output
Number of features in schema: 48
Second feature in schema: gender
###Markdown
**Expected Output:**```Number of features in schema: 48Second feature in schema: gender``` **Be sure to check the information displayed before moving forward.** 5 - Calculate, Visualize and Fix Evaluation Anomalies It is important that the schema of the evaluation data is consistent with the training data since the data that your model is going to receive should be consistent to the one you used to train it with.Moreover, it is also important that the **features of the evaluation data belong roughly to the same range as the training data**. This ensures that the model will be evaluated on a similar loss surface covered during training. Exercise 4: Compare Training and Evaluation StatisticsNow you are going to generate the evaluation statistics and compare it with training statistics. You can use the [`tfdv.generate_statistics_from_dataframe()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/generate_statistics_from_dataframe) function for this. But this time, you'll need to pass the **evaluation data**. For the `stats_options` parameter, the list you used before works here too.Remember that to visualize the evaluation statistics you can use [`tfdv.visualize_statistics()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/visualize_statistics). However, it is impractical to visualize both statistics separately and do your comparison from there. Fortunately, TFDV has got this covered. You can use the `visualize_statistics` function and pass additional parameters to overlay the statistics from both datasets (referenced as left-hand side and right-hand side statistics). Let's see what these parameters are:- `lhs_statistics`: Required parameter. Expects an instance of `DatasetFeatureStatisticsList `.- `rhs_statistics`: Expects an instance of `DatasetFeatureStatisticsList ` to compare with `lhs_statistics`.- `lhs_name`: Name of the `lhs_statistics` dataset.- `rhs_name`: Name of the `rhs_statistics` dataset.For this case, remember to define the `lhs_statistics` protocol with the `eval_stats`, and the optional `rhs_statistics` protocol with the `train_stats`.Additionally, check the function for the protocol name declaration, and define the lhs and rhs names as `'EVAL_DATASET'` and `'TRAIN_DATASET'` respectively.
###Code
### START CODE HERE
# Generate evaluation dataset statistics
# HINT: Remember to use the evaluation dataframe and to pass the stats_options (that you defined before) as an argument
eval_stats = tfdv.generate_statistics_from_dataframe(eval_df, stats_options=stats_options)
# Compare evaluation data with training data
# HINT: Remember to use both the evaluation and training statistics with the lhs_statistics and rhs_statistics arguments
# HINT: Assign the names of 'EVAL_DATASET' and 'TRAIN_DATASET' to the lhs and rhs protocols
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
### END CODE HERE
# TEST CODE
# get the number of features used to compute statistics
print(f"Number of features: {len(eval_stats.datasets[0].features)}")
# check the number of examples used
print(f"Number of examples: {eval_stats.datasets[0].num_examples}")
# check the column names of the first and last feature
print(f"First feature: {eval_stats.datasets[0].features[0].path.step[0]}")
print(f"Last feature: {eval_stats.datasets[0].features[-1].path.step[0]}")
###Output
Number of features: 48
Number of examples: 15265
First feature: race
Last feature: readmitted
###Markdown
**Expected Output:**```Number of features: 48Number of examples: 15265First feature: raceLast feature: readmitted``` Exercise 5: Detecting Anomalies At this point, you should ask if your evaluation dataset matches the schema from your training dataset. For instance, if you scroll through the output cell in the previous exercise, you can see that the categorical feature **glimepiride-pioglitazone** has 1 unique value in the training set while the evaluation dataset has 2. You can verify with the built-in Pandas `describe()` method as well.
###Code
train_df["glimepiride-pioglitazone"].describe()
eval_df["glimepiride-pioglitazone"].describe()
###Output
_____no_output_____
###Markdown
It is possible but highly inefficient to visually inspect and determine all the anomalies. So, let's instead use TFDV functions to detect and display these.You can use the function [`tfdv.validate_statistics()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/validate_statistics) for detecting anomalies and [`tfdv.display_anomalies()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/display_anomalies) for displaying them.The `validate_statistics()` method has two required arguments:- an instance of `DatasetFeatureStatisticsList`- an instance of `Schema`Fill in the following graded function which, given the statistics and schema, displays the anomalies found.
###Code
def calculate_and_display_anomalies(statistics, schema):
'''
Calculate and display anomalies.
Parameters:
statistics : Data statistics in statistics_pb2.DatasetFeatureStatisticsList format
schema : Data schema in schema_pb2.Schema format
Returns:
display of calculated anomalies
'''
### START CODE HERE
# HINTS: Pass the statistics and schema parameters into the validation function
anomalies = tfdv.validate_statistics(statistics, schema)
# HINTS: Display input anomalies by using the calculated anomalies
tfdv.display_anomalies(anomalies)
### END CODE HERE
###Output
_____no_output_____
###Markdown
You should see detected anomalies in the `medical_specialty` and `glimepiride-pioglitazone` features by running the cell below.
###Code
# Check evaluation data for errors by validating the evaluation data statistics using the previously inferred schema
calculate_and_display_anomalies(eval_stats, schema=schema)
###Output
_____no_output_____
###Markdown
Exercise 6: Fix evaluation anomalies in the schemaThe evaluation data has records with values for the features **glimepiride-pioglitazone** and **medical_specialty** that were not included in the schema generated from the training data. You can fix this by adding the new values that exist in the evaluation dataset to the domain of these features. To get the `domain` of a particular feature you can use [`tfdv.get_domain()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/get_domain).You can use the `append()` method on the `value` property of the returned `domain` to add strings to the valid list of values. To be more explicit, given a domain you can do something like:
```python
domain.value.append("feature_value")
```
###Code
### START CODE HERE
# Get the domain associated with the input feature, glimepiride-pioglitazone, from the schema
glimepiride_pioglitazone_domain = tfdv.get_domain(schema, 'glimepiride-pioglitazone')
# HINT: Append the missing value 'Steady' to the domain
glimepiride_pioglitazone_domain.value.append('Steady')
# Get the domain associated with the input feature, medical_specialty, from the schema
medical_specialty_domain = tfdv.get_domain(schema, 'medical_specialty')
# HINT: Append the missing value 'Neurophysiology' to the domain
medical_specialty_domain.value.append('Neurophysiology')
# HINT: Re-calculate and re-display anomalies with the new schema
calculate_and_display_anomalies(eval_stats, schema=schema)
### END CODE HERE
###Output
_____no_output_____
###Markdown
If you did the exercise correctly, you should see *"No anomalies found."* after running the cell above. 6 - Schema EnvironmentsBy default, all datasets in a pipeline should use the same schema. However, there are some exceptions. For example, the **label column is dropped in the serving set** so this will be flagged when comparing with the training set schema. **In this case, introducing slight schema variations is necessary.** Exercise 7: Check anomalies in the serving setNow you are going to check for anomalies in the **serving data**. The process is very similar to the one you previously did for the evaluation data with a little change. Let's create a new `StatsOptions` that is aware of the information provided by the schema and use it when generating statistics from the serving DataFrame.
###Code
# Define a new statistics options by the tfdv.StatsOptions class for the serving data by passing the previously inferred schema
options = tfdv.StatsOptions(schema=schema,
infer_type_from_schema=True,
feature_whitelist=approved_cols)
### START CODE HERE
# Generate serving dataset statistics
# HINT: Remember to use the serving dataframe and to pass the newly defined statistics options
serving_stats = tfdv.generate_statistics_from_dataframe(serving_df, stats_options=options)
# HINT: Calculate and display anomalies using the generated serving statistics
calculate_and_display_anomalies(serving_stats, schema=schema)
### END CODE HERE
###Output
_____no_output_____
###Markdown
You should see that `metformin-rosiglitazone`, `metformin-pioglitazone`, `payer_code` and `medical_specialty` features have an anomaly (i.e. Unexpected string values) which is less than 1%. Let's **relax the anomaly detection constraints** for the last two of these features by defining the `min_domain_mass` of the feature's distribution constraints.
###Code
# This relaxes the minimum fraction of values that must come from the domain for the feature.
# Get the feature and relax to match 90% of the domain
payer_code = tfdv.get_feature(schema, 'payer_code')
payer_code.distribution_constraints.min_domain_mass = 0.9
# Get the feature and relax to match 90% of the domain
medical_specialty = tfdv.get_feature(schema, 'medical_specialty')
medical_specialty.distribution_constraints.min_domain_mass = 0.9
# Detect anomalies with the updated constraints
calculate_and_display_anomalies(serving_stats, schema=schema)
###Output
_____no_output_____
###Markdown
If the `payer_code` and `medical_specialty` are no longer part of the output cell, then the relaxation worked! Exercise 8: Modifying the DomainLet's investigate the possible cause of the anomalies for the other features, namely `metformin-pioglitazone` and `metformin-rosiglitazone`. From the output of the previous exercise, you'll see that the `anomaly long description` says: "Examples contain values missing from the schema: Steady (<1%)". You can redisplay the schema and look at the domain of these features to verify this statement.When you inferred the schema at the start of this lab, it's possible that some values were not detected in the training data so it was not included in the expected domain values of the feature's schema. In the case of `metformin-rosiglitazone` and `metformin-pioglitazone`, the value "Steady" is indeed missing. You will just see "No" in the domain of these two features after running the code cell below.
###Code
tfdv.display_schema(schema)
###Output
_____no_output_____
###Markdown
Towards the bottom of the Domain-Values pairs of the cell above, you can see that many features (including **'metformin'**) have the same values: `['Down', 'No', 'Steady', 'Up']`. These values are common to many features including the ones with missing values during schema inference. TFDV allows you to modify the domains of some features to match an existing domain. To address the detected anomaly, you can **set the domain** of these features to the domain of the `metformin` feature.Complete the function below to set the domain of a feature list to an existing feature domain. For this, use the [`tfdv.set_domain()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/set_domain) function, which has the following parameters:- `schema`: The schema- `feature_path`: The name of the feature whose domain needs to be set.- `domain`: A domain protocol buffer or the name of a global string domain present in the input schema.
###Code
def modify_domain_of_features(features_list, schema, to_domain_name):
'''
Modify a list of features' domains.
Parameters:
features_list : Features that need to be modified
schema: Inferred schema
to_domain_name : Target domain to be transferred to the features list
Returns:
schema: new schema
'''
### START CODE HERE
# HINT: Loop over the feature list and use set_domain with the inferred schema, feature name and target domain name
for feature in features_list:
tfdv.set_domain(schema, feature, to_domain_name)
### END CODE HERE
return schema
###Output
_____no_output_____
###Markdown
Using this function, set the domain of the features defined in the `domain_change_features` list below to be equal to **metformin's domain** to address the anomalies found.**Since you are overriding the existing domain of the features, it is normal to get a warning so you don't do this by accident.**
###Code
domain_change_features = ['repaglinide', 'nateglinide', 'chlorpropamide', 'glimepiride',
'acetohexamide', 'glipizide', 'glyburide', 'tolbutamide', 'pioglitazone',
'rosiglitazone', 'acarbose', 'miglitol', 'troglitazone', 'tolazamide',
'examide', 'citoglipton', 'insulin', 'glyburide-metformin', 'glipizide-metformin',
'glimepiride-pioglitazone', 'metformin-rosiglitazone', 'metformin-pioglitazone']
# Infer new schema by using your modify_domain_of_features function
# and the defined domain_change_features feature list
schema = modify_domain_of_features(domain_change_features, schema, 'metformin')
# Display new schema
tfdv.display_schema(schema)
# TEST CODE
# check that the domain of some features are now switched to `metformin`
print(f"Domain name of 'chlorpropamide': {tfdv.get_feature(schema, 'chlorpropamide').domain}")
print(f"Domain values of 'chlorpropamide': {tfdv.get_domain(schema, 'chlorpropamide').value}")
print(f"Domain name of 'repaglinide': {tfdv.get_feature(schema, 'repaglinide').domain}")
print(f"Domain values of 'repaglinide': {tfdv.get_domain(schema, 'repaglinide').value}")
print(f"Domain name of 'nateglinide': {tfdv.get_feature(schema, 'nateglinide').domain}")
print(f"Domain values of 'nateglinide': {tfdv.get_domain(schema, 'nateglinide').value}")
###Output
Domain name of 'chlorpropamide': metformin
Domain values of 'chlorpropamide': ['Down', 'No', 'Steady', 'Up']
Domain name of 'repaglinide': metformin
Domain values of 'repaglinide': ['Down', 'No', 'Steady', 'Up']
Domain name of 'nateglinide': metformin
Domain values of 'nateglinide': ['Down', 'No', 'Steady', 'Up']
###Markdown
**Expected Output:**
```
Domain name of 'chlorpropamide': metformin
Domain values of 'chlorpropamide': ['Down', 'No', 'Steady', 'Up']
Domain name of 'repaglinide': metformin
Domain values of 'repaglinide': ['Down', 'No', 'Steady', 'Up']
Domain name of 'nateglinide': metformin
Domain values of 'nateglinide': ['Down', 'No', 'Steady', 'Up']
```
Let's do a final check of anomalies to see if this solved the issue.
###Code
calculate_and_display_anomalies(serving_stats, schema=schema)
###Output
_____no_output_____
###Markdown
You should now see the `metformin-pioglitazone` and `metformin-rosiglitazone` features dropped from the output anomalies. Exercise 9: Detecting anomalies with environmentsThere is still one thing to address. The `readmitted` feature (which is the label column) showed up as an anomaly ('Column dropped'). Since labels are not expected in the serving data, let's tell TFDV to ignore this detected anomaly.This requirement of introducing slight schema variations can be expressed by using [environments](https://www.tensorflow.org/tfx/data_validation/get_started#schema_environments). In particular, features in the schema can be associated with a set of environments using `default_environment`, `in_environment` and `not_in_environment`.
###Code
# All features are by default in both TRAINING and SERVING environments.
schema.default_environment.append('TRAINING')
schema.default_environment.append('SERVING')
###Output
_____no_output_____
###Markdown
Complete the code below to exclude the `readmitted` feature from the `SERVING` environment.To achieve this, you can use the [`tfdv.get_feature()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/get_feature) function to get the `readmitted` feature from the inferred schema and use its `not_in_environment` attribute to specify that `readmitted` should be removed from the `SERVING` environment's schema. This **attribute is a list** so you will have to **append** the name of the environment that you wish to omit this feature for.To be more explicit, given a feature you can do something like:
```python
feature.not_in_environment.append('NAME_OF_ENVIRONMENT')
```
The function `tfdv.get_feature` receives the following parameters:- `schema`: The schema.- `feature_path`: The path of the feature to obtain from the schema. In this case this is equal to the name of the feature.
###Code
### START CODE HERE
# Specify that 'readmitted' feature is not in SERVING environment.
# HINT: Append the 'SERVING' environment to the not_in_environment attribute of the feature
tfdv.get_feature(schema, 'readmitted').not_in_environment.append('SERVING')
# HINT: Calculate anomalies with the validate_statistics function by using the serving statistics,
# inferred schema and the SERVING environment parameter.
serving_anomalies_with_env = tfdv.validate_statistics(serving_stats, schema, environment='SERVING')
### END CODE HERE
###Output
_____no_output_____
###Markdown
You should see "No anomalies found" by running the cell below.
###Code
# Display anomalies
tfdv.display_anomalies(serving_anomalies_with_env)
###Output
_____no_output_____
###Markdown
Now you have successfully addressed all anomaly-related issues! 7 - Check for Data Drift and Skew During data validation, you also need to check for data drift and data skew between the training and serving data. You can do this by specifying the [skew_comparator and drift_comparator](https://www.tensorflow.org/tfx/data_validation/get_started#checking_data_skew_and_drift) in the schema. Drift and skew are expressed in terms of [L-infinity distance](https://en.wikipedia.org/wiki/Chebyshev_distance) which evaluates the difference between vectors as the greatest of the differences along any coordinate dimension.You can set the threshold distance so that you receive warnings when the drift is higher than is acceptable. Setting the correct distance is typically an iterative process requiring domain knowledge and experimentation.Let's check for the skew in the **diabetesMed** feature and drift in the **payer_code** feature.
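To make the threshold concrete, here is a small self-contained sketch (the frequency values are made up, not taken from this dataset) of how an L-infinity distance between two categorical value distributions compares against a `0.03` threshold:
```python
import numpy as np

train_freq = np.array([0.70, 0.20, 0.10])    # hypothetical value frequencies in training
serving_freq = np.array([0.66, 0.22, 0.12])  # hypothetical value frequencies in serving
l_inf = np.max(np.abs(train_freq - serving_freq))  # Chebyshev / L-infinity distance
print(l_inf)  # 0.04, which would exceed a 0.03 threshold and be flagged
```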
###Code
# Calculate skew for the diabetesMed feature
diabetes_med = tfdv.get_feature(schema, 'diabetesMed')
diabetes_med.skew_comparator.infinity_norm.threshold = 0.03 # domain knowledge helps to determine this threshold
# Calculate drift for the payer_code feature
payer_code = tfdv.get_feature(schema, 'payer_code')
payer_code.drift_comparator.infinity_norm.threshold = 0.03 # domain knowledge helps to determine this threshold
# Calculate anomalies
skew_drift_anomalies = tfdv.validate_statistics(train_stats, schema,
previous_statistics=eval_stats,
serving_statistics=serving_stats)
# Display anomalies
tfdv.display_anomalies(skew_drift_anomalies)
###Output
_____no_output_____
###Markdown
In both of these cases, the detected anomaly distance is not too far from the threshold value of `0.03`. For this exercise, let's accept this as within bounds (i.e. you can set the distance to something like `0.035` instead).**However, if the anomaly truly indicates a skew and drift, then further investigation is necessary as this could have a direct impact on model performance.** 8 - Display Stats for Data Slices Finally, you can [slice the dataset and calculate the statistics](https://www.tensorflow.org/tfx/data_validation/get_started#computing_statistics_over_slices_of_data) for each unique value of a feature. By default, TFDV computes statistics for the overall dataset in addition to the configured slices. Each slice is identified by a unique name which is set as the dataset name in the [DatasetFeatureStatistics](https://github.com/tensorflow/metadata/blob/master/tensorflow_metadata/proto/v0/statistics.proto#L43) protocol buffer. Generating and displaying statistics over different slices of data can help track model and anomaly metrics. Let's first define a few helper functions to make our code in the exercise more neat.
###Code
def split_datasets(dataset_list):
'''
split datasets.
Parameters:
dataset_list: List of datasets to split
Returns:
datasets: sliced data
'''
datasets = []
for dataset in dataset_list.datasets:
proto_list = DatasetFeatureStatisticsList()
proto_list.datasets.extend([dataset])
datasets.append(proto_list)
return datasets
def display_stats_at_index(index, datasets):
'''
display statistics at the specified data index
Parameters:
index : index to show the anomalies
datasets: split data
Returns:
display of generated sliced data statistics at the specified index
'''
if index < len(datasets):
print(datasets[index].datasets[0].name)
tfdv.visualize_statistics(datasets[index])
###Output
_____no_output_____
###Markdown
The function below returns a list of `DatasetFeatureStatisticsList` protocol buffers. As shown in the ungraded lab, the first one will be for `All Examples` followed by individual slices through the feature you specified.To configure TFDV to generate statistics for dataset slices, you will use the function `tfdv.StatsOptions()` with the following 4 arguments: - `schema`- `slice_functions` passed as a list.- `infer_type_from_schema` set to True. - `feature_whitelist` set to the approved features.Remember that `slice_functions` only work with [`generate_statistics_from_csv()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/generate_statistics_from_csv) so you will need to convert the dataframe to CSV.
###Code
def sliced_stats_for_slice_fn(slice_fn, approved_cols, dataframe, schema):
'''
generate statistics for the sliced data.
Parameters:
slice_fn : slicing definition
approved_cols: list of features to pass to the statistics options
dataframe: pandas dataframe to slice
schema: the schema
Returns:
slice_info_datasets: statistics for the sliced dataset
'''
# Set the StatsOptions
slice_stats_options = tfdv.StatsOptions(schema=schema,
slice_functions=[slice_fn],
infer_type_from_schema=True,
feature_whitelist=approved_cols)
# Convert Dataframe to CSV since `slice_functions` works only with `tfdv.generate_statistics_from_csv`
CSV_PATH = 'slice_sample.csv'
dataframe.to_csv(CSV_PATH)
# Calculate statistics for the sliced dataset
sliced_stats = tfdv.generate_statistics_from_csv(CSV_PATH, stats_options=slice_stats_options)
# Split the dataset using the previously defined split_datasets function
slice_info_datasets = split_datasets(sliced_stats)
return slice_info_datasets
###Output
_____no_output_____
###Markdown
With that, you can now use the helper functions to generate and visualize statistics for the sliced datasets.
###Code
# Generate slice function for the `medical_specialty` feature
slice_fn = slicing_util.get_feature_value_slicer(features={'medical_specialty': None})
# Generate stats for the sliced dataset
slice_datasets = sliced_stats_for_slice_fn(slice_fn, approved_cols, dataframe=train_df, schema=schema)
# Print name of slices for reference
print(f'Statistics generated for:\n')
print('\n'.join([sliced.datasets[0].name for sliced in slice_datasets]))
# Display at index 10, which corresponds to the slice named `medical_specialty_Gastroenterology`
display_stats_at_index(10, slice_datasets)
###Output
_____no_output_____
###Markdown
If you are curious, try different slice indices to extract the group statistics. For instance, `index=5` corresponds to all `medical_specialty_Surgery-General` records. You can also try slicing through multiple features as shown in the ungraded lab. Another challenge is to implement your own helper functions. For instance, you can make a `display_stats_for_slice_name()` function so you don't have to determine the index of a slice. If done correctly, you can just do `display_stats_for_slice_name('medical_specialty_Gastroenterology', slice_datasets)` and it will generate the same result as `display_stats_at_index(10, slice_datasets)`. 9 - Freeze the schemaNow that the schema has been reviewed, you will store the schema in a file in its "frozen" state. This can be used to validate incoming data once your application goes live to your users.This is pretty straightforward using Tensorflow's `io` utils and TFDV's [`write_schema_text()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/write_schema_text) function.
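As a side note before freezing the schema, here is one possible sketch of the `display_stats_for_slice_name()` helper suggested above. It is only an illustration built on the `split_datasets()` output used earlier, not part of the graded exercises:
```python
def display_stats_for_slice_name(slice_name, datasets):
    '''Display the statistics of the slice whose dataset name matches slice_name.'''
    for sliced in datasets:
        if sliced.datasets[0].name == slice_name:
            print(slice_name)
            tfdv.visualize_statistics(sliced)
            return
    print(f'No slice named {slice_name} was found.')

# display_stats_for_slice_name('medical_specialty_Gastroenterology', slice_datasets)
# should produce the same result as display_stats_at_index(10, slice_datasets)
```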
###Code
# Create output directory
OUTPUT_DIR = "output"
file_io.recursive_create_dir(OUTPUT_DIR)
# Use TensorFlow text output format pbtxt to store the schema
schema_file = os.path.join(OUTPUT_DIR, 'schema.pbtxt')
# write_schema_text function expect the defined schema and output path as parameters
tfdv.write_schema_text(schema, schema_file)
###Output
_____no_output_____ |
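As a closing note, the frozen schema can later be read back, for example in a serving pipeline, and reused for validation. A minimal sketch using the `schema_file` path defined above:
```python
# Load the frozen schema and validate new statistics against it
loaded_schema = tfdv.load_schema_text(schema_file)
new_anomalies = tfdv.validate_statistics(serving_stats, loaded_schema, environment='SERVING')
tfdv.display_anomalies(new_anomalies)
```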
brusselator_hopf.ipynb | ###Markdown
Hopf Bifurcation: The Emergence of Limit-cycle Dynamics*Cem Özen*, May 2017.A *Hopf bifurcation* is a critical point in which a periodic orbit appears or disappears through a local change in the stability of a fixed point in a dynamical system as one of the system parameters is varied. Hopf bifurcations occur in many of the well-known dynamical systems such as the Lotka-Volterra model, the Lorenz model, the Selkov model of glycolysis, the Belousov-Zhabotinsky reaction model, and the Hodgkin-Huxley model for nerve membrane. The animation above shows the emergence of a limit cycle in the Brusselator system (for the actual simulation, see below). In this notebook, I will consider a system of chemical reactions known by the name *Brusselator* in literature (see: https://en.wikipedia.org/wiki/Brusselator for more information) as a model for Hopf bifurcations. The Brusselator reactions are given by$A \longrightarrow X$ $2X + Y\longrightarrow 3X$ $B + X \longrightarrow Y + D$ $X \longrightarrow E$ For the sake of simplicity, we will assume that the reaction constants of all these reactions are unity (i.e. in all the reactions, $k=1$ ). Furthermore let's assume that the reactant concentrations $A$ and $B$ are so large that they remain constant. Therefore, only $X$ and $Y$ concentrations will be dynamical. The rate equations for $X$ and $Y$ are then given by $$\begin{eqnarray}\dot{X} & = & A + X^2Y - BX - X, \\\dot{Y} & = & BX - X^2Y\end{eqnarray}$$ The X-nullcline and the Y-nullcline are given by the conditions of $0 = A + X^2Y - BX - X$ and $0 = BX - X^2Y$ respectively. From these equations, we obtain:$$\begin{eqnarray}Y(X) & = & \frac{-A + X(B+1)}{X^2}, & \quad \textrm{(X-nullcline)} \\Y(X) & = & \frac{B}{X}, & \quad \textrm{(Y-nullcline)} \end{eqnarray}$$ In this notebook, I will also demonstrate how one can perform symbolical computations using Python's `SymPy` library. We also need extra Jupyter Notebook functionality to render nice display of the resulting equations. (Notice that we are using LaTex in typesetting this document particularly for the purpose of producing nice looking equations).
###Code
import numpy as np
from numpy.linalg import eig
from scipy import integrate
import sympy
from IPython.display import display, Math, Latex
import matplotlib.pyplot as plt
sympy.init_printing(use_latex='mathjax')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Let's obtain the nullcline equations using `SymPy`:
###Code
X, Y, A, B = sympy.symbols('X Y A B') # you need to introduce the sysmbols first
# let's get the X-nullcline as a function of X:
sympy.solve(sympy.Eq(A + X**2 * Y - B * X - X, 0), Y)
# let's get the Y-nullcline as a function of X:
sympy.solve(sympy.Eq(B * X - X**2 * Y, 0), Y)
###Output
_____no_output_____
###Markdown
Now let's find the fixed points ($X^*, Y^*$) of this 2-D system (there is only one, actually). The fixed point is given by the simultaneous solution of the X-nullcline and Y-nullcline equations, therefore $$ (X^*, Y^*) = (A, \frac{B}{A}) $$For the sake of using `SymPy`, let's obtain this solution once again:
###Code
# Solve the system of equations defined by the X-nullcline and Y-nullcline with respect to X and Y:
sympy.solve([A + X**2 * Y - B * X - X, B * X - X**2 * Y], [X, Y])
###Output
_____no_output_____
###Markdown
Now, a bifurcation analysis of the Brusselator model requires us to keep track of the local stability of its fixed point. This can be done, according to the *linearized stability analysis* by evaluating the Jacobian matrix at the fixed point. The Jacobian matrix at the fixed point is given by:$$\begin{eqnarray}J & = & \left\vert\matrix{{\partial f \over \partial x} & {\partial f\over \partial y} \cr {\partial g \over \partial x} & {\partial g\over \partial y} }\right\vert_{(X^*, Y^*)} \\ & = & \left\vert\matrix{ -B + 2XY - 1 & X^2 \cr B - 2XY & -X^2 }\right\vert_{(X^*, Y^*)} \\ & = & \left\vert\matrix{ B - 1 & A^2 \cr -B & -A^2 }\right\vert\end{eqnarray}$$This result can also be obtained very easily using `SymPy`:
###Code
# define the Brusselator dynamical system as a SymPy matrix
brusselator = sympy.Matrix([A + X**2 * Y - B * X - X, B * X - X**2 * Y])
# Jacobian matrix with respect to X and Y
J = brusselator.jacobian([X, Y])
J
# Jacobian matrix evaluated at the coordinates of the fixed point
J_at_fp = J.subs({X:A, Y:B/A}) # substitute X with A and Y with B/A
J_at_fp
###Output
_____no_output_____
###Markdown
A limit-cycle can emerge in a 2-dimensional, attractive dynamical system if the fixed point of the system goes unstable. In this case, trajectories must be pulled by a limit cycle. (According to the Poincare-Bendixson theorem, a 2-dimensional system cannot have strange attractors). When this happens, the Hopf bifurcation is called a *supercritical Hopf bifurcation*, because the limit cycle is stable. In the following, we will see how the stable fixed point (spiral) of the Brusselator goes unstable, giving rise to a limit cycle in turn. Conditions for the stability are determined by the trace and the determinant of the Jacobian. So let's evaluate them:
###Code
Delta = J_at_fp.det() # determinant of the Jacobian
Delta.simplify()
tau = J_at_fp.trace() # trace of the Jacobian
tau
###Output
_____no_output_____
###Markdown
To have an unstable spiral we need:$$\begin{eqnarray}\tau & > & 0 \quad \Rightarrow \quad & B > A^2 + 1 \quad \textrm{required} \\\Delta & > & 0 \quad {} \quad & \textrm{automatically satisfied} \\\tau^2 & < & 4 \Delta \quad {} \quad & \textrm{automatically satisfied}\end{eqnarray}$$The second and third conditions were satisfied because of the first condition, automatically. Therefore we need to have:$$ B > A^2 + 1 $$ for limit cycles. Birth of A Limit Cycle: Hopf Bifurcation Numerical Simulation of the Brusselator SystemIn the following, I perform a numerical simulation of the (supercritical) Hopf bifurcation in the Brusselator system by varying the parameter $B$ while the value of $A=1$.
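As a quick consistency check before the simulation, `SymPy` can recover the instability threshold directly from the `tau` expression computed above (a one-line sketch):
```python
# Solve tau = 0 for B; the fixed point loses stability when B exceeds this value
sympy.solve(sympy.Eq(tau, 0), B)   # expected result: [A**2 + 1]
```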
###Code
# Brusselator System:
def dX_dt(A, B, X, t):
x, y = X[0], X[1]
return np.array([A + x**2 * y - B * x -x,
B * x - x**2 * y])
T = 50 * np.pi # simulation time
dt = 0.01 # integration time step
# time steps to be used in integration of the Brusselator system
t=np.arange(0, T, dt)
# create a canvas and 3 subplots; we will use each one for a different choice of the A and B parameters
fig, ax = plt.subplots(1, 3)
fig.set_size_inches(13,5)
def plotter(A, B, ax):
"""
This function draws a phase portrait by assigning a vector characterizing how the concentrations
change at a given value of X and Y. It also draws a couple of example trajectories.
"""
# Draw direction fields using matplotlib's quiver function, similar to what we did
# in class but qualitatively
xmin, xmax = 0, 5 # min and max values of x axis in the plot
ymin, ymax = 0, 5 # min and max values of y axis in the plot
x = np.linspace(xmin, xmax, 10) # divide x axis to intervals
y = np.linspace(ymin, ymax, 10) # divide y axis to intervals
X1 , Y1 = np.meshgrid(x, y) # from these intervals create a grid
DX1, DY1 = dX_dt(A, B, [X1, Y1], t) # compute rate of change of the concentrations on grid points
M = (np.hypot(DX1, DY1)) # norm of the rate of changes
M[ M == 0] = 1. # prevention against divisions by zero
DX1 /= M # we normalize the direction field vector (each has unit length now)
DY1 /= M # we normalize the direction field vector (each has unit length now)
Q = ax.quiver(X1, Y1, DX1, DY1, M, pivot='mid', cmap=plt.cm.jet)
num_traj = 5 # number of trajectories
# choose several initial points (x_i, y_j), for i and j chosen as in linspace calls below
X0 = np.asarray(list(zip(np.linspace(xmin, xmax, num_traj), np.linspace(ymin, ymax, num_traj))))
vcolors = plt.cm.jet_r(np.linspace(0., 1., num_traj)) # colors for each trajectory
# integrate the Brusselator ODEs using all initial points to produce corresponding trajectories
X = np.asarray([integrate.odeint(lambda x, t: dX_dt(A, B, x, t), X0i,t)
for X0i in X0])
# plot the trajectories we obtained above
for i in range(num_traj):
x, y = X[i, :, :].T # x and y histories for trajectory i
ax.plot(x, y, '-', c=vcolors[i], lw=2)
# set limits, put labels etc..
ax.set_xlim(xmin=xmin, xmax=xmax)
ax.set_ylim(ymin=ymin, ymax=ymax)
ax.set_xlabel("X", fontsize = 20)
ax.set_ylabel("Y", fontsize = 20)
ax.annotate("A={}, B={}".format(A, B), xy = (0.4, 0.9), xycoords = 'axes fraction', fontsize = 20, color = "k")
# Now let's prepare plots for the following choices of A and B:
plotter(A=1, B=1, ax=ax[0])
plotter(A=1, B=2, ax=ax[1])
plotter(A=1, B=3, ax=ax[2])
###Output
_____no_output_____
###Markdown
Above you see how a limit cycle can be created in a dynamical system, as one of the system parameters is varied.Here we have kept $A=1$ but varied $B$ from 1 to 3. Note that $B=2$ is the borderline value, marking the change in the stability of the fixed point. For $B<2$ the fixed point is stable but as we cross the value $B=2$, it changes its character to unstable and a limit cycle is born. This phenomenon is an example of *Hopf Bifurcation*.On the leftmost panel we have a stable spiral. Technically, this means that the Jacobian at the fixed point has two complex eigenvalues (a complex conjugate pair). The fact that the eigenvalues are complex is responsible for the spiralling effect. In stable spirals, the real part of the eigenvalues is negative, which is why these spiralling solutions decay, that is, trajectories nearby fall on to the fixed point. As the bifurcation parameter (here $B$) varies, the real part of the complex eigenvalues increases, reaches zero at a certain value of $B$, and keeps growing now on the positive side. If the real part of the eigenvalues is positive, the fixed point is an unstable spiral; trajectories nearby are pushed out of the fixed point (see the rightmost panel and plot below). Since this 2-D dynamical system is attractive, by the Poincare-Bendixson theorem, the emergence of the unstable spiral accompanies the birth of a limit-cycle. Notice that the panel in the middle is the borderline case between the stable and unstable spirals: in this case the real part of the eigenvalues is exactly zero (see plots below); the linear stability theory falsely predicts a neutral oscillation (i.e. a center) at $B=2$, due to purely imaginary eigenvalues. However, the fixed point is still a stable spiral then. Eigenvalues of the Jacobian
###Code
# Eigenvalues of the Jacobian at A=1, B=1 (fixed point is stable spiral)
J_numeric = np.asarray(J_at_fp.evalf(subs={A:1, B:1})).astype(np.float64)
w, _ = eig(J_numeric)
w
# Eigenvalues of the Jacobian at A=1, B=3 (fixed point is unstable spiral)
J_numeric = np.asarray(J_at_fp.evalf(subs={A:1, B:3})).astype(np.float64)
w, _ = eig(J_numeric)
w
###Output
_____no_output_____
###Markdown
Let's prepare plots showing how the real and imaginary parts of the eigenvalues change as $B$ is varied.
###Code
from numpy.linalg import eig
a = 1
eigen_real, eigen_imag = [], []
B_vals = np.linspace(1, 3, 20)
for b in B_vals:
J_numeric = np.asarray(J_at_fp.evalf(subs={A:a, B:b})).astype(np.float64)
w, _ = eig(J_numeric)
eigen_real.append(w[0].real)
eigen_imag.append(abs(w[0].imag))
eigen_real = np.asanyarray(eigen_real)
eigen_imag = np.asarray(eigen_imag)
fig, ax = plt.subplots(1, 2)
fig.set_size_inches(10,5)
fig.subplots_adjust(wspace=0.5)
ax[0].axhline(y=0, c="k", ls="dashed")
ax[0].plot(B_vals, eigen_real)
ax[0].set_ylabel(r"$\mathfrak{Re}(\lambda)$", fontsize = 20)
ax[0].set_xlabel(r"$B$", fontsize = 20)
ax[1].set_ylabel(r"$|\mathfrak{Im}(\lambda)|$", fontsize = 20)
ax[1].set_xlabel(r"$B$", fontsize = 20)
ax[1].plot(B_vals, eigen_imag)
###Output
_____no_output_____
###Markdown
The Hopf bifurcation is only one type of bifurcation, albeit a very important one for physical and biological systems. There are other types of bifurcation in which one can create or destroy fixed points or alter their properties in different ways than a Hopf bifurcation does. If you are curious, I suggest you perform your own numerical experiments by playing with the values of $A$, $B$, or both. An Animation of the Hopf Bifurcation
###Code
from matplotlib import animation, rc
from IPython.display import HTML
# Brusselator System:
def dX_dt(A, B, X, t):
x, y = X[0], X[1]
return np.array([A + x**2 * y - B * x -x, B * x - x**2 * y])
T = 50 * np.pi # simulation time
dt = 0.01 # integration time step
# time steps to be used in integration of the Brusselator system
t=np.arange(0, T, dt)
num_traj = 5 # number of trajectories
xmin, xmax = 0, 5 # min and max values of x axis in the plot
ymin, ymax = 0, 5 # min and max values of y axis in the plot
A = 1. # we will keep A parameter constant
# vary B parameter
Bmin, Bmax, numB = 1., 3., 100 # min, max, number of steps for varying B
Bvals = np.linspace(Bmin, Bmax, numB)
# set up the figure, the axis, and the plot element we want to animate
fig = plt.figure()
fig.set_size_inches(8,8)
ax = plt.axes(xlim=(xmin, xmax), ylim=(ymin, ymax))
ax.set_ylabel("Y", fontsize = 20)
ax.set_xlabel("X", fontsize = 20)
# choose a set of initial points for our trajectories (in each frame we will use the same set)
X0 = list(zip(np.linspace(xmin, xmax, num_traj), np.linspace(ymin, ymax, num_traj)))
# choose a color set for our trajectories
vcolors = plt.cm.jet_r(np.linspace(0., 1., num_traj))
# prepare the mesh grid
x = np.linspace(xmin, xmax, 15) # divide x axis to intervals
y = np.linspace(ymin, ymax, 15) # divide y axis to intervals
X1 , Y1 = np.meshgrid(x, y) # from these intervals create a grid
# set up the lines, the quiver and the text object
lines = [ax.plot([], [], [], '-', c=c, lw=2)[0] for c in vcolors]
Q = ax.quiver(X1, Y1, [], [], [], pivot='mid', cmap=plt.cm.jet)
text = ax.text(0.02, 0.95, '', fontsize=20, transform=ax.transAxes)
# initialization function: plot the background of each frame. Needs to return each object to be updated
def init():
for line in lines:
line.set_data([], [])
Q.set_UVC([], [], [])
text.set_text("")
return Q, lines, text
# animation function. This is called sequentially
def animate(i):
B = Bvals[i]
DX1, DY1 = dX_dt(A, B, [X1, Y1], t) # compute rate of change of the concentrations on grid points
M = (np.hypot(DX1, DY1)) # norm of the rate of changes
M[ M == 0] = 1. # prevention against divisions by zero
DX1 /= M # we normalize the direction field vector (each has unit length now)
DY1 /= M # we normalize the direction field vector (each has unit length now)
Q.set_UVC(DX1, DY1, M)
# integrate the Brusselator ODEs for the set of trajectories, store them in X
for line, X0i in zip(lines, X0):
X = integrate.odeint(lambda x, t: dX_dt(A, B, x, t), X0i,t)
x, y = X.T # get x and y for current trajectory
line.set_data(x, y)
text.set_text("A={:.2f}, B={:.2f}".format(A, B))
return Q, lines, text
# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=100, interval=30, blit=False)
# instantiate the animator.
#anim = animation.FuncAnimation(fig, animate, init_func=init, frames=1000, interval=200, blit=True)
#HTML(anim.to_html5_video())
rc('animation', html='html5')
plt.close()
anim
###Output
_____no_output_____ |
01-intro-101/python/practices/04-robin-hood/your-solution-here/03c_limpieza-datos_python.ipynb | ###Markdown
Introduction: In this activity we will learn how to clean data, the necessary step before doing any analysis or modeling. This includes handling empty or missing data, converting data types, and binarizing variables. The dataset we will use in this activity corresponds to the metadata of more than 5000 IMDB movies, the [IMDB 5000 Movie Dataset](https://data.world/popculture/imdb-5000-movie-dataset). The first thing we will do, as always, is read the data and take a look at it.
###Code
import pandas as pd
import numpy as np
df = pd.read_csv('data/movie_metadata.csv')
df.head()
df.describe()
###Output
_____no_output_____
###Markdown
Handling missing data: One of the most common problems we can encounter in our datasets is that we do not have all the observations for every record, that is, some of the variables are not available. This can happen for several reasons: because the data do not exist, because they were not entered correctly, or simply because of some computation error. The point is that in most analyses, missing data can cause errors. Below we will look at three different strategies to deal with this problem: remove the variables with a high percentage of empty values, remove the rows with missing variables, or impute the data or fill them with a default value. Removing variables with a high percentage of empty values: In the rare case that we have columns without any information, with all values empty, we can apply the `dropna` function with the parameter `axis=1`, indicating that we want to apply it only if the condition holds for all rows, with `how='all'`. In our case, we do not have any empty column. To see this, we will first create one.
###Code
df['missings'] = np.nan
###Output
_____no_output_____
###Markdown
We can see that we have a new column called `missings`, with all values null.
###Code
'missings' in df.columns and df['missings'].isnull().all()
###Output
_____no_output_____
###Markdown
We drop the columns where all values are `NaN`. The parameter `inplace=True` indicates that the operation will modify the dataset in place.
###Code
df.dropna(axis=1, how='all', inplace=True)
###Output
_____no_output_____
###Markdown
Now the column is no longer in the _dataset_.
###Code
'missings' in df.columns
###Output
_____no_output_____
###Markdown
If we want to drop all the columns that have any `NaN` value, we indicate it with the parameter `how='any'`.
###Code
df.dropna(axis=1, how='any').head()
###Output
_____no_output_____
###Markdown
The most common situation is somewhere in between. We will look at the availability of each of the variables and decide whether it is useful for the analysis or not. With the following command, we compute the percentage of `NaN` values for each variable.
###Code
df.isnull().sum()/len(df)*100
###Output
_____no_output_____
###Markdown
If we set as a criterion that we do not want to use variables with more than 5% of `NaN` values, we will drop the variables `gross`, `content_rating`, `budget` and `aspect_ratio`. The `drop` function allows us to remove columns by name.
###Code
delete_columns = df.columns[df.isnull().sum()/len(df)*100 > 5]
delete_columns
df.drop(delete_columns, axis=1).head()
###Output
_____no_output_____
###Markdown
Removing rows with missing variables: In our _dataset_ there may be records or rows that are missing part of the information. A drastic solution would be to remove every record that is missing any variable. We will use the `dropna` function again for this case.
###Code
df.dropna().head()
###Output
_____no_output_____
###Markdown
At the other, more conservative extreme, we can remove only those rows whose values are all `NaN` with the parameter `how='all'`:
###Code
df.dropna(how='all').head()
###Output
_____no_output_____
###Markdown
We can add a _threshold_ on the number of `NaN` values we are willing to tolerate in our _dataset_. If we want to remove the observations with 3 or more missing values, we indicate that we want to keep the rows that have at least `len(df.columns)-3` non-null values:
###Code
df.dropna(thresh=len(df.columns)-3).head()
###Output
_____no_output_____
###Markdown
If there is a variable that we need to be populated, we can remove the observations that do not have it.
###Code
df.dropna(subset=['director_name']).head()
###Output
_____no_output_____
###Markdown
Filling with a default value: One of the most common strategies is to fill the unknown values with a default value. For example, we could decide to assume that if we have no data on the director's _Facebook_ _likes_, it means they have none.
###Code
df.director_facebook_likes = df.director_facebook_likes.fillna(0)
###Output
_____no_output_____
###Markdown
Alternatively, we could decide to compute the mean of the _likes_ and impute it.
###Code
df.director_facebook_likes = df.director_facebook_likes.fillna(df.director_facebook_likes.mean())
###Output
_____no_output_____
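The constant and mean fills above are the simplest options. Had we not already filled this column, a slightly more robust variant is a median fill; the sketch below assumes scikit-learn is available, which is not imported elsewhere in this notebook:
```python
from sklearn.impute import SimpleImputer

imputer = SimpleImputer(strategy='median')
df[['director_facebook_likes']] = imputer.fit_transform(df[['director_facebook_likes']])
```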
###Markdown
The imputation techniques we have seen are the simplest ones, but they can become as complex as we want. Transforming the variables: An essential step before applying a model to the data is to perform some transformations on the variables to adapt them to the characteristics of the analysis we want to carry out. Below we present three of the most common transformations. Converting data types: Sometimes the data types that `Pandas` infers are not the ones we want. In that case, we will have to convert the data types of the variables. To see the data type of each column of our _dataset_, we use the `dtypes` attribute.
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
We can see that the variable `num_critic_for_reviews` is of type `float64`. If we want to convert it to an integer, we simply use the `astype('int')` function. The `int` data type does not accept `NaN` values, so first we will have to assign a value to the entries that do not have one. In our case, we will fill them with 0.
###Code
df.num_critic_for_reviews = df.num_critic_for_reviews.fillna(0)
df.num_critic_for_reviews.astype('int')
df.num_critic_for_reviews.head()
###Output
_____no_output_____
###Markdown
Binarizing variables: Most tools and models for analyzing data only accept numbers as variable values. This can be a problem if we have categorical variables among our features. The simplest way to convert these variables into numerical ones is to do what is known as _one-hot encoding_, that is, to transform each category of the variable into a binary vector that indicates whether the variable takes that value or not. In our dataset, for example, we will transform the _color_ variable, which has two different values.
###Code
df['color'].unique()
###Output
_____no_output_____
###Markdown
The `get_dummies` function does all the work of the _one-hot encoding_.
###Code
df_color = pd.get_dummies(df['color'])
df_color.head()
###Output
_____no_output_____
###Markdown
Now we just need to remove the `color` variable from our _dataset_ and replace it with the two binary variables we have created.
###Code
df.drop(['color'], axis = 1, inplace = True)
df = df.join(df_color)
df.head()
###Output
_____no_output_____ |
Sentiment Analysis using logistic regression.ipynb | ###Markdown
Reviews Classification (Sentiment Analysis) using Logistic Regression
###Code
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression as LR
from sklearn.metrics import roc_auc_score as AUC
from bs4 import BeautifulSoup
import re
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
train = pd.read_csv('labeledTrainData.tsv',header=0,delimiter='\t',quoting = 3)
train.loc[0,'review']
train.head()
###Output
_____no_output_____
###Markdown
Let's start by preparing the data. The goal is to: transform all the text to lower case; remove HTML tags, punctuation and numbers; and remove the stop words, i.e. very common words such as "and", "the", "is", "so", ...
###Code
example1= BeautifulSoup(train['review'][0],'lxml').get_text()
example1
letters_only=re.sub('[^a-zA-Z]',' ',example1)
letters_only
lower_case = letters_only.lower()
lower_case
words = lower_case.split()
words
print(stopwords.words('english'))
def review_prepro(raw_string):
preprocessed_string = BeautifulSoup(raw_string,'lxml').get_text()
preprocessed_string = re.sub('[^a-zA-Z]',' ',preprocessed_string)
preprocessed_string= preprocessed_string.lower()
words = preprocessed_string.split()
tabous = stopwords.words('english')
result=''
for word in words :
if word not in tabous :
result = result +' '+word
return result[1:]
train['review'][5]
review_prepro(train['review'][5])
clean_train_reviews=[]
for review in train['review']:
clean_train_reviews.append(review_prepro(review))
clean_train_reviews
###Output
_____no_output_____
###Markdown
For generating a bag of words model, we will use the scikit-learn package
###Code
vectorizer = CountVectorizer(analyzer='word', tokenizer=None, preprocessor = None, stop_words= None, max_features =5000)
train_data_features = vectorizer.fit_transform(clean_train_reviews)
train_data_features = train_data_features.toarray()
test = pd.read_csv('labeledTestData.tsv',header=0,delimiter='\t',quoting = 3)
test.head()
clean_test_reviews=[]
for review in test['review']:
clean_test_reviews.append(review_prepro(review))
len(clean_test_reviews)
test_data_features = vectorizer.transform(clean_test_reviews)
test_data_features=test_data_features.toarray()
test_data_features.shape
###Output
_____no_output_____
###Markdown
Let's use a prebuilt logistic regression model from sklearn to perform the classification
###Code
model = LR()
model.fit(train_data_features,train['sentiment'])
p=model.predict_proba(test_data_features)[:,1]
output = pd.DataFrame(data= {'id':test['id'] , 'sentiment':p})
output.head(10)
test.head(10)
###Output
_____no_output_____
###Markdown
Let's measure the quality of our classification (the quality of our sentiment analysis applied to movie reviews)
###Code
auc = AUC(test['sentiment'].values, p)
auc
###Output
_____no_output_____ |
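As an optional follow-up, not part of the original notebook, the fitted coefficients can be inspected to see which words drive the predictions. A small sketch, assuming the `vectorizer` and `model` objects defined above (on older scikit-learn versions `get_feature_names_out()` is called `get_feature_names()`):
```python
import numpy as np

feature_names = np.array(vectorizer.get_feature_names_out())
order = np.argsort(model.coef_[0])  # ascending: most negative coefficients first
print("words pushing towards negative reviews:", feature_names[order[:10]])
print("words pushing towards positive reviews:", feature_names[order[-10:]])
```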
myexamples/pylab/BYORP3.ipynb | ###Markdown
code for BYORP calculation The surface thermal inertia is neglected, so that thermal radiation is re-emitted with no time lag, and the reflected and thermally radiated components are assumed Lambertian (isotropic) and so emitted with fluxparallel to the local surface normal. We ignore heat conduction. The surface is described with a closedtriangular mesh.The radiation force from the $i$-th facet is$$ {\bf F}_i = - \frac{F_\odot}{c} {S_i} (\hat {\bf n}_i \cdot \hat {\bf s}_\odot) \hat {\bf n}_i $$where $S_i$ is the area of the $i$-th facet and $\hat {\bf n}_i$ is its surface normal.Here $F_\odot$ is the solar radiation flux and $c$ is the speed of light.The direction of the Sun is $\hat {\bf s}_\odot$.The total Yarkovsky force is a sum over all the facets $${\bf F}_Y = \sum_{i: \hat {\bf n}_i \cdot \hat {\bf s}_\odot >0} {\bf F}_i $$Only facets on the day side or with $\hat {\bf n}_i \cdot \hat {\bf s}_\odot >0$ are included in the sum.The torque affecting the binary orbit from a single facet is $$ {\boldsymbol \tau}_{i,B} = \begin{cases} - \frac{F_\odot}{c} {S_i} (\hat {\bf n}_i \cdot \hat {\bf s}_\odot) ( {\bf a}_B \times \hat {\bf n}_i) & \mbox{if } \hat {\bf n}_i \cdot \hat {\bf s}_\odot >0 \\ 0 & \mbox{otherwise} \end{cases}$$where ${\bf a}_B$ is the secondary's radial vector from the binary center of mass.The torque affecting the binary orbit is the sum of the torques from each facet and should be an average over the orbit around the Sun and over the binary orbit and spin of the secondary.$$ {\boldsymbol \tau}_{BY} = \frac{1}{T} \int_0^T dt\ \sum_{i: \hat {\bf n}_i \cdot \hat {\bf s}_\odot >0} {\boldsymbol \tau}_{i,B} $$If $\hat {\bf l}$ is the binary orbit normal then $$ {\boldsymbol \tau}_{BY} \cdot \hat {\bf l} $$ changes the binary's orbital angular momentum and causes binary orbit migration.The torque affecting the spin (also known as YORP) instantaneously depends on the radii of each facit ${\bf r}_i$ from the asteroid center of mass $$ {\boldsymbol \tau}_{i,s} = \begin{cases}- \frac{F_\odot}{c} {S_i} (\hat {\bf n}_i \cdot \hat {\bf s}_\odot) ({\bf r}_i \times \hat{\bf n}_i) & \mbox{if } \hat {\bf n}_i \cdot \hat {\bf s}_\odot >0 \\0 & \mbox{otherwise}\end{cases}$$$$ {\boldsymbol \tau}_Y = \frac{1}{T} \int_0^T dt \ \sum_{i: \hat {\bf n}_i \cdot \hat {\bf s}_\odot >0} {\boldsymbol \tau}_{i,s} $$where the average is done over the orbit about the Sun and the spin of the asteroid.If the spin axis is $\hat {\boldsymbol \omega}$ then $$ {\boldsymbol \tau}_Y \cdot \hat {\boldsymbol \omega} $$ gives the body spin up or spin down rate.In practice we average over the Sun's directions first and then average over spin (for YORP) or and spin and binary orbit direction (for BYORP) afterward. Units For our calculation are $F_\odot/c = 1$.For YORP $R=1$.For BYORP $a_B = 1$ and $R=1$ (in the surface area).Here $R$ is volume equivalent sphere radius.To put in physical units: Multiply ${\boldsymbol \tau}_Y$ by $\frac{F_\odot R^3}{c}$.Multiply ${\boldsymbol \tau}_{BY}$ by $\frac{F_\odot R^2 a_B}{c}$.Alternatively we are computing:${\boldsymbol \tau}_Y \times \frac{c}{F_\odot R^3} $ ${\boldsymbol \tau}_{BY} \times \frac{c}{F_\odot R^2 a_B} $ To get the rate the spin changes for YORP$\dot \omega = \frac{ {\boldsymbol \tau}_Y \cdot \hat {\bf s} }{C} $where $C$ is the moment of inertia about the spin axis.To order of magnitude what we are computing can be multiplied by $\frac{F_\odot R^3}{c MR^2} $ to estimate $\dot \omega$and by $\frac{F_\odot R^3}{c MR^2 \omega} $to estimate $\dot \epsilon$. 
To get the rate that obliquity changes for YORP$\dot \epsilon = \frac{ {\boldsymbol \tau}_Y \cdot \hat {\boldsymbol \phi} }{C \omega} $where unit vector $\hat {\boldsymbol \phi}$ is in the xy plane (ecliptic) and is perpendicular to the spin axis.To get the semi-major axis drift rate for BYORP$ \dot a_B = \frac{2 {\boldsymbol \tau}_{BY} \cdot \hat {\bf l}}{M n_Ba_B} $where $M$ is the secondary mass, $n_B$ and $a_B$ are binary orbit mean motion and semi-major axis.To order of magnitude to get the drift rate we multiply what we are getting by $\frac{F_\odot R^2 a_B}{c} \times \frac{1}{M n_B a_B}$.Dimensionless numbers used by Steiberg+10 (eqns 19,48)$f_{Y} \equiv \tau_{Y} \frac{3}{2} \frac{c}{\pi R^3 F_\odot}$$f_{BY} \equiv \tau_{BY} \frac{3}{2} \frac{c}{\pi R^2 a_B F_\odot}$Our computed values are the same as theirs except for a factor of 3/2 (but they have a 2/3 in their torque) and a factor of $\pi$.We need to divide by $\pi$ to have values consistent with theirs. Assumptions:Circular orbit for binary.Circuilar orbit for binary around Sun.No shadows.No conduction. Lambertian isotropic emission. No thermal lag.We neglect distance of facet centroids from secondary center of mass when computing BYORP. Coordinate system:binary orbit is kept in xy plane Compare YORP on primary to BYORP on secondary.$\frac{\tau_{Yp}}{\tau_{BY} }\sim \frac{R_p^2 }{R_s^2 } \frac{R_p }{a_B }\frac{f_Y}{ f_{BY}}$For Didymos, this is about $8 f_Y/f_{BY}$.
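As a small bookkeeping sketch tied to the note above (the torque variables below are placeholders, not defined in this notebook), converting a computed torque component in these code units to the dimensionless numbers of Steinberg+10 amounts to dividing by $\pi$:
```python
import numpy as np
# tau_Y_computed, tau_BY_computed: placeholder torque components in the code units
# described above (F_sun/c = 1, R = 1, a_B = 1); per the note above, divide by pi
f_Y = tau_Y_computed / np.pi
f_BY = tau_BY_computed / np.pi
```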
###Code
# imports used by the helper functions in this notebook
import numpy as np
import random
import pymesh

# perturb a sphere (mesh, premade) and stretch it so that
# it becomes an ellipsoid.
# We can't directly edit vertices or faces
# see this: https://github.com/PyMesh/PyMesh/issues/156
# the work around is to copy the entire mesh after modifying it
# arguments:
# devrand, Randomly add devrand to x,y,z positions of each vertex
# aratio1 and aratio2, stretch or compress a sphere by aratio1 and aratio2
# returns: a new mesh
# we assume that longest semi-major axis a is along x,
# medium semi-axis b is along y, semi-minor c axis is along z
# Volume should stay the same!
def sphere_perturb(sphere,devrand,aratio1,aratio2):
#devrand = 0.05 # how far to perturb each vertex
nv = len(sphere.vertices)
f = sphere.faces
v = np.copy(sphere.vertices)
# add perturbations to x,y,z to all vertices
for i in range(nv):
dx = devrand*random.uniform(-1,1)
dy = devrand*random.uniform(-1,1)
dz = devrand*random.uniform(-1,1)
v[i,0] += dx
v[i,1] += dy
v[i,2] += dz
# aratio1 = c/a this gives c = aratio1*a
# aratio2 = b/a this gives b = aratio2*a
# volume = 4/3 pi a*b*c for an ellipsoid
vol = 1*aratio1*aratio2
rad_cor = pow(vol,-1./3.)
v[:,2] *= aratio1*rad_cor # make oblate, adjusts z coords
v[:,1] *= aratio2*rad_cor # make elongated in xy plane , adjusts y coords
v[:,0] *= rad_cor # adjusts x coords
# volume should now stay the same
sub_com(v) # subtract center of mass from vertex positions
psphere = pymesh.form_mesh(v, f)
psphere.add_attribute("face_area")
psphere.add_attribute("face_normal")
psphere.add_attribute("face_centroid")
return psphere
# substract the center of mass from a list of vertices
def sub_com(v):
nv = len(v)
xsum = np.sum(v[:,0])
ysum = np.sum(v[:,1])
zsum = np.sum(v[:,2])
xmean = xsum/nv
ymean = ysum/nv
zmean = zsum/nv
v[:,0]-= xmean
v[:,1]-= ymean
v[:,2]-= zmean
# compute surface area by summing area of all facets
# divide by 4pi which is the surface area of a sphere with radius 1
def surface_area(mesh):
#f = mesh.faces
S_i = mesh.get_face_attribute('face_area')
area =np.sum(S_i)
return area/(4*np.pi)
# print number of faces
def nf_mesh(mesh):
f = mesh.faces
print('number of faces ',len(f))
# compute the volume of the tetrahedron formed from face with index iface
# and the origin
def vol_i(mesh,iface):
f = mesh.faces
v = mesh.vertices
iv1 = f[iface,0] # indexes of the 3 vertices
iv2 = f[iface,1]
iv3 = f[iface,2]
#print(iv1,iv2,iv3)
v1 = v[iv1] # the 3 vertices
v2 = v[iv2]
v3 = v[iv3]
#print(v1,v2,v3)
mat = np.array([v1,v2,v3])
# the volume is equal to 1/6 determinant of the matrix formed with the three vertices
# https://en.wikipedia.org/wiki/Tetrahedron
#print(mat)
vol = np.linalg.det(mat)/6.0 # compute determinant
return vol
# compute the volume of the mesh by looping over all tetrahedrons formed from the faces
# we assume that the body is convex
def volume_mesh(mesh):
f = mesh.faces
nf = len(f)
vol = 0.0
for iface in range(nf):
vol += vol_i(mesh,iface)
return vol
# if vol equ radius is 1 the volume should be equal to 4*np.pi/3 which is 4.1888
# tests
#vi = vol_i(squannit,1)
#print(vi)
#vtot = volume_mesh(squannit)
#print(vtot)
# correct all the radii so that the volume becomes that of a sphere with radius 1
# return a new mesh
def cor_volume(mesh):
vol = volume_mesh(mesh)
print('Volume {:.4f}'.format(vol))
rad = pow(vol*3/(4*np.pi),1.0/3.0)
print('radius of vol equ sphere {:.4f}'.format(rad))
f = mesh.faces
v = np.copy(mesh.vertices)
v /= rad
newmesh = pymesh.form_mesh(v, f)
newmesh.add_attribute("face_area")
newmesh.add_attribute("face_normal")
newmesh.add_attribute("face_centroid")
vol = volume_mesh(newmesh)
print('new Volume {:.3f}'.format(vol))
return newmesh
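# --- a minimal usage sketch (not called anywhere in this notebook) ---
# The helpers above expect a closed triangular mesh; one way to create a unit sphere
# with PyMesh is its icosphere generator. The lines below are an illustration only and
# are left commented out so that running this cell has no side effects.
# sphere = pymesh.generate_icosphere(1.0, np.array([0., 0., 0.]), refinement_order=3)
# body = sphere_perturb(sphere, devrand=0.05, aratio1=0.5, aratio2=0.7)
# print(surface_area(body), volume_mesh(body))  # area in units of 4*pi, volume near 4*pi/3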
# compute the radiation force instantaneously on a triangular mesh for each facet
# arguments:
# mesh, the body (a triangular surface mesh)
# s_hat is a 3 length np.array (a unit vector) pointing to the Sun
# return the vector F_i for each facet
# returns: F_i_x is the x component of F_i and is a vector that has the length of the number of faces
# Force is zero if facets are not on the day side
def F_i(mesh,s_hat):
s_len = np.sqrt(s_hat[0]**2 + s_hat[1]**2 + s_hat[2]**2) # in case s_hat was not normalized
#nf = len(mesh.faces)
S_i = mesh.get_face_attribute('face_area') # vector of facet areas
f_normal = mesh.get_face_attribute('face_normal') # vector of vector of facet normals
# normal components
nx = np.squeeze(f_normal[:,0]) # a vector, of length number of facets
ny = np.squeeze(f_normal[:,1])
nz = np.squeeze(f_normal[:,2])
# dot product of n_i and s_hat
n_dot_s = (nx*s_hat[0] + ny*s_hat[1] + nz*s_hat[2])/s_len # a vector
F_i_x = -S_i*n_dot_s*nx # a vector, length number of facets
F_i_y = -S_i*n_dot_s*ny
F_i_z = -S_i*n_dot_s*nz
ii = (n_dot_s <0) # the night sides
F_i_x[ii] = 0 # get rid of night sides
F_i_y[ii] = 0
F_i_z[ii] = 0
return F_i_x,F_i_y,F_i_z # these are each vectors for each face
# compute radiation forces F_i for each face, but averaging over all positions of the Sun
# a circular orbit for the asteroid is assumed
# arguments:
# nphi_Sun is the number of solar angles, evenly spaced in 2pi so we are assuming circular orbit
# incl is solar orbit inclination in radians
# returns: F_i_x average and other 2 components of forces for each facet
def F_i_sun_ave(mesh,nphi_Sun,incl):
dphi = 2*np.pi/nphi_Sun
# compute the first set of forces so we have vectors the right length
phi = 0.0
s_hat = np.array([np.cos(phi)*np.cos(incl),np.sin(phi)*np.cos(incl),np.sin(incl)])
# compute the radiation force instantaneously on the triangular mesh for sun at s_hat
F_i_x_sum,F_i_y_sum,F_i_z_sum = F_i(mesh,s_hat)
# now compute the forces for the rest of the solar angles
for i in range(1,nphi_Sun): # do the rest of the angles
phi = i*dphi
s_hat = np.array([np.cos(phi)*np.cos(incl),np.sin(phi)*np.cos(incl),np.sin(incl)])
# compute the radiation force instantaneously on the triangular mesh for sun at s_hat
F_i_x,F_i_y,F_i_z = F_i(mesh,s_hat) # These are vectors of length number of facets
F_i_x_sum += F_i_x # sum up forces
F_i_y_sum += F_i_y
F_i_z_sum += F_i_z
F_i_x_ave = F_i_x_sum/nphi_Sun # average
F_i_y_ave = F_i_y_sum/nphi_Sun
F_i_z_ave = F_i_z_sum/nphi_Sun
return F_i_x_ave,F_i_y_ave,F_i_z_ave # these are vectors for each face
# compute cross product C=AxB using components
def cross_prod_xyz(Ax,Ay,Az,Bx,By,Bz):
Cx = Ay*Bz - Az*By
Cy = Az*Bx - Ax*Bz
Cz = Ax*By - Ay*Bx
return Cx,Cy,Cz
# compute total Yorp torque averaging over nphi_Sun solar positions
# this is at a single body orientation
# a circular orbit is assumed
# arguments:
# mesh: the body
# nphi_Sun is the number of solar angles
# incl is solar orbit inclination in radians
# returns: torque components
def tau_Ys(mesh,nphi_Sun,incl):
# compute F_i for each face, but averaging over all positions of the Sun
F_i_x_ave, F_i_y_ave,F_i_z_ave = F_i_sun_ave(mesh,nphi_Sun,incl)
r_i = mesh.get_face_attribute("face_centroid") # radii to each facet
rx = np.squeeze(r_i[:,0]) # radius of centroid from center of mass
ry = np.squeeze(r_i[:,1]) # these are vectors, length number of faces
rz = np.squeeze(r_i[:,2])
# cross product works on vectors
tau_i_x,tau_i_y,tau_i_z = cross_prod_xyz(rx,ry,rz,F_i_x_ave,F_i_y_ave,F_i_z_ave)
#This is the torque from each day lit facet
tau_x = np.sum(tau_i_x) # sum up forces from all faces
tau_y = np.sum(tau_i_y)
tau_z = np.sum(tau_i_z)
return tau_x,tau_y,tau_z # these are numbers for torque components
# compute total BYORP averaging over nphi_Sun solar positions
# for a single binary vector a_bin and body position described with mesh
# arguments:
# incl is solar orbit inclination in radians
# nphi_Sun is the number of solar angles
# returns: torque components
def tau_Bs(mesh,nphi_Sun,incl,a_bin):
# compute F_i for each face, but averaging over all positions of the Sun
F_i_x_ave, F_i_y_ave,F_i_z_ave = F_i_sun_ave(mesh,nphi_Sun,incl) # these are vectors length number of faces
# forces from day lit faces
F_x = np.sum(F_i_x_ave) #sum up the force
F_y = np.sum(F_i_y_ave)
F_z = np.sum(F_i_z_ave)
a_x = a_bin[0] # binary direction
a_y = a_bin[1]
a_z = a_bin[2]
tau_x,tau_y,tau_z = cross_prod_xyz(a_x,a_y,a_z,F_x,F_y,F_z) # cross product
return tau_x,tau_y,tau_z # these are numbers that give the torque components
# first rotate vertices in the mesh about the z axis by angle phi in radians
# then tilt over the body by obliquity which is an angle in radians
# arguments:
# mesh, triangular surface mesh for the body
# obliquity, angle in radians to tilt the body z axis over
# phi, angle in radians to spin/rotate body about its z axis
# phi_prec, angle in radians at which the tilt is applied, it's a precession angle
# sets rotation axis for tilt, this axis is in the xy plane
# returns:
# new_mesh: the tilted/rotated mesh
# zrot: the new z-body spin axis
def tilt_obliq(mesh,obliquity,phi,phi_prec):
f = mesh.faces
v = np.copy(mesh.vertices)
nv = len(v)
# precession angle is phi_prec
axist = np.array([np.cos(phi_prec),np.sin(phi_prec),0])
qt = pymesh.Quaternion.fromAxisAngle(axist, obliquity)
zaxis = np.array([0,0,1])
zrot = qt.rotate(zaxis) # body principal axis will become zrot
# spin rotation about now tilted principal body axis
qs = pymesh.Quaternion.fromAxisAngle(zrot, phi)
# loop over all vertices and do two rotations
for i in range(nv):
v[i] = qt.rotate(v[i]) # tilt it over
v[i] = qs.rotate(v[i]) # spin
new_mesh = pymesh.form_mesh(v, f)
new_mesh.add_attribute("face_area")
new_mesh.add_attribute("face_normal")
new_mesh.add_attribute("face_centroid")
return new_mesh,zrot
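# sanity check of tilt_obliq (illustrative only): with zero obliquity, zero spin angle and
# zero precession angle both quaternions are the identity, so the vertices should come back
# unchanged and zrot should be the +z axis
test_sphere = pymesh.generate_icosphere(1., np.array([0., 0., 0.]), refinement_order=1)
tmesh, zrot_check = tilt_obliq(test_sphere, 0.0, 0.0, 0.0)
print(np.allclose(tmesh.vertices, test_sphere.vertices), zrot_check)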
# tilt,spin a body and compute binary direction, assuming tidally locked
# arguments:
# body: triangular surface mesh (in principal axis coordinate system)
# nphi is the number of spin angles that can be indexed by iphi
# obliquity: w.r.t. binary orbit angular momentum direction
# iphi: number of rotations by dphi where dphi = 2pi/nphi
# this is principal axis rotation about z axis
# phi0: an offset for phi applied to body but not binary axis
# phi_prec: a precession angle for tilting
# returns:
# tbody, a body rotated after iphi rotations by dphi and tilted by obliquity
# a_bin, binary direction assuming same rotation rate, tidal lock
# l_bin: binary orbit angular momentum orbital axis
# zrot: spin axis direction
def tilt_and_bin(body,obliquity,nphi,iphi,phi0,phi_prec):
dphi = 2*np.pi/nphi
phi = iphi*dphi
tbody,zrot = tilt_obliq(body,obliquity,phi + phi0,phi_prec) # tilt and spin body
a_bin = np.array([np.cos(phi),np.sin(phi),0.0]) # direction to binary
l_bin = np.array([0,0,1.0]) # angular momentum axis of binary orbit
return tbody,a_bin,l_bin,zrot
# compute the YORP torque on body
# arguments:
# body: triangular surface mesh (in principal axis coordinate system)
# nphi is the number of body spin angles
# nphi_Sun is the number of solar angles used
# obliquity: angle of body w.r.t. the Sun, aka ecliptic pole
# returns:
# 3 torque components
# torque dot spin axis so spin down rate can be computed
# torque dot azimuthal unit vector so obliquity change rate can be computed
def compute_Y(body,obliquity,nphi,nphi_Sun):
incl = 0.0 # set Sun inclination to zero so obliquity is w.r.t solar orbit
phi0 = 0 # offset in spin set to zero
phi_prec=0 # precession angle set to zero
tau_Y_x = 0.0
tau_Y_y = 0.0
tau_Y_z = 0.0
for iphi in range(nphi): # body positions
# rotate the body and tilt it over
tbody,a_bin,l_bin,zrot = tilt_and_bin(body,obliquity,nphi,iphi,phi0,phi_prec)
# compute torques over solar positions
tau_x,tau_y,tau_z = tau_Ys(tbody,nphi_Sun,incl)
tau_Y_x += tau_x
tau_Y_y += tau_y
tau_Y_z += tau_z
tau_Y_x /= nphi # average
tau_Y_y /= nphi
tau_Y_z /= nphi
# compute component that affects spin-down/up rate, this is tau dot spin axis
sx = zrot[0]; sy = zrot[1]; sz=zrot[2]
tau_s = tau_Y_x*sx + tau_Y_y*sy + tau_Y_z*sz
# we need a unit vector, phi_hat, that is in the xy plane, points in the azimuthal direction
# and is perpendicular to the rotation axis
spl = np.sqrt(sx**2 + sy**2)
tau_o = 0
if (spl >0):
phi_hat_x = sy/spl
phi_hat_y = -sx/spl
phi_hat_z = 0
tau_o = tau_Y_x*phi_hat_x + tau_Y_y*phi_hat_y+tau_Y_z*phi_hat_z
# tau_o should tell us about obliquity change rate
return tau_Y_x,tau_Y_y,tau_Y_z,tau_s,tau_o
# compute the BYORP torque, for a tidally locked binary
# arguments:
# body: triangular surface mesh (in principal axis coordinate system)
# nphi is the number of body angles we will use (spin)
# obliquity is body tilt w.r.t. the binary orbit
# incl is solar orbit inclination
# nphi_Sun is the number of solar angles used
# phi0 an offset for body spin angle that is not applied to binary direction
# phi_prec z-axis precession angle, relevant for obliquity
# returns:
# 3 torque components
# torque dot l_bin so can compute binary orbit drift rate
def compute_BY(body,obliquity,nphi,nphi_Sun,incl,phi0,phi_prec):
tau_BY_x = 0.0
tau_BY_y = 0.0
tau_BY_z = 0.0
for iphi in range(nphi): # body positions
# rotate the body and tilt it over, and find binary direction
tbody,a_bin,l_bin,zrot = tilt_and_bin(body,obliquity,nphi,iphi,phi0,phi_prec)
# a_bin is binary direction
# compute torques over spin/body positions
tau_x,tau_y,tau_z =tau_Bs(tbody,nphi_Sun,incl,a_bin)
tau_BY_x += tau_x
tau_BY_y += tau_y
tau_BY_z += tau_z
tau_BY_x /= nphi # average
tau_BY_y /= nphi
tau_BY_z /= nphi
# compute component that affects binary orbit angular momentum
# this is tau dot l_bin
tau_l = tau_BY_x*l_bin[0] + tau_BY_y*l_bin[1] + tau_BY_z*l_bin[2]
return tau_BY_x,tau_BY_y,tau_BY_z, tau_l
print(4*np.pi/3)
# create a sphere of radius 1
center = np.array([0,0,0])
sphere = pymesh.generate_icosphere(1., center, refinement_order=2)
sphere.add_attribute("face_area")
sphere.add_attribute("face_normal")
sphere.add_attribute("face_centroid")
print(volume_mesh(sphere))
nf_mesh(sphere)
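# illustrative sanity checks on the unit sphere (assuming unit outward facet normals, as
# F_i above does): with the Sun along +z the summed force should be roughly (0, 0, -2*pi/3),
# i.e. about (0, 0, -2.09) from the analytic hemisphere integral of cos^2, and the YORP
# torque should nearly vanish since facet centroids are almost parallel to facet normals
Fx, Fy, Fz = F_i(sphere, np.array([0.0, 0.0, 1.0]))
print('net force on sphere {:.3f} {:.3f} {:.3f}'.format(np.sum(Fx), np.sum(Fy), np.sum(Fz)))
print('sphere YORP torque', tau_Ys(sphere, 36, 0.0))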
# create a perturbed ellipsoid using the above sphere
devrand = 0.05 # perturbation size
aratio1 = 0.5 # axis ratios
aratio2 = 0.7
body = sphere_perturb(sphere,devrand,aratio1,aratio2) # create it
print(volume_mesh(body)) #check volume
p=meshplot.plot(body.vertices, body.faces,return_plot=True) # show it
# add a red line which could show where the binary is
r = 1.5; theta = np.pi/4
p0 = np.array([0,0,0]); p1 = np.array([r*np.cos(theta),r*np.sin(theta),0])
p.add_lines(p0, p1, shading={"line_color": "red", "line_width": 1.0});
# check total surface area
print(surface_area(sphere))
print(surface_area(body))
# subtract 1 and you have approximately the d_s used by Steinberg+10
# many of their d_s are lower (see their figure 3)
# compute the YORP torque on body as a function of obliquity
# here obliquity is w.r.t Sun
# returns obliquity and torque arrays
def obliq_Y_fig(body):
nphi_Sun=36 # number of solar positions
nphi = 36 # number of spin positions
nobliq = 20 # number of obliquities
dobliq = np.pi/nobliq
tau_s_arr = np.zeros(nobliq) # to store torques
tau_o_arr = np.zeros(nobliq) # to store torques
o_arr = np.zeros(nobliq) # to store obliquities in degrees
for i in range(nobliq):
obliquity=i*dobliq
tau_Y_x,tau_Y_y,tau_Y_z,tau_s,tau_o =compute_Y(body,obliquity,nphi,nphi_Sun)
#print(tau_s)
tau_s_arr[i] = tau_s
tau_o_arr[i] = tau_o
o_arr[i] = obliquity*180/np.pi
return o_arr, tau_s_arr, tau_o_arr
# compute YORPs as a function of obliquity (single body, obliquity w.r.t Solar orbit)
o_arr, tau_s_arr, tau_o_arr = obliq_Y_fig(body)
# also check the sphere for YORP
o_arr2, tau_s_arr2,tau_o_arr2 = obliq_Y_fig(sphere) # note y axis
# compare the two YORPs
fig,ax = plt.subplots(1,1,figsize=(5,4),dpi=150)
ax.plot(o_arr2,tau_s_arr2,'go-',label='sphere')
#ax.plot(o_arr2,tau_o_arr2,'bo-',label='sphere')
ax.plot(o_arr,tau_s_arr,'rD-',label=r'body, $s$')
ax.plot(o_arr,tau_o_arr,'D:',label='body, $o$', color='orange')
ax.set_xlabel('obliquity (deg)',fontsize=16)
ax.set_ylabel(r'${ \tau}_Y \cdot \hat{ s}, { \tau}_Y \cdot \hat{\phi}$',fontsize=16)
ax.legend()
# the sizes here agree with right hand side of Figure 3 by Steinberg&Sari+11
# compute the BYORP torque on body as a function of inclination
# for a given obliquity and precession angle
# returns inclination and torque arrays
def obliq_BY_fig(body,obliquity,phi_prec):
phi0=0
nphi_Sun=36 # number of solar positions
nphi = 36 # number of spin positions
nincl = 20 # number of inclinations
dincl = np.pi/nincl
tau_l_arr = np.zeros(nincl) # to store torques
i_arr = np.zeros(nincl)
for i in range(nincl):
incl=i*dincl
tau_BY_x,tau_BY_y,tau_BY_z, tau_l =compute_BY(body,obliquity,nphi,nphi_Sun,incl,phi0,phi_prec)
i_arr[i] = incl*180/np.pi
tau_l_arr[i] = tau_l
return i_arr,tau_l_arr
# compute the BYORP torque on body as a function of obliquity
# for a given inclination and precession angle
# returns obliquity and torque arrays
def obliq_BY_fig2(body,incl,phi_prec):
phi0=0
nphi_Sun=36 # number of solar positions
nphi = 36 # number of spin positions
nobliq = 60 # number of obliquities
dobliq = np.pi/nobliq
tau_l_arr = np.zeros(nobliq) # to store torques
o_arr = np.zeros(nobliq)
for i in range(nobliq):
obliquity=i*dobliq
tau_BY_x,tau_BY_y,tau_BY_z, tau_l =compute_BY(body,obliquity,nphi,nphi_Sun,incl,phi0,phi_prec)
o_arr[i] = obliquity*180/np.pi
tau_l_arr[i] = tau_l
return o_arr,tau_l_arr
# compute BYORPs as a function of inclination
obliquity = 0; phi_prec=0
squannit = pymesh.load_mesh("kw4b.obj")
body, info = pymesh.collapse_short_edges(squannit, 0.05)
i_arr,tau_l_arr = obliq_BY_fig(body,obliquity,phi_prec)
i_arr2,tau_l_arr2 = obliq_BY_fig(sphere,obliquity,phi_prec)
fig,ax = plt.subplots(1,1,figsize=(5,4),dpi=150)
ax.plot(i_arr2,tau_l_arr2,'go-',label='sphere')
ax.plot(i_arr,tau_l_arr,'rD-',label='body')
ax.set_xlabel('inclination (deg)',fontsize=16)
ax.set_ylabel(r'${\tau}_{BY} \cdot \hat{l}$',fontsize=16)
ax.legend()
# compute the BYORP torque on body as a function of precession angle
# for a given obliquity and inclination
# returns precession angle and torque arrays
def obliq_BY_fig3(body,obliquity,incl):
phi0=0
nphi_Sun=36 # number of solar positions
nphi = 36 # number of spin positions
nprec = 30 # number of precession angles
dprec = np.pi/nprec # only goes from 0 to pi
tau_l_arr = np.zeros(nprec) # to store torques
p_arr = np.zeros(nprec)
for i in range(nprec):
phi_prec=i*dprec
tau_BY_x,tau_BY_y,tau_BY_z, tau_l =compute_BY(body,obliquity,nphi,nphi_Sun,incl,phi0,phi_prec)
p_arr[i] = phi_prec*180/np.pi
tau_l_arr[i] = tau_l
return p_arr,tau_l_arr
# compute the BYORP torque on body as a function of libration angle phi0
# for a given obliquity and inclination and precession angle
# returns libration angle and torque arrays
def obliq_BY_fig4(body,obliquity,incl,phi_prec):
phi0=0
nphi_Sun=36 # number of solar positions
nphi = 36 # number of spin positions
nlib = 20 # number of libration angles
dlib = 0.5*np.pi/nlib # going from -pi/4 to pi/4
tau_l_arr = np.zeros(nlib) # to store torques
l_arr = np.zeros(nlib)
for i in range(nlib):
phi0=i*dlib - np.pi/4
tau_BY_x,tau_BY_y,tau_BY_z, tau_l =compute_BY(body,obliquity,nphi,nphi_Sun,incl,phi0,phi_prec)
l_arr[i] = phi0*180/np.pi
tau_l_arr[i] = tau_l
return l_arr,tau_l_arr
# compute the BYORP torque on body as a function of obliquity and precession angle
# for a given inclination
# returns 2D torque array and arrays for the axes so a contour or color image can be plotted
def obliq_BY_fig2D(body,incl):
phi0=0
nphi_Sun=36 # number of solar positions
nphi = 36 # number of spin positions
nprec = 10 # number of precession angles
nobliq = 12 # number of obliquities
dprec = np.pi/nprec
dobliq = np.pi/nobliq
tau_l_arr = np.zeros((nprec,nobliq)) # to store torques
# with imshow x axis will be obliq
p_arr = np.zeros(nprec)
o_arr = np.zeros(nobliq)
for i in range(nprec):
phi_prec=i*dprec
p_arr[i] = phi_prec*180/np.pi
for j in range(nobliq):
obliquity = j*dobliq
tau_BY_x,tau_BY_y,tau_BY_z, tau_l =compute_BY(body,obliquity,nphi,nphi_Sun,incl,phi0,phi_prec)
o_arr[j] = obliquity*180/np.pi
tau_l_arr[i,j] = tau_l
print(i)
return p_arr,o_arr,tau_l_arr
# compute BYORPs as a function of obliquity
incl = 0; phi_prec=0
squannit = pymesh.load_mesh("kw4b.obj")
body, info = pymesh.collapse_short_edges(squannit, 0.05)
o_arr,tau_l_arr = obliq_BY_fig2(body,incl,phi_prec)
o_arr2,tau_l_arr2 = obliq_BY_fig2(sphere,incl,phi_prec)
fig,ax = plt.subplots(1,1,figsize=(5,4),dpi=300)
ax.plot(o_arr2,tau_l_arr2,'go-',label='sphere')
ax.plot(o_arr,tau_l_arr,'rD-',label='body')
ax.set_xlabel('obliquity (deg)',fontsize=16)
ax.set_ylabel(r'${ \tau}_{BY} \cdot \hat{l}$',fontsize=16)
ax.legend()
plt.savefig('tau_BY_obl1234.png')
# compute BYORPs as a function of libration angle
incl = 0; phi_prec=0
squannit = pymesh.load_mesh("kw4b.obj")
body, info = pymesh.collapse_short_edges(squannit, 0.05)
incl = 0; phi_prec=0; obliquity = np.pi/4
l_arr,tau_l_arr=obliq_BY_fig4(body,obliquity,incl,phi_prec)
fig,ax = plt.subplots(1,1,figsize=(5,4),dpi=150)
#ax.plot(o_arr2,tau_l_arr2,'go-',label='sphere')
ax.plot(l_arr,tau_l_arr,'rD-',label='body')
ax.set_xlabel('libration angle (deg)',fontsize=16)
ax.set_ylabel(r'${ \tau}_{BY} \cdot \hat{l}$',fontsize=16)
ax.legend()
#plt.savefig('tau_BY_lib.png')
# fairly sensitive to libration angle
###Output
_____no_output_____
###Markdown
what next? Normalize in terms of what Steinberg+11 used and compare drift rates to the size of BYORPs estimated by other people. Done. We need to figure out how the dimensionless constants used by Jacobson and Scheeres+11 compare with those by Steinberg+11. Steinberg shows in their figure 4 that they expect similar sizes for the two dimensionless parameters. Our computations are not consistent with that as we get larger BYORP (by a factor of about 100) than YORP coefficients. However, our body is not round, compared to most of theirs. We need to get a shape model for something that has a YORP or BYORP prediction and check our code with it. Try the Moshup secondary, called Squannit, shape model. We need to explore the sensitivity of our BYORP to obliquity with the shape model.
###Code
# we seem to find that moderate obliquity variations can reverse BYORP, particularly for a non-round secondary.
# And this is for fixed obliquity, not chaotic ones.
# We might be able to somewhat mitigate the tension between dissipation rate estimates
# and we would predict obliquity in Didymos! yay!
#https://www.naic.edu/~smarshal/1999kw4.html
#Squannit is secondary of Moshup which was 1999KW4
squannit = pymesh.load_mesh("kw4b.obj")
nf_mesh(squannit)
# we need to normalize it so that its volume is 4/3 pi
# to compute the volume of a tetrahedron that is made from a face + a vertex at the origin
# we need to compute the determinant of a 3x3 matrix that consists of the 3 vertices in the face.
# we then sum over all faces in the mesh to get the total volume
# alternatively we use the generalized voxel thing in pymesh which is 4 vertices.
# to do this we add a vertex at the center and then we need to make the same number
# of voxels as faces using the vertex at the origin.
# and then we sum over the voxel_volume attributes of all the voxels.
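# a minimal sketch of the signed-tetrahedron sum described above (illustrative only, not the
# volume_mesh() routine used elsewhere in this notebook): for a closed mesh with consistently
# outward-oriented faces, the signed volume of the tetrahedron (origin, v0, v1, v2) is
# det([v0, v1, v2])/6, and summing over all faces gives the enclosed volume
def volume_from_determinants(mesh):
    v = mesh.vertices
    vol = 0.0
    for face in mesh.faces:
        vol += np.linalg.det(np.array([v[face[0]], v[face[1]], v[face[2]]])) / 6.0
    return vol
print(volume_from_determinants(squannit))  # should agree with volume_mesh(squannit) printed below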
p=meshplot.plot(squannit.vertices, squannit.faces,return_plot=True) # show it
vol = volume_mesh(squannit)
print(vol)
R_squannit = pow(vol*3/(4.0*np.pi),1.0/3.0)
print(R_squannit) # I don't know what units this is in. maybe km
# the object is supposed to have Beta: 0.571 x 0.463 x 0.349 km (6% uncertainty)
# rescale so it has vol equiv sphere radius of 1
new_squannit = cor_volume(squannit)
# reduce the number of faces to something reasonable
short_squannit, info = pymesh.collapse_short_edges(squannit, 0.05)
nf_mesh(short_squannit)
meshplot.plot(short_squannit.vertices, short_squannit.faces)
# compute BYORPs as a function of obliquity
incl = 0; phi_prec=0
o_arr,tau_l_arr = obliq_BY_fig2(short_squannit,incl,phi_prec)
fig,ax = plt.subplots(1,1,figsize=(5,4),dpi=150)
#ax.plot(o_arr2,tau_l_arr2,'go-',label='sphere')
ax.plot(o_arr,tau_l_arr,'rD-',label='squannit')
ax.set_xlabel('obliquity (deg)',fontsize=16)
ax.set_ylabel(r'${ \tau}_{BY} \cdot \hat{l}$',fontsize=16)
ax.legend()
#plt.savefig('tau_BY_obl.png')
import time
# compute BYORPs as a function of precession angle; seems not sensitive to precession angle
incl = 0; phi_prec=0
squannit = pymesh.load_mesh("kw4b.obj")
body, info = pymesh.collapse_short_edges(squannit, 0.05)
incl = 0; #phi_prec=0
obliquity=np.pi/4
start = time.perf_counter()
p_arr,tau_l_arr = obliq_BY_fig3(body,obliquity,incl)
end = time.perf_counter()#Print Time
print(f'D8: time to complete {round(end - start,2)} second(s)')
# p_arr2,tau_l_arr2 = obliq_BY_fig3(sphere,obliquity,incl)
fig,ax = plt.subplots(1,1,figsize=(5,4),dpi=150)
# ax.plot(p_arr2,tau_l_arr2,'go-',label='sphere')
ax.plot(p_arr,tau_l_arr,'rD-',label='body')
ax.set_xlabel('precession angle (deg)',fontsize=16)
ax.set_ylabel(r'${ \tau}_{BY} \cdot \hat{l}$',fontsize=16)
ax.legend()
# I don't understand why this has period 6
# It is very sensitive to obliquity but not to precession angle
incl = 0 # this takes a really long time!
p_arr,o_arr,tau_l_arr_2D=obliq_BY_fig2D(body,incl)
fig,ax = plt.subplots(1,1,figsize=(5,4),dpi=150)
ax.set_ylabel('precession angle (deg)',fontsize=16)
ax.set_xlabel('obliquity (deg)',fontsize=16)
maxt = np.max(tau_l_arr_2D)
mint = np.min(tau_l_arr_2D)
maxabs = max(abs(maxt),abs(mint))
im=ax.imshow(tau_l_arr_2D, cmap='RdBu',vmin=-maxabs, vmax=maxabs,
extent=[np.min(o_arr), np.max(o_arr), np.min(p_arr), np.max(p_arr)], origin='lower')
plt.colorbar(im)
# more tests below
# see if compute_Y works on body
nphi_Sun=36
nphi = 36
obliquity=0
tau_Y_x,tau_Y_y,tau_Y_z,tau_s,tau_o = compute_Y(body,obliquity,nphi,nphi_Sun)  # compute_Y returns 5 values
print(tau_Y_x ,tau_Y_y ,tau_Y_z,tau_s)
# see if compute_BY works on body
incl=0.0
tau_BY_x,tau_BY_y,tau_BY_z, tau_l = compute_BY(body,obliquity,nphi,nphi_Sun,incl,0,0)  # phi0=0, phi_prec=0
print(tau_BY_x ,tau_BY_y ,tau_BY_z,tau_l)
#check flip by 180degrees
tau_BY_x,tau_BY_y,tau_BY_z, tau_l = compute_BY(body,obliquity,nphi,nphi_Sun,incl,np.pi,0)  # phi0=pi, phi_prec=0
print(tau_BY_x ,tau_BY_y ,tau_BY_z,tau_l)
# see if compute_Y works on sphere
tau_Y_x,tau_Y_y,tau_Y_z,tau_s,tau_o = compute_Y(sphere,obliquity,nphi,nphi_Sun)
print(tau_Y_x ,tau_Y_y ,tau_Y_z,tau_s)
# see how compute_BY works on sphere
tau_BY_x,tau_BY_y,tau_BY_z, tau_l = compute_BY(sphere,obliquity,nphi,nphi_Sun,incl,0.05,0)  # phi0=0.05, phi_prec=0
print(tau_BY_x,tau_BY_y,tau_BY_z,tau_l)
# see how compute_BY works on sphere
tau_BY_x,tau_BY_y,tau_BY_z, tau_l = compute_BY(sphere,obliquity,nphi,nphi_Sun,incl,0.0,0)  # phi0=0, phi_prec=0
print(tau_BY_x,tau_BY_y,tau_BY_z,tau_l)
# all tests so far seem reasonable, sphere gives a BYORP but is sensitive to initial angle of rotation
# as our sphere is multifaceted.
# the size is smaller than for our other shapes
###Output
_____no_output_____ |
Assignment_3/Part_1_analysis_pi/Code.ipynb | ###Markdown
Question 1
###Code
# How many leading decimal digits are correct when comparing with piMathe
def sameLetter(test, answer):
    n = 0
    for (t, a) in zip(test, answer):
        if t == a:
            n = n+1
        else:
            return n
    return n  # all compared digits matched
if __name__ == "__main__":
n = sameLetter(piEgypt, piMathe)
print('For piEgypt, n = {}'.format(n))
n = sameLetter(piChina, piMathe)
print('For piChina, n = {}'.format(n))
n = sameLetter(piIndia, piMathe)
print('For piIndia, n = {}'.format(n))
n = sameLetter(piGreec, piMathe)
print('For piGreec, n = {}'.format(n))
    print('China method gave the highest precision')
# Compute the frequency
def digitFrequency(inputVector):
n = len(inputVector)
ans = [ 0 for i in range(10)]
for d in inputVector:
d = int(d)
ans[d] = ans[d] + 1
ans = np.array(ans, dtype = 'f')
ans = (ans * 100) / len(inputVector)
return ans
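# worked example of digitFrequency (illustrative): for the ten digits "3141592653" the counts
# are {1:2, 2:1, 3:2, 4:1, 5:2, 6:1, 9:1}, so the percentages should be
# [0, 20, 10, 20, 10, 20, 10, 0, 0, 10]
print(digitFrequency("3141592653"))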
if __name__ == "__main__":
f = digitFrequency(piMathe)
print("Frequency of piMathe = {}, sum = {}, max = {}, min = {}".format(f, sum(f), max(f), min(f)))
f = digitFrequency(piEgypt)
print("Frequency of piEgype is {}, sum = {}, max = {}, min = {}".format(f, sum(f), max(f), min(f)))
f = digitFrequency(piChina)
print("Frequency of piChina is {}, sum = {}, max = {}, min = {}".format(f, sum(f), max(f), min(f)))
f = digitFrequency(piIndia)
print("Frequency of piIndia is {}, sum = {}, max = {}, min = {}".format(f, sum(f), max(f), min(f)))
f = digitFrequency(piGreec)
print("Frequency of piGreec is {}, sum = {}, max = {}, min = {}".format(f, sum(f), max(f), min(f)))
###Output
Frequency of piMathe = [ 4. 10. 10. 16. 8. 10. 8. 8. 10. 16.], sum = 100.0, max = 16.0, min = 4.0
Frequency of piEgype is [ 4. 16. 12. 14. 14. 14. 0. 8. 6. 12.], sum = 100.0, max = 16.0, min = 0.0
Frequency of piChina is [ 8. 6. 10. 10. 12. 20. 6. 6. 10. 12.], sum = 100.0, max = 20.0, min = 6.0
Frequency of piIndia is [ 2. 2. 8. 12. 10. 8. 6. 4. 34. 14.], sum = 100.0, max = 34.0, min = 2.0
Frequency of piGreec is [10. 18. 4. 12. 10. 12. 4. 10. 10. 10.], sum = 100.0, max = 18.0, min = 4.0
###Markdown
Question 2
###Code
piMathe = digitFrequency(piMathe)
piEgypt = digitFrequency(piEgypt)
piChina = digitFrequency(piChina)
piIndia = digitFrequency(piIndia)
piGreec = digitFrequency(piGreec)
print(piMathe)
print(piEgypt)
print(piChina)
print(piIndia)
print(piGreec)
import statistics
def maxAbs(test, ans):
errorList = []
for (t, a) in zip(test, ans):
t = int(t)
a = int(a)
error = abs(t - a)
errorList.append(error)
return max(errorList)
def medianAbs(test, ans):
errorList = []
for (t, a) in zip(test, ans):
t = int(t)
a = int(a)
error = abs(t - a)
errorList.append(error)
return statistics.median(errorList)
def meanAbs(test, ans):
errorList = []
for (t, a) in zip(test, ans):
t = int(t)
a = int(a)
error = abs(t - a)
errorList.append(error)
return sum(errorList) / len(errorList)
def rootSquError(test, ans):
errorList = []
for (t, a) in zip(test, ans):
t = int(t)
a = int(a)
error = abs(t - a)
errorList.append(error * error)
return(sum(errorList) / len(errorList))**0.5
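# cross-check (illustrative): the four statistics above can also be computed in vectorized
# form with numpy; because the frequencies here are whole percentages, the int() truncation
# in the loops above does not change the results
def error_stats_vectorized(test, ans):
    err = np.abs(np.asarray(test, dtype=float) - np.asarray(ans, dtype=float))
    return err.max(), np.median(err), err.mean(), np.sqrt(np.mean(err**2))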
if __name__ == "__main__":
# Max Absolute
e = maxAbs(piEgypt, piMathe)
print("piEgypt, max absolute is {}".format(e))
e = maxAbs(piChina, piMathe)
print("piChina, max absolute is {}".format(e))
e = maxAbs(piIndia, piMathe)
print("piIndia, max absolute is {}".format(e))
e = maxAbs(piGreec, piMathe)
print("piGreec, max absolute is {}".format(e))
print()
# Median Absolute
e = medianAbs(piEgypt, piMathe)
print("piEgypt, median absolute is {}".format(e))
e = medianAbs(piChina, piMathe)
print("piChina, median absolute is {}".format(e))
e = medianAbs(piIndia, piMathe)
print("piIndia, median absolute is {}".format(e))
e = medianAbs(piGreec, piMathe)
print("piGreec, median absolute is {}".format(e))
print()
# Mean Absolute
e = meanAbs(piEgypt, piMathe)
print("piEgypt, mean absolute is {}".format(e))
e = meanAbs(piChina, piMathe)
print("piChina, mean absolute is {}".format(e))
e = meanAbs(piIndia, piMathe)
print("piIndia, mean absolute is {}".format(e))
e = meanAbs(piGreec, piMathe)
print("piGreec, mean absolute is {}".format(e))
print()
# RMSE
e = rootSquError(piEgypt, piMathe)
print("piEgypt, RMSE is {:.1f}".format(e))
e = rootSquError(piChina, piMathe)
print("piChina, RMSE is {:.1f}".format(e))
e = rootSquError(piIndia, piMathe)
print("piIndia, RMSE is {:.1f}".format(e))
e = rootSquError(piGreec, piMathe)
print("piGreec, RMSE is {:.1f}".format(e))
print()
###Output
piEgypt, max absolute is 8
piChina, max absolute is 10
piIndia, max absolute is 24
piGreec, max absolute is 8
piEgypt, median absolute is 4.0
piChina, median absolute is 4.0
piIndia, median absolute is 2.0
piGreec, median absolute is 4.0
piEgypt, mean absolute is 3.6
piChina, mean absolute is 3.6
piIndia, mean absolute is 5.2
piGreec, mean absolute is 4.0
piEgypt, RMSE is 4.4
piChina, RMSE is 4.6
piIndia, RMSE is 8.3
piGreec, RMSE is 4.6
|
contrib/fairness/fairlearn-azureml-mitigation.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.  Unfairness Mitigation with Fairlearn and Azure Machine Learning**This notebook shows how to upload results from Fairlearn's GridSearch mitigation algorithm into a dashboard in Azure Machine Learning Studio** Table of Contents1. [Introduction](Introduction)1. [Loading the Data](LoadingData)1. [Training an Unmitigated Model](UnmitigatedModel)1. [Mitigation with GridSearch](Mitigation)1. [Uploading a Fairness Dashboard to Azure](AzureUpload) 1. Registering models 1. Computing Fairness Metrics 1. Uploading to Azure1. [Conclusion](Conclusion) IntroductionThis notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.github.io) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.github.io/).We will apply the [grid search algorithm](https://fairlearn.github.io/api_reference/fairlearn.reductions.htmlfairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio. SetupTo use this notebook, an Azure Machine Learning workspace is required.Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.This notebook also requires the following packages:* `azureml-contrib-fairness`* `fairlearn==0.4.6`* `joblib`* `shap`Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:
###Code
# !pip install --upgrade scikit-learn>=0.22.1
###Output
_____no_output_____
###Markdown
Loading the DataWe use the well-known `adult` census dataset, which we load using `shap` (for convenience). We start with a fairly unremarkable set of imports:
###Code
from fairlearn.reductions import GridSearch, DemographicParity, ErrorRate
from fairlearn.widget import FairlearnDashboard
from sklearn import svm
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression
import pandas as pd
###Output
_____no_output_____
###Markdown
We can now load and inspect the data from the `shap` package:
###Code
from sklearn.datasets import fetch_openml
data = fetch_openml(data_id=1590, as_frame=True)
X_raw = data.data
Y = (data.target == '>50K') * 1
X_raw["race"].value_counts().to_dict()
###Output
_____no_output_____
###Markdown
We are going to treat the sex of each individual as a protected attribute (where 0 indicates female and 1 indicates male), and in this particular case we are going to separate this attribute out and drop it from the main data (this is not always the best option - see the [Fairlearn website](http://fairlearn.github.io/) for further discussion). We also separate out the Race column, but we will not perform any mitigation based on it. Finally, we perform some standard data preprocessing steps to convert the data into a format suitable for the ML algorithms
###Code
A = X_raw[['sex','race']]
X = X_raw.drop(labels=['sex', 'race'],axis = 1)
X_dummies = pd.get_dummies(X)
sc = StandardScaler()
X_scaled = sc.fit_transform(X_dummies)
X_scaled = pd.DataFrame(X_scaled, columns=X_dummies.columns)
le = LabelEncoder()
Y = le.fit_transform(Y)
###Output
_____no_output_____
###Markdown
With our data prepared, we can make the conventional split in to 'test' and 'train' subsets:
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test, A_train, A_test = train_test_split(X_scaled,
Y,
A,
test_size = 0.2,
random_state=0,
stratify=Y)
# Work around indexing issue
X_train = X_train.reset_index(drop=True)
A_train = A_train.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)
A_test = A_test.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Training an Unmitigated ModelSo we have a point of comparison, we first train a model (specifically, logistic regression from scikit-learn) on the raw data, without applying any mitigation algorithm:
###Code
unmitigated_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)
unmitigated_predictor.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
We can view this model in the fairness dashboard, and see the disparities which appear:
###Code
FairlearnDashboard(sensitive_features=A_test, sensitive_feature_names=['Sex', 'Race'],
y_true=Y_test,
y_pred={"unmitigated": unmitigated_predictor.predict(X_test)})
###Output
_____no_output_____
###Markdown
Looking at the disparity in accuracy when we select 'Sex' as the sensitive feature, we see that males have an error rate about three times greater than the females. More interesting is the disparity in opportunity - males are offered loans at three times the rate of females.Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact. Mitigation with GridSearchThe `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox - for this simple example, we shall use the logistic regression estimator from scikit-learn. `GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each.For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). *We are using this metric for the sake of simplicity* in this example; the appropriate fairness metric can only be selected after *careful examination of the broader context* in which the model is to be used.
###Code
sweep = GridSearch(LogisticRegression(solver='liblinear', fit_intercept=True),
constraints=DemographicParity(),
grid_size=71)
###Output
_____no_output_____
###Markdown
With our estimator created, we can fit it to the data. After `fit()` completes, we extract the full set of predictors from the `GridSearch` object.The following cell trains many copies of the underlying estimator, and may take a minute or two to run:
###Code
sweep.fit(X_train, Y_train,
sensitive_features=A_train.sex)
predictors = sweep._predictors
###Output
_____no_output_____
###Markdown
We could load these predictors into the Fairness dashboard now. However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the protected attribute; other potentially protected attributes will *not* be mitigated). In general, one might not want to do this, since there may be other considerations beyond the strict optimisation of error and disparity (of the given protected attribute).
###Code
errors, disparities = [], []
for m in predictors:
classifier = lambda X: m.predict(X)
error = ErrorRate()
error.load_data(X_train, pd.Series(Y_train), sensitive_features=A_train.sex)
disparity = DemographicParity()
disparity.load_data(X_train, pd.Series(Y_train), sensitive_features=A_train.sex)
errors.append(error.gamma(classifier)[0])
disparities.append(disparity.gamma(classifier).max())
all_results = pd.DataFrame( {"predictor": predictors, "error": errors, "disparity": disparities})
dominant_models_dict = dict()
base_name_format = "census_gs_model_{0}"
row_id = 0
for row in all_results.itertuples():
model_name = base_name_format.format(row_id)
errors_for_lower_or_eq_disparity = all_results["error"][all_results["disparity"]<=row.disparity]
if row.error <= errors_for_lower_or_eq_disparity.min():
dominant_models_dict[model_name] = row.predictor
row_id = row_id + 1
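# quick look (illustrative only): how many of the sweep's models survive the
# error/disparity domination filter above, out of the full grid
print("{0} non-dominated models out of {1}".format(len(dominant_models_dict), len(all_results)))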
###Output
_____no_output_____
###Markdown
We can construct predictions for the dominant models (we include the unmitigated predictor as well, for comparison):
###Code
predictions_dominant = {"census_unmitigated": unmitigated_predictor.predict(X_test)}
models_dominant = {"census_unmitigated": unmitigated_predictor}
for name, predictor in dominant_models_dict.items():
value = predictor.predict(X_test)
predictions_dominant[name] = value
models_dominant[name] = predictor
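# illustrative check (not part of the original workflow): demographic parity compares selection
# rates across groups, so printing the per-sex selection rates of the unmitigated model and of
# one mitigated model from the sweep shows what the dashboard's disparity metric measures;
# 'example_key' is just whichever mitigated model name happens to sort first
example_key = sorted(k for k in predictions_dominant if k != "census_unmitigated")[0]
for key in ["census_unmitigated", example_key]:
    rates = pd.Series(predictions_dominant[key]).groupby(A_test['sex'].values).mean()
    print(key, rates.to_dict())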
###Output
_____no_output_____
###Markdown
These predictions may then be viewed in the fairness dashboard. We include the race column from the dataset, as an alternative basis for assessing the models. However, since we have not based our mitigation on it, the variation in the models with respect to race can be large.
###Code
FairlearnDashboard(sensitive_features=A_test,
sensitive_feature_names=['Sex', 'Race'],
y_true=Y_test.tolist(),
y_pred=predictions_dominant)
###Output
_____no_output_____
###Markdown
When using sex as the sensitive feature, we see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute "sex"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy. Finally, we also see that the unmitigated model is towards the top right of the plot, with high accuracy, but worst disparity.By clicking on individual models on the plot, we can inspect their metrics for disparity and accuracy in greater detail. In a real example, we would then pick the model which represented the best trade-off between accuracy and disparity given the relevant business constraints. Uploading a Fairness Dashboard to AzureUploading a fairness dashboard to Azure is a two stage process. The `FairlearnDashboard` invoked in the previous section relies on the underlying Python kernel to compute metrics on demand. This is obviously not available when the fairness dashboard is rendered in AzureML Studio. By default, the dashboard in Azure Machine Learning Studio also requires the models to be registered. The required stages are therefore:1. Register the dominant models1. Precompute all the required metrics1. Upload to AzureBefore that, we need to connect to Azure Machine Learning Studio:
###Code
from azureml.core import Workspace, Experiment, Model
ws = Workspace.from_config()
ws.get_details()
###Output
_____no_output_____
###Markdown
Registering ModelsThe fairness dashboard is designed to integrate with registered models, so we need to do this for the models we want in the Studio portal. The assumption is that the names of the models specified in the dashboard dictionary correspond to the `id`s (i.e. `:` pairs) of registered models in the workspace. We register each of the models in the `models_dominant` dictionary into the workspace. For this, we have to save each model to a file, and then register that file:
###Code
import joblib
import os
os.makedirs('models', exist_ok=True)
def register_model(name, model):
print("Registering ", name)
model_path = "models/{0}.pkl".format(name)
joblib.dump(value=model, filename=model_path)
registered_model = Model.register(model_path=model_path,
model_name=name,
workspace=ws)
print("Registered ", registered_model.id)
return registered_model.id
model_name_id_mapping = dict()
for name, model in models_dominant.items():
m_id = register_model(name, model)
model_name_id_mapping[name] = m_id
###Output
_____no_output_____
###Markdown
Now, produce new predictions dictionaries, with the updated names:
###Code
predictions_dominant_ids = dict()
for name, y_pred in predictions_dominant.items():
predictions_dominant_ids[model_name_id_mapping[name]] = y_pred
###Output
_____no_output_____
###Markdown
Precomputing MetricsWe create a _dashboard dictionary_ using Fairlearn's `metrics` package. The `_create_group_metric_set` method has arguments similar to the Dashboard constructor, except that the sensitive features are passed as a dictionary (to ensure that names are available), and we must specify the type of prediction. Note that we use the `predictions_dominant_ids` dictionary we just created:
###Code
sf = { 'sex': A_test.sex, 'race': A_test.race }
from fairlearn.metrics._group_metric_set import _create_group_metric_set
dash_dict = _create_group_metric_set(y_true=Y_test,
predictions=predictions_dominant_ids,
sensitive_features=sf,
prediction_type='binary_classification')
###Output
_____no_output_____
###Markdown
Uploading the DashboardNow, we import our `contrib` package which contains the routine to perform the upload:
###Code
from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id
###Output
_____no_output_____
###Markdown
Now we can create an Experiment, then a Run, and upload our dashboard to it:
###Code
exp = Experiment(ws, "Test_Fairlearn_GridSearch_Census_Demo")
print(exp)
run = exp.start_logging()
try:
dashboard_title = "Dominant Models from GridSearch"
upload_id = upload_dashboard_dictionary(run,
dash_dict,
dashboard_name=dashboard_title)
print("\nUploaded to id: {0}\n".format(upload_id))
downloaded_dict = download_dashboard_by_upload_id(run, upload_id)
finally:
run.complete()
###Output
_____no_output_____
###Markdown
The dashboard can be viewed in the Run Details page.Finally, we can verify that the dashboard dictionary which we downloaded matches our upload:
###Code
print(dash_dict == downloaded_dict)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.  Unfairness Mitigation with Fairlearn and Azure Machine Learning**This notebook shows how to upload results from Fairlearn's GridSearch mitigation algorithm into a dashboard in Azure Machine Learning Studio** Table of Contents1. [Introduction](Introduction)1. [Loading the Data](LoadingData)1. [Training an Unmitigated Model](UnmitigatedModel)1. [Mitigation with GridSearch](Mitigation)1. [Uploading a Fairness Dashboard to Azure](AzureUpload) 1. Registering models 1. Computing Fairness Metrics 1. Uploading to Azure1. [Conclusion](Conclusion) IntroductionThis notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.org) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.org/).We will apply the [grid search algorithm](https://fairlearn.org/v0.4.6/api_reference/fairlearn.reductions.htmlfairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio. SetupTo use this notebook, an Azure Machine Learning workspace is required.Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.This notebook also requires the following packages:* `azureml-contrib-fairness`* `fairlearn>=0.6.2` (pre-v0.5.0 will work with minor modifications)* `joblib`* `liac-arff`* `raiwidgets`Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:
###Code
# !pip install --upgrade scikit-learn>=0.22.1
###Output
_____no_output_____
###Markdown
Finally, please ensure that when you downloaded this notebook, you also downloaded the `fairness_nb_utils.py` file from the same location, and placed it in the same directory as this notebook. Loading the DataWe use the well-known `adult` census dataset, which we will fetch from the OpenML website. We start with a fairly unremarkable set of imports:
###Code
from fairlearn.reductions import GridSearch, DemographicParity, ErrorRate
from raiwidgets import FairnessDashboard
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import make_column_selector as selector
from sklearn.pipeline import Pipeline
import pandas as pd
###Output
_____no_output_____
###Markdown
We can now load and inspect the data:
###Code
from fairness_nb_utils import fetch_census_dataset
data = fetch_census_dataset()
# Extract the items we want
X_raw = data.data
y = (data.target == '>50K') * 1
X_raw["race"].value_counts().to_dict()
###Output
_____no_output_____
###Markdown
We are going to treat the sex and race of each individual as protected attributes, and in this particular case we are going to remove these attributes from the main data (this is not always the best option - see the [Fairlearn website](http://fairlearn.github.io/) for further discussion). Protected attributes are often denoted by 'A' in the literature, and we follow that convention here:
###Code
A = X_raw[['sex','race']]
X_raw = X_raw.drop(labels=['sex', 'race'], axis = 1)
###Output
_____no_output_____
###Markdown
We now preprocess our data. To avoid the problem of data leakage, we split our data into training and test sets before performing any other transformations. Subsequent transformations (such as scalings) will be fit to the training data set, and then applied to the test dataset.
###Code
(X_train, X_test, y_train, y_test, A_train, A_test) = train_test_split(
X_raw, y, A, test_size=0.3, random_state=12345, stratify=y
)
# Ensure indices are aligned between X, y and A,
# after all the slicing and splitting of DataFrames
# and Series
X_train = X_train.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)
y_train = y_train.reset_index(drop=True)
y_test = y_test.reset_index(drop=True)
A_train = A_train.reset_index(drop=True)
A_test = A_test.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
We have two types of column in the dataset - categorical columns which will need to be one-hot encoded, and numeric ones which will need to be rescaled. We also need to take care of missing values. We use a simple approach here, but please bear in mind that this is another way that bias could be introduced (especially if one subgroup tends to have more missing values).For this preprocessing, we make use of `Pipeline` objects from `sklearn`:
###Code
numeric_transformer = Pipeline(
steps=[
("impute", SimpleImputer()),
("scaler", StandardScaler()),
]
)
categorical_transformer = Pipeline(
[
("impute", SimpleImputer(strategy="most_frequent")),
("ohe", OneHotEncoder(handle_unknown="ignore", sparse=False)),
]
)
preprocessor = ColumnTransformer(
transformers=[
("num", numeric_transformer, selector(dtype_exclude="category")),
("cat", categorical_transformer, selector(dtype_include="category")),
]
)
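# illustrative check with a tiny synthetic frame (an assumption for demonstration, not the
# census data): make_column_selector routes columns by dtype, so "workclass" goes to the
# categorical pipeline and "age" to the numeric one
toy = pd.DataFrame({
    "age": [25.0, 40.0, None],
    "workclass": pd.Series(["Private", "State-gov", "Private"], dtype="category"),
})
print(selector(dtype_include="category")(toy))  # ['workclass']
print(preprocessor.fit_transform(toy).shape)    # (3, 1 scaled numeric + 2 one-hot columns)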
###Output
_____no_output_____
###Markdown
Now, the preprocessing pipeline is defined, we can run it on our training data, and apply the generated transform to our test data:
###Code
X_train = preprocessor.fit_transform(X_train)
X_test = preprocessor.transform(X_test)
###Output
_____no_output_____
###Markdown
Training an Unmitigated ModelSo we have a point of comparison, we first train a model (specifically, logistic regression from scikit-learn) on the raw data, without applying any mitigation algorithm:
###Code
unmitigated_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)
unmitigated_predictor.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
We can view this model in the fairness dashboard, and see the disparities which appear:
###Code
FairnessDashboard(sensitive_features=A_test,
y_true=y_test,
y_pred={"unmitigated": unmitigated_predictor.predict(X_test)})
###Output
_____no_output_____
###Markdown
Looking at the disparity in accuracy when we select 'Sex' as the sensitive feature, we see that males have an error rate about three times greater than the females. More interesting is the disparity in opportunity - males are offered loans at three times the rate of females.Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact. Mitigation with GridSearchThe `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox - for this simple example, we shall use the logistic regression estimator from scikit-learn. `GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each.For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). *We are using this metric for the sake of simplicity* in this example; the appropriate fairness metric can only be selected after *careful examination of the broader context* in which the model is to be used.
###Code
sweep = GridSearch(LogisticRegression(solver='liblinear', fit_intercept=True),
constraints=DemographicParity(),
grid_size=71)
###Output
_____no_output_____
###Markdown
With our estimator created, we can fit it to the data. After `fit()` completes, we extract the full set of predictors from the `GridSearch` object.The following cell trains many copies of the underlying estimator, and may take a minute or two to run:
###Code
sweep.fit(X_train, y_train,
sensitive_features=A_train.sex)
# For Fairlearn pre-v0.5.0, need sweep._predictors
predictors = sweep.predictors_
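# the sweep trains one predictor per grid point, so the number of predictors here
# should match the grid_size passed to GridSearch above (71)
print(len(predictors))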
###Output
_____no_output_____
###Markdown
We could load these predictors into the Fairness dashboard now. However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the protected attribute; other potentially protected attributes will *not* be mitigated). In general, one might not want to do this, since there may be other considerations beyond the strict optimisation of error and disparity (of the given protected attribute).
###Code
errors, disparities = [], []
for predictor in predictors:
error = ErrorRate()
error.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)
disparity = DemographicParity()
disparity.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)
errors.append(error.gamma(predictor.predict)[0])
disparities.append(disparity.gamma(predictor.predict).max())
all_results = pd.DataFrame( {"predictor": predictors, "error": errors, "disparity": disparities})
dominant_models_dict = dict()
base_name_format = "census_gs_model_{0}"
row_id = 0
for row in all_results.itertuples():
model_name = base_name_format.format(row_id)
errors_for_lower_or_eq_disparity = all_results["error"][all_results["disparity"]<=row.disparity]
if row.error <= errors_for_lower_or_eq_disparity.min():
dominant_models_dict[model_name] = row.predictor
row_id = row_id + 1
###Output
_____no_output_____
###Markdown
We can construct predictions for the dominant models (we include the unmitigated predictor as well, for comparison):
###Code
predictions_dominant = {"census_unmitigated": unmitigated_predictor.predict(X_test)}
models_dominant = {"census_unmitigated": unmitigated_predictor}
for name, predictor in dominant_models_dict.items():
value = predictor.predict(X_test)
predictions_dominant[name] = value
models_dominant[name] = predictor
###Output
_____no_output_____
###Markdown
These predictions may then be viewed in the fairness dashboard. We include the race column from the dataset, as an alternative basis for assessing the models. However, since we have not based our mitigation on it, the variation in the models with respect to race can be large.
###Code
FairnessDashboard(sensitive_features=A_test,
y_true=y_test.tolist(),
y_pred=predictions_dominant)
###Output
_____no_output_____
###Markdown
When using sex as the sensitive feature and accuracy as the metric, we see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute "sex"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy. Finally, we also see that the unmitigated model is towards the top right of the plot, with high accuracy, but worst disparity.By clicking on individual models on the plot, we can inspect their metrics for disparity and accuracy in greater detail. In a real example, we would then pick the model which represented the best trade-off between accuracy and disparity given the relevant business constraints. Uploading a Fairness Dashboard to AzureUploading a fairness dashboard to Azure is a two stage process. The `FairnessDashboard` invoked in the previous section relies on the underlying Python kernel to compute metrics on demand. This is obviously not available when the fairness dashboard is rendered in AzureML Studio. By default, the dashboard in Azure Machine Learning Studio also requires the models to be registered. The required stages are therefore:1. Register the dominant models1. Precompute all the required metrics1. Upload to AzureBefore that, we need to connect to Azure Machine Learning Studio:
###Code
from azureml.core import Workspace, Experiment, Model
ws = Workspace.from_config()
ws.get_details()
###Output
_____no_output_____
###Markdown
Registering ModelsThe fairness dashboard is designed to integrate with registered models, so we need to do this for the models we want in the Studio portal. The assumption is that the names of the models specified in the dashboard dictionary correspond to the `id`s (i.e. `:` pairs) of registered models in the workspace. We register each of the models in the `models_dominant` dictionary into the workspace. For this, we have to save each model to a file, and then register that file:
###Code
import joblib
import os
os.makedirs('models', exist_ok=True)
def register_model(name, model):
print("Registering ", name)
model_path = "models/{0}.pkl".format(name)
joblib.dump(value=model, filename=model_path)
registered_model = Model.register(model_path=model_path,
model_name=name,
workspace=ws)
print("Registered ", registered_model.id)
return registered_model.id
model_name_id_mapping = dict()
for name, model in models_dominant.items():
m_id = register_model(name, model)
model_name_id_mapping[name] = m_id
###Output
_____no_output_____
###Markdown
Now, produce new predictions dictionaries, with the updated names:
###Code
predictions_dominant_ids = dict()
for name, y_pred in predictions_dominant.items():
predictions_dominant_ids[model_name_id_mapping[name]] = y_pred
###Output
_____no_output_____
###Markdown
Precomputing MetricsWe create a _dashboard dictionary_ using Fairlearn's `metrics` package. The `_create_group_metric_set` method has arguments similar to the Dashboard constructor, except that the sensitive features are passed as a dictionary (to ensure that names are available), and we must specify the type of prediction. Note that we use the `predictions_dominant_ids` dictionary we just created:
###Code
sf = { 'sex': A_test.sex, 'race': A_test.race }
from fairlearn.metrics._group_metric_set import _create_group_metric_set
dash_dict = _create_group_metric_set(y_true=y_test,
predictions=predictions_dominant_ids,
sensitive_features=sf,
prediction_type='binary_classification')
###Output
_____no_output_____
###Markdown
Uploading the DashboardNow, we import our `contrib` package which contains the routine to perform the upload:
###Code
from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id
###Output
_____no_output_____
###Markdown
Now we can create an Experiment, then a Run, and upload our dashboard to it:
###Code
exp = Experiment(ws, "Test_Fairlearn_GridSearch_Census_Demo")
print(exp)
run = exp.start_logging()
try:
dashboard_title = "Dominant Models from GridSearch"
upload_id = upload_dashboard_dictionary(run,
dash_dict,
dashboard_name=dashboard_title)
print("\nUploaded to id: {0}\n".format(upload_id))
downloaded_dict = download_dashboard_by_upload_id(run, upload_id)
finally:
run.complete()
###Output
_____no_output_____
###Markdown
The dashboard can be viewed in the Run Details page.Finally, we can verify that the dashboard dictionary which we downloaded matches our upload:
###Code
print(dash_dict == downloaded_dict)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.  Unfairness Mitigation with Fairlearn and Azure Machine Learning**This notebook shows how to upload results from Fairlearn's GridSearch mitigation algorithm into a dashboard in Azure Machine Learning Studio** Table of Contents1. [Introduction](Introduction)1. [Loading the Data](LoadingData)1. [Training an Unmitigated Model](UnmitigatedModel)1. [Mitigation with GridSearch](Mitigation)1. [Uploading a Fairness Dashboard to Azure](AzureUpload) 1. Registering models 1. Computing Fairness Metrics 1. Uploading to Azure1. [Conclusion](Conclusion) IntroductionThis notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.org) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.org/).We will apply the [grid search algorithm](https://fairlearn.org/v0.4.6/api_reference/fairlearn.reductions.htmlfairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio. SetupTo use this notebook, an Azure Machine Learning workspace is required.Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.This notebook also requires the following packages:* `azureml-contrib-fairness`* `fairlearn>=0.6.2` (pre-v0.5.0 will work with minor modifications)* `joblib`* `liac-arff`* `raiwidgets==0.4.0`Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:
###Code
# !pip install --upgrade scikit-learn>=0.22.1
###Output
_____no_output_____
###Markdown
Finally, please ensure that when you downloaded this notebook, you also downloaded the `fairness_nb_utils.py` file from the same location, and placed it in the same directory as this notebook. Loading the DataWe use the well-known `adult` census dataset, which we will fetch from the OpenML website. We start with a fairly unremarkable set of imports:
###Code
from fairlearn.reductions import GridSearch, DemographicParity, ErrorRate
from raiwidgets import FairnessDashboard
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import make_column_selector as selector
from sklearn.pipeline import Pipeline
import pandas as pd
###Output
_____no_output_____
###Markdown
We can now load and inspect the data:
###Code
from fairness_nb_utils import fetch_census_dataset
data = fetch_census_dataset()
# Extract the items we want
X_raw = data.data
y = (data.target == '>50K') * 1
X_raw["race"].value_counts().to_dict()
###Output
_____no_output_____
###Markdown
We are going to treat the sex and race of each individual as protected attributes, and in this particular case we are going to remove these attributes from the main data (this is not always the best option - see the [Fairlearn website](http://fairlearn.github.io/) for further discussion). Protected attributes are often denoted by 'A' in the literature, and we follow that convention here:
###Code
A = X_raw[['sex','race']]
X_raw = X_raw.drop(labels=['sex', 'race'], axis = 1)
###Output
_____no_output_____
###Markdown
We now preprocess our data. To avoid the problem of data leakage, we split our data into training and test sets before performing any other transformations. Subsequent transformations (such as scalings) will be fit to the training data set, and then applied to the test dataset.
###Code
(X_train, X_test, y_train, y_test, A_train, A_test) = train_test_split(
X_raw, y, A, test_size=0.3, random_state=12345, stratify=y
)
# Ensure indices are aligned between X, y and A,
# after all the slicing and splitting of DataFrames
# and Series
X_train = X_train.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)
y_train = y_train.reset_index(drop=True)
y_test = y_test.reset_index(drop=True)
A_train = A_train.reset_index(drop=True)
A_test = A_test.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
We have two types of column in the dataset - categorical columns which will need to be one-hot encoded, and numeric ones which will need to be rescaled. We also need to take care of missing values. We use a simple approach here, but please bear in mind that this is another way that bias could be introduced (especially if one subgroup tends to have more missing values).For this preprocessing, we make use of `Pipeline` objects from `sklearn`:
###Code
numeric_transformer = Pipeline(
steps=[
("impute", SimpleImputer()),
("scaler", StandardScaler()),
]
)
categorical_transformer = Pipeline(
[
("impute", SimpleImputer(strategy="most_frequent")),
("ohe", OneHotEncoder(handle_unknown="ignore", sparse=False)),
]
)
preprocessor = ColumnTransformer(
transformers=[
("num", numeric_transformer, selector(dtype_exclude="category")),
("cat", categorical_transformer, selector(dtype_include="category")),
]
)
###Output
_____no_output_____
###Markdown
Now that the preprocessing pipeline is defined, we can run it on our training data, and apply the generated transform to our test data:
###Code
X_train = preprocessor.fit_transform(X_train)
X_test = preprocessor.transform(X_test)
###Output
_____no_output_____
###Markdown
Training an Unmitigated ModelSo that we have a point of comparison, we first train a model (specifically, logistic regression from scikit-learn) on the raw data, without applying any mitigation algorithm:
###Code
unmitigated_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)
unmitigated_predictor.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
We can view this model in the fairness dashboard, and see the disparities which appear:
###Code
FairnessDashboard(sensitive_features=A_test,
y_true=y_test,
y_pred={"unmitigated": unmitigated_predictor.predict(X_test)})
###Output
_____no_output_____
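###Markdown
As a quick numerical companion to the dashboard (a minimal sketch, not part of the original notebook), we can reproduce the headline selection rates with a plain pandas group-by; the gap between the two groups is the quantity that demographic parity constrains:
###Code
# Hedged cross-check: selection rate by sex for the unmitigated model,
# i.e. the fraction of each group that is offered a loan
y_pred_unmitigated = unmitigated_predictor.predict(X_test)
selection_rates = pd.Series(y_pred_unmitigated).groupby(A_test.sex).mean()
print(selection_rates)
print("Demographic parity difference:", selection_rates.max() - selection_rates.min())
###Output
_____no_output_____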
###Markdown
Looking at the disparity in accuracy when we select 'Sex' as the sensitive feature, we see that males have an error rate about three times greater than that of females. More interesting is the disparity in opportunity - males are offered loans at three times the rate of females.Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact. Mitigation with GridSearchThe `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox - for this simple example, we shall use the logistic regression estimator from scikit-learn. `GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each.For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). *We are using this metric for the sake of simplicity* in this example; the appropriate fairness metric can only be selected after *careful examination of the broader context* in which the model is to be used.
###Code
sweep = GridSearch(LogisticRegression(solver='liblinear', fit_intercept=True),
constraints=DemographicParity(),
grid_size=71)
###Output
_____no_output_____
###Markdown
With our estimator created, we can fit it to the data. After `fit()` completes, we extract the full set of predictors from the `GridSearch` object.The following cell trains many copies of the underlying estimator, and may take a minute or two to run:
###Code
sweep.fit(X_train, y_train,
sensitive_features=A_train.sex)
# For Fairlearn pre-v0.5.0, need sweep._predictors
predictors = sweep.predictors_
###Output
_____no_output_____
###Markdown
We could load these predictors into the Fairness dashboard now. However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the protected attribute; other potentially protected attributes will *not* be mitigated). In general, one might not want to do this, since there may be other considerations beyond the strict optimisation of error and disparity (of the given protected attribute).
###Code
# Evaluate every predictor from the sweep on the training data
errors, disparities = [], []
for predictor in predictors:
    error = ErrorRate()
    error.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)
    disparity = DemographicParity()
    disparity.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)
    errors.append(error.gamma(predictor.predict)[0])
    disparities.append(disparity.gamma(predictor.predict).max())
all_results = pd.DataFrame({"predictor": predictors, "error": errors, "disparity": disparities})
# Keep a predictor only if no predictor with equal or lower disparity achieves
# a strictly lower error, i.e. it is not dominated in error-disparity space
dominant_models_dict = dict()
base_name_format = "census_gs_model_{0}"
row_id = 0
for row in all_results.itertuples():
    model_name = base_name_format.format(row_id)
    errors_for_lower_or_eq_disparity = all_results["error"][all_results["disparity"] <= row.disparity]
    if row.error <= errors_for_lower_or_eq_disparity.min():
        dominant_models_dict[model_name] = row.predictor
    row_id = row_id + 1
###Output
_____no_output_____
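###Markdown
To see what the filtering just did, we can plot the swept predictors in error-disparity space and highlight the retained, non-dominated ones (a minimal sketch; note that `matplotlib` is an extra dependency not used elsewhere in this notebook):
###Code
import matplotlib.pyplot as plt
# Scatter every swept predictor in error-disparity space ...
plt.scatter(all_results["error"], all_results["disparity"], alpha=0.4, label="all grid points")
# ... and highlight the non-dominated ("dominant") subset we just selected
is_dominant = [p in dominant_models_dict.values() for p in all_results["predictor"]]
dominant_points = all_results[is_dominant]
plt.scatter(dominant_points["error"], dominant_points["disparity"], color="red", label="dominant models")
plt.xlabel("error")
plt.ylabel("disparity (demographic parity)")
plt.legend()
plt.show()
###Output
_____no_output_____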
###Markdown
We can construct predictions for the dominant models (we include the unmitigated predictor as well, for comparison):
###Code
predictions_dominant = {"census_unmitigated": unmitigated_predictor.predict(X_test)}
models_dominant = {"census_unmitigated": unmitigated_predictor}
for name, predictor in dominant_models_dict.items():
value = predictor.predict(X_test)
predictions_dominant[name] = value
models_dominant[name] = predictor
###Output
_____no_output_____
###Markdown
These predictions may then be viewed in the fairness dashboard. We include the race column from the dataset, as an alternative basis for assessing the models. However, since we have not based our mitigation on it, the variation in the models with respect to race can be large.
###Code
FairnessDashboard(sensitive_features=A_test,
y_true=y_test.tolist(),
y_pred=predictions_dominant)
###Output
_____no_output_____
###Markdown
When using sex as the sensitive feature and accuracy as the metric, we see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute "sex"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy. Finally, we also see that the unmitigated model is towards the top right of the plot, with high accuracy, but worst disparity.By clicking on individual models on the plot, we can inspect their metrics for disparity and accuracy in greater detail. In a real example, we would then pick the model which represented the best trade-off between accuracy and disparity given the relevant business constraints. Uploading a Fairness Dashboard to AzureUploading a fairness dashboard to Azure is a two stage process. The `FairnessDashboard` invoked in the previous section relies on the underlying Python kernel to compute metrics on demand. This is obviously not available when the fairness dashboard is rendered in AzureML Studio. By default, the dashboard in Azure Machine Learning Studio also requires the models to be registered. The required stages are therefore:1. Register the dominant models1. Precompute all the required metrics1. Upload to AzureBefore that, we need to connect to Azure Machine Learning Studio:
###Code
from azureml.core import Workspace, Experiment, Model
ws = Workspace.from_config()
ws.get_details()
###Output
_____no_output_____
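###Markdown
If the configuration was loaded correctly, basic workspace details are now available on `ws`; printing a few fields (an optional check, not part of the original notebook) confirms we are pointing at the intended workspace:
###Code
# Quick confirmation of which workspace we are connected to
print(ws.name, ws.resource_group, ws.location, sep=" | ")
###Output
_____no_output_____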
###Markdown
Registering ModelsThe fairness dashboard is designed to integrate with registered models, so we need to do this for the models we want in the Studio portal. The assumption is that the names of the models specified in the dashboard dictionary correspond to the `id`s (i.e. `<name>:<version>` pairs) of registered models in the workspace. We register each of the models in the `models_dominant` dictionary into the workspace. For this, we have to save each model to a file, and then register that file:
###Code
import joblib
import os
os.makedirs('models', exist_ok=True)
def register_model(name, model):
print("Registering ", name)
model_path = "models/{0}.pkl".format(name)
joblib.dump(value=model, filename=model_path)
registered_model = Model.register(model_path=model_path,
model_name=name,
workspace=ws)
print("Registered ", registered_model.id)
return registered_model.id
model_name_id_mapping = dict()
for name, model in models_dominant.items():
m_id = register_model(name, model)
model_name_id_mapping[name] = m_id
###Output
_____no_output_____
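###Markdown
Each entry of `model_name_id_mapping` maps our local model name to the registered model id (a `<name>:<version>` pair). A quick peek at a few entries (not part of the original notebook) makes the renaming in the next cell easier to follow:
###Code
# Show a handful of name -> registered id mappings
for name, m_id in list(model_name_id_mapping.items())[:3]:
    print(name, "->", m_id)
###Output
_____no_output_____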
###Markdown
Now, produce new predictions dictionaries, with the updated names:
###Code
predictions_dominant_ids = dict()
for name, y_pred in predictions_dominant.items():
predictions_dominant_ids[model_name_id_mapping[name]] = y_pred
###Output
_____no_output_____
###Markdown
Precomputing MetricsWe create a _dashboard dictionary_ using Fairlearn's `metrics` package. The `_create_group_metric_set` method has arguments similar to the Dashboard constructor, except that the sensitive features are passed as a dictionary (to ensure that names are available), and we must specify the type of prediction. Note that we use the `predictions_dominant_ids` dictionary we just created:
###Code
sf = { 'sex': A_test.sex, 'race': A_test.race }
from fairlearn.metrics._group_metric_set import _create_group_metric_set
dash_dict = _create_group_metric_set(y_true=y_test,
predictions=predictions_dominant_ids,
sensitive_features=sf,
prediction_type='binary_classification')
###Output
_____no_output_____
###Markdown
Uploading the DashboardNow, we import our `contrib` package which contains the routine to perform the upload:
###Code
from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id
###Output
_____no_output_____
###Markdown
Now we can create an Experiment, then a Run, and upload our dashboard to it:
###Code
exp = Experiment(ws, "Test_Fairlearn_GridSearch_Census_Demo")
print(exp)
run = exp.start_logging()
try:
dashboard_title = "Dominant Models from GridSearch"
upload_id = upload_dashboard_dictionary(run,
dash_dict,
dashboard_name=dashboard_title)
print("\nUploaded to id: {0}\n".format(upload_id))
downloaded_dict = download_dashboard_by_upload_id(run, upload_id)
finally:
run.complete()
###Output
_____no_output_____
###Markdown
The dashboard can be viewed in the Run Details page.Finally, we can verify that the dashboard dictionary which we downloaded matches our upload:
###Code
print(dash_dict == downloaded_dict)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.  Unfairness Mitigation with Fairlearn and Azure Machine Learning**This notebook shows how to upload results from Fairlearn's GridSearch mitigation algorithm into a dashboard in Azure Machine Learning Studio** Table of Contents1. [Introduction](Introduction)1. [Loading the Data](LoadingData)1. [Training an Unmitigated Model](UnmitigatedModel)1. [Mitigation with GridSearch](Mitigation)1. [Uploading a Fairness Dashboard to Azure](AzureUpload) 1. Registering models 1. Computing Fairness Metrics 1. Uploading to Azure1. [Conclusion](Conclusion) IntroductionThis notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.github.io) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.github.io/).We will apply the [grid search algorithm](https://fairlearn.github.io/master/api_reference/fairlearn.reductions.htmlfairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio. SetupTo use this notebook, an Azure Machine Learning workspace is required.Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.This notebook also requires the following packages:* `azureml-contrib-fairness`* `fairlearn==0.4.6`* `joblib`* `shap`Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:
###Code
# !pip install --upgrade scikit-learn>=0.22.1
###Output
_____no_output_____
###Markdown
Loading the DataWe use the well-known `adult` census dataset, which we load using `shap` (for convenience). We start with a fairly unremarkable set of imports:
###Code
from fairlearn.reductions import GridSearch, DemographicParity, ErrorRate
from fairlearn.widget import FairlearnDashboard
from sklearn import svm
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression
import pandas as pd
###Output
_____no_output_____
###Markdown
We can now load and inspect the data from the `shap` package:
###Code
from utilities import fetch_openml_with_retries
data = fetch_openml_with_retries(data_id=1590)
# Extract the items we want
X_raw = data.data
Y = (data.target == '>50K') * 1
X_raw["race"].value_counts().to_dict()
###Output
_____no_output_____
###Markdown
We are going to treat the sex of each individual as a protected attribute (where 0 indicates female and 1 indicates male), and in this particular case we are going to separate this attribute out and drop it from the main data (this is not always the best option - see the [Fairlearn website](http://fairlearn.github.io/) for further discussion). We also separate out the Race column, but we will not perform any mitigation based on it. Finally, we perform some standard data preprocessing steps to convert the data into a format suitable for the ML algorithms:
###Code
A = X_raw[['sex','race']]
X = X_raw.drop(labels=['sex', 'race'],axis = 1)
X_dummies = pd.get_dummies(X)
sc = StandardScaler()
X_scaled = sc.fit_transform(X_dummies)
X_scaled = pd.DataFrame(X_scaled, columns=X_dummies.columns)
le = LabelEncoder()
Y = le.fit_transform(Y)
###Output
_____no_output_____
###Markdown
With our data prepared, we can make the conventional split into 'test' and 'train' subsets:
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test, A_train, A_test = train_test_split(X_scaled,
Y,
A,
test_size = 0.2,
random_state=0,
stratify=Y)
# Work around indexing issue
X_train = X_train.reset_index(drop=True)
A_train = A_train.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)
A_test = A_test.reset_index(drop=True)
###Output
_____no_output_____
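###Markdown
Because we stratified the split on the label, the positive rate (the fraction of individuals treated as having repaid) should be essentially identical in the train and test subsets; a quick check (not in the original notebook):
###Code
# Label balance in each split; stratification should make these nearly equal
print("Positive rate (train):", Y_train.mean())
print("Positive rate (test): ", Y_test.mean())
###Output
_____no_output_____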
###Markdown
Training an Unmitigated ModelSo that we have a point of comparison, we first train a model (specifically, logistic regression from scikit-learn) on the raw data, without applying any mitigation algorithm:
###Code
unmitigated_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)
unmitigated_predictor.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
We can view this model in the fairness dashboard, and see the disparities which appear:
###Code
FairlearnDashboard(sensitive_features=A_test, sensitive_feature_names=['Sex', 'Race'],
y_true=Y_test,
y_pred={"unmitigated": unmitigated_predictor.predict(X_test)})
###Output
_____no_output_____
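###Markdown
As a small, hedged cross-check (not part of the original notebook), the next cell reproduces the per-group error rates that the dashboard reports, using a plain pandas group-by on the unmitigated predictions:
###Code
# Misclassification rate for each sex, computed directly on the test set
errors_by_sex = pd.Series(unmitigated_predictor.predict(X_test) != Y_test).groupby(A_test.sex).mean()
print(errors_by_sex)
###Output
_____no_output_____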
###Markdown
Looking at the disparity in accuracy when we select 'Sex' as the sensitive feature, we see that males have an error rate about three times greater than that of females. More interesting is the disparity in opportunity - males are offered loans at three times the rate of females.Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact. Mitigation with GridSearchThe `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox - for this simple example, we shall use the logistic regression estimator from scikit-learn. `GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each.For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). *We are using this metric for the sake of simplicity* in this example; the appropriate fairness metric can only be selected after *careful examination of the broader context* in which the model is to be used.
###Code
sweep = GridSearch(LogisticRegression(solver='liblinear', fit_intercept=True),
constraints=DemographicParity(),
grid_size=71)
###Output
_____no_output_____
###Markdown
With our estimator created, we can fit it to the data. After `fit()` completes, we extract the full set of predictors from the `GridSearch` object.The following cell trains many copies of the underlying estimator, and may take a minute or two to run:
###Code
sweep.fit(X_train, Y_train,
sensitive_features=A_train.sex)
predictors = sweep._predictors
###Output
_____no_output_____
###Markdown
We could load these predictors into the Fairness dashboard now. However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the protected attribute; other potentially protected attributes will *not* be mitigated). In general, one might not want to do this, since there may be other considerations beyond the strict optimisation of error and disparity (of the given protected attribute).
###Code
errors, disparities = [], []
for m in predictors:
classifier = lambda X: m.predict(X)
error = ErrorRate()
error.load_data(X_train, pd.Series(Y_train), sensitive_features=A_train.sex)
disparity = DemographicParity()
disparity.load_data(X_train, pd.Series(Y_train), sensitive_features=A_train.sex)
errors.append(error.gamma(classifier)[0])
disparities.append(disparity.gamma(classifier).max())
all_results = pd.DataFrame( {"predictor": predictors, "error": errors, "disparity": disparities})
dominant_models_dict = dict()
base_name_format = "census_gs_model_{0}"
row_id = 0
for row in all_results.itertuples():
model_name = base_name_format.format(row_id)
errors_for_lower_or_eq_disparity = all_results["error"][all_results["disparity"]<=row.disparity]
if row.error <= errors_for_lower_or_eq_disparity.min():
dominant_models_dict[model_name] = row.predictor
row_id = row_id + 1
###Output
_____no_output_____
###Markdown
We can construct predictions for the dominant models (we include the unmitigated predictor as well, for comparison):
###Code
predictions_dominant = {"census_unmitigated": unmitigated_predictor.predict(X_test)}
models_dominant = {"census_unmitigated": unmitigated_predictor}
for name, predictor in dominant_models_dict.items():
value = predictor.predict(X_test)
predictions_dominant[name] = value
models_dominant[name] = predictor
###Output
_____no_output_____
###Markdown
These predictions may then be viewed in the fairness dashboard. We include the race column from the dataset, as an alternative basis for assessing the models. However, since we have not based our mitigation on it, the variation in the models with respect to race can be large.
###Code
FairlearnDashboard(sensitive_features=A_test,
sensitive_feature_names=['Sex', 'Race'],
y_true=Y_test.tolist(),
y_pred=predictions_dominant)
###Output
_____no_output_____
###Markdown
When using sex as the sensitive feature, we see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute "sex"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy. Finally, we also see that the unmitigated model is towards the top right of the plot, with high accuracy, but worst disparity.By clicking on individual models on the plot, we can inspect their metrics for disparity and accuracy in greater detail. In a real example, we would then pick the model which represented the best trade-off between accuracy and disparity given the relevant business constraints. Uploading a Fairness Dashboard to AzureUploading a fairness dashboard to Azure is a two stage process. The `FairlearnDashboard` invoked in the previous section relies on the underlying Python kernel to compute metrics on demand. This is obviously not available when the fairness dashboard is rendered in AzureML Studio. By default, the dashboard in Azure Machine Learning Studio also requires the models to be registered. The required stages are therefore:1. Register the dominant models1. Precompute all the required metrics1. Upload to AzureBefore that, we need to connect to Azure Machine Learning Studio:
###Code
from azureml.core import Workspace, Experiment, Model
ws = Workspace.from_config()
ws.get_details()
###Output
_____no_output_____
###Markdown
Registering ModelsThe fairness dashboard is designed to integrate with registered models, so we need to do this for the models we want in the Studio portal. The assumption is that the names of the models specified in the dashboard dictionary correspond to the `id`s (i.e. `<name>:<version>` pairs) of registered models in the workspace. We register each of the models in the `models_dominant` dictionary into the workspace. For this, we have to save each model to a file, and then register that file:
###Code
import joblib
import os
os.makedirs('models', exist_ok=True)
def register_model(name, model):
print("Registering ", name)
model_path = "models/{0}.pkl".format(name)
joblib.dump(value=model, filename=model_path)
registered_model = Model.register(model_path=model_path,
model_name=name,
workspace=ws)
print("Registered ", registered_model.id)
return registered_model.id
model_name_id_mapping = dict()
for name, model in models_dominant.items():
m_id = register_model(name, model)
model_name_id_mapping[name] = m_id
###Output
_____no_output_____
###Markdown
Now, produce new predictions dictionaries, with the updated names:
###Code
predictions_dominant_ids = dict()
for name, y_pred in predictions_dominant.items():
predictions_dominant_ids[model_name_id_mapping[name]] = y_pred
###Output
_____no_output_____
###Markdown
Precomputing MetricsWe create a _dashboard dictionary_ using Fairlearn's `metrics` package. The `_create_group_metric_set` method has arguments similar to the Dashboard constructor, except that the sensitive features are passed as a dictionary (to ensure that names are available), and we must specify the type of prediction. Note that we use the `predictions_dominant_ids` dictionary we just created:
###Code
sf = { 'sex': A_test.sex, 'race': A_test.race }
from fairlearn.metrics._group_metric_set import _create_group_metric_set
dash_dict = _create_group_metric_set(y_true=Y_test,
predictions=predictions_dominant_ids,
sensitive_features=sf,
prediction_type='binary_classification')
###Output
_____no_output_____
###Markdown
Uploading the DashboardNow, we import our `contrib` package which contains the routine to perform the upload:
###Code
from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id
###Output
_____no_output_____
###Markdown
Now we can create an Experiment, then a Run, and upload our dashboard to it:
###Code
exp = Experiment(ws, "Test_Fairlearn_GridSearch_Census_Demo")
print(exp)
run = exp.start_logging()
try:
dashboard_title = "Dominant Models from GridSearch"
upload_id = upload_dashboard_dictionary(run,
dash_dict,
dashboard_name=dashboard_title)
print("\nUploaded to id: {0}\n".format(upload_id))
downloaded_dict = download_dashboard_by_upload_id(run, upload_id)
finally:
run.complete()
###Output
_____no_output_____
###Markdown
The dashboard can be viewed in the Run Details page.Finally, we can verify that the dashboard dictionary which we downloaded matches our upload:
###Code
print(dash_dict == downloaded_dict)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.  Unfairness Mitigation with Fairlearn and Azure Machine Learning**This notebook shows how to upload results from Fairlearn's GridSearch mitigation algorithm into a dashboard in Azure Machine Learning Studio** Table of Contents1. [Introduction](Introduction)1. [Loading the Data](LoadingData)1. [Training an Unmitigated Model](UnmitigatedModel)1. [Mitigation with GridSearch](Mitigation)1. [Uploading a Fairness Dashboard to Azure](AzureUpload) 1. Registering models 1. Computing Fairness Metrics 1. Uploading to Azure1. [Conclusion](Conclusion) IntroductionThis notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.github.io) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.github.io/).We will apply the [grid search algorithm](https://fairlearn.github.io/master/api_reference/fairlearn.reductions.htmlfairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio. SetupTo use this notebook, an Azure Machine Learning workspace is required.Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.This notebook also requires the following packages:* `azureml-contrib-fairness`* `fairlearn==0.4.6` (v0.5.0 will work with minor modifications)* `joblib`* `shap`Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:
###Code
# !pip install --upgrade scikit-learn>=0.22.1
###Output
_____no_output_____
###Markdown
Finally, please ensure that when you downloaded this notebook, you also downloaded the `fairness_nb_utils.py` file from the same location, and placed it in the same directory as this notebook. Loading the DataWe use the well-known `adult` census dataset, which we will fetch from the OpenML website. We start with a fairly unremarkable set of imports:
###Code
from fairlearn.reductions import GridSearch, DemographicParity, ErrorRate
from fairlearn.widget import FairlearnDashboard
from sklearn.compose import ColumnTransformer
from sklearn.datasets import fetch_openml
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import make_column_selector as selector
from sklearn.pipeline import Pipeline
import pandas as pd
###Output
_____no_output_____
###Markdown
We can now load and inspect the data:
###Code
from fairness_nb_utils import fetch_openml_with_retries
data = fetch_openml_with_retries(data_id=1590)
# Extract the items we want
X_raw = data.data
y = (data.target == '>50K') * 1
X_raw["race"].value_counts().to_dict()
###Output
_____no_output_____
###Markdown
We are going to treat the sex and race of each individual as protected attributes, and in this particular case we are going to remove these attributes from the main data (this is not always the best option - see the [Fairlearn website](http://fairlearn.github.io/) for further discussion). Protected attributes are often denoted by 'A' in the literature, and we follow that convention here:
###Code
A = X_raw[['sex','race']]
X_raw = X_raw.drop(labels=['sex', 'race'],axis = 1)
###Output
_____no_output_____
###Markdown
We now preprocess our data. To avoid the problem of data leakage, we split our data into training and test sets before performing any other transformations. Subsequent transformations (such as scalings) will be fit to the training data set, and then applied to the test dataset.
###Code
(X_train, X_test, y_train, y_test, A_train, A_test) = train_test_split(
X_raw, y, A, test_size=0.3, random_state=12345, stratify=y
)
# Ensure indices are aligned between X, y and A,
# after all the slicing and splitting of DataFrames
# and Series
X_train = X_train.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)
y_train = y_train.reset_index(drop=True)
y_test = y_test.reset_index(drop=True)
A_train = A_train.reset_index(drop=True)
A_test = A_test.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
We have two types of column in the dataset - categorical columns which will need to be one-hot encoded, and numeric ones which will need to be rescaled. We also need to take care of missing values. We use a simple approach here, but please bear in mind that this is another way that bias could be introduced (especially if one subgroup tends to have more missing values).For this preprocessing, we make use of `Pipeline` objects from `sklearn`:
###Code
numeric_transformer = Pipeline(
steps=[
("impute", SimpleImputer()),
("scaler", StandardScaler()),
]
)
categorical_transformer = Pipeline(
[
("impute", SimpleImputer(strategy="most_frequent")),
("ohe", OneHotEncoder(handle_unknown="ignore", sparse=False)),
]
)
preprocessor = ColumnTransformer(
transformers=[
("num", numeric_transformer, selector(dtype_exclude="category")),
("cat", categorical_transformer, selector(dtype_include="category")),
]
)
###Output
_____no_output_____
###Markdown
Now that the preprocessing pipeline is defined, we can run it on our training data, and apply the generated transform to our test data:
###Code
X_train = preprocessor.fit_transform(X_train)
X_test = preprocessor.transform(X_test)
###Output
_____no_output_____
###Markdown
Training an Unmitigated ModelSo that we have a point of comparison, we first train a model (specifically, logistic regression from scikit-learn) on the raw data, without applying any mitigation algorithm:
###Code
unmitigated_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)
unmitigated_predictor.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
We can view this model in the fairness dashboard, and see the disparities which appear:
###Code
FairlearnDashboard(sensitive_features=A_test, sensitive_feature_names=['Sex', 'Race'],
y_true=y_test,
y_pred={"unmitigated": unmitigated_predictor.predict(X_test)})
###Output
_____no_output_____
###Markdown
Looking at the disparity in accuracy when we select 'Sex' as the sensitive feature, we see that males have an error rate about three times greater than that of females. More interesting is the disparity in opportunity - males are offered loans at three times the rate of females.Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact. Mitigation with GridSearchThe `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox - for this simple example, we shall use the logistic regression estimator from scikit-learn. `GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each.For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). *We are using this metric for the sake of simplicity* in this example; the appropriate fairness metric can only be selected after *careful examination of the broader context* in which the model is to be used.
###Code
sweep = GridSearch(LogisticRegression(solver='liblinear', fit_intercept=True),
constraints=DemographicParity(),
grid_size=71)
###Output
_____no_output_____
###Markdown
With our estimator created, we can fit it to the data. After `fit()` completes, we extract the full set of predictors from the `GridSearch` object.The following cell trains many copies of the underlying estimator, and may take a minute or two to run:
###Code
sweep.fit(X_train, y_train,
sensitive_features=A_train.sex)
# For Fairlearn v0.5.0, need sweep.predictors_
predictors = sweep._predictors
###Output
_____no_output_____
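###Markdown
Since we asked for `grid_size=71`, the sweep should hand back 71 candidate predictors; a one-line check (not in the original notebook) confirms this before we start comparing them:
###Code
# Each grid point corresponds to one trained copy of the logistic regression estimator
print("Number of predictors from the sweep:", len(predictors))
###Output
_____no_output_____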
###Markdown
We could load these predictors into the Fairness dashboard now. However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the protected attribute; other potentially protected attributes will *not* be mitigated). In general, one might not want to do this, since there may be other considerations beyond the strict optimisation of error and disparity (of the given protected attribute).
###Code
errors, disparities = [], []
for m in predictors:
classifier = lambda X: m.predict(X)
error = ErrorRate()
error.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)
disparity = DemographicParity()
disparity.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)
errors.append(error.gamma(classifier)[0])
disparities.append(disparity.gamma(classifier).max())
all_results = pd.DataFrame( {"predictor": predictors, "error": errors, "disparity": disparities})
dominant_models_dict = dict()
base_name_format = "census_gs_model_{0}"
row_id = 0
for row in all_results.itertuples():
model_name = base_name_format.format(row_id)
errors_for_lower_or_eq_disparity = all_results["error"][all_results["disparity"]<=row.disparity]
if row.error <= errors_for_lower_or_eq_disparity.min():
dominant_models_dict[model_name] = row.predictor
row_id = row_id + 1
###Output
_____no_output_____
###Markdown
We can construct predictions for the dominant models (we include the unmitigated predictor as well, for comparison):
###Code
predictions_dominant = {"census_unmitigated": unmitigated_predictor.predict(X_test)}
models_dominant = {"census_unmitigated": unmitigated_predictor}
for name, predictor in dominant_models_dict.items():
value = predictor.predict(X_test)
predictions_dominant[name] = value
models_dominant[name] = predictor
###Output
_____no_output_____
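###Markdown
It is worth noting how aggressive the filtering was: of the 71 swept predictors (plus the unmitigated baseline), usually only a small non-dominated subset remains. A quick count (not part of the original notebook):
###Code
# Number of models that will be passed to the dashboard, including the unmitigated baseline
print("Models passed to the dashboard:", len(predictions_dominant))
###Output
_____no_output_____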
###Markdown
These predictions may then be viewed in the fairness dashboard. We include the race column from the dataset, as an alternative basis for assessing the models. However, since we have not based our mitigation on it, the variation in the models with respect to race can be large.
###Code
FairlearnDashboard(sensitive_features=A_test,
sensitive_feature_names=['Sex', 'Race'],
y_true=y_test.tolist(),
y_pred=predictions_dominant)
###Output
_____no_output_____
###Markdown
When using sex as the sensitive feature and accuracy as the metric, we see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute "sex"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy. Finally, we also see that the unmitigated model is towards the top right of the plot, with high accuracy, but worst disparity.By clicking on individual models on the plot, we can inspect their metrics for disparity and accuracy in greater detail. In a real example, we would then pick the model which represented the best trade-off between accuracy and disparity given the relevant business constraints. Uploading a Fairness Dashboard to AzureUploading a fairness dashboard to Azure is a two stage process. The `FairlearnDashboard` invoked in the previous section relies on the underlying Python kernel to compute metrics on demand. This is obviously not available when the fairness dashboard is rendered in AzureML Studio. By default, the dashboard in Azure Machine Learning Studio also requires the models to be registered. The required stages are therefore:1. Register the dominant models1. Precompute all the required metrics1. Upload to AzureBefore that, we need to connect to Azure Machine Learning Studio:
###Code
from azureml.core import Workspace, Experiment, Model
ws = Workspace.from_config()
ws.get_details()
###Output
_____no_output_____
###Markdown
Registering ModelsThe fairness dashboard is designed to integrate with registered models, so we need to do this for the models we want in the Studio portal. The assumption is that the names of the models specified in the dashboard dictionary correspond to the `id`s (i.e. `<name>:<version>` pairs) of registered models in the workspace. We register each of the models in the `models_dominant` dictionary into the workspace. For this, we have to save each model to a file, and then register that file:
###Code
import joblib
import os
os.makedirs('models', exist_ok=True)
def register_model(name, model):
print("Registering ", name)
model_path = "models/{0}.pkl".format(name)
joblib.dump(value=model, filename=model_path)
registered_model = Model.register(model_path=model_path,
model_name=name,
workspace=ws)
print("Registered ", registered_model.id)
return registered_model.id
model_name_id_mapping = dict()
for name, model in models_dominant.items():
m_id = register_model(name, model)
model_name_id_mapping[name] = m_id
###Output
_____no_output_____
###Markdown
Now, produce new predictions dictionaries, with the updated names:
###Code
predictions_dominant_ids = dict()
for name, y_pred in predictions_dominant.items():
predictions_dominant_ids[model_name_id_mapping[name]] = y_pred
###Output
_____no_output_____
###Markdown
Precomputing MetricsWe create a _dashboard dictionary_ using Fairlearn's `metrics` package. The `_create_group_metric_set` method has arguments similar to the Dashboard constructor, except that the sensitive features are passed as a dictionary (to ensure that names are available), and we must specify the type of prediction. Note that we use the `predictions_dominant_ids` dictionary we just created:
###Code
sf = { 'sex': A_test.sex, 'race': A_test.race }
from fairlearn.metrics._group_metric_set import _create_group_metric_set
dash_dict = _create_group_metric_set(y_true=y_test,
predictions=predictions_dominant_ids,
sensitive_features=sf,
prediction_type='binary_classification')
###Output
_____no_output_____
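###Markdown
The result is a plain Python dictionary of precomputed metrics. Before uploading it, we can glance at its top-level keys (a small inspection step, not in the original notebook):
###Code
# The upload routine below consumes this dictionary as-is
print(type(dash_dict))
print(sorted(dash_dict.keys()))
###Output
_____no_output_____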
###Markdown
Uploading the DashboardNow, we import our `contrib` package which contains the routine to perform the upload:
###Code
from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id
###Output
_____no_output_____
###Markdown
Now we can create an Experiment, then a Run, and upload our dashboard to it:
###Code
exp = Experiment(ws, "Test_Fairlearn_GridSearch_Census_Demo")
print(exp)
run = exp.start_logging()
try:
dashboard_title = "Dominant Models from GridSearch"
upload_id = upload_dashboard_dictionary(run,
dash_dict,
dashboard_name=dashboard_title)
print("\nUploaded to id: {0}\n".format(upload_id))
downloaded_dict = download_dashboard_by_upload_id(run, upload_id)
finally:
run.complete()
###Output
_____no_output_____
###Markdown
The dashboard can be viewed in the Run Details page.Finally, we can verify that the dashboard dictionary which we downloaded matches our upload:
###Code
print(dash_dict == downloaded_dict)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.  Unfairness Mitigation with Fairlearn and Azure Machine Learning**This notebook shows how to upload results from Fairlearn's GridSearch mitigation algorithm into a dashboard in Azure Machine Learning Studio** Table of Contents1. [Introduction](Introduction)1. [Loading the Data](LoadingData)1. [Training an Unmitigated Model](UnmitigatedModel)1. [Mitigation with GridSearch](Mitigation)1. [Uploading a Fairness Dashboard to Azure](AzureUpload) 1. Registering models 1. Computing Fairness Metrics 1. Uploading to Azure1. [Conclusion](Conclusion) IntroductionThis notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.org) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.org/).We will apply the [grid search algorithm](https://fairlearn.org/v0.4.6/api_reference/fairlearn.reductions.htmlfairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio. SetupTo use this notebook, an Azure Machine Learning workspace is required.Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.This notebook also requires the following packages:* `azureml-contrib-fairness`* `fairlearn==0.4.6` (v0.5.0 will work with minor modifications)* `joblib`* `liac-arff`Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:
###Code
# !pip install --upgrade scikit-learn>=0.22.1
###Output
_____no_output_____
###Markdown
Finally, please ensure that when you downloaded this notebook, you also downloaded the `fairness_nb_utils.py` file from the same location, and placed it in the same directory as this notebook. Loading the DataWe use the well-known `adult` census dataset, which we will fetch from the OpenML website. We start with a fairly unremarkable set of imports:
###Code
from fairlearn.reductions import GridSearch, DemographicParity, ErrorRate
from fairlearn.widget import FairlearnDashboard
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import make_column_selector as selector
from sklearn.pipeline import Pipeline
import pandas as pd
###Output
_____no_output_____
###Markdown
We can now load and inspect the data:
###Code
from fairness_nb_utils import fetch_census_dataset
data = fetch_census_dataset()
# Extract the items we want
X_raw = data.data
y = (data.target == '>50K') * 1
X_raw["race"].value_counts().to_dict()
###Output
_____no_output_____
###Markdown
We are going to treat the sex and race of each individual as protected attributes, and in this particular case we are going to remove these attributes from the main data (this is not always the best option - see the [Fairlearn website](http://fairlearn.github.io/) for further discussion). Protected attributes are often denoted by 'A' in the literature, and we follow that convention here:
###Code
A = X_raw[['sex','race']]
X_raw = X_raw.drop(labels=['sex', 'race'], axis = 1)
###Output
_____no_output_____
###Markdown
We now preprocess our data. To avoid the problem of data leakage, we split our data into training and test sets before performing any other transformations. Subsequent transformations (such as scalings) will be fit to the training data set, and then applied to the test dataset.
###Code
(X_train, X_test, y_train, y_test, A_train, A_test) = train_test_split(
X_raw, y, A, test_size=0.3, random_state=12345, stratify=y
)
# Ensure indices are aligned between X, y and A,
# after all the slicing and splitting of DataFrames
# and Series
X_train = X_train.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)
y_train = y_train.reset_index(drop=True)
y_test = y_test.reset_index(drop=True)
A_train = A_train.reset_index(drop=True)
A_test = A_test.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
We have two types of column in the dataset - categorical columns which will need to be one-hot encoded, and numeric ones which will need to be rescaled. We also need to take care of missing values. We use a simple approach here, but please bear in mind that this is another way that bias could be introduced (especially if one subgroup tends to have more missing values).For this preprocessing, we make use of `Pipeline` objects from `sklearn`:
###Code
numeric_transformer = Pipeline(
steps=[
("impute", SimpleImputer()),
("scaler", StandardScaler()),
]
)
categorical_transformer = Pipeline(
[
("impute", SimpleImputer(strategy="most_frequent")),
("ohe", OneHotEncoder(handle_unknown="ignore", sparse=False)),
]
)
preprocessor = ColumnTransformer(
transformers=[
("num", numeric_transformer, selector(dtype_exclude="category")),
("cat", categorical_transformer, selector(dtype_include="category")),
]
)
###Output
_____no_output_____
###Markdown
Now that the preprocessing pipeline is defined, we can run it on our training data, and apply the generated transform to our test data:
###Code
X_train = preprocessor.fit_transform(X_train)
X_test = preprocessor.transform(X_test)
###Output
_____no_output_____
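###Markdown
A quick shape check (an optional sketch, not part of the original notebook) shows the effect of the preprocessing: one-hot encoding expands the handful of raw columns into a much wider numeric matrix, while the number of rows in each split is unchanged:
###Code
# After the ColumnTransformer, both splits are plain numeric arrays
print("X_train:", X_train.shape)
print("X_test: ", X_test.shape)
###Output
_____no_output_____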
###Markdown
Training an Unmitigated ModelSo that we have a point of comparison, we first train a model (specifically, logistic regression from scikit-learn) on the raw data, without applying any mitigation algorithm:
###Code
unmitigated_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)
unmitigated_predictor.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
We can view this model in the fairness dashboard, and see the disparities which appear:
###Code
FairlearnDashboard(sensitive_features=A_test, sensitive_feature_names=['Sex', 'Race'],
y_true=y_test,
y_pred={"unmitigated": unmitigated_predictor.predict(X_test)})
###Output
_____no_output_____
###Markdown
Looking at the disparity in accuracy when we select 'Sex' as the sensitive feature, we see that males have an error rate about three times greater than that of females. More interesting is the disparity in opportunity - males are offered loans at three times the rate of females.Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact. Mitigation with GridSearchThe `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox - for this simple example, we shall use the logistic regression estimator from scikit-learn. `GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each.For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). *We are using this metric for the sake of simplicity* in this example; the appropriate fairness metric can only be selected after *careful examination of the broader context* in which the model is to be used.
###Code
sweep = GridSearch(LogisticRegression(solver='liblinear', fit_intercept=True),
constraints=DemographicParity(),
grid_size=71)
###Output
_____no_output_____
###Markdown
With our estimator created, we can fit it to the data. After `fit()` completes, we extract the full set of predictors from the `GridSearch` object.The following cell trains many copies of the underlying estimator, and may take a minute or two to run:
###Code
sweep.fit(X_train, y_train,
sensitive_features=A_train.sex)
# For Fairlearn v0.5.0, need sweep.predictors_
predictors = sweep._predictors
###Output
_____no_output_____
###Markdown
We could load these predictors into the Fairness dashboard now. However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the protected attribute; other potentially protected attributes will *not* be mitigated). In general, one might not want to do this, since there may be other considerations beyond the strict optimisation of error and disparity (of the given protected attribute).
###Code
errors, disparities = [], []
for m in predictors:
classifier = lambda X: m.predict(X)
error = ErrorRate()
error.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)
disparity = DemographicParity()
disparity.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)
errors.append(error.gamma(classifier)[0])
disparities.append(disparity.gamma(classifier).max())
all_results = pd.DataFrame( {"predictor": predictors, "error": errors, "disparity": disparities})
dominant_models_dict = dict()
base_name_format = "census_gs_model_{0}"
row_id = 0
for row in all_results.itertuples():
model_name = base_name_format.format(row_id)
errors_for_lower_or_eq_disparity = all_results["error"][all_results["disparity"]<=row.disparity]
if row.error <= errors_for_lower_or_eq_disparity.min():
dominant_models_dict[model_name] = row.predictor
row_id = row_id + 1
###Output
_____no_output_____
###Markdown
We can construct predictions for the dominant models (we include the unmitigated predictor as well, for comparison):
###Code
predictions_dominant = {"census_unmitigated": unmitigated_predictor.predict(X_test)}
models_dominant = {"census_unmitigated": unmitigated_predictor}
for name, predictor in dominant_models_dict.items():
value = predictor.predict(X_test)
predictions_dominant[name] = value
models_dominant[name] = predictor
###Output
_____no_output_____
###Markdown
These predictions may then be viewed in the fairness dashboard. We include the race column from the dataset, as an alternative basis for assessing the models. However, since we have not based our mitigation on it, the variation in the models with respect to race can be large.
###Code
FairlearnDashboard(sensitive_features=A_test,
sensitive_feature_names=['Sex', 'Race'],
y_true=y_test.tolist(),
y_pred=predictions_dominant)
###Output
_____no_output_____
###Markdown
When using sex as the sensitive feature and accuracy as the metric, we see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute "sex"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy. Finally, we also see that the unmitigated model is towards the top right of the plot, with high accuracy, but worst disparity.By clicking on individual models on the plot, we can inspect their metrics for disparity and accuracy in greater detail. In a real example, we would then pick the model which represented the best trade-off between accuracy and disparity given the relevant business constraints. Uploading a Fairness Dashboard to AzureUploading a fairness dashboard to Azure is a two stage process. The `FairlearnDashboard` invoked in the previous section relies on the underlying Python kernel to compute metrics on demand. This is obviously not available when the fairness dashboard is rendered in AzureML Studio. By default, the dashboard in Azure Machine Learning Studio also requires the models to be registered. The required stages are therefore:1. Register the dominant models1. Precompute all the required metrics1. Upload to AzureBefore that, we need to connect to Azure Machine Learning Studio:
###Code
from azureml.core import Workspace, Experiment, Model
ws = Workspace.from_config()
ws.get_details()
###Output
_____no_output_____
###Markdown
Registering ModelsThe fairness dashboard is designed to integrate with registered models, so we need to do this for the models we want in the Studio portal. The assumption is that the names of the models specified in the dashboard dictionary correspond to the `id`s (i.e. `<name>:<version>` pairs) of registered models in the workspace. We register each of the models in the `models_dominant` dictionary into the workspace. For this, we have to save each model to a file, and then register that file:
###Code
import joblib
import os
os.makedirs('models', exist_ok=True)
def register_model(name, model):
print("Registering ", name)
model_path = "models/{0}.pkl".format(name)
joblib.dump(value=model, filename=model_path)
registered_model = Model.register(model_path=model_path,
model_name=name,
workspace=ws)
print("Registered ", registered_model.id)
return registered_model.id
model_name_id_mapping = dict()
for name, model in models_dominant.items():
m_id = register_model(name, model)
model_name_id_mapping[name] = m_id
###Output
_____no_output_____
###Markdown
Now, produce new predictions dictionaries, with the updated names:
###Code
predictions_dominant_ids = dict()
for name, y_pred in predictions_dominant.items():
predictions_dominant_ids[model_name_id_mapping[name]] = y_pred
###Output
_____no_output_____
###Markdown
Precomputing MetricsWe create a _dashboard dictionary_ using Fairlearn's `metrics` package. The `_create_group_metric_set` method has arguments similar to the Dashboard constructor, except that the sensitive features are passed as a dictionary (to ensure that names are available), and we must specify the type of prediction. Note that we use the `predictions_dominant_ids` dictionary we just created:
###Code
sf = { 'sex': A_test.sex, 'race': A_test.race }
from fairlearn.metrics._group_metric_set import _create_group_metric_set
dash_dict = _create_group_metric_set(y_true=y_test,
predictions=predictions_dominant_ids,
sensitive_features=sf,
prediction_type='binary_classification')
###Output
_____no_output_____
###Markdown
Uploading the DashboardNow, we import our `contrib` package which contains the routine to perform the upload:
###Code
from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id
###Output
_____no_output_____
###Markdown
Now we can create an Experiment, then a Run, and upload our dashboard to it:
###Code
exp = Experiment(ws, "Test_Fairlearn_GridSearch_Census_Demo")
print(exp)
run = exp.start_logging()
try:
dashboard_title = "Dominant Models from GridSearch"
upload_id = upload_dashboard_dictionary(run,
dash_dict,
dashboard_name=dashboard_title)
print("\nUploaded to id: {0}\n".format(upload_id))
downloaded_dict = download_dashboard_by_upload_id(run, upload_id)
finally:
run.complete()
###Output
_____no_output_____
###Markdown
The dashboard can be viewed in the Run Details page.Finally, we can verify that the dashboard dictionary which we downloaded matches our upload:
###Code
print(dash_dict == downloaded_dict)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.  Unfairness Mitigation with Fairlearn and Azure Machine Learning**This notebook shows how to upload results from Fairlearn's GridSearch mitigation algorithm into a dashboard in Azure Machine Learning Studio** Table of Contents1. [Introduction](Introduction)1. [Loading the Data](LoadingData)1. [Training an Unmitigated Model](UnmitigatedModel)1. [Mitigation with GridSearch](Mitigation)1. [Uploading a Fairness Dashboard to Azure](AzureUpload) 1. Registering models 1. Computing Fairness Metrics 1. Uploading to Azure1. [Conclusion](Conclusion) IntroductionThis notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.org) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.org/).We will apply the [grid search algorithm](https://fairlearn.org/v0.4.6/api_reference/fairlearn.reductions.htmlfairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio. SetupTo use this notebook, an Azure Machine Learning workspace is required.Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.This notebook also requires the following packages:* `azureml-contrib-fairness`* `fairlearn>=0.6.2` (pre-v0.5.0 will work with minor modifications)* `joblib`* `liac-arff`* `raiwidgets~=0.7.0`Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:
###Code
# !pip install --upgrade scikit-learn>=0.22.1
###Output
_____no_output_____
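###Markdown
If any of the packages listed above are missing, they can also be installed from within the notebook. The cell below is a convenience sketch (not part of the original workflow) based on the requirement list above; uncomment it and adjust the version pins to match your environment:
###Code
# !pip install azureml-contrib-fairness "fairlearn>=0.6.2" joblib liac-arff "raiwidgets~=0.7.0"
###Output
_____no_output_____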
###Markdown
Finally, please ensure that when you downloaded this notebook, you also downloaded the `fairness_nb_utils.py` file from the same location, and placed it in the same directory as this notebook. Loading the DataWe use the well-known `adult` census dataset, which we will fetch from the OpenML website. We start with a fairly unremarkable set of imports:
###Code
from fairlearn.reductions import GridSearch, DemographicParity, ErrorRate
from raiwidgets import FairnessDashboard
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import make_column_selector as selector
from sklearn.pipeline import Pipeline
import pandas as pd
###Output
_____no_output_____
###Markdown
We can now load and inspect the data:
###Code
from fairness_nb_utils import fetch_census_dataset
data = fetch_census_dataset()
# Extract the items we want
X_raw = data.data
y = (data.target == '>50K') * 1
X_raw["race"].value_counts().to_dict()
###Output
_____no_output_____
###Markdown
We are going to treat the sex and race of each individual as protected attributes, and in this particular case we are going to remove these attributes from the main data (this is not always the best option - see the [Fairlearn website](http://fairlearn.github.io/) for further discussion). Protected attributes are often denoted by 'A' in the literature, and we follow that convention here:
###Code
A = X_raw[['sex','race']]
X_raw = X_raw.drop(labels=['sex', 'race'], axis = 1)
###Output
_____no_output_____
###Markdown
We now preprocess our data. To avoid the problem of data leakage, we split our data into training and test sets before performing any other transformations. Subsequent transformations (such as scalings) will be fit to the training data set, and then applied to the test dataset.
###Code
(X_train, X_test, y_train, y_test, A_train, A_test) = train_test_split(
X_raw, y, A, test_size=0.3, random_state=12345, stratify=y
)
# Ensure indices are aligned between X, y and A,
# after all the slicing and splitting of DataFrames
# and Series
X_train = X_train.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)
y_train = y_train.reset_index(drop=True)
y_test = y_test.reset_index(drop=True)
A_train = A_train.reset_index(drop=True)
A_test = A_test.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
We have two types of column in the dataset - categorical columns which will need to be one-hot encoded, and numeric ones which will need to be rescaled. We also need to take care of missing values. We use a simple approach here, but please bear in mind that this is another way that bias could be introduced (especially if one subgroup tends to have more missing values).For this preprocessing, we make use of `Pipeline` objects from `sklearn`:
###Code
numeric_transformer = Pipeline(
steps=[
("impute", SimpleImputer()),
("scaler", StandardScaler()),
]
)
categorical_transformer = Pipeline(
[
("impute", SimpleImputer(strategy="most_frequent")),
("ohe", OneHotEncoder(handle_unknown="ignore", sparse=False)),
]
)
preprocessor = ColumnTransformer(
transformers=[
("num", numeric_transformer, selector(dtype_exclude="category")),
("cat", categorical_transformer, selector(dtype_include="category")),
]
)
###Output
_____no_output_____
###Markdown
Now that the preprocessing pipeline is defined, we can run it on our training data, and apply the generated transform to our test data:
###Code
X_train = preprocessor.fit_transform(X_train)
X_test = preprocessor.transform(X_test)
###Output
_____no_output_____
###Markdown
Training an Unmitigated ModelSo that we have a point of comparison, we first train a model (specifically, logistic regression from scikit-learn) on the raw data, without applying any mitigation algorithm:
###Code
unmitigated_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)
unmitigated_predictor.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
We can view this model in the fairness dashboard, and see the disparities which appear:
###Code
FairnessDashboard(sensitive_features=A_test,
y_true=y_test,
y_pred={"unmitigated": unmitigated_predictor.predict(X_test)})
###Output
_____no_output_____
###Markdown
Looking at the disparity in accuracy when we select 'Sex' as the sensitive feature, we see that males have an error rate about three times greater than females. More interesting is the disparity in opportunity - males are offered loans at three times the rate of females. Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact. Mitigation with GridSearchThe `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox - for this simple example, we shall use the logistic regression estimator from scikit-learn. `GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each. For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). *We are using this metric for the sake of simplicity* in this example; the appropriate fairness metric can only be selected after *careful examination of the broader context* in which the model is to be used.
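To make the demographic parity notion concrete before mitigating, we can look at the selection rate (the fraction of positive loan predictions) that the unmitigated model gives each sex group; the gap between these rates is what the mitigation will try to shrink. The cell below is an illustrative sketch, not part of the original workflow - it only reuses the `unmitigated_predictor`, `X_test` and `A_test` objects defined above.
###Code
# Illustrative sketch: per-group selection rates of the unmitigated predictor
y_pred_unmitigated = unmitigated_predictor.predict(X_test)
# A_test was reset to a default index above, so it aligns with the prediction array
selection_rates = pd.Series(y_pred_unmitigated).groupby(A_test.sex).mean()
print(selection_rates)
# Demographic parity difference: gap between the largest and smallest group rate
print("Demographic parity difference:", selection_rates.max() - selection_rates.min())
###Output
_____no_output_____
###Markdown
With that baseline gap in mind, we construct the `GridSearch` sweep: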
###Code
sweep = GridSearch(LogisticRegression(solver='liblinear', fit_intercept=True),
constraints=DemographicParity(),
grid_size=71)
###Output
_____no_output_____
###Markdown
With our estimator created, we can fit it to the data. After `fit()` completes, we extract the full set of predictors from the `GridSearch` object. The following cell trains many copies of the underlying estimator, and may take a minute or two to run:
###Code
sweep.fit(X_train, y_train,
sensitive_features=A_train.sex)
# For Fairlearn pre-v0.5.0, need sweep._predictors
predictors = sweep.predictors_
###Output
_____no_output_____
###Markdown
We could load these predictors into the Fairness dashboard now. However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the protected attribute; other potentially protected attributes will *not* be mitigated). In general, one might not want to do this, since there may be other considerations beyond the strict optimisation of error and disparity (of the given protected attribute).
###Code
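# For each predictor from the sweep, record its overall training error
# (ErrorRate.gamma) and its largest demographic parity constraint violation
# (DemographicParity.gamma(...).max()).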
errors, disparities = [], []
for predictor in predictors:
error = ErrorRate()
error.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)
disparity = DemographicParity()
disparity.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)
errors.append(error.gamma(predictor.predict)[0])
disparities.append(disparity.gamma(predictor.predict).max())
all_results = pd.DataFrame( {"predictor": predictors, "error": errors, "disparity": disparities})
dominant_models_dict = dict()
base_name_format = "census_gs_model_{0}"
row_id = 0
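# Keep a model only if it has the lowest error among all models whose disparity
# is no worse than its own, i.e. the error-disparity Pareto front of the sweep.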
for row in all_results.itertuples():
model_name = base_name_format.format(row_id)
errors_for_lower_or_eq_disparity = all_results["error"][all_results["disparity"]<=row.disparity]
if row.error <= errors_for_lower_or_eq_disparity.min():
dominant_models_dict[model_name] = row.predictor
row_id = row_id + 1
###Output
_____no_output_____
###Markdown
We can construct predictions for the dominant models (we include the unmitigated predictor as well, for comparison):
###Code
predictions_dominant = {"census_unmitigated": unmitigated_predictor.predict(X_test)}
models_dominant = {"census_unmitigated": unmitigated_predictor}
for name, predictor in dominant_models_dict.items():
value = predictor.predict(X_test)
predictions_dominant[name] = value
models_dominant[name] = predictor
###Output
_____no_output_____
###Markdown
These predictions may then be viewed in the fairness dashboard. We include the race column from the dataset, as an alternative basis for assessing the models. However, since we have not based our mitigation on it, the variation in the models with respect to race can be large.
###Code
FairnessDashboard(sensitive_features=A_test,
y_true=y_test.tolist(),
y_pred=predictions_dominant)
###Output
_____no_output_____
###Markdown
When using sex as the sensitive feature and accuracy as the metric, we see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute "sex"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy. Finally, we also see that the unmitigated model is towards the top right of the plot, with high accuracy, but worst disparity.By clicking on individual models on the plot, we can inspect their metrics for disparity and accuracy in greater detail. In a real example, we would then pick the model which represented the best trade-off between accuracy and disparity given the relevant business constraints. Uploading a Fairness Dashboard to AzureUploading a fairness dashboard to Azure is a two stage process. The `FairnessDashboard` invoked in the previous section relies on the underlying Python kernel to compute metrics on demand. This is obviously not available when the fairness dashboard is rendered in AzureML Studio. By default, the dashboard in Azure Machine Learning Studio also requires the models to be registered. The required stages are therefore:1. Register the dominant models1. Precompute all the required metrics1. Upload to AzureBefore that, we need to connect to Azure Machine Learning Studio:
###Code
from azureml.core import Workspace, Experiment, Model
ws = Workspace.from_config()
ws.get_details()
###Output
_____no_output_____
###Markdown
Registering ModelsThe fairness dashboard is designed to integrate with registered models, so we need to do this for the models we want in the Studio portal. The assumption is that the names of the models specified in the dashboard dictionary correspond to the `id`s (i.e. `<name>:<version>` pairs) of registered models in the workspace. We register each of the models in the `models_dominant` dictionary into the workspace. For this, we have to save each model to a file, and then register that file:
###Code
import joblib
import os
os.makedirs('models', exist_ok=True)
def register_model(name, model):
print("Registering ", name)
model_path = "models/{0}.pkl".format(name)
joblib.dump(value=model, filename=model_path)
registered_model = Model.register(model_path=model_path,
model_name=name,
workspace=ws)
print("Registered ", registered_model.id)
return registered_model.id
model_name_id_mapping = dict()
for name, model in models_dominant.items():
m_id = register_model(name, model)
model_name_id_mapping[name] = m_id
###Output
_____no_output_____
###Markdown
Now, produce new predictions dictionaries, with the updated names:
###Code
predictions_dominant_ids = dict()
for name, y_pred in predictions_dominant.items():
predictions_dominant_ids[model_name_id_mapping[name]] = y_pred
###Output
_____no_output_____
###Markdown
Precomputing MetricsWe create a _dashboard dictionary_ using Fairlearn's `metrics` package. The `_create_group_metric_set` method has arguments similar to the Dashboard constructor, except that the sensitive features are passed as a dictionary (to ensure that names are available), and we must specify the type of prediction. Note that we use the `predictions_dominant_ids` dictionary we just created:
###Code
sf = { 'sex': A_test.sex, 'race': A_test.race }
from fairlearn.metrics._group_metric_set import _create_group_metric_set
dash_dict = _create_group_metric_set(y_true=y_test,
predictions=predictions_dominant_ids,
sensitive_features=sf,
prediction_type='binary_classification')
###Output
_____no_output_____
###Markdown
Uploading the DashboardNow, we import our `contrib` package which contains the routine to perform the upload:
###Code
from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id
###Output
_____no_output_____
###Markdown
Now we can create an Experiment, then a Run, and upload our dashboard to it:
###Code
exp = Experiment(ws, "Test_Fairlearn_GridSearch_Census_Demo")
print(exp)
run = exp.start_logging()
try:
dashboard_title = "Dominant Models from GridSearch"
upload_id = upload_dashboard_dictionary(run,
dash_dict,
dashboard_name=dashboard_title)
print("\nUploaded to id: {0}\n".format(upload_id))
downloaded_dict = download_dashboard_by_upload_id(run, upload_id)
finally:
run.complete()
###Output
_____no_output_____
###Markdown
The dashboard can be viewed in the Run Details page.Finally, we can verify that the dashboard dictionary which we downloaded matches our upload:
###Code
print(dash_dict == downloaded_dict)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.  Unfairness Mitigation with Fairlearn and Azure Machine Learning**This notebook shows how to upload results from Fairlearn's GridSearch mitigation algorithm into a dashboard in Azure Machine Learning Studio** Table of Contents1. [Introduction](Introduction)1. [Loading the Data](LoadingData)1. [Training an Unmitigated Model](UnmitigatedModel)1. [Mitigation with GridSearch](Mitigation)1. [Uploading a Fairness Dashboard to Azure](AzureUpload) 1. Registering models 1. Computing Fairness Metrics 1. Uploading to Azure1. [Conclusion](Conclusion) IntroductionThis notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.github.io) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.github.io/).We will apply the [grid search algorithm](https://fairlearn.github.io/api_reference/fairlearn.reductions.htmlfairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio. SetupTo use this notebook, an Azure Machine Learning workspace is required.Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.This notebook also requires the following packages:* `azureml-contrib-fairness`* `fairlearn==0.4.6`* `joblib`* `shap` Loading the DataWe use the well-known `adult` census dataset, which we load using `shap` (for convenience). We start with a fairly unremarkable set of imports:
###Code
from fairlearn.reductions import GridSearch, DemographicParity, ErrorRate
from fairlearn.widget import FairlearnDashboard
from sklearn import svm
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression
import pandas as pd
import shap
###Output
_____no_output_____
###Markdown
We can now load and inspect the data from the `shap` package:
###Code
X_raw, Y = shap.datasets.adult()
X_raw["Race"].value_counts().to_dict()
###Output
_____no_output_____
###Markdown
We are going to treat the sex of each individual as a protected attribute (where 0 indicates female and 1 indicates male), and in this particular case we are going to separate this attribute out and drop it from the main data (this is not always the best option - see the [Fairlearn website](http://fairlearn.github.io/) for further discussion). We also separate out the Race column, but we will not perform any mitigation based on it. Finally, we perform some standard data preprocessing steps to convert the data into a format suitable for the ML algorithms:
###Code
A = X_raw[['Sex','Race']]
X = X_raw.drop(labels=['Sex', 'Race'],axis = 1)
X = pd.get_dummies(X)
le = LabelEncoder()
Y = le.fit_transform(Y)
###Output
_____no_output_____
###Markdown
With our data prepared, we can make the conventional split into 'test' and 'train' subsets:
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test, A_train, A_test = train_test_split(X_raw,
Y,
A,
test_size = 0.2,
random_state=0,
stratify=Y)
# Work around indexing issue
X_train = X_train.reset_index(drop=True)
A_train = A_train.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)
A_test = A_test.reset_index(drop=True)
# Improve labels
# Use a single .loc assignment per value to avoid chained-assignment issues
A_test.loc[A_test['Sex'] == 0, 'Sex'] = 'female'
A_test.loc[A_test['Sex'] == 1, 'Sex'] = 'male'
A_test.loc[A_test['Race'] == 0, 'Race'] = 'Amer-Indian-Eskimo'
A_test.loc[A_test['Race'] == 1, 'Race'] = 'Asian-Pac-Islander'
A_test.loc[A_test['Race'] == 2, 'Race'] = 'Black'
A_test.loc[A_test['Race'] == 3, 'Race'] = 'Other'
A_test.loc[A_test['Race'] == 4, 'Race'] = 'White'
###Output
_____no_output_____
###Markdown
Training an Unmitigated ModelSo that we have a point of comparison, we first train a model (specifically, logistic regression from scikit-learn) on the raw data, without applying any mitigation algorithm:
###Code
unmitigated_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)
unmitigated_predictor.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
We can view this model in the fairness dashboard, and see the disparities which appear:
###Code
FairlearnDashboard(sensitive_features=A_test, sensitive_feature_names=['Sex', 'Race'],
y_true=Y_test,
y_pred={"unmitigated": unmitigated_predictor.predict(X_test)})
###Output
_____no_output_____
###Markdown
Looking at the disparity in accuracy when we select 'Sex' as the sensitive feature, we see that males have an error rate about three times greater than females. More interesting is the disparity in opportunity - males are offered loans at three times the rate of females. Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact. Mitigation with GridSearchThe `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox - for this simple example, we shall use the logistic regression estimator from scikit-learn. `GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each. For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). *We are using this metric for the sake of simplicity* in this example; the appropriate fairness metric can only be selected after *careful examination of the broader context* in which the model is to be used.
###Code
sweep = GridSearch(LogisticRegression(solver='liblinear', fit_intercept=True),
constraints=DemographicParity(),
grid_size=71)
###Output
_____no_output_____
###Markdown
With our estimator created, we can fit it to the data. After `fit()` completes, we extract the full set of predictors from the `GridSearch` object. The following cell trains many copies of the underlying estimator, and may take a minute or two to run:
###Code
sweep.fit(X_train, Y_train,
sensitive_features=A_train.Sex)
predictors = sweep._predictors
###Output
_____no_output_____
###Markdown
We could load these predictors into the Fairness dashboard now. However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the protected attribute; other potentially protected attributes will *not* be mitigated). In general, one might not want to do this, since there may be other considerations beyond the strict optimisation of error and disparity (of the given protected attribute).
###Code
errors, disparities = [], []
for m in predictors:
classifier = lambda X: m.predict(X)
error = ErrorRate()
error.load_data(X_train, pd.Series(Y_train), sensitive_features=A_train.Sex)
disparity = DemographicParity()
disparity.load_data(X_train, pd.Series(Y_train), sensitive_features=A_train.Sex)
errors.append(error.gamma(classifier)[0])
disparities.append(disparity.gamma(classifier).max())
all_results = pd.DataFrame( {"predictor": predictors, "error": errors, "disparity": disparities})
dominant_models_dict = dict()
base_name_format = "census_gs_model_{0}"
row_id = 0
for row in all_results.itertuples():
model_name = base_name_format.format(row_id)
errors_for_lower_or_eq_disparity = all_results["error"][all_results["disparity"]<=row.disparity]
if row.error <= errors_for_lower_or_eq_disparity.min():
dominant_models_dict[model_name] = row.predictor
row_id = row_id + 1
###Output
_____no_output_____
###Markdown
We can construct predictions for the dominant models (we include the unmitigated predictor as well, for comparison):
###Code
predictions_dominant = {"census_unmitigated": unmitigated_predictor.predict(X_test)}
models_dominant = {"census_unmitigated": unmitigated_predictor}
for name, predictor in dominant_models_dict.items():
value = predictor.predict(X_test)
predictions_dominant[name] = value
models_dominant[name] = predictor
###Output
_____no_output_____
###Markdown
These predictions may then be viewed in the fairness dashboard. We include the race column from the dataset, as an alternative basis for assessing the models. However, since we have not based our mitigation on it, the variation in the models with respect to race can be large.
###Code
FairlearnDashboard(sensitive_features=A_test,
sensitive_feature_names=['Sex', 'Race'],
y_true=Y_test.tolist(),
y_pred=predictions_dominant)
###Output
_____no_output_____
###Markdown
When using sex as the sensitive feature, we see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute "sex"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy. Finally, we also see that the unmitigated model is towards the top right of the plot, with high accuracy, but worst disparity.By clicking on individual models on the plot, we can inspect their metrics for disparity and accuracy in greater detail. In a real example, we would then pick the model which represented the best trade-off between accuracy and disparity given the relevant business constraints. Uploading a Fairness Dashboard to AzureUploading a fairness dashboard to Azure is a two stage process. The `FairlearnDashboard` invoked in the previous section relies on the underlying Python kernel to compute metrics on demand. This is obviously not available when the fairness dashboard is rendered in AzureML Studio. By default, the dashboard in Azure Machine Learning Studio also requires the models to be registered. The required stages are therefore:1. Register the dominant models1. Precompute all the required metrics1. Upload to AzureBefore that, we need to connect to Azure Machine Learning Studio:
###Code
from azureml.core import Workspace, Experiment, Model
ws = Workspace.from_config()
ws.get_details()
###Output
_____no_output_____
###Markdown
Registering ModelsThe fairness dashboard is designed to integrate with registered models, so we need to do this for the models we want in the Studio portal. The assumption is that the names of the models specified in the dashboard dictionary correspond to the `id`s (i.e. `<name>:<version>` pairs) of registered models in the workspace. We register each of the models in the `models_dominant` dictionary into the workspace. For this, we have to save each model to a file, and then register that file:
###Code
import joblib
import os
os.makedirs('models', exist_ok=True)
def register_model(name, model):
print("Registering ", name)
model_path = "models/{0}.pkl".format(name)
joblib.dump(value=model, filename=model_path)
registered_model = Model.register(model_path=model_path,
model_name=name,
workspace=ws)
print("Registered ", registered_model.id)
return registered_model.id
model_name_id_mapping = dict()
for name, model in models_dominant.items():
m_id = register_model(name, model)
model_name_id_mapping[name] = m_id
###Output
_____no_output_____
###Markdown
Now, produce new predictions dictionaries, with the updated names:
###Code
predictions_dominant_ids = dict()
for name, y_pred in predictions_dominant.items():
predictions_dominant_ids[model_name_id_mapping[name]] = y_pred
###Output
_____no_output_____
###Markdown
Precomputing MetricsWe create a _dashboard dictionary_ using Fairlearn's `metrics` package. The `_create_group_metric_set` method has arguments similar to the Dashboard constructor, except that the sensitive features are passed as a dictionary (to ensure that names are available), and we must specify the type of prediction. Note that we use the `predictions_dominant_ids` dictionary we just created:
###Code
sf = { 'sex': A_test.Sex, 'race': A_test.Race }
from fairlearn.metrics._group_metric_set import _create_group_metric_set
dash_dict = _create_group_metric_set(y_true=Y_test,
predictions=predictions_dominant_ids,
sensitive_features=sf,
prediction_type='binary_classification')
###Output
_____no_output_____
###Markdown
Uploading the DashboardNow, we import our `contrib` package which contains the routine to perform the upload:
###Code
from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id
###Output
_____no_output_____
###Markdown
Now we can create an Experiment, then a Run, and upload our dashboard to it:
###Code
exp = Experiment(ws, "Test_Fairlearn_GridSearch_Census_Demo")
print(exp)
run = exp.start_logging()
try:
dashboard_title = "Dominant Models from GridSearch"
upload_id = upload_dashboard_dictionary(run,
dash_dict,
dashboard_name=dashboard_title)
print("\nUploaded to id: {0}\n".format(upload_id))
downloaded_dict = download_dashboard_by_upload_id(run, upload_id)
finally:
run.complete()
###Output
_____no_output_____
###Markdown
The dashboard can be viewed in the Run Details page.Finally, we can verify that the dashboard dictionary which we downloaded matches our upload:
###Code
print(dash_dict == downloaded_dict)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.  Unfairness Mitigation with Fairlearn and Azure Machine Learning**This notebook shows how to upload results from Fairlearn's GridSearch mitigation algorithm into a dashboard in Azure Machine Learning Studio** Table of Contents1. [Introduction](Introduction)1. [Loading the Data](LoadingData)1. [Training an Unmitigated Model](UnmitigatedModel)1. [Mitigation with GridSearch](Mitigation)1. [Uploading a Fairness Dashboard to Azure](AzureUpload) 1. Registering models 1. Computing Fairness Metrics 1. Uploading to Azure1. [Conclusion](Conclusion) IntroductionThis notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.github.io) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.github.io/).We will apply the [grid search algorithm](https://fairlearn.github.io/api_reference/fairlearn.reductions.htmlfairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio. SetupTo use this notebook, an Azure Machine Learning workspace is required.Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.This notebook also requires the following packages:* `azureml-contrib-fairness`* `fairlearn==0.4.6`* `joblib`* `shap`Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:
###Code
# !pip install --upgrade scikit-learn>=0.22.1
###Output
_____no_output_____
###Markdown
Loading the DataWe use the well-known `adult` census dataset, which we load using `shap` (for convenience). We start with a fairly unremarkable set of imports:
###Code
from fairlearn.reductions import GridSearch, DemographicParity, ErrorRate
from fairlearn.widget import FairlearnDashboard
from sklearn import svm
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression
import pandas as pd
import shap
###Output
_____no_output_____
###Markdown
We can now load and inspect the data from the `shap` package:
###Code
X_raw, Y = shap.datasets.adult()
X_raw["Race"].value_counts().to_dict()
###Output
_____no_output_____
###Markdown
We are going to treat the sex of each individual as a protected attribute (where 0 indicates female and 1 indicates male), and in this particular case we are going to separate this attribute out and drop it from the main data (this is not always the best option - see the [Fairlearn website](http://fairlearn.github.io/) for further discussion). We also separate out the Race column, but we will not perform any mitigation based on it. Finally, we perform some standard data preprocessing steps to convert the data into a format suitable for the ML algorithms:
###Code
A = X_raw[['Sex','Race']]
X = X_raw.drop(labels=['Sex', 'Race'],axis = 1)
X = pd.get_dummies(X)
le = LabelEncoder()
Y = le.fit_transform(Y)
###Output
_____no_output_____
###Markdown
With our data prepared, we can make the conventional split into 'test' and 'train' subsets:
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test, A_train, A_test = train_test_split(X_raw,
Y,
A,
test_size = 0.2,
random_state=0,
stratify=Y)
# Work around indexing issue
X_train = X_train.reset_index(drop=True)
A_train = A_train.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)
A_test = A_test.reset_index(drop=True)
# Improve labels
# Use a single .loc assignment per value to avoid chained-assignment issues
A_test.loc[A_test['Sex'] == 0, 'Sex'] = 'female'
A_test.loc[A_test['Sex'] == 1, 'Sex'] = 'male'
A_test.loc[A_test['Race'] == 0, 'Race'] = 'Amer-Indian-Eskimo'
A_test.loc[A_test['Race'] == 1, 'Race'] = 'Asian-Pac-Islander'
A_test.loc[A_test['Race'] == 2, 'Race'] = 'Black'
A_test.loc[A_test['Race'] == 3, 'Race'] = 'Other'
A_test.loc[A_test['Race'] == 4, 'Race'] = 'White'
###Output
_____no_output_____
###Markdown
Training an Unmitigated ModelSo that we have a point of comparison, we first train a model (specifically, logistic regression from scikit-learn) on the raw data, without applying any mitigation algorithm:
###Code
unmitigated_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)
unmitigated_predictor.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
We can view this model in the fairness dashboard, and see the disparities which appear:
###Code
FairlearnDashboard(sensitive_features=A_test, sensitive_feature_names=['Sex', 'Race'],
y_true=Y_test,
y_pred={"unmitigated": unmitigated_predictor.predict(X_test)})
###Output
_____no_output_____
###Markdown
Looking at the disparity in accuracy when we select 'Sex' as the sensitive feature, we see that males have an error rate about three times greater than females. More interesting is the disparity in opportunity - males are offered loans at three times the rate of females. Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact. Mitigation with GridSearchThe `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox - for this simple example, we shall use the logistic regression estimator from scikit-learn. `GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each. For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). *We are using this metric for the sake of simplicity* in this example; the appropriate fairness metric can only be selected after *careful examination of the broader context* in which the model is to be used.
###Code
sweep = GridSearch(LogisticRegression(solver='liblinear', fit_intercept=True),
constraints=DemographicParity(),
grid_size=71)
###Output
_____no_output_____
###Markdown
With our estimator created, we can fit it to the data. After `fit()` completes, we extract the full set of predictors from the `GridSearch` object. The following cell trains many copies of the underlying estimator, and may take a minute or two to run:
###Code
sweep.fit(X_train, Y_train,
sensitive_features=A_train.Sex)
predictors = sweep._predictors
###Output
_____no_output_____
###Markdown
We could load these predictors into the Fairness dashboard now. However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the protected attribute; other potentially protected attributes will *not* be mitigated). In general, one might not want to do this, since there may be other considerations beyond the strict optimisation of error and disparity (of the given protected attribute).
###Code
errors, disparities = [], []
for m in predictors:
classifier = lambda X: m.predict(X)
error = ErrorRate()
error.load_data(X_train, pd.Series(Y_train), sensitive_features=A_train.Sex)
disparity = DemographicParity()
disparity.load_data(X_train, pd.Series(Y_train), sensitive_features=A_train.Sex)
errors.append(error.gamma(classifier)[0])
disparities.append(disparity.gamma(classifier).max())
all_results = pd.DataFrame( {"predictor": predictors, "error": errors, "disparity": disparities})
dominant_models_dict = dict()
base_name_format = "census_gs_model_{0}"
row_id = 0
for row in all_results.itertuples():
model_name = base_name_format.format(row_id)
errors_for_lower_or_eq_disparity = all_results["error"][all_results["disparity"]<=row.disparity]
if row.error <= errors_for_lower_or_eq_disparity.min():
dominant_models_dict[model_name] = row.predictor
row_id = row_id + 1
###Output
_____no_output_____
###Markdown
We can construct predictions for the dominant models (we include the unmitigated predictor as well, for comparison):
###Code
predictions_dominant = {"census_unmitigated": unmitigated_predictor.predict(X_test)}
models_dominant = {"census_unmitigated": unmitigated_predictor}
for name, predictor in dominant_models_dict.items():
value = predictor.predict(X_test)
predictions_dominant[name] = value
models_dominant[name] = predictor
###Output
_____no_output_____
###Markdown
These predictions may then be viewed in the fairness dashboard. We include the race column from the dataset, as an alternative basis for assessing the models. However, since we have not based our mitigation on it, the variation in the models with respect to race can be large.
###Code
FairlearnDashboard(sensitive_features=A_test,
sensitive_feature_names=['Sex', 'Race'],
y_true=Y_test.tolist(),
y_pred=predictions_dominant)
###Output
_____no_output_____
###Markdown
When using sex as the sensitive feature, we see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute "sex"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy. Finally, we also see that the unmitigated model is towards the top right of the plot, with high accuracy, but worst disparity.By clicking on individual models on the plot, we can inspect their metrics for disparity and accuracy in greater detail. In a real example, we would then pick the model which represented the best trade-off between accuracy and disparity given the relevant business constraints. Uploading a Fairness Dashboard to AzureUploading a fairness dashboard to Azure is a two stage process. The `FairlearnDashboard` invoked in the previous section relies on the underlying Python kernel to compute metrics on demand. This is obviously not available when the fairness dashboard is rendered in AzureML Studio. By default, the dashboard in Azure Machine Learning Studio also requires the models to be registered. The required stages are therefore:1. Register the dominant models1. Precompute all the required metrics1. Upload to AzureBefore that, we need to connect to Azure Machine Learning Studio:
###Code
from azureml.core import Workspace, Experiment, Model
ws = Workspace.from_config()
ws.get_details()
###Output
_____no_output_____
###Markdown
Registering ModelsThe fairness dashboard is designed to integrate with registered models, so we need to do this for the models we want in the Studio portal. The assumption is that the names of the models specified in the dashboard dictionary correspond to the `id`s (i.e. `<name>:<version>` pairs) of registered models in the workspace. We register each of the models in the `models_dominant` dictionary into the workspace. For this, we have to save each model to a file, and then register that file:
###Code
import joblib
import os
os.makedirs('models', exist_ok=True)
def register_model(name, model):
print("Registering ", name)
model_path = "models/{0}.pkl".format(name)
joblib.dump(value=model, filename=model_path)
registered_model = Model.register(model_path=model_path,
model_name=name,
workspace=ws)
print("Registered ", registered_model.id)
return registered_model.id
model_name_id_mapping = dict()
for name, model in models_dominant.items():
m_id = register_model(name, model)
model_name_id_mapping[name] = m_id
###Output
_____no_output_____
###Markdown
Now, produce new predictions dictionaries, with the updated names:
###Code
predictions_dominant_ids = dict()
for name, y_pred in predictions_dominant.items():
predictions_dominant_ids[model_name_id_mapping[name]] = y_pred
###Output
_____no_output_____
###Markdown
Precomputing MetricsWe create a _dashboard dictionary_ using Fairlearn's `metrics` package. The `_create_group_metric_set` method has arguments similar to the Dashboard constructor, except that the sensitive features are passed as a dictionary (to ensure that names are available), and we must specify the type of prediction. Note that we use the `predictions_dominant_ids` dictionary we just created:
###Code
sf = { 'sex': A_test.Sex, 'race': A_test.Race }
from fairlearn.metrics._group_metric_set import _create_group_metric_set
dash_dict = _create_group_metric_set(y_true=Y_test,
predictions=predictions_dominant_ids,
sensitive_features=sf,
prediction_type='binary_classification')
###Output
_____no_output_____
###Markdown
Uploading the DashboardNow, we import our `contrib` package which contains the routine to perform the upload:
###Code
from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id
###Output
_____no_output_____
###Markdown
Now we can create an Experiment, then a Run, and upload our dashboard to it:
###Code
exp = Experiment(ws, "Test_Fairlearn_GridSearch_Census_Demo")
print(exp)
run = exp.start_logging()
try:
dashboard_title = "Dominant Models from GridSearch"
upload_id = upload_dashboard_dictionary(run,
dash_dict,
dashboard_name=dashboard_title)
print("\nUploaded to id: {0}\n".format(upload_id))
downloaded_dict = download_dashboard_by_upload_id(run, upload_id)
finally:
run.complete()
###Output
_____no_output_____
###Markdown
The dashboard can be viewed in the Run Details page.Finally, we can verify that the dashboard dictionary which we downloaded matches our upload:
###Code
print(dash_dict == downloaded_dict)
###Output
_____no_output_____ |
notebooks/02-Reference-Trajectory.ipynb | ###Markdown
Computing Co-States and GainsOnce the reference trajectory has been propagated, additional information is required for formulating the guidance law. These are the costates $\lambda_h$, $\lambda_s$, $\lambda_v$, $\lambda_\gamma$ and $\lambda_u$. Of these, $\lambda_s$ was found to have a constant value of $1$ throughout the trajectory.The remaining costates have these terminal boundary conditions associated with them:$$\begin{align}\lambda_h(t_f) &= -\cot{\gamma(t_f)} \\\lambda_v(t_f) &= 0 \\\lambda_\gamma(t_f) &= 0 \\\lambda_u(t_f) &= 0 \\\end{align}$$Starting at these terminal boundary conditions, we can integrate the equations in reverse to obtain the time history of these co-states.
###Code
def traj_eom_with_costates(t: float,
state: np.array,
params: dict,
bank_angle_fn: Callable[[float, np.array, dict], float]
):
lamS = 1
h, s, V, gam, lamH, lamV, lamGAM, lamU = state
u = bank_angle_fn(t, state, params)
rho0 = params['rho0']
H = params['H']
beta = params['beta']
LD = params['LD']
R_m = params['R_m']
g = params['g']
r = R_m + h
v = V
V2 = V*V
rho = rho0 * exp(-h/H)
D_m = rho * V2 / (2 * beta) # Drag Acceleration (D/m)
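    # Costate dynamics below follow lambda_dot = -dH/dx for the Hamiltonian
    # H = lam . f(x, u), with lamS held constant at 1 (see the markdown above)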
# lamHDot = D_m*LD*lamGAM*cos(u)/(H*v) - D_m*lamV/H + lamGAM*v*cos(gam)/r**2
# lamVDot = D_m*LD*lamGAM*cos(u)/v**2 - LD*lamGAM*rho*cos(u)/beta - g*lamGAM*cos(gam)/v**2 - lamGAM*cos(gam)/r \
# - lamH*sin(gam) \
# - lamS*cos(gam) \
# + lamV*rho*v/beta
# lamGAMDot = g*lamV*cos(gam) - lamGAM*(g*sin(gam) - v**2*sin(gam)/r)/v - lamH*v*cos(gam) + lamS*v*sin(gam)
lamHdot = D_m*LD*lamGAM*cos(u)/(H*v) - D_m*lamV/H + lamGAM*v*cos(gam)/r**2
lamVdot = D_m*LD*lamGAM*cos(u)/v**2 - LD*lamGAM*rho*cos(u)/beta - g*lamGAM*cos(gam)/v**2 - lamGAM*cos(gam)/r - lamH*sin(gam) - lamS*cos(gam) + lamV*rho*v/beta
lamGAMdot = -g*lamGAM*sin(gam)/v + g*lamV*cos(gam) + lamGAM*v*sin(gam)/r - lamH*v*cos(gam) + lamS*v*sin(gam)
# lamUdot = -LD*lamGAM*rho*v*sin(u)/(2*beta)
lamUdot = LD*lamGAM*rho*v*sin(u)/(2*beta)
return np.array([V * sin(gam), # dh/dt
V * cos(gam), # ds/dt
-D_m - g*sin(gam), # dV/dt
(V2 * cos(gam)/r + D_m*LD*cos(u) - g*cos(gam))/V, # dgam/dt
lamHdot,
lamVdot,
lamGAMdot,
lamUdot]
)
ref_tf = ref_traj.t[-1]
ref_tspan_rev = ref_traj.t[::-1] # Reverse the time span
Xf = np.copy(ref_traj.X[-1,:])
# Ensure monotonic decreasing V
def V_event(t,X,p,_):
    return X[2] - 5500  # X[2] is velocity in the state ordering [h, s, V, gam, ...]
V_event.direction = 1
V_event.terminal = True
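# Terminal costate values from the boundary conditions stated above:
# lamH(tf) = -cot(gamma_f), lamV(tf) = lamGAM(tf) = lamU(tf) = 0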
X_and_lam0 = np.concatenate((Xf, [-1/np.tan(Xf[3]), 0, 0, 0]))
output = solve_ivp(traj_eom_with_costates, # lambda t,X,p,u: -traj_eom_with_costates(t,X,p,u),
[ref_tf, 0],
X_and_lam0,
t_eval=ref_traj.t[::-1],
rtol=1e-8,
events=V_event,
args=(params, reference_bank_angle))
lam = output.y.T[:,4:][::-1]
X_and_lam = output.y.T[::-1]
np.set_printoptions(suppress=True)
class ApolloReferenceData:
def __init__(self, X_and_lam: np.array, u: np.array, tspan: np.array, params: dict):
"""
X_and_lam: [h, s, v, gam, lamH, lamV, lamGAM, lamU] - 8 x n matrix
tspan: 1 x n vector
"""
self.X_and_lam = X_and_lam
self.tspan = tspan
self.params = params
self.u = u
assert len(X_and_lam.shape) == 2 and X_and_lam.shape[0] > 1, "Need at least two rows of data"
self.num_rows = X_and_lam.shape[0]
self.delta_v = abs(X_and_lam[1,2] - X_and_lam[0,2])
assert self.delta_v > 0, "Reference trajectory has repeated velocites in different rows"
self.start_v = X_and_lam[0,2]
F1, F2, F3, D_m, hdot_ref = self._compute_gains_and_ref()
F3[-1] = F3[-2] # Account for F3=0 at t=tf
# Stack the columns as follows:
# [t, h, s, v, gam, F1, F2, F3, D/m]
self.data = np.column_stack((tspan, X_and_lam[:,:4], F1, F2, F3, D_m, hdot_ref))
def _compute_gains_and_ref(self):
h = self.X_and_lam[:,0]
v = self.X_and_lam[:,2]
gam = self.X_and_lam[:,3]
lamH = self.X_and_lam[:,4]
lamGAM = self.X_and_lam[:,6]
lamU = self.X_and_lam[:,7]
rho0 = self.params['rho0']
H = self.params['H']
beta = self.params['beta'] # m/(Cd * Aref)
v2 = v*v
rho = rho0 * exp(-h/H)
D_m = rho * v2 / (2 * beta) # Drag Acceleration (D/m)
hdot = v * sin(gam)
F1 = H * lamH/D_m
F2 = lamGAM/(v * np.cos(gam))
F3 = lamU
return F1, F2, F3, D_m, hdot
def get_row_by_velocity(self, v: float):
"""
Returns data row closest to given velocity
"""
all_v = self.data[:,3]
dist_to_v = np.abs(all_v - v)
index = min(dist_to_v) == dist_to_v
return self.data[index,:][0]
def save(self, filename: str):
"""Saves the reference trajectory data to a file"""
np.savez(filename, X_and_lam=self.X_and_lam, u=self.u, tspan=self.tspan, params=self.params)
@staticmethod
def load(filename: str):
"""Initializes a new ApolloReferenceData from a saved data file"""
npzdata = np.load(filename, allow_pickle=True)
X_and_lam = npzdata.get('X_and_lam')
u = npzdata.get('u')
tspan = npzdata.get('tspan')
params = npzdata.get('params').item()
return ApolloReferenceData(X_and_lam, u, tspan, params)
# Test loading and saving of data
apollo_ref = ApolloReferenceData(X_and_lam, ref_traj.u, ref_traj.t, params)
apollo_ref.save('apollo_data_vref.npz')
# Load data back and check that it matches the original
ref = ApolloReferenceData.load('apollo_data_vref.npz')
assert np.allclose(ref.data, apollo_ref.data)
###Output
_____no_output_____ |
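###Markdown
As a usage sketch (assuming the `apollo_data_vref.npz` file saved above, and an arbitrary illustrative query velocity of 3000), the stored reference can be queried by velocity to retrieve the row `[t, h, s, v, gam, F1, F2, F3, D/m, hdot_ref]` that a guidance routine would consume:
###Code
# Illustrative sketch: look up the reference row nearest a given velocity
ref = ApolloReferenceData.load('apollo_data_vref.npz')
row = ref.get_row_by_velocity(3000.0)  # query velocity in the same units as the reference data
t_ref, h_ref, s_ref, v_ref, gam_ref, F1, F2, F3, D_m_ref, hdot_ref = row
print("closest reference velocity:", v_ref)
print("gains F1, F2, F3:", F1, F2, F3)
###Output
_____no_output_____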
notebooks/tay_donovan_12964300_week3_SVC_abs_grid10k.ipynb | ###Markdown
**SVC**
###Code
experiment_label = 'SVC05_abs_c10k_rem'
user_label = 'tay_donovan'
###Output
_____no_output_____
###Markdown
**Aim** Look for performance improvement in SVC model:1. Absolute values for negative values2. Boost gridsearchCV to 100003. Remove correlated features **Findings**First pass: {'C': 500, 'degree': 3, 'gamma': 1e-05, 'kernel': 'rbf'}Second pass: {'C': 1500, 'degree': 3, 'gamma': 1e-05, 'kernel': 'rbf'}Third pass: {'C': 2000, 'degree': 3, 'gamma': 5e-06, 'kernel': 'rbf'}Fourth pass: {'C': 2500, 'degree': 3, 'gamma': 1e-06, 'kernel': 'rbf'}Fifth pass: {'C': 3500, 'degree': 3, 'gamma': 1e-08, 'kernel': 'rbf'}Sixth pass: {'C': 10000, 'degree': 3, 'gamma': 1e-09, 'kernel': 'rbf'}Seventh pass: {'C': 10000, 'degree': 3, 'gamma': 1e-10, 'kernel': 'rbf'}Need to boost C parameter, try leaving all features.
###Code
#Initial imports
import pandas as pd
import numpy as np
import seaborn as sb
import matplotlib.pyplot as plt
import os
import sys
sys.path.append(os.path.abspath('..'))
from src.common_lib import DataReader, NBARawData
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
**Data input and cleansing**
###Code
#Load dataset using common function DataReader.read_data()
data_reader = DataReader()
# Load Raw Train Data
df_train = data_reader.read_data(NBARawData.TRAIN)
# Load Test Raw Data
df_test = data_reader.read_data(NBARawData.TEST)
#For train dataframe, remove redundant column 'Id_old'
cols_drop = ["Id", "Id_old"]
df_train.drop(cols_drop, axis=1, inplace=True)
df_train.columns = df_train.columns.str.strip()
df_train.describe
#For test dataframe, remove redundant column 'Id_old'
df_test.drop(cols_drop, axis=1, inplace=True)
df_test.columns = df_test.columns.str.strip()
df_test.describe
###Output
_____no_output_____
###Markdown
**Negative values in dataset**
###Code
print(df_train.where(df_train < 0).count())
# Negative values do not make sense in this context
#Define negative cleaning function
def clean_negatives(strategy, df):
if strategy=='abs':
df = abs(df)
if strategy=='null':
df[df < 0] = 0
if strategy=='mean':
df[df < 0] = None
df.fillna(df.mean(), inplace=True)
return(df)
#Clean negative numbers
negatives_strategy = 'abs'
df_train = clean_negatives(negatives_strategy, df_train)
df_test = clean_negatives(negatives_strategy, df_test)
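# Added sanity check (optional): after cleaning, no negative values should remain
print("Negative values remaining in train:", df_train.where(df_train < 0).count().sum())
print("Negative values remaining in test:", df_test.where(df_test < 0).count().sum())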
###Output
_____no_output_____
###Markdown
**Feature Correlation and Selection**
###Code
#Use Pearson Correlation to determine feature correlation
pearsoncorr = df_train.corr('pearson')
#Create heatmap of pearson correlation factors
fig, ax = plt.subplots(figsize=(10,10))
sb.heatmap(pearsoncorr,
xticklabels=pearsoncorr.columns,
yticklabels=pearsoncorr.columns,
cmap='RdBu_r',
annot=True,
linewidth=0.2)
#Drop correlated features w/ score over 0.9 - retain "MINS", "3P MADE","FTM","REB"
selected_features = data_reader.select_feature_by_correlation(df_train)
selected_test = data_reader.select_feature_by_correlation(df_test)
df_train = df_train[selected_features]
df_test = df_test[selected_test]
cols_drop = ["PTS", "FGM", "FGA", "3PA", "FTA", "DREB", "OREB"]
df_train.drop(cols_drop, axis=1, inplace=True)
df_test.drop(cols_drop, axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
**Standard Scaling**
###Code
#Standardise scaling of all feature values
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
df_cleaned = df_train.copy()
target = df_cleaned.pop('TARGET_5Yrs')
df_train_cleaned = scaler.fit_transform(df_cleaned)
df_train_scaled = pd.DataFrame(df_train_cleaned)
df_train_scaled.columns = df_cleaned.columns
df_train_scaled['TARGET_5Yrs'] = target
# Split the training dataset using common function data_reader.splitdata
X_train, X_val, y_train, y_val = data_reader.split_data(df_train)
#X_train, X_val, y_train, y_val = data_reader.split_data(df_train_scaled)
###Output
_____no_output_____
###Markdown
**Model Selection and Training**
###Code
#Create Optimised Model
optmodel = SVC()
#Use GridSearchCV to optimise parameters
from sklearn.model_selection import GridSearchCV
# defining parameter range
param_grid = {'C': [10000, 200000, 40000],
'gamma': [5e-10, 1e-10, 5e-11],
'kernel': ['rbf'],
'degree': [3]
}
grid = GridSearchCV(SVC(probability=True), param_grid, refit = True, verbose = 3, scoring="roc_auc", n_jobs=-2)
# fitting the model for grid search
grid.fit(X_train, y_train)
#Print the optimised parameters
print(grid.best_params_)
#Create model with the optimised parameters
model = SVC(C=10000, break_ties=False, class_weight='balanced', coef0=0.0,
decision_function_shape='ovr', degree=3,
gamma=1e-10, kernel='rbf', max_iter=-1,
probability=True, random_state=None, shrinking=True,
tol=0.001, verbose=False)
model.fit(X_train, y_train);
#Store model in /models
from joblib import dump
dump(model, '../models/' + experiment_label + '.joblib')
###Output
_____no_output_____
###Markdown
**Model Evaluation**
###Code
#Create predictions for train and validation
y_train_preds = model.predict(X_train)
y_val_preds = model.predict(X_val)
#Evaluate train predictions
#from src.models.aj_metrics import confusion_matrix
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.metrics import plot_roc_curve, plot_precision_recall_curve
from sklearn.metrics import classification_report
sys.path.append(os.path.abspath('..'))
from src.models.aj_metrics import confusion_matrix
y_train_preds
#Training performance results
print("ROC AUC Score:")
print(roc_auc_score(y_train,y_train_preds))
print("Accuracy Score:")
print(accuracy_score(y_train, y_train_preds))
print(classification_report(y_train, y_train_preds))
#Confusion matrix
print(confusion_matrix(y_train, y_train_preds))
#ROC Curve
plot_roc_curve(model,X_train, y_train)
#Precision Recall Curve
plot_precision_recall_curve(model,X_train,y_train)
#Validation performance analysis
print("ROC AUC Score:")
print(roc_auc_score(y_val,y_val_preds))
print("Accuracy Score:")
print(accuracy_score(y_val, y_val_preds))
print("Confusion Matrix:")
print(classification_report(y_val, y_val_preds))
#Confusion matrix
print(confusion_matrix(y_val, y_val_preds))
#ROC Curve
plot_roc_curve(model,X_val, y_val)
#Precision Recall Curve
plot_precision_recall_curve(model, X_val, y_val)
###Output
_____no_output_____
###Markdown
**Test output**
###Code
#Output predictions
X_test = df_test
y_test_preds = model.predict_proba(X_test)[:,1]
y_test_preds
output = pd.DataFrame({'Id': range(0,3799), 'TARGET_5Yrs': [p for p in y_test_preds]})
output.to_csv("../reports/" + user_label + "_submission_" + experiment_label + ".csv", index=False)
###Output
_____no_output_____ |
labs/lab0_lang_detect/langdetect_student.ipynb | ###Markdown
Lab: language detector In this lab we are going to build an automatic language detector, capable of discriminating text from 20 different languages. To do so we will only use methods based on character analysis, which nevertheless turn out to be very effective for this problem. Instructions Throughout this notebook you will find empty cells that you will have to fill in with your own code. Follow the notebook's instructions and pay special attention to the following icons: You must answer the indicated question with the code or answer you write in the cell below. This is a hint or observation that may help you solve the exercise. This is an advanced, optional exercise that you can do if you want to go deeper into the topic. We encourage you to try it to learn more. Good luck! To avoid compatibility problems and missing packages, it is recommended to run this notebook under one of the [recommended Text Mining environments](https://github.com/albarji/teaching-environments/tree/master/textmining). Additionally, if you need to check the help for any Python function you can place the cursor over its name and press Shift+Tab so that a box with its details appears. Note that this only works in code cells. Let's go! Loading and preparing the data For this exercise we will use the corpus of sample sentences in different languages that can be obtained from Tatoeba: https://tatoeba.org/eng/downloads . Download the sentences.tar.bz2 file from the Tatoeba website and decompress it. Create a variable DATAFILE with the full path to the decompressed file.
###Code
####### INSERT YOUR CODE HERE
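# One possible solution (sketch): the path below is an assumption -- point it to
# wherever you decompressed the Tatoeba sentences file on your machine.
DATAFILE = "/path/to/sentences.csv"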
###Output
_____no_output_____
###Markdown
Since it is a TSV (tab separated values) file, we can easily load it as a pandas DataFrame. If you have downloaded it and set the path correctly, the following cell should load the data and show a portion of it.
###Code
import pandas as pd
df = pd.read_csv(DATAFILE, sep="\t", index_col=0, names=["lang", "text"])
df
###Output
_____no_output_____
###Markdown
As we can see, each record contains a sentence (column "text") and an indicator of the language it belongs to (column "lang"). The language indicator follows the [ISO 639-3](https://en.wikipedia.org/wiki/List_of_ISO_639-3_codes) standard. Before we start working with the data we should clean it up a bit. The following check shows that there are unknown values in the language indicator:
###Code
df.isnull().sum()
###Output
_____no_output_____
###Markdown
We can remove those invalid records with the following instruction
###Code
df = df.dropna()
###Output
_____no_output_____
###Markdown
Now let's check which languages are present in the data. To do this we will use Counter, a structure that works analogously to a Python dictionary, but keeps count of the number of times each element has appeared.
###Code
from collections import Counter
langcounter = Counter(df["lang"])
langcounter
###Output
_____no_output_____
###Markdown
That's 331 languages! To keep this exercise focused we will concentrate on the 20 most representative languages. We can get the 20 most frequent elements of a Counter as follows:
###Code
langcounter.most_common(20)
###Output
_____no_output_____
###Markdown
This returns the 20 most frequent elements, together with their frequencies of appearance. Among the most frequent languages we find English, Italian, Russian, Turkish, German, Spanish, Hebrew, Japanese, Finnish, Mandarin Chinese, ... Create a variable commonlangs that is a list with the names of those 20 most frequent languages. You will have to take the output of the most_common method and keep only the language names.
###Code
####### INSERT YOUR CODE HERE
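# One possible solution (sketch): keep only the language codes returned by most_common()
commonlangs = [lang for lang, count in langcounter.most_common(20)]
print(commonlangs)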
###Output
_____no_output_____
###Markdown
If everything is correct, the following line should filter the data so that we keep only the sentences from those 20 languages.
###Code
df = df[df["lang"].isin(commonlangs)]
df
###Output
_____no_output_____
###Markdown
Now that we have cleaned the data, let's separate the input variables (text) from the output variables (languages). To feed the language labels into the model we will have to encode them numerically: for this we will use scikit-learn's LabelEncoder:
###Code
from sklearn.preprocessing import LabelEncoder
X = df["text"]
labelencoder = LabelEncoder().fit(df["lang"])
y = labelencoder.transform(df["lang"])
###Output
_____no_output_____
###Markdown
Next we will split the data into one set for training the model and another for making predictions:
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
###Output
_____no_output_____
###Markdown
With this we have everything ready to build the model. Unigram model To start with, let's build a model that simply takes into account the kind of characters appearing in the text to try to determine the language. This means we will set up a process that converts a given text into a vector of character frequencies, so that we can then apply a machine learning system to the vectors we obtain. This transformation can be done very easily using the CountVectorizer class from the scikit-learn package:
###Code
from sklearn.feature_extraction.text import CountVectorizer
###Output
_____no_output_____
###Markdown
This class gives us functionality to take a list of texts and convert them into a numeric representation. We can configure how this conversion is carried out through different parameters when instantiating a CountVectorizer:* **analyzer**: type of text elements we are going to count to build the vector representation * *word*: counts of words or word n-grams * *char*: counts of characters or character n-grams * *char_wb*: counts of characters or character n-grams within each word* **ngram_range**: tuple of the form (n, m) indicating the range of n-grams we are going to build. With (1, 1) we get unigrams, while with (1, 3) we count from unigrams up to trigrams.* **min_df**: minimum number (or fraction) of texts in which an element has to appear to be included in the counts. With this we can ignore words or characters that appear very rarely and are therefore not relevant.* **max_df**: maximum number (or fraction) of texts in which an element can appear to be included in the counts. With this we can ignore words or characters that appear in almost every text and are therefore not discriminative.* **binary**: make binary, bag-of-words-style counts (True) or real element counts (False)* **lowercase**: automatically convert all texts to lowercase (True) or not (False)For the case at hand we want to create a CountVectorizer that analyzes character unigrams, which would be achieved as
###Code
vectorizadorejemplo = CountVectorizer(analyzer = "char", ngram_range = (1,1))
###Output
_____no_output_____
###Markdown
Once built, we can convert a list of texts into vectors using the **fit_transform** method:
###Code
ejemplos = [
"The cat sat on the mat",
"The dog barked at the cat",
"Dog days"
]
transformados = vectorizadorejemplo.fit_transform(ejemplos)
transformados
###Output
_____no_output_____
###Markdown
For efficiency, the computed vectors are stored as a compressed (sparse) matrix. We can see the contents of this matrix as follows:
###Code
transformados.toarray()
###Output
_____no_output_____
###Markdown
What does this mean? We can ask our vectorizer object what vocabulary it has built from the texts we provided:
###Code
vectorizadorejemplo.vocabulary_
###Output
_____no_output_____
###Markdown
This tells us which entry of the generated vectors corresponds to each word of the vocabulary. Indeed, the first vector of *transformados*, which corresponds to the sentence "The cat sat on the mat", tells us that the character "a" appears three times (index 1 of the vector) or that the character "c" appears once (index 3 of the vector). We can get a more visual representation of the vectorization with the following helper function.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import itertools
def plot_vectorizer_matrix(vectorizer, texts, title='Vectorizer matrix', cmap=plt.cm.Blues):
"""
Generate a visual representation of the matrix produced by a vectorizer over some texts
"""
np.set_printoptions(precision=2)
matrix = vectorizer.transform(texts).toarray()
plt.imshow(matrix, interpolation='nearest', cmap=cmap)
plt.title(title)
keys = [k for k, v in sorted(vectorizer.vocabulary_.items(), key=lambda x: x[1])]
plt.xticks(np.arange(len(vectorizer.vocabulary_)), keys)
plt.yticks(range(len(texts)), texts)
thresh = matrix.max() / 2.
for i, j in itertools.product(range(matrix.shape[0]), range(matrix.shape[1])):
if matrix[i, j] > 0:
plt.text(j, i, format(matrix[i, j], 'd'),
horizontalalignment="center",
color="white" if matrix[i, j] > thresh else "black")
plot_vectorizer_matrix(vectorizadorejemplo, ejemplos)
###Output
_____no_output_____
###Markdown
Here we clearly see the result of the vectorization: the text *Dog days* has been transformed into a vector telling us that the following characters were present in the original text: 1 space, 1 `a`, 2 `d`, 1 `g`, 1 `o`, 1 `s`, and 1 `y`. To easily combine this vectorization process with a classification model we are going to use a scikit-learn **Pipeline**. In later exercises we will see more details about this; for now it is enough to know that a Pipeline defines a series of model stages.
###Code
from sklearn.pipeline import Pipeline
from sklearn.linear_model import SGDClassifier
model = Pipeline([
('vectorizer', CountVectorizer(analyzer = "char", ngram_range = (1,1))),
('classifier', SGDClassifier(max_iter=1))
]
)
###Output
_____no_output_____
###Markdown
Here we have defined a Pipeline that first applies the character-frequency vectorization discussed above, and then passes it to an SGDClassifier classification model. This is a linear SVM-type model whose implementation is specialized in working with large volumes of data. Since we have almost 5 million sentences for training, a single iteration over the training data is enough. Once defined, we train the model with the training data we had prepared. Training should take around 2 minutes.
###Code
%%time
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
With the model trained, let's assess how well we have done on the test dataset. For this we can use the model's score method.
###Code
model.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
You should have obtained around 86%-87% accuracy. To make a more in-depth analysis of the quality of this model, let's plot the confusion matrix by language. For that we will rely on the following function:
###Code
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
np.set_printoptions(precision=2)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Now we generate the predictions for the test set, use the scikit-learn function that computes the confusion matrix, and plot it with the function defined above.
###Code
from sklearn.metrics import confusion_matrix
y_pred = model.predict(X_test)
cnf_matrix = confusion_matrix(y_test, y_pred)
plt.figure(figsize=(12,12))
plot_confusion_matrix(cnf_matrix, classes=labelencoder.classes_, normalize=True, title='Normalized confusion matrix')
plt.show()
###Output
_____no_output_____
###Markdown
The confusion matrix reveals that some languages are very easily identifiable: Mandarin Chinese, Hebrew, Japanese and Russian. This makes sense because they use a character set very different from that of other languages. However, the model frequently confuses languages that are similar to each other: Berber with Kabyle, or Spanish with Italian and Portuguese. We can do better. But for that we will have to resort to character n-grams. Bigram model Repeat the previous steps to build a character bigram model. Do you get better overall accuracy with this model? Which confusions have disappeared from the confusion matrix? Which ones remain? Review the earlier explanation of the parameters accepted by CountVectorizer. You only need to change this element in the Pipeline definition to obtain the bigram model.
###Code
####### INSERT YOUR CODE HERE
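# One possible solution (sketch): the same pipeline as before, but counting
# character bigrams instead of single characters.
model_bigrams = Pipeline([
    ('vectorizer', CountVectorizer(analyzer="char", ngram_range=(2, 2))),
    ('classifier', SGDClassifier(max_iter=1))
])
model_bigrams.fit(X_train, y_train)
print(model_bigrams.score(X_test, y_test))

# Confusion matrix for the bigram model, reusing the helper defined above
y_pred_bi = model_bigrams.predict(X_test)
plt.figure(figsize=(12, 12))
plot_confusion_matrix(confusion_matrix(y_test, y_pred_bi), classes=labelencoder.classes_,
                      normalize=True, title='Normalized confusion matrix (bigrams)')
plt.show()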
###Output
_____no_output_____
###Markdown
Beyond bigrams It is possible to obtain even better results using trigrams or 4-grams. But given the size of the dataset this may require an excessive amount of memory. To avoid this, it will be necessary to resort to other vectorization strategies, such as [HashingVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.HashingVectorizer.html). Try to improve the results of the bigram model using HashingVectorizer and a higher order of n-grams.
###Code
####### INSERT YOUR CODE HERE
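# One possible solution (sketch): HashingVectorizer hashes n-grams into a fixed
# number of features, so memory stays bounded even for large n-gram vocabularies.
# The n_features value below is an arbitrary choice.
from sklearn.feature_extraction.text import HashingVectorizer

model_hashing = Pipeline([
    ('vectorizer', HashingVectorizer(analyzer="char", ngram_range=(1, 3), n_features=2**20)),
    ('classifier', SGDClassifier(max_iter=1))
])
model_hashing.fit(X_train, y_train)
print(model_hashing.score(X_test, y_test))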
###Output
_____no_output_____ |
notebooks/Intro_to_pandas.ipynb | ###Markdown
CORE Skills Prerequisite - Intro to PandasThis lesson is adapted from the [Data Carpentry Ecology lesson](http://www.datacarpentry.org/python-ecology-lesson/) How to use a Jupyter Notebookhttps://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/index.htmlhttps://jupyterlab.readthedocs.io/en/stable/user/notebook.html- The file autosaves- You run a cell with **shift + enter** or using the run button in the tool bar- If you run a cell with **option + enter** it will also create a new cell below- See *Help > Keyboard Shortcuts* or the *Cheatsheet* for more info- The notebook has different type of cells: Code and Markdown are most commonly used- **Code** cells expect code for the Kernel you have chosen, syntax highlighting is available, comments in the code are specified with -> code after this will not be executed- **Markdown** cells allow you to right report style text, using markdown for formatting the style (e.g. Headers, bold face etc) Working With Pandas DataFrames in Python Starting in the same spotTo help the lesson run smoothly, let's ensure everyone is in the same directory.This should help us avoid path and file name issues. At this time pleasenavigate to the workshop directory. If you working in IPython Notebook be surethat you start your notebook in the workshop directory.A quick aside that there are Python libraries like [OSLibrary](https://docs.python.org/3/library/os.html) that can work with ourdirectory structure, however, that is not our focus today.If you need to change your directory ```import os``` and use ```os.chdir``` Our Data For this lesson, we will be using the Portal Teaching data, a subset of the datafrom Ernst et al[Long-term monitoring and experimental manipulation of a Chihuahuan Desert ecosystem near Portal, Arizona, USA](http://www.esapubs.org/archive/ecol/E090/118/default.htm)We will be using files from the [Portal Project Teaching Database](https://figshare.com/articles/Portal_Project_Teaching_Database/1314459).This section will use the `surveys.csv` file which can be found in /data/python/python_dataWe are studying the species and weight of animals caught in plots in our studyarea. The dataset is stored as a `.csv` file: each row holds information for asingle animal, and the columns represent:| Column | Description ||------------------|------------------------------------|| record_id | Unique id for the observation || month | month of observation || day | day of observation || year | year of observation || plot | ID of a particular plot || species | 2-letter code || sex | sex of animal ("M", "F") || wgt | weight of the animal in grams |The first few rows of our first file look like this:```record_id,month,day,year,plot,species,sex,wgt1,7,16,1977,2,NA,M,2,7,16,1977,3,NA,M,3,7,16,1977,2,DM,F,``` About LibrariesA library in Python contains a set of tools (called functions) that performtasks on our data. Importing a library is like getting a piece of lab equipmentout of a storage locker and setting it up on the bench for use in a project.Once a library is set up, it can be used or called to perform many tasks.Python doesn't load all of the libraries available to it by default. We have toadd an `import` statement to our code in order to use library functions. To importa library, we use the syntax `import libraryName`. If we want to give thelibrary a nickname to shorten the command, we can add `as nickNameHere`. Anexample of importing the pandas library using the common nickname `pd` is below.You only need to load a library once during your session. 
You can load the library when neededor you can load all necessary libraries at the beginning of your script. This is good practice, especially for the readability of your code Pandas in PythonOne of the best options for working with tabular data in Python is to use the[Python Data Analysis Library](http://pandas.pydata.org/) (a.k.a. Pandas). ThePandas library provides data structures, produces high quality plots with[matplotlib](http://matplotlib.org/) and integrates nicely with other librariesthat use [NumPy](http://www.numpy.org/) (which is another Python library) arrays.A handy **Pandas cheathsheet** can be found [here](http://pandas.pydata.org/Pandas_Cheat_Sheet.pdf).Each time we call a function that's in a library, we use the syntax`LibraryName.FunctionName`. Adding the library name with a `.` before thefunction name tells Python where to find the function. In the example above, wehave imported Pandas as `pd`. This means we don't have to type out `pandas` eachtime we call a Pandas function.
###Code
# check if you need to change your directory
import os
os.getcwd()
os.listdir("../")
os.chdir("../data/")
os.getcwd()
import pandas as pd
#check your version, we need v0.19 or higher
pd.__version__
###Output
_____no_output_____
###Markdown
Reading CSV Data Using Pandas We will begin by locating and reading our survey data which are in CSV format. We can use Pandas' `read_csv` function to pull the file directly into a [DataFrame](http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe). So What's a DataFrame? A DataFrame is a 2-dimensional data structure that can store data of different types (including characters, integers, floating point values, factors and more) in columns. It is similar to a spreadsheet or an SQL table or the `data.frame` in R. A DataFrame always has an index (0-based). An index refers to the position of an element in the data structure.
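As a quick aside (not part of the survey workflow), a DataFrame can also be built by hand from a Python dictionary; the toy column names and values below are made up purely to illustrate that different columns can hold different types.

```python
import pandas as pd

# hypothetical toy table: each dictionary key becomes a column
toy = pd.DataFrame({"species": ["DM", "NL", "PF"], "wgt": [40, 120, 8]})
print(toy)
print(toy.dtypes)  # 'species' is stored as object (strings), 'wgt' as int64
```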
###Code
# note that pd.read_csv is used because we imported pandas as pd
pd.read_csv("surveys.csv")
###Output
_____no_output_____
###Markdown
We can see that there were 33,549 rows parsed. Each row has 9columns. The first column is the index of the DataFrame. The index is used toidentify the position of the data, but it is not an actual column of the DataFrame. It looks like the `read_csv` function in Pandas read our file properly. However, we haven't saved any data to memory so we can work with it.We need to assign the DataFrame to a variable. Remember that a variable is a name for a value, such as `x`, or `data`. We can create a new object with a variable name by assigning a value to it using `=`.Let's call the imported survey data `surveys_df`:
###Code
surveys_df = pd.read_csv("surveys.csv")
###Output
_____no_output_____
###Markdown
Notice when you assign the imported DataFrame to a variable, Python does not produce any output on the screen. We can print the value of the `surveys_df` object by typing its name into the Python command prompt. Manipulating Our Species Survey Data Now we can start manipulating our data. First, let's check the data type of the data stored in `surveys_df` using the `type` method. The `type` method and `__class__` attribute tell us that `surveys_df` is a `pandas.core.frame.DataFrame`.
###Code
type(surveys_df)
surveys_df.__class__
###Output
_____no_output_____
###Markdown
We can also enter `surveys_df.dtypes` at our prompt to view the data type for eachcolumn in our DataFrame. `int64` represents numeric integer values - `int64` cellscan not store decimals. `object` represents strings (letters and numbers). `float64`represents numbers with decimals.
###Code
surveys_df.dtypes
###Output
_____no_output_____
###Markdown
Pandas and base Python use slightly different names for data types. More on thisis in the table below:| Pandas Type | Native Python Type | Description ||-------------|--------------------|-------------|| object | string | The most general dtype. Will be assigned to your column if column has mixed types (numbers and strings). || int64 | int | Numeric characters. 64 refers to the memory allocated to hold this character. || float64 | float | Numeric characters with decimals. If a column contains numbers and NaNs(see below), pandas will default to float64, in case your missing value has a decimal. || datetime64, timedelta[ns] | N/A (but see the [datetime](http://doc.python.org/2/library/datetime.html) module in Python's standard library) | Values meant to hold time data. Look into these for time series experiments. |--- Exploring DataFrames in PythonThere are multiple methods that can be used to access and summarise the datastored in DataFrames. Let's try out a few. Note that we call the method by usingthe object name followed by . and the method name. So `surveys_df.columns` provides an indexof all of the column names in our DataFrame.
###Code
surveys_df.columns
###Output
_____no_output_____
###Markdown
Selecting Rows and ColumnsIn pandas you can use several ways to **select a specific column**:- square brackets `[]` - a `.` and the column nameFor example, we can select all of data from a column named `species` from the `surveys_df`DataFrame by name:```pythonsurveys_df['species'] this syntax, calling the column as an attribute, gives you the same outputsurveys_df.species```
###Code
surveys_df['species']
surveys_df.species
###Output
_____no_output_____
###Markdown
Using double square brackets `[[]]` we can pass a list of column names too by listing the names we want:
###Code
surveys_df[['record_id','species']]
###Output
_____no_output_____
###Markdown
We can also create an new object that contains the data as follows:```python create an object named surveys_species that only contains the `species_id` columnsurveys_species = surveys_df['species']```
###Code
surveys_species = surveys_df['species']
surveys_species
###Output
_____no_output_____
###Markdown
**NOTE:** If a column name is not contained in the DataFrame, an exception(error) will be raised.```pythonsurveys_df['speciess']```
###Code
surveys_df['speciess']
###Output
_____no_output_____
###Markdown
ChallengesTry out the methods below to see what they return.1. `surveys_df.columns`.2. `surveys_df.head()`. Also, what does `surveys_df.head(15)` do?3. `surveys_df.tail()`.4. `surveys_df.shape`. Take note of the output of the shape method. What format does it return the shape of the DataFrame in?HINT: [More on tuples, here](https://docs.python.org/3/tutorial/datastructures.htmltuples-and-sequences).
###Code
surveys_df.columns
surveys_df.head()
surveys_df.tail()
surveys_df.shape
###Output
_____no_output_____
###Markdown
---Converting between different data types
###Code
# Let's check the types of data we have in our dataframe
surveys_df.dtypes
# convert the record_id field from an integer to a float
surveys_df['record_id'] = surveys_df['record_id'].astype('float64')
surveys_df.dtypes
###Output
_____no_output_____
###Markdown
What happens if we try to convert weight values to integers?
###Code
surveys_df['wgt'].astype('int64')
###Output
_____no_output_____
###Markdown
Notice that this throws a value error: `ValueError: Cannot convert NA tointeger`. If we look at the `weight` column in the surveys data we notice thatthere are NaN (**N**ot **a** **N**umber) values. *NaN* values are undefinedvalues that cannot be represented mathematically. Pandas, for example, will readan empty cell in a CSV or Excel sheet as a NaN. NaNs have some desirableproperties: if we were to average the `weight` column without replacing our NaNs,Python would know to skip over those cells.
###Code
surveys_df['wgt'].mean()
###Output
_____no_output_____
###Markdown
_Note: older pandas version do not know how to handle NaN, please update to v0.19 or higher_Check your pandas version using `pd.__version__`, if you need to update open a bash shelland type ```conda update pandas```.--- Missing Data Values - NaNDealing with missing data values is always a challenge. It's sometimes hard toknow why values are missing - was it because of a data entry error? Or data thatsomeone was unable to collect? Should the value be 0? We need to know howmissing values are represented in the dataset in order to make good decisions.If we're lucky, we have some metadata that will tell us more about how nullvalues were handled.For instance, in some disciplines, like Remote Sensing, missing data values areoften defined as -9999. Having a bunch of -9999 values in your data could reallyalter numeric calculations. Often in spreadsheets, cells are left empty where nodata are available. Pandas will, by default, replace those missing values withNaN. However it is good practice to get in the habit of intentionally markingcells that have no data, with a no data value! That way there are no questionsin the future when you (or someone else) explores your data. Where Are the NaN's?Let's explore the NaN values in our data a bit further. First, let's figure out **how many rows contain NaN values for weight**. We can do this by identifying how many rows have a NULL value (`.isnull`) or by counting the number of rows that have a meaningful value (e.g., wgt>0):
###Code
surveys_df[pd.isnull(surveys_df['wgt'])]
surveys_df[surveys_df['wgt']>0]
###Output
_____no_output_____
###Markdown
We can replace all NaN values with zeroes using the `.fillna()` method (aftermaking a copy of the data so we don't lose our work).However, NaN and 0 yield different analysis results. The mean value when NaNvalues are replaced with 0 is different from when NaN values are simply thrownout or ignored.
###Code
# replace NaN with 0
df1 = surveys_df.copy()
df1['wgt'] = df1['wgt'].fillna(0)
#check mean, how does it differ from before?
print(surveys_df['wgt'].mean())
print(df1['wgt'].mean())
###Output
_____no_output_____
###Markdown
We can fill NaN values with any value that we chose. The code below fills allNaN values with a mean for all weight values.```python df1['wgt'] = surveys_df['wgt'].fillna(surveys_df['wgt'].mean())```We could also chose to create a subset of our data, only keeping rows that donot contain NaN values, using `.dropna()` method.**The point is to make conscious decisions about how to manage missing data.** This is where we think about how our data will be used and how these values willimpact the scientific conclusions made from the data.Python gives us all of the tools that we need to account for these issues. Wejust need to be cautious about how the decisions that we make impact scientificresults.
###Code
df1['wgt'] = surveys_df['wgt'].fillna(surveys_df['wgt'].mean())
print(surveys_df['wgt'].mean())
print(df1['wgt'].mean())
###Output
_____no_output_____
###Markdown
Calculating summary statistics for a Pandas DataFrameWe've read our data into Python. Next, let's perform some quick summarystatistics to learn more about the data that we're working with. We might wantto know how many animals were collected in each plot, or how many of eachspecies were caught. We can perform summary stats quickly using groups. Butfirst we need to figure out what we want to group by.---Let's find out how many unique plot IDs and species we have in our data:
###Code
# Reminder of the column names
surveys_df.columns
# Create a list of unique plot ID's and species found in the surveys data
plot_names = pd.unique(surveys_df['plot'])
species = pd.unique(surveys_df['species'])
# Check the length of the list
print('There are: ' + str(len(plot_names)) + ' unique plots in the data')
print('There are: ' + str(len(species)) + ' unique species in the data')
# Single line solution
print('There are: ' + str(surveys_df['plot'].nunique()) + ' unique plots in the data')
print('There are: ' + str(surveys_df['species'].nunique()) + ' unique species in the data')
###Output
_____no_output_____
###Markdown
---The Pandas function `describe` will return descriptive stats including: mean,median, max, min, std and count for a particular column in the data. Pandas'`describe` function will only return summary values for columns containingnumeric data.We can calculate basic statistics for all records in a single column using thesyntax below:
###Code
surveys_df.describe()
###Output
_____no_output_____
###Markdown
We can also extract one specific metric if we wish:```pythonsurveys_df['wgt'].min()surveys_df['wgt'].max()surveys_df['wgt'].mean()surveys_df['wgt'].std()surveys_df['wgt'].count()```
###Code
surveys_df['wgt'].mean()
###Output
_____no_output_____
###Markdown
Basic Math FunctionsIf we wanted to, we could perform math on an entire column of our data. Forexample let's multiply all weight values by 2. A more practical use of this mightbe to normalize the data according to a mean, area, or some other valuecalculated from our data.
###Code
# multiply all weight values by 2
surveys_df['wgt']*2
###Output
_____no_output_____
###Markdown
Groups in PandasWe often want to calculate summary statistics grouped by subsets or attributeswithin fields of our data, for example we might want to know what the summary stats look like split by sex.We can use Pandas' `.groupby` method, which creates a groupby DataFrame on which we can perform other pandas methods.
###Code
# grouping the df by sex
by_sex = surveys_df.groupby('sex')
# summary statistics for this new df
by_sex.describe()
# provide the mean for each numeric column by sex
by_sex.min()
###Output
_____no_output_____
###Markdown
The `groupby` command is powerful in that it allows us to quickly generatesummary stats, not just for one group but several.For example, we might want to calculate the averageweight of all individuals per plot:```pythonsurveys_df.groupby('plot')['wgt'].mean()```
###Code
# calculate average weight of individuals in each plot
surveys_df.groupby('plot')['wgt'].mean()
###Output
_____no_output_____
###Markdown
Or, we might want to know how many males and females we have for each species:```pythonsurveys_df.groupby(['species','sex'])['record_id'].count()```
###Code
# count the number of each sex per species
surveys_df.groupby(['species','sex'])['record_id'].count()
###Output
_____no_output_____
###Markdown
Challenge1. Calculate the average weight for each species per plot2. Calculate the average weight for each sex of each species per plot
###Code
surveys_df.groupby(['plot','species'])['wgt'].mean()
surveys_df.groupby(['plot','species','sex'])['wgt'].mean()
surveys_df.groupby(['species','plot'])['wgt'].mean()
###Output
_____no_output_____
###Markdown
Quick & Easy Plotting Data Using PandasWe can plot our summary stats using Pandas, too.
###Code
# make sure figures appear inline in Jupyter Notebook
%matplotlib inline
# plot year vs wgt
surveys_df.plot(x='year', y='wgt', kind='scatter')
# create a quick bar chart
species_count = surveys_df.groupby('species')['record_id'].count()
species_count.plot(kind='bar')
# We can also look at how many animals were captured in each plot:
total_count = surveys_df.groupby('plot')['record_id'].nunique()
# let's plot that too, default is a line plot
total_count.plot(kind='bar')
###Output
_____no_output_____
###Markdown
Challenge Activities1. Create a plot of average weight across all species per plot. x-axis = plot, y-axis = wgt2. Create the same plot, but with average weight for each sex per plot. Hint, you will need to `unstack` when plotting. x-axis = plot, y-axis = wgt, different lines for each sex.3. Create a trend plot of the average weight per plot over time. x-axis = year, y-axis = wgt, different lines for each plot.
###Code
# group by plot and calculate mean wgt
avrg_wgt = surveys_df.groupby('plot')['wgt'].mean()
# let's plot, you should see x-axis -> plot, y-axis -> wgt
avrg_wgt.plot()
# group by plot and sex, then calculate mean wgt
avrg_wgt = surveys_df.groupby(['plot','sex'])['wgt'].mean()
# let's plot, you should see x-axis -> plot, y-axis -> wgt, different lines for sex
# you need to use the .unstack() method before the .plot() for this to work
avrg_wgt.unstack().plot()
avrg_wgt.unstack()
# group by year and plot, then calculate mean wgt
wgt_by_time = surveys_df[surveys_df['plot']<5].groupby(['year','plot'])['wgt'].mean()
# let's plot, you should see x-axis -> year, y-axis -> wgt, different lines for plot
# you need to use the .unstack() method before the .plot() for this to work
wgt_by_time.unstack().plot()
###Output
_____no_output_____
###Markdown
Indexing & Slicing in PythonWe often want to work with subsets of a **DataFrame** object. There aredifferent ways to accomplish this including: using labels (ie, column headings - as used previously),numeric ranges or specific x,y index locations. Extracting Range based Subsets: Slicing**REMINDER**: Python Uses 0-based IndexingLet's remind ourselves that Python uses 0-basedindexing. This means that the first element in an object is located at position0. This is different from other tools like R and Matlab that index elementswithin objects starting at 1. Challenges```python Create a list of numbers:a = [1,2,3,4,5]```1. What value does the code below return? a[0]2. How about this: a[5]3. Or this? a[len(a)]4. In the example above, calling `a[5]` returns an error. Why is that?
###Code
a = [1,2,3,4,5]
a[0]
a[5]
a[len(a)]
a[-2]
###Output
_____no_output_____
###Markdown
Slicing Subsets of Rows in PythonSlicing using the `[]` operator selects a set of rows and/or columns from aDataFrame. To slice out a set of rows, you use the following syntax:`data[start:stop]`. When slicing in pandas the start bound is included in theoutput. The stop bound is one step BEYOND the row you want to select. So if youwant to select rows 0, 1 and 2 your code would look like this:```python select rows 0,1,2 (but not 3)surveys_df[0:3]```The stop bound in Python is different from what you might be used to inlanguages like Matlab and R.```python select the first, second and third rows from the surveys variablesurveys_df[0:3] select the first 5 rows (rows 0,1,2,3,4)surveys_df[:5] select the last element in the listsurveys_df[-1:]```
###Code
surveys_df[0:3]
surveys_df[:5]
surveys_df[-1:]
surveys_df['plot'][[0,4,7]]
surveys_df[['plot','wgt']][0:3]
###Output
_____no_output_____
###Markdown
We can also reassign values within subsets of our DataFrame. But before we do that, let's make a copy of our DataFrame so as not to modify our original imported data. ```python copy the surveys dataframe so we don't modify the original DataFramesurveys_copy = surveys_df set the first three rows of data in the DataFrame to 0surveys_copy[0:3] = 0```Next, try the following code: ```pythonsurveys_copy.head()surveys_df.head()```What is the difference between the two data frames?
###Code
surveys_df.head()
# copy the surveys dataframe so we don't modify the original DataFrame
surveys_copy = surveys_df
# set the first three rows of data in the DataFrame to 0
surveys_copy[0:3] = 0
print(surveys_copy.head())
surveys_df.head()
###Output
_____no_output_____
###Markdown
Referencing Objects vs Copying Objects in PythonWe might have thought that we were creating a fresh copy of the `surveys_df` objects when we used the code `surveys_copy = surveys_df`. However the statement y = x doesn’t create a copy of our DataFrame. It creates a new variable y that refers to the **same** object x refers to. This means that there is only one object (the DataFrame), and both x and y refer to it. So when we assign the first 3 columns the value of 0 using the `surveys_copy` DataFrame, the `surveys_df` DataFrame is modified too. To create a fresh copy of the `surveys_df`DataFrame we use the syntax y=x.copy(). But before we have to read the surveys_df again because the current version contains the unintentional changes made to the first 3 columns.```pythonsurveys_df = pd.read_csv("data/surveys.csv")surveys_copy= surveys_df.copy()```
###Code
surveys_df = pd.read_csv("surveys.csv")
surveys_copy= surveys_df.copy()
surveys_df.head()
surveys_copy.head()
# set the first three rows of data in the DataFrame to 0
surveys_copy[0:3] = 0
surveys_df.head()
surveys_copy.head()
###Output
_____no_output_____
###Markdown
Slicing Subsets of Rows and Columns in Python PandasWe can select specific ranges of our data in both the row and column directionsusing either label or integer-based indexing.- `loc`: indexing via *labels* (which can be integers)- `iloc`: indexing via *integers*- To select a subset of rows AND columns from our DataFrame, we can use the `iloc`method. For example, we can select month, day and year (columns 2, 3 and 4 if westart counting at 1), like this:```pythonsurveys_df.iloc[0:3, 1:4]```
###Code
surveys_df.head()
surveys_df.iloc[0:3, 1:4]
###Output
_____no_output_____
###Markdown
Notice that we asked for a slice from 0:3. This yielded 3 rows of data. When youask for 0:3, you are actually telling python to start at index 0 and select rows0, 1, 2 **up to but not including 3**.Let's next explore some other ways to index and select subsets of data:
###Code
# select all columns for rows of index values 0 and 10
surveys_df.loc[[0, 10], :]
# what does this do?
surveys_df.loc[0:4, 'plot' : 'wgt']
# What happens when you type the code below?
surveys_df.iloc[[0, 10, 45549], :]
###Output
_____no_output_____
###Markdown
NOTE: Labels must be found in the DataFrame or you will get a `KeyError`. Thestart bound and the stop bound are **included**. When using `loc`, integers*can* also be used, but they refer to the **index label** and not the position. Thuswhen you use `loc`, and select 1:4, you will get a different result than using`iloc` to select rows 1:4.We can also select a specific data value according to the specific row andcolumn location within the data frame using the `iloc` function:`dat.iloc[row,column]`.```pythonsurveys_df.iloc[2,6]```which gives **output**```'F'```Remember that Python indexing begins at 0. So, the index location [2, 6] selectsthe element that is 3 rows down and 7 columns over in the DataFrame. Challenge Activities1. What happens when you type: - surveys_df[0:3] - surveys_df[:5] - surveys_df[-1:]2. What happens when you call: - `surveys_df.iloc[0:4, 1:4]` - `surveys_df.loc[0:4, 1:4]` - How are the two commands different?
###Code
surveys_df.iloc[0:4, 1:4]
surveys_df.loc[0:4, 'month':'year']
###Output
_____no_output_____
###Markdown
Using MasksA mask can be useful to locate where a particular subset of values exist ordon't exist - for example, NaN, or "Not a Number" values. To understand masks,we also need to understand `BOOLEAN` objects in python.Boolean values include `true` or `false`. So for example```python set x to 5x = 5 what does the code below return?x > 5 how about this?x == 5```
###Code
x=5
x>5
x == 5
%whos
a = [1,2,3,4]
avrg_wgt > 2
plot_names > 5
###Output
_____no_output_____
###Markdown
When we ask python what the value of `x > 5` is, we get `False`. This is because xis not greater than 5 it is equal to 5. To create a boolean mask, you first create theTrue / False criteria (e.g. values > 5 = True). Python will then assess eachvalue in the object to determine whether the value meets the criteria (True) ornot (False). Python creates an output object that is the same shape asthe original object, but with a True or False value for each index location. Logical evaluatorsYou can use the syntax below when querying data from a DataFrame. Experimentwith selecting various subsets of the "surveys" data.* Equals: `==`* Not equals: `!=`* Greater than, less than: `>` or `<`* Greater than or equal to `>=`* Less than or equal to `<=`Let's try this out. Let's identify all locations in the survey data that havenull (missing or NaN) data values. We can use the `isnull` method to do this.Each cell with a null value will be assigned a value of `True` in the newboolean object.
###Code
pd.isnull(surveys_df)
###Output
_____no_output_____
###Markdown
To select the rows where there are null values, we can use the mask as an index to subset our data as follows:```pythonTo select just the rows with NaN values, we can use the .any methodsurveys_df[pd.isnull(surveys_df).any(axis=1)]```Note that there are many null or NaN values in the `wgt` column of our DataFrame.We will explore different ways of dealing with these in Lesson 03.We can run `isnull` on a particular column too. What does the code below do?```python what does this do?empty_weights = surveys_df[pd.isnull(surveys_df).any(axis=1)]['wgt']```Let's take a minute to look at the statement above. We are using the Boolean object as an index. We are asking python to select rows that have a `NaN` valuefor weight.
###Code
empty_weights = surveys_df[pd.isnull(surveys_df).any(axis=1)]['wgt']
empty_weights.describe()
surveys_df[pd.isnull(surveys_df).any(axis='columns')]
###Output
_____no_output_____
###Markdown
---We can also select a subset of our data using criteria. For example, we canselect all rows that have a year value of 2002.
###Code
surveys_df[surveys_df['year'] == 2002]
#Or we can select all rows that do not contain the year 2002.
surveys_df[surveys_df['year'] != 2002]
#We can define sets of criteria too:
surveys_df[(surveys_df['year'] >= 1980) & (surveys_df['year'] <= 1985)]
###Output
_____no_output_____
###Markdown
Challenge Activities1. Select a subset of rows in the `surveys_df` DataFrame that contain data from the year 1999 and that contain weight values less than or equal to 8. How many rows did you end up with? What did your neighbor get?2. You can use the `isin` command in python to query a DataFrame based upon a list of values as follows: `surveys_df[surveys_df['species'].isin([listGoesHere])]`. Use the `isin` function to find all plots that contain particular species in the surveys DataFrame. How many records contain these values?3. Experiment with other queries. Create a query that finds all rows with a weight value > or equal to 0.4. The `~` symbol in Python can be used to return the OPPOSITE of the selection that you specify in python. It is equivalent to **is not in**. Write a query that selects all rows that are NOT equal to 'M' or 'F' in the surveysdata.
###Code
surveys_df[(surveys_df['year'] == 1999) & (surveys_df['wgt']<=8)]
surveys_df['species'].unique()
# number of unique plot id where species are found
surveys_df[surveys_df['species'].isin(['SH','UL','CT'])]['plot'].nunique()
# total number of records where species are found
surveys_df[surveys_df['species'].isin(['SH','UL','CT'])]['plot'].count()
surveys_df[surveys_df['wgt']>=0]
surveys_df[~surveys_df['sex'].isin(['F','M'])]
###Output
_____no_output_____
###Markdown
Concatenating We can use the `concat` function in Pandas to append either columns or rows from one DataFrame to another. Let's grab two subsets of our data to see how this works.
###Code
# read in first 10 lines of surveys table
survey_sub=surveys_df.head(10)
survey_sub
# grab the last 10 rows (minus the last one)
survey_sub_last10 = surveys_df[-11:-1]
survey_sub_last10
# reset the index values to the second dataframe appends properly
# drop=True option avoids adding new index column with
# old index values
survey_sub_last10 = survey_sub_last10.reset_index(drop=True)
survey_sub_last10
###Output
_____no_output_____
###Markdown
When we concatenate DataFrames, we need to specify the axis. `axis=0` tellsPandas to stack the second DataFrame under the first one. It will automaticallydetect whether the column names are the same and will stack accordingly.`axis=1` will stack the columns in the second DataFrame to the RIGHT of thefirst DataFrame. To stack the data vertically, we need to make sure we have thesame columns and associated column format in both datasets. When we stackhorizonally, we want to make sure what we are doing makes sense (ie the data arerelated in some way).
###Code
# stack the DataFrames on top of each other
vertical_stack = pd.concat([survey_sub, survey_sub_last10], axis = 0)
vertical_stack
# place the DataFrames side by side
horizontal_stack = pd.concat([survey_sub, survey_sub_last10], axis = 1)
horizontal_stack
horizontal_stack['wgt']
###Output
_____no_output_____
###Markdown
Notice anything unusual about the `vertical_stack`? The row indexes for the two data frames `survey_sub` and `survey_sub_last10` have been repeated. We can reindex the new dataframe using the `reset_index()` method.
###Code
vertical_stack = vertical_stack.reset_index()
vertical_stack
###Output
_____no_output_____
###Markdown
Writing Out Data to CSV We can use the `to_csv` command to export a DataFrame in CSV format. Note that the code below will by default save the data into the current working directory. We can save it to a different folder by adding the foldername and a slash to the file: `vertical_stack.to_csv('foldername/out.csv')`. ```python Write DataFrame to CSV vertical_stack.to_csv('data/out.csv')``` Check out your working directory to make sure the CSV wrote out properly, and that you can open it! If you want, try to bring it back into python to make sure it imports properly. ```python let's read our output back into python and make sure all looks good new_output = pd.read_csv('data/out.csv', keep_default_na=False, na_values=[""])```
###Code
vertical_stack.to_csv('../out.csv')
###Output
_____no_output_____ |
notebooks/Learning-Spark/Pandas-UDFs.ipynb | ###Markdown
Pandas UDFs
###Code
import pandas as pd
from pyspark.sql.functions import pandas_udf
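# Note (added): the `spark` variable used below assumes an already-running
# SparkSession (e.g. Databricks or the pyspark shell). When running this as a
# standalone script, one can be created explicitly; getOrCreate() is a no-op if
# a session already exists.
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("pandas-udfs-demo").getOrCreate()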
@pandas_udf("long")
def pandas_plus_one(v: pd.Series) -> pd.Series:
return v + 1
df = spark.range(3)
df.withColumn("plus_one", pandas_plus_one("id")).show()
from typing import Iterator
@pandas_udf('long')
def pandas_plus_one(iterator: Iterator[pd.Series]) -> Iterator[pd.Series]:
return map(lambda s: s + 1, iterator)
df.withColumn("plus_one", pandas_plus_one("id")).show()
def pandas_filter(
iterator: Iterator[pd.DataFrame]) -> Iterator[pd.DataFrame]:
for pdf in iterator:
yield pdf[pdf.id == 1]
df.mapInPandas(pandas_filter, schema=df.schema).show()
df1 = spark.createDataFrame(
[(1201, 1, 1.0), (1201, 2, 2.0), (1202, 1, 3.0), (1202, 2, 4.0)],
("time", "id", "v1"))
df2 = spark.createDataFrame(
[(1201, 1, "x"), (1201, 2, "y")], ("time", "id", "v2"))
def asof_join(left: pd.DataFrame, right: pd.DataFrame) -> pd.DataFrame:
return pd.merge_asof(left, right, on="time", by="id")
df1.groupby("id").cogroup(
df2.groupby("id")
).applyInPandas(asof_join, "time int, id int, v1 double, v2 string").show()
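# A related pattern (added sketch): plain grouped-map applyInPandas on a single
# DataFrame, running a pandas function independently on each group. The
# mean-subtraction logic is illustrative only.
def subtract_mean(pdf: pd.DataFrame) -> pd.DataFrame:
    return pdf.assign(v1=pdf.v1 - pdf.v1.mean())

df1.groupby("id").applyInPandas(subtract_mean, schema="time long, id long, v1 double").show()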
###Output
_____no_output_____ |
walmart_type_classitype.ipynb | ###Markdown
TripType == 31- WIRELESS- Saturday comes out on top, but...- 9998
###Code
df[df["TripType"] == 31].DepartmentDescription.value_counts()
(df[df["TripType"] == 31].DepartmentDescription.value_counts()/df.DepartmentDescription.value_counts()).sort_values(ascending=False)
df[df["TripType"] == 31].Weekday.value_counts()
(df[df["TripType"] == 31].Weekday.value_counts()/df.Weekday.value_counts()).sort_values(ascending=False)
df[df["TripType"] == 31].FinelineNumber.value_counts()
(df[df["TripType"] == 31].FinelineNumber.value_counts()/df.FinelineNumber.value_counts()).sort_values(ascending=False)
df[df["TripType"] == 31].ScanCount.value_counts()
(df[df["TripType"] == 31].ScanCount.value_counts()/df.ScanCount.value_counts()).sort_values(ascending=False).head()
###Output
_____no_output_____
###Markdown
TripType == 32- INFANT CONSUMABLE HARDLINES, INFANT APPAREL- 3175
###Code
df[df["TripType"] == 32].DepartmentDescription.value_counts()
(df[df["TripType"] == 32].DepartmentDescription.value_counts()/df.DepartmentDescription.value_counts()).sort_values(ascending=False)
df[df["TripType"] == 32].Weekday.value_counts()
(df[df["TripType"] == 32].Weekday.value_counts()/df.Weekday.value_counts()).sort_values(ascending=False)
df[df["TripType"] == 32].FinelineNumber.value_counts()
df[df["TripType"] == 32].ScanCount.value_counts()
###Output
_____no_output_____
###Markdown
TripType == 33- HOUSEHOLD CHEMICALS/SUPP, HOUSEHOLD PAPER GOODS (household chemicals)- 8945 (almost meaningless, though..)
###Code
df[df["TripType"] == 33].DepartmentDescription.value_counts()
(df[df["TripType"] == 33].DepartmentDescription.value_counts()/df.DepartmentDescription.value_counts()).sort_values(ascending=False)
df[df["TripType"] == 33].Weekday.value_counts()
(df[df["TripType"] == 33].Weekday.value_counts()/df.Weekday.value_counts()).sort_values(ascending=False)
df[df["TripType"] == 33].FinelineNumber.value_counts()
(df[df["TripType"] == 33].FinelineNumber.value_counts()/df.FinelineNumber.value_counts()).sort_values(ascending=False)
df[df["TripType"] == 33].ScanCount.value_counts()
(df[df["TripType"] == 33].ScanCount.value_counts()/df.ScanCount.value_counts()).sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
TripType == 34- PETS AND SUPPLIES- Saturday is dominant- 1946 (this one too.. almost meaningless)
###Code
df[df["TripType"] == 34].DepartmentDescription.value_counts()
(df[df["TripType"] == 34].DepartmentDescription.value_counts()/df.DepartmentDescription.value_counts()).sort_values(ascending=False)
df[df["TripType"] == 34].Weekday.value_counts()
(df[df["TripType"] == 34].Weekday.value_counts()/df.Weekday.value_counts()).sort_values(ascending=False)
df[df["TripType"] == 34].FinelineNumber.value_counts()
(df[df["TripType"] == 34].FinelineNumber.value_counts()/df.FinelineNumber.value_counts()).sort_values(ascending=False)
df[df["TripType"] == 34].ScanCount.value_counts()
(df[df["TripType"] == 34].ScanCount.value_counts()/df.ScanCount.value_counts()).sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
TripType == 35- DSD GROCERY- 808 (too)
###Code
df[df["TripType"] == 35].DepartmentDescription.value_counts()
(df[df["TripType"] == 35].DepartmentDescription.value_counts()/df.DepartmentDescription.value_counts()).sort_values(ascending=False)
df[df["TripType"] == 35].Weekday.value_counts()
(df[df["TripType"] == 35].Weekday.value_counts()/df.Weekday.value_counts()).sort_values(ascending=False)
df[df["TripType"] == 35].FinelineNumber.value_counts()
(df[df["TripType"] == 35].FinelineNumber.value_counts()/df.FinelineNumber.value_counts()).sort_values(ascending=False)
df[df["TripType"] == 35].ScanCount.value_counts()
(df[df["TripType"] == 35].ScanCount.value_counts()/df.ScanCount.value_counts()).sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
TripType == 36- PERSONAL CARE (roughly personal-care items)- Saturday beats Sunday- 203 (ambiguous..)
###Code
df[df["TripType"] == 36].DepartmentDescription.value_counts()
(df[df["TripType"] == 36].DepartmentDescription.value_counts()/df.DepartmentDescription.value_counts()).sort_values(ascending=False)
df[df["TripType"] == 36].Weekday.value_counts()
df[df["TripType"] == 36].FinelineNumber.value_counts()
df[df["TripType"] == 36].ScanCount.value_counts()
###Output
_____no_output_____
###Markdown
TripType == 37- PRODUCE- 5501 (dominant)
###Code
df[df["TripType"] == 37].DepartmentDescription.value_counts()
df[df["TripType"] == 37].Weekday.value_counts()
df[df["TripType"] == 37].FinelineNumber.value_counts()
df[df["TripType"] == 37].ScanCount.value_counts()
###Output
_____no_output_____
###Markdown
TripType == 38- DAIRY, GROCERY DRY GOODS- Monday stands out- 1508 (about double)
###Code
df[df["TripType"] == 38].DepartmentDescription.value_counts()
df[df["TripType"] == 38].Weekday.value_counts()
df[df["TripType"] == 38].FinelineNumber.value_counts()
df[df["TripType"] == 38].ScanCount.value_counts()
###Output
_____no_output_____
###Markdown
TripType == 39- DSD GROCERY, GROCERY DRY GOODS (almost no difference between the two)- 5501 (almost double)
###Code
df[df["TripType"] == 39].DepartmentDescription.value_counts()
df[df["TripType"] == 39].Weekday.value_counts()
df[df["TripType"] == 39].FinelineNumber.value_counts()
df[df["TripType"] == 39].ScanCount.value_counts()
###Output
_____no_output_____
###Markdown
TripType == 40- GROCERY DRY GOODS, DSD GROCERY (about a 10,000 gap, but both are large)- 5501
###Code
df[df["TripType"] == 40].DepartmentDescription.value_counts()
df[df["TripType"] == 40].Weekday.value_counts()
df[df["TripType"] == 40].FinelineNumber.value_counts()
df[df["TripType"] == 40].ScanCount.value_counts()
###Output
_____no_output_____
###Markdown
TripType == 41 - one more look...- SHOES, IMPULSE MERCHANDISE, PERSONAL CARE, DSD GROCERY... (differences are too small)- 135 (not meaningful)
###Code
df[df["TripType"] == 41].DepartmentDescription.value_counts()
df[df["TripType"] == 41].Weekday.value_counts()
df[df["TripType"] == 41].FinelineNumber.value_counts()
df[df["TripType"] == 41].ScanCount.value_counts()
###Output
_____no_output_____
###Markdown
TripType == 42- IMPULSE MERCHANDISE, CELEBRATION (difference is not large)- 1805 (not meaningful)
###Code
df[df["TripType"] == 42].DepartmentDescription.value_counts()
df[df["TripType"] == 42].Weekday.value_counts()
df[df["TripType"] == 42].FinelineNumber.value_counts()
df[df["TripType"] == 42].ScanCount.value_counts()
###Output
_____no_output_____
###Markdown
TripType == 43 - One More....- PERSONAL CARE, DSD GROCERY, IMPULSE MERCHANDISE (all three exceed 400)- 0 (almost meaningless)
###Code
df[df["TripType"] == 43].DepartmentDescription.value_counts()
df[df["TripType"] == 43].Weekday.value_counts()
df[df["TripType"] == 43].FinelineNumber.value_counts()
df[df["TripType"] == 43].ScanCount.value_counts()
###Output
_____no_output_____
###Markdown
TripType == 44- PERSONAL CARE, DSD GROCERY, GROCERY DRY GOODS (the first is the largest at 1,857, but the rest also exceed 1,000)- 0 (not meaningful)
###Code
df[df["TripType"] == 44].DepartmentDescription.value_counts()
df[df["TripType"] == 44].Weekday.value_counts()
df[df["TripType"] == 44].FinelineNumber.value_counts()
df[df["TripType"] == 44].ScanCount.value_counts()
###Output
_____no_output_____
###Markdown
TripType == 999- FINANCIAL SERVICES, IMPULSE MERCHANDISE (the top two at 2,289 and 1,147)- 279 (since this is the 'others' type, perhaps it was simply not broken down any further)
###Code
df[df["TripType"] == 999].DepartmentDescription.value_counts()
df[df["TripType"] == 999].Weekday.value_counts()
df[df["TripType"] == 999].FinelineNumber.value_counts()
df[df["TripType"] == 999].ScanCount.value_counts()
###Output
_____no_output_____ |
source/generate_input_data.ipynb | ###Markdown
Generate encoded treesHere, we generate encoded trees by converting each HTML document to an encoded tree. The function accepts a _max\_depth_ argument that caps the encoding so that both the maximum depth and the maximum branching factor (i.e. the maximum number of children per node) are at most _max\_depth_.
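The internals of `tree_lib.add_encoded_trees_to_dataframe` are not shown in this notebook, so as a rough illustration of the depth/branching cap described above, here is a minimal sketch that prunes an HTML tree with BeautifulSoup. The function name, the use of BeautifulSoup, and the exact pruning rule are assumptions for illustration only, not tree_lib's real logic.

from bs4 import BeautifulSoup

def prune_html(html, max_depth=4, max_branches=4):
    # Illustrative sketch (assumed behaviour): keep at most `max_branches` children
    # per node and drop everything deeper than `max_depth` levels.
    soup = BeautifulSoup(html, 'html.parser')

    def prune(node, depth):
        children = node.find_all(recursive=False)
        for extra in children[max_branches:]:
            extra.decompose()                      # enforce the branching cap
        if depth >= max_depth:
            for child in node.find_all(recursive=False):
                child.decompose()                  # enforce the depth cap
        else:
            for child in node.find_all(recursive=False):
                prune(child, depth + 1)

    prune(soup, 1)                                 # treat the document root as depth 1
    return str(soup)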
###Code
depth = 4
label_colname='stars'
is_multicase = True if label_colname == 'categories' else False
test = tree_lib.add_encoded_trees_to_dataframe(test, label_colname=label_colname, max_depth=depth, is_multicase=is_multicase)
cnt = 0
for _, entry in test.iterrows():
if cnt == 10: break
cnt += 1
print(entry['encoded_tree'][-100:], entry['url'])
df = tree_lib.add_encoded_trees_to_dataframe(df, label_colname=label_colname, max_depth=depth, is_multicase=is_multicase)
###Output
_____no_output_____
###Markdown
Mean wordcounts per leafHere, we find the average number of words per leaf across all trees in our input data.
###Code
sum, n = 0, 0
for tree_str in df['encoded_tree']:
wc, nl = tree_lib.median_wordcounts(tree_str)
if nl > 0:
sum += wc / nl
n += 1
print(f'Median words per leaf = {sum / n}')
for itr in range(5):
df = df.sample(frac=1).reset_index(drop=True)
train, test = utils.get_train_test(df, split_size_ratio=0.7)
directory = os.path.dirname(f'/home/vahidsanei_google_com/data/yelp_data/oversampled_depth={depth}/{label_colname}/shuffle{itr}/')
if not os.path.exists(directory):
os.makedirs(directory)
train = utils.oversampling(train, col_name=label_colname)
train.to_csv(os.path.join(directory, 'train.csv'), index=False)
test.to_csv(os.path.join(directory, 'test.csv'), index=False)
cnt = 0
for tree_str in df['encoded_tree']:
cnt += 1
if cnt == 10: break
tree = tree_lib.Tree(tree_str)
print(tree.label, tree_str[-5:-1])
sum_d, sum_b = 0, 0
for html in df['webpage_text']:
d, b = tree_lib.get_html_depth_branches(html)
sum_d += d
sum_b += b
n = len(df['webpage_text'])
print(f'Avg max depth = {sum_d / n}, Avg max branches = {sum_b / n}')
###Output
Avg max depth = 18.53236971988975, Avg max branches = 43.1758541119159
###Markdown
Oversampling with limitWe can pass a parameter _limit_ to the function __utils.oversampling__ to put a constraint on the number of entries kept for each class. We can think of it as a sort of combination of oversampling and downsampling.
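For reference, here is a minimal sketch of what an oversample-with-limit routine such as __utils.oversampling__ might look like. The real implementation in `utils` is not shown in this notebook, so treat every detail below as an assumption: each class is resampled to `limit` rows, which upsamples rare classes with replacement and truncates classes that already exceed the limit.

import pandas as pd

def oversample_with_limit(df, col_name, limit, seed=0):
    # Resample every class to exactly `limit` rows:
    # small classes are sampled with replacement (oversampling),
    # large classes are cut down to `limit` rows (downsampling).
    parts = []
    for _, group in df.groupby(col_name):
        parts.append(group.sample(n=limit, replace=len(group) < limit, random_state=seed))
    return pd.concat(parts).reset_index(drop=True)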
###Code
limit=2500
for itr in range(5):
df = df.sample(frac=1).reset_index(drop=True)
train, test = utils.get_train_test(df, split_size_ratio=0.8)
directory = os.path.dirname(f'/home/vahidsanei_google_com/data/yelp_data/oversampled_limit={limit}/{label_colname}/shuffle{itr}/')
if not os.path.exists(directory):
os.makedirs(directory)
train = utils.oversampling(train, col_name=label_colname, limit=limit)
train.to_csv(os.path.join(directory, 'train.csv'), index=False)
test.to_csv(os.path.join(directory, 'test.csv'), index=False)
###Output
Max count = 7173 Min Count = 548
Max count = 7246 Min Count = 550
Max count = 7150 Min Count = 544
Max count = 7210 Min Count = 565
Max count = 7196 Min Count = 548
|
code/model_zoo/pytorch_ipynb/mlp-fromscratch__sigmoid-mse.ipynb | ###Markdown
*Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python" by [Sebastian Raschka](https://sebastianraschka.com). All code examples are released under the [MIT license](https://github.com/rasbt/deep-learning-book/blob/master/LICENSE). If you find this content useful, please consider supporting the work by buying a [copy of the book](https://leanpub.com/ann-and-deeplearning).* Other code examples and content are available on [GitHub](https://github.com/rasbt/deep-learning-book). The PDF and ebook versions of the book are available through [Leanpub](https://leanpub.com/ann-and-deeplearning).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
###Output
Sebastian Raschka
CPython 3.6.8
IPython 7.2.0
torch 1.0.0
###Markdown
Model Zoo -- Multilayer Perceptron From Scratch (Sigmoid activation, MSE Loss) Implementation of a 1-hidden layer multi-layer perceptron from scratch using- sigmoid activation in the hidden layer- sigmoid activation in the output layer- Mean Squared Error loss function Imports
###Code
import matplotlib.pyplot as plt
import pandas as pd
import torch
%matplotlib inline
import time
import numpy as np
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
import torch.nn.functional as F
import torch
###Output
_____no_output_____
###Markdown
Settings and Dataset
###Code
##########################
### SETTINGS
##########################
RANDOM_SEED = 1
BATCH_SIZE = 100
NUM_EPOCHS = 50
##########################
### MNIST DATASET
##########################
# Note transforms.ToTensor() scales input images
# to 0-1 range
train_dataset = datasets.MNIST(root='data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = datasets.MNIST(root='data',
train=False,
transform=transforms.ToTensor())
train_loader = DataLoader(dataset=train_dataset,
batch_size=BATCH_SIZE,
shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
batch_size=BATCH_SIZE,
shuffle=False)
# Checking the dataset
for images, labels in train_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
break
###Output
Image batch dimensions: torch.Size([100, 1, 28, 28])
Image label dimensions: torch.Size([100])
###Markdown
Model Implementation
###Code
##########################
### MODEL
##########################
class MultilayerPerceptron():
def __init__(self, num_features, num_hidden, num_classes):
super(MultilayerPerceptron, self).__init__()
self.num_classes = num_classes
# hidden 1
self.weight_1 = torch.zeros(num_hidden, num_features,
dtype=torch.float).normal_(0.0, 0.1)
self.bias_1 = torch.zeros(num_hidden, dtype=torch.float)
# output
self.weight_o = torch.zeros(self.num_classes, num_hidden,
dtype=torch.float).normal_(0.0, 0.1)
self.bias_o = torch.zeros(self.num_classes, dtype=torch.float)
def forward(self, x):
# hidden 1
# input dim: [n_hidden, n_features] dot [n_features, n_examples] .T
# output dim: [n_examples, n_hidden]
z_1 = torch.mm(x, self.weight_1.t()) + self.bias_1
a_1 = torch.sigmoid(z_1)
# output layer
# input dim: [n_classes, n_hidden] dot [n_hidden, n_examples] .T
# output dim: [n_examples, n_classes]
z_2 = torch.mm(a_1, self.weight_o.t()) + self.bias_o
a_2 = torch.sigmoid(z_2)
return a_1, a_2
def backward(self, x, a_1, a_2, y):
#########################
### Output layer weights
#########################
# onehot encoding
y_onehot = torch.FloatTensor(y.size(0), self.num_classes)
y_onehot.zero_()
y_onehot.scatter_(1, y.view(-1, 1).long(), 1)
# Part 1: dLoss/dOutWeights
## = dLoss/dOutAct * dOutAct/dOutNet * dOutNet/dOutWeight
## where DeltaOut = dLoss/dOutAct * dOutAct/dOutNet
## for convenient re-use
# input/output dim: [n_examples, n_classes]
dloss_da2 = 2.*(a_2 - y_onehot) / y.size(0)
# input/output dim: [n_examples, n_classes]
da2_dz2 = a_2 * (1. - a_2) # sigmoid derivative
# output dim: [n_examples, n_classes]
delta_out = dloss_da2 * da2_dz2 # "delta (rule) placeholder"
# gradient for output weights
# [n_examples, n_hidden]
dz2__dw_out = a_1
# input dim: [n_classlabels, n_examples] dot [n_examples, n_hidden]
# output dim: [n_classlabels, n_hidden]
dloss__dw_out = torch.mm(delta_out.t(), dz2__dw_out)
dloss__db_out = torch.sum(delta_out, dim=0)
#################################
# Part 2: dLoss/dHiddenWeights
## = DeltaOut * dOutNet/dHiddenAct * dHiddenAct/dHiddenNet * dHiddenNet/dWeight
# [n_classes, n_hidden]
dz2__a1 = self.weight_o
# output dim: [n_examples, n_hidden]
dloss_a1 = torch.mm(delta_out, dz2__a1)
# [n_examples, n_hidden]
da1__dz1 = a_1 * (1. - a_1) # sigmoid derivative
# [n_examples, n_features]
dz1__dw1 = x
# output dim: [n_hidden, n_features]
dloss_dw1 = torch.mm((dloss_a1 * da1__dz1).t(), dz1__dw1)
dloss_db1 = torch.sum((dloss_a1 * da1__dz1), dim=0)
return dloss__dw_out, dloss__db_out, dloss_dw1, dloss_db1
###Output
_____no_output_____
###Markdown
Training
###Code
####################################################
##### Training and evaluation wrappers
###################################################
def to_onehot(y, num_classes):
y_onehot = torch.FloatTensor(y.size(0), num_classes)
y_onehot.zero_()
y_onehot.scatter_(1, y.view(-1, 1).long(), 1).float()
return y_onehot
def loss_func(targets_onehot, probas_onehot):
return torch.mean(torch.mean((targets_onehot - probas_onehot)**2, dim=0))
def compute_mse(net, data_loader):
curr_mse, num_examples = torch.zeros(model.num_classes).float(), 0
with torch.no_grad():
for features, targets in data_loader:
features = features.view(-1, 28*28)
logits, probas = net.forward(features)
y_onehot = to_onehot(targets, model.num_classes)
loss = torch.sum((y_onehot - probas)**2, dim=0)
num_examples += targets.size(0)
curr_mse += loss
curr_mse = torch.mean(curr_mse/num_examples, dim=0)
return curr_mse
def train(model, data_loader, num_epochs,
learning_rate=0.1):
minibatch_cost = []
epoch_cost = []
for e in range(num_epochs):
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.view(-1, 28*28)
#### Compute outputs ####
a_1, a_2 = model.forward(features)
#### Compute gradients ####
dloss__dw_out, dloss__db_out, dloss_dw1, dloss_db1 = \
model.backward(features, a_1, a_2, targets)
#### Update weights ####
model.weight_1 -= learning_rate * dloss_dw1
model.bias_1 -= learning_rate * dloss_db1
model.weight_o -= learning_rate * dloss__dw_out
model.bias_o -= learning_rate * dloss__db_out
#### Logging ####
curr_cost = loss_func(to_onehot(targets, model.num_classes), a_2)
minibatch_cost.append(curr_cost)
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(e+1, NUM_EPOCHS, batch_idx,
len(train_loader), curr_cost))
#### Logging ####
curr_cost = compute_mse(model, train_loader)
epoch_cost.append(curr_cost)
print('Epoch: %03d/%03d |' % (e+1, NUM_EPOCHS), end="")
print(' Train MSE: %.5f' % curr_cost)
return minibatch_cost, epoch_cost
####################################################
##### Training
###################################################
torch.manual_seed(RANDOM_SEED)
model = MultilayerPerceptron(num_features=28*28,
num_hidden=50,
num_classes=10)
minibatch_cost, epoch_cost = train(model,
train_loader,
num_epochs=NUM_EPOCHS,
learning_rate=0.1)
###Output
Epoch: 001/050 | Batch 000/600 | Cost: 0.2471
Epoch: 001/050 | Batch 050/600 | Cost: 0.0885
Epoch: 001/050 | Batch 100/600 | Cost: 0.0880
Epoch: 001/050 | Batch 150/600 | Cost: 0.0877
Epoch: 001/050 | Batch 200/600 | Cost: 0.0847
Epoch: 001/050 | Batch 250/600 | Cost: 0.0838
Epoch: 001/050 | Batch 300/600 | Cost: 0.0808
Epoch: 001/050 | Batch 350/600 | Cost: 0.0801
Epoch: 001/050 | Batch 400/600 | Cost: 0.0766
Epoch: 001/050 | Batch 450/600 | Cost: 0.0740
Epoch: 001/050 | Batch 500/600 | Cost: 0.0730
Epoch: 001/050 | Batch 550/600 | Cost: 0.0730
Epoch: 001/050 | Train MSE: 0.06566
Epoch: 002/050 | Batch 000/600 | Cost: 0.0644
Epoch: 002/050 | Batch 050/600 | Cost: 0.0637
Epoch: 002/050 | Batch 100/600 | Cost: 0.0600
Epoch: 002/050 | Batch 150/600 | Cost: 0.0580
Epoch: 002/050 | Batch 200/600 | Cost: 0.0541
Epoch: 002/050 | Batch 250/600 | Cost: 0.0546
Epoch: 002/050 | Batch 300/600 | Cost: 0.0547
Epoch: 002/050 | Batch 350/600 | Cost: 0.0488
Epoch: 002/050 | Batch 400/600 | Cost: 0.0515
Epoch: 002/050 | Batch 450/600 | Cost: 0.0476
Epoch: 002/050 | Batch 500/600 | Cost: 0.0486
Epoch: 002/050 | Batch 550/600 | Cost: 0.0447
Epoch: 002/050 | Train MSE: 0.04302
Epoch: 003/050 | Batch 000/600 | Cost: 0.0413
Epoch: 003/050 | Batch 050/600 | Cost: 0.0404
Epoch: 003/050 | Batch 100/600 | Cost: 0.0374
Epoch: 003/050 | Batch 150/600 | Cost: 0.0351
Epoch: 003/050 | Batch 200/600 | Cost: 0.0374
Epoch: 003/050 | Batch 250/600 | Cost: 0.0371
Epoch: 003/050 | Batch 300/600 | Cost: 0.0347
Epoch: 003/050 | Batch 350/600 | Cost: 0.0359
Epoch: 003/050 | Batch 400/600 | Cost: 0.0373
Epoch: 003/050 | Batch 450/600 | Cost: 0.0305
Epoch: 003/050 | Batch 500/600 | Cost: 0.0333
Epoch: 003/050 | Batch 550/600 | Cost: 0.0318
Epoch: 003/050 | Train MSE: 0.03251
Epoch: 004/050 | Batch 000/600 | Cost: 0.0292
Epoch: 004/050 | Batch 050/600 | Cost: 0.0312
Epoch: 004/050 | Batch 100/600 | Cost: 0.0273
Epoch: 004/050 | Batch 150/600 | Cost: 0.0293
Epoch: 004/050 | Batch 200/600 | Cost: 0.0293
Epoch: 004/050 | Batch 250/600 | Cost: 0.0292
Epoch: 004/050 | Batch 300/600 | Cost: 0.0350
Epoch: 004/050 | Batch 350/600 | Cost: 0.0312
Epoch: 004/050 | Batch 400/600 | Cost: 0.0276
Epoch: 004/050 | Batch 450/600 | Cost: 0.0312
Epoch: 004/050 | Batch 500/600 | Cost: 0.0315
Epoch: 004/050 | Batch 550/600 | Cost: 0.0291
Epoch: 004/050 | Train MSE: 0.02692
Epoch: 005/050 | Batch 000/600 | Cost: 0.0282
Epoch: 005/050 | Batch 050/600 | Cost: 0.0264
Epoch: 005/050 | Batch 100/600 | Cost: 0.0231
Epoch: 005/050 | Batch 150/600 | Cost: 0.0242
Epoch: 005/050 | Batch 200/600 | Cost: 0.0260
Epoch: 005/050 | Batch 250/600 | Cost: 0.0215
Epoch: 005/050 | Batch 300/600 | Cost: 0.0294
Epoch: 005/050 | Batch 350/600 | Cost: 0.0220
Epoch: 005/050 | Batch 400/600 | Cost: 0.0246
Epoch: 005/050 | Batch 450/600 | Cost: 0.0229
Epoch: 005/050 | Batch 500/600 | Cost: 0.0289
Epoch: 005/050 | Batch 550/600 | Cost: 0.0240
Epoch: 005/050 | Train MSE: 0.02365
Epoch: 006/050 | Batch 000/600 | Cost: 0.0201
Epoch: 006/050 | Batch 050/600 | Cost: 0.0223
Epoch: 006/050 | Batch 100/600 | Cost: 0.0253
Epoch: 006/050 | Batch 150/600 | Cost: 0.0258
Epoch: 006/050 | Batch 200/600 | Cost: 0.0216
Epoch: 006/050 | Batch 250/600 | Cost: 0.0282
Epoch: 006/050 | Batch 300/600 | Cost: 0.0203
Epoch: 006/050 | Batch 350/600 | Cost: 0.0218
Epoch: 006/050 | Batch 400/600 | Cost: 0.0249
Epoch: 006/050 | Batch 450/600 | Cost: 0.0211
Epoch: 006/050 | Batch 500/600 | Cost: 0.0230
Epoch: 006/050 | Batch 550/600 | Cost: 0.0174
Epoch: 006/050 | Train MSE: 0.02156
Epoch: 007/050 | Batch 000/600 | Cost: 0.0186
Epoch: 007/050 | Batch 050/600 | Cost: 0.0223
Epoch: 007/050 | Batch 100/600 | Cost: 0.0208
Epoch: 007/050 | Batch 150/600 | Cost: 0.0231
Epoch: 007/050 | Batch 200/600 | Cost: 0.0219
Epoch: 007/050 | Batch 250/600 | Cost: 0.0206
Epoch: 007/050 | Batch 300/600 | Cost: 0.0227
Epoch: 007/050 | Batch 350/600 | Cost: 0.0249
Epoch: 007/050 | Batch 400/600 | Cost: 0.0214
Epoch: 007/050 | Batch 450/600 | Cost: 0.0203
Epoch: 007/050 | Batch 500/600 | Cost: 0.0209
Epoch: 007/050 | Batch 550/600 | Cost: 0.0160
Epoch: 007/050 | Train MSE: 0.02006
Epoch: 008/050 | Batch 000/600 | Cost: 0.0171
Epoch: 008/050 | Batch 050/600 | Cost: 0.0232
Epoch: 008/050 | Batch 100/600 | Cost: 0.0227
Epoch: 008/050 | Batch 150/600 | Cost: 0.0156
Epoch: 008/050 | Batch 200/600 | Cost: 0.0157
Epoch: 008/050 | Batch 250/600 | Cost: 0.0189
Epoch: 008/050 | Batch 300/600 | Cost: 0.0154
Epoch: 008/050 | Batch 350/600 | Cost: 0.0213
Epoch: 008/050 | Batch 400/600 | Cost: 0.0158
Epoch: 008/050 | Batch 450/600 | Cost: 0.0201
Epoch: 008/050 | Batch 500/600 | Cost: 0.0176
Epoch: 008/050 | Batch 550/600 | Cost: 0.0254
Epoch: 008/050 | Train MSE: 0.01892
Epoch: 009/050 | Batch 000/600 | Cost: 0.0195
Epoch: 009/050 | Batch 050/600 | Cost: 0.0214
Epoch: 009/050 | Batch 100/600 | Cost: 0.0255
Epoch: 009/050 | Batch 150/600 | Cost: 0.0153
Epoch: 009/050 | Batch 200/600 | Cost: 0.0184
Epoch: 009/050 | Batch 250/600 | Cost: 0.0247
Epoch: 009/050 | Batch 300/600 | Cost: 0.0151
Epoch: 009/050 | Batch 350/600 | Cost: 0.0165
Epoch: 009/050 | Batch 400/600 | Cost: 0.0171
Epoch: 009/050 | Batch 450/600 | Cost: 0.0136
Epoch: 009/050 | Batch 500/600 | Cost: 0.0206
Epoch: 009/050 | Batch 550/600 | Cost: 0.0142
Epoch: 009/050 | Train MSE: 0.01803
Epoch: 010/050 | Batch 000/600 | Cost: 0.0183
Epoch: 010/050 | Batch 050/600 | Cost: 0.0222
Epoch: 010/050 | Batch 100/600 | Cost: 0.0203
Epoch: 010/050 | Batch 150/600 | Cost: 0.0224
Epoch: 010/050 | Batch 200/600 | Cost: 0.0234
Epoch: 010/050 | Batch 250/600 | Cost: 0.0229
Epoch: 010/050 | Batch 300/600 | Cost: 0.0179
Epoch: 010/050 | Batch 350/600 | Cost: 0.0181
Epoch: 010/050 | Batch 400/600 | Cost: 0.0122
Epoch: 010/050 | Batch 450/600 | Cost: 0.0176
Epoch: 010/050 | Batch 500/600 | Cost: 0.0198
Epoch: 010/050 | Batch 550/600 | Cost: 0.0142
Epoch: 010/050 | Train MSE: 0.01727
Epoch: 011/050 | Batch 000/600 | Cost: 0.0156
Epoch: 011/050 | Batch 050/600 | Cost: 0.0178
Epoch: 011/050 | Batch 100/600 | Cost: 0.0102
Epoch: 011/050 | Batch 150/600 | Cost: 0.0188
Epoch: 011/050 | Batch 200/600 | Cost: 0.0177
Epoch: 011/050 | Batch 250/600 | Cost: 0.0196
Epoch: 011/050 | Batch 300/600 | Cost: 0.0115
Epoch: 011/050 | Batch 350/600 | Cost: 0.0109
Epoch: 011/050 | Batch 400/600 | Cost: 0.0212
Epoch: 011/050 | Batch 450/600 | Cost: 0.0162
Epoch: 011/050 | Batch 500/600 | Cost: 0.0139
Epoch: 011/050 | Batch 550/600 | Cost: 0.0144
Epoch: 011/050 | Train MSE: 0.01665
Epoch: 012/050 | Batch 000/600 | Cost: 0.0185
Epoch: 012/050 | Batch 050/600 | Cost: 0.0137
Epoch: 012/050 | Batch 100/600 | Cost: 0.0160
Epoch: 012/050 | Batch 150/600 | Cost: 0.0142
Epoch: 012/050 | Batch 200/600 | Cost: 0.0138
Epoch: 012/050 | Batch 250/600 | Cost: 0.0169
Epoch: 012/050 | Batch 300/600 | Cost: 0.0141
Epoch: 012/050 | Batch 350/600 | Cost: 0.0137
Epoch: 012/050 | Batch 400/600 | Cost: 0.0134
Epoch: 012/050 | Batch 450/600 | Cost: 0.0141
Epoch: 012/050 | Batch 500/600 | Cost: 0.0139
Epoch: 012/050 | Batch 550/600 | Cost: 0.0175
Epoch: 012/050 | Train MSE: 0.01609
Epoch: 013/050 | Batch 000/600 | Cost: 0.0197
Epoch: 013/050 | Batch 050/600 | Cost: 0.0134
Epoch: 013/050 | Batch 100/600 | Cost: 0.0213
Epoch: 013/050 | Batch 150/600 | Cost: 0.0172
Epoch: 013/050 | Batch 200/600 | Cost: 0.0149
Epoch: 013/050 | Batch 250/600 | Cost: 0.0155
Epoch: 013/050 | Batch 300/600 | Cost: 0.0224
Epoch: 013/050 | Batch 350/600 | Cost: 0.0177
Epoch: 013/050 | Batch 400/600 | Cost: 0.0125
Epoch: 013/050 | Batch 450/600 | Cost: 0.0191
Epoch: 013/050 | Batch 500/600 | Cost: 0.0196
Epoch: 013/050 | Batch 550/600 | Cost: 0.0167
Epoch: 013/050 | Train MSE: 0.01561
Epoch: 014/050 | Batch 000/600 | Cost: 0.0206
Epoch: 014/050 | Batch 050/600 | Cost: 0.0139
Epoch: 014/050 | Batch 100/600 | Cost: 0.0145
Epoch: 014/050 | Batch 150/600 | Cost: 0.0210
Epoch: 014/050 | Batch 200/600 | Cost: 0.0113
Epoch: 014/050 | Batch 250/600 | Cost: 0.0160
Epoch: 014/050 | Batch 300/600 | Cost: 0.0188
Epoch: 014/050 | Batch 350/600 | Cost: 0.0247
Epoch: 014/050 | Batch 400/600 | Cost: 0.0208
Epoch: 014/050 | Batch 450/600 | Cost: 0.0170
Epoch: 014/050 | Batch 500/600 | Cost: 0.0148
Epoch: 014/050 | Batch 550/600 | Cost: 0.0197
Epoch: 014/050 | Train MSE: 0.01518
Epoch: 015/050 | Batch 000/600 | Cost: 0.0138
Epoch: 015/050 | Batch 050/600 | Cost: 0.0183
Epoch: 015/050 | Batch 100/600 | Cost: 0.0117
Epoch: 015/050 | Batch 150/600 | Cost: 0.0123
Epoch: 015/050 | Batch 200/600 | Cost: 0.0114
Epoch: 015/050 | Batch 250/600 | Cost: 0.0116
Epoch: 015/050 | Batch 300/600 | Cost: 0.0199
Epoch: 015/050 | Batch 350/600 | Cost: 0.0165
Epoch: 015/050 | Batch 400/600 | Cost: 0.0199
Epoch: 015/050 | Batch 450/600 | Cost: 0.0143
Epoch: 015/050 | Batch 500/600 | Cost: 0.0148
Epoch: 015/050 | Batch 550/600 | Cost: 0.0130
Epoch: 015/050 | Train MSE: 0.01481
Epoch: 016/050 | Batch 000/600 | Cost: 0.0195
Epoch: 016/050 | Batch 050/600 | Cost: 0.0150
Epoch: 016/050 | Batch 100/600 | Cost: 0.0145
Epoch: 016/050 | Batch 150/600 | Cost: 0.0139
Epoch: 016/050 | Batch 200/600 | Cost: 0.0108
Epoch: 016/050 | Batch 250/600 | Cost: 0.0110
Epoch: 016/050 | Batch 300/600 | Cost: 0.0119
Epoch: 016/050 | Batch 350/600 | Cost: 0.0175
Epoch: 016/050 | Batch 400/600 | Cost: 0.0133
Epoch: 016/050 | Batch 450/600 | Cost: 0.0144
Epoch: 016/050 | Batch 500/600 | Cost: 0.0168
Epoch: 016/050 | Batch 550/600 | Cost: 0.0131
Epoch: 016/050 | Train MSE: 0.01447
Epoch: 017/050 | Batch 000/600 | Cost: 0.0128
Epoch: 017/050 | Batch 050/600 | Cost: 0.0160
Epoch: 017/050 | Batch 100/600 | Cost: 0.0183
Epoch: 017/050 | Batch 150/600 | Cost: 0.0136
Epoch: 017/050 | Batch 200/600 | Cost: 0.0144
Epoch: 017/050 | Batch 250/600 | Cost: 0.0109
Epoch: 017/050 | Batch 300/600 | Cost: 0.0104
Epoch: 017/050 | Batch 350/600 | Cost: 0.0146
Epoch: 017/050 | Batch 400/600 | Cost: 0.0099
Epoch: 017/050 | Batch 450/600 | Cost: 0.0096
Epoch: 017/050 | Batch 500/600 | Cost: 0.0145
Epoch: 017/050 | Batch 550/600 | Cost: 0.0160
Epoch: 017/050 | Train MSE: 0.01415
Epoch: 018/050 | Batch 000/600 | Cost: 0.0140
Epoch: 018/050 | Batch 050/600 | Cost: 0.0145
Epoch: 018/050 | Batch 100/600 | Cost: 0.0167
Epoch: 018/050 | Batch 150/600 | Cost: 0.0136
Epoch: 018/050 | Batch 200/600 | Cost: 0.0102
Epoch: 018/050 | Batch 250/600 | Cost: 0.0164
Epoch: 018/050 | Batch 300/600 | Cost: 0.0094
Epoch: 018/050 | Batch 350/600 | Cost: 0.0169
Epoch: 018/050 | Batch 400/600 | Cost: 0.0108
Epoch: 018/050 | Batch 450/600 | Cost: 0.0155
Epoch: 018/050 | Batch 500/600 | Cost: 0.0106
Epoch: 018/050 | Batch 550/600 | Cost: 0.0143
Epoch: 018/050 | Train MSE: 0.01386
Epoch: 019/050 | Batch 000/600 | Cost: 0.0226
Epoch: 019/050 | Batch 050/600 | Cost: 0.0175
Epoch: 019/050 | Batch 100/600 | Cost: 0.0165
Epoch: 019/050 | Batch 150/600 | Cost: 0.0118
Epoch: 019/050 | Batch 200/600 | Cost: 0.0174
Epoch: 019/050 | Batch 250/600 | Cost: 0.0132
Epoch: 019/050 | Batch 300/600 | Cost: 0.0136
Epoch: 019/050 | Batch 350/600 | Cost: 0.0090
Epoch: 019/050 | Batch 400/600 | Cost: 0.0064
Epoch: 019/050 | Batch 450/600 | Cost: 0.0168
Epoch: 019/050 | Batch 500/600 | Cost: 0.0135
Epoch: 019/050 | Batch 550/600 | Cost: 0.0166
Epoch: 019/050 | Train MSE: 0.01360
Epoch: 020/050 | Batch 000/600 | Cost: 0.0184
Epoch: 020/050 | Batch 050/600 | Cost: 0.0124
Epoch: 020/050 | Batch 100/600 | Cost: 0.0142
Epoch: 020/050 | Batch 150/600 | Cost: 0.0167
Epoch: 020/050 | Batch 200/600 | Cost: 0.0140
Epoch: 020/050 | Batch 250/600 | Cost: 0.0112
Epoch: 020/050 | Batch 300/600 | Cost: 0.0140
Epoch: 020/050 | Batch 350/600 | Cost: 0.0115
Epoch: 020/050 | Batch 400/600 | Cost: 0.0106
Epoch: 020/050 | Batch 450/600 | Cost: 0.0156
Epoch: 020/050 | Batch 500/600 | Cost: 0.0150
Epoch: 020/050 | Batch 550/600 | Cost: 0.0113
Epoch: 020/050 | Train MSE: 0.01335
Epoch: 021/050 | Batch 000/600 | Cost: 0.0127
Epoch: 021/050 | Batch 050/600 | Cost: 0.0100
Epoch: 021/050 | Batch 100/600 | Cost: 0.0183
Epoch: 021/050 | Batch 150/600 | Cost: 0.0138
Epoch: 021/050 | Batch 200/600 | Cost: 0.0120
Epoch: 021/050 | Batch 250/600 | Cost: 0.0115
Epoch: 021/050 | Batch 300/600 | Cost: 0.0125
Epoch: 021/050 | Batch 350/600 | Cost: 0.0085
Epoch: 021/050 | Batch 400/600 | Cost: 0.0121
Epoch: 021/050 | Batch 450/600 | Cost: 0.0140
Epoch: 021/050 | Batch 500/600 | Cost: 0.0098
Epoch: 021/050 | Batch 550/600 | Cost: 0.0145
Epoch: 021/050 | Train MSE: 0.01312
Epoch: 022/050 | Batch 000/600 | Cost: 0.0141
Epoch: 022/050 | Batch 050/600 | Cost: 0.0147
Epoch: 022/050 | Batch 100/600 | Cost: 0.0172
Epoch: 022/050 | Batch 150/600 | Cost: 0.0161
Epoch: 022/050 | Batch 200/600 | Cost: 0.0108
Epoch: 022/050 | Batch 250/600 | Cost: 0.0108
Epoch: 022/050 | Batch 300/600 | Cost: 0.0149
Epoch: 022/050 | Batch 350/600 | Cost: 0.0133
Epoch: 022/050 | Batch 400/600 | Cost: 0.0077
Epoch: 022/050 | Batch 450/600 | Cost: 0.0101
Epoch: 022/050 | Batch 500/600 | Cost: 0.0177
Epoch: 022/050 | Batch 550/600 | Cost: 0.0120
Epoch: 022/050 | Train MSE: 0.01291
Epoch: 023/050 | Batch 000/600 | Cost: 0.0165
Epoch: 023/050 | Batch 050/600 | Cost: 0.0132
Epoch: 023/050 | Batch 100/600 | Cost: 0.0169
Epoch: 023/050 | Batch 150/600 | Cost: 0.0135
Epoch: 023/050 | Batch 200/600 | Cost: 0.0133
Epoch: 023/050 | Batch 250/600 | Cost: 0.0137
Epoch: 023/050 | Batch 300/600 | Cost: 0.0149
Epoch: 023/050 | Batch 350/600 | Cost: 0.0185
Epoch: 023/050 | Batch 400/600 | Cost: 0.0091
Epoch: 023/050 | Batch 450/600 | Cost: 0.0141
Epoch: 023/050 | Batch 500/600 | Cost: 0.0170
Epoch: 023/050 | Batch 550/600 | Cost: 0.0096
Epoch: 023/050 | Train MSE: 0.01270
Epoch: 024/050 | Batch 000/600 | Cost: 0.0122
Epoch: 024/050 | Batch 050/600 | Cost: 0.0095
Epoch: 024/050 | Batch 100/600 | Cost: 0.0099
Epoch: 024/050 | Batch 150/600 | Cost: 0.0063
Epoch: 024/050 | Batch 200/600 | Cost: 0.0133
Epoch: 024/050 | Batch 250/600 | Cost: 0.0108
Epoch: 024/050 | Batch 300/600 | Cost: 0.0149
Epoch: 024/050 | Batch 350/600 | Cost: 0.0143
Epoch: 024/050 | Batch 400/600 | Cost: 0.0124
Epoch: 024/050 | Batch 450/600 | Cost: 0.0116
Epoch: 024/050 | Batch 500/600 | Cost: 0.0083
Epoch: 024/050 | Batch 550/600 | Cost: 0.0079
Epoch: 024/050 | Train MSE: 0.01251
Epoch: 025/050 | Batch 000/600 | Cost: 0.0147
Epoch: 025/050 | Batch 050/600 | Cost: 0.0104
Epoch: 025/050 | Batch 100/600 | Cost: 0.0120
Epoch: 025/050 | Batch 150/600 | Cost: 0.0127
Epoch: 025/050 | Batch 200/600 | Cost: 0.0094
Epoch: 025/050 | Batch 250/600 | Cost: 0.0085
Epoch: 025/050 | Batch 300/600 | Cost: 0.0138
Epoch: 025/050 | Batch 350/600 | Cost: 0.0086
Epoch: 025/050 | Batch 400/600 | Cost: 0.0130
Epoch: 025/050 | Batch 450/600 | Cost: 0.0136
Epoch: 025/050 | Batch 500/600 | Cost: 0.0135
Epoch: 025/050 | Batch 550/600 | Cost: 0.0155
Epoch: 025/050 | Train MSE: 0.01232
Epoch: 026/050 | Batch 000/600 | Cost: 0.0138
Epoch: 026/050 | Batch 050/600 | Cost: 0.0136
Epoch: 026/050 | Batch 100/600 | Cost: 0.0076
Epoch: 026/050 | Batch 150/600 | Cost: 0.0179
Epoch: 026/050 | Batch 200/600 | Cost: 0.0119
Epoch: 026/050 | Batch 250/600 | Cost: 0.0142
Epoch: 026/050 | Batch 300/600 | Cost: 0.0138
Epoch: 026/050 | Batch 350/600 | Cost: 0.0107
Epoch: 026/050 | Batch 400/600 | Cost: 0.0103
Epoch: 026/050 | Batch 450/600 | Cost: 0.0091
Epoch: 026/050 | Batch 500/600 | Cost: 0.0116
Epoch: 026/050 | Batch 550/600 | Cost: 0.0091
Epoch: 026/050 | Train MSE: 0.01215
Epoch: 027/050 | Batch 000/600 | Cost: 0.0085
Epoch: 027/050 | Batch 050/600 | Cost: 0.0065
Epoch: 027/050 | Batch 100/600 | Cost: 0.0102
Epoch: 027/050 | Batch 150/600 | Cost: 0.0152
Epoch: 027/050 | Batch 200/600 | Cost: 0.0162
Epoch: 027/050 | Batch 250/600 | Cost: 0.0079
Epoch: 027/050 | Batch 300/600 | Cost: 0.0118
Epoch: 027/050 | Batch 350/600 | Cost: 0.0111
Epoch: 027/050 | Batch 400/600 | Cost: 0.0081
Epoch: 027/050 | Batch 450/600 | Cost: 0.0100
Epoch: 027/050 | Batch 500/600 | Cost: 0.0103
Epoch: 027/050 | Batch 550/600 | Cost: 0.0117
Epoch: 027/050 | Train MSE: 0.01199
Epoch: 028/050 | Batch 000/600 | Cost: 0.0077
Epoch: 028/050 | Batch 050/600 | Cost: 0.0164
Epoch: 028/050 | Batch 100/600 | Cost: 0.0095
Epoch: 028/050 | Batch 150/600 | Cost: 0.0112
Epoch: 028/050 | Batch 200/600 | Cost: 0.0109
Epoch: 028/050 | Batch 250/600 | Cost: 0.0148
Epoch: 028/050 | Batch 300/600 | Cost: 0.0126
Epoch: 028/050 | Batch 350/600 | Cost: 0.0082
Epoch: 028/050 | Batch 400/600 | Cost: 0.0115
Epoch: 028/050 | Batch 450/600 | Cost: 0.0194
Epoch: 028/050 | Batch 500/600 | Cost: 0.0111
Epoch: 028/050 | Batch 550/600 | Cost: 0.0145
Epoch: 028/050 | Train MSE: 0.01181
Epoch: 029/050 | Batch 000/600 | Cost: 0.0112
Epoch: 029/050 | Batch 050/600 | Cost: 0.0137
Epoch: 029/050 | Batch 100/600 | Cost: 0.0192
Epoch: 029/050 | Batch 150/600 | Cost: 0.0105
Epoch: 029/050 | Batch 200/600 | Cost: 0.0107
Epoch: 029/050 | Batch 250/600 | Cost: 0.0081
Epoch: 029/050 | Batch 300/600 | Cost: 0.0079
Epoch: 029/050 | Batch 350/600 | Cost: 0.0126
Epoch: 029/050 | Batch 400/600 | Cost: 0.0135
Epoch: 029/050 | Batch 450/600 | Cost: 0.0062
Epoch: 029/050 | Batch 500/600 | Cost: 0.0121
Epoch: 029/050 | Batch 550/600 | Cost: 0.0091
Epoch: 029/050 | Train MSE: 0.01167
Epoch: 030/050 | Batch 000/600 | Cost: 0.0068
Epoch: 030/050 | Batch 050/600 | Cost: 0.0115
Epoch: 030/050 | Batch 100/600 | Cost: 0.0145
Epoch: 030/050 | Batch 150/600 | Cost: 0.0128
Epoch: 030/050 | Batch 200/600 | Cost: 0.0129
Epoch: 030/050 | Batch 250/600 | Cost: 0.0128
Epoch: 030/050 | Batch 300/600 | Cost: 0.0085
Epoch: 030/050 | Batch 350/600 | Cost: 0.0149
Epoch: 030/050 | Batch 400/600 | Cost: 0.0080
Epoch: 030/050 | Batch 450/600 | Cost: 0.0168
Epoch: 030/050 | Batch 500/600 | Cost: 0.0106
Epoch: 030/050 | Batch 550/600 | Cost: 0.0125
Epoch: 030/050 | Train MSE: 0.01152
Epoch: 031/050 | Batch 000/600 | Cost: 0.0137
Epoch: 031/050 | Batch 050/600 | Cost: 0.0080
Epoch: 031/050 | Batch 100/600 | Cost: 0.0122
Epoch: 031/050 | Batch 150/600 | Cost: 0.0121
Epoch: 031/050 | Batch 200/600 | Cost: 0.0125
Epoch: 031/050 | Batch 250/600 | Cost: 0.0120
Epoch: 031/050 | Batch 300/600 | Cost: 0.0123
Epoch: 031/050 | Batch 350/600 | Cost: 0.0166
Epoch: 031/050 | Batch 400/600 | Cost: 0.0099
Epoch: 031/050 | Batch 450/600 | Cost: 0.0099
Epoch: 031/050 | Batch 500/600 | Cost: 0.0103
Epoch: 031/050 | Batch 550/600 | Cost: 0.0099
Epoch: 031/050 | Train MSE: 0.01138
Epoch: 032/050 | Batch 000/600 | Cost: 0.0125
Epoch: 032/050 | Batch 050/600 | Cost: 0.0114
Epoch: 032/050 | Batch 100/600 | Cost: 0.0118
Epoch: 032/050 | Batch 150/600 | Cost: 0.0110
Epoch: 032/050 | Batch 200/600 | Cost: 0.0137
Epoch: 032/050 | Batch 250/600 | Cost: 0.0156
Epoch: 032/050 | Batch 300/600 | Cost: 0.0084
Epoch: 032/050 | Batch 350/600 | Cost: 0.0187
Epoch: 032/050 | Batch 400/600 | Cost: 0.0101
Epoch: 032/050 | Batch 450/600 | Cost: 0.0071
Epoch: 032/050 | Batch 500/600 | Cost: 0.0104
Epoch: 032/050 | Batch 550/600 | Cost: 0.0135
Epoch: 032/050 | Train MSE: 0.01126
Epoch: 033/050 | Batch 000/600 | Cost: 0.0159
Epoch: 033/050 | Batch 050/600 | Cost: 0.0126
Epoch: 033/050 | Batch 100/600 | Cost: 0.0077
Epoch: 033/050 | Batch 150/600 | Cost: 0.0093
Epoch: 033/050 | Batch 200/600 | Cost: 0.0092
Epoch: 033/050 | Batch 250/600 | Cost: 0.0128
Epoch: 033/050 | Batch 300/600 | Cost: 0.0095
Epoch: 033/050 | Batch 350/600 | Cost: 0.0108
Epoch: 033/050 | Batch 400/600 | Cost: 0.0116
Epoch: 033/050 | Batch 450/600 | Cost: 0.0082
Epoch: 033/050 | Batch 500/600 | Cost: 0.0151
Epoch: 033/050 | Batch 550/600 | Cost: 0.0097
Epoch: 033/050 | Train MSE: 0.01112
Epoch: 034/050 | Batch 000/600 | Cost: 0.0119
Epoch: 034/050 | Batch 050/600 | Cost: 0.0079
Epoch: 034/050 | Batch 100/600 | Cost: 0.0118
Epoch: 034/050 | Batch 150/600 | Cost: 0.0122
Epoch: 034/050 | Batch 200/600 | Cost: 0.0078
Epoch: 034/050 | Batch 250/600 | Cost: 0.0142
Epoch: 034/050 | Batch 300/600 | Cost: 0.0066
Epoch: 034/050 | Batch 350/600 | Cost: 0.0112
Epoch: 034/050 | Batch 400/600 | Cost: 0.0067
Epoch: 034/050 | Batch 450/600 | Cost: 0.0105
Epoch: 034/050 | Batch 500/600 | Cost: 0.0119
Epoch: 034/050 | Batch 550/600 | Cost: 0.0145
Epoch: 034/050 | Train MSE: 0.01099
Epoch: 035/050 | Batch 000/600 | Cost: 0.0100
Epoch: 035/050 | Batch 050/600 | Cost: 0.0072
Epoch: 035/050 | Batch 100/600 | Cost: 0.0071
Epoch: 035/050 | Batch 150/600 | Cost: 0.0111
Epoch: 035/050 | Batch 200/600 | Cost: 0.0096
Epoch: 035/050 | Batch 250/600 | Cost: 0.0089
Epoch: 035/050 | Batch 300/600 | Cost: 0.0098
Epoch: 035/050 | Batch 350/600 | Cost: 0.0116
Epoch: 035/050 | Batch 400/600 | Cost: 0.0128
Epoch: 035/050 | Batch 450/600 | Cost: 0.0091
Epoch: 035/050 | Batch 500/600 | Cost: 0.0093
Epoch: 035/050 | Batch 550/600 | Cost: 0.0103
Epoch: 035/050 | Train MSE: 0.01088
Epoch: 036/050 | Batch 000/600 | Cost: 0.0065
Epoch: 036/050 | Batch 050/600 | Cost: 0.0164
Epoch: 036/050 | Batch 100/600 | Cost: 0.0118
Epoch: 036/050 | Batch 150/600 | Cost: 0.0075
Epoch: 036/050 | Batch 200/600 | Cost: 0.0193
Epoch: 036/050 | Batch 250/600 | Cost: 0.0208
Epoch: 036/050 | Batch 300/600 | Cost: 0.0096
Epoch: 036/050 | Batch 350/600 | Cost: 0.0084
Epoch: 036/050 | Batch 400/600 | Cost: 0.0096
Epoch: 036/050 | Batch 450/600 | Cost: 0.0109
Epoch: 036/050 | Batch 500/600 | Cost: 0.0104
Epoch: 036/050 | Batch 550/600 | Cost: 0.0063
Epoch: 036/050 | Train MSE: 0.01076
Epoch: 037/050 | Batch 000/600 | Cost: 0.0092
Epoch: 037/050 | Batch 050/600 | Cost: 0.0120
Epoch: 037/050 | Batch 100/600 | Cost: 0.0107
Epoch: 037/050 | Batch 150/600 | Cost: 0.0139
Epoch: 037/050 | Batch 200/600 | Cost: 0.0127
Epoch: 037/050 | Batch 250/600 | Cost: 0.0082
Epoch: 037/050 | Batch 300/600 | Cost: 0.0073
Epoch: 037/050 | Batch 350/600 | Cost: 0.0072
Epoch: 037/050 | Batch 400/600 | Cost: 0.0083
Epoch: 037/050 | Batch 450/600 | Cost: 0.0087
Epoch: 037/050 | Batch 500/600 | Cost: 0.0187
Epoch: 037/050 | Batch 550/600 | Cost: 0.0128
Epoch: 037/050 | Train MSE: 0.01064
Epoch: 038/050 | Batch 000/600 | Cost: 0.0145
Epoch: 038/050 | Batch 050/600 | Cost: 0.0082
Epoch: 038/050 | Batch 100/600 | Cost: 0.0116
Epoch: 038/050 | Batch 150/600 | Cost: 0.0114
Epoch: 038/050 | Batch 200/600 | Cost: 0.0089
Epoch: 038/050 | Batch 250/600 | Cost: 0.0110
Epoch: 038/050 | Batch 300/600 | Cost: 0.0130
Epoch: 038/050 | Batch 350/600 | Cost: 0.0155
Epoch: 038/050 | Batch 400/600 | Cost: 0.0107
Epoch: 038/050 | Batch 450/600 | Cost: 0.0076
Epoch: 038/050 | Batch 500/600 | Cost: 0.0138
Epoch: 038/050 | Batch 550/600 | Cost: 0.0123
Epoch: 038/050 | Train MSE: 0.01054
Epoch: 039/050 | Batch 000/600 | Cost: 0.0106
Epoch: 039/050 | Batch 050/600 | Cost: 0.0153
Epoch: 039/050 | Batch 100/600 | Cost: 0.0108
Epoch: 039/050 | Batch 150/600 | Cost: 0.0097
Epoch: 039/050 | Batch 200/600 | Cost: 0.0116
Epoch: 039/050 | Batch 250/600 | Cost: 0.0123
Epoch: 039/050 | Batch 300/600 | Cost: 0.0082
Epoch: 039/050 | Batch 350/600 | Cost: 0.0114
Epoch: 039/050 | Batch 400/600 | Cost: 0.0083
Epoch: 039/050 | Batch 450/600 | Cost: 0.0162
Epoch: 039/050 | Batch 500/600 | Cost: 0.0108
Epoch: 039/050 | Batch 550/600 | Cost: 0.0110
Epoch: 039/050 | Train MSE: 0.01043
Epoch: 040/050 | Batch 000/600 | Cost: 0.0121
Epoch: 040/050 | Batch 050/600 | Cost: 0.0137
Epoch: 040/050 | Batch 100/600 | Cost: 0.0094
Epoch: 040/050 | Batch 150/600 | Cost: 0.0080
Epoch: 040/050 | Batch 200/600 | Cost: 0.0107
Epoch: 040/050 | Batch 250/600 | Cost: 0.0092
Epoch: 040/050 | Batch 300/600 | Cost: 0.0088
Epoch: 040/050 | Batch 350/600 | Cost: 0.0097
Epoch: 040/050 | Batch 400/600 | Cost: 0.0084
Epoch: 040/050 | Batch 450/600 | Cost: 0.0134
Epoch: 040/050 | Batch 500/600 | Cost: 0.0144
Epoch: 040/050 | Batch 550/600 | Cost: 0.0094
Epoch: 040/050 | Train MSE: 0.01033
Epoch: 041/050 | Batch 000/600 | Cost: 0.0112
Epoch: 041/050 | Batch 050/600 | Cost: 0.0063
Epoch: 041/050 | Batch 100/600 | Cost: 0.0117
Epoch: 041/050 | Batch 150/600 | Cost: 0.0126
Epoch: 041/050 | Batch 200/600 | Cost: 0.0181
Epoch: 041/050 | Batch 250/600 | Cost: 0.0158
Epoch: 041/050 | Batch 300/600 | Cost: 0.0140
Epoch: 041/050 | Batch 350/600 | Cost: 0.0109
Epoch: 041/050 | Batch 400/600 | Cost: 0.0105
Epoch: 041/050 | Batch 450/600 | Cost: 0.0130
Epoch: 041/050 | Batch 500/600 | Cost: 0.0081
Epoch: 041/050 | Batch 550/600 | Cost: 0.0126
Epoch: 041/050 | Train MSE: 0.01023
Epoch: 042/050 | Batch 000/600 | Cost: 0.0100
Epoch: 042/050 | Batch 050/600 | Cost: 0.0114
Epoch: 042/050 | Batch 100/600 | Cost: 0.0109
Epoch: 042/050 | Batch 150/600 | Cost: 0.0066
Epoch: 042/050 | Batch 200/600 | Cost: 0.0080
Epoch: 042/050 | Batch 250/600 | Cost: 0.0101
Epoch: 042/050 | Batch 300/600 | Cost: 0.0122
Epoch: 042/050 | Batch 350/600 | Cost: 0.0108
Epoch: 042/050 | Batch 400/600 | Cost: 0.0088
Epoch: 042/050 | Batch 450/600 | Cost: 0.0132
Epoch: 042/050 | Batch 500/600 | Cost: 0.0103
Epoch: 042/050 | Batch 550/600 | Cost: 0.0083
Epoch: 042/050 | Train MSE: 0.01013
Epoch: 043/050 | Batch 000/600 | Cost: 0.0097
Epoch: 043/050 | Batch 050/600 | Cost: 0.0103
Epoch: 043/050 | Batch 100/600 | Cost: 0.0144
Epoch: 043/050 | Batch 150/600 | Cost: 0.0095
Epoch: 043/050 | Batch 200/600 | Cost: 0.0108
Epoch: 043/050 | Batch 250/600 | Cost: 0.0124
Epoch: 043/050 | Batch 300/600 | Cost: 0.0125
Epoch: 043/050 | Batch 350/600 | Cost: 0.0117
Epoch: 043/050 | Batch 400/600 | Cost: 0.0085
Epoch: 043/050 | Batch 450/600 | Cost: 0.0097
Epoch: 043/050 | Batch 500/600 | Cost: 0.0163
Epoch: 043/050 | Batch 550/600 | Cost: 0.0099
Epoch: 043/050 | Train MSE: 0.01005
Epoch: 044/050 | Batch 000/600 | Cost: 0.0090
Epoch: 044/050 | Batch 050/600 | Cost: 0.0079
Epoch: 044/050 | Batch 100/600 | Cost: 0.0089
Epoch: 044/050 | Batch 150/600 | Cost: 0.0110
Epoch: 044/050 | Batch 200/600 | Cost: 0.0072
Epoch: 044/050 | Batch 250/600 | Cost: 0.0089
Epoch: 044/050 | Batch 300/600 | Cost: 0.0138
Epoch: 044/050 | Batch 350/600 | Cost: 0.0069
Epoch: 044/050 | Batch 400/600 | Cost: 0.0086
Epoch: 044/050 | Batch 450/600 | Cost: 0.0100
Epoch: 044/050 | Batch 500/600 | Cost: 0.0076
Epoch: 044/050 | Batch 550/600 | Cost: 0.0076
Epoch: 044/050 | Train MSE: 0.00995
Epoch: 045/050 | Batch 000/600 | Cost: 0.0098
Epoch: 045/050 | Batch 050/600 | Cost: 0.0064
Epoch: 045/050 | Batch 100/600 | Cost: 0.0097
Epoch: 045/050 | Batch 150/600 | Cost: 0.0077
Epoch: 045/050 | Batch 200/600 | Cost: 0.0136
Epoch: 045/050 | Batch 250/600 | Cost: 0.0181
Epoch: 045/050 | Batch 300/600 | Cost: 0.0085
Epoch: 045/050 | Batch 350/600 | Cost: 0.0102
Epoch: 045/050 | Batch 400/600 | Cost: 0.0058
Epoch: 045/050 | Batch 450/600 | Cost: 0.0099
Epoch: 045/050 | Batch 500/600 | Cost: 0.0061
Epoch: 045/050 | Batch 550/600 | Cost: 0.0077
Epoch: 045/050 | Train MSE: 0.00986
Epoch: 046/050 | Batch 000/600 | Cost: 0.0074
Epoch: 046/050 | Batch 050/600 | Cost: 0.0109
Epoch: 046/050 | Batch 100/600 | Cost: 0.0090
Epoch: 046/050 | Batch 150/600 | Cost: 0.0079
Epoch: 046/050 | Batch 200/600 | Cost: 0.0085
Epoch: 046/050 | Batch 250/600 | Cost: 0.0104
Epoch: 046/050 | Batch 300/600 | Cost: 0.0121
Epoch: 046/050 | Batch 350/600 | Cost: 0.0101
Epoch: 046/050 | Batch 400/600 | Cost: 0.0091
Epoch: 046/050 | Batch 450/600 | Cost: 0.0114
Epoch: 046/050 | Batch 500/600 | Cost: 0.0082
Epoch: 046/050 | Batch 550/600 | Cost: 0.0104
Epoch: 046/050 | Train MSE: 0.00978
Epoch: 047/050 | Batch 000/600 | Cost: 0.0109
Epoch: 047/050 | Batch 050/600 | Cost: 0.0111
Epoch: 047/050 | Batch 100/600 | Cost: 0.0075
Epoch: 047/050 | Batch 150/600 | Cost: 0.0144
Epoch: 047/050 | Batch 200/600 | Cost: 0.0092
Epoch: 047/050 | Batch 250/600 | Cost: 0.0080
Epoch: 047/050 | Batch 300/600 | Cost: 0.0118
Epoch: 047/050 | Batch 350/600 | Cost: 0.0110
Epoch: 047/050 | Batch 400/600 | Cost: 0.0038
Epoch: 047/050 | Batch 450/600 | Cost: 0.0159
Epoch: 047/050 | Batch 500/600 | Cost: 0.0084
Epoch: 047/050 | Batch 550/600 | Cost: 0.0110
Epoch: 047/050 | Train MSE: 0.00969
Epoch: 048/050 | Batch 000/600 | Cost: 0.0071
Epoch: 048/050 | Batch 050/600 | Cost: 0.0095
Epoch: 048/050 | Batch 100/600 | Cost: 0.0093
Epoch: 048/050 | Batch 150/600 | Cost: 0.0144
Epoch: 048/050 | Batch 200/600 | Cost: 0.0123
Epoch: 048/050 | Batch 250/600 | Cost: 0.0070
Epoch: 048/050 | Batch 300/600 | Cost: 0.0107
Epoch: 048/050 | Batch 350/600 | Cost: 0.0123
Epoch: 048/050 | Batch 400/600 | Cost: 0.0064
Epoch: 048/050 | Batch 450/600 | Cost: 0.0129
Epoch: 048/050 | Batch 500/600 | Cost: 0.0065
Epoch: 048/050 | Batch 550/600 | Cost: 0.0121
Epoch: 048/050 | Train MSE: 0.00961
Epoch: 049/050 | Batch 000/600 | Cost: 0.0031
Epoch: 049/050 | Batch 050/600 | Cost: 0.0115
Epoch: 049/050 | Batch 100/600 | Cost: 0.0046
Epoch: 049/050 | Batch 150/600 | Cost: 0.0104
Epoch: 049/050 | Batch 200/600 | Cost: 0.0070
Epoch: 049/050 | Batch 250/600 | Cost: 0.0056
Epoch: 049/050 | Batch 300/600 | Cost: 0.0114
Epoch: 049/050 | Batch 350/600 | Cost: 0.0099
Epoch: 049/050 | Batch 400/600 | Cost: 0.0110
Epoch: 049/050 | Batch 450/600 | Cost: 0.0077
Epoch: 049/050 | Batch 500/600 | Cost: 0.0071
Epoch: 049/050 | Batch 550/600 | Cost: 0.0120
Epoch: 049/050 | Train MSE: 0.00953
Epoch: 050/050 | Batch 000/600 | Cost: 0.0113
Epoch: 050/050 | Batch 050/600 | Cost: 0.0132
Epoch: 050/050 | Batch 100/600 | Cost: 0.0060
Epoch: 050/050 | Batch 150/600 | Cost: 0.0071
Epoch: 050/050 | Batch 200/600 | Cost: 0.0069
Epoch: 050/050 | Batch 250/600 | Cost: 0.0151
Epoch: 050/050 | Batch 300/600 | Cost: 0.0106
Epoch: 050/050 | Batch 350/600 | Cost: 0.0122
Epoch: 050/050 | Batch 400/600 | Cost: 0.0081
Epoch: 050/050 | Batch 450/600 | Cost: 0.0095
Epoch: 050/050 | Batch 500/600 | Cost: 0.0122
Epoch: 050/050 | Batch 550/600 | Cost: 0.0075
Epoch: 050/050 | Train MSE: 0.00945
###Markdown
Evaluation
###Code
plt.plot(range(len(minibatch_cost)), minibatch_cost)
plt.ylabel('Mean Squared Error')
plt.xlabel('Minibatch')
plt.show()
plt.plot(range(len(epoch_cost)), epoch_cost)
plt.ylabel('Mean Squared Error')
plt.xlabel('Epoch')
plt.show()
def compute_accuracy(net, data_loader):
correct_pred, num_examples = 0, 0
with torch.no_grad():
for features, targets in data_loader:
features = features.view(-1, 28*28)
_, outputs = net.forward(features)
predicted_labels = torch.argmax(outputs, 1)
num_examples += targets.size(0)
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
print('Training Accuracy: %.2f' % compute_accuracy(model, train_loader))
print('Test Accuracy: %.2f' % compute_accuracy(model, test_loader))
###Output
Training Accuracy: 94.72
Test Accuracy: 94.49
###Markdown
Visual Inspection
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
for features, targets in test_loader:
break
fig, ax = plt.subplots(1, 4)
for i in range(4):
ax[i].imshow(features[i].view(28, 28), cmap=matplotlib.cm.binary)
plt.show()
_, predictions = model.forward(features[:4].view(-1, 28*28))
predictions = torch.argmax(predictions, dim=1)
print('Predicted labels', predictions)
###Output
Predicted labels tensor([7, 2, 1, 0])
|
awsdeployment/deployment.ipynb | ###Markdown
Deploying the knowledge retriever
###Code
import tarfile
import sagemaker
import torch
import torch.nn as nn
from sagemaker import get_execution_role
from sagemaker.pytorch import PyTorch as PyTorchEstimator, PyTorchModel
###Output
_____no_output_____
###Markdown
Set-up
###Code
session = sagemaker.Session()
role = get_execution_role()
artifact = 's3://kbchatter/simple/model.tar.gz'
###Output
_____no_output_____
###Markdown
Deploy the model from S3 (using code in code/inference.py)
###Code
pytorch_model = PyTorchModel(model_data=artifact,
role=role,
framework_version='1.6.0',
py_version='py3',
entry_point='knowledge-base-chatter/awsdeployment/code/predict.py',
source_dir='knowledge-base-chatter/awsdeployment/code',
dependencies=[
'knowledge-base-chatter/awsdeployment/code/contexts.json',
'knowledge-base-chatter/models/retrievalmodel.py' # not in dir for some reason
],
)
predictor = pytorch_model.deploy(instance_type='ml.p3.2xlarge', # 'ml.c4.xlarge',
initial_instance_count=1,
wait=True)
predictor.endpoint # for lambda
predictor.serializer = sagemaker.serializers.JSONSerializer()
predictor.deserializer = sagemaker.deserializers.JSONDeserializer()
print(predictor.predict({'question': 'How do I delete a channel in Slack?'})) # need to convert output/input to json for lambda
###Output
['how run an executive question and answer ( ama ) session to run an executive ama in slack : create a new channel called # exec - ama. announce it in channels with your largest membership and encourage everyone to join. post a policy document to help guide employees in formatting their questions, and pin it to the channel. set a helpful tone with', 'how do i delete a channel in slack? [SEP] whats send information into a google sheet handy for? : keep track of the information collected by your workflow by automatically sending it to a google sheet. this might come in handy for things like tracking help desk requests, collecting continuous feedback, or managing a nomination process. a few uses for this workflow : sales : gathering ongoing customer feedbackproduct : collecting product feedbackmarketing : tracking incoming requeststeam leaders : archiving employee feedback, recognition, and projectsanyone can trigger this workflow through the shortcuts menu, which then prompts them to fill out a form, and sends that information to a google sheet in real time.. to send information into a google sheet get started : install the google sheets app for slackdownload the examplenavigate to workflow builder and select importonce imported, edit the workflow to make sense for your', 'how do i delete a channel in slack? [SEP] whats get feedback, no meeting required handy for? : this workflow helps drive decisions and share information when meetings are hard to orchestrate. it allows people to review ideas and submit feedback on their own time. a few uses for this workflow : content writers : ask for volunteers to edit your upcoming post. designers : get people to evaluate your latest explorations. marketing : solicit reviews of the new pitch deck. youll create a workflow that triggers when someone reacts with an emoji of your choice. the workflow will then use slackbot to dm step - by - step instructions for reviewing the information and gathering feedback using a form. this allows your team to review during different hours when you need to be flexible, but still drive the decisions needed to get work done.. to get feedback, no meeting required get started : download the examplenavigate to workflow builder and select importonce imported, edit the workflow to make sense for your', 'how do i delete a channel in slack? [SEP] get feedback, no meeting required tip uses : slack features. get feedback, no meeting required prep : under 10 mins. get feedback, no meeting required slack skill level : intermediate. get feedback, no meeting required results', 'your user profile, and encourage others to check user profiles before dming them or scheduling meetings. if youd like a dedicated tool for visually displaying time zones of']
###Markdown
Delete the model and endpoint
###Code
pytorch_model.delete_model()
predictor.delete_endpoint()
###Output
_____no_output_____ |
dakota/oat/transition-year/transition-year-pu.ipynb | ###Markdown
** In this Jupyter Notebook, the absolute and sensitivity analysis results are generated for a one-at-a-time sensitivity analysis of transition year for the proliferation risk evaluation metric. This metric is kept separate from the others because generating it requires the -exp version of each sqlite file, along with pyne installed on your machine. **
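As a reference for the `oup.sensitivity` call used below, a one-at-a-time sensitivity table is typically just the relative change of each metric with respect to the base scenario. A minimal sketch is shown here; the real implementation lives in `scripts/output.py` and is not reproduced in this notebook, so the function name, the labelling of the base row, and whether the result is fractional or in percent are all assumptions.

def sensitivity_sketch(base_label, df):
    # Relative change of every metric with respect to the base-scenario row,
    # e.g. sensitivity_sketch('960', df_p) if '960' labels the base transition year.
    base = df.loc[base_label]
    return (df - base) / base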
###Code
import cymetric as cym
from cymetric import timeseries
import matplotlib as plt
import pandas as pd
import numpy as np
import sys
sys.path.insert(0, '../../../scripts/')
import output as oup
###Output
/Users/gwenchee/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: QAWarning: pyne.data is not yet QA compliant.
return f(*args, **kwds)
/Users/gwenchee/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: QAWarning: pyne.material is not yet QA compliant.
return f(*args, **kwds)
/Users/gwenchee/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: QAWarning: pyne.enrichment is not yet QA compliant.
return f(*args, **kwds)
###Markdown
This is when you already have the results
###Code
df_p = pd.read_csv('ty-df-pu.csv',index_col='TY')
df_p
###Output
_____no_output_____
###Markdown
The rest of the code below is to generate the above results
###Code
starter_string = 'TY'
scenario_nums = ['960','972','984','996','1008']
df_p = oup.initialize_df(scenario_index=starter_string,
scenarios_nums=scenario_nums)
df_p['Final HLW'] = 0
df_p['Final Depleted U'] = 0
df_p['Total uranium ore'] = 0
df_p['Total idle capacity'] = 0
df_p['Last date idle capacity'] = 0
df_p['Duration of transition'] = 0
output_start = '../cyclus-files/oat/transition-year/ty'
ev_dict = {}
for x in range(len(scenario_nums)):
output_file = output_start + scenario_nums[x]+'-exp.sqlite'
ev_dict[scenario_nums[x]] = cym.Evaluator(db=cym.dbopen(output_file),write=True)
for x in range(len(scenario_nums)):
cp = cym.timeseries.inventories(ev_dict[scenario_nums[x]],facilities=['lwrstorage','moxstorage','frstorage'],nucs=['pu-238','pu-239','pu-240','pu-241','pu-242','pu-244'])['Quantity']
fissile_cp = cym.timeseries.inventories(ev_dict[scenario_nums[x]],facilities=['lwrstorage','moxstorage','frstorage'],nucs=['pu-239','pu-241'])['Quantity']
df_p.loc[scenario_nums[x],'Max Pu in all CP'] = cp.max()
df_p.loc[scenario_nums[x],'Pu Quality in all CP at Max Pu'] = fissile_cp[cp.idxmax()]/cp.max()
hlw = cym.timeseries.inventories(ev_dict[scenario_nums[x]],facilities=['enrichmentsink','lwrsink','moxsink','frsink'],nucs=['pu-238','pu-239','pu-240','pu-241','pu-242','pu-244'])['Quantity']
fissile_hlw = cym.timeseries.inventories(ev_dict[scenario_nums[x]],facilities=['enrichmentsink','lwrsink','moxsink','frsink'],nucs=['pu-239','pu-241'])['Quantity']
df_p.loc[scenario_nums[x],'Max Pu in HLW'] = hlw.max()
df_p.loc[scenario_nums[x],'Pu Quality in HLW at Max Pu'] = fissile_hlw[hlw.idxmax()]/hlw.max()
rpr = cym.timeseries.inventories(ev_dict[scenario_nums[x]],facilities=['lwrreprocessing','moxreprocessing','frreprocessing'],nucs=['pu-238','pu-239','pu-240','pu-241','pu-242','pu-244'])['Quantity']
fissile_rpr = cym.timeseries.inventories(ev_dict[scenario_nums[x]],facilities=['lwrreprocessing','moxreprocessing','frreprocessing'],nucs=['pu-239','pu-241'])['Quantity']
df_p.loc[scenario_nums[x],'Max Pu in all RPR'] = rpr.max()
df_p.loc[scenario_nums[x],'Pu Quality in all RPR at Max Pu'] = fissile_rpr[rpr.idxmax()]/rpr.max()
df_p.to_csv('ty-df-pu.csv')
df_p_sa = oup.sensitivity(960,df_p)
df_p_sa
df_p_sa.to_csv('ty-df-pu-sa.csv')
###Output
_____no_output_____ |
assignments/2019/assignment3/Generative_Adversarial_Networks_PyTorch.ipynb | ###Markdown
Generative Adversarial Networks (GANs)So far in CS231N, all the applications of neural networks that we have explored have been **discriminative models** that take an input and are trained to produce a labeled output. This has ranged from straightforward classification of image categories to sentence generation (which was still phrased as a classification problem, our labels were in vocabulary space and we’d learned a recurrence to capture multi-word labels). In this notebook, we will expand our repertoire, and build **generative models** using neural networks. Specifically, we will learn how to build models which generate novel images that resemble a set of training images. What is a GAN?In 2014, [Goodfellow et al.](https://arxiv.org/abs/1406.2661) presented a method for training generative models called Generative Adversarial Networks (GANs for short). In a GAN, we build two different neural networks. Our first network is a traditional classification network, called the **discriminator**. We will train the discriminator to take images, and classify them as being real (belonging to the training set) or fake (not present in the training set). Our other network, called the **generator**, will take random noise as input and transform it using a neural network to produce images. The goal of the generator is to fool the discriminator into thinking the images it produced are real.We can think of this back and forth process of the generator ($G$) trying to fool the discriminator ($D$), and the discriminator trying to correctly classify real vs. fake as a minimax game:$$\underset{G}{\text{minimize}}\; \underset{D}{\text{maximize}}\; \mathbb{E}_{x \sim p_\text{data}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p(z)}\left[\log \left(1-D(G(z))\right)\right]$$where $z \sim p(z)$ are the random noise samples, $G(z)$ are the generated images using the neural network generator $G$, and $D$ is the output of the discriminator, specifying the probability of an input being real. In [Goodfellow et al.](https://arxiv.org/abs/1406.2661), they analyze this minimax game and show how it relates to minimizing the Jensen-Shannon divergence between the training data distribution and the generated samples from $G$.To optimize this minimax game, we will alternate between taking gradient *descent* steps on the objective for $G$, and gradient *ascent* steps on the objective for $D$:1. update the **generator** ($G$) to minimize the probability of the __discriminator making the correct choice__. 2. update the **discriminator** ($D$) to maximize the probability of the __discriminator making the correct choice__.While these updates are useful for analysis, they do not perform well in practice. Instead, we will use a different objective when we update the generator: maximize the probability of the **discriminator making the incorrect choice**. This small change helps to alleviate problems with the generator gradient vanishing when the discriminator is confident. This is the standard update used in most GAN papers, and was used in the original paper from [Goodfellow et al.](https://arxiv.org/abs/1406.2661). In this assignment, we will alternate the following updates:1. Update the generator ($G$) to maximize the probability of the discriminator making the incorrect choice on generated data:$$\underset{G}{\text{maximize}}\; \mathbb{E}_{z \sim p(z)}\left[\log D(G(z))\right]$$2.
Update the discriminator ($D$), to maximize the probability of the discriminator making the correct choice on real and generated data:$$\underset{D}{\text{maximize}}\; \mathbb{E}_{x \sim p_\text{data}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p(z)}\left[\log \left(1-D(G(z))\right)\right]$$ What else is there?Since 2014, GANs have exploded into a huge research area, with massive [workshops](https://sites.google.com/site/nips2016adversarial/), and [hundreds of new papers](https://github.com/hindupuravinash/the-gan-zoo). Compared to other approaches for generative models, they often produce the highest quality samples but are some of the most difficult and finicky models to train (see [this github repo](https://github.com/soumith/ganhacks) that contains a set of 17 hacks that are useful for getting models working). Improving the stability and robustness of GAN training is an open research question, with new papers coming out every day! For a more recent tutorial on GANs, see [here](https://arxiv.org/abs/1701.00160). There is also some even more recent exciting work that changes the objective function to Wasserstein distance and yields much more stable results across model architectures: [WGAN](https://arxiv.org/abs/1701.07875), [WGAN-GP](https://arxiv.org/abs/1704.00028).GANs are not the only way to train a generative model! For other approaches to generative modeling check out the [deep generative model chapter](http://www.deeplearningbook.org/contents/generative_models.html) of the Deep Learning [book](http://www.deeplearningbook.org). Another popular way of training neural networks as generative models is Variational Autoencoders (co-discovered [here](https://arxiv.org/abs/1312.6114) and [here](https://arxiv.org/abs/1401.4082)). Variational autoencoders combine neural networks with variational inference to train deep generative models. These models tend to be far more stable and easier to train but currently don't produce samples that are as pretty as GANs.Here's an example of what your outputs from the 3 different models you're going to train should look like... note that GANs are sometimes finicky, so your outputs might not look exactly like this... this is just meant to be a *rough* guideline of the kind of quality you can expect: Setup
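Concretely, one alternating training iteration looks roughly like the sketch below. This is a schematic only, not the graded implementation you will write later in this notebook: the names `D`, `G`, `D_solver`, `G_solver`, the use of `nn.BCEWithLogitsLoss`, and the assumption that the discriminator outputs a single logit per image are all placeholders for what gets built step by step in the following sections.

import torch
import torch.nn as nn

def gan_step(D, G, D_solver, G_solver, real_data, noise_dim=96):
    # One schematic GAN iteration: a discriminator ascent step followed by a
    # generator step on the non-saturating objective max log D(G(z)).
    bce = nn.BCEWithLogitsLoss()
    batch_size = real_data.size(0)
    ones, zeros = torch.ones(batch_size, 1), torch.zeros(batch_size, 1)

    # Discriminator: push D(real) toward 1 and D(fake) toward 0.
    D_solver.zero_grad()
    fake = G(2 * torch.rand(batch_size, noise_dim) - 1).detach()
    d_loss = bce(D(real_data), ones) + bce(D(fake), zeros)
    d_loss.backward()
    D_solver.step()

    # Generator: push D(G(z)) toward 1, i.e. maximize log D(G(z)).
    G_solver.zero_grad()
    fake = G(2 * torch.rand(batch_size, noise_dim) - 1)
    g_loss = bce(D(fake), ones)
    g_loss.backward()
    G_solver.step()
    return d_loss.item(), g_loss.item()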
###Code
import torch
import torch.nn as nn
from torch.nn import init
import torchvision
import torchvision.transforms as T
import torch.optim as optim
from torch.utils.data import DataLoader
from torch.utils.data import sampler
import torchvision.datasets as dset
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
def show_images(images):
images = np.reshape(images, [images.shape[0], -1]) # images reshape to (batch_size, D)
sqrtn = int(np.ceil(np.sqrt(images.shape[0])))
sqrtimg = int(np.ceil(np.sqrt(images.shape[1])))
fig = plt.figure(figsize=(sqrtn, sqrtn))
gs = gridspec.GridSpec(sqrtn, sqrtn)
gs.update(wspace=0.05, hspace=0.05)
for i, img in enumerate(images):
ax = plt.subplot(gs[i])
plt.axis('off')
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_aspect('equal')
plt.imshow(img.reshape([sqrtimg,sqrtimg]))
return
def preprocess_img(x):
return 2 * x - 1.0
def deprocess_img(x):
return (x + 1.0) / 2.0
def rel_error(x,y):
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
def count_params(model):
"""Count the number of parameters in the current TensorFlow graph """
param_count = np.sum([np.prod(p.size()) for p in model.parameters()])
return param_count
answers = dict(np.load('gan-checks-tf.npz'))
###Output
_____no_output_____
###Markdown
Dataset GANs are notoriously finicky with hyperparameters, and also require many training epochs. In order to make this assignment approachable without a GPU, we will be working on the MNIST dataset, which is 60,000 training and 10,000 test images. Each picture contains a centered image of white digit on black background (0 through 9). This was one of the first datasets used to train convolutional neural networks and it is fairly easy -- a standard CNN model can easily exceed 99% accuracy. To simplify our code here, we will use the PyTorch MNIST wrapper, which downloads and loads the MNIST dataset. See the [documentation](https://github.com/pytorch/vision/blob/master/torchvision/datasets/mnist.py) for more information about the interface. The default parameters will take 5,000 of the training examples and place them into a validation dataset. The data will be saved into a folder called `MNIST_data`.
###Code
class ChunkSampler(sampler.Sampler):
"""Samples elements sequentially from some offset.
Arguments:
num_samples: # of desired datapoints
start: offset where we should start selecting from
"""
def __init__(self, num_samples, start=0):
self.num_samples = num_samples
self.start = start
def __iter__(self):
return iter(range(self.start, self.start + self.num_samples))
def __len__(self):
return self.num_samples
NUM_TRAIN = 50000
NUM_VAL = 5000
NOISE_DIM = 96
batch_size = 128
mnist_train = dset.MNIST('./cs231n/datasets/MNIST_data', train=True, download=True,
transform=T.ToTensor())
loader_train = DataLoader(mnist_train, batch_size=batch_size,
sampler=ChunkSampler(NUM_TRAIN, 0))
mnist_val = dset.MNIST('./cs231n/datasets/MNIST_data', train=True, download=True,
transform=T.ToTensor())
loader_val = DataLoader(mnist_val, batch_size=batch_size,
sampler=ChunkSampler(NUM_VAL, NUM_TRAIN))
imgs = next(iter(loader_train))[0].view(batch_size, 784).numpy().squeeze()
show_images(imgs)
###Output
_____no_output_____
###Markdown
Random NoiseGenerate uniform noise from -1 to 1 with shape `[batch_size, dim]`.Hint: use `torch.rand`.
###Code
def sample_noise(batch_size, dim):
"""
Generate a PyTorch Tensor of uniform random noise.
Input:
- batch_size: Integer giving the batch size of noise to generate.
- dim: Integer giving the dimension of noise to generate.
Output:
- A PyTorch Tensor of shape (batch_size, dim) containing uniform
random noise in the range (-1, 1).
"""
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
return 2*torch.rand(batch_size,dim)-1
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
###Output
_____no_output_____
###Markdown
Make sure noise is the correct shape and type:
###Code
def test_sample_noise():
batch_size = 3
dim = 4
torch.manual_seed(231)
z = sample_noise(batch_size, dim)
np_z = z.cpu().numpy()
assert np_z.shape == (batch_size, dim)
assert torch.is_tensor(z)
assert np.all(np_z >= -1.0) and np.all(np_z <= 1.0)
assert np.any(np_z < 0.0) and np.any(np_z > 0.0)
print('All tests passed!')
test_sample_noise()
###Output
All tests passed!
###Markdown
FlattenRecall our Flatten operation from previous notebooks... this time we also provide an Unflatten, which you might want to use when implementing the convolutional generator. We also provide a weight initializer (and call it for you) that uses Xavier initialization instead of PyTorch's uniform default.
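To make the shape contract concrete, here is a small self-contained check using PyTorch's built-in `nn.Flatten`/`nn.Unflatten` (available in recent PyTorch releases), which mirror the custom classes defined in the next cell:

```python
import torch
import torch.nn as nn

x = torch.randn(64, 128, 7, 7)                  # (N, C, H, W)
flat = nn.Flatten()(x)                          # -> (64, 128*7*7) = (64, 6272)
restored = nn.Unflatten(1, (128, 7, 7))(flat)   # -> (64, 128, 7, 7)
print(flat.shape, restored.shape)
```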
###Code
class Flatten(nn.Module):
def forward(self, x):
N, C, H, W = x.size() # read in N, C, H, W
return x.view(N, -1) # "flatten" the C * H * W values into a single vector per image
class Unflatten(nn.Module):
"""
An Unflatten module receives an input of shape (N, C*H*W) and reshapes it
to produce an output of shape (N, C, H, W).
"""
def __init__(self, N=-1, C=128, H=7, W=7):
super(Unflatten, self).__init__()
self.N = N
self.C = C
self.H = H
self.W = W
def forward(self, x):
return x.view(self.N, self.C, self.H, self.W)
def initialize_weights(m):
if isinstance(m, nn.Linear) or isinstance(m, nn.ConvTranspose2d):
init.xavier_uniform_(m.weight.data)
###Output
_____no_output_____
###Markdown
CPU / GPUBy default all code will run on CPU. GPUs are not needed for this assignment, but will help you to train your models faster. If you do want to run the code on a GPU, then change the `dtype` variable in the following cell.
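If you want the notebook to run unchanged on either device, one option (not part of the original assignment) is to pick the tensor type automatically:

```python
import torch

# Use the GPU tensor type when CUDA is available, otherwise fall back to the CPU type.
dtype = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
```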
###Code
# dtype = torch.FloatTensor
dtype = torch.cuda.FloatTensor ## UNCOMMENT THIS LINE IF YOU'RE ON A GPU!
###Output
_____no_output_____
###Markdown
DiscriminatorOur first step is to build a discriminator. Fill in the architecture as part of the `nn.Sequential` constructor in the function below. All fully connected layers should include bias terms. The architecture is: * Fully connected layer with input size 784 and output size 256 * LeakyReLU with alpha 0.01 * Fully connected layer with input_size 256 and output size 256 * LeakyReLU with alpha 0.01 * Fully connected layer with input size 256 and output size 1 Recall that the Leaky ReLU nonlinearity computes $f(x) = \max(\alpha x, x)$ for some fixed constant $\alpha$; for the LeakyReLU nonlinearities in the architecture above we set $\alpha=0.01$. The output of the discriminator should have shape `[batch_size, 1]`, and contain real numbers corresponding to the scores that each of the `batch_size` inputs is a real image.
###Code
def discriminator():
"""
Build and return a PyTorch model implementing the architecture above.
"""
model = nn.Sequential(
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
Flatten(),
nn.Linear(784,256),
nn.LeakyReLU(),
nn.Linear(256,256),
nn.LeakyReLU(),
nn.Linear(256,1)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
)
return model
###Output
_____no_output_____
###Markdown
Test to make sure the number of parameters in the discriminator is correct:
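For reference, the expected count follows directly from the three fully connected layers (weights plus biases); a quick arithmetic check, just for illustration:

```python
# (784*256 + 256) + (256*256 + 256) + (256*1 + 1) = 200960 + 65792 + 257
print((784 * 256 + 256) + (256 * 256 + 256) + (256 * 1 + 1))  # 267009
```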
###Code
def test_discriminator(true_count=267009):
model = discriminator()
cur_count = count_params(model)
if cur_count != true_count:
        print('Incorrect number of parameters in discriminator. Check your architecture.')
else:
print('Correct number of parameters in discriminator.')
test_discriminator()
###Output
Correct number of parameters in discriminator.
###Markdown
GeneratorNow to build the generator network: * Fully connected layer from noise_dim to 1024 * `ReLU` * Fully connected layer with size 1024 * `ReLU` * Fully connected layer with size 784 * `TanH` (to clip the image to be in the range of [-1,1])
###Code
def generator(noise_dim=NOISE_DIM):
"""
Build and return a PyTorch model implementing the architecture above.
"""
model = nn.Sequential(
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
nn.Linear(noise_dim,1024),
nn.ReLU(),
nn.Linear(1024,1024),
nn.ReLU(),
nn.Linear(1024,784),
nn.Tanh()
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
)
return model
###Output
_____no_output_____
###Markdown
Test to make sure the number of parameters in the generator is correct:
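Again, the expected count can be derived by hand for `noise_dim=4` (an illustrative check):

```python
# (4*1024 + 1024) + (1024*1024 + 1024) + (1024*784 + 784) = 5120 + 1049600 + 803600
print((4 * 1024 + 1024) + (1024 * 1024 + 1024) + (1024 * 784 + 784))  # 1858320
```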
###Code
def test_generator(true_count=1858320):
model = generator(4)
cur_count = count_params(model)
if cur_count != true_count:
        print('Incorrect number of parameters in generator. Check your architecture.')
else:
print('Correct number of parameters in generator.')
test_generator()
###Output
Correct number of parameters in generator.
###Markdown
GAN LossCompute the generator and discriminator loss. The generator loss is:$$\ell_G = -\mathbb{E}_{z \sim p(z)}\left[\log D(G(z))\right]$$and the discriminator loss is:$$ \ell_D = -\mathbb{E}_{x \sim p_\text{data}}\left[\log D(x)\right] - \mathbb{E}_{z \sim p(z)}\left[\log \left(1-D(G(z))\right)\right]$$Note that these are negated from the equations presented earlier as we will be *minimizing* these losses.**HINTS**: You should use the `bce_loss` function defined below to compute the binary cross entropy loss which is needed to compute the log probability of the true label given the logits output from the discriminator. Given a score $s\in\mathbb{R}$ and a label $y\in\{0, 1\}$, the binary cross entropy loss applied to the sigmoid of the score is$$ bce(s, y) = -y \log\left(\sigma(s)\right) - (1 - y) \log\left(1 - \sigma(s)\right), \qquad \sigma(s) = \frac{1}{1 + e^{-s}}. $$A naive implementation of this formula can be numerically unstable, so we have provided a numerically stable implementation for you below.You will also need to compute labels corresponding to real or fake and use the logit arguments to determine their size. Make sure you cast these labels to the correct data type using the global `dtype` variable, for example:`true_labels = torch.ones(size).type(dtype)`Instead of computing the expectation of $\log D(G(z))$, $\log D(x)$ and $\log \left(1-D(G(z))\right)$, we will be averaging over elements of the minibatch, so make sure to combine the loss by averaging instead of summing.
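As a quick sanity check (not part of the assignment), the stable formula $\max(s, 0) - s\,y + \log(1 + e^{-|s|})$ used below agrees with PyTorch's built-in `nn.BCEWithLogitsLoss`:

```python
import torch
import torch.nn as nn

scores = torch.tensor([-3.0, -0.5, 0.0, 2.0, 7.0])
targets = torch.tensor([0.0, 1.0, 1.0, 0.0, 1.0])

# Numerically stable formula, averaged over the batch
stable = (scores.clamp(min=0) - scores * targets + (1 + (-scores.abs()).exp()).log()).mean()
reference = nn.BCEWithLogitsLoss()(scores, targets)
print(stable.item(), reference.item())  # the two values agree
```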
###Code
def bce_loss(input, target):
"""
Numerically stable version of the binary cross-entropy loss function.
As per https://github.com/pytorch/pytorch/issues/751
See the TensorFlow docs for a derivation of this formula:
https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits
Inputs:
- input: PyTorch Tensor of shape (N, ) giving scores.
- target: PyTorch Tensor of shape (N,) containing 0 and 1 giving targets.
Returns:
- A PyTorch Tensor containing the mean BCE loss over the minibatch of input data.
"""
neg_abs = - input.abs()
loss = input.clamp(min=0) - input * target + (1 + neg_abs.exp()).log()
return loss.mean()
def discriminator_loss(logits_real, logits_fake):
"""
Computes the discriminator loss described above.
Inputs:
- logits_real: PyTorch Tensor of shape (N,) giving scores for the real data.
- logits_fake: PyTorch Tensor of shape (N,) giving scores for the fake data.
Returns:
- loss: PyTorch Tensor containing (scalar) the loss for the discriminator.
"""
real_N=logits_real.size()
fake_N=logits_fake.size()
labels_real=torch.ones(real_N).type(dtype)
labels_fake=torch.zeros(fake_N).type(dtype)
loss_real=torch.mean(bce_loss(logits_real,labels_real))
loss_fake=torch.mean(bce_loss(logits_fake,labels_fake))
loss = loss_real+loss_fake
return loss
def generator_loss(logits_fake):
"""
Computes the generator loss described above.
Inputs:
- logits_fake: PyTorch Tensor of shape (N,) giving scores for the fake data.
Returns:
- loss: PyTorch Tensor containing the (scalar) loss for the generator.
"""
fake_N=logits_fake.size()
labels_fake=torch.ones(fake_N).type(dtype)
loss = torch.mean(bce_loss(logits_fake,labels_fake))
return loss
###Output
_____no_output_____
###Markdown
Test your generator and discriminator loss. You should see errors < 1e-7.
###Code
def test_discriminator_loss(logits_real, logits_fake, d_loss_true):
d_loss = discriminator_loss(torch.Tensor(logits_real).type(dtype),
torch.Tensor(logits_fake).type(dtype)).cpu().numpy()
print("Maximum error in d_loss: %g"%rel_error(d_loss_true, d_loss))
test_discriminator_loss(answers['logits_real'], answers['logits_fake'],
answers['d_loss_true'])
def test_generator_loss(logits_fake, g_loss_true):
g_loss = generator_loss(torch.Tensor(logits_fake).type(dtype)).cpu().numpy()
print("Maximum error in g_loss: %g"%rel_error(g_loss_true, g_loss))
test_generator_loss(answers['logits_fake'], answers['g_loss_true'])
###Output
Maximum error in g_loss: 4.4518e-09
###Markdown
Optimizing our lossMake a function that returns an `optim.Adam` optimizer for the given model with a 1e-3 learning rate, beta1=0.5, beta2=0.999. You'll use this to construct optimizers for the generators and discriminators for the rest of the notebook.
###Code
def get_optimizer(model):
"""
Construct and return an Adam optimizer for the model with learning rate 1e-3,
beta1=0.5, and beta2=0.999.
Input:
- model: A PyTorch model that we want to optimize.
Returns:
- An Adam optimizer for the model with the desired hyperparameters.
"""
optimizer =optim.Adam(model.parameters(),lr=1e-3,betas=(0.5,0.999))
return optimizer
###Output
_____no_output_____
###Markdown
Training a GAN!We provide you the main training loop... you won't need to change this function, but we encourage you to read through and understand it.
###Code
def run_a_gan(D, G, D_solver, G_solver, discriminator_loss, generator_loss, show_every=250,
batch_size=128, noise_size=96, num_epochs=10):
"""
Train a GAN!
Inputs:
- D, G: PyTorch models for the discriminator and generator
- D_solver, G_solver: torch.optim Optimizers to use for training the
discriminator and generator.
- discriminator_loss, generator_loss: Functions to use for computing the generator and
discriminator loss, respectively.
- show_every: Show samples after every show_every iterations.
- batch_size: Batch size to use for training.
- noise_size: Dimension of the noise to use as input to the generator.
- num_epochs: Number of epochs over the training dataset to use for training.
"""
iter_count = 0
for epoch in range(num_epochs):
for x, _ in loader_train:
if len(x) != batch_size:
continue
D_solver.zero_grad()
real_data = x.type(dtype)
logits_real = D(2* (real_data - 0.5)).type(dtype)
g_fake_seed = sample_noise(batch_size, noise_size).type(dtype)
fake_images = G(g_fake_seed).detach()
logits_fake = D(fake_images.view(batch_size, 1, 28, 28))
d_total_error = discriminator_loss(logits_real, logits_fake)
d_total_error.backward()
D_solver.step()
G_solver.zero_grad()
g_fake_seed = sample_noise(batch_size, noise_size).type(dtype)
fake_images = G(g_fake_seed)
gen_logits_fake = D(fake_images.view(batch_size, 1, 28, 28))
g_error = generator_loss(gen_logits_fake)
g_error.backward()
G_solver.step()
if (iter_count % show_every == 0):
print('Iter: {}, D: {:.4}, G:{:.4}'.format(iter_count,d_total_error.item(),g_error.item()))
imgs_numpy = fake_images.data.cpu().numpy()
show_images(imgs_numpy[0:16])
plt.show()
print()
iter_count += 1
# Make the discriminator
D = discriminator().type(dtype)
# Make the generator
G = generator().type(dtype)
# Use the function you wrote earlier to get optimizers for the Discriminator and the Generator
D_solver = get_optimizer(D)
G_solver = get_optimizer(G)
# Run it!
run_a_gan(D, G, D_solver, G_solver, discriminator_loss, generator_loss)
###Output
Iter: 0, D: 1.481, G:0.6869
###Markdown
Well that wasn't so hard, was it? In the iterations in the low 100s you should see black backgrounds, fuzzy shapes as you approach iteration 1000, and decent shapes, about half of which will be sharp and clearly recognizable as we pass 3000. Least Squares GANWe'll now look at [Least Squares GAN](https://arxiv.org/abs/1611.04076), a newer, more stable alternative to the original GAN loss function. For this part, all we have to do is change the loss function and retrain the model. We'll implement equation (9) in the paper, with the generator loss:$$\ell_G = \frac{1}{2}\mathbb{E}_{z \sim p(z)}\left[\left(D(G(z))-1\right)^2\right]$$and the discriminator loss:$$ \ell_D = \frac{1}{2}\mathbb{E}_{x \sim p_\text{data}}\left[\left(D(x)-1\right)^2\right] + \frac{1}{2}\mathbb{E}_{z \sim p(z)}\left[ \left(D(G(z))\right)^2\right]$$**HINTS**: Instead of computing the expectation, we will be averaging over elements of the minibatch, so make sure to combine the loss by averaging instead of summing. When plugging in for $D(x)$ and $D(G(z))$ use the direct output from the discriminator (`scores_real` and `scores_fake`).
###Code
def ls_discriminator_loss(scores_real, scores_fake):
"""
Compute the Least-Squares GAN loss for the discriminator.
Inputs:
- scores_real: PyTorch Tensor of shape (N,) giving scores for the real data.
- scores_fake: PyTorch Tensor of shape (N,) giving scores for the fake data.
Outputs:
- loss: A PyTorch Tensor containing the loss.
"""
loss = None
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
loss_real=((scores_real-1)**2).mean()
loss_fake=(scores_fake**2).mean()
loss=0.5*loss_real+0.5*loss_fake
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
return loss
def ls_generator_loss(scores_fake):
"""
Computes the Least-Squares GAN loss for the generator.
Inputs:
- scores_fake: PyTorch Tensor of shape (N,) giving scores for the fake data.
Outputs:
- loss: A PyTorch Tensor containing the loss.
"""
loss=0.5*((scores_fake-1)**2).mean()
return loss
###Output
_____no_output_____
###Markdown
Before running a GAN with our new loss function, let's check it:
###Code
def test_lsgan_loss(score_real, score_fake, d_loss_true, g_loss_true):
score_real = torch.Tensor(score_real).type(dtype)
score_fake = torch.Tensor(score_fake).type(dtype)
d_loss = ls_discriminator_loss(score_real, score_fake).cpu().numpy()
g_loss = ls_generator_loss(score_fake).cpu().numpy()
print("Maximum error in d_loss: %g"%rel_error(d_loss_true, d_loss))
print("Maximum error in g_loss: %g"%rel_error(g_loss_true, g_loss))
test_lsgan_loss(answers['logits_real'], answers['logits_fake'],
answers['d_loss_lsgan_true'], answers['g_loss_lsgan_true'])
###Output
Maximum error in d_loss: 1.53171e-08
Maximum error in g_loss: 2.7837e-09
###Markdown
Run the following cell to train your model!
###Code
D_LS = discriminator().type(dtype)
G_LS = generator().type(dtype)
D_LS_solver = get_optimizer(D_LS)
G_LS_solver = get_optimizer(G_LS)
run_a_gan(D_LS, G_LS, D_LS_solver, G_LS_solver, ls_discriminator_loss, ls_generator_loss)
###Output
Iter: 0, D: 0.4267, G:0.4812
###Markdown
Deeply Convolutional GANsIn the first part of the notebook, we implemented an almost direct copy of the original GAN network from Ian Goodfellow. However, this network architecture allows no real spatial reasoning. It is unable to reason about things like "sharp edges" in general because it lacks any convolutional layers. Thus, in this section, we will implement some of the ideas from [DCGAN](https://arxiv.org/abs/1511.06434), where we use convolutional networks DiscriminatorWe will use a discriminator inspired by the TensorFlow MNIST classification tutorial, which is able to get above 99% accuracy on the MNIST dataset fairly quickly. * Reshape into image tensor (Use Unflatten!)* Conv2D: 32 Filters, 5x5, Stride 1* Leaky ReLU(alpha=0.01)* Max Pool 2x2, Stride 2* Conv2D: 64 Filters, 5x5, Stride 1* Leaky ReLU(alpha=0.01)* Max Pool 2x2, Stride 2* Flatten* Fully Connected with output size 4 x 4 x 64* Leaky ReLU(alpha=0.01)* Fully Connected with output size 1
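A quick way to see where the `4 x 4 x 64` figure comes from is to trace the spatial size through the layers (a small illustrative helper, not part of the assignment):

```python
def conv_out(size, kernel, stride=1, padding=0):
    return (size + 2 * padding - kernel) // stride + 1

size = 28
size = conv_out(size, 5)   # 24 after the first 5x5 conv (no padding)
size = size // 2           # 12 after the 2x2 max pool
size = conv_out(size, 5)   # 8 after the second 5x5 conv
size = size // 2           # 4 after the 2x2 max pool
print(size, size * size * 64)  # 4, 1024 -> the fully connected layer sees 4*4*64 features
```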
###Code
def build_dc_classifier():
"""
Build and return a PyTorch model for the DCGAN discriminator implementing
the architecture above.
"""
return nn.Sequential(
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Unflatten(N=-1,C=1,H=28,W=28),
nn.Conv2d(1,32,5),
nn.LeakyReLU(),
nn.MaxPool2d(2,stride=2),
nn.Conv2d(32,64,5),
nn.LeakyReLU(),
nn.MaxPool2d(2,stride=2),
Flatten(),
nn.Linear(4*4*64,4*4*64),
nn.LeakyReLU(),
nn.Linear(4*4*64,1)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
)
data = next(enumerate(loader_train))[-1][0].type(dtype)
b = build_dc_classifier().type(dtype)
out = b(data)
print(out.size())
###Output
torch.Size([128, 1])
###Markdown
Check the number of parameters in your classifier as a sanity check:
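The expected count can again be reproduced by hand (conv and linear layers with biases); an illustrative check:

```python
# conv1: 1*32*5*5 + 32, conv2: 32*64*5*5 + 64, fc1: 1024*1024 + 1024, fc2: 1024*1 + 1
print((1 * 32 * 5 * 5 + 32) + (32 * 64 * 5 * 5 + 64) + (1024 * 1024 + 1024) + (1024 * 1 + 1))  # 1102721
```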
###Code
def test_dc_classifer(true_count=1102721):
model = build_dc_classifier()
cur_count = count_params(model)
if cur_count != true_count:
        print('Incorrect number of parameters in generator. Check your architecture.')
else:
print('Correct number of parameters in generator.')
test_dc_classifer()
###Output
Correct number of parameters in generator.
###Markdown
GeneratorFor the generator, we will copy the architecture exactly from the [InfoGAN paper](https://arxiv.org/pdf/1606.03657.pdf). See Appendix C.1 MNIST. See the documentation for [tf.nn.conv2d_transpose](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). We are always "training" in GAN mode. * Fully connected with output size 1024* `ReLU`* BatchNorm* Fully connected with output size 7 x 7 x 128 * ReLU* BatchNorm* Reshape into Image Tensor of shape 7, 7, 128* Conv2D^T (Transpose): 64 filters of 4x4, stride 2, 'same' padding (use `padding=1`)* `ReLU`* BatchNorm* Conv2D^T (Transpose): 1 filter of 4x4, stride 2, 'same' padding (use `padding=1`)* `TanH`* Should have a 28x28x1 image, reshape back into 784 vector
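The `padding=1` hint follows from the transposed-convolution size formula, out = (in - 1) * stride - 2 * padding + kernel; a small illustrative check:

```python
def convT_out(size, kernel, stride, padding):
    return (size - 1) * stride - 2 * padding + kernel

print(convT_out(7, 4, stride=2, padding=1))    # 14
print(convT_out(14, 4, stride=2, padding=1))   # 28 -> the final 28x28x1 image
```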
###Code
def build_dc_generator(noise_dim=NOISE_DIM):
"""
Build and return a PyTorch model implementing the DCGAN generator using
the architecture described above.
"""
return nn.Sequential(
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
nn.Linear(noise_dim,1024),
nn.ReLU(),
nn.BatchNorm1d(1024),
nn.Linear(1024,7*7*128),
nn.ReLU(),
nn.BatchNorm1d(7*7*128),
Unflatten(),
nn.ConvTranspose2d(128,64,4,stride=2,padding=1),
nn.ReLU(),
nn.BatchNorm2d(64),
nn.ConvTranspose2d(64,1,4,stride=2,padding=1),
nn.Tanh()
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
)
test_g_gan = build_dc_generator().type(dtype)
test_g_gan.apply(initialize_weights)
fake_seed = torch.randn(batch_size, NOISE_DIM).type(dtype)
fake_images = test_g_gan.forward(fake_seed)
fake_images.size()
###Output
_____no_output_____
###Markdown
Check the number of parameters in your generator as a sanity check:
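For `noise_dim=4` the expected count breaks down as follows (weights and biases; each batch norm layer contributes a weight and a bias per channel). An illustrative check:

```python
fc1 = 4 * 1024 + 1024                      # 5120
bn1 = 2 * 1024                             # 2048
fc2 = 1024 * 7 * 7 * 128 + 7 * 7 * 128     # 6428800
bn2 = 2 * 7 * 7 * 128                      # 12544
convT1 = 128 * 64 * 4 * 4 + 64             # 131136
bn3 = 2 * 64                               # 128
convT2 = 64 * 1 * 4 * 4 + 1                # 1025
print(fc1 + bn1 + fc2 + bn2 + convT1 + bn3 + convT2)  # 6580801
```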
###Code
def test_dc_generator(true_count=6580801):
model = build_dc_generator(4)
cur_count = count_params(model)
if cur_count != true_count:
        print('Incorrect number of parameters in generator. Check your architecture.')
else:
print('Correct number of parameters in generator.')
test_dc_generator()
D_DC = build_dc_classifier().type(dtype)
D_DC.apply(initialize_weights)
G_DC = build_dc_generator().type(dtype)
G_DC.apply(initialize_weights)
D_DC_solver = get_optimizer(D_DC)
G_DC_solver = get_optimizer(G_DC)
run_a_gan(D_DC, G_DC, D_DC_solver, G_DC_solver, discriminator_loss, generator_loss, num_epochs=5)
###Output
Iter: 0, D: 1.408, G:1.356
|
Applied AI Study Group #2 - February 2020/week2/3- LSTM_example.ipynb | ###Markdown
Data Examination
###Code
import pandas as pd
import gdown
gdown.download('https://drive.google.com/uc?id={}'.format('1wdbtj2s9Vst5EQll1oqc1qXx-_tTv7Cf'),'DCOILBRENTEU.csv',quiet=False)
!ls
!pwd
datam = pd.read_csv("DCOILBRENTEU.csv")
datam.head()
#datam.describe()
datam[datam.DCOILBRENTEU=='.']
datam = datam[datam.DCOILBRENTEU != "."]
print(datam.shape)
import matplotlib.pyplot as plt
datam = datam.iloc[:,1].values.astype(float)
plt.plot(datam, color='red', label='Brent crude oil price')
plt.title("Oil market")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Data Preparation
###Code
import numpy as np
length = len(datam)
print(length)
length *= 1-0.1
print(length)
batch_size = 64
epochs = 120
timesteps = 30
def get_train_length(dataset,batch_size,test_percent):
length = len(dataset)
length *= 1-test_percent
train_length_values = []
for x in (range(int(length)- 100, int(length))):
modulo = x%batch_size
if(modulo==0):
train_length_values.append(x)
print(x)
return(max(train_length_values))
def get_train_length_v2(dataset,batch_size,test_percent):
length = len(dataset)
length *= 1-test_percent
howmanybatches = int(length/batch_size)
my_train_length = howmanybatches* batch_size
return my_train_length
length = get_train_length(datam,batch_size,0.1)
length_2 = get_train_length_v2(datam,batch_size,0.1)
print(length==length_2)
upper_train = length + timesteps * 2
datam_train = datam[:upper_train]
datam_train = datam_train[:]
datam_train.shape
print(min(datam_train),max(datam_train))
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range = (0,1))
trs = sc.fit_transform(np.float64(datam_train).reshape(-1,1))
trs.shape
print(min(trs),max(trs))
xtr = []
ytr = []
#print(length+timesteps)
for i in range(timesteps,length+timesteps):
xtr.append(trs[i-timesteps:i])
ytr.append(trs[i:i+timesteps])
print(len(xtr))
print(len(ytr))
print(np.shape(xtr[0:2]))
print(np.shape(ytr[0:2]))
xtr = np.array(xtr)
ytr = np.array(ytr)
###Output
7296
7296
(2, 30, 1)
(2, 30, 1)
###Markdown
Network
###Code
from keras.layers import Dense, Input, LSTM
from keras.models import Model
import h5py
input_1 = Input(batch_shape=(64,timesteps,1),name = 'input')
lstm1 = LSTM(8, stateful = False, return_sequences= True, name = 'lstm_1' )(input_1)
lstm2 = LSTM(4, stateful = True, return_sequences= True, name = 'lstm_2' )(lstm1)
output_1 = Dense(1,)(lstm2)
modelim = Model(inputs = input_1, outputs = output_1)
modelim.compile(optimizer = 'adam', loss = 'mae')
modelim.summary()
epochs = 100
for i in range(epochs):
print('Epoch'+str(i))
modelim.fit(xtr,ytr,shuffle=False,epochs=1,batch_size=64)
modelim.reset_states()
predicted = modelim.predict(xtr,batch_size = 64)
predicted = np.reshape(predicted,(predicted.shape[0],predicted.shape[1]))
print(predicted.shape)
predicted = sc.inverse_transform(predicted)
# for j in range(0,len(xtr)-timesteps):
# predict
# ytr is still scaled to [0, 1]; map it back with the fitted scaler so it is comparable with `predicted`
actual = sc.inverse_transform(ytr[:, 0, :]).astype(float)
plt.plot(actual, color='red', label='actual')
plt.plot(predicted[0:len(ytr), 0].astype(float), color='blue', label='predicted')
plt.legend()
plt.show()
###Output
_____no_output_____ |
books/track_stn.ipynb | ###Markdown
CONFIGURATE
###Code
!ls ../out/
project = '../out'
name = 'ferattpaperstn_attgmmstnnet_ferattentionstn_attloss_adam_affectnetdark_dim64_preactresnet18x128_fold0_003'
pathnamedataset = '~/.datasets'
pathmodel = os.path.join( project, name, 'models/model_best.pth.tar' ) #model_best
pathproject = os.path.join( project, name )
batch_size = 1
workers = 1
cuda = False
parallel = False
gpu = 1
seed = 1
imsize = 128
###Output
_____no_output_____
###Markdown
LOAD MODEL
###Code
# load model
print('>> Load model ...')
net = AttentionGMMSTNNeuralNet(
patchproject=project,
nameproject=name,
no_cuda=cuda,
parallel=parallel,
seed=seed,
gpu=gpu
)
if net.load( pathmodel ) is not True:
assert(False)
###Output
WARNING:root:Setting up a new session...
WARNING:visdom:Without the incoming socket you cannot receive events from the server or register event handlers to your Visdom client.
###Markdown
DATASETS
###Code
print('>> Load dataset ...')
namedataset = FactoryDataset.affect
subset = FactoryDataset.validation
imagesize=128
dataset = Dataset(
data=FactoryDataset.factory(
pathname=pathnamedataset,
name=namedataset,
subset=subset,
download=True
),
num_channels=3,
transform=transforms.Compose([
mtrans.ToResize( (imagesize,imagesize), resize_mode='square' ),
mtrans.ToTensor(),
#mtrans.ToMeanNormalization( mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] )
mtrans.ToNormalization(),
])
)
# emotions = dataset.data.classes
emotions = ['Neutral - NE', 'Happiness - HA', 'Surprise - SU', 'Sadness - SA', 'Anger - AN', 'Disgust - DI', 'Fear - FR', 'Contempt - CO']
# if namedataset == FactoryDataset.bu3dfe:
# emotions = emotions[:-1]
print(emotions)
print(len(emotions))
import random
def sigmoid(x):
return 1. / (1 + np.exp(-x))
def norm(x):
x = x-x.min()
x = x / x.max()
return x
def mean_normalization(image, mean, std):
tensor = image.float()/255.0
result_tensor = []
for t, m, s in zip(tensor, mean, std):
result_tensor.append(t.sub_(m).div_(s))
return torch.stack(result_tensor, 0)
def pad(image, xypad):
h,w = image.shape
im_pad = np.zeros( (h+2*xypad,w+2*xypad) )
im_pad[xypad:xypad+h,xypad:xypad+w] = image
return im_pad
def crop(image, xycrop):
h,w = image.shape[:2]
image = image[ xycrop:h-xycrop,xycrop:w-xycrop ]
return image
imagesize=128
image = cv2.imread('../rec/selfie_happy.png')[:,:,0]
# image = pad(image,50)
image = crop(image,50)
# sigma=0.1
# image = image/255.0
# noise = np.array([random.gauss(0,sigma) for i in range( image.shape[0]*image.shape[1] )])
# noise = noise.reshape(image.shape[0],image.shape[1])
# image = (np.clip(image+noise,0,1)*255).astype(np.uint8)
image = np.stack( (image,image,image), axis=2 )
image = cv2.resize( image, (imagesize, imagesize) )
# gamma=0.1
# image[:,:,0] = norm((image[:,:,0]/255)**gamma)*255
# image[:,:,1] = norm((image[:,:,1]/255)**gamma)*255
# image[:,:,2] = norm((image[:,:,2]/255)**gamma)*255
image = torch.from_numpy(image).permute( (2,0,1) ).unsqueeze(0).float()
# image = mean_normalization(image, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
image = image/255
z, y_lab_hat, att, theta, att_t, fmap, srf = net( image )
att = att.data.cpu().numpy().transpose(2,3,1,0)[...,0]
att_t = att_t.data.cpu().numpy().transpose(2,3,1,0)[...,0]
fmap = fmap.data.cpu().numpy().transpose(2,3,1,0)[:,:,0,0]
srf = srf.data.cpu().numpy().transpose(2,3,1,0)[...,0]
image = image.data.cpu().numpy().transpose(2,3,1,0)[...,0]
y_lab_hat_max = y_lab_hat.argmax()
print(image.shape)
print(y_lab_hat)
print(y_lab_hat_max)
print(emotions[y_lab_hat_max])
plt.figure( figsize=(16,8))
plt.subplot(151)
plt.imshow( norm(image) )
plt.title('image')
plt.axis('off')
plt.subplot(152)
plt.imshow( (fmap))
plt.title('attention map')
plt.axis('off' )
plt.subplot(153)
plt.imshow( srf.sum(2) )
plt.title('feature map')
plt.axis('off' )
plt.subplot(154)
plt.imshow( norm(att) )
# plt.title('class {}'.format( y_lab_hat_max ) )
plt.title('attention feature')
plt.axis('off')
plt.subplot(155)
plt.imshow( norm(att_t) )
# plt.title('class {}'.format( y_lab_hat_max ) )
plt.title('attention transform feature')
plt.axis('off')
plt.show()
import scipy.misc
import random
def sigmoid(x):
return 1. / (1 + np.exp(-x))
def norm(x):
x = x-x.min()
x = x / x.max()
return x
def mean_normalization(image, mean, std):
tensor = image.float()/255.0
result_tensor = []
for t, m, s in zip(tensor, mean, std):
result_tensor.append(t.sub_(m).div_(s))
return torch.stack(result_tensor, 0)
def pad(image, xypad):
h,w = image.shape
im_pad = np.zeros( (h+2*xypad,w+2*xypad) )
im_pad[xypad:xypad+h,xypad:xypad+w] = image
return im_pad
def crop(image, xycrop):
h,w = image.shape[:2]
image = image[ xycrop:h-xycrop,xycrop:w-xycrop , :]
return image
def fusion( imx, imy, x=0,y=0, alpha=0.5 ):
n,m = imy.shape[:2]
imx[ x:x+n,y:y+m, : ] = alpha*imx[ x:x+n,y:y+m, : ] + (1-alpha)*imy
return imx
def noise(image, sigma=0.05):
image = image/255.0
noise = np.array([random.gauss(0,sigma) for i in range( image.shape[0]*image.shape[1]*3 )])
noise = noise.reshape(image.shape[0],image.shape[1],3)
image = (np.clip(image+noise,0,1)*255).astype(np.uint8)
return image
def ligth(image, gamma=0.2):
image[:,:,0] = norm((image[:,:,0]/255)**gamma)*255
image[:,:,1] = norm((image[:,:,1]/255)**gamma)*255
image[:,:,2] = norm((image[:,:,2]/255)**gamma)*255
return image
class cTrack(object):
'''track frame
'''
def __init__(self, net, image_size=128):
self.imagesize=image_size
self.net=net
def __call__(self, frame):
#image = frame
image = frame.mean(axis=2)
image = np.stack( (image,image,image), axis=2 )
image = cv2.resize( image, (self.imagesize,self.imagesize) )
image = torch.from_numpy(image).permute( (2,0,1) ).unsqueeze(0).float()
#image = mean_normalization(image, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
image = image / 255
zhat, y_lab_hat, att, theta, att_t, fmap, srf = self.net( image )
att = att.data.cpu().numpy().transpose(2,3,1,0)[...,0]
att_t = att_t.data.cpu().numpy().transpose(2,3,1,0)[...,0]
fmap = fmap.data.cpu().numpy().transpose(2,3,1,0)[:,:,0,0]
srf = srf.data.cpu().numpy().transpose(2,3,1,0)[...,0]
y_lab_hat_max = y_lab_hat.argmax()
return att, att_t, fmap, srf, zhat, y_lab_hat, y_lab_hat_max
class cFrame(object):
'''frames porcess
'''
def __init__(self, image_size=[640, 640, 3], border=0, offsetx=0, offsety=0):
self.imagesize = image_size
self.asp = float(image_size[1])/image_size[0]
self.border = border
self.offsetx = offsetx
self.offsety = offsety
def __call__(self, frame):
'''process frame
'''
#H, W original image size
H=frame.shape[0]; W=frame.shape[1];
#image canonization
if H>W: frame = frame.transpose(1,0,2)
H=frame.shape[0]; W=frame.shape[1]
H1 = int(H - self.border)
W1 = int(H1 * self.asp)
offsetx=self.offsetx
offsety=self.offsety
Wdif = int(np.abs(W - W1) / 2.0)
Hdif = int(np.abs(H - H1) / 2.0)
vbox = np.array([[Wdif, Hdif], [W - Wdif, H - Hdif]])
frame_p = frame[vbox[0, 1]+offsety:vbox[1, 1]+offsety, vbox[0, 0]+offsetx:vbox[1, 0]+offsetx, : ]; #(2, 1, 0)
aspY = float(self.imagesize[0]) / frame_p.shape[0]
aspX = float(self.imagesize[1]) / frame_p.shape[1]
        # scipy.misc.imresize was removed from recent SciPy releases; cv2.resize is equivalent here
        frame_p = cv2.resize(frame_p, (self.imagesize[1], self.imagesize[0]), interpolation=cv2.INTER_LINEAR)
return frame_p
def drawcaption( y, emotions, imsizeout=(200,200) ):
ne = len(emotions)
colors = ([150,150,150],[130,130,130],[255,255,255],[255,255,255])
hbox=40; wbox= 135 + 170
imsize=(hbox*ne,wbox,3)
imemotions = np.zeros( imsize, dtype=np.uint8 )*255
ymax = y.argmax()
for i, yi in enumerate(y):
k = 1 if y[i]>0.5 else 0
kh = 1 if ymax==i else 0
bbox = np.array([[0,0],[wbox,0],[wbox,hbox],[0,hbox]]);
bbox[:,0] += 0
bbox[:,1] += 26-26 + (i)*40
imemotions = cv2.fillConvexPoly(imemotions, bbox, color=colors[kh] )
bbox = np.array([[0,0],[int(wbox*y[i]),0],[int(y[i]*wbox),hbox],[0,hbox]]);
bbox[:,0] += 0
bbox[:,1] += 26-26 + (i)*40
imemotions = cv2.fillConvexPoly(imemotions, bbox, color=[255,160,122] )
cv2.putText(
imemotions,
#'{}: {:.3f}'.format(emotions[i][:-5],y[i]),
'{}: {:.2f}%'.format(emotions[i][:-5], y[i]*100 ),
(2, 26 + (i)*40),
color=colors[2+kh],
fontFace=cv2.FONT_HERSHEY_SIMPLEX,
fontScale=1,
thickness=2
)
#imemotions = imemotions[20:-20,20:-20,:]
imemotions = cv2.resize( imemotions, imsizeout )
return imemotions
import cv2
from IPython.display import clear_output
filename = '../out/videos/004653400.avi' # exp_cb.mkv, exp_mz_sn.mkv
cap = cv2.VideoCapture( filename )
print(cap.isOpened())
frame_proc = cFrame( image_size=[128,128,3] )
track = cTrack( net, image_size=128 )
k = 0
iframe=0
totalframe=25
iniframe=0 #0, 1700
bligth=False
mingam, maxgam = 0.01,10.0
#gammas = mingam + np.random.rand(totalframe)*( maxgam - mingam )
gammas = np.linspace( mingam, maxgam, num=totalframe )
gammas = gammas[::-1]
#gammas.sort()
bnoise=True
minnoise, maxnoise = 0.01, 0.1
# sigmas = minnoise + np.random.rand(totalframe)*( maxnoise - minnoise )
sigmas = np.linspace( minnoise, maxnoise, num=totalframe )
sigmas = sigmas[::-1]
#sigmas.sort()
# for every frame
while(cap.isOpened()):
# read
#for i in range(100):
# ret, frame = cap.read()
ret, frame = cap.read()
if k%2 != 0 or k < iniframe:
k+=1
continue
#print(k)
#frame = frame[:-300,850:,:]
#frame = frame[50:-350,500:1350,:]
frame = frame[0:500,0:500,:]
image = frame_proc( frame )
image = crop(image,20)
#image = ligth(image, gamma = 0.3 )
#noise
if bnoise:
image = noise(image, sigma=sigmas[iframe])
image = np.clip(image, 0, 255 )
#ligth
if bligth:
image = ligth(image, gamma = gammas[iframe] )
image = np.clip(image, 0, 255 )
att, att_t, fmap, srf, zhat, yhat, yhat_max = track( image )
yhat = TF.softmax( yhat, dim=1 )[0,:]
att_map = att.mean(axis=2)
#print(yhat)
#create video frame
imsize=500
midsize=250
layer = np.zeros( [768, 1024, 3] , dtype=np.uint8 )*255
caption = drawcaption(yhat, emotions )
image = cv2.resize( image, (imsize,imsize) )[:,:,(2,1,0)]
att = cv2.resize( norm(att)*255, (midsize,midsize) )
att_t = cv2.resize( norm(att_t)*255, (midsize,midsize) )
fmap = cv2.resize( norm(fmap)*255, (midsize,midsize) )
srf = cv2.resize( norm(srf.sum(axis=2))*255, (midsize,midsize) )
#https://www.learnopencv.com/applycolormap-for-pseudocoloring-in-opencv-c-python/
fmap = cv2.applyColorMap( fmap.astype(np.uint8) , cv2.COLORMAP_JET)[:,:,(2,1,0)]
srf = cv2.applyColorMap( srf.astype(np.uint8), cv2.COLORMAP_JET)[:,:,(2,1,0)]
att = np.concatenate( (att, att_t), axis=0 )
feature = np.concatenate( (fmap, srf ), axis=0 )
#feature = np.stack( (feature, feature,feature ), axis=2 )
image = np.concatenate( (image, feature, att), axis=1 )
#layer
layer = fusion(layer, image, x=10+100, y=10, alpha=0.0 )
layer = fusion(layer, caption, x=20+100, y=20, alpha=0.2 )
# cv2.imwrite('../out/result/{:06d}.png'.format(iframe), layer[:,:,(2,1,0)] )
k+=1; iframe+=1
# show
ishow=True
if ishow:
plt.figure( figsize=(16,8) )
plt.imshow( layer )
plt.axis('off')
plt.show()
clear_output(wait=True)
if iframe%100 == 0:
print(k, iframe)
if k > iniframe+totalframe:
break
cap.release()
# print('DONE!!!')
plt.figure( figsize=(16,8) )
plt.imshow( layer )
plt.axis('off')
plt.show()
###Output
_____no_output_____ |
notebooks/1_data_download_analysis_visualization/1.04_preprocessing_with_cdo.ipynb | ###Markdown
004. Using "cdo" to manipulate the data The **Climate data operators** are a popular tool of command line functions. Lately, a python-bindings became available (https://pypi.org/project/cdo/).Setup and Documentation for CDO: https://code.mpimet.mpg.de/projects/cdo/wiki/CdoDocumentation As we are dealing with monthly files for many variables, we have to1. Put the monthly files with hourly data together2. Aggregate the hourly data into daily data (optionally)2. Merge the variables into one fileCDO provides the following methods:- `cdo.cat()` concatenates files- `cdo.dayavg()` averages hourly data into daily data- `cdo.daysum()` sums hourly values to daily valuesYou could use these like so:
###Code
from cdo import Cdo
import glob
import os
import xarray as xr
cdo = Cdo()
tmp_file = './tmp.nc'
xar = xr.open_mfdataset(glob.glob(path_to_data+'era5_precipitation*.nc'),
combine='by_coords')
xar.to_netcdf(tmp_file)
cdo.daysum(input=tmp_file,
output=path_to_data+'era5_precip_daysum.nc')
os.remove(tmp_file)
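# The daily files written by CDO are stamped at 23 UTC of each day (see the note below);
# shifting the time coordinate back by 23 hours aligns them with the start of each day.
# Illustrative extra step, assuming the output file written above.
import numpy as np
daysum = xr.open_dataset(path_to_data + 'era5_precip_daysum.nc')
daysum['time'] = daysum.time - np.timedelta64(23, 'h')
daysum.to_netcdf(path_to_data + 'era5_precip_daysum_shifted.nc')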
###Output
_____no_output_____
###Markdown
__Note__ that CDO daily aggregations (max/mean) set the timestamps to 23 UTC of each day you are aggregating over.We needed to shift those times to the start of the day to work with the data (subtract 23 hours from the time coordinate).Within this project, the CDO command-line tools were used from Python via `os.system()`. This may not be the most elegant solution, but it works and it was the approach we were most familiar with. You are free to use the Python bindings as well; there should not be any difference.The methods reside within the **utils.py** file inside the **/python/aux/** dir and are used to easily preprocess the data.The following methods are currently implemented:1. `cdo_daily_means()`: generates daily averages from the input data2. `cdo_precip_sums()`: generates daily precipitation sums from input data3. `cdo_clean_precip()`: extracts precipitation vars from input data to new file and removes it from input data4. `cdo_spatial_cut()`: extracts all of the input data within a specified bounding box to a new file5. `cdo_merge_time()`: merges all of the input data into a new file on the time dimensionExample calls are listed below. Python-PathTo import Python functions from the **./python/aux/** dir, we have to add the main path of the repository to the so-called *Python path* of the system. This is done with the following two lines:
###Code
import sys
sys.path.append('../../')
###Output
_____no_output_____
###Markdown
Define the needed variablesAll files inside the specified directory which include the specified string are processed.
###Code
path_to_data = 'volume/project/data/'
###Output
_____no_output_____
###Markdown
Execute the methodsFor every existing and matching file, the method is executed. For more details check the **utils.py** file. 1) cdo_daily_meansloops through the given directory and and executes "cdo dayavg * file_includes * file_out" appends "dayavg" at the end of the filename
###Code
from python.aux.utils import cdo_daily_means
incl = 'temperature'
cdo_daily_means(path=path_to_data, file_includes=incl)
###Output
_____no_output_____
###Markdown
2) cdo_precip_sumsloops through the given directory and executes "cdo -b 32 daysum filein.nc fileout.nc", appending "daysum" to the end of the filename
###Code
from python.aux.utils import cdo_precip_sums
incl = 'large_scale_precipitation'
cdo_precip_sums(path=path_to_data, file_includes=incl)
###Output
_____no_output_____
###Markdown
3) cdo_clean_preciploops through the given directory and executes "ncks -v cp,tp filein.nc fileout.nc" or "ncks -x -v cp,tp filein.nc fileout.nc" for all files which contain precip_type in their name, creating new files with the corresponding variables
###Code
from python.aux.utils import cdo_clean_precip
cdo_clean_precip(path=path_to_data, precip_type='precipitation')
###Output
_____no_output_____
###Markdown
4) cdo_spatial_cutloops through the given directory and executes "cdo -sellonlatbox,lonmin,lonmax,latmin,latmax * file_includes * fileout.nc", appending "spatial_cut_*new_file_includes*" to the end of the filename
###Code
from python.aux.utils import cdo_spatial_cut
lonmin = 10
lonmax = 20
latmin = 40
latmax = 50
incl = 'temperature'
incl_new = 'temperature_spatial_cut'
cdo_spatial_cut(path=path_to_data, file_includes=incl, new_file_includes=incl_new, lonmin=lonmin, lonmax=lonmax, latmin=latmin, latmax=latmax)
###Output
_____no_output_____
###Markdown
5) cdo_merge_timemerges all files including a specified string in their name within the given directory into the specified new file with "cdo mergetime * file_includes * fileout.nc"
###Code
from python.aux.utils import cdo_merge_time
incl = 'temperature'
new_filename = 'temperature_YYYYinit-YYYYend.nc'
cdo_merge_time(path=path_to_data, file_includes=incl, new_file=new_filename)
###Output
_____no_output_____ |
kaggle_titanic/titanic.ipynb | ###Markdown
Titanic: Machine Learning from Disaster
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import tree, preprocessing, metrics
from sklearn.model_selection import cross_val_score, train_test_split
from IPython.display import Image
import pydotplus
train_data = pd.read_csv('./data/train.csv')
train_data.head()
###Output
_____no_output_____
###Markdown
Dataset descriptionThe dataset has 11 attributes and 1 label.| | | | | | | | | | | | | :--: |:--: | :--: | :--: |:--: |:--: |:--: |:--: |:--: |:--: |:--: || PassengerId| Pclass| Name| Sex|Age |SibSp | Parch| Ticket|Fare |Cabin | Embarked| | Passenger ID| Ticket class| Name | Sex | Age | Siblings/spouses aboard | Parents/children aboard | Ticket number | Fare | Cabin number | Port of embarkation | Survived: whether the passenger survived* 0 = No* 1 = Yes Preliminary data analysisBased on common sense, survival should mainly depend on the following factors:* ticket class* sex* age* fare (correlated with ticket class)Using mainly these 4 features, let's first look at each of these 4 variables and how they relate to survival.
###Code
train_data.groupby('Pclass').mean()['Survived'].plot.bar()
train_data.groupby(['Pclass','Sex']).mean()['Survived'].plot.bar()
age_range = pd.cut(train_data["Age"], np.arange(0, 90, 10))
train_data.groupby([age_range]).mean()['Survived'].plot.bar()
fare_range = pd.cut(train_data["Fare"], np.arange(0, 700, 100))
train_data.groupby([fare_range]).mean()['Survived'].plot.bar()
fig, ax = plt.subplots(2, 2,figsize=[10,8])
train_data.groupby('Pclass').mean()['Survived'].plot.bar(ax=ax[0][0])
train_data.groupby('Sex').mean()['Survived'].plot.bar(ax=ax[0][1])
train_data.groupby([fare_range]).mean()['Survived'].plot.bar(ax=ax[1][0])
train_data.groupby([age_range]).mean()['Survived'].plot.bar(ax=ax[1][1])
plt.tight_layout()
plt.savefig('/Users/lli/GitHub/zhihu/source/img/ml/3_2.png',dpi=300)
###Output
_____no_output_____
###Markdown
As the plots show, because of the Titanic's rescue policy, sex is the strongest factor affecting survival, followed by ticket class, fare, and age. Since fare is correlated with ticket class, we only consider three factors here. Data preprocessing
###Code
train_data = train_data.drop(['PassengerId','Name','SibSp','Parch','Ticket','Cabin','Embarked','Fare'], axis=1)
train_data.count()
# Fill missing Age values with the mean
train_data['Age'] = train_data['Age'].fillna(value=train_data.Age.mean())
# Encode Sex as 0/1
train_data['Sex'] = preprocessing.LabelEncoder().fit_transform(train_data['Sex'])
# Convert Pclass into three dummy variables P_1, P_2, P_3 (left commented out)
#pclass = pd.get_dummies(train_data['Pclass'],prefix='P')
#train_data = train_data.drop(['Pclass'], axis=1)
#train_data = pd.concat([pclass,train_data], axis=1)
#
X = train_data.drop(['Survived'], axis=1).values
y = train_data['Survived'].values
###Output
_____no_output_____
###Markdown
Build the decision tree
###Code
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.22)
clf = tree.DecisionTreeClassifier(criterion='entropy',max_depth = 3)
clf.fit (X_train, y_train)
clf.score (X_test, y_test)
name = train_data.drop(['Survived'], axis=1).columns
label = ["Unurvived","Survived"]
dot_data = tree.export_graphviz(clf, out_file=None,feature_names=name, class_names=label,filled=True)
graph = pydotplus.graph_from_dot_data(dot_data)
graph.write_png('tree.png')
Image(graph.create_png())
###Output
_____no_output_____
###Markdown
Model evaluation
###Code
predict = clf.predict(X)
print(metrics.classification_report(y, predict))
scores = cross_val_score(clf, X, y, cv=10)
scores.mean()
metrics.classification_report(y, predict)
###Output
_____no_output_____ |
Line-detection/1. Hough lines.ipynb | ###Markdown
Hough Lines Import resources and display the image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# #read in image
# image -= cv2.imread('')
# Read in the image
image = cv2.imread('images/images.PNG')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Perform edge detection
###Code
# Convert image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
# Define our parameters for Canny
low_threshold = 200
high_threshold = 250
edges = cv2.Canny(gray, low_threshold, high_threshold)
plt.imshow(edges, cmap='gray')
###Output
_____no_output_____
###Markdown
Find lines using a Hough transform
###Code
# Define the Hough transform parameters
# Make a blank the same size as our image to draw on
rho = 1
theta = np.pi/180
threshold = 60
min_line_length = 190
max_line_gap = 5
line_image = np.copy(image) #creating an image copy to draw lines on
# Run Hough on the edge-detected image
lines = cv2.HoughLinesP(edges, rho, theta, threshold, np.array([]),
min_line_length, max_line_gap)
# Iterate over the output "lines" and draw lines on the image copy
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(line_image,(x1,y1),(x2,y2),(255,0,0),15)
plt.imshow(line_image)
###Output
_____no_output_____ |
Object tracking and Localization/Mini_Project-2D_Histogram/writeup.ipynb | ###Markdown
Two Dimensional Histogram Filter - Your First Feature (and your first bug).Writing code is important. But a big part of being on a self driving car team is working with a **large** existing codebase. On high stakes engineering projects like a self driving car, you will probably have to earn the trust of your managers and coworkers before they'll let you make substantial changes to the code base. A typical assignment for someone new to a team is to make progress on a backlog of bugs. So with that in mind, that's what you will be doing for your first project in the Nanodegree.You'll go through this project in a few parts:1. **Explore the Code** - don't worry about bugs at this point. The goal is to get a feel for how this code base is organized and what everything does.2. **Implement a Feature** - write code that gets the robot moving correctly.3. **Fix a Bug** - Implementing motion will reveal a bug which hadn't shown up before. Here you'll identify what the bug is and take steps to reproduce it. Then you'll identify the cause and fix it. Part 1: Exploring the codeIn this section you will just run some existing code to get a feel for what this localizer does.You can navigate through this notebook using the arrow keys on your keyboard. You can run the code in a cell by pressing **`Ctrl + Enter`**Navigate through the cells below. In each cell you should1. Read through the code. It's okay to not understand everything at this point. 2. Make a guess about what will happen when you run the code. 3. Run the code and compare what you see with what you expected. 4. When you get to a **TODO** read the instructions carefully and complete the activity.
###Code
# This code "imports" code from some of the other files we've written
# in this directory. Specifically simulate.py and helpers.py
import simulate as sim
import helpers
import localizer
# Don't worry too much about this code for now...
from __future__ import division, print_function
%load_ext autoreload
%autoreload 2
# This code defines a 5x5 robot world as well as some other parameters
# which we will discuss later. It then creates a simulation and shows
# the initial beliefs.
R = 'r'
G = 'g'
grid = [
[R,G,G,G,R],
[G,G,R,G,R],
[G,R,G,G,G],
[R,R,G,R,G],
[R,G,R,G,R],
]
blur = 0.05
p_hit = 200.0
simulation = sim.Simulation(grid, blur, p_hit)
simulation.show_beliefs()
###Output
_____no_output_____
###Markdown
Run the code below multiple times by repeatedly pressing Ctrl + Enter.After each run observe how the state has changed.
###Code
simulation.run(1)
simulation.show_beliefs()
###Output
_____no_output_____
###Markdown
What do you think this call to `run` is doing? Look at the code in **`simulate.py`** to find out (remember - you can see other files in the current directory by clicking on the `jupyter` logo in the top left of this notebook).Spend a few minutes looking at the `run` method and the methods it calls to get a sense for what's going on. What am I looking at?The red star shows the robot's true position. The blue circles indicate the strength of the robot's belief that it is at any particular location.Ideally we want the biggest blue circle to be at the same position as the red star.
###Code
# We will provide you with the function below to help you look
# at the raw numbers.
def show_rounded_beliefs(beliefs):
for row in beliefs:
for belief in row:
print("{:0.3f}".format(belief), end=" ")
print()
# The {:0.3f} notation is an example of "string
# formatting" in Python. You can learn more about string
# formatting at https://pyformat.info/
show_rounded_beliefs(simulation.beliefs)
###Output
0.069 0.070 0.004 0.070 0.003
0.069 0.003 0.069 0.070 0.070
0.002 0.002 0.069 0.003 0.069
0.002 0.069 0.003 0.069 0.002
0.002 0.070 0.070 0.069 0.002
###Markdown
_____ Part 2: Implement a 2D sense function.As you can see, the robot's beliefs aren't changing. No matter how many times we call the simulation's sense method, nothing happens. The beliefs remain uniform. Instructions1. Open `localizer.py` and complete the `sense` function.3. Run the code in the cell below to import the localizer module (or reload it) and then test your sense function.4. If the test passes, you've successfully implemented your first feature! Keep going with the project. If your tests don't pass (they likely won't the first few times you test), keep making modifications to the `sense` function until they do!
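If you want a mental model before opening `localizer.py`, here is a minimal sketch of the update the test below expects (your actual implementation lives in `localizer.sense`): scale each cell's belief by `p_hit` when its color matches the observation and by `p_miss` otherwise, then normalize so the beliefs sum to 1.

```python
def sense_sketch(color, grid, beliefs, p_hit, p_miss):
    new_beliefs = []
    for row_colors, row_beliefs in zip(grid, beliefs):
        new_row = []
        for cell_color, prior in zip(row_colors, row_beliefs):
            multiplier = p_hit if cell_color == color else p_miss
            new_row.append(prior * multiplier)
        new_beliefs.append(new_row)
    total = sum(sum(row) for row in new_beliefs)
    return [[p / total for p in row] for row in new_beliefs]
```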
###Code
from importlib import reload  # a built-in on Python 2; this import is required on Python 3
reload(localizer)
def test_sense():
R = 'r'
_ = 'g'
simple_grid = [
[_,_,_],
[_,R,_],
[_,_,_]
]
p = 1.0 / 9
initial_beliefs = [
[p,p,p],
[p,p,p],
[p,p,p]
]
observation = R
expected_beliefs_after = [
[1/11, 1/11, 1/11],
[1/11, 3/11, 1/11],
[1/11, 1/11, 1/11]
]
p_hit = 3.0
p_miss = 1.0
beliefs_after_sensing = localizer.sense(
observation, simple_grid, initial_beliefs, p_hit, p_miss)
if helpers.close_enough(beliefs_after_sensing, expected_beliefs_after):
print("Tests pass! Your sense function is working as expected")
return
elif not isinstance(beliefs_after_sensing, list):
print("Your sense function doesn't return a list!")
return
elif len(beliefs_after_sensing) != len(expected_beliefs_after):
print("Dimensionality error! Incorrect height")
return
elif len(beliefs_after_sensing[0] ) != len(expected_beliefs_after[0]):
print("Dimensionality Error! Incorrect width")
return
elif beliefs_after_sensing == initial_beliefs:
print("Your code returns the initial beliefs.")
return
total_probability = 0.0
for row in beliefs_after_sensing:
for p in row:
total_probability += p
if abs(total_probability-1.0) > 0.001:
print("Your beliefs appear to not be normalized")
return
print("Something isn't quite right with your sense function")
test_sense()
###Output
Tests pass! Your sense function is working as expected
###Markdown
Integration TestingBefore we call this "complete" we should perform an **integration test**. We've verified that the sense function works on its own, but does the localizer work overall?Let's perform an integration test. First you should execute the code in the cell below to prepare the simulation environment.
###Code
from simulate import Simulation
import simulate as sim
import helpers
reload(localizer)
reload(sim)
reload(helpers)
R = 'r'
G = 'g'
grid = [
[R,G,G,G,R,R,R],
[G,G,R,G,R,G,R],
[G,R,G,G,G,G,R],
[R,R,G,R,G,G,G],
[R,G,R,G,R,R,R],
[G,R,R,R,G,R,G],
[R,R,R,G,R,G,G],
]
# Use small value for blur. This parameter is used to represent
# the uncertainty in MOTION, not in sensing. We want this test
# to focus on sensing functionality
blur = 0.1
p_hit = 100.0
simulation = sim.Simulation(grid, blur, p_hit)
# Use control+Enter to run this cell many times and observe how
# the robot's belief that it is in each cell (represented by the
# size of the corresponding circle) changes as the robot moves.
# The true position of the robot is given by the red star.
# Run this cell about 15-25 times and observe the results
simulation.run(1)
simulation.show_beliefs()
# If everything is working correctly you should see the beliefs
# converge to a single large circle at the same position as the
# red star. Though, if your sense function is implemented correctly
# and this output is not converging as expected.. it may have to do
# with the `move` function bug; your next task!
#
# When you are satisfied that everything is working, continue
# to the next section
###Output
_____no_output_____
###Markdown
Part 3: Identify and Reproduce a BugSoftware has bugs. That's okay.A user of your robot called tech support with a complaint> "So I was using your robot in a square room and everything was fine. Then I tried loading in a map for a rectangular room and it drove around for a couple seconds and then suddenly stopped working. Fix it!"Now we have to debug. We are going to use a systematic approach.1. Reproduce the bug2. Read (and understand) the error message (when one exists)3. Write a test that triggers the bug.4. Generate a hypothesis for the cause of the bug.5. Try a solution. If it fixes the bug, great! If not, go back to step 4. Step 1: Reproduce the bugThe user said that **rectangular environments** seem to be causing the bug. The code below is the same as the code you were working with when you were doing integration testing of your new feature. See if you can modify it to reproduce the bug.
###Code
from simulate import Simulation
import simulate as sim
import helpers
reload(localizer)
reload(sim)
reload(helpers)
R = 'r'
G = 'g'
grid = [
[R,G,G,G,R,R,R],
[G,G,R,G,R,G,R],
[G,R,G,G,G,G,R],
[R,R,G,R,G,G,G],
]
blur = 0.001
p_hit = 100.0
simulation = sim.Simulation(grid, blur, p_hit)
# remember, the user said that the robot would sometimes drive around for a bit...
# It may take several calls to "simulation.run" to actually trigger the bug.
simulation.run(1)
simulation.show_beliefs()
simulation.run(1)
###Output
_____no_output_____
###Markdown
Step 2: Read and Understand the error messageIf you triggered the bug, you should see an error message directly above this cell. The end of that message should say:```IndexError: list index out of range```And just above that you should see something like```path/to/your/directory/localizer.pyc in move(dy, dx, beliefs, blurring) 38 new_i = (i + dy ) % width 39 new_j = (j + dx ) % height---> 40 new_G[int(new_i)][int(new_j)] = cell 41 return blur(new_G, blurring)```This tells us that line 40 (in the move function) is causing an `IndexError` because "list index out of range".If you aren't sure what this means, use Google! Copy and paste `IndexError: list index out of range` into Google! When I do that, I see something like this:Browse through the top links (often these will come from stack overflow) and read what people have said about this error until you are satisfied you understand how it's caused. Step 3: Write a test that reproduces the bugThis will help you know when you've fixed it and help you make sure you never reintroduce it in the future. You might have to try many potential solutions, so it will be nice to have a single function to call to confirm whether or not the bug is fixed
###Code
# According to the user, sometimes the robot actually does run "for a while"
# - How can you change the code so the robot runs "for a while"?
# - How many times do you need to call simulation.run() to consistently
# reproduce the bug?
# Modify the code below so that when the function is called
# it consistently reproduces the bug.
def test_robot_works_in_rectangle_world():
from simulate import Simulation
import simulate as sim
import helpers
reload(localizer)
reload(sim)
reload(helpers)
R = 'r'
G = 'g'
grid = [
[R,G,G,G,R,R,R],
[G,G,R,G,R,G,R],
[G,R,G,G,G,G,R],
[R,R,G,R,G,G,G],
]
blur = 0.001
p_hit = 100.0
for i in range(1000):
simulation = sim.Simulation(grid, blur, p_hit)
simulation.run(1)
test_robot_works_in_rectangle_world()
###Output
_____no_output_____
###Markdown
Step 4: Generate a HypothesisIn order to have a guess about what's causing the problem, it will be helpful to use some Python debugging tools.The `pdb` module (`p`ython `d`e`b`ugger) will be helpful here! Setting up the debugger 1. Open `localizer.py` and uncomment the line at the top that says `import pdb`2. Just before the line of code that is causing the bug `new_G[int(new_i)][int(new_j)] = cell`, add a new line of code that says `pdb.set_trace()`3. Run your test by calling your test function (run the cell below this one)4. You should see a text entry box pop up! For now, type `c` into the box and hit enter to **c**ontinue program execution. Keep typing `c` and enter until the bug is triggered again
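For reference, after following these steps the instrumented lines in `localizer.py` should look roughly like this (a sketch pieced together from the traceback shown earlier; the surrounding code is omitted and may differ in your file):
```python
import pdb  # uncommented at the top of localizer.py

# ... inside move(dy, dx, beliefs, blurring) ...
new_i = (i + dy) % width
new_j = (j + dx) % height
pdb.set_trace()  # execution pauses here; try `p new_i`, `p new_j`, `p height`, `p width`, then `c`
new_G[int(new_i)][int(new_j)] = cell
```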
###Code
test_robot_works_in_rectangle_world()
###Output
_____no_output_____
###Markdown
Using the debuggerThe debugger works by pausing program execution wherever you write `pdb.set_trace()` in your code. You also have access to any variables which are accessible from that point in your code. Try running your test again. This time, when the text entry box shows up, type `new_i` and hit enter. You will see the value of the `new_i` variable show up in the debugger window. Play around with the debugger: find the values of `new_j`, `height`, and `width`. Do they seem reasonable / correct?When you are done playing around, type `c` to continue program execution. Was the bug triggered? Keep playing until you have a guess about what is causing the bug. Step 5: Write a FixYou have a hypothesis about what's wrong. Now try to fix it. When you're done you should call your test function again. You may want to remove (or comment out) the line you added to `localizer.py` that says `pdb.set_trace()` so your test can run without you having to type `c` into the debugger box.
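If your debugger session agrees with the traceback above, one natural hypothesis is that the row index is wrapped with `width` and the column index with `height`, which only goes unnoticed when the grid is square. A possible fix simply swaps the two (a sketch, assuming `height` is the number of rows and `width` the number of columns; verify against your own `localizer.py`):
```python
# inside move() in localizer.py
new_i = (i + dy) % height  # rows should wrap around the number of rows
new_j = (j + dx) % width   # columns should wrap around the number of columns
new_G[int(new_i)][int(new_j)] = cell
```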
###Code
test_robot_works_in_rectangle_world()
###Output
_____no_output_____ |
Project Files/Thermodynamics Observables Calculation with Quantum Computer.ipynb | ###Markdown
**Thermodynamics Observables Calculation with Quantum Computer**
###Code
import os
import sys
sys.path.insert(0, os.path.abspath('thermodynamics_utils'))
import partition_function
import thermodynamics
import vibrational_structure_fd
# imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from functools import partial
from qiskit.utils import QuantumInstance
from qiskit import Aer
from qiskit.algorithms import NumPyMinimumEigensolver, VQE
from qiskit_nature.drivers import UnitsType, Molecule
from qiskit_nature.drivers.second_quantization import (
ElectronicStructureDriverType,
ElectronicStructureMoleculeDriver,
)
from qiskit_nature.problems.second_quantization import ElectronicStructureProblem
from qiskit_nature.converters.second_quantization import QubitConverter
from qiskit_nature.mappers.second_quantization import JordanWignerMapper
from qiskit_nature.algorithms import GroundStateEigensolver
import qiskit_nature.constants as const
from qiskit_nature.algorithms.pes_samplers import BOPESSampler, EnergySurface1DSpline
from thermodynamics import constant_volume_heat_capacity
from vibrational_structure_fd import VibrationalStructure1DFD
from partition_function import DiatomicPartitionFunction
from thermodynamics import Thermodynamics
import warnings
warnings.simplefilter("ignore", np.RankWarning)
###Output
_____no_output_____
###Markdown
A preliminary draft with more information related to this tutorial can be found in the preprint Stober et al., arXiv 2003.02303 (2020) **Calculation of the Born Oppenheimer Potential Energy Surface (BOPES)** To compute thermodynamic observables we begin with a single-point energy calculation, which calculates the wavefunction and charge density and therefore the energy of a particular arrangement of nuclei. Here we compute the Born-Oppenheimer potential energy surface of a hydrogen molecule, as an example, which is simply the electronic energy as a function of bond length.
###Code
qubit_converter = QubitConverter(mapper=JordanWignerMapper())
quantum_instance = QuantumInstance(backend=Aer.get_backend("aer_simulator_statevector"))
solver = VQE(quantum_instance=quantum_instance)
me_gss = GroundStateEigensolver(qubit_converter, solver)
stretch1 = partial(Molecule.absolute_distance, atom_pair=(1, 0))
mol = Molecule(
geometry=[("H", [0.0, 0.0, 0.0]), ("H", [0.0, 0.0, 0.2])],
degrees_of_freedom=[stretch1],
masses=[1.6735328e-27, 1.6735328e-27],
)
# pass molecule to PSYCF driver
driver = ElectronicStructureMoleculeDriver(mol, driver_type=ElectronicStructureDriverType.PYSCF)
es_problem = ElectronicStructureProblem(driver)
# BOPES sampler testing
bs = BOPESSampler(gss=me_gss, bootstrap=True)
points = np.linspace(0.45, 5, 50)
res = bs.sample(es_problem, points)
energies = []
bs_res_full = res.raw_results
for point in points:
energy = bs_res_full[point].computed_energies + bs_res_full[point].nuclear_repulsion_energy
energies.append(energy)
fig = plt.figure()
plt.plot(points, energies)
plt.title("Dissociation profile")
plt.xlabel("Interatomic distance")
plt.ylabel("Energy")
energy_surface = EnergySurface1DSpline()
xdata = res.points
ydata = res.energies
energy_surface.fit(xdata=xdata, ydata=ydata)
plt.plot(xdata, ydata, "kx")
x = np.arange(min(xdata) - 0.25, max(xdata) + 0.25, 0.05)
plt.plot(x, energy_surface.eval(x), "r-")
plt.xlabel(r"distance, $\AA$")
plt.ylabel("energy, Hartree")
dist = max(ydata) - min(ydata)
plt.ylim(min(ydata) - 0.1 * dist, max(ydata) + 0.1 * dist)
###Output
_____no_output_____
###Markdown
**Calculation of the Molecular Vibrational Energy Levels** The Born-Oppenheimer approximation removes internuclear vibrations from the molecular Hamiltonian and the energy computed from quantum mechanical ground-state energy calculations using this approximation contains only the electronic energy. Since even at absolute zero internuclear vibrations still occur, a correction is required to obtain the true zero-temperature energy of a molecule. This correction is called the zero-point vibrational energy (ZPE), which is computed by summing the contributions from internuclear vibrational modes. Therefore, the next step in computing thermodynamic observables is determining the vibrational energy levels. This can be done by constructing the Hessian matrix based on computed single point energies close to the equilibrium bond length. The eigenvalues of the Hessian matrix can then be used to determine the vibrational energy levels and the zero-point vibrational energy \begin{equation}{\rm ZPE} = \frac{1}{2}\, \sum_i^M \nu_i \, ,\end{equation}with $\nu_i$ being the vibrational frequencies, $M = 3N − 6$ or $M = 3N − 5$ for non-linear or linear molecules, respectively, and $N$ is the number of particles. Here we fit a "full" energy surface using a 1D spline potential and use it to evaluate molecular vibrational energy levels.
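As a quick numerical illustration of the ZPE sum above (hypothetical mode energies, treated directly in energy units rather than frequencies):
```python
import numpy as np

# hypothetical vibrational mode energies (e.g. in Hartree);
# a real molecule has M = 3N - 6 (or 3N - 5) of them
nu = np.array([0.010, 0.012, 0.020])

# zero-point vibrational energy: half the sum over all vibrational modes
zpe = 0.5 * np.sum(nu)
print(f"ZPE = {zpe:.4f}")
```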
###Code
vibrational_structure = VibrationalStructure1DFD(mol, energy_surface)
plt.plot(xdata, ydata, "kx")
x = np.arange(min(xdata) - 0.25, max(xdata) + 0.25, 0.05)
plt.plot(x, energy_surface.eval(x), "r-")
plt.xlabel(r"distance, $\AA$")
plt.ylabel("energy, Hartree")
dist = max(ydata) - min(ydata)
plt.ylim(min(ydata) - 0.1 * dist, max(ydata) + 0.1 * dist)
for N in range(15):
on = np.ones(x.shape)
on *= energy_surface.eval(
energy_surface.get_equilibrium_geometry()
) + vibrational_structure.vibrational_energy_level(N)
plt.plot(x, on, "g:")
on = np.ones(x.shape)
plt.show()
###Output
_____no_output_____
###Markdown
**Create a Partition Function for the Calculation of Heat Capacity** The partition function for a molecule is the product of contributions from translational, rotational, vibrational, electronic, and nuclear degrees of freedom. Having the vibrational frequencies, now we can obtain the vibrational partition function $q_{\rm vibration}$ to compute the whole molecular partition function \begin{equation}q_{\rm vibration} = \prod_{i=1} ^M \frac{\exp\,(-\Theta_{\nu_i}/2T)}{1-\exp\,(-\Theta_{\nu_i}/T)} \, . \end{equation} Here $\Theta_{\nu_i}= h\nu_i/k_B$, $T$ is the temperature and $k_B$ is the Boltzmann constant. The single-point energy calculations and the resulting partition function can be used to calculate the (constant volume or constant pressure) heat capacity of the molecules. The constant volume heat capacity, for example, is given by \begin{equation}C_v = \left.\frac{\partial U}{\partial T}\right|_{N,V}\, ,\qquad{\rm with} \quad U=k_B T^2 \left.\frac{\partial \ln Q}{\partial T}\right|_{N,V} .\end{equation}$U$ is the internal energy, $V$ is the volume and $Q$ is the partition function. Here we illustrate the simplest usage of the partition function, namely creating a Thermodynamics object to compute properties like the constant pressure heat capacity, the constant pressure analogue of the expression defined above.
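Before doing that, here is a small numerical sketch of the two formulas above (hypothetical characteristic temperatures; a real calculation would use the vibrational structure computed earlier in this notebook):
```python
import numpy as np

k_B = 1.380649e-23  # J/K

def q_vib(T, thetas):
    """Vibrational partition function for characteristic temperatures thetas (in K)."""
    T = np.asarray(T, dtype=float)
    q = np.ones_like(T)
    for theta in thetas:
        q *= np.exp(-theta / (2 * T)) / (1 - np.exp(-theta / T))
    return q

def heat_capacity_v(T, thetas, dT=1e-2):
    """C_v = d/dT [k_B T^2 d(ln q)/dT], evaluated with central finite differences."""
    ln_q = lambda t: np.log(q_vib(t, thetas))
    U = lambda t: k_B * t**2 * (ln_q(t + dT) - ln_q(t - dT)) / (2 * dT)
    return (U(T + dT) - U(T - dT)) / (2 * dT)

T = np.array([300.0, 600.0, 1200.0])
print(heat_capacity_v(T, thetas=[6300.0]))  # roughly the vibrational theta of H2
```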
###Code
Q = DiatomicPartitionFunction(mol, energy_surface, vibrational_structure)
P = 101350 # Pa
temps = np.arange(10, 1050, 5) # K
mol.spins = [1 / 2, 1 / 2]
td = Thermodynamics(Q, pressure=101350)
td.set_pressure(101350)
temps = np.arange(10, 1500, 5)
ymin = 5
ymax = 11
plt.plot(temps, td.constant_pressure_heat_capacity(temps) / const.CAL_TO_J)
plt.xlim(0, 1025)
plt.ylim(ymin, ymax)
plt.xlabel("Temperature, K")
plt.ylabel("Cp, cal mol$^{-1}$ K$^{-1}$")
plt.show()
###Output
_____no_output_____
###Markdown
Here we demonstrate how to access particular components (the rotational part) of the partition function, which in the H2 case we can further split into para-hydrogen and ortho-hydrogen components.
###Code
eq = Q.get_partition(part="rot", split="eq")
para = Q.get_partition(part="rot", split="para")
ortho = Q.get_partition(part="rot", split="ortho")
###Output
_____no_output_____
###Markdown
We will now plot the constant volume heat capacity (of the rotational part), demonstrating how we can directly call the functions in the 'thermodynamics' module, providing a callable object for the partition function (or in this case its rotational component). Note that in the plot we normalize by dividing by the universal gas constant R (Avogadro's number times Boltzmann's constant) and we use crosses to compare with experimental data found in the literature.
###Code
# REFERENCE DATA from literature
df_brink_T = [80.913535, 135.240157, 176.633783, 219.808499, 246.226899]
df_brink_Cv = [0.118605, 0.469925, 0.711510, 0.833597, 0.895701]
df_eucken_T = [
25.120525,
30.162485,
36.048121,
41.920364,
56.195875,
62.484934,
72.148692,
73.805910,
73.804236,
92.214423,
180.031917,
230.300866,
]
df_eucken_Cv = [
0.012287,
0.012354,
0.008448,
0.020478,
0.032620,
0.048640,
0.048768,
0.076678,
0.078670,
0.170548,
0.667731,
0.847681,
]
df_gia_T = [
190.919338,
195.951254,
202.652107,
204.292585,
209.322828,
225.300754,
234.514217,
243.747768,
]
df_gia_Cv = [0.711700, 0.723719, 0.749704, 0.797535, 0.811546, 0.797814, 0.833793, 0.845868]
df_parting_T = [80.101665, 86.358919, 185.914204, 239.927797]
df_parting_Cv = [0.084730, 0.138598, 0.667809, 0.891634]
df_ce_T = [
80.669344,
135.550569,
145.464190,
165.301153,
182.144856,
203.372528,
237.993108,
268.696642,
294.095771,
308.872014,
]
df_ce_Cv = [
0.103048,
0.467344,
0.541364,
0.647315,
0.714078,
0.798258,
0.891147,
0.944848,
0.966618,
0.985486,
]
HeatCapacity = constant_volume_heat_capacity
R = const.N_A * const.KB_J_PER_K
plt.plot(temps, HeatCapacity(eq, temps) / R, "-k", label="Cv_rot Equilibrium")
plt.plot(temps, HeatCapacity(para, temps) / R, "-b", label="Cv_rot Para")
plt.plot(temps, HeatCapacity(ortho, temps) / R, "-r", label="Cv_rot Ortho")
plt.plot(
temps,
0.25 * HeatCapacity(para, temps) / R + 0.75 * HeatCapacity(ortho, temps) / R,
"-g",
label="Cv_rot 1:3 para:ortho",
)
plt.plot(df_brink_T, df_brink_Cv, "+g")
plt.plot(df_eucken_T, df_eucken_Cv, "+g")
plt.plot(df_gia_T, df_gia_Cv, "+g")
plt.plot(df_parting_T, df_parting_Cv, "+g")
plt.plot(df_ce_T, df_ce_Cv, "+g", label="experimental data")
plt.legend(loc="upper right", frameon=False)
plt.xlim(10, 400)
plt.ylim(-0.1, 2.8)
plt.xlabel("Temperature, K")
plt.ylabel("Cv (rotational)/R")
plt.tight_layout()
plt.show()
import qiskit.tools.jupyter
%qiskit_version_table
###Output
_____no_output_____ |
Stochastic-Graph-assisted-Genre-Classification.ipynb | ###Markdown
Stochastic Graph assisted Genre ClassificationIn this notebook we show how to reproduce the main results of our report. Specifically, we compare the GloVe embedding and the count vectorizer, and an MLP and a GNN for the models. Baseline: Multilayer PerceptronChange the ```use_glove``` parameter to switch between the embeddings.
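The cells below rely on a few project helpers (`get_data`, `glove_embedding`, `construct_sparse_tensor`, `plot_curves`, plus the usual `torch`/`sklearn` imports) that are defined elsewhere in the repository. As a rough idea of what one of them does, `construct_sparse_tensor` presumably converts a SciPy COO matrix into a PyTorch sparse tensor, along these lines (a sketch, not the project's actual implementation):
```python
import numpy as np
import torch

def construct_sparse_tensor(coo):
    # stack row/column indices and pair them with the non-zero values
    indices = torch.tensor(np.vstack((coo.row, coo.col)), dtype=torch.long)
    values = torch.tensor(coo.data, dtype=torch.float32)
    return torch.sparse_coo_tensor(indices, values, size=coo.shape)
```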
###Code
from src.tdde13.baselines import MLP
X_train, X_val, X_test, y_train, y_val, y_test = get_data()
use_glove = False
if use_glove:
# print(X_train)
X_train_transformed = glove_embedding(X_train)
X_val_transformed = glove_embedding(X_val)
X_test_transformed = glove_embedding(X_test)
# assert False, "embedding"
else:
vectorizer = CountVectorizer()
X_train_transformed = vectorizer.fit_transform(X_train)
X_val_transformed = vectorizer.transform(X_val)
X_test_transformed = vectorizer.transform(X_test)
X_train_transformed = construct_sparse_tensor(X_train_transformed.tocoo())
X_val_transformed = construct_sparse_tensor(X_val_transformed.tocoo())
X_test_transformed = construct_sparse_tensor(X_test_transformed.tocoo())
input_size = X_train_transformed.shape[1]
output_size = 10
y_train = LabelEncoder().fit_transform(y_train)
y_val = LabelEncoder().fit_transform(y_val)
y_test = LabelEncoder().fit_transform(y_test)
y_train = torch.tensor(y_train)
y_val = torch.tensor(y_val)
y_test = torch.tensor(y_test)
hparams = {
"lr" : 0.0002,
"epochs" : 35, # 15,
"batch_size" : 128,
"patience" : 5
}
mlp = MLP(input_size, output_size)
trace_train, trace_val = mlp.train_mlp(X_train_transformed, X_val_transformed, X_test_transformed, y_train, y_val, y_test, hparams)
plot_curves(trace_train, trace_val)
###Output
Epoch: 0
###Markdown
GraphSage
###Code
from src.tdde13.graphsage import GraphSAGE
X, y, edge_index, idx_train, idx_val, idx_test = get_data_graphsage()
use_glove = False
if use_glove:
X_transformed = glove_embedding(X)
else:
vectorizer = CountVectorizer()
X_transformed = vectorizer.fit_transform(X)
X_transformed = construct_sparse_tensor(X_transformed.tocoo())
y = LabelEncoder().fit_transform(y)
y = torch.tensor(y)
in_dim = X_transformed.shape[1]
hidden_dim = 128
out_dim = 10
hparams = {
"lr" : 0.001,
"epochs" : 35, # 15,
"batch_size" : 128,
"patience" : 5,
"use_glove" : use_glove
}
graphsage = GraphSAGE(in_dim=in_dim, hidden_dim=hidden_dim, out_dim=out_dim)
# graphsage.train_graphsage(X_transformed, edge_index, y, idx_train, idx_val, idx_test)
trace_train, trace_val = graphsage.train_graphsage_batchwise(X_transformed, edge_index, y, idx_train, idx_val, idx_test, hparams)
plot_curves(trace_train, trace_val)
###Output
_____no_output_____ |
Lab_Basic_SQL.ipynb | ###Markdown
Lab DirectionsFor this lab, you'll be writing some basic SQL queries. Here is what you need to do:1. "OPEN" this lab in "Google Colab" and proceed to set up a free Google Colab account if you need to: https://colab.research.google.com/. 2. Save a copy of this lab to your Colab account. You'll want to connect to a "runtime".3. Work on the lab! To run code cells, you just need to click "Ctrl + Enter." To edit cells, you can double click on them. If there is an option, make sure to select "Anyone can view" the link.4. When you are done working on the lab, please click the "Share" button, copy the link you see there, and submit this to D2L.Oh, and make sure to put your name here:NAME(s):--- Problem 0: Run the cells belowThe following two cells will get the Movie database set up. After you've run these cells (by hitting Ctrl + Enter), you should see the relational schema for each of the Movie databse tables.Note: The first time you run these cells, it may take a minute or two!
###Code
# Some UNIX utilites we need to install for the lab.
!pip install wget --quiet
!pip install sqlalchemy --quiet
!pip install ipython-sql --quiet
# Now let's download the file we'll be using for this lab
!wget 'https://github.com/brendanpshea/database_class/raw/main/movie.sqlite' -q
# Let's make a connnection with the database
%load_ext sql
%sql sqlite:////content/movie.sqlite
# Get the schema
%%sql
SELECT name AS "Table Name", sql AS "Schema of Table" FROM sqlite_master WHERE type = 'table';
# Show the first 5 rows of each table
movie_df = %sql SELECT * FROM Movie LIMIT 5;
person_df = %sql SELECT * FROM Person LIMIT 5;
actor_df = %sql SELECT * FROM Actor LIMIT 5;
director_df = %sql SELECT * FROM Director LIMIT 5;
oscar_df = %sql SELECT * FROM Oscar LIMIT 5;
print(movie_df,'\n\n',person_df, '\n\n', actor_df, '\n\n', director_df, '\n\n', oscar_df)
%%sql
--feel free to type some queries here (and the run them) to see how they work!
--You should only write ONE query per cell.
###Output
_____no_output_____
###Markdown
Problem 1Retrieve the first 8 rows in the Oscar table ordered by Oscar type. Hint: You'll need to use SELECT, FROM, ORDER BY, and LIMIT. You'll want to retrieve all attributes using *.
###Code
%%sql
--Your code here
###Output
_____no_output_____
###Markdown
Problem 2Retrieve a list of the different "types" of awards that are listed in the Oscar table. Hint: use DISTINCT.
###Code
%%sql
--Your code here
###Output
_____no_output_____
###Markdown
Problem 3Retrieve the number of distinct directors that are represented in the Director table. Hint: you'll need to use both COUNT and DISTINCT. You only want each "director_id" to be counted once.
###Code
%%sql
--Your code here
###Output
_____no_output_____
###Markdown
Problem 4Retrieve the name and release year of movies that start with the word 'Captain.'
###Code
%%sql
--Your code here
###Output
_____no_output_____
###Markdown
Problem 5Retrieve the name and place of birth of all Persons born in "Minnesota, USA". Order them in descending order by name.
###Code
%%sql
--Your code here
###Output
_____no_output_____
###Markdown
Problem 6Retrieve the names and release years of movies that have won a 'Best Picture' Oscar. Limit the results to 10. Hint: You'll need to JOIN Oscar and Movie using Oscar.movie_id and Movie.id. You'll also need to think about Oscar.type and Movie.year.
###Code
%%sql
--Your code here
###Output
_____no_output_____
###Markdown
Problem 7Retrieve the names of all Persons who have won a 'BEST-ACTRESS' award since 1990. Hint: You'll need to JOIN Person with Oscar.
###Code
%%sql
--Your code here
###Output
_____no_output_____
###Markdown
Problem 8Retrieve the COUNT of actors who appeared in a film in 2012. Name the returned column "2012 Actors." Hint: You'll need to JOIN Actor with Movie. And remember to count only "distinct" actor_ids (since some actors will have appeared in multiple movies).
###Code
%%sql
--Your code here
###Output
_____no_output_____
###Markdown
Problem 9Retrieve the total running time of all of the films that Bill Murray has appeared in. Express this as "number of days." (So, you need to somehow take the sum of the runtimes, and then convert minutes to days).
###Code
%%sql
--Your code here
###Output
_____no_output_____
###Markdown
Problem 10Retrieve the names of films that have a person with a first name starting with "Will" as director or an actor. Hint: You'll be doing two "select" statements with a UNION in the middle of them.
###Code
%%sql
--Your code here
###Output
_____no_output_____ |
analysis/3BF_convergence.ipynb | ###Markdown
Convergence Analysis
###Code
def ref_cubic_3bf(X):
"""Compute 16 kf^3 for any kf prescription"""
return 16 * X.ravel() ** 3
def ref_linear_3bf(X):
"""Compute 16 kf^3 for any kf prescription"""
return 16 * X.ravel()
ref_3bf_func = ref_linear_3bf
###Output
_____no_output_____
###Markdown
This is not quite right: should put the 3BF corrections right after the 2BF corrections
###Code
# def create_appended_y(y2, y3, y2_ref, y3_ref):
# y2 = y2.copy()
# diff = y2[:, [-1]] + y2_ref * y3[:, 2:] / y3_ref
# return np.append(y2, diff, axis=1)
# y_n_app = create_appended_y(y_n_2bf, y_n_3bf, ref_2bf, ref_3bf_func(kf_n)[:, None])
# y_s_app = create_appended_y(y_s_2bf, y_s_3bf, ref_2bf, ref_3bf_func(kf_s)[:, None])
# y_d_app = create_appended_y(y_d_2bf, y_d_3bf, ref_2bf, ref_3bf_func(kf_avg)[:, None])
c_n_app = gm.coefficients(y_n_appended, Q_n, ref_2bf, orders=orders_appended)
c_s_app = gm.coefficients(y_s_appended, Q_s, ref_2bf, orders=orders_appended)
c_d_app = gm.coefficients(y_d_appended, Q_d, ref_2bf, orders=orders_appended)
fig, axes = plot_coefficients(
kf_n, density, coeffs_2bf=coeffs_n_2bf, coeffs_3bf=coeffs_n_3bf,
coeffs_23bf=coeffs_n_2_plus_3bf, orders=orders, colors=colors)
fig.suptitle('Pure Neutron Matter', y=1.02)
plt.plot(kf_n, c_s_app);
plt.axhline(0, 0, 1)
kf_n * hbar_c
0.12, 3.2
density
nuclear_density(600 / hbar_c, 2)
nuclear_density(600 / hbar_c, 4)
assert False
fermi_momenta = {
pure_neutron: kf_n,
sym_nuclear: kf_s,
s2_energy: kf_d
}
Fermi_momenta = {
pure_neutron: Kf_n,
sym_nuclear: Kf_s,
s2_energy: Kf_d
}
refs = {
# E/N
(pure_neutron, body2): ref_2bf,
(pure_neutron, body3): ref_3bf_func,
(pure_neutron, body23): ref_2bf,
(pure_neutron, body23_appended): ref_2bf,
# E/A
(sym_nuclear, body2): ref_2bf,
(sym_nuclear, body3): ref_3bf_func,
(sym_nuclear, body23): ref_2bf,
(sym_nuclear, body23_appended): ref_2bf,
# S2
(s2_energy, body2): ref_2bf,
(s2_energy, body3): ref_3bf_func,
(s2_energy, body23): ref_2bf,
(s2_energy, body23_appended): ref_2bf,
}
# S2_observables = {
# body2: y_d_2bf,
# body3: y_d_3bf,
# body23: y_d_2_plus_3bf,
# }
observables = {
# E/N
(pure_neutron, body2): y_n_2bf,
(pure_neutron, body3): y_n_3bf,
(pure_neutron, body23): y_n_2_plus_3bf,
# (pure_neutron, body23_appended): y_n_app,
(pure_neutron, body23_appended): y_n_appended,
# E/A
(sym_nuclear, body2): y_s_2bf,
(sym_nuclear, body3): y_s_3bf,
(sym_nuclear, body23): y_s_2_plus_3bf,
# (sym_nuclear, body23_appended): y_s_app,
(sym_nuclear, body23_appended): y_s_appended,
# S2
(s2_energy, body2): y_d_2bf,
(s2_energy, body3): y_d_3bf,
(s2_energy, body23): y_d_2_plus_3bf,
# (s2_energy, body23_appended): y_d_app,
# (s2_energy, body23_appended): y_d_appended,
}
excluded_orders = {
body2: excluded_2bf,
body3: excluded_3bf,
body23: excluded_2bf,
body23_appended: excluded_2bf,
}
obs_types = [pure_neutron, sym_nuclear, s2_energy]
systems = {pure_neutron: 'neutron', sym_nuclear: 'symmetric', s2_energy: 'difference'}
body_types = [body2, body3, body23, body23_appended]
analyses = {}
with tqdm(total=len(obs_types) * len(body_types)) as pbar:
for obs_type in obs_types:
X_i = Fermi_momenta[obs_type]
y2_i = observables[obs_type, body2]
y3_i = observables[obs_type, body23]
ref2_i = refs[obs_type, body2]
ref3_i = refs[obs_type, body3]
for n_body in body_types:
pbar.set_postfix(obs_type=obs_type, n_body=n_body, refresh=True)
ex_i = excluded_orders[n_body]
system_i = systems[obs_type]
orders_i = orders.copy()
if n_body == body23_appended:
max_idxs = [3, 5]
max_idx_labels = [2, 3]
# orders_i = orders_appended
else:
max_idxs = [2, 3]
max_idx_labels = None
# orders_i = orders
analyses[obs_type, n_body] = MatterConvergenceAnalysis(
X=X_i, y2=y2_i, y3=y3_i, orders=orders_i, train=train, valid=valid,
ref2=ref2_i, ref3=ref3_i, ratio='kf', density=density,
kernel=kernel, system=system_i, fit_n2lo=fit_n2lo, fit_n3lo=fit_n3lo, Lambda=Lambda,
body=n_body, savefigs=savefigs, excluded=ex_i,
# optimizer=optimizer,
decomposition='eig', **hyperparams
)
analyses[obs_type, n_body].setup_posteriors(
breakdown_min=breakdown_min, breakdown_max=breakdown_max, breakdown_num=breakdown_num,
ls_min=ls_min, ls_max=ls_max, ls_num=ls_num,
max_idx=max_idxs, logprior=None, max_idx_labels=max_idx_labels
)
pbar.update(1)
analyses[sym_nuclear, body23_appended].plot_coefficients(show_excluded=True, show_process=True, breakdown=Lb)
analyses[sym_nuclear, body23_appended].plot_coeff_diagnostics(breakdown=Lb);
analyses[pure_neutron, body23_appended].plot_coeff_diagnostics(breakdown=Lb);
analyses[s2_energy, body23_appended].plot_coeff_diagnostics(breakdown=Lb);
analyses[sym_nuclear, body23_appended].plot_observables(show_excluded=True, show_process=True, breakdown=Lb)
assert False
def create_breakdown_df(analyses, body_types, obs_type):
df_Lb_pdfs = pd.concat([analyses[obs_type, n_body].df_breakdown for n_body in body_types])
# df_Lb_pdfs['$k_F$'] = kf_type_name
grouped = df_Lb_pdfs[(df_Lb_pdfs['Body'] != body23) & (df_Lb_pdfs['Body'] != body23_appended)].groupby(
['$\Lambda_b$ (MeV)', 'Order', 'system'], sort=False
)
prod_df = grouped.prod().reset_index()
prod_df['Body'] = 'Total'
new_df = pd.concat([df_Lb_pdfs, prod_df], sort=False)
# For appended 3N orders
# order_fixes = {'N$^4$LO': 'N$^2$LO', 'N$^5$LO': 'N$^3$LO'}
# new_df['Order'] = new_df['Order'].replace(order_fixes)
return new_df
df_Lb_pdfs_n = create_breakdown_df(analyses, body_types, pure_neutron)
df_Lb_pdfs_s = create_breakdown_df(analyses, body_types, sym_nuclear)
df_Lb_pdfs_d = create_breakdown_df(analyses, body_types, s2_energy)
df_Lb_pdf = pd.concat([df_Lb_pdfs_n, df_Lb_pdfs_s, df_Lb_pdfs_d])
for obs_type, df_lb_i in zip(obs_types, [df_Lb_pdfs_n, df_Lb_pdfs_s, df_Lb_pdfs_d]):
fig, ax = plt.subplots(figsize=(3.4, 4.4))
ax = pdfplot(
x=r'$\Lambda_b$ (MeV)', y='Body', pdf='pdf', data=df_lb_i, hue='Order',
order=[*body_types[:-1], 'Total', body_types[-1]],
hue_order=[r'N$^2$LO', r'N$^3$LO'],
cut=1e-2, linewidth=1,
palette="coolwarm", saturation=1., ax=ax, margin=0.3,
)
ax.set_xlim(0, 1200)
ax.set_xticks([0, 300, 600, 900, 1200])
ax.grid(axis='x')
ax.set_title(f'{obs_type}')
ax.set_axisbelow(True)
plt.show()
# fig.savefig(f'breakdown_obs-{obs_type}')
savefigs_diagnostics = True
df_Lb_pdf_all = df_Lb_pdfs_n.copy()
df_Lb_pdf_all['pdf'] = df_Lb_pdfs_n['pdf'] * df_Lb_pdfs_s['pdf'] * df_Lb_pdfs_d['pdf']
df_Lb_pdf_all['system'] = 'All'
df_Lb_pdf = pd.concat([df_Lb_pdfs_n, df_Lb_pdfs_s, df_Lb_pdfs_d, df_Lb_pdf_all])
fig, ax = plt.subplots(figsize=(3.4, 4.4))
ax = pdfplot(
x=r'$\Lambda_b$ (MeV)', y='system', pdf='pdf', data=df_Lb_pdf[df_Lb_pdf['Body'] == 'Total'], hue='Order',
order=[r'$E/N$', r'$E/A$', r'$S_2$', 'All'], hue_order=[r'N$^2$LO', r'N$^3$LO'], cut=1e-2, linewidth=1,
palette="coolwarm", saturation=1., ax=ax, margin=0.3,
)
ax.set_xlim(0, 1200)
ax.set_xticks([0, 300, 600, 900, 1200])
ax.grid(axis='x')
# ax.set_title(f'{obs_type}')
ax.set_axisbelow(True)
plt.show()
if savefigs_diagnostics:
fig.savefig(f'breakdown_2n-3n-product_Lambda_{Lambda}')
df_Lb_pdf_all = df_Lb_pdfs_n.copy()
df_Lb_pdf_all['pdf'] = df_Lb_pdfs_n['pdf'] * df_Lb_pdfs_s['pdf'] * df_Lb_pdfs_d['pdf']
df_Lb_pdf_all['system'] = 'All'
df_Lb_pdf = pd.concat([df_Lb_pdfs_n, df_Lb_pdfs_s, df_Lb_pdfs_d, df_Lb_pdf_all])
fig, ax = plt.subplots(figsize=(3.4, 4.4))
ax = pdfplot(
x=r'$\Lambda_b$ (MeV)', y='system', pdf='pdf', data=df_Lb_pdf[df_Lb_pdf['Body'] == 'Total'], hue='Order',
order=[r'$E/N$', r'$E/A$', r'$S_2$', 'All'], hue_order=[r'N$^2$LO', r'N$^3$LO'], cut=1e-2, linewidth=1,
palette="coolwarm", saturation=1., ax=ax, margin=0.3,
)
ax.set_xlim(0, 1200)
ax.set_xticks([0, 300, 600, 900, 1200])
ax.grid(axis='x')
# ax.set_title(f'{obs_type}')
ax.set_axisbelow(True)
plt.show()
if savefigs_diagnostics:
fig.savefig(f'breakdown_2n-3n-product_Lambda_{Lambda}')
fig, ax = plt.subplots(figsize=(3.4, 4.4))
ax = pdfplot(
x=r'$\Lambda_b$ (MeV)', y='system', pdf='pdf', data=df_Lb_pdf[df_Lb_pdf['Body'] == 'Appended'], hue='Order',
order=[r'$E/N$', r'$E/A$', r'$S_2$', 'All'], hue_order=[r'N$^2$LO', r'N$^3$LO'], cut=1e-2, linewidth=1,
palette="coolwarm", saturation=1., ax=ax, margin=0.3,
)
ax.set_xlim(0, 1200)
ax.set_xticks([0, 300, 600, 900, 1200])
ax.grid(axis='x')
# ax.set_title(f'{obs_type}')
ax.set_axisbelow(True)
plt.show()
if savefigs_diagnostics:
fig.savefig(f'breakdown_3n-appended_Lambda_{Lambda}')
lb_max_mask = \
(df_Lb_pdf['Body'] == 'Appended') & \
(df_Lb_pdf['system'] == 'All') & \
(df_Lb_pdf['Order'] == 'N$^3$LO')
lb_max_idx = df_Lb_pdf[lb_max_mask]['pdf'].idxmax()
lb_map = df_Lb_pdf[lb_max_mask].loc[lb_max_idx]['$\Lambda_b$ (MeV)']
lb_map
lb_max_s_mask = \
(df_Lb_pdf['Body'] == 'Appended') & \
(df_Lb_pdf['system'] == '$E/A$') & \
(df_Lb_pdf['Order'] == 'N$^3$LO')
lb_max_idx_s = df_Lb_pdf[lb_max_s_mask]['pdf'].idxmax()
lb_map_s = df_Lb_pdf[lb_max_s_mask].loc[lb_max_idx_s]['$\Lambda_b$ (MeV)']
lb_map_s
analyses[s2_energy, body23_appended].df_joint
ls_map_vals = {}
for obs_type, n_body in product(obs_types, body_types):
# fig, ax = plt.subplots()
print(f'obs: {obs_type}, Body: {n_body}')
if n_body == body23_appended:
midx = 3
ls_map_i = analyses[obs_type, n_body].compute_best_length_scale_for_breakdown(lb_map, midx)
ls_map_vals[obs_type] = ls_map_i
print(ls_map_i)
else:
midx = 3
fig = analyses[obs_type, n_body].plot_joint_breakdown_ls(max_idx=midx)
fig.suptitle(f'{obs_type}, {n_body}', y=3)
plt.show()
Q_n_map = ratio_kf(kf_n, breakdown=lb_map)
Q_s_map = ratio_kf(kf_s, breakdown=lb_map)
Q_d_map = ratio_kf(kf_d, breakdown=lb_map)
coeffs_n_appended_map = gm.coefficients(y_n_appended, ratio=Q_n_map, ref=ref_2bf, orders=orders_appended)
coeffs_s_appended_map = gm.coefficients(y_s_appended, ratio=Q_s_map, ref=ref_2bf, orders=orders_appended)
coeffs_d_appended_map = gm.coefficients(y_d_appended, ratio=Q_d_map, ref=ref_2bf, orders=orders_appended)
hyperparams
order_labels_appended = [fr'$c_{{{n}}}^{{({b})}}$' for n, b in zip(orders_appended, [2, 2, 2, 3, 2, 3])]
cmap_list_names_appended = ['Oranges', 'Greens', 'Blues', 'Blues', 'Reds', 'Reds']
cmap_list_appended = [plt.get_cmap(c) for c in cmap_list_names_appended]
color_list_appended = [cmap(0.55 - 0.1 * (i == 0)) for i, cmap in enumerate(cmap_list_appended)]
order_labels_appended
from nuclear_matter.graphs import lighten_color
def diagnostics_for_2bf_and_3bf(X, c, orders, train, valid, order_labels, colors, density, **kwargs):
gp = gm.ConjugateGaussianProcess(**kwargs)
gp.fit(X[train], c[train])
cov = gp.cov(X[valid])
print(gp.cbar_sq_mean_, gp.kernel_)
gp_std = gp.cbar_sq_mean_
interp_c, interp_std = gp.predict(X, return_std=True)
mean = np.zeros(cov.shape[0])
fig = plt.figure(figsize=(7.0, 2.5), constrained_layout=True)
spec = fig.add_gridspec(nrows=1, ncols=7)
ax_cs = fig.add_subplot(spec[:, :3])
ax_md = fig.add_subplot(spec[:, 3])
ax_pc = fig.add_subplot(spec[:, 4:])
# markeredgecolors = [None, None, None, '0.3', None, '0.3']
markeredgecolors = colors
# markerlinestyles = ['-', '-', '-', '--', '-', '--']
markerlinestyles = ['-', '-', '-', '-', '-', '-']
markerfillstyles = ['full', 'full', 'full', 'left', 'full', 'left']
markers = ['^', 'X', 'o', 'o', 's', 's']
markerfacecolors = colors
markerfacecoloralts = np.array(colors).copy()
markerfacecoloralts[3] = mpl.colors.to_rgba('w')
markerfacecoloralts[5] = mpl.colors.to_rgba('w')
i = 0
# x = X.ravel()
x = density
for n, c_n, label, color, ec, ls in zip(orders, c.T, order_labels, colors, markeredgecolors, markerlinestyles):
ax_cs.plot(
x, c_n, c=color, label=label, ls=ls, zorder=i/10, markevery=train, marker=markers[i],
markeredgecolor=ec, fillstyle=markerfillstyles[i], markerfacecoloralt=markerfacecoloralts[i],
markeredgewidth=0.5,
)
light_color = lighten_color(color)
ax_cs.fill_between(
x, interp_c[:, i] + 2*interp_std, interp_c[:, i] - 2*interp_std,
facecolor=light_color, edgecolor=color, zorder=(i-0.5)/10, alpha=1
)
# ax_cs.scatter(
# X[train].ravel(), c_n[train], c=color, edgecolor=ec, linestyle=ls,
# linewidth=0.5, zorder=(i+0.5)/10
# )
ax_cs.axhline(0, 0, 1, c='k', lw=0.8, zorder=-1)
ax_cs.axhline(+2*gp_std, 0, 1, c='lightgrey', lw=0.8, zorder=-1)
ax_cs.axhline(-2*gp_std, 0, 1, c='lightgrey', lw=0.8, zorder=-1)
i += 1
# ax_cs.set_xlabel(r'$k_F$ [fm$^{-1}$]')
ax_cs.set_xlabel(r'$n$ [fm$^{-3}$]')
ax_cs.set_xticks(x[train], minor=False)
ax_cs.set_xticks(x[valid], minor=True)
ax_cs.legend(ncol=3)
graph = gm.GraphicalDiagnostic(
c[valid], mean=mean, cov=cov, markeredgecolors=markeredgecolors,
colors=colors, markerfillstyles=markerfillstyles, markers=markers
)
graph.md_squared(type='box', trim=True, title=None, ax=ax_md)
graph.pivoted_cholesky_errors(ax=ax_pc)
bbox = dict(facecolor='w', boxstyle='round', alpha=0.7)
ax_pc.text(0.05, 0.95, r'${\rm D}_{\rm PC}$', ha='left', va='top', bbox=bbox, transform=ax_pc.transAxes)
ax_pc.set_title('')
from matplotlib.ticker import MaxNLocator
ax_pc.yaxis.set_major_locator(MaxNLocator(integer=True))
return fig
def plot_cbar_ell_correlation(X, c, orders, train, ls_vals, **kwargs):
gp = gm.ConjugateGaussianProcess(**kwargs)
gp.fit(X[train], c[train])
    loglike = np.array([gp.log_marginal_likelihood(theta=np.log(ls)) for ls in ls_vals])
    return loglike
coeff_noise_n = 1e-2
coeff_noise_s = 1e-2
coeff_noise_d = 1e-2
valid2 = [i % 4 != 0 for i in range(len(density))]
hyperparams
with plt.rc_context({"text.usetex": True, "text.latex.preview": True}):
fig = diagnostics_for_2bf_and_3bf(
Kf_n, coeffs_n_appended, orders_appended, train, valid2, order_labels_appended,
colors=color_list_appended, density=density,
kernel=RBF(ls_map_vals[pure_neutron]) + WhiteKernel(coeff_noise_n**2, noise_level_bounds='fixed'),
decomposition='eig',
# decomposition='cholesky',
# kernel=RBF(0.9) + WhiteKernel(coeff_noise**2, noise_level_bounds='fixed'),
# optimizer=None,
**hyperparams
# center=0, disp=0, sd=0.5
);
plt.show()
if savefigs_diagnostics:
fig.savefig(f'diagnostics_3n-appended_system-n_Lambda_{Lambda}')
with plt.rc_context({"text.usetex": True, "text.latex.preview": True}):
fig = diagnostics_for_2bf_and_3bf(
Kf_s, coeffs_s_appended, orders_appended, train, valid, order_labels_appended,
colors=color_list_appended, density=density,
kernel=RBF(ls_map_vals[sym_nuclear]/2) + WhiteKernel(coeff_noise_s**2, noise_level_bounds='fixed'),
decomposition='eig',
# decomposition='cholesky',
# optimizer=None,
**hyperparams
)
plt.show()
if savefigs_diagnostics:
fig.savefig(f'diagnostics_3n-appended_system-s_Lambda_{Lambda}')
with plt.rc_context({"text.usetex": True, "text.latex.preview": True}):
fig = diagnostics_for_2bf_and_3bf(
Kf_d, coeffs_d_appended, orders_appended, train, valid, order_labels_appended,
colors=color_list_appended, density=density,
kernel=RBF(ls_map_vals[s2_energy]) + WhiteKernel(coeff_noise_d**2, noise_level_bounds='fixed'),
decomposition='eig',
# decomposition='cholesky',
# optimizer=None,
**hyperparams
)
plt.show()
if savefigs_diagnostics:
fig.savefig(f'diagnostics_3n-appended_system-d_Lambda_{Lambda}')
###Output
_____no_output_____ |
src/notebooks/web-circular-barplot-with-matplotlib.ipynb | ###Markdown
AboutThis page showcases the work of [Tobias Stadler](https://tobias-stalder.netlify.app/). You can find the original [R](https://www.r-graph-gallery.com/) code on Tobias' GitHub [here](https://github.com/toebR/Tidy-Tuesday/blob/master/hiking/script.R). Thanks to him for agreeing to share his work here! Thanks also to [Tomás Capretto](https://tcapretto.netlify.app/) who translated this work from R to Python! 🙏🙏As a teaser, here is the plot we're gonna try building: Load librariesLet's load libraries and utilities that are going to be used today. [`textwrap`](https://docs.python.org/3/library/textwrap.html) is a Python built-in module that contains several utilities to wrap text. In this post, it is going to help us to split long names into multiple lines.
###Code
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from matplotlib.cm import ScalarMappable
from matplotlib.lines import Line2D
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
from textwrap import wrap
###Output
_____no_output_____
###Markdown
Load and prepare the dataThis guide shows how to create a beautiful circular barplot to visualize several characteristics of hiking locations in Washington.The data for this post comes from [Washington Trails Association](https://www.wta.org/go-outside/hikes?b_start:int=1) courtesy of the [TidyX crew](https://github.com/thebioengineer/TidyX/tree/master/TidyTuesday_Explained/035-Rectangles), [Ellis Hughes](https://twitter.com/Ellis_hughes) and [Patrick Ward](https://twitter.com/OSPpatrick). This guide uses the dataset released for the [TidyTuesday](https://github.com/rfordatascience/tidytuesday) initiative on the week of 2020-11-24. You can find the original announcement and more information about the data [here](https://github.com/rfordatascience/tidytuesday/tree/master/data/2020/2020-11-24). Thank you all for making this possible!Let's start by loading and exploring the first rows of the dataset.
###Code
data = pd.read_csv("hike_data.csv")
data.head()
###Output
_____no_output_____
###Markdown
The first step is to extract the **region** from the `location` column. This is given by the text before the `"--"`.
###Code
data["region"] = data["location"].str.split("--", n=1, expand=True)[0]
# Make sure there's no leading/trailing whitespace
data["region"] = data["region"].str.strip()
###Output
_____no_output_____
###Markdown
A similar approach is used to extract the **number of miles**.
###Code
# Make sure to use .astype(Float) so it is numeric.
data["length_num"] = data["length"].str.split(" ", n=1, expand=True)[0].astype(float)
###Output
_____no_output_____
###Markdown
Now it's time to compute the **cumulative length** and **mean gain** for each region, as well as recording the **number of tracks** per region.
###Code
summary_stats = data.groupby(["region"]).agg(
sum_length = ("length_num", "sum"),
mean_gain = ("gain", "mean")
).reset_index()
summary_stats["mean_gain"] = summary_stats["mean_gain"].round(0)
trackNrs = data.groupby("region").size().to_frame('n').reset_index()
###Output
_____no_output_____
###Markdown
Finally, merge `summary_stats` with `tracksNrs` to get the final dataset.
###Code
summary_all = pd.merge(summary_stats, trackNrs, "left", on = "region")
summary_all.head()
###Output
_____no_output_____
###Markdown
Basic radar plot Radar charts plot data points in a circular layout. Instead of horizontal and vertical axes, it has an **angular** and a **radial** axis for **x** and **y**, respectively. In this world, **x** values are given by **angles** and **y** values are a **distance** from the center of the circle.In the chart we're just about to build, the **x** axis will represent the **regions**, and the **y** axis will represent their **cumulative length** and **mean gain**. Color is going to represent the **number of tracks**. Before getting started, just note the values of **x**, given in angles, have to be manually calculated and passed to Matplotlib. This is what is going on in the `np.linspace()` that defines the `ANGLES` variable.
###Code
# Bars are sorted by the cumulative track length
df_sorted = summary_all.sort_values("sum_length", ascending=False)
# Values for the x axis
ANGLES = np.linspace(0.05, 2 * np.pi - 0.05, len(df_sorted), endpoint=False)
# Cumulative length
LENGTHS = df_sorted["sum_length"].values
# Mean gain length
MEAN_GAIN = df_sorted["mean_gain"].values
# Region label
REGION = df_sorted["region"].values
# Number of tracks per region
TRACKS_N = df_sorted["n"].values
###Output
_____no_output_____
###Markdown
As usual, colors and other important values are declared before the code that actually produces the plot. In addition, the following chunk also sets the default font to **Bell MT**. For a step-by-step guide on how to install and load custom fonts in Matplotlib, have a look at [this post](https://www.python-graph-gallery.com/custom-fonts-in-matplotlib).
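The chunk below assumes Bell MT is already installed on your system. If Matplotlib cannot find it, you can usually register a local copy of the font file first, along these lines (a sketch; the .ttf path is a placeholder):
```python
from matplotlib import font_manager

# register a local font file with Matplotlib (path is a placeholder)
font_manager.fontManager.addfont("fonts/BellMT.ttf")
```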
###Code
GREY12 = "#1f1f1f"
# Set default font to Bell MT
plt.rcParams.update({"font.family": "Bell MT"})
# Set default font color to GREY12
plt.rcParams["text.color"] = GREY12
# The minus glyph is not available in Bell MT
# This disables it, and uses a hyphen
plt.rc("axes", unicode_minus=False)
# Colors
COLORS = ["#6C5B7B","#C06C84","#F67280","#F8B195"]
# Colormap
cmap = mpl.colors.LinearSegmentedColormap.from_list("my color", COLORS, N=256)
# Normalizer
norm = mpl.colors.Normalize(vmin=TRACKS_N.min(), vmax=TRACKS_N.max())
# Normalized colors. Each number of tracks is mapped to a color in the
# color scale 'cmap'
COLORS = cmap(norm(TRACKS_N))
###Output
_____no_output_____
###Markdown
Excited about how to make it? Let's do it!
###Code
# Some layout stuff ----------------------------------------------
# Initialize layout in polar coordinates
fig, ax = plt.subplots(figsize=(9, 12.6), subplot_kw={"projection": "polar"})
# Set background color to white, both axis and figure.
fig.patch.set_facecolor("white")
ax.set_facecolor("white")
ax.set_theta_offset(1.2 * np.pi / 2)
ax.set_ylim(-1500, 3500)
# Add geometries to the plot -------------------------------------
# See the zorder to manipulate which geometries are on top
# Add bars to represent the cumulative track lengths
ax.bar(ANGLES, LENGTHS, color=COLORS, alpha=0.9, width=0.52, zorder=10)
# Add dashed vertical lines. These are just references
ax.vlines(ANGLES, 0, 3000, color=GREY12, ls=(0, (4, 4)), zorder=11)
# Add dots to represent the mean gain
ax.scatter(ANGLES, MEAN_GAIN, s=60, color=GREY12, zorder=11)
# Add labels for the regions -------------------------------------
# Note the 'wrap()' function.
# The '5' means each wrapped line is at most 5 characters wide,
# and 'break_long_words=False' means words longer than 5 characters
# are kept whole instead of being split across lines.
REGION = ["\n".join(wrap(r, 5, break_long_words=False)) for r in REGION]
REGION
# Set the labels
ax.set_xticks(ANGLES)
ax.set_xticklabels(REGION, size=12);
###Output
_____no_output_____
###Markdown
Pretty good start! It wasn't too complicated to map the variables onto the different geometries in the plot. Customize guides and annotationsThe plot above looks quite nice for a start. But so many reference lines are unnecessary. Let's remove these defaults and improve this chart with custom annotations and guides.
###Code
# Remove unnecesary guides ---------------------------------------
# Remove lines for polar axis (x)
ax.xaxis.grid(False)
# Put grid lines for radial axis (y) at 0, 1000, 2000, and 3000
ax.set_yticklabels([])
ax.set_yticks([0, 1000, 2000, 3000])
# Remove spines
ax.spines["start"].set_color("none")
ax.spines["polar"].set_color("none")
# Adjust padding of the x axis labels ----------------------------
# This is going to add extra space around the labels for the
# ticks of the x axis.
XTICKS = ax.xaxis.get_major_ticks()
for tick in XTICKS:
tick.set_pad(10)
# Add custom annotations -----------------------------------------
# The following represent the heights in the values of the y axis
PAD = 10
ax.text(-0.2 * np.pi / 2, 1000 + PAD, "1000", ha="center", size=12)
ax.text(-0.2 * np.pi / 2, 2000 + PAD, "2000", ha="center", size=12)
ax.text(-0.2 * np.pi / 2, 3000 + PAD, "3000", ha="center", size=12)
# Add text to explain the meaning of the height of the bar and the
# height of the dot
ax.text(ANGLES[0], 3100, "Cummulative Length [FT]", rotation=21,
ha="center", va="center", size=10, zorder=12)
ax.text(ANGLES[0]+ 0.012, 1300, "Mean Elevation Gain\n[FASL]", rotation=-69,
ha="center", va="center", size=10, zorder=12)
fig
###Output
_____no_output_____
###Markdown
Final chartThe result looks much better! The clutter in the previous plot has disappeared, that's great! The last step is to add a legend that makes the colors more meaningful and a good title and annotations that can easily transmit what this chart is about.
###Code
# Add legend -----------------------------------------------------
# First, make some room for the legend and the caption in the bottom.
fig.subplots_adjust(bottom=0.175)
# Create an inset axes.
# Width and height are given by the (0.35 and 0.01) in the
# bbox_to_anchor
cbaxes = inset_axes(
ax,
width="100%",
height="100%",
loc="center",
bbox_to_anchor=(0.325, 0.1, 0.35, 0.01),
bbox_transform=fig.transFigure # Note it uses the figure.
)
# Create a new norm, which is discrete
bounds = [0, 100, 150, 200, 250, 300]
norm = mpl.colors.BoundaryNorm(bounds, cmap.N)
# Create the colorbar
cb = fig.colorbar(
ScalarMappable(norm=norm, cmap=cmap),
cax=cbaxes, # Use the inset_axes created above
orientation = "horizontal",
ticks=[100, 150, 200, 250]
)
# Remove the outline of the colorbar
cb.outline.set_visible(False)
# Remove tick marks
cb.ax.xaxis.set_tick_params(size=0)
# Set legend label and move it to the top (instead of default bottom)
cb.set_label("Amount of tracks", size=12, labelpad=-40)
# Add annotations ------------------------------------------------
# Make some room for the title and subtitle above.
fig.subplots_adjust(top=0.8)
# Define title, subtitle, and caption
title = "\nHiking Locations in Washington"
subtitle = "\n".join([
"This Visualisation shows the cummulative length of tracks,",
"the amount of tracks and the mean gain in elevation per location.\n",
"If you are an experienced hiker, you might want to go",
"to the North Cascades since there are a lot of tracks,",
"higher elevations and total length to overcome."
])
caption = "Data Visualisation by Tobias Stalder\ntobias-stalder.netlify.app\nSource: TidyX Crew (Ellis Hughes, Patrick Ward)\nLink to Data: github.com/rfordatascience/tidytuesday/blob/master/data/2020/2020-11-24/readme.md"
# And finally, add them to the plot.
fig.text(0.1, 0.93, title, fontsize=25, weight="bold", ha="left", va="baseline")
fig.text(0.1, 0.9, subtitle, fontsize=14, ha="left", va="top")
fig.text(0.5, 0.025, caption, fontsize=10, ha="center", va="baseline")
# Note: you can use `fig.savefig("plot.png", dpi=300)` to save it in high quality.
fig
###Output
_____no_output_____ |
Torrent_To_Google_Drive_Downloader_v3.ipynb | ###Markdown
Torrent To Google Drive Downloader v3 Mount Google Drive
To stream files we need to mount Google Drive.
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
Dependency
https://www.libtorrent.org/
###Code
!python -m pip install --upgrade pip setuptools wheel && python -m pip install lbry-libtorrent && apt install python3-libtorrent
###Output
_____no_output_____
###Markdown
Code to download torrent
Variable **link** stores the link string.
###Code
import libtorrent as lt
import time
import datetime
ses = lt.session()
ses.listen_on(6881, 6891)
params = {
'save_path': '/content/drive/My Drive/Torrent/',
'storage_mode': lt.storage_mode_t(2),
}
link = input("Input Torrent Link or Magnet and Press Enter")
print(link)
handle = lt.add_magnet_uri(ses, link, params)
ses.start_dht()
begin = time.time()
print(datetime.datetime.now())
print ('Downloading Metadata...')
while (not handle.has_metadata()):
time.sleep(1)
print ('Got Metadata, Starting Torrent Download...')
print("Starting", handle.name())
while (handle.status().state != lt.torrent_status.seeding):
s = handle.status()
state_str = ['queued', 'checking', 'downloading metadata', \
'downloading', 'finished', 'seeding', 'allocating']
print ('%.2f%% complete (down: %.1f kb/s up: %.1f kB/s peers: %d) %s ' % \
(s.progress * 100, s.download_rate / 1000, s.upload_rate / 1000, \
s.num_peers, state_str[s.state]))
time.sleep(5)
end = time.time()
print(handle.name(), "COMPLETE")
print("Elapsed Time: ",int((end-begin)//60),"min :", int((end-begin)%60), "sec")
print(datetime.datetime.now())
###Output
_____no_output_____
02_Web_Scraping_With_Beautiful_Soup/02_webscraping_bs4.ipynb | ###Markdown
To open this notebook in Google Colab and start coding, click on the Colab icon below. Run in Google Colab Web ScrapingWeb scraping is the process of extracting and storing data from websites for analytical or other purposes. Therefore, it is useful to know the basics of HTML and CSS, because you have to identify the elements of a webpage you want to scrape. If you want to refresh your knowledge about these elements, check out the [HTML basics notebook](./01_HTML_Basics.ipynb).We will go through all the important steps performed during web scraping with Python and BeautifulSoup in this Notebook. Learning objectives for this NotebookAt the end of this notebook you should:- be able to look at the structure of a real website- be able to figure out what information is relevant to you and how to find it (Locating Elements)- know how to download the HTML content with BeautifulSoup- know how to loop over an entire website structure and extract information- know how to save the data afterwardsFor web scraping it is useful to know the basics of HTML and CSS, because you have to identify the elements of a webpage you want to scrape. The easiest way to locate an element is to open your Chrome dev tools and inspect the element that you need. A cool shortcut for this is to highlight the element you want with your mouse and then press Ctrl + Shift + C or on macOS Cmd + Shift + C instead of having to right click + inspect each time (the same shortcut works in Firefox). Locating ElementsFor locating an element on a website you can use:- Tag name- Class name- IDs- XPath- CSS selectorsXPath is a technology that uses path expressions to select nodes or node-sets in an XML document (or in our case an HTML document). [Read here for more information](https://www.scrapingbee.com/blog/practical-xpath-for-web-scraping/) Is Web Scraping Legal?Unfortunately, there's not a cut-and-dried answer here. Some websites explicitly allow web scraping. Others explicitly forbid it. Many websites don't offer any clear guidance one way or the other.Before scraping any website, we should look for a terms and conditions page to see if there are explicit rules about scraping. If there are, we should follow them. If there are not, then it becomes more of a judgement call.Remember, though, that web scraping consumes server resources for the host website. If we're just scraping one page once, that isn't going to cause a problem. But if our code is scraping 1,000 pages once every ten minutes, that could quickly get expensive for the website owner.Thus, in addition to following any and all explicit rules about web scraping posted on the site, it's also a good idea to follow these best practices: Web Scraping Best Practices:- Never scrape more frequently than you need to.- Consider caching the content you scrape so that it's only downloaded once.- Build pauses into your code using functions like time.sleep() to keep from overwhelming servers with too many requests too quickly. The Problem we want to solveLarissa's sister broke her aquarium. And we decided to get her a new one because Christmas is near and we want to cheer Larissa up! And because we know how to code and can't decide what fish we want to get, we will solve this problem with web scraping! BeautifulSoupThe library we will use today to find fishes we can gift Larissa for Christmas is [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/). It is a library to extract data out of HTML and XML files.The first thing we'll need to do to scrape a web page is to download the page.
We can download pages using the Python requests library. It will make a GET request to a web server, which will download the HTML contents of a given web page for us. There are several different types of requests we can make using requests, of which GET is just one.
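As a side note on the "build pauses into your code" best practice above, a polite download helper could wrap the GET request together with a short pause (a sketch, not used in the rest of this notebook):
```python
import time
import requests

def polite_get(url, pause=1.0):
    """Download a page, then wait a moment so we don't overwhelm the server."""
    response = requests.get(url)
    time.sleep(pause)
    return response
```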
###Code
import time
import requests
from bs4 import BeautifulSoup
import re
import pandas as pd
# get the content of the website
page = requests.get("https://www.interaquaristik.de/tiere/zierfische")
html = page.content
###Output
_____no_output_____
###Markdown
We can use the BeautifulSoup library to parse this document, and extract the information from it.We first have to import the library, and create an instance of the BeautifulSoup class to parse our document:
###Code
# parse the html and save it into a BeautifulSoup instance
bs = BeautifulSoup(html, 'html.parser')
###Output
_____no_output_____
###Markdown
We can now print out the HTML content of the page, formatted nicely, using the prettify method on the BeautifulSoup object.
###Code
print(bs.prettify())
###Output
_____no_output_____
###Markdown
This step isn't strictly necessary, and we won't always bother with it, but it can be helpful to look at prettified HTML to make the structure of the page clearer and the nested tags easier to read.As all the tags are nested, we can move through the structure one level at a time. We can first select all the elements at the top level of the page using the children property of ``bs``.Note that children returns a list generator, so we need to call the list function on it:
###Code
list(bs.children)
###Output
_____no_output_____
###Markdown
And then we can have a closer look on the children. For example the ```head```.
###Code
bs.find('head')
###Output
_____no_output_____
###Markdown
Here you can try out different tags like ```body```, headers like ```h1``` or ```title```:
###Code
bs.find('insert your tag here')
###Output
_____no_output_____
###Markdown
But what if we have more than one element with the same tag? Then we can just use the ```.find_all()``` method of BeautifulSoup:
###Code
bs.find_all('article')
###Output
_____no_output_____
###Markdown
Also you can search for more than one tag at once for example if you want to look for all headers on the page:
###Code
titles = bs.find_all(['h1', 'h2','h3','h4','h5','h6'])
print([title for title in titles])
###Output
_____no_output_____
###Markdown
Often we are not interested in the tags themselves, but in the content they contain. With the ```.get_text()``` method we can easily extract the text from between the tags. So let's find out if we are really scraping the right page to buy the fishes:
###Code
bs.find('title').get_text()
###Output
_____no_output_____
###Markdown
Searching for tags by class and idWe introduced ```classes``` and ```ids``` earlier, but it probably wasn't clear why they were useful.Classes and ```ids``` are used by ```CSS``` to determine which ```HTML``` elements to apply certain styles to. For web scraping they are also pretty useful, as we can use them to specify the elements we want to scrape. In our case the ```ids``` are not that useful, as there are only a few of them, but one example would be:
###Code
bs.find_all('div', id='page-body')
###Output
_____no_output_____
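###Markdown
As an aside, BeautifulSoup also supports the CSS selectors mentioned at the start of this notebook through the ```.select()``` method, which is sometimes more compact than ```find```/```find_all``` (a short sketch using an id and classes that appear in this notebook):
```python
# the same kinds of lookups expressed as CSS selectors
bs.select('div#page-body')        # the element with id="page-body"
bs.select('.thumb-title.small')   # elements with class="thumb-title small"
bs.select('ul.product-list img')  # img tags nested inside the product list
```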
###Markdown
But it seems like the ```classes``` could be useful for finding the fishes and their prices. Can you spot the necessary tags in the DevTools of your browser?
###Code
# tag of the description of the fishes
bs.find_all(class_="insert your tag here for the name")
# tag of the price of the fishes
bs.find_all(class_="insert your tag here for the price")
###Output
_____no_output_____
###Markdown
Extracting all the important information from the pageNow that we know how to extract each individual piece of information, we can save this information to a list. Let's start with the price:
###Code
# We will search for the price
prices = bs.find_all(class_= "price")
prices_lst = [price.get_text() for price in prices]
prices_lst
###Output
_____no_output_____
###Markdown
We seem to be on the right track, but as you can see it doesn't handle the special characters, spaces and paragraphs. So web scraping goes hand in hand with cleaning your data:
###Code
prices_lst = [price.strip() for price in prices_lst]
prices_lst[:5]
###Output
_____no_output_____
###Markdown
That looks a little bit better, but we only want the number so that we can work with the prices later. We have to remove the remaining characters and convert the string to a float:
###Code
# We are removing the letters from the end of the string and keeping only the first part
prices_lst = [price.replace('\xa0€ *', '') for price in prices_lst]
prices_lst[:5]
# Now we have to replace the comma with a dot to convert the string to a float
prices_lst = [price.replace(',', '.') for price in prices_lst]
prices_lst[:5]
# So lets convert the string into a float
prices_lst = [float(price) for price in prices_lst]
###Output
_____no_output_____
###Markdown
But if we try to convert the strings to floats we get an error message: there seem to be prices which start with ```ab```. So let me introduce you to a very handy thing called ```regular expressions```, or ```regex``` for short. A regular expression is a sequence of characters that specifies a search pattern. In Python you can use regex with the ```re``` library. So let's have a look at how many of the prices still contain any kind of letters.
###Code
# with the regex sequence we are looking for strings that contain any
# kind of letters
for price in prices_lst:
if re.match("^[A-Za-z]", price):
print(price)
###Output
_____no_output_____
###Markdown
So there are some prices with an "ab" (German for "from") in front of them, so let's remove those letters:
###Code
# Remove the leading 'ab ' so that the string can be converted to a float
prices_lst = [float(price.replace('ab ', '')) for price in prices_lst]
prices_lst[:5]
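# As an aside, the whole clean-up can also be done in one pass with a regular expression.
# This is only a sketch: it assumes the price texts scraped above (e.g. 'ab 5,99\xa0€ *')
# and simply takes the first number it finds in each element, falling back to 0.0.
def quick_price(tag):
    match = re.search(r'\d+(?:,\d+)?', tag.get_text())
    return float(match.group(0).replace(',', '.')) if match else 0.0

regex_cleaned = [quick_price(price) for price in prices]
regex_cleaned[:5]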
###Output
_____no_output_____
###Markdown
Now it worked! So let's do the same with the descriptions of the fish:
###Code
# Find all the descriptions of the fish and save them in a variable
descriptions = bs.find_all(class_='thumb-title small')
# Get only the text of the descriptions
descriptions_lst = [description.get_text() for description in descriptions]
descriptions_lst
# Clean the text by removing spaces and paragraphs
descriptions_lst = [description.strip() for description in descriptions_lst]
descriptions_lst[:5]
###Output
_____no_output_____
###Markdown
Let's see if we can get the links to the images of the fish, so that we can later look up what the fish look like. In most cases we can use the ```img``` tag for that:
###Code
# find all images of the fish
image_lst = bs.find('ul', {'class': 'product-list row grid'})
images = image_lst.find_all('img')
images
###Output
_____no_output_____
###Markdown
There are only two results for the ```img``` tag, so let's have a look at which tag the other images use. It turns out they use the ```picture``` tag, so let's extract those:
###Code
# Extract all the pictures of the fish by using first the tag ul and then the tag picture
picture_lst = bs.find('ul', {'class': 'product-list row grid'})
pictures = picture_lst.find_all('picture')
pictures[:5]
###Output
_____no_output_____
###Markdown
That looks more like all the pictures! It also seems some of the fish have special labels like 'Sonderangebot' (special offer) or 'Neuheit' (new item). Wouldn't it be nice to have this information as well? Here it gets a little bit tricky, because 'Sonderangebot' and 'Neuheit' do not have the same ```classes``` in the ```span```, but if we go one tag higher we can get all of them:
###Code
# Extracting all the special offers by using the div tag and the class 'special-tags p-2'
specials = bs.find_all('div', {'class' : 'special-tags p-2'})
specials
###Output
_____no_output_____
###Markdown
If we want only the text from the ```span``` we now can iterate over the specials list and extract the text:
###Code
# to get only the text from the specials we are iterating over all specials
for special in specials:
    # and then get the text of all spans from the special objects
special_text = special.find("span").get_text().strip()
print(special_text)
###Output
_____no_output_____
###Markdown
Nice, that will help us make a decision about which fish to buy! But so far we have only scraped the first page, and there are more fish on the next pages: 29 pages of fish in total. So how can we automate this? This is the link of the first page: https://www.interaquaristik.de/tiere/zierfische The link of the second page looks like this: https://www.interaquaristik.de/tiere/zierfische?page=2 The third: https://www.interaquaristik.de/tiere/zierfische?page=3 So the only thing that changes is the ending... Let's use this! But don't forget that each request causes traffic for the server, so we will set a sleep timer between requests!

```
link = 'https://www.interaquaristik.de/tiere/zierfische'
for _ in range(30):
    time.sleep(3)
    if _ == 0:
        page = requests.get(link)
        html = page.content
    else:
        print(link + f'?page={_}')
        page = requests.get(link + f'?page={_}')
        html = page.content
```

This will be our starting point! We will save our results in a pandas data frame so that we can work with the data later. Therefore we will create an empty data frame and append our data to it.
###Code
# Creating an empty Dataframe for later use
df = pd.DataFrame()
###Output
_____no_output_____
###Markdown
But first let's create some functions for the scraping part:
1. for the description
2. for the price
3. for the images
4. for the specials
###Code
# Creating a function to get all the description
def get_description(lst_name):
'''
Get all the description from the fish by class_ = 'thumb-title small'
and saving it to an input list.
Input: list
Output: list
'''
# find all the descriptions and save them to a list
fish = bs.find_all(class_='thumb-title small')
# iterate over the list fish to get the text and strip the strings
for names in fish:
lst_name.append(
names.get_text()\
.strip()
)
return lst_name
# Creating a function to get all the prices
def get_price(lst_name):
'''
Get all the prices from the fish by class_ = 'prices'
and saving it to an input list.
Input: list
Output: list
'''
# find all the prices and save them to a list
prices = bs.find_all(class_='prices')
# iterate over the prices
for price in prices:
# try to clean the strings from spaces, letters and paragraphs and convert it into a float
try:
price = float(price.get_text()\
.strip()\
.replace('\xa0€ *','')\
.replace(',','.')\
.replace('ab ', '')
)
        except ValueError:
            # some price strings have a different format (e.g. no '*'); clean them up here
            price = price.get_text()\
                         .strip()\
                         .split('\n')[0]\
                         .replace('\xa0€','')\
                         .replace('*','')\
                         .replace('ab ', '')\
                         .replace(',', '.')\
                         .strip()
            # if nothing numeric is left, fall back to 0.0
            price = float(price) if price != '' else 0.0
# append the prices to the lst_name list
lst_name.append(
price
)
return lst_name
# Creating a function to get all the images
def get_image(lst_name_1, lst_name_2):
'''
Get all the images from the fish by tag = 'ul' and class_ = 'product-list row grid'
and saving the name to one lst_name_1 and the link of the image to another lst_name_2.
Input: list_1, list_2
Output: list_1, list_2
'''
# find all images
images_listings = bs.find('ul', {'class': 'product-list row grid'})
images = images_listings.find_all('img')
# find all pictures
pictures_listings = bs.find('ul', {'class': 'product-list row grid'})
pictures = pictures_listings.find_all('picture')
# iterate over the images and save the names of the fish in one list and the link to the image in another one
for image in images:
lst_name_1.append(image['src'])
lst_name_2.append(image['alt'].strip())
# iterate over the pictures and save the names of the fish in one list and the link to the image in another one
for picture in pictures:
lst_name_1.append(picture['data-iesrc'])
lst_name_2.append(picture['data-alt'].strip())
return lst_name_1, lst_name_2
def get_special(lst_name_1, lst_name_2):
'''
    Get all the specials of the fish by tag = 'div' and class_ = 'thumb-inner'
and saving the name to one lst_name_1 and the index to another lst_name_2.
Input: list_1, list_2
Output: list_1, list_2
'''
# use the article as tag to get the index of all articles
article_lst = bs.find_all('div', {'class' : 'thumb-inner'})
# iterate over all articles with enumerate to get the single articles and the index
for idx,article in enumerate(article_lst):
# get all specials
spans = article.find('div', {'class' : 'special-tags p-2'})
# and if there is a special save the special and the index each to a list
        if spans is not None:
special = spans.find("span").get_text().strip()
lst_name_1.append(special)
lst_name_2.append(idx)
return lst_name_1, lst_name_2
###Output
_____no_output_____
###Markdown
Now we will combine it all so that we could scrape all pages. **NOTE:** We have commented out the code, because we don't want to overwhelm the server with the requests of all the participants in the meetup. Feel free to run the code after the meetup. We ran the code once and uploaded the result as a csv file to GitHub, so the following code will still work!
###Code
#link = 'https://www.interaquaristik.de/tiere/zierfische'
#
## for loop to get the page numbers
#for _ in range(30):
# # sleep timer to reduce the traffic for the server
# time.sleep(3)
# # create the lists for the functions
# fish_names = []
# fish_prices = []
# picture_lst = []
# picture_name = []
# index_lst =[]
# special_lst = []
# # first iteration is the main page
# if _ == 0:
# # get the content
# page = requests.get(link)
# html = page.content
# bs = BeautifulSoup(html, 'html.parser')
# # call the functions to get the information
# get_description(fish_names)
# get_price(fish_prices)
# get_image(picture_lst, picture_name)
# get_special(special_lst, index_lst)
# # create a pandas dataframe for the names and prices
# fish_dict = {
# 'fish_names': fish_names,
# 'fish_prices in EUR': fish_prices
# }
# df_fish_info = pd.DataFrame(data=fish_dict)
# # create a pandas dataframe for the pictures
# picture_dict = {
# 'fish_names': picture_name,
# 'pictures': picture_lst
# }
# df_picture = pd.DataFrame(data=picture_dict)
#
# # merge those two dataframes on the fishnames
# df_ = pd.merge(df_fish_info, df_picture, on='fish_names', how='outer')
#
# # create a pandas dataframe for the specials
# specials_dict = {
# 'special': special_lst
# }
# real_index = pd.Series(index_lst)
# df_specials = pd.DataFrame(data=specials_dict)
# df_specials.set_index(real_index, inplace=True)
#
# # merge the dataframes on the index
# df_ = pd.merge(df_, df_specials, left_index=True,right_index=True, how='outer')
# # append the temporary dataframe to the dataframe we created earlier outside the for loop
# df = df.append(df_)
# # else-statement for the next pages
# else:
# # get the content from the links we create with an f-string and the number we get from the for-loop
# page = requests.get(link+f'?page={_}')
# html = page.content
# bs = BeautifulSoup(html, 'html.parser')
# # call the functions to get the information
# get_description(fish_names)
# get_price(fish_prices)
# get_image(picture_lst, picture_name)
# get_special(special_lst, index_lst)
# # create a pandas dataframe for the names and prices
# fish_dict = {
# 'fish_names': fish_names,
# 'fish_prices in EUR': fish_prices
# }
# df_fish_info = pd.DataFrame(data=fish_dict)
# # create a pandas dataframe for the pictures
# picture_dict = {
# 'fish_names': picture_name,
# 'pictures': picture_lst
# }
# df_picture = pd.DataFrame(data=picture_dict)
#
# # merge those two dataframes on the fishnames
# df_ = pd.merge(df_fish_info, df_picture, on='fish_names', how='outer')
#
# # create a pandas dataframe for the specials
# specials_dict = {
# 'special': special_lst
# }
# real_index = pd.Series(index_lst)
# df_specials = pd.DataFrame(data=specials_dict)
# df_specials.set_index(real_index, inplace=True)
#
# # merge the dataframes on the index
# df_ = pd.merge(df_, df_specials, left_index=True,right_index=True, how='outer')
# # append the temporary dataframe to the dataframe we created earlier outside the for loop
# df = df.append(df_)
#
#checking if everything worked
#df.head()
###Output
_____no_output_____
###Markdown
The web scraping part is over and the following part is only about looking at the data. We will save the dataframe to a csv file so that we don't have to scrape the info again! First, let's check for duplicates, something that can happen quickly while scraping:

```
df.pivot_table(columns=['fish_names'], aggfunc='size')
```

It seems like we have some duplicates. Let's drop them!
###Code
#df.drop_duplicates(inplace=True)
# save the dataframe to a csv file without index
#df.to_csv('fish_data.csv', index=False)
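## An alternative duplicate check (just a sketch, kept commented out like the rest of this
## cell because `df` is only filled if the scraping loop above was actually run):
#print('duplicate rows:', df.duplicated().sum())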
###Output
_____no_output_____
###Markdown
Because we haven't run the code for scraping all pages here, we uploaded the data we scraped before to GitHub, and we can now load it into pandas:
###Code
# reading the csv file from github
df = pd.read_csv('https://raw.githubusercontent.com/neuefische/ds-meetups/main/02_Web_Scraping_With_Beautiful_Soup/fish_data.csv')
#checking if everything worked
df.head()
###Output
_____no_output_____
###Markdown
We want fish for Larissa that she has never had before, which is why we are looking for new items (Neuheiten).
###Code
# Query over the dataframe and keeping only the fish with the special Neuheit
df_special_offer = df.query('special == "Neuheit"')
df_special_offer.head()
###Output
_____no_output_____
###Markdown
We have a budget of around 250 € and we want to buy at least 10 fish, so we will filter out fish that are more expensive than 25 €!
###Code
# Filtering only for the fish that are cheaper than 25 EUR
df_final = df_special_offer[df_special_offer['fish_prices in EUR'] <= 25]
df_final.head()
###Output
_____no_output_____
###Markdown
So let's write some code that chooses the fish for us:
###Code
# our budget
BUDGET = 250
# a list for the fish we will buy
shopping_bag = []
# a variable where we keep the running total of the price
price = 0
# we are looking for fish until our budget is reached
while price <= BUDGET:
# samples the dataframe randomly
df_temp = df_final.sample(1)
# getting the name from the sample
name = df_temp['fish_names'].values
# getting the price from the sample
fish_price = df_temp['fish_prices in EUR'].values
# updating our price
price += fish_price
# adding the fish name to the shopping bag
shopping_bag.append((name[0],fish_price[0]))
pd.set_option('display.max_colwidth', None)
print(f"We are at a price point of {price[0].round(2)} Euro and this are the fish we chose:")
res=pd.DataFrame(shopping_bag,columns=["Name","Price [€]"])
display(res)
###Output
_____no_output_____
notebooks/10_SparkStreaming.ipynb | ###Markdown
10: SparkStreaming

This example shows the original implementation of streaming in Spark, the _Spark streaming_ capability that is based on the `RDD` API. We construct a simple "word count" server. This example watches a directory for new files and reads them as they arrive. The corresponding program version of this example, [SparkStreaming10.scala](https://github.com/deanwampler/spark-scala-tutorial/blob/master/src/main/scala/sparktutorial/SparkStreaming10.scala), supports this input source and a second option, input from a socket. See the [Tutorial.markdown](https://github.com/deanwampler/spark-scala-tutorial/blob/master/Tutorial.markdown) for details. The newer streaming module is called _Structured Streaming_. It is based on the `Dataset` API, for better performance and convenience, and it supports much lower-latency processing. Examples of this API are TBD here, but see the [Apache Spark Structured Streaming Programming Guide](https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html) for more information. Watching a directory for new files supports a workflow where some process outputs new files to a "staging" directory where this job will do subsequent processing. Note that Spark Streaming does not use the `_SUCCESS` marker file we mentioned in an earlier notebook for batch processing, in part because that mechanism can only be used once *all* files are written to the directory. Hence, Spark can't know when writing the file has actually completed. This means you should only use this ingestion mechanism with files that "appear instantly" in the directory, i.e., through renaming from another location in the file system. For the example, a temporary directory is created and a second process writes the user-specified data file(s) (default: Enron emails) to that temporary directory every second. `SparkStreaming10` does *Word Count* on the data. Hence, the data would eventually repeat, but for convenience, we also stop after 200 iterations (the number of email files).
###Code
import java.io.File
val dataSource = new File("../data/enron-spam-ham")
val watchedDirectory = new File("tmp/streaming-input")
val outputPathRoot = new File("streaming-output/")
outputPathRoot.mkdirs()
val outputPath = new File(outputPathRoot, "wc-streaming")
val iterations = 200 // Terminate after N iterations
val sleepIntervalMillis = 1000 // How often to wait between writes of files to the directory
val batchSeconds = 2 // Size of batch intervals
###Output
_____no_output_____
###Markdown
A function to delete a file or a directory and its contents
###Code
def rmrf(root: File): Unit = {
if (root.isFile) root.delete()
else if (root.exists) {
root.listFiles.foreach(rmrf)
root.delete()
}
}
###Output
_____no_output_____
###Markdown
Use it to remove the watched directory, if one exists from a previous run. Then recreate it.
###Code
rmrf(watchedDirectory)
watchedDirectory.mkdirs()
###Output
_____no_output_____
###Markdown
We need a second process or dedicated thread to write new files to the watched directory. To support this, we'll insert here a stripped-down version of [util.streaming.DataDirectoryServer.scala](https://github.com/deanwampler/spark-scala-tutorial/blob/master/src/main/scala/sparktutorial/util/streaming/DataDirectoryServer.scala) from the application version of the tutorial. It runs its logic in a separate thread. It serves data to be used by this notebook by periodically writing a new file to a watched directory, as discussed below.
###Code
case class DataServerError(msg: String, cause: Throwable = null) extends RuntimeException(msg, cause)
import java.nio.file.{Files, FileSystems, Path}
import java.nio.file.attribute.BasicFileAttributes
import java.util.function.BiPredicate
import scala.util.control.NonFatal
import scala.collection.JavaConverters._
def makePath(pathString: String): Path = FileSystems.getDefault().getPath(pathString)
def makePath(file: java.io.File): Path = makePath(file.getAbsolutePath)
def makePath(parent: Path, name: String): Path = FileSystems.getDefault().getPath(parent.toString, name)
case class DataDirectoryServer(destinationDirectoryPath: Path, sourceRootPath: Path) extends Runnable {
def run: Unit = try {
val sources = getSourcePaths(sourceRootPath)
if (sources.size == 0) throw DataServerError(s"No sources for path $sourceRootPath!")
sources.zipWithIndex.foreach { case (source, index) =>
val destination = makePath(destinationDirectoryPath, source.getFileName.toString)
println(s"\nIteration ${index+1}: destination: ${destination}")
Files.copy(source, destination)
Thread.sleep(sleepIntervalMillis)
}
} catch {
case NonFatal(ex) => throw DataServerError("Data serving failed!", ex)
}
/**
* Get the paths for the source files.
*/
protected def getSourcePaths(sourcePath: Path): Seq[Path] =
Files.find(sourcePath, 5,
new BiPredicate[Path, BasicFileAttributes]() {
def test(path: Path, attribs: BasicFileAttributes): Boolean = attribs.isRegularFile
}).iterator.asScala.toSeq
}
###Output
_____no_output_____
###Markdown
Here is the Spark code for processing the stream. Start by creating the `StreamingContext`.
###Code
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.scheduler.{
StreamingListener, StreamingListenerReceiverError, StreamingListenerReceiverStopped}
val sc = spark.sparkContext
val ssc = new StreamingContext(sc, Seconds(batchSeconds))
###Output
_____no_output_____
###Markdown
Define a listener for the end of the stream.

> **Note:** We have to repeat import statements because of scoping idiosyncrasies in the way cells are converted to Scala.
###Code
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.scheduler.{
StreamingListener, StreamingListenerReceiverError, StreamingListenerReceiverStopped}
class EndOfStreamListener(sc: StreamingContext) extends StreamingListener {
override def onReceiverError(error: StreamingListenerReceiverError):Unit = {
println(s"Receiver Error: $error. Stopping...")
sc.stop()
}
override def onReceiverStopped(stopped: StreamingListenerReceiverStopped):Unit = {
println(s"Receiver Stopped: $stopped. Stopping...")
}
}
ssc.addStreamingListener(new EndOfStreamListener(ssc))
###Output
_____no_output_____
###Markdown
Now add the logic to process the data. We do _Word Count_, splitting on non-alphabetic characters.
###Code
val lines = ssc.textFileStream(watchedDirectory.getAbsolutePath)
val words = lines.flatMap(line => line.split("""[^\p{IsAlphabetic}]+"""))
val pairs = words.map(word => (word, 1))
val wordCounts = pairs.reduceByKey(_ + _)
###Output
_____no_output_____
###Markdown
Calling print will cause some useful diagnostic output to be printed during every mini-batch:

```text
-------------------------------------------
Time: 1413724627000 ms
-------------------------------------------
(limitless,2)
(grand,2)
(someone,4)
(priority,2)
(goals,1)
(ll,5)
(agree,1)
(offer,2)
(yahoo,3)
(ebook,3)
...
```

The time stamp will increment by 2000 ms each time, because we're running with 2-second batch intervals (or whatever you set `batchSeconds` to above). This particular output comes from the `print` method we add in the next cell, which is a useful debug tool for seeing the first 10 or so values in the current batch `RDD`.
###Code
wordCounts.print() // print a few counts...
###Output
_____no_output_____
###Markdown
Calling `saveAsTextFiles` will cause new directories to be written under the `outputPath` directory, one new directory per mini-batch. They have names like `output/wc-streaming-1413724628000.out`, with a timestamp appended to our default output argument `output/wc-streaming`, and the extension we add, `out`. Each of these will contain the usual `_SUCCESS` and `part-0000N` files, one for each core that the task is given.
###Code
// Generates a separate subdirectory for each interval!!
wordCounts.saveAsTextFiles(outputPath.getAbsolutePath, "out")
###Output
_____no_output_____
###Markdown
Now start the background thread:
###Code
val directoryServerThread = new Thread(new DataDirectoryServer(makePath(watchedDirectory), makePath(dataSource)))
directoryServerThread.start()
###Output
Iteration 1: destination: /home/jovyan/notebooks/tmp/streaming-input/0003.2004-08-01.BG.spam.txt
###Markdown
Start the streaming process and wait forever. To have it exit after a certain number of milliseconds, pass a number for the milliseconds as the argument to `awaitTermination`. We'll wrap this in a separate thread so we can retain some control for stopping everything.
###Code
val streamRunnable = new Runnable {
def run(): Unit = {
ssc.start()
ssc.awaitTermination()
}
}
val streamThread = new Thread(streamRunnable)
streamThread.start()
###Output
_____no_output_____
###Markdown
Evaluate the next cell to stop the serving thread and streaming process. (If the cell evaluation hangs, stop or reset the kernel to kill it.)
###Code
directoryServerThread.stop()
ssc.stop(stopSparkContext = true)
streamThread.stop()
###Output
_____no_output_____
###Markdown
When finished with it, clean up the watched directory...
###Code
rmrf(watchedDirectory)
###Output
_____no_output_____
###Markdown
... and the streaming output directory
###Code
rmrf(outputPathRoot)
###Output
_____no_output_____ |
feature_extraction_diagrams/extdia_v0_batch.ipynb | ###Markdown
ini
###Code
BASE_FOLDER = '../'
%run -i ..\utility\feature_extractor\JupyterLoad_feature_extractor.py
%run -i ..\utility\extractor_batch.py
print('-'*44)
%run -i .\extdia_v0
print('-'*44)
target_folder = r'\dataset\extdia_v0'
n_jobs = 7
def process_set(FileFindDict, FileCountLimit=None):
extractor_batch(base_folder= BASE_FOLDER,
target_folder=target_folder,
extdia = extdia_v0,
FileFindDict = FileFindDict,
n_jobs = n_jobs,
target_class_map = {'abnormal':1, 'normal': 0},
FileCountLimit = FileCountLimit,
datset_folder_from_base = 'dataset')
###Output
_____no_output_____
###Markdown
Test Small set
###Code
process_set({'SNR': '6dB','machine': 'pump','ID': ['00','02','04','06']},2)
###Output
2020-04-29 23:13:22: Target folder will be: A:\Dev\NF_Prj_MIMII_Dataset\dataset\extdia_v0
2020-04-29 23:13:22: Extractor diagram is fof type: <class '__main__.extdia_v0'>
2020-04-29 23:13:22: --------------------------------------------
2020-04-29 23:13:22: Working on machinepart:pump SNR:6dB ID:00
2020-04-29 23:13:22: Files to process: 4
2020-04-29 23:13:22: multithread mode filling the queue
###Markdown
Full Batch
###Code
ts = time.time()
process_set({'SNR': '6dB','machine': 'pump','ID': ['00','02','04','06']})
process_set({'SNR': 'min6dB','machine': 'pump','ID': ['00','02','04','06']})
process_set({'SNR': '6dB','machine': 'fan','ID': ['00','02','04','06']})
process_set({'SNR': 'min6dB','machine': 'fan','ID': ['00','02','04','06']})
process_set({'SNR': '6dB','machine': 'slider','ID': ['00','02','04','06']})
process_set({'SNR': 'min6dB','machine': 'slider','ID': ['00','02','04','06']})
process_set({'SNR': '6dB','machine': 'valve','ID': ['00','02','04','06']})
process_set({'SNR': 'min6dB','machine': 'valve','ID': ['00','02','04','06']})
#process_set({'SNR': '0dB','machine': 'pump','ID': ['00','02','04','06']})
#process_set({'SNR': '0dB','machine': 'valve','ID': ['00','02','04','06']})
#process_set({'SNR': '0dB','machine': 'slider','ID': ['00','02','04','06']})
#process_set({'SNR': '0dB','machine': 'fan','ID': ['00','02','04','06']})
print('#####> total time needed: ' + str(np.round((time.time()-ts)/60, 1)) + ' min')
###Output
_____no_output_____
###Markdown
Verification
###Code
df_filename = 'pump6dB00_EDiaV0_pandaDisc.pkl'
df = pd.read_pickle(os.path.abspath(BASE_FOLDER+target_folder+'\\'+df_filename))
# simple check read back
#target_folder=r'\feature_extraction_diagrams\extdia_v0_testout2'
df = pd.read_pickle(os.path.abspath(BASE_FOLDER+target_folder+r'\pump6dB00_EDiaV0_pandaDisc.pkl'))
i = 1
op = 'MEL_den'
fp = os.path.abspath(BASE_FOLDER+df[op][i])
dd = pickle.load( open( fp , "rb" ))
print(dd[i]['para_dict']['wave_filepath'])
fe = feature_extractor_from_dict(dd[i], BASE_FOLDER)
fe.plot()
384/60
###Output
_____no_output_____ |
docs/notebooks/Compact-Schemes-for-Poisson-Equation.ipynb | ###Markdown
Compact n$^{th}$-order derivativeThe compact coefficients for the $n^{th}$ derivative $f^{(n)}$ of a function can be found by solving the system$$\begin{bmatrix} \begin{matrix} 0 & 0 & 0 \end{matrix} & \begin{matrix} 1 & 1 & 1\\ \end{matrix}\\ Q^{(n)} & \begin{matrix}h_{i-1} & 0 & h_{i+1}\\h_{i-1}^2/2! & 0 & h_{i+1}^2/2!\\h_{i-1}^3/3! & 0 & h_{i+1}^3/3!\\h_{i-1}^4/4! & 0 & h_{i+1}^4/4!\end{matrix}\\\begin{matrix} 0 & 1 & 0 \end{matrix} & \begin{matrix} 0 & 0 & 0\\ \end{matrix}\\\end{bmatrix}\begin{bmatrix}L_{i-1} \\ L_{i} \\ L_{i+1} \\ -R_{i-1} \\ -R_{i} \\ -R_{i+1}\\\end{bmatrix}=\begin{bmatrix}0\\ 0\\ 0\\ 0\\ 0\\ 1\\,\end{bmatrix}$$where $h_{i-1}=x_{i-1}-x_i$ and $h_{i+1} = x_{i+1}-x_i$. The sub-matrix $Q^{(n)}$ depends on the derivative required. For the first derivative, we have$$Q^{(1)} = \begin{bmatrix}1 & 1 & 1\\h_{i-1} & 0 & h_{i+1}\\h_{i-1}^2/2! & 0 & h_{i+1}^2/2!\\h_{i-1}^3/3! & 0 & h_{i+1}^3/3!\\\end{bmatrix}$$and for the second derivative$$Q^{(2)} = \begin{bmatrix}0 & 0 & 0\\1 & 1 & 1\\h_{i-1} & 0 & h_{i+1}\\h_{i-1}^2/2! & 0 & h_{i+1}^2/2!\\\end{bmatrix}.$$
###Code
# Imports assumed by the cells below (the original notebook's import cell is not shown here)
import numpy as np
import matplotlib.pyplot as plt
from math import factorial as fact
from numpy.linalg import norm
from scipy import sparse, fftpack

def get_compact_coeffs(n, hi):
# assumes uniform grid
h_i = -hi
r = np.hstack((np.array([0 for i in range(5)]),1.))
L = np.array([[0, 0, 0, 1, 1, 1],
[0, 0, 0, h_i, 0, hi],
[0, 0, 0, h_i**2/fact(2), 0, hi**2/fact(2)],
[0, 0, 0, h_i**3/fact(3), 0, hi**3/fact(3)],
[0, 0, 0, h_i**4/fact(4), 0, hi**4/fact(4)],
[0, 1, 0, 0, 0, 0]])
insert = np.array([[1, 1, 1],
[h_i, 0, hi],
[h_i**2/fact(2), 0, hi**2/fact(2)],
[h_i**3/fact(3), 0, hi**3/fact(3)]])
L[n:5,:3] = insert[:-n+5,:]
vec = np.round(np.linalg.solve(L, r), 8)
return vec[:3], -vec[3:]
###Output
_____no_output_____
###Markdown
We can check that for a first derivative, we recover the standard Pade ($4^{th}$-order) [coefficients](https://github.com/marinlauber/my-numerical-recipes/blob/master/Compact-Schemes.ipynb), which are$$ L = \left[\frac{1}{4}, 1, \frac{1}{4}\right], \qquad R = \left[-\frac{3}{4}, 0., \frac{3}{4}\right]$$
###Code
pade = np.array([1./4., 1., 1./4., -3./4., 0., 3./4.])
np.allclose(np.hstack(get_compact_coeffs(1, 1)), pade)
###Output
_____no_output_____
###Markdown
We can now write a function that, given a function $f$ on a uniform grid with spacing $dx$, returns the $n^{th}$ derivative of that function. Because we solve for the compact coefficients at each point, we could in theory get compact schemes on non-uniform grids with the same accuracy. Here we will only focus on uniform grids.
###Code
def derive_compact(n, f, dx):
# get coeffs
L, R = get_compact_coeffs(n, dx)
# temp array
sol = np.empty_like(f)
# compact scheme on interior points
sol[2:-2] = R[0]*f[1:-3] + R[1]*f[2:-2] + R[2]*f[3:-1]
# boundary points
sol[-2] = R[0]*f[-3] + R[1]*f[-2] + R[2]*f[-1]
sol[-1] = R[0]*f[-2] + R[1]*f[-1] + R[2]*f[-0]
sol[ 0] = R[0]*f[-1] + R[1]*f[ 0] + R[2]*f[ 1]
sol[ 1] = R[0]*f[ 0] + R[1]*f[ 1] + R[2]*f[ 2]
# build ugly matrix by hand
A = sparse.diags(L,[-1,0,1],shape=(len(f),len(f))).toarray()
    # periodic BCs
A[ 0,-1] = L[0]
A[-1, 0] = L[2]
return np.linalg.solve(A, sol)
###Output
_____no_output_____
###Markdown
We can then test the method on a known function, with known first and second derivatives. For simplicity, we will use trigonometric functions, which have well-behaved derivatives of all orders.$$ f(x) = \sin(x), \,\, x\in[0, 2\pi]$$such that$$ \frac{d}{dx}f(x) = \cos(x), \quad \frac{d^2}{dx^2}f(x) = -\sin(x), \,\, x\in[0, 2\pi]$$
###Code
N = 128
x, dx = np.linspace(0, 2*np.pi, N, retstep=True, endpoint=False)
function = np.sin(x)
# first derivative
sol = derive_compact(1, function, dx)
print('First derivative L2 norm: ', norm(sol-np.cos(x)))
# second derivative
sol = derive_compact(2, function, dx)
print('Second derivative L2 norm: ', norm(sol+np.sin(x)))
###Output
First derivative L2 norm: 2.00356231982653e-09
Second derivative L2 norm: 1.5119843767976088e-09
###Markdown
Poisson Equation With Compact Schemes

We aim to solve the following one-dimensional Poisson equation with Dirichlet boundary conditions$$\begin{split} -&\frac{d^2}{dx^2}u(x) = f(x), \quad a<x<b\\ &u(a) = 0, \quad u(b) = 0\\\end{split}$$where $a, b\in\mathbb{R}$, $u(x)$ is the unknown function and $f(x)$ is some given source function. We discretize the left side of the Poisson equation ($u''_i$) using a compact finite difference scheme with fourth-order accuracy on a uniform grid with grid points being $x_i = a+ih, h=(b-a)/M, i=0, 1, 2,..., M$ where $M$ is a positive integer. $$\frac{1}{10}u''_{i-1} + u''_i + \frac{1}{10}u''_{i+1} = \frac{6}{5}\frac{u_{i+1} - 2u_i + u_{i-1}}{h^2},$$or in a more common form,$$u''_{i-1} + 10u''_i + u''_{i+1} = \frac{12}{h^2}\left(u_{i+1} - 2u_i + u_{i-1}\right).$$This results in the following tri-diagonal system$$ AU''= \frac{12}{h^2}BU,$$where $U'' = (u''_1,u''_2,...,u''_{M-1})^\top$ and $U = (u_1,u_2,...,u_{M-1})^\top\in \mathbb{R}^{M-1}$. The tri-diagonal matrices $A, B \in \mathbb{R}^{M-1\times M-1}$ are$$A = \begin{bmatrix}10 & 1 & 0 &\dots & 0 & 0 \\1 & 10 & 1 &\dots & 0 & 0 \\0 & 1 & 10 &\dots & 0 & 0 \\\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\0 & 0 & 0 & \dots & 10 & 1 \\0 & 0 & 0 &\dots & 1 & 10 \\\end{bmatrix}, \qquad B = \begin{bmatrix}-2 & 1 & 0 &\dots & 0 & 0 \\1 & -2 & 1 &\dots & 0 & 0 \\0 & 1 & -2 &\dots & 0 & 0 \\\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\0 & 0 & 0 & \dots & -2 & 1 \\0 & 0 & 0 &\dots & 1 & -2 \\\end{bmatrix}.$$In addition, we have $-u''(x_i)=f(x_i), i=1,2,...,M-1$ i.e. $-U''=F$. We can re-write our system as$$ -\frac{12}{h^2}BU = AF.$$To obtain the solution $U$, we simply need to solve the system.
###Code
def build_AB(M, h):
A = sparse.diags([1.,10.,1.],[-1,0,1],shape=(M, M)).toarray()
B = sparse.diags([1.,-2.,1.],[-1,0,1],shape=(M, M)).toarray()
    # don't forget the BCs, here homogeneous Dirichlet
B[ 0,:]=0.; B[ 0, 0]=1
B[-1,:]=0.; B[-1,-1]=1
return A, -12./h**2*B
###Output
_____no_output_____
###Markdown
In the first example, we consider the problem with homogeneous Dirichlet boundary conditions$$\begin{cases} -u''(x) = \pi^2\sin(\pi x), & 0 < x <2,\\ u(0)=0, \quad u(2) = 0.\end{cases}$$The exact solution is $u_e(x)=\sin(\pi x)$.
###Code
def SolvePoissonCompact(f, h, M):
u0 = np.zeros_like(f)
A, B = build_AB(M, h)
sigma = np.matmul(A, f)
return np.linalg.solve(B, sigma)
M = 64
x, h = np.linspace(0., 2., M, retstep=True, endpoint=True)
f = np.pi**2*np.sin(np.pi*x)
u_e = np.sin(np.pi*x)
u_num = SolvePoissonCompact(f, h, M)
print(norm(u_num-u_e))
plt.plot(x, u_e, '-s')
plt.plot(x, u_num,'-o')
plt.xlabel(r"$x$");plt.ylabel(r"$u$")
# plt.savefig("figure_1.png", dpi=300);
###Output
6.02248257496857e-06
###Markdown
Now with non-zero Dirichlet boundary conditions$$\begin{cases} -u''(x) = 12e^{-x^2}(-x^2+1/2), & -8 < x <8,\\ u(-8)=-8, \quad u(8) = 8.\end{cases}$$The exact solution is $u_e(x)=3e^{-x^2}+x$. In the numerical computation, we denote $U(x)=u(x)-x$ using a change of variable. Applying the numerical algorithm, we now have $$\begin{cases} -U''(x) = 12e^{-x^2}(-x^2+1/2), & -8 < x <8,\\ U(-8)=0, \quad U(8) = 0.\end{cases}$$and the approximate numerical solution at a grid point is recovered as $u(x) = U(x)+x$.
###Code
M = 64
x, h = np.linspace(-8., 8., M, retstep=True, endpoint=True)
f = 12.*np.exp(-x**2)*(-x**2 + 0.5)
u_e = 3.*np.exp(-x**2)+x
u_num = SolvePoissonCompact(f, h, M)
print(norm(u_num-u_e))
plt.plot(x, u_e, '-s')
plt.plot(x, u_num+x,'-o')
plt.xlabel(r"$x$");plt.ylabel(r"$u$");
# plt.savefig("figure_2.png", dpi=300);
###Output
0.5864429590964948
###Markdown
Using Fast Fourier Transforms to Solve the Poisson Equation

We actually do not need to invert the system described earlier to get the solution, [see](https://www.sciencedirect.com/science/article/pii/S0898122116300761). We can use the Sine transform for $U\in\mathbb{R}^{M-1}$$$\begin{split} u_j &= \sum_{k=1}^{M-1}\hat{u}_k\sin\left(\frac{jk\pi}{M}\right), \,\, j=1,2,...,M-1,\\ \hat{u}_k &= \frac{2}{M}\sum_{j=1}^{M-1}u_j\sin\left(\frac{jk\pi}{M}\right), \,\, k=1,2,...,M-1,\end{split}$$from which we can approximate $u_{i+1}, u_{i-1}, u''_{i}, u''_{i+1}, u''_{i-1}$ as$$\begin{align} u_{i+1}=\sum_{k=1}^{M-1}\hat{u}_k\sin\left(\frac{(i+1)k\pi}{M}\right),\qquad & u_{i-1} = \sum_{k=1}^{M-1}\hat{u}_k\sin\left(\frac{(i-1)k\pi}{M}\right)\\ u''_{i} =\sum_{k=1}^{M-1}\hat{u}''_k\sin\left(\frac{ik\pi}{M}\right),\qquad & u''_{i+1} =\sum_{k=1}^{M-1}\hat{u}''_k\sin\left(\frac{(i+1)k\pi}{M}\right)\\ u''_{i-1} =\sum_{k=1}^{M-1}\hat{u}''_k\sin\left(\frac{(i-1)k\pi}{M}\right). & \\\end{align}$$Substituting into the compact discretization of the Poisson equation gives$$\sum_{k=1}^{M-1}\hat{u}''_k\left\{ \frac{1}{10}\sin\left(\frac{(i-1)k\pi}{M}\right) + \sin\left(\frac{ik\pi}{M}\right) + \frac{1}{10}\sin\left(\frac{(i+1)k\pi}{M}\right) \right\} =\frac{6}{5h^2}\sum_{k=1}^{M-1}\hat{u}_k\left\{ \sin\left(\frac{(i-1)k\pi}{M}\right) +\sin\left(\frac{(i+1)k\pi}{M}\right) - 2\sin\left(\frac{ik\pi}{M}\right) \right\}$$or, after rearranging$$\hat{u}_k = -\hat{u}''_k\left(\frac{24\sin^2\left(\frac{k\pi}{2M}\right)}{h^2}\right)^{-1}\left(\cos\left(\frac{k\pi}{M}\right)+5\right), \,\, k\in 1,2,..,M-1.$$In addition, we obtain $-u''_i = f_i \,(i=1,2,...,M-1)$. By the inverse Sine transform, we know that $-\hat{u}''_k=\hat{f}_k \, (k=1,2,...,M-1)$, which allows us to solve for $\hat{u}$$$\hat{u}_k = \hat{f}_k\left(\frac{24\sin^2\left(\frac{k\pi}{2M}\right)}{h^2}\right)^{-1}\left(\cos\left(\frac{k\pi}{M}\right)+5\right), \,\, k\in 1,2,..,M-1.$$> **_Note:_** We use a spectral method to solve the tri-diagonal system; this doesn't mean we solve it with spectral accuracy: the modified wavenumber makes the spectral method exactly as accurate as the compact scheme.
###Code
def SolvePoissonSine(f, h, M):
f_k = fftpack.dst(f, norm='ortho')
k = np.arange(1,M+1)
u_k = f_k*(24*np.sin(k*np.pi/(2*M))**2./h**2.)**(-1.)*(np.cos(np.pi*k/M)+5.)
return fftpack.idst(u_k, norm='ortho')
M = 64
x, h = np.linspace(-8, 8, M, retstep=True, endpoint=True)
f = 12.*np.exp(-x**2)*(-x**2 + 0.5)
u_e = 3.*np.exp(-x**2)+x
u_num = SolvePoissonSine(f, h, M)
print(norm(u_num-u_e))
plt.plot(x, u_num + x, '-o')
plt.plot(x, u_e, 's')
plt.xlabel(r"$x$");plt.ylabel(r"$u$");
# plt.savefig("figure_3.png", dpi=300);
###Output
0.5864429590964948
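The reported error norm is identical to the compact-scheme solve above, consistent with the note that the modified wavenumber gives the sine-transform solver the same accuracy as the compact scheme. A quick sanity check (assuming `f`, `h` and `M` from the previous cell are still defined) might be:

```python
# The two solvers should produce essentially the same discrete solution
diff = np.max(np.abs(SolvePoissonCompact(f, h, M) - SolvePoissonSine(f, h, M)))
print(diff)
```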
###Markdown
Order of Accuracy
###Code
L2_com = []
L2_Sine = []
Resolutions = 2.**np.arange(4,9)
for N in Resolutions:
x, h = np.linspace(0., 2., int(N), retstep=True, endpoint=True)
f = np.pi**2*np.sin(np.pi*x)
u_e = np.sin(np.pi*x)
u_num = SolvePoissonCompact(f, h, int(N))
error = norm(u_num-u_e)
L2_com.append(error)
u_num = SolvePoissonSine(f, h, int(N))
error = norm(u_num-u_e)
L2_Sine.append(error)
plt.loglog(Resolutions, np.array(L2_com), '--o', label='Compact Schemes')
plt.loglog(Resolutions, np.array(L2_Sine), ':s', label='Sine Transform')
plt.loglog(Resolutions, Resolutions**(-4), ':k', alpha=0.5, label=r"$4^{th}$-order")
plt.xlabel("Resolution (N)"); plt.ylabel(r"$L_2$-norm Error")
plt.legend();
# plt.savefig("figure_4.png", dpi=300);
###Output
_____no_output_____ |
_site/dd (6) (10) (2).ipynb | ###Markdown
English word sense disambiguation The task_Create a machine learning system that can disambiguate the correct sense of a word in context. Disambiguate the four words hard, interest, line, and serve into the senses given in the Senseval 2 corpus (NLTK Corpus HOWTO: Senseval). You can augment your data with other corpora as well. You can perform either supervised or unsupervised machine learning._ Why this task? Generally, the Turing test has been thought of as a test in which (currently) a machine only mimics human behavior, and does not necessarily or exactly show intelligence. With this in mind, I'm interested in exploring ways in which we could actually teach a machine to "understand" language. In this sense, the Winograd Schemas and other NLP-focused tasks in which a machine would have to identify semantic meanings would, in my opinion, move us closer to teaching a machine a certain kind of intelligence. Naive BayesFor this task I will use the Naive Bayes algorithm, which is a conditional probability model. It works by taking a dataset that contains classified instances. By extracting features from these instances and checking the appropriate classification for each instance, the algorithm calculates probabilities as to which extracted features belong to which classification. In this sense, Naive Bayes is a supervised learning algorithm, since we feed it training examples, or training data, and receive an output based on the algorithm's calculations. It is appropriate for this task because it does not require a large training set to learn from. It is also less sensitive to irrelevant features, as we have multiple labels for each of the four words. You need to describe what data you plan to use and how it will be partitioned into training, development/validation and test sets. I am using the Senseval corpus. After randomization, I will split the data into training and testing sets. As for extracting features, I am planning on using a) context words (as in, words that appear around the focus word) and b) the 'senses' category, which represents the exact meaning of the focus word.
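As a minimal illustration of the NLTK interface used in the rest of this notebook (a toy sketch with made-up features, not the Senseval data):

```python
import nltk

# Each training example is a (feature dictionary, label) pair
toy_train = [
    ({'next_word': 'to'},   'HARD1'),
    ({'next_word': 'to'},   'HARD1'),
    ({'next_word': 'work'}, 'HARD2'),
    ({'next_word': 'rock'}, 'HARD3'),
]
toy_classifier = nltk.NaiveBayesClassifier.train(toy_train)
print(toy_classifier.classify({'next_word': 'to'}))  # expected to favour 'HARD1'
```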
###Code
# Import some necessary modules
import nltk, random
from nltk.corpus import senseval
###Output
_____no_output_____
###Markdown
Here we import some necessary modules. For this task, we only need a few.`nltk` contains its Naive Bayes classifier and the Senseval corpus, which we will use as our data set.We use `random` to shuffle the training set, so that we can evaluate the model properly, as we will get varying results each time we run the program.
###Code
print("All fileids:", senseval.fileids())
print()
for fileid in senseval.fileids():
print(senseval.instances(fileid)[0])
print()
###Output
All fileids: ['hard.pos', 'interest.pos', 'line.pos', 'serve.pos']
SensevalInstance(word='hard-a', position=20, context=[('``', '``'), ('he', 'PRP'), ('may', 'MD'), ('lose', 'VB'), ('all', 'DT'), ('popular', 'JJ'), ('support', 'NN'), (',', ','), ('but', 'CC'), ('someone', 'NN'), ('has', 'VBZ'), ('to', 'TO'), ('kill', 'VB'), ('him', 'PRP'), ('to', 'TO'), ('defeat', 'VB'), ('him', 'PRP'), ('and', 'CC'), ('that', 'DT'), ("'s", 'VBZ'), ('hard', 'JJ'), ('to', 'TO'), ('do', 'VB'), ('.', '.'), ("''", "''")], senses=('HARD1',))
SensevalInstance(word='interest-n', position=18, context=[('yields', 'NNS'), ('on', 'IN'), ('money-market', 'JJ'), ('mutual', 'JJ'), ('funds', 'NNS'), ('continued', 'VBD'), ('to', 'TO'), ('slide', 'VB'), (',', ','), ('amid', 'IN'), ('signs', 'VBZ'), ('that', 'IN'), ('portfolio', 'NN'), ('managers', 'NNS'), ('expect', 'VBP'), ('further', 'JJ'), ('declines', 'NNS'), ('in', 'IN'), ('interest', 'NN'), ('rates', 'NNS'), ('.', '.')], senses=('interest_6',))
SensevalInstance(word='line-n', position=67, context=[('the', 'DT'), ('company', 'NN'), ('argued', 'VBD'), ('that', 'IN'), ('its', 'PRP$'), ('foreman', 'NN'), ('needn', 'NN'), ("'t", 'NN'), ('have', 'VBP'), ('told', 'VBN'), ('the', 'DT'), ('worker', 'NN'), ('not', 'RB'), ('to', 'TO'), ('move', 'VB'), ('the', 'DT'), ('plank', 'NN'), ('to', 'TO'), ('which', 'WDT'), ('his', 'PRP$'), ('lifeline', 'NN'), ('was', 'VBD'), ('tied', 'VBN'), ('because', 'IN'), ('"', '"'), ('that', 'WDT'), ('comes', 'VBZ'), ('with', 'IN'), ('common', 'JJ'), ('sense', 'NN'), ('.', '.'), ('"', '"'), ('the', 'DT'), ('commission', 'NN'), ('noted', 'VBD'), (',', ','), ('however', 'RB'), (',', ','), ('that', 'IN'), ('dellovade', 'NNP'), ('hadn', 'NN'), ("'t", 'NN'), ('instructed', 'VBD'), ('its', 'PRP$'), ('employees', 'NNS'), ('on', 'IN'), ('how', 'WRB'), ('to', 'TO'), ('secure', 'VB'), ('their', 'PRP$'), ('lifelines', 'NNS'), ('and', 'CC'), ('didn', 'VBD'), ("'t", 'NN'), ('heed', 'NN'), ('a', 'DT'), ('federal', 'JJ'), ('inspector', 'NN'), ("'s", 'POS'), ('earlier', 'JJR'), ('suggestion', 'NN'), ('that', 'IN'), ('the', 'DT'), ('company', 'NN'), ('install', 'VB'), ('special', 'JJ'), ('safety', 'NN'), ('lines', 'NNS'), ('inside', 'IN'), ('the', 'DT'), ('a-frame', 'NNP'), ('structure', 'NN'), ('it', 'PRP'), ('was', 'VBD'), ('building', 'VBG'), ('.', '.')], senses=('cord',))
SensevalInstance(word='serve-v', position=42, context=[('some', 'DT'), ('tart', 'JJ'), ('fruits', 'NNS'), ('mixed', 'VBN'), ('with', 'IN'), ('greens', 'NNS'), ('make', 'VBP'), ('a', 'DT'), ('nice', 'JJ'), ('contrast', 'NN'), ('with', 'IN'), ('rich', 'JJ'), ('meat', 'NN'), ('dishes', 'NNS'), ('(', '('), ('see', 'VB'), ('orange', 'NNP'), ('and', 'CC'), ('onion', 'NNP'), ('salad', 'NNP'), (',', ','), ('page', 'NN'), ('111', 'CD'), (')', 'SYM'), (',', ','), ('but', 'CC'), ('if', 'IN'), ('you', 'PRP'), ('like', 'VB'), ('to', 'TO'), ('follow', 'VB'), ('the', 'DT'), ('meat', 'NN'), ('course', 'NN'), ('with', 'IN'), ('sweet', 'JJ'), ('fruit', 'NN'), (',', ','), ('it', 'PRP'), ('seems', 'VBZ'), ('wiser', 'JJR'), ('to', 'TO'), ('serve', 'VB'), ('it', 'PRP'), ('plain', 'JJ'), ('with', 'IN'), ('a', 'DT'), ('good', 'JJ'), ('sharp', 'JJ'), ('cheese', 'NN'), ('and', 'CC'), ('let', 'VB'), ('it', 'PRP'), ('take', 'VB'), ('the', 'DT'), ('place', 'NN'), ('of', 'IN'), ('a', 'DT'), ('sweet', 'JJ'), ('or', 'CC'), ('dessert', 'NN'), ('course', 'NN'), ('.', '.'), ('if', 'IN'), ('you', 'PRP'), ('insist', 'VBP'), ('on', 'IN'), ('serving', 'VBG'), ('fruit', 'NN'), ('as', 'IN'), ('a', 'DT'), ('salad', 'NN'), (',', ','), ('don', 'VB'), ("'t", 'NN'), ('cut', 'NN'), ('it', 'PRP'), ('into', 'IN'), ('cubes', 'NNS'), ('and', 'CC'), ('mix', 'NN'), ('it', 'PRP'), ('up', 'RB'), ('.', '.')], senses=('SERVE10',))
###Markdown
There are four file ids in the Senseval corpus, one for each of the words. Each fileid, or word, contains a series of instances. Above is an example instance for each of the words. The instance contains the following information:- Which word is in question, followed by its POS (Part of Speech) tag. E.g. word='hard-a' refers to the word 'hard' being an adjective.- The position of our word, telling us how many words are preceding our focus word (remember to count from 0!)- The context, which is a sentence or a longer sequence of words surrounding our focus word. Each of the tokens is a tuple of two strings: the word itself and its POS tag.- The sense of the focus word, which is the label in our model.
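These fields can also be read directly from an instance object; for example (the commented values correspond to the first 'hard' instance printed above):

```python
inst = senseval.instances('hard.pos')[0]
print(inst.word)                    # 'hard-a'
print(inst.position)                # 20
print(inst.context[inst.position])  # ('hard', 'JJ')
print(inst.senses)                  # ('HARD1',)
```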
###Code
'''
A simple function to extract the label from each of the instances. The indexes of the listed labels also correspond to the
order of the instances in a word fileid.
The returned list will be used later in creating our featureset.
'''
def get_category(pos):
category = []
for inst in senseval.instances(pos):
category.append(inst.senses)
return category
'''
A function to create our featureset, which is returned as a dictionary of all of the features of an instance.
'''
def get_features(inst):
features = {}
p = inst.position
'''
As we are using the position of the focus word in an instance to get the previous and next words and tags,
    we might get errors where there are not enough elements after the focus word. For this reason, we add the tuple
below to each of the instances.
'''
inst.context.append(('<END>','<END>'))
'''
Because the Senseval corpus contains some unexpected elements and other quirks, such as the string 'FRASL'
    among some of the contexts (which are expected to be lists of tuples), we need to wrap the feature extraction in a try/except block.
    If an instance contains one of these elements and the extraction fails, we instead return an empty feature dictionary.
Additionally, we ignore all tokens that are not longer than one character. This is because we want to ignore tokens
such as punctuation, as well as all the random quirks of the data set.
'''
try:
left_word = ' '.join(w for (w,t) in inst.context[p-1:p] if len(w) > 1)
right_word = ' '.join(w for (w,t) in inst.context[p+1:p+2] if len(w) > 1)
more_left_word = ' '.join(w for (w,t) in inst.context[p-2:p] if len(w) > 1)
more_right_word = ' '.join(w for (w,t) in inst.context[p+1:p+3] if len(w) > 1)
left_tag = ' '.join(t for (w,t) in inst.context[p-1:p] if len(t) > 1)
right_tag = ' '.join(t for (w,t) in inst.context[p+1:p+2] if len(t) > 1)
more_left_tag = ' '.join(t for (w,t) in inst.context[p-2:p] if len(t) > 1)
more_right_tag = ' '.join(t for (w,t) in inst.context[p+1:p+3] if len(t) > 1)
except:
return features
'''
The extracted features are listed below.
'''
features['1 Previous tag'] = left_tag
features['1 Next tag'] = right_tag
features['2 Previous tags'] = more_left_tag
features['2 Next tags'] = more_right_tag
features['1 Previous word'] = left_word
features['1 Next word'] = right_word
features['2 Previous words'] = more_left_word
features['2 Next words'] = more_right_word
return features
'''
Because of the length of the label's name, it does not appear in the NLTK classifier's most informative features list.
Hence, we change it into a shorter one.
'''
interest_unchanged = get_category('interest.pos')
interest_c = []
for tuple in interest_unchanged:
if tuple == ('interest_1',):
interest_c.append('inte_1')
if tuple == ('interest_2',):
interest_c.append('inte_2')
if tuple == ('interest_3',):
interest_c.append('inte_3')
if tuple == ('interest_4',):
interest_c.append('inte_4')
if tuple == ('interest_5',):
interest_c.append('inte_5')
if tuple == ('interest_6',):
interest_c.append('inte_6')
'''
For each word, we send all of the word's instances to the above function. We use the zip() function to create tuples of
an instance and the correct label for the instance. As we iterate over an instance of a word and the label of our category
list, each tuple will have the correct label, since its index in the list corresponds to the order of the word instances.
'''
interest_featureset = [(get_features(inst), c) for c,inst in zip(interest_c, senseval.instances('interest.pos'))]
hard_featureset = [(get_features(inst), c) for c,inst in zip(get_category('hard.pos'), senseval.instances('hard.pos'))]
line_featureset = [(get_features(inst), c) for c,inst in zip(get_category('line.pos'), senseval.instances('line.pos'))]
serve_featureset = [(get_features(inst), c) for c,inst in zip(get_category('serve.pos'), senseval.instances('serve.pos'))]
print('Example of featureset for the word "hard":\n\n', hard_featureset[30])
print()
print('Amount of featuresets'
'\n for the word "hard:"', len(hard_featureset),
'\n for the word "interest:"', len(interest_featureset),
'\n for the word "line",', len(line_featureset),
'\n and the word "serve"', len(serve_featureset))
###Output
Example of featureset for the word "hard":
({'1 Previous tag': 'DT', '1 Next tag': 'NN', '2 Previous tags': 'VBP DT', '2 Next tags': 'NN IN', '1 Previous word': '', '1 Next word': 'time', '2 Previous words': 'have', '2 Next words': 'time with'}, ('HARD1',))
Amount of featuresets
for the word "hard:" 4333
for the word "interest:" 2368
for the word "line", 4146
and the word "serve" 4378
###Markdown
So far we have approximately 15,000 labeled featuresets available for input. As per the example above, the phrase "have hard time with" is classified by the label "HARD1" which represents (from the Senseval corpus) "not easy, requiring great physical or mental." The phrase is also sliced into features of previous words, next words, previous tags and next tags. With this, we have a tuple where the first element is a dictionary and the second element is the label (because of the format of the corpus, the label is a one-element tuple.) Below, we randomize the featuresets of each of the words, and use three quarters for the training set and the remaining quarter for the testing set.
###Code
def get_size(fileid):
size = int(len(senseval.instances(fileid)) * 0.25)
return size
size = get_size('hard.pos')
random.shuffle(hard_featureset)
hard_train_set, hard_test_set = hard_featureset[size:], hard_featureset[:size]
size = get_size('interest.pos')
random.shuffle(interest_featureset)
interest_train_set, interest_test_set = interest_featureset[size:], interest_featureset[:size]
size = get_size('serve.pos')
random.shuffle(serve_featureset)
serve_train_set, serve_test_set = serve_featureset[size:], serve_featureset[:size]
size = get_size('line.pos')
random.shuffle(line_featureset)
line_train_set, line_test_set = line_featureset[size:], line_featureset[:size]
def word_bayes(train_set, test_set, word):
bayes_classifier = nltk.NaiveBayesClassifier.train(train_set)
print(word, "Naive Bayes accuracy percent:", (nltk.classify.accuracy(bayes_classifier, test_set))*100,"%")
print()
print(bayes_classifier.show_most_informative_features(20))
###Output
_____no_output_____
###Markdown
Carrying out the evaluationWe can use the method `nltk.FreqDist()` to compute the distribution of the senses of the words hard, interest, line and serve in the Senseval corpus. By choosing the most common sense and comparing it to the others, we can calculate the following baselines for the following words:- "hard": 79.7%- "serve": 41.4%- "interest": 52.9%- "line": 53.5%It is notable that a significant portion of the category "hard" senses is "HARD1." Similarly, each of the words have one sense that is significantly more common than the other meanings. In this sense, as we consider the baseline percentage and evaluation in general, we should not only consider the "worst case scenario" of us guessing the category randomly and correctly at the same time, but also see whether we can distinguish meanings from the most common senses label (and if not, then why?).
###Code
hard_dist = nltk.FreqDist([i.senses[0] for i in senseval.instances('hard.pos')])
hard_baseline = hard_dist.freq('HARD1')
#hard_dist FreqDist({'HARD1': 3455, 'HARD2': 502, 'HARD3': 376})
#hard_baseline 0.797369028386799
serve_dist = nltk.FreqDist([i.senses[0] for i in senseval.instances('serve.pos')])
serve_baseline = serve_dist.freq('SERVE10')
# serve_dist FreqDist({'SERVE10': 1814, 'SERVE12': 1272, 'SERVE2': 853, 'SERVE6': 439})
# serve_baseline 0.4143444495203289
interest_dist = nltk.FreqDist([i.senses[0] for i in senseval.instances('interest.pos')])
interest_baseline = interest_dist.freq('interest_6')
interest_baseline
# interest_distFreqDist({'interest_6': 1252, 'interest_5': 500, 'interest_1': 361,
# 'interest_4': 178, 'interest_3': 66, 'interest_2': 11})
# interest_baseline 0.5287162162162162
line_dist = nltk.FreqDist([i.senses[0] for i in senseval.instances('line.pos')])
line_baseline = line_dist.freq('product')
# line_dist FreqDist({'product': 2217, 'phone': 429, 'text': 404, 'division': 374, 'cord': 373, 'formation': 349})
# line_baseline 0.5347322720694645
###Output
_____no_output_____
###Markdown
Linguistic FindingsIt seems that the words and tags that come after the focus word are more important than previous ones. This can be seen in the word "hard." For example, it becomes apparent that we mean "difficult" when "hard" is followed by "to" (e.g. "It is hard to do.")The same phenomenon can be seen with the word category "interest", with the 11 most informative features being either next words or tags. The most informative feature, one next tag being NNS, probably represents cases like "interest rates."The WH-determiner seems to be an important factor when determining the sense of the word "serve."
###Code
word_bayes(hard_train_set, hard_test_set, "Hard -")
word_bayes(interest_train_set, interest_test_set, "Interest -")  # evaluate on the held-out test set
word_bayes(serve_train_set, serve_test_set, "Serve -")
word_bayes(line_train_set, line_test_set, "Line -")
# | ('HARD1',): | "not easy, requiring great physical or mental" |
# | ('HARD2',): | "dispassionate" |
# | ('HARD3',): | "resisting weight or pressure" |
# | ('interest_1',): | "readiness to give attention" |
# | ('interest_2',): | "quality of causing attention to be given to" |
# | ('interest_3',): | "activity, etc. that one gives attention to" |
# | ('interest_4',): | "advantage, advancement or favor" |
# | ('interest_5',): | " a share in a company or business" |
# | ('interest_6',): | "money paid for the use of money" |
# | ('cord',): | "something (as a cord or rope) that is long and thin and flexible" |
# | ('formation',): | "a formation of people or things one beside another" |
# | ('text',): | "text consisting of a row of words written across a page or computer screen" |
# | ('phone',): | "a telephone connection" |
# | ('product',): | "a particular kind of product or merchandise" |
# | ('division',): | "a conceptual separation or distinction" |
# | ('SERVE12',): | "do duty or hold offices; serve in a specific function" |
# | ('SERVE10',): | "provide (usually but not necessarily food)" |
# | ('SERVE2',): | "serve a purpose, role, or function" |
# | ('SERVE6',): | "be used by; as of a utility" |
###Output
Hard - Naive Bayes accuracy percent: 83.84118190212374 %
Most Informative Features
1 Next word = 'to' HARD1 : HARD2 = 189.3 : 1.0
1 Next tag = 'TO' HARD1 : HARD2 = 141.9 : 1.0
2 Previous words = "it 's" HARD1 : HARD2 = 112.6 : 1.0
2 Next tags = 'TO VB' HARD1 : HARD3 = 77.3 : 1.0
2 Next tags = 'NN CC' HARD2 : HARD1 = 76.3 : 1.0
2 Previous tags = 'PRP VBZ' HARD1 : HARD3 = 68.8 : 1.0
1 Next word = 'work' HARD2 : HARD1 = 65.5 : 1.0
1 Previous word = "'s" HARD1 : HARD3 = 59.8 : 1.0
2 Next words = 'work' HARD2 : HARD1 = 59.3 : 1.0
2 Next tags = 'NN NNS' HARD3 : HARD1 = 58.3 : 1.0
2 Next words = 'work and' HARD2 : HARD1 = 49.9 : 1.0
1 Next word = 'line' HARD2 : HARD1 = 47.0 : 1.0
1 Previous word = 'no' HARD2 : HARD1 = 43.3 : 1.0
1 Next tag = 'VBN' HARD3 : HARD1 = 42.3 : 1.0
1 Next word = 'place' HARD3 : HARD1 = 41.6 : 1.0
2 Previous tags = 'NNS IN' HARD2 : HARD1 = 39.8 : 1.0
2 Next tags = 'JJ' HARD3 : HARD1 = 29.4 : 1.0
1 Next word = 'for' HARD1 : HARD2 = 28.0 : 1.0
2 Previous tags = 'VBN IN' HARD3 : HARD1 = 25.7 : 1.0
1 Previous tag = '``' HARD2 : HARD1 = 22.9 : 1.0
None
Interest - Naive Bayes accuracy percent: 94.42567567567568 %
Most Informative Features
1 Next tag = 'NNS' inte_6 : inte_1 = 111.2 : 1.0
2 Next tags = 'IN VBG' inte_1 : inte_6 = 74.6 : 1.0
1 Next word = 'in' inte_5 : inte_6 = 51.6 : 1.0
1 Previous word = 'other' inte_3 : inte_6 = 47.5 : 1.0
1 Next tag = 'VBP' inte_3 : inte_6 = 43.7 : 1.0
1 Next word = 'of' inte_4 : inte_6 = 41.5 : 1.0
2 Next tags = 'NNS' inte_6 : inte_1 = 39.1 : 1.0
1 Previous tag = 'VBN' inte_1 : inte_5 = 24.7 : 1.0
2 Next tags = 'TO VB' inte_4 : inte_6 = 23.6 : 1.0
2 Next tags = 'DT' inte_3 : inte_6 = 23.2 : 1.0
2 Next tags = 'NNS IN' inte_6 : inte_5 = 22.9 : 1.0
2 Next words = 'in the' inte_5 : inte_4 = 21.5 : 1.0
1 Next tag = 'NN' inte_6 : inte_1 = 21.4 : 1.0
2 Previous tags = 'VB JJ' inte_3 : inte_4 = 19.9 : 1.0
1 Previous tag = 'TO' inte_2 : inte_6 = 19.7 : 1.0
1 Next tag = 'TO' inte_2 : inte_6 = 19.6 : 1.0
1 Previous tag = '' inte_6 : inte_5 = 17.8 : 1.0
2 Previous tags = 'PRP$ NN' inte_5 : inte_6 = 17.7 : 1.0
2 Previous tags = 'VBP VBN' inte_1 : inte_6 = 17.6 : 1.0
1 Previous word = 'of' inte_1 : inte_5 = 17.0 : 1.0
None
Serve - Naive Bayes accuracy percent: 78.70201096892139 %
Most Informative Features
1 Previous tag = 'WDT' SERVE6 : SERVE1 = 85.4 : 1.0
2 Next words = 'as' SERVE2 : SERVE1 = 68.4 : 1.0
2 Previous tags = 'WDT' SERVE6 : SERVE1 = 60.0 : 1.0
1 Next word = 'as' SERVE2 : SERVE1 = 57.0 : 1.0
2 Previous tags = 'WP' SERVE1 : SERVE1 = 40.9 : 1.0
1 Previous tag = 'WP' SERVE1 : SERVE2 = 40.5 : 1.0
2 Next words = 'on the' SERVE1 : SERVE1 = 38.5 : 1.0
2 Next words = '' SERVE1 : SERVE1 = 35.3 : 1.0
2 Previous words = 'who' SERVE1 : SERVE1 = 34.8 : 1.0
1 Previous word = 'that' SERVE2 : SERVE1 = 33.9 : 1.0
1 Previous word = 'it' SERVE2 : SERVE1 = 33.0 : 1.0
1 Next word = 'under' SERVE1 : SERVE1 = 31.9 : 1.0
2 Next words = 'to' SERVE1 : SERVE6 = 31.7 : 1.0
2 Previous words = 'which' SERVE6 : SERVE1 = 31.2 : 1.0
1 Previous word = 'before' SERVE1 : SERVE1 = 30.4 : 1.0
2 Previous tags = 'NN CC' SERVE1 : SERVE2 = 29.5 : 1.0
2 Previous tags = 'WDT MD' SERVE2 : SERVE1 = 29.0 : 1.0
1 Next word = 'in' SERVE1 : SERVE2 = 27.8 : 1.0
2 Previous tags = 'NN WDT' SERVE2 : SERVE1 = 26.9 : 1.0
2 Next tags = 'CD' SERVE1 : SERVE1 = 26.8 : 1.0
None
Line - Naive Bayes accuracy percent: 72.00772200772201 %
Most Informative Features
1 Next word = 'between' divisi : produc = 152.9 : 1.0
1 Previous word = 'in' format : produc = 63.9 : 1.0
1 Next word = 'of' produc : phone = 62.8 : 1.0
1 Previous word = 'telephone' phone : text = 45.8 : 1.0
1 Previous tag = 'IN' format : divisi = 43.3 : 1.0
1 Previous word = 'long' format : text = 40.5 : 1.0
2 Previous tags = 'VB IN' format : produc = 38.3 : 1.0
2 Next tags = 'IN JJ' produc : phone = 32.3 : 1.0
1 Previous word = 'fine' divisi : text = 31.5 : 1.0
1 Previous word = 'new' produc : format = 30.8 : 1.0
2 Next tags = 'IN NNS' format : cord = 30.2 : 1.0
2 Previous words = 'new' produc : format = 28.8 : 1.0
2 Previous words = 'on the' phone : format = 27.9 : 1.0
1 Previous tag = 'PRP$' cord : phone = 26.8 : 1.0
1 Next word = 'like' text : produc = 22.8 : 1.0
2 Previous words = 'fine' divisi : text = 22.5 : 1.0
2 Next tags = 'VBD RB' cord : produc = 22.1 : 1.0
2 Previous tags = 'CD' phone : produc = 20.5 : 1.0
1 Next word = 'for' format : divisi = 19.7 : 1.0
2 Previous tags = 'DT VBG' cord : produc = 19.1 : 1.0
None
###Markdown
Best & Worst CategoriesWhen the model was in its early stages and it was only tested against the most common words in the whole context, the "hard" category was recognized the best. Nevertheless, the "hard" category was unrealistically high with consistently over 97% accuracy (only based on all words in the context), and the "interest" category now seems to have some interesting findings. This might relate to the fact that one of the categories is significantly more common. Because of this, the Naive Bayes algorithm might learn and apply general features to the most common category. This can be seen below, where we have made a program that predicts user input. There is some trouble making the model predict other senses than the most common one of the category (e.g. "a hard rock" and other similar sentences are still predicted as "HARD1.")In this sense, the word "hard" might be the hardest to get right, as its baseline is already fairly high, 79.7%, and because of the tendency of the model to predict the most common category. Nevertheless, it seems to perform decently, as evidenced by the most informative features. In fact, the most informative features do give us meaningful information."Line" and "interest" could also be considered difficult word categories to get right. The least used senses occur fewer than a hundred times in the data, and the least common sense for "interest" even occurs only 11 times. Still, "interest" gives us consistently the highest accuracy, and might be further evidence of the Naive Bayes algorithm learning "bias" towards the most common label. Overfitting? As can be seen from above, the model can be somewhat biased and generate approximation errors. In addition, as the training and test sets are shuffled and split, the model should not be prone to overfitting. Program for the user end: predicting sentencesWhile this was not part of the project, I implemented it out of my own interest. While the part of the code that recognizes which word has been inputted is very basic, it can be developed further once the environment allows us to do so. For example, we could use regex to recognize all conjugated word forms.
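As a rough sketch of that regex idea (hypothetical patterns, not part of the program below):

```python
import re

# Hand-written patterns for a few inflected forms of the four target words;
# a proper lemmatizer would be more robust than plain regular expressions.
target_patterns = {
    'hard':     re.compile(r'\bhard(er|est)?\b'),
    'line':     re.compile(r'\blines?\b'),
    'serve':    re.compile(r'\bserv(e|es|ed|ing)\b'),
    'interest': re.compile(r'\binterest(s|ed|ing)?\b'),
}

def find_target_word(text):
    for lemma, pattern in target_patterns.items():
        if pattern.search(text.lower()):
            return lemma
    return None
```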
###Code
from nltk import tokenize
def get_features(inst,p):
features = {}
all_words = []
left_words = []
right_words = []
    # Here inst is a plain list of (word, tag) tuples, so append a matching sentinel tuple
    inst.append(('<END>', '<END>'))
# if inst.context[p+1] == 'FRASL':
# inst.context[p+1] = (inst.context[p+1],inst.context[p+1])
# if inst.context[p+2] == 'FRASL':
# inst.context[p+2] = (inst.context[p+1],inst.context[p+2])
try:
        left_word = ' '.join(w for (w,t) in inst[p-1:p] if len(w) > 1)
        right_word = ' '.join(w for (w,t) in inst[p+1:p+2] if len(w) > 1)
        more_left_word = ' '.join(w for (w,t) in inst[p-2:p] if len(w) > 1)
        more_right_word = ' '.join(w for (w,t) in inst[p+1:p+3] if len(w) > 1)
        left_tag = ' '.join(t for (w,t) in inst[p-1:p] if len(t) > 1)
        right_tag = ' '.join(t for (w,t) in inst[p+1:p+2] if len(t) > 1)
        more_left_tag = ' '.join(t for (w,t) in inst[p-2:p] if len(t) > 1)
        more_right_tag = ' '.join(t for (w,t) in inst[p+1:p+3] if len(t) > 1)
except:
return features
features['1 Previous tag'] = left_tag
features['1 Next tag'] = right_tag
features['2 Previous tags'] = more_left_tag
features['2 Next tags'] = more_right_tag
features['1 Previous word'] = left_word
features['1 Next word'] = right_word
features['2 Previous words'] = more_left_word
features['2 Next words'] = more_right_word
return features
def guess_sense(text, word, train_set):
classifier = nltk.NaiveBayesClassifier.train(train_set)
pos=text.find(word)
text = tokenize.wordpunct_tokenize(text)
tokenized_text = nltk.pos_tag(text)
pos=text.index(word)
guess = classifier.classify(get_features(tokenized_text, pos))
SV_SENSE_MAP = {
('HARD1',): "not easy, requiring great physical or mental",
('HARD2',): "dispassionate",
('HARD3',): "resisting weight or pressure",
('interest_1',): "readiness to give attention",
('interest_2',): "quality of causing attention to be given to",
('interest_3',): "activity, etc. that one gives attention to",
('interest_4',): "advantage, advancement or favor",
('interest_5',): " a share in a company or business",
('interest_6',): "money paid for the use of money",
('cord',): "something (as a cord or rope) that is long and thin and flexible",
('formation',): "a formation of people or things one beside another",
('text',): "text consisting of a row of words written across a page or computer screen",
('phone',): "a telephone connection",
('product',): "a particular kind of product or merchandise",
('division',): "a conceptual separation or distinction",
('SERVE12',): "do duty or hold offices; serve in a specific function",
('SERVE10',): "provide (usually but not necessarily food)",
('SERVE2',): "serve a purpose, role, or function",
('SERVE6',): "be used by; as of a utility"
}
x = SV_SENSE_MAP[guess]
print('Hmm...')
print('I think by "{}" you mean'.format(word), str(x))
text = input("Type a sentence with the word 'hard', 'line', 'serve' or 'interest'.\n")
if text.find('hard') > -1:
guess_sense(text, 'hard', hard_train_set)
elif text.find('line') > -1:
guess_sense(text, 'line', line_train_set)
elif text.find('serve') != -1:
guess_sense(text, 'serve', serve_train_set)
elif text.find('interest') != -1:
guess_sense(text, 'interest', interest_train_set)
else:
print('Didn\'t find the word "hard", "line", "serve" or "interest".')
###Output
Type a sentence with the word 'hard', 'line', 'serve' or 'interest'.
rock hard
Hmm...
I think by "hard" you mean not easy, requiring great physical or mental
|
Kristjan/logit-nb-bert-full-train-predict-ROC-AUC.ipynb | ###Markdown
Prep training data
###Code
import spacy
import re
import pandas as pd
nlp = spacy.load('en_core_web_sm')
# regex1 strips URLs, hashtags, @-mentions, punctuation and tokens containing digits;
# regex2 collapses runs of whitespace and newlines into a single space.
regex1 = re.compile(r'(http\S+)|(#(\w+))|(@(\w+))|[^\w\s]|(\w*\d\w*)')
regex2 = re.compile(r'(\s+)|(\n+)')
def lemmatize(article):
    article = re.sub(regex1, '', article)
    article = re.sub(regex2, ' ', article).strip().lower()
    doc = nlp(article)
    # keep the lemma of every non-stopword token
    lemmatized_article = " ".join([token.lemma_ for token in doc if (token.is_stop==False)])
    return lemmatized_article
am = pd.read_csv('../adverse_media_training.csv.zip')
nam = pd.read_csv('../non_adverse_media_training.csv.zip')
am_confirmed = am.loc[(am.label == 'am') | (am.label == 'am ')]
am_confirmed = pd.concat([am_confirmed, nam.loc[nam.label == 'am']])
nam_confirmed = nam.loc[(nam.label == 'nam') | (nam.label == 'random')]
nam_confirmed = pd.concat([nam_confirmed, am.loc[(am.label == 'nam') | (am.label == 'random')]])
am_confirmed['is_adverse_media'] = 1
nam_confirmed['is_adverse_media'] = 0
# Creating the train dataset
data = pd.concat([am_confirmed, nam_confirmed])
data["article"] = data["title"] + " " + data["article"]
data["lemmatized"] = data["article"].apply(lemmatize)
data = data.sample(frac = 1, random_state=42)
data = data.reset_index()
data = data.drop(['index'], axis=1)
x_train = data["lemmatized"]
y_train = data["is_adverse_media"]
###Output
_____no_output_____
###Markdown
Energize! ...khm, Vectorize.
###Code
from sklearn.feature_extraction.text import TfidfVectorizer

ngram_vectorizer = TfidfVectorizer(max_features=40000,
min_df=5,
max_df=0.5,
analyzer='word',
stop_words='english',
ngram_range=(1, 3))
ngram_vectorizer.fit(x_train)
tfidf_train = ngram_vectorizer.transform(x_train)
###Output
_____no_output_____
###Markdown
Train all dem modelz!
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

logit_model = LogisticRegression(solver='sag')
logit_model.fit(tfidf_train, y_train)
nb_model = MultinomialNB(alpha=0.3)
nb_model.fit(tfidf_train, y_train)
###Output
_____no_output_____
###Markdown
Load and prepare public test data
###Code
public_test = pd.read_csv('../public_test.csv')
public_test_lemmatized = public_test[['article', 'label']].copy()
public_test_lemmatized["article"] = public_test_lemmatized["article"].apply(lemmatize)
public_test_lemmatized = public_test_lemmatized.reset_index()
public_test_lemmatized = public_test_lemmatized.drop(['index'], axis=1)
tfidf_public_test = ngram_vectorizer.transform(public_test_lemmatized.article)
###Output
_____no_output_____
###Markdown
Score!
###Code
from sklearn.metrics import accuracy_score, f1_score

def test_score(model, name, tfidf, labels):
preds = model.predict(tfidf)
accuracy = accuracy_score(labels, preds)
f1 = f1_score(labels, preds)
print(f'Prediction accuracy for {name} model on public test data:', round(accuracy*100, 3))
print(f'F1 score for {name} model on public test data:', round(f1*100, 3))
print()
test_score(logit_model, 'logistic regression', tfidf_public_test, public_test.label)
test_score(nb_model, 'naive bayes', tfidf_public_test, public_test.label)
# ROC AUC for Logit, Naive Bayes and BERT on public test data
nb_probs = nb_model.predict_proba(tfidf_public_test)
logit_probs = logit_model.predict_proba(tfidf_public_test)
bert_probs = pd.read_csv('BERT_public_test_preds.csv').prob1.to_numpy()
from sklearn.metrics import accuracy_score, auc, roc_curve
import matplotlib.pyplot as plt
import numpy as np
def evaluate_roc(model_probs, y_true):
"""
- Print AUC
- Plot ROC
"""
# Plot ROC AUC
plt.figure(figsize=(15,7))
plt.title('Receiver Operating Characteristic')
colors = {'b', 'r', 'g'}
for name, preds in model_probs.items():
fpr, tpr, threshold = roc_curve(y_true, preds)
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, colors.pop(), label = f'{name} AUC = %0.3f' % roc_auc)
print(f'{name} AUC: {roc_auc:.4f}')
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.legend()
plt.show()
evaluate_roc({'logistic regression': logit_probs[:,1], 'naive bayes': nb_probs[:,1], 'BERT': bert_probs}, public_test.label.to_numpy(), )
pd.DataFrame(nb_probs, columns=['prob0', 'prob1'])
###Output
_____no_output_____ |
doc/source/cookbook/custom_colorbar_tickmarks.ipynb | ###Markdown
`PlotWindow` plots are containers for plots, keyed to field names. Below, we get a copy of the plot for the `Density` field.
###Code
plot = slc.plots[("gas", "density")]
###Output
_____no_output_____
###Markdown
The plot has a few attributes that point to underlying `matplotlib` plot primitives. For example, the `colorbar` object corresponds to the `cb` attribute of the plot.
###Code
colorbar = plot.cb
###Output
_____no_output_____
###Markdown
Next, we call `_setup_plots()` to ensure the plot is properly initialized. Without this, the custom tickmarks we are adding will be ignored.
###Code
slc._setup_plots()
###Output
_____no_output_____
###Markdown
To set custom tickmarks, simply call the `matplotlib` [`set_ticks`](https://matplotlib.org/stable/api/colorbar_api.htmlmatplotlib.colorbar.ColorbarBase.set_ticks) and [`set_ticklabels`](https://matplotlib.org/stable/api/colorbar_api.htmlmatplotlib.colorbar.ColorbarBase.set_ticklabels) functions.
###Code
colorbar.set_ticks([1e-28])
colorbar.set_ticklabels(["$10^{-28}$"])
slc
###Output
_____no_output_____
###Markdown
`PlotWindow` plots are containers for plots, keyed to field names. Below, we get a copy of the plot for the `Density` field.
###Code
plot = slc.plots['density']
###Output
_____no_output_____
###Markdown
The plot has a few attributes that point to underlying `matplotlib` plot primitives. For example, the `colorbar` object corresponds to the `cb` attribute of the plot.
###Code
colorbar = plot.cb
###Output
_____no_output_____
###Markdown
Next, we call `_setup_plots()` to ensure the plot is properly initialized. Without this, the custom tickmarks we are adding will be ignored.
###Code
slc._setup_plots()
###Output
_____no_output_____
###Markdown
To set custom tickmarks, simply call the `matplotlib` [`set_ticks`](http://matplotlib.org/api/colorbar_api.htmlmatplotlib.colorbar.ColorbarBase.set_ticks) and [`set_ticklabels`](http://matplotlib.org/api/colorbar_api.htmlmatplotlib.colorbar.ColorbarBase.set_ticklabels) functions.
###Code
colorbar.set_ticks([1e-28])
colorbar.set_ticklabels(['$10^{-28}$'])
slc
###Output
_____no_output_____
###Markdown
`PlotWindow` plots are containers for plots, keyed to field names. Below, we get a copy of the plot for the `Density` field.
###Code
plot = slc.plots[('gas', 'density')]
###Output
_____no_output_____
###Markdown
The plot has a few attributes that point to underlying `matplotlib` plot primitives. For example, the `colorbar` object corresponds to the `cb` attribute of the plot.
###Code
colorbar = plot.cb
###Output
_____no_output_____
###Markdown
Next, we call `_setup_plots()` to ensure the plot is properly initialized. Without this, the custom tickmarks we are adding will be ignored.
###Code
slc._setup_plots()
###Output
_____no_output_____
###Markdown
To set custom tickmarks, simply call the `matplotlib` [`set_ticks`](https://matplotlib.org/stable/api/colorbar_api.htmlmatplotlib.colorbar.ColorbarBase.set_ticks) and [`set_ticklabels`](https://matplotlib.org/stable/api/colorbar_api.htmlmatplotlib.colorbar.ColorbarBase.set_ticklabels) functions.
###Code
colorbar.set_ticks([1e-28])
colorbar.set_ticklabels(['$10^{-28}$'])
slc
###Output
_____no_output_____
###Markdown
`PlotWindow` plots are containers for plots, keyed to field names. Below, we get a copy of the plot for the `Density` field.
###Code
plot = slc.plots['density']
###Output
_____no_output_____
###Markdown
The plot has a few attributes that point to underlying `matplotlib` plot primitives. For example, the `colorbar` object corresponds to the `cb` attribute of the plot.
###Code
colorbar = plot.cb
###Output
_____no_output_____
###Markdown
Next, we call `_setup_plots()` to ensure the plot is properly initialized. Without this, the custom tickmarks we are adding will be ignored.
###Code
slc._setup_plots()
###Output
_____no_output_____
###Markdown
To set custom tickmarks, simply call the `matplotlib` [`set_ticks`](https://matplotlib.org/stable/api/colorbar_api.htmlmatplotlib.colorbar.ColorbarBase.set_ticks) and [`set_ticklabels`](https://matplotlib.org/stable/api/colorbar_api.htmlmatplotlib.colorbar.ColorbarBase.set_ticklabels) functions.
###Code
colorbar.set_ticks([1e-28])
colorbar.set_ticklabels(['$10^{-28}$'])
slc
###Output
_____no_output_____
###Markdown
`PlotWindow` plots are containers for plots, keyed to field names. Below, we get a copy of the plot for the `Density` field.
###Code
plot = slc.plots['density']
###Output
_____no_output_____
###Markdown
The plot has a few attributes that point to underlying `matplotlib` plot primitives. For example, the `colorbar` object corresponds to the `cb` attribute of the plot.
###Code
colorbar = plot.cb
###Output
_____no_output_____
###Markdown
Next, we call `_setup_plots()` to ensure the plot is properly initialized. Without this, the custom tickmarks we are adding will be ignored.
###Code
slc._setup_plots()
###Output
_____no_output_____
###Markdown
To set custom tickmarks, simply call the `matplotlib` [`set_ticks`](http://matplotlib.org/api/colorbar_api.htmlmatplotlib.colorbar.ColorbarBase.set_ticks) and [`set_ticklabels`](http://matplotlib.org/api/colorbar_api.htmlmatplotlib.colorbar.ColorbarBase.set_ticklabels) functions.
###Code
colorbar.set_ticks([1e-28])
colorbar.set_ticklabels(['$10^{-28}$'])
slc
###Output
_____no_output_____
###Markdown
`PlotWindow` plots are containers for plots, keyed to field names. Below, we get a copy of the plot for the `Density` field.
###Code
plot = slc.plots['density']
###Output
_____no_output_____
###Markdown
The plot has a few attributes that point to underlying `matplotlib` plot primitives. For example, the `colorbar` object corresponds to the `cb` attribute of the plot.
###Code
colorbar = plot.cb
###Output
_____no_output_____
###Markdown
Next, we call `_setup_plots()` to ensure the plot is properly initialized. Without this, the custom tickmarks we are adding will be ignored.
###Code
slc._setup_plots()
###Output
_____no_output_____
###Markdown
To set custom tickmarks, simply call the `matplotlib` [`set_ticks`](https://matplotlib.org/api/colorbar_api.htmlmatplotlib.colorbar.ColorbarBase.set_ticks) and [`set_ticklabels`](https://matplotlib.org/api/colorbar_api.htmlmatplotlib.colorbar.ColorbarBase.set_ticklabels) functions.
###Code
colorbar.set_ticks([1e-28])
colorbar.set_ticklabels(['$10^{-28}$'])
slc
###Output
_____no_output_____ |
L02-NumPy_Part2-Lesson.ipynb | ###Markdown
Lesson 2: NumPy Part 2This notebook is based on the official `NumPy` [documentation](https://docs.scipy.org/doc/numpy/user/quickstart.html). Unless otherwise credited, quoted text comes from this document. The NumPy documentation describes NumPy in the following way:> NumPy is the fundamental package for scientific computing with Python. It contains among other things:> - a powerful N-dimensional array object> - sophisticated (broadcasting) functions> - tools for integrating C/C++ and Fortran code> - useful linear algebra, Fourier transform, and random number capabilities>> Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases. InstructionsThis tutorial provides step-by-step training divided into numbered sections. The sections often contain embedded executable code for demonstration. This tutorial is accompanied by a practice notebook: [L02-Numpy_Part2-Practice.ipynb](./L02-Numpy_Part2-Practice.ipynb). Throughout this tutorial sections labeled as "Tasks" are interspersed and indicated with the icon: . You should follow the instructions provided in these sections by performing them in the practice notebook. When the tutorial is completed you can turn in the final practice notebook. --- 1. Getting StartedFirst, we must import the NumPy library.
###Code
# Import numpy
import numpy as np
###Output
_____no_output_____
###Markdown
Task 1a: SetupIn the practice notebook, import the following packages:+ `numpy` as `np` 2 Basic Indexing: Subsets and SlicingWe often want to consider a subset of a given array. You will recognize basic subsetting as it is similar to indexing of Python lists. The following code examples demonstrate how to subset a NumPy array:```python Get items from "start" to "end" (but the end is not included!)a[start:end] Get all items from "start" through the rest of the arraya[start:] Get items from the beginning to "end" (but the end is not included!)a[:end] ```Similarly to Python lists, retrieving elements from the end of a NumPy array uses negative indexing. Execute the example code below to see a demonstration:
###Code
# Create a 5 x 2 array of random numbers
demo_g = np.random.random((5,2))
print(demo_g)
# Get the last item from the last 'row':
demo_g[-1, -1]
###Output
[[0.02180582 0.94281276]
[0.26687057 0.68403037]
[0.13412826 0.03989218]
[0.22707579 0.696518 ]
[0.05462452 0.33944004]]
###Markdown
Task 2a: Indexing by Subsetting and SlicingIn the practice notebook perform the following:1. Create (or re-use) 3 arrays, each containing three dimensions.2. Slice each of these arrays so that: + One element / number is returned. + One dimension is returned. + A subset of a dimension is returned.3. What is the difference between `[x:]` and `[x, ...]`? (hint, try each on high-dimension arrays). *Exactly what you choose to return is not important at this point, the goal of this task is to train you so that if you are given an n-dimension NumPy array, you can write an index or slice that returns a subset of desired positions.* 3. "Fancy" IndexingFancy indexing allows you to provide an array of indices or an array of boolean values in order to subset an array. 3.1 Using a Boolean Array for IndexingRather than using an index range, as shown in the previous section, we can provide an array of boolean values where `True` indicates that we want the value in the position where `True` is found, and `False` indicates we do not want it. Creating these boolean arrays is simple if we use conditional statements. For example, review and then execute the following code:
###Code
# Create a 5 x 2 array of random numbers
demo_g = np.random.random((5,2))
# Find all values in the matrix less than 0.5
demo_g < 0.5
###Output
_____no_output_____
###Markdown
Notice the return value is an array of boolean values. True indicates if the value was less than 0.5. False indicates it is greater or equal. We can use this boolean array as an index for the same array to return only those values that satisfy the boolean condition. Try executing the following code:
###Code
demo_g[demo_g < 0.5]
###Output
_____no_output_____
###Markdown
Or alternatively:
###Code
sig_list = demo_g < 0.5
demo_g[sig_list]
###Output
_____no_output_____
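Boolean conditions can also be combined with the bitwise operators `&` and `|`, wrapping each condition in parentheses; a quick sketch:

```python
demo_g[(demo_g > 0.2) & (demo_g < 0.8)]  # values strictly between 0.2 and 0.8
demo_g[(demo_g < 0.1) | (demo_g > 0.9)]  # values in either tail
```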
###Markdown
Task 3a: Boolean IndexingIn the practice notebook perform the following:+ Experiment with the following boolean conditionals to generate boolean arrays for indexing: + Greater than + Less than + Equals + Combine two or more of the above with: + or `|` + and `&`You can create arrays or use existing ones. 3.2 Using exact indicesAlternatively, if there are specific elements from the array that we want to retrieve we can provide the specific numeric indices. For example, review and then execute the following code:
###Code
# Generate a list of 500 random numbers
demo_f = np.random.random((500))
# Retreive 5 random numbers from the list
demo_f[[0,100,200,300,400]]
###Output
_____no_output_____
###Markdown
4. Intermission -- Getting HelpPython has a built-in function, `help()`, which we can call on any object (anything) to find out more about it. As we move deeper into the functions provided by most packages, we often need to know exactly what a given function expects as arguments.The output of these `help()` calls can be long. Try executing the following help call for the `np.array` function:
###Code
# Call help on anything from a package.
help(np.array)
###Output
Help on built-in function array in module numpy:
array(...)
array(object, dtype=None, copy=True, order='K', subok=False, ndmin=0)
Create an array.
Parameters
----------
object : array_like
An array, any object exposing the array interface, an object whose
__array__ method returns an array, or any (nested) sequence.
dtype : data-type, optional
The desired data-type for the array. If not given, then the type will
be determined as the minimum type required to hold the objects in the
sequence.
copy : bool, optional
If true (default), then the object is copied. Otherwise, a copy will
only be made if __array__ returns a copy, if obj is a nested sequence,
or if a copy is needed to satisfy any of the other requirements
(`dtype`, `order`, etc.).
order : {'K', 'A', 'C', 'F'}, optional
Specify the memory layout of the array. If object is not an array, the
newly created array will be in C order (row major) unless 'F' is
specified, in which case it will be in Fortran order (column major).
If object is an array the following holds.
===== ========= ===================================================
order no copy copy=True
===== ========= ===================================================
'K' unchanged F & C order preserved, otherwise most similar order
'A' unchanged F order if input is F and not C, otherwise C order
'C' C order C order
'F' F order F order
===== ========= ===================================================
When ``copy=False`` and a copy is made for other reasons, the result is
the same as if ``copy=True``, with some exceptions for `A`, see the
Notes section. The default order is 'K'.
subok : bool, optional
If True, then sub-classes will be passed-through, otherwise
the returned array will be forced to be a base-class array (default).
ndmin : int, optional
Specifies the minimum number of dimensions that the resulting
array should have. Ones will be pre-pended to the shape as
needed to meet this requirement.
Returns
-------
out : ndarray
An array object satisfying the specified requirements.
See Also
--------
empty_like : Return an empty array with shape and type of input.
ones_like : Return an array of ones with shape and type of input.
zeros_like : Return an array of zeros with shape and type of input.
full_like : Return a new array with shape of input filled with value.
empty : Return a new uninitialized array.
ones : Return a new array setting values to one.
zeros : Return a new array setting values to zero.
full : Return a new array of given shape filled with value.
Notes
-----
When order is 'A' and `object` is an array in neither 'C' nor 'F' order,
and a copy is forced by a change in dtype, then the order of the result is
not necessarily 'C' as expected. This is likely a bug.
Examples
--------
>>> np.array([1, 2, 3])
array([1, 2, 3])
Upcasting:
>>> np.array([1, 2, 3.0])
array([ 1., 2., 3.])
More than one dimension:
>>> np.array([[1, 2], [3, 4]])
array([[1, 2],
[3, 4]])
Minimum dimensions 2:
>>> np.array([1, 2, 3], ndmin=2)
array([[1, 2, 3]])
Type provided:
>>> np.array([1, 2, 3], dtype=complex)
array([ 1.+0.j, 2.+0.j, 3.+0.j])
Data-type consisting of more than one element:
>>> x = np.array([(1,2),(3,4)],dtype=[('a','<i4'),('b','<i4')])
>>> x['a']
array([1, 3])
Creating an array from sub-classes:
>>> np.array(np.mat('1 2; 3 4'))
array([[1, 2],
[3, 4]])
>>> np.array(np.mat('1 2; 3 4'), subok=True)
matrix([[1, 2],
[3, 4]])
###Markdown
Additionally, we can get help about an object that we created! Execute the following code to try it out:
###Code
# Call help on an object we created.
x = np.array([1, 2, 3, 4])
help(x)
###Output
Help on ndarray object:
class ndarray(builtins.object)
| ndarray(shape, dtype=float, buffer=None, offset=0,
| strides=None, order=None)
|
| An array object represents a multidimensional, homogeneous array
| of fixed-size items. An associated data-type object describes the
| format of each element in the array (its byte-order, how many bytes it
| occupies in memory, whether it is an integer, a floating point number,
| or something else, etc.)
|
| Arrays should be constructed using `array`, `zeros` or `empty` (refer
| to the See Also section below). The parameters given here refer to
| a low-level method (`ndarray(...)`) for instantiating an array.
|
| For more information, refer to the `numpy` module and examine the
| methods and attributes of an array.
|
| Parameters
| ----------
| (for the __new__ method; see Notes below)
|
| shape : tuple of ints
| Shape of created array.
| dtype : data-type, optional
| Any object that can be interpreted as a numpy data type.
| buffer : object exposing buffer interface, optional
| Used to fill the array with data.
| offset : int, optional
| Offset of array data in buffer.
| strides : tuple of ints, optional
| Strides of data in memory.
| order : {'C', 'F'}, optional
| Row-major (C-style) or column-major (Fortran-style) order.
|
| Attributes
| ----------
| T : ndarray
| Transpose of the array.
| data : buffer
| The array's elements, in memory.
| dtype : dtype object
| Describes the format of the elements in the array.
| flags : dict
| Dictionary containing information related to memory use, e.g.,
| 'C_CONTIGUOUS', 'OWNDATA', 'WRITEABLE', etc.
| flat : numpy.flatiter object
| Flattened version of the array as an iterator. The iterator
| allows assignments, e.g., ``x.flat = 3`` (See `ndarray.flat` for
| assignment examples; TODO).
| imag : ndarray
| Imaginary part of the array.
| real : ndarray
| Real part of the array.
| size : int
| Number of elements in the array.
| itemsize : int
| The memory use of each array element in bytes.
| nbytes : int
| The total number of bytes required to store the array data,
| i.e., ``itemsize * size``.
| ndim : int
| The array's number of dimensions.
| shape : tuple of ints
| Shape of the array.
| strides : tuple of ints
| The step-size required to move from one element to the next in
| memory. For example, a contiguous ``(3, 4)`` array of type
| ``int16`` in C-order has strides ``(8, 2)``. This implies that
| to move from element to element in memory requires jumps of 2 bytes.
| To move from row-to-row, one needs to jump 8 bytes at a time
| (``2 * 4``).
| ctypes : ctypes object
| Class containing properties of the array needed for interaction
| with ctypes.
| base : ndarray
| If the array is a view into another array, that array is its `base`
| (unless that array is also a view). The `base` array is where the
| array data is actually stored.
|
| See Also
| --------
| array : Construct an array.
| zeros : Create an array, each element of which is zero.
| empty : Create an array, but leave its allocated memory unchanged (i.e.,
| it contains "garbage").
| dtype : Create a data-type.
|
| Notes
| -----
| There are two modes of creating an array using ``__new__``:
|
| 1. If `buffer` is None, then only `shape`, `dtype`, and `order`
| are used.
| 2. If `buffer` is an object exposing the buffer interface, then
| all keywords are interpreted.
|
| No ``__init__`` method is needed because the array is fully initialized
| after the ``__new__`` method.
|
| Examples
| --------
| These examples illustrate the low-level `ndarray` constructor. Refer
| to the `See Also` section above for easier ways of constructing an
| ndarray.
|
| First mode, `buffer` is None:
|
| >>> np.ndarray(shape=(2,2), dtype=float, order='F')
| array([[0.0e+000, 0.0e+000], # random
| [ nan, 2.5e-323]])
|
| Second mode:
|
| >>> np.ndarray((2,), buffer=np.array([1,2,3]),
| ... offset=np.int_().itemsize,
| ... dtype=int) # offset = 1*itemsize, i.e. skip first element
| array([2, 3])
|
| Methods defined here:
|
| __abs__(self, /)
| abs(self)
|
| __add__(self, value, /)
| Return self+value.
|
| __and__(self, value, /)
| Return self&value.
|
| __array__(...)
| a.__array__(|dtype) -> reference if type unchanged, copy otherwise.
|
| Returns either a new reference to self if dtype is not given or a new array
| of provided data type if dtype is different from the current dtype of the
| array.
|
| __array_function__(...)
|
| __array_prepare__(...)
| a.__array_prepare__(obj) -> Object of same type as ndarray object obj.
|
| __array_ufunc__(...)
|
| __array_wrap__(...)
| a.__array_wrap__(obj) -> Object of same type as ndarray object a.
|
| __bool__(self, /)
| self != 0
|
| __complex__(...)
|
| __contains__(self, key, /)
| Return key in self.
|
| __copy__(...)
| a.__copy__()
|
| Used if :func:`copy.copy` is called on an array. Returns a copy of the array.
|
| Equivalent to ``a.copy(order='K')``.
|
| __deepcopy__(...)
| a.__deepcopy__(memo, /) -> Deep copy of array.
|
| Used if :func:`copy.deepcopy` is called on an array.
|
| __delitem__(self, key, /)
| Delete self[key].
|
| __divmod__(self, value, /)
| Return divmod(self, value).
|
| __eq__(self, value, /)
| Return self==value.
|
| __float__(self, /)
| float(self)
|
| __floordiv__(self, value, /)
| Return self//value.
|
| __format__(...)
| Default object formatter.
|
| __ge__(self, value, /)
| Return self>=value.
|
| __getitem__(self, key, /)
| Return self[key].
|
| __gt__(self, value, /)
| Return self>value.
|
| __iadd__(self, value, /)
| Return self+=value.
|
| __iand__(self, value, /)
| Return self&=value.
|
| __ifloordiv__(self, value, /)
| Return self//=value.
|
| __ilshift__(self, value, /)
| Return self<<=value.
|
| __imatmul__(self, value, /)
| Return self@=value.
|
| __imod__(self, value, /)
| Return self%=value.
|
| __imul__(self, value, /)
| Return self*=value.
|
| __index__(self, /)
| Return self converted to an integer, if self is suitable for use as an index into a list.
|
| __int__(self, /)
| int(self)
|
| __invert__(self, /)
| ~self
|
| __ior__(self, value, /)
| Return self|=value.
|
| __ipow__(self, value, /)
| Return self**=value.
|
| __irshift__(self, value, /)
| Return self>>=value.
|
| __isub__(self, value, /)
| Return self-=value.
|
| __iter__(self, /)
| Implement iter(self).
|
| __itruediv__(self, value, /)
| Return self/=value.
|
| __ixor__(self, value, /)
| Return self^=value.
|
| __le__(self, value, /)
| Return self<=value.
|
| __len__(self, /)
| Return len(self).
|
| __lshift__(self, value, /)
| Return self<<value.
|
| __lt__(self, value, /)
| Return self<value.
|
| __matmul__(self, value, /)
| Return self@value.
|
| __mod__(self, value, /)
| Return self%value.
|
| __mul__(self, value, /)
| Return self*value.
|
| __ne__(self, value, /)
| Return self!=value.
|
| __neg__(self, /)
| -self
|
| __or__(self, value, /)
| Return self|value.
|
| __pos__(self, /)
| +self
|
| __pow__(self, value, mod=None, /)
| Return pow(self, value, mod).
|
| __radd__(self, value, /)
| Return value+self.
|
| __rand__(self, value, /)
| Return value&self.
|
| __rdivmod__(self, value, /)
| Return divmod(value, self).
|
| __reduce__(...)
| a.__reduce__()
|
| For pickling.
|
| __reduce_ex__(...)
| Helper for pickle.
|
| __repr__(self, /)
| Return repr(self).
|
| __rfloordiv__(self, value, /)
| Return value//self.
|
| __rlshift__(self, value, /)
| Return value<<self.
|
| __rmatmul__(self, value, /)
| Return value@self.
|
| __rmod__(self, value, /)
| Return value%self.
|
| __rmul__(self, value, /)
| Return value*self.
|
| __ror__(self, value, /)
| Return value|self.
|
| __rpow__(self, value, mod=None, /)
| Return pow(value, self, mod).
|
| __rrshift__(self, value, /)
| Return value>>self.
|
| __rshift__(self, value, /)
| Return self>>value.
|
| __rsub__(self, value, /)
| Return value-self.
|
| __rtruediv__(self, value, /)
| Return value/self.
|
| __rxor__(self, value, /)
| Return value^self.
|
| __setitem__(self, key, value, /)
| Set self[key] to value.
|
| __setstate__(...)
| a.__setstate__(state, /)
|
| For unpickling.
|
| The `state` argument must be a sequence that contains the following
| elements:
|
| Parameters
| ----------
| version : int
| optional pickle version. If omitted defaults to 0.
| shape : tuple
| dtype : data-type
| isFortran : bool
| rawdata : string or list
| a binary string with the data (or a list if 'a' is an object array)
|
| __sizeof__(...)
| Size of object in memory, in bytes.
|
| __str__(self, /)
| Return str(self).
|
| __sub__(self, value, /)
| Return self-value.
|
| __truediv__(self, value, /)
| Return self/value.
|
| __xor__(self, value, /)
| Return self^value.
|
| all(...)
| a.all(axis=None, out=None, keepdims=False)
|
| Returns True if all elements evaluate to True.
|
| Refer to `numpy.all` for full documentation.
|
| See Also
| --------
| numpy.all : equivalent function
|
| any(...)
| a.any(axis=None, out=None, keepdims=False)
|
| Returns True if any of the elements of `a` evaluate to True.
|
| Refer to `numpy.any` for full documentation.
|
| See Also
| --------
| numpy.any : equivalent function
|
| argmax(...)
| a.argmax(axis=None, out=None)
|
| Return indices of the maximum values along the given axis.
|
| Refer to `numpy.argmax` for full documentation.
|
| See Also
| --------
| numpy.argmax : equivalent function
|
| argmin(...)
| a.argmin(axis=None, out=None)
|
| Return indices of the minimum values along the given axis of `a`.
|
| Refer to `numpy.argmin` for detailed documentation.
|
| See Also
| --------
| numpy.argmin : equivalent function
|
| argpartition(...)
| a.argpartition(kth, axis=-1, kind='introselect', order=None)
|
| Returns the indices that would partition this array.
|
| Refer to `numpy.argpartition` for full documentation.
|
| .. versionadded:: 1.8.0
|
| See Also
| --------
| numpy.argpartition : equivalent function
|
| argsort(...)
| a.argsort(axis=-1, kind=None, order=None)
|
| Returns the indices that would sort this array.
|
| Refer to `numpy.argsort` for full documentation.
|
| See Also
| --------
| numpy.argsort : equivalent function
|
| astype(...)
| a.astype(dtype, order='K', casting='unsafe', subok=True, copy=True)
|
| Copy of the array, cast to a specified type.
|
| Parameters
| ----------
| dtype : str or dtype
| Typecode or data-type to which the array is cast.
| order : {'C', 'F', 'A', 'K'}, optional
| Controls the memory layout order of the result.
| 'C' means C order, 'F' means Fortran order, 'A'
| means 'F' order if all the arrays are Fortran contiguous,
| 'C' order otherwise, and 'K' means as close to the
| order the array elements appear in memory as possible.
| Default is 'K'.
| casting : {'no', 'equiv', 'safe', 'same_kind', 'unsafe'}, optional
| Controls what kind of data casting may occur. Defaults to 'unsafe'
| for backwards compatibility.
|
| * 'no' means the data types should not be cast at all.
| * 'equiv' means only byte-order changes are allowed.
| * 'safe' means only casts which can preserve values are allowed.
| * 'same_kind' means only safe casts or casts within a kind,
| like float64 to float32, are allowed.
| * 'unsafe' means any data conversions may be done.
| subok : bool, optional
| If True, then sub-classes will be passed-through (default), otherwise
| the returned array will be forced to be a base-class array.
| copy : bool, optional
| By default, astype always returns a newly allocated array. If this
| is set to false, and the `dtype`, `order`, and `subok`
| requirements are satisfied, the input array is returned instead
| of a copy.
|
| Returns
| -------
| arr_t : ndarray
| Unless `copy` is False and the other conditions for returning the input
| array are satisfied (see description for `copy` input parameter), `arr_t`
| is a new array of the same shape as the input array, with dtype, order
| given by `dtype`, `order`.
|
| Notes
| -----
| .. versionchanged:: 1.17.0
| Casting between a simple data type and a structured one is possible only
| for "unsafe" casting. Casting to multiple fields is allowed, but
| casting from multiple fields is not.
|
| .. versionchanged:: 1.9.0
| Casting from numeric to string types in 'safe' casting mode requires
| that the string dtype length is long enough to store the max
| integer/float value converted.
|
| Raises
| ------
| ComplexWarning
| When casting from complex to float or int. To avoid this,
| one should use ``a.real.astype(t)``.
|
| Examples
| --------
| >>> x = np.array([1, 2, 2.5])
| >>> x
| array([1. , 2. , 2.5])
|
| >>> x.astype(int)
| array([1, 2, 2])
|
| byteswap(...)
| a.byteswap(inplace=False)
|
| Swap the bytes of the array elements
|
| Toggle between low-endian and big-endian data representation by
| returning a byteswapped array, optionally swapped in-place.
| Arrays of byte-strings are not swapped. The real and imaginary
| parts of a complex number are swapped individually.
|
| Parameters
| ----------
| inplace : bool, optional
| If ``True``, swap bytes in-place, default is ``False``.
|
| Returns
| -------
| out : ndarray
| The byteswapped array. If `inplace` is ``True``, this is
| a view to self.
|
| Examples
| --------
| >>> A = np.array([1, 256, 8755], dtype=np.int16)
| >>> list(map(hex, A))
| ['0x1', '0x100', '0x2233']
| >>> A.byteswap(inplace=True)
| array([ 256, 1, 13090], dtype=int16)
| >>> list(map(hex, A))
| ['0x100', '0x1', '0x3322']
|
| Arrays of byte-strings are not swapped
|
| >>> A = np.array([b'ceg', b'fac'])
| >>> A.byteswap()
| array([b'ceg', b'fac'], dtype='|S3')
|
| ``A.newbyteorder().byteswap()`` produces an array with the same values
| but different representation in memory
|
| >>> A = np.array([1, 2, 3])
| >>> A.view(np.uint8)
| array([1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0,
| 0, 0], dtype=uint8)
| >>> A.newbyteorder().byteswap(inplace=True)
| array([1, 2, 3])
| >>> A.view(np.uint8)
| array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0,
| 0, 3], dtype=uint8)
|
| choose(...)
| a.choose(choices, out=None, mode='raise')
|
| Use an index array to construct a new array from a set of choices.
|
| Refer to `numpy.choose` for full documentation.
|
| See Also
| --------
| numpy.choose : equivalent function
|
| clip(...)
| a.clip(min=None, max=None, out=None, **kwargs)
|
| Return an array whose values are limited to ``[min, max]``.
| One of max or min must be given.
|
| Refer to `numpy.clip` for full documentation.
|
| See Also
| --------
| numpy.clip : equivalent function
|
| compress(...)
| a.compress(condition, axis=None, out=None)
|
| Return selected slices of this array along given axis.
|
| Refer to `numpy.compress` for full documentation.
|
| See Also
| --------
| numpy.compress : equivalent function
|
| conj(...)
| a.conj()
|
| Complex-conjugate all elements.
|
| Refer to `numpy.conjugate` for full documentation.
|
| See Also
| --------
| numpy.conjugate : equivalent function
|
| conjugate(...)
| a.conjugate()
|
| Return the complex conjugate, element-wise.
|
| Refer to `numpy.conjugate` for full documentation.
|
| See Also
| --------
| numpy.conjugate : equivalent function
|
| copy(...)
| a.copy(order='C')
|
| Return a copy of the array.
|
| Parameters
| ----------
| order : {'C', 'F', 'A', 'K'}, optional
| Controls the memory layout of the copy. 'C' means C-order,
| 'F' means F-order, 'A' means 'F' if `a` is Fortran contiguous,
| 'C' otherwise. 'K' means match the layout of `a` as closely
| as possible. (Note that this function and :func:`numpy.copy` are very
| similar, but have different default values for their order=
| arguments.)
|
| See also
| --------
| numpy.copy
| numpy.copyto
|
| Examples
| --------
| >>> x = np.array([[1,2,3],[4,5,6]], order='F')
|
| >>> y = x.copy()
|
| >>> x.fill(0)
|
| >>> x
| array([[0, 0, 0],
| [0, 0, 0]])
|
| >>> y
| array([[1, 2, 3],
| [4, 5, 6]])
|
| >>> y.flags['C_CONTIGUOUS']
| True
|
| cumprod(...)
| a.cumprod(axis=None, dtype=None, out=None)
|
| Return the cumulative product of the elements along the given axis.
|
| Refer to `numpy.cumprod` for full documentation.
|
| See Also
| --------
| numpy.cumprod : equivalent function
|
| cumsum(...)
| a.cumsum(axis=None, dtype=None, out=None)
|
| Return the cumulative sum of the elements along the given axis.
|
| Refer to `numpy.cumsum` for full documentation.
|
| See Also
| --------
| numpy.cumsum : equivalent function
|
| diagonal(...)
| a.diagonal(offset=0, axis1=0, axis2=1)
|
| Return specified diagonals. In NumPy 1.9 the returned array is a
| read-only view instead of a copy as in previous NumPy versions. In
| a future version the read-only restriction will be removed.
|
| Refer to :func:`numpy.diagonal` for full documentation.
|
| See Also
| --------
| numpy.diagonal : equivalent function
|
| dot(...)
| a.dot(b, out=None)
|
| Dot product of two arrays.
|
| Refer to `numpy.dot` for full documentation.
|
| See Also
| --------
| numpy.dot : equivalent function
|
| Examples
| --------
| >>> a = np.eye(2)
| >>> b = np.ones((2, 2)) * 2
| >>> a.dot(b)
| array([[2., 2.],
| [2., 2.]])
|
| This array method can be conveniently chained:
|
| >>> a.dot(b).dot(b)
| array([[8., 8.],
| [8., 8.]])
|
| dump(...)
| a.dump(file)
|
| Dump a pickle of the array to the specified file.
| The array can be read back with pickle.load or numpy.load.
|
| Parameters
| ----------
| file : str or Path
| A string naming the dump file.
|
| .. versionchanged:: 1.17.0
| `pathlib.Path` objects are now accepted.
|
| dumps(...)
| a.dumps()
|
| Returns the pickle of the array as a string.
| pickle.loads or numpy.loads will convert the string back to an array.
|
| Parameters
| ----------
| None
|
| fill(...)
| a.fill(value)
|
| Fill the array with a scalar value.
|
| Parameters
| ----------
| value : scalar
| All elements of `a` will be assigned this value.
|
| Examples
| --------
| >>> a = np.array([1, 2])
| >>> a.fill(0)
| >>> a
| array([0, 0])
| >>> a = np.empty(2)
| >>> a.fill(1)
| >>> a
| array([1., 1.])
|
| flatten(...)
| a.flatten(order='C')
|
| Return a copy of the array collapsed into one dimension.
|
| Parameters
| ----------
| order : {'C', 'F', 'A', 'K'}, optional
| 'C' means to flatten in row-major (C-style) order.
| 'F' means to flatten in column-major (Fortran-
| style) order. 'A' means to flatten in column-major
| order if `a` is Fortran *contiguous* in memory,
| row-major order otherwise. 'K' means to flatten
| `a` in the order the elements occur in memory.
| The default is 'C'.
|
| Returns
| -------
| y : ndarray
| A copy of the input array, flattened to one dimension.
|
| See Also
| --------
| ravel : Return a flattened array.
| flat : A 1-D flat iterator over the array.
|
| Examples
| --------
| >>> a = np.array([[1,2], [3,4]])
| >>> a.flatten()
| array([1, 2, 3, 4])
| >>> a.flatten('F')
| array([1, 3, 2, 4])
|
| getfield(...)
| a.getfield(dtype, offset=0)
|
| Returns a field of the given array as a certain type.
|
| A field is a view of the array data with a given data-type. The values in
| the view are determined by the given type and the offset into the current
| array in bytes. The offset needs to be such that the view dtype fits in the
| array dtype; for example an array of dtype complex128 has 16-byte elements.
| If taking a view with a 32-bit integer (4 bytes), the offset needs to be
| between 0 and 12 bytes.
|
| Parameters
| ----------
| dtype : str or dtype
| The data type of the view. The dtype size of the view can not be larger
| than that of the array itself.
| offset : int
| Number of bytes to skip before beginning the element view.
|
| Examples
| --------
| >>> x = np.diag([1.+1.j]*2)
| >>> x[1, 1] = 2 + 4.j
| >>> x
| array([[1.+1.j, 0.+0.j],
| [0.+0.j, 2.+4.j]])
| >>> x.getfield(np.float64)
| array([[1., 0.],
| [0., 2.]])
|
| By choosing an offset of 8 bytes we can select the complex part of the
| array for our view:
|
| >>> x.getfield(np.float64, offset=8)
| array([[1., 0.],
| [0., 4.]])
|
| item(...)
| a.item(*args)
|
| Copy an element of an array to a standard Python scalar and return it.
|
| Parameters
| ----------
| \*args : Arguments (variable number and type)
|
| * none: in this case, the method only works for arrays
| with one element (`a.size == 1`), which element is
| copied into a standard Python scalar object and returned.
|
| * int_type: this argument is interpreted as a flat index into
| the array, specifying which element to copy and return.
|
| * tuple of int_types: functions as does a single int_type argument,
| except that the argument is interpreted as an nd-index into the
| array.
|
| Returns
| -------
| z : Standard Python scalar object
| A copy of the specified element of the array as a suitable
| Python scalar
|
| Notes
| -----
| When the data type of `a` is longdouble or clongdouble, item() returns
| a scalar array object because there is no available Python scalar that
| would not lose information. Void arrays return a buffer object for item(),
| unless fields are defined, in which case a tuple is returned.
|
| `item` is very similar to a[args], except, instead of an array scalar,
| a standard Python scalar is returned. This can be useful for speeding up
| access to elements of the array and doing arithmetic on elements of the
| array using Python's optimized math.
|
| Examples
| --------
| >>> np.random.seed(123)
| >>> x = np.random.randint(9, size=(3, 3))
| >>> x
| array([[2, 2, 6],
| [1, 3, 6],
| [1, 0, 1]])
| >>> x.item(3)
| 1
| >>> x.item(7)
| 0
| >>> x.item((0, 1))
| 2
| >>> x.item((2, 2))
| 1
|
| itemset(...)
| a.itemset(*args)
|
| Insert scalar into an array (scalar is cast to array's dtype, if possible)
|
| There must be at least 1 argument, and define the last argument
| as *item*. Then, ``a.itemset(*args)`` is equivalent to but faster
| than ``a[args] = item``. The item should be a scalar value and `args`
| must select a single item in the array `a`.
|
| Parameters
| ----------
| \*args : Arguments
| If one argument: a scalar, only used in case `a` is of size 1.
| If two arguments: the last argument is the value to be set
| and must be a scalar, the first argument specifies a single array
| element location. It is either an int or a tuple.
|
| Notes
| -----
| Compared to indexing syntax, `itemset` provides some speed increase
| for placing a scalar into a particular location in an `ndarray`,
| if you must do this. However, generally this is discouraged:
| among other problems, it complicates the appearance of the code.
| Also, when using `itemset` (and `item`) inside a loop, be sure
| to assign the methods to a local variable to avoid the attribute
| look-up at each loop iteration.
|
| Examples
| --------
| >>> np.random.seed(123)
| >>> x = np.random.randint(9, size=(3, 3))
| >>> x
| array([[2, 2, 6],
| [1, 3, 6],
| [1, 0, 1]])
| >>> x.itemset(4, 0)
| >>> x.itemset((2, 2), 9)
| >>> x
| array([[2, 2, 6],
| [1, 0, 6],
| [1, 0, 9]])
|
| max(...)
| a.max(axis=None, out=None, keepdims=False, initial=<no value>, where=True)
|
| Return the maximum along a given axis.
|
| Refer to `numpy.amax` for full documentation.
|
| See Also
| --------
| numpy.amax : equivalent function
|
| mean(...)
| a.mean(axis=None, dtype=None, out=None, keepdims=False)
|
| Returns the average of the array elements along given axis.
|
| Refer to `numpy.mean` for full documentation.
|
| See Also
| --------
| numpy.mean : equivalent function
|
| min(...)
| a.min(axis=None, out=None, keepdims=False, initial=<no value>, where=True)
|
| Return the minimum along a given axis.
|
| Refer to `numpy.amin` for full documentation.
|
| See Also
| --------
| numpy.amin : equivalent function
|
| newbyteorder(...)
| arr.newbyteorder(new_order='S')
|
| Return the array with the same data viewed with a different byte order.
|
| Equivalent to::
|
| arr.view(arr.dtype.newbytorder(new_order))
|
| Changes are also made in all fields and sub-arrays of the array data
| type.
|
|
|
| Parameters
| ----------
| new_order : string, optional
| Byte order to force; a value from the byte order specifications
| below. `new_order` codes can be any of:
|
| * 'S' - swap dtype from current to opposite endian
| * {'<', 'L'} - little endian
| * {'>', 'B'} - big endian
| * {'=', 'N'} - native order
| * {'|', 'I'} - ignore (no change to byte order)
|
| The default value ('S') results in swapping the current
| byte order. The code does a case-insensitive check on the first
| letter of `new_order` for the alternatives above. For example,
| any of 'B' or 'b' or 'biggish' are valid to specify big-endian.
|
|
| Returns
| -------
| new_arr : array
| New array object with the dtype reflecting given change to the
| byte order.
|
| nonzero(...)
| a.nonzero()
|
| Return the indices of the elements that are non-zero.
|
| Refer to `numpy.nonzero` for full documentation.
|
| See Also
| --------
| numpy.nonzero : equivalent function
|
| partition(...)
| a.partition(kth, axis=-1, kind='introselect', order=None)
|
| Rearranges the elements in the array in such a way that the value of the
| element in kth position is in the position it would be in a sorted array.
| All elements smaller than the kth element are moved before this element and
| all equal or greater are moved behind it. The ordering of the elements in
| the two partitions is undefined.
|
| .. versionadded:: 1.8.0
|
| Parameters
| ----------
| kth : int or sequence of ints
| Element index to partition by. The kth element value will be in its
| final sorted position and all smaller elements will be moved before it
| and all equal or greater elements behind it.
| The order of all elements in the partitions is undefined.
| If provided with a sequence of kth it will partition all elements
| indexed by kth of them into their sorted position at once.
| axis : int, optional
| Axis along which to sort. Default is -1, which means sort along the
| last axis.
| kind : {'introselect'}, optional
| Selection algorithm. Default is 'introselect'.
| order : str or list of str, optional
| When `a` is an array with fields defined, this argument specifies
| which fields to compare first, second, etc. A single field can
| be specified as a string, and not all fields need to be specified,
| but unspecified fields will still be used, in the order in which
| they come up in the dtype, to break ties.
|
| See Also
| --------
| numpy.partition : Return a parititioned copy of an array.
| argpartition : Indirect partition.
| sort : Full sort.
|
| Notes
| -----
| See ``np.partition`` for notes on the different algorithms.
|
| Examples
| --------
| >>> a = np.array([3, 4, 2, 1])
| >>> a.partition(3)
| >>> a
| array([2, 1, 3, 4])
|
| >>> a.partition((1, 3))
| >>> a
| array([1, 2, 3, 4])
|
| prod(...)
| a.prod(axis=None, dtype=None, out=None, keepdims=False, initial=1, where=True)
|
| Return the product of the array elements over the given axis
|
| Refer to `numpy.prod` for full documentation.
|
| See Also
| --------
| numpy.prod : equivalent function
|
| ptp(...)
| a.ptp(axis=None, out=None, keepdims=False)
|
| Peak to peak (maximum - minimum) value along a given axis.
|
| Refer to `numpy.ptp` for full documentation.
|
| See Also
| --------
| numpy.ptp : equivalent function
|
| put(...)
| a.put(indices, values, mode='raise')
|
| Set ``a.flat[n] = values[n]`` for all `n` in indices.
|
| Refer to `numpy.put` for full documentation.
|
| See Also
| --------
| numpy.put : equivalent function
|
| ravel(...)
| a.ravel([order])
|
| Return a flattened array.
|
| Refer to `numpy.ravel` for full documentation.
|
| See Also
| --------
| numpy.ravel : equivalent function
|
| ndarray.flat : a flat iterator on the array.
|
| repeat(...)
| a.repeat(repeats, axis=None)
|
| Repeat elements of an array.
|
| Refer to `numpy.repeat` for full documentation.
|
| See Also
| --------
| numpy.repeat : equivalent function
|
| reshape(...)
| a.reshape(shape, order='C')
|
| Returns an array containing the same data with a new shape.
|
| Refer to `numpy.reshape` for full documentation.
|
| See Also
| --------
| numpy.reshape : equivalent function
|
| Notes
| -----
| Unlike the free function `numpy.reshape`, this method on `ndarray` allows
| the elements of the shape parameter to be passed in as separate arguments.
| For example, ``a.reshape(10, 11)`` is equivalent to
| ``a.reshape((10, 11))``.
|
| resize(...)
| a.resize(new_shape, refcheck=True)
|
| Change shape and size of array in-place.
|
| Parameters
| ----------
| new_shape : tuple of ints, or `n` ints
| Shape of resized array.
| refcheck : bool, optional
| If False, reference count will not be checked. Default is True.
|
| Returns
| -------
| None
|
| Raises
| ------
| ValueError
| If `a` does not own its own data or references or views to it exist,
| and the data memory must be changed.
| PyPy only: will always raise if the data memory must be changed, since
| there is no reliable way to determine if references or views to it
| exist.
|
| SystemError
| If the `order` keyword argument is specified. This behaviour is a
| bug in NumPy.
|
| See Also
| --------
| resize : Return a new array with the specified shape.
|
| Notes
| -----
| This reallocates space for the data area if necessary.
|
| Only contiguous arrays (data elements consecutive in memory) can be
| resized.
|
| The purpose of the reference count check is to make sure you
| do not use this array as a buffer for another Python object and then
| reallocate the memory. However, reference counts can increase in
| other ways so if you are sure that you have not shared the memory
| for this array with another Python object, then you may safely set
| `refcheck` to False.
|
| Examples
| --------
| Shrinking an array: array is flattened (in the order that the data are
| stored in memory), resized, and reshaped:
|
| >>> a = np.array([[0, 1], [2, 3]], order='C')
| >>> a.resize((2, 1))
| >>> a
| array([[0],
| [1]])
|
| >>> a = np.array([[0, 1], [2, 3]], order='F')
| >>> a.resize((2, 1))
| >>> a
| array([[0],
| [2]])
|
| Enlarging an array: as above, but missing entries are filled with zeros:
|
| >>> b = np.array([[0, 1], [2, 3]])
| >>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple
| >>> b
| array([[0, 1, 2],
| [3, 0, 0]])
|
| Referencing an array prevents resizing...
|
| >>> c = a
| >>> a.resize((1, 1))
| Traceback (most recent call last):
| ...
| ValueError: cannot resize an array that references or is referenced ...
|
| Unless `refcheck` is False:
|
| >>> a.resize((1, 1), refcheck=False)
| >>> a
| array([[0]])
| >>> c
| array([[0]])
|
| round(...)
| a.round(decimals=0, out=None)
|
| Return `a` with each element rounded to the given number of decimals.
|
| Refer to `numpy.around` for full documentation.
|
| See Also
| --------
| numpy.around : equivalent function
|
| searchsorted(...)
| a.searchsorted(v, side='left', sorter=None)
|
| Find indices where elements of v should be inserted in a to maintain order.
|
| For full documentation, see `numpy.searchsorted`
|
| See Also
| --------
| numpy.searchsorted : equivalent function
|
| setfield(...)
| a.setfield(val, dtype, offset=0)
|
| Put a value into a specified place in a field defined by a data-type.
|
| Place `val` into `a`'s field defined by `dtype` and beginning `offset`
| bytes into the field.
|
| Parameters
| ----------
| val : object
| Value to be placed in field.
| dtype : dtype object
| Data-type of the field in which to place `val`.
| offset : int, optional
| The number of bytes into the field at which to place `val`.
|
| Returns
| -------
| None
|
| See Also
| --------
| getfield
|
| Examples
| --------
| >>> x = np.eye(3)
| >>> x.getfield(np.float64)
| array([[1., 0., 0.],
| [0., 1., 0.],
| [0., 0., 1.]])
| >>> x.setfield(3, np.int32)
| >>> x.getfield(np.int32)
| array([[3, 3, 3],
| [3, 3, 3],
| [3, 3, 3]], dtype=int32)
| >>> x
| array([[1.0e+000, 1.5e-323, 1.5e-323],
| [1.5e-323, 1.0e+000, 1.5e-323],
| [1.5e-323, 1.5e-323, 1.0e+000]])
| >>> x.setfield(np.eye(3), np.int32)
| >>> x
| array([[1., 0., 0.],
| [0., 1., 0.],
| [0., 0., 1.]])
|
| setflags(...)
| a.setflags(write=None, align=None, uic=None)
|
| Set array flags WRITEABLE, ALIGNED, (WRITEBACKIFCOPY and UPDATEIFCOPY),
| respectively.
|
| These Boolean-valued flags affect how numpy interprets the memory
| area used by `a` (see Notes below). The ALIGNED flag can only
| be set to True if the data is actually aligned according to the type.
| The WRITEBACKIFCOPY and (deprecated) UPDATEIFCOPY flags can never be set
| to True. The flag WRITEABLE can only be set to True if the array owns its
| own memory, or the ultimate owner of the memory exposes a writeable buffer
| interface, or is a string. (The exception for string is made so that
| unpickling can be done without copying memory.)
|
| Parameters
| ----------
| write : bool, optional
| Describes whether or not `a` can be written to.
| align : bool, optional
| Describes whether or not `a` is aligned properly for its type.
| uic : bool, optional
| Describes whether or not `a` is a copy of another "base" array.
|
| Notes
| -----
| Array flags provide information about how the memory area used
| for the array is to be interpreted. There are 7 Boolean flags
| in use, only four of which can be changed by the user:
| WRITEBACKIFCOPY, UPDATEIFCOPY, WRITEABLE, and ALIGNED.
|
| WRITEABLE (W) the data area can be written to;
|
| ALIGNED (A) the data and strides are aligned appropriately for the hardware
| (as determined by the compiler);
|
| UPDATEIFCOPY (U) (deprecated), replaced by WRITEBACKIFCOPY;
|
| WRITEBACKIFCOPY (X) this array is a copy of some other array (referenced
| by .base). When the C-API function PyArray_ResolveWritebackIfCopy is
| called, the base array will be updated with the contents of this array.
|
| All flags can be accessed using the single (upper case) letter as well
| as the full name.
|
| Examples
| --------
| >>> y = np.array([[3, 1, 7],
| ... [2, 0, 0],
| ... [8, 5, 9]])
| >>> y
| array([[3, 1, 7],
| [2, 0, 0],
| [8, 5, 9]])
| >>> y.flags
| C_CONTIGUOUS : True
| F_CONTIGUOUS : False
| OWNDATA : True
| WRITEABLE : True
| ALIGNED : True
| WRITEBACKIFCOPY : False
| UPDATEIFCOPY : False
| >>> y.setflags(write=0, align=0)
| >>> y.flags
| C_CONTIGUOUS : True
| F_CONTIGUOUS : False
| OWNDATA : True
| WRITEABLE : False
| ALIGNED : False
| WRITEBACKIFCOPY : False
| UPDATEIFCOPY : False
| >>> y.setflags(uic=1)
| Traceback (most recent call last):
| File "<stdin>", line 1, in <module>
| ValueError: cannot set WRITEBACKIFCOPY flag to True
|
| sort(...)
| a.sort(axis=-1, kind=None, order=None)
|
| Sort an array in-place. Refer to `numpy.sort` for full documentation.
|
| Parameters
| ----------
| axis : int, optional
| Axis along which to sort. Default is -1, which means sort along the
| last axis.
| kind : {'quicksort', 'mergesort', 'heapsort', 'stable'}, optional
| Sorting algorithm. The default is 'quicksort'. Note that both 'stable'
| and 'mergesort' use timsort under the covers and, in general, the
| actual implementation will vary with datatype. The 'mergesort' option
| is retained for backwards compatibility.
|
| .. versionchanged:: 1.15.0.
| The 'stable' option was added.
|
| order : str or list of str, optional
| When `a` is an array with fields defined, this argument specifies
| which fields to compare first, second, etc. A single field can
| be specified as a string, and not all fields need be specified,
| but unspecified fields will still be used, in the order in which
| they come up in the dtype, to break ties.
|
| See Also
| --------
| numpy.sort : Return a sorted copy of an array.
| numpy.argsort : Indirect sort.
| numpy.lexsort : Indirect stable sort on multiple keys.
| numpy.searchsorted : Find elements in sorted array.
| numpy.partition: Partial sort.
|
| Notes
| -----
| See `numpy.sort` for notes on the different sorting algorithms.
|
| Examples
| --------
| >>> a = np.array([[1,4], [3,1]])
| >>> a.sort(axis=1)
| >>> a
| array([[1, 4],
| [1, 3]])
| >>> a.sort(axis=0)
| >>> a
| array([[1, 3],
| [1, 4]])
|
| Use the `order` keyword to specify a field to use when sorting a
| structured array:
|
| >>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)])
| >>> a.sort(order='y')
| >>> a
| array([(b'c', 1), (b'a', 2)],
| dtype=[('x', 'S1'), ('y', '<i8')])
|
| squeeze(...)
| a.squeeze(axis=None)
|
| Remove single-dimensional entries from the shape of `a`.
|
| Refer to `numpy.squeeze` for full documentation.
|
| See Also
| --------
| numpy.squeeze : equivalent function
|
| std(...)
| a.std(axis=None, dtype=None, out=None, ddof=0, keepdims=False)
|
| Returns the standard deviation of the array elements along given axis.
|
| Refer to `numpy.std` for full documentation.
|
| See Also
| --------
| numpy.std : equivalent function
|
| sum(...)
| a.sum(axis=None, dtype=None, out=None, keepdims=False, initial=0, where=True)
|
| Return the sum of the array elements over the given axis.
|
| Refer to `numpy.sum` for full documentation.
|
| See Also
| --------
| numpy.sum : equivalent function
|
| swapaxes(...)
| a.swapaxes(axis1, axis2)
|
| Return a view of the array with `axis1` and `axis2` interchanged.
|
| Refer to `numpy.swapaxes` for full documentation.
|
| See Also
| --------
| numpy.swapaxes : equivalent function
|
| take(...)
| a.take(indices, axis=None, out=None, mode='raise')
|
| Return an array formed from the elements of `a` at the given indices.
|
| Refer to `numpy.take` for full documentation.
|
| See Also
| --------
| numpy.take : equivalent function
|
| tobytes(...)
| a.tobytes(order='C')
|
| Construct Python bytes containing the raw data bytes in the array.
|
| Constructs Python bytes showing a copy of the raw contents of
| data memory. The bytes object can be produced in either 'C' or 'Fortran',
| or 'Any' order (the default is 'C'-order). 'Any' order means C-order
| unless the F_CONTIGUOUS flag in the array is set, in which case it
| means 'Fortran' order.
|
| .. versionadded:: 1.9.0
|
| Parameters
| ----------
| order : {'C', 'F', None}, optional
| Order of the data for multidimensional arrays:
| C, Fortran, or the same as for the original array.
|
| Returns
| -------
| s : bytes
| Python bytes exhibiting a copy of `a`'s raw data.
|
| Examples
| --------
| >>> x = np.array([[0, 1], [2, 3]], dtype='<u2')
| >>> x.tobytes()
| b'\x00\x00\x01\x00\x02\x00\x03\x00'
| >>> x.tobytes('C') == x.tobytes()
| True
| >>> x.tobytes('F')
| b'\x00\x00\x02\x00\x01\x00\x03\x00'
|
| tofile(...)
| a.tofile(fid, sep="", format="%s")
|
| Write array to a file as text or binary (default).
|
| Data is always written in 'C' order, independent of the order of `a`.
| The data produced by this method can be recovered using the function
| fromfile().
|
| Parameters
| ----------
| fid : file or str or Path
| An open file object, or a string containing a filename.
|
| .. versionchanged:: 1.17.0
| `pathlib.Path` objects are now accepted.
|
| sep : str
| Separator between array items for text output.
| If "" (empty), a binary file is written, equivalent to
| ``file.write(a.tobytes())``.
| format : str
| Format string for text file output.
| Each entry in the array is formatted to text by first converting
| it to the closest Python type, and then using "format" % item.
|
| Notes
| -----
| This is a convenience function for quick storage of array data.
| Information on endianness and precision is lost, so this method is not a
| good choice for files intended to archive data or transport data between
| machines with different endianness. Some of these problems can be overcome
| by outputting the data as text files, at the expense of speed and file
| size.
|
| When fid is a file object, array contents are directly written to the
| file, bypassing the file object's ``write`` method. As a result, tofile
| cannot be used with files objects supporting compression (e.g., GzipFile)
| or file-like objects that do not support ``fileno()`` (e.g., BytesIO).
|
| tolist(...)
| a.tolist()
|
| Return the array as an ``a.ndim``-levels deep nested list of Python scalars.
|
| Return a copy of the array data as a (nested) Python list.
| Data items are converted to the nearest compatible builtin Python type, via
| the `~numpy.ndarray.item` function.
|
| If ``a.ndim`` is 0, then since the depth of the nested list is 0, it will
| not be a list at all, but a simple Python scalar.
|
| Parameters
| ----------
| none
|
| Returns
| -------
| y : object, or list of object, or list of list of object, or ...
| The possibly nested list of array elements.
|
| Notes
| -----
| The array may be recreated via ``a = np.array(a.tolist())``, although this
| may sometimes lose precision.
|
| Examples
| --------
| For a 1D array, ``a.tolist()`` is almost the same as ``list(a)``,
| except that ``tolist`` changes numpy scalars to Python scalars:
|
| >>> a = np.uint32([1, 2])
| >>> a_list = list(a)
| >>> a_list
| [1, 2]
| >>> type(a_list[0])
| <class 'numpy.uint32'>
| >>> a_tolist = a.tolist()
| >>> a_tolist
| [1, 2]
| >>> type(a_tolist[0])
| <class 'int'>
|
| Additionally, for a 2D array, ``tolist`` applies recursively:
|
| >>> a = np.array([[1, 2], [3, 4]])
| >>> list(a)
| [array([1, 2]), array([3, 4])]
| >>> a.tolist()
| [[1, 2], [3, 4]]
|
| The base case for this recursion is a 0D array:
|
| >>> a = np.array(1)
| >>> list(a)
| Traceback (most recent call last):
| ...
| TypeError: iteration over a 0-d array
| >>> a.tolist()
| 1
|
| tostring(...)
| a.tostring(order='C')
|
| Construct Python bytes containing the raw data bytes in the array.
|
| Constructs Python bytes showing a copy of the raw contents of
| data memory. The bytes object can be produced in either 'C' or 'Fortran',
| or 'Any' order (the default is 'C'-order). 'Any' order means C-order
| unless the F_CONTIGUOUS flag in the array is set, in which case it
| means 'Fortran' order.
|
| This function is a compatibility alias for tobytes. Despite its name it returns bytes not strings.
|
| Parameters
| ----------
| order : {'C', 'F', None}, optional
| Order of the data for multidimensional arrays:
| C, Fortran, or the same as for the original array.
|
| Returns
| -------
| s : bytes
| Python bytes exhibiting a copy of `a`'s raw data.
|
| Examples
| --------
| >>> x = np.array([[0, 1], [2, 3]], dtype='<u2')
| >>> x.tobytes()
| b'\x00\x00\x01\x00\x02\x00\x03\x00'
| >>> x.tobytes('C') == x.tobytes()
| True
| >>> x.tobytes('F')
| b'\x00\x00\x02\x00\x01\x00\x03\x00'
|
| trace(...)
| a.trace(offset=0, axis1=0, axis2=1, dtype=None, out=None)
|
| Return the sum along diagonals of the array.
|
| Refer to `numpy.trace` for full documentation.
|
| See Also
| --------
| numpy.trace : equivalent function
|
| transpose(...)
| a.transpose(*axes)
|
| Returns a view of the array with axes transposed.
|
| For a 1-D array this has no effect, as a transposed vector is simply the
| same vector. To convert a 1-D array into a 2D column vector, an additional
| dimension must be added. `np.atleast2d(a).T` achieves this, as does
| `a[:, np.newaxis]`.
| For a 2-D array, this is a standard matrix transpose.
| For an n-D array, if axes are given, their order indicates how the
| axes are permuted (see Examples). If axes are not provided and
| ``a.shape = (i[0], i[1], ... i[n-2], i[n-1])``, then
| ``a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0])``.
|
| Parameters
| ----------
| axes : None, tuple of ints, or `n` ints
|
| * None or no argument: reverses the order of the axes.
|
| * tuple of ints: `i` in the `j`-th place in the tuple means `a`'s
| `i`-th axis becomes `a.transpose()`'s `j`-th axis.
|
| * `n` ints: same as an n-tuple of the same ints (this form is
| intended simply as a "convenience" alternative to the tuple form)
|
| Returns
| -------
| out : ndarray
| View of `a`, with axes suitably permuted.
|
| See Also
| --------
| ndarray.T : Array property returning the array transposed.
| ndarray.reshape : Give a new shape to an array without changing its data.
|
| Examples
| --------
| >>> a = np.array([[1, 2], [3, 4]])
| >>> a
| array([[1, 2],
| [3, 4]])
| >>> a.transpose()
| array([[1, 3],
| [2, 4]])
| >>> a.transpose((1, 0))
| array([[1, 3],
| [2, 4]])
| >>> a.transpose(1, 0)
| array([[1, 3],
| [2, 4]])
|
| var(...)
| a.var(axis=None, dtype=None, out=None, ddof=0, keepdims=False)
|
| Returns the variance of the array elements, along given axis.
|
| Refer to `numpy.var` for full documentation.
|
| See Also
| --------
| numpy.var : equivalent function
|
| view(...)
| a.view(dtype=None, type=None)
|
| New view of array with the same data.
|
| Parameters
| ----------
| dtype : data-type or ndarray sub-class, optional
| Data-type descriptor of the returned view, e.g., float32 or int16. The
| default, None, results in the view having the same data-type as `a`.
| This argument can also be specified as an ndarray sub-class, which
| then specifies the type of the returned object (this is equivalent to
| setting the ``type`` parameter).
| type : Python type, optional
| Type of the returned view, e.g., ndarray or matrix. Again, the
| default None results in type preservation.
|
| Notes
| -----
| ``a.view()`` is used two different ways:
|
| ``a.view(some_dtype)`` or ``a.view(dtype=some_dtype)`` constructs a view
| of the array's memory with a different data-type. This can cause a
| reinterpretation of the bytes of memory.
|
| ``a.view(ndarray_subclass)`` or ``a.view(type=ndarray_subclass)`` just
| returns an instance of `ndarray_subclass` that looks at the same array
| (same shape, dtype, etc.) This does not cause a reinterpretation of the
| memory.
|
| For ``a.view(some_dtype)``, if ``some_dtype`` has a different number of
| bytes per entry than the previous dtype (for example, converting a
| regular array to a structured array), then the behavior of the view
| cannot be predicted just from the superficial appearance of ``a`` (shown
| by ``print(a)``). It also depends on exactly how ``a`` is stored in
| memory. Therefore if ``a`` is C-ordered versus fortran-ordered, versus
| defined as a slice or transpose, etc., the view may give different
| results.
|
|
| Examples
| --------
| >>> x = np.array([(1, 2)], dtype=[('a', np.int8), ('b', np.int8)])
|
| Viewing array data using a different type and dtype:
|
| >>> y = x.view(dtype=np.int16, type=np.matrix)
| >>> y
| matrix([[513]], dtype=int16)
| >>> print(type(y))
| <class 'numpy.matrix'>
|
| Creating a view on a structured array so it can be used in calculations
|
| >>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)])
| >>> xv = x.view(dtype=np.int8).reshape(-1,2)
| >>> xv
| array([[1, 2],
| [3, 4]], dtype=int8)
| >>> xv.mean(0)
| array([2., 3.])
|
| Making changes to the view changes the underlying array
|
| >>> xv[0,1] = 20
| >>> x
| array([(1, 20), (3, 4)], dtype=[('a', 'i1'), ('b', 'i1')])
|
| Using a view to convert an array to a recarray:
|
| >>> z = x.view(np.recarray)
| >>> z.a
| array([1, 3], dtype=int8)
|
| Views share data:
|
| >>> x[0] = (9, 10)
| >>> z[0]
| (9, 10)
|
| Views that change the dtype size (bytes per entry) should normally be
| avoided on arrays defined by slices, transposes, fortran-ordering, etc.:
|
| >>> x = np.array([[1,2,3],[4,5,6]], dtype=np.int16)
| >>> y = x[:, 0:2]
| >>> y
| array([[1, 2],
| [4, 5]], dtype=int16)
| >>> y.view(dtype=[('width', np.int16), ('length', np.int16)])
| Traceback (most recent call last):
| ...
| ValueError: To change to a dtype of a different size, the array must be C-contiguous
| >>> z = y.copy()
| >>> z.view(dtype=[('width', np.int16), ('length', np.int16)])
| array([[(1, 2)],
| [(4, 5)]], dtype=[('width', '<i2'), ('length', '<i2')])
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| T
| The transposed array.
|
| Same as ``self.transpose()``.
|
| Examples
| --------
| >>> x = np.array([[1.,2.],[3.,4.]])
| >>> x
| array([[ 1., 2.],
| [ 3., 4.]])
| >>> x.T
| array([[ 1., 3.],
| [ 2., 4.]])
| >>> x = np.array([1.,2.,3.,4.])
| >>> x
| array([ 1., 2., 3., 4.])
| >>> x.T
| array([ 1., 2., 3., 4.])
|
| See Also
| --------
| transpose
|
| __array_finalize__
| None.
|
| __array_interface__
| Array protocol: Python side.
|
| __array_priority__
| Array priority.
|
| __array_struct__
| Array protocol: C-struct side.
|
| base
| Base object if memory is from some other object.
|
| Examples
| --------
| The base of an array that owns its memory is None:
|
| >>> x = np.array([1,2,3,4])
| >>> x.base is None
| True
|
| Slicing creates a view, whose memory is shared with x:
|
| >>> y = x[2:]
| >>> y.base is x
| True
|
| ctypes
| An object to simplify the interaction of the array with the ctypes
| module.
|
| This attribute creates an object that makes it easier to use arrays
| when calling shared libraries with the ctypes module. The returned
| object has, among others, data, shape, and strides attributes (see
| Notes below) which themselves return ctypes objects that can be used
| as arguments to a shared library.
|
| Parameters
| ----------
| None
|
| Returns
| -------
| c : Python object
| Possessing attributes data, shape, strides, etc.
|
| See Also
| --------
| numpy.ctypeslib
|
| Notes
| -----
| Below are the public attributes of this object which were documented
| in "Guide to NumPy" (we have omitted undocumented public attributes,
| as well as documented private attributes):
|
| .. autoattribute:: numpy.core._internal._ctypes.data
| :noindex:
|
| .. autoattribute:: numpy.core._internal._ctypes.shape
| :noindex:
|
| .. autoattribute:: numpy.core._internal._ctypes.strides
| :noindex:
|
| .. automethod:: numpy.core._internal._ctypes.data_as
| :noindex:
|
| .. automethod:: numpy.core._internal._ctypes.shape_as
| :noindex:
|
| .. automethod:: numpy.core._internal._ctypes.strides_as
| :noindex:
|
| If the ctypes module is not available, then the ctypes attribute
| of array objects still returns something useful, but ctypes objects
| are not returned and errors may be raised instead. In particular,
| the object will still have the ``as_parameter`` attribute which will
| return an integer equal to the data attribute.
|
| Examples
| --------
| >>> import ctypes
| >>> x
| array([[0, 1],
| [2, 3]])
| >>> x.ctypes.data
| 30439712
| >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_long))
| <ctypes.LP_c_long object at 0x01F01300>
| >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_long)).contents
| c_long(0)
| >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_longlong)).contents
| c_longlong(4294967296L)
| >>> x.ctypes.shape
| <numpy.core._internal.c_long_Array_2 object at 0x01FFD580>
| >>> x.ctypes.shape_as(ctypes.c_long)
| <numpy.core._internal.c_long_Array_2 object at 0x01FCE620>
| >>> x.ctypes.strides
| <numpy.core._internal.c_long_Array_2 object at 0x01FCE620>
| >>> x.ctypes.strides_as(ctypes.c_longlong)
| <numpy.core._internal.c_longlong_Array_2 object at 0x01F01300>
|
| data
| Python buffer object pointing to the start of the array's data.
|
| dtype
| Data-type of the array's elements.
|
| Parameters
| ----------
| None
|
| Returns
| -------
| d : numpy dtype object
|
| See Also
| --------
| numpy.dtype
|
| Examples
| --------
| >>> x
| array([[0, 1],
| [2, 3]])
| >>> x.dtype
| dtype('int32')
| >>> type(x.dtype)
| <type 'numpy.dtype'>
|
| flags
| Information about the memory layout of the array.
|
| Attributes
| ----------
| C_CONTIGUOUS (C)
| The data is in a single, C-style contiguous segment.
| F_CONTIGUOUS (F)
| The data is in a single, Fortran-style contiguous segment.
| OWNDATA (O)
| The array owns the memory it uses or borrows it from another object.
| WRITEABLE (W)
| The data area can be written to. Setting this to False locks
| the data, making it read-only. A view (slice, etc.) inherits WRITEABLE
| from its base array at creation time, but a view of a writeable
| array may be subsequently locked while the base array remains writeable.
| (The opposite is not true, in that a view of a locked array may not
| be made writeable. However, currently, locking a base object does not
| lock any views that already reference it, so under that circumstance it
| is possible to alter the contents of a locked array via a previously
| created writeable view onto it.) Attempting to change a non-writeable
| array raises a RuntimeError exception.
| ALIGNED (A)
| The data and all elements are aligned appropriately for the hardware.
| WRITEBACKIFCOPY (X)
| This array is a copy of some other array. The C-API function
| PyArray_ResolveWritebackIfCopy must be called before deallocating
| to the base array will be updated with the contents of this array.
| UPDATEIFCOPY (U)
| (Deprecated, use WRITEBACKIFCOPY) This array is a copy of some other array.
| When this array is
| deallocated, the base array will be updated with the contents of
| this array.
| FNC
| F_CONTIGUOUS and not C_CONTIGUOUS.
| FORC
| F_CONTIGUOUS or C_CONTIGUOUS (one-segment test).
| BEHAVED (B)
| ALIGNED and WRITEABLE.
| CARRAY (CA)
| BEHAVED and C_CONTIGUOUS.
| FARRAY (FA)
| BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS.
|
| Notes
| -----
| The `flags` object can be accessed dictionary-like (as in ``a.flags['WRITEABLE']``),
| or by using lowercased attribute names (as in ``a.flags.writeable``). Short flag
| names are only supported in dictionary access.
|
| Only the WRITEBACKIFCOPY, UPDATEIFCOPY, WRITEABLE, and ALIGNED flags can be
| changed by the user, via direct assignment to the attribute or dictionary
| entry, or by calling `ndarray.setflags`.
|
| The array flags cannot be set arbitrarily:
|
| - UPDATEIFCOPY can only be set ``False``.
| - WRITEBACKIFCOPY can only be set ``False``.
| - ALIGNED can only be set ``True`` if the data is truly aligned.
| - WRITEABLE can only be set ``True`` if the array owns its own memory
| or the ultimate owner of the memory exposes a writeable buffer
| interface or is a string.
|
| Arrays can be both C-style and Fortran-style contiguous simultaneously.
| This is clear for 1-dimensional arrays, but can also be true for higher
| dimensional arrays.
|
| Even for contiguous arrays a stride for a given dimension
| ``arr.strides[dim]`` may be *arbitrary* if ``arr.shape[dim] == 1``
| or the array has no elements.
| It does *not* generally hold that ``self.strides[-1] == self.itemsize``
| for C-style contiguous arrays or ``self.strides[0] == self.itemsize`` for
| Fortran-style contiguous arrays is true.
|
| flat
| A 1-D iterator over the array.
|
| This is a `numpy.flatiter` instance, which acts similarly to, but is not
| a subclass of, Python's built-in iterator object.
|
| See Also
| --------
| flatten : Return a copy of the array collapsed into one dimension.
|
| flatiter
|
| Examples
| --------
| >>> x = np.arange(1, 7).reshape(2, 3)
| >>> x
| array([[1, 2, 3],
| [4, 5, 6]])
| >>> x.flat[3]
| 4
| >>> x.T
| array([[1, 4],
| [2, 5],
| [3, 6]])
| >>> x.T.flat[3]
| 5
| >>> type(x.flat)
| <class 'numpy.flatiter'>
|
| An assignment example:
|
| >>> x.flat = 3; x
| array([[3, 3, 3],
| [3, 3, 3]])
| >>> x.flat[[1,4]] = 1; x
| array([[3, 1, 3],
| [3, 1, 3]])
|
| imag
| The imaginary part of the array.
|
| Examples
| --------
| >>> x = np.sqrt([1+0j, 0+1j])
| >>> x.imag
| array([ 0. , 0.70710678])
| >>> x.imag.dtype
| dtype('float64')
|
| itemsize
| Length of one array element in bytes.
|
| Examples
| --------
| >>> x = np.array([1,2,3], dtype=np.float64)
| >>> x.itemsize
| 8
| >>> x = np.array([1,2,3], dtype=np.complex128)
| >>> x.itemsize
| 16
|
| nbytes
| Total bytes consumed by the elements of the array.
|
| Notes
| -----
| Does not include memory consumed by non-element attributes of the
| array object.
|
| Examples
| --------
| >>> x = np.zeros((3,5,2), dtype=np.complex128)
| >>> x.nbytes
| 480
| >>> np.prod(x.shape) * x.itemsize
| 480
|
| ndim
| Number of array dimensions.
|
| Examples
| --------
| >>> x = np.array([1, 2, 3])
| >>> x.ndim
| 1
| >>> y = np.zeros((2, 3, 4))
| >>> y.ndim
| 3
|
| real
| The real part of the array.
|
| Examples
| --------
| >>> x = np.sqrt([1+0j, 0+1j])
| >>> x.real
| array([ 1. , 0.70710678])
| >>> x.real.dtype
| dtype('float64')
|
| See Also
| --------
| numpy.real : equivalent function
|
| shape
| Tuple of array dimensions.
|
| The shape property is usually used to get the current shape of an array,
| but may also be used to reshape the array in-place by assigning a tuple of
| array dimensions to it. As with `numpy.reshape`, one of the new shape
| dimensions can be -1, in which case its value is inferred from the size of
| the array and the remaining dimensions. Reshaping an array in-place will
| fail if a copy is required.
|
| Examples
| --------
| >>> x = np.array([1, 2, 3, 4])
| >>> x.shape
| (4,)
| >>> y = np.zeros((2, 3, 4))
| >>> y.shape
| (2, 3, 4)
| >>> y.shape = (3, 8)
| >>> y
| array([[ 0., 0., 0., 0., 0., 0., 0., 0.],
| [ 0., 0., 0., 0., 0., 0., 0., 0.],
| [ 0., 0., 0., 0., 0., 0., 0., 0.]])
| >>> y.shape = (3, 6)
| Traceback (most recent call last):
| File "<stdin>", line 1, in <module>
| ValueError: total size of new array must be unchanged
| >>> np.zeros((4,2))[::2].shape = (-1,)
| Traceback (most recent call last):
| File "<stdin>", line 1, in <module>
| AttributeError: incompatible shape for a non-contiguous array
|
| See Also
| --------
| numpy.reshape : similar function
| ndarray.reshape : similar method
|
| size
| Number of elements in the array.
|
| Equal to ``np.prod(a.shape)``, i.e., the product of the array's
| dimensions.
|
| Notes
| -----
| `a.size` returns a standard arbitrary precision Python integer. This
| may not be the case with other methods of obtaining the same value
| (like the suggested ``np.prod(a.shape)``, which returns an instance
| of ``np.int_``), and may be relevant if the value is used further in
| calculations that may overflow a fixed size integer type.
|
| Examples
| --------
| >>> x = np.zeros((3, 5, 2), dtype=np.complex128)
| >>> x.size
| 30
| >>> np.prod(x.shape)
| 30
|
| strides
| Tuple of bytes to step in each dimension when traversing an array.
|
| The byte offset of element ``(i[0], i[1], ..., i[n])`` in an array `a`
| is::
|
| offset = sum(np.array(i) * a.strides)
|
| A more detailed explanation of strides can be found in the
| "ndarray.rst" file in the NumPy reference guide.
|
| Notes
| -----
| Imagine an array of 32-bit integers (each 4 bytes)::
|
| x = np.array([[0, 1, 2, 3, 4],
| [5, 6, 7, 8, 9]], dtype=np.int32)
|
| This array is stored in memory as 40 bytes, one after the other
| (known as a contiguous block of memory). The strides of an array tell
| us how many bytes we have to skip in memory to move to the next position
| along a certain axis. For example, we have to skip 4 bytes (1 value) to
| move to the next column, but 20 bytes (5 values) to get to the same
| position in the next row. As such, the strides for the array `x` will be
| ``(20, 4)``.
|
| See Also
| --------
| numpy.lib.stride_tricks.as_strided
|
| Examples
| --------
| >>> y = np.reshape(np.arange(2*3*4), (2,3,4))
| >>> y
| array([[[ 0, 1, 2, 3],
| [ 4, 5, 6, 7],
| [ 8, 9, 10, 11]],
| [[12, 13, 14, 15],
| [16, 17, 18, 19],
| [20, 21, 22, 23]]])
| >>> y.strides
| (48, 16, 4)
| >>> y[1,1,1]
| 17
| >>> offset=sum(y.strides * np.array((1,1,1)))
| >>> offset/y.itemsize
| 17
|
| >>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0)
| >>> x.strides
| (32, 4, 224, 1344)
| >>> i = np.array([3,5,2,2])
| >>> offset = sum(i * x.strides)
| >>> x[3,5,2,2]
| 813
| >>> offset / x.itemsize
| 813
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __hash__ = None
###Markdown
Task 4a: Getting HelpIn the practice notebook perform the following:+ In the code cell below, call `help()` on two of the following functions: `np.transpose()`, `np.reshape()`, `np.resize()`, `np.ravel()`, `np.append()`, `np.delete()`, `np.concatenate()`, `np.vstack()`, `np.hstack()`, `np.column_stack()`, `np.vsplit()`, `np.hsplit()` + Respond to this question: Did you understand the help documentation? Could you use the function just by looking at what the help says about it? 5. Manipulating ArraysThus far, we have learned to create arrays, perform basic math, aggregate values, and index arrays. Finally, we need to learn to manipulate them by transposing, reshaping, splitting, joining, appending, and deleting arrays. 5.1 TransposingTransposing an array flips it over its main diagonal, so that rows become columns and columns become rows, as shown in the following animated image:(image source: https://en.wikipedia.org/wiki/Transpose)Numpy allows you to transpose a matrix in one of two ways:+ Using the `transpose()` function+ Accessing the `T` attribute.Execute the following code examples to see an example of an array transpose; a short note on transposing 1-D arrays follows the example.
###Code
# Create a 2 x 3 random matrix
demo_f = np.random.random((2,3))
print("The original matrix")
print(demo_f)
print("\nThe matrix after being tranposed")
print(np.transpose(demo_f))
print("\nThe tranposed matrix from the T attribute")
print(demo_f.T)
###Output
_____no_output_____
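###Markdown
A quick supplementary sketch (the variable names here are illustrative only): transposing swaps existing axes, so a 1-D array is returned unchanged by `.T`; to obtain an explicit column vector you can first add a new axis, as shown below.
###Code
# Supplementary sketch: transposing a 1-D array has no effect
import numpy as np

v = np.array([1, 2, 3])
print(v.T.shape)          # (3,) -- still a 1-D array

# Add a new axis to get an explicit column vector, then .T behaves as expected
col = v[:, np.newaxis]    # shape (3, 1)
print(col)
print(col.T)              # now a 1 x 3 row vector
###Output
_____no_output_____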
###Markdown
Task 5a: Transposing an ArrayIn the practice notebook perform the following:+ Create a matrix of any size and transpose it. 5.2 Reshaping and ResizingYou can change the dimensions of your array using the following two functions: + `resize()` + `reshape()` The `resize()` function allows you to "stretch" your array to increase its size. This can be useful if you need to add more data to an existing array or you need to adjust it prior to performing arithmetic and broadcasting.The `reshape()` function allows you to change the dimensions of an existing array. For example, if you have a _3 x 2_ array you can change it to a _6 x 1_ array using the `reshape()` function without losing the data values in the array.Examine and execute the following code adapted from the DataCamp Tutorial:
###Code
# Create a 1-D array `x` with 4 elements. Print the shape of `x`
x = np.array([1,1,1,1])
print(x.shape)
# Resize `x` to (6, 4) using the np.resize() function
np.resize(x, (6,4))
###Output
_____no_output_____
###Markdown
Notice how the 4-element array was resized to a _6 x 4_ array; the sketch below shows where the new values come from.
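###Markdown
Because the array above contains only ones, it is hard to see how the new positions are filled. The following supplementary sketch (with illustrative values) contrasts the `np.resize()` function, which repeats the original data to fill the new shape, with the `ndarray.resize()` method, which pads with zeros instead.
###Code
# Supplementary sketch: np.resize (function) vs. ndarray.resize (method)
import numpy as np

x = np.array([1, 2, 3, 4])

# np.resize returns a NEW array and repeats the original values to fill the shape
print(np.resize(x, (3, 4)))

# The ndarray.resize method changes the array in place and pads with zeros instead
y = np.array([1, 2, 3, 4])
y.resize((3, 4), refcheck=False)   # refcheck=False avoids reference errors in notebooks
print(y)
###Output
_____no_output_____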
###Code
# Reshape `x` to (2,2)
x = np.array([1,2,3,4])
print("\noriginal:")
print(x)
print("\nreshaped:")
print(x.reshape((2,2)))
###Output
_____no_output_____
###Markdown
Task 5b: Reshaping an ArrayIn the practice notebook perform the following:+ Create a matrix and resize it by adding 2 extra columns+ Create a matrix and resize it by adding 1 extra row+ Create a matrix of 8 x 2 and resize it to 4 x 4 5.3 Appending ArraysSometimes, you may want to append one array to another. You can do so using the `append()` function. You can append along any dimension. Remember that NumPy arrays have **axes**. When you append one array to another you must specify the axis (e.g. row or column for a 2D array) along which you want to append. Axes are identified using a numeric index starting from 0, therefore:+ `0`: the first axis (the rows of a 2D array)+ `1`: the second axis (the columns of a 2D array)+ `2`: the third axis+ `3`: the fourth axis+ etc...For example, examine and execute this code borrowed from the DataCamp tutorial; a supplementary sketch using `axis=0` and a 3D array follows it:
###Code
# Append a 1D array to your `my_array`
my_array = np.array([1,2,3,4])
new_array = np.append(my_array, [7, 8, 9, 10])
# Print `new_array`
print(new_array)
# Append an extra column to your `my_2d_array`
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
new_2d_array = np.append(my_2d_array, [[7], [8]], axis=1)
# Print `new_2d_array`
print(new_2d_array)
###Output
_____no_output_____
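###Markdown
As a supplementary sketch for the axis discussion above (the array names are illustrative), appending with `axis=0` adds a row to a 2D array, and the same pattern extends to 3D arrays provided the shapes agree on every axis except the one being appended along.
###Code
# Supplementary sketch: appending along axis 0 (rows) and appending to a 3D array
import numpy as np

my_2d_array = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])

# axis=0 appends a row; the new block must have the same number of columns
row_appended = np.append(my_2d_array, [[9, 10, 11, 12]], axis=0)
print(row_appended)        # now a 3 x 4 array

# 3D case: shapes must agree on every axis except the one being appended along
my_3d_array = np.arange(8).reshape(2, 2, 2)
extra_block = np.zeros((1, 2, 2), dtype=int)
print(np.append(my_3d_array, extra_block, axis=0).shape)   # (3, 2, 2)
###Output
_____no_output_____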
###Markdown
In the DataCamp code above, for the first example, the array `[7, 8, 9, 10]` is appended to the existing 1D `my_array`. For the second example, the values `7` and `8` are appended as a new column, one value per row (note the `axis=1` parameter). Task 5c: Appending to an ArrayIn the practice notebook perform the following: + Create a three dimensional array and append another row to the array + Append another column to the array + Print the final results 5.4 Inserting and Deleting ElementsYou can easily add new elements to an array using the `insert()` function and remove elements using the `delete()` function (a short sketch follows the joining examples below). Task 5d: Inserting and Deleting ElementsIn the practice notebook perform the following:+ Examine the `help()` documentation for how to use the `insert()` and `delete()` functions.+ Create a matrix and practice inserting a row and deleting a column. 5.5 Joining ArraysThere are a variety of functions for joining arrays: + `concatenate()` + `vstack()` + `hstack()` + `column_stack()`Each of these functions is used in the following code borrowed from a [DataCamp](https://www.datacamp.com/) tutorial. Examine and execute the following code cell:
###Code
# Concatenate `my_array` and `x`: similar to np.append()
my_array = np.array([1,2,3,4])
x = np.array([1,1,1,1])
print("concatenate:")
print(np.concatenate((my_array, x)))
# Stack arrays row-wise
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
print("\nvstack:")
print(np.vstack((my_array, my_2d_array)))
# Stack arrays horizontally
print("\nhstack:")
print(np.hstack((my_2d_array, my_2d_array)))
# Stack arrays column-wise
print("\ncolumn_stack:")
print(np.column_stack((my_2d_array, my_2d_array)))
###Output
_____no_output_____
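###Markdown
Before the joining task, here is a minimal supplementary sketch of the `insert()` and `delete()` functions from section 5.4 (the example matrix is illustrative; see `help(np.insert)` and `help(np.delete)` for full details).
###Code
# Supplementary sketch: np.insert and np.delete both return NEW arrays
import numpy as np

m = np.array([[1, 2, 3], [4, 5, 6]])

# Insert a row of zeros before row index 1 (axis=0 operates on rows)
print(np.insert(m, 1, [0, 0, 0], axis=0))

# Delete the last column (axis=1 operates on columns)
print(np.delete(m, 2, axis=1))

# With no axis given, the array is flattened before deleting
print(np.delete(m, 0))
###Output
_____no_output_____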
###Markdown
Task 5e: Joining ArraysIn the practice notebook perform the following:+ Execute the code (as shown above).+ Examine the output from each of the function calls in the joining code cell above. If needed, review the help pages for each tool either using the `help()` command or the [Numpy Function Reference](https://docs.scipy.org/doc/numpy/reference/routines.html). + Respond to the following question: Can you identify what is happening with each of them? 5.6 Splitting an ArrayYou may find that you need to split arrays. The following functions allow you to split an array vertically or horizontally: + `vsplit()` + `hsplit()` Examine and execute the following code borrowed from the DataCamp Tutorial:
###Code
# Create a 2D array.
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
print("original:")
print(my_2d_array)
# Split `my_2d_array` horizontally into 2 equal pieces (column-wise)
print("\nhsplit:")
print(np.hsplit(my_2d_array, 2))
# Split `my_2d_array` vertically into 2 equal pieces (row-wise)
print("\nvsplit:")
print(np.vsplit(my_2d_array, 2))
###Output
_____no_output_____
###Markdown
Lesson 2: NumPy Part 2This notebook is based on the official `NumPy` [documentation](https://docs.scipy.org/doc/numpy/user/quickstart.html). Unless otherwise credited, quoted text comes from this document. The NumPy documentation describes NumPy in the following way:> NumPy is the fundamental package for scientific computing with Python. It contains among other things:> - a powerful N-dimensional array object> - sophisticated (broadcasting) functions> - tools for integrating C/C++ and Fortran code> - useful linear algebra, Fourier transform, and random number capabilities>> Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases. InstructionsThis tutorial provides step-by-step training divided into numbered sections. The sections often contain embedded executable code for demonstration. This tutorial is accompanied by a practice notebook: [L02-Numpy_Part2-Practice.ipynb](./L02-Numpy_Part2-Practice.ipynb). Throughout this tutorial, sections labeled as "Tasks" are interspersed and indicated with a task icon. You should follow the instructions provided in these sections by performing them in the practice notebook. When the tutorial is completed you can turn in the final practice notebook. --- 1. Getting StartedFirst, we must import the NumPy library.
###Code
# Import numpy
import numpy as np
###Output
_____no_output_____
###Markdown
Task 1a: SetupIn the practice notebook, import the following packages:+ `numpy` as `np` 2 Basic Indexing: Subsets and SlicingWe often want to consider a subset of a given array. You will recognize basic subsetting as it is similar to indexing of Python lists. The following code examples demonstrate how to subset a NumPy array:
```python
# Get items from "start" to "end" (but the end is not included!)
a[start:end]
# Get all items from "start" through the rest of the array
a[start:]
# Get items from the beginning to "end" (but the end is not included!)
a[:end]
```
As with Python lists, retrieving elements from the end of a NumPy array uses negative indexing. Execute the example code below to see a demonstration:
###Code
# Create a 5 x 2 array of random numbers
demo_g = np.random.random((5,2))
print(demo_g)
# Get the last item from the last 'row':
demo_g[-1, -1]
###Output
[[0.96567843 0.83841015]
[0.92732703 0.45005853]
[0.43958745 0.25624984]
[0.70135724 0.4107873 ]
[0.64615476 0.60885145]]
###Markdown
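The same slicing syntax extends to arrays with more dimensions. As a further, hypothetical illustration of my own, the sketch below pulls a single element, a full 2D slab, and a sub-block out of a small 3D array:
```python
import numpy as np

# A 2 x 3 x 4 array of sequential values
cube = np.arange(24).reshape((2, 3, 4))

print(cube[1, 2, 3])    # a single element
print(cube[0])          # the first 2D slab (shape 3 x 4)
print(cube[0, 1:, :2])  # a sub-block: last two rows, first two columns of slab 0
```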
Task 2a: Indexing by Subsetting and SlicingIn the practice notebook perform the following:1. Create (or re-use) 3 arrays, each containing three dimensions.2. Slice each of these arrays so that: + One element / number is returned. + One dimension is returned. + A subset of a dimension is returned.3. What is the difference between `[x:]` and `[x, ...]`? (hint, try each on high-dimension arrays). *Exactly what you choose to return is not important at this point; the goal of this task is to train you so that if you are given an n-dimensional NumPy array, you can write an index or slice that returns a subset of desired positions.* 3. "Fancy" IndexingFancy indexing allows you to provide an array of indices or an array of boolean values in order to subset an array. 3.1 Using a Boolean Array for IndexingRather than using an index range, as shown in the previous section, we can provide an array of boolean values where `True` indicates that we want the value in the position where `True` is found, and `False` indicates we do not want it. Creating these boolean arrays is simple if we use conditional statements. For example, review and then execute the following code:
###Code
# Create a 5 x 2 array of random numbers
demo_g = np.random.random((5,2))
# Find all values in the matrix less than 0.5
demo_g < 0.5
###Output
_____no_output_____
###Markdown
Notice the return value is an array of boolean values. `True` indicates the value is less than 0.5; `False` indicates it is greater than or equal to 0.5. We can use this boolean array as an index into the same array to return only those values that satisfy the boolean condition. Try executing the following code:
###Code
demo_g[demo_g < 0.5]
###Output
_____no_output_____
###Markdown
Or alternatively:
###Code
sig_list = demo_g < 0.5
demo_g[sig_list]
###Output
_____no_output_____
###Markdown
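A boolean array can also be used on the left-hand side of an assignment to modify only the selected elements. A short sketch of my own, using a hypothetical threshold of 0.5:
```python
import numpy as np

demo = np.random.random((5, 2))

# Replace every value below 0.5 with 0, leaving the rest untouched
demo[demo < 0.5] = 0
print(demo)
```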
Task 3a: Boolean IndexingIn the practice notebook perform the following:+ Experiment with the following boolean conditionals to generate boolean arrays for indexing: + Greater than + Less than + Equals + Combine two or more of the above with: + or `|` + and `&`You can create arrays or use existing ones. 3.2 Using exact indicesAlternatively, if there are specific elements from the array that we want to retrieve, we can provide the specific numeric indices. For example, review and then execute the following code:
###Code
# Generate an array of 500 random numbers
demo_f = np.random.random((500))
# Retrieve 5 numbers from the array by index
demo_f[[0,100,200,300,400]]
###Output
_____no_output_____
###Markdown
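Exact indices also work on multi-dimensional arrays: passing one index array per axis picks out individual elements. A minimal sketch of my own (not from the tutorial):
```python
import numpy as np

m = np.arange(12).reshape((3, 4))

# Pick the elements at positions (0, 1), (1, 2) and (2, 3)
print(m[[0, 1, 2], [1, 2, 3]])
```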
4. Intermission -- Getting HelpPython has a built-in function, `help()`, that we can call on any object (anything) to find out more about it. As we move deeper into the functions provided by most packages, we often need to know exactly what a given function expects as arguments.The output of these `help()` calls can be long. Try executing the following help call for the `np.array` function:
###Code
# Call help on anything from a package.
help(np.array)
###Output
Help on built-in function array in module numpy:
array(...)
array(object, dtype=None, *, copy=True, order='K', subok=False, ndmin=0)
Create an array.
Parameters
----------
object : array_like
An array, any object exposing the array interface, an object whose
__array__ method returns an array, or any (nested) sequence.
dtype : data-type, optional
The desired data-type for the array. If not given, then the type will
be determined as the minimum type required to hold the objects in the
sequence.
copy : bool, optional
If true (default), then the object is copied. Otherwise, a copy will
only be made if __array__ returns a copy, if obj is a nested sequence,
or if a copy is needed to satisfy any of the other requirements
(`dtype`, `order`, etc.).
order : {'K', 'A', 'C', 'F'}, optional
Specify the memory layout of the array. If object is not an array, the
newly created array will be in C order (row major) unless 'F' is
specified, in which case it will be in Fortran order (column major).
If object is an array the following holds.
===== ========= ===================================================
order no copy copy=True
===== ========= ===================================================
'K' unchanged F & C order preserved, otherwise most similar order
'A' unchanged F order if input is F and not C, otherwise C order
'C' C order C order
'F' F order F order
===== ========= ===================================================
When ``copy=False`` and a copy is made for other reasons, the result is
the same as if ``copy=True``, with some exceptions for `A`, see the
Notes section. The default order is 'K'.
subok : bool, optional
If True, then sub-classes will be passed-through, otherwise
the returned array will be forced to be a base-class array (default).
ndmin : int, optional
Specifies the minimum number of dimensions that the resulting
array should have. Ones will be pre-pended to the shape as
needed to meet this requirement.
Returns
-------
out : ndarray
An array object satisfying the specified requirements.
See Also
--------
empty_like : Return an empty array with shape and type of input.
ones_like : Return an array of ones with shape and type of input.
zeros_like : Return an array of zeros with shape and type of input.
full_like : Return a new array with shape of input filled with value.
empty : Return a new uninitialized array.
ones : Return a new array setting values to one.
zeros : Return a new array setting values to zero.
full : Return a new array of given shape filled with value.
Notes
-----
When order is 'A' and `object` is an array in neither 'C' nor 'F' order,
and a copy is forced by a change in dtype, then the order of the result is
not necessarily 'C' as expected. This is likely a bug.
Examples
--------
>>> np.array([1, 2, 3])
array([1, 2, 3])
Upcasting:
>>> np.array([1, 2, 3.0])
array([ 1., 2., 3.])
More than one dimension:
>>> np.array([[1, 2], [3, 4]])
array([[1, 2],
[3, 4]])
Minimum dimensions 2:
>>> np.array([1, 2, 3], ndmin=2)
array([[1, 2, 3]])
Type provided:
>>> np.array([1, 2, 3], dtype=complex)
array([ 1.+0.j, 2.+0.j, 3.+0.j])
Data-type consisting of more than one element:
>>> x = np.array([(1,2),(3,4)],dtype=[('a','<i4'),('b','<i4')])
>>> x['a']
array([1, 3])
Creating an array from sub-classes:
>>> np.array(np.mat('1 2; 3 4'))
array([[1, 2],
[3, 4]])
>>> np.array(np.mat('1 2; 3 4'), subok=True)
matrix([[1, 2],
[3, 4]])
###Markdown
Additionally, we can get help about an object that we created! For example:
```python
# Call help on an object we created.
x = np.array([1, 2, 3, 4])
help(x)
```
Task 4a: Getting HelpIn the practice notebook perform the following:+ In the code cell below, call `help()` on two of the following functions: `np.transpose()`, `np.reshape()`, `np.resize()`, `np.ravel()`, `np.append()`, `np.delete()`, `np.concatenate()`, `np.vstack()`, `np.hstack()`, `np.column_stack()`, `np.vsplit()`, `np.hsplit()` + Respond to this question: Did you understand the help documentation? Could you use the function just by looking at what the help says about it? 5. Manipulating ArraysThus far, we have learned to create arrays, perform basic math, aggregate values, and index arrays. Finally, we need to learn to manipulate them by transposing, reshaping, splitting, joining, appending, and deleting arrays. 5.1 TransposingTransposing an array flips it over its main diagonal, turning rows into columns, as shown in the following animated image:(image source: https://en.wikipedia.org/wiki/Transpose)NumPy allows you to transpose a matrix in one of two ways:+ Using the `transpose()` function+ Accessing the `T` attribute.Execute the following code examples to see an example of an array transpose
###Code
# Create a 2 x 3 random matrix
demo_f = np.random.random((2,3))
print("The original matrix")
print(demo_f)
print("\nThe matrix after being tranposed")
print(np.transpose(demo_f))
print("\nThe tranposed matrix from the T attribute")
print(demo_f.T)
###Output
The original matrix
[[0.23430951 0.15680436 0.71102349]
[0.9700775 0.71067625 0.43933634]]
The matrix after being transposed
[[0.23430951 0.9700775 ]
[0.15680436 0.71067625]
[0.71102349 0.43933634]]
The transposed matrix from the T attribute
[[0.23430951 0.9700775 ]
[0.15680436 0.71067625]
[0.71102349 0.43933634]]
###Markdown
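Two details the demo above does not show (my own notes, offered as a sketch rather than part of the tutorial): transposing a 1D array returns it unchanged, and for arrays with more than two dimensions `np.transpose()` accepts an `axes` argument giving the desired axis order:
```python
import numpy as np

v = np.array([1, 2, 3])
print(v.T.shape)  # (3,) -- transposing a 1D array changes nothing

cube = np.arange(24).reshape((2, 3, 4))
print(np.transpose(cube, axes=(1, 0, 2)).shape)  # (3, 2, 4)
```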
Task 5a: Transposing an ArrayIn the practice notebook perform the following:+ Create a matrix of any size and transpose it. 5.2 Reshaping and ResizingYou can change the dimensions of your array using the following two functions: + `resize()` + `reshape()` The `resize()` function allows you to "stretch" your array to increase its size. This can be useful if you need to add more data to an existing array or you need to adjust it prior to performing arithmetic and broadcasting.The `reshape()` function allows you to change the dimensions of an existing array. For example, if you have a _3 x 2_ array you can change it to a _6 x 1_ array using the `reshape()` function without losing the data values in the array.Examine and execute the following code adapted from the DataCamp Tutorial:
###Code
# Create a 1D array x with 4 elements. Print the shape of `x`
x = np.array([1,1,1,1])
print(x.shape)
# Resize `x` to (6, 4)
np.resize(x, (6,4))
###Output
(4,)
###Markdown
Notice how the array was resized from a 4-element 1D array to a _6 x 4_ array; `np.resize()` fills the larger shape with repeated copies of the original data.
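With distinct values it is easier to see how the extra positions get filled; a small sketch of my own showing that `np.resize()` repeats the original data (the in-place `ndarray.resize()` method, by contrast, pads with zeros):
```python
import numpy as np

x = np.array([1, 2, 3, 4])

# np.resize repeats [1, 2, 3, 4] until the (3, 4) shape is full
print(np.resize(x, (3, 4)))
```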
###Code
# Reshape `x` to (2,2)
x = np.array([1,2,3,4])
print("\noriginal:")
print(x)
print("\nreshaped:")
print(x.reshape((2,2)))
###Output
original:
[1 2 3 4]
reshaped:
[[1 2]
[3 4]]
###Markdown
Task 5b: Reshaping an ArrayIn the practice notebook perform the following:+ Create a matrix and resize it by adding 2 extra columns+ Create a matrix and resize it by adding 1 extra row+ Create a matrix of 8 x 2 and resize it to 4 x 4 5.3 Appending ArraysSometimes, you may want to append one array to another. You can append one array to another using the `append()` function. You can append an array along any dimension. Remember that NumPy arrays have **axes**. When you append one array to another you must specify the axis (e.g. rows or columns of a 2D array) along which you want to append. Axes are identified using a numeric index starting from 0, therefore:+ `0`: the first dimension (the rows of a 2D array)+ `1`: the second dimension (the columns of a 2D array)+ `2`: the third dimension+ `3`: the fourth dimension+ etc...For example, examine and execute this code borrowed from the DataCamp tutorial:
###Code
# Append a 1D array to your `my_array`
my_array = np.array([1,2,3,4])
new_array = np.append(my_array, [7, 8, 9, 10])
# Print `new_array`
print(new_array)
# Append an extra column to your `my_2d_array`
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
new_2d_array = np.append(my_2d_array, [[7], [8]], axis=1)
# Print `new_2d_array`
print(new_2d_array)
###Output
[ 1 2 3 4 7 8 9 10]
[[1 2 3 4 7]
[5 6 7 8 8]]
###Markdown
In the code above, for the first example, the array `[7, 8, 9, 10]` is appended or added to the existing 1D `my_array`. For the second example, the values `7` and `8` are appended as a new column, one value per row (note the `axis=1` parameter); a sketch of appending a new row with `axis=0` appears at the end of this cell. Task 5c: Appending to an ArrayIn the practice notebook perform the following: + Create a three dimensional array and append another row to the array + Append another column to the array + Print the final results 5.4 Inserting and Deleting ElementsYou can easily add a new element, or elements, to an array using the `insert()` and `delete()` functions. Task 5d: Inserting and Deleting ElementsIn the practice notebook perform the following:+ Examine the `help()` documentation for how to use the `insert()` and `delete()` functions.+ Create a matrix and practice inserting a row and deleting a column. 5.5 Joining ArraysThere are a variety of functions for joining arrays: + `concatenate()` + `vstack()` + `hstack()` + `column_stack()`Each of these functions is used in the following code borrowed from a [DataCamp](https://www.datacamp.com/) tutorial. Examine and execute the following code cell:
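As promised above, here is a sketch (my own, not from DataCamp) of appending a new row rather than a column; with `axis=0` the shapes must still line up, so the new row is written as a nested list:
```python
import numpy as np

my_2d_array = np.array([[1, 2, 3, 4],
                        [5, 6, 7, 8]])

# Append one new row; note the extra brackets so the shapes match
new_rows = np.append(my_2d_array, [[9, 10, 11, 12]], axis=0)
print(new_rows)
```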
###Code
# Concatenate `my_array` and `x`: similar to np.append()
my_array = np.array([1,2,3,4])
x = np.array([1,1,1,1])
print("concatenate:")
print(np.concatenate((my_array, x)))
# Stack arrays row-wise
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
print("\nvstack:")
print(np.vstack((my_array, my_2d_array)))
# Stack arrays horizontally
print("\nhstack:")
print(np.hstack((my_2d_array, my_2d_array)))
# Stack arrays column-wise
print("\ncolumn_stack:")
print(np.column_stack((my_2d_array, my_2d_array)))
###Output
_____no_output_____
###Markdown
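One distinction that is easy to miss in the cell above: for 2D inputs `hstack()` and `column_stack()` behave the same, but for 1D inputs `column_stack()` turns each array into a column of a new 2D array while `hstack()` simply concatenates them. A small sketch of my own:
```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

print(np.hstack((a, b)))        # [1 2 3 4 5 6]  -> still 1D
print(np.column_stack((a, b)))  # a 3 x 2 array with a and b as its columns
```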
Task 5e: Joining ArraysIn the practice notebook perform the following:+ Execute the code (as shown above).+ Examine the output from each of the function calls in the cell above. If needed, review the help pages for each tool either using the `help()` command or the [Numpy Function Reference](https://docs.scipy.org/doc/numpy/reference/routines.html). + Respond to the following question + Can you identify what is happening with each of them? 5.6 Splitting an ArrayYou may find that you need to split arrays. The following functions allow you to split horizontally or vertically: + `vsplit()` + `hsplit()` Examine and execute the following code borrowed from the DataCamp Tutorial:
###Code
# Create a 2D array.
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
print("original:")
print(my_2d_array)
# Split `my_2d_array` horizontally into 2 equal pieces (column-wise)
print("\nhsplit:")
print(np.hsplit(my_2d_array, 2))
# Split `my_2d_array` vertically into 2 equal pieces (row-wise)
print("\nvsplit:")
print(np.vsplit(my_2d_array, 2))
###Output
original:
[[1 2 3 4]
[5 6 7 8]]
hsplit:
[array([[1, 2],
[5, 6]]), array([[3, 4],
[7, 8]])]
vsplit:
[array([[1, 2, 3, 4]]), array([[5, 6, 7, 8]])]
###Markdown
Lesson 2: NumPy Part 2This notebook is based on the official `NumPy` [documentation](https://docs.scipy.org/doc/numpy/user/quickstart.html). Unless otherwise credited, quoted text comes from this document. The NumPy documentation describes NumPy in the following way:> NumPy is the fundamental package for scientific computing with Python. It contains among other things:> - a powerful N-dimensional array object> - sophisticated (broadcasting) functions> - tools for integrating C/C++ and Fortran code> - useful linear algebra, Fourier transform, and random number capabilities>> Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases. InstructionsThis tutorial provides step-by-step training divided into numbered sections. The sections often contain embedded executable code for demonstration. This tutorial is accompanied by a practice notebook: [L02-Numpy_Part2-Practice.ipynb](./L02-Numpy_Part2-Practice.ipynb). Throughout this tutorial, sections labeled as "Tasks" are interspersed and indicated with a task icon. You should follow the instructions provided in these sections by performing them in the practice notebook. When the tutorial is completed you can turn in the final practice notebook. --- 1. Getting StartedFirst, we must import the NumPy library.
###Code
# Import numpy
import numpy as np
###Output
_____no_output_____
###Markdown
Task 1a: SetupIn the practice notebook, import the following packages:+ `numpy` as `np` 2 Basic Indexing: Subsets and SlicingWe often want to consider a subset of a given array. You will recognize basic subsetting as it is similar to indexing of Python lists. The following code examples demonstrate how to subset a NumPy array:
```python
# Get items from "start" to "end" (but the end is not included!)
a[start:end]
# Get all items from "start" through the rest of the array
a[start:]
# Get items from the beginning to "end" (but the end is not included!)
a[:end]
```
As with Python lists, retrieving elements from the end of a NumPy array uses negative indexing. Execute the example code below to see a demonstration:
###Code
# Create a 5 x 2 array of random numbers
demo_g = np.random.random((5,2))
print(demo_g)
# Get the last item from the last 'row':
demo_g[-1, -1]
###Output
[[0.39629479 0.43859992]
[0.61946487 0.3155219 ]
[0.12065773 0.88017175]
[0.51176937 0.9263265 ]
[0.2976452 0.71529963]]
###Markdown
Task 2a: Indexing by Subsetting and SlicingIn the practice notebook perform the following:1. Create (or re-use) 3 arrays, each containing three dimensions.2. Slice each of these arrays so that: + One element / number is returned. + One dimension is returned. + A subset of a dimension is returned.3. What is the difference between `[x:]` and `[x, ...]`? (hint, try each on high-dimension arrays). *Exactly what you choose to return is not important at this point; the goal of this task is to train you so that if you are given an n-dimensional NumPy array, you can write an index or slice that returns a subset of desired positions.* 3. "Fancy" IndexingFancy indexing allows you to provide an array of indices or an array of boolean values in order to subset an array. 3.1 Using a Boolean Array for IndexingRather than using an index range, as shown in the previous section, we can provide an array of boolean values where `True` indicates that we want the value in the position where `True` is found, and `False` indicates we do not want it. Creating these boolean arrays is simple if we use conditional statements. For example, review and then execute the following code:
###Code
# Create a 5 x 2 array of random numbers
demo_g = np.random.random((5,2))
# Find all values in the matrix less than 0.5
demo_g < 0.5
###Output
_____no_output_____
###Markdown
Notice the return value is an array of boolean values. `True` indicates the value is less than 0.5; `False` indicates it is greater than or equal to 0.5. We can use this boolean array as an index into the same array to return only those values that satisfy the boolean condition. Try executing the following code:
###Code
demo_g[demo_g < 0.5]
###Output
_____no_output_____
###Markdown
Or alternatively:
###Code
sig_list = demo_g < 0.5
demo_g[sig_list]
###Output
_____no_output_____
###Markdown
Task 3a: Boolean IndexingIn the practice notebook perform the following:+ Experiment with the following boolean conditionals to generate boolean arrays for indexing: + Greater than + Less than + Equals + Combine two or more of the above with: + or `|` + and `&`You can create arrays or use existing ones. 3.2 Using exact indicesAlternatively, if there are specific elements from the array that we want to retrieve, we can provide the specific numeric indices. For example, review and then execute the following code:
###Code
# Generate an array of 500 random numbers
demo_f = np.random.random((500))
# Retrieve 5 numbers from the array by index
demo_f[[0,100,200,300,400]]
###Output
_____no_output_____
###Markdown
4. Intermission -- Getting HelpPython has a built-in function, `help()`, that we can call on any object (anything) to find out more about it. As we move deeper into the functions provided by most packages, we often need to know exactly what a given function expects as arguments.The output of these `help()` calls can be long. Try executing the following help call for the `np.array` function:
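Before running it, one aside of my own (not from the tutorial): NumPy also provides `np.info()`, which prints a condensed description of a NumPy object and is often easier to skim than the full `help()` output:
```python
import numpy as np

# Print a condensed description of np.transpose
np.info(np.transpose)
```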
###Code
# Call help on anything from a package.
help(np.array)
###Output
Help on built-in function array in module numpy.core.multiarray:
array(...)
array(object, dtype=None, copy=True, order='K', subok=False, ndmin=0)
Create an array.
Parameters
----------
object : array_like
An array, any object exposing the array interface, an object whose
__array__ method returns an array, or any (nested) sequence.
dtype : data-type, optional
The desired data-type for the array. If not given, then the type will
be determined as the minimum type required to hold the objects in the
sequence. This argument can only be used to 'upcast' the array. For
downcasting, use the .astype(t) method.
copy : bool, optional
If true (default), then the object is copied. Otherwise, a copy will
only be made if __array__ returns a copy, if obj is a nested sequence,
or if a copy is needed to satisfy any of the other requirements
(`dtype`, `order`, etc.).
order : {'K', 'A', 'C', 'F'}, optional
Specify the memory layout of the array. If object is not an array, the
newly created array will be in C order (row major) unless 'F' is
specified, in which case it will be in Fortran order (column major).
If object is an array the following holds.
===== ========= ===================================================
order no copy copy=True
===== ========= ===================================================
'K' unchanged F & C order preserved, otherwise most similar order
'A' unchanged F order if input is F and not C, otherwise C order
'C' C order C order
'F' F order F order
===== ========= ===================================================
When ``copy=False`` and a copy is made for other reasons, the result is
the same as if ``copy=True``, with some exceptions for `A`, see the
Notes section. The default order is 'K'.
subok : bool, optional
If True, then sub-classes will be passed-through, otherwise
the returned array will be forced to be a base-class array (default).
ndmin : int, optional
Specifies the minimum number of dimensions that the resulting
array should have. Ones will be pre-pended to the shape as
needed to meet this requirement.
Returns
-------
out : ndarray
An array object satisfying the specified requirements.
See Also
--------
empty_like : Return an empty array with shape and type of input.
ones_like : Return an array of ones with shape and type of input.
zeros_like : Return an array of zeros with shape and type of input.
full_like : Return a new array with shape of input filled with value.
empty : Return a new uninitialized array.
ones : Return a new array setting values to one.
zeros : Return a new array setting values to zero.
full : Return a new array of given shape filled with value.
Notes
-----
When order is 'A' and `object` is an array in neither 'C' nor 'F' order,
and a copy is forced by a change in dtype, then the order of the result is
not necessarily 'C' as expected. This is likely a bug.
Examples
--------
>>> np.array([1, 2, 3])
array([1, 2, 3])
Upcasting:
>>> np.array([1, 2, 3.0])
array([ 1., 2., 3.])
More than one dimension:
>>> np.array([[1, 2], [3, 4]])
array([[1, 2],
[3, 4]])
Minimum dimensions 2:
>>> np.array([1, 2, 3], ndmin=2)
array([[1, 2, 3]])
Type provided:
>>> np.array([1, 2, 3], dtype=complex)
array([ 1.+0.j, 2.+0.j, 3.+0.j])
Data-type consisting of more than one element:
>>> x = np.array([(1,2),(3,4)],dtype=[('a','<i4'),('b','<i4')])
>>> x['a']
array([1, 3])
Creating an array from sub-classes:
>>> np.array(np.mat('1 2; 3 4'))
array([[1, 2],
[3, 4]])
>>> np.array(np.mat('1 2; 3 4'), subok=True)
matrix([[1, 2],
[3, 4]])
###Markdown
Additionally, we can get help about an object that we created! Execute the following code to try it out:
###Code
# Call help on an object we created.
x = np.array([1, 2, 3, 4])
help(x)
###Output
Help on ndarray object:
class ndarray(builtins.object)
| ndarray(shape, dtype=float, buffer=None, offset=0,
| strides=None, order=None)
|
| An array object represents a multidimensional, homogeneous array
| of fixed-size items. An associated data-type object describes the
| format of each element in the array (its byte-order, how many bytes it
| occupies in memory, whether it is an integer, a floating point number,
| or something else, etc.)
|
| Arrays should be constructed using `array`, `zeros` or `empty` (refer
| to the See Also section below). The parameters given here refer to
| a low-level method (`ndarray(...)`) for instantiating an array.
|
| For more information, refer to the `numpy` module and examine the
| methods and attributes of an array.
|
| Parameters
| ----------
| (for the __new__ method; see Notes below)
|
| shape : tuple of ints
| Shape of created array.
| dtype : data-type, optional
| Any object that can be interpreted as a numpy data type.
| buffer : object exposing buffer interface, optional
| Used to fill the array with data.
| offset : int, optional
| Offset of array data in buffer.
| strides : tuple of ints, optional
| Strides of data in memory.
| order : {'C', 'F'}, optional
| Row-major (C-style) or column-major (Fortran-style) order.
|
| Attributes
| ----------
| T : ndarray
| Transpose of the array.
| data : buffer
| The array's elements, in memory.
| dtype : dtype object
| Describes the format of the elements in the array.
| flags : dict
| Dictionary containing information related to memory use, e.g.,
| 'C_CONTIGUOUS', 'OWNDATA', 'WRITEABLE', etc.
| flat : numpy.flatiter object
| Flattened version of the array as an iterator. The iterator
| allows assignments, e.g., ``x.flat = 3`` (See `ndarray.flat` for
| assignment examples; TODO).
| imag : ndarray
| Imaginary part of the array.
| real : ndarray
| Real part of the array.
| size : int
| Number of elements in the array.
| itemsize : int
| The memory use of each array element in bytes.
| nbytes : int
| The total number of bytes required to store the array data,
| i.e., ``itemsize * size``.
| ndim : int
| The array's number of dimensions.
| shape : tuple of ints
| Shape of the array.
| strides : tuple of ints
| The step-size required to move from one element to the next in
| memory. For example, a contiguous ``(3, 4)`` array of type
| ``int16`` in C-order has strides ``(8, 2)``. This implies that
| to move from element to element in memory requires jumps of 2 bytes.
| To move from row-to-row, one needs to jump 8 bytes at a time
| (``2 * 4``).
| ctypes : ctypes object
| Class containing properties of the array needed for interaction
| with ctypes.
| base : ndarray
| If the array is a view into another array, that array is its `base`
| (unless that array is also a view). The `base` array is where the
| array data is actually stored.
|
| See Also
| --------
| array : Construct an array.
| zeros : Create an array, each element of which is zero.
| empty : Create an array, but leave its allocated memory unchanged (i.e.,
| it contains "garbage").
| dtype : Create a data-type.
|
| Notes
| -----
| There are two modes of creating an array using ``__new__``:
|
| 1. If `buffer` is None, then only `shape`, `dtype`, and `order`
| are used.
| 2. If `buffer` is an object exposing the buffer interface, then
| all keywords are interpreted.
|
| No ``__init__`` method is needed because the array is fully initialized
| after the ``__new__`` method.
|
| Examples
| --------
| These examples illustrate the low-level `ndarray` constructor. Refer
| to the `See Also` section above for easier ways of constructing an
| ndarray.
|
| First mode, `buffer` is None:
|
| >>> np.ndarray(shape=(2,2), dtype=float, order='F')
| array([[ -1.13698227e+002, 4.25087011e-303],
| [ 2.88528414e-306, 3.27025015e-309]]) #random
|
| Second mode:
|
| >>> np.ndarray((2,), buffer=np.array([1,2,3]),
| ... offset=np.int_().itemsize,
| ... dtype=int) # offset = 1*itemsize, i.e. skip first element
| array([2, 3])
|
| Methods defined here:
|
| __abs__(self, /)
| abs(self)
|
| __add__(self, value, /)
| Return self+value.
|
| __and__(self, value, /)
| Return self&value.
|
| __array__(...)
| a.__array__(|dtype) -> reference if type unchanged, copy otherwise.
|
| Returns either a new reference to self if dtype is not given or a new array
| of provided data type if dtype is different from the current dtype of the
| array.
|
| __array_prepare__(...)
| a.__array_prepare__(obj) -> Object of same type as ndarray object obj.
|
| __array_ufunc__(...)
|
| __array_wrap__(...)
| a.__array_wrap__(obj) -> Object of same type as ndarray object a.
|
| __bool__(self, /)
| self != 0
|
| __complex__(...)
|
| __contains__(self, key, /)
| Return key in self.
|
| __copy__(...)
| a.__copy__()
|
| Used if :func:`copy.copy` is called on an array. Returns a copy of the array.
|
| Equivalent to ``a.copy(order='K')``.
|
| __deepcopy__(...)
| a.__deepcopy__(memo, /) -> Deep copy of array.
|
| Used if :func:`copy.deepcopy` is called on an array.
|
| __delitem__(self, key, /)
| Delete self[key].
|
| __divmod__(self, value, /)
| Return divmod(self, value).
|
| __eq__(self, value, /)
| Return self==value.
|
| __float__(self, /)
| float(self)
|
| __floordiv__(self, value, /)
| Return self//value.
|
| __format__(...)
| Default object formatter.
|
| __ge__(self, value, /)
| Return self>=value.
|
| __getitem__(self, key, /)
| Return self[key].
|
| __gt__(self, value, /)
| Return self>value.
|
| __iadd__(self, value, /)
| Return self+=value.
|
| __iand__(self, value, /)
| Return self&=value.
|
| __ifloordiv__(self, value, /)
| Return self//=value.
|
| __ilshift__(self, value, /)
| Return self<<=value.
|
| __imatmul__(self, value, /)
| Return self@=value.
|
| __imod__(self, value, /)
| Return self%=value.
|
| __imul__(self, value, /)
| Return self*=value.
|
| __index__(self, /)
| Return self converted to an integer, if self is suitable for use as an index into a list.
|
| __int__(self, /)
| int(self)
|
| __invert__(self, /)
| ~self
|
| __ior__(self, value, /)
| Return self|=value.
|
| __ipow__(self, value, /)
| Return self**=value.
|
| __irshift__(self, value, /)
| Return self>>=value.
|
| __isub__(self, value, /)
| Return self-=value.
|
| __iter__(self, /)
| Implement iter(self).
|
| __itruediv__(self, value, /)
| Return self/=value.
|
| __ixor__(self, value, /)
| Return self^=value.
|
| __le__(self, value, /)
| Return self<=value.
|
| __len__(self, /)
| Return len(self).
|
| __lshift__(self, value, /)
| Return self<<value.
|
| __lt__(self, value, /)
| Return self<value.
|
| __matmul__(self, value, /)
| Return self@value.
|
| __mod__(self, value, /)
| Return self%value.
|
| __mul__(self, value, /)
| Return self*value.
|
| __ne__(self, value, /)
| Return self!=value.
|
| __neg__(self, /)
| -self
|
| __or__(self, value, /)
| Return self|value.
|
| __pos__(self, /)
| +self
|
| __pow__(self, value, mod=None, /)
| Return pow(self, value, mod).
|
| __radd__(self, value, /)
| Return value+self.
|
| __rand__(self, value, /)
| Return value&self.
|
| __rdivmod__(self, value, /)
| Return divmod(value, self).
|
| __reduce__(...)
| a.__reduce__()
|
| For pickling.
|
| __repr__(self, /)
| Return repr(self).
|
| __rfloordiv__(self, value, /)
| Return value//self.
|
| __rlshift__(self, value, /)
| Return value<<self.
|
| __rmatmul__(self, value, /)
| Return value@self.
|
| __rmod__(self, value, /)
| Return value%self.
|
| __rmul__(self, value, /)
| Return value*self.
|
| __ror__(self, value, /)
| Return value|self.
|
| __rpow__(self, value, mod=None, /)
| Return pow(value, self, mod).
|
| __rrshift__(self, value, /)
| Return value>>self.
|
| __rshift__(self, value, /)
| Return self>>value.
|
| __rsub__(self, value, /)
| Return value-self.
|
| __rtruediv__(self, value, /)
| Return value/self.
|
| __rxor__(self, value, /)
| Return value^self.
|
| __setitem__(self, key, value, /)
| Set self[key] to value.
|
| __setstate__(...)
| a.__setstate__(state, /)
|
| For unpickling.
|
| The `state` argument must be a sequence that contains the following
| elements:
|
| Parameters
| ----------
| version : int
| optional pickle version. If omitted defaults to 0.
| shape : tuple
| dtype : data-type
| isFortran : bool
| rawdata : string or list
| a binary string with the data (or a list if 'a' is an object array)
|
| __sizeof__(...)
| Size of object in memory, in bytes.
|
| __str__(self, /)
| Return str(self).
|
| __sub__(self, value, /)
| Return self-value.
|
| __truediv__(self, value, /)
| Return self/value.
|
| __xor__(self, value, /)
| Return self^value.
|
| all(...)
| a.all(axis=None, out=None, keepdims=False)
|
| Returns True if all elements evaluate to True.
|
| Refer to `numpy.all` for full documentation.
|
| See Also
| --------
| numpy.all : equivalent function
|
| any(...)
| a.any(axis=None, out=None, keepdims=False)
|
| Returns True if any of the elements of `a` evaluate to True.
|
| Refer to `numpy.any` for full documentation.
|
| See Also
| --------
| numpy.any : equivalent function
|
| argmax(...)
| a.argmax(axis=None, out=None)
|
| Return indices of the maximum values along the given axis.
|
| Refer to `numpy.argmax` for full documentation.
|
| See Also
| --------
| numpy.argmax : equivalent function
|
| argmin(...)
| a.argmin(axis=None, out=None)
|
| Return indices of the minimum values along the given axis of `a`.
|
| Refer to `numpy.argmin` for detailed documentation.
|
| See Also
| --------
| numpy.argmin : equivalent function
|
| argpartition(...)
| a.argpartition(kth, axis=-1, kind='introselect', order=None)
|
| Returns the indices that would partition this array.
|
| Refer to `numpy.argpartition` for full documentation.
|
| .. versionadded:: 1.8.0
|
| See Also
| --------
| numpy.argpartition : equivalent function
|
| argsort(...)
| a.argsort(axis=-1, kind='quicksort', order=None)
|
| Returns the indices that would sort this array.
|
| Refer to `numpy.argsort` for full documentation.
|
| See Also
| --------
| numpy.argsort : equivalent function
|
| astype(...)
| a.astype(dtype, order='K', casting='unsafe', subok=True, copy=True)
|
| Copy of the array, cast to a specified type.
|
| Parameters
| ----------
| dtype : str or dtype
| Typecode or data-type to which the array is cast.
| order : {'C', 'F', 'A', 'K'}, optional
| Controls the memory layout order of the result.
| 'C' means C order, 'F' means Fortran order, 'A'
| means 'F' order if all the arrays are Fortran contiguous,
| 'C' order otherwise, and 'K' means as close to the
| order the array elements appear in memory as possible.
| Default is 'K'.
| casting : {'no', 'equiv', 'safe', 'same_kind', 'unsafe'}, optional
| Controls what kind of data casting may occur. Defaults to 'unsafe'
| for backwards compatibility.
|
| * 'no' means the data types should not be cast at all.
| * 'equiv' means only byte-order changes are allowed.
| * 'safe' means only casts which can preserve values are allowed.
| * 'same_kind' means only safe casts or casts within a kind,
| like float64 to float32, are allowed.
| * 'unsafe' means any data conversions may be done.
| subok : bool, optional
| If True, then sub-classes will be passed-through (default), otherwise
| the returned array will be forced to be a base-class array.
| copy : bool, optional
| By default, astype always returns a newly allocated array. If this
| is set to false, and the `dtype`, `order`, and `subok`
| requirements are satisfied, the input array is returned instead
| of a copy.
|
| Returns
| -------
| arr_t : ndarray
| Unless `copy` is False and the other conditions for returning the input
| array are satisfied (see description for `copy` input parameter), `arr_t`
| is a new array of the same shape as the input array, with dtype, order
| given by `dtype`, `order`.
|
| Notes
| -----
| Starting in NumPy 1.9, astype method now returns an error if the string
| dtype to cast to is not long enough in 'safe' casting mode to hold the max
| value of integer/float array that is being casted. Previously the casting
| was allowed even if the result was truncated.
|
| Raises
| ------
| ComplexWarning
| When casting from complex to float or int. To avoid this,
| one should use ``a.real.astype(t)``.
|
| Examples
| --------
| >>> x = np.array([1, 2, 2.5])
| >>> x
| array([ 1. , 2. , 2.5])
|
| >>> x.astype(int)
| array([1, 2, 2])
|
| byteswap(...)
| a.byteswap(inplace=False)
|
| Swap the bytes of the array elements
|
| Toggle between low-endian and big-endian data representation by
| returning a byteswapped array, optionally swapped in-place.
|
| Parameters
| ----------
| inplace : bool, optional
| If ``True``, swap bytes in-place, default is ``False``.
|
| Returns
| -------
| out : ndarray
| The byteswapped array. If `inplace` is ``True``, this is
| a view to self.
|
| Examples
| --------
| >>> A = np.array([1, 256, 8755], dtype=np.int16)
| >>> map(hex, A)
| ['0x1', '0x100', '0x2233']
| >>> A.byteswap(inplace=True)
| array([ 256, 1, 13090], dtype=int16)
| >>> map(hex, A)
| ['0x100', '0x1', '0x3322']
|
| Arrays of strings are not swapped
|
| >>> A = np.array(['ceg', 'fac'])
| >>> A.byteswap()
| array(['ceg', 'fac'],
| dtype='|S3')
|
| choose(...)
| a.choose(choices, out=None, mode='raise')
|
| Use an index array to construct a new array from a set of choices.
|
| Refer to `numpy.choose` for full documentation.
|
| See Also
| --------
| numpy.choose : equivalent function
|
| clip(...)
| a.clip(min=None, max=None, out=None)
|
| Return an array whose values are limited to ``[min, max]``.
| One of max or min must be given.
|
| Refer to `numpy.clip` for full documentation.
|
| See Also
| --------
| numpy.clip : equivalent function
|
| compress(...)
| a.compress(condition, axis=None, out=None)
|
| Return selected slices of this array along given axis.
|
| Refer to `numpy.compress` for full documentation.
|
| See Also
| --------
| numpy.compress : equivalent function
|
| conj(...)
| a.conj()
|
| Complex-conjugate all elements.
|
| Refer to `numpy.conjugate` for full documentation.
|
| See Also
| --------
| numpy.conjugate : equivalent function
|
| conjugate(...)
| a.conjugate()
|
| Return the complex conjugate, element-wise.
|
| Refer to `numpy.conjugate` for full documentation.
|
| See Also
| --------
| numpy.conjugate : equivalent function
|
| copy(...)
| a.copy(order='C')
|
| Return a copy of the array.
|
| Parameters
| ----------
| order : {'C', 'F', 'A', 'K'}, optional
| Controls the memory layout of the copy. 'C' means C-order,
| 'F' means F-order, 'A' means 'F' if `a` is Fortran contiguous,
| 'C' otherwise. 'K' means match the layout of `a` as closely
| as possible. (Note that this function and :func:`numpy.copy` are very
| similar, but have different default values for their order=
| arguments.)
|
| See also
| --------
| numpy.copy
| numpy.copyto
|
| Examples
| --------
| >>> x = np.array([[1,2,3],[4,5,6]], order='F')
|
| >>> y = x.copy()
|
| >>> x.fill(0)
|
| >>> x
| array([[0, 0, 0],
| [0, 0, 0]])
|
| >>> y
| array([[1, 2, 3],
| [4, 5, 6]])
|
| >>> y.flags['C_CONTIGUOUS']
| True
|
| cumprod(...)
| a.cumprod(axis=None, dtype=None, out=None)
|
| Return the cumulative product of the elements along the given axis.
|
| Refer to `numpy.cumprod` for full documentation.
|
| See Also
| --------
| numpy.cumprod : equivalent function
|
| cumsum(...)
| a.cumsum(axis=None, dtype=None, out=None)
|
| Return the cumulative sum of the elements along the given axis.
|
| Refer to `numpy.cumsum` for full documentation.
|
| See Also
| --------
| numpy.cumsum : equivalent function
|
| diagonal(...)
| a.diagonal(offset=0, axis1=0, axis2=1)
|
| Return specified diagonals. In NumPy 1.9 the returned array is a
| read-only view instead of a copy as in previous NumPy versions. In
| a future version the read-only restriction will be removed.
|
| Refer to :func:`numpy.diagonal` for full documentation.
|
| See Also
| --------
| numpy.diagonal : equivalent function
|
| dot(...)
| a.dot(b, out=None)
|
| Dot product of two arrays.
|
| Refer to `numpy.dot` for full documentation.
|
| See Also
| --------
| numpy.dot : equivalent function
|
| Examples
| --------
| >>> a = np.eye(2)
| >>> b = np.ones((2, 2)) * 2
| >>> a.dot(b)
| array([[ 2., 2.],
| [ 2., 2.]])
|
| This array method can be conveniently chained:
|
| >>> a.dot(b).dot(b)
| array([[ 8., 8.],
| [ 8., 8.]])
|
| dump(...)
| a.dump(file)
|
| Dump a pickle of the array to the specified file.
| The array can be read back with pickle.load or numpy.load.
|
| Parameters
| ----------
| file : str
| A string naming the dump file.
|
| dumps(...)
| a.dumps()
|
| Returns the pickle of the array as a string.
| pickle.loads or numpy.loads will convert the string back to an array.
|
| Parameters
| ----------
| None
|
| fill(...)
| a.fill(value)
|
| Fill the array with a scalar value.
|
| Parameters
| ----------
| value : scalar
| All elements of `a` will be assigned this value.
|
| Examples
| --------
| >>> a = np.array([1, 2])
| >>> a.fill(0)
| >>> a
| array([0, 0])
| >>> a = np.empty(2)
| >>> a.fill(1)
| >>> a
| array([ 1., 1.])
|
| flatten(...)
| a.flatten(order='C')
|
| Return a copy of the array collapsed into one dimension.
|
| Parameters
| ----------
| order : {'C', 'F', 'A', 'K'}, optional
| 'C' means to flatten in row-major (C-style) order.
| 'F' means to flatten in column-major (Fortran-
| style) order. 'A' means to flatten in column-major
| order if `a` is Fortran *contiguous* in memory,
| row-major order otherwise. 'K' means to flatten
| `a` in the order the elements occur in memory.
| The default is 'C'.
|
| Returns
| -------
| y : ndarray
| A copy of the input array, flattened to one dimension.
|
| See Also
| --------
| ravel : Return a flattened array.
| flat : A 1-D flat iterator over the array.
|
| Examples
| --------
| >>> a = np.array([[1,2], [3,4]])
| >>> a.flatten()
| array([1, 2, 3, 4])
| >>> a.flatten('F')
| array([1, 3, 2, 4])
|
| getfield(...)
| a.getfield(dtype, offset=0)
|
| Returns a field of the given array as a certain type.
|
| A field is a view of the array data with a given data-type. The values in
| the view are determined by the given type and the offset into the current
| array in bytes. The offset needs to be such that the view dtype fits in the
| array dtype; for example an array of dtype complex128 has 16-byte elements.
| If taking a view with a 32-bit integer (4 bytes), the offset needs to be
| between 0 and 12 bytes.
|
| Parameters
| ----------
| dtype : str or dtype
| The data type of the view. The dtype size of the view can not be larger
| than that of the array itself.
| offset : int
| Number of bytes to skip before beginning the element view.
|
| Examples
| --------
| >>> x = np.diag([1.+1.j]*2)
| >>> x[1, 1] = 2 + 4.j
| >>> x
| array([[ 1.+1.j, 0.+0.j],
| [ 0.+0.j, 2.+4.j]])
| >>> x.getfield(np.float64)
| array([[ 1., 0.],
| [ 0., 2.]])
|
| By choosing an offset of 8 bytes we can select the complex part of the
| array for our view:
|
| >>> x.getfield(np.float64, offset=8)
| array([[ 1., 0.],
| [ 0., 4.]])
|
| item(...)
| a.item(*args)
|
| Copy an element of an array to a standard Python scalar and return it.
|
| Parameters
| ----------
| \*args : Arguments (variable number and type)
|
| * none: in this case, the method only works for arrays
| with one element (`a.size == 1`), which element is
| copied into a standard Python scalar object and returned.
|
| * int_type: this argument is interpreted as a flat index into
| the array, specifying which element to copy and return.
|
| * tuple of int_types: functions as does a single int_type argument,
| except that the argument is interpreted as an nd-index into the
| array.
|
| Returns
| -------
| z : Standard Python scalar object
| A copy of the specified element of the array as a suitable
| Python scalar
|
| Notes
| -----
| When the data type of `a` is longdouble or clongdouble, item() returns
| a scalar array object because there is no available Python scalar that
| would not lose information. Void arrays return a buffer object for item(),
| unless fields are defined, in which case a tuple is returned.
|
| `item` is very similar to a[args], except, instead of an array scalar,
| a standard Python scalar is returned. This can be useful for speeding up
| access to elements of the array and doing arithmetic on elements of the
| array using Python's optimized math.
|
| Examples
| --------
| >>> x = np.random.randint(9, size=(3, 3))
| >>> x
| array([[3, 1, 7],
| [2, 8, 3],
| [8, 5, 3]])
| >>> x.item(3)
| 2
| >>> x.item(7)
| 5
| >>> x.item((0, 1))
| 1
| >>> x.item((2, 2))
| 3
|
| itemset(...)
| a.itemset(*args)
|
| Insert scalar into an array (scalar is cast to array's dtype, if possible)
|
| There must be at least 1 argument, and define the last argument
| as *item*. Then, ``a.itemset(*args)`` is equivalent to but faster
| than ``a[args] = item``. The item should be a scalar value and `args`
| must select a single item in the array `a`.
|
| Parameters
| ----------
| \*args : Arguments
| If one argument: a scalar, only used in case `a` is of size 1.
| If two arguments: the last argument is the value to be set
| and must be a scalar, the first argument specifies a single array
| element location. It is either an int or a tuple.
|
| Notes
| -----
| Compared to indexing syntax, `itemset` provides some speed increase
| for placing a scalar into a particular location in an `ndarray`,
| if you must do this. However, generally this is discouraged:
| among other problems, it complicates the appearance of the code.
| Also, when using `itemset` (and `item`) inside a loop, be sure
| to assign the methods to a local variable to avoid the attribute
| look-up at each loop iteration.
|
| Examples
| --------
| >>> x = np.random.randint(9, size=(3, 3))
| >>> x
| array([[3, 1, 7],
| [2, 8, 3],
| [8, 5, 3]])
| >>> x.itemset(4, 0)
| >>> x.itemset((2, 2), 9)
| >>> x
| array([[3, 1, 7],
| [2, 0, 3],
| [8, 5, 9]])
|
| max(...)
| a.max(axis=None, out=None, keepdims=False)
|
| Return the maximum along a given axis.
|
| Refer to `numpy.amax` for full documentation.
|
| See Also
| --------
| numpy.amax : equivalent function
|
| mean(...)
| a.mean(axis=None, dtype=None, out=None, keepdims=False)
|
| Returns the average of the array elements along given axis.
|
| Refer to `numpy.mean` for full documentation.
|
| See Also
| --------
| numpy.mean : equivalent function
|
| min(...)
| a.min(axis=None, out=None, keepdims=False)
|
| Return the minimum along a given axis.
|
| Refer to `numpy.amin` for full documentation.
|
| See Also
| --------
| numpy.amin : equivalent function
|
| newbyteorder(...)
| arr.newbyteorder(new_order='S')
|
| Return the array with the same data viewed with a different byte order.
|
| Equivalent to::
|
| arr.view(arr.dtype.newbytorder(new_order))
|
| Changes are also made in all fields and sub-arrays of the array data
| type.
|
|
|
| Parameters
| ----------
| new_order : string, optional
| Byte order to force; a value from the byte order specifications
| below. `new_order` codes can be any of:
|
| * 'S' - swap dtype from current to opposite endian
| * {'<', 'L'} - little endian
| * {'>', 'B'} - big endian
| * {'=', 'N'} - native order
| * {'|', 'I'} - ignore (no change to byte order)
|
| The default value ('S') results in swapping the current
| byte order. The code does a case-insensitive check on the first
| letter of `new_order` for the alternatives above. For example,
| any of 'B' or 'b' or 'biggish' are valid to specify big-endian.
|
|
| Returns
| -------
| new_arr : array
| New array object with the dtype reflecting given change to the
| byte order.
|
| nonzero(...)
| a.nonzero()
|
| Return the indices of the elements that are non-zero.
|
| Refer to `numpy.nonzero` for full documentation.
|
| See Also
| --------
| numpy.nonzero : equivalent function
|
| partition(...)
| a.partition(kth, axis=-1, kind='introselect', order=None)
|
| Rearranges the elements in the array in such a way that the value of the
| element in kth position is in the position it would be in a sorted array.
| All elements smaller than the kth element are moved before this element and
| all equal or greater are moved behind it. The ordering of the elements in
| the two partitions is undefined.
|
| .. versionadded:: 1.8.0
|
| Parameters
| ----------
| kth : int or sequence of ints
| Element index to partition by. The kth element value will be in its
| final sorted position and all smaller elements will be moved before it
| and all equal or greater elements behind it.
| The order of all elements in the partitions is undefined.
| If provided with a sequence of kth it will partition all elements
| indexed by kth of them into their sorted position at once.
| axis : int, optional
| Axis along which to sort. Default is -1, which means sort along the
| last axis.
| kind : {'introselect'}, optional
| Selection algorithm. Default is 'introselect'.
| order : str or list of str, optional
| When `a` is an array with fields defined, this argument specifies
| which fields to compare first, second, etc. A single field can
| be specified as a string, and not all fields need to be specified,
| but unspecified fields will still be used, in the order in which
| they come up in the dtype, to break ties.
|
| See Also
| --------
| numpy.partition : Return a parititioned copy of an array.
| argpartition : Indirect partition.
| sort : Full sort.
|
| Notes
| -----
| See ``np.partition`` for notes on the different algorithms.
|
| Examples
| --------
| >>> a = np.array([3, 4, 2, 1])
| >>> a.partition(3)
| >>> a
| array([2, 1, 3, 4])
|
| >>> a.partition((1, 3))
| array([1, 2, 3, 4])
|
| prod(...)
| a.prod(axis=None, dtype=None, out=None, keepdims=False)
|
| Return the product of the array elements over the given axis
|
| Refer to `numpy.prod` for full documentation.
|
| See Also
| --------
| numpy.prod : equivalent function
|
| ptp(...)
| a.ptp(axis=None, out=None, keepdims=False)
|
| Peak to peak (maximum - minimum) value along a given axis.
|
| Refer to `numpy.ptp` for full documentation.
|
| See Also
| --------
| numpy.ptp : equivalent function
|
| put(...)
| a.put(indices, values, mode='raise')
|
| Set ``a.flat[n] = values[n]`` for all `n` in indices.
|
| Refer to `numpy.put` for full documentation.
|
| See Also
| --------
| numpy.put : equivalent function
|
| ravel(...)
| a.ravel([order])
|
| Return a flattened array.
|
| Refer to `numpy.ravel` for full documentation.
|
| See Also
| --------
| numpy.ravel : equivalent function
|
| ndarray.flat : a flat iterator on the array.
|
| repeat(...)
| a.repeat(repeats, axis=None)
|
| Repeat elements of an array.
|
| Refer to `numpy.repeat` for full documentation.
|
| See Also
| --------
| numpy.repeat : equivalent function
|
| reshape(...)
| a.reshape(shape, order='C')
|
| Returns an array containing the same data with a new shape.
|
| Refer to `numpy.reshape` for full documentation.
|
| See Also
| --------
| numpy.reshape : equivalent function
|
| Notes
| -----
| Unlike the free function `numpy.reshape`, this method on `ndarray` allows
| the elements of the shape parameter to be passed in as separate arguments.
| For example, ``a.reshape(10, 11)`` is equivalent to
| ``a.reshape((10, 11))``.
|
| resize(...)
| a.resize(new_shape, refcheck=True)
|
| Change shape and size of array in-place.
|
| Parameters
| ----------
| new_shape : tuple of ints, or `n` ints
| Shape of resized array.
| refcheck : bool, optional
| If False, reference count will not be checked. Default is True.
|
| Returns
| -------
| None
|
| Raises
| ------
| ValueError
| If `a` does not own its own data or references or views to it exist,
| and the data memory must be changed.
| PyPy only: will always raise if the data memory must be changed, since
| there is no reliable way to determine if references or views to it
| exist.
|
| SystemError
| If the `order` keyword argument is specified. This behaviour is a
| bug in NumPy.
|
| See Also
| --------
| resize : Return a new array with the specified shape.
|
| Notes
| -----
| This reallocates space for the data area if necessary.
|
| Only contiguous arrays (data elements consecutive in memory) can be
| resized.
|
| The purpose of the reference count check is to make sure you
| do not use this array as a buffer for another Python object and then
| reallocate the memory. However, reference counts can increase in
| other ways so if you are sure that you have not shared the memory
| for this array with another Python object, then you may safely set
| `refcheck` to False.
|
| Examples
| --------
| Shrinking an array: array is flattened (in the order that the data are
| stored in memory), resized, and reshaped:
|
| >>> a = np.array([[0, 1], [2, 3]], order='C')
| >>> a.resize((2, 1))
| >>> a
| array([[0],
| [1]])
|
| >>> a = np.array([[0, 1], [2, 3]], order='F')
| >>> a.resize((2, 1))
| >>> a
| array([[0],
| [2]])
|
| Enlarging an array: as above, but missing entries are filled with zeros:
|
| >>> b = np.array([[0, 1], [2, 3]])
| >>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple
| >>> b
| array([[0, 1, 2],
| [3, 0, 0]])
|
| Referencing an array prevents resizing...
|
| >>> c = a
| >>> a.resize((1, 1))
| Traceback (most recent call last):
| ...
| ValueError: cannot resize an array that has been referenced ...
|
| Unless `refcheck` is False:
|
| >>> a.resize((1, 1), refcheck=False)
| >>> a
| array([[0]])
| >>> c
| array([[0]])
|
| round(...)
| a.round(decimals=0, out=None)
|
| Return `a` with each element rounded to the given number of decimals.
|
| Refer to `numpy.around` for full documentation.
|
| See Also
| --------
| numpy.around : equivalent function
|
| searchsorted(...)
| a.searchsorted(v, side='left', sorter=None)
|
| Find indices where elements of v should be inserted in a to maintain order.
|
| For full documentation, see `numpy.searchsorted`
|
| See Also
| --------
| numpy.searchsorted : equivalent function
|
| setfield(...)
| a.setfield(val, dtype, offset=0)
|
| Put a value into a specified place in a field defined by a data-type.
|
| Place `val` into `a`'s field defined by `dtype` and beginning `offset`
| bytes into the field.
|
| Parameters
| ----------
| val : object
| Value to be placed in field.
| dtype : dtype object
| Data-type of the field in which to place `val`.
| offset : int, optional
| The number of bytes into the field at which to place `val`.
|
| Returns
| -------
| None
|
| See Also
| --------
| getfield
|
| Examples
| --------
| >>> x = np.eye(3)
| >>> x.getfield(np.float64)
| array([[ 1., 0., 0.],
| [ 0., 1., 0.],
| [ 0., 0., 1.]])
| >>> x.setfield(3, np.int32)
| >>> x.getfield(np.int32)
| array([[3, 3, 3],
| [3, 3, 3],
| [3, 3, 3]])
| >>> x
| array([[ 1.00000000e+000, 1.48219694e-323, 1.48219694e-323],
| [ 1.48219694e-323, 1.00000000e+000, 1.48219694e-323],
| [ 1.48219694e-323, 1.48219694e-323, 1.00000000e+000]])
| >>> x.setfield(np.eye(3), np.int32)
| >>> x
| array([[ 1., 0., 0.],
| [ 0., 1., 0.],
| [ 0., 0., 1.]])
|
| setflags(...)
| a.setflags(write=None, align=None, uic=None)
|
| Set array flags WRITEABLE, ALIGNED, (WRITEBACKIFCOPY and UPDATEIFCOPY),
| respectively.
|
| These Boolean-valued flags affect how numpy interprets the memory
| area used by `a` (see Notes below). The ALIGNED flag can only
| be set to True if the data is actually aligned according to the type.
| The WRITEBACKIFCOPY and (deprecated) UPDATEIFCOPY flags can never be set
| to True. The flag WRITEABLE can only be set to True if the array owns its
| own memory, or the ultimate owner of the memory exposes a writeable buffer
| interface, or is a string. (The exception for string is made so that
| unpickling can be done without copying memory.)
|
| Parameters
| ----------
| write : bool, optional
| Describes whether or not `a` can be written to.
| align : bool, optional
| Describes whether or not `a` is aligned properly for its type.
| uic : bool, optional
| Describes whether or not `a` is a copy of another "base" array.
|
| Notes
| -----
| Array flags provide information about how the memory area used
| for the array is to be interpreted. There are 7 Boolean flags
| in use, only four of which can be changed by the user:
| WRITEBACKIFCOPY, UPDATEIFCOPY, WRITEABLE, and ALIGNED.
|
| WRITEABLE (W) the data area can be written to;
|
| ALIGNED (A) the data and strides are aligned appropriately for the hardware
| (as determined by the compiler);
|
| UPDATEIFCOPY (U) (deprecated), replaced by WRITEBACKIFCOPY;
|
| WRITEBACKIFCOPY (X) this array is a copy of some other array (referenced
| by .base). When the C-API function PyArray_ResolveWritebackIfCopy is
| called, the base array will be updated with the contents of this array.
|
| All flags can be accessed using the single (upper case) letter as well
| as the full name.
|
| Examples
| --------
| >>> y
| array([[3, 1, 7],
| [2, 0, 0],
| [8, 5, 9]])
| >>> y.flags
| C_CONTIGUOUS : True
| F_CONTIGUOUS : False
| OWNDATA : True
| WRITEABLE : True
| ALIGNED : True
| WRITEBACKIFCOPY : False
| UPDATEIFCOPY : False
| >>> y.setflags(write=0, align=0)
| >>> y.flags
| C_CONTIGUOUS : True
| F_CONTIGUOUS : False
| OWNDATA : True
| WRITEABLE : False
| ALIGNED : False
| WRITEBACKIFCOPY : False
| UPDATEIFCOPY : False
| >>> y.setflags(uic=1)
| Traceback (most recent call last):
| File "<stdin>", line 1, in <module>
| ValueError: cannot set WRITEBACKIFCOPY flag to True
|
| sort(...)
| a.sort(axis=-1, kind='quicksort', order=None)
|
| Sort an array, in-place.
|
| Parameters
| ----------
| axis : int, optional
| Axis along which to sort. Default is -1, which means sort along the
| last axis.
| kind : {'quicksort', 'mergesort', 'heapsort', 'stable'}, optional
| Sorting algorithm. Default is 'quicksort'.
| order : str or list of str, optional
| When `a` is an array with fields defined, this argument specifies
| which fields to compare first, second, etc. A single field can
| be specified as a string, and not all fields need be specified,
| but unspecified fields will still be used, in the order in which
| they come up in the dtype, to break ties.
|
| See Also
| --------
| numpy.sort : Return a sorted copy of an array.
| argsort : Indirect sort.
| lexsort : Indirect stable sort on multiple keys.
| searchsorted : Find elements in sorted array.
| partition: Partial sort.
|
| Notes
| -----
| See ``sort`` for notes on the different sorting algorithms.
|
| Examples
| --------
| >>> a = np.array([[1,4], [3,1]])
| >>> a.sort(axis=1)
| >>> a
| array([[1, 4],
| [1, 3]])
| >>> a.sort(axis=0)
| >>> a
| array([[1, 3],
| [1, 4]])
|
| Use the `order` keyword to specify a field to use when sorting a
| structured array:
|
| >>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)])
| >>> a.sort(order='y')
| >>> a
| array([('c', 1), ('a', 2)],
| dtype=[('x', '|S1'), ('y', '<i4')])
|
| squeeze(...)
| a.squeeze(axis=None)
|
| Remove single-dimensional entries from the shape of `a`.
|
| Refer to `numpy.squeeze` for full documentation.
|
| See Also
| --------
| numpy.squeeze : equivalent function
|
| std(...)
| a.std(axis=None, dtype=None, out=None, ddof=0, keepdims=False)
|
| Returns the standard deviation of the array elements along given axis.
|
| Refer to `numpy.std` for full documentation.
|
| See Also
| --------
| numpy.std : equivalent function
|
| sum(...)
| a.sum(axis=None, dtype=None, out=None, keepdims=False)
|
| Return the sum of the array elements over the given axis.
|
| Refer to `numpy.sum` for full documentation.
|
| See Also
| --------
| numpy.sum : equivalent function
|
| swapaxes(...)
| a.swapaxes(axis1, axis2)
|
| Return a view of the array with `axis1` and `axis2` interchanged.
|
| Refer to `numpy.swapaxes` for full documentation.
|
| See Also
| --------
| numpy.swapaxes : equivalent function
|
| take(...)
| a.take(indices, axis=None, out=None, mode='raise')
|
| Return an array formed from the elements of `a` at the given indices.
|
| Refer to `numpy.take` for full documentation.
|
| See Also
| --------
| numpy.take : equivalent function
|
| tobytes(...)
| a.tobytes(order='C')
|
| Construct Python bytes containing the raw data bytes in the array.
|
| Constructs Python bytes showing a copy of the raw contents of
| data memory. The bytes object can be produced in either 'C' or 'Fortran',
| or 'Any' order (the default is 'C'-order). 'Any' order means C-order
| unless the F_CONTIGUOUS flag in the array is set, in which case it
| means 'Fortran' order.
|
| .. versionadded:: 1.9.0
|
| Parameters
| ----------
| order : {'C', 'F', None}, optional
| Order of the data for multidimensional arrays:
| C, Fortran, or the same as for the original array.
|
| Returns
| -------
| s : bytes
| Python bytes exhibiting a copy of `a`'s raw data.
|
| Examples
| --------
| >>> x = np.array([[0, 1], [2, 3]])
| >>> x.tobytes()
| b'\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00'
| >>> x.tobytes('C') == x.tobytes()
| True
| >>> x.tobytes('F')
| b'\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00'
|
| tofile(...)
| a.tofile(fid, sep="", format="%s")
|
| Write array to a file as text or binary (default).
|
| Data is always written in 'C' order, independent of the order of `a`.
| The data produced by this method can be recovered using the function
| fromfile().
|
| Parameters
| ----------
| fid : file or str
| An open file object, or a string containing a filename.
| sep : str
| Separator between array items for text output.
| If "" (empty), a binary file is written, equivalent to
| ``file.write(a.tobytes())``.
| format : str
| Format string for text file output.
| Each entry in the array is formatted to text by first converting
| it to the closest Python type, and then using "format" % item.
|
| Notes
| -----
| This is a convenience function for quick storage of array data.
| Information on endianness and precision is lost, so this method is not a
| good choice for files intended to archive data or transport data between
| machines with different endianness. Some of these problems can be overcome
| by outputting the data as text files, at the expense of speed and file
| size.
|
| When fid is a file object, array contents are directly written to the
| file, bypassing the file object's ``write`` method. As a result, tofile
| cannot be used with files objects supporting compression (e.g., GzipFile)
| or file-like objects that do not support ``fileno()`` (e.g., BytesIO).
|
| tolist(...)
| a.tolist()
|
| Return the array as a (possibly nested) list.
|
| Return a copy of the array data as a (nested) Python list.
| Data items are converted to the nearest compatible Python type.
|
| Parameters
| ----------
| none
|
| Returns
| -------
| y : list
| The possibly nested list of array elements.
|
| Notes
| -----
| The array may be recreated, ``a = np.array(a.tolist())``.
|
| Examples
| --------
| >>> a = np.array([1, 2])
| >>> a.tolist()
| [1, 2]
| >>> a = np.array([[1, 2], [3, 4]])
| >>> list(a)
| [array([1, 2]), array([3, 4])]
| >>> a.tolist()
| [[1, 2], [3, 4]]
|
| tostring(...)
| a.tostring(order='C')
|
| Construct Python bytes containing the raw data bytes in the array.
|
| Constructs Python bytes showing a copy of the raw contents of
| data memory. The bytes object can be produced in either 'C' or 'Fortran',
| or 'Any' order (the default is 'C'-order). 'Any' order means C-order
| unless the F_CONTIGUOUS flag in the array is set, in which case it
| means 'Fortran' order.
|
| This function is a compatibility alias for tobytes. Despite its name it returns bytes not strings.
|
| Parameters
| ----------
| order : {'C', 'F', None}, optional
| Order of the data for multidimensional arrays:
| C, Fortran, or the same as for the original array.
|
| Returns
| -------
| s : bytes
| Python bytes exhibiting a copy of `a`'s raw data.
|
| Examples
| --------
| >>> x = np.array([[0, 1], [2, 3]])
| >>> x.tobytes()
| b'\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00'
| >>> x.tobytes('C') == x.tobytes()
| True
| >>> x.tobytes('F')
| b'\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00'
|
| trace(...)
| a.trace(offset=0, axis1=0, axis2=1, dtype=None, out=None)
|
| Return the sum along diagonals of the array.
|
| Refer to `numpy.trace` for full documentation.
|
| See Also
| --------
| numpy.trace : equivalent function
|
| transpose(...)
| a.transpose(*axes)
|
| Returns a view of the array with axes transposed.
|
| For a 1-D array, this has no effect. (To change between column and
| row vectors, first cast the 1-D array into a matrix object.)
| For a 2-D array, this is the usual matrix transpose.
| For an n-D array, if axes are given, their order indicates how the
| axes are permuted (see Examples). If axes are not provided and
| ``a.shape = (i[0], i[1], ... i[n-2], i[n-1])``, then
| ``a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0])``.
|
| Parameters
| ----------
| axes : None, tuple of ints, or `n` ints
|
| * None or no argument: reverses the order of the axes.
|
| * tuple of ints: `i` in the `j`-th place in the tuple means `a`'s
| `i`-th axis becomes `a.transpose()`'s `j`-th axis.
|
| * `n` ints: same as an n-tuple of the same ints (this form is
| intended simply as a "convenience" alternative to the tuple form)
|
| Returns
| -------
| out : ndarray
| View of `a`, with axes suitably permuted.
|
| See Also
| --------
| ndarray.T : Array property returning the array transposed.
|
| Examples
| --------
| >>> a = np.array([[1, 2], [3, 4]])
| >>> a
| array([[1, 2],
| [3, 4]])
| >>> a.transpose()
| array([[1, 3],
| [2, 4]])
| >>> a.transpose((1, 0))
| array([[1, 3],
| [2, 4]])
| >>> a.transpose(1, 0)
| array([[1, 3],
| [2, 4]])
|
| var(...)
| a.var(axis=None, dtype=None, out=None, ddof=0, keepdims=False)
|
| Returns the variance of the array elements, along given axis.
|
| Refer to `numpy.var` for full documentation.
|
| See Also
| --------
| numpy.var : equivalent function
|
| view(...)
| a.view(dtype=None, type=None)
|
| New view of array with the same data.
|
| Parameters
| ----------
| dtype : data-type or ndarray sub-class, optional
| Data-type descriptor of the returned view, e.g., float32 or int16. The
| default, None, results in the view having the same data-type as `a`.
| This argument can also be specified as an ndarray sub-class, which
| then specifies the type of the returned object (this is equivalent to
| setting the ``type`` parameter).
| type : Python type, optional
| Type of the returned view, e.g., ndarray or matrix. Again, the
| default None results in type preservation.
|
| Notes
| -----
| ``a.view()`` is used two different ways:
|
| ``a.view(some_dtype)`` or ``a.view(dtype=some_dtype)`` constructs a view
| of the array's memory with a different data-type. This can cause a
| reinterpretation of the bytes of memory.
|
| ``a.view(ndarray_subclass)`` or ``a.view(type=ndarray_subclass)`` just
| returns an instance of `ndarray_subclass` that looks at the same array
| (same shape, dtype, etc.) This does not cause a reinterpretation of the
| memory.
|
| For ``a.view(some_dtype)``, if ``some_dtype`` has a different number of
| bytes per entry than the previous dtype (for example, converting a
| regular array to a structured array), then the behavior of the view
| cannot be predicted just from the superficial appearance of ``a`` (shown
| by ``print(a)``). It also depends on exactly how ``a`` is stored in
| memory. Therefore if ``a`` is C-ordered versus fortran-ordered, versus
| defined as a slice or transpose, etc., the view may give different
| results.
|
|
| Examples
| --------
| >>> x = np.array([(1, 2)], dtype=[('a', np.int8), ('b', np.int8)])
|
| Viewing array data using a different type and dtype:
|
| >>> y = x.view(dtype=np.int16, type=np.matrix)
| >>> y
| matrix([[513]], dtype=int16)
| >>> print(type(y))
| <class 'numpy.matrixlib.defmatrix.matrix'>
|
| Creating a view on a structured array so it can be used in calculations
|
| >>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)])
| >>> xv = x.view(dtype=np.int8).reshape(-1,2)
| >>> xv
| array([[1, 2],
| [3, 4]], dtype=int8)
| >>> xv.mean(0)
| array([ 2., 3.])
|
| Making changes to the view changes the underlying array
|
| >>> xv[0,1] = 20
| >>> print(x)
| [(1, 20) (3, 4)]
|
| Using a view to convert an array to a recarray:
|
| >>> z = x.view(np.recarray)
| >>> z.a
| array([1], dtype=int8)
|
| Views share data:
|
| >>> x[0] = (9, 10)
| >>> z[0]
| (9, 10)
|
| Views that change the dtype size (bytes per entry) should normally be
| avoided on arrays defined by slices, transposes, fortran-ordering, etc.:
|
| >>> x = np.array([[1,2,3],[4,5,6]], dtype=np.int16)
| >>> y = x[:, 0:2]
| >>> y
| array([[1, 2],
| [4, 5]], dtype=int16)
| >>> y.view(dtype=[('width', np.int16), ('length', np.int16)])
| Traceback (most recent call last):
| File "<stdin>", line 1, in <module>
| ValueError: new type not compatible with array.
| >>> z = y.copy()
| >>> z.view(dtype=[('width', np.int16), ('length', np.int16)])
| array([[(1, 2)],
| [(4, 5)]], dtype=[('width', '<i2'), ('length', '<i2')])
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| T
| Same as self.transpose(), except that self is returned if
| self.ndim < 2.
|
| Examples
| --------
| >>> x = np.array([[1.,2.],[3.,4.]])
| >>> x
| array([[ 1., 2.],
| [ 3., 4.]])
| >>> x.T
| array([[ 1., 3.],
| [ 2., 4.]])
| >>> x = np.array([1.,2.,3.,4.])
| >>> x
| array([ 1., 2., 3., 4.])
| >>> x.T
| array([ 1., 2., 3., 4.])
|
| __array_finalize__
| None.
|
| __array_interface__
| Array protocol: Python side.
|
| __array_priority__
| Array priority.
|
| __array_struct__
| Array protocol: C-struct side.
|
| base
| Base object if memory is from some other object.
|
| Examples
| --------
| The base of an array that owns its memory is None:
|
| >>> x = np.array([1,2,3,4])
| >>> x.base is None
| True
|
| Slicing creates a view, whose memory is shared with x:
|
| >>> y = x[2:]
| >>> y.base is x
| True
|
| ctypes
| An object to simplify the interaction of the array with the ctypes
| module.
|
| This attribute creates an object that makes it easier to use arrays
| when calling shared libraries with the ctypes module. The returned
| object has, among others, data, shape, and strides attributes (see
| Notes below) which themselves return ctypes objects that can be used
| as arguments to a shared library.
|
| Parameters
| ----------
| None
|
| Returns
| -------
| c : Python object
| Possessing attributes data, shape, strides, etc.
|
| See Also
| --------
| numpy.ctypeslib
|
| Notes
| -----
| Below are the public attributes of this object which were documented
| in "Guide to NumPy" (we have omitted undocumented public attributes,
| as well as documented private attributes):
|
| * data: A pointer to the memory area of the array as a Python integer.
| This memory area may contain data that is not aligned, or not in correct
| byte-order. The memory area may not even be writeable. The array
| flags and data-type of this array should be respected when passing this
| attribute to arbitrary C-code to avoid trouble that can include Python
| crashing. User Beware! The value of this attribute is exactly the same
| as self._array_interface_['data'][0].
|
| * shape (c_intp*self.ndim): A ctypes array of length self.ndim where
| the basetype is the C-integer corresponding to dtype('p') on this
| platform. This base-type could be c_int, c_long, or c_longlong
| depending on the platform. The c_intp type is defined accordingly in
| numpy.ctypeslib. The ctypes array contains the shape of the underlying
| array.
|
| * strides (c_intp*self.ndim): A ctypes array of length self.ndim where
| the basetype is the same as for the shape attribute. This ctypes array
| contains the strides information from the underlying array. This strides
| information is important for showing how many bytes must be jumped to
| get to the next element in the array.
|
| * data_as(obj): Return the data pointer cast to a particular c-types object.
| For example, calling self._as_parameter_ is equivalent to
| self.data_as(ctypes.c_void_p). Perhaps you want to use the data as a
| pointer to a ctypes array of floating-point data:
| self.data_as(ctypes.POINTER(ctypes.c_double)).
|
| * shape_as(obj): Return the shape tuple as an array of some other c-types
| type. For example: self.shape_as(ctypes.c_short).
|
| * strides_as(obj): Return the strides tuple as an array of some other
| c-types type. For example: self.strides_as(ctypes.c_longlong).
|
| Be careful using the ctypes attribute - especially on temporary
| arrays or arrays constructed on the fly. For example, calling
| ``(a+b).ctypes.data_as(ctypes.c_void_p)`` returns a pointer to memory
| that is invalid because the array created as (a+b) is deallocated
| before the next Python statement. You can avoid this problem using
| either ``c=a+b`` or ``ct=(a+b).ctypes``. In the latter case, ct will
| hold a reference to the array until ct is deleted or re-assigned.
|
| If the ctypes module is not available, then the ctypes attribute
| of array objects still returns something useful, but ctypes objects
| are not returned and errors may be raised instead. In particular,
| the object will still have the as parameter attribute which will
| return an integer equal to the data attribute.
|
| Examples
| --------
| >>> import ctypes
| >>> x
| array([[0, 1],
| [2, 3]])
| >>> x.ctypes.data
| 30439712
| >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_long))
| <ctypes.LP_c_long object at 0x01F01300>
| >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_long)).contents
| c_long(0)
| >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_longlong)).contents
| c_longlong(4294967296L)
| >>> x.ctypes.shape
| <numpy.core._internal.c_long_Array_2 object at 0x01FFD580>
| >>> x.ctypes.shape_as(ctypes.c_long)
| <numpy.core._internal.c_long_Array_2 object at 0x01FCE620>
| >>> x.ctypes.strides
| <numpy.core._internal.c_long_Array_2 object at 0x01FCE620>
| >>> x.ctypes.strides_as(ctypes.c_longlong)
| <numpy.core._internal.c_longlong_Array_2 object at 0x01F01300>
|
| data
| Python buffer object pointing to the start of the array's data.
|
| dtype
| Data-type of the array's elements.
|
| Parameters
| ----------
| None
|
| Returns
| -------
| d : numpy dtype object
|
| See Also
| --------
| numpy.dtype
|
| Examples
| --------
| >>> x
| array([[0, 1],
| [2, 3]])
| >>> x.dtype
| dtype('int32')
| >>> type(x.dtype)
| <type 'numpy.dtype'>
|
| flags
| Information about the memory layout of the array.
|
| Attributes
| ----------
| C_CONTIGUOUS (C)
| The data is in a single, C-style contiguous segment.
| F_CONTIGUOUS (F)
| The data is in a single, Fortran-style contiguous segment.
| OWNDATA (O)
| The array owns the memory it uses or borrows it from another object.
| WRITEABLE (W)
| The data area can be written to. Setting this to False locks
| the data, making it read-only. A view (slice, etc.) inherits WRITEABLE
| from its base array at creation time, but a view of a writeable
| array may be subsequently locked while the base array remains writeable.
| (The opposite is not true, in that a view of a locked array may not
| be made writeable. However, currently, locking a base object does not
| lock any views that already reference it, so under that circumstance it
| is possible to alter the contents of a locked array via a previously
| created writeable view onto it.) Attempting to change a non-writeable
| array raises a RuntimeError exception.
| ALIGNED (A)
| The data and all elements are aligned appropriately for the hardware.
| WRITEBACKIFCOPY (X)
| This array is a copy of some other array. The C-API function
| PyArray_ResolveWritebackIfCopy must be called before deallocating
| to the base array will be updated with the contents of this array.
| UPDATEIFCOPY (U)
| (Deprecated, use WRITEBACKIFCOPY) This array is a copy of some other array.
| When this array is
| deallocated, the base array will be updated with the contents of
| this array.
| FNC
| F_CONTIGUOUS and not C_CONTIGUOUS.
| FORC
| F_CONTIGUOUS or C_CONTIGUOUS (one-segment test).
| BEHAVED (B)
| ALIGNED and WRITEABLE.
| CARRAY (CA)
| BEHAVED and C_CONTIGUOUS.
| FARRAY (FA)
| BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS.
|
| Notes
| -----
| The `flags` object can be accessed dictionary-like (as in ``a.flags['WRITEABLE']``),
| or by using lowercased attribute names (as in ``a.flags.writeable``). Short flag
| names are only supported in dictionary access.
|
| Only the WRITEBACKIFCOPY, UPDATEIFCOPY, WRITEABLE, and ALIGNED flags can be
| changed by the user, via direct assignment to the attribute or dictionary
| entry, or by calling `ndarray.setflags`.
|
| The array flags cannot be set arbitrarily:
|
| - UPDATEIFCOPY can only be set ``False``.
| - WRITEBACKIFCOPY can only be set ``False``.
| - ALIGNED can only be set ``True`` if the data is truly aligned.
| - WRITEABLE can only be set ``True`` if the array owns its own memory
| or the ultimate owner of the memory exposes a writeable buffer
| interface or is a string.
|
| Arrays can be both C-style and Fortran-style contiguous simultaneously.
| This is clear for 1-dimensional arrays, but can also be true for higher
| dimensional arrays.
|
| Even for contiguous arrays a stride for a given dimension
| ``arr.strides[dim]`` may be *arbitrary* if ``arr.shape[dim] == 1``
| or the array has no elements.
| It does *not* generally hold that ``self.strides[-1] == self.itemsize``
| for C-style contiguous arrays or ``self.strides[0] == self.itemsize`` for
| Fortran-style contiguous arrays is true.
|
| flat
| A 1-D iterator over the array.
|
| This is a `numpy.flatiter` instance, which acts similarly to, but is not
| a subclass of, Python's built-in iterator object.
|
| See Also
| --------
| flatten : Return a copy of the array collapsed into one dimension.
|
| flatiter
|
| Examples
| --------
| >>> x = np.arange(1, 7).reshape(2, 3)
| >>> x
| array([[1, 2, 3],
| [4, 5, 6]])
| >>> x.flat[3]
| 4
| >>> x.T
| array([[1, 4],
| [2, 5],
| [3, 6]])
| >>> x.T.flat[3]
| 5
| >>> type(x.flat)
| <type 'numpy.flatiter'>
|
| An assignment example:
|
| >>> x.flat = 3; x
| array([[3, 3, 3],
| [3, 3, 3]])
| >>> x.flat[[1,4]] = 1; x
| array([[3, 1, 3],
| [3, 1, 3]])
|
| imag
| The imaginary part of the array.
|
| Examples
| --------
| >>> x = np.sqrt([1+0j, 0+1j])
| >>> x.imag
| array([ 0. , 0.70710678])
| >>> x.imag.dtype
| dtype('float64')
|
| itemsize
| Length of one array element in bytes.
|
| Examples
| --------
| >>> x = np.array([1,2,3], dtype=np.float64)
| >>> x.itemsize
| 8
| >>> x = np.array([1,2,3], dtype=np.complex128)
| >>> x.itemsize
| 16
|
| nbytes
| Total bytes consumed by the elements of the array.
|
| Notes
| -----
| Does not include memory consumed by non-element attributes of the
| array object.
|
| Examples
| --------
| >>> x = np.zeros((3,5,2), dtype=np.complex128)
| >>> x.nbytes
| 480
| >>> np.prod(x.shape) * x.itemsize
| 480
|
| ndim
| Number of array dimensions.
|
| Examples
| --------
| >>> x = np.array([1, 2, 3])
| >>> x.ndim
| 1
| >>> y = np.zeros((2, 3, 4))
| >>> y.ndim
| 3
|
| real
| The real part of the array.
|
| Examples
| --------
| >>> x = np.sqrt([1+0j, 0+1j])
| >>> x.real
| array([ 1. , 0.70710678])
| >>> x.real.dtype
| dtype('float64')
|
| See Also
| --------
| numpy.real : equivalent function
|
| shape
| Tuple of array dimensions.
|
| The shape property is usually used to get the current shape of an array,
| but may also be used to reshape the array in-place by assigning a tuple of
| array dimensions to it. As with `numpy.reshape`, one of the new shape
| dimensions can be -1, in which case its value is inferred from the size of
| the array and the remaining dimensions. Reshaping an array in-place will
| fail if a copy is required.
|
| Examples
| --------
| >>> x = np.array([1, 2, 3, 4])
| >>> x.shape
| (4,)
| >>> y = np.zeros((2, 3, 4))
| >>> y.shape
| (2, 3, 4)
| >>> y.shape = (3, 8)
| >>> y
| array([[ 0., 0., 0., 0., 0., 0., 0., 0.],
| [ 0., 0., 0., 0., 0., 0., 0., 0.],
| [ 0., 0., 0., 0., 0., 0., 0., 0.]])
| >>> y.shape = (3, 6)
| Traceback (most recent call last):
| File "<stdin>", line 1, in <module>
| ValueError: total size of new array must be unchanged
| >>> np.zeros((4,2))[::2].shape = (-1,)
| Traceback (most recent call last):
| File "<stdin>", line 1, in <module>
| AttributeError: incompatible shape for a non-contiguous array
|
| See Also
| --------
| numpy.reshape : similar function
| ndarray.reshape : similar method
|
| size
| Number of elements in the array.
|
| Equal to ``np.prod(a.shape)``, i.e., the product of the array's
| dimensions.
|
| Notes
| -----
| `a.size` returns a standard arbitrary precision Python integer. This
| may not be the case with other methods of obtaining the same value
| (like the suggested ``np.prod(a.shape)``, which returns an instance
| of ``np.int_``), and may be relevant if the value is used further in
| calculations that may overflow a fixed size integer type.
|
| Examples
| --------
| >>> x = np.zeros((3, 5, 2), dtype=np.complex128)
| >>> x.size
| 30
| >>> np.prod(x.shape)
| 30
|
| strides
| Tuple of bytes to step in each dimension when traversing an array.
|
| The byte offset of element ``(i[0], i[1], ..., i[n])`` in an array `a`
| is::
|
| offset = sum(np.array(i) * a.strides)
|
| A more detailed explanation of strides can be found in the
| "ndarray.rst" file in the NumPy reference guide.
|
| Notes
| -----
| Imagine an array of 32-bit integers (each 4 bytes)::
|
| x = np.array([[0, 1, 2, 3, 4],
| [5, 6, 7, 8, 9]], dtype=np.int32)
|
| This array is stored in memory as 40 bytes, one after the other
| (known as a contiguous block of memory). The strides of an array tell
| us how many bytes we have to skip in memory to move to the next position
| along a certain axis. For example, we have to skip 4 bytes (1 value) to
| move to the next column, but 20 bytes (5 values) to get to the same
| position in the next row. As such, the strides for the array `x` will be
| ``(20, 4)``.
|
| See Also
| --------
| numpy.lib.stride_tricks.as_strided
|
| Examples
| --------
| >>> y = np.reshape(np.arange(2*3*4), (2,3,4))
| >>> y
| array([[[ 0, 1, 2, 3],
| [ 4, 5, 6, 7],
| [ 8, 9, 10, 11]],
| [[12, 13, 14, 15],
| [16, 17, 18, 19],
| [20, 21, 22, 23]]])
| >>> y.strides
| (48, 16, 4)
| >>> y[1,1,1]
| 17
| >>> offset=sum(y.strides * np.array((1,1,1)))
| >>> offset/y.itemsize
| 17
|
| >>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0)
| >>> x.strides
| (32, 4, 224, 1344)
| >>> i = np.array([3,5,2,2])
| >>> offset = sum(i * x.strides)
| >>> x[3,5,2,2]
| 813
| >>> offset / x.itemsize
| 813
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __hash__ = None
###Markdown
 Task 4a: Getting HelpIn the practice notebook perform the following:+ In the code cell below, call `help()` on two of the following functions: `np.transpose()`, `np.reshape()`, `np.resize()`, `np.ravel()`, `np.append()`, `np.delete()`, `np.concatenate()`, `np.vstack()`, `np.hstack()`, `np.column_stack()`, `np.vsplit()`, `np.hsplit()` + Respond to this question: Did you understand the help documentation? Could you use the function just by looking at what the help says about it? 5. Manipulating ArraysThus far, we have learned to create arrays, perform basic math, aggregate values, and index arrays. Finally, we need to learn to manipulate them by transposing, reshaping, splitting, joining, appending, and deleting arrays. 5.1 TransposingTransposing an array is equivalent to flipping it both horizontally and vertically as shown in the following animated image:(image source: https://en.wikipedia.org/wiki/Transpose)Numpy allows you to transpose a matrix in one of two ways:+ Using the `transpose()` function+ Accessing the `T` attribute.Execute the following code examples to see an example of an array transpose.
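The cells below cover the 2-D case. For arrays with more than two dimensions, `np.transpose()` also accepts an explicit permutation of the axes; the following is a minimal sketch (the array `demo_3d` is illustrative, not part of the tutorial's data):

```python
import numpy as np

# A 2 x 3 x 4 array of random values
demo_3d = np.random.random((2, 3, 4))

# Reversing all axes is the default: the shape becomes (4, 3, 2)
print(np.transpose(demo_3d).shape)

# Swap only the last two axes: the shape becomes (2, 4, 3)
print(np.transpose(demo_3d, (0, 2, 1)).shape)
```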
###Code
# Create a 2 x 3 random matrix
demo_f = np.random.random((2,3))
print("The original matrix")
print(demo_f)
print("\nThe matrix after being transposed")
print(np.transpose(demo_f))
print("\nThe transposed matrix from the T attribute")
print(demo_f.T)
###Output
The original matrix
[[0.49907564 0.06988973 0.92733429]
[0.67753636 0.63372073 0.16873223]]
The matrix after being transposed
[[0.49907564 0.67753636]
[0.06988973 0.63372073]
[0.92733429 0.16873223]]
The transposed matrix from the T attribute
[[0.49907564 0.67753636]
[0.06988973 0.63372073]
[0.92733429 0.16873223]]
###Markdown
 Task 5a: Transposing an ArrayIn the practice notebook perform the following:+ Create a matrix of any size and transpose it. 5.2 Reshaping and ResizingYou can change the dimensions of your array using the following two functions: + `resize()` + `reshape()` The `resize()` function allows you to "stretch" your array to increase its size. This can be useful if you need to add more data to an existing array or you need to adjust it prior to performing arithmetic and broadcasting.The `reshape()` function allows you to change the dimensions of an existing array. For example, if you have a _3 x 2_ array you can change it to a _6 x 1_ array using the `reshape()` function without losing the data values in the array.Examine and execute the following code adapted from the DataCamp Tutorial:
###Code
# Create a 1-D array `x` with 4 elements. Print the shape of `x`
x = np.array([1,1,1,1])
print(x.shape)
# Resize `x` to shape (6, 4); np.resize repeats the data to fill the new shape
np.resize(x, (6,4))
###Output
(4,)
###Markdown
Notice how the array was resized from a 4-element 1-D array to a _6 x 4_ array; the `np.resize()` function repeats the original values to fill the larger shape.
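To make the difference concrete, here is a minimal sketch contrasting the `np.resize()` function, which repeats the data to fill the new shape, with `reshape()`, which requires the total number of elements to stay the same (the values are illustrative):

```python
import numpy as np

x = np.array([1, 2, 3, 4])

# np.resize repeats the original values until the new shape is filled
print(np.resize(x, (2, 3)))
# [[1 2 3]
#  [4 1 2]]

# reshape only rearranges the existing 4 elements; (2, 2) works,
# while (2, 3) would raise a ValueError
print(x.reshape((2, 2)))
# [[1 2]
#  [3 4]]
```

Note that the `ndarray.resize()` *method* behaves differently from the `np.resize()` *function*: as the help text earlier in this notebook shows, the method fills missing entries with zeros rather than repeating the data.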
###Code
# Reshape `x` to (2,2)
x = np.array([1,2,3,4])
print("\noriginal:")
print(x)
print("\nreshaped:")
print(x.reshape((2,2)))
###Output
original:
[1 2 3 4]
reshaped:
[[1 2]
[3 4]]
###Markdown
 Task 5b: Reshaping an ArrayIn the practice notebook perform the following:+ Create a matrix and resize it by adding 2 extra columns+ Create a matrix and resize it by adding 1 extra row+ Create a matrix of 8 x 2 and resize it to 4 x 4 5.3 Appending ArraysSometimes you may want to append one array to another. You can do this with the `append()` function, and you can append along any dimension. Remember that NumPy arrays have **axes**. When you append one array to another you must specify the axis (e.g. row or column for a 2D array) along which you want to append. Axes are identified using a numeric index starting from 0, therefore:+ `0`: the first dimension (the rows of a 2D array)+ `1`: the second dimension (the columns of a 2D array)+ `2`: the third dimension+ `3`: the fourth dimension+ etc...For example, examine and execute this code borrowed from the DataCamp tutorial:
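As a complement to the DataCamp cell below, which appends a column with `axis=1`, here is a minimal sketch of appending a new row with `axis=0` (the values are illustrative):

```python
import numpy as np

my_2d_array = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])

# Append a new row; the appended array must have the same number of columns
with_row = np.append(my_2d_array, [[9, 10, 11, 12]], axis=0)
print(with_row)
# [[ 1  2  3  4]
#  [ 5  6  7  8]
#  [ 9 10 11 12]]
```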
###Code
# Append a 1D array to your `my_array`
my_array = np.array([1,2,3,4])
new_array = np.append(my_array, [7, 8, 9, 10])
# Print `new_array`
print(new_array)
# Append an extra column to your `my_2d_array`
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
new_2d_array = np.append(my_2d_array, [[7], [8]], axis=1)
# Print `new_2d_array`
print(new_2d_array)
###Output
[ 1 2 3 4 7 8 9 10]
[[1 2 3 4 7]
[5 6 7 8 8]]
###Markdown
In the code above, for the first example, the array `[7, 8, 9, 10]` is appended to the existing 1D `my_array`. For the second example, the values `7` and `8` are appended to the end of each row, adding a new column (note the `axis=1` parameter). Task 5c: Appending to an ArrayIn the practice notebook perform the following: + Create a three dimensional array and append another row to the array + Append another column to the array + Print the final results 5.4 Inserting and Deleting ElementsYou can easily insert new elements into an array with the `insert()` function and remove elements with the `delete()` function. Task 5d: Inserting and Deleting ElementsIn the practice notebook perform the following:+ Examine the `help()` documentation for how to use the `insert()` and `delete()` functions.+ Create a matrix and practice inserting a row and deleting a column (a short sketch appears just below, before the joining example). 5.5 Joining ArraysThere are a variety of functions for joining arrays: + `concatenate()` + `vstack()` + `hstack()` + `column_stack()`Each of these functions is used in the following code borrowed from a [DataCamp](https://www.datacamp.com/) tutorial. Examine and execute the following code cell:
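Here is the promised minimal sketch of `np.insert()` and `np.delete()` for Section 5.4; the matrix and positions are illustrative. The notebook cell that follows then demonstrates the joining functions from Section 5.5.

```python
import numpy as np

m = np.array([[1, 2, 3], [4, 5, 6]])

# Insert a row of zeros before row index 1
print(np.insert(m, 1, [0, 0, 0], axis=0))
# [[1 2 3]
#  [0 0 0]
#  [4 5 6]]

# Delete the last column (column index 2)
print(np.delete(m, 2, axis=1))
# [[1 2]
#  [4 5]]
```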
###Code
# Concatenate `my_array` and `x`: similar to np.append()
my_array = np.array([1,2,3,4])
x = np.array([1,1,1,1])
print("concatenate:")
print(np.concatenate((my_array, x)))
# Stack arrays row-wise
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
print("\nvstack:")
print(np.vstack((my_array, my_2d_array)))
# Stack arrays horizontally
print("\nhstack:")
print(np.hstack((my_2d_array, my_2d_array)))
# Stack arrays column-wise
print("\ncolumn_stack:")
print(np.column_stack((my_2d_array, my_2d_array)))
###Output
concatenate:
[1 2 3 4 1 1 1 1]
vstack:
[[1 2 3 4]
[1 2 3 4]
[5 6 7 8]]
hstack:
[[1 2 3 4 1 2 3 4]
[5 6 7 8 5 6 7 8]]
column_stack:
[[1 2 3 4 1 2 3 4]
[5 6 7 8 5 6 7 8]]
###Markdown
 Task 5e: Joining ArraysIn the practice notebook perform the following:+ Execute the code (as shown above).+ Examine the output from each of the function calls in the cell above. If needed, review the help pages for each tool either using the `help()` command or the [Numpy Function Reference](https://docs.scipy.org/doc/numpy/reference/routines.html). + Respond to the following question + Can you identify what is happening with each of them? 5.6 Splitting an ArrayYou may find that you need to split arrays. The following functions allow you to split an array vertically or horizontally: + `vsplit()` + `hsplit()` Examine and execute the following code borrowed from the DataCamp Tutorial:
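In addition to splitting into a number of equal pieces, as the cell below does, `hsplit()` and `vsplit()` also accept a list of indices at which to cut. A minimal sketch with illustrative values:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)

# Cut the columns before index 1 and before index 3,
# producing pieces with 1, 2, and 1 columns respectively
pieces = np.hsplit(a, [1, 3])
for p in pieces:
    print(p.shape)
# (3, 1)
# (3, 2)
# (3, 1)
```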
###Code
# Create a 2D array.
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
print("original:")
print(my_2d_array)
# Split `my_2d_array` horizontally (column-wise) into 2 equal pieces
print("\nhsplit:")
print(np.hsplit(my_2d_array, 2))
# Split `my_2d_array` vertically (row-wise) into 2 equal pieces
print("\nvsplit:")
print(np.vsplit(my_2d_array, 2))
###Output
original:
[[1 2 3 4]
[5 6 7 8]]
hsplit:
[array([[1, 2],
[5, 6]]), array([[3, 4],
[7, 8]])]
vsplit:
[array([[1, 2, 3, 4]]), array([[5, 6, 7, 8]])]
###Markdown
 Lesson 2: NumPy Part 2This notebook is based on the official `NumPy` [documentation](https://docs.scipy.org/doc/numpy/user/quickstart.html). Unless otherwise credited, quoted text comes from this document. The NumPy documentation describes NumPy in the following way:> NumPy is the fundamental package for scientific computing with Python. It contains among other things:> - a powerful N-dimensional array object> - sophisticated (broadcasting) functions> - tools for integrating C/C++ and Fortran code> - useful linear algebra, Fourier transform, and random number capabilities>> Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases. InstructionsThis tutorial provides step-by-step training divided into numbered sections. The sections often contain embedded executable code for demonstration. This tutorial is accompanied by a practice notebook: [L02-Numpy_Part2-Practice.ipynb](./L02-Numpy_Part2-Practice.ipynb). Throughout this tutorial sections labeled as "Tasks" are interspersed and indicated with the task icon. You should follow the instructions provided in these sections by performing them in the practice notebook. When the tutorial is completed you can turn in the final practice notebook. --- 1. Getting StartedFirst, we must import the NumPy library.
###Code
# Import numpy
import numpy as np
###Output
_____no_output_____
###Markdown
 Task 1a: SetupIn the practice notebook, import the following packages:+ `numpy` as `np` 2. Basic Indexing: Subsets and SlicingWe often want to consider a subset of a given array. You will recognize basic subsetting as it is similar to indexing of Python lists. The following code examples demonstrate how to subset a NumPy array:```python Get items from "start" to "end" (but the end is not included!)a[start:end] Get all items from "start" through the rest of the arraya[start:] Get items from the beginning to "end" (but the end is not included!)a[:end] ```Similarly to Python lists, retrieving elements from the end of a NumPy array uses negative indexing. Execute the example code below to see a demonstration:
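The same slice syntax extends to multiple dimensions by giving one index or slice per axis, separated by commas. Here is a minimal sketch with an illustrative 2-D array (the notebook cell below then demonstrates negative indexing):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)

# Row 1, all columns
print(a[1])  # [4 5 6 7]

# Rows 0-1, columns 1-2
print(a[0:2, 1:3])
# [[1 2]
#  [5 6]]

# Every row, last column
print(a[:, -1])  # [ 3  7 11]
```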
###Code
# Create a 5 x 2 array of random numbers
demo_g = np.random.random((5,2))
print(demo_g)
# Get the last item from the last 'row':
demo_g[-1, -1]
###Output
[[0.75471319 0.33495085]
[0.10233538 0.3446458 ]
[0.09325156 0.04737132]
[0.1444406 0.20331162]
[0.16850446 0.65617096]]
###Markdown
 Task 2a: Indexing by Subsetting and SlicingIn the practice notebook perform the following:1. Create (or re-use) 3 arrays, each containing three dimensions.2. Slice each of these arrays so that: + One element / number is returned. + One dimension is returned. + A subset of a dimension is returned.3. What is the difference between `[x:]` and `[x, ...]`? (hint, try each on high-dimension arrays). *Exactly what you choose to return is not important at this point; the goal of this task is to train you so that if you are given an n-dimension NumPy array, you can write an index or slice that returns a subset of desired positions.* 3. "Fancy" IndexingFancy indexing allows you to provide an array of indices or an array of boolean values in order to subset an array. 3.1 Using a Boolean Array for IndexingRather than using an index range, as shown in the previous section, we can provide an array of boolean values where `True` indicates that we want the value in the position where `True` is found, and `False` indicates we do not want it. Creating these boolean arrays is simple if we use conditional statements. For example, review and then execute the following code:
###Code
# Create a 5 x 2 array of random numbers
demo_g = np.random.random((5,2))
# Find all values in the matrix less than 0.5
demo_g < 0.5
###Output
_____no_output_____
###Markdown
Notice the return value is an array of boolean values. `True` indicates the value is less than 0.5; `False` indicates it is greater than or equal to 0.5. We can use this boolean array as an index into the same array to return only those values that satisfy the boolean condition. Try executing the following code:
###Code
demo_g[demo_g < 0.5]
###Output
_____no_output_____
###Markdown
Or alternatively:
###Code
sig_list = demo_g < 0.5
demo_g[sig_list]
###Output
_____no_output_____
###Markdown
 Task 3a: Boolean IndexingIn the practice notebook perform the following:+ Experiment with the following boolean conditionals to generate boolean arrays for indexing: + Greater than + Less than + Equals + Combine two or more of the above with: + or `|` + and `&`You can create arrays or use existing ones. 3.2 Using exact indicesAlternatively, if there are specific elements of the array that we want to retrieve, we can provide their numeric indices. For example, review and then execute the following code:
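For the combined conditions in Task 3a, here is a minimal sketch using `&` and `|` (the parentheses are required because of operator precedence; the array is illustrative). The cell that follows returns to plain numeric indices:

```python
import numpy as np

demo_g = np.random.random((5, 2))

# Values between 0.25 and 0.75 (element-wise AND)
print(demo_g[(demo_g > 0.25) & (demo_g < 0.75)])

# Values below 0.1 or above 0.9 (element-wise OR)
print(demo_g[(demo_g < 0.1) | (demo_g > 0.9)])
```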
###Code
# Generate an array of 500 random numbers
demo_f = np.random.random((500))
# Retrieve 5 numbers from the array at specific indices
demo_f[[0,100,200,300,400]]
###Output
_____no_output_____
###Markdown
 4. Intermission -- Getting HelpPython has a built-in function, `help()`, which we can call on any object (anything) to find out more about it. As we move deeper into the functions provided by most packages, we often need to know exactly what a given function expects as arguments.The output of these `help()` calls can be long. Try executing the following help call for the `np.array` function:
###Code
# Call help on anything from a package.
help(np.array)
###Output
Help on built-in function array in module numpy:
array(...)
array(object, dtype=None, *, copy=True, order='K', subok=False, ndmin=0)
Create an array.
Parameters
----------
object : array_like
An array, any object exposing the array interface, an object whose
__array__ method returns an array, or any (nested) sequence.
dtype : data-type, optional
The desired data-type for the array. If not given, then the type will
be determined as the minimum type required to hold the objects in the
sequence.
copy : bool, optional
If true (default), then the object is copied. Otherwise, a copy will
only be made if __array__ returns a copy, if obj is a nested sequence,
or if a copy is needed to satisfy any of the other requirements
(`dtype`, `order`, etc.).
order : {'K', 'A', 'C', 'F'}, optional
Specify the memory layout of the array. If object is not an array, the
newly created array will be in C order (row major) unless 'F' is
specified, in which case it will be in Fortran order (column major).
If object is an array the following holds.
===== ========= ===================================================
order no copy copy=True
===== ========= ===================================================
'K' unchanged F & C order preserved, otherwise most similar order
'A' unchanged F order if input is F and not C, otherwise C order
'C' C order C order
'F' F order F order
===== ========= ===================================================
When ``copy=False`` and a copy is made for other reasons, the result is
the same as if ``copy=True``, with some exceptions for `A`, see the
Notes section. The default order is 'K'.
subok : bool, optional
If True, then sub-classes will be passed-through, otherwise
the returned array will be forced to be a base-class array (default).
ndmin : int, optional
Specifies the minimum number of dimensions that the resulting
array should have. Ones will be pre-pended to the shape as
needed to meet this requirement.
Returns
-------
out : ndarray
An array object satisfying the specified requirements.
See Also
--------
empty_like : Return an empty array with shape and type of input.
ones_like : Return an array of ones with shape and type of input.
zeros_like : Return an array of zeros with shape and type of input.
full_like : Return a new array with shape of input filled with value.
empty : Return a new uninitialized array.
ones : Return a new array setting values to one.
zeros : Return a new array setting values to zero.
full : Return a new array of given shape filled with value.
Notes
-----
When order is 'A' and `object` is an array in neither 'C' nor 'F' order,
and a copy is forced by a change in dtype, then the order of the result is
not necessarily 'C' as expected. This is likely a bug.
Examples
--------
>>> np.array([1, 2, 3])
array([1, 2, 3])
Upcasting:
>>> np.array([1, 2, 3.0])
array([ 1., 2., 3.])
More than one dimension:
>>> np.array([[1, 2], [3, 4]])
array([[1, 2],
[3, 4]])
Minimum dimensions 2:
>>> np.array([1, 2, 3], ndmin=2)
array([[1, 2, 3]])
Type provided:
>>> np.array([1, 2, 3], dtype=complex)
array([ 1.+0.j, 2.+0.j, 3.+0.j])
Data-type consisting of more than one element:
>>> x = np.array([(1,2),(3,4)],dtype=[('a','<i4'),('b','<i4')])
>>> x['a']
array([1, 3])
Creating an array from sub-classes:
>>> np.array(np.mat('1 2; 3 4'))
array([[1, 2],
[3, 4]])
>>> np.array(np.mat('1 2; 3 4'), subok=True)
matrix([[1, 2],
[3, 4]])
###Markdown
Additionally, we can get help about an object that we created! Execute the following code to try it out:
###Code
# Call help on an object we created.
x = np.array([1, 2, 3, 4])
help(x)
###Output
Help on ndarray object:
class ndarray(builtins.object)
| ndarray(shape, dtype=float, buffer=None, offset=0,
| strides=None, order=None)
|
| An array object represents a multidimensional, homogeneous array
| of fixed-size items. An associated data-type object describes the
| format of each element in the array (its byte-order, how many bytes it
| occupies in memory, whether it is an integer, a floating point number,
| or something else, etc.)
|
| Arrays should be constructed using `array`, `zeros` or `empty` (refer
| to the See Also section below). The parameters given here refer to
| a low-level method (`ndarray(...)`) for instantiating an array.
|
| For more information, refer to the `numpy` module and examine the
| methods and attributes of an array.
|
| Parameters
| ----------
| (for the __new__ method; see Notes below)
|
| shape : tuple of ints
| Shape of created array.
| dtype : data-type, optional
| Any object that can be interpreted as a numpy data type.
| buffer : object exposing buffer interface, optional
| Used to fill the array with data.
| offset : int, optional
| Offset of array data in buffer.
| strides : tuple of ints, optional
| Strides of data in memory.
| order : {'C', 'F'}, optional
| Row-major (C-style) or column-major (Fortran-style) order.
|
| Attributes
| ----------
| T : ndarray
| Transpose of the array.
| data : buffer
| The array's elements, in memory.
| dtype : dtype object
| Describes the format of the elements in the array.
| flags : dict
| Dictionary containing information related to memory use, e.g.,
| 'C_CONTIGUOUS', 'OWNDATA', 'WRITEABLE', etc.
| flat : numpy.flatiter object
| Flattened version of the array as an iterator. The iterator
| allows assignments, e.g., ``x.flat = 3`` (See `ndarray.flat` for
| assignment examples; TODO).
| imag : ndarray
| Imaginary part of the array.
| real : ndarray
| Real part of the array.
| size : int
| Number of elements in the array.
| itemsize : int
| The memory use of each array element in bytes.
| nbytes : int
| The total number of bytes required to store the array data,
| i.e., ``itemsize * size``.
| ndim : int
| The array's number of dimensions.
| shape : tuple of ints
| Shape of the array.
| strides : tuple of ints
| The step-size required to move from one element to the next in
| memory. For example, a contiguous ``(3, 4)`` array of type
| ``int16`` in C-order has strides ``(8, 2)``. This implies that
| to move from element to element in memory requires jumps of 2 bytes.
| To move from row-to-row, one needs to jump 8 bytes at a time
| (``2 * 4``).
| ctypes : ctypes object
| Class containing properties of the array needed for interaction
| with ctypes.
| base : ndarray
| If the array is a view into another array, that array is its `base`
| (unless that array is also a view). The `base` array is where the
| array data is actually stored.
|
| See Also
| --------
| array : Construct an array.
| zeros : Create an array, each element of which is zero.
| empty : Create an array, but leave its allocated memory unchanged (i.e.,
| it contains "garbage").
| dtype : Create a data-type.
|
| Notes
| -----
| There are two modes of creating an array using ``__new__``:
|
| 1. If `buffer` is None, then only `shape`, `dtype`, and `order`
| are used.
| 2. If `buffer` is an object exposing the buffer interface, then
| all keywords are interpreted.
|
| No ``__init__`` method is needed because the array is fully initialized
| after the ``__new__`` method.
|
| Examples
| --------
| These examples illustrate the low-level `ndarray` constructor. Refer
| to the `See Also` section above for easier ways of constructing an
| ndarray.
|
| First mode, `buffer` is None:
|
| >>> np.ndarray(shape=(2,2), dtype=float, order='F')
| array([[0.0e+000, 0.0e+000], # random
| [ nan, 2.5e-323]])
|
| Second mode:
|
| >>> np.ndarray((2,), buffer=np.array([1,2,3]),
| ... offset=np.int_().itemsize,
| ... dtype=int) # offset = 1*itemsize, i.e. skip first element
| array([2, 3])
|
| Methods defined here:
|
| __abs__(self, /)
| abs(self)
|
| __add__(self, value, /)
| Return self+value.
|
| __and__(self, value, /)
| Return self&value.
|
| __array__(...)
| a.__array__([dtype], /) -> reference if type unchanged, copy otherwise.
|
| Returns either a new reference to self if dtype is not given or a new array
| of provided data type if dtype is different from the current dtype of the
| array.
|
| __array_function__(...)
|
| __array_prepare__(...)
| a.__array_prepare__(obj) -> Object of same type as ndarray object obj.
|
| __array_ufunc__(...)
|
| __array_wrap__(...)
| a.__array_wrap__(obj) -> Object of same type as ndarray object a.
|
| __bool__(self, /)
| self != 0
|
| __complex__(...)
|
| __contains__(self, key, /)
| Return key in self.
|
| __copy__(...)
| a.__copy__()
|
| Used if :func:`copy.copy` is called on an array. Returns a copy of the array.
|
| Equivalent to ``a.copy(order='K')``.
|
| __deepcopy__(...)
| a.__deepcopy__(memo, /) -> Deep copy of array.
|
| Used if :func:`copy.deepcopy` is called on an array.
|
| __delitem__(self, key, /)
| Delete self[key].
|
| __divmod__(self, value, /)
| Return divmod(self, value).
|
| __eq__(self, value, /)
| Return self==value.
|
| __float__(self, /)
| float(self)
|
| __floordiv__(self, value, /)
| Return self//value.
|
| __format__(...)
| Default object formatter.
|
| __ge__(self, value, /)
| Return self>=value.
|
| __getitem__(self, key, /)
| Return self[key].
|
| __gt__(self, value, /)
| Return self>value.
|
| __iadd__(self, value, /)
| Return self+=value.
|
| __iand__(self, value, /)
| Return self&=value.
|
| __ifloordiv__(self, value, /)
| Return self//=value.
|
| __ilshift__(self, value, /)
| Return self<<=value.
|
| __imatmul__(self, value, /)
| Return self@=value.
|
| __imod__(self, value, /)
| Return self%=value.
|
| __imul__(self, value, /)
| Return self*=value.
|
| __index__(self, /)
| Return self converted to an integer, if self is suitable for use as an index into a list.
|
| __int__(self, /)
| int(self)
|
| __invert__(self, /)
| ~self
|
| __ior__(self, value, /)
| Return self|=value.
|
| __ipow__(self, value, /)
| Return self**=value.
|
| __irshift__(self, value, /)
| Return self>>=value.
|
| __isub__(self, value, /)
| Return self-=value.
|
| __iter__(self, /)
| Implement iter(self).
|
| __itruediv__(self, value, /)
| Return self/=value.
|
| __ixor__(self, value, /)
| Return self^=value.
|
| __le__(self, value, /)
| Return self<=value.
|
| __len__(self, /)
| Return len(self).
|
| __lshift__(self, value, /)
| Return self<<value.
|
| __lt__(self, value, /)
| Return self<value.
|
| __matmul__(self, value, /)
| Return self@value.
|
| __mod__(self, value, /)
| Return self%value.
|
| __mul__(self, value, /)
| Return self*value.
|
| __ne__(self, value, /)
| Return self!=value.
|
| __neg__(self, /)
| -self
|
| __or__(self, value, /)
| Return self|value.
|
| __pos__(self, /)
| +self
|
| __pow__(self, value, mod=None, /)
| Return pow(self, value, mod).
|
| __radd__(self, value, /)
| Return value+self.
|
| __rand__(self, value, /)
| Return value&self.
|
| __rdivmod__(self, value, /)
| Return divmod(value, self).
|
| __reduce__(...)
| a.__reduce__()
|
| For pickling.
|
| __reduce_ex__(...)
| Helper for pickle.
|
| __repr__(self, /)
| Return repr(self).
|
| __rfloordiv__(self, value, /)
| Return value//self.
|
| __rlshift__(self, value, /)
| Return value<<self.
|
| __rmatmul__(self, value, /)
| Return value@self.
|
| __rmod__(self, value, /)
| Return value%self.
|
| __rmul__(self, value, /)
| Return value*self.
|
| __ror__(self, value, /)
| Return value|self.
|
| __rpow__(self, value, mod=None, /)
| Return pow(value, self, mod).
|
| __rrshift__(self, value, /)
| Return value>>self.
|
| __rshift__(self, value, /)
| Return self>>value.
|
| __rsub__(self, value, /)
| Return value-self.
|
| __rtruediv__(self, value, /)
| Return value/self.
|
| __rxor__(self, value, /)
| Return value^self.
|
| __setitem__(self, key, value, /)
| Set self[key] to value.
|
| __setstate__(...)
| a.__setstate__(state, /)
|
| For unpickling.
|
| The `state` argument must be a sequence that contains the following
| elements:
|
| Parameters
| ----------
| version : int
| optional pickle version. If omitted defaults to 0.
| shape : tuple
| dtype : data-type
| isFortran : bool
| rawdata : string or list
| a binary string with the data (or a list if 'a' is an object array)
|
| __sizeof__(...)
| Size of object in memory, in bytes.
|
| __str__(self, /)
| Return str(self).
|
| __sub__(self, value, /)
| Return self-value.
|
| __truediv__(self, value, /)
| Return self/value.
|
| __xor__(self, value, /)
| Return self^value.
|
| all(...)
| a.all(axis=None, out=None, keepdims=False)
|
| Returns True if all elements evaluate to True.
|
| Refer to `numpy.all` for full documentation.
|
| See Also
| --------
| numpy.all : equivalent function
|
| any(...)
| a.any(axis=None, out=None, keepdims=False)
|
| Returns True if any of the elements of `a` evaluate to True.
|
| Refer to `numpy.any` for full documentation.
|
| See Also
| --------
| numpy.any : equivalent function
|
| argmax(...)
| a.argmax(axis=None, out=None)
|
| Return indices of the maximum values along the given axis.
|
| Refer to `numpy.argmax` for full documentation.
|
| See Also
| --------
| numpy.argmax : equivalent function
|
| argmin(...)
| a.argmin(axis=None, out=None)
|
| Return indices of the minimum values along the given axis of `a`.
|
| Refer to `numpy.argmin` for detailed documentation.
|
| See Also
| --------
| numpy.argmin : equivalent function
|
| argpartition(...)
| a.argpartition(kth, axis=-1, kind='introselect', order=None)
|
| Returns the indices that would partition this array.
|
| Refer to `numpy.argpartition` for full documentation.
|
| .. versionadded:: 1.8.0
|
| See Also
| --------
| numpy.argpartition : equivalent function
|
| argsort(...)
| a.argsort(axis=-1, kind=None, order=None)
|
| Returns the indices that would sort this array.
|
| Refer to `numpy.argsort` for full documentation.
|
| See Also
| --------
| numpy.argsort : equivalent function
|
| astype(...)
| a.astype(dtype, order='K', casting='unsafe', subok=True, copy=True)
|
| Copy of the array, cast to a specified type.
|
| Parameters
| ----------
| dtype : str or dtype
| Typecode or data-type to which the array is cast.
| order : {'C', 'F', 'A', 'K'}, optional
| Controls the memory layout order of the result.
| 'C' means C order, 'F' means Fortran order, 'A'
| means 'F' order if all the arrays are Fortran contiguous,
| 'C' order otherwise, and 'K' means as close to the
| order the array elements appear in memory as possible.
| Default is 'K'.
| casting : {'no', 'equiv', 'safe', 'same_kind', 'unsafe'}, optional
| Controls what kind of data casting may occur. Defaults to 'unsafe'
| for backwards compatibility.
|
| * 'no' means the data types should not be cast at all.
| * 'equiv' means only byte-order changes are allowed.
| * 'safe' means only casts which can preserve values are allowed.
| * 'same_kind' means only safe casts or casts within a kind,
| like float64 to float32, are allowed.
| * 'unsafe' means any data conversions may be done.
| subok : bool, optional
| If True, then sub-classes will be passed-through (default), otherwise
| the returned array will be forced to be a base-class array.
| copy : bool, optional
| By default, astype always returns a newly allocated array. If this
| is set to false, and the `dtype`, `order`, and `subok`
| requirements are satisfied, the input array is returned instead
| of a copy.
|
| Returns
| -------
| arr_t : ndarray
| Unless `copy` is False and the other conditions for returning the input
| array are satisfied (see description for `copy` input parameter), `arr_t`
| is a new array of the same shape as the input array, with dtype, order
| given by `dtype`, `order`.
|
| Notes
| -----
| .. versionchanged:: 1.17.0
| Casting between a simple data type and a structured one is possible only
| for "unsafe" casting. Casting to multiple fields is allowed, but
| casting from multiple fields is not.
|
| .. versionchanged:: 1.9.0
| Casting from numeric to string types in 'safe' casting mode requires
| that the string dtype length is long enough to store the max
| integer/float value converted.
|
| Raises
| ------
| ComplexWarning
| When casting from complex to float or int. To avoid this,
| one should use ``a.real.astype(t)``.
|
| Examples
| --------
| >>> x = np.array([1, 2, 2.5])
| >>> x
| array([1. , 2. , 2.5])
|
| >>> x.astype(int)
| array([1, 2, 2])
|
| byteswap(...)
| a.byteswap(inplace=False)
|
| Swap the bytes of the array elements
|
| Toggle between low-endian and big-endian data representation by
| returning a byteswapped array, optionally swapped in-place.
| Arrays of byte-strings are not swapped. The real and imaginary
| parts of a complex number are swapped individually.
|
| Parameters
| ----------
| inplace : bool, optional
| If ``True``, swap bytes in-place, default is ``False``.
|
| Returns
| -------
| out : ndarray
| The byteswapped array. If `inplace` is ``True``, this is
| a view to self.
|
| Examples
| --------
| >>> A = np.array([1, 256, 8755], dtype=np.int16)
| >>> list(map(hex, A))
| ['0x1', '0x100', '0x2233']
| >>> A.byteswap(inplace=True)
| array([ 256, 1, 13090], dtype=int16)
| >>> list(map(hex, A))
| ['0x100', '0x1', '0x3322']
|
| Arrays of byte-strings are not swapped
|
| >>> A = np.array([b'ceg', b'fac'])
| >>> A.byteswap()
| array([b'ceg', b'fac'], dtype='|S3')
|
| ``A.newbyteorder().byteswap()`` produces an array with the same values
| but different representation in memory
|
| >>> A = np.array([1, 2, 3])
| >>> A.view(np.uint8)
| array([1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0,
| 0, 0], dtype=uint8)
| >>> A.newbyteorder().byteswap(inplace=True)
| array([1, 2, 3])
| >>> A.view(np.uint8)
| array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0,
| 0, 3], dtype=uint8)
|
| choose(...)
| a.choose(choices, out=None, mode='raise')
|
| Use an index array to construct a new array from a set of choices.
|
| Refer to `numpy.choose` for full documentation.
|
| See Also
| --------
| numpy.choose : equivalent function
|
| clip(...)
| a.clip(min=None, max=None, out=None, **kwargs)
|
| Return an array whose values are limited to ``[min, max]``.
| One of max or min must be given.
|
| Refer to `numpy.clip` for full documentation.
|
| See Also
| --------
| numpy.clip : equivalent function
|
| compress(...)
| a.compress(condition, axis=None, out=None)
|
| Return selected slices of this array along given axis.
|
| Refer to `numpy.compress` for full documentation.
|
| See Also
| --------
| numpy.compress : equivalent function
|
| conj(...)
| a.conj()
|
| Complex-conjugate all elements.
|
| Refer to `numpy.conjugate` for full documentation.
|
| See Also
| --------
| numpy.conjugate : equivalent function
|
| conjugate(...)
| a.conjugate()
|
| Return the complex conjugate, element-wise.
|
| Refer to `numpy.conjugate` for full documentation.
|
| See Also
| --------
| numpy.conjugate : equivalent function
|
| copy(...)
| a.copy(order='C')
|
| Return a copy of the array.
|
| Parameters
| ----------
| order : {'C', 'F', 'A', 'K'}, optional
| Controls the memory layout of the copy. 'C' means C-order,
| 'F' means F-order, 'A' means 'F' if `a` is Fortran contiguous,
| 'C' otherwise. 'K' means match the layout of `a` as closely
| as possible. (Note that this function and :func:`numpy.copy` are very
| similar, but have different default values for their order=
| arguments.)
|
| See also
| --------
| numpy.copy
| numpy.copyto
|
| Examples
| --------
| >>> x = np.array([[1,2,3],[4,5,6]], order='F')
|
| >>> y = x.copy()
|
| >>> x.fill(0)
|
| >>> x
| array([[0, 0, 0],
| [0, 0, 0]])
|
| >>> y
| array([[1, 2, 3],
| [4, 5, 6]])
|
| >>> y.flags['C_CONTIGUOUS']
| True
|
| cumprod(...)
| a.cumprod(axis=None, dtype=None, out=None)
|
| Return the cumulative product of the elements along the given axis.
|
| Refer to `numpy.cumprod` for full documentation.
|
| See Also
| --------
| numpy.cumprod : equivalent function
|
| cumsum(...)
| a.cumsum(axis=None, dtype=None, out=None)
|
| Return the cumulative sum of the elements along the given axis.
|
| Refer to `numpy.cumsum` for full documentation.
|
| See Also
| --------
| numpy.cumsum : equivalent function
|
| diagonal(...)
| a.diagonal(offset=0, axis1=0, axis2=1)
|
| Return specified diagonals. In NumPy 1.9 the returned array is a
| read-only view instead of a copy as in previous NumPy versions. In
| a future version the read-only restriction will be removed.
|
| Refer to :func:`numpy.diagonal` for full documentation.
|
| See Also
| --------
| numpy.diagonal : equivalent function
|
| dot(...)
| a.dot(b, out=None)
|
| Dot product of two arrays.
|
| Refer to `numpy.dot` for full documentation.
|
| See Also
| --------
| numpy.dot : equivalent function
|
| Examples
| --------
| >>> a = np.eye(2)
| >>> b = np.ones((2, 2)) * 2
| >>> a.dot(b)
| array([[2., 2.],
| [2., 2.]])
|
| This array method can be conveniently chained:
|
| >>> a.dot(b).dot(b)
| array([[8., 8.],
| [8., 8.]])
|
| dump(...)
| a.dump(file)
|
| Dump a pickle of the array to the specified file.
| The array can be read back with pickle.load or numpy.load.
|
| Parameters
| ----------
| file : str or Path
| A string naming the dump file.
|
| .. versionchanged:: 1.17.0
| `pathlib.Path` objects are now accepted.
|
| dumps(...)
| a.dumps()
|
| Returns the pickle of the array as a string.
| pickle.loads or numpy.loads will convert the string back to an array.
|
| Parameters
| ----------
| None
|
| fill(...)
| a.fill(value)
|
| Fill the array with a scalar value.
|
| Parameters
| ----------
| value : scalar
| All elements of `a` will be assigned this value.
|
| Examples
| --------
| >>> a = np.array([1, 2])
| >>> a.fill(0)
| >>> a
| array([0, 0])
| >>> a = np.empty(2)
| >>> a.fill(1)
| >>> a
| array([1., 1.])
|
| flatten(...)
| a.flatten(order='C')
|
| Return a copy of the array collapsed into one dimension.
|
| Parameters
| ----------
| order : {'C', 'F', 'A', 'K'}, optional
| 'C' means to flatten in row-major (C-style) order.
| 'F' means to flatten in column-major (Fortran-
| style) order. 'A' means to flatten in column-major
| order if `a` is Fortran *contiguous* in memory,
| row-major order otherwise. 'K' means to flatten
| `a` in the order the elements occur in memory.
| The default is 'C'.
|
| Returns
| -------
| y : ndarray
| A copy of the input array, flattened to one dimension.
|
| See Also
| --------
| ravel : Return a flattened array.
| flat : A 1-D flat iterator over the array.
|
| Examples
| --------
| >>> a = np.array([[1,2], [3,4]])
| >>> a.flatten()
| array([1, 2, 3, 4])
| >>> a.flatten('F')
| array([1, 3, 2, 4])
|
| getfield(...)
| a.getfield(dtype, offset=0)
|
| Returns a field of the given array as a certain type.
|
| A field is a view of the array data with a given data-type. The values in
| the view are determined by the given type and the offset into the current
| array in bytes. The offset needs to be such that the view dtype fits in the
| array dtype; for example an array of dtype complex128 has 16-byte elements.
| If taking a view with a 32-bit integer (4 bytes), the offset needs to be
| between 0 and 12 bytes.
|
| Parameters
| ----------
| dtype : str or dtype
| The data type of the view. The dtype size of the view can not be larger
| than that of the array itself.
| offset : int
| Number of bytes to skip before beginning the element view.
|
| Examples
| --------
| >>> x = np.diag([1.+1.j]*2)
| >>> x[1, 1] = 2 + 4.j
| >>> x
| array([[1.+1.j, 0.+0.j],
| [0.+0.j, 2.+4.j]])
| >>> x.getfield(np.float64)
| array([[1., 0.],
| [0., 2.]])
|
| By choosing an offset of 8 bytes we can select the complex part of the
| array for our view:
|
| >>> x.getfield(np.float64, offset=8)
| array([[1., 0.],
| [0., 4.]])
|
| item(...)
| a.item(*args)
|
| Copy an element of an array to a standard Python scalar and return it.
|
| Parameters
| ----------
| \*args : Arguments (variable number and type)
|
| * none: in this case, the method only works for arrays
| with one element (`a.size == 1`), which element is
| copied into a standard Python scalar object and returned.
|
| * int_type: this argument is interpreted as a flat index into
| the array, specifying which element to copy and return.
|
| * tuple of int_types: functions as does a single int_type argument,
| except that the argument is interpreted as an nd-index into the
| array.
|
| Returns
| -------
| z : Standard Python scalar object
| A copy of the specified element of the array as a suitable
| Python scalar
|
| Notes
| -----
| When the data type of `a` is longdouble or clongdouble, item() returns
| a scalar array object because there is no available Python scalar that
| would not lose information. Void arrays return a buffer object for item(),
| unless fields are defined, in which case a tuple is returned.
|
| `item` is very similar to a[args], except, instead of an array scalar,
| a standard Python scalar is returned. This can be useful for speeding up
| access to elements of the array and doing arithmetic on elements of the
| array using Python's optimized math.
|
| Examples
| --------
| >>> np.random.seed(123)
| >>> x = np.random.randint(9, size=(3, 3))
| >>> x
| array([[2, 2, 6],
| [1, 3, 6],
| [1, 0, 1]])
| >>> x.item(3)
| 1
| >>> x.item(7)
| 0
| >>> x.item((0, 1))
| 2
| >>> x.item((2, 2))
| 1
|
| itemset(...)
| a.itemset(*args)
|
| Insert scalar into an array (scalar is cast to array's dtype, if possible)
|
| There must be at least 1 argument, and define the last argument
| as *item*. Then, ``a.itemset(*args)`` is equivalent to but faster
| than ``a[args] = item``. The item should be a scalar value and `args`
| must select a single item in the array `a`.
|
| Parameters
| ----------
| \*args : Arguments
| If one argument: a scalar, only used in case `a` is of size 1.
| If two arguments: the last argument is the value to be set
| and must be a scalar, the first argument specifies a single array
| element location. It is either an int or a tuple.
|
| Notes
| -----
| Compared to indexing syntax, `itemset` provides some speed increase
| for placing a scalar into a particular location in an `ndarray`,
| if you must do this. However, generally this is discouraged:
| among other problems, it complicates the appearance of the code.
| Also, when using `itemset` (and `item`) inside a loop, be sure
| to assign the methods to a local variable to avoid the attribute
| look-up at each loop iteration.
|
| Examples
| --------
| >>> np.random.seed(123)
| >>> x = np.random.randint(9, size=(3, 3))
| >>> x
| array([[2, 2, 6],
| [1, 3, 6],
| [1, 0, 1]])
| >>> x.itemset(4, 0)
| >>> x.itemset((2, 2), 9)
| >>> x
| array([[2, 2, 6],
| [1, 0, 6],
| [1, 0, 9]])
|
| max(...)
| a.max(axis=None, out=None, keepdims=False, initial=<no value>, where=True)
|
| Return the maximum along a given axis.
|
| Refer to `numpy.amax` for full documentation.
|
| See Also
| --------
| numpy.amax : equivalent function
|
| mean(...)
| a.mean(axis=None, dtype=None, out=None, keepdims=False)
|
| Returns the average of the array elements along given axis.
|
| Refer to `numpy.mean` for full documentation.
|
| See Also
| --------
| numpy.mean : equivalent function
|
| min(...)
| a.min(axis=None, out=None, keepdims=False, initial=<no value>, where=True)
|
| Return the minimum along a given axis.
|
| Refer to `numpy.amin` for full documentation.
|
| See Also
| --------
| numpy.amin : equivalent function
|
| newbyteorder(...)
| arr.newbyteorder(new_order='S')
|
| Return the array with the same data viewed with a different byte order.
|
| Equivalent to::
|
 |          arr.view(arr.dtype.newbyteorder(new_order))
|
| Changes are also made in all fields and sub-arrays of the array data
| type.
|
|
|
| Parameters
| ----------
| new_order : string, optional
| Byte order to force; a value from the byte order specifications
| below. `new_order` codes can be any of:
|
| * 'S' - swap dtype from current to opposite endian
| * {'<', 'L'} - little endian
| * {'>', 'B'} - big endian
| * {'=', 'N'} - native order
| * {'|', 'I'} - ignore (no change to byte order)
|
| The default value ('S') results in swapping the current
| byte order. The code does a case-insensitive check on the first
| letter of `new_order` for the alternatives above. For example,
| any of 'B' or 'b' or 'biggish' are valid to specify big-endian.
|
|
| Returns
| -------
| new_arr : array
| New array object with the dtype reflecting given change to the
| byte order.
|
| nonzero(...)
| a.nonzero()
|
| Return the indices of the elements that are non-zero.
|
| Refer to `numpy.nonzero` for full documentation.
|
| See Also
| --------
| numpy.nonzero : equivalent function
|
| partition(...)
| a.partition(kth, axis=-1, kind='introselect', order=None)
|
| Rearranges the elements in the array in such a way that the value of the
| element in kth position is in the position it would be in a sorted array.
| All elements smaller than the kth element are moved before this element and
| all equal or greater are moved behind it. The ordering of the elements in
| the two partitions is undefined.
|
| .. versionadded:: 1.8.0
|
| Parameters
| ----------
| kth : int or sequence of ints
| Element index to partition by. The kth element value will be in its
| final sorted position and all smaller elements will be moved before it
| and all equal or greater elements behind it.
| The order of all elements in the partitions is undefined.
| If provided with a sequence of kth it will partition all elements
| indexed by kth of them into their sorted position at once.
| axis : int, optional
| Axis along which to sort. Default is -1, which means sort along the
| last axis.
| kind : {'introselect'}, optional
| Selection algorithm. Default is 'introselect'.
| order : str or list of str, optional
| When `a` is an array with fields defined, this argument specifies
| which fields to compare first, second, etc. A single field can
| be specified as a string, and not all fields need to be specified,
| but unspecified fields will still be used, in the order in which
| they come up in the dtype, to break ties.
|
| See Also
| --------
 |      numpy.partition : Return a partitioned copy of an array.
| argpartition : Indirect partition.
| sort : Full sort.
|
| Notes
| -----
| See ``np.partition`` for notes on the different algorithms.
|
| Examples
| --------
| >>> a = np.array([3, 4, 2, 1])
| >>> a.partition(3)
| >>> a
| array([2, 1, 3, 4])
|
| >>> a.partition((1, 3))
| >>> a
| array([1, 2, 3, 4])
|
| prod(...)
| a.prod(axis=None, dtype=None, out=None, keepdims=False, initial=1, where=True)
|
| Return the product of the array elements over the given axis
|
| Refer to `numpy.prod` for full documentation.
|
| See Also
| --------
| numpy.prod : equivalent function
|
| ptp(...)
| a.ptp(axis=None, out=None, keepdims=False)
|
| Peak to peak (maximum - minimum) value along a given axis.
|
| Refer to `numpy.ptp` for full documentation.
|
| See Also
| --------
| numpy.ptp : equivalent function
|
| put(...)
| a.put(indices, values, mode='raise')
|
| Set ``a.flat[n] = values[n]`` for all `n` in indices.
|
| Refer to `numpy.put` for full documentation.
|
| See Also
| --------
| numpy.put : equivalent function
|
| ravel(...)
| a.ravel([order])
|
| Return a flattened array.
|
| Refer to `numpy.ravel` for full documentation.
|
| See Also
| --------
| numpy.ravel : equivalent function
|
| ndarray.flat : a flat iterator on the array.
|
| repeat(...)
| a.repeat(repeats, axis=None)
|
| Repeat elements of an array.
|
| Refer to `numpy.repeat` for full documentation.
|
| See Also
| --------
| numpy.repeat : equivalent function
|
| reshape(...)
| a.reshape(shape, order='C')
|
| Returns an array containing the same data with a new shape.
|
| Refer to `numpy.reshape` for full documentation.
|
| See Also
| --------
| numpy.reshape : equivalent function
|
| Notes
| -----
| Unlike the free function `numpy.reshape`, this method on `ndarray` allows
| the elements of the shape parameter to be passed in as separate arguments.
| For example, ``a.reshape(10, 11)`` is equivalent to
| ``a.reshape((10, 11))``.
|
| resize(...)
| a.resize(new_shape, refcheck=True)
|
| Change shape and size of array in-place.
|
| Parameters
| ----------
| new_shape : tuple of ints, or `n` ints
| Shape of resized array.
| refcheck : bool, optional
| If False, reference count will not be checked. Default is True.
|
| Returns
| -------
| None
|
| Raises
| ------
| ValueError
| If `a` does not own its own data or references or views to it exist,
| and the data memory must be changed.
| PyPy only: will always raise if the data memory must be changed, since
| there is no reliable way to determine if references or views to it
| exist.
|
| SystemError
| If the `order` keyword argument is specified. This behaviour is a
| bug in NumPy.
|
| See Also
| --------
| resize : Return a new array with the specified shape.
|
| Notes
| -----
| This reallocates space for the data area if necessary.
|
| Only contiguous arrays (data elements consecutive in memory) can be
| resized.
|
| The purpose of the reference count check is to make sure you
| do not use this array as a buffer for another Python object and then
| reallocate the memory. However, reference counts can increase in
| other ways so if you are sure that you have not shared the memory
| for this array with another Python object, then you may safely set
| `refcheck` to False.
|
| Examples
| --------
| Shrinking an array: array is flattened (in the order that the data are
| stored in memory), resized, and reshaped:
|
| >>> a = np.array([[0, 1], [2, 3]], order='C')
| >>> a.resize((2, 1))
| >>> a
| array([[0],
| [1]])
|
| >>> a = np.array([[0, 1], [2, 3]], order='F')
| >>> a.resize((2, 1))
| >>> a
| array([[0],
| [2]])
|
| Enlarging an array: as above, but missing entries are filled with zeros:
|
| >>> b = np.array([[0, 1], [2, 3]])
| >>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple
| >>> b
| array([[0, 1, 2],
| [3, 0, 0]])
|
| Referencing an array prevents resizing...
|
| >>> c = a
| >>> a.resize((1, 1))
| Traceback (most recent call last):
| ...
| ValueError: cannot resize an array that references or is referenced ...
|
| Unless `refcheck` is False:
|
| >>> a.resize((1, 1), refcheck=False)
| >>> a
| array([[0]])
| >>> c
| array([[0]])
|
| round(...)
| a.round(decimals=0, out=None)
|
| Return `a` with each element rounded to the given number of decimals.
|
| Refer to `numpy.around` for full documentation.
|
| See Also
| --------
| numpy.around : equivalent function
|
| searchsorted(...)
| a.searchsorted(v, side='left', sorter=None)
|
| Find indices where elements of v should be inserted in a to maintain order.
|
| For full documentation, see `numpy.searchsorted`
|
| See Also
| --------
| numpy.searchsorted : equivalent function
|
| setfield(...)
| a.setfield(val, dtype, offset=0)
|
| Put a value into a specified place in a field defined by a data-type.
|
| Place `val` into `a`'s field defined by `dtype` and beginning `offset`
| bytes into the field.
|
| Parameters
| ----------
| val : object
| Value to be placed in field.
| dtype : dtype object
| Data-type of the field in which to place `val`.
| offset : int, optional
| The number of bytes into the field at which to place `val`.
|
| Returns
| -------
| None
|
| See Also
| --------
| getfield
|
| Examples
| --------
| >>> x = np.eye(3)
| >>> x.getfield(np.float64)
| array([[1., 0., 0.],
| [0., 1., 0.],
| [0., 0., 1.]])
| >>> x.setfield(3, np.int32)
| >>> x.getfield(np.int32)
| array([[3, 3, 3],
| [3, 3, 3],
| [3, 3, 3]], dtype=int32)
| >>> x
| array([[1.0e+000, 1.5e-323, 1.5e-323],
| [1.5e-323, 1.0e+000, 1.5e-323],
| [1.5e-323, 1.5e-323, 1.0e+000]])
| >>> x.setfield(np.eye(3), np.int32)
| >>> x
| array([[1., 0., 0.],
| [0., 1., 0.],
| [0., 0., 1.]])
|
| setflags(...)
| a.setflags(write=None, align=None, uic=None)
|
| Set array flags WRITEABLE, ALIGNED, (WRITEBACKIFCOPY and UPDATEIFCOPY),
| respectively.
|
| These Boolean-valued flags affect how numpy interprets the memory
| area used by `a` (see Notes below). The ALIGNED flag can only
| be set to True if the data is actually aligned according to the type.
| The WRITEBACKIFCOPY and (deprecated) UPDATEIFCOPY flags can never be set
| to True. The flag WRITEABLE can only be set to True if the array owns its
| own memory, or the ultimate owner of the memory exposes a writeable buffer
| interface, or is a string. (The exception for string is made so that
| unpickling can be done without copying memory.)
|
| Parameters
| ----------
| write : bool, optional
| Describes whether or not `a` can be written to.
| align : bool, optional
| Describes whether or not `a` is aligned properly for its type.
| uic : bool, optional
| Describes whether or not `a` is a copy of another "base" array.
|
| Notes
| -----
| Array flags provide information about how the memory area used
| for the array is to be interpreted. There are 7 Boolean flags
| in use, only four of which can be changed by the user:
| WRITEBACKIFCOPY, UPDATEIFCOPY, WRITEABLE, and ALIGNED.
|
| WRITEABLE (W) the data area can be written to;
|
| ALIGNED (A) the data and strides are aligned appropriately for the hardware
| (as determined by the compiler);
|
| UPDATEIFCOPY (U) (deprecated), replaced by WRITEBACKIFCOPY;
|
| WRITEBACKIFCOPY (X) this array is a copy of some other array (referenced
| by .base). When the C-API function PyArray_ResolveWritebackIfCopy is
| called, the base array will be updated with the contents of this array.
|
| All flags can be accessed using the single (upper case) letter as well
| as the full name.
|
| Examples
| --------
| >>> y = np.array([[3, 1, 7],
| ... [2, 0, 0],
| ... [8, 5, 9]])
| >>> y
| array([[3, 1, 7],
| [2, 0, 0],
| [8, 5, 9]])
| >>> y.flags
| C_CONTIGUOUS : True
| F_CONTIGUOUS : False
| OWNDATA : True
| WRITEABLE : True
| ALIGNED : True
| WRITEBACKIFCOPY : False
| UPDATEIFCOPY : False
| >>> y.setflags(write=0, align=0)
| >>> y.flags
| C_CONTIGUOUS : True
| F_CONTIGUOUS : False
| OWNDATA : True
| WRITEABLE : False
| ALIGNED : False
| WRITEBACKIFCOPY : False
| UPDATEIFCOPY : False
| >>> y.setflags(uic=1)
| Traceback (most recent call last):
| File "<stdin>", line 1, in <module>
| ValueError: cannot set WRITEBACKIFCOPY flag to True
|
| sort(...)
| a.sort(axis=-1, kind=None, order=None)
|
| Sort an array in-place. Refer to `numpy.sort` for full documentation.
|
| Parameters
| ----------
| axis : int, optional
| Axis along which to sort. Default is -1, which means sort along the
| last axis.
| kind : {'quicksort', 'mergesort', 'heapsort', 'stable'}, optional
| Sorting algorithm. The default is 'quicksort'. Note that both 'stable'
| and 'mergesort' use timsort under the covers and, in general, the
| actual implementation will vary with datatype. The 'mergesort' option
| is retained for backwards compatibility.
|
| .. versionchanged:: 1.15.0.
| The 'stable' option was added.
|
| order : str or list of str, optional
| When `a` is an array with fields defined, this argument specifies
| which fields to compare first, second, etc. A single field can
| be specified as a string, and not all fields need be specified,
| but unspecified fields will still be used, in the order in which
| they come up in the dtype, to break ties.
|
| See Also
| --------
| numpy.sort : Return a sorted copy of an array.
| numpy.argsort : Indirect sort.
| numpy.lexsort : Indirect stable sort on multiple keys.
| numpy.searchsorted : Find elements in sorted array.
| numpy.partition: Partial sort.
|
| Notes
| -----
| See `numpy.sort` for notes on the different sorting algorithms.
|
| Examples
| --------
| >>> a = np.array([[1,4], [3,1]])
| >>> a.sort(axis=1)
| >>> a
| array([[1, 4],
| [1, 3]])
| >>> a.sort(axis=0)
| >>> a
| array([[1, 3],
| [1, 4]])
|
| Use the `order` keyword to specify a field to use when sorting a
| structured array:
|
| >>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)])
| >>> a.sort(order='y')
| >>> a
| array([(b'c', 1), (b'a', 2)],
| dtype=[('x', 'S1'), ('y', '<i8')])
|
| squeeze(...)
| a.squeeze(axis=None)
|
| Remove single-dimensional entries from the shape of `a`.
|
| Refer to `numpy.squeeze` for full documentation.
|
| See Also
| --------
| numpy.squeeze : equivalent function
|
| std(...)
| a.std(axis=None, dtype=None, out=None, ddof=0, keepdims=False)
|
| Returns the standard deviation of the array elements along given axis.
|
| Refer to `numpy.std` for full documentation.
|
| See Also
| --------
| numpy.std : equivalent function
|
| sum(...)
| a.sum(axis=None, dtype=None, out=None, keepdims=False, initial=0, where=True)
|
| Return the sum of the array elements over the given axis.
|
| Refer to `numpy.sum` for full documentation.
|
| See Also
| --------
| numpy.sum : equivalent function
|
| swapaxes(...)
| a.swapaxes(axis1, axis2)
|
| Return a view of the array with `axis1` and `axis2` interchanged.
|
| Refer to `numpy.swapaxes` for full documentation.
|
| See Also
| --------
| numpy.swapaxes : equivalent function
|
| take(...)
| a.take(indices, axis=None, out=None, mode='raise')
|
| Return an array formed from the elements of `a` at the given indices.
|
| Refer to `numpy.take` for full documentation.
|
| See Also
| --------
| numpy.take : equivalent function
|
| tobytes(...)
| a.tobytes(order='C')
|
| Construct Python bytes containing the raw data bytes in the array.
|
| Constructs Python bytes showing a copy of the raw contents of
| data memory. The bytes object can be produced in either 'C' or 'Fortran',
| or 'Any' order (the default is 'C'-order). 'Any' order means C-order
| unless the F_CONTIGUOUS flag in the array is set, in which case it
| means 'Fortran' order.
|
| .. versionadded:: 1.9.0
|
| Parameters
| ----------
| order : {'C', 'F', None}, optional
| Order of the data for multidimensional arrays:
| C, Fortran, or the same as for the original array.
|
| Returns
| -------
| s : bytes
| Python bytes exhibiting a copy of `a`'s raw data.
|
| Examples
| --------
| >>> x = np.array([[0, 1], [2, 3]], dtype='<u2')
| >>> x.tobytes()
| b'\x00\x00\x01\x00\x02\x00\x03\x00'
| >>> x.tobytes('C') == x.tobytes()
| True
| >>> x.tobytes('F')
| b'\x00\x00\x02\x00\x01\x00\x03\x00'
|
| tofile(...)
| a.tofile(fid, sep="", format="%s")
|
| Write array to a file as text or binary (default).
|
| Data is always written in 'C' order, independent of the order of `a`.
| The data produced by this method can be recovered using the function
| fromfile().
|
| Parameters
| ----------
| fid : file or str or Path
| An open file object, or a string containing a filename.
|
| .. versionchanged:: 1.17.0
| `pathlib.Path` objects are now accepted.
|
| sep : str
| Separator between array items for text output.
| If "" (empty), a binary file is written, equivalent to
| ``file.write(a.tobytes())``.
| format : str
| Format string for text file output.
| Each entry in the array is formatted to text by first converting
| it to the closest Python type, and then using "format" % item.
|
| Notes
| -----
| This is a convenience function for quick storage of array data.
| Information on endianness and precision is lost, so this method is not a
| good choice for files intended to archive data or transport data between
| machines with different endianness. Some of these problems can be overcome
| by outputting the data as text files, at the expense of speed and file
| size.
|
| When fid is a file object, array contents are directly written to the
| file, bypassing the file object's ``write`` method. As a result, tofile
| cannot be used with files objects supporting compression (e.g., GzipFile)
| or file-like objects that do not support ``fileno()`` (e.g., BytesIO).
|
| tolist(...)
| a.tolist()
|
| Return the array as an ``a.ndim``-levels deep nested list of Python scalars.
|
| Return a copy of the array data as a (nested) Python list.
| Data items are converted to the nearest compatible builtin Python type, via
| the `~numpy.ndarray.item` function.
|
| If ``a.ndim`` is 0, then since the depth of the nested list is 0, it will
| not be a list at all, but a simple Python scalar.
|
| Parameters
| ----------
| none
|
| Returns
| -------
| y : object, or list of object, or list of list of object, or ...
| The possibly nested list of array elements.
|
| Notes
| -----
| The array may be recreated via ``a = np.array(a.tolist())``, although this
| may sometimes lose precision.
|
| Examples
| --------
| For a 1D array, ``a.tolist()`` is almost the same as ``list(a)``,
| except that ``tolist`` changes numpy scalars to Python scalars:
|
| >>> a = np.uint32([1, 2])
| >>> a_list = list(a)
| >>> a_list
| [1, 2]
| >>> type(a_list[0])
| <class 'numpy.uint32'>
| >>> a_tolist = a.tolist()
| >>> a_tolist
| [1, 2]
| >>> type(a_tolist[0])
| <class 'int'>
|
| Additionally, for a 2D array, ``tolist`` applies recursively:
|
| >>> a = np.array([[1, 2], [3, 4]])
| >>> list(a)
| [array([1, 2]), array([3, 4])]
| >>> a.tolist()
| [[1, 2], [3, 4]]
|
| The base case for this recursion is a 0D array:
|
| >>> a = np.array(1)
| >>> list(a)
| Traceback (most recent call last):
| ...
| TypeError: iteration over a 0-d array
| >>> a.tolist()
| 1
|
| tostring(...)
| a.tostring(order='C')
|
| A compatibility alias for `tobytes`, with exactly the same behavior.
|
| Despite its name, it returns `bytes` not `str`\ s.
|
| .. deprecated:: 1.19.0
|
| trace(...)
| a.trace(offset=0, axis1=0, axis2=1, dtype=None, out=None)
|
| Return the sum along diagonals of the array.
|
| Refer to `numpy.trace` for full documentation.
|
| See Also
| --------
| numpy.trace : equivalent function
|
| transpose(...)
| a.transpose(*axes)
|
| Returns a view of the array with axes transposed.
|
| For a 1-D array this has no effect, as a transposed vector is simply the
| same vector. To convert a 1-D array into a 2D column vector, an additional
 |      dimension must be added. `np.atleast_2d(a).T` achieves this, as does
| `a[:, np.newaxis]`.
| For a 2-D array, this is a standard matrix transpose.
| For an n-D array, if axes are given, their order indicates how the
| axes are permuted (see Examples). If axes are not provided and
| ``a.shape = (i[0], i[1], ... i[n-2], i[n-1])``, then
| ``a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0])``.
|
| Parameters
| ----------
| axes : None, tuple of ints, or `n` ints
|
| * None or no argument: reverses the order of the axes.
|
| * tuple of ints: `i` in the `j`-th place in the tuple means `a`'s
| `i`-th axis becomes `a.transpose()`'s `j`-th axis.
|
| * `n` ints: same as an n-tuple of the same ints (this form is
| intended simply as a "convenience" alternative to the tuple form)
|
| Returns
| -------
| out : ndarray
| View of `a`, with axes suitably permuted.
|
| See Also
| --------
| ndarray.T : Array property returning the array transposed.
| ndarray.reshape : Give a new shape to an array without changing its data.
|
| Examples
| --------
| >>> a = np.array([[1, 2], [3, 4]])
| >>> a
| array([[1, 2],
| [3, 4]])
| >>> a.transpose()
| array([[1, 3],
| [2, 4]])
| >>> a.transpose((1, 0))
| array([[1, 3],
| [2, 4]])
| >>> a.transpose(1, 0)
| array([[1, 3],
| [2, 4]])
|
| var(...)
| a.var(axis=None, dtype=None, out=None, ddof=0, keepdims=False)
|
| Returns the variance of the array elements, along given axis.
|
| Refer to `numpy.var` for full documentation.
|
| See Also
| --------
| numpy.var : equivalent function
|
| view(...)
| a.view([dtype][, type])
|
| New view of array with the same data.
|
| .. note::
| Passing None for ``dtype`` is different from omitting the parameter,
| since the former invokes ``dtype(None)`` which is an alias for
| ``dtype('float_')``.
|
| Parameters
| ----------
| dtype : data-type or ndarray sub-class, optional
| Data-type descriptor of the returned view, e.g., float32 or int16.
| Omitting it results in the view having the same data-type as `a`.
| This argument can also be specified as an ndarray sub-class, which
| then specifies the type of the returned object (this is equivalent to
| setting the ``type`` parameter).
| type : Python type, optional
| Type of the returned view, e.g., ndarray or matrix. Again, omission
| of the parameter results in type preservation.
|
| Notes
| -----
| ``a.view()`` is used two different ways:
|
| ``a.view(some_dtype)`` or ``a.view(dtype=some_dtype)`` constructs a view
| of the array's memory with a different data-type. This can cause a
| reinterpretation of the bytes of memory.
|
| ``a.view(ndarray_subclass)`` or ``a.view(type=ndarray_subclass)`` just
| returns an instance of `ndarray_subclass` that looks at the same array
| (same shape, dtype, etc.) This does not cause a reinterpretation of the
| memory.
|
| For ``a.view(some_dtype)``, if ``some_dtype`` has a different number of
| bytes per entry than the previous dtype (for example, converting a
| regular array to a structured array), then the behavior of the view
| cannot be predicted just from the superficial appearance of ``a`` (shown
| by ``print(a)``). It also depends on exactly how ``a`` is stored in
| memory. Therefore if ``a`` is C-ordered versus fortran-ordered, versus
| defined as a slice or transpose, etc., the view may give different
| results.
|
|
| Examples
| --------
| >>> x = np.array([(1, 2)], dtype=[('a', np.int8), ('b', np.int8)])
|
| Viewing array data using a different type and dtype:
|
| >>> y = x.view(dtype=np.int16, type=np.matrix)
| >>> y
| matrix([[513]], dtype=int16)
| >>> print(type(y))
| <class 'numpy.matrix'>
|
| Creating a view on a structured array so it can be used in calculations
|
| >>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)])
| >>> xv = x.view(dtype=np.int8).reshape(-1,2)
| >>> xv
| array([[1, 2],
| [3, 4]], dtype=int8)
| >>> xv.mean(0)
| array([2., 3.])
|
| Making changes to the view changes the underlying array
|
| >>> xv[0,1] = 20
| >>> x
| array([(1, 20), (3, 4)], dtype=[('a', 'i1'), ('b', 'i1')])
|
| Using a view to convert an array to a recarray:
|
| >>> z = x.view(np.recarray)
| >>> z.a
| array([1, 3], dtype=int8)
|
| Views share data:
|
| >>> x[0] = (9, 10)
| >>> z[0]
| (9, 10)
|
| Views that change the dtype size (bytes per entry) should normally be
| avoided on arrays defined by slices, transposes, fortran-ordering, etc.:
|
| >>> x = np.array([[1,2,3],[4,5,6]], dtype=np.int16)
| >>> y = x[:, 0:2]
| >>> y
| array([[1, 2],
| [4, 5]], dtype=int16)
| >>> y.view(dtype=[('width', np.int16), ('length', np.int16)])
| Traceback (most recent call last):
| ...
| ValueError: To change to a dtype of a different size, the array must be C-contiguous
| >>> z = y.copy()
| >>> z.view(dtype=[('width', np.int16), ('length', np.int16)])
| array([[(1, 2)],
| [(4, 5)]], dtype=[('width', '<i2'), ('length', '<i2')])
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| T
| The transposed array.
|
| Same as ``self.transpose()``.
|
| Examples
| --------
| >>> x = np.array([[1.,2.],[3.,4.]])
| >>> x
| array([[ 1., 2.],
| [ 3., 4.]])
| >>> x.T
| array([[ 1., 3.],
| [ 2., 4.]])
| >>> x = np.array([1.,2.,3.,4.])
| >>> x
| array([ 1., 2., 3., 4.])
| >>> x.T
| array([ 1., 2., 3., 4.])
|
| See Also
| --------
| transpose
|
| __array_finalize__
| None.
|
| __array_interface__
| Array protocol: Python side.
|
| __array_priority__
| Array priority.
|
| __array_struct__
| Array protocol: C-struct side.
|
| base
| Base object if memory is from some other object.
|
| Examples
| --------
| The base of an array that owns its memory is None:
|
| >>> x = np.array([1,2,3,4])
| >>> x.base is None
| True
|
| Slicing creates a view, whose memory is shared with x:
|
| >>> y = x[2:]
| >>> y.base is x
| True
|
| ctypes
| An object to simplify the interaction of the array with the ctypes
| module.
|
| This attribute creates an object that makes it easier to use arrays
| when calling shared libraries with the ctypes module. The returned
| object has, among others, data, shape, and strides attributes (see
| Notes below) which themselves return ctypes objects that can be used
| as arguments to a shared library.
|
| Parameters
| ----------
| None
|
| Returns
| -------
| c : Python object
| Possessing attributes data, shape, strides, etc.
|
| See Also
| --------
| numpy.ctypeslib
|
| Notes
| -----
| Below are the public attributes of this object which were documented
| in "Guide to NumPy" (we have omitted undocumented public attributes,
| as well as documented private attributes):
|
| .. autoattribute:: numpy.core._internal._ctypes.data
| :noindex:
|
| .. autoattribute:: numpy.core._internal._ctypes.shape
| :noindex:
|
| .. autoattribute:: numpy.core._internal._ctypes.strides
| :noindex:
|
| .. automethod:: numpy.core._internal._ctypes.data_as
| :noindex:
|
| .. automethod:: numpy.core._internal._ctypes.shape_as
| :noindex:
|
| .. automethod:: numpy.core._internal._ctypes.strides_as
| :noindex:
|
| If the ctypes module is not available, then the ctypes attribute
| of array objects still returns something useful, but ctypes objects
| are not returned and errors may be raised instead. In particular,
| the object will still have the ``as_parameter`` attribute which will
| return an integer equal to the data attribute.
|
| Examples
| --------
| >>> import ctypes
| >>> x = np.array([[0, 1], [2, 3]], dtype=np.int32)
| >>> x
| array([[0, 1],
| [2, 3]], dtype=int32)
| >>> x.ctypes.data
| 31962608 # may vary
| >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32))
| <__main__.LP_c_uint object at 0x7ff2fc1fc200> # may vary
| >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)).contents
| c_uint(0)
| >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint64)).contents
| c_ulong(4294967296)
| >>> x.ctypes.shape
| <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1fce60> # may vary
| >>> x.ctypes.strides
| <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1ff320> # may vary
|
| data
| Python buffer object pointing to the start of the array's data.
|
| dtype
| Data-type of the array's elements.
|
| Parameters
| ----------
| None
|
| Returns
| -------
| d : numpy dtype object
|
| See Also
| --------
| numpy.dtype
|
| Examples
| --------
| >>> x
| array([[0, 1],
| [2, 3]])
| >>> x.dtype
| dtype('int32')
| >>> type(x.dtype)
| <type 'numpy.dtype'>
|
| flags
| Information about the memory layout of the array.
|
| Attributes
| ----------
| C_CONTIGUOUS (C)
| The data is in a single, C-style contiguous segment.
| F_CONTIGUOUS (F)
| The data is in a single, Fortran-style contiguous segment.
| OWNDATA (O)
| The array owns the memory it uses or borrows it from another object.
| WRITEABLE (W)
| The data area can be written to. Setting this to False locks
| the data, making it read-only. A view (slice, etc.) inherits WRITEABLE
| from its base array at creation time, but a view of a writeable
| array may be subsequently locked while the base array remains writeable.
| (The opposite is not true, in that a view of a locked array may not
| be made writeable. However, currently, locking a base object does not
| lock any views that already reference it, so under that circumstance it
| is possible to alter the contents of a locked array via a previously
| created writeable view onto it.) Attempting to change a non-writeable
| array raises a RuntimeError exception.
| ALIGNED (A)
| The data and all elements are aligned appropriately for the hardware.
| WRITEBACKIFCOPY (X)
| This array is a copy of some other array. The C-API function
| PyArray_ResolveWritebackIfCopy must be called before deallocating
 |          the array, so that the base array will be updated with the contents of this array.
| UPDATEIFCOPY (U)
| (Deprecated, use WRITEBACKIFCOPY) This array is a copy of some other array.
| When this array is
| deallocated, the base array will be updated with the contents of
| this array.
| FNC
| F_CONTIGUOUS and not C_CONTIGUOUS.
| FORC
| F_CONTIGUOUS or C_CONTIGUOUS (one-segment test).
| BEHAVED (B)
| ALIGNED and WRITEABLE.
| CARRAY (CA)
| BEHAVED and C_CONTIGUOUS.
| FARRAY (FA)
| BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS.
|
| Notes
| -----
| The `flags` object can be accessed dictionary-like (as in ``a.flags['WRITEABLE']``),
| or by using lowercased attribute names (as in ``a.flags.writeable``). Short flag
| names are only supported in dictionary access.
|
| Only the WRITEBACKIFCOPY, UPDATEIFCOPY, WRITEABLE, and ALIGNED flags can be
| changed by the user, via direct assignment to the attribute or dictionary
| entry, or by calling `ndarray.setflags`.
|
| The array flags cannot be set arbitrarily:
|
| - UPDATEIFCOPY can only be set ``False``.
| - WRITEBACKIFCOPY can only be set ``False``.
| - ALIGNED can only be set ``True`` if the data is truly aligned.
| - WRITEABLE can only be set ``True`` if the array owns its own memory
| or the ultimate owner of the memory exposes a writeable buffer
| interface or is a string.
|
| Arrays can be both C-style and Fortran-style contiguous simultaneously.
| This is clear for 1-dimensional arrays, but can also be true for higher
| dimensional arrays.
|
| Even for contiguous arrays a stride for a given dimension
| ``arr.strides[dim]`` may be *arbitrary* if ``arr.shape[dim] == 1``
| or the array has no elements.
| It does *not* generally hold that ``self.strides[-1] == self.itemsize``
| for C-style contiguous arrays or ``self.strides[0] == self.itemsize`` for
| Fortran-style contiguous arrays is true.
|
| flat
| A 1-D iterator over the array.
|
| This is a `numpy.flatiter` instance, which acts similarly to, but is not
| a subclass of, Python's built-in iterator object.
|
| See Also
| --------
| flatten : Return a copy of the array collapsed into one dimension.
|
| flatiter
|
| Examples
| --------
| >>> x = np.arange(1, 7).reshape(2, 3)
| >>> x
| array([[1, 2, 3],
| [4, 5, 6]])
| >>> x.flat[3]
| 4
| >>> x.T
| array([[1, 4],
| [2, 5],
| [3, 6]])
| >>> x.T.flat[3]
| 5
| >>> type(x.flat)
| <class 'numpy.flatiter'>
|
| An assignment example:
|
| >>> x.flat = 3; x
| array([[3, 3, 3],
| [3, 3, 3]])
| >>> x.flat[[1,4]] = 1; x
| array([[3, 1, 3],
| [3, 1, 3]])
|
| imag
| The imaginary part of the array.
|
| Examples
| --------
| >>> x = np.sqrt([1+0j, 0+1j])
| >>> x.imag
| array([ 0. , 0.70710678])
| >>> x.imag.dtype
| dtype('float64')
|
| itemsize
| Length of one array element in bytes.
|
| Examples
| --------
| >>> x = np.array([1,2,3], dtype=np.float64)
| >>> x.itemsize
| 8
| >>> x = np.array([1,2,3], dtype=np.complex128)
| >>> x.itemsize
| 16
|
| nbytes
| Total bytes consumed by the elements of the array.
|
| Notes
| -----
| Does not include memory consumed by non-element attributes of the
| array object.
|
| Examples
| --------
| >>> x = np.zeros((3,5,2), dtype=np.complex128)
| >>> x.nbytes
| 480
| >>> np.prod(x.shape) * x.itemsize
| 480
|
| ndim
| Number of array dimensions.
|
| Examples
| --------
| >>> x = np.array([1, 2, 3])
| >>> x.ndim
| 1
| >>> y = np.zeros((2, 3, 4))
| >>> y.ndim
| 3
|
| real
| The real part of the array.
|
| Examples
| --------
| >>> x = np.sqrt([1+0j, 0+1j])
| >>> x.real
| array([ 1. , 0.70710678])
| >>> x.real.dtype
| dtype('float64')
|
| See Also
| --------
| numpy.real : equivalent function
|
| shape
| Tuple of array dimensions.
|
| The shape property is usually used to get the current shape of an array,
| but may also be used to reshape the array in-place by assigning a tuple of
| array dimensions to it. As with `numpy.reshape`, one of the new shape
| dimensions can be -1, in which case its value is inferred from the size of
| the array and the remaining dimensions. Reshaping an array in-place will
| fail if a copy is required.
|
| Examples
| --------
| >>> x = np.array([1, 2, 3, 4])
| >>> x.shape
| (4,)
| >>> y = np.zeros((2, 3, 4))
| >>> y.shape
| (2, 3, 4)
| >>> y.shape = (3, 8)
| >>> y
| array([[ 0., 0., 0., 0., 0., 0., 0., 0.],
| [ 0., 0., 0., 0., 0., 0., 0., 0.],
| [ 0., 0., 0., 0., 0., 0., 0., 0.]])
| >>> y.shape = (3, 6)
| Traceback (most recent call last):
| File "<stdin>", line 1, in <module>
| ValueError: total size of new array must be unchanged
| >>> np.zeros((4,2))[::2].shape = (-1,)
| Traceback (most recent call last):
| File "<stdin>", line 1, in <module>
| AttributeError: Incompatible shape for in-place modification. Use
| `.reshape()` to make a copy with the desired shape.
|
| See Also
| --------
| numpy.reshape : similar function
| ndarray.reshape : similar method
|
| size
| Number of elements in the array.
|
| Equal to ``np.prod(a.shape)``, i.e., the product of the array's
| dimensions.
|
| Notes
| -----
| `a.size` returns a standard arbitrary precision Python integer. This
| may not be the case with other methods of obtaining the same value
| (like the suggested ``np.prod(a.shape)``, which returns an instance
| of ``np.int_``), and may be relevant if the value is used further in
| calculations that may overflow a fixed size integer type.
|
| Examples
| --------
| >>> x = np.zeros((3, 5, 2), dtype=np.complex128)
| >>> x.size
| 30
| >>> np.prod(x.shape)
| 30
|
| strides
| Tuple of bytes to step in each dimension when traversing an array.
|
| The byte offset of element ``(i[0], i[1], ..., i[n])`` in an array `a`
| is::
|
| offset = sum(np.array(i) * a.strides)
|
| A more detailed explanation of strides can be found in the
| "ndarray.rst" file in the NumPy reference guide.
|
| Notes
| -----
| Imagine an array of 32-bit integers (each 4 bytes)::
|
| x = np.array([[0, 1, 2, 3, 4],
| [5, 6, 7, 8, 9]], dtype=np.int32)
|
| This array is stored in memory as 40 bytes, one after the other
| (known as a contiguous block of memory). The strides of an array tell
| us how many bytes we have to skip in memory to move to the next position
| along a certain axis. For example, we have to skip 4 bytes (1 value) to
| move to the next column, but 20 bytes (5 values) to get to the same
| position in the next row. As such, the strides for the array `x` will be
| ``(20, 4)``.
|
| See Also
| --------
| numpy.lib.stride_tricks.as_strided
|
| Examples
| --------
| >>> y = np.reshape(np.arange(2*3*4), (2,3,4))
| >>> y
| array([[[ 0, 1, 2, 3],
| [ 4, 5, 6, 7],
| [ 8, 9, 10, 11]],
| [[12, 13, 14, 15],
| [16, 17, 18, 19],
| [20, 21, 22, 23]]])
| >>> y.strides
| (48, 16, 4)
| >>> y[1,1,1]
| 17
| >>> offset=sum(y.strides * np.array((1,1,1)))
| >>> offset/y.itemsize
| 17
|
| >>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0)
| >>> x.strides
| (32, 4, 224, 1344)
| >>> i = np.array([3,5,2,2])
| >>> offset = sum(i * x.strides)
| >>> x[3,5,2,2]
| 813
| >>> offset / x.itemsize
| 813
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __hash__ = None
###Markdown
Task 4a: Getting HelpIn the practice notebook perform the following:+ In the code cell below, call `help()` on two of the following functions: `np.transpose()`, `np.reshape()`, `np.resize()`, `np.ravel()`, `np.append()`, `np.delete()`, `np.concatenate()`, `np.vstack()`, `np.hstack()`, `np.column_stack()`, `np.vsplit()`, `np.hsplit()` + Respond to this question: Did you understand the help documentation? Could you use the function just by looking at what the help says about it? 5. Manipulating ArraysThus far, we have learned to create arrays, perform basic math, aggregate values, and index arrays. Finally, we need to learn to manipulate them by transposing, reshaping, splitting, joining, appending, and deleting arrays. 5.1 TransposingTransposing an array flips it across its main diagonal, swapping rows and columns, as shown in the following animated image:(image source: https://en.wikipedia.org/wiki/Transpose)NumPy allows you to transpose a matrix in one of two ways:+ Using the `transpose()` function+ Accessing the `T` attribute.Execute the following code to see an example of an array transpose:
###Code
# Create a 2 x 3 random matrix
demo_f = np.random.random((2,3))
print("The original matrix")
print(demo_f)
print("\nThe matrix after being tranposed")
print(np.transpose(demo_f))
print("\nThe tranposed matrix from the T attribute")
print(demo_f.T)
###Output
_____no_output_____
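###Markdown
Before attempting the task below, here is a minimal optional sketch showing that transposing a non-square matrix swaps its shape, and that `np.transpose()` also accepts an explicit axis permutation for arrays with more than two dimensions. The variable names used here (`demo_t`, `demo_3d`) are purely illustrative.
###Code
import numpy as np

# Transposing a non-square matrix swaps its shape from (2, 3) to (3, 2)
demo_t = np.arange(6).reshape(2, 3)
print(demo_t.shape) # (2, 3)
print(demo_t.T.shape) # (3, 2)

# For higher-dimensional arrays, np.transpose() accepts an explicit
# permutation of the axes
demo_3d = np.arange(24).reshape(2, 3, 4)
print(np.transpose(demo_3d, (1, 0, 2)).shape) # (3, 2, 4)
###Output
_____no_output_____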
###Markdown
Task 5a: Transposing an ArrayIn the practice notebook perform the following:+ Create a matrix of any size and transpose it. 5.2 Reshaping and ResizingYou can change the dimensions of your array using the following two functions: + `resize()` + `reshape()` The `resize()` function allows you to "stretch" your array to increase its size. This can be useful if you need to add more data to an existing array or you need to adjust it prior to performing arithmetic and broadcasting.The `reshape()` function allows you to change the dimensions of an existing array. For example, if you have a _3 x 2_ array you can change it to a _6 x 1_ array using the `reshape()` function without losing the data values in the array.Examine and execute the following code adapted from the DataCamp Tutorial:
###Code
# Create a 1-D array `x` with 4 elements. Print the shape of `x`
x = np.array([1,1,1,1])
print(x.shape)
# Resize `x` to (6, 4) with np.resize()
np.resize(x, (6,4))
###Output
(4,)
###Markdown
Notice how the array was resized from a 4-element array to a _6 x 4_ array; `np.resize()` repeats the original values as needed to fill the larger shape.
###Code
# Reshape `x` to (2, 2)
x = np.array([1,2,3,4])
print("\noriginal:")
print(x)
print("\nreshaped:")
print(x.reshape((2,2)))
###Output
original:
[1 2 3 4]
reshaped:
[[1 2]
[3 4]]
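###Markdown
One extra detail that may help with the task below: `reshape()` can infer a single dimension if you pass `-1`, as long as the total number of elements is unchanged. A minimal sketch (the variable `y` is illustrative):
###Code
import numpy as np

# Reshape a 12-element array; the -1 dimension is inferred automatically
y = np.arange(12)
print(y.reshape(3, -1)) # -1 is inferred as 4, giving a 3 x 4 array
print(y.reshape(-1, 6).shape) # (2, 6)
###Output
_____no_output_____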
###Markdown
Task 5b: Reshaping an ArrayIn the practice notebook perform the following:+ Create a matrix and resize it by adding 2 extra columns+ Create a matrix and resize it by adding 1 extra row+ Create a matrix of 8 x 2 and resize it to 4 x 4 5.3 Appending ArraysSometimes, you may want to append one array to another. You can do this using the `append()` function, and you can append along any dimension. Remember that NumPy arrays have **axes**. When you append one array to another you must specify the axis (e.g. row or column for a 2D array) along which you want to append. Axes are identified using a numeric index starting from 0, therefore:+ `0`: the first dimension (the rows of a 2D array)+ `1`: the second dimension (the columns of a 2D array)+ `2`: the third dimension+ `3`: the fourth dimension+ etc...For example, examine and execute this code borrowed from the DataCamp tutorial:
###Code
# Append a 1D array to your `my_array`
my_array = np.array([1,2,3,4])
new_array = np.append(my_array, [7, 8, 9, 10])
# Print `new_array`
print(new_array)
# Append an extra column to your `my_2d_array`
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
new_2d_array = np.append(my_2d_array, [[7], [8]], axis=1)
# Print `new_2d_array`
print(new_2d_array)
###Output
[ 1 2 3 4 7 8 9 10]
[[1 2 3 4 7]
[5 6 7 8 8]]
###Markdown
In the code above, for the first example, the array `[7, 8, 9, 10]` is appended to the existing 1D `my_array`. For the second example, the column `[[7], [8]]` is appended as a new column, one value per row (note the `axis=1` parameter). Task 5c: Appending to an ArrayIn the practice notebook perform the following: + Create a three dimensional array and append another row to the array + Append another column to the array + Print the final results 5.4 Inserting and Deleting ElementsYou can easily insert new elements into, or delete elements from, an array using the `insert()` and `delete()` functions; a short sketch of both appears after the joining example below. Task 5d: Inserting and Deleting ElementsIn the practice notebook perform the following:+ Examine the `help()` documentation for how to use the `insert()` and `delete()` functions.+ Create a matrix and practice inserting a row and deleting a column. 5.5 Joining ArraysThere are a variety of functions for joining arrays: + `concatenate()` + `vstack()` + `hstack()` + `column_stack()`Each of these functions is used in the following code borrowed from a [DataCamp](https://www.datacamp.com/) tutorial. Examine and execute the following code cell:
###Code
# Concatenate `my_array` and `x`: similar to np.append()
my_array = np.array([1,2,3,4])
x = np.array([1,1,1,1])
print("concatenate:")
print(np.concatenate((my_array, x)))
# Stack arrays row-wise
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
print("\nvstack:")
print(np.vstack((my_array, my_2d_array)))
# Stack arrays horizontally
print("\nhstack:")
print(np.hstack((my_2d_array, my_2d_array)))
# Stack arrays column-wise
print("\ncolumn_stack:")
print(np.column_stack((my_2d_array, my_2d_array)))
###Output
concatenate:
[1 2 3 4 1 1 1 1]
vstack:
[[1 2 3 4]
[1 2 3 4]
[5 6 7 8]]
hstack:
[[1 2 3 4 1 2 3 4]
[5 6 7 8 5 6 7 8]]
column_stack:
[[1 2 3 4 1 2 3 4]
[5 6 7 8 5 6 7 8]]
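###Markdown
As mentioned in section 5.4 above, here is a minimal sketch of `insert()` and `delete()`; consult `help(np.insert)` and `help(np.delete)` for the full signatures. The variable name `demo_m` is illustrative.
###Code
import numpy as np

demo_m = np.array([[1, 2, 3],
                   [4, 5, 6]])

# Insert a row of zeros at row index 1 (axis=0 selects rows);
# the scalar 0 is broadcast across the new row
print(np.insert(demo_m, 1, 0, axis=0))

# Delete the second column (index 1, axis=1 selects columns)
print(np.delete(demo_m, 1, axis=1))
###Output
_____no_output_____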
###Markdown
Task 5e: Joining ArraysIn the practice notebook perform the following:+ Execute the code (as shown above).+ Examine the output from each of the joining function calls above. If needed, review the help pages for each function either using the `help()` command or the [Numpy Function Reference](https://docs.scipy.org/doc/numpy/reference/routines.html). + Respond to the following question + Can you identify what is happening with each of them? 5.6 Splitting an ArrayYou may find that you need to split arrays. The following functions allow you to split an array vertically or horizontally: + `vsplit()` + `hsplit()` Examine and execute the following code borrowed from the DataCamp Tutorial:
###Code
# Create a 2D array.
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
print("original:")
print(my_2d_array)
# Split `my_2d_array` horizontally into 2 equal pieces (column-wise)
print("\nhsplit:")
print(np.hsplit(my_2d_array, 2))
# Split `my_2d_array` vertically into 2 equal pieces (row-wise)
print("\nvsplit:")
print(np.vsplit(my_2d_array, 2))
###Output
original:
[[1 2 3 4]
[5 6 7 8]]
hsplit:
[array([[1, 2],
[5, 6]]), array([[3, 4],
[7, 8]])]
vsplit:
[array([[1, 2, 3, 4]]), array([[5, 6, 7, 8]])]
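###Markdown
Both splitting functions also accept a list of split points instead of a number of equal sections. A minimal sketch (the variable `demo_s` is illustrative):
###Code
import numpy as np

demo_s = np.arange(12).reshape(2, 6)

# Split the columns before index 1 and before index 4,
# producing three pieces with 1, 3 and 2 columns respectively
print(np.hsplit(demo_s, [1, 4]))
###Output
_____no_output_____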
###Markdown
Lesson 2: NumPy Part 2This notebook is based on the official `NumPy` [documentation](https://docs.scipy.org/doc/numpy/user/quickstart.html). Unless otherwise credited, quoted text comes from this document. The NumPy documentation describes NumPy in the following way:> NumPy is the fundamental package for scientific computing with Python. It contains among other things:> - a powerful N-dimensional array object> - sophisticated (broadcasting) functions> - tools for integrating C/C++ and Fortran code> - useful linear algebra, Fourier transform, and random number capabilities>> Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases. InstructionsThis tutorial provides step-by-step training divided into numbered sections. The sections often contain embedded executable code for demonstration. This tutorial is accompanied by a practice notebook: [L02-Numpy_Part2-Practice.ipynb](./L02-Numpy_Part2-Practice.ipynb). Throughout this tutorial, sections labeled as "Tasks" are interspersed and indicated with the icon: . You should follow the instructions provided in these sections by performing them in the practice notebook. When the tutorial is completed you can turn in the final practice notebook. --- 1. Getting StartedFirst, we must import the NumPy library.
###Code
# Import numpy
import numpy as np
###Output
_____no_output_____
###Markdown
Task 1a: SetupIn the practice notebook, import the following packages:+ `numpy` as `np` 2 Basic Indexing: Subsets and SlicingWe often want to consider a subset of a given array. You will recognize basic subsetting as it is similar to indexing of Python lists. The following code examples demonstrate how to subset a NumPy array:```python a[start:end]   # Get items from "start" to "end" (but "end" is not included!) a[start:]   # Get all items from "start" through the rest of the array a[:end]   # Get items from the beginning to "end" (but "end" is not included!) ```Similarly to Python lists, retrieving elements from the end of a NumPy array uses negative indexing. Execute the example code below to see a demonstration:
###Code
# Create a 5 x 2 array of random numbers
demo_g = np.random.random((5,2))
print(demo_g)
# Get the last item from the last 'row':
demo_g[-1, -1]
###Output
[[0.13021952 0.00986839]
[0.81531226 0.05478637]
[0.55653812 0.42425206]
[0.10867637 0.00272479]
[0.80888784 0.12277835]]
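###Markdown
A minimal sketch that may help with question 3 of the task below: slicing with `[1:]` keeps the first axis, while indexing with `[1, ...]` drops it. The variable name `demo_h` is illustrative.
###Code
import numpy as np

demo_h = np.arange(24).reshape(2, 3, 4)
print(demo_h[1:].shape) # (1, 3, 4) -- still 3-dimensional
print(demo_h[1, ...].shape) # (3, 4) -- the first axis is dropped
###Output
_____no_output_____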
###Markdown
Task 2a: Indexing by Subsetting and SlicingIn the practice notebook perform the following:1. Create (or re-use) 3 arrays, each containing three dimensions.2. Slice each of these arrays so that: + One element / number is returned. + One dimension is returned. + A subset of a dimension is returned.3. What is the difference between `[x:]` and `[x, ...]`? (hint, try each on high-dimension arrays). *Exactly what you choose to return is not important at this point; the goal of this task is to train you so that if you are given an n-dimension NumPy array, you can write an index or slice that returns a subset of desired positions.* 3. "Fancy" IndexingFancy indexing allows you to provide an array of indices or an array of boolean values in order to subset an array. 3.1 Using a Boolean Array for IndexingRather than using an index range, as shown in the previous section, we can provide an array of boolean values where `True` indicates that we want the value in the position where `True` is found, and `False` indicates we do not want it. Creating these boolean arrays is simple if we use conditional statements. For example, review and then execute the following code:
###Code
# Create a 5 x 2 array of random numbers
demo_g = np.random.random((5,2))
# Find all values in the matrix less than 0.5
demo_g < 0.5
###Output
_____no_output_____
###Markdown
Notice the return value is an array of boolean values: `True` indicates the value was less than 0.5, `False` indicates it was greater than or equal to 0.5. We can use this boolean array as an index into the same array to return only those values that satisfy the condition. Try executing the following code:
###Code
demo_g[demo_g < 0.5]
###Output
_____no_output_____
###Markdown
Or alternatively:
###Code
sig_list = demo_g < 0.5
demo_g[sig_list]
###Output
_____no_output_____
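###Markdown
Task 3a below asks you to combine conditions with `&` and `|`. As a minimal sketch (reusing the `demo_g` array from the cells above), note that each comparison must be wrapped in parentheses:
```python
# Values strictly between 0.25 and 0.75
print(demo_g[(demo_g > 0.25) & (demo_g < 0.75)])

# Values close to either end of the [0, 1) range
print(demo_g[(demo_g < 0.1) | (demo_g > 0.9)])
```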
###Markdown
Task 3a: Boolean Indexing. In the practice notebook perform the following: + Experiment with the following boolean conditionals to generate boolean arrays for indexing: + Greater than + Less than + Equals + Combine two or more of the above with: + or `|` + and `&`. You can create new arrays or use existing ones. 3.2 Using Exact Indices. Alternatively, if there are specific elements of the array that we want to retrieve, we can provide their numeric indices directly. For example, review and then execute the following code:
###Code
# Generate an array of 500 random numbers
demo_f = np.random.random((500))
# Retrieve the values at five specific indices
demo_f[[0,100,200,300,400]]
###Output
_____no_output_____
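###Markdown
Fancy indexing works on multi-dimensional arrays as well. A short, hedged sketch (the array `demo_m` is a hypothetical example, not part of the original lesson):
```python
# Hypothetical 3 x 4 array used only for illustration
demo_m = np.arange(12).reshape((3, 4))

print(demo_m[[0, 2]])          # rows 0 and 2
print(demo_m[[0, 2], [1, 3]])  # the individual elements (0, 1) and (2, 3)
```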
###Markdown
4. Intermission -- Getting Help. Python has a built-in function, `help()`, that we can call on any object (anything) to find out more about it. As we move deeper into the functions provided by most packages, we often need to know exactly what a given function expects as arguments. The output of these `help()` calls can be long. Try executing the following help call for the `np.array` function:
###Code
# Call help on anything from a package.
help(np.array)
###Output
Help on built-in function array in module numpy:
array(...)
array(object, dtype=None, *, copy=True, order='K', subok=False, ndmin=0)
Create an array.
Parameters
----------
object : array_like
An array, any object exposing the array interface, an object whose
__array__ method returns an array, or any (nested) sequence.
dtype : data-type, optional
The desired data-type for the array. If not given, then the type will
be determined as the minimum type required to hold the objects in the
sequence.
copy : bool, optional
If true (default), then the object is copied. Otherwise, a copy will
only be made if __array__ returns a copy, if obj is a nested sequence,
or if a copy is needed to satisfy any of the other requirements
(`dtype`, `order`, etc.).
order : {'K', 'A', 'C', 'F'}, optional
Specify the memory layout of the array. If object is not an array, the
newly created array will be in C order (row major) unless 'F' is
specified, in which case it will be in Fortran order (column major).
If object is an array the following holds.
===== ========= ===================================================
order no copy copy=True
===== ========= ===================================================
'K' unchanged F & C order preserved, otherwise most similar order
'A' unchanged F order if input is F and not C, otherwise C order
'C' C order C order
'F' F order F order
===== ========= ===================================================
When ``copy=False`` and a copy is made for other reasons, the result is
the same as if ``copy=True``, with some exceptions for `A`, see the
Notes section. The default order is 'K'.
subok : bool, optional
If True, then sub-classes will be passed-through, otherwise
the returned array will be forced to be a base-class array (default).
ndmin : int, optional
Specifies the minimum number of dimensions that the resulting
array should have. Ones will be pre-pended to the shape as
needed to meet this requirement.
Returns
-------
out : ndarray
An array object satisfying the specified requirements.
See Also
--------
empty_like : Return an empty array with shape and type of input.
ones_like : Return an array of ones with shape and type of input.
zeros_like : Return an array of zeros with shape and type of input.
full_like : Return a new array with shape of input filled with value.
empty : Return a new uninitialized array.
ones : Return a new array setting values to one.
zeros : Return a new array setting values to zero.
full : Return a new array of given shape filled with value.
Notes
-----
When order is 'A' and `object` is an array in neither 'C' nor 'F' order,
and a copy is forced by a change in dtype, then the order of the result is
not necessarily 'C' as expected. This is likely a bug.
Examples
--------
>>> np.array([1, 2, 3])
array([1, 2, 3])
Upcasting:
>>> np.array([1, 2, 3.0])
array([ 1., 2., 3.])
More than one dimension:
>>> np.array([[1, 2], [3, 4]])
array([[1, 2],
[3, 4]])
Minimum dimensions 2:
>>> np.array([1, 2, 3], ndmin=2)
array([[1, 2, 3]])
Type provided:
>>> np.array([1, 2, 3], dtype=complex)
array([ 1.+0.j, 2.+0.j, 3.+0.j])
Data-type consisting of more than one element:
>>> x = np.array([(1,2),(3,4)],dtype=[('a','<i4'),('b','<i4')])
>>> x['a']
array([1, 3])
Creating an array from sub-classes:
>>> np.array(np.mat('1 2; 3 4'))
array([[1, 2],
[3, 4]])
>>> np.array(np.mat('1 2; 3 4'), subok=True)
matrix([[1, 2],
[3, 4]])
###Markdown
Additionally, we can get help about an object that we created! Execute the following code to try it out:
###Code
# Call help on an object we created.
x = np.array([1, 2, 3, 4])
help(x)
###Output
_____no_output_____
###Markdown
Task 4a: Getting Help. In the practice notebook perform the following: + In the code cell below, call `help()` on two of the following functions: `np.transpose()`, `np.reshape()`, `np.resize()`, `np.ravel()`, `np.append()`, `np.delete()`, `np.concatenate()`, `np.vstack()`, `np.hstack()`, `np.column_stack()`, `np.vsplit()`, `np.hsplit()` + Respond to this question: Did you understand the help documentation? Could you use the function just by looking at what the help says about it? 5. Manipulating Arrays. Thus far, we have learned to create arrays, perform basic math, aggregate values, and index arrays. Finally, we need to learn to manipulate them by transposing, reshaping, splitting, joining, appending, and deleting arrays. 5.1 Transposing. Transposing an array swaps its rows and columns (a flip across the main diagonal), as shown in the following animated image: (image source: https://en.wikipedia.org/wiki/Transpose). NumPy allows you to transpose a matrix in one of two ways: + Using the `transpose()` function + Accessing the `T` attribute. Execute the following code to see an example of an array transpose:
###Code
# Create a 2 x 3 random matrix
demo_f = np.random.random((2,3))
print("The original matrix")
print(demo_f)
print("\nThe matrix after being tranposed")
print(np.transpose(demo_f))
print("\nThe tranposed matrix from the T attribute")
print(demo_f.T)
###Output
The original matrix
[[0.94051713 0.34787977 0.12363991]
[0.942026 0.62580363 0.68494286]]
The matrix after being transposed
[[0.94051713 0.942026 ]
[0.34787977 0.62580363]
[0.12363991 0.68494286]]
The transposed matrix from the T attribute
[[0.94051713 0.942026 ]
[0.34787977 0.62580363]
[0.12363991 0.68494286]]
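###Markdown
One hedged side note (not part of the original lesson): transposing a 1-D array has no visible effect, because there is only one axis to swap. If you need an explicit column vector, reshape it or add an axis:
```python
v = np.array([1, 2, 3])
print(v.T.shape)               # (3,) -- unchanged
print(v.reshape((3, 1)).shape) # (3, 1)
print(v[:, np.newaxis].shape)  # (3, 1)
```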
###Markdown
Task 5a: Transposing an Array. In the practice notebook perform the following: + Create a matrix of any size and transpose it. 5.2 Reshaping and Resizing. You can change the dimensions of your array by using the following two functions: + `resize()` + `reshape()`. The `resize()` function allows you to "stretch" your array to increase its size. This can be useful if you need to add more data to an existing array or need to adjust it prior to performing arithmetic and broadcasting. The `reshape()` function allows you to change the dimensions of an existing array without changing the total number of elements. For example, if you have a _3 x 2_ array you can change it to a _6 x 1_ array using the `reshape()` function without losing the data values in the array. Examine and execute the following code adapted from the DataCamp Tutorial:
###Code
# Create a 1-D array x with 4 elements. Print the shape of `x`
x = np.array([1,1,1,1])
print(x.shape)
# Resize `x` to (6, 4). np.resize() returns a new array; `x` itself is unchanged.
np.resize(x, (6,4))
###Output
(4,)
###Markdown
Notice how the array was resized from a 1-D array of 4 elements to a _6 x 4_ array; `np.resize()` fills the extra positions by repeating the original values.
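As a hedged aside (not part of the original lesson), `np.resize()` and the array method `ndarray.resize()` behave differently when the new shape is larger:
```python
x = np.array([1, 2, 3, 4])

# np.resize() returns a NEW array and repeats the data to fill the extra space
print(np.resize(x, (2, 4)))
# [[1 2 3 4]
#  [1 2 3 4]]

# ndarray.resize() modifies the array IN PLACE and zero-fills the extra space.
# refcheck=False avoids reference-count errors in interactive sessions.
y = np.array([1, 2, 3, 4])
y.resize((2, 4), refcheck=False)
print(y)
# [[1 2 3 4]
#  [0 0 0 0]]
```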
###Code
# Reshape the 4-element array `x` to (2, 2)
x = np.array([1,2,3,4])
print("\noriginal:")
print(x)
print("\nreshaped:")
print(x.reshape((2,2)))
###Output
original:
[1 2 3 4]
reshaped:
[[1 2]
[3 4]]
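###Markdown
To connect this to the _3 x 2_ to _6 x 1_ example described above, here is a short hedged sketch (the array `m` is a hypothetical example):
```python
m = np.array([[1, 2], [3, 4], [5, 6]])  # shape (3, 2)
print(m.reshape((6, 1)))                # shape (6, 1), same six values
print(m.reshape(-1))                    # -1 lets NumPy infer the length: [1 2 3 4 5 6]
```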
###Markdown
Task 5b: Reshaping an Array. In the practice notebook perform the following: + Create a matrix and resize it by adding 2 extra columns + Create a matrix and resize it by adding 1 extra row + Create a matrix of 8 x 2 and resize it to 4 x 4. 5.3 Appending Arrays. Sometimes you may want to append one array to another. You can append one array to another using the `append()` function, and you can append along any dimension. Remember that NumPy arrays have **axes**. When you append one array to another you must specify the axis (e.g. row-wise or column-wise for a 2-D array) along which you want to append. Axes are identified using a numeric index starting from 0, therefore: + `0`: the first axis (down the rows of a 2-D array) + `1`: the second axis (across the columns of a 2-D array) + `2`: the third axis + `3`: the fourth axis + etc... For example, examine and execute this code borrowed from the DataCamp tutorial:
###Code
# Append a 1D array to your `my_array`
my_array = np.array([1,2,3,4])
new_array = np.append(my_array, [7, 8, 9, 10])
# Print `new_array`
print(new_array)
# Append an extra column to your `my_2d_array`
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
new_2d_array = np.append(my_2d_array, [[7], [8]], axis=1)
# Print `new_2d_array`
print(new_2d_array)
###Output
[ 1 2 3 4 7 8 9 10]
[[1 2 3 4 7]
[5 6 7 8 8]]
###Markdown
In the code above, for the first example, the array `[7, 8, 9, 10]` is appended to the existing 1-D `my_array`. For the second example, the values `7` and `8` are appended as a new column, one value per row (note the `axis=1` parameter). Task 5c: Appending to an Array. In the practice notebook perform the following: + Create a three-dimensional array and append another row to the array + Append another column to the array + Print the final results. 5.4 Inserting and Deleting Elements. You can easily add elements to, or remove elements from, an array using the `insert()` and `delete()` functions. Task 5d: Inserting and Deleting Elements. In the practice notebook perform the following: + Examine the `help()` documentation for how to use the `insert()` and `delete()` functions. + Create a matrix and practice inserting a row and deleting a column. 5.5 Joining Arrays. There are a variety of functions for joining arrays: + `concatenate()` + `vstack()` + `hstack()` + `column_stack()`. Each of these functions is used in the code cell below, borrowed from a [DataCamp](https://www.datacamp.com/) tutorial; it follows the short sketch of `append()`, `insert()`, and `delete()` given next. Examine and execute that code cell:
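Two brief, hedged sketches before the joining example (the arrays used here are hypothetical and not part of the original lesson). The first shows that `axis=0` appends a new row rather than a new column; the second illustrates `insert()` and `delete()`:
```python
# Appending a new ROW to a 2-D array uses axis=0
my_2d_array = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
print(np.append(my_2d_array, [[9, 10, 11, 12]], axis=0))
# [[ 1  2  3  4]
#  [ 5  6  7  8]
#  [ 9 10 11 12]]

# insert() adds values at a given index along an axis; delete() removes them
m = np.array([[1, 2, 3], [4, 5, 6]])
m2 = np.insert(m, 1, [7, 8, 9], axis=0)  # insert a row before row index 1
m3 = np.delete(m2, 2, axis=1)            # delete column index 2
print(m2)
print(m3)
```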
###Code
# Concatenate `my_array` and `x`: similar to np.append()
my_array = np.array([1,2,3,4])
x = np.array([1,1,1,1])
print("concatenate:")
print(np.concatenate((my_array, x)))
# Stack arrays row-wise
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
print("\nvstack:")
print(np.vstack((my_array, my_2d_array)))
# Stack arrays horizontally
print("\nhstack:")
print(np.hstack((my_2d_array, my_2d_array)))
# Stack arrays column-wise
print("\ncolumn_stack:")
print(np.column_stack((my_2d_array, my_2d_array)))
###Output
_____no_output_____
###Markdown
Task 5e: Joining Arrays. In the practice notebook perform the following: + Execute the code (as shown above). + Examine the output from each of the function calls in the cell above. If needed, review the help pages for each tool, either using the `help()` command or the [NumPy Function Reference](https://docs.scipy.org/doc/numpy/reference/routines.html). + Respond to the following question: + Can you identify what is happening with each of them? 5.6 Splitting an Array. You may find that you need to split arrays. The following functions allow you to split an array horizontally or vertically: + `vsplit()` + `hsplit()` Examine and execute the following code borrowed from the DataCamp Tutorial:
###Code
# Create a 2D array.
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
print("original:")
print(my_2d_array)
# Split `my_2d_array` horizontally (column-wise) into 2 equal pieces
print("\nhsplit:")
print(np.hsplit(my_2d_array, 2))
# Split `my_2d_array` vertically (row-wise) into 2 equal pieces
print("\nvsplit:")
print(np.vsplit(my_2d_array, 2))
###Output
_____no_output_____
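###Markdown
As a final hedged sketch (not part of the original lesson), passing a list of indices instead of a single integer splits an array at those positions, and `np.array_split()` allows pieces of unequal size:
```python
my_2d_array = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
print(np.hsplit(my_2d_array, [1, 3]))   # columns [0], [1:3], and [3:]
print(np.array_split(np.arange(7), 3))  # [0 1 2], [3 4], [5 6]
```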
| resized.
|
| The purpose of the reference count check is to make sure you
| do not use this array as a buffer for another Python object and then
| reallocate the memory. However, reference counts can increase in
| other ways so if you are sure that you have not shared the memory
| for this array with another Python object, then you may safely set
| `refcheck` to False.
|
| Examples
| --------
| Shrinking an array: array is flattened (in the order that the data are
| stored in memory), resized, and reshaped:
|
| >>> a = np.array([[0, 1], [2, 3]], order='C')
| >>> a.resize((2, 1))
| >>> a
| array([[0],
| [1]])
|
| >>> a = np.array([[0, 1], [2, 3]], order='F')
| >>> a.resize((2, 1))
| >>> a
| array([[0],
| [2]])
|
| Enlarging an array: as above, but missing entries are filled with zeros:
|
| >>> b = np.array([[0, 1], [2, 3]])
| >>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple
| >>> b
| array([[0, 1, 2],
| [3, 0, 0]])
|
| Referencing an array prevents resizing...
|
| >>> c = a
| >>> a.resize((1, 1))
| Traceback (most recent call last):
| ...
| ValueError: cannot resize an array that references or is referenced ...
|
| Unless `refcheck` is False:
|
| >>> a.resize((1, 1), refcheck=False)
| >>> a
| array([[0]])
| >>> c
| array([[0]])
|
| round(...)
| a.round(decimals=0, out=None)
|
| Return `a` with each element rounded to the given number of decimals.
|
| Refer to `numpy.around` for full documentation.
|
| See Also
| --------
| numpy.around : equivalent function
|
| searchsorted(...)
| a.searchsorted(v, side='left', sorter=None)
|
| Find indices where elements of v should be inserted in a to maintain order.
|
| For full documentation, see `numpy.searchsorted`
|
| See Also
| --------
| numpy.searchsorted : equivalent function
|
| setfield(...)
| a.setfield(val, dtype, offset=0)
|
| Put a value into a specified place in a field defined by a data-type.
|
| Place `val` into `a`'s field defined by `dtype` and beginning `offset`
| bytes into the field.
|
| Parameters
| ----------
| val : object
| Value to be placed in field.
| dtype : dtype object
| Data-type of the field in which to place `val`.
| offset : int, optional
| The number of bytes into the field at which to place `val`.
|
| Returns
| -------
| None
|
| See Also
| --------
| getfield
|
| Examples
| --------
| >>> x = np.eye(3)
| >>> x.getfield(np.float64)
| array([[1., 0., 0.],
| [0., 1., 0.],
| [0., 0., 1.]])
| >>> x.setfield(3, np.int32)
| >>> x.getfield(np.int32)
| array([[3, 3, 3],
| [3, 3, 3],
| [3, 3, 3]], dtype=int32)
| >>> x
| array([[1.0e+000, 1.5e-323, 1.5e-323],
| [1.5e-323, 1.0e+000, 1.5e-323],
| [1.5e-323, 1.5e-323, 1.0e+000]])
| >>> x.setfield(np.eye(3), np.int32)
| >>> x
| array([[1., 0., 0.],
| [0., 1., 0.],
| [0., 0., 1.]])
|
| setflags(...)
| a.setflags(write=None, align=None, uic=None)
|
| Set array flags WRITEABLE, ALIGNED, (WRITEBACKIFCOPY and UPDATEIFCOPY),
| respectively.
|
| These Boolean-valued flags affect how numpy interprets the memory
| area used by `a` (see Notes below). The ALIGNED flag can only
| be set to True if the data is actually aligned according to the type.
| The WRITEBACKIFCOPY and (deprecated) UPDATEIFCOPY flags can never be set
| to True. The flag WRITEABLE can only be set to True if the array owns its
| own memory, or the ultimate owner of the memory exposes a writeable buffer
| interface, or is a string. (The exception for string is made so that
| unpickling can be done without copying memory.)
|
| Parameters
| ----------
| write : bool, optional
| Describes whether or not `a` can be written to.
| align : bool, optional
| Describes whether or not `a` is aligned properly for its type.
| uic : bool, optional
| Describes whether or not `a` is a copy of another "base" array.
|
| Notes
| -----
| Array flags provide information about how the memory area used
| for the array is to be interpreted. There are 7 Boolean flags
| in use, only four of which can be changed by the user:
| WRITEBACKIFCOPY, UPDATEIFCOPY, WRITEABLE, and ALIGNED.
|
| WRITEABLE (W) the data area can be written to;
|
| ALIGNED (A) the data and strides are aligned appropriately for the hardware
| (as determined by the compiler);
|
| UPDATEIFCOPY (U) (deprecated), replaced by WRITEBACKIFCOPY;
|
| WRITEBACKIFCOPY (X) this array is a copy of some other array (referenced
| by .base). When the C-API function PyArray_ResolveWritebackIfCopy is
| called, the base array will be updated with the contents of this array.
|
| All flags can be accessed using the single (upper case) letter as well
| as the full name.
|
| Examples
| --------
| >>> y = np.array([[3, 1, 7],
| ... [2, 0, 0],
| ... [8, 5, 9]])
| >>> y
| array([[3, 1, 7],
| [2, 0, 0],
| [8, 5, 9]])
| >>> y.flags
| C_CONTIGUOUS : True
| F_CONTIGUOUS : False
| OWNDATA : True
| WRITEABLE : True
| ALIGNED : True
| WRITEBACKIFCOPY : False
| UPDATEIFCOPY : False
| >>> y.setflags(write=0, align=0)
| >>> y.flags
| C_CONTIGUOUS : True
| F_CONTIGUOUS : False
| OWNDATA : True
| WRITEABLE : False
| ALIGNED : False
| WRITEBACKIFCOPY : False
| UPDATEIFCOPY : False
| >>> y.setflags(uic=1)
| Traceback (most recent call last):
| File "<stdin>", line 1, in <module>
| ValueError: cannot set WRITEBACKIFCOPY flag to True
|
| sort(...)
| a.sort(axis=-1, kind=None, order=None)
|
| Sort an array in-place. Refer to `numpy.sort` for full documentation.
|
| Parameters
| ----------
| axis : int, optional
| Axis along which to sort. Default is -1, which means sort along the
| last axis.
| kind : {'quicksort', 'mergesort', 'heapsort', 'stable'}, optional
| Sorting algorithm. The default is 'quicksort'. Note that both 'stable'
| and 'mergesort' use timsort under the covers and, in general, the
| actual implementation will vary with datatype. The 'mergesort' option
| is retained for backwards compatibility.
|
| .. versionchanged:: 1.15.0.
| The 'stable' option was added.
|
| order : str or list of str, optional
| When `a` is an array with fields defined, this argument specifies
| which fields to compare first, second, etc. A single field can
| be specified as a string, and not all fields need be specified,
| but unspecified fields will still be used, in the order in which
| they come up in the dtype, to break ties.
|
| See Also
| --------
| numpy.sort : Return a sorted copy of an array.
| numpy.argsort : Indirect sort.
| numpy.lexsort : Indirect stable sort on multiple keys.
| numpy.searchsorted : Find elements in sorted array.
| numpy.partition: Partial sort.
|
| Notes
| -----
| See `numpy.sort` for notes on the different sorting algorithms.
|
| Examples
| --------
| >>> a = np.array([[1,4], [3,1]])
| >>> a.sort(axis=1)
| >>> a
| array([[1, 4],
| [1, 3]])
| >>> a.sort(axis=0)
| >>> a
| array([[1, 3],
| [1, 4]])
|
| Use the `order` keyword to specify a field to use when sorting a
| structured array:
|
| >>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)])
| >>> a.sort(order='y')
| >>> a
| array([(b'c', 1), (b'a', 2)],
| dtype=[('x', 'S1'), ('y', '<i8')])
|
| squeeze(...)
| a.squeeze(axis=None)
|
| Remove single-dimensional entries from the shape of `a`.
|
| Refer to `numpy.squeeze` for full documentation.
|
| See Also
| --------
| numpy.squeeze : equivalent function
|
| std(...)
| a.std(axis=None, dtype=None, out=None, ddof=0, keepdims=False)
|
| Returns the standard deviation of the array elements along given axis.
|
| Refer to `numpy.std` for full documentation.
|
| See Also
| --------
| numpy.std : equivalent function
|
| sum(...)
| a.sum(axis=None, dtype=None, out=None, keepdims=False, initial=0, where=True)
|
| Return the sum of the array elements over the given axis.
|
| Refer to `numpy.sum` for full documentation.
|
| See Also
| --------
| numpy.sum : equivalent function
|
| swapaxes(...)
| a.swapaxes(axis1, axis2)
|
| Return a view of the array with `axis1` and `axis2` interchanged.
|
| Refer to `numpy.swapaxes` for full documentation.
|
| See Also
| --------
| numpy.swapaxes : equivalent function
|
| take(...)
| a.take(indices, axis=None, out=None, mode='raise')
|
| Return an array formed from the elements of `a` at the given indices.
|
| Refer to `numpy.take` for full documentation.
|
| See Also
| --------
| numpy.take : equivalent function
|
| tobytes(...)
| a.tobytes(order='C')
|
| Construct Python bytes containing the raw data bytes in the array.
|
| Constructs Python bytes showing a copy of the raw contents of
| data memory. The bytes object can be produced in either 'C' or 'Fortran',
| or 'Any' order (the default is 'C'-order). 'Any' order means C-order
| unless the F_CONTIGUOUS flag in the array is set, in which case it
| means 'Fortran' order.
|
| .. versionadded:: 1.9.0
|
| Parameters
| ----------
| order : {'C', 'F', None}, optional
| Order of the data for multidimensional arrays:
| C, Fortran, or the same as for the original array.
|
| Returns
| -------
| s : bytes
| Python bytes exhibiting a copy of `a`'s raw data.
|
| Examples
| --------
| >>> x = np.array([[0, 1], [2, 3]], dtype='<u2')
| >>> x.tobytes()
| b'\x00\x00\x01\x00\x02\x00\x03\x00'
| >>> x.tobytes('C') == x.tobytes()
| True
| >>> x.tobytes('F')
| b'\x00\x00\x02\x00\x01\x00\x03\x00'
|
| tofile(...)
| a.tofile(fid, sep="", format="%s")
|
| Write array to a file as text or binary (default).
|
| Data is always written in 'C' order, independent of the order of `a`.
| The data produced by this method can be recovered using the function
| fromfile().
|
| Parameters
| ----------
| fid : file or str or Path
| An open file object, or a string containing a filename.
|
| .. versionchanged:: 1.17.0
| `pathlib.Path` objects are now accepted.
|
| sep : str
| Separator between array items for text output.
| If "" (empty), a binary file is written, equivalent to
| ``file.write(a.tobytes())``.
| format : str
| Format string for text file output.
| Each entry in the array is formatted to text by first converting
| it to the closest Python type, and then using "format" % item.
|
| Notes
| -----
| This is a convenience function for quick storage of array data.
| Information on endianness and precision is lost, so this method is not a
| good choice for files intended to archive data or transport data between
| machines with different endianness. Some of these problems can be overcome
| by outputting the data as text files, at the expense of speed and file
| size.
|
| When fid is a file object, array contents are directly written to the
| file, bypassing the file object's ``write`` method. As a result, tofile
| cannot be used with files objects supporting compression (e.g., GzipFile)
| or file-like objects that do not support ``fileno()`` (e.g., BytesIO).
|
| tolist(...)
| a.tolist()
|
| Return the array as an ``a.ndim``-levels deep nested list of Python scalars.
|
| Return a copy of the array data as a (nested) Python list.
| Data items are converted to the nearest compatible builtin Python type, via
| the `~numpy.ndarray.item` function.
|
| If ``a.ndim`` is 0, then since the depth of the nested list is 0, it will
| not be a list at all, but a simple Python scalar.
|
| Parameters
| ----------
| none
|
| Returns
| -------
| y : object, or list of object, or list of list of object, or ...
| The possibly nested list of array elements.
|
| Notes
| -----
| The array may be recreated via ``a = np.array(a.tolist())``, although this
| may sometimes lose precision.
|
| Examples
| --------
| For a 1D array, ``a.tolist()`` is almost the same as ``list(a)``,
| except that ``tolist`` changes numpy scalars to Python scalars:
|
| >>> a = np.uint32([1, 2])
| >>> a_list = list(a)
| >>> a_list
| [1, 2]
| >>> type(a_list[0])
| <class 'numpy.uint32'>
| >>> a_tolist = a.tolist()
| >>> a_tolist
| [1, 2]
| >>> type(a_tolist[0])
| <class 'int'>
|
| Additionally, for a 2D array, ``tolist`` applies recursively:
|
| >>> a = np.array([[1, 2], [3, 4]])
| >>> list(a)
| [array([1, 2]), array([3, 4])]
| >>> a.tolist()
| [[1, 2], [3, 4]]
|
| The base case for this recursion is a 0D array:
|
| >>> a = np.array(1)
| >>> list(a)
| Traceback (most recent call last):
| ...
| TypeError: iteration over a 0-d array
| >>> a.tolist()
| 1
|
| tostring(...)
| a.tostring(order='C')
|
| A compatibility alias for `tobytes`, with exactly the same behavior.
|
| Despite its name, it returns `bytes` not `str`\ s.
|
| .. deprecated:: 1.19.0
|
| trace(...)
| a.trace(offset=0, axis1=0, axis2=1, dtype=None, out=None)
|
| Return the sum along diagonals of the array.
|
| Refer to `numpy.trace` for full documentation.
|
| See Also
| --------
| numpy.trace : equivalent function
|
| transpose(...)
| a.transpose(*axes)
|
| Returns a view of the array with axes transposed.
|
| For a 1-D array this has no effect, as a transposed vector is simply the
| same vector. To convert a 1-D array into a 2D column vector, an additional
| dimension must be added. `np.atleast2d(a).T` achieves this, as does
| `a[:, np.newaxis]`.
| For a 2-D array, this is a standard matrix transpose.
| For an n-D array, if axes are given, their order indicates how the
| axes are permuted (see Examples). If axes are not provided and
| ``a.shape = (i[0], i[1], ... i[n-2], i[n-1])``, then
| ``a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0])``.
|
| Parameters
| ----------
| axes : None, tuple of ints, or `n` ints
|
| * None or no argument: reverses the order of the axes.
|
| * tuple of ints: `i` in the `j`-th place in the tuple means `a`'s
| `i`-th axis becomes `a.transpose()`'s `j`-th axis.
|
| * `n` ints: same as an n-tuple of the same ints (this form is
| intended simply as a "convenience" alternative to the tuple form)
|
| Returns
| -------
| out : ndarray
| View of `a`, with axes suitably permuted.
|
| See Also
| --------
| ndarray.T : Array property returning the array transposed.
| ndarray.reshape : Give a new shape to an array without changing its data.
|
| Examples
| --------
| >>> a = np.array([[1, 2], [3, 4]])
| >>> a
| array([[1, 2],
| [3, 4]])
| >>> a.transpose()
| array([[1, 3],
| [2, 4]])
| >>> a.transpose((1, 0))
| array([[1, 3],
| [2, 4]])
| >>> a.transpose(1, 0)
| array([[1, 3],
| [2, 4]])
|
| var(...)
| a.var(axis=None, dtype=None, out=None, ddof=0, keepdims=False)
|
| Returns the variance of the array elements, along given axis.
|
| Refer to `numpy.var` for full documentation.
|
| See Also
| --------
| numpy.var : equivalent function
|
| view(...)
| a.view([dtype][, type])
|
| New view of array with the same data.
|
| .. note::
| Passing None for ``dtype`` is different from omitting the parameter,
| since the former invokes ``dtype(None)`` which is an alias for
| ``dtype('float_')``.
|
| Parameters
| ----------
| dtype : data-type or ndarray sub-class, optional
| Data-type descriptor of the returned view, e.g., float32 or int16.
| Omitting it results in the view having the same data-type as `a`.
| This argument can also be specified as an ndarray sub-class, which
| then specifies the type of the returned object (this is equivalent to
| setting the ``type`` parameter).
| type : Python type, optional
| Type of the returned view, e.g., ndarray or matrix. Again, omission
| of the parameter results in type preservation.
|
| Notes
| -----
| ``a.view()`` is used two different ways:
|
| ``a.view(some_dtype)`` or ``a.view(dtype=some_dtype)`` constructs a view
| of the array's memory with a different data-type. This can cause a
| reinterpretation of the bytes of memory.
|
| ``a.view(ndarray_subclass)`` or ``a.view(type=ndarray_subclass)`` just
| returns an instance of `ndarray_subclass` that looks at the same array
| (same shape, dtype, etc.) This does not cause a reinterpretation of the
| memory.
|
| For ``a.view(some_dtype)``, if ``some_dtype`` has a different number of
| bytes per entry than the previous dtype (for example, converting a
| regular array to a structured array), then the behavior of the view
| cannot be predicted just from the superficial appearance of ``a`` (shown
| by ``print(a)``). It also depends on exactly how ``a`` is stored in
| memory. Therefore if ``a`` is C-ordered versus fortran-ordered, versus
| defined as a slice or transpose, etc., the view may give different
| results.
|
|
| Examples
| --------
| >>> x = np.array([(1, 2)], dtype=[('a', np.int8), ('b', np.int8)])
|
| Viewing array data using a different type and dtype:
|
| >>> y = x.view(dtype=np.int16, type=np.matrix)
| >>> y
| matrix([[513]], dtype=int16)
| >>> print(type(y))
| <class 'numpy.matrix'>
|
| Creating a view on a structured array so it can be used in calculations
|
| >>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)])
| >>> xv = x.view(dtype=np.int8).reshape(-1,2)
| >>> xv
| array([[1, 2],
| [3, 4]], dtype=int8)
| >>> xv.mean(0)
| array([2., 3.])
|
| Making changes to the view changes the underlying array
|
| >>> xv[0,1] = 20
| >>> x
| array([(1, 20), (3, 4)], dtype=[('a', 'i1'), ('b', 'i1')])
|
| Using a view to convert an array to a recarray:
|
| >>> z = x.view(np.recarray)
| >>> z.a
| array([1, 3], dtype=int8)
|
| Views share data:
|
| >>> x[0] = (9, 10)
| >>> z[0]
| (9, 10)
|
| Views that change the dtype size (bytes per entry) should normally be
| avoided on arrays defined by slices, transposes, fortran-ordering, etc.:
|
| >>> x = np.array([[1,2,3],[4,5,6]], dtype=np.int16)
| >>> y = x[:, 0:2]
| >>> y
| array([[1, 2],
| [4, 5]], dtype=int16)
| >>> y.view(dtype=[('width', np.int16), ('length', np.int16)])
| Traceback (most recent call last):
| ...
| ValueError: To change to a dtype of a different size, the array must be C-contiguous
| >>> z = y.copy()
| >>> z.view(dtype=[('width', np.int16), ('length', np.int16)])
| array([[(1, 2)],
| [(4, 5)]], dtype=[('width', '<i2'), ('length', '<i2')])
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| T
| The transposed array.
|
| Same as ``self.transpose()``.
|
| Examples
| --------
| >>> x = np.array([[1.,2.],[3.,4.]])
| >>> x
| array([[ 1., 2.],
| [ 3., 4.]])
| >>> x.T
| array([[ 1., 3.],
| [ 2., 4.]])
| >>> x = np.array([1.,2.,3.,4.])
| >>> x
| array([ 1., 2., 3., 4.])
| >>> x.T
| array([ 1., 2., 3., 4.])
|
| See Also
| --------
| transpose
|
| __array_finalize__
| None.
|
| __array_interface__
| Array protocol: Python side.
|
| __array_priority__
| Array priority.
|
| __array_struct__
| Array protocol: C-struct side.
|
| base
| Base object if memory is from some other object.
|
| Examples
| --------
| The base of an array that owns its memory is None:
|
| >>> x = np.array([1,2,3,4])
| >>> x.base is None
| True
|
| Slicing creates a view, whose memory is shared with x:
|
| >>> y = x[2:]
| >>> y.base is x
| True
|
| ctypes
| An object to simplify the interaction of the array with the ctypes
| module.
|
| This attribute creates an object that makes it easier to use arrays
| when calling shared libraries with the ctypes module. The returned
| object has, among others, data, shape, and strides attributes (see
| Notes below) which themselves return ctypes objects that can be used
| as arguments to a shared library.
|
| Parameters
| ----------
| None
|
| Returns
| -------
| c : Python object
| Possessing attributes data, shape, strides, etc.
|
| See Also
| --------
| numpy.ctypeslib
|
| Notes
| -----
| Below are the public attributes of this object which were documented
| in "Guide to NumPy" (we have omitted undocumented public attributes,
| as well as documented private attributes):
|
| .. autoattribute:: numpy.core._internal._ctypes.data
| :noindex:
|
| .. autoattribute:: numpy.core._internal._ctypes.shape
| :noindex:
|
| .. autoattribute:: numpy.core._internal._ctypes.strides
| :noindex:
|
| .. automethod:: numpy.core._internal._ctypes.data_as
| :noindex:
|
| .. automethod:: numpy.core._internal._ctypes.shape_as
| :noindex:
|
| .. automethod:: numpy.core._internal._ctypes.strides_as
| :noindex:
|
| If the ctypes module is not available, then the ctypes attribute
| of array objects still returns something useful, but ctypes objects
| are not returned and errors may be raised instead. In particular,
| the object will still have the ``as_parameter`` attribute which will
| return an integer equal to the data attribute.
|
| Examples
| --------
| >>> import ctypes
| >>> x = np.array([[0, 1], [2, 3]], dtype=np.int32)
| >>> x
| array([[0, 1],
| [2, 3]], dtype=int32)
| >>> x.ctypes.data
| 31962608 # may vary
| >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32))
| <__main__.LP_c_uint object at 0x7ff2fc1fc200> # may vary
| >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)).contents
| c_uint(0)
| >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint64)).contents
| c_ulong(4294967296)
| >>> x.ctypes.shape
| <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1fce60> # may vary
| >>> x.ctypes.strides
| <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1ff320> # may vary
|
| data
| Python buffer object pointing to the start of the array's data.
|
| dtype
| Data-type of the array's elements.
|
| Parameters
| ----------
| None
|
| Returns
| -------
| d : numpy dtype object
|
| See Also
| --------
| numpy.dtype
|
| Examples
| --------
| >>> x
| array([[0, 1],
| [2, 3]])
| >>> x.dtype
| dtype('int32')
| >>> type(x.dtype)
| <type 'numpy.dtype'>
|
| flags
| Information about the memory layout of the array.
|
| Attributes
| ----------
| C_CONTIGUOUS (C)
| The data is in a single, C-style contiguous segment.
| F_CONTIGUOUS (F)
| The data is in a single, Fortran-style contiguous segment.
| OWNDATA (O)
| The array owns the memory it uses or borrows it from another object.
| WRITEABLE (W)
| The data area can be written to. Setting this to False locks
| the data, making it read-only. A view (slice, etc.) inherits WRITEABLE
| from its base array at creation time, but a view of a writeable
| array may be subsequently locked while the base array remains writeable.
| (The opposite is not true, in that a view of a locked array may not
| be made writeable. However, currently, locking a base object does not
| lock any views that already reference it, so under that circumstance it
| is possible to alter the contents of a locked array via a previously
| created writeable view onto it.) Attempting to change a non-writeable
| array raises a RuntimeError exception.
| ALIGNED (A)
| The data and all elements are aligned appropriately for the hardware.
| WRITEBACKIFCOPY (X)
| This array is a copy of some other array. The C-API function
| PyArray_ResolveWritebackIfCopy must be called before deallocating
| to the base array will be updated with the contents of this array.
| UPDATEIFCOPY (U)
| (Deprecated, use WRITEBACKIFCOPY) This array is a copy of some other array.
| When this array is
| deallocated, the base array will be updated with the contents of
| this array.
| FNC
| F_CONTIGUOUS and not C_CONTIGUOUS.
| FORC
| F_CONTIGUOUS or C_CONTIGUOUS (one-segment test).
| BEHAVED (B)
| ALIGNED and WRITEABLE.
| CARRAY (CA)
| BEHAVED and C_CONTIGUOUS.
| FARRAY (FA)
| BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS.
|
| Notes
| -----
| The `flags` object can be accessed dictionary-like (as in ``a.flags['WRITEABLE']``),
| or by using lowercased attribute names (as in ``a.flags.writeable``). Short flag
| names are only supported in dictionary access.
|
| Only the WRITEBACKIFCOPY, UPDATEIFCOPY, WRITEABLE, and ALIGNED flags can be
| changed by the user, via direct assignment to the attribute or dictionary
| entry, or by calling `ndarray.setflags`.
|
| The array flags cannot be set arbitrarily:
|
| - UPDATEIFCOPY can only be set ``False``.
| - WRITEBACKIFCOPY can only be set ``False``.
| - ALIGNED can only be set ``True`` if the data is truly aligned.
| - WRITEABLE can only be set ``True`` if the array owns its own memory
| or the ultimate owner of the memory exposes a writeable buffer
| interface or is a string.
|
| Arrays can be both C-style and Fortran-style contiguous simultaneously.
| This is clear for 1-dimensional arrays, but can also be true for higher
| dimensional arrays.
|
| Even for contiguous arrays a stride for a given dimension
| ``arr.strides[dim]`` may be *arbitrary* if ``arr.shape[dim] == 1``
| or the array has no elements.
| It does *not* generally hold that ``self.strides[-1] == self.itemsize``
| for C-style contiguous arrays or ``self.strides[0] == self.itemsize`` for
| Fortran-style contiguous arrays is true.
|
| flat
| A 1-D iterator over the array.
|
| This is a `numpy.flatiter` instance, which acts similarly to, but is not
| a subclass of, Python's built-in iterator object.
|
| See Also
| --------
| flatten : Return a copy of the array collapsed into one dimension.
|
| flatiter
|
| Examples
| --------
| >>> x = np.arange(1, 7).reshape(2, 3)
| >>> x
| array([[1, 2, 3],
| [4, 5, 6]])
| >>> x.flat[3]
| 4
| >>> x.T
| array([[1, 4],
| [2, 5],
| [3, 6]])
| >>> x.T.flat[3]
| 5
| >>> type(x.flat)
| <class 'numpy.flatiter'>
|
| An assignment example:
|
| >>> x.flat = 3; x
| array([[3, 3, 3],
| [3, 3, 3]])
| >>> x.flat[[1,4]] = 1; x
| array([[3, 1, 3],
| [3, 1, 3]])
|
| imag
| The imaginary part of the array.
|
| Examples
| --------
| >>> x = np.sqrt([1+0j, 0+1j])
| >>> x.imag
| array([ 0. , 0.70710678])
| >>> x.imag.dtype
| dtype('float64')
|
| itemsize
| Length of one array element in bytes.
|
| Examples
| --------
| >>> x = np.array([1,2,3], dtype=np.float64)
| >>> x.itemsize
| 8
| >>> x = np.array([1,2,3], dtype=np.complex128)
| >>> x.itemsize
| 16
|
| nbytes
| Total bytes consumed by the elements of the array.
|
| Notes
| -----
| Does not include memory consumed by non-element attributes of the
| array object.
|
| Examples
| --------
| >>> x = np.zeros((3,5,2), dtype=np.complex128)
| >>> x.nbytes
| 480
| >>> np.prod(x.shape) * x.itemsize
| 480
|
| ndim
| Number of array dimensions.
|
| Examples
| --------
| >>> x = np.array([1, 2, 3])
| >>> x.ndim
| 1
| >>> y = np.zeros((2, 3, 4))
| >>> y.ndim
| 3
|
| real
| The real part of the array.
|
| Examples
| --------
| >>> x = np.sqrt([1+0j, 0+1j])
| >>> x.real
| array([ 1. , 0.70710678])
| >>> x.real.dtype
| dtype('float64')
|
| See Also
| --------
| numpy.real : equivalent function
|
| shape
| Tuple of array dimensions.
|
| The shape property is usually used to get the current shape of an array,
| but may also be used to reshape the array in-place by assigning a tuple of
| array dimensions to it. As with `numpy.reshape`, one of the new shape
| dimensions can be -1, in which case its value is inferred from the size of
| the array and the remaining dimensions. Reshaping an array in-place will
| fail if a copy is required.
|
| Examples
| --------
| >>> x = np.array([1, 2, 3, 4])
| >>> x.shape
| (4,)
| >>> y = np.zeros((2, 3, 4))
| >>> y.shape
| (2, 3, 4)
| >>> y.shape = (3, 8)
| >>> y
| array([[ 0., 0., 0., 0., 0., 0., 0., 0.],
| [ 0., 0., 0., 0., 0., 0., 0., 0.],
| [ 0., 0., 0., 0., 0., 0., 0., 0.]])
| >>> y.shape = (3, 6)
| Traceback (most recent call last):
| File "<stdin>", line 1, in <module>
| ValueError: total size of new array must be unchanged
| >>> np.zeros((4,2))[::2].shape = (-1,)
| Traceback (most recent call last):
| File "<stdin>", line 1, in <module>
| AttributeError: Incompatible shape for in-place modification. Use
| `.reshape()` to make a copy with the desired shape.
|
| See Also
| --------
| numpy.reshape : similar function
| ndarray.reshape : similar method
|
| size
| Number of elements in the array.
|
| Equal to ``np.prod(a.shape)``, i.e., the product of the array's
| dimensions.
|
| Notes
| -----
| `a.size` returns a standard arbitrary precision Python integer. This
| may not be the case with other methods of obtaining the same value
| (like the suggested ``np.prod(a.shape)``, which returns an instance
| of ``np.int_``), and may be relevant if the value is used further in
| calculations that may overflow a fixed size integer type.
|
| Examples
| --------
| >>> x = np.zeros((3, 5, 2), dtype=np.complex128)
| >>> x.size
| 30
| >>> np.prod(x.shape)
| 30
|
| strides
| Tuple of bytes to step in each dimension when traversing an array.
|
| The byte offset of element ``(i[0], i[1], ..., i[n])`` in an array `a`
| is::
|
| offset = sum(np.array(i) * a.strides)
|
| A more detailed explanation of strides can be found in the
| "ndarray.rst" file in the NumPy reference guide.
|
| Notes
| -----
| Imagine an array of 32-bit integers (each 4 bytes)::
|
| x = np.array([[0, 1, 2, 3, 4],
| [5, 6, 7, 8, 9]], dtype=np.int32)
|
| This array is stored in memory as 40 bytes, one after the other
| (known as a contiguous block of memory). The strides of an array tell
| us how many bytes we have to skip in memory to move to the next position
| along a certain axis. For example, we have to skip 4 bytes (1 value) to
| move to the next column, but 20 bytes (5 values) to get to the same
| position in the next row. As such, the strides for the array `x` will be
| ``(20, 4)``.
|
| See Also
| --------
| numpy.lib.stride_tricks.as_strided
|
| Examples
| --------
| >>> y = np.reshape(np.arange(2*3*4), (2,3,4))
| >>> y
| array([[[ 0, 1, 2, 3],
| [ 4, 5, 6, 7],
| [ 8, 9, 10, 11]],
| [[12, 13, 14, 15],
| [16, 17, 18, 19],
| [20, 21, 22, 23]]])
| >>> y.strides
| (48, 16, 4)
| >>> y[1,1,1]
| 17
| >>> offset=sum(y.strides * np.array((1,1,1)))
| >>> offset/y.itemsize
| 17
|
| >>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0)
| >>> x.strides
| (32, 4, 224, 1344)
| >>> i = np.array([3,5,2,2])
| >>> offset = sum(i * x.strides)
| >>> x[3,5,2,2]
| 813
| >>> offset / x.itemsize
| 813
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __hash__ = None
###Markdown
Task 4a: Getting HelpIn the practice notebook perform the following:+ In the code cell below, call `help()` on two of the following functions: `np.transpose()`, `np.reshape()`, `np.resize()`, `np.ravel()`, `np.append()`, `np.delete()`, `np.concatenate()`, `np.vstack()`, `np.hstack()`, `np.column_stack()`, `np.vsplit()`, `np.hsplit()` + Respond to this question: Did you understand the help documentation? Could you use the function just by looking at what the help says about it? 5. Manipulating ArraysThus far, we have learned to create arrays, perform basic math, aggregate values, and index arrays. Finally, we need to learn to manipulate them by transposing, reshaping, splitting, joining, appending, and deleting arrays. 5.1 TransposingTransposing an array flips it over its main diagonal, turning rows into columns and columns into rows, as shown in the following animated image:(image source: https://en.wikipedia.org/wiki/Transpose)Numpy allows you to transpose a matrix in one of two ways:+ Using the `transpose()` function+ Accessing the `T` attribute.Execute the following code examples to see an example of an array transpose
###Code
# Create a 2 x 3 random matrix
demo_f = np.random.random((2,3))
print("The original matrix")
print(demo_f)
print("\nThe matrix after being tranposed")
print(np.transpose(demo_f))
print("\nThe tranposed matrix from the T attribute")
print(demo_f.T)
###Output
_____no_output_____
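###Markdown
Returning to Task 4a above, the cell below is a minimal sketch of calling `help()` on one of the listed functions (assuming NumPy has already been imported as `np` earlier in this notebook); `np.reshape()` is used here purely as an example, and any of the other listed functions works the same way.
###Code
# Minimal sketch for Task 4a: print the built-in documentation for one of the listed functions.
help(np.reshape)
###Output
_____no_output_____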
###Markdown
Task 5a: Transposing an ArrayIn the practice notebook perform the following:+ Create a matrix of any size and transpose it. 5.2 Reshaping and ResizingYou can change the dimensions of your array using the following two functions: + `resize()` + `reshape()` The `resize()` function allows you to "stretch" your array to increase its size. This can be useful if you need to add more data to an existing array or you need to adjust it prior to performing arithmetic and broadcasting.The `reshape()` function allows you to change the dimensions of an existing array. For example, if you have a _3 x 2_ array you can change it to a _6 x 1_ array using the `reshape()` function without losing the data values in the array.Examine and execute the following code adapted from the DataCamp Tutorial:
###Code
# Create a 1D array `x` with 4 elements. Print the shape of `x`
x = np.array([1,1,1,1])
print(x.shape)
# Resize `x` to shape (6, 4); np.resize() repeats the original data to fill the new shape
np.resize(x, (6,4))
###Output
_____no_output_____
###Markdown
Notice how the 4-element array was resized to a _6 x 4_ array, with the original values repeated to fill the larger shape.
###Code
# Reshape `x` to (2, 2)
x = np.array([1,2,3,4])
print("\noriginal:")
print(x)
print("\nreshaped:")
print(x.reshape((2,2)))
###Output
_____no_output_____
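###Markdown
As a small extra illustration (not part of the original tutorial), `reshape()` can also infer one dimension for you: passing `-1` for a dimension tells NumPy to compute it from the array's total size. The cell below is a minimal sketch of this, using a hypothetical array `z`.
###Code
# Minimal sketch: let reshape() infer one dimension with -1.
z = np.arange(12) # 12 elements
print(z.reshape((3, -1))) # -1 is inferred as 4, giving a 3 x 4 array
###Output
_____no_output_____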
###Markdown
Task 5b: Reshaping an ArrayIn the practice notebook perform the following:+ Create a matrix and resize it by adding 2 extra columns+ Create a matrix and resize it by adding 1 extra row+ Create a matrix of 8 x 2 and resize it to 4 x 4 5.3 Appending ArraysSometimes, you may want to append one array to another. You can do this with the `append()` function, and you can append along any dimension. Remember that NumPy arrays have **axes**. When you append one array to another you must specify the axis (e.g. row or column for a 2D array) along which you want to append. Axes are identified using a numeric index starting from 0, therefore:+ `0`: the first dimension (down the rows of a 2D array)+ `1`: the second dimension (across the columns of a 2D array)+ `2`: the third dimension+ `3`: the fourth dimension+ etc...For example, examine and execute this code borrowed from the DataCamp tutorial (an additional row-wise example follows it):
###Code
# Append a 1D array to your `my_array`
my_array = np.array([1,2,3,4])
new_array = np.append(my_array, [7, 8, 9, 10])
# Print `new_array`
print(new_array)
# Append an extra column to your `my_2d_array`
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
new_2d_array = np.append(my_2d_array, [[7], [8]], axis=1)
# Print `new_2d_array`
print(new_2d_array)
###Output
_____no_output_____
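###Markdown
As an additional illustration of the `axis` argument (not part of the original tutorial), the cell below is a minimal sketch that appends a new row to `my_2d_array` with `axis=0`. It assumes the previous cell has been executed so that `my_2d_array` exists; the variable name `row_appended` is just an illustrative choice.
###Code
# Minimal sketch (assumes my_2d_array from the cell above): append a new row with axis=0.
# The appended values must match the existing shape along every axis except axis 0,
# so a 1 x 4 array can be appended to the 2 x 4 array, giving a 3 x 4 result.
row_appended = np.append(my_2d_array, [[9, 10, 11, 12]], axis=0)
print(row_appended)
###Output
_____no_output_____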
###Markdown
In the first appending cell above, the array `[7, 8, 9, 10]` is appended to the existing 1D `my_array`. In the second example, a new column containing the values `7` and `8` is appended to `my_2d_array`, one value per row (note the `axis=1` parameter). Task 5c: Appending to an ArrayIn the practice notebook perform the following: + Create a three dimensional array and append another row to the array + Append another column to the array + Print the final results 5.4 Inserting and Deleting ElementsYou can easily insert new elements into an array using the `insert()` function and remove elements using the `delete()` function; a minimal sketch of both follows the joining example below. Task 5d: Inserting and Deleting ElementsIn the practice notebook perform the following:+ Examine the `help()` documentation for how to use the `insert()` and `delete()` functions.+ Create a matrix and practice inserting a row and deleting a column. 5.5 Joining ArraysThere are a variety of functions for joining arrays: + `concatenate()` + `vstack()` + `hstack()` + `column_stack()`Each of these functions is used in the following code borrowed from a [DataCamp](https://www.datacamp.com/) tutorial. Examine and execute the following code cell:
###Code
# Concatenate `my_array` and `x`: similar to np.append()
my_array = np.array([1,2,3,4])
x = np.array([1,1,1,1])
print("concatenate:")
print(np.concatenate((my_array, x)))
# Stack arrays row-wise
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
print("\nvstack:")
print(np.vstack((my_array, my_2d_array)))
# Stack arrays horizontally
print("\nhstack:")
print(np.hstack((my_2d_array, my_2d_array)))
# Stack arrays column-wise
print("\ncolumn_stack:")
print(np.column_stack((my_2d_array, my_2d_array)))
###Output
_____no_output_____
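###Markdown
The cell below is a minimal sketch (not part of the original tutorial) of the `insert()` and `delete()` functions described in section 5.4: it inserts a row into a small matrix and then deletes a column. The matrix `m` and the result names are just illustrative choices.
###Code
# Minimal sketch of np.insert() and np.delete() for section 5.4.
m = np.array([[1, 2, 3], [4, 5, 6]])
# Insert the row [7, 8, 9] before index 1 along axis 0 (the rows).
m_with_row = np.insert(m, 1, [7, 8, 9], axis=0)
print("after insert:")
print(m_with_row)
# Delete column 0 along axis 1 (the columns).
m_without_col = np.delete(m, 0, axis=1)
print("\nafter delete:")
print(m_without_col)
###Output
_____no_output_____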
###Markdown
Task 5e: Joining ArraysIn the practice notebook perform the following:+ Execute the code (as shown above).+ Examine the output from each of the function calls in the cell above. If you need help understanding them, review the help pages for each tool either using the `help()` command or the [Numpy Function Reference](https://docs.scipy.org/doc/numpy/reference/routines.html). + Respond to the following question + Can you identify what is happening with each of them? 5.6 Splitting an ArrayYou may find that you need to split arrays. The following functions allow you to split an array vertically or horizontally: + `vsplit()` + `hsplit()` Examine and execute the following code borrowed from the DataCamp Tutorial:
###Code
# Create a 2D array.
my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
print("original:")
print(my_2d_array)
# Split `my_2d_array` horizontally into 2 equal sub-arrays (column-wise)
print("\nhsplit:")
print(np.hsplit(my_2d_array, 2))
# Split `my_2d_array` vertically into 2 equal sub-arrays (row-wise)
print("\nvsplit:")
print(np.vsplit(my_2d_array, 2))
###Output
_____no_output_____ |
courses/machine_learning/deepdive2/production_ml/labs/samples/contrib/kubeflow-katib/early-stopping.ipynb | ###Markdown
Kubeflow Pipelines with Katib componentIn this notebook you will:- Create a Katib Experiment using the random search algorithm.- Use the median stopping rule as an early stopping algorithm.- Use a Kubernetes Job with the MXNet MNIST training container as a Trial template.- Create a Pipeline to get the optimal hyperparameters.Reference documentation:- https://kubeflow.org/docs/components/katib/experiment/random-search- https://kubeflow.org/docs/components/katib/early-stopping/- https://kubeflow.org/docs/pipelines/overview/concepts/component/ Install required packagesKubeflow Pipelines SDK and Kubeflow Katib SDK.
###Code
# Update the PIP version.
!python -m pip install --upgrade pip
!pip install kfp==1.1.1
!pip install kubeflow-katib==0.10.1
###Output
Defaulting to user installation because normal site-packages is not writeable
Collecting pip
Downloading pip-20.3-py2.py3-none-any.whl (1.5 MB)
[K |████████████████████████████████| 1.5 MB 30.4 MB/s eta 0:00:01
[?25hInstalling collected packages: pip
[33m WARNING: The scripts pip, pip3 and pip3.6 are installed in '/home/jovyan/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.[0m
Successfully installed pip-20.3
[33mWARNING: You are using pip version 20.0.2; however, version 20.3 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.[0m
Defaulting to user installation because normal site-packages is not writeable
Collecting kfp==1.1.1
Downloading kfp-1.1.1.tar.gz (162 kB)
[K |████████████████████████████████| 162 kB 16.2 MB/s eta 0:00:01
[?25hRequirement already satisfied: PyYAML in /usr/local/lib/python3.6/dist-packages (from kfp==1.1.1) (5.3)
Requirement already satisfied: google-cloud-storage>=1.13.0 in /usr/local/lib/python3.6/dist-packages (from kfp==1.1.1) (1.25.0)
Requirement already satisfied: kubernetes<12.0.0,>=8.0.0 in /usr/local/lib/python3.6/dist-packages (from kfp==1.1.1) (10.0.1)
Requirement already satisfied: google-auth>=1.6.1 in /usr/local/lib/python3.6/dist-packages (from kfp==1.1.1) (1.11.0)
Requirement already satisfied: cloudpickle in /usr/local/lib/python3.6/dist-packages (from kfp==1.1.1) (1.2.2)
Requirement already satisfied: jsonschema>=3.0.1 in /usr/local/lib/python3.6/dist-packages (from kfp==1.1.1) (3.2.0)
Collecting click
Downloading click-7.1.2-py2.py3-none-any.whl (82 kB)
[K |████████████████████████████████| 82 kB 1.2 MB/s eta 0:00:01
[?25hCollecting Deprecated
Downloading Deprecated-1.2.10-py2.py3-none-any.whl (8.7 kB)
Requirement already satisfied: wrapt<2,>=1.10 in /usr/local/lib/python3.6/dist-packages (from Deprecated->kfp==1.1.1) (1.11.2)
Collecting docstring-parser>=0.7.3
Downloading docstring_parser-0.7.3.tar.gz (13 kB)
Installing build dependencies ... [?25ldone
[?25h Getting requirements to build wheel ... [?25ldone
[?25h Preparing wheel metadata ... [?25ldone
[?25hRequirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (0.2.8)
Requirement already satisfied: six>=1.9.0 in /usr/lib/python3/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (1.11.0)
Requirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (45.1.0)
Requirement already satisfied: rsa<4.1,>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (4.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (4.0.0)
Requirement already satisfied: google-auth>=1.6.1 in /usr/local/lib/python3.6/dist-packages (from kfp==1.1.1) (1.11.0)
Requirement already satisfied: google-resumable-media<0.6dev,>=0.5.0 in /usr/local/lib/python3.6/dist-packages (from google-cloud-storage>=1.13.0->kfp==1.1.1) (0.5.0)
Requirement already satisfied: google-cloud-core<2.0dev,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from google-cloud-storage>=1.13.0->kfp==1.1.1) (1.3.0)
Requirement already satisfied: google-api-core<2.0.0dev,>=1.16.0 in /usr/local/lib/python3.6/dist-packages (from google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==1.1.1) (1.16.0)
Requirement already satisfied: pytz in /usr/local/lib/python3.6/dist-packages (from google-api-core<2.0.0dev,>=1.16.0->google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==1.1.1) (2019.3)
Requirement already satisfied: six>=1.9.0 in /usr/lib/python3/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (1.11.0)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2.0.0dev,>=1.16.0->google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==1.1.1) (1.51.0)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2.0.0dev,>=1.16.0->google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==1.1.1) (2.22.0)
Requirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (45.1.0)
Requirement already satisfied: protobuf>=3.4.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2.0.0dev,>=1.16.0->google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==1.1.1) (3.11.2)
Requirement already satisfied: google-auth>=1.6.1 in /usr/local/lib/python3.6/dist-packages (from kfp==1.1.1) (1.11.0)
Requirement already satisfied: six>=1.9.0 in /usr/lib/python3/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (1.11.0)
Requirement already satisfied: protobuf>=3.4.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2.0.0dev,>=1.16.0->google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==1.1.1) (3.11.2)
Requirement already satisfied: six>=1.9.0 in /usr/lib/python3/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (1.11.0)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.6/dist-packages (from jsonschema>=3.0.1->kfp==1.1.1) (19.3.0)
Requirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (45.1.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.6/dist-packages (from jsonschema>=3.0.1->kfp==1.1.1) (1.4.0)
Requirement already satisfied: pyrsistent>=0.14.0 in /usr/local/lib/python3.6/dist-packages (from jsonschema>=3.0.1->kfp==1.1.1) (0.15.7)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata->jsonschema>=3.0.1->kfp==1.1.1) (2.1.0)
Collecting kfp-pipeline-spec<0.2.0,>=0.1.0
Downloading kfp_pipeline_spec-0.1.2-py3-none-any.whl (21 kB)
Collecting kfp-server-api<2.0.0,>=0.2.5
Downloading kfp-server-api-1.0.4.tar.gz (51 kB)
[K |████████████████████████████████| 51 kB 945 kB/s eta 0:00:01
[?25hRequirement already satisfied: urllib3>=1.15 in /usr/local/lib/python3.6/dist-packages (from kfp-server-api<2.0.0,>=0.2.5->kfp==1.1.1) (1.25.8)
Requirement already satisfied: six>=1.9.0 in /usr/lib/python3/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (1.11.0)
Requirement already satisfied: certifi in /usr/local/lib/python3.6/dist-packages (from kfp-server-api<2.0.0,>=0.2.5->kfp==1.1.1) (2019.11.28)
Requirement already satisfied: python-dateutil in /usr/local/lib/python3.6/dist-packages (from kfp-server-api<2.0.0,>=0.2.5->kfp==1.1.1) (2.8.1)
Requirement already satisfied: requests-oauthlib in /usr/local/lib/python3.6/dist-packages (from kubernetes<12.0.0,>=8.0.0->kfp==1.1.1) (1.3.0)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2.0.0dev,>=1.16.0->google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==1.1.1) (2.22.0)
Requirement already satisfied: certifi in /usr/local/lib/python3.6/dist-packages (from kfp-server-api<2.0.0,>=0.2.5->kfp==1.1.1) (2019.11.28)
Requirement already satisfied: PyYAML in /usr/local/lib/python3.6/dist-packages (from kfp==1.1.1) (5.3)
Requirement already satisfied: urllib3>=1.15 in /usr/local/lib/python3.6/dist-packages (from kfp-server-api<2.0.0,>=0.2.5->kfp==1.1.1) (1.25.8)
Requirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (45.1.0)
Requirement already satisfied: google-auth>=1.6.1 in /usr/local/lib/python3.6/dist-packages (from kfp==1.1.1) (1.11.0)
Requirement already satisfied: websocket-client!=0.40.0,!=0.41.*,!=0.42.*,>=0.32.0 in /usr/local/lib/python3.6/dist-packages (from kubernetes<12.0.0,>=8.0.0->kfp==1.1.1) (0.57.0)
Requirement already satisfied: six>=1.9.0 in /usr/lib/python3/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (1.11.0)
Requirement already satisfied: python-dateutil in /usr/local/lib/python3.6/dist-packages (from kfp-server-api<2.0.0,>=0.2.5->kfp==1.1.1) (2.8.1)
Requirement already satisfied: six>=1.9.0 in /usr/lib/python3/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (1.11.0)
Requirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (45.1.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.6/dist-packages (from pyasn1-modules>=0.2.1->google-auth>=1.6.1->kfp==1.1.1) (0.4.8)
Requirement already satisfied: six>=1.9.0 in /usr/lib/python3/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (1.11.0)
Requirement already satisfied: six>=1.9.0 in /usr/lib/python3/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (1.11.0)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0.0dev,>=1.16.0->google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==1.1.1) (3.0.4)
Requirement already satisfied: certifi in /usr/local/lib/python3.6/dist-packages (from kfp-server-api<2.0.0,>=0.2.5->kfp==1.1.1) (2019.11.28)
Requirement already satisfied: urllib3>=1.15 in /usr/local/lib/python3.6/dist-packages (from kfp-server-api<2.0.0,>=0.2.5->kfp==1.1.1) (1.25.8)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/lib/python3/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0.0dev,>=1.16.0->google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==1.1.1) (2.6)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2.0.0dev,>=1.16.0->google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==1.1.1) (2.22.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib->kubernetes<12.0.0,>=8.0.0->kfp==1.1.1) (3.1.0)
Collecting requests_toolbelt>=0.8.0
Downloading requests_toolbelt-0.9.1-py2.py3-none-any.whl (54 kB)
[K |████████████████████████████████| 54 kB 3.7 MB/s eta 0:00:01
[?25hRequirement already satisfied: requests<3.0.0dev,>=2.18.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2.0.0dev,>=1.16.0->google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==1.1.1) (2.22.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.6/dist-packages (from pyasn1-modules>=0.2.1->google-auth>=1.6.1->kfp==1.1.1) (0.4.8)
Collecting strip-hints
Downloading strip-hints-0.1.9.tar.gz (30 kB)
Requirement already satisfied: wheel in /usr/lib/python3/dist-packages (from strip-hints->kfp==1.1.1) (0.30.0)
Collecting tabulate
Downloading tabulate-0.8.7-py3-none-any.whl (24 kB)
Requirement already satisfied: six>=1.9.0 in /usr/lib/python3/dist-packages (from google-auth>=1.6.1->kfp==1.1.1) (1.11.0)
Building wheels for collected packages: kfp, docstring-parser, kfp-server-api, strip-hints
Building wheel for kfp (setup.py) ... [?25ldone
[?25h Created wheel for kfp: filename=kfp-1.1.1-py3-none-any.whl size=222518 sha256=a3a3530489f3ca42e25ad6a221909142725d94df20b45106775801e271807ecb
Stored in directory: /home/jovyan/.cache/pip/wheels/88/9f/b1/149db749eeb26a01ddc590b68d2b9c27a9c7eedb799f47d9f1
Building wheel for docstring-parser (PEP 517) ... [?25ldone
[?25h Created wheel for docstring-parser: filename=docstring_parser-0.7.3-py3-none-any.whl size=19230 sha256=df43b4eb88d8f80a3c47f9aa2ecd07ddbfcbead920ed2697911659cd9f185a64
Stored in directory: /home/jovyan/.cache/pip/wheels/32/63/02/bb6eebc5261f10a6de3dcf26336a7b2b8b8dc8cacb6c00f75f
Building wheel for kfp-server-api (setup.py) ... [?25ldone
[?25h Created wheel for kfp-server-api: filename=kfp_server_api-1.0.4-py3-none-any.whl size=105034 sha256=74c422f07d14ea87395dc7d3287076bf79f9391bcbc076171ea7afeb43ed77bc
Stored in directory: /home/jovyan/.cache/pip/wheels/0f/50/2d/28c9ae498b2e5ff5bf5a9765bca3dcd08ab4a79333670de6cb
Building wheel for strip-hints (setup.py) ... [?25ldone
[?25h Created wheel for strip-hints: filename=strip_hints-0.1.9-py2.py3-none-any.whl size=24671 sha256=33ada8d9d7a77196257d7b1205856c86ae7390df64b65dba4911c42895b568e4
Stored in directory: /home/jovyan/.cache/pip/wheels/21/6d/fa/7ed7c0560e1ef39ebabd5cc0241e7fca711660bae1ad752e2b
Successfully built kfp docstring-parser kfp-server-api strip-hints
Installing collected packages: tabulate, strip-hints, requests-toolbelt, kfp-server-api, kfp-pipeline-spec, docstring-parser, Deprecated, click, kfp
[33m WARNING: The script tabulate is installed in '/home/jovyan/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.[0m
[33m WARNING: The script strip-hints is installed in '/home/jovyan/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.[0m
[33m WARNING: The scripts dsl-compile, dsl-compile-v2 and kfp are installed in '/home/jovyan/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.[0m
Successfully installed Deprecated-1.2.10 click-7.1.2 docstring-parser-0.7.3 kfp-1.1.1 kfp-pipeline-spec-0.1.2 kfp-server-api-1.0.4 requests-toolbelt-0.9.1 strip-hints-0.1.9 tabulate-0.8.7
Defaulting to user installation because normal site-packages is not writeable
Collecting kubeflow-katib==0.10.1
Downloading kubeflow_katib-0.10.1-py3-none-any.whl (113 kB)
[K |████████████████████████████████| 113 kB 30.1 MB/s eta 0:00:01
[?25hRequirement already satisfied: kubernetes==10.0.1 in /usr/local/lib/python3.6/dist-packages (from kubeflow-katib==0.10.1) (10.0.1)
Requirement already satisfied: urllib3>=1.15.1 in /usr/local/lib/python3.6/dist-packages (from kubeflow-katib==0.10.1) (1.25.8)
Requirement already satisfied: setuptools>=21.0.0 in /usr/local/lib/python3.6/dist-packages (from kubeflow-katib==0.10.1) (45.1.0)
Requirement already satisfied: certifi>=14.05.14 in /usr/local/lib/python3.6/dist-packages (from kubeflow-katib==0.10.1) (2019.11.28)
Requirement already satisfied: python-dateutil>=2.5.3 in /usr/local/lib/python3.6/dist-packages (from kubeflow-katib==0.10.1) (2.8.1)
Requirement already satisfied: six>=1.10 in /usr/lib/python3/dist-packages (from kubeflow-katib==0.10.1) (1.11.0)
Requirement already satisfied: six>=1.10 in /usr/lib/python3/dist-packages (from kubeflow-katib==0.10.1) (1.11.0)
Requirement already satisfied: requests-oauthlib in /usr/local/lib/python3.6/dist-packages (from kubernetes==10.0.1->kubeflow-katib==0.10.1) (1.3.0)
Requirement already satisfied: urllib3>=1.15.1 in /usr/local/lib/python3.6/dist-packages (from kubeflow-katib==0.10.1) (1.25.8)
Requirement already satisfied: setuptools>=21.0.0 in /usr/local/lib/python3.6/dist-packages (from kubeflow-katib==0.10.1) (45.1.0)
Requirement already satisfied: pyyaml>=3.12 in /usr/local/lib/python3.6/dist-packages (from kubernetes==10.0.1->kubeflow-katib==0.10.1) (5.3)
Requirement already satisfied: certifi>=14.05.14 in /usr/local/lib/python3.6/dist-packages (from kubeflow-katib==0.10.1) (2019.11.28)
Requirement already satisfied: python-dateutil>=2.5.3 in /usr/local/lib/python3.6/dist-packages (from kubeflow-katib==0.10.1) (2.8.1)
Requirement already satisfied: websocket-client!=0.40.0,!=0.41.*,!=0.42.*,>=0.32.0 in /usr/local/lib/python3.6/dist-packages (from kubernetes==10.0.1->kubeflow-katib==0.10.1) (0.57.0)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from kubernetes==10.0.1->kubeflow-katib==0.10.1) (2.22.0)
Requirement already satisfied: google-auth>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from kubernetes==10.0.1->kubeflow-katib==0.10.1) (1.11.0)
Requirement already satisfied: six>=1.10 in /usr/lib/python3/dist-packages (from kubeflow-katib==0.10.1) (1.11.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.0.1->kubernetes==10.0.1->kubeflow-katib==0.10.1) (0.2.8)
Requirement already satisfied: rsa<4.1,>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.0.1->kubernetes==10.0.1->kubeflow-katib==0.10.1) (4.0)
Requirement already satisfied: setuptools>=21.0.0 in /usr/local/lib/python3.6/dist-packages (from kubeflow-katib==0.10.1) (45.1.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.0.1->kubernetes==10.0.1->kubeflow-katib==0.10.1) (4.0.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.6/dist-packages (from pyasn1-modules>=0.2.1->google-auth>=1.0.1->kubernetes==10.0.1->kubeflow-katib==0.10.1) (0.4.8)
Requirement already satisfied: six>=1.10 in /usr/lib/python3/dist-packages (from kubeflow-katib==0.10.1) (1.11.0)
Requirement already satisfied: certifi>=14.05.14 in /usr/local/lib/python3.6/dist-packages (from kubeflow-katib==0.10.1) (2019.11.28)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->kubernetes==10.0.1->kubeflow-katib==0.10.1) (3.0.4)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/lib/python3/dist-packages (from requests->kubernetes==10.0.1->kubeflow-katib==0.10.1) (2.6)
Requirement already satisfied: urllib3>=1.15.1 in /usr/local/lib/python3.6/dist-packages (from kubeflow-katib==0.10.1) (1.25.8)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from kubernetes==10.0.1->kubeflow-katib==0.10.1) (2.22.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib->kubernetes==10.0.1->kubeflow-katib==0.10.1) (3.1.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.6/dist-packages (from pyasn1-modules>=0.2.1->google-auth>=1.0.1->kubernetes==10.0.1->kubeflow-katib==0.10.1) (0.4.8)
Collecting table-logger>=0.3.5
Downloading table_logger-0.3.6-py3-none-any.whl (14 kB)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from table-logger>=0.3.5->kubeflow-katib==0.10.1) (1.18.1)
Requirement already satisfied: six>=1.10 in /usr/lib/python3/dist-packages (from kubeflow-katib==0.10.1) (1.11.0)
Installing collected packages: table-logger, kubeflow-katib
Successfully installed kubeflow-katib-0.10.1 table-logger-0.3.6
###Markdown
Restart the Notebook kernel to use the SDK packages
###Code
from IPython.display import display_html
display_html("<script>Jupyter.notebook.kernel.restart()</script>",raw=True)
###Output
_____no_output_____
###Markdown
Import required packages
###Code
import kfp
import kfp.dsl as dsl
from kfp import components
from kubeflow.katib import ApiClient
from kubeflow.katib import V1beta1ExperimentSpec
from kubeflow.katib import V1beta1AlgorithmSpec
from kubeflow.katib import V1beta1EarlyStoppingSpec
from kubeflow.katib import V1beta1EarlyStoppingSetting
from kubeflow.katib import V1beta1ObjectiveSpec
from kubeflow.katib import V1beta1ParameterSpec
from kubeflow.katib import V1beta1FeasibleSpace
from kubeflow.katib import V1beta1TrialTemplate
from kubeflow.katib import V1beta1TrialParameterSpec
###Output
_____no_output_____
###Markdown
Define an ExperimentYou have to create an Experiment object before deploying it. This Experiment is similar to [this](https://github.com/kubeflow/katib/blob/master/examples/v1beta1/early-stopping/median-stop.yaml) YAML.
###Code
# Experiment name and namespace.
experiment_name = "median-stop"
experiment_namespace = "anonymous"
# Trial count specification.
max_trial_count = 18
max_failed_trial_count = 3
parallel_trial_count = 2
# Objective specification.
objective=V1beta1ObjectiveSpec(
type="maximize",
goal= 0.99,
objective_metric_name="Validation-accuracy",
additional_metric_names=[
"Train-accuracy"
]
)
# Algorithm specification.
algorithm=V1beta1AlgorithmSpec(
algorithm_name="random",
)
# Early Stopping specification.
early_stopping=V1beta1EarlyStoppingSpec(
algorithm_name="medianstop",
algorithm_settings=[
V1beta1EarlyStoppingSetting(
name="min_trials_required",
value="2"
)
]
)
# Experiment search space.
# In this example we tune the learning rate, the number of layers and the optimizer.
# The learning rate uses a deliberately poor feasible space so that more Trials are stopped early.
parameters=[
V1beta1ParameterSpec(
name="lr",
parameter_type="double",
feasible_space=V1beta1FeasibleSpace(
min="0.01",
max="0.3"
),
),
V1beta1ParameterSpec(
name="num-layers",
parameter_type="int",
feasible_space=V1beta1FeasibleSpace(
min="2",
max="5"
),
),
V1beta1ParameterSpec(
name="optimizer",
parameter_type="categorical",
feasible_space=V1beta1FeasibleSpace(
list=[
"sgd",
"adam",
"ftrl"
]
),
),
]
###Output
_____no_output_____
###Markdown
 Define a Trial templateIn this example, the Trial's Worker is a Kubernetes Job.
###Code
# JSON template specification for the Trial's Worker Kubernetes Job.
trial_spec={
"apiVersion": "batch/v1",
"kind": "Job",
"spec": {
"template": {
"metadata": {
"annotations": {
"sidecar.istio.io/inject": "false"
}
},
"spec": {
"containers": [
{
"name": "training-container",
"image": "docker.io/kubeflowkatib/mxnet-mnist:v1beta1-e294a90",
"command": [
"python3",
"/opt/mxnet-mnist/mnist.py",
"--batch-size=64",
"--lr=${trialParameters.learningRate}",
"--num-layers=${trialParameters.numberLayers}",
"--optimizer=${trialParameters.optimizer}"
]
}
],
"restartPolicy": "Never"
}
}
}
}
# Configure parameters for the Trial template.
# We set the retain parameter to "True" so that the Trial Job's Kubernetes Pods are not cleaned up.
trial_template=V1beta1TrialTemplate(
retain=True,
primary_container_name="training-container",
trial_parameters=[
V1beta1TrialParameterSpec(
name="learningRate",
description="Learning rate for the training model",
reference="lr"
),
V1beta1TrialParameterSpec(
name="numberLayers",
description="Number of training model layers",
reference="num-layers"
),
V1beta1TrialParameterSpec(
name="optimizer",
description="Training model optimizer (sdg, adam or ftrl)",
reference="optimizer"
),
],
trial_spec=trial_spec
)
###Output
_____no_output_____
###Markdown
Define an Experiment specificationCreate an Experiment specification from the above parameters.
###Code
experiment_spec=V1beta1ExperimentSpec(
max_trial_count=max_trial_count,
max_failed_trial_count=max_failed_trial_count,
parallel_trial_count=parallel_trial_count,
objective=objective,
algorithm=algorithm,
early_stopping=early_stopping,
parameters=parameters,
trial_template=trial_template
)
###Output
_____no_output_____
###Markdown
 Create a Pipeline using Katib componentThe best hyperparameters are printed after the Experiment is finished.The Experiment is not deleted after the Pipeline is finished.
###Code
# Get the Katib launcher.
katib_experiment_launcher_op = components.load_component_from_url(
"https://raw.githubusercontent.com/kubeflow/pipelines/master/components/kubeflow/katib-launcher/component.yaml")
@dsl.pipeline(
name="Launch Katib early stopping Experiment",
description="An example to launch Katib Experiment with early stopping"
)
def median_stop():
# Katib launcher component.
# Experiment Spec should be serialized to a valid Kubernetes object.
op = katib_experiment_launcher_op(
experiment_name=experiment_name,
experiment_namespace=experiment_namespace,
experiment_spec=ApiClient().sanitize_for_serialization(experiment_spec),
experiment_timeout_minutes=60,
delete_finished_experiment=False)
# Output container to print the results.
op_out = dsl.ContainerOp(
name="best-hp",
image="library/bash:4.4.23",
command=["sh", "-c"],
arguments=["echo Best HyperParameters: %s" % op.output],
)
###Output
_____no_output_____
###Markdown
Run the PipelineYou can check the Katib Experiment info in the Katib UI.
###Code
kfp.Client().create_run_from_pipeline_func(median_stop, arguments={})
###Output
/home/jovyan/.local/lib/python3.6/site-packages/kfp/dsl/_container_op.py:1028: FutureWarning: Please create reusable components instead of constructing ContainerOp instances directly. Reusable components are shareable, portable and have compatibility and support guarantees. Please see the documentation: https://www.kubeflow.org/docs/pipelines/sdk/component-development/#writing-your-component-definition-file The components can be created manually (or, in case of python, using kfp.components.create_component_from_func or func_to_container_op) and then loaded using kfp.components.load_component_from_file, load_component_from_uri or load_component_from_text: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html#kfp.components.load_component_from_file
category=FutureWarning,
|
Project/.ipynb_checkpoints/Local-Sagemaker-checkpoint.ipynb | ###Markdown
Step 1: Downloading the dataAs in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/)> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011. gunzip -c aclImdb_v1.tar.gz | tar xopf - Step 2: Preparing and Processing the dataAlso, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.
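A minimal sketch of the download-and-extract step referenced above (assumptions: the standard archive URL linked on the dataset page, an available `wget` binary, and the local directory that `read_imdb_data` defaults to below):
###Code
%%bash
# Download and extract the IMDb reviews into the directory read_imdb_data() expects
mkdir -p /Users/kurie_jumi/dev
cd /Users/kurie_jumi/dev
wget -nc http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
gunzip -c aclImdb_v1.tar.gz | tar xopf -
###Output
_____no_output_____
###Markdown
Now read the extracted reviews into memory and combine them into a single input structure.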
###Code
import os
import glob
def read_imdb_data(data_dir='/Users/kurie_jumi/dev/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
###Output
IMDB reviews: train = 12500 pos / 12500 neg, test = 12500 pos / 12500 neg
###Markdown
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.
###Code
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return the unified training data, test data, training labels, and test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
###Output
IMDb reviews (combined): train = 25000, test = 25000
###Markdown
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.
###Code
print(train_X[100])
print(train_y[100])
###Output
Of all the movies I have seen, and that's most of them, this is by far the best one made that is primarily about the U.S. Naval Airships (Blimps) during the WW-II era. Yes there are other good LTA related movies, but most use special effects more than any real-time shots. This Man's Navy has considerably more real-time footage of blimps etc. True, lots of corny dialog but that's what makes more interesting Hollywood movies, even today. P.S. I spent 10 years(out of 20) and have over 5,000 hours in Navy Airships of all types, from 1949 through 1959. Proud member of the Naval Airship Association etc. [ATC(LA/AC) USN Retired]
1
###Markdown
 The first step in processing the reviews is to make sure that any html tags that appear are removed. In addition we wish to tokenize and stem our input so that words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.
###Code
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word using the PorterStemmer instance created above
return words
###Output
_____no_output_____
###Markdown
 The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to remove stopwords and stem the remaining words. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.
###Code
# TODO: Apply review_to_words to a review (train_X[100] or any other review)
review_to_words(train_X[100])
# I will skip the cache part (and probably just work with less data for testing)
###Output
_____no_output_____
###Markdown
Transform the dataIn the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews. (TODO) Create a word dictionaryTo begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.
###Code
sample_data = train_X[:100]
sample_data[1]
sample_data_s = train_X[:2]
sample_data_s = ['The story is interesting.', 'The film was interesting','Sorry, gave it a 1.', 'The movie sucks.']
import numpy as np
import operator
def build_dict(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a list of words.
word_count = {} # A dict storing the words that appear in the reviews along with how often they occur
for sentence in data:
# preprocess the sentence in 'data' by using review_to_words to create a list of words
word_list = review_to_words(sentence)
print(word_list)
# add words as key to word_count dictionary and count of words as values
for word in word_list:
if word in word_count:
# print('word {} exists. Increase count'.format(word))
word_count[word]+=1
else :
# print('Add new word {} to the word_count dictionary'.format(word))
word_count.update( {word : 1} )
# TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and
# sorted_words[-1] is the least frequently appearing word.
print(word_count)
    sorted_words = [word for word, count in sorted(word_count.items(), key=operator.itemgetter(1), reverse=True)]
print(type(sorted_words))
print(sorted_words)
word_dict = {} # This is what we are building, a dictionary that translates words into integers
for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'
word_dict[word] = idx + 2 # 'infrequent' labels
return word_dict
word_dict = build_dict(sample_data_s)
# TODO: Use this space to determine the five most frequently appearing words in the training set.
for key,value in word_dict.items():
if value < 7:
print("key = {}, value = {}".format(key,value))
###Output
key = interest, value = 2
key = stori, value = 3
key = film, value = 4
key = sorri, value = 5
key = gave, value = 6
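###Markdown
The "Transform the data" notes above also call for padding short reviews with the 'no word' label (`0`) and truncating long ones to a fixed length. Below is a minimal sketch of that step; `convert_and_pad` is a hypothetical helper name, and it assumes the `word_dict` built above (with `0` reserved for 'no word' and `1` for 'infrequent word').
###Code
def convert_and_pad(word_dict, words, pad=500):
    """Convert a list of words to a fixed-length list of integer labels."""
    NOWORD = 0   # label reserved for padding / 'no word'
    INFREQ = 1   # label reserved for words not in word_dict
    working_sentence = [NOWORD] * pad
    for i, word in enumerate(words[:pad]):
        working_sentence[i] = word_dict.get(word, INFREQ)
    return working_sentence, min(len(words), pad)

# Example: convert one of the short test sentences used above
convert_and_pad(word_dict, review_to_words(sample_data_s[0]), pad=10)
###Output
_____no_output_____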
|
12_lstm_csm_review.ipynb | ###Markdown
Create the splits
###Code
def splitter(df):
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=999)
for train_index, test_index in split.split(df, df['csm_rating']):
train_data= df.loc[train_index]
test_data = df.loc[test_index]
return train_data, test_data
train_data, test_data = splitter(df)
x_tr, y_tr = train_data['csm_review'].values, train_data["csm_rating"].values
x_val, y_val = test_data["csm_review"].values, test_data["csm_rating"].values
print(x_tr.shape, y_tr.shape)
print(x_val.shape, y_val.shape)
###Output
(1164,) (1164,)
###Markdown
Prepare the Data
###Code
#Tokenize the sentences
tokenizer = Tokenizer()
#preparing vocabulary
tokenizer.fit_on_texts(list(x_tr))
#converting text into integer sequences
x_tr_seq = tokenizer.texts_to_sequences(x_tr)
x_val_seq = tokenizer.texts_to_sequences(x_val)
print(len(max(x_tr_seq, key=len)))
max_length = len(max(x_tr_seq, key=len))
print(len(min(x_tr_seq, key=len)))
#padding to prepare sequences of same length
x_tr_seq = pad_sequences(x_tr_seq, maxlen=max_length)
x_val_seq = pad_sequences(x_val_seq, maxlen=max_length)
print(len(max(x_tr_seq, key=len)))
print(len(min(x_tr_seq, key=len)))
size_of_vocabulary=len(tokenizer.word_index) + 1 #+1 for padding
print(size_of_vocabulary)
###Output
30700
###Markdown
Create embeddings
###Code
word_index = tokenizer.word_index
print("Found %s unique tokens." % len(word_index))
###Output
Found 30699 unique tokens.
###Markdown
Create the Model
###Code
model=Sequential()
#embedding layer
model.add(Embedding(size_of_vocabulary,300,input_length=max_length,trainable=True))
#lstm layer
model.add(LSTM(128,return_sequences=True,dropout=0.2))
#Global Maxpooling
model.add(GlobalMaxPooling1D())
#Dense Layer
model.add(Dense(64,activation='relu'))
model.add(Dense(1,activation='relu'))
#Add loss function, metrics, optimizer
#optimizer = tf.keras.optimizers.RMSprop(0.001)
model.compile(optimizer="RMSprop", loss='mean_squared_error', metrics=["mae"])
#Print summary of model
print(model.summary())
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (None, 502, 300) 9210000
_________________________________________________________________
lstm (LSTM) (None, 502, 128) 219648
_________________________________________________________________
global_max_pooling1d (Global (None, 128) 0
_________________________________________________________________
dense (Dense) (None, 64) 8256
_________________________________________________________________
dense_1 (Dense) (None, 1) 65
=================================================================
Total params: 9,437,969
Trainable params: 9,437,969
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Callbacks
###Code
earlystop = EarlyStopping(patience=10)
learning_rate_reduction = ReduceLROnPlateau(monitor='val_mae',
patience=2,
verbose=1,
factor=0.5,
min_lr=0.00001)
callbacks = [earlystop, learning_rate_reduction]
###Output
_____no_output_____
###Markdown
Fit the Model
###Code
history = model.fit(np.array(x_tr_seq),
np.array(y_tr),
batch_size=128,
epochs=1000,
validation_data=(np.array(x_val_seq),np.array(y_val)),
verbose=1,
callbacks=callbacks)
#evaluation
val_loss, val_mae = model.evaluate(x_val_seq, y_val)
print("The val_mae is %.3f." % val_mae)
plt.plot(model.history.history["loss"], label="loss");
plt.plot(model.history.history["val_loss"], label="val_loss");
plt.legend();
plt.show();
plt.close();
plt.plot(model.history.history["mae"], label="mae");
plt.plot(model.history.history["val_mae"], label="val_mae");
plt.legend();
###Output
_____no_output_____
###Markdown
[Use Transfer Learning](https://www.analyticsvidhya.com/blog/2020/03/pretrained-word-embeddings-nlp/)
###Code
# load the whole embedding into memory
embeddings_index = dict()
with open("/content/drive/My Drive/Colab Notebooks/glove.6B/glove.6B.300d.txt") as f:
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
print('Loaded %s word vectors.' % len(embeddings_index))
# create a weight matrix for words in training docs
embedding_matrix = np.zeros((size_of_vocabulary, 300))
for word, i in tokenizer.word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
model2=Sequential()
#embedding layer, initialized with the pretrained GloVe weight matrix built above
model2.add(Embedding(size_of_vocabulary,300,weights=[embedding_matrix],input_length=max_length,trainable=True))
#lstm layer
model2.add(LSTM(128,return_sequences=True,dropout=0.2))
#Global Maxpooling
model2.add(GlobalMaxPooling1D())
#Dense Layer
model2.add(Dense(64,activation='relu'))
model2.add(Dense(1,activation='relu'))
#Add loss function, metrics, optimizer
#optimizer = tf.keras.optimizers.RMSprop(0.001)
model2.compile(optimizer="RMSprop", loss='mean_squared_error', metrics=["mae"])
#Print summary of model
print(model2.summary())
history = model2.fit(np.array(x_tr_seq),
np.array(y_tr),
batch_size=128,
epochs=1000,
validation_data=(np.array(x_val_seq),np.array(y_val)),
verbose=1,
callbacks=callbacks)
#evaluation
_, val_mae = model2.evaluate(x_val_seq, y_val)
print("The val_mae is %.3f." % val_mae)
model2.save('/content/drive/My Drive/final_project/lstm_csm_review_transfer_model_scaled')
plt.plot(model2.history.history["loss"], label="loss");
plt.plot(model2.history.history["val_loss"], label="val_loss");
plt.legend();
plt.show();
plt.close();
plt.plot(model2.history.history["mae"], label="mae");
plt.plot(model2.history.history["val_mae"], label="val_mae");
plt.legend();
a = model.predict(x_val_seq)   # predictions from the first (custom embedding) model
b = model2.predict(x_val_seq)  # predictions from the transfer-learning model
#a = min_max_scaler_target.inverse_transform(a)
#b = min_max_scaler_target.inverse_transform(b)
csm_lstm_predictions_df = pd.DataFrame({"csm_custom": list(a), "csm_transfer": list(b)},
columns = ["csm_custom", "csm_transfer"],
index=test_data.index)
csm_lstm_predictions_df.to_csv('/content/drive/My Drive/final_project/lstm_csm_review.csv', index=False)
###Output
_____no_output_____ |
Module 06 - Inferring/Class 33 - Statistical inference and multiple regression.ipynb | ###Markdown
INFO 3402 – Class 33: Statistical inference and multiple regression[Brian C. Keegan, Ph.D.](http://brianckeegan.com/) [Assistant Professor, Department of Information Science](https://www.colorado.edu/cmci/people/information-science/brian-c-keegan) University of Colorado Boulder Credit also goes to Jake VanderPlas's *[Python Data Science Handbook](https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.html)* and Justin Markham's [DAT4](https://github.com/justmarkham/DAT4/blob/master/notebooks/08_linear_regression.ipynb) notebooks.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import numpy as np
import pandas as pd
pd.options.display.max_columns = 200
###Output
_____no_output_____
###Markdown
Basic intuitionWe often want to understand the relationship between continuous variables. Let's make a simple toy example.
###Code
rng = np.random.RandomState(20190408)
x = 10 * rng.rand(100)
# y = ax + b + μ
# μ is an error term, or additional noise, because real data also has errors/noise
y = 2 * x + 5 + rng.randn(100)
plt.scatter(x,y,s=25,color='k')
###Output
_____no_output_____
###Markdown
There is a very strong correlation between these two variables.
###Code
np.corrcoef(x,y)[0,1]
###Output
_____no_output_____
###Markdown
 We could also imagine fitting a line through this data, defined by the equation:$y = ax + b$where $y$ is the outcome variable, $a$ is the slope, $x$ is the input variable, and $b$ is the intercept. We already have $x$ and $y$ values and we explicitly designed $y$ such that $a=2$ and $b=5$.The goal of regression is to learn possible $a$ and $b$ values from this noisy data.
###Code
# From: https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.html
from sklearn.linear_model import LinearRegression
model = LinearRegression(fit_intercept=True)
model.fit(x[:, np.newaxis], y)
xfit = np.linspace(0, 10, 1000)
yfit = model.predict(xfit[:, np.newaxis])
plt.scatter(x, y, s=25, color='k')
plt.plot(xfit, yfit, color='r')
###Output
_____no_output_____
###Markdown
 The fitted `model` object contains the $a$ and $b$ values within `.coef_` and `.intercept_`, respectively. These values are close, but not identical to the "true" values because of the error/noise we introduced:$y = 1.96x + 5.32$
###Code
print("Model slope: {0:.2f}".format(model.coef_[0]))
print("Model intercept: {0:.2f}".format(model.intercept_))
###Output
Model slope: 1.96
Model intercept: 5.32
###Markdown
 If we saw new values of $x$ at 12, 13, and 18, we could use this `model` to `predict` their $y$ values.
###Code
new_x = [12,13,18]
predictions = model.predict(np.reshape(new_x,(-1,1)))
predictions
###Output
_____no_output_____
###Markdown
Plotting the `new_x` values with their predicted $y$ values from the `model`, in green:
###Code
plt.scatter(x, y, s=25, color='k')
plt.plot(xfit, yfit, lw=3, color='r')
plt.scatter(new_x,predictions,s=100,color='g')
###Output
_____no_output_____
###Markdown
 How did the `model` learn these values? In case your statistics training did not introduce you to the concept of "least squares": the model found the parameters of the line that minimize the *sum of squared residuals*.In the figure, the black dots are the observed values, the blue line is the model line, and the red lines are the residuals, or the distance between the observed value and the model prediction. Some residuals are positive when the observed value is greater than the model's predicted value and other residuals are negative when the observed value is less than the model's predicted value. The model "learns" its parameters by minimizing the sum of these squared residuals. (Squaring the residuals allows us to treat negative and positive residuals similarly). This simplest form of estimating the parameters of a linear regression model is also sometimes called "Ordinary Least Squares" or OLS regression. Load dataWe are going to use 2016 data from the [World Happiness report](http://worldhappiness.report/ed/2018/). The [report defines](https://s3.amazonaws.com/happiness-report/2018/Appendix1ofChapter2.pdf) each of these variables, generally drawing on data from the [Gallup World Poll](https://www.gallup.com/analytics/232838/world-poll.aspx) (GWP, whose raw data is not publicly available) and the World Bank's [World Development Indicators](https://datacatalog.worldbank.org/dataset/world-development-indicators).* **`Life Ladder`** - National average response to the question "Please imagine a ladder, with steps numbered from 0 at the bottom to 10 at the top. The top of the ladder represents the best possible life for you and the bottom of the ladder represents the worst possible life for you. On which step of the ladder would you say you personally feel you stand at this time?"* **`Log GDP per capita`** - Log-normalized gross domestic product* **`Social support`** - National average of the binary response to the question "If you were in trouble, do you have relatives or friends you can count on to help you whenever you need them, or not?"* **`Healthy life expectancy at birth`** - Number of years of good health a newborn can expect (from [WHO](https://www.who.int/gho/mortality_burden_disease/life_tables/hale_text/en/))* **`Freedom to make life choices`** - National average of the binary response to the question "Are you satisfied or dissatisfied with your freedom to choose what you do with your life?"* **`Generosity`** - The residual of regressing the national average of responses to the question "Have you donated money to a charity in the past month?" on GDP per capita* **`Perceptions of corruption`** - National average of the survey responses to two questions: "Is corruption widespread throughout the government or not?" and "Is corruption widespread within businesses or not?"* **`Positive affect`** - The average of three positive affect measures: happiness, laugh, and enjoyment* **`Negative affect`** - The average of three negative affect measures: worry, sadness, and anger* **`Confidence in national government`** - National average of the binary response to the question "Do you have confidence in the national government?"* **`Democratic Quality`** - National average of survey items related to (1) Voice and Accountability and (2) Political Stability and Absence of Violence* **`Delivery Quality`** - National average of survey items related to (1) Government Effectiveness, (2) Regulatory Quality, (3) Rule of Law, (4) Control of Corruption* **`Gini coefficient`** - The [Gini coefficient](https://en.wikipedia.org/wiki/Gini_coefficient) is a measure of income inequality, 0 is absolute equality and 1 is absolute inequality
###Code
whi_df = pd.read_csv('world_happiness_indicators_2016.csv').dropna(how='all')
whi_df.head()
whi_df.describe()
###Output
_____no_output_____
###Markdown
Explore dataStart exploring the data by making a [heatmap](https://seaborn.pydata.org/generated/seaborn.heatmap.html) of the variables in `whi_df`.
###Code
# Only correlate the numeric columns
whi_corr = whi_df[whi_df.columns[2:]].corr()
# Using masking code from: https://seaborn.pydata.org/generated/seaborn.heatmap.html
whi_mask = np.zeros_like(whi_corr)
whi_mask[np.triu_indices_from(whi_mask)] = True
# Set up the plotting environment
f,ax = plt.subplots(1,1,figsize=(12,12))
# Make a heatmap
sb.heatmap(whi_corr,vmin=-1,vmax=1,mask=whi_mask,annot=True,square=True,ax=ax,cmap='coolwarm_r')
###Output
_____no_output_____
###Markdown
Explore a few of the strongest pairwise correlations in more detail. Seaborn's [`lmplot`](https://seaborn.pydata.org/generated/seaborn.lmplot.html) will fit a simple linear model to the data.
###Code
sb.lmplot(x='Log GDP per capita',y='Healthy life expectancy at birth',data=whi_df,ci=0,
line_kws={'color':'r','lw':3},scatter_kws={'color':'k'})
sb.lmplot(x='Delivery Quality',y='Democratic Quality',data=whi_df,
ci=0,line_kws={'color':'r','lw':3},scatter_kws={'color':'k'})
sb.lmplot(x='Healthy life expectancy at birth',y='Social support',data=whi_df,
ci=0,line_kws={'color':'r','lw':3},scatter_kws={'color':'k'})
sb.lmplot(x='Healthy life expectancy at birth',y='Gini coefficient',data=whi_df,
ci=0,line_kws={'color':'r','lw':3},scatter_kws={'color':'k'})
###Output
_____no_output_____
###Markdown
Multiple variablesThe "Life ladder" variable is the most direct measure of national happiness. We see from the correlation heatmap that multiple variables are strongly correlated with it, such as GDP, freedom, social support, and life expectancy while other variables like the Gini coefficient and Perceptions of corruption are strongly anti-correlated with it.
###Code
f,axs = plt.subplots(2,3,figsize=(12,8),sharey=True)
cols = ['Log GDP per capita','Social support','Freedom to make life choices','Healthy life expectancy at birth',
'Gini coefficient','Perceptions of corruption']
for i,ax in enumerate(axs.flatten()):
# Plot the data
whi_df.plot.scatter(x=cols[i],y='Life Ladder',ax=ax,color='k')
# Deal with missing data
_df = whi_df[[cols[i],'Life Ladder']].dropna(how='any')
# Compute the correlation coefficient
corr_coef = np.corrcoef(_df[cols[i]],_df['Life Ladder'])[0,1]
# Make the correlation coefficient the axis title
    ax.set_title('$r = {0:.2f}$'.format(corr_coef))  # corr_coef is Pearson's r, not r-squared
f.tight_layout()
###Output
_____no_output_____
###Markdown
Do some of these variables have a stronger relationship than others? What is the effect of each of these variables on the outcome? Regression allows us to put these multiple variables into the model. You can use scikit-learn's `LinearRegression` module to do the same, but we will use a library called [statsmodels](https://www.statsmodels.org/stable/index.html) instead.
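For comparison, here is a hedged sketch of the scikit-learn version of the same single-variable fit (it assumes scikit-learn is installed and reuses the `whi_df` column names from above); it should give essentially the same point estimates, but none of the standard errors or test statistics that the statsmodels summaries below provide:

```python
from sklearn.linear_model import LinearRegression

# Drop rows that are missing either of the two columns we need
_df = whi_df[['Healthy life expectancy at birth', 'Life Ladder']].dropna(how='any')

sk_lm = LinearRegression()
sk_lm.fit(_df[['Healthy life expectancy at birth']], _df['Life Ladder'])

# Point estimates only; no standard errors, t statistics, or p-values
print(sk_lm.coef_, sk_lm.intercept_)
```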
###Code
import statsmodels.formula.api as smf
###Output
_____no_output_____
###Markdown
statsmodels doesn't like variables with spaces in the name, so clean that up by giving the columns new titles.
###Code
# Make a new DataFrame with relevant columns
whi_subdf = whi_df[cols + ['Life Ladder']].copy()
# Put the country names in the index
whi_subdf.set_index(whi_df['country'],inplace=True)
# Rename the columns
whi_subdf.columns = ['gdp','support','freedom','expectancy','gini','corruption','ladder']
# Describe the values in each column
whi_subdf.describe()
whi_subdf.loc['Chile','gini']
###Output
_____no_output_____
###Markdown
We can specify a simple single-variable linear model using statsmodels' formula style, which emulates the popular style from R (regrettably, R's implementation is much more robust than this Python one). Here, the linear relationship we are modeling is: $ladder = \beta_0 \times expectancy + \beta_1$. There are a lot of descriptive and diagnostic statistics in this summary report from the model. Some key metrics:* **No. Observations** - The number of countries (rows) in the model* **R-squared** - The [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination) is the proportion of variance in the data explained by the model. Higher values are generally better.* **coef** for each variable - The regression coefficients* **std error** for each variable - The standard error of the estimated coefficient* **t** for each variable - The test statistic for evaluating the statistical significance of the variable. This model believes the linear relationship is: $ladder = 0.1137 \times expectancy - 1.7857$. It is ***CRUCIAL*** for any job that you be able to interpret these regression coefficients. We are modeling a relationship between ladder score (0-10) and life expectancy (years). The model estimates that for every additional year of life expectancy a country has, its ladder score increases by 0.1137 points. Everything else being equal, a country with a life expectancy 10 years greater than another country should have a ladder score 1.137 points higher. We can reject the null hypothesis that the $\beta_0$ value is 0 because the estimated $\beta_0$ value of 0.1137±0.008 is much more extreme than 0, making this a "statistically significant" result.
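As a quick sanity check on this interpretation (a sketch that assumes the `lm0` model fit in the next cell), you can pull the estimated coefficients out of the fitted model and reproduce the 10-year comparison by hand:

```python
# Run this after fitting lm0 in the next cell
b = lm0.params                      # pandas Series with 'Intercept' and 'expectancy'
print(b)

# Predicted ladder scores at 60 vs. 70 years of life expectancy
pred_60 = b['Intercept'] + b['expectancy'] * 60
pred_70 = b['Intercept'] + b['expectancy'] * 70
print(pred_70 - pred_60)            # = 10 * b['expectancy'], about 1.14 points
```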
###Code
# Specify the model relationship using column titles, make sure to also fit the model
lm0 = smf.ols(formula = 'ladder ~ expectancy',data=whi_subdf).fit()
# Summarize the model
lm0.summary()
###Output
_____no_output_____
###Markdown
We can begin to add more variables to the model to see how the parameters and their interpretations change. Adding in "gini", its coefficient is -0.0511±0.702, which produces a small test statistic, indicating it does not have a statistically significant relationship with ladder. The R-squared value remains 0.608, indicating that adding this variable did not improve model fit. There are also more advanced diagnostic metrics (you would want to take a class like BCOR 1025, PSYC 2111, STAT 4520, or ECON 4818 to really dive in). The [AIC](https://en.wikipedia.org/wiki/Akaike_information_criterion) and [BIC](https://en.wikipedia.org/wiki/Bayesian_information_criterion) values measure relative model performance: their values increased from `lm0` to `lm1`, indicating the new variable added in `lm1` is probably not worth including. There's also a [multicollinearity](https://en.wikipedia.org/wiki/Multicollinearity) warning at the bottom because "gini" and "expectancy" are strongly correlated with each other as well as with the "ladder" score.
###Code
# Specify the model relationship using column titles, make sure to also fit the model
lm1 = smf.ols(formula = 'ladder ~ expectancy + gini',data=whi_subdf).fit()
# Summarize the model
lm1.summary()
###Output
_____no_output_____
###Markdown
Specifying a full model with all six columns in `whi_subdf`, the model fit (R-squared) improves substantially to 0.774 (the AIC and BIC scores are lower than `lm0`'s, meaning adding these variables is probably justified); however, we lost 13 observations due to missing data. By including these other variables, the effect size and statistical significance of "expectancy" changed.
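As a side-by-side check (a sketch that assumes `lm0`, `lm1`, and the `lm2` model fit in the next cell), you can tabulate the fit statistics instead of reading them off three separate summaries:

```python
# Run this after fitting lm2 in the next cell
import pandas as pd

pd.DataFrame({
    'N':         [int(m.nobs) for m in (lm0, lm1, lm2)],
    'R-squared': [m.rsquared  for m in (lm0, lm1, lm2)],
    'AIC':       [m.aic       for m in (lm0, lm1, lm2)],
    'BIC':       [m.bic       for m in (lm0, lm1, lm2)],
}, index=['lm0', 'lm1', 'lm2'])
```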
###Code
# Specify the model relationship using column titles, make sure to also fit the model
lm2 = smf.ols(formula = 'ladder ~ expectancy + gini + gdp + support + freedom + corruption',data=whi_subdf).fit()
# Summarize the model
lm2.summary()
###Output
_____no_output_____
###Markdown
Using our model, we could predict the "ladder" score of a hypothetical dystopia. We will make:* **life expectancy**: 30 years* **gini coefficient**: 1, all the money in the country is owned by one person* **log GDP per capita**: 7 (log US dollars), approximately \$1100* **support**: 0, no one reports being able to count on anyone* **freedom**: 0, no one reports the freedom to make life choices* **corruption**: 1, everyone reports that businesses and the government are corrupt. Specifying these values as a dictionary keyed by the variable name and passing them to the `predict` method of the `lm2` model we trained, we can get the model's "ladder" score prediction for our dystopian country. This comes back as -0.140, or less than 0, so we did a good job of creating a dystopia.
###Code
dystopia_vars = {'expectancy':30,'gini':1,'gdp':7,'support':0,'freedom':0,'corruption':1}  # values match the dystopia described above
lm2.predict(dystopia_vars)
np.exp(7)  # back-transform the log GDP value used above: roughly $1,100 per capita
###Output
_____no_output_____
###Markdown
Exercise***BECAUSE THE ABILITY TO INTERPRET REGRESSION COEFFICIENTS IS SO CRITICALLY IMPORTANT FOR DATA SCIENTISTS***, interpret the `lm2` model.* What are the units of "expectancy", "gini", "gdp", "support", "freedom", and "corruption", and *most importantly* "ladder"? (Years, dollars, meters, points, *etc*.?) What are the minimum, maximum, and typical values for these variables? * expectancy * gini * gdp * support * freedom * corruption * ladder* Can you compare a one unit change in expectancy to a one unit change in "gdp" or "freedom"?* Using the $t$ statistics (or looking at the P>|t| column), which variables have statistically significant results?* If you were creating a new country that wanted to have the highest possible ladder score, which variables would you try to minimize and maximize? Try creating a socialist utopia, libertarian paradise, *etc*. using the `predict` method.* Having experimented with different scenarios (and thinking about the different kinds of units), changing which variables produces the greatest changes in the ladder score? AppendixData cleaning the [raw WDI CSV](http://databank.worldbank.org/data/download/WDI_csv.zip).
###Code
wdi_df = pd.read_csv('world-development-indicators.csv')
wdi_df.head()
###Output
_____no_output_____
###Markdown
Look at a single year of data. Set the index to country and indicator name, select the 2016 column, and reshape.
###Code
wdi_2016_df = wdi_df.set_index(['Country Name','Indicator Name'])['2016'].unstack(1)
wdi_2016_df.head()
###Output
_____no_output_____
###Markdown
How does varying the amount of missing data permitted in a column before dropping change the number of columns in the DataFrame?
###Code
null_rate = wdi_2016_df.isnull().sum()/len(wdi_2016_df)
thresh_len_d = {}
for _thresh in np.arange(0,1,.01):
_no_null_columns = null_rate[null_rate < _thresh].index
thresh_len_d[_thresh] = len(wdi_2016_df[_no_null_columns].columns)
ax = pd.Series(thresh_len_d).plot(lw=3)
ax.set_xlim((0,1))
ax.set_ylim((0,1400))
ax.set_xlabel('Keeping columns with this % of missing values')
ax.set_ylabel('Produces this many columns in the DataFrame')
ax.axvline(.1,c='r',ls='--',lw=1)
###Output
_____no_output_____
###Markdown
Keeping columns with less than 10% missing data (the red dashed line above) seems like a reasonable compromise.
###Code
no_null_columns = null_rate[null_rate < 0.1].index  # keep columns with less than 10% missing values
wdi_2016_few_nulls_df = wdi_2016_df[no_null_columns]
print("There are {0:,} columns in the DataFrame.".format(len(wdi_2016_few_nulls_df.columns)))
wdi_2016_few_nulls_df.to_csv('world_development_indicators_2016.csv')
wdi_2016_few_nulls_df.head()
###Output
_____no_output_____
###Markdown
Data cleaning the [raw WHI data](https://s3.amazonaws.com/happiness-report/2018/WHR2018Chapter2OnlineData.xls).
###Code
whi_df = pd.read_excel('world-happiness-report.xls',sheet_name=0)
whi_2016_df = whi_df.query('year == 2016').reset_index(drop=True)
whi_2016_df = whi_2016_df[list(whi_2016_df.columns[:-5])+[whi_2016_df.columns[-1]]]
whi_2016_df.rename(columns={'gini of household income reported in Gallup, by wp5-year':'Gini coefficient'},inplace=True)
whi_2016_df.to_csv('world_happiness_indicators_2016.csv',index=False)
whi_2016_df.head()
###Output
_____no_output_____ |
tutorials/asr/Online_Offline_Microphone_VAD_Demo.ipynb | ###Markdown
Voice Activity Detection (VAD): This notebook demonstrates how to perform: 1. [offline streaming inference on audio files (offline VAD)](Offline-streaming-inference); 2. [finetuning](Finetune) and using the [posterior](Posterior); 3. [VAD postprocessing and threshold tuning](VAD-postprocessing-and-Tuning-threshold); 4. [online streaming inference](Online-streaming-inference); 5. [online streaming inference from a microphone's stream](Online-streaming-inference-through-microphone). The notebook requires the PyAudio library to get a signal from an audio device. For Ubuntu, please run the following command to install it: ```sudo apt-get install -y portaudio19-dev && pip install pyaudio``` This notebook requires the `torchaudio` library to be installed for MarbleNet. Please follow the instructions available at the [torchaudio GitHub page](https://github.com/pytorch/audio#installation) to install the appropriate version of torchaudio. If you would like to install the latest version, please run the following command to install it: ```conda install -c pytorch torchaudio```
###Code
import numpy as np
import pyaudio as pa
import os, time
import librosa
import IPython.display as ipd
import matplotlib.pyplot as plt
%matplotlib inline
import nemo
import nemo.collections.asr as nemo_asr
# sample rate, Hz
SAMPLE_RATE = 16000
###Output
_____no_output_____
###Markdown
Restore the model from NGC
###Code
vad_model = nemo_asr.models.EncDecClassificationModel.from_pretrained('vad_marblenet')
###Output
_____no_output_____
###Markdown
Observing the config of the model
###Code
from omegaconf import OmegaConf
import copy
# Preserve a copy of the full config
cfg = copy.deepcopy(vad_model._cfg)
print(OmegaConf.to_yaml(cfg))
###Output
_____no_output_____
###Markdown
Setup preprocessor with these settings
###Code
vad_model.preprocessor = vad_model.from_config_dict(cfg.preprocessor)
# Set model to inference mode
vad_model.eval();
vad_model = vad_model.to(vad_model.device)
###Output
_____no_output_____
###Markdown
We demonstrate two methods for streaming inference: 1. [offline streaming inference (script)](Offline-streaming-inference) 2. [online streaming inference (step-by-step)](Online-streaming-inference) Offline streaming inference: VAD relies on shorter fixed-length segments for prediction. During inference, we generate frame-level predictions with two approaches: 1. shift a window of length `time_length` (e.g. 0.63s) by `shift_length` (e.g. 10ms) to generate the frames and use the prediction of the window to represent the label for the frame; use ```python /examples/asr/vad_infer.py``` This script will automatically split long audio files to avoid CUDA memory issues and performs **streaming** inside `AudioLabelDataset`. Posterior: 2. generate predictions with overlapping input segments, then apply a smoothing filter to decide the label for a frame spanned by multiple segments. Take the frame-level predictions from the step above and use ```python/scripts/voice_activity_detection/vad_overlap_posterior.py```Have a look at the [MarbleNet paper](https://arxiv.org/pdf/2010.13886.pdf) for choices about segment length, smoothing filter, etc., and play with those parameters on your data. You can also find code for converting frame-level predictions into speech/no-speech segments (in start and end time format) in `vad_overlap_posterior.py`. Finetune: You might need to finetune on your data for better performance. For finetuning/transfer learning, please refer to the [**Transfer learning** part of the ASR tutorial](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/asr/ASR_with_NeMo.ipynb) VAD postprocessing and Tuning threshold: We can use a single **threshold** to binarize predictions or use typical VAD postprocessing, including Binarization: 1. **onset** and **offset** thresholds for detecting the beginning and end of a speech segment; 2. padding durations before (**pad_onset**) and after (**pad_offset**) each speech segment. Filtering: 1. threshold for short speech segment deletion (**min_duration_on**); 2. threshold for short silence deletion (**min_duration_off**); 3. whether to perform short speech segment deletion first (**filter_speech_first**). A toy sketch of this binarization idea appears just below, before the streaming setup code. Of course, you can do threshold tuning on frame-level predictions. We also provide a script ```python/scripts/voice_activity_detection/vad_tune_threshold.py```to help you find the best thresholds if you have a ground truth label file in RTTM format. Online streaming inference: Setting up data for Streaming Inference
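Before moving on to the streaming setup: to illustrate what the binarization step above does, here is a toy NumPy sketch (our own simplified logic with made-up probabilities, not NeMo's implementation; the `binarize` helper is hypothetical) that turns frame-level speech probabilities into segments using separate onset/offset thresholds:

```python
import numpy as np

def binarize(probs, frame_dur=0.01, onset=0.8, offset=0.3):
    """Toy onset/offset binarization of frame-level speech probabilities.

    Start a segment when the probability rises above `onset`, end it when the
    probability falls below `offset`. Returns a list of (start_sec, end_sec).
    (Simplified sketch; NeMo's postprocessing also handles padding and filtering.)
    """
    segments, in_speech, start = [], False, 0.0
    for i, p in enumerate(probs):
        t = i * frame_dur
        if not in_speech and p >= onset:
            in_speech, start = True, t
        elif in_speech and p < offset:
            segments.append((start, t))
            in_speech = False
    if in_speech:
        segments.append((start, len(probs) * frame_dur))
    return segments

# Noisy made-up probabilities with one clear speech burst
probs = np.array([0.1, 0.2, 0.9, 0.95, 0.85, 0.6, 0.2, 0.1])
print(binarize(probs))   # -> [(0.02, 0.06)]
```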
###Code
from nemo.core.classes import IterableDataset
from nemo.core.neural_types import NeuralType, AudioSignal, LengthsType
import torch
from torch.utils.data import DataLoader
# simple data layer to pass audio signal
class AudioDataLayer(IterableDataset):
@property
def output_types(self):
return {
'audio_signal': NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
'a_sig_length': NeuralType(tuple('B'), LengthsType()),
}
def __init__(self, sample_rate):
super().__init__()
self._sample_rate = sample_rate
self.output = True
def __iter__(self):
return self
def __next__(self):
if not self.output:
raise StopIteration
self.output = False
return torch.as_tensor(self.signal, dtype=torch.float32), \
torch.as_tensor(self.signal_shape, dtype=torch.int64)
def set_signal(self, signal):
self.signal = signal.astype(np.float32)/32768.
self.signal_shape = self.signal.size
self.output = True
def __len__(self):
return 1
data_layer = AudioDataLayer(sample_rate=cfg.train_ds.sample_rate)
data_loader = DataLoader(data_layer, batch_size=1, collate_fn=data_layer.collate_fn)
# inference method for audio signal (single instance)
def infer_signal(model, signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
logits = model.forward(input_signal=audio_signal, input_signal_length=audio_signal_len)
return logits
# class for streaming frame-based VAD
# 1) use reset() method to reset FrameVAD's state
# 2) call transcribe(frame) to do VAD on
# contiguous signal's frames
# To simplify the flow, we use single threshold to binarize predictions.
class FrameVAD:
def __init__(self, model_definition,
threshold=0.5,
frame_len=2, frame_overlap=2.5,
offset=10):
'''
Args:
threshold: If prob of speech is larger than threshold, classify the segment to be speech.
frame_len: frame's duration, seconds
frame_overlap: duration of overlaps before and after current frame, seconds
offset: number of symbols to drop for smooth streaming
'''
self.vocab = list(model_definition['labels'])
self.vocab.append('_')
self.sr = model_definition['sample_rate']
self.threshold = threshold
self.frame_len = frame_len
self.n_frame_len = int(frame_len * self.sr)
self.frame_overlap = frame_overlap
self.n_frame_overlap = int(frame_overlap * self.sr)
timestep_duration = model_definition['AudioToMFCCPreprocessor']['window_stride']
for block in model_definition['JasperEncoder']['jasper']:
timestep_duration *= block['stride'][0] ** block['repeat']
self.buffer = np.zeros(shape=2*self.n_frame_overlap + self.n_frame_len,
dtype=np.float32)
self.offset = offset
self.reset()
def _decode(self, frame, offset=0):
assert len(frame)==self.n_frame_len
self.buffer[:-self.n_frame_len] = self.buffer[self.n_frame_len:]
self.buffer[-self.n_frame_len:] = frame
logits = infer_signal(vad_model, self.buffer).cpu().numpy()[0]
decoded = self._greedy_decoder(
self.threshold,
logits,
self.vocab
)
return decoded
@torch.no_grad()
def transcribe(self, frame=None):
if frame is None:
frame = np.zeros(shape=self.n_frame_len, dtype=np.float32)
if len(frame) < self.n_frame_len:
frame = np.pad(frame, [0, self.n_frame_len - len(frame)], 'constant')
unmerged = self._decode(frame, self.offset)
return unmerged
def reset(self):
'''
Reset frame_history and decoder's state
'''
self.buffer=np.zeros(shape=self.buffer.shape, dtype=np.float32)
self.prev_char = ''
@staticmethod
def _greedy_decoder(threshold, logits, vocab):
s = []
if logits.shape[0]:
probs = torch.softmax(torch.as_tensor(logits), dim=-1)
probas, _ = torch.max(probs, dim=-1)
probas_s = probs[1].item()
preds = 1 if probas_s >= threshold else 0
s = [preds, str(vocab[preds]), probs[0].item(), probs[1].item(), str(logits)]
return s
###Output
_____no_output_____
###Markdown
Streaming inference depends on a few factors, such as the frame length (STEP) and buffer size (WINDOW SIZE). Experiment with a few values to see their effects in the below cells.
###Code
STEP_LIST = [0.01,0.01]
WINDOW_SIZE_LIST = [0.31,0.15]
import wave
def offline_inference(wave_file, STEP = 0.025, WINDOW_SIZE = 0.5, threshold=0.5):
FRAME_LEN = STEP # infer every STEP seconds
CHANNELS = 1 # number of audio channels (expect mono signal)
RATE = 16000 # sample rate, Hz
CHUNK_SIZE = int(FRAME_LEN*RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=threshold,
frame_len=FRAME_LEN, frame_overlap = (WINDOW_SIZE-FRAME_LEN)/2,
offset=0)
wf = wave.open(wave_file, 'rb')
p = pa.PyAudio()
empty_counter = 0
preds = []
proba_b = []
proba_s = []
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
channels=CHANNELS,
rate=RATE,
output = True)
data = wf.readframes(CHUNK_SIZE)
while len(data) > 0:
data = wf.readframes(CHUNK_SIZE)
signal = np.frombuffer(data, dtype=np.int16)
result = vad.transcribe(signal)
preds.append(result[0])
proba_b.append(result[2])
proba_s.append(result[3])
if len(result):
print(result,end='\n')
empty_counter = 3
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='')
p.terminate()
vad.reset()
return preds, proba_b, proba_s
###Output
_____no_output_____
###Markdown
Here we show an example of online streaming inferenceYou can use your file or download the provided demo audio file.
###Code
demo_wave = 'VAD_demo.wav'
if not os.path.exists(demo_wave):
!wget "https://dldata-public.s3.us-east-2.amazonaws.com/VAD_demo.wav"
wave_file = demo_wave
CHANNELS = 1
RATE = 16000
audio, sample_rate = librosa.load(wave_file, sr=RATE)
dur = librosa.get_duration(audio)
print(dur)
ipd.Audio(audio, rate=sample_rate)
threshold=0.4
results = []
for STEP, WINDOW_SIZE in zip(STEP_LIST, WINDOW_SIZE_LIST, ):
print(f'====== STEP is {STEP}s, WINDOW_SIZE is {WINDOW_SIZE}s ====== ')
preds, proba_b, proba_s = offline_inference(wave_file, STEP, WINDOW_SIZE, threshold)
results.append([STEP, WINDOW_SIZE, preds, proba_b, proba_s])
###Output
_____no_output_____
###Markdown
To simplify the flow, the above prediction is based on a single threshold, `threshold=0.4`. You can play with other [thresholds](VAD-postprocessing-and-Tuning-threshold) or use postprocessing and see how they impact performance. **Note**: if you want better performance, [finetune](Finetune) on your data and use posteriors such as the [overlapped prediction](Posterior). Let's plot the predictions and the mel spectrogram.
###Code
import librosa.display
plt.figure(figsize=[20,10])
num = len(results)
for i in range(num):
len_pred = len(results[i][2])
FRAME_LEN = results[i][0]
ax1 = plt.subplot(num+1,1,i+1)
ax1.plot(np.arange(audio.size) / sample_rate, audio, 'b')
ax1.set_xlim([-0.01, int(dur)+1])
ax1.tick_params(axis='y', labelcolor= 'b')
ax1.set_ylabel('Signal')
ax1.set_ylim([-1, 1])
proba_s = results[i][4]
pred = [1 if p > threshold else 0 for p in proba_s]
ax2 = ax1.twinx()
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(pred) , 'r', label='pred')
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(proba_s) , 'g--', label='speech prob')
ax2.tick_params(axis='y', labelcolor='r')
legend = ax2.legend(loc='lower right', shadow=True)
ax1.set_ylabel('prediction')
ax2.set_title(f'step {results[i][0]}s, buffer size {results[i][1]}s')
ax2.set_ylabel('Preds and Probas')
ax = plt.subplot(num+1,1,i+2)
S = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=64, fmax=8000)
S_dB = librosa.power_to_db(S, ref=np.max)
librosa.display.specshow(S_dB, x_axis='time', y_axis='mel', sr=sample_rate, fmax=8000)
ax.set_title('Mel-frequency spectrogram')
ax.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Online streaming inference through microphone **Please note that the VAD model is not perfect for all microphone inputs, and you might need to finetune on your input and play with different parameters.**
###Code
STEP = 0.01
WINDOW_SIZE = 0.31
CHANNELS = 1
RATE = 16000
FRAME_LEN = STEP
THRESHOLD = 0.5
CHUNK_SIZE = int(STEP * RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=THRESHOLD,
frame_len=FRAME_LEN, frame_overlap=(WINDOW_SIZE - FRAME_LEN) / 2,
offset=0)
vad.reset()
p = pa.PyAudio()
print('Available audio input devices:')
input_devices = []
for i in range(p.get_device_count()):
dev = p.get_device_info_by_index(i)
if dev.get('maxInputChannels'):
input_devices.append(i)
print(i, dev.get('name'))
if len(input_devices):
dev_idx = -2
while dev_idx not in input_devices:
print('Please type input device ID:')
dev_idx = int(input())
empty_counter = 0
def callback(in_data, frame_count, time_info, status):
global empty_counter
signal = np.frombuffer(in_data, dtype=np.int16)
text = vad.transcribe(signal)
if len(text):
print(text,end='\n')
empty_counter = vad.offset
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='\n')
return (in_data, pa.paContinue)
stream = p.open(format=pa.paInt16,
channels=CHANNELS,
rate=SAMPLE_RATE,
input=True,
input_device_index=dev_idx,
stream_callback=callback,
frames_per_buffer=CHUNK_SIZE)
print('Listening...')
stream.start_stream()
# Interrupt kernel and then speak for a few more words to exit the pyaudio loop !
try:
while stream.is_active():
time.sleep(0.1)
finally:
stream.stop_stream()
stream.close()
p.terminate()
print()
print("PyAudio stopped")
else:
print('ERROR: No audio input device found.')
###Output
_____no_output_____
###Markdown
ONNX DeploymentYou can also export the model to ONNX file and deploy it to TensorRT or MS ONNX Runtime inference engines. If you don't have one installed yet, please run:
###Code
!pip install --upgrade onnxruntime # for gpu, use onnxruntime-gpu
# !mkdir -p ort
# %cd ort
# !git clone --depth 1 --branch v1.8.0 https://github.com/microsoft/onnxruntime.git .
# !./build.sh --skip_tests --config Release --build_shared_lib --parallel --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu --build_wheel
# !pip install ./build/Linux/Release/dist/onnxruntime*.whl
# %cd ..
###Output
_____no_output_____
###Markdown
Then just replace `infer_signal` implementation with this code:
###Code
import onnxruntime
vad_model.export('vad.onnx')
ort_session = onnxruntime.InferenceSession('vad.onnx')
def to_numpy(tensor):
return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
def infer_signal(signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
processed_signal, processed_signal_len = vad_model.preprocessor(
input_signal=audio_signal, length=audio_signal_len,
)
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(processed_signal), }
ologits = ort_session.run(None, ort_inputs)
alogits = np.asarray(ologits)
logits = torch.from_numpy(alogits[0])
return logits
###Output
_____no_output_____
###Markdown
Voice Activity Detection (VAD): This notebook demonstrates how to perform: 1. [offline streaming inference on audio files (offline VAD)](Offline-streaming-inference); 2. [finetuning](Finetune) and using the [posterior](Posterior); 3. [threshold tuning](Tuning-threshold); 4. [online streaming inference](Online-streaming-inference); 5. [online streaming inference from a microphone's stream](Online-streaming-inference-through-microphone). The notebook requires the PyAudio library to get a signal from an audio device. For Ubuntu, please run the following command to install it: ```sudo apt-get install -y portaudio19-dev && pip install pyaudio``` This notebook requires the `torchaudio` library to be installed for MarbleNet. Please follow the instructions available at the [torchaudio GitHub page](https://github.com/pytorch/audio#installation) to install the appropriate version of torchaudio. If you would like to install the latest version, please run the following command to install it: ```conda install -c pytorch torchaudio```
###Code
import numpy as np
import pyaudio as pa
import os, time
import librosa
import IPython.display as ipd
import matplotlib.pyplot as plt
%matplotlib inline
import nemo
import nemo.collections.asr as nemo_asr
# sample rate, Hz
SAMPLE_RATE = 16000
###Output
_____no_output_____
###Markdown
Restore the model from NGC
###Code
vad_model = nemo_asr.models.EncDecClassificationModel.from_pretrained('vad_marblenet')
###Output
_____no_output_____
###Markdown
Observing the config of the model
###Code
from omegaconf import OmegaConf
import copy
# Preserve a copy of the full config
cfg = copy.deepcopy(vad_model._cfg)
print(OmegaConf.to_yaml(cfg))
###Output
_____no_output_____
###Markdown
Setup preprocessor with these settings
###Code
vad_model.preprocessor = vad_model.from_config_dict(cfg.preprocessor)
# Set model to inference mode
vad_model.eval();
vad_model = vad_model.to(vad_model.device)
###Output
_____no_output_____
###Markdown
We demonstrate two methods for streaming inference: 1. [offline streaming inference (script)](Offline-streaming-inference) 2. [online streaming inference (step-by-step)](Online-streaming-inference) Offline streaming inference: VAD relies on shorter fixed-length segments for prediction. During inference, we generate frame-level predictions with two approaches: 1. shift a window of length `time_length` (e.g. 0.63s) by `shift_length` (e.g. 10ms) to generate the frames and use the prediction of the window to represent the label for the frame; use ```python /examples/asr/vad_infer.py``` This script will automatically split long audio files to avoid CUDA memory issues and performs **streaming** inside `AudioLabelDataset`. Posterior: 2. generate predictions with overlapping input segments, then apply a smoothing filter to decide the label for a frame spanned by multiple segments. Take the frame-level predictions from the step above and use ```python/scripts/voice_activity_detection/vad_overlap_posterior.py```Have a look at the [MarbleNet paper](https://arxiv.org/pdf/2010.13886.pdf) for choices about segment length, smoothing filter, etc., and play with those parameters on your data. You can also find code for converting frame-level predictions into speech/no-speech segments (in start and end time format) in `vad_overlap_posterior.py`. Finetune: You might need to finetune on your data for better performance. For finetuning/transfer learning, please refer to the [**Transfer learning** part of the ASR tutorial](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/asr/ASR_with_NeMo.ipynb) Tuning threshold: You can do threshold tuning on frame-level predictions. We also provide a script ```python/scripts/voice_activity_detection/vad_tune_threshold.py```to help you find the best threshold if you have a ground truth label file in RTTM format; a toy sketch of the threshold sweep appears just below, before the streaming setup code. Online streaming inference: Setting up data for Streaming Inference
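Before moving on to the streaming setup: the idea behind threshold tuning can be sketched in a few lines (toy frame-level probabilities and labels, not the RTTM-based `vad_tune_threshold.py` script): sweep candidate thresholds and keep the one that scores best against the ground truth.

```python
import numpy as np

# Toy frame-level speech probabilities and ground-truth labels (1 = speech)
probs  = np.array([0.05, 0.2, 0.7, 0.9, 0.8, 0.4, 0.1, 0.6])
labels = np.array([0,    0,   1,   1,   1,   1,   0,   0  ])

best_thr, best_acc = None, -1.0
for thr in np.arange(0.1, 1.0, 0.1):
    preds = (probs >= thr).astype(int)
    acc = (preds == labels).mean()   # frame accuracy; the NeMo script uses RTTM-based metrics
    if acc > best_acc:
        best_thr, best_acc = thr, acc

print(best_thr, best_acc)
```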
###Code
from nemo.core.classes import IterableDataset
from nemo.core.neural_types import NeuralType, AudioSignal, LengthsType
import torch
from torch.utils.data import DataLoader
# simple data layer to pass audio signal
class AudioDataLayer(IterableDataset):
@property
def output_types(self):
return {
'audio_signal': NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
'a_sig_length': NeuralType(tuple('B'), LengthsType()),
}
def __init__(self, sample_rate):
super().__init__()
self._sample_rate = sample_rate
self.output = True
def __iter__(self):
return self
def __next__(self):
if not self.output:
raise StopIteration
self.output = False
return torch.as_tensor(self.signal, dtype=torch.float32), \
torch.as_tensor(self.signal_shape, dtype=torch.int64)
def set_signal(self, signal):
self.signal = signal.astype(np.float32)/32768.
self.signal_shape = self.signal.size
self.output = True
def __len__(self):
return 1
data_layer = AudioDataLayer(sample_rate=cfg.train_ds.sample_rate)
data_loader = DataLoader(data_layer, batch_size=1, collate_fn=data_layer.collate_fn)
# inference method for audio signal (single instance)
def infer_signal(model, signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
logits = model.forward(input_signal=audio_signal, input_signal_length=audio_signal_len)
return logits
# class for streaming frame-based VAD
# 1) use reset() method to reset FrameVAD's state
# 2) call transcribe(frame) to do VAD on
# contiguous signal's frames
class FrameVAD:
def __init__(self, model_definition,
threshold=0.5,
frame_len=2, frame_overlap=2.5,
offset=10):
'''
Args:
threshold: If prob of speech is larger than threshold, classify the segment to be speech.
frame_len: frame's duration, seconds
frame_overlap: duration of overlaps before and after current frame, seconds
offset: number of symbols to drop for smooth streaming
'''
self.vocab = list(model_definition['labels'])
self.vocab.append('_')
self.sr = model_definition['sample_rate']
self.threshold = threshold
self.frame_len = frame_len
self.n_frame_len = int(frame_len * self.sr)
self.frame_overlap = frame_overlap
self.n_frame_overlap = int(frame_overlap * self.sr)
timestep_duration = model_definition['AudioToMFCCPreprocessor']['window_stride']
for block in model_definition['JasperEncoder']['jasper']:
timestep_duration *= block['stride'][0] ** block['repeat']
self.buffer = np.zeros(shape=2*self.n_frame_overlap + self.n_frame_len,
dtype=np.float32)
self.offset = offset
self.reset()
def _decode(self, frame, offset=0):
assert len(frame)==self.n_frame_len
self.buffer[:-self.n_frame_len] = self.buffer[self.n_frame_len:]
self.buffer[-self.n_frame_len:] = frame
logits = infer_signal(vad_model, self.buffer).cpu().numpy()[0]
decoded = self._greedy_decoder(
self.threshold,
logits,
self.vocab
)
return decoded
@torch.no_grad()
def transcribe(self, frame=None):
if frame is None:
frame = np.zeros(shape=self.n_frame_len, dtype=np.float32)
if len(frame) < self.n_frame_len:
frame = np.pad(frame, [0, self.n_frame_len - len(frame)], 'constant')
unmerged = self._decode(frame, self.offset)
return unmerged
def reset(self):
'''
Reset frame_history and decoder's state
'''
self.buffer=np.zeros(shape=self.buffer.shape, dtype=np.float32)
self.prev_char = ''
@staticmethod
def _greedy_decoder(threshold, logits, vocab):
s = []
if logits.shape[0]:
probs = torch.softmax(torch.as_tensor(logits), dim=-1)
probas, _ = torch.max(probs, dim=-1)
probas_s = probs[1].item()
preds = 1 if probas_s >= threshold else 0
s = [preds, str(vocab[preds]), probs[0].item(), probs[1].item(), str(logits)]
return s
###Output
_____no_output_____
###Markdown
Streaming inference depends on a few factors, such as the frame length (STEP) and buffer size (WINDOW SIZE). Experiment with a few values to see their effects in the below cells.
###Code
STEP_LIST = [0.01,0.01]
WINDOW_SIZE_LIST = [0.31,0.15]
import wave
def offline_inference(wave_file, STEP = 0.025, WINDOW_SIZE = 0.5, threshold=0.5):
FRAME_LEN = STEP # infer every STEP seconds
CHANNELS = 1 # number of audio channels (expect mono signal)
RATE = 16000 # sample rate, Hz
CHUNK_SIZE = int(FRAME_LEN*RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=threshold,
frame_len=FRAME_LEN, frame_overlap = (WINDOW_SIZE-FRAME_LEN)/2,
offset=0)
wf = wave.open(wave_file, 'rb')
p = pa.PyAudio()
empty_counter = 0
preds = []
proba_b = []
proba_s = []
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
channels=CHANNELS,
rate=RATE,
output = True)
data = wf.readframes(CHUNK_SIZE)
while len(data) > 0:
data = wf.readframes(CHUNK_SIZE)
signal = np.frombuffer(data, dtype=np.int16)
result = vad.transcribe(signal)
preds.append(result[0])
proba_b.append(result[2])
proba_s.append(result[3])
if len(result):
print(result,end='\n')
empty_counter = 3
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='')
p.terminate()
vad.reset()
return preds, proba_b, proba_s
###Output
_____no_output_____
###Markdown
Here we show an example of online streaming inferenceYou can use your file or download the provided demo audio file.
###Code
demo_wave = 'VAD_demo.wav'
if not os.path.exists(demo_wave):
!wget "https://dldata-public.s3.us-east-2.amazonaws.com/VAD_demo.wav"
wave_file = demo_wave
CHANNELS = 1
RATE = 16000
audio, sample_rate = librosa.load(wave_file, sr=RATE)
dur = librosa.get_duration(audio)
print(dur)
ipd.Audio(audio, rate=sample_rate)
threshold=0.4
results = []
for STEP, WINDOW_SIZE in zip(STEP_LIST, WINDOW_SIZE_LIST, ):
print(f'====== STEP is {STEP}s, WINDOW_SIZE is {WINDOW_SIZE}s ====== ')
preds, proba_b, proba_s = offline_inference(wave_file, STEP, WINDOW_SIZE, threshold)
results.append([STEP, WINDOW_SIZE, preds, proba_b, proba_s])
###Output
_____no_output_____
###Markdown
The above prediction is based on `threshold=0.4`. You can play with other [thresholds](Tuning-threshold) and see how they impact performance. **Note**: if you want better performance, [finetune](Finetune) on your data and use posteriors such as the [overlapped prediction](Posterior). Let's plot the predictions and the mel spectrogram.
###Code
import librosa.display
plt.figure(figsize=[20,10])
num = len(results)
for i in range(num):
len_pred = len(results[i][2])
FRAME_LEN = results[i][0]
ax1 = plt.subplot(num+1,1,i+1)
ax1.plot(np.arange(audio.size) / sample_rate, audio, 'b')
ax1.set_xlim([-0.01, int(dur)+1])
ax1.tick_params(axis='y', labelcolor= 'b')
ax1.set_ylabel('Signal')
ax1.set_ylim([-1, 1])
proba_s = results[i][4]
pred = [1 if p > threshold else 0 for p in proba_s]
ax2 = ax1.twinx()
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(pred) , 'r', label='pred')
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(proba_s) , 'g--', label='speech prob')
ax2.tick_params(axis='y', labelcolor='r')
legend = ax2.legend(loc='lower right', shadow=True)
ax1.set_ylabel('prediction')
ax2.set_title(f'step {results[i][0]}s, buffer size {results[i][1]}s')
ax2.set_ylabel('Preds and Probas')
ax = plt.subplot(num+1,1,i+2)
S = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=64, fmax=8000)
S_dB = librosa.power_to_db(S, ref=np.max)
librosa.display.specshow(S_dB, x_axis='time', y_axis='mel', sr=sample_rate, fmax=8000)
ax.set_title('Mel-frequency spectrogram')
ax.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Online streaming inference through microphone **Please note that the VAD model is not perfect for all microphone inputs, and you might need to finetune on your input and play with different parameters.**
###Code
STEP = 0.01
WINDOW_SIZE = 0.31
CHANNELS = 1
RATE = 16000
FRAME_LEN = STEP
THRESHOLD = 0.5
CHUNK_SIZE = int(STEP * RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=THRESHOLD,
frame_len=FRAME_LEN, frame_overlap=(WINDOW_SIZE - FRAME_LEN) / 2,
offset=0)
vad.reset()
p = pa.PyAudio()
print('Available audio input devices:')
input_devices = []
for i in range(p.get_device_count()):
dev = p.get_device_info_by_index(i)
if dev.get('maxInputChannels'):
input_devices.append(i)
print(i, dev.get('name'))
if len(input_devices):
dev_idx = -2
while dev_idx not in input_devices:
print('Please type input device ID:')
dev_idx = int(input())
empty_counter = 0
def callback(in_data, frame_count, time_info, status):
global empty_counter
signal = np.frombuffer(in_data, dtype=np.int16)
text = vad.transcribe(signal)
if len(text):
print(text,end='\n')
empty_counter = vad.offset
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='\n')
return (in_data, pa.paContinue)
stream = p.open(format=pa.paInt16,
channels=CHANNELS,
rate=SAMPLE_RATE,
input=True,
input_device_index=dev_idx,
stream_callback=callback,
frames_per_buffer=CHUNK_SIZE)
print('Listening...')
stream.start_stream()
# Interrupt kernel and then speak for a few more words to exit the pyaudio loop !
try:
while stream.is_active():
time.sleep(0.1)
finally:
stream.stop_stream()
stream.close()
p.terminate()
print()
print("PyAudio stopped")
else:
print('ERROR: No audio input device found.')
###Output
_____no_output_____
###Markdown
ONNX DeploymentYou can also export the model to ONNX file and deploy it to TensorRT or MS ONNX Runtime inference engines. If you don't have one installed yet, please run:
###Code
!mkdir -p ort
%cd ort
!git clone --depth 1 --branch v1.8.0 https://github.com/microsoft/onnxruntime.git .
!./build.sh --skip_tests --config Release --build_shared_lib --parallel --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu --build_wheel
!pip install ./build/Linux/Release/dist/onnxruntime*.whl
%cd ..
###Output
_____no_output_____
###Markdown
Then just replace `infer_signal` implementation with this code:
###Code
import onnxruntime
vad_model.export('vad.onnx')
ort_session = onnxruntime.InferenceSession('vad.onnx')
def to_numpy(tensor):
return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
def infer_signal(signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
processed_signal, processed_signal_len = vad_model.preprocessor(
input_signal=audio_signal, length=audio_signal_len,
)
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(processed_signal), }
ologits = ort_session.run(None, ort_inputs)
alogits = np.asarray(ologits)
logits = torch.from_numpy(alogits[0])
return logits
###Output
_____no_output_____
###Markdown
Voice Activity Detection (VAD): This notebook demonstrates how to perform: 1. [offline streaming inference on audio files (offline VAD)](Offline-streaming-inference); 2. [finetuning](Finetune) and using the [posterior](Posterior); 3. [VAD postprocessing and threshold tuning](VAD-postprocessing-and-Tuning-threshold); 4. [online streaming inference](Online-streaming-inference); 5. [online streaming inference from a microphone's stream](Online-streaming-inference-through-microphone). The notebook requires the PyAudio library to get a signal from an audio device. For Ubuntu, please run the following command to install it: ```sudo apt-get install -y portaudio19-dev && pip install pyaudio``` This notebook requires the `torchaudio` library to be installed for MarbleNet. Please follow the instructions available at the [torchaudio GitHub page](https://github.com/pytorch/audio#installation) to install the appropriate version of torchaudio. If you would like to install the latest version, please run the following command to install it: ```conda install -c pytorch torchaudio```
###Code
import numpy as np
import pyaudio as pa
import os, time
import librosa
import IPython.display as ipd
import matplotlib.pyplot as plt
%matplotlib inline
import nemo
import nemo.collections.asr as nemo_asr
# sample rate, Hz
SAMPLE_RATE = 16000
###Output
_____no_output_____
###Markdown
Restore the model from NGC
###Code
vad_model = nemo_asr.models.EncDecClassificationModel.from_pretrained('vad_marblenet')
###Output
_____no_output_____
###Markdown
Observing the config of the model
###Code
from omegaconf import OmegaConf
import copy
# Preserve a copy of the full config
cfg = copy.deepcopy(vad_model._cfg)
print(OmegaConf.to_yaml(cfg))
###Output
_____no_output_____
###Markdown
Setup preprocessor with these settings
###Code
vad_model.preprocessor = vad_model.from_config_dict(cfg.preprocessor)
# Set model to inference mode
vad_model.eval();
vad_model = vad_model.to(vad_model.device)
###Output
_____no_output_____
###Markdown
We demonstrate two methods for streaming inference: 1. [offline streaming inference (script)](Offline-streaming-inference) 2. [online streaming inference (step-by-step)](Online-streaming-inference) Offline streaming inference: VAD relies on shorter fixed-length segments for prediction. You can find all the necessary steps for inference in ```python Script: /examples/asr/speech_classification/vad_infer.py Config: /examples/asr/conf/VAD/vad_inference_postprocessing.yaml```During inference, we generate frame-level predictions with two approaches: 1. shift a window of length `window_length_in_sec` (e.g. 0.63s) by `shift_length_in_sec` (e.g. 10ms) to generate the frames and use the prediction of the window to represent the label for the frame; use ```python /examples/asr/speech_classification/vad_infer.py``` This script will automatically split long audio files to avoid CUDA memory issues and performs **streaming** inside `AudioLabelDataset`. Posterior: 2. generate predictions with overlapping input segments, then apply a smoothing filter to decide the label for a frame spanned by multiple segments. Perform this step alongside the step above with the flag **gen_overlap_seq=True**, or use```python/scripts/voice_activity_detection/vad_overlap_posterior.py```if you already have frame-level predictions (a toy sketch of this overlap-and-smooth idea appears just below, before the streaming setup code). Have a look at the [MarbleNet paper](https://arxiv.org/pdf/2010.13886.pdf) for choices about segment length, smoothing filter, etc., and play with those parameters on your data. You can also find code for converting frame-level predictions into speech/no-speech segments (in start and end time format) in `vad_overlap_posterior.py`, or use the flag **gen_seg_table=True** alongside `vad_infer.py`. Finetune: You might need to finetune on your data for better performance. For finetuning/transfer learning, please refer to the [**Transfer learning** part of the ASR tutorial](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/asr/ASR_with_NeMo.ipynb) VAD postprocessing and Tuning threshold: We can use a single **threshold** (achieved by onset=offset=0.5) to binarize predictions or use typical VAD postprocessing, including Binarization: 1. **onset** and **offset** thresholds for detecting the beginning and end of a speech segment; 2. padding durations before (**pad_onset**) and after (**pad_offset**) each speech segment. Filtering: 1. threshold for short speech segment deletion (**min_duration_on**); 2. threshold for short silence deletion (**min_duration_off**); 3. whether to perform short speech segment deletion first (**filter_speech_first**). Of course, you can do threshold tuning on frame-level predictions. We also provide a script ```python/scripts/voice_activity_detection/vad_tune_threshold.py```to help you find the best thresholds if you have a ground truth label file in RTTM format. Online streaming inference: Setting up data for Streaming Inference
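Before moving on to the streaming setup: as a toy illustration of the overlap idea above (our own simplified sketch, not the `gen_overlap_seq` implementation; the `overlap_posterior` helper is hypothetical), each frame is covered by several shifted windows, and the frame's score is the mean of the window probabilities that span it:

```python
import numpy as np

def overlap_posterior(window_probs, window_len_frames, shift_frames=1):
    """Toy smoothing: average the probabilities of all windows covering each frame.

    window_probs[k] is the speech probability of the window starting at frame
    k * shift_frames and spanning `window_len_frames` frames.
    """
    n_frames = (len(window_probs) - 1) * shift_frames + window_len_frames
    acc = np.zeros(n_frames)
    cnt = np.zeros(n_frames)
    for k, p in enumerate(window_probs):
        start = k * shift_frames
        acc[start:start + window_len_frames] += p
        cnt[start:start + window_len_frames] += 1
    return acc / cnt

# Windows of 4 frames shifted by 1 frame -> smoothed frame-level scores
print(overlap_posterior([0.1, 0.8, 0.9, 0.7, 0.2], window_len_frames=4))
```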
###Code
from nemo.core.classes import IterableDataset
from nemo.core.neural_types import NeuralType, AudioSignal, LengthsType
import torch
from torch.utils.data import DataLoader
# simple data layer to pass audio signal
class AudioDataLayer(IterableDataset):
@property
def output_types(self):
return {
'audio_signal': NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
'a_sig_length': NeuralType(tuple('B'), LengthsType()),
}
def __init__(self, sample_rate):
super().__init__()
self._sample_rate = sample_rate
self.output = True
def __iter__(self):
return self
def __next__(self):
if not self.output:
raise StopIteration
self.output = False
return torch.as_tensor(self.signal, dtype=torch.float32), \
torch.as_tensor(self.signal_shape, dtype=torch.int64)
def set_signal(self, signal):
self.signal = signal.astype(np.float32)/32768.
self.signal_shape = self.signal.size
self.output = True
def __len__(self):
return 1
data_layer = AudioDataLayer(sample_rate=cfg.train_ds.sample_rate)
data_loader = DataLoader(data_layer, batch_size=1, collate_fn=data_layer.collate_fn)
# inference method for audio signal (single instance)
def infer_signal(model, signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
logits = model.forward(input_signal=audio_signal, input_signal_length=audio_signal_len)
return logits
# class for streaming frame-based VAD
# 1) use reset() method to reset FrameVAD's state
# 2) call transcribe(frame) to do VAD on
# contiguous signal's frames
# To simplify the flow, we use single threshold to binarize predictions.
class FrameVAD:
def __init__(self, model_definition,
threshold=0.5,
frame_len=2, frame_overlap=2.5,
offset=10):
'''
Args:
threshold: If prob of speech is larger than threshold, classify the segment to be speech.
frame_len: frame's duration, seconds
frame_overlap: duration of overlaps before and after current frame, seconds
offset: number of symbols to drop for smooth streaming
'''
self.vocab = list(model_definition['labels'])
self.vocab.append('_')
self.sr = model_definition['sample_rate']
self.threshold = threshold
self.frame_len = frame_len
self.n_frame_len = int(frame_len * self.sr)
self.frame_overlap = frame_overlap
self.n_frame_overlap = int(frame_overlap * self.sr)
timestep_duration = model_definition['AudioToMFCCPreprocessor']['window_stride']
for block in model_definition['JasperEncoder']['jasper']:
timestep_duration *= block['stride'][0] ** block['repeat']
self.buffer = np.zeros(shape=2*self.n_frame_overlap + self.n_frame_len,
dtype=np.float32)
self.offset = offset
self.reset()
def _decode(self, frame, offset=0):
assert len(frame)==self.n_frame_len
self.buffer[:-self.n_frame_len] = self.buffer[self.n_frame_len:]
self.buffer[-self.n_frame_len:] = frame
logits = infer_signal(vad_model, self.buffer).cpu().numpy()[0]
decoded = self._greedy_decoder(
self.threshold,
logits,
self.vocab
)
return decoded
@torch.no_grad()
def transcribe(self, frame=None):
if frame is None:
frame = np.zeros(shape=self.n_frame_len, dtype=np.float32)
if len(frame) < self.n_frame_len:
frame = np.pad(frame, [0, self.n_frame_len - len(frame)], 'constant')
unmerged = self._decode(frame, self.offset)
return unmerged
def reset(self):
'''
Reset frame_history and decoder's state
'''
self.buffer=np.zeros(shape=self.buffer.shape, dtype=np.float32)
self.prev_char = ''
@staticmethod
def _greedy_decoder(threshold, logits, vocab):
s = []
if logits.shape[0]:
probs = torch.softmax(torch.as_tensor(logits), dim=-1)
probas, _ = torch.max(probs, dim=-1)
probas_s = probs[1].item()
preds = 1 if probas_s >= threshold else 0
s = [preds, str(vocab[preds]), probs[0].item(), probs[1].item(), str(logits)]
return s
###Output
_____no_output_____
###Markdown
Streaming inference depends on a few factors, such as the frame length (STEP) and buffer size (WINDOW SIZE). Experiment with a few values to see their effects in the below cells.
###Code
STEP_LIST = [0.01,0.01]
WINDOW_SIZE_LIST = [0.31,0.15]
import wave
def offline_inference(wave_file, STEP = 0.025, WINDOW_SIZE = 0.5, threshold=0.5):
FRAME_LEN = STEP # infer every STEP seconds
CHANNELS = 1 # number of audio channels (expect mono signal)
RATE = 16000 # sample rate, Hz
CHUNK_SIZE = int(FRAME_LEN*RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=threshold,
frame_len=FRAME_LEN, frame_overlap = (WINDOW_SIZE-FRAME_LEN)/2,
offset=0)
wf = wave.open(wave_file, 'rb')
p = pa.PyAudio()
empty_counter = 0
preds = []
proba_b = []
proba_s = []
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
channels=CHANNELS,
rate=RATE,
output = True)
data = wf.readframes(CHUNK_SIZE)
while len(data) > 0:
data = wf.readframes(CHUNK_SIZE)
signal = np.frombuffer(data, dtype=np.int16)
result = vad.transcribe(signal)
preds.append(result[0])
proba_b.append(result[2])
proba_s.append(result[3])
if len(result):
print(result,end='\n')
empty_counter = 3
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='')
p.terminate()
vad.reset()
return preds, proba_b, proba_s
###Output
_____no_output_____
###Markdown
Here we show an example of online streaming inferenceYou can use your file or download the provided demo audio file.
###Code
demo_wave = 'VAD_demo.wav'
if not os.path.exists(demo_wave):
!wget "https://dldata-public.s3.us-east-2.amazonaws.com/VAD_demo.wav"
wave_file = demo_wave
CHANNELS = 1
RATE = 16000
audio, sample_rate = librosa.load(wave_file, sr=RATE)
dur = librosa.get_duration(audio)
print(dur)
ipd.Audio(audio, rate=sample_rate)
threshold=0.4
results = []
for STEP, WINDOW_SIZE in zip(STEP_LIST, WINDOW_SIZE_LIST, ):
print(f'====== STEP is {STEP}s, WINDOW_SIZE is {WINDOW_SIZE}s ====== ')
preds, proba_b, proba_s = offline_inference(wave_file, STEP, WINDOW_SIZE, threshold)
results.append([STEP, WINDOW_SIZE, preds, proba_b, proba_s])
###Output
_____no_output_____
###Markdown
To simplify the flow, the above prediction is based on a single threshold, `threshold=0.4`. You can play with other [thresholds](VAD-postprocessing-and-Tuning-threshold) or use postprocessing and see how they impact performance. **Note**: if you want better performance, [finetune](Finetune) on your data and use posteriors such as the [overlapped prediction](Posterior). Let's plot the predictions and the mel spectrogram.
###Code
import librosa.display
plt.figure(figsize=[20,10])
num = len(results)
for i in range(num):
len_pred = len(results[i][2])
FRAME_LEN = results[i][0]
ax1 = plt.subplot(num+1,1,i+1)
ax1.plot(np.arange(audio.size) / sample_rate, audio, 'b')
ax1.set_xlim([-0.01, int(dur)+1])
ax1.tick_params(axis='y', labelcolor= 'b')
ax1.set_ylabel('Signal')
ax1.set_ylim([-1, 1])
proba_s = results[i][4]
pred = [1 if p > threshold else 0 for p in proba_s]
ax2 = ax1.twinx()
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(pred) , 'r', label='pred')
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(proba_s) , 'g--', label='speech prob')
ax2.tick_params(axis='y', labelcolor='r')
legend = ax2.legend(loc='lower right', shadow=True)
ax1.set_ylabel('prediction')
ax2.set_title(f'step {results[i][0]}s, buffer size {results[i][1]}s')
ax2.set_ylabel('Preds and Probas')
ax = plt.subplot(num+1,1,i+2)
S = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=64, fmax=8000)
S_dB = librosa.power_to_db(S, ref=np.max)
librosa.display.specshow(S_dB, x_axis='time', y_axis='mel', sr=sample_rate, fmax=8000)
ax.set_title('Mel-frequency spectrogram')
ax.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Online streaming inference through microphone **Please note that the VAD model is not perfect for all microphone inputs, and you might need to finetune on your input and play with different parameters.**
###Code
STEP = 0.01
WINDOW_SIZE = 0.31
CHANNELS = 1
RATE = 16000
FRAME_LEN = STEP
THRESHOLD = 0.5
CHUNK_SIZE = int(STEP * RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=THRESHOLD,
frame_len=FRAME_LEN, frame_overlap=(WINDOW_SIZE - FRAME_LEN) / 2,
offset=0)
vad.reset()
p = pa.PyAudio()
print('Available audio input devices:')
input_devices = []
for i in range(p.get_device_count()):
dev = p.get_device_info_by_index(i)
if dev.get('maxInputChannels'):
input_devices.append(i)
print(i, dev.get('name'))
if len(input_devices):
dev_idx = -2
while dev_idx not in input_devices:
print('Please type input device ID:')
dev_idx = int(input())
empty_counter = 0
def callback(in_data, frame_count, time_info, status):
global empty_counter
signal = np.frombuffer(in_data, dtype=np.int16)
text = vad.transcribe(signal)
if len(text):
print(text,end='\n')
empty_counter = vad.offset
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='\n')
return (in_data, pa.paContinue)
stream = p.open(format=pa.paInt16,
channels=CHANNELS,
rate=SAMPLE_RATE,
input=True,
input_device_index=dev_idx,
stream_callback=callback,
frames_per_buffer=CHUNK_SIZE)
print('Listening...')
stream.start_stream()
# Interrupt kernel and then speak for a few more words to exit the pyaudio loop !
try:
while stream.is_active():
time.sleep(0.1)
finally:
stream.stop_stream()
stream.close()
p.terminate()
print()
print("PyAudio stopped")
else:
print('ERROR: No audio input device found.')
###Output
_____no_output_____
###Markdown
ONNX DeploymentYou can also export the model to ONNX file and deploy it to TensorRT or MS ONNX Runtime inference engines. If you don't have one installed yet, please run:
###Code
!pip install --upgrade onnxruntime # for gpu, use onnxruntime-gpu
# !mkdir -p ort
# %cd ort
# !git clone --depth 1 --branch v1.8.0 https://github.com/microsoft/onnxruntime.git .
# !./build.sh --skip_tests --config Release --build_shared_lib --parallel --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu --build_wheel
# !pip install ./build/Linux/Release/dist/onnxruntime*.whl
# %cd ..
###Output
_____no_output_____
###Markdown
Then just replace `infer_signal` implementation with this code:
###Code
import onnxruntime
vad_model.export('vad.onnx')
ort_session = onnxruntime.InferenceSession('vad.onnx')
def to_numpy(tensor):
return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
def infer_signal(signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
processed_signal, processed_signal_len = vad_model.preprocessor(
input_signal=audio_signal, length=audio_signal_len,
)
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(processed_signal), }
ologits = ort_session.run(None, ort_inputs)
alogits = np.asarray(ologits)
logits = torch.from_numpy(alogits[0])
return logits
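# NOTE (added): the streaming FrameVAD class defined earlier calls
# `infer_signal(vad_model, self.buffer)` with two arguments, while this ONNX-backed
# replacement takes only the signal. To reuse FrameVAD unchanged you would either adjust
# that call or wrap this function to restore the original signature, e.g. (a sketch,
# the wrapper name is illustrative and not part of the tutorial):
#
#     onnx_infer_signal = infer_signal
#     def infer_signal(model, signal):
#         return onnx_infer_signal(signal)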
###Output
_____no_output_____
###Markdown
Voice Activity Detection (VAD). This notebook demonstrates how to perform: 1. [offline streaming inference on audio files (offline VAD)](Offline-streaming-inference); 2. [finetuning](Finetune) and using the [posterior](Posterior); 3. [VAD postprocessing and threshold tuning](VAD-postprocessing-and-Tuning-threshold); 4. [online streaming inference](Online-streaming-inference); 5. [online streaming inference from a microphone's stream](Online-streaming-inference-through-microphone). The notebook requires the PyAudio library to get a signal from an audio device. For Ubuntu, please run the following commands to install it: ```sudo apt-get install -y portaudio19-dev``` followed by ```pip install pyaudio``` This notebook requires the `torchaudio` library to be installed for MarbleNet. Please follow the instructions available at the [torchaudio Github page](https://github.com/pytorch/audioinstallation) to install the appropriate version of torchaudio. If you would like to install the latest version, please run the following command to install it: ```conda install -c pytorch torchaudio```
###Code
import numpy as np
import pyaudio as pa
import os, time
import librosa
import IPython.display as ipd
import matplotlib.pyplot as plt
%matplotlib inline
import nemo
import nemo.collections.asr as nemo_asr
# sample rate, Hz
SAMPLE_RATE = 16000
###Output
_____no_output_____
###Markdown
Restore the model from NGC
###Code
vad_model = nemo_asr.models.EncDecClassificationModel.from_pretrained('vad_marblenet')
###Output
_____no_output_____
###Markdown
Observing the config of the model
###Code
from omegaconf import OmegaConf
import copy
# Preserve a copy of the full config
cfg = copy.deepcopy(vad_model._cfg)
print(OmegaConf.to_yaml(cfg))
###Output
_____no_output_____
###Markdown
Setup preprocessor with these settings
###Code
vad_model.preprocessor = vad_model.from_config_dict(cfg.preprocessor)
# Set model to inference mode
vad_model.eval();
vad_model = vad_model.to(vad_model.device)
###Output
_____no_output_____
###Markdown
We demonstrate two methods for streaming inference: 1. [offline streaming inference (script)](Offline-streaming-inference) 2. [online streaming inference (step-by-step)](Online-streaming-inference) Offline streaming inference: VAD relies on shorter fixed-length segments for prediction. During inference, we generate frame-level predictions by two approaches: 1. shift the window of length `time_length` (e.g. 0.63s) by `shift_length` (e.g. 10ms) to generate the frame, and use the prediction of the window to represent the label for the frame; use ```python /examples/asr/vad_infer.py``` This script will automatically split long audio files to avoid CUDA memory issues and performs **streaming** inside `AudioLabelDataset`. Posterior: 2. generate predictions with overlapping input segments, then apply a smoothing filter to decide the label for a frame spanned by multiple segments. Get the frame-level predictions from the step above and use ```python /scripts/voice_activity_detection/vad_overlap_posterior.py``` Have a look at the [MarbleNet paper](https://arxiv.org/pdf/2010.13886.pdf) for choices about segment length, smoothing filter, etc., and play with those parameters on your data. You can also find code for converting frame-level predictions to speech/no-speech segments in start/end time format in `vad_overlap_posterior.py`. Finetune: You might need to finetune on your data for better performance. For finetuning/transfer learning, please refer to the [**Transfer learning** part of the ASR tutorial](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/asr/ASR_with_NeMo.ipynb) VAD postprocessing and Tuning threshold: We can use a single **threshold** to binarize predictions or use typical VAD postprocessing, including Binarization: 1. **onset** and **offset** thresholds for detecting the beginning and end of speech; 2. padding durations before (**pad_onset**) and after (**pad_offset**) each speech segment. Filtering: 1. threshold for short speech segment deletion (**min_duration_on**); 2. threshold for small silence deletion (**min_duration_off**); 3. whether to perform short speech segment deletion first (**filter_speech_first**). A simplified sketch of this binarization and filtering is shown below. Of course you can do threshold tuning on frame-level predictions. We also provide a script ```python /scripts/voice_activity_detection/vad_tune_threshold.py``` to help you find the best thresholds if you have ground truth label files in RTTM format. Online streaming inference Setting up data for Streaming Inference
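Before setting up the data layer for streaming, here is a minimal, simplified sketch of the binarization and filtering described in the postprocessing section above. This is an illustration only (not NeMo's implementation); `frame_probs`, the frame shift, and the threshold values are made-up examples.

```python
import numpy as np

def binarize(frame_probs, frame_shift, onset=0.5, offset=0.3, pad_onset=0.0, pad_offset=0.0):
    """Turn per-frame speech probabilities into (start, end) segments in seconds."""
    segments, start, in_speech = [], 0.0, False
    for i, p in enumerate(frame_probs):
        t = round(i * frame_shift, 3)
        if not in_speech and p >= onset:          # speech onset detected
            start, in_speech = t, True
        elif in_speech and p < offset:            # speech offset detected
            segments.append((max(start - pad_onset, 0.0), t + pad_offset))
            in_speech = False
    if in_speech:                                 # close a segment that runs to the end
        segments.append((max(start - pad_onset, 0.0),
                         round(len(frame_probs) * frame_shift, 3) + pad_offset))
    return segments

def filter_segments(segments, min_duration_on=0.2, min_duration_off=0.2):
    """Merge segments separated by very short silences, then drop very short segments."""
    merged = []
    for start, end in segments:
        if merged and start - merged[-1][1] < min_duration_off:   # silence too short: merge
            merged[-1] = (merged[-1][0], end)
        else:
            merged.append((start, end))
    return [s for s in merged if s[1] - s[0] >= min_duration_on]  # drop too-short speech

frame_probs = np.array([0.1, 0.2, 0.7, 0.9, 0.8, 0.4, 0.2, 0.6, 0.9, 0.1])  # toy probabilities
segs = binarize(frame_probs, frame_shift=0.01)
print(segs)                                                       # roughly [(0.02, 0.06), (0.07, 0.09)]
print(filter_segments(segs, min_duration_on=0.03, min_duration_off=0.02))  # gap merged into one segment
```

The sketch ignores **filter_speech_first** and the other details handled by the provided scripts.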
###Code
from nemo.core.classes import IterableDataset
from nemo.core.neural_types import NeuralType, AudioSignal, LengthsType
import torch
from torch.utils.data import DataLoader
# simple data layer to pass audio signal
class AudioDataLayer(IterableDataset):
@property
def output_types(self):
return {
'audio_signal': NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
'a_sig_length': NeuralType(tuple('B'), LengthsType()),
}
def __init__(self, sample_rate):
super().__init__()
self._sample_rate = sample_rate
self.output = True
def __iter__(self):
return self
def __next__(self):
if not self.output:
raise StopIteration
self.output = False
return torch.as_tensor(self.signal, dtype=torch.float32), \
torch.as_tensor(self.signal_shape, dtype=torch.int64)
def set_signal(self, signal):
self.signal = signal.astype(np.float32)/32768.
self.signal_shape = self.signal.size
self.output = True
def __len__(self):
return 1
data_layer = AudioDataLayer(sample_rate=cfg.train_ds.sample_rate)
data_loader = DataLoader(data_layer, batch_size=1, collate_fn=data_layer.collate_fn)
# inference method for audio signal (single instance)
def infer_signal(model, signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
logits = model.forward(input_signal=audio_signal, input_signal_length=audio_signal_len)
return logits
# class for streaming frame-based VAD
# 1) use reset() method to reset FrameVAD's state
# 2) call transcribe(frame) to do VAD on
# contiguous signal's frames
# To simplify the flow, we use single threshold to binarize predictions.
class FrameVAD:
def __init__(self, model_definition,
threshold=0.5,
frame_len=2, frame_overlap=2.5,
offset=10):
'''
Args:
threshold: If prob of speech is larger than threshold, classify the segment to be speech.
frame_len: frame's duration, seconds
frame_overlap: duration of overlaps before and after current frame, seconds
offset: number of symbols to drop for smooth streaming
'''
self.vocab = list(model_definition['labels'])
self.vocab.append('_')
self.sr = model_definition['sample_rate']
self.threshold = threshold
self.frame_len = frame_len
self.n_frame_len = int(frame_len * self.sr)
self.frame_overlap = frame_overlap
self.n_frame_overlap = int(frame_overlap * self.sr)
timestep_duration = model_definition['AudioToMFCCPreprocessor']['window_stride']
for block in model_definition['JasperEncoder']['jasper']:
timestep_duration *= block['stride'][0] ** block['repeat']
self.buffer = np.zeros(shape=2*self.n_frame_overlap + self.n_frame_len,
dtype=np.float32)
self.offset = offset
self.reset()
def _decode(self, frame, offset=0):
assert len(frame)==self.n_frame_len
self.buffer[:-self.n_frame_len] = self.buffer[self.n_frame_len:]
self.buffer[-self.n_frame_len:] = frame
logits = infer_signal(vad_model, self.buffer).cpu().numpy()[0]
decoded = self._greedy_decoder(
self.threshold,
logits,
self.vocab
)
return decoded
@torch.no_grad()
def transcribe(self, frame=None):
if frame is None:
frame = np.zeros(shape=self.n_frame_len, dtype=np.float32)
if len(frame) < self.n_frame_len:
frame = np.pad(frame, [0, self.n_frame_len - len(frame)], 'constant')
unmerged = self._decode(frame, self.offset)
return unmerged
def reset(self):
'''
Reset frame_history and decoder's state
'''
self.buffer=np.zeros(shape=self.buffer.shape, dtype=np.float32)
self.prev_char = ''
@staticmethod
def _greedy_decoder(threshold, logits, vocab):
s = []
if logits.shape[0]:
probs = torch.softmax(torch.as_tensor(logits), dim=-1)
probas, _ = torch.max(probs, dim=-1)
probas_s = probs[1].item()
preds = 1 if probas_s >= threshold else 0
s = [preds, str(vocab[preds]), probs[0].item(), probs[1].item(), str(logits)]
return s
###Output
_____no_output_____
###Markdown
Streaming inference depends on a few factors, such as the frame length (STEP) and the buffer size (WINDOW SIZE). Experiment with a few values in the cells below to see their effect; the short sketch that follows shows how each setting maps to buffer sizes in samples.
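For a concrete feel of what these two knobs mean, the arithmetic below mirrors `FrameVAD.__init__` from the previous cell; the (STEP, WINDOW_SIZE) pairs are just illustrative examples at 16 kHz.

```python
rate = 16000  # matches SAMPLE_RATE defined earlier
for step, window in [(0.01, 0.31), (0.01, 0.15), (0.025, 0.5)]:
    n_frame = int(step * rate)                    # new samples consumed per transcribe() call
    n_overlap = int((window - step) / 2 * rate)   # left/right context kept in the rolling buffer
    buffer_len = 2 * n_overlap + n_frame          # total samples the model sees each call
    print(f"STEP={step}s, WINDOW_SIZE={window}s -> frame={n_frame} samples, "
          f"buffer={buffer_len} samples ({buffer_len / rate:.2f}s)")
```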
###Code
STEP_LIST = [0.01,0.01]
WINDOW_SIZE_LIST = [0.31,0.15]
import wave
def offline_inference(wave_file, STEP = 0.025, WINDOW_SIZE = 0.5, threshold=0.5):
FRAME_LEN = STEP # infer every STEP seconds
CHANNELS = 1 # number of audio channels (expect mono signal)
RATE = 16000 # sample rate, Hz
CHUNK_SIZE = int(FRAME_LEN*RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=threshold,
frame_len=FRAME_LEN, frame_overlap = (WINDOW_SIZE-FRAME_LEN)/2,
offset=0)
wf = wave.open(wave_file, 'rb')
p = pa.PyAudio()
empty_counter = 0
preds = []
proba_b = []
proba_s = []
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
channels=CHANNELS,
rate=RATE,
output = True)
data = wf.readframes(CHUNK_SIZE)
while len(data) > 0:
data = wf.readframes(CHUNK_SIZE)
signal = np.frombuffer(data, dtype=np.int16)
result = vad.transcribe(signal)
preds.append(result[0])
proba_b.append(result[2])
proba_s.append(result[3])
if len(result):
print(result,end='\n')
empty_counter = 3
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='')
p.terminate()
vad.reset()
return preds, proba_b, proba_s
###Output
_____no_output_____
###Markdown
Here we show an example of online streaming inference. You can use your own file or download the provided demo audio file.
###Code
demo_wave = 'VAD_demo.wav'
if not os.path.exists(demo_wave):
!wget "https://dldata-public.s3.us-east-2.amazonaws.com/VAD_demo.wav"
wave_file = demo_wave
CHANNELS = 1
RATE = 16000
audio, sample_rate = librosa.load(wave_file, sr=RATE)
dur = librosa.get_duration(y=audio, sr=sample_rate)  # pass sr explicitly so the duration matches the 16 kHz audio
print(dur)
ipd.Audio(audio, rate=sample_rate)
threshold=0.4
results = []
for STEP, WINDOW_SIZE in zip(STEP_LIST, WINDOW_SIZE_LIST, ):
print(f'====== STEP is {STEP}s, WINDOW_SIZE is {WINDOW_SIZE}s ====== ')
preds, proba_b, proba_s = offline_inference(wave_file, STEP, WINDOW_SIZE, threshold)
results.append([STEP, WINDOW_SIZE, preds, proba_b, proba_s])
###Output
_____no_output_____
###Markdown
To simplify the flow, the above prediction is based on a single threshold, `threshold=0.4`. You can play with other [threshold](VAD-postprocessing-and-Tuning-threshold) values or use postprocessing and see how they impact performance. **Note**: if you want better performance, [finetune](Finetune) on your data and use posteriors such as [overlapped prediction](Posterior). Let's plot the predictions and the mel spectrogram
###Code
import librosa.display
plt.figure(figsize=[20,10])
num = len(results)
for i in range(num):
len_pred = len(results[i][2])
FRAME_LEN = results[i][0]
ax1 = plt.subplot(num+1,1,i+1)
ax1.plot(np.arange(audio.size) / sample_rate, audio, 'b')
ax1.set_xlim([-0.01, int(dur)+1])
ax1.tick_params(axis='y', labelcolor= 'b')
ax1.set_ylabel('Signal')
ax1.set_ylim([-1, 1])
proba_s = results[i][4]
pred = [1 if p > threshold else 0 for p in proba_s]
ax2 = ax1.twinx()
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(pred) , 'r', label='pred')
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(proba_s) , 'g--', label='speech prob')
ax2.tick_params(axis='y', labelcolor='r')
legend = ax2.legend(loc='lower right', shadow=True)
ax1.set_ylabel('prediction')
ax2.set_title(f'step {results[i][0]}s, buffer size {results[i][1]}s')
ax2.set_ylabel('Preds and Probas')
ax = plt.subplot(num+1,1,i+2)
S = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=64, fmax=8000)
S_dB = librosa.power_to_db(S, ref=np.max)
librosa.display.specshow(S_dB, x_axis='time', y_axis='mel', sr=sample_rate, fmax=8000)
ax.set_title('Mel-frequency spectrogram')
ax.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Online streaming inference through microphone **Please note the VAD model is not perfect for all microphone inputs, and you might need to finetune it on your own data and play with different parameters.**
###Code
STEP = 0.01
WINDOW_SIZE = 0.31
CHANNELS = 1
RATE = 16000
FRAME_LEN = STEP
THRESHOLD = 0.5
CHUNK_SIZE = int(STEP * RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=THRESHOLD,
frame_len=FRAME_LEN, frame_overlap=(WINDOW_SIZE - FRAME_LEN) / 2,
offset=0)
vad.reset()
p = pa.PyAudio()
print('Available audio input devices:')
input_devices = []
for i in range(p.get_device_count()):
dev = p.get_device_info_by_index(i)
if dev.get('maxInputChannels'):
input_devices.append(i)
print(i, dev.get('name'))
if len(input_devices):
dev_idx = -2
while dev_idx not in input_devices:
print('Please type input device ID:')
dev_idx = int(input())
empty_counter = 0
def callback(in_data, frame_count, time_info, status):
global empty_counter
signal = np.frombuffer(in_data, dtype=np.int16)
text = vad.transcribe(signal)
if len(text):
print(text,end='\n')
empty_counter = vad.offset
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='\n')
return (in_data, pa.paContinue)
stream = p.open(format=pa.paInt16,
channels=CHANNELS,
rate=SAMPLE_RATE,
input=True,
input_device_index=dev_idx,
stream_callback=callback,
frames_per_buffer=CHUNK_SIZE)
print('Listening...')
stream.start_stream()
# Interrupt kernel and then speak for a few more words to exit the pyaudio loop !
try:
while stream.is_active():
time.sleep(0.1)
finally:
stream.stop_stream()
stream.close()
p.terminate()
print()
print("PyAudio stopped")
else:
print('ERROR: No audio input device found.')
###Output
_____no_output_____
###Markdown
ONNX Deployment. You can also export the model to an ONNX file and deploy it to the TensorRT or MS ONNX Runtime inference engines. If you don't have one installed yet, please run:
###Code
!mkdir -p ort
%cd ort
!git clone --depth 1 --branch v1.8.0 https://github.com/microsoft/onnxruntime.git .
!./build.sh --skip_tests --config Release --build_shared_lib --parallel --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu --build_wheel
!pip install ./build/Linux/Release/dist/onnxruntime*.whl
%cd ..
###Output
_____no_output_____
###Markdown
Then just replace `infer_signal` implementation with this code:
###Code
import onnxruntime
vad_model.export('vad.onnx')
ort_session = onnxruntime.InferenceSession('vad.onnx')
def to_numpy(tensor):
return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
def infer_signal(signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
processed_signal, processed_signal_len = vad_model.preprocessor(
input_signal=audio_signal, length=audio_signal_len,
)
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(processed_signal), }
ologits = ort_session.run(None, ort_inputs)
alogits = np.asarray(ologits)
logits = torch.from_numpy(alogits[0])
return logits
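# NOTE (added): the streaming FrameVAD class defined earlier calls
# `infer_signal(vad_model, self.buffer)` with two arguments, while this ONNX-backed
# replacement takes only the signal. To reuse FrameVAD unchanged you would either adjust
# that call or wrap this function to restore the original signature, e.g. (a sketch,
# the wrapper name is illustrative and not part of the tutorial):
#
#     onnx_infer_signal = infer_signal
#     def infer_signal(model, signal):
#         return onnx_infer_signal(signal)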
###Output
_____no_output_____
###Markdown
Voice Activity Detection (VAD). This notebook demonstrates how to perform: 1. [offline streaming inference on audio files (offline VAD)](Offline-streaming-inference); 2. [finetuning](Finetune) and using the [posterior](Posterior); 3. [VAD postprocessing and threshold tuning](VAD-postprocessing-and-Tuning-threshold); 4. [online streaming inference](Online-streaming-inference); 5. [online streaming inference from a microphone's stream](Online-streaming-inference-through-microphone). The notebook requires the PyAudio library to get a signal from an audio device. For Ubuntu, please run the following commands to install it: ```sudo apt-get install -y portaudio19-dev``` followed by ```pip install pyaudio``` This notebook requires the `torchaudio` library to be installed for MarbleNet. Please follow the instructions available at the [torchaudio Github page](https://github.com/pytorch/audioinstallation) to install the appropriate version of torchaudio. If you would like to install the latest version, please run the following command to install it: ```conda install -c pytorch torchaudio```
###Code
import numpy as np
import pyaudio as pa
import os, time
import librosa
import IPython.display as ipd
import matplotlib.pyplot as plt
%matplotlib inline
import nemo
import nemo.collections.asr as nemo_asr
# sample rate, Hz
SAMPLE_RATE = 16000
###Output
_____no_output_____
###Markdown
Restore the model from NGC
###Code
vad_model = nemo_asr.models.EncDecClassificationModel.from_pretrained('vad_marblenet')
###Output
_____no_output_____
###Markdown
Observing the config of the model
###Code
from omegaconf import OmegaConf
import copy
# Preserve a copy of the full config
cfg = copy.deepcopy(vad_model._cfg)
print(OmegaConf.to_yaml(cfg))
###Output
_____no_output_____
###Markdown
Setup preprocessor with these settings
###Code
vad_model.preprocessor = vad_model.from_config_dict(cfg.preprocessor)
# Set model to inference mode
vad_model.eval();
vad_model = vad_model.to(vad_model.device)
###Output
_____no_output_____
###Markdown
We demonstrate two methods for streaming inference: 1. [offline streaming inference (script)](Offline-streaming-inference) 2. [online streaming inference (step-by-step)](Online-streaming-inference) Offline streaming inference: VAD relies on shorter fixed-length segments for prediction. You can find all necessary steps about inference in ```python Script: /examples/asr/speech_classification/vad_infer.py Config: /examples/asr/conf/VAD/vad_inference_postprocessing.yaml``` During inference, we generate frame-level predictions by two approaches: 1. shift the window of length `window_length_in_sec` (e.g. 0.63s) by `shift_length_in_sec` (e.g. 10ms) to generate the frame, and use the prediction of the window to represent the label for the frame; use ```python /examples/asr/speech_classification/vad_infer.py``` This script will automatically split long audio files to avoid CUDA memory issues and performs **streaming** inside `AudioLabelDataset`. Posterior: 2. generate predictions with overlapping input segments, then apply a smoothing filter to decide the label for a frame spanned by multiple segments. Perform this step alongside the step above with the flag **gen_overlap_seq=True**, or use ```python /scripts/voice_activity_detection/vad_overlap_posterior.py``` if you already have frame-level predictions. Have a look at the [MarbleNet paper](https://arxiv.org/pdf/2010.13886.pdf) for choices about segment length, smoothing filter, etc., and play with those parameters on your data. You can also find code for converting frame-level predictions to speech/no-speech segments in start/end time format in `vad_overlap_posterior.py`, or use the flag **gen_seg_table=True** alongside `vad_infer.py`. Finetune: You might need to finetune on your data for better performance. For finetuning/transfer learning, please refer to the [**Transfer learning** part of the ASR tutorial](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/asr/ASR_with_NeMo.ipynb) VAD postprocessing and Tuning threshold: We can use a single **threshold** (achieved by onset=offset=0.5) to binarize predictions or use typical VAD postprocessing, including Binarization: 1. **onset** and **offset** thresholds for detecting the beginning and end of speech; 2. padding durations before (**pad_onset**) and after (**pad_offset**) each speech segment. Filtering: 1. threshold for short speech segment deletion (**min_duration_on**); 2. threshold for small silence deletion (**min_duration_off**); 3. whether to perform short speech segment deletion first (**filter_speech_first**). A simplified sketch of this binarization and filtering is shown below. Of course you can do threshold tuning on frame-level predictions. We also provide a script ```python /scripts/voice_activity_detection/vad_tune_threshold.py``` to help you find the best thresholds if you have ground truth label files in RTTM format. Online streaming inference Setting up data for Streaming Inference
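Before setting up the data layer for streaming, here is a minimal, simplified sketch of the binarization and filtering described in the postprocessing section above. This is an illustration only (not NeMo's implementation); `frame_probs`, the frame shift, and the threshold values are made-up examples.

```python
import numpy as np

def binarize(frame_probs, frame_shift, onset=0.5, offset=0.3, pad_onset=0.0, pad_offset=0.0):
    """Turn per-frame speech probabilities into (start, end) segments in seconds."""
    segments, start, in_speech = [], 0.0, False
    for i, p in enumerate(frame_probs):
        t = round(i * frame_shift, 3)
        if not in_speech and p >= onset:          # speech onset detected
            start, in_speech = t, True
        elif in_speech and p < offset:            # speech offset detected
            segments.append((max(start - pad_onset, 0.0), t + pad_offset))
            in_speech = False
    if in_speech:                                 # close a segment that runs to the end
        segments.append((max(start - pad_onset, 0.0),
                         round(len(frame_probs) * frame_shift, 3) + pad_offset))
    return segments

def filter_segments(segments, min_duration_on=0.2, min_duration_off=0.2):
    """Merge segments separated by very short silences, then drop very short segments."""
    merged = []
    for start, end in segments:
        if merged and start - merged[-1][1] < min_duration_off:   # silence too short: merge
            merged[-1] = (merged[-1][0], end)
        else:
            merged.append((start, end))
    return [s for s in merged if s[1] - s[0] >= min_duration_on]  # drop too-short speech

frame_probs = np.array([0.1, 0.2, 0.7, 0.9, 0.8, 0.4, 0.2, 0.6, 0.9, 0.1])  # toy probabilities
segs = binarize(frame_probs, frame_shift=0.01)
print(segs)                                                       # roughly [(0.02, 0.06), (0.07, 0.09)]
print(filter_segments(segs, min_duration_on=0.03, min_duration_off=0.02))  # gap merged into one segment
```

The sketch ignores **filter_speech_first** and the other details handled by the provided scripts.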
###Code
from nemo.core.classes import IterableDataset
from nemo.core.neural_types import NeuralType, AudioSignal, LengthsType
import torch
from torch.utils.data import DataLoader
# simple data layer to pass audio signal
class AudioDataLayer(IterableDataset):
@property
def output_types(self):
return {
'audio_signal': NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
'a_sig_length': NeuralType(tuple('B'), LengthsType()),
}
def __init__(self, sample_rate):
super().__init__()
self._sample_rate = sample_rate
self.output = True
def __iter__(self):
return self
def __next__(self):
if not self.output:
raise StopIteration
self.output = False
return torch.as_tensor(self.signal, dtype=torch.float32), \
torch.as_tensor(self.signal_shape, dtype=torch.int64)
def set_signal(self, signal):
self.signal = signal.astype(np.float32)/32768.
self.signal_shape = self.signal.size
self.output = True
def __len__(self):
return 1
data_layer = AudioDataLayer(sample_rate=cfg.train_ds.sample_rate)
data_loader = DataLoader(data_layer, batch_size=1, collate_fn=data_layer.collate_fn)
# inference method for audio signal (single instance)
def infer_signal(model, signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
logits = model.forward(input_signal=audio_signal, input_signal_length=audio_signal_len)
return logits
# class for streaming frame-based VAD
# 1) use reset() method to reset FrameVAD's state
# 2) call transcribe(frame) to do VAD on
# contiguous signal's frames
# To simplify the flow, we use single threshold to binarize predictions.
class FrameVAD:
def __init__(self, model_definition,
threshold=0.5,
frame_len=2, frame_overlap=2.5,
offset=10):
'''
Args:
threshold: If prob of speech is larger than threshold, classify the segment to be speech.
frame_len: frame's duration, seconds
frame_overlap: duration of overlaps before and after current frame, seconds
offset: number of symbols to drop for smooth streaming
'''
self.vocab = list(model_definition['labels'])
self.vocab.append('_')
self.sr = model_definition['sample_rate']
self.threshold = threshold
self.frame_len = frame_len
self.n_frame_len = int(frame_len * self.sr)
self.frame_overlap = frame_overlap
self.n_frame_overlap = int(frame_overlap * self.sr)
timestep_duration = model_definition['AudioToMFCCPreprocessor']['window_stride']
for block in model_definition['JasperEncoder']['jasper']:
timestep_duration *= block['stride'][0] ** block['repeat']
self.buffer = np.zeros(shape=2*self.n_frame_overlap + self.n_frame_len,
dtype=np.float32)
self.offset = offset
self.reset()
def _decode(self, frame, offset=0):
assert len(frame)==self.n_frame_len
self.buffer[:-self.n_frame_len] = self.buffer[self.n_frame_len:]
self.buffer[-self.n_frame_len:] = frame
logits = infer_signal(vad_model, self.buffer).cpu().numpy()[0]
decoded = self._greedy_decoder(
self.threshold,
logits,
self.vocab
)
return decoded
@torch.no_grad()
def transcribe(self, frame=None):
if frame is None:
frame = np.zeros(shape=self.n_frame_len, dtype=np.float32)
if len(frame) < self.n_frame_len:
frame = np.pad(frame, [0, self.n_frame_len - len(frame)], 'constant')
unmerged = self._decode(frame, self.offset)
return unmerged
def reset(self):
'''
Reset frame_history and decoder's state
'''
self.buffer=np.zeros(shape=self.buffer.shape, dtype=np.float32)
self.prev_char = ''
@staticmethod
def _greedy_decoder(threshold, logits, vocab):
s = []
if logits.shape[0]:
probs = torch.softmax(torch.as_tensor(logits), dim=-1)
probas, _ = torch.max(probs, dim=-1)
probas_s = probs[1].item()
preds = 1 if probas_s >= threshold else 0
s = [preds, str(vocab[preds]), probs[0].item(), probs[1].item(), str(logits)]
return s
###Output
_____no_output_____
###Markdown
Streaming inference depends on a few factors, such as the frame length (STEP) and the buffer size (WINDOW SIZE). Experiment with a few values in the cells below to see their effect; the short sketch that follows shows how each setting maps to buffer sizes in samples.
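For a concrete feel of what these two knobs mean, the arithmetic below mirrors `FrameVAD.__init__` from the previous cell; the (STEP, WINDOW_SIZE) pairs are just illustrative examples at 16 kHz.

```python
rate = 16000  # matches SAMPLE_RATE defined earlier
for step, window in [(0.01, 0.31), (0.01, 0.15), (0.025, 0.5)]:
    n_frame = int(step * rate)                    # new samples consumed per transcribe() call
    n_overlap = int((window - step) / 2 * rate)   # left/right context kept in the rolling buffer
    buffer_len = 2 * n_overlap + n_frame          # total samples the model sees each call
    print(f"STEP={step}s, WINDOW_SIZE={window}s -> frame={n_frame} samples, "
          f"buffer={buffer_len} samples ({buffer_len / rate:.2f}s)")
```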
###Code
STEP_LIST = [0.01,0.01]
WINDOW_SIZE_LIST = [0.31,0.15]
import wave
def offline_inference(wave_file, STEP = 0.025, WINDOW_SIZE = 0.5, threshold=0.5):
FRAME_LEN = STEP # infer every STEP seconds
CHANNELS = 1 # number of audio channels (expect mono signal)
RATE = 16000 # sample rate, Hz
CHUNK_SIZE = int(FRAME_LEN*RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=threshold,
frame_len=FRAME_LEN, frame_overlap = (WINDOW_SIZE-FRAME_LEN)/2,
offset=0)
wf = wave.open(wave_file, 'rb')
p = pa.PyAudio()
empty_counter = 0
preds = []
proba_b = []
proba_s = []
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
channels=CHANNELS,
rate=RATE,
output = True)
data = wf.readframes(CHUNK_SIZE)
while len(data) > 0:
data = wf.readframes(CHUNK_SIZE)
signal = np.frombuffer(data, dtype=np.int16)
result = vad.transcribe(signal)
preds.append(result[0])
proba_b.append(result[2])
proba_s.append(result[3])
if len(result):
print(result,end='\n')
empty_counter = 3
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='')
p.terminate()
vad.reset()
return preds, proba_b, proba_s
###Output
_____no_output_____
###Markdown
Here we show an example of online streaming inference. You can use your own file or download the provided demo audio file.
###Code
demo_wave = 'VAD_demo.wav'
if not os.path.exists(demo_wave):
!wget "https://dldata-public.s3.us-east-2.amazonaws.com/VAD_demo.wav"
wave_file = demo_wave
CHANNELS = 1
RATE = 16000
audio, sample_rate = librosa.load(wave_file, sr=RATE)
dur = librosa.get_duration(y=audio, sr=sample_rate)  # pass sr explicitly so the duration matches the 16 kHz audio
print(dur)
ipd.Audio(audio, rate=sample_rate)
threshold=0.4
results = []
for STEP, WINDOW_SIZE in zip(STEP_LIST, WINDOW_SIZE_LIST, ):
print(f'====== STEP is {STEP}s, WINDOW_SIZE is {WINDOW_SIZE}s ====== ')
preds, proba_b, proba_s = offline_inference(wave_file, STEP, WINDOW_SIZE, threshold)
results.append([STEP, WINDOW_SIZE, preds, proba_b, proba_s])
###Output
_____no_output_____
###Markdown
To simplify the flow, the above prediction is based on a single threshold, `threshold=0.4`. You can play with other [threshold](VAD-postprocessing-and-Tuning-threshold) values or use postprocessing and see how they impact performance. **Note**: if you want better performance, [finetune](Finetune) on your data and use posteriors such as [overlapped prediction](Posterior). Let's plot the predictions and the mel spectrogram
###Code
import librosa.display
plt.figure(figsize=[20,10])
num = len(results)
for i in range(num):
len_pred = len(results[i][2])
FRAME_LEN = results[i][0]
ax1 = plt.subplot(num+1,1,i+1)
ax1.plot(np.arange(audio.size) / sample_rate, audio, 'b')
ax1.set_xlim([-0.01, int(dur)+1])
ax1.tick_params(axis='y', labelcolor= 'b')
ax1.set_ylabel('Signal')
ax1.set_ylim([-1, 1])
proba_s = results[i][4]
pred = [1 if p > threshold else 0 for p in proba_s]
ax2 = ax1.twinx()
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(pred) , 'r', label='pred')
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(proba_s) , 'g--', label='speech prob')
ax2.tick_params(axis='y', labelcolor='r')
legend = ax2.legend(loc='lower right', shadow=True)
ax1.set_ylabel('prediction')
ax2.set_title(f'step {results[i][0]}s, buffer size {results[i][1]}s')
ax2.set_ylabel('Preds and Probas')
ax = plt.subplot(num+1,1,i+2)
S = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=64, fmax=8000)
S_dB = librosa.power_to_db(S, ref=np.max)
librosa.display.specshow(S_dB, x_axis='time', y_axis='mel', sr=sample_rate, fmax=8000)
ax.set_title('Mel-frequency spectrogram')
ax.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Online streaming inference through microphone **Please note the VAD model is not perfect for all microphone inputs, and you might need to finetune it on your own data and play with different parameters.**
###Code
STEP = 0.01
WINDOW_SIZE = 0.31
CHANNELS = 1
RATE = 16000
FRAME_LEN = STEP
THRESHOLD = 0.5
CHUNK_SIZE = int(STEP * RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=THRESHOLD,
frame_len=FRAME_LEN, frame_overlap=(WINDOW_SIZE - FRAME_LEN) / 2,
offset=0)
vad.reset()
p = pa.PyAudio()
print('Available audio input devices:')
input_devices = []
for i in range(p.get_device_count()):
dev = p.get_device_info_by_index(i)
if dev.get('maxInputChannels'):
input_devices.append(i)
print(i, dev.get('name'))
if len(input_devices):
dev_idx = -2
while dev_idx not in input_devices:
print('Please type input device ID:')
dev_idx = int(input())
empty_counter = 0
def callback(in_data, frame_count, time_info, status):
global empty_counter
signal = np.frombuffer(in_data, dtype=np.int16)
text = vad.transcribe(signal)
if len(text):
print(text,end='\n')
empty_counter = vad.offset
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='\n')
return (in_data, pa.paContinue)
stream = p.open(format=pa.paInt16,
channels=CHANNELS,
rate=SAMPLE_RATE,
input=True,
input_device_index=dev_idx,
stream_callback=callback,
frames_per_buffer=CHUNK_SIZE)
print('Listening...')
stream.start_stream()
# Interrupt kernel and then speak for a few more words to exit the pyaudio loop !
try:
while stream.is_active():
time.sleep(0.1)
finally:
stream.stop_stream()
stream.close()
p.terminate()
print()
print("PyAudio stopped")
else:
print('ERROR: No audio input device found.')
###Output
_____no_output_____
###Markdown
ONNX Deployment. You can also export the model to an ONNX file and deploy it to the TensorRT or MS ONNX Runtime inference engines. If you don't have one installed yet, please run:
###Code
!pip install --upgrade onnxruntime # for gpu, use onnxruntime-gpu
# !mkdir -p ort
# %cd ort
# !git clone --depth 1 --branch v1.8.0 https://github.com/microsoft/onnxruntime.git .
# !./build.sh --skip_tests --config Release --build_shared_lib --parallel --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu --build_wheel
# !pip install ./build/Linux/Release/dist/onnxruntime*.whl
# %cd ..
###Output
_____no_output_____
###Markdown
Then just replace `infer_signal` implementation with this code:
###Code
import onnxruntime
vad_model.export('vad.onnx')
ort_session = onnxruntime.InferenceSession('vad.onnx')
def to_numpy(tensor):
return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
def infer_signal(signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
processed_signal, processed_signal_len = vad_model.preprocessor(
input_signal=audio_signal, length=audio_signal_len,
)
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(processed_signal), }
ologits = ort_session.run(None, ort_inputs)
alogits = np.asarray(ologits)
logits = torch.from_numpy(alogits[0])
return logits
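# NOTE (added): the streaming FrameVAD class defined earlier calls
# `infer_signal(vad_model, self.buffer)` with two arguments, while this ONNX-backed
# replacement takes only the signal. To reuse FrameVAD unchanged you would either adjust
# that call or wrap this function to restore the original signature, e.g. (a sketch,
# the wrapper name is illustrative and not part of the tutorial):
#
#     onnx_infer_signal = infer_signal
#     def infer_signal(model, signal):
#         return onnx_infer_signal(signal)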
###Output
_____no_output_____
###Markdown
Voice Activity Detection (VAD). This notebook demonstrates how to perform: 1. [offline streaming inference on audio files (offline VAD)](Offline-streaming-inference); 2. [finetuning](Finetune) and using the [posterior](Posterior); 3. [VAD postprocessing and threshold tuning](VAD-postprocessing-and-Tuning-threshold); 4. [online streaming inference](Online-streaming-inference); 5. [online streaming inference from a microphone's stream](Online-streaming-inference-through-microphone). The notebook requires the PyAudio library to get a signal from an audio device. For Ubuntu, please run the following commands to install it: ```sudo apt-get install -y portaudio19-dev``` followed by ```pip install pyaudio``` This notebook requires the `torchaudio` library to be installed for MarbleNet. Please follow the instructions available at the [torchaudio Github page](https://github.com/pytorch/audioinstallation) to install the appropriate version of torchaudio. If you would like to install the latest version, please run the following command to install it: ```conda install -c pytorch torchaudio```
###Code
import numpy as np
import pyaudio as pa
import os, time
import librosa
import IPython.display as ipd
import matplotlib.pyplot as plt
%matplotlib inline
import nemo
import nemo.collections.asr as nemo_asr
# sample rate, Hz
SAMPLE_RATE = 16000
###Output
_____no_output_____
###Markdown
Restore the model from NGC
###Code
vad_model = nemo_asr.models.EncDecClassificationModel.from_pretrained('vad_marblenet')
###Output
_____no_output_____
###Markdown
Observing the config of the model
###Code
from omegaconf import OmegaConf
import copy
# Preserve a copy of the full config
cfg = copy.deepcopy(vad_model._cfg)
print(OmegaConf.to_yaml(cfg))
###Output
_____no_output_____
###Markdown
Setup preprocessor with these settings
###Code
vad_model.preprocessor = vad_model.from_config_dict(cfg.preprocessor)
# Set model to inference mode
vad_model.eval();
vad_model = vad_model.to(vad_model.device)
###Output
_____no_output_____
###Markdown
We demonstrate two methods for streaming inference: 1. [offline streaming inference (script)](Offline-streaming-inference) 2. [online streaming inference (step-by-step)](Online-streaming-inference) Offline streaming inference: VAD relies on shorter fixed-length segments for prediction. You can find all necessary steps about inference in ```python Script: /examples/asr/speech_classification/vad_infer.py Config: /examples/asr/conf/VAD/vad_inference_postprocessing.yaml``` During inference, we generate frame-level predictions by two approaches: 1. shift the window of length `window_length_in_sec` (e.g. 0.63s) by `shift_length_in_sec` (e.g. 10ms) to generate the frame, and use the prediction of the window to represent the label for the frame; use ```python /examples/asr/speech_classification/vad_infer.py``` This script will automatically split long audio files to avoid CUDA memory issues and performs **streaming** inside `AudioLabelDataset`. Posterior: 2. generate predictions with overlapping input segments, then apply a smoothing filter to decide the label for a frame spanned by multiple segments. Perform this step alongside the step above with the flag **gen_overlap_seq=True**, or use ```python /scripts/voice_activity_detection/vad_overlap_posterior.py``` if you already have frame-level predictions. Have a look at the [MarbleNet paper](https://arxiv.org/pdf/2010.13886.pdf) for choices about segment length, smoothing filter, etc., and play with those parameters on your data. You can also find code for converting frame-level predictions to speech/no-speech segments in start/end time format in `vad_overlap_posterior.py`, or use the flag **gen_seg_table=True** alongside `vad_infer.py`. Finetune: You might need to finetune on your data for better performance. For finetuning/transfer learning, please refer to the [**Transfer learning** part of the ASR tutorial](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/asr/ASR_with_NeMo.ipynb) VAD postprocessing and Tuning threshold: We can use a single **threshold** (achieved by onset=offset=0.5) to binarize predictions or use typical VAD postprocessing, including Binarization: 1. **onset** and **offset** thresholds for detecting the beginning and end of speech; 2. padding durations before (**pad_onset**) and after (**pad_offset**) each speech segment. Filtering: 1. threshold for short speech segment deletion (**min_duration_on**); 2. threshold for small silence deletion (**min_duration_off**); 3. whether to perform short speech segment deletion first (**filter_speech_first**). A simplified sketch of this binarization and filtering is shown below. Of course you can do threshold tuning on frame-level predictions. We also provide a script ```python /scripts/voice_activity_detection/vad_tune_threshold.py``` to help you find the best thresholds if you have ground truth label files in RTTM format. Online streaming inference Setting up data for Streaming Inference
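Before setting up the data layer for streaming, here is a minimal, simplified sketch of the binarization and filtering described in the postprocessing section above. This is an illustration only (not NeMo's implementation); `frame_probs`, the frame shift, and the threshold values are made-up examples.

```python
import numpy as np

def binarize(frame_probs, frame_shift, onset=0.5, offset=0.3, pad_onset=0.0, pad_offset=0.0):
    """Turn per-frame speech probabilities into (start, end) segments in seconds."""
    segments, start, in_speech = [], 0.0, False
    for i, p in enumerate(frame_probs):
        t = round(i * frame_shift, 3)
        if not in_speech and p >= onset:          # speech onset detected
            start, in_speech = t, True
        elif in_speech and p < offset:            # speech offset detected
            segments.append((max(start - pad_onset, 0.0), t + pad_offset))
            in_speech = False
    if in_speech:                                 # close a segment that runs to the end
        segments.append((max(start - pad_onset, 0.0),
                         round(len(frame_probs) * frame_shift, 3) + pad_offset))
    return segments

def filter_segments(segments, min_duration_on=0.2, min_duration_off=0.2):
    """Merge segments separated by very short silences, then drop very short segments."""
    merged = []
    for start, end in segments:
        if merged and start - merged[-1][1] < min_duration_off:   # silence too short: merge
            merged[-1] = (merged[-1][0], end)
        else:
            merged.append((start, end))
    return [s for s in merged if s[1] - s[0] >= min_duration_on]  # drop too-short speech

frame_probs = np.array([0.1, 0.2, 0.7, 0.9, 0.8, 0.4, 0.2, 0.6, 0.9, 0.1])  # toy probabilities
segs = binarize(frame_probs, frame_shift=0.01)
print(segs)                                                       # roughly [(0.02, 0.06), (0.07, 0.09)]
print(filter_segments(segs, min_duration_on=0.03, min_duration_off=0.02))  # gap merged into one segment
```

The sketch ignores **filter_speech_first** and the other details handled by the provided scripts.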
###Code
from nemo.core.classes import IterableDataset
from nemo.core.neural_types import NeuralType, AudioSignal, LengthsType
import torch
from torch.utils.data import DataLoader
# simple data layer to pass audio signal
class AudioDataLayer(IterableDataset):
@property
def output_types(self):
return {
'audio_signal': NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
'a_sig_length': NeuralType(tuple('B'), LengthsType()),
}
def __init__(self, sample_rate):
super().__init__()
self._sample_rate = sample_rate
self.output = True
def __iter__(self):
return self
def __next__(self):
if not self.output:
raise StopIteration
self.output = False
return torch.as_tensor(self.signal, dtype=torch.float32), \
torch.as_tensor(self.signal_shape, dtype=torch.int64)
def set_signal(self, signal):
self.signal = signal.astype(np.float32)/32768.
self.signal_shape = self.signal.size
self.output = True
def __len__(self):
return 1
data_layer = AudioDataLayer(sample_rate=cfg.train_ds.sample_rate)
data_loader = DataLoader(data_layer, batch_size=1, collate_fn=data_layer.collate_fn)
# inference method for audio signal (single instance)
def infer_signal(model, signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
logits = model.forward(input_signal=audio_signal, input_signal_length=audio_signal_len)
return logits
# class for streaming frame-based VAD
# 1) use reset() method to reset FrameVAD's state
# 2) call transcribe(frame) to do VAD on
# contiguous signal's frames
# To simplify the flow, we use single threshold to binarize predictions.
class FrameVAD:
def __init__(self, model_definition,
threshold=0.5,
frame_len=2, frame_overlap=2.5,
offset=10):
'''
Args:
threshold: If prob of speech is larger than threshold, classify the segment to be speech.
frame_len: frame's duration, seconds
frame_overlap: duration of overlaps before and after current frame, seconds
offset: number of symbols to drop for smooth streaming
'''
self.vocab = list(model_definition['labels'])
self.vocab.append('_')
self.sr = model_definition['sample_rate']
self.threshold = threshold
self.frame_len = frame_len
self.n_frame_len = int(frame_len * self.sr)
self.frame_overlap = frame_overlap
self.n_frame_overlap = int(frame_overlap * self.sr)
timestep_duration = model_definition['AudioToMFCCPreprocessor']['window_stride']
for block in model_definition['JasperEncoder']['jasper']:
timestep_duration *= block['stride'][0] ** block['repeat']
self.buffer = np.zeros(shape=2*self.n_frame_overlap + self.n_frame_len,
dtype=np.float32)
self.offset = offset
self.reset()
def _decode(self, frame, offset=0):
assert len(frame)==self.n_frame_len
self.buffer[:-self.n_frame_len] = self.buffer[self.n_frame_len:]
self.buffer[-self.n_frame_len:] = frame
logits = infer_signal(vad_model, self.buffer).cpu().numpy()[0]
decoded = self._greedy_decoder(
self.threshold,
logits,
self.vocab
)
return decoded
@torch.no_grad()
def transcribe(self, frame=None):
if frame is None:
frame = np.zeros(shape=self.n_frame_len, dtype=np.float32)
if len(frame) < self.n_frame_len:
frame = np.pad(frame, [0, self.n_frame_len - len(frame)], 'constant')
unmerged = self._decode(frame, self.offset)
return unmerged
def reset(self):
'''
Reset frame_history and decoder's state
'''
self.buffer=np.zeros(shape=self.buffer.shape, dtype=np.float32)
self.prev_char = ''
@staticmethod
def _greedy_decoder(threshold, logits, vocab):
s = []
if logits.shape[0]:
probs = torch.softmax(torch.as_tensor(logits), dim=-1)
probas, _ = torch.max(probs, dim=-1)
probas_s = probs[1].item()
preds = 1 if probas_s >= threshold else 0
s = [preds, str(vocab[preds]), probs[0].item(), probs[1].item(), str(logits)]
return s
###Output
_____no_output_____
###Markdown
Streaming inference depends on a few factors, such as the frame length (STEP) and the buffer size (WINDOW SIZE). Experiment with a few values in the cells below to see their effect; the short sketch that follows shows how each setting maps to buffer sizes in samples.
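For a concrete feel of what these two knobs mean, the arithmetic below mirrors `FrameVAD.__init__` from the previous cell; the (STEP, WINDOW_SIZE) pairs are just illustrative examples at 16 kHz.

```python
rate = 16000  # matches SAMPLE_RATE defined earlier
for step, window in [(0.01, 0.31), (0.01, 0.15), (0.025, 0.5)]:
    n_frame = int(step * rate)                    # new samples consumed per transcribe() call
    n_overlap = int((window - step) / 2 * rate)   # left/right context kept in the rolling buffer
    buffer_len = 2 * n_overlap + n_frame          # total samples the model sees each call
    print(f"STEP={step}s, WINDOW_SIZE={window}s -> frame={n_frame} samples, "
          f"buffer={buffer_len} samples ({buffer_len / rate:.2f}s)")
```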
###Code
STEP_LIST = [0.01,0.01]
WINDOW_SIZE_LIST = [0.31,0.15]
import wave
def offline_inference(wave_file, STEP = 0.025, WINDOW_SIZE = 0.5, threshold=0.5):
FRAME_LEN = STEP # infer every STEP seconds
CHANNELS = 1 # number of audio channels (expect mono signal)
RATE = 16000 # sample rate, Hz
CHUNK_SIZE = int(FRAME_LEN*RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=threshold,
frame_len=FRAME_LEN, frame_overlap = (WINDOW_SIZE-FRAME_LEN)/2,
offset=0)
wf = wave.open(wave_file, 'rb')
p = pa.PyAudio()
empty_counter = 0
preds = []
proba_b = []
proba_s = []
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
channels=CHANNELS,
rate=RATE,
output = True)
data = wf.readframes(CHUNK_SIZE)
while len(data) > 0:
data = wf.readframes(CHUNK_SIZE)
signal = np.frombuffer(data, dtype=np.int16)
result = vad.transcribe(signal)
preds.append(result[0])
proba_b.append(result[2])
proba_s.append(result[3])
if len(result):
print(result,end='\n')
empty_counter = 3
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='')
p.terminate()
vad.reset()
return preds, proba_b, proba_s
###Output
_____no_output_____
###Markdown
Here we show an example of online streaming inference. You can use your own file or download the provided demo audio file.
###Code
demo_wave = 'VAD_demo.wav'
if not os.path.exists(demo_wave):
!wget "https://dldata-public.s3.us-east-2.amazonaws.com/VAD_demo.wav"
wave_file = demo_wave
CHANNELS = 1
RATE = 16000
audio, sample_rate = librosa.load(wave_file, sr=RATE)
dur = librosa.get_duration(y=audio, sr=sample_rate)  # pass sr explicitly so the duration matches the 16 kHz audio
print(dur)
ipd.Audio(audio, rate=sample_rate)
threshold=0.4
results = []
for STEP, WINDOW_SIZE in zip(STEP_LIST, WINDOW_SIZE_LIST, ):
print(f'====== STEP is {STEP}s, WINDOW_SIZE is {WINDOW_SIZE}s ====== ')
preds, proba_b, proba_s = offline_inference(wave_file, STEP, WINDOW_SIZE, threshold)
results.append([STEP, WINDOW_SIZE, preds, proba_b, proba_s])
###Output
_____no_output_____
###Markdown
To simplify the flow, the above prediction is based on a single threshold, `threshold=0.4`. You can play with other [threshold](VAD-postprocessing-and-Tuning-threshold) values or use postprocessing and see how they impact performance. **Note**: if you want better performance, [finetune](Finetune) on your data and use posteriors such as [overlapped prediction](Posterior). Let's plot the predictions and the mel spectrogram
###Code
import librosa.display
plt.figure(figsize=[20,10])
num = len(results)
for i in range(num):
len_pred = len(results[i][2])
FRAME_LEN = results[i][0]
ax1 = plt.subplot(num+1,1,i+1)
ax1.plot(np.arange(audio.size) / sample_rate, audio, 'b')
ax1.set_xlim([-0.01, int(dur)+1])
ax1.tick_params(axis='y', labelcolor= 'b')
ax1.set_ylabel('Signal')
ax1.set_ylim([-1, 1])
proba_s = results[i][4]
pred = [1 if p > threshold else 0 for p in proba_s]
ax2 = ax1.twinx()
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(pred) , 'r', label='pred')
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(proba_s) , 'g--', label='speech prob')
ax2.tick_params(axis='y', labelcolor='r')
legend = ax2.legend(loc='lower right', shadow=True)
ax1.set_ylabel('prediction')
ax2.set_title(f'step {results[i][0]}s, buffer size {results[i][1]}s')
ax2.set_ylabel('Preds and Probas')
ax = plt.subplot(num+1,1,i+2)
S = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=64, fmax=8000)
S_dB = librosa.power_to_db(S, ref=np.max)
librosa.display.specshow(S_dB, x_axis='time', y_axis='mel', sr=sample_rate, fmax=8000)
ax.set_title('Mel-frequency spectrogram')
ax.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Online streaming inference through microphone **Please note the VAD model is not perfect for all microphone inputs, and you might need to finetune it on your own data and play with different parameters.**
###Code
STEP = 0.01
WINDOW_SIZE = 0.31
CHANNELS = 1
RATE = 16000
FRAME_LEN = STEP
THRESHOLD = 0.5
CHUNK_SIZE = int(STEP * RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=THRESHOLD,
frame_len=FRAME_LEN, frame_overlap=(WINDOW_SIZE - FRAME_LEN) / 2,
offset=0)
vad.reset()
p = pa.PyAudio()
print('Available audio input devices:')
input_devices = []
for i in range(p.get_device_count()):
dev = p.get_device_info_by_index(i)
if dev.get('maxInputChannels'):
input_devices.append(i)
print(i, dev.get('name'))
if len(input_devices):
dev_idx = -2
while dev_idx not in input_devices:
print('Please type input device ID:')
dev_idx = int(input())
empty_counter = 0
def callback(in_data, frame_count, time_info, status):
global empty_counter
signal = np.frombuffer(in_data, dtype=np.int16)
text = vad.transcribe(signal)
if len(text):
print(text,end='\n')
empty_counter = vad.offset
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='\n')
return (in_data, pa.paContinue)
stream = p.open(format=pa.paInt16,
channels=CHANNELS,
rate=SAMPLE_RATE,
input=True,
input_device_index=dev_idx,
stream_callback=callback,
frames_per_buffer=CHUNK_SIZE)
print('Listening...')
stream.start_stream()
# Interrupt kernel and then speak for a few more words to exit the pyaudio loop !
try:
while stream.is_active():
time.sleep(0.1)
finally:
stream.stop_stream()
stream.close()
p.terminate()
print()
print("PyAudio stopped")
else:
print('ERROR: No audio input device found.')
###Output
_____no_output_____
###Markdown
ONNX Deployment. You can also export the model to an ONNX file and deploy it to the TensorRT or MS ONNX Runtime inference engines. If you don't have one installed yet, please run:
###Code
!pip install --upgrade onnxruntime # for gpu, use onnxruntime-gpu
# !mkdir -p ort
# %cd ort
# !git clone --depth 1 --branch v1.8.0 https://github.com/microsoft/onnxruntime.git .
# !./build.sh --skip_tests --config Release --build_shared_lib --parallel --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu --build_wheel
# !pip install ./build/Linux/Release/dist/onnxruntime*.whl
# %cd ..
###Output
_____no_output_____
###Markdown
Then just replace `infer_signal` implementation with this code:
###Code
import onnxruntime
vad_model.export('vad.onnx')
ort_session = onnxruntime.InferenceSession('vad.onnx')
def to_numpy(tensor):
return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
def infer_signal(signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
processed_signal, processed_signal_len = vad_model.preprocessor(
input_signal=audio_signal, length=audio_signal_len,
)
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(processed_signal), }
ologits = ort_session.run(None, ort_inputs)
alogits = np.asarray(ologits)
logits = torch.from_numpy(alogits[0])
return logits
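# NOTE (added): the streaming FrameVAD class defined earlier calls
# `infer_signal(vad_model, self.buffer)` with two arguments, while this ONNX-backed
# replacement takes only the signal. To reuse FrameVAD unchanged you would either adjust
# that call or wrap this function to restore the original signature, e.g. (a sketch,
# the wrapper name is illustrative and not part of the tutorial):
#
#     onnx_infer_signal = infer_signal
#     def infer_signal(model, signal):
#         return onnx_infer_signal(signal)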
###Output
_____no_output_____
###Markdown
Voice Activity Detection (VAD). This notebook demonstrates how to perform: 1. [offline streaming inference on audio files (offline VAD)](Offline-streaming-inference); 2. [finetuning](Finetune) and using the [posterior](Posterior); 3. [VAD postprocessing and threshold tuning](VAD-postprocessing-and-Tuning-threshold); 4. [online streaming inference](Online-streaming-inference); 5. [online streaming inference from a microphone's stream](Online-streaming-inference-through-microphone). The notebook requires the PyAudio library to get a signal from an audio device. For Ubuntu, please run the following commands to install it: ```sudo apt-get install -y portaudio19-dev``` followed by ```pip install pyaudio``` This notebook requires the `torchaudio` library to be installed for MarbleNet. Please follow the instructions available at the [torchaudio Github page](https://github.com/pytorch/audioinstallation) to install the appropriate version of torchaudio. If you would like to install the latest version, please run the following command to install it: ```conda install -c pytorch torchaudio```
###Code
import numpy as np
import pyaudio as pa
import os, time
import librosa
import IPython.display as ipd
import matplotlib.pyplot as plt
%matplotlib inline
import nemo
import nemo.collections.asr as nemo_asr
# sample rate, Hz
SAMPLE_RATE = 16000
###Output
_____no_output_____
###Markdown
Restore the model from NGC
###Code
vad_model = nemo_asr.models.EncDecClassificationModel.from_pretrained('vad_marblenet')
###Output
_____no_output_____
###Markdown
Observing the config of the model
###Code
from omegaconf import OmegaConf
import copy
# Preserve a copy of the full config
cfg = copy.deepcopy(vad_model._cfg)
print(OmegaConf.to_yaml(cfg))
###Output
_____no_output_____
###Markdown
Setup preprocessor with these settings
###Code
vad_model.preprocessor = vad_model.from_config_dict(cfg.preprocessor)
# Set model to inference mode
vad_model.eval();
vad_model = vad_model.to(vad_model.device)
###Output
_____no_output_____
###Markdown
We demonstrate two methods for streaming inference: 1. [offline streaming inference (script)](Offline-streaming-inference) 2. [online streaming inference (step-by-step)](Online-streaming-inference) Offline streaming inference: VAD relies on shorter fixed-length segments for prediction. During inference, we generate frame-level predictions by two approaches: 1. shift the window of length `time_length` (e.g. 0.63s) by `shift_length` (e.g. 10ms) to generate the frame, and use the prediction of the window to represent the label for the frame; use ```python /examples/asr/vad_infer.py``` This script will automatically split long audio files to avoid CUDA memory issues and performs **streaming** inside `AudioLabelDataset`. Posterior: 2. generate predictions with overlapping input segments, then apply a smoothing filter to decide the label for a frame spanned by multiple segments. Get the frame-level predictions from the step above and use ```python /scripts/voice_activity_detection/vad_overlap_posterior.py``` Have a look at the [MarbleNet paper](https://arxiv.org/pdf/2010.13886.pdf) for choices about segment length, smoothing filter, etc., and play with those parameters on your data. You can also find code for converting frame-level predictions to speech/no-speech segments in start/end time format in `vad_overlap_posterior.py`. Finetune: You might need to finetune on your data for better performance. For finetuning/transfer learning, please refer to the [**Transfer learning** part of the ASR tutorial](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/asr/ASR_with_NeMo.ipynb) VAD postprocessing and Tuning threshold: We can use a single **threshold** to binarize predictions or use typical VAD postprocessing, including Binarization: 1. **onset** and **offset** thresholds for detecting the beginning and end of speech; 2. padding durations before (**pad_onset**) and after (**pad_offset**) each speech segment. Filtering: 1. threshold for short speech segment deletion (**min_duration_on**); 2. threshold for small silence deletion (**min_duration_off**); 3. whether to perform short speech segment deletion first (**filter_speech_first**). A simplified sketch of this binarization and filtering is shown below. Of course you can do threshold tuning on frame-level predictions. We also provide a script ```python /scripts/voice_activity_detection/vad_tune_threshold.py``` to help you find the best thresholds if you have ground truth label files in RTTM format. Online streaming inference Setting up data for Streaming Inference
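Before setting up the data layer for streaming, here is a minimal, simplified sketch of the binarization and filtering described in the postprocessing section above. This is an illustration only (not NeMo's implementation); `frame_probs`, the frame shift, and the threshold values are made-up examples.

```python
import numpy as np

def binarize(frame_probs, frame_shift, onset=0.5, offset=0.3, pad_onset=0.0, pad_offset=0.0):
    """Turn per-frame speech probabilities into (start, end) segments in seconds."""
    segments, start, in_speech = [], 0.0, False
    for i, p in enumerate(frame_probs):
        t = round(i * frame_shift, 3)
        if not in_speech and p >= onset:          # speech onset detected
            start, in_speech = t, True
        elif in_speech and p < offset:            # speech offset detected
            segments.append((max(start - pad_onset, 0.0), t + pad_offset))
            in_speech = False
    if in_speech:                                 # close a segment that runs to the end
        segments.append((max(start - pad_onset, 0.0),
                         round(len(frame_probs) * frame_shift, 3) + pad_offset))
    return segments

def filter_segments(segments, min_duration_on=0.2, min_duration_off=0.2):
    """Merge segments separated by very short silences, then drop very short segments."""
    merged = []
    for start, end in segments:
        if merged and start - merged[-1][1] < min_duration_off:   # silence too short: merge
            merged[-1] = (merged[-1][0], end)
        else:
            merged.append((start, end))
    return [s for s in merged if s[1] - s[0] >= min_duration_on]  # drop too-short speech

frame_probs = np.array([0.1, 0.2, 0.7, 0.9, 0.8, 0.4, 0.2, 0.6, 0.9, 0.1])  # toy probabilities
segs = binarize(frame_probs, frame_shift=0.01)
print(segs)                                                       # roughly [(0.02, 0.06), (0.07, 0.09)]
print(filter_segments(segs, min_duration_on=0.03, min_duration_off=0.02))  # gap merged into one segment
```

The sketch ignores **filter_speech_first** and the other details handled by the provided scripts.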
###Code
from nemo.core.classes import IterableDataset
from nemo.core.neural_types import NeuralType, AudioSignal, LengthsType
import torch
from torch.utils.data import DataLoader
# simple data layer to pass audio signal
class AudioDataLayer(IterableDataset):
@property
def output_types(self):
return {
'audio_signal': NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
'a_sig_length': NeuralType(tuple('B'), LengthsType()),
}
def __init__(self, sample_rate):
super().__init__()
self._sample_rate = sample_rate
self.output = True
def __iter__(self):
return self
def __next__(self):
if not self.output:
raise StopIteration
self.output = False
return torch.as_tensor(self.signal, dtype=torch.float32), \
torch.as_tensor(self.signal_shape, dtype=torch.int64)
def set_signal(self, signal):
self.signal = signal.astype(np.float32)/32768.
self.signal_shape = self.signal.size
self.output = True
def __len__(self):
return 1
data_layer = AudioDataLayer(sample_rate=cfg.train_ds.sample_rate)
data_loader = DataLoader(data_layer, batch_size=1, collate_fn=data_layer.collate_fn)
# inference method for audio signal (single instance)
def infer_signal(model, signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
logits = model.forward(input_signal=audio_signal, input_signal_length=audio_signal_len)
return logits
# class for streaming frame-based VAD
# 1) use reset() method to reset FrameVAD's state
# 2) call transcribe(frame) to do VAD on
# contiguous signal's frames
# To simplify the flow, we use single threshold to binarize predictions.
class FrameVAD:
def __init__(self, model_definition,
threshold=0.5,
frame_len=2, frame_overlap=2.5,
offset=10):
'''
Args:
threshold: If prob of speech is larger than threshold, classify the segment to be speech.
frame_len: frame's duration, seconds
frame_overlap: duration of overlaps before and after current frame, seconds
offset: number of symbols to drop for smooth streaming
'''
self.vocab = list(model_definition['labels'])
self.vocab.append('_')
self.sr = model_definition['sample_rate']
self.threshold = threshold
self.frame_len = frame_len
self.n_frame_len = int(frame_len * self.sr)
self.frame_overlap = frame_overlap
self.n_frame_overlap = int(frame_overlap * self.sr)
timestep_duration = model_definition['AudioToMFCCPreprocessor']['window_stride']
for block in model_definition['JasperEncoder']['jasper']:
timestep_duration *= block['stride'][0] ** block['repeat']
self.buffer = np.zeros(shape=2*self.n_frame_overlap + self.n_frame_len,
dtype=np.float32)
self.offset = offset
self.reset()
def _decode(self, frame, offset=0):
assert len(frame)==self.n_frame_len
self.buffer[:-self.n_frame_len] = self.buffer[self.n_frame_len:]
self.buffer[-self.n_frame_len:] = frame
logits = infer_signal(vad_model, self.buffer).cpu().numpy()[0]
decoded = self._greedy_decoder(
self.threshold,
logits,
self.vocab
)
return decoded
@torch.no_grad()
def transcribe(self, frame=None):
if frame is None:
frame = np.zeros(shape=self.n_frame_len, dtype=np.float32)
if len(frame) < self.n_frame_len:
frame = np.pad(frame, [0, self.n_frame_len - len(frame)], 'constant')
unmerged = self._decode(frame, self.offset)
return unmerged
def reset(self):
'''
Reset frame_history and decoder's state
'''
self.buffer=np.zeros(shape=self.buffer.shape, dtype=np.float32)
self.prev_char = ''
@staticmethod
def _greedy_decoder(threshold, logits, vocab):
s = []
if logits.shape[0]:
probs = torch.softmax(torch.as_tensor(logits), dim=-1)
probas, _ = torch.max(probs, dim=-1)
probas_s = probs[1].item()
preds = 1 if probas_s >= threshold else 0
s = [preds, str(vocab[preds]), probs[0].item(), probs[1].item(), str(logits)]
return s
###Output
_____no_output_____
###Markdown
Streaming inference depends on a few factors, such as the frame length (STEP) and the buffer size (WINDOW SIZE). Experiment with a few values in the cells below to see their effect; the short sketch that follows shows how each setting maps to buffer sizes in samples.
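For a concrete feel of what these two knobs mean, the arithmetic below mirrors `FrameVAD.__init__` from the previous cell; the (STEP, WINDOW_SIZE) pairs are just illustrative examples at 16 kHz.

```python
rate = 16000  # matches SAMPLE_RATE defined earlier
for step, window in [(0.01, 0.31), (0.01, 0.15), (0.025, 0.5)]:
    n_frame = int(step * rate)                    # new samples consumed per transcribe() call
    n_overlap = int((window - step) / 2 * rate)   # left/right context kept in the rolling buffer
    buffer_len = 2 * n_overlap + n_frame          # total samples the model sees each call
    print(f"STEP={step}s, WINDOW_SIZE={window}s -> frame={n_frame} samples, "
          f"buffer={buffer_len} samples ({buffer_len / rate:.2f}s)")
```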
###Code
STEP_LIST = [0.01,0.01]
WINDOW_SIZE_LIST = [0.31,0.15]
import wave
def offline_inference(wave_file, STEP = 0.025, WINDOW_SIZE = 0.5, threshold=0.5):
FRAME_LEN = STEP # infer every STEP seconds
CHANNELS = 1 # number of audio channels (expect mono signal)
RATE = 16000 # sample rate, Hz
CHUNK_SIZE = int(FRAME_LEN*RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=threshold,
frame_len=FRAME_LEN, frame_overlap = (WINDOW_SIZE-FRAME_LEN)/2,
offset=0)
wf = wave.open(wave_file, 'rb')
p = pa.PyAudio()
empty_counter = 0
preds = []
proba_b = []
proba_s = []
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
channels=CHANNELS,
rate=RATE,
output = True)
data = wf.readframes(CHUNK_SIZE)
while len(data) > 0:
data = wf.readframes(CHUNK_SIZE)
signal = np.frombuffer(data, dtype=np.int16)
result = vad.transcribe(signal)
preds.append(result[0])
proba_b.append(result[2])
proba_s.append(result[3])
if len(result):
print(result,end='\n')
empty_counter = 3
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='')
p.terminate()
vad.reset()
return preds, proba_b, proba_s
###Output
_____no_output_____
###Markdown
Here we show an example of online streaming inferenceYou can use your file or download the provided demo audio file.
###Code
demo_wave = 'VAD_demo.wav'
if not os.path.exists(demo_wave):
!wget "https://dldata-public.s3.us-east-2.amazonaws.com/VAD_demo.wav"
wave_file = demo_wave
CHANNELS = 1
RATE = 16000
audio, sample_rate = librosa.load(wave_file, sr=RATE)
dur = librosa.get_duration(y=audio, sr=sample_rate)  # pass sr so the duration of the 16 kHz signal is computed correctly
print(dur)
ipd.Audio(audio, rate=sample_rate)
threshold=0.4
results = []
for STEP, WINDOW_SIZE in zip(STEP_LIST, WINDOW_SIZE_LIST, ):
print(f'====== STEP is {STEP}s, WINDOW_SIZE is {WINDOW_SIZE}s ====== ')
preds, proba_b, proba_s = offline_inference(wave_file, STEP, WINDOW_SIZE, threshold)
results.append([STEP, WINDOW_SIZE, preds, proba_b, proba_s])
###Output
_____no_output_____
###Markdown
To simplify the flow, the above prediction is based on single threshold and `threshold=0.4`.You can play with other [threshold](VAD-postprocessing-and-Tuning-threshold) or use postprocessing and see how they would impact performance. **Note** if you want better performance, [finetune](Finetune) on your data and use posteriors such as [overlapped prediction](Posterior). Let's plot the prediction and melspectrogram
###Code
import librosa.display
plt.figure(figsize=[20,10])
num = len(results)
for i in range(num):
len_pred = len(results[i][2])
FRAME_LEN = results[i][0]
ax1 = plt.subplot(num+1,1,i+1)
ax1.plot(np.arange(audio.size) / sample_rate, audio, 'b')
ax1.set_xlim([-0.01, int(dur)+1])
ax1.tick_params(axis='y', labelcolor= 'b')
ax1.set_ylabel('Signal')
ax1.set_ylim([-1, 1])
proba_s = results[i][4]
pred = [1 if p > threshold else 0 for p in proba_s]
ax2 = ax1.twinx()
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(pred) , 'r', label='pred')
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(proba_s) , 'g--', label='speech prob')
ax2.tick_params(axis='y', labelcolor='r')
legend = ax2.legend(loc='lower right', shadow=True)
ax1.set_ylabel('prediction')
ax2.set_title(f'step {results[i][0]}s, buffer size {results[i][1]}s')
ax2.set_ylabel('Preds and Probas')
ax = plt.subplot(num+1,1,i+2)
S = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=64, fmax=8000)
S_dB = librosa.power_to_db(S, ref=np.max)
librosa.display.specshow(S_dB, x_axis='time', y_axis='mel', sr=sample_rate, fmax=8000)
ax.set_title('Mel-frequency spectrogram')
ax.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Online streaming inference through microphone **Please note the VAD model is not perfect for various microphone input and you might need to finetune on your input and play with different parameters.**
###Code
STEP = 0.01
WINDOW_SIZE = 0.31
CHANNELS = 1
RATE = 16000
FRAME_LEN = STEP
THRESHOLD = 0.5
CHUNK_SIZE = int(STEP * RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=THRESHOLD,
frame_len=FRAME_LEN, frame_overlap=(WINDOW_SIZE - FRAME_LEN) / 2,
offset=0)
vad.reset()
p = pa.PyAudio()
print('Available audio input devices:')
input_devices = []
for i in range(p.get_device_count()):
dev = p.get_device_info_by_index(i)
if dev.get('maxInputChannels'):
input_devices.append(i)
print(i, dev.get('name'))
if len(input_devices):
dev_idx = -2
while dev_idx not in input_devices:
print('Please type input device ID:')
dev_idx = int(input())
empty_counter = 0
def callback(in_data, frame_count, time_info, status):
global empty_counter
signal = np.frombuffer(in_data, dtype=np.int16)
text = vad.transcribe(signal)
if len(text):
print(text,end='\n')
empty_counter = vad.offset
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='\n')
return (in_data, pa.paContinue)
stream = p.open(format=pa.paInt16,
channels=CHANNELS,
rate=SAMPLE_RATE,
input=True,
input_device_index=dev_idx,
stream_callback=callback,
frames_per_buffer=CHUNK_SIZE)
print('Listening...')
stream.start_stream()
# Interrupt kernel and then speak for a few more words to exit the pyaudio loop !
try:
while stream.is_active():
time.sleep(0.1)
finally:
stream.stop_stream()
stream.close()
p.terminate()
print()
print("PyAudio stopped")
else:
print('ERROR: No audio input device found.')
###Output
_____no_output_____
###Markdown
ONNX DeploymentYou can also export the model to ONNX file and deploy it to TensorRT or MS ONNX Runtime inference engines. If you don't have one installed yet, please run:
###Code
!mkdir -p ort
%cd ort
!git clone --depth 1 --branch v1.8.0 https://github.com/microsoft/onnxruntime.git .
!./build.sh --skip_tests --config Release --build_shared_lib --parallel --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu --build_wheel
!pip install ./build/Linux/Release/dist/onnxruntime*.whl
%cd ..
###Output
_____no_output_____
###Markdown
Then just replace `infer_signal` implementation with this code:
###Code
import onnxruntime
vad_model.export('vad.onnx')
ort_session = onnxruntime.InferenceSession('vad.onnx')
def to_numpy(tensor):
return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
def infer_signal(signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
processed_signal, processed_signal_len = vad_model.preprocessor(
input_signal=audio_signal, length=audio_signal_len,
)
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(processed_signal), }
ologits = ort_session.run(None, ort_inputs)
alogits = np.asarray(ologits)
logits = torch.from_numpy(alogits[0])
return logits
###Output
_____no_output_____
###Markdown
Voice Activity Detection (VAD)This notebook demonstrates how to perform1. [offline streaming inference on audio files (offline VAD)](Offline-streaming-inference);2. [finetuning](Finetune) and use [posterior](Posterior);2. [vad postproceesing and threshold tuning](VAD-postprocessing-and-Tuning-threshold);4. [online streaming inference](Online-streaming-inference);3. [online streaming inference from a microphone's stream](Online-streaming-inference-through-microphone). The notebook requires PyAudio library to get a signal from an audio device.For Ubuntu, please run the following commands to install it:```sudo apt-get install -y portaudio19-devpip install pyaudio``` This notebook requires the `torchaudio` library to be installed for MarbleNet. Please follow the instructions available at the [torchaudio Github page](https://github.com/pytorch/audioinstallation) to install the appropriate version of torchaudio.If you would like to install the latest version, please run the following command to install it:```conda install -c pytorch torchaudio```
###Code
import numpy as np
import pyaudio as pa
import os, time
import librosa
import IPython.display as ipd
import matplotlib.pyplot as plt
%matplotlib inline
import nemo
import nemo.collections.asr as nemo_asr
# sample rate, Hz
SAMPLE_RATE = 16000
###Output
_____no_output_____
###Markdown
Restore the model from NGC
###Code
vad_model = nemo_asr.models.EncDecClassificationModel.from_pretrained('vad_marblenet')
###Output
_____no_output_____
###Markdown
Observing the config of the model
###Code
from omegaconf import OmegaConf
import copy
# Preserve a copy of the full config
cfg = copy.deepcopy(vad_model._cfg)
print(OmegaConf.to_yaml(cfg))
###Output
_____no_output_____
###Markdown
Setup preprocessor with these settings
###Code
vad_model.preprocessor = vad_model.from_config_dict(cfg.preprocessor)
# Set model to inference mode
vad_model.eval();
vad_model = vad_model.to(vad_model.device)
###Output
_____no_output_____
###Markdown
We demonstrate two methods for streaming inference:1. [offline streaming inference (script)](Offline-streaming-inference)2. [online streaming inference (step-by-step)](Online-streaming-inference) Offline streaming inferenceVAD relies on shorter fixed-length segments for prediction. Duration inference, we generate frame-level prediction by two approaches:1. shift the window of length `time_length` (e.g. 0.63s) by `shift_length` (e.g. 10ms) to generate the frame and use the prediction of the window to represent the label for the frame; Use ```python /examples/asr/vad_infer.py``` This script will automatically split long audio file to avoid CUDA memory issue and performing **streaming** inside `AudioLabelDataset`. Posterior2. generate predictions with overlapping input segments. Then a smoothing filter is applied to decide the label for a frame spanned by multiple segments. Get frame level prediction from above step and use ```python/scripts/voice_activity_detection/vad_overlap_posterior.py```Have a look at [MarbleNet paper](https://arxiv.org/pdf/2010.13886.pdf) for choices about segment length, smoothing filter, etc. And play with those parameters with your data.You can also find posterior about converting frame level prediction to speech/no-speech segment in start and end times format in `vad_overlap_posterior.py`. FinetuneYou might need to finetune on your data for better performance. For finetuning/transfer learning, please refer to [**Transfer learning** part of ASR tutorial](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/asr/ASR_with_NeMo.ipynb) VAD postprocessing and Tuning threshold We can use a single **threshold** to binarize predictions or use typical VAD postpocessing including Binarization:1. **onset** and **offset** threshold for detecting the beginning and end of a speech;2. padding durations before (**pad_onset**) and after (**pad_offset**) each speech segment. Filtering:1. threshold for short speech segment deletion (**min_duration_on**);2. threshold for small silence deletion (**min_duration_off**);3. Whether to perform short speech segment deletion first (**filter_speech_first**).Of course you can do threshold tuning on frame level prediction. We also provide a script ```python/scripts/voice_activity_detection/vad_tune_threshold.py```to help you find best thresholds if you have ground truth label file in RTTM format. Online streaming inference Setting up data for Streaming Inference
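To make the first (window-shifting) approach above concrete, here is a toy sketch that slides a `time_length` window over a waveform in `shift_length` hops, producing one frame-level prediction per hop. It is illustrative only and is not the logic inside `vad_infer.py`.
```python
import numpy as np

def sliding_windows(audio, sr, time_length=0.63, shift_length=0.01):
    """Toy sketch: yield (start_time_sec, window) pairs for frame-level prediction."""
    win = int(time_length * sr)
    hop = int(shift_length * sr)
    for start in range(0, max(len(audio) - win, 0) + 1, hop):
        yield start / sr, audio[start:start + win]

# One second of (silent) 16 kHz audio -> 38 overlapping 0.63 s windows, 10 ms apart.
dummy = np.zeros(16000, dtype=np.float32)
print(sum(1 for _ in sliding_windows(dummy, 16000)))
```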
###Code
from nemo.core.classes import IterableDataset
from nemo.core.neural_types import NeuralType, AudioSignal, LengthsType
import torch
from torch.utils.data import DataLoader
# simple data layer to pass audio signal
class AudioDataLayer(IterableDataset):
@property
def output_types(self):
return {
'audio_signal': NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
'a_sig_length': NeuralType(tuple('B'), LengthsType()),
}
def __init__(self, sample_rate):
super().__init__()
self._sample_rate = sample_rate
self.output = True
def __iter__(self):
return self
def __next__(self):
if not self.output:
raise StopIteration
self.output = False
return torch.as_tensor(self.signal, dtype=torch.float32), \
torch.as_tensor(self.signal_shape, dtype=torch.int64)
def set_signal(self, signal):
self.signal = signal.astype(np.float32)/32768.
self.signal_shape = self.signal.size
self.output = True
def __len__(self):
return 1
data_layer = AudioDataLayer(sample_rate=cfg.train_ds.sample_rate)
data_loader = DataLoader(data_layer, batch_size=1, collate_fn=data_layer.collate_fn)
# inference method for audio signal (single instance)
def infer_signal(model, signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
logits = model.forward(input_signal=audio_signal, input_signal_length=audio_signal_len)
return logits
# class for streaming frame-based VAD
# 1) use reset() method to reset FrameVAD's state
# 2) call transcribe(frame) to do VAD on
# contiguous signal's frames
# To simplify the flow, we use a single threshold to binarize predictions.
class FrameVAD:
def __init__(self, model_definition,
threshold=0.5,
frame_len=2, frame_overlap=2.5,
offset=10):
'''
Args:
threshold: If prob of speech is larger than threshold, classify the segment to be speech.
frame_len: frame's duration, seconds
frame_overlap: duration of overlaps before and after current frame, seconds
offset: number of symbols to drop for smooth streaming
'''
self.vocab = list(model_definition['labels'])
self.vocab.append('_')
self.sr = model_definition['sample_rate']
self.threshold = threshold
self.frame_len = frame_len
self.n_frame_len = int(frame_len * self.sr)
self.frame_overlap = frame_overlap
self.n_frame_overlap = int(frame_overlap * self.sr)
timestep_duration = model_definition['AudioToMFCCPreprocessor']['window_stride']
for block in model_definition['JasperEncoder']['jasper']:
timestep_duration *= block['stride'][0] ** block['repeat']
self.buffer = np.zeros(shape=2*self.n_frame_overlap + self.n_frame_len,
dtype=np.float32)
self.offset = offset
self.reset()
def _decode(self, frame, offset=0):
assert len(frame)==self.n_frame_len
self.buffer[:-self.n_frame_len] = self.buffer[self.n_frame_len:]
self.buffer[-self.n_frame_len:] = frame
logits = infer_signal(vad_model, self.buffer).cpu().numpy()[0]
decoded = self._greedy_decoder(
self.threshold,
logits,
self.vocab
)
return decoded
@torch.no_grad()
def transcribe(self, frame=None):
if frame is None:
frame = np.zeros(shape=self.n_frame_len, dtype=np.float32)
if len(frame) < self.n_frame_len:
frame = np.pad(frame, [0, self.n_frame_len - len(frame)], 'constant')
unmerged = self._decode(frame, self.offset)
return unmerged
def reset(self):
'''
Reset frame_history and decoder's state
'''
self.buffer=np.zeros(shape=self.buffer.shape, dtype=np.float32)
self.prev_char = ''
@staticmethod
def _greedy_decoder(threshold, logits, vocab):
s = []
if logits.shape[0]:
probs = torch.softmax(torch.as_tensor(logits), dim=-1)
probas, _ = torch.max(probs, dim=-1)
probas_s = probs[1].item()
preds = 1 if probas_s >= threshold else 0
s = [preds, str(vocab[preds]), probs[0].item(), probs[1].item(), str(logits)]
return s
###Output
_____no_output_____
###Markdown
Streaming inference depends on a few factors, such as the frame length (STEP) and buffer size (WINDOW SIZE). Experiment with a few values to see their effects in the below cells.
###Code
STEP_LIST = [0.01,0.01]
WINDOW_SIZE_LIST = [0.31,0.15]
import wave
def offline_inference(wave_file, STEP = 0.025, WINDOW_SIZE = 0.5, threshold=0.5):
FRAME_LEN = STEP # infer every STEP seconds
CHANNELS = 1 # number of audio channels (expect mono signal)
RATE = 16000 # sample rate, Hz
CHUNK_SIZE = int(FRAME_LEN*RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=threshold,
frame_len=FRAME_LEN, frame_overlap = (WINDOW_SIZE-FRAME_LEN)/2,
offset=0)
wf = wave.open(wave_file, 'rb')
p = pa.PyAudio()
empty_counter = 0
preds = []
proba_b = []
proba_s = []
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
channels=CHANNELS,
rate=RATE,
output = True)
data = wf.readframes(CHUNK_SIZE)
while len(data) > 0:
data = wf.readframes(CHUNK_SIZE)
signal = np.frombuffer(data, dtype=np.int16)
result = vad.transcribe(signal)
preds.append(result[0])
proba_b.append(result[2])
proba_s.append(result[3])
if len(result):
print(result,end='\n')
empty_counter = 3
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='')
p.terminate()
vad.reset()
return preds, proba_b, proba_s
###Output
_____no_output_____
###Markdown
Here we show an example of online streaming inferenceYou can use your file or download the provided demo audio file.
###Code
demo_wave = 'VAD_demo.wav'
if not os.path.exists(demo_wave):
!wget "https://dldata-public.s3.us-east-2.amazonaws.com/VAD_demo.wav"
wave_file = demo_wave
CHANNELS = 1
RATE = 16000
audio, sample_rate = librosa.load(wave_file, sr=RATE)
dur = librosa.get_duration(y=audio, sr=sample_rate)  # pass sr so the duration of the 16 kHz signal is computed correctly
print(dur)
ipd.Audio(audio, rate=sample_rate)
threshold=0.4
results = []
for STEP, WINDOW_SIZE in zip(STEP_LIST, WINDOW_SIZE_LIST, ):
print(f'====== STEP is {STEP}s, WINDOW_SIZE is {WINDOW_SIZE}s ====== ')
preds, proba_b, proba_s = offline_inference(wave_file, STEP, WINDOW_SIZE, threshold)
results.append([STEP, WINDOW_SIZE, preds, proba_b, proba_s])
###Output
_____no_output_____
###Markdown
To simplify the flow, the above prediction is based on single threshold and `threshold=0.4`.You can play with other [threshold](VAD-postprocessing-and-Tuning-threshold) or use postprocessing and see how they would impact performance. **Note** if you want better performance, [finetune](Finetune) on your data and use posteriors such as [overlapped prediction](Posterior). Let's plot the prediction and melspectrogram
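For example, since `offline_inference` already returns the per-frame speech probabilities, the stored `proba_s` values can be re-binarized at several thresholds without re-running the model. A small sketch, assuming `results` from the cell above:
```python
# Hedged sketch: count how many frames would be labelled speech at different thresholds.
for thr in [0.3, 0.4, 0.5, 0.7]:
    for step, window, preds, proba_b, proba_s in results:
        n_speech = sum(1 for p in proba_s if p >= thr)
        print(f"threshold={thr}, STEP={step}, WINDOW_SIZE={window}: "
              f"{n_speech}/{len(proba_s)} frames classified as speech")
```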
###Code
import librosa.display
plt.figure(figsize=[20,10])
num = len(results)
for i in range(num):
len_pred = len(results[i][2])
FRAME_LEN = results[i][0]
ax1 = plt.subplot(num+1,1,i+1)
ax1.plot(np.arange(audio.size) / sample_rate, audio, 'b')
ax1.set_xlim([-0.01, int(dur)+1])
ax1.tick_params(axis='y', labelcolor= 'b')
ax1.set_ylabel('Signal')
ax1.set_ylim([-1, 1])
proba_s = results[i][4]
pred = [1 if p > threshold else 0 for p in proba_s]
ax2 = ax1.twinx()
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(pred) , 'r', label='pred')
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(proba_s) , 'g--', label='speech prob')
ax2.tick_params(axis='y', labelcolor='r')
legend = ax2.legend(loc='lower right', shadow=True)
ax1.set_ylabel('prediction')
ax2.set_title(f'step {results[i][0]}s, buffer size {results[i][1]}s')
ax2.set_ylabel('Preds and Probas')
ax = plt.subplot(num+1,1,i+2)
S = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=64, fmax=8000)
S_dB = librosa.power_to_db(S, ref=np.max)
librosa.display.specshow(S_dB, x_axis='time', y_axis='mel', sr=sample_rate, fmax=8000)
ax.set_title('Mel-frequency spectrogram')
ax.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Online streaming inference through microphone **Please note the VAD model is not perfect for various microphone input and you might need to finetune on your input and play with different parameters.**
###Code
STEP = 0.01
WINDOW_SIZE = 0.31
CHANNELS = 1
RATE = 16000
FRAME_LEN = STEP
THRESHOLD = 0.5
CHUNK_SIZE = int(STEP * RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=THRESHOLD,
frame_len=FRAME_LEN, frame_overlap=(WINDOW_SIZE - FRAME_LEN) / 2,
offset=0)
vad.reset()
p = pa.PyAudio()
print('Available audio input devices:')
input_devices = []
for i in range(p.get_device_count()):
dev = p.get_device_info_by_index(i)
if dev.get('maxInputChannels'):
input_devices.append(i)
print(i, dev.get('name'))
if len(input_devices):
dev_idx = -2
while dev_idx not in input_devices:
print('Please type input device ID:')
dev_idx = int(input())
empty_counter = 0
def callback(in_data, frame_count, time_info, status):
global empty_counter
signal = np.frombuffer(in_data, dtype=np.int16)
text = vad.transcribe(signal)
if len(text):
print(text,end='\n')
empty_counter = vad.offset
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='\n')
return (in_data, pa.paContinue)
stream = p.open(format=pa.paInt16,
channels=CHANNELS,
rate=SAMPLE_RATE,
input=True,
input_device_index=dev_idx,
stream_callback=callback,
frames_per_buffer=CHUNK_SIZE)
print('Listening...')
stream.start_stream()
# Interrupt kernel and then speak for a few more words to exit the pyaudio loop !
try:
while stream.is_active():
time.sleep(0.1)
finally:
stream.stop_stream()
stream.close()
p.terminate()
print()
print("PyAudio stopped")
else:
print('ERROR: No audio input device found.')
###Output
_____no_output_____
###Markdown
ONNX DeploymentYou can also export the model to ONNX file and deploy it to TensorRT or MS ONNX Runtime inference engines. If you don't have one installed yet, please run:
###Code
!pip install --upgrade onnxruntime # for gpu, use onnxruntime-gpu
# !mkdir -p ort
# %cd ort
# !git clone --depth 1 --branch v1.8.0 https://github.com/microsoft/onnxruntime.git .
# !./build.sh --skip_tests --config Release --build_shared_lib --parallel --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu --build_wheel
# !pip install ./build/Linux/Release/dist/onnxruntime*.whl
# %cd ..
###Output
_____no_output_____
###Markdown
Then just replace `infer_signal` implementation with this code:
###Code
import onnxruntime
vad_model.export('vad.onnx')
ort_session = onnxruntime.InferenceSession('vad.onnx')
def to_numpy(tensor):
return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
def infer_signal(signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
processed_signal, processed_signal_len = vad_model.preprocessor(
input_signal=audio_signal, length=audio_signal_len,
)
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(processed_signal), }
ologits = ort_session.run(None, ort_inputs)
alogits = np.asarray(ologits)
logits = torch.from_numpy(alogits[0])
return logits
###Output
_____no_output_____
###Markdown
Voice Activity Detection (VAD)This notebook demonstrates how to perform1. [offline streaming inference on audio files (offline VAD)](Offline-streaming-inference);2. [finetuning](Finetune) and use [posterior](Posterior);2. [vad postprocessing and threshold tuning](VAD-postprocessing-and-Tuning-threshold);4. [online streaming inference](Online-streaming-inference);3. [online streaming inference from a microphone's stream](Online-streaming-inference-through-microphone). The notebook requires PyAudio library to get a signal from an audio device.For Ubuntu, please run the following commands to install it:```sudo apt-get install -y portaudio19-devpip install pyaudio``` This notebook requires the `torchaudio` library to be installed for MarbleNet. Please follow the instructions available at the [torchaudio Github page](https://github.com/pytorch/audioinstallation) to install the appropriate version of torchaudio.If you would like to install the latest version, please run the following command to install it:```conda install -c pytorch torchaudio```
###Code
import numpy as np
import pyaudio as pa
import os, time
import librosa
import IPython.display as ipd
import matplotlib.pyplot as plt
%matplotlib inline
import nemo
import nemo.collections.asr as nemo_asr
# sample rate, Hz
SAMPLE_RATE = 16000
###Output
_____no_output_____
###Markdown
Restore the model from NGC
###Code
vad_model = nemo_asr.models.EncDecClassificationModel.from_pretrained('vad_marblenet')
###Output
_____no_output_____
###Markdown
Observing the config of the model
###Code
from omegaconf import OmegaConf
import copy
# Preserve a copy of the full config
cfg = copy.deepcopy(vad_model._cfg)
print(OmegaConf.to_yaml(cfg))
###Output
_____no_output_____
###Markdown
Setup preprocessor with these settings
###Code
vad_model.preprocessor = vad_model.from_config_dict(cfg.preprocessor)
# Set model to inference mode
vad_model.eval();
vad_model = vad_model.to(vad_model.device)
###Output
_____no_output_____
###Markdown
We demonstrate two methods for streaming inference:1. [offline streaming inference (script)](Offline-streaming-inference)2. [online streaming inference (step-by-step)](Online-streaming-inference) Offline streaming inferenceVAD relies on shorter fixed-length segments for prediction. You can find all necessary steps about inference in ```python Script: /examples/asr/speech_classification/vad_infer.py Config: /examples/asr/conf/VAD/vad_inference_postprocessing.yaml```Duration inference, we generate frame-level prediction by two approaches:1. shift the window of length `window_length_in_sec` (e.g. 0.63s) by `shift_length_in_sec` (e.g. 10ms) to generate the frame and use the prediction of the window to represent the label for the frame; Use ```python /examples/asr/speech_classification/vad_infer.py``` This script will automatically split long audio file to avoid CUDA memory issue and performing **streaming** inside `AudioLabelDataset`. Posterior2. generate predictions with overlapping input segments. Then a smoothing filter is applied to decide the label for a frame spanned by multiple segments. Perform this step alongside with above step with flag **gen_overlap_seq=True** or use```python/scripts/voice_activity_detection/vad_overlap_posterior.py```if you already have frame level prediction. Have a look at [MarbleNet paper](https://arxiv.org/pdf/2010.13886.pdf) for choices about segment length, smoothing filter, etc. And play with those parameters with your data.You can also find posterior about converting frame level prediction to speech/no-speech segment in start and end times format in `vad_overlap_posterior.py` or use flag **gen_seg_table=True** alongside with `vad_infer.py` FinetuneYou might need to finetune on your data for better performance. For finetuning/transfer learning, please refer to [**Transfer learning** part of ASR tutorial](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/asr/ASR_with_NeMo.ipynb) VAD postprocessing and Tuning threshold We can use a single **threshold** (achieved by onset=offset=0.5) to binarize predictions or use typical VAD postprocessing including Binarization:1. **onset** and **offset** threshold for detecting the beginning and end of a speech;2. padding durations before (**pad_onset**) and after (**pad_offset**) each speech segment. Filtering:1. threshold for short speech segment deletion (**min_duration_on**);2. threshold for small silence deletion (**min_duration_off**);3. Whether to perform short speech segment deletion first (**filter_speech_first**).Of course you can do threshold tuning on frame level prediction. We also provide a script ```python/scripts/voice_activity_detection/vad_tune_threshold.py```to help you find best thresholds if you have ground truth label file in RTTM format. Online streaming inference Setting up data for Streaming Inference
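Before moving on to streaming, here is a toy illustration of the overlapped-posterior idea from the Posterior section above (a simplified sketch only; the actual implementation is in `vad_overlap_posterior.py`): every short frame is covered by several overlapping windows, and mean smoothing averages the predictions of all windows that span it.
```python
import numpy as np

def mean_smooth(window_probs, time_length=0.63, shift_length=0.01):
    """Toy mean smoothing: average overlapping window probabilities per frame."""
    frames_per_window = int(round(time_length / shift_length))
    n_frames = len(window_probs) + frames_per_window - 1
    acc = np.zeros(n_frames)
    cnt = np.zeros(n_frames)
    for i, p in enumerate(window_probs):          # window i covers frames [i, i + frames_per_window)
        acc[i:i + frames_per_window] += p
        cnt[i:i + frames_per_window] += 1
    return acc / cnt

# Tiny example: three windows of 0.03 s shifted by 0.01 s -> per-frame averages 0.1, 0.5, 0.6, 0.85, 0.8
print(mean_smooth([0.1, 0.9, 0.8], time_length=0.03, shift_length=0.01))
```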
###Code
from nemo.core.classes import IterableDataset
from nemo.core.neural_types import NeuralType, AudioSignal, LengthsType
import torch
from torch.utils.data import DataLoader
# simple data layer to pass audio signal
class AudioDataLayer(IterableDataset):
@property
def output_types(self):
return {
'audio_signal': NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
'a_sig_length': NeuralType(tuple('B'), LengthsType()),
}
def __init__(self, sample_rate):
super().__init__()
self._sample_rate = sample_rate
self.output = True
def __iter__(self):
return self
def __next__(self):
if not self.output:
raise StopIteration
self.output = False
return torch.as_tensor(self.signal, dtype=torch.float32), \
torch.as_tensor(self.signal_shape, dtype=torch.int64)
def set_signal(self, signal):
self.signal = signal.astype(np.float32)/32768.
self.signal_shape = self.signal.size
self.output = True
def __len__(self):
return 1
data_layer = AudioDataLayer(sample_rate=cfg.train_ds.sample_rate)
data_loader = DataLoader(data_layer, batch_size=1, collate_fn=data_layer.collate_fn)
# inference method for audio signal (single instance)
def infer_signal(model, signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
logits = model.forward(input_signal=audio_signal, input_signal_length=audio_signal_len)
return logits
# class for streaming frame-based VAD
# 1) use reset() method to reset FrameVAD's state
# 2) call transcribe(frame) to do VAD on
# contiguous signal's frames
# To simplify the flow, we use a single threshold to binarize predictions.
class FrameVAD:
def __init__(self, model_definition,
threshold=0.5,
frame_len=2, frame_overlap=2.5,
offset=10):
'''
Args:
threshold: If prob of speech is larger than threshold, classify the segment to be speech.
frame_len: frame's duration, seconds
frame_overlap: duration of overlaps before and after current frame, seconds
offset: number of symbols to drop for smooth streaming
'''
self.vocab = list(model_definition['labels'])
self.vocab.append('_')
self.sr = model_definition['sample_rate']
self.threshold = threshold
self.frame_len = frame_len
self.n_frame_len = int(frame_len * self.sr)
self.frame_overlap = frame_overlap
self.n_frame_overlap = int(frame_overlap * self.sr)
timestep_duration = model_definition['AudioToMFCCPreprocessor']['window_stride']
for block in model_definition['JasperEncoder']['jasper']:
timestep_duration *= block['stride'][0] ** block['repeat']
self.buffer = np.zeros(shape=2*self.n_frame_overlap + self.n_frame_len,
dtype=np.float32)
self.offset = offset
self.reset()
def _decode(self, frame, offset=0):
assert len(frame)==self.n_frame_len
self.buffer[:-self.n_frame_len] = self.buffer[self.n_frame_len:]
self.buffer[-self.n_frame_len:] = frame
logits = infer_signal(vad_model, self.buffer).cpu().numpy()[0]
decoded = self._greedy_decoder(
self.threshold,
logits,
self.vocab
)
return decoded
@torch.no_grad()
def transcribe(self, frame=None):
if frame is None:
frame = np.zeros(shape=self.n_frame_len, dtype=np.float32)
if len(frame) < self.n_frame_len:
frame = np.pad(frame, [0, self.n_frame_len - len(frame)], 'constant')
unmerged = self._decode(frame, self.offset)
return unmerged
def reset(self):
'''
Reset frame_history and decoder's state
'''
self.buffer=np.zeros(shape=self.buffer.shape, dtype=np.float32)
self.prev_char = ''
@staticmethod
def _greedy_decoder(threshold, logits, vocab):
s = []
if logits.shape[0]:
probs = torch.softmax(torch.as_tensor(logits), dim=-1)
probas, _ = torch.max(probs, dim=-1)
probas_s = probs[1].item()
preds = 1 if probas_s >= threshold else 0
s = [preds, str(vocab[preds]), probs[0].item(), probs[1].item(), str(logits)]
return s
###Output
_____no_output_____
###Markdown
Streaming inference depends on a few factors, such as the frame length (STEP) and buffer size (WINDOW SIZE). Experiment with a few values to see their effects in the below cells.
###Code
STEP_LIST = [0.01,0.01]
WINDOW_SIZE_LIST = [0.31,0.15]
import wave
def offline_inference(wave_file, STEP = 0.025, WINDOW_SIZE = 0.5, threshold=0.5):
FRAME_LEN = STEP # infer every STEP seconds
CHANNELS = 1 # number of audio channels (expect mono signal)
RATE = 16000 # sample rate, Hz
CHUNK_SIZE = int(FRAME_LEN*RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=threshold,
frame_len=FRAME_LEN, frame_overlap = (WINDOW_SIZE-FRAME_LEN)/2,
offset=0)
wf = wave.open(wave_file, 'rb')
p = pa.PyAudio()
empty_counter = 0
preds = []
proba_b = []
proba_s = []
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
channels=CHANNELS,
rate=RATE,
output = True)
data = wf.readframes(CHUNK_SIZE)
while len(data) > 0:
data = wf.readframes(CHUNK_SIZE)
signal = np.frombuffer(data, dtype=np.int16)
result = vad.transcribe(signal)
preds.append(result[0])
proba_b.append(result[2])
proba_s.append(result[3])
if len(result):
print(result,end='\n')
empty_counter = 3
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='')
p.terminate()
vad.reset()
return preds, proba_b, proba_s
###Output
_____no_output_____
###Markdown
Here we show an example of online streaming inferenceYou can use your file or download the provided demo audio file.
###Code
demo_wave = 'VAD_demo.wav'
if not os.path.exists(demo_wave):
!wget "https://dldata-public.s3.us-east-2.amazonaws.com/VAD_demo.wav"
wave_file = demo_wave
CHANNELS = 1
RATE = 16000
audio, sample_rate = librosa.load(wave_file, sr=RATE)
dur = librosa.get_duration(y=audio, sr=sample_rate)  # pass sr so the duration of the 16 kHz signal is computed correctly
print(dur)
ipd.Audio(audio, rate=sample_rate)
threshold=0.4
results = []
for STEP, WINDOW_SIZE in zip(STEP_LIST, WINDOW_SIZE_LIST, ):
print(f'====== STEP is {STEP}s, WINDOW_SIZE is {WINDOW_SIZE}s ====== ')
preds, proba_b, proba_s = offline_inference(wave_file, STEP, WINDOW_SIZE, threshold)
results.append([STEP, WINDOW_SIZE, preds, proba_b, proba_s])
###Output
_____no_output_____
###Markdown
To simplify the flow, the above prediction is based on single threshold and `threshold=0.4`.You can play with other [threshold](VAD-postprocessing-and-Tuning-threshold) or use postprocessing and see how they would impact performance. **Note** if you want better performance, [finetune](Finetune) on your data and use posteriors such as [overlapped prediction](Posterior). Let's plot the prediction and melspectrogram
###Code
import librosa.display
plt.figure(figsize=[20,10])
num = len(results)
for i in range(num):
len_pred = len(results[i][2])
FRAME_LEN = results[i][0]
ax1 = plt.subplot(num+1,1,i+1)
ax1.plot(np.arange(audio.size) / sample_rate, audio, 'b')
ax1.set_xlim([-0.01, int(dur)+1])
ax1.tick_params(axis='y', labelcolor= 'b')
ax1.set_ylabel('Signal')
ax1.set_ylim([-1, 1])
proba_s = results[i][4]
pred = [1 if p > threshold else 0 for p in proba_s]
ax2 = ax1.twinx()
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(pred) , 'r', label='pred')
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(proba_s) , 'g--', label='speech prob')
ax2.tick_params(axis='y', labelcolor='r')
legend = ax2.legend(loc='lower right', shadow=True)
ax1.set_ylabel('prediction')
ax2.set_title(f'step {results[i][0]}s, buffer size {results[i][1]}s')
ax2.set_ylabel('Preds and Probas')
ax = plt.subplot(num+1,1,i+2)
S = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=64, fmax=8000)
S_dB = librosa.power_to_db(S, ref=np.max)
librosa.display.specshow(S_dB, x_axis='time', y_axis='mel', sr=sample_rate, fmax=8000)
ax.set_title('Mel-frequency spectrogram')
ax.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Online streaming inference through microphone **Please note the VAD model is not perfect for various microphone input and you might need to finetune on your input and play with different parameters.**
###Code
STEP = 0.01
WINDOW_SIZE = 0.31
CHANNELS = 1
RATE = 16000
FRAME_LEN = STEP
THRESHOLD = 0.5
CHUNK_SIZE = int(STEP * RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=THRESHOLD,
frame_len=FRAME_LEN, frame_overlap=(WINDOW_SIZE - FRAME_LEN) / 2,
offset=0)
vad.reset()
p = pa.PyAudio()
print('Available audio input devices:')
input_devices = []
for i in range(p.get_device_count()):
dev = p.get_device_info_by_index(i)
if dev.get('maxInputChannels'):
input_devices.append(i)
print(i, dev.get('name'))
if len(input_devices):
dev_idx = -2
while dev_idx not in input_devices:
print('Please type input device ID:')
dev_idx = int(input())
empty_counter = 0
def callback(in_data, frame_count, time_info, status):
global empty_counter
signal = np.frombuffer(in_data, dtype=np.int16)
text = vad.transcribe(signal)
if len(text):
print(text,end='\n')
empty_counter = vad.offset
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='\n')
return (in_data, pa.paContinue)
stream = p.open(format=pa.paInt16,
channels=CHANNELS,
rate=SAMPLE_RATE,
input=True,
input_device_index=dev_idx,
stream_callback=callback,
frames_per_buffer=CHUNK_SIZE)
print('Listening...')
stream.start_stream()
# Interrupt kernel and then speak for a few more words to exit the pyaudio loop !
try:
while stream.is_active():
time.sleep(0.1)
finally:
stream.stop_stream()
stream.close()
p.terminate()
print()
print("PyAudio stopped")
else:
print('ERROR: No audio input device found.')
###Output
_____no_output_____
###Markdown
ONNX DeploymentYou can also export the model to ONNX file and deploy it to TensorRT or MS ONNX Runtime inference engines. If you don't have one installed yet, please run:
###Code
!pip install --upgrade onnxruntime # for gpu, use onnxruntime-gpu
# !mkdir -p ort
# %cd ort
# !git clone --depth 1 --branch v1.8.0 https://github.com/microsoft/onnxruntime.git .
# !./build.sh --skip_tests --config Release --build_shared_lib --parallel --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu --build_wheel
# !pip install ./build/Linux/Release/dist/onnxruntime*.whl
# %cd ..
###Output
_____no_output_____
###Markdown
Then just replace `infer_signal` implementation with this code:
###Code
import onnxruntime
vad_model.export('vad.onnx')
ort_session = onnxruntime.InferenceSession('vad.onnx')
def to_numpy(tensor):
return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
def infer_signal(signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
processed_signal, processed_signal_len = vad_model.preprocessor(
input_signal=audio_signal, length=audio_signal_len,
)
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(processed_signal), }
ologits = ort_session.run(None, ort_inputs)
alogits = np.asarray(ologits)
logits = torch.from_numpy(alogits[0])
return logits
###Output
_____no_output_____
###Markdown
Voice Activity Detection (VAD)This notebook demonstrates how to perform1. [offline streaming inference on audio files (offline VAD)](Offline-streaming-inference);2. [finetuning](Finetune) and use [posterior](Posterior);2. [vad postprocessing and threshold tuning](VAD-postprocessing-and-Tuning-threshold);4. [online streaming inference](Online-streaming-inference);3. [online streaming inference from a microphone's stream](Online-streaming-inference-through-microphone). The notebook requires PyAudio library to get a signal from an audio device.For Ubuntu, please run the following commands to install it:```sudo apt-get install -y portaudio19-devpip install pyaudio``` This notebook requires the `torchaudio` library to be installed for MarbleNet. Please follow the instructions available at the [torchaudio Github page](https://github.com/pytorch/audioinstallation) to install the appropriate version of torchaudio.If you would like to install the latest version, please run the following command to install it:```conda install -c pytorch torchaudio```
###Code
import numpy as np
import pyaudio as pa
import os, time
import librosa
import IPython.display as ipd
import matplotlib.pyplot as plt
%matplotlib inline
import nemo
import nemo.collections.asr as nemo_asr
# sample rate, Hz
SAMPLE_RATE = 16000
###Output
_____no_output_____
###Markdown
Restore the model from NGC
###Code
vad_model = nemo_asr.models.EncDecClassificationModel.from_pretrained('vad_marblenet')
###Output
_____no_output_____
###Markdown
Observing the config of the model
###Code
from omegaconf import OmegaConf
import copy
# Preserve a copy of the full config
cfg = copy.deepcopy(vad_model._cfg)
print(OmegaConf.to_yaml(cfg))
###Output
_____no_output_____
###Markdown
Setup preprocessor with these settings
###Code
vad_model.preprocessor = vad_model.from_config_dict(cfg.preprocessor)
# Set model to inference mode
vad_model.eval();
vad_model = vad_model.to(vad_model.device)
###Output
_____no_output_____
###Markdown
We demonstrate two methods for streaming inference:1. [offline streaming inference (script)](Offline-streaming-inference)2. [online streaming inference (step-by-step)](Online-streaming-inference) Offline streaming inferenceVAD relies on shorter fixed-length segments for prediction. You can find all necessary steps about inference in ```python Script: /examples/asr/speech_classification/vad_infer.py Config: /examples/asr/conf/vad/vad_inference_postprocessing.yaml```Duration inference, we generate frame-level prediction by two approaches:1. shift the window of length `window_length_in_sec` (e.g. 0.63s) by `shift_length_in_sec` (e.g. 10ms) to generate the frame and use the prediction of the window to represent the label for the frame; Use ```python /examples/asr/speech_classification/vad_infer.py``` This script will automatically split long audio file to avoid CUDA memory issue and performing **streaming** inside `AudioLabelDataset`. Posterior2. generate predictions with overlapping input segments. Then a smoothing filter is applied to decide the label for a frame spanned by multiple segments. Perform this step alongside with above step with flag **gen_overlap_seq=True** or use```python/scripts/voice_activity_detection/vad_overlap_posterior.py```if you already have frame level prediction. Have a look at [MarbleNet paper](https://arxiv.org/pdf/2010.13886.pdf) for choices about segment length, smoothing filter, etc. And play with those parameters with your data.You can also find posterior about converting frame level prediction to speech/no-speech segment in start and end times format in `vad_overlap_posterior.py` or use flag **gen_seg_table=True** alongside with `vad_infer.py` FinetuneYou might need to finetune on your data for better performance. For finetuning/transfer learning, please refer to [**Transfer learning** part of ASR tutorial](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/asr/ASR_with_NeMo.ipynb) VAD postprocessing and Tuning threshold We can use a single **threshold** (achieved by onset=offset=0.5) to binarize predictions or use typical VAD postprocessing including Binarization:1. **onset** and **offset** threshold for detecting the beginning and end of a speech;2. padding durations before (**pad_onset**) and after (**pad_offset**) each speech segment. Filtering:1. threshold for short speech segment deletion (**min_duration_on**);2. threshold for small silence deletion (**min_duration_off**);3. Whether to perform short speech segment deletion first (**filter_speech_first**).Of course you can do threshold tuning on frame level prediction. We also provide a script ```python/scripts/voice_activity_detection/vad_tune_threshold.py```to help you find best thresholds if you have ground truth label file in RTTM format. Online streaming inference Setting up data for Streaming Inference
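As a rough illustration of the binarization and filtering parameters listed above (a simplified sketch only, not NeMo's postprocessing code): open a segment when the speech probability rises above `onset`, close it when it falls below `offset`, then drop segments shorter than `min_duration_on`. The parameter values below are illustrative, not NeMo's defaults.
```python
def binarize(probs, frame_dur=0.01, onset=0.5, offset=0.3, min_duration_on=0.1):
    """Toy sketch of onset/offset binarization with short-segment filtering."""
    segments, start, in_speech = [], 0.0, False
    for i, p in enumerate(probs):
        t = i * frame_dur
        if not in_speech and p >= onset:
            in_speech, start = True, t
        elif in_speech and p < offset:
            in_speech = False
            segments.append((start, t))
    if in_speech:
        segments.append((start, len(probs) * frame_dur))
    return [(round(s, 2), round(e, 2)) for s, e in segments if e - s >= min_duration_on]

# A 0.03 s blip is filtered out; the 0.15 s stretch of high probability survives.
print(binarize([0.1, 0.6, 0.7, 0.4, 0.2] + [0.9] * 15 + [0.1]))  # -> [(0.05, 0.2)]
```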
###Code
from nemo.core.classes import IterableDataset
from nemo.core.neural_types import NeuralType, AudioSignal, LengthsType
import torch
from torch.utils.data import DataLoader
# simple data layer to pass audio signal
class AudioDataLayer(IterableDataset):
@property
def output_types(self):
return {
'audio_signal': NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
'a_sig_length': NeuralType(tuple('B'), LengthsType()),
}
def __init__(self, sample_rate):
super().__init__()
self._sample_rate = sample_rate
self.output = True
def __iter__(self):
return self
def __next__(self):
if not self.output:
raise StopIteration
self.output = False
return torch.as_tensor(self.signal, dtype=torch.float32), \
torch.as_tensor(self.signal_shape, dtype=torch.int64)
def set_signal(self, signal):
self.signal = signal.astype(np.float32)/32768.
self.signal_shape = self.signal.size
self.output = True
def __len__(self):
return 1
data_layer = AudioDataLayer(sample_rate=cfg.train_ds.sample_rate)
data_loader = DataLoader(data_layer, batch_size=1, collate_fn=data_layer.collate_fn)
# inference method for audio signal (single instance)
def infer_signal(model, signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
logits = model.forward(input_signal=audio_signal, input_signal_length=audio_signal_len)
return logits
# class for streaming frame-based VAD
# 1) use reset() method to reset FrameVAD's state
# 2) call transcribe(frame) to do VAD on
# contiguous signal's frames
# To simplify the flow, we use a single threshold to binarize predictions.
class FrameVAD:
def __init__(self, model_definition,
threshold=0.5,
frame_len=2, frame_overlap=2.5,
offset=10):
'''
Args:
threshold: If prob of speech is larger than threshold, classify the segment to be speech.
frame_len: frame's duration, seconds
frame_overlap: duration of overlaps before and after current frame, seconds
offset: number of symbols to drop for smooth streaming
'''
self.vocab = list(model_definition['labels'])
self.vocab.append('_')
self.sr = model_definition['sample_rate']
self.threshold = threshold
self.frame_len = frame_len
self.n_frame_len = int(frame_len * self.sr)
self.frame_overlap = frame_overlap
self.n_frame_overlap = int(frame_overlap * self.sr)
timestep_duration = model_definition['AudioToMFCCPreprocessor']['window_stride']
for block in model_definition['JasperEncoder']['jasper']:
timestep_duration *= block['stride'][0] ** block['repeat']
self.buffer = np.zeros(shape=2*self.n_frame_overlap + self.n_frame_len,
dtype=np.float32)
self.offset = offset
self.reset()
def _decode(self, frame, offset=0):
assert len(frame)==self.n_frame_len
self.buffer[:-self.n_frame_len] = self.buffer[self.n_frame_len:]
self.buffer[-self.n_frame_len:] = frame
logits = infer_signal(vad_model, self.buffer).cpu().numpy()[0]
decoded = self._greedy_decoder(
self.threshold,
logits,
self.vocab
)
return decoded
@torch.no_grad()
def transcribe(self, frame=None):
if frame is None:
frame = np.zeros(shape=self.n_frame_len, dtype=np.float32)
if len(frame) < self.n_frame_len:
frame = np.pad(frame, [0, self.n_frame_len - len(frame)], 'constant')
unmerged = self._decode(frame, self.offset)
return unmerged
def reset(self):
'''
Reset frame_history and decoder's state
'''
self.buffer=np.zeros(shape=self.buffer.shape, dtype=np.float32)
self.prev_char = ''
@staticmethod
def _greedy_decoder(threshold, logits, vocab):
s = []
if logits.shape[0]:
probs = torch.softmax(torch.as_tensor(logits), dim=-1)
probas, _ = torch.max(probs, dim=-1)
probas_s = probs[1].item()
preds = 1 if probas_s >= threshold else 0
s = [preds, str(vocab[preds]), probs[0].item(), probs[1].item(), str(logits)]
return s
###Output
_____no_output_____
###Markdown
Streaming inference depends on a few factors, such as the frame length (STEP) and buffer size (WINDOW SIZE). Experiment with a few values to see their effects in the below cells.
###Code
STEP_LIST = [0.01,0.01]
WINDOW_SIZE_LIST = [0.31,0.15]
import wave
def offline_inference(wave_file, STEP = 0.025, WINDOW_SIZE = 0.5, threshold=0.5):
FRAME_LEN = STEP # infer every STEP seconds
CHANNELS = 1 # number of audio channels (expect mono signal)
RATE = 16000 # sample rate, Hz
CHUNK_SIZE = int(FRAME_LEN*RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=threshold,
frame_len=FRAME_LEN, frame_overlap = (WINDOW_SIZE-FRAME_LEN)/2,
offset=0)
wf = wave.open(wave_file, 'rb')
p = pa.PyAudio()
empty_counter = 0
preds = []
proba_b = []
proba_s = []
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
channels=CHANNELS,
rate=RATE,
output = True)
data = wf.readframes(CHUNK_SIZE)
while len(data) > 0:
data = wf.readframes(CHUNK_SIZE)
signal = np.frombuffer(data, dtype=np.int16)
result = vad.transcribe(signal)
preds.append(result[0])
proba_b.append(result[2])
proba_s.append(result[3])
if len(result):
print(result,end='\n')
empty_counter = 3
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='')
p.terminate()
vad.reset()
return preds, proba_b, proba_s
###Output
_____no_output_____
###Markdown
Here we show an example of online streaming inferenceYou can use your file or download the provided demo audio file.
###Code
demo_wave = 'VAD_demo.wav'
if not os.path.exists(demo_wave):
!wget "https://dldata-public.s3.us-east-2.amazonaws.com/VAD_demo.wav"
wave_file = demo_wave
CHANNELS = 1
RATE = 16000
audio, sample_rate = librosa.load(wave_file, sr=RATE)
dur = librosa.get_duration(y=audio, sr=sample_rate)  # pass sr so the duration of the 16 kHz signal is computed correctly
print(dur)
ipd.Audio(audio, rate=sample_rate)
threshold=0.4
results = []
for STEP, WINDOW_SIZE in zip(STEP_LIST, WINDOW_SIZE_LIST, ):
print(f'====== STEP is {STEP}s, WINDOW_SIZE is {WINDOW_SIZE}s ====== ')
preds, proba_b, proba_s = offline_inference(wave_file, STEP, WINDOW_SIZE, threshold)
results.append([STEP, WINDOW_SIZE, preds, proba_b, proba_s])
###Output
_____no_output_____
###Markdown
To simplify the flow, the above prediction is based on single threshold and `threshold=0.4`.You can play with other [threshold](VAD-postprocessing-and-Tuning-threshold) or use postprocessing and see how they would impact performance. **Note** if you want better performance, [finetune](Finetune) on your data and use posteriors such as [overlapped prediction](Posterior). Let's plot the prediction and melspectrogram
###Code
import librosa.display
plt.figure(figsize=[20,10])
num = len(results)
for i in range(num):
len_pred = len(results[i][2])
FRAME_LEN = results[i][0]
ax1 = plt.subplot(num+1,1,i+1)
ax1.plot(np.arange(audio.size) / sample_rate, audio, 'b')
ax1.set_xlim([-0.01, int(dur)+1])
ax1.tick_params(axis='y', labelcolor= 'b')
ax1.set_ylabel('Signal')
ax1.set_ylim([-1, 1])
proba_s = results[i][4]
pred = [1 if p > threshold else 0 for p in proba_s]
ax2 = ax1.twinx()
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(pred) , 'r', label='pred')
ax2.plot(np.arange(len_pred)/(1/results[i][0]), np.array(proba_s) , 'g--', label='speech prob')
ax2.tick_params(axis='y', labelcolor='r')
legend = ax2.legend(loc='lower right', shadow=True)
ax1.set_ylabel('prediction')
ax2.set_title(f'step {results[i][0]}s, buffer size {results[i][1]}s')
ax2.set_ylabel('Preds and Probas')
ax = plt.subplot(num+1,1,i+2)
S = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=64, fmax=8000)
S_dB = librosa.power_to_db(S, ref=np.max)
librosa.display.specshow(S_dB, x_axis='time', y_axis='mel', sr=sample_rate, fmax=8000)
ax.set_title('Mel-frequency spectrogram')
ax.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Online streaming inference through microphone **Please note the VAD model is not perfect for various microphone input and you might need to finetune on your input and play with different parameters.**
###Code
STEP = 0.01
WINDOW_SIZE = 0.31
CHANNELS = 1
RATE = 16000
FRAME_LEN = STEP
THRESHOLD = 0.5
CHUNK_SIZE = int(STEP * RATE)
vad = FrameVAD(model_definition = {
'sample_rate': SAMPLE_RATE,
'AudioToMFCCPreprocessor': cfg.preprocessor,
'JasperEncoder': cfg.encoder,
'labels': cfg.labels
},
threshold=THRESHOLD,
frame_len=FRAME_LEN, frame_overlap=(WINDOW_SIZE - FRAME_LEN) / 2,
offset=0)
vad.reset()
p = pa.PyAudio()
print('Available audio input devices:')
input_devices = []
for i in range(p.get_device_count()):
dev = p.get_device_info_by_index(i)
if dev.get('maxInputChannels'):
input_devices.append(i)
print(i, dev.get('name'))
if len(input_devices):
dev_idx = -2
while dev_idx not in input_devices:
print('Please type input device ID:')
dev_idx = int(input())
empty_counter = 0
def callback(in_data, frame_count, time_info, status):
global empty_counter
signal = np.frombuffer(in_data, dtype=np.int16)
text = vad.transcribe(signal)
if len(text):
print(text,end='\n')
empty_counter = vad.offset
elif empty_counter > 0:
empty_counter -= 1
if empty_counter == 0:
print(' ',end='\n')
return (in_data, pa.paContinue)
stream = p.open(format=pa.paInt16,
channels=CHANNELS,
rate=SAMPLE_RATE,
input=True,
input_device_index=dev_idx,
stream_callback=callback,
frames_per_buffer=CHUNK_SIZE)
print('Listening...')
stream.start_stream()
# Interrupt kernel and then speak for a few more words to exit the pyaudio loop !
try:
while stream.is_active():
time.sleep(0.1)
finally:
stream.stop_stream()
stream.close()
p.terminate()
print()
print("PyAudio stopped")
else:
print('ERROR: No audio input device found.')
###Output
_____no_output_____
###Markdown
ONNX DeploymentYou can also export the model to ONNX file and deploy it to TensorRT or MS ONNX Runtime inference engines. If you don't have one installed yet, please run:
###Code
!pip install --upgrade onnxruntime # for gpu, use onnxruntime-gpu
# !mkdir -p ort
# %cd ort
# !git clone --depth 1 --branch v1.8.0 https://github.com/microsoft/onnxruntime.git .
# !./build.sh --skip_tests --config Release --build_shared_lib --parallel --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu --build_wheel
# !pip install ./build/Linux/Release/dist/onnxruntime*.whl
# %cd ..
###Output
_____no_output_____
###Markdown
Then just replace `infer_signal` implementation with this code:
###Code
import onnxruntime
vad_model.export('vad.onnx')
ort_session = onnxruntime.InferenceSession('vad.onnx')
def to_numpy(tensor):
return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
def infer_signal(signal):
data_layer.set_signal(signal)
batch = next(iter(data_loader))
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(vad_model.device), audio_signal_len.to(vad_model.device)
processed_signal, processed_signal_len = vad_model.preprocessor(
input_signal=audio_signal, length=audio_signal_len,
)
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(processed_signal), }
ologits = ort_session.run(None, ort_inputs)
alogits = np.asarray(ologits)
logits = torch.from_numpy(alogits[0])
return logits
###Output
_____no_output_____ |
Ex2/Ex2.ipynb | ###Markdown
Use .apply() to build a new feature with the counts for each of the selected_words
###Code
selected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']
products['word_count'] = graphlab.text_analytics.count_words(products['review'])
products.head()
products['word_count'][0]
def awesome_count(dict):
if 'awesome' in dict:
return dict['awesome']
else:
return 0
products['awesome'] = products['word_count'].apply(awesome_count)
products.head()
products.tail()
# has a bug: the last line builds a tuple and calls .apply() on the string `word`, not on the SArray
# def word_count(dict, word):
# if word in dict:
# return dict[word]
# else:
# return 0
# for word in selected_words:
# products[word] = products['word_count'], word.apply(word_count)
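# A working version of the commented loop above could bind each word via a default
# argument (kept commented out here, since the explicit per-word functions below
# already build the same columns):
# for word in selected_words:
#     products[word] = products['word_count'].apply(lambda counts, w=word: counts.get(w, 0))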
def great_count(dict):
if 'great' in dict:
return dict['great']
else:
return 0
products['great'] = products['word_count'].apply(great_count)
def fantastic_count(dict):
if 'fantastic' in dict:
return dict['fantastic']
else:
return 0
products['fantastic'] = products['word_count'].apply(fantastic_count)
def amazing_count(dict):
if 'amazing' in dict:
return dict['amazing']
else:
return 0
products['amazing'] = products['word_count'].apply(amazing_count)
def love_count(dict):
if 'love' in dict:
return dict['love']
else:
return 0
products['love'] = products['word_count'].apply(love_count)
def horrible_count(dict):
if 'horrible' in dict:
return dict['horrible']
else:
return 0
products['horrible'] = products['word_count'].apply(horrible_count)
def bad_count(dict):
if 'bad' in dict:
return dict['bad']
else:
return 0
products['bad'] = products['word_count'].apply(bad_count)
def terrible_count(dict):
if 'terrible' in dict:
return dict['terrible']
else:
return 0
products['terrible'] = products['word_count'].apply(terrible_count)
def awful_count(dict):
if 'awful' in dict:
return dict['awful']
else:
return 0
products['awful'] = products['word_count'].apply(awful_count)
def wow_count(dict):
if 'wow' in dict:
return dict['wow']
else:
return 0
products['wow'] = products['word_count'].apply(wow_count)
def hate_count(dict):
if 'hate' in dict:
return dict['hate']
else:
return 0
products['hate'] = products['word_count'].apply(hate_count)
products.head()
for word in selected_words:
print word
print products[word].sum()
###Output
awesome
2090
great
45206
fantastic
932
amazing
1363
love
42065
horrible
734
bad
3724
terrible
748
awful
383
wow
144
hate
1220
###Markdown
Create a new sentiment analysis model using only the selected_words as features
###Code
products = products[products['rating'] != 3]
products['sentiment'] = products['rating'] >= 4
train_data,test_data = products.random_split(.8, seed=0)
products.head()
selected_words_model = graphlab.logistic_classifier.create(train_data, target='sentiment', features=selected_words, validation_set=test_data)
selected_words_model['coefficients']
selected_words_model['coefficients'].tail()
###Output
_____no_output_____
###Markdown
Comparing the accuracy of different sentiment analysis models
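A handy reference point when comparing accuracies is the majority-class baseline, i.e. the accuracy obtained by always predicting the most common sentiment. A small sketch, assuming the `test_data` split created above:
```python
# Majority-class baseline: the data set is heavily skewed toward positive reviews.
positive_fraction = test_data['sentiment'].mean()
print(max(positive_fraction, 1 - positive_fraction))
```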
###Code
selected_words_model.evaluate(test_data)
###Output
_____no_output_____
###Markdown
Interpreting the difference in performance between the models
###Code
diaper_champ_reviews = products[products['name'] == 'Baby Trend Diaper Champ']
diaper_champ_reviews['predicted_sentiment'] = selected_words_model.predict(diaper_champ_reviews, output_type='probability')
diaper_champ_reviews.head()
selected_words_model.predict(diaper_champ_reviews[0:1], output_type='probability')
###Output
_____no_output_____ |
1_Linear_Regression/04_Regression_TF_2_0.ipynb | ###Markdown
TensorFlow Regression Example Creating Data
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# 1 Million Points
x_data = np.linspace(0.0,10.0,1000000)
noise = np.random.randn(len(x_data))
# y = m*x + b + noise
b = 15
y_true = (2.5 * x_data) + b + noise
sample_indx = np.random.randint(len(x_data),size=(250))
plt.plot(x_data[sample_indx],y_true[sample_indx],'*')
###Output
_____no_output_____
###Markdown
Tensorflow 2.0
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
BATCH_SIZE = 1000
BATCHS = 10000
display_step = 1000
learning_rate = 0.001
w = tf.Variable(initial_value=0.)
b = tf.Variable(initial_value=0.)
def next_batch(x_data, batch_size):
    # draw a random mini-batch of (x, y) pairs
    batch_index = np.random.randint(len(x_data), size=batch_size)
    x_train = x_data[batch_index]
    y_train = y_true[batch_index]
    return x_train, y_train
optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate)
x_train, y_train = next_batch(x_data, BATCH_SIZE)
y_train.reshape((-1,1)).shape
x_train.reshape((-1,1)).shape
for step in range(BATCHS):
x_train, y_train = next_batch(x_data, BATCH_SIZE)
with tf.GradientTape() as tape:
y_pred = w * x_train + b
# loss = tf.reduce_sum(tf.square(y_pred - y_train))/(BATCH_SIZE)
loss = tf.reduce_mean(tf.square(y_pred - y_train))
grads = tape.gradient(loss, [w, b])
optimizer.apply_gradients(grads_and_vars=zip(grads,[w,b]))
if (step + 1) % display_step == 0 or step == 0:
print("Step : {}, loss : {} , w : {}, b : {}".format(step, loss.numpy(), w.numpy(), b.numpy()))
plt.plot(x_data[sample_indx],y_true[sample_indx],'*')
plt.plot(x_data, w.numpy()*x_data+b.numpy(),'r')
###Output
_____no_output_____
###Markdown
Using Keras Models
###Code
class LinearModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.dense = tf.keras.layers.Dense(
units=1)
# kernel_initializer=tf.zeros_initializer(),
# bias_initializer = tf.zeros_initializer())
def call(self,input):
output = self.dense(input)
return output
model = LinearModel()
optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate)
for step in range(BATCHS):
x_train, y_train = next_batch(x_data, BATCH_SIZE)
x_train = x_train.reshape((-1,1))
y_train = y_train.reshape((-1,1))
with tf.GradientTape() as tape:
y_pred = model(x_train)
loss = tf.reduce_mean((y_pred - y_train)**2)
grads = tape.gradient(loss, model.variables)
optimizer.apply_gradients(zip(grads, model.variables))
if step%1000 == 0:
print("Step: {} loss: {}".format(step,loss.numpy()))
w,b = model.variables
plt.plot(x_data[sample_indx],y_true[sample_indx],'*')
plt.plot(x_data, w.numpy()[0]*x_data+b.numpy(),'r')
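# For comparison, the same model could also be trained with Keras' built-in loop instead of
# the manual GradientTape loop above (an illustrative alternative, not used elsewhere here):
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate), loss='mse')
model.fit(x_data.reshape(-1, 1), y_true, batch_size=BATCH_SIZE, epochs=1)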
###Output
_____no_output_____
###Markdown
TF Eager Execution
###Code
import tensorflow as tf
# Set Eager API (note: this section uses the TF 1.x API — tf.contrib and
# enable_eager_execution were removed in TF 2.x, so it needs a TF 1.x runtime)
tf.enable_eager_execution()
tfe = tf.contrib.eager
BATCH_SIZE = 1000
BATCHS = 10000
def next_batch(x_data, batch_size):
    # draw a random mini-batch of (x, y) pairs
    batch_index = np.random.randint(len(x_data), size=batch_size)
    x_train = x_data[batch_index]
    y_train = y_true[batch_index]
    return x_train, y_train
###Output
_____no_output_____
###Markdown
**Variables**
###Code
w_tfe = tf.Variable(np.random.uniform())
b_tfe = tf.Variable(np.random.uniform(1,10))
###Output
_____no_output_____
###Markdown
**Linear regression function**
###Code
# Linear regression (Wx + b)
def linear_regression(inputs):
return inputs * w_tfe + b_tfe
###Output
_____no_output_____
###Markdown
**Loss function: MSE**
###Code
def mean_square_fn(model_fn, inputs, labels):
return tf.reduce_sum(tf.pow(model_fn(inputs) - labels, 2)) / (2 * BATCH_SIZE)
###Output
_____no_output_____
###Markdown
**Optimizer**
###Code
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
grad = tfe.implicit_gradients(mean_square_fn)
# Initial cost
x_train, y_train = next_batch(x_data, BATCH_SIZE)
print("Initial cost= {:.9f}".format(
mean_square_fn(linear_regression, x_train, y_train)),
"W=", w_tfe.numpy(), "b=", b_tfe.numpy())
###Output
Initial cost= 301.666534424 W= 0.29334104 b= 2.6102347
###Markdown
**Training**
###Code
# Training
display_step = 100
for step in range(BATCHS):
x_train, y_train = next_batch(x_data, BATCH_SIZE)
optimizer.apply_gradients(grad(linear_regression, x_train, y_train))
if (step + 1) % display_step == 0 or step == 0:
print("Epoch:", '%04d' % (step + 1), "cost=",
"{:.9f}".format(mean_square_fn(linear_regression, x_train, y_train)),
"W=", w_tfe.numpy(), "b=", b_tfe.numpy())
###Output
Epoch: 0001 cost= 275.481536865 W= 0.42854565 b= 2.6336372
Epoch: 0100 cost= 16.771112442 W= 4.11131 b= 3.4798746
Epoch: 0200 cost= 16.047227859 W= 4.1926284 b= 3.7750118
Epoch: 0300 cost= 15.135506630 W= 4.155428 b= 4.045262
Epoch: 0400 cost= 14.601335526 W= 4.1137743 b= 4.309869
Epoch: 0500 cost= 12.988318443 W= 4.07434 b= 4.5675364
Epoch: 0600 cost= 13.962398529 W= 4.0381775 b= 4.8191485
Epoch: 0700 cost= 11.749617577 W= 3.9993713 b= 5.064703
Epoch: 0800 cost= 11.554651260 W= 3.9644341 b= 5.3054633
Epoch: 0900 cost= 11.580221176 W= 3.928494 b= 5.5401406
Epoch: 1000 cost= 11.125984192 W= 3.8980548 b= 5.770336
Epoch: 1100 cost= 10.007103920 W= 3.862554 b= 5.9936194
Epoch: 1200 cost= 9.616973877 W= 3.8300939 b= 6.2107058
Epoch: 1300 cost= 9.293504715 W= 3.7960896 b= 6.4219265
Epoch: 1400 cost= 9.302433968 W= 3.7648456 b= 6.6274505
Epoch: 1500 cost= 8.918160439 W= 3.7349737 b= 6.8296146
Epoch: 1600 cost= 8.196873665 W= 3.700922 b= 7.025527
Epoch: 1700 cost= 8.114662170 W= 3.6736612 b= 7.217224
Epoch: 1800 cost= 7.420712471 W= 3.646926 b= 7.405056
Epoch: 1900 cost= 7.244167805 W= 3.6210873 b= 7.5892572
Epoch: 2000 cost= 7.027961254 W= 3.595361 b= 7.7688875
Epoch: 2100 cost= 6.667112827 W= 3.5663705 b= 7.94299
Epoch: 2200 cost= 6.635849476 W= 3.5434058 b= 8.11352
Epoch: 2300 cost= 6.271118164 W= 3.5149488 b= 8.278675
Epoch: 2400 cost= 6.085773945 W= 3.4919684 b= 8.440779
Epoch: 2500 cost= 5.780133724 W= 3.4696915 b= 8.599832
Epoch: 2600 cost= 5.293220520 W= 3.4442346 b= 8.753782
Epoch: 2700 cost= 4.847285271 W= 3.4196973 b= 8.903981
Epoch: 2800 cost= 4.648473740 W= 3.3974876 b= 9.051678
Epoch: 2900 cost= 4.737245083 W= 3.3806791 b= 9.197361
Epoch: 3000 cost= 4.325105190 W= 3.3542683 b= 9.336985
Epoch: 3100 cost= 4.441687107 W= 3.3339086 b= 9.473751
Epoch: 3200 cost= 4.256507874 W= 3.31495 b= 9.606302
Epoch: 3300 cost= 4.133140564 W= 3.294693 b= 9.736195
Epoch: 3400 cost= 3.762845039 W= 3.2755926 b= 9.86446
Epoch: 3500 cost= 3.711015701 W= 3.2582154 b= 9.988206
Epoch: 3600 cost= 3.403789759 W= 3.2384858 b= 10.109075
Epoch: 3700 cost= 3.416311502 W= 3.2214046 b= 10.227655
Epoch: 3800 cost= 3.367571354 W= 3.204864 b= 10.343255
Epoch: 3900 cost= 3.045504808 W= 3.1882067 b= 10.45586
Epoch: 4000 cost= 3.017071247 W= 3.1692305 b= 10.565196
Epoch: 4100 cost= 2.872391939 W= 3.153776 b= 10.671853
Epoch: 4200 cost= 2.893660307 W= 3.1390457 b= 10.776576
Epoch: 4300 cost= 2.697478056 W= 3.1202095 b= 10.878318
Epoch: 4400 cost= 2.426154852 W= 3.107227 b= 10.978092
Epoch: 4500 cost= 2.500243902 W= 3.0942423 b= 11.074939
Epoch: 4600 cost= 2.320518017 W= 3.078808 b= 11.169363
Epoch: 4700 cost= 2.285497665 W= 3.0669324 b= 11.26185
Epoch: 4800 cost= 2.077934504 W= 3.0534801 b= 11.352316
Epoch: 4900 cost= 2.126533985 W= 3.0373795 b= 11.439814
Epoch: 5000 cost= 1.963438749 W= 3.0228853 b= 11.524724
Epoch: 5100 cost= 1.893150449 W= 3.0125375 b= 11.609567
Epoch: 5200 cost= 1.800761700 W= 2.999554 b= 11.69099
Epoch: 5300 cost= 1.826110840 W= 2.9880226 b= 11.770436
Epoch: 5400 cost= 1.718632936 W= 2.977614 b= 11.848662
Epoch: 5500 cost= 1.741396189 W= 2.9636 b= 11.925114
Epoch: 5600 cost= 1.685652852 W= 2.9538183 b= 11.999487
Epoch: 5700 cost= 1.694853544 W= 2.9414043 b= 12.071367
Epoch: 5800 cost= 1.520098925 W= 2.9319844 b= 12.142838
Epoch: 5900 cost= 1.461071968 W= 2.91992 b= 12.211847
Epoch: 6000 cost= 1.357613325 W= 2.9111903 b= 12.279675
Epoch: 6100 cost= 1.475282669 W= 2.9010105 b= 12.344868
Epoch: 6200 cost= 1.343915820 W= 2.892209 b= 12.409323
Epoch: 6300 cost= 1.281312704 W= 2.8823316 b= 12.471721
Epoch: 6400 cost= 1.285332322 W= 2.8736398 b= 12.532852
Epoch: 6500 cost= 1.212053418 W= 2.8630872 b= 12.592061
Epoch: 6600 cost= 1.218060493 W= 2.8541899 b= 12.649966
Epoch: 6700 cost= 1.172313809 W= 2.847736 b= 12.706566
Epoch: 6800 cost= 1.166100621 W= 2.8382943 b= 12.762114
Epoch: 6900 cost= 1.098492861 W= 2.829959 b= 12.816151
Epoch: 7000 cost= 1.063740015 W= 2.8225312 b= 12.868826
Epoch: 7100 cost= 1.023402214 W= 2.8145664 b= 12.919719
Epoch: 7200 cost= 0.982637823 W= 2.8071923 b= 12.969754
Epoch: 7300 cost= 1.033883810 W= 2.7985566 b= 13.018748
Epoch: 7400 cost= 0.994429886 W= 2.792207 b= 13.066607
Epoch: 7500 cost= 0.954064608 W= 2.784292 b= 13.113034
Epoch: 7600 cost= 0.927848518 W= 2.7786987 b= 13.158699
Epoch: 7700 cost= 0.887622297 W= 2.7697585 b= 13.202898
Epoch: 7800 cost= 0.881225646 W= 2.7627225 b= 13.245891
Epoch: 7900 cost= 0.895305872 W= 2.7579055 b= 13.28829
Epoch: 8000 cost= 0.851209164 W= 2.7529163 b= 13.3296385
Epoch: 8100 cost= 0.745737910 W= 2.7459857 b= 13.370074
Epoch: 8200 cost= 0.795846760 W= 2.7407546 b= 13.409106
Epoch: 8300 cost= 0.850925267 W= 2.733101 b= 13.447238
Epoch: 8400 cost= 0.762317002 W= 2.7285335 b= 13.484957
Epoch: 8500 cost= 0.799837947 W= 2.7227023 b= 13.521192
Epoch: 8600 cost= 0.793936789 W= 2.7185183 b= 13.557222
Epoch: 8700 cost= 0.703119099 W= 2.7118874 b= 13.591786
Epoch: 8800 cost= 0.688881338 W= 2.7092729 b= 13.626094
Epoch: 8900 cost= 0.720998645 W= 2.7014139 b= 13.658497
Epoch: 9000 cost= 0.728614748 W= 2.698639 b= 13.691119
Epoch: 9100 cost= 0.673389077 W= 2.6924767 b= 13.722499
Epoch: 9200 cost= 0.665706694 W= 2.6888242 b= 13.75342
Epoch: 9300 cost= 0.681291282 W= 2.6839147 b= 13.783423
Epoch: 9400 cost= 0.666010678 W= 2.678437 b= 13.812644
Epoch: 9500 cost= 0.643584788 W= 2.6734912 b= 13.841193
Epoch: 9600 cost= 0.676403344 W= 2.6705093 b= 13.869479
Epoch: 9700 cost= 0.583694935 W= 2.6669464 b= 13.896692
Epoch: 9800 cost= 0.559415162 W= 2.662542 b= 13.922997
Epoch: 9900 cost= 0.641796768 W= 2.6587977 b= 13.94918
Epoch: 10000 cost= 0.607424080 W= 2.654848 b= 13.974417
###Markdown
**Results**
###Code
plt.plot(x_data[sample_indx],y_true[sample_indx],'*')
plt.plot(x_data, w_tfe*x_data+b_tfe,'r')
###Output
_____no_output_____ |
datacourse/data-wrangling/DS_Pandas.ipynb | ###Markdown
Pandas
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
We introduced the Pandas module and the DataFrame object in the lesson on [basic data science modules](DS_Basic_DS_Modules.ipynb). We learned how to construct a DataFrame, add data, retrieve data, and [basic reading and writing to disk](DS_IO.ipynb). Now we'll explore the DataFrame object and its powerful analysis methods in more depth.We'll work with a data set from the online review site, Yelp. The file is stored as a compressed JSON file.
###Code
!ls -lh ./data/yelp.json.gz
import gzip
import simplejson as json
with gzip.open('./data/yelp.json.gz', 'r') as f:
yelp_data = [json.loads(line) for line in f]
yelp_df = pd.DataFrame(yelp_data)
yelp_df.head()
###Output
_____no_output_____
###Markdown
Pandas DataFrame and SeriesThe Pandas DataFrame is a highly structured object. Each row corresponds with some physical entity or event. We think of all of the information in a given row as referring to one object (e.g. a business). Each column contains one type of data, both semantically (e.g. names, counts of reviews, star ratings) and syntactically.
###Code
yelp_df.dtypes
###Output
_____no_output_____
###Markdown
We can reference the columns by name, like we would with a `dict`.
###Code
yelp_df['city'].head()
type(yelp_df['city'])
###Output
_____no_output_____
###Markdown
An individual column is a Pandas `Series`. A `Series` has a `name` and a `dtype` (similar to a NumPy array). A `DataFrame` is essentially a `dict` of `Series` objects. The `Series` has an `index` attribute, which labels the rows. The index is essentially a set of keys for referencing the rows. We can have an index composed of numbers, strings, timestamps, or any hashable Python object. The index will also have homogeneous type.
###Code
yelp_df['city'].index
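# a Series can also be built directly; its index may be any hashable labels (toy values here):
pd.Series([4.0, 3.5, 5.0], index=['biz_a', 'biz_b', 'biz_c'], name='stars')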
###Output
_____no_output_____
###Markdown
The `DataFrame` has an `index` given by the union of indices of its constituent `Series` (we'll explore this later in more detail). Since a `DataFrame` is a `dict` of `Series`, we can select a column and then a row using square bracket notation, but not the reverse (however, the `loc` method works around this).
###Code
# this works
yelp_df['city'][100]
%%expect_exception KeyError
# this doesn't
yelp_df[100]['city']
yelp_df.loc[100, 'city']
###Output
_____no_output_____
###Markdown
Understanding the underlying structure of the `DataFrame` object as a `dict` of `Series` will help you avoid errors and help you think about how the `DataFrame` should behave when we begin doing more complicated analysis.We can _aggregate_ data in a `DataFrame` using methods like `mean`, `sum`, `count`, and `std`. To view a collection of summary statistics for each column we can use the `describe` method.
###Code
yelp_df.describe()
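# the individual aggregation methods mentioned above are also available per column, e.g.:
yelp_df['review_count'].mean(), yelp_df['review_count'].std(), yelp_df['review_count'].count()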
###Output
_____no_output_____
###Markdown
The utility of a DataFrame comes from its ability to split data into groups, using the `groupby` method, and then perform custom aggregations using the `apply` or `aggregate` method. This process of splitting the data into groups, applying an aggregation, and then collecting the results is [discussed in detail in the Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/groupby.html), and is one of the main focuses of this notebook. DataFrame constructionSince a `DataFrame` is a `dict` of `Series`, the natural way to construct a `DataFrame` is to use a `dict` of `Series`-like objects.
###Code
from string import ascii_letters, digits
import numpy as np
import datetime
usernames = ['alice36', 'bob_smith', 'eve']
passwords = [''.join(np.random.choice(list(ascii_letters + digits), 8)) for x in range(3)]
creation_dates = [datetime.datetime.now().date() - datetime.timedelta(int(x)) for x in np.random.randint(0, 1500, 3)]
df = pd.DataFrame({'username': usernames, 'password': passwords, 'date-created': pd.to_datetime(creation_dates)})
df
df.dtypes
###Output
_____no_output_____
###Markdown
The `DataFrame` is also closely related to the NumPy `ndarray`.
###Code
random_data = np.random.random((4,3))
random_data
df_random = pd.DataFrame(random_data, columns=['a', 'b', 'c'])
df_random
###Output
_____no_output_____
###Markdown
To add a new column or row, we simply use `dict`-like assignment.
###Code
emails = ['[email protected]', '[email protected]', '[email protected]']
df['email'] = emails
df
# loc references index value, NOT position
# for position use iloc
df.loc[3] = ['2015-01-29', '38uzFJ1n', 'melvintherobot', '[email protected]']
df
###Output
_____no_output_____
###Markdown
We can also drop columns and rows.
###Code
df.drop(3)
# to drop a column, need axis=1
df.drop('email', axis=1)
###Output
_____no_output_____
###Markdown
Notice when we dropped the `'email'` column, the row at index 3 was in the `DataFrame`, even though we just dropped it! Most operations in Pandas return a _copy_ of the `DataFrame`, rather than modifying the `DataFrame` object itself. Therefore, in order to permanently alter the `DataFrame`, we either need to reassign the `df` variable, or use the `inplace` keyword.
###Code
df.drop(3, inplace=True)
df
###Output
_____no_output_____
###Markdown
Since the `index` and column names are important for interacting with data in the DataFrame, we should make sure to set them to useful values. We can do this during construction or after.
###Code
df = pd.DataFrame({'email': emails, 'password': passwords, 'date-created': creation_dates}, index=usernames)
df.index.name = 'users' # it can be helpful to give the index a name
df
# alternatively
df = pd.DataFrame(list(zip(usernames, emails, passwords, creation_dates)))
df
df.columns = ['username', 'email', 'password', 'date-created']
df.set_index('username', inplace=True)
df
# to reset index to a column
df.reset_index(inplace=True)
df
###Output
_____no_output_____
###Markdown
We can have multiple levels to an index. We'll discover that for some data sets it is necessary to have multiple levels to the index in order to uniquely identify a row.
###Code
df.set_index(['username', 'email'])
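# a row of a MultiIndexed DataFrame can then be selected with a tuple of labels
# (using values from this small example table):
df.set_index(['username', 'email']).loc[('alice36', '[email protected]')]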
###Output
_____no_output_____
###Markdown
Reading data from fileWe can also construct a DataFrame using data stored in a file or received from a website. The data source might be [JSON](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_json.html), [HTML](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html), [CSV](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html#pandas.read_csv), [Excel](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html), [Python pickle](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_pickle.html), or even a [database connection](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql.html). Each format will have its own methods for reading and writing data that take different arguments. The arguments of these methods usually depend on the particular formatting of the file. For example, the values in a CSV might be separated by commas or semicolons; it might have a header or it might not.The `read_csv` method has to deal with the most formatting possibilities, so we will explore that method with a few examples. Try to apply these ideas when working with other file formats, but keep in mind that each format and read method is different. Always check [the Pandas documentation](http://pandas.pydata.org/pandas-docs/stable/io.html) when having trouble with reading or writing data.
###Code
csv = [','.join(map(lambda x: str(x), row)) for row in np.vstack([df.columns, df])]
with open('./data/read_csv_example.csv', 'w') as f:
[f.write(line + '\n') for line in csv]
!cat ./data/read_csv_example.csv
pd.read_csv('./data/read_csv_example.csv')
# we can also set an index from the data
pd.read_csv('./data/read_csv_example.csv', index_col=0)
# what if our data had no header?
with open('./data/read_csv_noheader_example.csv', 'w') as f:
[f.write(line + '\n') for i, line in enumerate(csv) if i != 0]
!cat ./data/read_csv_noheader_example.csv
pd.read_csv('./data/read_csv_noheader_example.csv', names=['username', 'email', 'password', 'date-created'], header=None)
# what if our data was tab-delimited?
tsv = ['\t'.join(map(lambda x: str(x), row)) for row in np.vstack([df.columns, df])]
with open('./data/read_csv_example.tsv', 'w') as f:
[f.write(line + '\n') for line in tsv]
!cat ./data/read_csv_example.tsv
pd.read_csv('./data/read_csv_example.tsv', delimiter='\t')
###Output
_____no_output_____
###Markdown
Even within a single file format, data can be arranged and formatted in many ways. These have been just a few examples of the kinds of arguments you might need to use with `read_csv` in order to read data into a DataFrame in an organized way. Filtering DataFramesOne of the powerful analytical tools of the Pandas DataFrame is its syntax for filtering data. Often we'll only want to work with a certain subset of our data based on some criteria. Let's look at our Yelp data for an example.
###Code
yelp_df.head()
###Output
_____no_output_____
###Markdown
We see the Yelp data set has a `'state'` column. If we are only interested in businesses in Arizona (AZ), we can filter the DataFrame and select only that data.
###Code
az_yelp_df = yelp_df[yelp_df['state'] == 'AZ']
az_yelp_df.head()
az_yelp_df['state'].unique()
###Output
_____no_output_____
###Markdown
We can combine criteria using logic. What if we're only interested in businesses with more than 10 reviews in Arizona?
###Code
yelp_df[(yelp_df['state'] == 'AZ') & (yelp_df['review_count'] > 10)].head()
###Output
_____no_output_____
###Markdown
How does this filtering work?When we write `yelp_df['state'] == 'AZ'`, Pandas selects the `'state'` column and checks whether each row is `'AZ'`. If so, that row is marked `True`, and if not, it is marked `False`. This is how we would normally expect a conditional to work, only now applied to an entire Pandas `Series`. We end up with a Pandas `Series` of Boolean variables.
###Code
(yelp_df['state'] == 'AZ').head()
###Output
_____no_output_____
###Markdown
We can use a `Series` (or any similar object) of Boolean variables to index the DataFrame.
###Code
df
df[[True, False, True]]
###Output
_____no_output_____
###Markdown
This lets us filter a DataFrame using idiomatic logical expressions like `yelp_df['review_count'] > 10`.As another example, let's consider the `'open'` column, which is a `True`/`False` flag for whether a business is open. This is also a Boolean Pandas `Series`, so we can just use it directly.
###Code
# the open businesses
yelp_df[yelp_df['open']].head()
# the closed businesses
yelp_df[~yelp_df['open']].head()
###Output
_____no_output_____
###Markdown
Notice in an earlier expression we wrote `(yelp_df['state'] == 'AZ') & (yelp_df['review_count'] > 10)`. Normally in Python we use the word `and` when we are working with logic. In Pandas we have to use _bit-wise_ logical operators; all that's important to know is the following equivalencies:`~` = `not` `&` = `and` `|` = `or` We can also use Pandas' built-in [string operations](https://pandas.pydata.org/pandas-docs/stable/text.html) for doing pattern matching. For example, there are a lot of businesses in Las Vegas in our data set. However, there are also businesses in 'Las Vegas East' and 'South Las Vegas'. To get all of the Las Vegas businesses I might do the following.
###Code
vegas_yelp_df = yelp_df[yelp_df['city'].str.contains('Vegas')]
vegas_yelp_df.head()
vegas_yelp_df['city'].unique()
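# the other bit-wise operators combine conditions the same way, e.g. | (or) with ~ (not):
yelp_df[(yelp_df['city'].str.contains('Vegas')) | (~yelp_df['open'])].head()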
###Output
_____no_output_____
###Markdown
Applying functions and data aggregationTo analyze the data in the dataframe, we'll need to be able to apply functions to it. Pandas has many mathematical functions built in already, and DataFrames and Series can be passed to NumPy functions (since they behave like NumPy arrays).
###Code
log_review_count = np.log(yelp_df['review_count'])
print(log_review_count.head())
print(log_review_count.shape)
mean_review_count = yelp_df['review_count'].mean()
print(mean_review_count)
###Output
_____no_output_____
###Markdown
In the first example we took the _logarithm_ of the review count for each business. In the second case, we calculated the mean review count of all businesses. In the first case, we ended up with a number for each business. We _transformed_ the review counts using the logarithm. In the second case, we _summarized_ the review counts of all the businesses in one number. This summary is a form of _data aggregation_, in which we take many data points and combine them into some smaller representation. The functions we apply to our data sets will either be in the category of **transformations** or **aggregations**.Sometimes we will need to transform our data in order for it to be usable. For instance, in the `'attributes'` column of our DataFrame, we have a `dict` for each business listing all of its properties. If I wanted to find a restaurant that offers delivery service, it would be difficult for me to filter the DataFrame, even though that information is in the `'attributes'` column. First, I need to transform the `dict` into something more useful.
###Code
def get_delivery_attr(attr_dict):
return attr_dict.get('Delivery')
###Output
_____no_output_____
###Markdown
If we give this function a `dict` from the `'attributes'` column, it will look for the `'Delivery'` key. If it finds that key, it returns the value. If it doesn't find the key, it will return `None`.
###Code
print(get_delivery_attr(yelp_df.loc[0, 'attributes']))
print(get_delivery_attr(yelp_df.loc[1, 'attributes']))
print(get_delivery_attr(yelp_df.loc[2, 'attributes']))
###Output
_____no_output_____
###Markdown
We could iterate over the rows of `yelp_df['attributes']` to get all of the values, but there is a better way. DataFrames and Series have an `apply` method that allows us to apply our function to the entire data set at once, like we did earlier with `np.log`.
###Code
delivery_attr = yelp_df['attributes'].apply(get_delivery_attr)
delivery_attr.head()
###Output
_____no_output_____
###Markdown
We can make a new column in our DataFrame with this transformed (and useful) information.
###Code
yelp_df['delivery'] = delivery_attr
# to find businesses that deliver
yelp_df[yelp_df['delivery'].fillna(False)].head()
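# apply can also act on a whole DataFrame, column by column (less common, but possible,
# as noted in the next cell); a quick illustration on two numeric columns:
yelp_df[['stars', 'review_count']].apply(np.mean)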
###Output
_____no_output_____
###Markdown
It's less common (though possible) to use `apply` on an entire DataFrame rather than just one column. Since a DataFrame might contain many types of data, we won't usually want to apply the same transformation or aggregation across all of the columns. Data aggregation with `groupby`Data aggregation is an [_overloaded_](https://en.wikipedia.org/wiki/Function_overloading) term. It refers to both data summarization (as above) but also to the combining of different data sets.With our Yelp data, we might be interested in comparing the star ratings of businesses in different cities. We could calculate the mean star rating for each city, and this would allow us to easily compare them. First we would have to split up our data by city, calculate the mean for each city, and then combine it back at the end. This procedure is known as [split-apply-combine](https://pandas.pydata.org/pandas-docs/stable/groupby.html) and is a classic example of data aggregation (in the sense of both summarizing data and also combining different data sets).We achieve the splitting and recombining using the `groupby` method.
###Code
stars_by_city = yelp_df.groupby('city')['stars'].mean()
stars_by_city.head()
###Output
_____no_output_____
###Markdown
We can also apply multiple functions at once. It might be helpful to know the standard deviation of star ratings, the total number of reviews, and the count of businesses as well.
###Code
agg_by_city = yelp_df.groupby('city').agg({'stars': ['mean', 'std'], 'review_count': 'sum', 'business_id': 'count'})
agg_by_city.head()
# unstacking the columns
new_columns = ['_'.join(pair) for pair in zip(agg_by_city.columns.get_level_values(0),
                                              agg_by_city.columns.get_level_values(1))]
agg_by_city.columns = new_columns
agg_by_city.head()
###Output
_____no_output_____
###Markdown
How does this work? What does `groupby` do? Let's start by inspecting the result of `groupby`.
###Code
by_city = yelp_df.groupby('city')
by_city
dir(by_city)
print(type(by_city.groups))
list(by_city.groups.items())[:5]
by_city.get_group('Anthem').head()
###Output
_____no_output_____
###Markdown
When we use `groupby` on a column, Pandas builds a `dict`, using the unique elements of the column as the keys and the index of the rows in each group as the values. This `dict` is stored in the `groups` attribute. Pandas can then use this `dict` to direct the application of aggregating functions over the different groups. SortingEven though the DataFrame in many ways behaves similarly to a `dict`, it also is ordered. Therefore we can sort the data in it. Pandas provides two sorting methods, `sort_values` and `sort_index`.
###Code
yelp_df.sort_values('stars').head()
yelp_df.set_index('business_id').sort_index().head()
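# sort_values can also sort in descending order, or by several columns at once:
yelp_df.sort_values(['stars', 'review_count'], ascending=False).head()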
###Output
_____no_output_____
###Markdown
Don't forget that most Pandas operations return a copy of the DataFrame, and do not update the DataFrame in place (unless we tell it to)! Joining data sets Often we will want to augment one data set with data from another. For instance, businesses in big cities probably get more reviews than those in small cities. It could be useful to scale the review counts by the city's population. To do that, we'll need to add population data to the Yelp data. We can get population data from the US census.
###Code
census = pd.read_csv('./data/PEP_2016_PEPANNRES.csv', skiprows=[1])
census.head()
# construct city & state fields
census['city'] = census['GEO.display-label'].apply(lambda x: x.split(', ')[0])
census['state'] = census['GEO.display-label'].apply(lambda x: x.split(', ')[2])
# convert state names to abbreviations
print(census['state'].unique())
state_abbr = dict(zip(census['state'].unique(), ['CT', 'IL', 'IN', 'KS', 'ME', 'MA', 'MI', 'MN', 'MO', 'NE', 'NH', 'NJ', 'NY', 'ND', 'OH', 'PA', 'RI', 'SD', 'VT', 'WI']))
census['state'] = census['state'].replace(state_abbr)
# remove last word (e.g. 'city', 'town', township', 'borough', 'village') from city names
census['city'] = census['city'].apply(lambda x: ' '.join(x.split(' ')[:-1]))
merged_df = yelp_df.merge(census, on=['state', 'city'])
merged_df.head()
###Output
_____no_output_____
###Markdown
The `merge` function looks through the `'state'` and `'city'` columns of `yelp_df` and `census` and tries to match up rows that share values. When a match is found, the rows are combined. What happens when a match is not found? We can imagine four scenarios: 1. We only keep rows from `yelp_df` and `census` if they match. Any rows from either table that have no match are discarded. This is called an _inner join_. 2. We keep all rows from `yelp_df` and `census`, even if they have no match. In this case, when a row in `yelp_df` has no match in `census`, all the columns from `census` are merged in with null values. When a row in `census` has no match in `yelp_df`, all the columns from `yelp_df` are merged in with null values. This is called an _outer join_.3. We privilege the `yelp_df` data. If a row in `yelp_df` has no match in `census`, we keep it and fill in the missing `census` columns as null values. If a row in `census` has no match in `yelp_df`, we discard it. This is called a _left join_.4. We privilege the `census` data. This is called a _right join_.The default behavior for Pandas is case 1, the _inner join_. This means if there are cities in `yelp_df` that we don't have matching `census` data for, they are dropped. Therefore, `merged_df` might be smaller than `yelp_df`.
###Code
print(yelp_df.shape)
print(merged_df.shape)
###Output
_____no_output_____
###Markdown
There are a lot of cities in `yelp_df` that aren't in `census`! We might want to keep these rows, but we don't need any census data where there are no businesses. Then we should use a _left join_.
###Code
merged_df = yelp_df.merge(census, on=['state', 'city'], how='left')
print(yelp_df.shape)
print(merged_df.shape)
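# for comparison, an outer join keeps unmatched rows from both tables, filling gaps with NaN:
outer_df = yelp_df.merge(census, on=['state', 'city'], how='outer')
print(outer_df.shape)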
###Output
_____no_output_____
###Markdown
Sometimes we don't need to merge together the columns of separate data sets, but just need to add more rows. For example, the New York City subway system [releases data about how many customers enter and exit the station each week](http://web.mta.info/developers/turnstile.html). Each weekly data set has the same columns, so if we want multiple weeks of data, we just have to append one week to another.
###Code
nov18 = pd.read_csv('http://web.mta.info/developers/data/nyct/turnstile/turnstile_171118.txt')
nov11 = pd.read_csv('http://web.mta.info/developers/data/nyct/turnstile/turnstile_171111.txt')
nov18.head()
nov11.head()
nov = pd.concat([nov18, nov11])
nov['DATE'].unique()
###Output
_____no_output_____
###Markdown
We can also use `concat` to perform inner and outer joins based on index. For example, we can perform some data aggregation and then join the results onto the original DataFrame.
###Code
city_counts = yelp_df.groupby('city')['business_id'].count().rename('city_counts')
city_counts.head()
pd.concat([yelp_df.set_index('city'), city_counts], axis=1, join='inner').reset_index().head()
###Output
_____no_output_____
###Markdown
Pandas provides [extensive documentation](https://pandas.pydata.org/pandas-docs/stable/merging.html) with diagrammed examples on different methods and approaches for joining data. Working with time seriesPandas has a well-designed backend for inferring dates and times from strings and doing meaningful computations with them.
###Code
pop_growth = pd.read_html('https://web.archive.org/web/20170127165708/https://www.census.gov/population/international/data/worldpop/table_population.php', attrs={'class': 'query_table'}, parse_dates=[0])[0]
pop_growth.dropna(inplace=True)
pop_growth.head()
###Output
_____no_output_____
###Markdown
By setting the `'Year'` column to the index, we can easily aggregate data by date using the `resample` method. The `resample` method allows us to decrease or increase the sampling frequency of our data. For instance, maybe instead of yearly data, we want to see average quantities for each decade.
###Code
pop_growth.set_index('Year', inplace=True)
pop_growth.resample('10AS').mean()
###Output
_____no_output_____
###Markdown
This kind of resampling is called _downsampling_, because we are decreasing the sampling frequency of the data. We can choose how to aggregate the data from each decade (e.g. `mean`). Options for aggregation include `mean`, `median`, `sum`, `last`, and `first`.We can also _upsample_ data. In this case, we don't have data for each quarter, so we have to tell Pandas has to fill in the missing data.
###Code
pop_growth.resample('1Q').bfill().head()
pop_growth.resample('1Q').ffill().head()
###Output
_____no_output_____
###Markdown
Pandas' time series capabilities are built on the Pandas `Timestamp` class.
###Code
print(pd.Timestamp('January 8, 2017'))
print(pd.Timestamp('01/08/17 20:13'))
print(pd.Timestamp(1.4839*10**18))
print(pd.Timestamp('Feb. 11 2016 2:30 am') - pd.Timestamp('2015-08-03 5:14 pm'))
from pandas.tseries.offsets import BDay, Day, BMonthEnd
print(pd.Timestamp('January 9, 2017') - Day(4))
print(pd.Timestamp('January 9, 2017') - BDay(4))
print(pd.Timestamp('January 9, 2017') + BMonthEnd(4))
###Output
_____no_output_____
###Markdown
If we're entering time series data into a DataFrame it will often be useful to create a range of dates.
###Code
pd.date_range(start='1/8/2017', end='3/2/2017', freq='B')
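# a fixed number of periods can be requested instead of an end date:
pd.date_range(start='1/8/2017', periods=8, freq='W')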
###Output
_____no_output_____
###Markdown
The `Timestamp` class is compatible with Python's `datetime` module.
###Code
import datetime
pd.Timestamp('May 1, 2017') - datetime.datetime(2017, 1, 8)
###Output
_____no_output_____
###Markdown
Visualizing data with PandasVisualizing a data set is an important first step in drawing insights. We can easily pass data from Pandas to Matplotlib for visualizations, but Pandas also plugs into Matplotlib directly through methods like `plot` and `hist`.
###Code
yelp_df['review_count'].apply(np.log).hist(bins=30);
pop_growth['Annual Growth Rate (%)'].plot();
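# other plot types share the same interface, e.g. a bar chart of the ten most common cities:
yelp_df['city'].value_counts().head(10).plot(kind='bar');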
###Output
_____no_output_____
###Markdown
Pandas
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
We introduced the Pandas module and the DataFrame object in the lesson on [basic data science modules](DS_Basic_DS_Modules.ipynb). We learned how to construct a DataFrame, add data, retrieve data, and [basic reading and writing to disk](DS_IO.ipynb). Now we'll explore the DataFrame object and its powerful analysis methods in more depth.We'll work with a data set from the online review site, Yelp. The file is stored as a compressed JSON file.
###Code
!ls -lh ./data/yelp.json.gz
import gzip
import simplejson as json
with gzip.open('./data/yelp.json.gz', 'r') as f:
yelp_data = [json.loads(line) for line in f]
yelp_df = pd.DataFrame(yelp_data)
yelp_df.head()
###Output
_____no_output_____
###Markdown
Pandas DataFrame and SeriesThe Pandas DataFrame is a highly structured object. Each row corresponds with some physical entity or event. We think of all of the information in a given row as referring to one object (e.g. a business). Each column contains one type of data, both semantically (e.g. names, counts of reviews, star ratings) and syntactically.
###Code
yelp_df.dtypes
###Output
_____no_output_____
###Markdown
We can reference the columns by name, like we would with a `dict`.
###Code
yelp_df['city'].head()
type(yelp_df['city'])
###Output
_____no_output_____
###Markdown
An individual column is a Pandas `Series`. A `Series` has a `name` and a `dtype` (similar to a NumPy array). A `DataFrame` is essentially a `dict` of `Series` objects. The `Series` has an `index` attribute, which labels the rows. The index is essentially a set of keys for referencing the rows. We can have an index composed of numbers, strings, timestamps, or any hashable Python object. The index will also have homogeneous type.
###Code
yelp_df['city'].index
###Output
_____no_output_____
###Markdown
The `DataFrame` has an `index` given by the union of indices of its constituent `Series` (we'll explore this later in more detail). Since a `DataFrame` is a `dict` of `Series`, we can select a column and then a row using square bracket notation, but not the reverse (however, the `loc` method works around this).
###Code
# this works
yelp_df['city'][100]
%%expect_exception KeyError
# this doesn't
yelp_df[100]['city']
yelp_df.loc[100, 'city']
###Output
_____no_output_____
###Markdown
Understanding the underlying structure of the `DataFrame` object as a `dict` of `Series` will help you avoid errors and help you think about how the `DataFrame` should behave when we begin doing more complicated analysis.We can _aggregate_ data in a `DataFrame` using methods like `mean`, `sum`, `count`, and `std`. To view a collection of summary statistics for each column we can use the `describe` method.
###Code
yelp_df.describe()
###Output
_____no_output_____
###Markdown
The utility of a DataFrame comes from its ability to split data into groups, using the `groupby` method, and then perform custom aggregations using the `apply` or `aggregate` method. This process of splitting the data into groups, applying an aggregation, and then collecting the results is [discussed in detail in the Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/groupby.html), and is one of the main focuses of this notebook. DataFrame constructionSince a `DataFrame` is a `dict` of `Series`, the natural way to construct a `DataFrame` is to use a `dict` of `Series`-like objects.
###Code
from string import ascii_letters, digits
import numpy as np
import datetime
usernames = ['alice36', 'bob_smith', 'eve']
passwords = [''.join(np.random.choice(list(ascii_letters + digits), 8)) for x in range(3)]
creation_dates = [datetime.datetime.now().date() - datetime.timedelta(int(x)) for x in np.random.randint(0, 1500, 3)]
df = pd.DataFrame({'username': usernames, 'password': passwords, 'date-created': pd.to_datetime(creation_dates)})
df
df.dtypes
###Output
_____no_output_____
###Markdown
The `DataFrame` is also closely related to the NumPy `ndarray`.
###Code
random_data = np.random.random((4,3))
random_data
df_random = pd.DataFrame(random_data, columns=['a', 'b', 'c'])
df_random
###Output
_____no_output_____
###Markdown
To add a new column or row, we simply use `dict`-like assignment.
###Code
emails = ['[email protected]', '[email protected]', '[email protected]']
df['email'] = emails
df
# loc references index value, NOT position
# for position use iloc
df.loc[3] = ['2015-01-29', '38uzFJ1n', 'melvintherobot', '[email protected]']
df
###Output
_____no_output_____
###Markdown
We can also drop columns and rows.
###Code
df.drop(3)
# to drop a column, need axis=1
df.drop('email', axis=1)
###Output
_____no_output_____
###Markdown
Notice when we dropped the `'email'` column, the row at index 3 was in the `DataFrame`, even though we just dropped it! Most operations in Pandas return a _copy_ of the `DataFrame`, rather than modifying the `DataFrame` object itself. Therefore, in order to permanently alter the `DataFrame`, we either need to reassign the `df` variable, or use the `inplace` keyword.
###Code
df.drop(3, inplace=True)
df
###Output
_____no_output_____
###Markdown
Since the `index` and column names are important for interacting with data in the DataFrame, we should make sure to set them to useful values. We can do this during construction or after.
###Code
df = pd.DataFrame({'email': emails, 'password': passwords, 'date-created': creation_dates}, index=usernames)
df.index.name = 'users' # it can be helpful to give the index a name
df
# alternatively
df = pd.DataFrame(list(zip(usernames, emails, passwords, creation_dates)))
df
df.columns = ['username', 'email', 'password', 'date-created']
df.set_index('username', inplace=True)
df
# to reset index to a column
df.reset_index(inplace=True)
df
###Output
_____no_output_____
###Markdown
We can have multiple levels to an index. We'll discover that for some data sets it is necessary to have multiple levels to the index in order to uniquely identify a row.
###Code
df.set_index(['username', 'email'])
###Output
_____no_output_____
###Markdown
Reading data from fileWe can also construct a DataFrame using data stored in a file or received from a website. The data source might be [JSON](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_json.html), [HTML](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html), [CSV](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html#pandas.read_csv), [Excel](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html), [Python pickle](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_pickle.html), or even a [database connection](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql.html). Each format will have its own methods for reading and writing data that take different arguments. The arguments of these methods usually depend on the particular formatting of the file. For example, the values in a CSV might be separated by commas or semicolons; it might have a header or it might not.The `read_csv` method has to deal with the most formatting possibilities, so we will explore that method with a few examples. Try to apply these ideas when working with other file formats, but keep in mind that each format and read method is different. Always check [the Pandas documentation](http://pandas.pydata.org/pandas-docs/stable/io.html) when having trouble with reading or writing data.
###Code
csv = [','.join(map(lambda x: str(x), row)) for row in np.vstack([df.columns, df])]
with open('./data/read_csv_example.csv', 'w') as f:
[f.write(line + '\n') for line in csv]
!cat ./data/read_csv_example.csv
pd.read_csv('./data/read_csv_example.csv')
# we can also set an index from the data
pd.read_csv('./data/read_csv_example.csv', index_col=0)
# what if our data had no header?
with open('./data/read_csv_noheader_example.csv', 'w') as f:
[f.write(line + '\n') for i, line in enumerate(csv) if i != 0]
!cat ./data/read_csv_noheader_example.csv
pd.read_csv('./data/read_csv_noheader_example.csv', names=['username', 'email', 'password', 'date-created'], header=None)
# what if our data was tab-delimited?
tsv = ['\t'.join(map(lambda x: str(x), row)) for row in np.vstack([df.columns, df])]
with open('./data/read_csv_example.tsv', 'w') as f:
[f.write(line + '\n') for line in tsv]
!cat ./data/read_csv_example.tsv
pd.read_csv('./data/read_csv_example.tsv', delimiter='\t')
###Output
_____no_output_____
###Markdown
Even within a single file format, data can be arranged and formatted in many ways. These have been just a few examples of the kinds of arguments you might need to use with `read_csv` in order to read data into a DataFrame in an organized way. Filtering DataFramesOne of the powerful analytical tools of the Pandas DataFrame is its syntax for filtering data. Often we'll only want to work with a certain subset of our data based on some criteria. Let's look at our Yelp data for an example.
###Code
yelp_df.head()
###Output
_____no_output_____
###Markdown
We see the Yelp data set has a `'state'` column. If we are only interested in businesses in Arizona (AZ), we can filter the DataFrame and select only that data.
###Code
az_yelp_df = yelp_df[yelp_df['state'] == 'AZ']
az_yelp_df.head()
az_yelp_df['state'].unique()
###Output
_____no_output_____
###Markdown
We can combine criteria using logic. What if we're only interested in businesses with more than 10 reviews in Arizona?
###Code
yelp_df[(yelp_df['state'] == 'AZ') & (yelp_df['review_count'] > 10)].head()
###Output
_____no_output_____
###Markdown
How does this filtering work?When we write `yelp_df['state'] == 'AZ'`, Pandas selects the `'state'` column and checks whether each row is `'AZ'`. If so, that row is marked `True`, and if not, it is marked `False`. This is how we would normally expect a conditional to work, only now applied to an entire Pandas `Series`. We end up with a Pandas `Series` of Boolean variables.
###Code
(yelp_df['state'] == 'AZ').head()
###Output
_____no_output_____
###Markdown
We can use a `Series` (or any similar object) of Boolean variables to index the DataFrame.
###Code
df
df[[True, False, True]]
###Output
_____no_output_____
###Markdown
This lets us filter a DataFrame using idiomatic logical expressions like `yelp_df['review_count'] > 10`.As another example, let's consider the `'open'` column, which is a `True`/`False` flag for whether a business is open. This is also a Boolean Pandas `Series`, so we can just use it directly.
###Code
# the open businesses
yelp_df[yelp_df['open']].head()
# the closed businesses
yelp_df[~yelp_df['open']].head()
###Output
_____no_output_____
###Markdown
Notice in an earlier expression we wrote `(yelp_df['state'] == 'AZ') & (yelp_df['review_count'] > 10)`. Normally in Python we use the word `and` when we are working with logic. In Pandas we have to use _bit-wise_ logical operators; all that's important to know is the following equivalencies:`~` = `not` `&` = `and` `|` = `or` We can also use Pandas' built-in [string operations](https://pandas.pydata.org/pandas-docs/stable/text.html) for doing pattern matching. For example, there are a lot of businesses in Las Vegas in our data set. However, there are also businesses in 'Las Vegas East' and 'South Las Vegas'. To get all of the Las Vegas businesses I might do the following.
###Code
vegas_yelp_df = yelp_df[yelp_df['city'].str.contains('Vegas')]
vegas_yelp_df.head()
vegas_yelp_df['city'].unique()
###Output
_____no_output_____
###Markdown
Applying functions and data aggregationTo analyze the data in the dataframe, we'll need to be able to apply functions to it. Pandas has many mathematical functions built in already, and DataFrames and Series can be passed to NumPy functions (since they behave like NumPy arrays).
###Code
log_review_count = np.log(yelp_df['review_count'])
print(log_review_count.head())
print(log_review_count.shape)
mean_review_count = yelp_df['review_count'].mean()
print(mean_review_count)
###Output
_____no_output_____
###Markdown
In the first example we took the _logarithm_ of the review count for each business. In the second case, we calculated the mean review count of all businesses. In the first case, we ended up with a number for each business. We _transformed_ the review counts using the logarithm. In the second case, we _summarized_ the review counts of all the businesses in one number. This summary is a form of _data aggregation_, in which we take many data points and combine them into some smaller representation. The functions we apply to our data sets will either be in the category of **transformations** or **aggregations**. Sometimes we will need to transform our data in order for it to be usable. For instance, in the `'attributes'` column of our DataFrame, we have a `dict` for each business listing all of its properties. If I wanted to find a restaurant that offers delivery service, it would be difficult for me to filter the DataFrame, even though that information is in the `'attributes'` column. First, I need to transform the `dict` into something more useful.
###Code
def get_delivery_attr(attr_dict):
return attr_dict.get('Delivery')
###Output
_____no_output_____
###Markdown
If we give this function a `dict` from the `'attributes'` column, it will look for the `'Delivery'` key. If it finds that key, it returns the value. If it doesn't find the key, it will return `None`.
###Code
print(get_delivery_attr(yelp_df.loc[0, 'attributes']))
print(get_delivery_attr(yelp_df.loc[1, 'attributes']))
print(get_delivery_attr(yelp_df.loc[2, 'attributes']))
###Output
_____no_output_____
###Markdown
We could iterate over the rows of `yelp_df['attributes']` to get all of the values, but there is a better way. DataFrames and Series have an `apply` method that allows us to apply our function to the entire data set at once, like we did earlier with `np.log`.
###Code
delivery_attr = yelp_df['attributes'].apply(get_delivery_attr)
delivery_attr.head()
###Output
_____no_output_____
###Markdown
We can make a new column in our DataFrame with this transformed (and useful) information.
###Code
yelp_df['delivery'] = delivery_attr
# to find businesses that deliver
yelp_df[yelp_df['delivery'].fillna(False)].head()
###Output
_____no_output_____
###Markdown
It's less common (though possible) to use `apply` on an entire DataFrame rather than just one column. Since a DataFrame might contain many types of data, we won't usually want to apply the same transformation or aggregation across all of the columns. Data aggregation with `groupby`Data aggregation is an [_overloaded_](https://en.wikipedia.org/wiki/Function_overloading) term. It refers to both data summarization (as above) but also to the combining of different data sets.With our Yelp data, we might be interested in comparing the star ratings of businesses in different cities. We could calculate the mean star rating for each city, and this would allow us to easily compare them. First we would have to split up our data by city, calculate the mean for each city, and then combine it back at the end. This procedure is known as [split-apply-combine](https://pandas.pydata.org/pandas-docs/stable/groupby.html) and is a classic example of data aggregation (in the sense of both summarizing data and also combining different data sets).We achieve the splitting and recombining using the `groupby` method.
###Code
stars_by_city = yelp_df.groupby('city')['stars'].mean()
stars_by_city.head()
###Output
_____no_output_____
###Markdown
We can also apply multiple functions at once. It might be helpful to know the standard deviation of star ratings, the total number of reviews, and the count of businesses as well.
###Code
agg_by_city = yelp_df.groupby('city').agg({'stars': ['mean', 'std'], 'review_count': 'sum', 'business_id': 'count'})
agg_by_city.head()
# flatten the MultiIndex columns into single-level names
new_columns = map(lambda x: '_'.join(x),
zip(agg_by_city.columns.get_level_values(0),
agg_by_city.columns.get_level_values(1)))
agg_by_city.columns = new_columns
agg_by_city.head()
###Output
_____no_output_____
###Markdown
How does this work? What does `groupby` do? Let's start by inspecting the result of `groupby`.
###Code
by_city = yelp_df.groupby('city')
by_city
dir(by_city)
print(type(by_city.groups))
list(by_city.groups.items())[:5]
by_city.get_group('Anthem').head()
###Output
_____no_output_____
###Markdown
When we use `groupby` on a column, Pandas builds a `dict`, using the unique elements of the column as the keys and the index of the rows in each group as the values. This `dict` is stored in the `groups` attribute. Pandas can then use this `dict` to direct the application of aggregating functions over the different groups.

Sorting

Even though the DataFrame in many ways behaves similarly to a `dict`, it is also ordered. Therefore we can sort the data in it. Pandas provides two sorting methods, `sort_values` and `sort_index`.
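First, though, a small sketch of that `groups` mapping in action (assuming `by_city` and `yelp_df` from the cells above are still in scope); it reproduces one group's aggregate by hand:

```python
# the groups dict maps each city to the row labels belonging to that group
anthem_rows = by_city.groups['Anthem']

# aggregating those rows by hand matches what groupby computes for the group
manual_mean = yelp_df.loc[anthem_rows, 'stars'].mean()
grouped_mean = yelp_df.groupby('city')['stars'].mean()['Anthem']
assert abs(manual_mean - grouped_mean) < 1e-9
```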
###Code
yelp_df.sort_values('stars').head()
yelp_df.set_index('business_id').sort_index().head()
###Output
_____no_output_____
###Markdown
Don't forget that most Pandas operations return a copy of the DataFrame, and do not update the DataFrame in place (unless we tell it to)! Joining data sets Often we will want to augment one data set with data from another. For instance, businesses in big cities probably get more reviews than those in small cities. It could be useful to scale the review counts by the city's population. To do that, we'll need to add population data to the Yelp data. We can get population data from the US census.
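Before pulling in the census data, a quick sketch of that copy-versus-in-place point (assuming `yelp_df` is still loaded):

```python
# sort_values returns a new, sorted copy; yelp_df itself is left unchanged
sorted_copy = yelp_df.sort_values('stars')

# to keep the new order, reassign the result (or pass inplace=True)
yelp_df = yelp_df.sort_values('stars')
```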
###Code
census = pd.read_csv('./data/PEP_2016_PEPANNRES.csv', skiprows=[1])
census.head()
# construct city & state fields
census['city'] = census['GEO.display-label'].apply(lambda x: x.split(', ')[0])
census['state'] = census['GEO.display-label'].apply(lambda x: x.split(', ')[2])
# convert state names to abbreviations
print(census['state'].unique())
state_abbr = dict(zip(census['state'].unique(), ['CT', 'IL', 'IN', 'KS', 'ME', 'MA', 'MI', 'MN', 'MO', 'NE', 'NH', 'NJ', 'NY', 'ND', 'OH', 'PA', 'RI', 'SD', 'VT', 'WI']))
census['state'] = census['state'].replace(state_abbr)
# remove last word (e.g. 'city', 'town', township', 'borough', 'village') from city names
census['city'] = census['city'].apply(lambda x: ' '.join(x.split(' ')[:-1]))
merged_df = yelp_df.merge(census, on=['state', 'city'])
merged_df.head()
###Output
_____no_output_____
###Markdown
The `merge` function looks through the `'state'` and `'city'` columns of `yelp_df` and `census` and tries to match up rows that share values. When a match is found, the rows are combined. What happens when a match is not found? We can imagine four scenarios:

1. We only keep rows from `yelp_df` and `census` if they match. Any rows from either table that have no match are discarded. This is called an _inner join_.
2. We keep all rows from `yelp_df` and `census`, even if they have no match. In this case, when a row in `yelp_df` has no match in `census`, all the columns from `census` are merged in with null values. When a row in `census` has no match in `yelp_df`, all the columns from `yelp_df` are merged in with null values. This is called an _outer join_.
3. We privilege the `yelp_df` data. If a row in `yelp_df` has no match in `census`, we keep it and fill in the missing `census` columns as null values. If a row in `census` has no match in `yelp_df`, we discard it. This is called a _left join_.
4. We privilege the `census` data. This is called a _right join_.

The default behavior for Pandas is case 1, the _inner join_. This means if there are cities in `yelp_df` that we don't have matching `census` data for, they are dropped. Therefore, `merged_df` might be smaller than `yelp_df`.
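To make the four cases concrete, here is a minimal self-contained sketch on toy DataFrames (the cities and numbers below are made up purely for illustration):

```python
import pandas as pd

left = pd.DataFrame({'city': ['Phoenix', 'Tempe', 'Mesa'], 'stars': [4.0, 3.5, 5.0]})
right = pd.DataFrame({'city': ['Phoenix', 'Tempe', 'Tucson'], 'population': [1660000, 185000, 545000]})

print(left.merge(right, on='city', how='inner'))  # only Phoenix and Tempe survive
print(left.merge(right, on='city', how='outer'))  # all four cities, with NaN where data is missing
print(left.merge(right, on='city', how='left'))   # keeps Mesa (NaN population), drops Tucson
print(left.merge(right, on='city', how='right'))  # keeps Tucson (NaN stars), drops Mesa
```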
###Code
print(yelp_df.shape)
print(merged_df.shape)
###Output
_____no_output_____
###Markdown
There are a lot of cities in `yelp_df` that aren't in `census`! We might want to keep these rows, but we don't need any census data where there are no businesses. Then we should use a _left join_.
###Code
merged_df = yelp_df.merge(census, on=['state', 'city'], how='left')
print(yelp_df.shape)
print(merged_df.shape)
###Output
_____no_output_____
###Markdown
Sometimes we don't need to merge together the columns of separate data sets, but just need to add more rows. For example, the New York City subway system [releases data about how many customers enter and exit the station each week](http://web.mta.info/developers/turnstile.html). Each weekly data set has the same columns, so if we want multiple weeks of data, we just have to append one week to another.
###Code
nov18 = pd.read_csv('http://web.mta.info/developers/data/nyct/turnstile/turnstile_171118.txt')
nov11 = pd.read_csv('http://web.mta.info/developers/data/nyct/turnstile/turnstile_171111.txt')
nov18.head()
nov11.head()
nov = pd.concat([nov18, nov11])
nov['DATE'].unique()
###Output
_____no_output_____
###Markdown
We can also use `concat` to perform inner and outer joins based on index. For example, we can perform some data aggregation and then join the results onto the original DataFrame.
###Code
city_counts = yelp_df.groupby('city')['business_id'].count().rename('city_counts')
city_counts.head()
pd.concat([yelp_df.set_index('city'), city_counts], axis=1, join='inner').reset_index().head()
###Output
_____no_output_____
###Markdown
Pandas provides [extensive documentation](https://pandas.pydata.org/pandas-docs/stable/merging.html) with diagrammed examples on different methods and approaches for joining data.

Working with time series

Pandas has a well-designed backend for inferring dates and times from strings and doing meaningful computations with them.
###Code
pop_growth = pd.read_html('https://web.archive.org/web/20170127165708/https://www.census.gov/population/international/data/worldpop/table_population.php', attrs={'class': 'query_table'}, parse_dates=[0])[0]
pop_growth.dropna(inplace=True)
pop_growth.head()
###Output
_____no_output_____
###Markdown
By setting the `'Year'` column to the index, we can easily aggregate data by date using the `resample` method. The `resample` method allows us to decrease or increase the sampling frequency of our data. For instance, maybe instead of yearly data, we want to see average quantities for each decade.
###Code
pop_growth.set_index('Year', inplace=True)
pop_growth.resample('10AS').mean()
###Output
_____no_output_____
###Markdown
This kind of resampling is called _downsampling_, because we are decreasing the sampling frequency of the data. We can choose how to aggregate the data from each decade (e.g. `mean`). Options for aggregation include `mean`, `median`, `sum`, `last`, and `first`. We can also _upsample_ data. In this case, we don't have data for each quarter, so we have to tell Pandas how to fill in the missing data.
###Code
pop_growth.resample('1Q').bfill().head()
pop_growth.resample('1Q').ffill().head()
###Output
_____no_output_____
###Markdown
Pandas' time series capabilities are built on the Pandas `Timestamp` class.
###Code
print(pd.Timestamp('January 8, 2017'))
print(pd.Timestamp('01/08/17 20:13'))
print(pd.Timestamp(1.4839*10**18))
print(pd.Timestamp('Feb. 11 2016 2:30 am') - pd.Timestamp('2015-08-03 5:14 pm'))
from pandas.tseries.offsets import BDay, Day, BMonthEnd
print(pd.Timestamp('January 9, 2017') - Day(4))
print(pd.Timestamp('January 9, 2017') - BDay(4))
print(pd.Timestamp('January 9, 2017') + BMonthEnd(4))
###Output
_____no_output_____
###Markdown
If we're entering time series data into a DataFrame it will often be useful to create a range of dates.
###Code
pd.date_range(start='1/8/2017', end='3/2/2017', freq='B')
###Output
_____no_output_____
###Markdown
The `Timestamp` class is compatible with Python's `datetime` module.
###Code
import datetime
pd.Timestamp('May 1, 2017') - datetime.datetime(2017, 1, 8)
###Output
_____no_output_____
###Markdown
Visualizing data with Pandas

Visualizing a data set is an important first step in drawing insights. We can easily pass data from Pandas to Matplotlib for visualizations, but Pandas also plugs into Matplotlib directly through methods like `plot` and `hist`.
###Code
yelp_df['review_count'].apply(np.log).hist(bins=30)
pop_growth['Annual Growth Rate (%)'].plot()
###Output
_____no_output_____ |
data_cleaning/data_cleaning.ipynb | ###Markdown
Advanced GIS: Interactive Web Mapping Final Project | 3/31/2022

**Purpose**: clean and combine housing choice voucher data and neighborhood tabulation geographies for visualization
###Code
# Packages and custom functions
import numpy as np
import pandas as pd
import re
import os
import geojson
import geopandas as gpd
import requests as r
from tabula.io import read_pdf
import tabula
def get_county(x):
c = re.findall('NY New York [\d]{3} (.* County)',x)
if len(c) > 0:
return(c[0])
else:
return(None)
###Output
_____no_output_____
###Markdown
Read in voucher data; add fields for filtering, joining, and analysis; filter to just NYC; handle missing data

**Source**: https://www.huduser.gov/portal/datasets/assthsg.html2009-2021_data, 2021 data

**Documentation**: https://www.huduser.gov/portal/datasets/pictures/dictionary_2021.pdf

**Definition of Missing values**
Some cell entries across variables report no data or are suppressed. In such cases one of the following codes will apply to such missing values in the downloadable file:

"NA" = Not applicable
"-1" = Missing
"-4" = Suppressed (where the cell entry is less than 11 for reported families)
"-5" = Non-reporting (where reporting rates--see % Reported--are less than 50%)

* The Bronx is Bronx County (ANSI / FIPS 36005)
* Brooklyn is Kings County (ANSI / FIPS 36047)
* Manhattan is New York County (ANSI / FIPS 36061)
* Queens is Queens County (ANSI / FIPS 36081)
* Staten Island is Richmond County (ANSI / FIPS 36085)
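As a minimal toy illustration of those sentinel codes (a hypothetical example, separate from the actual cleaning below, which only filters `number_reported` and replaces `-4`):

```python
import numpy as np
import pandas as pd

# toy frame using the documented codes: -1 missing, -4 suppressed, -5 non-reporting
example = pd.DataFrame({'number_reported': [120, -4, -1, 35],
                        'people_total': [300, -4, -5, 90]})
print(example.replace([-1, -4, -5], np.nan))
```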
###Code
## Read in voucher data
dat = pd.read_excel("TRACT_MO_WY_2021.xlsx")
## Filter to NY State
dat = dat.loc[dat.states == "NY New York"]
## Check that entire dataset is at census tract level
assert len(dat.loc[dat.sumlevel!=7]) == 0, "Error! Mixed levels of analysis; not all data at census tract level"
## Create county, census tract, and borough fields
### County
dat["county"] = dat["entities"].apply(get_county)
### Tract
dat['census_tract'] = dat['code'].apply(lambda x: re.sub('36005|36047|36061|36081|36085|36XXX','',str(x)))
### Borough
boros = {"Kings County":3,
"Queens County":4,
"Bronx County":2,
"New York County":1,
"Richmond County":5}
dat["borocode"] = dat["county"].replace(boros)
## Create aggregate fields for units and occupied for quality checks
dat["est_total_occupied"] = dat["total_units"] * (dat["pct_occupied"] / 100)
dat["diff_occupied_reported"] = dat["est_total_occupied"] - dat["number_reported"]
## Filter to just NYC
cut = dat.loc[dat.county.isin([
"Kings County",
"Queens County",
"Bronx County",
"Richmond County",
"New York County"
])]
# Filter to just HCV
hcv = cut.loc[cut.program_label == "Housing Choice Vouchers",
["program_label",
"code",
"county",
"borocode",
"census_tract",
"number_reported",
"people_total",
"total_units",
"est_total_occupied",
"diff_occupied_reported"]].copy()
## Remove suppressed data
hcv = hcv.loc[hcv.number_reported > 0]
hcv.replace(to_replace = -4, value = None, inplace = True)
## Make sure HCV cut is unique on borough and census tract
check = hcv.groupby(["borocode","census_tract"]).aggregate({"program_label":"count"})
assert len(check.loc[check.program_label > 1]) == 0, "Error! Data is not unique on borough and census tract"
## Group HCV by borough and census tract
hcv = hcv.groupby(["program_label","borocode","county","census_tract","code"]).\
aggregate({
"total_units":"max",
"number_reported":"max",
"est_total_occupied":"max",
"diff_occupied_reported":"max",
"people_total":"max"
}).\
reset_index()
###Output
_____no_output_____
###Markdown
Check data accuracy
###Code
## Check that estimated occupied units is not greater than the total number of available units
assert len(hcv.loc[hcv.total_units<hcv.est_total_occupied]) == 0, "Error! Total occupied units greater than total available units"
## Check that the number of reported occupied units is not greater than the total number of available units
assert len(hcv.loc[hcv.total_units<hcv.number_reported]) == 0, "Error! Reported units greater than total available units"
## Check that only 1 record (Bronx 15800) where reported occupied units is less than estimated occupied units
assert len(hcv.loc[hcv.diff_occupied_reported<0]) == 1,"Error! Total occupied units less than reported units"
## Check that dividing people_total by number_reported is an accurate (within 1 decimal place) estimate of hh size
cut['people_over_reported'] = cut['people_total'] / cut['number_reported']
cut['diff_people_over_reported'] = cut['people_over_reported']-cut['people_per_unit']
assert len(cut.loc[(cut.diff_people_over_reported > 0.1) & (cut.people_total > 0),
['people_total','people_over_reported','people_per_unit']]) == 0, "Error! people_total/number_reported not an accurate estimate of hh size"
## Check out the average difference between the number of estimated occupied units vs. number reported
## Note that there is a pretty big spread (up to 40%)
(hcv.diff_occupied_reported / hcv.number_reported*100).describe()
###Output
/var/folders/99/zg43g9jx77n46pfkq365c75r0000gn/T/ipykernel_897/1650092946.py:11: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
cut['people_over_reported'] = cut['people_total'] / cut['number_reported']
/var/folders/99/zg43g9jx77n46pfkq365c75r0000gn/T/ipykernel_897/1650092946.py:12: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
cut['diff_people_over_reported'] = cut['people_over_reported']-cut['people_per_unit']
###Markdown
Read in census tract shape file and floodplain files, combine with HCV data, join and create overlap boolean

**Data Source Definition:** NYC Planning 2010 Census Tract Shape Files
**Data source and documentation:** https://www1.nyc.gov/site/planning/data-maps/open-data/census-download-metadata.page

**Data Source Definition:** NYC Open Data Portal 100 and 500 year floodplains
**Source and documentation**: https://data.cityofnewyork.us/Environment/Sea-Level-Rise-Maps-2020s-100-year-Floodplain-/ezfn-5dsb, https://data.cityofnewyork.us/Environment/Sea-Level-Rise-Maps-2020s-500-year-Floodplain-/ajyu-7sgg
###Code
# Read in census tract shapefiles, format census tract code, and group by census tract
census10 = gpd.read_file('nyct2010_22a/nyct2010.shp').to_crs(epsg=4326)
census10['census_tract'] = census10['CT2010'].astype('str')
census10['borocode'] = census10['BoroCode'].apply(lambda x: int(x))
census10 = census10[['census_tract','borocode','geometry']].drop_duplicates().dissolve(by='census_tract')
# Merge to hcv data
hcvt = hcv.merge(census10,
how = 'left',
on = ['census_tract','borocode'])
# Convert to geodatframe
hcvt = gpd.GeoDataFrame(hcvt)
# Check that merge is complete
assert len(hcvt) == len(hcv), "Error! Merge has changed total number of rows"
# Write to GeoJSON for QGIS analysis
hcvt.to_file("hcvt.geojson")
hcvt.head()
# Read in floodplain files that have been fixed, clipped to shoreline, and dissolved in QGIS
# Many thanks to Joann for advice on this!! https://github.com/joannlee-nyc
_500yr_floodplain = gpd.read_file("../data/floodplain_500_dissolved.geojson")
_100yr_floodplain = gpd.read_file("../data/floodplain_100_dissolved.geojson")
# Rename and format columns
join100 = _100yr_floodplain[['geometry','fld_zone','id']].rename(columns={'fld_zone':'fld_zone100','id':'id_100'})
join100['id_100'] = join100['id_100'].apply(lambda x: int(x))
join500 = _500yr_floodplain[['geometry','fld_zone','id']].rename(columns={'fld_zone':'fld_zone500','id':'id_500'})
join500['id_500'] = join500['id_500'].apply(lambda x: int(x))
# overlay 500 year floodplain over census tract-level HCV data
hcvt_test_500 = hcvt.\
overlay(join500, how = 'union').\
drop(columns=['geometry']).\
groupby(list(hcv.columns.values)).\
aggregate({'id_500':'max'})
# overlay 100 year floodplain over census tract-level HCV data
hcvt_test_100 = hcvt.\
overlay(join100, how = 'union').\
drop(columns=['geometry']).\
groupby(list(hcv.columns.values)).\
aggregate({'id_100':'max'})
# create 0/1 variables at census tract level to mark whether the tract is in 100 or 500 year floodplains
hcvt_test_100['_f100'] = hcvt_test_100.id_100.apply(lambda x: 0 if pd.isna(x) else 1)
hcvt_test_500['_f500'] = hcvt_test_500.id_500.apply(lambda x: 0 if pd.isna(x) else 1)
# Check that hcvt_test_100 is unique on census tract
check = hcvt_test_100.reset_index().groupby(['census_tract','borocode']).aggregate({'program_label':'nunique'})
assert len(check.loc[check.program_label>1])==0, "Error! hcvt_test_100 is not unique on census tract"
# Check that hcvt_test_500 is unique on census tract
check = hcvt_test_500.reset_index().groupby(['census_tract','borocode']).aggregate({'program_label':'nunique'})
assert len(check.loc[check.program_label>1])==0, "Error! hcvt_test_500 is not unique on census tract"
hcvt_test_100 = hcvt_test_100.reset_index()[['census_tract','borocode','_f100']].copy()
hcvt_test_500_merge = hcvt_test_500.reset_index()[['census_tract','borocode','_f500']].copy()
flmerge = pd.concat([hcvt_test_500_merge,hcvt_test_100]).\
groupby(['census_tract','borocode']).\
max().\
fillna(0).\
reset_index()
hcvf = hcvt.merge(
flmerge,
how = 'left',
left_on = ['census_tract','borocode'],
right_on = ['census_tract','borocode'])
hcvf['_f_any'] = hcvf.apply(lambda x: max(x._f100,x._f500), axis = 1)
hcvf['_f100'].fillna(0,inplace=True)
hcvf['_f500'].fillna(0,inplace=True)
hcvf['_f_any'].fillna(0,inplace=True)
assert len(hcvf) == len(hcvt), "Error! Rows lost in merge"
###Output
_____no_output_____
###Markdown
Get estimated voucher holders in flood plain
###Code
# Reported households
hcvf['hhs_in_100fp'] = hcvf['number_reported'] * hcvf['_f100']
hcvf['hhs_in_500fp'] = hcvf['number_reported'] * hcvf['_f500']
hcvf['hhs_in_any_fp'] = hcvf.apply(lambda x: x.number_reported * max(x._f500,x._f100),
axis = 1)
# Estimated households
hcvf['est_hhs_in_100fp'] = hcvf['est_total_occupied'] * hcvf['_f100']
hcvf['est_hhs_in_500fp'] = hcvf['est_total_occupied'] * hcvf['_f500']
# Reported people
hcvf['people_in_100fp'] = hcvf['people_total'] * hcvf['_f100']
hcvf['people_in_500fp'] = hcvf['people_total'] * hcvf['_f500']
hcvf['people_in_any_fp'] = hcvf.apply(lambda x: x.people_total * max(x._f500,x._f100),
axis = 1)
# Estimated people
hcvf['est_people_in_100fp'] = hcvf['est_total_occupied'] * hcvf['people_total'] / hcvf['number_reported'] * hcvf['_f100']
hcvf['est_people_in_500fp'] = hcvf['est_total_occupied'] * hcvf['people_total'] / hcvf['number_reported'] * hcvf['_f500']
# Check that any floodplain count is accurate
hcvf.pivot_table(
columns=['_f100','_f500'],
values='hhs_in_any_fp',
aggfunc='max')
###Output
_____no_output_____
###Markdown
Read in PUMA xwalk, filter to NYC, add borough code, and merge to HCV Dataset

**Data Source:** https://www2.census.gov/geo/docs/maps-data/data/rel/2010_Census_Tract_to_2010_PUMA.txt

**Documentation**: https://www.census.gov/geographies/reference-files/time-series/geo/relationship-files.2010.htmlpar_list_0

**Matching County FP to Borough:** https://guides.newman.baruch.cuny.edu/nyc_data

* The Bronx is Bronx County (ANSI / FIPS 36005)
* Brooklyn is Kings County (ANSI / FIPS 36047)
* Manhattan is New York County (ANSI / FIPS 36061)
* Queens is Queens County (ANSI / FIPS 36081)
* Staten Island is Richmond County (ANSI / FIPS 36085)
###Code
## Read in
pumas = pd.read_csv("2010_Census_Tract_to_2010_PUMA.csv")
## Filter to NY
nypumas = pumas.loc[(pumas.STATEFP == 36) &
(pumas.COUNTYFP.isin([5,47,61,81,85]))].copy()
## Add borocode
boros = {47:3, 81:4, 5:2, 61:1, 85:5}
nypumas["borocode"] = nypumas["COUNTYFP"].replace(boros)
# Merge with HCV dataset and check for uniqueness
## Merge
hcv_puma = hcvf.merge(nypumas,
how = 'left',
left_on = ['borocode','census_tract'],
right_on = ['borocode','TRACTCE'])
## Check that all records merged except those with invalid census tract codes
assert len(hcv_puma.loc[(hcv_puma.census_tract != 999999) &
(hcv_puma.TRACTCE.isnull())]) == 0, "Error! Not all records merged"
## Check that data is unique on borough, tract and puma
check = hcv_puma.groupby(['borocode','census_tract','PUMA5CE']).aggregate({'program_label':'count'}).reset_index()
assert len(check.loc[check.program_label>1]) == 0, "Error! Data is not unique on borough, tract, and puma"
###Output
_____no_output_____
###Markdown
Scrape PUMA names from map and aggregate HCV data at PUMA level

**Data source:** https://www1.nyc.gov/assets/planning/download/pdf/data-maps/nyc-population/census2010/puma_cd_map.pdf (scraped below)

PUMA Community districts and PUMA names
###Code
## Scrape pdf data into dataframe
pdf = "https://www1.nyc.gov/assets/planning/download/pdf/data-maps/nyc-population/census2010/puma_cd_map.pdf"
df = read_pdf(pdf, pages = 'all')
## Parse first three columns
table1 = df[0].iloc[:,0:3].dropna(how = "all")
table1.columns = table1.loc[2]
table1 = table1.loc[table1.CD != "CD"]
## Parse second three columns
table2 = df[0].iloc[:,3:5]
table2.columns = table2.loc[2]
table2["CD"] = table2["CD PUMA"].\
apply(lambda x: re.findall("^(\d+\s*\&*\d*)\s",str(x))[0] if re.match("^(\d+\s*\&*\d*)\s",str(x)) else None)
table2["PUMA"] = table2["CD PUMA"].\
apply(lambda x: str(x)[-4:] if re.match("\d{4}",str(x)[-4:]) else None)
table2 = table2[["CD","PUMA","PUMA Name"]].dropna(how = "all")
## Add Staten Island rows that get cut off
table3 = pd.DataFrame(
data = [
["1","3903","Port Richmond, Stapleton & Mariner's Harbor"],
["2","3902","New Springville & South Beach"],
["3","3901","Tottenville, Great Kills & Annadale"]
],
columns = ["CD","PUMA","PUMA Name"])
## Combine tables
table = pd.concat([table1,table2,table3])
table["PUMA"] = table["PUMA"].apply(lambda x: int(x) if x != None else None)
## Correct two PUMA names that get wonky
table["PUMA Name"] = table["PUMA Name"].replace(
{
"CD 5Bedford Park, Fordham North & Norwood":"Bedford Park, Fordham North & Norwood",
"4106Brooklyn Heights & Fort Greene":"Brooklyn Heights & Fort Greene"
})
## Check that dataset is unique on PUMA
assert (table.PUMA.value_counts() == 1).all() == True, "Error! PUMA codes that map to more than one PUMA"
# Combine HCV dataset and PUMA names + CDs, check for uniqueness, aggregate at PUMA level
## Merge HCV and PUMA names and CDs
hcv_puma = hcv_puma.merge(table, how = 'left', left_on = 'PUMA5CE', right_on = 'PUMA')
## Check that data is unique on borough, tract and puma
check = hcv_puma.groupby(['borocode','census_tract','PUMA Name']).\
aggregate({'program_label':'count'}).\
reset_index()
assert len(check.loc[check.program_label>1]) == 0, "Error! Data is not unique on borough, tract, and puma"
## Aggregate at PUMA level
hcv_puma_g = hcv_puma.groupby(['borocode','PUMA','PUMA Name','CD']).\
aggregate({'est_total_occupied':'sum',
'number_reported':'sum',
'people_total':'sum',
'hhs_in_100fp':'sum',
'hhs_in_500fp':'sum',
'hhs_in_any_fp':'sum',
'est_hhs_in_100fp':'sum',
'est_hhs_in_500fp':'sum',
'people_in_100fp':'sum',
'people_in_500fp':'sum',
'est_people_in_100fp':'sum',
'est_people_in_500fp':'sum',
'hhs_in_any_fp':'sum'
}).\
reset_index()
## Add average voucher household size stat
hcv_puma_g['avg_hh_size'] = np.round(hcv_puma_g['people_total']/hcv_puma_g['number_reported'],2)
## Add % in floodplain (reported)
hcv_puma_g['pct_hh_in_100fp'] = hcv_puma_g['hhs_in_100fp']/hcv_puma_g['number_reported']
hcv_puma_g['pct_hh_in_500fp'] = hcv_puma_g['hhs_in_500fp']/hcv_puma_g['number_reported']
hcv_puma_g['pct_hh_in_any_fp'] = hcv_puma_g['hhs_in_any_fp']/hcv_puma_g['number_reported']
## Check that all records merged
assert len(hcv_puma.loc[(~hcv_puma.PUMA5CE.isna()) &
(~hcv_puma.PUMA5CE.isin(hcv_puma_g.PUMA)),['PUMA5CE']]) == 0, "Error! PUMAS in HCV dataset are missing in PUMA name dataset"
###Output
_____no_output_____
###Markdown
Sense check aggregated data--does it line up with expectations about vouchers in NYC?

**Data Sources:** https://www1.nyc.gov/site/nycha/section-8/about-section-8.page:~:text=Approximately%2085%2C000%20Section%208%20vouchers,programs%20in%20New%20York%20City., https://www1.nyc.gov/site/hpd/services-and-information/about-section-8.page, https://www.cbpp.org/research/housing/federal-rental-assistance-fact-sheetsNY

* NYCHA: "Approximately 85,000 Section 8 vouchers ... currently participate in the program."
* HPD: "In total, HPD serves over 39,000 households in all five boroughs."
* CBPP: "Number of Households Receiving Major Types of Federal Rental Assistance in New York. Housing Choice Vouchers: 232,000" for all of New York State

Estimated total: 85K + 39K = **124K households**, about half the total for New York State
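A one-line version of that comparison (a sketch, assuming `hcv_puma_g` has been built as above):

```python
benchmark_households = 85_000 + 39_000  # NYCHA + HPD figures cited above = 124,000
print(hcv_puma_g.number_reported.sum() / benchmark_households)
```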
###Code
print("Total reported voucher households: " + "{:,.0f}".format(hcv_puma_g.number_reported.sum()))
print("Total estimated voucher households (% occupied * total units): " + "{:,.0f}".format(hcv_puma_g.est_total_occupied.sum()))
print("Total people in voucher households: " + "{:,.0f}".format(hcv_puma_g.people_total.sum()))
print("Add data note that total occupied may be overestimating")
print("Total reported voucher households in 100 year floodplain: " + "{:,.0f}".format(hcv_puma_g.hhs_in_100fp.sum()))
print("Total reported voucher households in 500 year floodplain: " + "{:,.0f}".format(hcv_puma_g.hhs_in_500fp.sum()))
print('\n')
print("Total estimated voucher households in 100 year floodplain: " + "{:,.0f}".format(hcv_puma_g.est_hhs_in_100fp.sum()))
print("Total estimated voucher households in 500 year floodplain: " + "{:,.0f}".format(hcv_puma_g.est_hhs_in_500fp.sum()))
print('\n')
print("Total reported people with vouchers in 100 year floodplain: " + "{:,.0f}".format(hcv_puma_g.people_in_100fp.sum()))
print("Total reported people with vouchers in 500 year floodplain: " + "{:,.0f}".format(hcv_puma_g.people_in_500fp.sum()))
print('\n')
print("Total estimated people with vouchers in 100 year floodplain: " + "{:,.0f}".format(hcv_puma_g.est_people_in_100fp.sum()))
print("Total estimated people with vouchers in 500 year floodplain: " + "{:,.0f}".format(hcv_puma_g.est_people_in_500fp.sum()))
print('\n')
print("Total reported voucher households in either floodplain: " + "{:,.0f}".format(hcv_puma_g.hhs_in_any_fp.sum()))
print("Percent of reported voucher households in either floodplain: " + "{:.0%}".format(hcv_puma_g.hhs_in_any_fp.sum()/hcv_puma_g.number_reported.sum()))
###Output
Total reported voucher households in 100 year floodplain: 53,031
Total reported voucher households in 500 year floodplain: 64,497
Total estimated voucher households in 100 year floodplain: 59,359
Total estimated voucher households in 500 year floodplain: 72,219
Total reported people with vouchers in 100 year floodplain: 118,351
Total reported people with vouchers in 500 year floodplain: 145,983
Total estimated people with vouchers in 100 year floodplain: 131,163
Total estimated people with vouchers in 500 year floodplain: 161,814
Total reported voucher households in either floodplain: 64,497
Percent of reported voucher households in either floodplain: 55%
###Markdown
Exploring issues with this dataset * Used to find an issue with an incorrectly merged census shapefile
###Code
# Check some of the unexpectedly high shares derived from the geopandas analysis above
print('total voucher holders in Crown Heights North & Prospect Heights: {}'.format(
hcv_puma_g.loc[(hcv_puma_g.PUMA==4006)&(hcv_puma_g.borocode==3),'number_reported'].sum()))
print('total voucher holders in either floodplain in Crown Heights North & Prospect Heights: {}'.format(
hcv_puma_g.loc[(hcv_puma_g.PUMA==4006),'hhs_in_any_fp'].values[0]))
print('share of voucher holders in either floodplain in Crown Heights North & Prospect Heights: {}'.format(
hcv_puma_g.loc[(hcv_puma_g.PUMA==4006),'pct_hh_in_any_fp'].values[0]))
hcv_puma.loc[hcv_puma.PUMA5CE==4006,['census_tract','number_reported','_f500']].sort_values('number_reported',ascending=False).head()
hcv.loc[hcv.census_tract==30900]
hcv_puma.loc[hcv.census_tract==30900]
check = hcvt_test_500.reset_index()
check.loc[check.census_tract==30900]
# Compare with intersection files prepared manually in QGIS just to be sure
intersection_500 = gpd.read_file("hcv_floodplain_500_intersection.geojson")
check_500 = intersection_500.\
groupby(['PUMA','borocode']).\
aggregate({'number_reported':'sum','est_total_occupied':'sum'}).\
reset_index()
len(hcv_puma_g.merge(check_500, how = 'inner', on = ['PUMA','borocode']))-len(hcv_puma_g)
# Check out PUMAs that have 500 year floodplain matches based on geopandas overlay but not in QGIS file
hcv_puma_g.loc[~hcv_puma_g.PUMA.isin(check_500.PUMA)]
# Spot check these values
# All seem to show up in the "intersections" layer in QGIS
hcv_puma.loc[(hcv_puma.PUMA5CE.isin([3806,3706,4015])) &
(hcv_puma.people_in_500fp >0),
['census_tract','borocode','number_reported','people_in_500fp']]
# Descriptives for number_reported
hcv_puma_g.number_reported.describe()
###Output
_____no_output_____
###Markdown
Read in geoJSON datasets, merge with HCV data, write to geoJSON file for mapping**Source and documentation**: https://data.cityofnewyork.us/Housing-Development/2010-Public-Use-Microdata-Areas-PUMAs-/cwiz-gcty**Data Source Definition:** NYC Open Data Portal GeoJSON file for PUMAs
###Code
## Read in and inspect data
puma_json = gpd.read_file("https://data.cityofnewyork.us/api/geospatial/cwiz-gcty?method=export&format=GeoJSON")
puma_json.head()
### Convert PUMA to string type for merging
hcv_puma_g['PUMA'] = hcv_puma_g['PUMA'].apply(lambda x: str(int(x)))
## Merge
viz = puma_json.merge(hcv_puma_g,
how = 'inner',
left_on = 'puma',
right_on = 'PUMA')
## Check that all records merged
assert len(hcv_puma_g.loc[~(hcv_puma_g.PUMA.isin(viz.PUMA))]) == 0, "Error! Not all records merged to geoJSON"
## rename PUMA Name to proper variable name
viz.rename(columns = {"PUMA Name":"puma_name"}, inplace = True)
## Write to geoJSON file for reduction on https://mapshaper.org/ and then mapping
viz.to_file("../data/hcv_dat.geojson",driver='GeoJSON')
###Output
_____no_output_____
###Markdown
Filling with 0
###Code
marks.fillna(0).iloc[0:1,0:5]
marks.fillna(method='ffill').iloc[0:1,0:5]
nba = pd.read_csv('data/nba.csv')
nba.head(1)
nba = nba.dropna(how='all')
nba = nba.dropna(subset=['Salary'])
###Output
_____no_output_____
###Markdown
Use `astype('category')` for columns with very few unique values.

Importing Data
```python
pd.read_csv(filename) From a CSV file
pd.read_table(filename) From a delimited text file (like TSV)
pd.read_excel(filename) From an Excel file
pd.read_sql(query, connection_object) Reads from a SQL table/database
pd.read_json(json_string) Reads from a JSON formatted string, URL or file
pd.read_html(url) Parses an html URL, string or file and extracts tables to a list of dataframes
pd.read_clipboard() Takes the contents of your clipboard and passes it to read_table()
pd.DataFrame(dict) From a dict, keys for columns names, values for data as lists
```

Exploring data
```python
df.shape Prints number of rows and columns in dataframe
df.head(n) Prints first n rows of the DataFrame
df.tail(n) Prints last n rows of the DataFrame
df.info() Index, Datatype and Memory information
df.describe() Summary statistics for numerical columns
s.value_counts(dropna=False) Views unique values and counts
df.apply(pd.Series.value_counts) Unique values and counts for all columns
df.mean() Returns the mean of all columns
df.corr() Returns the correlation between columns in a DataFrame
df.count() Returns the number of non-null values in each DataFrame column
df.max() Returns the highest value in each column
df.min() Returns the lowest value in each column
df.median() Returns the median of each column
df.std() Returns the standard deviation of each column
```

Selecting
```python
df[col] Returns column with label col as Series
df[[col1, col2]] Returns Columns as a new DataFrame
s.iloc[0] Selection by position (selects first element)
s.loc[0] Selection by index (selects element at index 0)
df.iloc[0,:] First row
df.iloc[0,0] First element of first column
```

Data cleaning
```python
df.columns = ['a','b','c'] Renames columns
pd.isnull() Checks for null Values, Returns Boolean Array
pd.notnull() Opposite of s.isnull()
df.dropna() Drops all rows that contain null values
df.dropna(axis=1) Drops all columns that contain null values
df.dropna(axis=1,thresh=n) Drops all columns that have less than n non null values
df.fillna(x) Replaces all null values with x
s.fillna(s.mean()) Replaces all null values with the mean (mean can be replaced with almost any function from the statistics section)
s.astype(float) Converts the datatype of the series to float
s.replace(1,'one') Replaces all values equal to 1 with 'one'
s.replace([1,3],['one','three']) Replaces all 1 with 'one' and 3 with 'three'
df.rename(columns=lambda x: x + 1) Mass renaming of columns
df.rename(columns={'old_name': 'new_name'}) Selective renaming
df.set_index('column_one') Changes the index
df.rename(index=lambda x: x + 1) Mass renaming of index
```

Filter, Sort and Group By
```python
df[df[col] > 0.5] Rows where the col column is greater than 0.5
df[(df[col] > 0.5) & (df[col] < 0.7)] Rows where 0.5 < col < 0.7
df.sort_values(col1) Sorts values by col1 in ascending order
df.sort_values(col2,ascending=False) Sorts values by col2 in descending order
df.sort_values([col1,col2], ascending=[True,False]) Sorts values by col1 in ascending order then col2 in descending order
df.groupby(col) Returns a groupby object for values from one column
df.groupby([col1,col2]) Returns a groupby object for values from multiple columns
df.groupby(col1)[col2].mean() Returns the mean of the values in col2, grouped by the values in col1 (mean can be replaced with almost any function from the statistics section)
df.pivot_table(index=col1, values=[col2,col3], aggfunc=mean) Creates a pivot table that groups by col1 and calculates the mean of col2 and col3
df.groupby(col1).agg(np.mean) Finds the average across all columns for every unique column 1 group
df.apply(np.mean) Applies a function across each column
df.apply(np.max, axis=1) Applies a function across each row
```

Finding outlier
###Code
import scipy.stats
dummy_age = [20, 21, 24, 19, 23, 45, 34, 20, 30, 34, 45, 29, 100, 6]
dummy_height = [140, 150, 280, 170, 160, 150,
159, 168, 167, 170, 169, 159, 160, 140]
dummy_df = pd.DataFrame(
list(zip(dummy_age, dummy_height)), columns=['age', 'height'])
dummy_df
# Calculate z-score
scipy.stats.zscore(dummy_df['height'])
zscore_height = np.abs(scipy.stats.zscore(dummy_df['height']))
dummy_df.iloc[np.where(zscore_height>3)]
###Output
_____no_output_____
###Markdown
The z-score can be affected by outliers since it depends on the mean. A better approach can be to use the Median Absolute Deviation (MAD); a minimal MAD sketch is included at the end of the next cell. Using IQR to identify outliers
###Code
def get_lower_upper_bound(data):
q1 = np.percentile(data, 25)
q3 = np.percentile(data, 75)
iqr = q3-q1
lower_bound = q1 - (iqr * 1.5)
upper_bound = q3 + (iqr * 1.5)
return lower_bound, upper_bound
def get_outlier_iqr(data):
lower, upper = get_lower_upper_bound(data)
return data[np.where((data > upper) | (data < lower))]
get_outlier_iqr(dummy_df['height'].values)
get_outlier_iqr(dummy_df['age'].values)
dummy_df.boxplot(column='age')
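# Added sketch: a MAD-based outlier rule, as mentioned in the markdown above.
# The 0.6745 factor and 3.5 cutoff are common conventions, not values from this notebook.
def get_outlier_mad(data, cutoff=3.5):
    median = np.median(data)
    mad = np.median(np.abs(data - median))       # median absolute deviation
    modified_z = 0.6745 * (data - median) / mad  # modified z-score
    return data[np.abs(modified_z) > cutoff]

get_outlier_mad(dummy_df['height'].values)
get_outlier_mad(dummy_df['age'].values)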
###Output
_____no_output_____
###Markdown
Remove Source Name from Content
###Code
for row in transform.read():
for word in row['source'].split():
row = transform.remove_word(row, 'content', word)
if word == 'Its':
row = transform.remove_word(row, 'content', "It's")
transform.write(row, output)
###Output
_____no_output_____ |
3_Image_Related_Neural_Networks/03_MonitoringAndImprovingNN.ipynb | ###Markdown
Monitoring and Improving Neural Networks Live Demos
###Code
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Flatten, Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.metrics import SparseCategoricalAccuracy
from tensorflow.keras.callbacks import TensorBoard

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
plt.imshow(train_images[0], cmap='gray')
plt.title(train_labels[0])
plt.show()
# Important step - otherwise the model can explode (training diverges)!
# train_images = train_images / 255.0
# test_images = test_images / 255.0
# Float16 - to get more data in GPU memory
train_images = (train_images / 255.0).astype(np.float16)
test_images = (test_images / 255.0).astype(np.float16)
test_images.dtype
DROPOUT_RATE = 0.1
model = Sequential([
Input(shape=train_images[0].shape), # Images with the same shape
Flatten(),
Dense(units=20, activation=tf.keras.activations.relu),
Dropout(DROPOUT_RATE),
Dense(units=50, activation=tf.keras.activations.relu),
Dropout(DROPOUT_RATE),
Dense(units=30, activation=tf.keras.activations.relu),
Dropout(DROPOUT_RATE),
Dense(units=10, activation=tf.keras.activations.softmax)
])
model.summary()
model.compile(
optimizer = Adam(learning_rate=0.001),
loss = SparseCategoricalCrossentropy(),
metrics = [SparseCategoricalAccuracy()]
)
history = model.fit(
x = train_images,
y = train_labels,
epochs = 20,
batch_size = 8,
validation_split = 0.1,
# initial_epoch = 0 # Where to start from remebered data
callbacks = [TensorBoard()]
)
# Use "tensorboard --logdir logs" to visualize
model.predict(train_images[0:10])
test_predicitons = model.predict_classes(test_images[:10])
test_predicitons
test_labels[:10]
(test_predicitons == test_labels[:10]).sum()/len(test_predicitons)
history.history
plt.plot(range(20), history.history["loss"], c="g", label = "train loss")
plt.plot(range(20), history.history["val_loss"], c="r", label = "validation loss")
plt.xticks(list(range(1, 21)))
plt.ylim(0, 0.6)
plt.legend()
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.plot()
###Output
_____no_output_____ |
1_mosaic_data_attention_experiments/3_stage_wise_training/alternate_minimization/theory/type0_post_mortem/codes/linear_linear_init0_sgdmomentum_simultaneous_sigmoid.ipynb | ###Markdown
Generate dataset
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader

y = np.random.randint(0,3,500)
idx= []
for i in range(3):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((500,))
np.random.seed(12)
x[idx[0]] = np.random.uniform(low =-1,high =0,size= sum(idx[0]))
x[idx[1]] = np.random.uniform(low =0,high =1,size= sum(idx[1]))
x[idx[2]] = np.random.uniform(low =2,high =3,size= sum(idx[2]))
x[idx[0]][0], x[idx[2]][5]
print(x.shape,y.shape)
idx= []
for i in range(3):
idx.append(y==i)
for i in range(3):
y= np.zeros(x[idx[i]].shape[0])
plt.scatter(x[idx[i]],y,label="class_"+str(i))
plt.legend()
bg_idx = [ np.where(idx[2] == True)[0]]
bg_idx = np.concatenate(bg_idx, axis = 0)
bg_idx.shape
np.unique(bg_idx).shape
x = x - np.mean(x[bg_idx], axis = 0, keepdims = True)
np.mean(x[bg_idx], axis = 0, keepdims = True), np.mean(x, axis = 0, keepdims = True)
x = x/np.std(x[bg_idx], axis = 0, keepdims = True)
np.std(x[bg_idx], axis = 0, keepdims = True), np.std(x, axis = 0, keepdims = True)
for i in range(3):
y= np.zeros(x[idx[i]].shape[0])
plt.scatter(x[idx[i]],y,label="class_"+str(i))
plt.legend()
foreground_classes = {'class_0','class_1' }
background_classes = {'class_2'}
fg_class = np.random.randint(0,2)
fg_idx = np.random.randint(0,9)
a = []
for i in range(9):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(2,3)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
print(a.shape)
print(fg_class , fg_idx)
a.shape
np.reshape(a,(9,1))
a=np.reshape(a,(3,3))
plt.imshow(a)
desired_num = 2000
mosaic_list_of_images =[]
mosaic_label = []
fore_idx=[]
for j in range(desired_num):
np.random.seed(j)
fg_class = np.random.randint(0,2)
fg_idx = 0
a = []
for i in range(9):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(2,3)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list_of_images.append(np.reshape(a,(9,1)))
mosaic_label.append(fg_class)
fore_idx.append(fg_idx)
mosaic_list_of_images = np.concatenate(mosaic_list_of_images,axis=1).T
mosaic_list_of_images.shape, mosaic_list_of_images[0]
for j in range(9):
print(mosaic_list_of_images[0][j])
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label, fore_idx):
"""
Args:
mosaic_list_of_images (array-like): flattened mosaic inputs, nine values per sample.
mosaic_label (array-like): foreground class label for each mosaic.
fore_idx (array-like): index of the foreground patch within each mosaic.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx], self.fore_idx[idx]
batch = 250
msd1 = MosaicDataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000] , fore_idx[0:1000])
train_loader = DataLoader( msd1 ,batch_size= batch ,shuffle=True)
batch = 250
msd2 = MosaicDataset(mosaic_list_of_images[1000:2000], mosaic_label[1000:2000] , fore_idx[1000:2000])
test_loader = DataLoader( msd2 ,batch_size= batch ,shuffle=True)
class Focus(nn.Module):
def __init__(self):
super(Focus, self).__init__()
self.fc1 = nn.Linear(1, 1)
# self.fc2 = nn.Linear(2, 1)
def forward(self,z): #y is avg image #z batch of list of 9 images
y = torch.zeros([batch], dtype=torch.float64)
x = torch.zeros([batch,9],dtype=torch.float64)
y = y.to("cuda")
x = x.to("cuda")
# print(x.shape, z.shape)
for i in range(9):
# print(z[:,i].shape)
# print(self.helper(z[:,i])[:,0].shape)
x[:,i] = self.helper(z[:,i])[:,0]
# print(x.shape, z.shape)
x = F.softmax(x,dim=1)
# print(x.shape, z.shape)
# x1 = x[:,0]
# print(torch.mul(x[:,0],z[:,0]).shape)
for i in range(9):
# x1 = x[:,i]
y = y + torch.mul(x[:,i],z[:,i])
# print(x.shape, y.shape)
return x, y
def helper(self, x):
x = x.view(-1, 1)
# x = F.relu(self.fc1(x))
x = (self.fc1(x))
return x
class Classification(nn.Module):
def __init__(self):
super(Classification, self).__init__()
self.fc1 = nn.Linear(1, 1)
def forward(self, x):
x = x.view(-1, 1)
x = self.fc1(x)
# print(x.shape)
return x
torch.manual_seed(12)
focus_net = Focus().double()
focus_net = focus_net.to("cuda")
torch.manual_seed(12)
classify = Classification().double()
classify = classify.to("cuda")
focus_net.fc1.weight, focus_net.fc1.bias
classify.fc1.weight, classify.fc1.bias
focus_net.fc1.weight = torch.nn.Parameter(torch.tensor(np.array([[0.0]])))
focus_net.fc1.bias = torch.nn.Parameter(torch.tensor(np.array([0.0])))
focus_net.fc1.weight, focus_net.fc1.bias
classify.fc1.weight = torch.nn.Parameter(torch.tensor(np.array([[0.0]])))
classify.fc1.bias = torch.nn.Parameter(torch.tensor(np.array([0.0])))
classify.fc1.weight, classify.fc1.bias
focus_net = focus_net.to("cuda")
classify = classify.to("cuda")
focus_net.fc1.weight, focus_net.fc1.bias
classify.fc1.weight, classify.fc1.bias
import torch.optim as optim
criterion = nn.BCEWithLogitsLoss()
optimizer_classify = optim.SGD(classify.parameters(), lr=0.01, momentum=0.9)
optimizer_focus = optim.SGD(focus_net.parameters(), lr=0.01, momentum=0.9)
# optimizer_classify = optim.Adam(classify.parameters(), lr=0.01)
# optimizer_focus = optim.Adam(focus_net.parameters(), lr=0.01)
col1=[]
col2=[]
col3=[]
col4=[]
col5=[]
col6=[]
col7=[]
col8=[]
col9=[]
col10=[]
col11=[]
col12=[]
col13=[]
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
# print(outputs.shape)
predicted = np.round(torch.sigmoid(outputs.data).cpu().numpy())
# print(predicted.shape)
for j in range(labels.size(0)):
count += 1
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
# print(focus, fore_idx[j], predicted[j])
if(focus == fore_idx[j] and predicted[j] == labels[j].item()):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j].item()):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j].item()):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j].item()):
focus_false_pred_false += 1
total += labels.size(0)
correct += np.sum( np.concatenate(predicted,axis=0)== labels.cpu().numpy() )
print('Accuracy of the network on the 1000 train images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
print(count)
print("="*100)
col1.append(0)
col2.append(argmax_more_than_half)
col3.append(argmax_less_than_half)
col4.append(focus_true_pred_true)
col5.append(focus_false_pred_true)
col6.append(focus_true_pred_false)
col7.append(focus_false_pred_false)
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
predicted = np.round(torch.sigmoid(outputs.data).cpu().numpy())
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j].item()):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j].item()):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j].item()):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j].item()):
focus_false_pred_false += 1
total += labels.size(0)
correct += np.sum( np.concatenate(predicted,axis=0)== labels.cpu().numpy() )
print('Accuracy of the network on the 1000 test images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
col8.append(argmax_more_than_half)
col9.append(argmax_less_than_half)
col10.append(focus_true_pred_true)
col11.append(focus_false_pred_true)
col12.append(focus_true_pred_false)
col13.append(focus_false_pred_false)
nos_epochs = 1000
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
for epoch in range(nos_epochs): # loop over the dataset multiple times
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
running_loss = 0.0
epoch_loss = []
cnt=0
iteration = desired_num // batch
#training data set
for i, data in enumerate(train_loader):
inputs , labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
inputs = inputs.double()
labels = labels.float()
# zero the parameter gradients
optimizer_focus.zero_grad()
optimizer_classify.zero_grad()
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
predicted = np.round(torch.sigmoid(outputs.data).cpu().numpy())
# print(predicted.shape)
# print(outputs.shape,labels.shape)
# print(outputs)
# print(labels)
loss = criterion(outputs[:,0], labels)
loss.backward()
optimizer_focus.step()
optimizer_classify.step()
running_loss += loss.item()
mini = 3
if cnt % mini == mini-1: # print every 40 mini-batches
print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini))
epoch_loss.append(running_loss/mini)
running_loss = 0.0
cnt=cnt+1
if epoch % 5 == 0:
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j].item()):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j].item()):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j].item()):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j].item()):
focus_false_pred_false +=1
if(np.mean(epoch_loss) <= 0.001):
break;
if epoch % 5 == 0:
col1.append(epoch + 1)
col2.append(argmax_more_than_half)
col3.append(argmax_less_than_half)
col4.append(focus_true_pred_true)
col5.append(focus_false_pred_true)
col6.append(focus_true_pred_false)
col7.append(focus_false_pred_false)
# print("="*20)
# print("Train FTPT : ", col4)
# print("Train FFPT : ", col5)
#************************************************************************
#testing data set
# focus_net.eval()
with torch.no_grad():
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
for data in test_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
predicted = np.round(torch.sigmoid(outputs.data).cpu().numpy())
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j].item()):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j].item()):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j].item()):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j].item()):
focus_false_pred_false +=1
col8.append(argmax_more_than_half)
col9.append(argmax_less_than_half)
col10.append(focus_true_pred_true)
col11.append(focus_false_pred_true)
col12.append(focus_true_pred_false)
col13.append(focus_false_pred_false)
# print("Test FTPT : ", col10)
# print("Test FFPT : ", col11)
# print("="*20)
print('Finished Training')
df_train = pd.DataFrame()
df_test = pd.DataFrame()
columns = ["epochs", "argmax > 0.5" ,"argmax < 0.5", "focus_true_pred_true", "focus_false_pred_true", "focus_true_pred_false", "focus_false_pred_false" ]
df_train[columns[0]] = col1
df_train[columns[1]] = col2
df_train[columns[2]] = col3
df_train[columns[3]] = col4
df_train[columns[4]] = col5
df_train[columns[5]] = col6
df_train[columns[6]] = col7
df_test[columns[0]] = col1
df_test[columns[1]] = col8
df_test[columns[2]] = col9
df_test[columns[3]] = col10
df_test[columns[4]] = col11
df_test[columns[5]] = col12
df_test[columns[6]] = col13
df_train
# plt.figure(12,12)
plt.plot(col1,np.array(col2)/10, label='argmax > 0.5')
plt.plot(col1,np.array(col3)/10, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.title("On Training set")
plt.show()
plt.plot(col1,np.array(col4)/10, label ="focus_true_pred_true ")
plt.plot(col1,np.array(col5)/10, label ="focus_false_pred_true ")
plt.plot(col1,np.array(col6)/10, label ="focus_true_pred_false ")
plt.plot(col1,np.array(col7)/10, label ="focus_false_pred_false ")
plt.title("On Training set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.show()
df_test
# plt.figure(12,12)
# counts are divided by 10 to plot them as a percentage of the 1000 test images
plt.plot(col1,np.array(col8)/10, label='argmax > 0.5')
plt.plot(col1,np.array(col9)/10, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("% of test images")
plt.title("On Testing set")
plt.show()
plt.plot(col1,np.array(col10)/10, label ="focus_true_pred_true ")
plt.plot(col1,np.array(col11)/10, label ="focus_false_pred_true ")
plt.plot(col1,np.array(col12)/10, label ="focus_true_pred_false ")
plt.plot(col1,np.array(col13)/10, label ="focus_false_pred_false ")
plt.title("On Testing set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.show()
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
predicted = np.round(torch.sigmoid(outputs.data).cpu().numpy())
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j].item()):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j].item()):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j].item()):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j].item()):
focus_false_pred_false += 1
total += labels.size(0)
correct += np.sum( np.concatenate(predicted,axis=0)== labels.cpu().numpy() )
print('Accuracy of the network on the 1000 train images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
predicted = np.round(torch.sigmoid(outputs.data).cpu().numpy())
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j].item()):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j].item()):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j].item()):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j].item()):
focus_false_pred_false += 1
total += labels.size(0)
correct += np.sum( np.concatenate(predicted,axis=0)== labels.cpu().numpy() )
print('Accuracy of the network on the 1000 test images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
correct = 0
total = 0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
predicted = np.round(torch.sigmoid(outputs.data).cpu().numpy())
total += labels.size(0)
correct += np.sum( np.concatenate(predicted,axis=0)== labels.cpu().numpy() )
print('Accuracy of the network on the 1000 train images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
correct = 0
total = 0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
predicted = np.round(torch.sigmoid(outputs.data).cpu().numpy())
total += labels.size(0)
correct += np.sum( np.concatenate(predicted,axis=0)== labels.cpu().numpy() )
print('Accuracy of the network on the 1000 test images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
focus_net.fc1.weight, focus_net.fc1.bias
classify.fc1.weight, classify.fc1.bias
###Output
_____no_output_____ |
notebooks/example_utilility_f1_attraction_repulsion_constant_hmm_1.ipynb | ###Markdown
Attraction
Case 1: $w_1 >> w_2$
###Code
utility_params = {'state':0 , 'w1':100000, 'w2':1, 'c':1, 't':3}
x_vector = np.array([ [5], [3], [4], [3], [5]])
t_range = range(1,6)
for t in t_range:
print('t = ', t)
utility_params['t'] = t
print()
for state in range(3):
utility_params['state'] = state
print('Attraction to state ', utility_params['state'])
example_attraction_repulsion(g_w_dict, x_vector, utility_params)
print()
###Output
t = 1
Attraction to state 0
###Markdown
Case 2: $w_1 << w_2$
###Code
utility_params = {'state':0 , 'w1':1, 'w2':100000, 'c':1, 't':3}
x_vector = np.array([ [5], [3], [4], [3], [5]])
t_range = range(1,6)
for t in t_range:
print('t = ', t)
utility_params['t'] = t
print()
for state in range(3):
utility_params['state'] = state
print('Attraction to state ', utility_params['state'])
example_attraction_repulsion(g_w_dict, x_vector, utility_params)
print()
###Output
t = 1
Attraction to state 0
###Markdown
Case 3: $w_1 = 0.95, w_2 = 0.05$
###Code
utility_params = {'state':0 , 'w1':0.95, 'w2':0.05, 'c': 1, 't':3}
x_vector = np.array([ [5], [3], [4], [3], [5]])
t_range = range(1,6)
for t in t_range:
print('t = ', t)
utility_params['t'] = t
print()
for state in range(3):
utility_params['state'] = state
print('Attraction to state ', utility_params['state'])
example_attraction_repulsion(g_w_dict, x_vector, utility_params)
print()
###Output
t = 1
Attraction to state 0
###Markdown
Repulsion
Case 1: $w_1 >> w_2$
###Code
utility_params = {'state':0 , 'w1':100000, 'w2':1, 'c':-1, 't':3}
x_vector = np.array([ [5], [3], [4], [3], [5]])
t_range = range(1,6)
for t in t_range:
print('t = ', t)
utility_params['t'] = t
print()
for state in range(3):
utility_params['state'] = state
print('Repulsion from state ', utility_params['state'])
example_attraction_repulsion(g_w_dict, x_vector, utility_params)
print()
###Output
t = 1
Repulsion from state 0
###Markdown
Case 2: $w_2 >> w_1$
###Code
utility_params = {'state':0 , 'w1':1, 'w2':100000, 'c':-1, 't':3}
x_vector = np.array([ [5], [3], [4], [3], [5]])
t_range = range(1,6)
for t in t_range:
print('t = ', t)
utility_params['t'] = t
print()
for state in range(3):
utility_params['state'] = state
print('Repulsion from state ', utility_params['state'])
example_attraction_repulsion(g_w_dict, x_vector, utility_params)
print()
###Output
t = 1
Repulsion from state 0
###Markdown
Case 3: $w_1 = 0.95, w_2 = 0.05$
###Code
utility_params = {'state':0 , 'w1':0.95, 'w2':0.05, 'c':-1, 't':3}
x_vector = np.array([ [5], [3], [4], [3], [5]])
t_range = range(1,6)
for t in t_range:
print('t = ', t)
utility_params['t'] = t
print()
for state in range(3):
utility_params['state'] = state
print('Repulsion from state ', utility_params['state'])
example_attraction_repulsion(g_w_dict, x_vector, utility_params)
print()
###Output
t = 1
Repulsion from state 0
|
models/.ipynb_checkpoints/to_upload_angle_pssm_1dim_aditional_features_pre_act-checkpoint.ipynb | ###Markdown
Loading the Dataset
###Code
# Load outputs/labels from file
outputs = np.genfromtxt("../data/angles/outputs.txt")
outputs[np.isnan(outputs)] = 0.0
outputs.shape
# Convert angles to sin/cos to remove angle periodicity
out = []
out.append(np.sin(outputs[:,0]))
out.append(np.cos(outputs[:,0]))
out.append(np.sin(outputs[:,1]))
out.append(np.cos(outputs[:,1]))
out = np.array(out).T
print(out.shape)
def get_ins(path = "../data/angles/input_aa.txt", pssm=None):
""" Gets inputs from both AminoAcids (input_aa) and PSSM (input_pssm)"""
# handles both files
if pssm: path = "../data/angles/input_pssm.txt"
# Open the file and read its text
with open(path, "r") as f:
lines = f.read().split('\n')
# Extract numeric data from text
pre_ins = []
for i,line in enumerate(lines):
# Read each protein separately
if line == "NEW":
prot = []
raw = lines[i+1:i+(17*2+1)]
# Read each line as a vector + assemble the one-hot vectors into a matrix
for r in raw:
prot.append(np.array([float(x) for x in r.split(" ") if x != ""]))
# Add prot to dataset
pre_ins.append(np.array(prot))
return np.array(pre_ins)
# Get inputs data
aas = get_ins()
pssms = get_ins(pssm=True)
# Check that shapes match
print(aas.shape, pssms.shape)
# Concatenate input features
inputs = np.concatenate((aas, pssms[:, :, :20]), axis=2)
inputs.shape
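# Note (assumption): aas appears to be a one-hot amino-acid encoding with 21 columns per
# position, and the first 20 PSSM columns are appended to it, which yields the 41 features
# per position that the model below expects via input_shape=(17*2, 41).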
# Plot some angles to make sure they follow a Ramachandran plot distribution
plt.scatter(outputs[:1000,0], outputs[:1000,1], marker=".")
plt.xlim(-np.pi, np.pi)
plt.ylim(-np.pi, np.pi)
plt.show()
""" WE DON'T PREPROCESS INPUTS SINCE THEY'RE IN 0-1 RANGE"""
# Preprocess inputs (mean/std); kept commented out for reference
# mean = np.mean(inputs,axis=(0,1,2))
# std = np.std(inputs,axis=(0,1,2))
# pre_inputs = (inputs-mean)/(std+1e-7)
# print("Mean: ", mean)
# print("Std: ", std)
###Output
_____no_output_____
###Markdown
Loading the model
###Code
# Using AMSGrad optimizer for speed
kernel_size, filters = 3, 16
adam = keras.optimizers.Adam(amsgrad=True)
# Using AMSGrad optimizer for speed
kernel_size, filters = 3, 16
adam = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=True)
# Create model
model = resnet_v2(input_shape=(17*2,41), depth=20, num_classes=4, conv_first=True)
model.compile(optimizer=adam, loss=custom_mse_mae,
metrics=["mean_absolute_error", "mean_squared_error"])
model.summary()
###Output
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 34, 41) 0
__________________________________________________________________________________________________
conv1d_1 (Conv1D) (None, 34, 16) 1984 input_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 34, 16) 64 conv1d_1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 34, 16) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv1d_2 (Conv1D) (None, 34, 16) 272 activation_1[0][0]
__________________________________________________________________________________________________
conv1d_3 (Conv1D) (None, 34, 16) 784 conv1d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 34, 16) 64 conv1d_3[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 34, 16) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv1d_4 (Conv1D) (None, 34, 64) 1088 activation_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 34, 64) 256 conv1d_4[0][0]
__________________________________________________________________________________________________
conv1d_5 (Conv1D) (None, 34, 64) 1088 activation_1[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 34, 64) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 34, 64) 0 conv1d_5[0][0]
activation_3[0][0]
__________________________________________________________________________________________________
conv1d_6 (Conv1D) (None, 34, 16) 1040 add_1[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 34, 16) 64 conv1d_6[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 34, 16) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
conv1d_7 (Conv1D) (None, 34, 16) 784 activation_4[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 34, 16) 64 conv1d_7[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 34, 16) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
conv1d_8 (Conv1D) (None, 34, 64) 1088 activation_5[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 34, 64) 256 conv1d_8[0][0]
__________________________________________________________________________________________________
activation_6 (Activation) (None, 34, 64) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
add_2 (Add) (None, 34, 64) 0 add_1[0][0]
activation_6[0][0]
__________________________________________________________________________________________________
conv1d_9 (Conv1D) (None, 17, 64) 4160 add_2[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 17, 64) 256 conv1d_9[0][0]
__________________________________________________________________________________________________
activation_7 (Activation) (None, 17, 64) 0 batch_normalization_7[0][0]
__________________________________________________________________________________________________
conv1d_10 (Conv1D) (None, 17, 64) 12352 activation_7[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 17, 64) 256 conv1d_10[0][0]
__________________________________________________________________________________________________
activation_8 (Activation) (None, 17, 64) 0 batch_normalization_8[0][0]
__________________________________________________________________________________________________
conv1d_11 (Conv1D) (None, 17, 128) 8320 activation_8[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 17, 128) 512 conv1d_11[0][0]
__________________________________________________________________________________________________
conv1d_12 (Conv1D) (None, 17, 128) 8320 add_2[0][0]
__________________________________________________________________________________________________
activation_9 (Activation) (None, 17, 128) 0 batch_normalization_9[0][0]
__________________________________________________________________________________________________
add_3 (Add) (None, 17, 128) 0 conv1d_12[0][0]
activation_9[0][0]
__________________________________________________________________________________________________
conv1d_13 (Conv1D) (None, 17, 64) 8256 add_3[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 17, 64) 256 conv1d_13[0][0]
__________________________________________________________________________________________________
activation_10 (Activation) (None, 17, 64) 0 batch_normalization_10[0][0]
__________________________________________________________________________________________________
conv1d_14 (Conv1D) (None, 17, 64) 12352 activation_10[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 17, 64) 256 conv1d_14[0][0]
__________________________________________________________________________________________________
activation_11 (Activation) (None, 17, 64) 0 batch_normalization_11[0][0]
__________________________________________________________________________________________________
conv1d_15 (Conv1D) (None, 17, 128) 8320 activation_11[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 17, 128) 512 conv1d_15[0][0]
__________________________________________________________________________________________________
activation_12 (Activation) (None, 17, 128) 0 batch_normalization_12[0][0]
__________________________________________________________________________________________________
add_4 (Add) (None, 17, 128) 0 add_3[0][0]
activation_12[0][0]
__________________________________________________________________________________________________
conv1d_16 (Conv1D) (None, 9, 128) 16512 add_4[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 9, 128) 512 conv1d_16[0][0]
__________________________________________________________________________________________________
activation_13 (Activation) (None, 9, 128) 0 batch_normalization_13[0][0]
__________________________________________________________________________________________________
conv1d_17 (Conv1D) (None, 9, 128) 49280 activation_13[0][0]
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 9, 128) 512 conv1d_17[0][0]
__________________________________________________________________________________________________
activation_14 (Activation) (None, 9, 128) 0 batch_normalization_14[0][0]
__________________________________________________________________________________________________
conv1d_18 (Conv1D) (None, 9, 256) 33024 activation_14[0][0]
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 9, 256) 1024 conv1d_18[0][0]
__________________________________________________________________________________________________
conv1d_19 (Conv1D) (None, 9, 256) 33024 add_4[0][0]
__________________________________________________________________________________________________
activation_15 (Activation) (None, 9, 256) 0 batch_normalization_15[0][0]
__________________________________________________________________________________________________
add_5 (Add) (None, 9, 256) 0 conv1d_19[0][0]
activation_15[0][0]
__________________________________________________________________________________________________
conv1d_20 (Conv1D) (None, 9, 128) 32896 add_5[0][0]
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 9, 128) 512 conv1d_20[0][0]
__________________________________________________________________________________________________
activation_16 (Activation) (None, 9, 128) 0 batch_normalization_16[0][0]
__________________________________________________________________________________________________
conv1d_21 (Conv1D) (None, 9, 128) 49280 activation_16[0][0]
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 9, 128) 512 conv1d_21[0][0]
__________________________________________________________________________________________________
activation_17 (Activation) (None, 9, 128) 0 batch_normalization_17[0][0]
__________________________________________________________________________________________________
conv1d_22 (Conv1D) (None, 9, 256) 33024 activation_17[0][0]
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 9, 256) 1024 conv1d_22[0][0]
__________________________________________________________________________________________________
activation_18 (Activation) (None, 9, 256) 0 batch_normalization_18[0][0]
__________________________________________________________________________________________________
add_6 (Add) (None, 9, 256) 0 add_5[0][0]
activation_18[0][0]
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, 9, 256) 1024 add_6[0][0]
__________________________________________________________________________________________________
activation_19 (Activation) (None, 9, 256) 0 batch_normalization_19[0][0]
__________________________________________________________________________________________________
average_pooling1d_1 (AveragePoo (None, 3, 256) 0 activation_19[0][0]
__________________________________________________________________________________________________
flatten_1 (Flatten) (None, 768) 0 average_pooling1d_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 4) 3076 flatten_1[0][0]
==================================================================================================
Total params: 328,260
Trainable params: 324,292
Non-trainable params: 3,968
__________________________________________________________________________________________________
###Markdown
Model training
###Code
# Separate data between training and testing
split = 38700
x_train, x_test = inputs[:split], inputs[split:]
y_train, y_test = out[:split], out[split:]
# his = model.fit(inputs, out, epochs=5, batch_size=16, verbose=1, shuffle=True, validation_split=0.1)
# Resnet (pre-act structure) with 34*41 columns as inputs - leaving a subset for validation
his = model.fit(x_train, y_train, epochs=5, batch_size=16, verbose=1, shuffle=True,
validation_data=(x_test, y_test))
###Output
Train on 38700 samples, validate on 4301 samples
Epoch 1/5
38700/38700 [==============================] - 74s 2ms/step - loss: 1.1764 - mean_absolute_error: 0.4641 - mean_squared_error: 0.3590 - val_loss: 1.1334 - val_mean_absolute_error: 0.4706 - val_mean_squared_error: 0.3994
Epoch 2/5
38700/38700 [==============================] - 66s 2ms/step - loss: 0.9392 - mean_absolute_error: 0.4207 - mean_squared_error: 0.3089 - val_loss: 0.9370 - val_mean_absolute_error: 0.4317 - val_mean_squared_error: 0.3372
Epoch 3/5
38700/38700 [==============================] - 66s 2ms/step - loss: 0.8221 - mean_absolute_error: 0.3955 - mean_squared_error: 0.2831 - val_loss: 0.8474 - val_mean_absolute_error: 0.4180 - val_mean_squared_error: 0.3059
Epoch 4/5
38700/38700 [==============================] - 66s 2ms/step - loss: 0.7630 - mean_absolute_error: 0.3815 - mean_squared_error: 0.2709 - val_loss: 0.7937 - val_mean_absolute_error: 0.4066 - val_mean_squared_error: 0.2880
Epoch 5/5
38700/38700 [==============================] - 67s 2ms/step - loss: 0.7271 - mean_absolute_error: 0.3722 - mean_squared_error: 0.2632 - val_loss: 0.7797 - val_mean_absolute_error: 0.4082 - val_mean_squared_error: 0.2865
###Markdown
Making predictions
###Code
preds = model.predict(x_test)
# Get angle values from sin and cos
refactor = []
for pred in preds:
angles = []
phi_sin, phi_cos, psi_sin, psi_cos = pred[0], pred[1], pred[2], pred[3]
# PHI - First and fourth quadrant
if (phi_sin>=0 and phi_cos>=0) or (phi_cos>=0 and phi_sin<=0):
angles.append(np.arctan(phi_sin/phi_cos))
# 2nd and 3rd quadrant
else:
angles.append(np.pi + np.arctan(phi_sin/phi_cos))
# PSI - First and fourth quadrant
if (psi_sin>=0 and psi_cos>=0) or (psi_cos>=0 and psi_sin<=0):
angles.append(np.arctan(psi_sin/psi_cos))
# 2nd and 3rd quadrant
else:
angles.append(np.pi + np.arctan(psi_sin/psi_cos))
refactor.append(angles)
refactor = np.array(refactor)
print(refactor.shape)
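# np.arctan2 handles all four quadrants directly and returns angles in (-pi, pi], so the
# branching above (which maps third-quadrant angles into (pi, 3*pi/2) before they are clipped
# further below) can be replaced by a one-liner; a sketch, assuming the prediction columns are
# ordered [sin(phi), cos(phi), sin(psi), cos(psi)] as constructed earlier:
refactor_alt = np.stack([np.arctan2(preds[:, 0], preds[:, 1]),
                         np.arctan2(preds[:, 2], preds[:, 3])], axis=1)
print(refactor_alt.shape)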
# Experimental debugging prints to validate the predictions
# print("PREDS: ", preds[40:50])
# print("OUT: ", out[40:50])
# print("----------------------------------------")
# print("REFACTOR: ", refactor[:10])
# print("OUTPUTS: ", outputs[:10])
# Set angle range in (-pi, pi)
refactor[refactor>np.pi] = np.pi
refactor[refactor<-np.pi] = -np.pi
plt.scatter(outputs[split:,0], outputs[split:,1], marker=".")
plt.scatter(refactor[:,0], refactor[:,1], marker=".")
plt.legend(["Truth distribution", "Predictions distribution"], loc="lower right")
plt.xlim(-np.pi, np.pi)
plt.ylim(-np.pi, np.pi)
plt.xlabel("Phi")
plt.ylabel("Psi")
plt.show()
###Output
_____no_output_____
###Markdown
Evaluate correlation between predictions and ground truth
###Code
# Calculate the Pearson coefficient between the cosines of both angles (true values and predicted ones)
cos_phi = np.corrcoef(np.cos(refactor[:,0]), np.cos(outputs[split:,0]))
cos_psi = np.corrcoef(np.cos(refactor[:,1]), np.cos(outputs[split:,1]))
print("Correlation coefficients - SOTA is Phi: 0.65 | Psi: 0.7")
print("Cos Phi: ", cos_phi[0,1])
print("Cos Psi: ", cos_psi[0,1])
model.save("resnet_1d_angles.h5")
###Output
_____no_output_____
###Markdown
Done! Loading later
###Code
# from keras.models import load_model
# model = load_model("resnet_1d_angles.h5", custom_objects={'custom_mse_mae': custom_mse_mae})
###Output
_____no_output_____ |
examples/DistinguishingNonRegularGraphs.ipynb | ###Markdown
Distinguishing non-regular graphs
There are instances of graphs that are not *k*-regular nor isomorphic and yet are not distinguishable via the message passing GNNs when their nodes have identical features. An example of such graphs is shown in the following image. In PyNeuraLogic, we are capable of distinguishing those graphs, for example, via the previously proposed model (Distinguishing K Regular Graphs example) which captures triangles of graph _a_ to distinguish between graphs.
Install PyNeuraLogic from PyPI
###Code
! pip install neuralogic
from neuralogic.nn import get_evaluator
from neuralogic.core import Backend
from neuralogic.core import Relation, Template, Var
from neuralogic.core.settings import Settings, Optimizer
from neuralogic.utils.data import Dataset
train_dataset = Dataset()
template = Template()
template.add_rules([
# Captures triangle
Relation.triangle(Var.X)[1,] <= (
Relation.edge(Var.X, Var.Y), Relation.feature(Var.Y)[1,],
Relation.edge(Var.Y, Var.Z), Relation.feature(Var.Z)[1,],
Relation.edge(Var.Z, Var.X), Relation.feature(Var.X)[1,],
),
# Captures general graph
Relation.general(Var.X)[1,] <= (Relation.edge(Var.X, Var.Y), Relation.feature(Var.Y)[1,]),
Relation.general(Var.X)[1,] <= Relation.feature(Var.Y)[1,],
Relation.predict <= Relation.general(Var.X)[1,],
Relation.predict <= Relation.triangle(Var.X)[1,],
])
# Encoding of graph a)
train_dataset.add_example(
[
Relation.edge(1, 2), Relation.edge(2, 3), Relation.edge(3, 1), Relation.edge(2, 4),
Relation.edge(4, 5), Relation.edge(5, 6), Relation.edge(6, 4),
Relation.edge(2, 1), Relation.edge(3, 2), Relation.edge(1, 3), Relation.edge(4, 2),
Relation.edge(5, 4), Relation.edge(6, 5), Relation.edge(4, 6),
Relation.feature(1), Relation.feature(2), Relation.feature(3),
Relation.feature(4), Relation.feature(5), Relation.feature(6),
],
)
# Encoding of graph b)
train_dataset.add_example(
[
Relation.edge(1, 2), Relation.edge(2, 3), Relation.edge(3, 4), Relation.edge(4, 1),
Relation.edge(2, 5), Relation.edge(5, 6), Relation.edge(6, 3),
Relation.edge(2, 1), Relation.edge(3, 2), Relation.edge(4, 3), Relation.edge(1, 4),
Relation.edge(5, 2), Relation.edge(6, 5), Relation.edge(3, 6),
Relation.feature(1), Relation.feature(2), Relation.feature(3),
Relation.feature(4), Relation.feature(5), Relation.feature(6),
],
)
train_dataset.add_queries([
Relation.predict[1],
Relation.predict[0],
])
settings = Settings(optimizer=Optimizer.SGD, epochs=200)
neuralogic_evaluator = get_evaluator(template, Backend.JAVA, settings)
for _ in neuralogic_evaluator.train(train_dataset):
pass
graphs = ["a", "b"]
for graph_id, (label, predicted) in enumerate(neuralogic_evaluator.test(train_dataset)):
print(f"Graph {graphs[graph_id]} is predicted to be class: {int(round(predicted))} | {predicted}")
###Output
Graph a is predicted to be class: 1 | 0.7534297802241336
Graph b is predicted to be class: 0 | 0.10480525876542958
###Markdown
Another interesting approach, a slightly different extension of vanilla GNNs, might be capturing sub-graphs based on the structure and the cardinality of nodes. We can add additional information about the cardinality of each node into examples, for instance, as atoms with the predicate name *cardinality* with two terms: the node id and its cardinality. We can then choose which atoms will be aggregated based on their cardinality to distinguish graph _a_ and graph *b*, as shown in Example 2, where we capture only sub-graphs of the two graphs. The `a_graph` captures a triangle (`Var.X`, `Var.Y`, `Var.Z`) connected to one node (`Var.T`) with a cardinality of three. In contrast, the `b_graph` captures a cycle of length four (`Var.X`, `Var.Y`, `Var.Z`, `Var.T`) which has to satisfy the required cardinalities.
Example 2: Distinguishing between graphs based on their cardinality
###Code
train_dataset = Dataset()
template = Template()
template.add_rules([
Relation.a_graph(Var.X) <= (
Relation.edge(Var.X, Var.Y), Relation.cardinality(Var.Y, 2)[1,],
Relation.edge(Var.Y, Var.Z), Relation.cardinality(Var.Z, 2)[1,],
Relation.edge(Var.Z, Var.X), Relation.cardinality(Var.X, 3)[1,],
Relation.edge(Var.X, Var.T), Relation.cardinality(Var.T, 3)[1,],
Relation.special.alldiff(...),
),
Relation.b_graph(Var.X) <= (
Relation.edge(Var.X, Var.Y), Relation.cardinality(Var.Y, 2)[1,],
Relation.edge(Var.Y, Var.Z), Relation.cardinality(Var.Z, 2)[1,],
Relation.edge(Var.Z, Var.T), Relation.cardinality(Var.T, 3)[1,],
Relation.edge(Var.T, Var.X), Relation.cardinality(Var.X, 3)[1,],
Relation.special.alldiff(...),
),
Relation.predict <= Relation.a_graph(Var.X)[1,],
Relation.predict <= Relation.b_graph(Var.X)[1,],
])
# Encoding of graph a)
train_dataset.add_example(
[
Relation.edge(1, 2), Relation.edge(2, 3), Relation.edge(3, 1), Relation.edge(2, 4),
Relation.edge(4, 5), Relation.edge(5, 6), Relation.edge(6, 4),
Relation.edge(2, 1), Relation.edge(3, 2), Relation.edge(1, 3), Relation.edge(4, 2),
Relation.edge(5, 4), Relation.edge(6, 5), Relation.edge(4, 6),
Relation.cardinality(1, 2), Relation.cardinality(2, 3), Relation.cardinality(3, 2),
Relation.cardinality(4, 3), Relation.cardinality(5, 2), Relation.cardinality(6, 2),
],
)
# Encoding of graph b)
train_dataset.add_example(
[
Relation.edge(1, 2), Relation.edge(2, 3), Relation.edge(3, 4), Relation.edge(4, 1),
Relation.edge(2, 5), Relation.edge(5, 6), Relation.edge(6, 3),
Relation.edge(2, 1), Relation.edge(3, 2), Relation.edge(4, 3), Relation.edge(1, 4),
Relation.edge(5, 2), Relation.edge(6, 5), Relation.edge(3, 6),
Relation.cardinality(1, 2), Relation.cardinality(2, 3), Relation.cardinality(3, 3),
Relation.cardinality(4, 2), Relation.cardinality(5, 2), Relation.cardinality(6, 2),
],
)
train_dataset.add_queries([
Relation.predict[1],
Relation.predict[0],
])
settings = Settings(optimizer=Optimizer.SGD, epochs=200)
neuralogic_evaluator = get_evaluator(template, Backend.JAVA, settings)
for _ in neuralogic_evaluator.train(train_dataset):
pass
graphs = ["a", "b"]
for graph_id, (label, predicted) in enumerate(neuralogic_evaluator.test(train_dataset)):
print(f"Graph {graphs[graph_id]} is predicted to be class: {int(round(predicted))} | {predicted}")
###Output
Graph a is predicted to be class: 1 | 0.7256464036394323
Graph b is predicted to be class: 0 | -2.9710538058333872e-09
###Markdown
The image above shows two graphs, _a_ and _b_, representing the real-world structures of two molecules, _Bicyclopentyl_ and *Decalin*, respectively. Again, the message passing GNN cannot distinguish between the graphs under the condition of identical features for all nodes. In PyNeuraLogic, we can embed, for example, the cycle of length five present in graph _a_ and thus distinguish those instances, as shown in Example 3.
Example 3: Capturing the cycle of length five
###Code
train_dataset = Dataset()
template = Template()
template.add_rules([
# Captures cycle of the length of five (Bicyclopentyl)
Relation.cycle_of_the_length_of_five(Var.X)[1,] <= (
Relation.edge(Var.X, Var.Y), Relation.feature(Var.Y)[1,],
Relation.edge(Var.Y, Var.Z), Relation.feature(Var.Z)[1,],
Relation.edge(Var.Z, Var.R), Relation.feature(Var.R)[1,],
Relation.edge(Var.R, Var.S), Relation.feature(Var.S)[1,],
Relation.edge(Var.S, Var.X), Relation.feature(Var.X)[1,],
Relation.special.alldiff(...),
),
# Captures general graph (such as Decalin)
Relation.general(Var.X)[1,] <= (Relation.edge(Var.X, Var.Y), Relation.feature(Var.Y)[1,]),
Relation.general(Var.X)[1,] <= Relation.feature(Var.Y)[1,],
Relation.predict <= Relation.general(Var.X)[1,],
Relation.predict <= Relation.cycle_of_the_length_of_five(Var.X)[1,],
])
# Encoding of graph Bicyclopentyl
train_dataset.add_example(
[
Relation.edge(1, 2), Relation.edge(2, 3), Relation.edge(3, 4), Relation.edge(4, 5), Relation.edge(5, 1), Relation.edge(1, 6),
Relation.edge(2, 1), Relation.edge(3, 2), Relation.edge(4, 3), Relation.edge(5, 4), Relation.edge(1, 5), Relation.edge(6, 1),
Relation.edge(6, 7), Relation.edge(7, 8), Relation.edge(8, 9), Relation.edge(9, 10), Relation.edge(10, 6),
Relation.edge(7, 6), Relation.edge(8, 7), Relation.edge(9, 8), Relation.edge(10, 9), Relation.edge(6, 10),
Relation.feature(1), Relation.feature(2), Relation.feature(3), Relation.feature(4), Relation.feature(5),
Relation.feature(6), Relation.feature(7), Relation.feature(8), Relation.feature(9), Relation.feature(10),
],
)
# Encoding of graph Decalin
train_dataset.add_example(
[
Relation.edge(1, 2), Relation.edge(2, 3), Relation.edge(3, 4), Relation.edge(4, 5), Relation.edge(5, 6), Relation.edge(1, 6),
Relation.edge(2, 1), Relation.edge(3, 2), Relation.edge(4, 3), Relation.edge(5, 4), Relation.edge(6, 5), Relation.edge(6, 1),
Relation.edge(6, 7), Relation.edge(7, 8), Relation.edge(8, 9), Relation.edge(9, 10), Relation.edge(10, 1),
Relation.edge(7, 6), Relation.edge(8, 7), Relation.edge(9, 8), Relation.edge(10, 9), Relation.edge(1, 10),
Relation.feature(1), Relation.feature(2), Relation.feature(3), Relation.feature(4), Relation.feature(5),
Relation.feature(6), Relation.feature(7), Relation.feature(8), Relation.feature(9), Relation.feature(10),
],
)
train_dataset.add_queries([
Relation.predict[1],
Relation.predict[0],
])
settings = Settings(optimizer=Optimizer.SGD, epochs=200)
neuralogic_evaluator = get_evaluator(template, Backend.JAVA, settings)
for _ in neuralogic_evaluator.train(train_dataset):
pass
graphs = ["Bicyclopentyl", "Decalin"]
for graph_id, (label, predicted) in enumerate(neuralogic_evaluator.test(train_dataset)):
print(f"Graph {graphs[graph_id]} is predicted to be class: {int(round(predicted))} | {predicted}")
###Output
Graph Bicyclopentyl is predicted to be class: 1 | 0.7610862875240267
Graph Decalin is predicted to be class: 0 | 0.10176012948447899
|
deeplearning1/nbs/lesson5-recreate.ipynb | ###Markdown
Get Glove Dataset
###Code
def get_glove_dataset(dataset):
md5sums = {
'6B.50d': '8e1557d1228decbda7db6dfd81cd9909',
'6B.100d': 'c92dbbeacde2b0384a43014885a60b2c',
'6B.200d': 'af271b46c04b0b2e41a84d8cd806178d',
'6B.300d': '30290210376887dcc6d0a5a6374d8255'
}
glove_path = os.path.abspath('data/glove/results')
%mkdir -p $glove_path
return get_file(
dataset,
'http://files.fast.ai/models/glove/' + dataset + '.tgz',
cache_subdir=glove_path,
md5_hash=md5sums.get(dataset, None),
untar=True
)
def load_vectors(loc):
return (load_array(loc+'.dat'),
pickle.load(open(loc+'_words.pkl', 'rb')),
pickle.load(open(loc+'_idx.pkl', 'rb')))
vecs, words, wordidx = load_vectors(get_glove_dataset('6B.100d'))
vecs[:2]
words[:10]
wordidx
def create_emb():
"""
1. create empty matrix to store pre-trained vectors
2. grab each word in our vocabulary
3. if the word has a pre-trained vector, grab it
4. otherwise, set it to a random normal vector
"""
n_factor = vecs.shape[1]
emb = np.zeros((5000, n_factor))
for i in range(1, len(emb)):
# get the actual word
word = idx2word[i]
# make sure the word exists (i.e. isn't whitespace)
# and matches the regex we deem reasonable
if word and re.match(r"^[a-zA-Z0-9\-]*$", word):
src_idx = wordidx[word]
emb[i] = vecs[src_idx, :]
else:
emb[i] = normal(scale=0.6, size=(n_factor,))
emb[-1] = normal(scale=0.6, size=(n_factor,))
# Scale the pre-trained vectors down; the division by 3 presumably brings their magnitude
# closer to the scale of a typical random weight initialisation.
emb /= 3
return emb
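# Note (assumption): the regex branch above indexes wordidx[word] directly, so a vocabulary
# word that matches the regex but is absent from the GloVe index would raise a KeyError.
# A guarded variant of that branch could look like this sketch:
# if word and re.match(r"^[a-zA-Z0-9\-]*$", word) and word in wordidx:
#     emb[i] = vecs[wordidx[word], :]
# else:
#     emb[i] = normal(scale=0.6, size=(n_factor,))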
emb = create_emb()
mdl_pt = Sequential([
Embedding(5000, 100, input_length=500, dropout=0.2,
weights=[emb], trainable=True),
Dropout(0.25),
Convolution1D(64, 5, border_mode='same', activation='relu'),
Dropout(0.25),
MaxPooling1D(),
Flatten(),
Dense(100, activation='relu'),
Dropout(0.7),
Dense(1, activation='sigmoid')])
mdl_pt.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])
mdl_pt.fit(x_train_w, label_train, batch_size=32, nb_epoch=10,
validation_data=(x_test_w, label_test))
mdl_pt.layers[0].trainable = False  # the Embedding is layers[0] in this Sequential model
mdl_pt.compile(loss='binary_crossentropy', optimizer=Adam(1e-5), metrics=['accuracy'])
mdl_pt.fit(x_train_w, label_train, batch_size=32, nb_epoch=3,
validation_data=(x_test_w, label_test))
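# Freezing the embedding layer and dropping the learning rate to 1e-5 lets the convolutional
# and dense layers keep fine-tuning while the (already adapted) word vectors stay fixed.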
inp2 = Input(shape=(500, emb.shape[1]))  # sequence length x embedding dimension
convs = []
for i in range(3,7):
x = Convolution1D(64, i, border_mode='same', activation='relu')(inp2)
x = MaxPooling1D()(x)
x = Flatten()(x)
convs.append(x)
out2 = Merge(mode="concat")(convs)
graph = Model(input=inp2, output=out2)
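# The sub-model above runs several 1D convolutions with different window sizes (3-6 words)
# in parallel over the same embedded sequence and concatenates their flattened outputs,
# similar in spirit to multi-window text CNNs; note that its Input shape has to match the
# embedding output that is fed into it below (sequence length x embedding dimension).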
inp = Input(shape=(500,))
ebd = Embedding(input_dim=emb.shape[0], output_dim=emb.shape[1], input_length=500,
dropout=0.2, weights=[emb], trainable=False)(inp)
dr = Dropout(0.25)(ebd)
gr = graph(dr)
dr2 = Dropout(0.50)(gr)
de = Dense(100, activation='relu')(dr2)
dr3 = Dropout(0.50)(de)
out = Dense(1, activation='sigmoid')(dr3)
mdl_pt = Model(input=inp, output=out)
mdl_pt.layers[1].trainable = True
mdl_pt.compile(Adam(), loss='binary_crossentropy', metrics=['accuracy'])
mdl_pt.fit(x_train_w, label_train, batch_size=32, nb_epoch=10,
validation_data=(x_test_w, label_test))
mdl_pt.fit(x_train_w, label_train, batch_size=32, nb_epoch=10,
validation_data=(x_test_w, label_test))
###Output
_____no_output_____
###Markdown
Add a little dropout
###Code
inp = Input(shape=(500,))
ebd = Embedding(input_dim=emb.shape[0], output_dim=emb.shape[1], input_length=500,
dropout=0.2, weights=[emb], trainable=False)(inp)
dr = Dropout(0.25)(ebd)
gr = graph(dr)
dr2 = Dropout(0.50)(gr)
de = Dense(100, activation='relu')(dr2)
dr3 = Dropout(0.70)(de)
out = Dense(1, activation='sigmoid')(dr3)
mdl_pt = Model(input=inp, output=out)
mdl_pt.compile(Adam(), loss='binary_crossentropy', metrics=['accuracy'])
mdl_pt.fit(x_train_w, label_train, batch_size=32, nb_epoch=10,
validation_data=(x_test_w, label_test))
###Output
_____no_output_____
###Markdown
Lower the dropout
###Code
inp = Input(shape=(500,))
ebd = Embedding(input_dim=emb.shape[0], output_dim=emb.shape[1], input_length=500,
dropout=0.2, weights=[emb], trainable=False)(inp)
dr = Dropout(0.25)(ebd)
gr = graph(dr)
dr2 = Dropout(0.20)(gr)
de = Dense(100, activation='relu')(dr2)
dr3 = Dropout(0.20)(de)
out = Dense(1, activation='sigmoid')(dr3)
mdl_pt = Model(input=inp, output=out)
mdl_pt.layers[1].trainable = True
mdl_pt.compile(Adam(), loss='binary_crossentropy', metrics=['accuracy'])
mdl_pt.fit(x_train_w, label_train, batch_size=32, nb_epoch=10,
validation_data=(x_test_w, label_test))
###Output
_____no_output_____
###Markdown
Looking better. Let's try to lower the LR and continue to train.
###Code
mdl_pt.compile(Adam(1e-3), loss='binary_crossentropy', metrics=['accuracy'])
mdl_pt.fit(x_train_w, label_train, batch_size=32, nb_epoch=10,
validation_data=(x_test_w, label_test))
###Output
_____no_output_____
###Markdown
No Dropout
###Code
inp = Input(shape=(500,))
ebd = Embedding(input_dim=emb.shape[0], output_dim=emb.shape[1], input_length=500,
dropout=0.2, weights=[emb], trainable=False)(inp)
gr = graph(ebd)
de = Dense(100, activation='relu')(gr)
out = Dense(1, activation='sigmoid')(de)
mdl_pt = Model(input=inp, output=out)
mdl_pt.compile(Adam(), loss='binary_crossentropy', metrics=['accuracy'])
mdl_pt.fit(x_train_w, label_train, batch_size=32, nb_epoch=3,
validation_data=(x_test_w, label_test))
###Output
_____no_output_____
###Markdown
Dropout somewhere in between
###Code
inp = Input(shape=(500,))
ebd = Embedding(input_dim=emb.shape[0], output_dim=emb.shape[1], input_length=500,
dropout=0.2, weights=[emb], trainable=False)(inp)
dr = Dropout(0.3)(ebd)
gr = graph(dr)
dr2 = Dropout(0.3)(gr)
de = Dense(100, activation='relu')(dr2)
dr3 = Dropout(0.3)(de)
out = Dense(1, activation='sigmoid')(dr3)
mdl_pt = Model(input=inp, output=out)
mdl_pt.layers[1].trainable = True
mdl_pt.compile(Adam(), loss='binary_crossentropy', metrics=['accuracy'])
mdl_pt.fit(x_train_w, label_train, batch_size=32, nb_epoch=3,
validation_data=(x_test_w, label_test))
###Output
_____no_output_____
###Markdown
Add more dropout by the end
###Code
inp = Input(shape=(500,))
ebd = Embedding(input_dim=emb.shape[0], output_dim=emb.shape[1], input_length=500,
dropout=0.2, weights=[emb], trainable=False)(inp)
dr = Dropout(0.3)(ebd)
gr = graph(dr)
dr2 = Dropout(0.3)(gr)
de = Dense(100, activation='relu')(dr2)
dr3 = Dropout(0.5)(de)
out = Dense(1, activation='sigmoid')(dr3)
mdl_pt = Model(input=inp, output=out)
mdl_pt.layers[1].trainable = True
mdl_pt.compile(Adam(), loss='binary_crossentropy', metrics=['accuracy'])
mdl_pt.fit(x_train_w, label_train, batch_size=32, nb_epoch=3,
validation_data=(x_test_w, label_test))
###Output
_____no_output_____
###Markdown
Remove dropout from embedding layer
###Code
inp = Input(shape=(500,))
ebd = Embedding(input_dim=emb.shape[0], output_dim=emb.shape[1], input_length=500,
weights=[emb], trainable=False)(inp)
dr = Dropout(0.3)(ebd)
gr = graph(dr)
dr2 = Dropout(0.3)(gr)
de = Dense(100, activation='relu')(dr2)
dr3 = Dropout(0.3)(de)
out = Dense(1, activation='sigmoid')(dr3)
mdl_pt = Model(input=inp, output=out)
mdl_pt.layers[1].trainable = True
mdl_pt.compile(Adam(), loss='binary_crossentropy', metrics=['accuracy'])
mdl_pt.fit(x_train_w, label_train, batch_size=32, nb_epoch=3,
validation_data=(x_test_w, label_test))
###Output
_____no_output_____
###Markdown
Now add more dropout
###Code
inp = Input(shape=(500,))
ebd = Embedding(input_dim=emb.shape[0], output_dim=emb.shape[1], input_length=500,
weights=[emb], trainable=False)(inp)
dr = Dropout(0.5)(ebd)
gr = graph(dr)
dr2 = Dropout(0.5)(gr)
de = Dense(100, activation='relu')(dr2)
dr3 = Dropout(0.5)(de)
out = Dense(1, activation='sigmoid')(dr3)
mdl_pt = Model(input=inp, output=out)
mdl_pt.layers[1].trainable = True
mdl_pt.compile(Adam(), loss='binary_crossentropy', metrics=['accuracy'])
mdl_pt.fit(x_train_w, label_train, batch_size=32, nb_epoch=3,
validation_data=(x_test_w, label_test))
###Output
_____no_output_____
###Markdown
Add even more dropout to the last layer
###Code
inp = Input(shape=(500,))
ebd = Embedding(input_dim=emb.shape[0], output_dim=emb.shape[1], input_length=500,
weights=[emb], trainable=False)(inp)
dr = Dropout(0.5)(ebd)
gr = graph(dr)
dr2 = Dropout(0.5)(gr)
de = Dense(100, activation='relu')(dr2)
dr3 = Dropout(0.7)(de)
out = Dense(1, activation='sigmoid')(dr3)
mdl_pt = Model(input=inp, output=out)
mdl_pt.layers[1].trainable = True
mdl_pt.compile(Adam(), loss='binary_crossentropy', metrics=['accuracy'])
mdl_pt.fit(x_train_w, label_train, batch_size=32, nb_epoch=3,
validation_data=(x_test_w, label_test))
###Output
_____no_output_____
###Markdown
Add Batch Normalization
###Code
inp = Input(shape=(500,))
ebd = Embedding(input_dim=emb.shape[0], output_dim=emb.shape[1], input_length=500,
weights=[emb], trainable=False)(inp)
bn = BatchNormalization()(ebd)
dr = Dropout(0.5)(bn)
gr = graph(dr)
bn2 = BatchNormalization()(gr)
dr2 = Dropout(0.5)(bn2)
de = Dense(100, activation='relu')(dr2)
bn3 = BatchNormalization()(de)
dr3 = Dropout(0.5)(bn3)
out = Dense(1, activation='sigmoid')(dr3)
mdl_pt = Model(input=inp, output=out)
mdl_pt.layers[1].trainable = True
mdl_pt.compile(Adam(), loss='binary_crossentropy', metrics=['accuracy'])
mdl_pt.fit(x_train_w, label_train, batch_size=32, nb_epoch=3,
validation_data=(x_test_w, label_test))
###Output
_____no_output_____
###Markdown
Rebuild the conv graph with lower dropout and a higher learning rate
###Code
inp2 = Input(shape=(500, emb.shape[1]))  # sequence length x embedding dimension, matching the Embedding output below
convs = []
for i in range(3,6):
x = Convolution1D(64, i, border_mode='same', activation='relu')(inp2)
x = MaxPooling1D()(x)
x = Flatten()(x)
convs.append(x)
out2 = Merge(mode="concat")(convs)
graph = Model(input=inp2, output=out2)
inp = Input(shape=(500,))
ebd = Embedding(input_dim=emb.shape[0], output_dim=emb.shape[1], input_length=500,
weights=[emb], trainable=True)(inp)
# bn = BatchNormalization()(ebd)
dr = Dropout(0.2)(ebd)
gr = graph(dr)
# bn2 = BatchNormalization()(gr)
dr2 = Dropout(0.2)(gr)
de = Dense(100, activation='relu')(dr2)
# bn3 = BatchNormalization()(de)
dr3 = Dropout(0.3)(de)
out = Dense(1, activation='sigmoid')(dr3)
mdl_pt = Model(input=inp, output=out)
mdl_pt.compile(Adam(0.01), loss='binary_crossentropy', metrics=['accuracy'])
mdl_pt.fit(x_train_w, label_train, batch_size=32, nb_epoch=3,
validation_data=(x_test_w, label_test))
mdl_pt.fit(x_train_w, label_train, batch_size=32, nb_epoch=3,
validation_data=(x_test_w, label_test))
###Output
Train on 25000 samples, validate on 25000 samples
Epoch 1/3
25000/25000 [==============================] - 28s - loss: 0.3402 - acc: 0.8621 - val_loss: 0.3851 - val_acc: 0.8691
Epoch 2/3
25000/25000 [==============================] - 28s - loss: 0.3921 - acc: 0.8386 - val_loss: 0.4317 - val_acc: 0.8310
Epoch 3/3
25000/25000 [==============================] - 28s - loss: 0.3845 - acc: 0.8423 - val_loss: 0.5495 - val_acc: 0.8563
|