path (stringlengths 7-265) | concatenated_notebook (stringlengths 46-17M)
---|---|
Smaller Projects/classification_song_genres_from_audio_data/notebook.ipynb
|
###Markdown
1. Preparing our dataset

These recommendations are so on point! How does this playlist know me so well?

Over the past few years, streaming services with huge catalogs have become the primary means through which most people listen to their favorite music. But at the same time, the sheer amount of music on offer can mean users might be a bit overwhelmed when trying to look for newer music that suits their tastes. For this reason, streaming services have looked into means of categorizing music to allow for personalized recommendations. One method involves direct analysis of the raw audio information in a given song, scoring the raw data on a variety of metrics.

Today, we'll be examining data compiled by a research group known as The Echo Nest. Our goal is to look through this dataset and classify songs as being either 'Hip-Hop' or 'Rock' - all without listening to a single one ourselves. In doing so, we will learn how to clean our data, do some exploratory data visualization, and use feature reduction towards the goal of feeding our data through some simple machine learning algorithms, such as decision trees and logistic regression.

To begin with, let's load the metadata about our tracks alongside the track metrics compiled by The Echo Nest. A song is about more than its title, artist, and number of listens. We have another dataset that has musical features of each track, such as danceability and acousticness, on a scale from -1 to 1. These exist in two different files, which are in different formats - CSV and JSON. While CSV is a popular file format for denoting tabular data, JSON is another common file format in which databases often return the results of a given query.

Let's start by creating two pandas DataFrames out of these files that we can merge so we have features and labels (often also referred to as X and y) for the classification later on.
###Code
import os
print(os.listdir("datasets/"))
# print(os.popen("wc -l datasets/fma-rock-vs-hiphop.csv").read())
import pandas as pd
# Read in track metadata with genre labels
tracks = pd.read_csv('datasets/fma-rock-vs-hiphop.csv')
# Read in track metrics with the features
echonest_metrics = pd.read_json('datasets/echonest-metrics.json', precise_float=True)
# Merge the relevant columns of tracks and echonest_metrics
echo_tracks = pd.merge(echonest_metrics, tracks[['track_id','genre_top']], on='track_id')
# Inspect the resultant dataframe
print(echo_tracks.info())
# display(tracks.head(2))
# display(echonest_metrics.head(2))
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 4802 entries, 0 to 4801
Data columns (total 10 columns):
acousticness 4802 non-null float64
danceability 4802 non-null float64
energy 4802 non-null float64
instrumentalness 4802 non-null float64
liveness 4802 non-null float64
speechiness 4802 non-null float64
tempo 4802 non-null float64
track_id 4802 non-null int64
valence 4802 non-null float64
genre_top 4802 non-null object
dtypes: float64(8), int64(1), object(1)
memory usage: 412.7+ KB
None
###Markdown
2. Pairwise relationships between continuous variables

We typically want to avoid using variables that have strong correlations with each other -- hence avoiding feature redundancy -- for a few reasons:

* To keep the model simple and improve interpretability (with many features, we run the risk of overfitting).
* When our datasets are very large, using fewer features can drastically speed up our computation time.

To get a sense of whether there are any strongly correlated features in our data, we will use built-in functions in the pandas package.
###Code
# Create a correlation matrix
corr_metrics = echo_tracks.corr()
corr_metrics.style.background_gradient()
###Output
_____no_output_____
###Markdown
3. Normalizing the feature data

As mentioned earlier, it can be particularly useful to simplify our models and use as few features as necessary to achieve the best result. Since we didn't find any particularly strong correlations between our features, we can instead use a common approach to reduce the number of features called principal component analysis (PCA).

It is possible that the variance between genres can be explained by just a few features in the dataset. PCA rotates the data along the axis of highest variance, thus allowing us to determine the relative contribution of each feature of our data towards the variance between classes. However, since PCA uses the absolute variance of a feature to rotate the data, a feature with a broader range of values will overpower and bias the algorithm relative to the other features. To avoid this, we must first normalize our data. There are a few methods to do this, but a common way is through standardization, such that all features have a mean = 0 and standard deviation = 1 (the resulting values are z-scores).
###Code
# Define our features
features = echo_tracks.drop(['track_id', 'genre_top'], axis=1)
# Define our labels
labels = echo_tracks.genre_top
# Import the StandardScaler
from sklearn.preprocessing import StandardScaler
# Scale the features and set the values to a new variable
scaler = StandardScaler()
scaled_train_features = scaler.fit_transform(features)
###Output
_____no_output_____
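###Markdown
As a quick optional sanity check (a sketch, not required for the analysis), the scaled output should match a manually computed z-score using the population standard deviation:
###Code
# Optional sanity check: StandardScaler should match a manual z-score
# (StandardScaler uses the population standard deviation, i.e. ddof=0)
import numpy as np
manual_z = (features - features.mean()) / features.std(ddof=0)
print(np.allclose(scaled_train_features, manual_z.values))
###Output
_____no_output_____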
###Markdown
4. Principal Component Analysis on our scaled data

Now that we have preprocessed our data, we are ready to use PCA to determine by how much we can reduce the dimensionality of our data. We can use scree-plots and cumulative explained ratio plots to find the number of components to use in further analyses.

Scree-plots display the number of components against the variance explained by each component, sorted in descending order of variance. Scree-plots help us get a better sense of which components explain a sufficient amount of variance in our data. When using scree plots, an 'elbow' (a steep drop from one data point to the next) in the plot is typically used to decide on an appropriate cutoff.
###Code
# This is just to make plots appear in the notebook
%matplotlib inline
import numpy as np
# Import our plotting module, and PCA class
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
# Get our explained variance ratios from PCA using all features
pca = PCA()
pca.fit(scaled_train_features)
exp_variance = pca.explained_variance_ratio_
# plot the explained variance using a barplot
fig, ax = plt.subplots()
ax.bar(np.arange(len(exp_variance)), exp_variance)
ax.set_xlabel('Principal Component #')
###Output
_____no_output_____
###Markdown
5. Further visualization of PCA

Unfortunately, there does not appear to be a clear elbow in this scree plot, which means it is not straightforward to find the number of intrinsic dimensions using this method. But all is not lost! Instead, we can also look at the cumulative explained variance plot to determine how many features are required to explain, say, about 90% of the variance (cutoffs are somewhat arbitrary here, and usually decided upon by 'rules of thumb'). Once we determine the appropriate number of components, we can perform PCA with that many components, ideally reducing the dimensionality of our data.
###Code
# Import numpy
import numpy as np
# Calculate the cumulative explained variance
cum_exp_variance = np.cumsum(exp_variance)
# Plot the cumulative explained variance and draw a dashed line at 0.90.
fig, ax = plt.subplots()
ax.plot(cum_exp_variance)
ax.axhline(y=0.9, linestyle='--')
n_components = 6
# Perform PCA with the chosen number of components and project data onto components
pca = PCA(n_components, random_state=10)
pca.fit(scaled_train_features)
pca_projection = pca.transform(scaled_train_features)
###Output
_____no_output_____
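###Markdown
Here 6 components were chosen by reading the cumulative plot; equivalently, the cutoff can be computed directly from `cum_exp_variance` (a small sketch):
###Code
# Smallest number of components whose cumulative explained variance reaches 90%
n_components_90 = int(np.argmax(cum_exp_variance >= 0.9)) + 1
print(n_components_90)
###Output
_____no_output_____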
###Markdown
6. Train a decision tree to classify genre

Now we can use the lower dimensional PCA projection of the data to classify songs into genres. To do that, we first need to split our dataset into 'train' and 'test' subsets, where the 'train' subset will be used to train our model while the 'test' dataset allows for model performance validation.

Here, we will be using a simple algorithm known as a decision tree. Decision trees are rule-based classifiers that take in features and follow a 'tree structure' of binary decisions to ultimately classify a data point into one of two or more categories. In addition to being easy to both use and interpret, decision trees allow us to visualize the 'logic flowchart' that the model generates from the training data.

Here is an example of a decision tree that demonstrates the process by which an input image (in this case, of a shape) might be classified based on the number of sides it has and whether it is rotated.
###Code
# Import train_test_split function and Decision tree classifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import LabelEncoder
# Split our data into train and test sets (test_size defaults to 0.25); random_state makes the split reproducible
train_features, test_features, train_labels, test_labels = train_test_split(pca_projection, labels, random_state=10)
# Train our decision tree
tree = DecisionTreeClassifier(random_state=10)
tree.fit(train_features, train_labels)
# Predict the labels for the test data
pred_labels_tree = tree.predict(test_features)
###Output
_____no_output_____
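###Markdown
Since the description above mentions visualizing the tree's 'logic flowchart', here is an optional sketch (assuming scikit-learn >= 0.21) that prints the learned decision rules as text:
###Code
# Optional: print the learned decision rules (the tree can be deep, so limit the depth shown)
from sklearn.tree import export_text
pc_names = ["PC{}".format(i + 1) for i in range(n_components)]
print(export_text(tree, feature_names=pc_names, max_depth=3))
###Output
_____no_output_____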
###Markdown
7. Compare our decision tree to a logistic regression

Although our tree's performance is decent, it's a bad idea to immediately assume that it's therefore the perfect tool for this job -- there's always the possibility of other models that will perform even better! It's always a worthwhile idea to at least test a few other algorithms and find the one that's best for our data.

Sometimes simplest is best, and so we will start by applying logistic regression. Logistic regression makes use of what's called the logistic function to calculate the odds that a given data point belongs to a given class. Once we have both models, we can compare them on a few performance metrics, such as false positive and false negative rate (or how many points are inaccurately classified).
###Code
# Import LogisticRegression
from sklearn.linear_model import LogisticRegression
# Train our logistic regression and predict labels for the test set
logreg = LogisticRegression(random_state=10)
logreg.fit(train_features, train_labels)
pred_labels_logit = logreg.predict(test_features)
# Create the classification report for both models
from sklearn.metrics import classification_report
class_rep_tree = classification_report(test_labels, pred_labels_tree)
class_rep_log = classification_report(test_labels, pred_labels_logit)
print("Decision Tree: \n", class_rep_tree)
print("Logistic Regression: \n", class_rep_log)
###Output
Decision Tree:
precision recall f1-score support
Hip-Hop 0.66 0.66 0.66 229
Rock 0.92 0.92 0.92 972
avg / total 0.87 0.87 0.87 1201
Logistic Regression:
precision recall f1-score support
Hip-Hop 0.75 0.57 0.65 229
Rock 0.90 0.95 0.93 972
avg / total 0.87 0.88 0.87 1201
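###Markdown
The report above gives precision and recall; to see the false positive and false negative counts mentioned earlier explicitly, a confusion matrix can be printed as well (optional sketch):
###Code
# Rows are true classes, columns are predicted classes
from sklearn.metrics import confusion_matrix
label_order = ['Hip-Hop', 'Rock']
print("Decision Tree:\n", confusion_matrix(test_labels, pred_labels_tree, labels=label_order))
print("Logistic Regression:\n", confusion_matrix(test_labels, pred_labels_logit, labels=label_order))
###Output
_____no_output_____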
###Markdown
8. Balance our data for greater performance

Both our models do similarly well, boasting an average precision of 87% each. However, looking at our classification report, we can see that rock songs are fairly well classified, but hip-hop songs are disproportionately misclassified as rock songs.

Why might this be the case? Well, just by looking at the number of data points we have for each class, we see that we have far more data points for the rock classification than for hip-hop, potentially skewing our model's ability to distinguish between classes. This also tells us that most of our model's accuracy is driven by its ability to classify just rock songs, which is less than ideal.

To account for this, we can weight the value of a correct classification in each class inversely to the occurrence of data points for each class. Since a correct classification for "Rock" is not more important than a correct classification for "Hip-Hop" (and vice versa), we only need to account for differences in sample size of our data points when weighting our classes here, and not relative importance of each class.
###Code
# Subset only the hip-hop tracks, and then only the rock tracks
hop_only = echo_tracks.loc[echo_tracks['genre_top'] == 'Hip-Hop']
rock_only = echo_tracks.loc[echo_tracks['genre_top'] == 'Rock']
# Sample the rock songs so there are the same number as there are hip-hop songs
rock_only = rock_only.sample(hop_only.shape[0], random_state=10)
# concatenate the dataframes rock_only and hop_only
rock_hop_bal = pd.concat([rock_only,hop_only], axis=0)
# The features, labels, and pca projection are created for the balanced dataframe
features = rock_hop_bal.drop(['genre_top', 'track_id'], axis=1)
labels = rock_hop_bal['genre_top']
pca_projection = pca.fit_transform(scaler.fit_transform(features))
# Redefine the train and test set with the pca_projection from the balanced data
train_features, test_features, train_labels, test_labels = train_test_split(pca_projection, labels, random_state=10)
###Output
_____no_output_____
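###Markdown
Note that the text above describes weighting classes inversely to their frequency, while the code balances by downsampling the rock songs. A sketch of the weighting alternative (not used in the rest of this notebook) would keep all the data and pass `class_weight='balanced'` to the classifiers:
###Code
# Alternative to downsampling (sketch only): let the models reweight classes
# inversely proportional to their frequencies in the training data
tree_weighted = DecisionTreeClassifier(random_state=10, class_weight='balanced')
logreg_weighted = LogisticRegression(random_state=10, class_weight='balanced')
###Output
_____no_output_____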
###Markdown
9. Does balancing our dataset improve model bias?

We've now balanced our dataset, but in doing so, we've removed a lot of data points that might have been crucial to training our models. Let's test to see if balancing our data improves model bias towards the "Rock" classification while retaining overall classification performance. Note that we have already reduced the size of our dataset and will go forward without applying any dimensionality reduction. In practice, we would consider dimensionality reduction more rigorously when dealing with very large datasets and when computation times become prohibitively large.
###Code
# Train our decision tree on the balanced data
tree = DecisionTreeClassifier(random_state=10)
tree.fit(train_features, train_labels)
pred_labels_tree = tree.predict(test_features)
# Train our logistic regression on the balanced data
logreg = LogisticRegression(random_state=10)
logreg.fit(train_features, train_labels)
pred_labels_logit = logreg.predict(test_features)
# Compare the models
print("Decision Tree: \n", classification_report(test_labels, pred_labels_tree))
print("Logistic Regression: \n", classification_report(test_labels, pred_labels_logit))
###Output
Decision Tree:
precision recall f1-score support
Hip-Hop 0.77 0.77 0.77 230
Rock 0.76 0.76 0.76 225
avg / total 0.76 0.76 0.76 455
Logistic Regression:
precision recall f1-score support
Hip-Hop 0.82 0.83 0.82 230
Rock 0.82 0.81 0.82 225
avg / total 0.82 0.82 0.82 455
###Markdown
10. Using cross-validation to evaluate our models

Success! Balancing our data has removed bias towards the more prevalent class. To get a good sense of how well our models are actually performing, we can apply what's called cross-validation (CV). This step allows us to compare models in a more rigorous fashion.

Since the way our data is split into train and test sets can impact model performance, CV attempts to split the data multiple ways and test the model on each of the splits. Although there are many different CV methods, all with their own advantages and disadvantages, we will use what's known as K-fold CV here. K-fold first splits the data into K different, equally sized subsets. Then, it iteratively uses each subset as a test set while using the remainder of the data as the train set. Finally, we can aggregate the results from each fold for a final model performance score.
###Code
from sklearn.model_selection import KFold, cross_val_score
# Set up our K-fold cross-validation
kf = KFold(n_splits=10, random_state=10)
tree = DecisionTreeClassifier(random_state=10)
logreg = LogisticRegression(random_state=10)
# Train our models using KFold cv
tree_score = cross_val_score(tree, pca_projection, labels, cv=kf)
logit_score = cross_val_score(logreg, pca_projection, labels, cv=kf)
# Print the mean of each array of scores
print("Decision Tree:", np.mean(tree_score), "Logistic Regression:", np.mean(logit_score))
###Output
Decision Tree: 0.7241758241758242 Logistic Regression: 0.7752747252747252
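###Markdown
Roughly, `cross_val_score` is doing the following under the hood: for each of the K folds, fit on the other K-1 folds and score on the held-out fold, then average the fold scores (a sketch for the decision tree):
###Code
# Manual K-fold loop, roughly equivalent to cross_val_score for the decision tree
fold_scores = []
for train_idx, test_idx in kf.split(pca_projection):
    tree.fit(pca_projection[train_idx], labels.iloc[train_idx])
    fold_scores.append(tree.score(pca_projection[test_idx], labels.iloc[test_idx]))
print(np.mean(fold_scores))
###Output
_____no_output_____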
|
notebooks/utility_notebooks/polygons_to_points.ipynb
|
###Markdown
Create a point dataset from a polygon dataset

Each point is placed at the centroid of the input polygon. We can subsample to select a subset of polygons.
###Code
import geopandas as gpd
input_file = '../../data/MinesNeg_caleb_selection.geojson'
df = gpd.read_file(input_file)
df['geometry'] = [poly.centroid for poly in df['geometry']]
df['id'] = range(len(df))
subsample = 1
df = df.iloc[range(0,len(df), subsample)]
print(len(df))
df.to_file(f"../../data/sampling_locations/{input_file.split('/')[-1].split('.')[0]}_subsample_{subsample}_negatives.geojson", driver='GeoJSON')
df
###Output
_____no_output_____
|
Code/Assignment-5/PCA.ipynb
|
###Markdown
PCA Utilities
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
# Plot explained variance ratio
def plot(ex_var_ratio):
plt.plot(ex_var_ratio)
plt.ylabel('Explained Variance Ratio')
plt.xlabel('Number of Principal Components')
def pca(X, n):
pca = PCA(n_components=n)
pca_X = pca.fit_transform(X)
print '\nExplained Variance Ratios:'
print pca.explained_variance_ratio_
print '\nSum of Explained Variance Ratios:',
print np.sum(pca.explained_variance_ratio_)
return pca_X, pca.explained_variance_ratio_
from sklearn.decomposition import SparsePCA
# Compute explained variance ratio of transformed data
def compute_explained_variance_ratio(transformed_data):
explained_variance = np.var(transformed_data, axis=0)
explained_variance_ratio = explained_variance / np.sum(explained_variance)
explained_variance_ratio = np.sort(explained_variance_ratio)[::-1]
return explained_variance_ratio
def sparse_pca(X, n):
spca = SparsePCA(n_components=n)
spca_transform = spca.fit_transform(X)
ex_var_ratio = compute_explained_variance_ratio(spca_transform)
return spca_transform, ex_var_ratio
from sklearn.decomposition import KernelPCA
def kernel_pca(X):
kpca = KernelPCA()
kpca_transform = kpca.fit_transform(X)
ex_var_ratio = compute_explained_variance_ratio(kpca_transform)
return kpca_transform, ex_var_ratio
###Output
_____no_output_____
###Markdown
Baseline & Concentration Data
###Code
pca_base_concen, ex_var_ratio = pca(df_base_concen.get_values(), 3)
plot(ex_var_ratio)
###Output
Explained Variance Ratios:
[ 0.9577978 0.03984365 0.00149917]
Sum of Explained Variance Ratios: 0.999140620396
###Markdown
Disorder data
###Code
# Keep 10 components for disorder data
spca_disorders, ex_var_ratio = sparse_pca(df_disorders.get_values(), 10)
plot(ex_var_ratio)
###Output
/Library/Python/2.7/site-packages/sklearn/linear_model/least_angle.py:162: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
elif Gram == 'auto':
###Markdown
Questionnaire Data
###Code
# Keep 10 components for questionnaire data
spca_questionnaire, ex_var_ratio = sparse_pca(df_questionnaire.get_values(), 10)
plot(ex_var_ratio)
# Put everything together
import pandas as pd
df_base_concen = pd.DataFrame(pca_base_concen)
df_disorders = pd.DataFrame(spca_disorders)
df_questionnaire = pd.DataFrame(spca_questionnaire)
df = pd.concat([df_patient, df_base_concen, df_disorders, df_questionnaire], axis=1)
print 'Reduced data size is', df.shape
# Have a look at the distribution of reduced data
df.plot(kind='hist', alpha=0.5, legend=None, title='After Dimension Reduction')
# Save reduced features to file
df.to_csv('reduced_data.csv', index=False)
###Output
_____no_output_____
|
Train_CNN_Analog-Readout_Version-Small1-very-long.ipynb
|
###Markdown
CNN Training

The target of this code is to train a CNN network to extract the needle position of an analog needle device.

Preparing the training
* First, all libraries are loaded
* It is assumed that they were installed during the Python setup
* matplotlib is set to print the output inline in the Jupyter notebook
###Code
########### Basic Parameters for Running: ################################
TFliteNamingAndVersion = "ana0910s1_long" # Used for tflite Filename
Training_Percentage = 0.2 # 0.0 = Use all Images for Training
Epoch_Anz = 100
##########################################################################
import os
import tensorflow as tf
import matplotlib.pyplot as plt
import glob
import numpy as np
from sklearn.utils import shuffle
from tensorflow.python import keras
from tensorflow.python.keras import Sequential
from tensorflow.python.keras.layers import Dense, InputLayer, Conv2D, MaxPool2D, Flatten, BatchNormalization
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.model_selection import train_test_split
import tensorflow.keras.backend as K
from tensorflow.keras.callbacks import History
import math
from PIL import Image
loss_ges = np.array([])
val_loss_ges = np.array([])
%matplotlib inline
np.set_printoptions(precision=4)
np.set_printoptions(suppress=True)
###Output
_____no_output_____
###Markdown
Load training data

* The data is expected in the "Input_dir"
* Picture size must be 32x32 with 3 color channels (RGB)
* The filename contains the information needed for training in the first 3 digits:
* Typical filename:
  * x.y-zzzz.jpg
  * e.g. "4.6_Lfd-1406_zeiger3_2019-06-02T050011"

| Placeholder | Meaning | Usage |
|-------------|-----------------------------|--------------|
| **x.y** | readout value | **to be learned** |
| zzzz | additional information | not needed |

* The images are stored in x_data[]
* The expected output for each image is stored in the corresponding y_data[]
* The periodic nature is reflected in a **sin/cos coding**, which allows the angle/counter value to be restored with an arctan later on (a small round-trip check is shown after the code below)
* The last step is a shuffle (from sklearn.utils), as the filenames are in order due to the encoding of the expected analog readout in the filename
###Code
Input_dir='data_resize_all'
files = glob.glob(Input_dir + '/*.*')
x_data = []
y_data = []
for aktfile in files:
test_image = Image.open(aktfile)
test_image = np.array(test_image, dtype="float32")
test_image = np.reshape(test_image, (32,32,3))
base = os.path.basename(aktfile)
target_number = (float(base[0:3])) / 10
target_sin = math.sin(target_number * math.pi * 2)
target_cos = math.cos(target_number * math.pi * 2)
x_data.append(test_image)
zw = np.array([target_sin, target_cos])
y_data.append(zw)
x_data = np.array(x_data)
y_data = np.array(y_data)
print(x_data.shape)
print(y_data.shape)
x_data, y_data = shuffle(x_data, y_data)
if (Training_Percentage > 0):
X_train, X_test, y_train, y_test = train_test_split(x_data, y_data, test_size=Training_Percentage)
else:
X_train = x_data
y_train = y_data
###Output
(5127, 32, 32, 3)
(5127, 2)
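###Markdown
A small round-trip check of the sin/cos encoding (optional sketch): a readout value in [0, 1) can be recovered from its (sin, cos) pair with atan2, modulo the period.
###Code
# Encode an example readout value and recover it with atan2
x = 0.37  # example value in [0, 1), i.e. readout / 10
s = math.sin(x * math.pi * 2)
c = math.cos(x * math.pi * 2)
x_recovered = (math.atan2(s, c) / (2 * math.pi)) % 1
print(x, round(x_recovered, 6))
###Output
_____no_output_____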
###Markdown
Define the model

The layout of the network is a typical CNN with alternating **Conv2D** and **MaxPool2D** layers, finished after **flattening** with additional **Dense** layers.

Important
* Shape of the input layer: (32, 32, 3)
* Shape of the output layer: (2) - sin and cos
###Code
model = Sequential()
model.add(BatchNormalization(input_shape=(32,32,3)))
model.add(Conv2D(32, (5, 5), input_shape=(32,32,3), padding='same', activation="relu"))
model.add(MaxPool2D(pool_size=(4,4)))
model.add(Conv2D(32, (5, 5), padding='same', activation="relu"))
model.add(MaxPool2D(pool_size=(4,4)))
model.add(Conv2D(32, (3, 3), padding='same', activation="relu"))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(128,activation="relu"))
model.add(Dense(64,activation="relu"))
model.add(Dense(2))
model.summary()
model.compile(loss=keras.losses.mean_squared_error, optimizer=tf.keras.optimizers.Adadelta(learning_rate=1.0, rho=0.95), metrics = ["accuracy"])
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
batch_normalization (BatchNo (None, 32, 32, 3) 12
_________________________________________________________________
conv2d (Conv2D) (None, 32, 32, 32) 2432
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 8, 8, 32) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 8, 8, 32) 25632
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 2, 2, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 2, 2, 32) 9248
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 1, 1, 32) 0
_________________________________________________________________
flatten (Flatten) (None, 32) 0
_________________________________________________________________
dense (Dense) (None, 128) 4224
_________________________________________________________________
dense_1 (Dense) (None, 64) 8256
_________________________________________________________________
dense_2 (Dense) (None, 2) 130
=================================================================
Total params: 49,934
Trainable params: 49,928
Non-trainable params: 6
_________________________________________________________________
###Markdown
Training

The input pictures are randomly scattered for brightness and pixel shift variations. This is implemented with an ImageDataGenerator. The training is split into two steps:
1. Variation of the brightness only
2. Variation of brightness and pixel shift

Step 1: Brightness scattering only
###Code
Batch_Size = 8
Epoch_Anz = 100
Shift_Range = 0
Brightness_Range = 0.3
datagen = ImageDataGenerator(width_shift_range=[-Shift_Range,Shift_Range], height_shift_range=[-Shift_Range,Shift_Range],brightness_range=[1-Brightness_Range,1+Brightness_Range])
if (Training_Percentage > 0):
train_iterator = datagen.flow(x_data, y_data, batch_size=Batch_Size)
validation_iterator = datagen.flow(X_test, y_test, batch_size=Batch_Size)
history = model.fit_generator(train_iterator, validation_data = validation_iterator, epochs = Epoch_Anz)
else:
train_iterator = datagen.flow(x_data, y_data, batch_size=Batch_Size)
history = model.fit_generator(train_iterator, epochs = Epoch_Anz)
###Output
Epoch 1/100
###Markdown
Step 1: Learning result
* Visualization of the training and validation results
###Code
loss_ges = np.append(loss_ges, history.history['loss'])
if (Training_Percentage > 0):
val_loss_ges = np.append(val_loss_ges, history.history['val_loss'])
plt.semilogy(val_loss_ges)
plt.semilogy(history.history['loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train','eval'], loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Step 2: Brightness and pixel shift scattering

Here a higher number of epochs is used to reach the minimum of the loss function.
###Code
Batch_Size = 8
Epoch_Anz = 400
Shift_Range = 3
Brightness_Range = 0.3
datagen = ImageDataGenerator(width_shift_range=[-Shift_Range,Shift_Range], height_shift_range=[-Shift_Range,Shift_Range],brightness_range=[1-Brightness_Range,1+Brightness_Range])
if (Training_Percentage > 0):
train_iterator = datagen.flow(x_data, y_data, batch_size=Batch_Size)
validation_iterator = datagen.flow(X_test, y_test, batch_size=Batch_Size)
history = model.fit_generator(train_iterator, validation_data = validation_iterator, epochs = Epoch_Anz)
else:
train_iterator = datagen.flow(x_data, y_data, batch_size=Batch_Size)
history = model.fit_generator(train_iterator, epochs = Epoch_Anz)
###Output
C:\Users\Muell\anaconda3\envs\anaconda-win11-env\lib\site-packages\tensorflow\python\keras\engine\training.py:1844: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators.
warnings.warn('`Model.fit_generator` is deprecated and '
###Markdown
Overall Learning results (Step 1 & Step 2)
###Code
loss_ges = np.append(loss_ges, history.history['loss'])
if (Training_Percentage > 0):
val_loss_ges = np.append(val_loss_ges, history.history['val_loss'])
plt.semilogy(val_loss_ges)
plt.semilogy(loss_ges)
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train','eval'], loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Check the model by hand
* The following code uses the trained model to check the deviation for each picture.
* The evaluation takes the periodic character of the results into account (dev1 ... dev2).
* Images that have a bigger deviation than the parameter "deviation_max_list" are printed in a list, so the picture and its labeling can be checked.
###Code
Input_dir='data_resize_all'
#Input_dir='test_result'
files = glob.glob(Input_dir + '/*.*')
res = []
stat_Anz = []
stat_Abweichung = []
i = 0
deviation_max_list = 0.05
for i in range(100):
stat_Anz.append(0)
stat_Abweichung.append(0)
for aktfile in files:
base = os.path.basename(aktfile)
target = (float(base[0:3])) / 10
target_sin = math.sin(target * math.pi * 2)
target_cos = math.cos(target * math.pi * 2)
test_image = Image.open(aktfile)
test_image = np.array(test_image, dtype="float32")
img = np.reshape(test_image,[1,32,32,3])
classes = model.predict(img)
out_sin = classes[0][0]
out_cos = classes[0][1]
out_target = (np.arctan2(out_sin, out_cos)/(2*math.pi)) % 1
dev_sin = target_sin - out_sin
dev_cos = target_cos - out_cos
dev_target = target - out_target
if abs(dev_target + 1) < abs(dev_target):
out_target = out_target - 1
dev_target = target - out_target
else:
if abs(dev_target - 1) < abs(dev_target):
out_target = out_target + 1
dev_target = target - out_target
target_int = int ((float(base[0:3])) * 10)
stat_Abweichung[target_int] = stat_Abweichung[target_int] + dev_target
stat_Anz[target_int] = stat_Anz[target_int] + 1
res.append(np.array([target, out_target, dev_target, out_sin, out_cos, i]))
if abs(dev_target) > deviation_max_list:
print(aktfile + " " + str(target) + " " + str(out_target) + " " + str(dev_target))
for i in range(100):
stat_Abweichung[i] = stat_Abweichung[i] / stat_Anz[i]
res = np.asarray(res)
res_step_1 = res
###Output
data_resize_all\6.1_3006_zeiger1_2020-04-29_11-47-02.jpg 0.61 0.7008715427625808 -0.0908715427625808
data_resize_all\6.2_3048_zeiger1_2020-04-29_11-48-02.jpg 0.62 0.7157019704529248 -0.09570197045292483
data_resize_all\6.3_3120_zeiger1_2020-04-29_13-06-02.jpg 0.63 0.7676629191550123 -0.13766291915501228
data_resize_all\6.4_3194_zeiger1_2020-04-29_14-27-02.jpg 0.64 0.7214565945561375 -0.08145659455613752
data_resize_all\6.5_3121_zeiger1_2020-04-29_13-07-02.jpg 0.65 0.7491460865876307 -0.09914608658763069
data_resize_all\6.5_3271_zeiger1_2020-04-29_11-49-17.jpg 0.65 0.7030573171334711 -0.05305731713347106
data_resize_all\6.6_3288_zeiger1_2020-04-29_14-28-02.jpg 0.6599999999999999 0.7260966835687857 -0.06609668356878573
data_resize_all\6.7_3341_zeiger1_2020-04-29_14-29-02.jpg 0.67 0.7322627317057532 -0.06226273170575314
data_resize_all\7.6_3866_zeiger3_2020-04-29_13-31-02.jpg 0.76 0.6963659136128434 0.06363408638715662
data_resize_all\7.7_3914_zeiger3_2020-04-29_12-09-01.jpg 0.77 0.7162816827587297 0.0537183172412703
data_resize_all\7.8_3943_zeiger3_2020-04-29_12-05-02.jpg 0.78 0.729442603515698 0.05055739648430202
data_resize_all\7.8_3944_zeiger3_2020-04-29_13-52-02.jpg 0.78 0.7276772083154743 0.052322791684525694
###Markdown
Results
###Code
plt.plot(res[:,3])
plt.plot(res[:,4])
plt.title('Result')
plt.ylabel('value')
plt.xlabel('#Picture')
plt.legend(['sin', 'cos'], loc='lower left')
plt.show()
plt.plot(res[:,0])
plt.plot(res[:,1])
plt.title('Result')
plt.ylabel('Counter Value')
plt.xlabel('#Picture')
plt.legend(['Original', 'Prediction'], loc='upper left')
plt.show()
plt.plot(stat_Abweichung)
plt.title('Mean Deviation per Readout Class')
plt.ylabel('Mean deviation')
plt.xlabel('Readout value * 10')
plt.show()
plt.plot(stat_Anz)
plt.title('Number of Images per Readout Class')
plt.ylabel('Count')
plt.xlabel('Readout value * 10')
plt.show()
###Output
_____no_output_____
###Markdown
Deviation from Expected Value
###Code
plt.plot(res[:,2])
plt.title('Deviation')
plt.ylabel('Deviation from expected value')
plt.xlabel('#Picture')
plt.legend(['Deviation'], loc='upper left')
#plt.ylim(-0.3, 0.3)
plt.show()
statistic = np.array([np.mean(res[:,2]), np.std(res[:,2]), np.min(res[:,2]), np.max(res[:,2])])
print(statistic)
###Output
_____no_output_____
###Markdown
Save the model
* Convert the trained Keras model and save it in the TFLite file format (a plain and a quantized version)
###Code
FileName = TFliteNamingAndVersion
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open(FileName + ".tflite", "wb").write(tflite_model)
from pathlib import Path
import tensorflow as tf
FileName = TFliteNamingAndVersion + "q" + ".tflite"
def representative_dataset():
    # Yield a sample of real training images so the converter can calibrate the quantization ranges
    for n in range(min(len(x_data), 100)):
        data = np.expand_dims(x_data[n], axis=0)
        yield [data.astype(np.float32)]
converter2 = tf.lite.TFLiteConverter.from_keras_model(model)
converter2.optimizations = [tf.lite.Optimize.DEFAULT]
converter2.representative_dataset = representative_dataset
tflite_quant_model = converter2.convert()
open(FileName, "wb").write(tflite_quant_model)
print(FileName)
Path(FileName).stat().st_size
###Output
INFO:tensorflow:Assets written to: C:\Users\Muell\AppData\Local\Temp\tmpubkfhhld\assets
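###Markdown
As an optional sanity check (a sketch, assuming the quantized model keeps float32 input and output tensors), the saved TFLite file can be loaded with the TFLite interpreter and run on a single training image:
###Code
# Load the quantized TFLite model and run one inference as a smoke test
interpreter = tf.lite.Interpreter(model_path=FileName)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
sample = np.expand_dims(x_data[0], axis=0).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], sample)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))
###Output
_____no_output_____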
|
Guns/Guns - Data collection & clean-up.ipynb
|
###Markdown
Collecting data
###Code
import pandas as pd
import numpy as np
import re
page = 'https://en.wikipedia.org/wiki/List_of_countries_by_firearm-related_death_rate'
wiki_data = pd.read_html(page,header=0)
print(type(wiki_data))
print(len(wiki_data))
print(type(wiki_data[0]))
print(wiki_data[0][0:5])
print(type(wiki_data[1]))
print(wiki_data[1][0:5])
print(type(wiki_data[2]))
print(wiki_data[2][0:5])
print(type(wiki_data[3]))
print(wiki_data[3][0:5])
###Output
<class 'list'>
4
<class 'pandas.core.frame.DataFrame'>
Empty DataFrame
Columns: [Unnamed: 0, This article needs to be updated. Please update this article to reflect recent events or newly available information. (April 2017)]
Index: []
<class 'pandas.core.frame.DataFrame'>
Empty DataFrame
Columns: [Unnamed: 0, This article relies largely or entirely on a single source. Relevant discussion may be found on the talk page. Please help improve this article by introducing citations to additional sources. (June 2017)]
Index: []
<class 'pandas.core.frame.DataFrame'>
Country Total Method of Calculation Homicides \
0 Argentina ! Argentina 6.36 (2009) 2.58 (2012)
1 Australia ! Australia 0.93 (2013) 0.16 (2013)
2 Austria ! Austria 2.63 (2011) 0.10 (2011)
3 Azerbaijan ! Azerbaijan 0.30 (incomplete) 0.27 (2010)
4 Barbados ! Barbados 3.12 (incomplete) 3.12 (2013)
Suicides Unintentional Undetermined Sources and notes \
0 1.57 (2009) 0.05 (2009) 2.57 (2009) Guns in Argentina[1][2]
1 0.74 (2013) 0.02 (2013) 0.02 (2013) Guns in Australia[3]
2 2.43 (2011) 0.01 (2009) 0.04 (2011) Guns in Austria[4]
3 0.01 (2007) 0.02 (2007) unavailable Guns in Azerbaijan[5]
4 unavailable unavailable unavailable Guns in Barbados[6]
Guns per 100 hab
0 10.2
1 21.7
2 30.4
3 3.5
4 7.8
<class 'pandas.core.frame.DataFrame'>
v t e Lists of countries by laws and law enforcement rankings \
0 Age of
1 Drugs
2 Death
3 Guns
4 Punishment
Unnamed: 1
0 Consent Legal candidacy for political office C...
1 Alcohol Alcohol consumption Alcohol law Bath s...
2 Legality of euthanasia Homicide by decade Law ...
3 Deaths Ownership
4 Corporal punishment At home At school In court...
###Markdown
The third table (wiki_data[2]) seems to have the data we want.
###Code
gun_toters = wiki_data[2]
gun_toters.style
gun_toters.dtypes
###Output
_____no_output_____
###Markdown
We need to fix quite a lot of that.
1. Row 73 seems to be a copy of the column names, so that should go.
2. The 'Country' column should preferably not contain weird repetitions.
3. The columns 'Guns per 100 hab' and 'Deaths' should be numeric.
4. 'Guns per 100 hab' is a bit long.
5. We don't need the contributing factors to 'Deaths'.
6. Sources are useful, but not here.
7. If year is useful, we would prefer to only have one.
###Code
# 1. Drop it like it's hot ...
gun_toters.drop(73, inplace=True)
# 3. & 4. Go numeric and go short.
gun_toters['Deaths'] = pd.to_numeric(gun_toters['Total'])
gun_toters['Guns'] = pd.to_numeric(gun_toters['Guns per 100 hab'], errors='coerce')
# 5. Since some values under 'Method of Calculation' are weird, we take them all.
def extract_year(text):
m = re.search('(\d{4})', text)
if m is None:
return np.NaN
else:
return pd.to_numeric(m.group(0))
gun_toters['Method of Calculation'] = gun_toters['Method of Calculation'].apply(extract_year)
gun_toters['Homicides'] = pd.to_numeric(gun_toters['Homicides'].apply(extract_year))
gun_toters['Suicides'] = gun_toters['Suicides'].apply(extract_year)
gun_toters['Unintentional'] = gun_toters['Unintentional'].apply(extract_year)
gun_toters['Undetermined'] = gun_toters['Undetermined'].apply(extract_year)
# 7. And select the "best" one for a new column.
def best_year(row):
if not np.isnan(row['Method of Calculation']):
return row['Method of Calculation']
else:
return max([row['Homicides'], row['Suicides'], row['Unintentional'], row['Undetermined']])
gun_toters['Year'] = gun_toters.apply(best_year, axis=1)
# 6. Again; Drop it like it's hot ...
gun_toters.drop(['Total',
'Guns per 100 hab',
'Method of Calculation',
'Homicides',
'Suicides',
'Unintentional',
'Undetermined',
'Sources and notes'], inplace=True,axis=1)
gun_toters.sort_values(by='Deaths', inplace=True)
gun_toters.dropna(inplace=True)
gun_toters.style
###Output
_____no_output_____
###Markdown
World bank data

Let's get some data from the [World bank](http://data.worldbank.org). They have all kinds of interesting information. To do that we need to get country codes first, so we can look at just the things we want. This turned a bit messy. The library 'pycountry' was the best I could find, but people do love to write names in creative ways. So the first thing we need is a function to keep track of that kind of madness. But once we are done, the country code makes for a good index, since countries appear only once.

Country code

The World bank prefers to deal data to those who hand in a list of country codes. Also, the country names in the Wikipedia data do not match the names in the World bank data. But the three letter code seems to work. And the names are similar enough ... To make this more fun, neither matches the names in the 'pycountry' package. But we are lazy and don't really care. So let's force every name to match the 'pycountry' names. This will make it easier if we want to use that package later.
###Code
import pycountry as pc
def clean_country_code(country):
country = country \
.split("!")[0].strip() \
.replace("Macedonia","Macedonia, Republic of") \
.replace("South Korea","Korea, Republic of") \
.replace("Korea, Rep.","Korea, Republic of") \
.replace("Venezuela, RB","Venezuela")
code = 'none found'
try:
code = pc.countries.get(name=country)
except KeyError:
try:
code = pc.countries.get(common_name=country)
except KeyError:
try:
code = pc.countries.get(official_name=country)
except:
print ("Whooopsie ...\n" + country)
return (code.alpha_3,code.name)
gun_toters['CountryCode'],gun_toters['Country'] = zip(*gun_toters['Country'].apply(lambda s: clean_country_code(s)))
gun_toters.style
###Output
_____no_output_____
###Markdown
Now we can get that juicy World bank data.
###Code
from pandas_datareader import wb
indicators=[
'NY.GDP.PCAP.KD', # GDP per capita
'TX.VAL.TECH.MF.ZS' # Percent of export that is High-tech
]
wb_values = wb.download(indicator=indicators, country=gun_toters.CountryCode)
wb_values.reset_index(inplace=True)
wb_values.columns = [
'Country',
'Year',
'GDP',
'High-tech']
wb_values['CountryCode'],wb_values['Country'] = zip(*wb_values['Country'].apply(lambda s: clean_country_code(s)))
wb_values.style
###Output
_____no_output_____
###Markdown
We don't really care to have the granularity of years though. So for each country we take the mean of the values. We also need the country code to match up with the other table. So let's make that an index here too. And we should drop the 'Country' column to avoid collisions with the other data set.
###Code
wb_values = \
wb_values \
.groupby(['CountryCode']) \
.mean() \
.reset_index()
wb_values.style
###Output
_____no_output_____
###Markdown
That seems fine. But some of the stuff we want to do will fail on missing values. So we should see if there is anything that could be a problem.
###Code
wb_values.isnull().sum()
missing_value = wb_values[wb_values['High-tech'].isnull()]
missing_value
###Output
_____no_output_____
###Markdown
Ok ... Montenegro. Not a huge country in tech, from what I can tell. Let's find the countries in the same GDP range and assign something similar. Say the mean of that range. I have no idea if it's reasonable, but plus minus 1000 in GDP would give a range to work with ...
###Code
min_gdp = missing_value.GDP[43] - 1000
max_gdp = missing_value.GDP[43] + 1000
MNE_tech_range = wb_values[(wb_values.GDP > min_gdp) & (wb_values.GDP < max_gdp)]
MNE_tech_range
MNE_tech_range_average = MNE_tech_range['High-tech'].mean()
MNE_tech_range_average
wb_values.set_value(43,'High-tech', MNE_tech_range_average).style
###Output
_____no_output_____
###Markdown
Joining it all up

Now we have the numbers from the World bank and Wikipedia. Both have the three letter country code in the 'CountryCode' column.
###Code
gun_toters = pd.merge(gun_toters, wb_values, on='CountryCode')
gun_toters.style
###Output
_____no_output_____
###Markdown
Feather

Finally we dump the data to a feather file. This enables trivial, fast and compact data reads, writes and storage.
###Code
import feather
path = 'gun_toters.feather'
feather.write_dataframe(gun_toters, path)
###Output
_____no_output_____
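###Markdown
Reading the data back later is a one-liner (a small sketch using the same `feather` package):
###Code
# Load the feather file back into a DataFrame to confirm the round trip works
gun_toters_check = feather.read_dataframe(path)
gun_toters_check.shape
###Output
_____no_output_____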
|
sesion03/Caso_Segmentacion_Banco.ipynb
|
###Markdown
Card Purchase Preference Segmentation Model for a Retail Bank

We use customers' card purchase patterns to build a loyalty strategy through offers provided by the bank's commercial partners.

Load the data
###Code
matriz<-read.csv("segmentacion.csv")
###Output
_____no_output_____
###Markdown
Size of the data matrix
###Code
dim(matriz)
###Output
_____no_output_____
###Markdown
We view the first records with 'head'
###Code
head(matriz)
###Output
_____no_output_____
###Markdown
We train the model using the 'kmeans' function
###Code
seg6<-kmeans(matriz[2:25],6)
###Output
_____no_output_____
###Markdown
We view the results of the seg6 object
###Code
seg6
###Output
_____no_output_____
###Markdown
We export the centroids and frequencies
###Code
write.csv(cbind(seg6$centers,n=seg6$size),"arch06.csv")
###Output
_____no_output_____
###Markdown
We try with 9 segments
###Code
seg9<-kmeans(matriz[2:25],9)
write.csv(cbind(seg9$centers,n=seg9$size),"arch9.csv")
###Output
_____no_output_____
|
advanced_functionality/huggingface_byo_scripts_and_data/huggingface-custom-text-summarizer.ipynb
|
###Markdown
For ease of use, we advise opening this notebook in an Amazon SageMaker notebook instance using the `conda_pytorch_latest_p36` kernel, or in Amazon SageMaker Studio using a `Python 3 (PyTorch 1.8 Python 3.6 CPU Optimized)` kernel on a `ml.t3.medium` instance.

Fine-tuning and deploying a Hugging Face summarization model on SageMaker with your own scripts and dataset

In this notebook, we will see how to fine-tune and deploy one of the [🤗 Transformers](https://github.com/huggingface/transformers) models for a summarization task on [Amazon SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html) with your own scripts and data.

In the first part, "Preparing the dataset", we show how to load your own dataset to S3 as separate files for training, validation and testing. We will use the [Women's E-Commerce Clothing Reviews dataset](https://www.kaggle.com/nicapotato/womens-ecommerce-clothing-reviews/) which contains e-commerce clothing reviews and review titles, but we also provide code to do it for your own custom dataset. In our case the text and summary columns are called `review_text` and `title` respectively, and the data is saved in S3 under the prefix `DEMO-sagemaker-huggingface-summarization`.

Afterwards, we walk you through how to create your own train and inference scripts to fine-tune and deploy a Hugging Face model on Amazon SageMaker.

Make sure that the latest version of the SageMaker SDK is installed
###Code
# Install the required libraries
import sys
!{sys.executable} -m pip install datasets
!{sys.executable} -m pip install py7zr
!{sys.executable} -m pip install -U sagemaker
# Ensure packages are reloaded without having to restart Kernel
import importlib
import datasets
import py7zr
import sagemaker
importlib.reload(datasets)
importlib.reload(py7zr)
importlib.reload(sagemaker)
###Output
_____no_output_____
###Markdown
Part 1: Preparing the dataset for Hugging Face on Amazon SageMaker

One way to prepare your dataset for training on Amazon SageMaker is to have your training, validation and test datasets saved separately. This enables us to effectively decouple data preparation from training in an architecture and, for example, ensure that the same datasets can be reused by different models with the same split. In this example we download the [Women's E-Commerce Clothing Reviews dataset](https://www.kaggle.com/nicapotato/womens-ecommerce-clothing-reviews/) and prepare it for Hugging Face using the [`datasets`](https://github.com/huggingface/datasets) library. Any dataset containing text and something that could be considered a summary (e.g. titles) can work here.

We first import required packages and define the prefix where we will save the data:
###Code
import os
import json
import io, boto3, sagemaker
import pandas as pd
from datasets import load_dataset, filesystems, DatasetDict
s3_resource = boto3.resource("s3")
session = sagemaker.Session()
session_bucket = session.default_bucket()
s3_prefix = "DEMO-sagemaker-huggingface-summarization"
###Output
_____no_output_____
###Markdown
We read the raw dataset directly from its source
###Code
!aws s3 cp s3://sagemaker-sample-files/datasets/tabular/womens_clothing_ecommerce/Womens_Clothing_E-Commerce_Reviews.csv .
path_to_input_file = "Womens_Clothing_E-Commerce_Reviews.csv"
df = pd.read_csv(path_to_input_file)
###Output
_____no_output_____
###Markdown
This raw dataset has missing values in the columns that are interesting for us: "Review text" and "Title". So we drop rows with missing values in those 2 columns. Additionally, we reformat the column names to be lowercase and replace space by underscore.
###Code
df.columns = df.columns.str.lower()
df.columns = df.columns.str.replace(" ", "_")
df = df.dropna(subset=["title", "review_text"])
df.head()
###Output
_____no_output_____
###Markdown
The cleaned dataset should contain 19675 rows.
###Code
path_to_your_file = "Womens_Clothing_E-Commerce_Reviews.csv"
df.to_csv(path_to_your_file, index=False)
###Output
_____no_output_____
###Markdown
Now that we've cleaned the data of missing reviews and titles, we will load it with the `load_dataset()` function from the `datasets` library and split it into train, validation and test sets.
###Code
# When using your own custom dataset (single CSV/JSON), you can use the datasets.Dataset.train_test_split() method to shuffle and split your data.
# The splits will be shuffled by default. You can deactivate this behavior by setting shuffle=False
# Replace type to 'json' if you are using a JSON files, the rest of the steps are exactly the same
data = load_dataset("csv", data_files=path_to_your_file, split="train") # path to your file
# Split into 70% train, 30% test + validation
train_test_validation = data.train_test_split(test_size=0.3)
# Split 30% test + validation into half test, half validation
test_validation = train_test_validation["test"].train_test_split(test_size=0.5)
# Gather the splits to have a single DatasetDict
dataset = DatasetDict(
{
"train": train_test_validation["train"],
"validation": test_validation["train"],
"test": test_validation["test"],
}
)
dataset
###Output
_____no_output_____
###Markdown
We can inspect an example review:
###Code
print("Review Text\n{text}".format(text=dataset["train"]["review_text"][12]))
print("\nTitle\n{summary}".format(summary=dataset["train"]["title"][12]))
print("\nRating\n{rating}".format(rating=dataset["train"]["rating"][12]))
###Output
_____no_output_____
###Markdown
Finally, we upload the training, validation and test splits to S3. We use the `save_to_disk` method to directly save the dataset to S3 in the Hugging Face dataset format. The format is backed by Apache Arrow, which enables processing of large datasets with zero-copy reads, without any memory constraints, for optimal speed and efficiency. You can use the `load_from_disk` method in your train script to directly load the dataset in the format it was saved.
###Code
s3 = filesystems.S3FileSystem()
dataset.save_to_disk(f"s3://{session_bucket}/{s3_prefix}/train/", fs=s3)
###Output
_____no_output_____
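###Markdown
Inside the training job the 'train' channel is mounted to a local directory, so the training script can reload the dataset with `load_from_disk`. The following commented lines are only a sketch of what that step might look like in `train.py` (`SM_CHANNEL_TRAIN` is set by SageMaker inside the container, so this is not meant to run in this notebook):
###Code
# Sketch of the dataset-loading step inside the training script (illustration only):
# import os
# from datasets import load_from_disk
#
# dataset = load_from_disk(os.environ["SM_CHANNEL_TRAIN"])
# train_dataset = dataset["train"]
# validation_dataset = dataset["validation"]
# test_dataset = dataset["test"]
###Output
_____no_output_____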
###Markdown
Part 2: Fine-tune and deploy a Hugging Face model on Amazon SageMaker

Now that the data is ready and saved in S3, we will demonstrate how to fine-tune and deploy a Hugging Face model on Amazon SageMaker with your own scripts.
###Code
text_column = "review_text"
target_column = "title"
###Output
_____no_output_____
###Markdown
This notebook is built to run with any model checkpoint from the [Model Hub](https://huggingface.co/models) as long as that model has a sequence-to-sequence version in the Transformers library. Here we picked the [`pegasus-xsum`](https://huggingface.co/google/pegasus-xsum) checkpoint.
###Code
model_name = "google/pegasus-xsum"
###Output
_____no_output_____
###Markdown
Write the training script

To fine-tune a Hugging Face model with a custom dataset on Amazon SageMaker, we will write a training script to be used by the Amazon SageMaker Training Job. The training script will need to do the following steps:
- Load a pretrained Tokenizer and Model
- Load and tokenize datasets
- Define the Training Arguments
- Define a Trainer
- Train the model and save the checkpoint with the best performance on the validation set
- Evaluate the best checkpoint on the test set

These steps will be done in a `train()` function which uses a couple of helper functions: `tokenize()` takes a batch and the specified text and target columns and tokenizes them with the Tokenizer loaded in memory; `load_and_tokenize()` reads data from S3 and applies the `tokenize()` function; and `compute_metrics()` computes ROUGE scores for evaluation.

The script uses `AutoTokenizer` and `AutoModelForSeq2SeqLM`, which work with any [🤗 Transformers](https://github.com/huggingface/transformers) model for summarization. You might however want to change some hyperparameters depending on what works best for each model. Here we used `adafactor` as the optimizer for Pegasus, for example.

All computations will be running inside Amazon SageMaker Hugging Face training and inference containers, which we call using the [SageMaker SDK](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html)
###Code
!pygmentize source/train.py
###Output
_____no_output_____
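###Markdown
The full script lives in `source/train.py` (its pygmentize output is not captured above). As a rough, hypothetical illustration only (not the actual script), the `tokenize()` helper described above could look something like this; the column names would come from the hyperparameters and the maximum lengths here are assumptions:
###Code
# Illustrative sketch only -- this is not the contents of source/train.py
def tokenize(batch, tokenizer, text_column, target_column,
             max_source_length=512, max_target_length=64):
    # Tokenize the review text (model inputs)
    model_inputs = tokenizer(batch[text_column], max_length=max_source_length, truncation=True)
    # Tokenize the titles (targets) using the target-tokenizer context
    with tokenizer.as_target_tokenizer():
        targets = tokenizer(batch[target_column], max_length=max_target_length, truncation=True)
    model_inputs["labels"] = targets["input_ids"]
    return model_inputs
###Output
_____no_output_____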
###Markdown
By default, the `Trainer` saves several checkpoints before selecting the best one. Once the best checkpoint is loaded in memory and saved, those remaining checkpoints are not needed anymore. They can be safely deleted (which we do in the last line of `train()`) to free space in `SM_MODEL_DIR`, whose contents will be used later for creating a SageMaker Model and deploying it to an endpoint.

Fine-tuning the model on SageMaker

We first load a couple of libraries and objects, namely `sagemaker` and the `HuggingFace` SageMaker Estimator, which will be used to launch a training job.
###Code
role = sagemaker.get_execution_role()
from sagemaker.huggingface import HuggingFace
output_path = f"s3://{session_bucket}/{s3_prefix}"
###Output
_____no_output_____
###Markdown
We define a few arguments to be sent to the training script which will be read by the parser.
###Code
# We set the number of epochs to 1 to reduce the training time in this demo.
# For complete fine-tuning of the model please consider increasing the number of epochs to e.g. 5
hyperparameters = {
"model-name": model_name,
"text-column": text_column,
"target-column": target_column,
"epoch": 1,
}
metric_definitions = [
{"Name": "training:loss", "Regex": "'loss': (.*?),"},
{"Name": "validation:loss", "Regex": "'eval_loss': (.*?),"},
{"Name": "validation:rouge1", "Regex": "'eval_rouge1': (.*?),"},
{"Name": "validation:rouge2", "Regex": "'eval_rouge2': (.*?),"},
{"Name": "validation:rougeL", "Regex": "'eval_rougeL': (.*?),"},
{"Name": "validation:rougeLsum", "Regex": "'eval_rougeLsum': (.*?),"},
{"Name": "validation:gen_len", "Regex": "'eval_gen_len': (.*?),"},
]
###Output
_____no_output_____
###Markdown
Thanks to the [🤗 Transformers](https://github.com/huggingface/transformers) `Trainer`'s seamless integration with [SageMaker Distributed Data Parallel](https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel.html), we can make use of instances with several GPU units to parallelize and speed up training, without any modification to our training script.

When defining the SageMaker Hugging Face Estimator we specify a training script and source directory (here only containing `train.py`, but it could contain any additional modules and a `requirements.txt`), as well as the instance type on which to run the Training Job.
###Code
# configuration for running training on smdistributed Data Parallel
# Estimated runtime: 1.5h for 1 epoch
distribution = {"smdistributed": {"dataparallel": {"enabled": True}}}
huggingface_estimator = HuggingFace(
entry_point="train.py",
source_dir="source",
base_job_name="huggingface-summarizer",
instance_type="ml.p3.16xlarge",
instance_count=1,
volume_size=200,
transformers_version="4.6.1",
pytorch_version="1.7.1",
py_version="py36",
output_path=output_path,
role=role,
hyperparameters=hyperparameters,
metric_definitions=metric_definitions,
distribution=distribution,
)
###Output
_____no_output_____
###Markdown
We then launch the training job by specifying where to read the data from. 'train' will be loaded inside `SM_CHANNEL_TRAIN`, 'validation' inside `SM_CHANNEL_VALIDATION` and 'test' inside `SM_CHANNEL_TEST`, which will be the data directories inside the container running `train.py`.
###Code
huggingface_estimator.fit({"train": f"s3://{session_bucket}/{s3_prefix}/train/"})
###Output
_____no_output_____
###Markdown
With distributed training on a p3.16xlarge instance, the training should take around 6 hours for 5 epochs.

Bring your own inference script

Our friends at Hugging Face have made inference on SageMaker for transformers models simpler than ever thanks to the [SageMaker Hugging Face Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit). You can directly deploy the previously trained model by simply setting the environment variable "HF_TASK":"summarization", following the instructions on the [HuggingFace website](https://huggingface.co/google/pegasus-xsum) (selecting "Deploy" and then "Amazon SageMaker"), without the need to write an inference script.

However, when needing specific post-processing, for example if for the same input you want to return several summaries based on different text generation parameters, bringing your own `inference.py` script might be useful, and relatively straightforward:
###Code
!pygmentize source/inference.py
###Output
_____no_output_____
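###Markdown
The actual script is in `source/inference.py` (its pygmentize output is not captured above). For illustration only, a hypothetical version covering the same request format used later in this notebook might look like the following sketch (running it locally would require the transformers library to be installed):
###Code
# Illustrative sketch only -- this is not the contents of source/inference.py
import json

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer


def model_fn(model_dir):
    # Load the fine-tuned model and tokenizer saved in SM_MODEL_DIR during training
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)
    return model, tokenizer


def input_fn(request_body, request_content_type="application/json"):
    # The JSONSerializer on the client side sends a JSON body
    return json.loads(request_body)


def predict_fn(data, model_and_tokenizer):
    model, tokenizer = model_and_tokenizer
    encoded = tokenizer(data["inputs"], truncation=True, padding=True, return_tensors="pt")
    summaries = []
    # One set of generated summaries per entry in "parameters_list"
    for params in data.get("parameters_list", [{}]):
        generated = model.generate(**encoded, **params)
        summaries.append(tokenizer.batch_decode(generated, skip_special_tokens=True))
    return summaries
###Output
_____no_output_____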
###Markdown
As we can see, the only requirement for writing such an inference script for Hugging Face on SageMaker is that it contains the following template functions:
- `model_fn()` reading the content of what was saved at the end of the training job inside `SM_MODEL_DIR`, or from an existing model weights directory saved as a `tar.gz` in S3. We will use it to load the trained model and associated tokenizer.
- `input_fn()` used here simply to format the data received from a request made to the endpoint.
- `predict_fn()` calling the output of `model_fn()` (so here the model and tokenizer) to run inference on the output of `input_fn()`.

Optionally, an `output_fn()` can be created for output formatting, using the output of `predict_fn()`, but we did not use it here.

Create and deploy a SageMaker Model to an endpoint and test it

This time we will import the SageMaker `HuggingFaceModel` object which will help us create a SageMaker Model and deploy it to an endpoint.
###Code
from sagemaker.huggingface import HuggingFaceModel
###Output
_____no_output_____
###Markdown
Again, we specify here the inference script that we wrote earlier, a source directory (here again containing only `inference.py` but could contain modules and a `requirements.txt`) and `model_data` specifying where to load the model weights from. Using `huggingface_estimator.model_data` directly points to the s3 location where the output of the `huggingface_estimator` (after training) was saved, but any s3 arn containing pre-trained weights compressed as a `tar.gz` could work.
###Code
model_name = "summarization-model"
model_for_deployment = HuggingFaceModel(
entry_point="inference.py",
source_dir="source",
model_data=huggingface_estimator.model_data,
role=role,
pytorch_version="1.7.1",
py_version="py36",
transformers_version="4.6.1",
name=model_name,
)
###Output
_____no_output_____
###Markdown
Finally, we deploy the registered model by specifying the instance type.
###Code
endpoint_name = "summarization-endpoint"
predictor = model_for_deployment.deploy(
initial_instance_count=1,
instance_type="ml.g4dn.xlarge",
endpoint_name=endpoint_name,
serializer=sagemaker.serializers.JSONSerializer(),
deserializer=sagemaker.deserializers.JSONDeserializer(),
)
###Output
_____no_output_____
###Markdown
Once the model is deployed, you can test it directly. Feel free to change the parameters list to see different predictions.
###Code
article_index = 12
print("Review Text\n{text}".format(text=dataset["test"]["review_text"][article_index]))
print("\nTitle\n{summary}".format(summary=dataset["test"]["title"][article_index]))
print("\nRating\n{rating}".format(rating=dataset["test"]["rating"][article_index]))
# Examples taken from the test set
texts = [dataset["test"]["review_text"][article_index]]
inputs = {
"inputs": texts,
"parameters_list": [
{"length_penalty": 2, "num_beams": 5, "do_sample": True},
{"length_penalty": 1, "num_beams": 5, "do_sample": True},
{"length_penalty": 0.6, "num_beams": 3, "do_sample": True},
{"max_length": 25, "top_p": 0.92, "top_k": 50, "do_sample": True},
],
}
summaries = predictor.predict(inputs)
for s in summaries:
print(s)
###Output
_____no_output_____
###Markdown
Lastly, please remember to delete the Amazon SageMaker endpoint to avoid charges.
###Code
predictor.delete_model()
predictor.delete_endpoint()
###Output
_____no_output_____
|
_webscraping/Web_scraping_epicurious.ipynb
|
###Markdown
Webscraping intro Scraping rules- You should check a site's terms and conditions before you scrape them. It's their data and they likely have some rules to govern it.- Be nice - A computer will send web requests much quicker than a user can. Make sure you space out your requests a bit so that you don't hammer the site's server.- Scrapers break - Sites change their layout all the time. If that happens, be prepared to rewrite your code.- Web pages are inconsistent - There's sometimes some manual clean up that has to happen even after you've gotten your data. Import necessary modules
###Code
import requests
from bs4 import BeautifulSoup
import json
import os
###Output
_____no_output_____
###Markdown
requests- requests executes HTTP requests, like GET- The requests object holds the results of the request. This is page content and other items like HTTP status codes and headers.- requests only gets the page content without any parsing.- Beautiful Soup does the parsing of the HTML and finding content within the HTML. requests - connect as function
###Code
def connect(url):
response = requests.get(url)
if response.status_code == 200:
print('successfully connected, response code: {}'.format(response.status_code))
else:
print('connection failed')
return response
url = 'http://www.epicurious.com/search/'
connect(url);
###Output
_____no_output_____
###Markdown
requests pass search keyword
###Code
keywords = input("Please enter the things you want to see in a recipe: ")
connect(url + keywords)
###Output
_____no_output_____
###Markdown
BeautifulSoup
###Code
n_chars = 1000
soup = BeautifulSoup(connect(url).content, 'lxml')
print(soup.prettify()[:n_chars])
###Output
_____no_output_____
###Markdown
Get result page as function
###Code
def result_page(url, keywords=''):
response = requests.get(url + keywords)
if not response.status_code == 200:
return None
return BeautifulSoup(response.content, 'lxml')
keywords = input("Please enter the things you want to see in a recipe: ")
soup = result_page(url, keywords)
soup.body.div
###Output
_____no_output_____
###Markdown
BS4 functions find_all list of results
###Code
n_lines = 5
all_a_tags = soup.find_all('a')
print(type(all_a_tags))
all_a_tags[:n_lines]
###Output
_____no_output_____
###Markdown
find first result
###Code
div_tag = soup.find('div')
type(div_tag), div_tag
soup.find_all('a')[0] == soup.find('a')
###Output
_____no_output_____
###Markdown
Recursively apply on elements (traverse)
###Code
(soup
.find('div')
.find('a')
.get_text())
###Output
_____no_output_____
###Markdown
find and find_all as CSS selectors: either using selector=value, e.g. class_='recipe-content-card', or using a dictionary, e.g. {'class':'recipe-content-card'}. class is a reserved word in python, so use the string 'class' in a dictionary or the keyword class_
###Code
selector = 'recipe-content-card'
soup.find_all('article', class_=selector)[0] == soup.find('article', {'class':selector})
###Output
_____no_output_____
###Markdown
get_text() Returns the content enclosed in a tag
###Code
soup.find('article', {'class':selector}).get_text()
###Output
_____no_output_____
###Markdown
get()Returns the value of a tag attribute
###Code
recipe_tag = soup.find('article',{'class':selector})
recipe_link = recipe_tag.find('a')
link_url = recipe_link.get('href')
recipe_content = recipe_tag.find('a').get_text()
print('a tag: {}\n - content: {}\n - link url: {}\n - link type: {} '.format(recipe_link, recipe_content, link_url, type(link_url)))
###Output
_____no_output_____
###Markdown
List of recipes
###Code
def get_recipes(url, keywords='', selector=''):
recipe_list = []
try:
soup = result_page(url, keywords)
recipes = soup.find_all('article', class_=selector)
for recipe in recipes:
recipe_link = url + recipe.find('a').get('href')
recipe_name = recipe.find('a').get_text()
try:
recipe_description = recipe.find('p', class_='dek').get_text()
except:
recipe_description = ''
recipe_list.append((recipe_name, recipe_link, recipe_description))
return recipe_list
except:
return None
url = 'http://www.epicurious.com/search/'
keywords = input('Please enter the things you want to see in a recipe: ')
selector = 'recipe-content-card'
get_recipes(url, keywords, selector)
###Output
_____no_output_____
###Markdown
Recipe ingredients and preparation
###Code
def get_recipe_info(url, keywords='', selector=''):
recipe_dict = {}
try:
soup = result_page(url, keywords)
ingredient_list, prep_steps_list = [], []
for ingredient in soup.find_all('li', class_='ingredient'):
ingredient_list.append(ingredient.get_text())
for prep_step in soup.find_all('li', class_='preparation-step'):
prep_steps_list.append(prep_step.get_text().strip())
recipe_dict['ingredients'], recipe_dict['preparation'] = ingredient_list, prep_steps_list
return recipe_dict
except:
return recipe_dict
url = 'http://www.epicurious.com'
link = '/recipes/food/views/spicy-lemongrass-tofu-233844'
recipe_info = get_recipe_info(url + link)
recipe_info
###Output
_____no_output_____
###Markdown
Get all recipes
###Code
def get_all_recipes(url, keywords='', selector=''):
results = []
all_recipes = get_recipes(url, keywords, selector)
for recipe in all_recipes:
recipe_dict = get_recipe_info(recipe[1])
recipe_dict['name'] = recipe[0]
recipe_dict['description'] = recipe[2]
results.append(recipe_dict)
return results
keywords = input('Please enter the things you want to see in a recipe: ')
selector = 'recipe-content-card'
all_recipes = get_all_recipes(url, keywords, selector)
all_recipes
import pandas as pd
pd.DataFrame(all_recipes)
###Output
_____no_output_____
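As a closing note tied to the "be nice" rule at the top of this notebook, `get_all_recipes` fires one request per recipe back to back. A minimal sketch of spacing those calls out (the one-second delay is an arbitrary choice, not something the original code does):

```python
import time

def polite_call(func, *args, delay_seconds=1.0, **kwargs):
    """Call `func`, then pause so consecutive requests are spaced out."""
    result = func(*args, **kwargs)
    time.sleep(delay_seconds)
    return result

# e.g. inside get_all_recipes: recipe_dict = polite_call(get_recipe_info, recipe[1])
```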
|
extraction/search_github_api.ipynb
|
###Markdown
Extracting data from GitHub We will search for initiatives/projects that use Open Government Data through the [GitHub API](https://developer.github.com/v3/). Based on the feedback received at the qualification stage of this research project, we understand that characterizing the entire community that uses open government data is too broad a scope and hard to validate. The idea is to narrow this scope so that the results can be evaluated better, for example by checking whether the reference projects in these areas appear among the repositories extracted from GitHub. We therefore consider the context of open government data on education. The keyword generation process drew on 3 resources:- Keywords used in previous works- Keywords in the databases of the INEP portal- Keywords from reference portals for Brazilian education: - [Ministério da Educação](https://www.mec.gov.br/) (sisu, enem, fies, prouni, mec) - [Dados Abertos do Ministério da Educação](http://dadosabertos.mec.gov.br/) (pme, prouni, pronatec, pnp, fies) - [FUNDEB](https://www.fnde.gov.br/financiamento/fundeb) (fundeb, fnde, siope) - [INEP](http://inep.gov.br/dados) (saeb, mde, educational financial indicators) - [Portal brasileiro de dados abertos](http://www.dados.gov.br/aplicativos): the keywords on this site had already been covered, but it still seems worthwhile to contact the projects listed in its Aplicativos section directly.
###Code
import requests
import pandas as pd
import time
import logging
###Output
_____no_output_____
###Markdown
The main keywords used by [Attard et al. (2015)](https://www.researchgate.net/publication/281349915_A_Systematic_Review_of_Open_Government_Data_Initiatives) are analysis, portal, publication, and consume, combined with open government data or government data. However, in tests with the GitHub API, *consume* did not prove crucial for returning relevant results.
###Code
first_search_strings = [
'dados abertos',
'dados abertos brasil',
'dados abertos governo',
'dados abertos governamentais',
'dados governamentais',
'dados publicos abertos',
'dados do governo',
'analise de dados do governo',
'analise de dados governamentais',
'portal de dados do governo',
'portal de dados governamentais',
'portal publico do governo',
'portal de dados abertos do governo',
]
actual_search_strings = [
'dados educacao',
'dados educacao basica',
'dados educacionais',
'analise educacao',
'analise educacao basica',
'analise educacional',
'censo educacao superior',
'dados educacao superior',
'analise educacao superior',
'censo profissionais magistério',
'dados profissionais magistério',
'analise profissionais magistério',
'censo escolar',
'dados escola inep',
'dados enade',
'analise enade',
'dados encceja',
'analise encceja',
'dados enem',
'analise enem',
'enem por escola',
'dados prova brasil',
'analise prova brasil',
'dados ideb',
'indicadores educacionais',
'dados ies',
'analise ies',
'dados inep',
'analise inep',
'microdados inep',
'dados sisu',
'analise sisu',
'dados fies',
'analise fies',
'dados prouni',
'analise prouni',
'dados mec',
'analise mec',
'dados pme',
'analise pme',
'dados pronatec',
'analise pronatec',
'dados pnp',
'analise pnp',
'dados fundeb',
'analise fundeb',
'dados fnde',
'analise fnde',
'dados siope',
'analise siope',
'dados saeb',
'analise saeb',
'dados mde',
'analise mde',
'indicadores financeiros educacionais']
'Temos um total de ' + str(len(actual_search_strings)) + ' palavras chaves'
###Output
_____no_output_____
###Markdown
Configuration for generating the log file
###Code
logging.basicConfig(level=logging.DEBUG,
filename="log_file.txt",
filemode="a+",
format="%(asctime)s - %(levelname)s - %(funcName)s - %(message)s")
logging.info("Extração de dados do Github")
###Output
_____no_output_____
###Markdown
Access to some GitHub API resources requires authentication, for instance to increase the request limit. Information about authentication can be found [here](https://developer.github.com/v3/authentication).
###Code
credentials = ('<user_name>','<token>')
###Output
_____no_output_____
###Markdown
Request limit without authentication
###Code
limits = requests.get('https://api.github.com/rate_limit')
limits.json()
###Output
_____no_output_____
###Markdown
Request limit with authentication
###Code
limits = requests.get('https://api.github.com/rate_limit', auth=credentials)
limits.json()
###Output
_____no_output_____
###Markdown
Checking the API's data extraction limit (the Search API only returns the first 1,000 results, so requesting page 35 at 30 results per page fails)
###Code
page_35 = 'https://api.github.com/search/repositories?q=stars%3A%3E1&sort=stars&order=desc&page=35'
t = requests.get(page_35, auth=credentials)
t.json()
###Output
_____no_output_____
###Markdown
Information about the API's search tool can be found [here](https://developer.github.com/v3/search/)
###Code
url_base = 'https://api.github.com/search/repositories?q='
###Output
_____no_output_____
###Markdown
We can add an ordering to the results, such as number of _stars_ in descending order.
###Code
sort = '&sort=stars&order=desc'
###Output
_____no_output_____
###Markdown
Extracting general information
###Code
def extract_results(data):
items_list = []
logging.info('Debug data keys: {0}'.format(data.keys()))
if data.get('message', False):
logging.info('Debug data message: {0}'.format(data.get('message', None)))
logging.info('Debug data documentation_url: {0}'.format(data.get('documentation_url', None)))
for item in data.get('items', None):
item_dict = {
'id': item.get('id'),
'full_name': item.get('full_name', None),
'description': item.get('description', None),
'owner_type': item.get('owner').get('type', None),
'owner_api_url': item.get('owner').get('url', None),
'owner_url': item.get('owner').get('html_url', None),
'api_url': item.get('url', None),
'url': item.get('html_url', None),
'fork': item.get('fork', None),
'created_at': item.get('created_at', None),
'updated_at': item.get('updated_at', None),
'pushed_at': item.get('pushed_at', None),
'size': item.get('size', None),
'stargazers_count': item.get('stargazers_count', None),
'language': item.get('language', None),
'has_issues': item.get('has_issues', None),
'has_wiki': item.get('has_wiki', None),
'forks_count': item.get('forks_count', None),
'forks': item.get('forks', None),
'open_issues': item.get('open_issues', None),
'license': item.get('license').get('name', None) if item.get('license', None) else None,
'timestamp_extract': str(time.time()).split('.')[0]
}
items_list.append(item_dict)
return items_list
def check_limit():
limit = requests.get('https://api.github.com/rate_limit', auth=credentials)
limit = limit.json().get('resources').get('search').get('remaining')
    if limit == 0: # the API only allows 30 search requests per minute; wait when the limit is used up
time.sleep(180)
results_by_page = 30
def scroll_pages(url):
check_limit()
results = requests.get(url, auth=credentials)
data = results.json()
total = data.get('total_count', None)
logging.info('Foram encontrados {0} resultados. Extraindo...'.format(total))
items_list = []
items_list = extract_results(data)
iterations = total // results_by_page
for iteracao in range(0, iterations):
header = results.links
if header.get('next', False):
next_url = header.get('next').get('url')
check_limit()
results = requests.get(next_url, auth=credentials)
data = results.json()
items_list = items_list + extract_results(data)
return items_list
%%time
items_list = []
results_summary = []
repositories_df = None
for string in actual_search_strings:
url = url_base + string + sort
logging.info("Pesquisando repositórios para a string: '{0}'".format(string))
results = scroll_pages(url)
items_list = items_list + results
results_summary.append({'string': string, 'qtd':len(results)})
repositories_df = pd.DataFrame(items_list)
results_summary = pd.DataFrame(results_summary)
results_summary.sort_values('qtd', ascending=False)
results_summary.to_csv('../data/results_summary.csv', index=False)
repositories_df.tail(3)
###Output
_____no_output_____
###Markdown
Number of results:
###Code
len(repositories_df)
###Output
_____no_output_____
###Markdown
Removing duplicate records, since different search keywords can lead to the same repository.
###Code
repositories_df = repositories_df.drop_duplicates(['id', 'api_url'])
len(repositories_df)
###Output
_____no_output_____
###Markdown
Number of columns:
###Code
len(repositories_df.columns)
repositories_df = repositories_df.sort_values('stargazers_count', ascending=False)
repositories_df.head(3)
###Output
_____no_output_____
###Markdown
Extracting _Commits_, _Contributors_ and _Owner_ data
###Code
def extract_commits(url_repo):
commits_url = url_repo + '/commits'
results = requests.get(commits_url, auth=credentials)
if results.status_code == 409:
return None
commits = len(results.json())
header = results.links
while header.get('next', False):
next_url = header.get('next').get('url')
results = requests.get(next_url, auth=credentials)
commits = commits + len(results.json())
header = results.links
return commits
def extract_contributors(url_repo):
contributors_url = url_repo + '/contributors'
results = requests.get(contributors_url, auth=credentials)
if results.status_code == 204:
return None
contributors = len(results.json())
header = results.links
while header.get('next', False):
next_url = header.get('next').get('url')
results = requests.get(next_url, auth=credentials)
contributors = contributors + len(results.json())
header = results.links
return contributors
def extract_owner_data(owner_api_url):
results = requests.get(owner_api_url, auth=credentials)
data = results.json()
owner_data = {
'owner_location': data.get('location', None),
'owner_email': data.get('email', None),
'owner_blog': data.get('blog', None),
'owner_name': data.get('name', None)
}
return owner_data
%%time
urls = repositories_df['api_url']
for url in urls:
owner_api_url = repositories_df.loc[repositories_df["api_url"] == url]['owner_api_url'].item()
owner_data = extract_owner_data(owner_api_url)
commits = extract_commits(url)
contributors = extract_contributors(url)
logging.info("Repositório: {0}".format(url))
logging.info("Tem {0} Commits - {1} Contributors".format(commits,contributors))
logging.info("Owner location: {0}".format(owner_data.get('owner_location')))
repositories_df.loc[repositories_df["api_url"] == url, 'commits'] = commits
repositories_df.loc[repositories_df["api_url"] == url, 'contributors'] = contributors
repositories_df.loc[repositories_df["api_url"] == url, 'owner_location'] = owner_data.get('owner_location')
repositories_df.loc[repositories_df["api_url"] == url, 'owner_email'] = owner_data.get('owner_email')
repositories_df.loc[repositories_df["api_url"] == url, 'owner_blog'] = owner_data.get('owner_blog')
repositories_df.loc[repositories_df["api_url"] == url, 'owner_name'] = owner_data.get('owner_name')
###Output
CPU times: user 42.7 s, sys: 1.4 s, total: 44.1 s
Wall time: 29min 34s
###Markdown
We should now have 6 more columns
###Code
len(repositories_df.columns)
repositories_df.head(3)
###Output
_____no_output_____
###Markdown
Checking for null values
###Code
len(repositories_df.loc[repositories_df['commits'].isnull()][['api_url', 'commits', 'contributors']])
###Output
_____no_output_____
###Markdown
Some repositories really have no commits at all, such as [Scripts_INEP](https://github.com/ronielsampaio/Scripts_INEP).
###Code
repositories_df.loc[repositories_df['contributors'].isnull()][['id', 'url', 'api_url', 'commits', 'contributors']]
repositories_df = repositories_df.loc[repositories_df['contributors'].notnull()]
len(repositories_df)
###Output
_____no_output_____
###Markdown
Saving the repositories
###Code
repositories_df.to_csv('../data/repositories_edu.csv', index=False)
###Output
_____no_output_____
###Markdown
Extracting the repositories' contributors
###Code
def get_contributors(data, repo_data):
list_contributors = []
for item in data:
contributor = {
'repo_id': repo_data.get('repo_id', None),
'repo_name': repo_data.get('repo_name', None),
'repo_url': repo_data.get('repo_url', None),
'repo_api_url': repo_data.get('repo_api_url', None),
'contributor_id': item.get('id', None),
'contributor_login': item.get('login', None),
'contributor_type': item.get('type', None),
'contributor_url': item.get('html_url', None),
'contributor_api_url': item.get('url', None),
'timestamp_extract': str(time.time()).split('.')[0]
}
list_contributors.append(contributor)
return list_contributors
def scroll_contributors(url, repo_data):
list_contributors = []
results = requests.get(url, auth=credentials)
    if results.status_code == 204:
return None
data = results.json()
list_contributors = get_contributors(data, repo_data)
header = results.links
while header.get('next', False):
next_url = header.get('next').get('url')
results = requests.get(next_url, auth=credentials)
data = results.json()
list_contributors = list_contributors + get_contributors(data, repo_data)
header = results.links
return list_contributors
def search_contributors(repositories_df):
urls = repositories_df['api_url']
list_contributors_all_repo = []
for url in urls:
logging.info('Extraindo contribuidores de: {0}'.format(url))
repo_data = {
'repo_id': repositories_df.loc[repositories_df["api_url"] == url, 'id'].values[0],
'repo_name': repositories_df.loc[repositories_df["api_url"] == url, 'full_name'].values[0],
'repo_url': repositories_df.loc[repositories_df["api_url"] == url, 'url'].values[0],
'repo_api_url': url,
}
url_contributors = url + '/contributors'
contributors = scroll_contributors(url_contributors, repo_data)
if contributors:
list_contributors_all_repo = list_contributors_all_repo + contributors
contributors_df = pd.DataFrame(list_contributors_all_repo)
return contributors_df
%%time
contributors_df = search_contributors(repositories_df)
contributors_df.head(3)
###Output
_____no_output_____
###Markdown
Checking whether there are repeated contributors for the same repository.
###Code
contributors_df[contributors_df.duplicated(['contributor_id', 'repo_id'])]
len(contributors_df)
###Output
_____no_output_____
###Markdown
Saving the dataframe with the mapping between repositories and contributors.
###Code
contributors_df.to_csv('../data/contributors_edu.csv', index=False)
###Output
_____no_output_____
|
ch07/7_1_Model1_Unconditioned_Surname_Generation.ipynb
|
###Markdown
Surname Generation Imports
###Code
import os
from argparse import Namespace
from collections import Counter
import json
import re
import string
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from torch.nn import functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from tqdm import notebook
###Output
_____no_output_____
###Markdown
Data Vectorization classes Vocabulary
###Code
class Vocabulary(object):
"""Class to process text and extract vocabulary for mapping"""
def __init__(self, token_to_idx=None):
"""
Args:
token_to_idx (dict): a pre-existing map of tokens to indices
"""
if token_to_idx is None:
token_to_idx = {}
self._token_to_idx = token_to_idx
self._idx_to_token = {idx: token
for token, idx in self._token_to_idx.items()}
def to_serializable(self):
""" returns a dictionary that can be serialized """
return {'token_to_idx': self._token_to_idx}
@classmethod
def from_serializable(cls, contents):
""" instantiates the Vocabulary from a serialized dictionary """
return cls(**contents)
def add_token(self, token):
"""Update mapping dicts based on the token.
Args:
token (str): the item to add into the Vocabulary
Returns:
index (int): the integer corresponding to the token
"""
if token in self._token_to_idx:
index = self._token_to_idx[token]
else:
index = len(self._token_to_idx)
self._token_to_idx[token] = index
self._idx_to_token[index] = token
return index
def add_many(self, tokens):
"""Add a list of tokens into the Vocabulary
Args:
tokens (list): a list of string tokens
Returns:
indices (list): a list of indices corresponding to the tokens
"""
return [self.add_token(token) for token in tokens]
def lookup_token(self, token):
"""Retrieve the index associated with the token
Args:
token (str): the token to look up
Returns:
index (int): the index corresponding to the token
"""
return self._token_to_idx[token]
def lookup_index(self, index):
"""Return the token associated with the index
Args:
index (int): the index to look up
Returns:
token (str): the token corresponding to the index
Raises:
KeyError: if the index is not in the Vocabulary
"""
if index not in self._idx_to_token:
raise KeyError("the index (%d) is not in the Vocabulary" % index)
return self._idx_to_token[index]
def __str__(self):
return "<Vocabulary(size=%d)>" % len(self)
def __len__(self):
return len(self._token_to_idx)
class SequenceVocabulary(Vocabulary):
def __init__(self, token_to_idx=None, unk_token="<UNK>",
mask_token="<MASK>", begin_seq_token="<BEGIN>",
end_seq_token="<END>"):
super(SequenceVocabulary, self).__init__(token_to_idx)
self._mask_token = mask_token
self._unk_token = unk_token
self._begin_seq_token = begin_seq_token
self._end_seq_token = end_seq_token
self.mask_index = self.add_token(self._mask_token)
self.unk_index = self.add_token(self._unk_token)
self.begin_seq_index = self.add_token(self._begin_seq_token)
self.end_seq_index = self.add_token(self._end_seq_token)
def to_serializable(self):
contents = super(SequenceVocabulary, self).to_serializable()
contents.update({'unk_token': self._unk_token,
'mask_token': self._mask_token,
'begin_seq_token': self._begin_seq_token,
'end_seq_token': self._end_seq_token})
return contents
def lookup_token(self, token):
"""Retrieve the index associated with the token
or the UNK index if token isn't present.
Args:
token (str): the token to look up
Returns:
index (int): the index corresponding to the token
Notes:
`unk_index` needs to be >=0 (having been added into the Vocabulary)
for the UNK functionality
"""
if self.unk_index >= 0:
return self._token_to_idx.get(token, self.unk_index)
else:
return self._token_to_idx[token]
###Output
_____no_output_____
###Markdown
Vectorizer
###Code
class SurnameVectorizer(object):
""" The Vectorizer which coordinates the Vocabularies and puts them to use"""
def __init__(self, char_vocab, nationality_vocab):
"""
Args:
char_vocab (Vocabulary): maps words to integers
nationality_vocab (Vocabulary): maps nationalities to integers
"""
self.char_vocab = char_vocab
self.nationality_vocab = nationality_vocab
def vectorize(self, surname, vector_length=-1):
"""Vectorize a surname into a vector of observations and targets
The outputs are the vectorized surname split into two vectors:
surname[:-1] and surname[1:]
At each timestep, the first vector is the observation and the second vector is the target.
Args:
surname (str): the surname to be vectorized
vector_length (int): an argument for forcing the length of index vector
Returns:
a tuple: (from_vector, to_vector)
from_vector (numpy.ndarray): the observation vector
to_vector (numpy.ndarray): the target prediction vector
"""
indices = [self.char_vocab.begin_seq_index]
indices.extend(self.char_vocab.lookup_token(token) for token in surname)
indices.append(self.char_vocab.end_seq_index)
if vector_length < 0:
vector_length = len(indices) - 1
from_vector = np.empty(vector_length, dtype=np.int64)
from_indices = indices[:-1]
from_vector[:len(from_indices)] = from_indices
from_vector[len(from_indices):] = self.char_vocab.mask_index
to_vector = np.empty(vector_length, dtype=np.int64)
to_indices = indices[1:]
to_vector[:len(to_indices)] = to_indices
to_vector[len(to_indices):] = self.char_vocab.mask_index
return from_vector, to_vector
@classmethod
def from_dataframe(cls, surname_df):
"""Instantiate the vectorizer from the dataset dataframe
Args:
surname_df (pandas.DataFrame): the surname dataset
Returns:
an instance of the SurnameVectorizer
"""
char_vocab = SequenceVocabulary()
nationality_vocab = Vocabulary()
for index, row in surname_df.iterrows():
for char in row.surname:
char_vocab.add_token(char)
nationality_vocab.add_token(row.nationality)
return cls(char_vocab, nationality_vocab)
@classmethod
def from_serializable(cls, contents):
"""Instantiate the vectorizer from saved contents
Args:
contents (dict): a dict holding two vocabularies for this vectorizer
This dictionary is created using `vectorizer.to_serializable()`
Returns:
an instance of SurnameVectorizer
"""
char_vocab = SequenceVocabulary.from_serializable(contents['char_vocab'])
nat_vocab = Vocabulary.from_serializable(contents['nationality_vocab'])
return cls(char_vocab=char_vocab, nationality_vocab=nat_vocab)
def to_serializable(self):
""" Returns the serializable contents """
return {'char_vocab': self.char_vocab.to_serializable(),
'nationality_vocab': self.nationality_vocab.to_serializable()}
###Output
_____no_output_____
###Markdown
Dataset
###Code
class SurnameDataset(Dataset):
def __init__(self, surname_df, vectorizer):
"""
Args:
surname_df (pandas.DataFrame): the dataset
            vectorizer (SurnameVectorizer): vectorizer instantiated from dataset
"""
self.surname_df = surname_df
self._vectorizer = vectorizer
self._max_seq_length = max(map(len, self.surname_df.surname)) + 2
self.train_df = self.surname_df[self.surname_df.split=='train']
self.train_size = len(self.train_df)
self.val_df = self.surname_df[self.surname_df.split=='val']
self.validation_size = len(self.val_df)
self.test_df = self.surname_df[self.surname_df.split=='test']
self.test_size = len(self.test_df)
self._lookup_dict = {'train': (self.train_df, self.train_size),
'val': (self.val_df, self.validation_size),
'test': (self.test_df, self.test_size)}
self.set_split('train')
@classmethod
def load_dataset_and_make_vectorizer(cls, surname_csv):
"""Load dataset and make a new vectorizer from scratch
Args:
surname_csv (str): location of the dataset
Returns:
an instance of SurnameDataset
"""
surname_df = pd.read_csv(surname_csv)
return cls(surname_df, SurnameVectorizer.from_dataframe(surname_df))
@classmethod
def load_dataset_and_load_vectorizer(cls, surname_csv, vectorizer_filepath):
"""Load dataset and the corresponding vectorizer.
        Used in the case the vectorizer has been cached for re-use
Args:
surname_csv (str): location of the dataset
vectorizer_filepath (str): location of the saved vectorizer
Returns:
an instance of SurnameDataset
"""
surname_df = pd.read_csv(surname_csv)
vectorizer = cls.load_vectorizer_only(vectorizer_filepath)
return cls(surname_df, vectorizer)
@staticmethod
def load_vectorizer_only(vectorizer_filepath):
"""a static method for loading the vectorizer from file
Args:
vectorizer_filepath (str): the location of the serialized vectorizer
Returns:
an instance of SurnameVectorizer
"""
with open(vectorizer_filepath) as fp:
return SurnameVectorizer.from_serializable(json.load(fp))
def save_vectorizer(self, vectorizer_filepath):
"""saves the vectorizer to disk using json
Args:
vectorizer_filepath (str): the location to save the vectorizer
"""
with open(vectorizer_filepath, "w") as fp:
json.dump(self._vectorizer.to_serializable(), fp)
def get_vectorizer(self):
""" returns the vectorizer """
return self._vectorizer
def set_split(self, split="train"):
self._target_split = split
self._target_df, self._target_size = self._lookup_dict[split]
def __len__(self):
return self._target_size
def __getitem__(self, index):
"""the primary entry point method for PyTorch datasets
Args:
index (int): the index to the data point
Returns:
a dictionary holding the data point: (x_data, y_target, class_index)
"""
row = self._target_df.iloc[index]
from_vector, to_vector = \
self._vectorizer.vectorize(row.surname, self._max_seq_length)
nationality_index = \
self._vectorizer.nationality_vocab.lookup_token(row.nationality)
return {'x_data': from_vector,
'y_target': to_vector,
'class_index': nationality_index}
def get_num_batches(self, batch_size):
"""Given a batch size, return the number of batches in the dataset
Args:
batch_size (int)
Returns:
number of batches in the dataset
"""
return len(self) // batch_size
def generate_batches(dataset, batch_size, shuffle=True,
drop_last=True, device="cpu"):
"""
A generator function which wraps the PyTorch DataLoader. It will
    ensure each tensor is on the right device location.
"""
dataloader = DataLoader(dataset=dataset, batch_size=batch_size,
shuffle=shuffle, drop_last=drop_last)
for data_dict in dataloader:
out_data_dict = {}
for name, tensor in data_dict.items():
out_data_dict[name] = data_dict[name].to(device)
yield out_data_dict
###Output
_____no_output_____
###Markdown
The Model: SurnameGenerationModel
###Code
class SurnameGenerationModel(nn.Module):
def __init__(self, char_embedding_size, char_vocab_size, rnn_hidden_size,
batch_first=True, padding_idx=0, dropout_p=0.5):
"""
Args:
char_embedding_size (int): The size of the character embeddings
char_vocab_size (int): The number of characters to embed
rnn_hidden_size (int): The size of the RNN's hidden state
batch_first (bool): Informs whether the input tensors will
have batch or the sequence on the 0th dimension
padding_idx (int): The index for the tensor padding;
see torch.nn.Embedding
dropout_p (float): the probability of zeroing activations using
the dropout method. higher means more likely to zero.
"""
super(SurnameGenerationModel, self).__init__()
self.char_emb = nn.Embedding(num_embeddings=char_vocab_size,
embedding_dim=char_embedding_size,
padding_idx=padding_idx)
self.rnn = nn.GRU(input_size=char_embedding_size,
hidden_size=rnn_hidden_size,
batch_first=batch_first)
self.fc = nn.Linear(in_features=rnn_hidden_size,
out_features=char_vocab_size)
self._dropout_p = dropout_p
def forward(self, x_in, apply_softmax=False):
"""The forward pass of the model
Args:
x_in (torch.Tensor): an input data tensor.
x_in.shape should be (batch, input_dim)
apply_softmax (bool): a flag for the softmax activation
should be false if used with the Cross Entropy losses
Returns:
the resulting tensor. tensor.shape should be (batch, char_vocab_size)
"""
x_embedded = self.char_emb(x_in)
y_out, _ = self.rnn(x_embedded)
batch_size, seq_size, feat_size = y_out.shape
y_out = y_out.contiguous().view(batch_size * seq_size, feat_size)
y_out = self.fc(F.dropout(y_out, p=self._dropout_p))
if apply_softmax:
y_out = F.softmax(y_out, dim=1)
new_feat_size = y_out.shape[-1]
y_out = y_out.view(batch_size, seq_size, new_feat_size)
return y_out
def sample_from_model(model, vectorizer, num_samples=1, sample_size=20,
temperature=1.0):
"""Sample a sequence of indices from the model
Args:
model (SurnameGenerationModel): the trained model
vectorizer (SurnameVectorizer): the corresponding vectorizer
num_samples (int): the number of samples
sample_size (int): the max length of the samples
temperature (float): accentuates or flattens
the distribution.
0.0 < temperature < 1.0 will make it peakier.
temperature > 1.0 will make it more uniform
Returns:
indices (torch.Tensor): the matrix of indices;
shape = (num_samples, sample_size)
"""
begin_seq_index = [vectorizer.char_vocab.begin_seq_index
for _ in range(num_samples)]
begin_seq_index = torch.tensor(begin_seq_index,
dtype=torch.int64).unsqueeze(dim=1)
indices = [begin_seq_index]
h_t = None
for time_step in range(sample_size):
x_t = indices[time_step]
x_emb_t = model.char_emb(x_t)
rnn_out_t, h_t = model.rnn(x_emb_t, h_t)
prediction_vector = model.fc(rnn_out_t.squeeze(dim=1))
probability_vector = F.softmax(prediction_vector / temperature, dim=1)
indices.append(torch.multinomial(probability_vector, num_samples=1))
indices = torch.stack(indices).squeeze().permute(1, 0)
return indices
def decode_samples(sampled_indices, vectorizer):
"""Transform indices into the string form of a surname
Args:
        sampled_indices (torch.Tensor): the indices from `sample_from_model`
vectorizer (SurnameVectorizer): the corresponding vectorizer
"""
decoded_surnames = []
vocab = vectorizer.char_vocab
for sample_index in range(sampled_indices.shape[0]):
surname = ""
for time_step in range(sampled_indices.shape[1]):
sample_item = sampled_indices[sample_index, time_step].item()
if sample_item == vocab.begin_seq_index:
continue
elif sample_item == vocab.end_seq_index:
break
else:
surname += vocab.lookup_index(sample_item)
decoded_surnames.append(surname)
return decoded_surnames
###Output
_____no_output_____
###Markdown
Training Routine Helper functions
###Code
def make_train_state(args):
return {'stop_early': False,
'early_stopping_step': 0,
'early_stopping_best_val': 1e8,
'learning_rate': args.learning_rate,
'epoch_index': 0,
'train_loss': [],
'train_acc': [],
'val_loss': [],
'val_acc': [],
'test_loss': -1,
'test_acc': -1,
'model_filename': args.model_state_file}
def update_train_state(args, model, train_state):
"""Handle the training state updates.
Components:
- Early Stopping: Prevent overfitting.
- Model Checkpoint: Model is saved if the model is better
:param args: main arguments
:param model: model to train
:param train_state: a dictionary representing the training state values
:returns:
a new train_state
"""
# Save one model at least
if train_state['epoch_index'] == 0:
torch.save(model.state_dict(), train_state['model_filename'])
train_state['stop_early'] = False
# Save model if performance improved
elif train_state['epoch_index'] >= 1:
loss_tm1, loss_t = train_state['val_loss'][-2:]
# If loss worsened
if loss_t >= loss_tm1:
# Update step
train_state['early_stopping_step'] += 1
# Loss decreased
else:
# Save the best model
if loss_t < train_state['early_stopping_best_val']:
torch.save(model.state_dict(), train_state['model_filename'])
train_state['early_stopping_best_val'] = loss_t
# Reset early stopping step
train_state['early_stopping_step'] = 0
# Stop early ?
train_state['stop_early'] = \
train_state['early_stopping_step'] >= args.early_stopping_criteria
return train_state
def normalize_sizes(y_pred, y_true):
"""Normalize tensor sizes
Args:
y_pred (torch.Tensor): the output of the model
If a 3-dimensional tensor, reshapes to a matrix
y_true (torch.Tensor): the target predictions
If a matrix, reshapes to be a vector
"""
if len(y_pred.size()) == 3:
y_pred = y_pred.contiguous().view(-1, y_pred.size(2))
if len(y_true.size()) == 2:
y_true = y_true.contiguous().view(-1)
return y_pred, y_true
def compute_accuracy(y_pred, y_true, mask_index):
y_pred, y_true = normalize_sizes(y_pred, y_true)
_, y_pred_indices = y_pred.max(dim=1)
correct_indices = torch.eq(y_pred_indices, y_true).float()
valid_indices = torch.ne(y_true, mask_index).float()
n_correct = (correct_indices * valid_indices).sum().item()
n_valid = valid_indices.sum().item()
return n_correct / n_valid * 100
def sequence_loss(y_pred, y_true, mask_index):
y_pred, y_true = normalize_sizes(y_pred, y_true)
return F.cross_entropy(y_pred, y_true, ignore_index=mask_index)
###Output
_____no_output_____
###Markdown
General utilities
###Code
def set_seed_everywhere(seed, cuda):
np.random.seed(seed)
torch.manual_seed(seed)
if cuda:
torch.cuda.manual_seed_all(seed)
def handle_dirs(dirpath):
if not os.path.exists(dirpath):
os.makedirs(dirpath)
###Output
_____no_output_____
###Markdown
Settings and some prep work
###Code
args = Namespace(
# Data and Path information
surname_csv="../data/surnames/surnames_with_splits.csv",
vectorizer_file="vectorizer.json",
model_state_file="model.pth",
save_dir="../model_storage/ch7/model1_unconditioned_surname_generation",
# Model hyper parameters
char_embedding_size=32,
rnn_hidden_size=32,
# Training hyper parameters
seed=1337,
learning_rate=0.001,
batch_size=128,
num_epochs=100,
early_stopping_criteria=5,
# Runtime options
catch_keyboard_interrupt=True,
cuda=True,
expand_filepaths_to_save_dir=True,
reload_from_files=False,
)
if args.expand_filepaths_to_save_dir:
args.vectorizer_file = os.path.join(args.save_dir,
args.vectorizer_file)
args.model_state_file = os.path.join(args.save_dir,
args.model_state_file)
print("Expanded filepaths: ")
print("\t{}".format(args.vectorizer_file))
print("\t{}".format(args.model_state_file))
# Check CUDA
if not torch.cuda.is_available():
args.cuda = False
args.device = torch.device("cuda" if args.cuda else "cpu")
print("Using CUDA: {}".format(args.cuda))
# Set seed for reproducibility
set_seed_everywhere(args.seed, args.cuda)
# handle dirs
handle_dirs(args.save_dir)
###Output
Expanded filepaths:
../model_storage/ch7/model1_unconditioned_surname_generation\vectorizer.json
../model_storage/ch7/model1_unconditioned_surname_generation\model.pth
Using CUDA: False
###Markdown
Initializations
###Code
if args.reload_from_files:
# training from a checkpoint
dataset = SurnameDataset.load_dataset_and_load_vectorizer(args.surname_csv,
args.vectorizer_file)
else:
# create dataset and vectorizer
dataset = SurnameDataset.load_dataset_and_make_vectorizer(args.surname_csv)
dataset.save_vectorizer(args.vectorizer_file)
vectorizer = dataset.get_vectorizer()
model = SurnameGenerationModel(char_embedding_size=args.char_embedding_size,
char_vocab_size=len(vectorizer.char_vocab),
rnn_hidden_size=args.rnn_hidden_size,
padding_idx=vectorizer.char_vocab.mask_index)
###Output
_____no_output_____
###Markdown
Training loop
###Code
mask_index = vectorizer.char_vocab.mask_index
model = model.to(args.device)
optimizer = optim.Adam(model.parameters(), lr=args.learning_rate)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer=optimizer,
mode='min', factor=0.5,
patience=1)
train_state = make_train_state(args)
epoch_bar = notebook.tqdm(desc='training routine',
total=args.num_epochs,
position=0)
dataset.set_split('train')
train_bar = notebook.tqdm(desc='split=train',
total=dataset.get_num_batches(args.batch_size),
position=1,
leave=True)
dataset.set_split('val')
val_bar = notebook.tqdm(desc='split=val',
total=dataset.get_num_batches(args.batch_size),
position=1,
leave=True)
try:
for epoch_index in range(args.num_epochs):
train_state['epoch_index'] = epoch_index
# Iterate over training dataset
# setup: batch generator, set loss and acc to 0, set train mode on
dataset.set_split('train')
batch_generator = generate_batches(dataset,
batch_size=args.batch_size,
device=args.device)
running_loss = 0.0
running_acc = 0.0
model.train()
for batch_index, batch_dict in enumerate(batch_generator):
# the training routine is these 5 steps:
# --------------------------------------
# step 1. zero the gradients
optimizer.zero_grad()
# step 2. compute the output
y_pred = model(x_in=batch_dict['x_data'])
# step 3. compute the loss
loss = sequence_loss(y_pred, batch_dict['y_target'], mask_index)
# step 4. use loss to produce gradients
loss.backward()
# step 5. use optimizer to take gradient step
optimizer.step()
# -----------------------------------------
# compute the running loss and running accuracy
running_loss += (loss.item() - running_loss) / (batch_index + 1)
acc_t = compute_accuracy(y_pred, batch_dict['y_target'], mask_index)
running_acc += (acc_t - running_acc) / (batch_index + 1)
# update bar
train_bar.set_postfix(loss=running_loss,
acc=running_acc,
epoch=epoch_index)
train_bar.update()
train_state['train_loss'].append(running_loss)
train_state['train_acc'].append(running_acc)
# Iterate over val dataset
# setup: batch generator, set loss and acc to 0; set eval mode on
dataset.set_split('val')
batch_generator = generate_batches(dataset,
batch_size=args.batch_size,
device=args.device)
running_loss = 0.
running_acc = 0.
model.eval()
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
y_pred = model(x_in=batch_dict['x_data'])
# step 3. compute the loss
loss = sequence_loss(y_pred, batch_dict['y_target'], mask_index)
# compute the running loss and running accuracy
running_loss += (loss.item() - running_loss) / (batch_index + 1)
acc_t = compute_accuracy(y_pred, batch_dict['y_target'], mask_index)
running_acc += (acc_t - running_acc) / (batch_index + 1)
# Update bar
val_bar.set_postfix(loss=running_loss, acc=running_acc,
epoch=epoch_index)
val_bar.update()
train_state['val_loss'].append(running_loss)
train_state['val_acc'].append(running_acc)
train_state = update_train_state(args=args, model=model,
train_state=train_state)
scheduler.step(train_state['val_loss'][-1])
if train_state['stop_early']:
break
# move model to cpu for sampling
model = model.cpu()
sampled_surnames = decode_samples(
sample_from_model(model, vectorizer, num_samples=2),
vectorizer)
epoch_bar.set_postfix(sample1=sampled_surnames[0],
sample2=sampled_surnames[1])
# move model back to whichever device it should be on
model = model.to(args.device)
train_bar.n = 0
val_bar.n = 0
epoch_bar.update()
except KeyboardInterrupt:
print("Exiting loop")
np.random.choice(np.arange(len(vectorizer.nationality_vocab)), replace=True, size=2)
# compute the loss & accuracy on the test set using the best available model
model.load_state_dict(torch.load(train_state['model_filename']))
model = model.to(args.device)
dataset.set_split('test')
batch_generator = generate_batches(dataset,
batch_size=args.batch_size,
device=args.device)
running_loss = 0.
running_acc = 0.
model.eval()
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
y_pred = model(x_in=batch_dict['x_data'])
# compute the loss
loss = sequence_loss(y_pred, batch_dict['y_target'], mask_index)
# compute the accuracy
running_loss += (loss.item() - running_loss) / (batch_index + 1)
acc_t = compute_accuracy(y_pred, batch_dict['y_target'], mask_index)
running_acc += (acc_t - running_acc) / (batch_index + 1)
train_state['test_loss'] = running_loss
train_state['test_acc'] = running_acc
print("Test loss: {};".format(train_state['test_loss']))
print("Test Accuracy: {}".format(train_state['test_acc']))
###Output
Test loss: 2.5487931768099465;
Test Accuracy: 24.94713629429804
###Markdown
Inference
###Code
# number of names to generate
num_names = 10
model = model.cpu()
# Sample surnames from the trained (unconditioned) model
sampled_surnames = decode_samples(
sample_from_model(model, vectorizer, num_samples=num_names),
vectorizer)
# Show results
print ("-"*15)
for i in range(num_names):
print (sampled_surnames[i])
###Output
---------------
Sardido
Sowyti
êeñaveh
Patlajinia
Volisteh
Gevmensa
Lassat
Bubkov
Sna
Busn
|
scripts/extra/socialmedia_data_EDA_DAL.ipynb
|
###Markdown
Application in Biostatistics: Analysis of Depression on Social Media. AI Saturdays Euskadi Donostia 2020
###Code
import pandas as pd
from itertools import combinations
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('../raw-data/OSF_socialmedia_data.csv', index_col=0)
df.head()
print("Number of participants: {}".format(df.Participant.nunique()))
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 12245 entries, 1 to 12245
Data columns (total 25 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Participant 12245 non-null int64
1 Date 12245 non-null object
2 Day 12245 non-null object
3 Time 12245 non-null object
4 Session.Name 12245 non-null object
5 Notification.No 12245 non-null int64
6 LifePak.Download.No 12245 non-null int64
7 Responded 12245 non-null int64
8 Completed.Session 12245 non-null int64
9 Session.Instance 8695 non-null float64
10 Session.Instance.Response.Lapse 8695 non-null object
11 Reminders.Delivered 12245 non-null int64
12 Instr_DQs 0 non-null float64
13 Fatigue 8653 non-null float64
14 DeprMood 8648 non-null float64
15 Loneliness 8646 non-null float64
16 Concentrat 8645 non-null float64
17 LossOfInt 8646 non-null float64
18 Inferior 8646 non-null float64
19 Hopeless 8650 non-null float64
20 Stress 8649 non-null float64
21 PSMU 8646 non-null float64
22 AutoPSMU 8651 non-null object
23 News 8647 non-null float64
24 Active 8645 non-null float64
dtypes: float64(13), int64(6), object(6)
memory usage: 2.4+ MB
###Markdown
How are the response variables distributed?
###Code
#Select columns that belong to the depressive symptoms questionnaire
ESM_quest = df.iloc[:,13:-4]
ESM_quest.describe()
sns.set_style("white")
ESM_quest.plot.hist(subplots=True, layout=(2, 4), figsize=(15, 8), sharey=True,colormap='viridis')
sns.despine()
df['Date']= pd.to_datetime(df['Date'])
df.head()
###Output
_____no_output_____
###Markdown
When did each subject participate? As mentioned in Wednesday's meeting, one potentially interesting option is to check whether we can group the subjects by when they started and finished the experiment, so that for each group we would indeed have *proper* time series.
###Code
# Create a new array with start and end dates for each participant
dates = pd.DataFrame([])
for participant in df.Participant.unique():
dates = dates.append(pd.DataFrame({'start': df[df.Participant==participant].Date.min(), 'end': df[df.Participant==participant].Date.max()}, index=[0]), ignore_index=True)
dates.head()
my_range=range(1,len(dates.index)+1)
sns.set_style("whitegrid")
fig = plt.figure(figsize=(15, 10))
plt.hlines(y=my_range, xmin=dates.start, xmax=dates.end, color='grey', alpha=0.4)
plt.scatter(dates.start, my_range, color='skyblue', alpha=1, label='start')
plt.scatter(dates.end, my_range, color='green', alpha=0.4 , label='end')
plt.legend()
sns.despine(left=True, bottom=True, trim=True)
ordered_dates = dates.sort_values(by=['start'])
fig = plt.figure(figsize=(15, 10))
plt.hlines(y=my_range, xmin=ordered_dates.start, xmax=ordered_dates.end, color='grey', alpha=0.4)
plt.scatter(ordered_dates.start, my_range, color='skyblue', alpha=1, label='start')
plt.scatter(ordered_dates.end, my_range, color='green', alpha=0.4 , label='end')
plt.legend()
sns.despine(left=True, bottom=True, trim=True)
###Output
_____no_output_____
|
notebooks/hadisst_aa.ipynb
|
###Markdown
Archetypal analysis of HadISST SST anomaliesThis notebook contains results from an archetypal analysis of HadISST SST anomalies. Packages
###Code
%matplotlib inline
import itertools
from math import pi
import os
import time
import cartopy.crs as ccrs
import cmocean
import matplotlib.dates as mdates
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
import xarray as xr
from cartopy.util import add_cyclic_point
from mpl_toolkits.mplot3d import Axes3D
from sklearn.manifold import MDS, TSNE
###Output
_____no_output_____
###Markdown
Analysis parameters
###Code
TIME_NAME = 'time'
LAT_NAME = 'latitude'
LON_NAME = 'longitude'
ANOMALY_NAME = 'sst_anom'
STANDARDIZED_ANOMALY_NAME = 'sst_std_anom'
# First and last years to retain for analysis
START_YEAR = 1870
END_YEAR = 2018
# First and last years of climatology base period
BASE_PERIOD_START_YEAR = 1981
BASE_PERIOD_END_YEAR = 2010
# Order of trend removed from anomalies
ANOMALY_TREND_ORDER = 1
# Zonal extents of analysis region
MIN_LATITUDE = -45.5
MAX_LATITUDE = 45.5
# Weighting used for EOFs
LAT_WEIGHTS = 'scos'
RESTRICT_TO_CLIMATOLOGY_BASE_PERIOD = False
# Number of random restarts to use
N_INIT = 100
# If cross-validation is used, number of cross-validation folds
N_FOLDS = 10
# Relaxation parameter for dictionary
DELTA = 0
###Output
_____no_output_____
###Markdown
File paths
###Code
def get_aa_output_filename(input_file, lat_weights, n_components, delta, n_init, cross_validate=False, n_folds=N_FOLDS):
"""Get AA output file corresponding to a given input file."""
basename, ext = os.path.splitext(input_file)
suffix = 'aa.{}.k{:d}.delta{:5.3e}.n_init{:d}'.format(lat_weights, n_components, delta, n_init)
if cross_validate:
suffix = '.'.join([suffix, 'n_folds{:d}'.format(n_folds)])
return '.'.join([basename, suffix]) + ext
PROJECT_DIR = os.path.join(os.getenv('HOME'), 'projects', 'convex-dim-red-expts')
BIN_DIR = os.path.join(PROJECT_DIR, 'bin')
BASE_RESULTS_DIR = os.path.join(PROJECT_DIR, 'results')
RESULTS_DIR = os.path.join(BASE_RESULTS_DIR, 'hadisst', 'nc')
CSV_DIR = os.path.join(BASE_RESULTS_DIR, 'hadisst', 'csv')
PLOTS_DIR = os.path.join(BASE_RESULTS_DIR, 'hadisst', 'plt')
if not os.path.exists(RESULTS_DIR):
os.makedirs(RESULTS_DIR)
if not os.path.exists(CSV_DIR):
os.makedirs(CSV_DIR)
if not os.path.exists(PLOTS_DIR):
os.makedirs(PLOTS_DIR)
SST_ANOM_INPUT_FILE = os.path.join(RESULTS_DIR, 'HadISST_sst.anom.{:d}_{:d}.trend_order{:d}.nc'.format(
BASE_PERIOD_START_YEAR, BASE_PERIOD_END_YEAR, ANOMALY_TREND_ORDER))
SST_STD_ANOM_INPUT_FILE = os.path.join(RESULTS_DIR, 'HadISST_sst.std_anom.{:d}_{:d}.trend_order{:d}.nc'.format(
BASE_PERIOD_START_YEAR, BASE_PERIOD_END_YEAR, ANOMALY_TREND_ORDER))
if not os.path.exists(SST_ANOM_INPUT_FILE):
raise RuntimeError("Input data file '%s' does not exist" % SST_ANOM_INPUT_FILE)
if not os.path.exists(SST_STD_ANOM_INPUT_FILE):
raise RuntimeError("Input data file '%s' does not exist" % SST_STD_ANOM_INPUT_FILE)
###Output
_____no_output_____
###Markdown
AA of SST anomalies As for $k$-means, the archetypes are fitted using the first 90% of the unstandardized SST anomalies, and the remaining 10% of the data is used to get a rough estimate of the out-of-sample RMSE. The optimization is performed for k = 2, ..., 20 archetypes, with random initial guesses for the archetypes and weights. The fits are performed using the following script (see bin/run_hadisst_aa.py):
###Code
with open(os.path.join(BIN_DIR, 'run_hadisst_aa.py')) as ifs:
for line in ifs:
print(line.strip())
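# Illustrative sketch only (not part of run_hadisst_aa.py): the evaluation
# protocol described above, i.e. fit on the first 90% of samples and report
# RMSE on the held-out 10% for each k. `fit_archetypes` is a hypothetical
# stand-in for the actual AA solver, and the unconstrained least-squares
# projection below is a simplification of the simplex-constrained fit.
def evaluate_aa_protocol(X, fit_archetypes, ks=range(2, 21), train_fraction=0.9, n_init=N_INIT):
    n_train = int(train_fraction * X.shape[0])
    X_train, X_test = X[:n_train], X[n_train:]
    rmse = {}
    for k in ks:
        # keep the best of n_init random restarts (lowest training cost)
        archetypes, weights_train = fit_archetypes(X_train, n_components=k, n_init=n_init)
        train_rmse = np.sqrt(np.mean((X_train - weights_train @ archetypes) ** 2))
        # project the held-out samples onto the fitted archetypes
        weights_test, _, _, _ = np.linalg.lstsq(archetypes.T, X_test.T, rcond=None)
        test_rmse = np.sqrt(np.mean((X_test - weights_test.T @ archetypes) ** 2))
        rmse[k] = (train_rmse, test_rmse)
    return rmse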
max_n_components = 20
sst_k = []
sst_train_cost = []
sst_train_rmse = []
sst_test_cost = []
sst_test_rmse = []
for i in range(1, max_n_components + 1):
output_file = get_aa_output_filename(SST_ANOM_INPUT_FILE, LAT_WEIGHTS, i, DELTA, N_INIT)
with xr.open_dataset(output_file) as ds:
sst_k.append(ds.sizes['component'])
sst_train_cost.append(float(ds.attrs['training_set_cost']))
sst_train_rmse.append(float(ds.attrs['training_set_rmse']))
sst_test_cost.append(float(ds.attrs['test_set_cost']))
sst_test_rmse.append(float(ds.attrs['test_set_rmse']))
sst_k = np.array(sst_k)
sst_train_cost = np.array(sst_train_cost)
sst_train_rmse = np.array(sst_train_rmse)
sst_test_cost = np.array(sst_test_cost)
sst_test_rmse = np.array(sst_test_rmse)
cost_output_file = 'HadISST_sst.anom.{:d}_{:d}.trend_order{:d}.aa.{}.delta{:5.3e}.n_init{:d}.cost.csv'.format(
BASE_PERIOD_START_YEAR, BASE_PERIOD_END_YEAR, ANOMALY_TREND_ORDER, LAT_WEIGHTS, DELTA, N_INIT)
cost_output_file = os.path.join(CSV_DIR, cost_output_file)
cost_data = np.zeros((sst_k.shape[0], 5))
cost_data[:, 0] = sst_k
cost_data[:, 1] = sst_train_cost
cost_data[:, 2] = sst_train_rmse
cost_data[:, 3] = sst_test_cost
cost_data[:, 4] = sst_test_rmse
header = 'n_components,training_set_cost,training_set_rmse,test_set_cost,test_set_rmse'
fmt = '%d,%16.8e,%16.8e,%16.8e,%16.8e'
np.savetxt(cost_output_file, cost_data, header=header, fmt=fmt)
fig = plt.figure(figsize=(7, 5))
ax = plt.gca()
ax.plot(sst_k, sst_train_rmse, 'b-', label='Training set RMSE')
ax.plot(sst_k, sst_test_rmse, 'b:', label='Test set RMSE')
ax.grid(ls='--', color='gray', alpha=0.5)
ax.legend(fontsize=14)
ax.set_xlabel('Number of clusters', fontsize=14)
ax.set_ylabel('RMSE', fontsize=14)
ax.xaxis.set_major_locator(ticker.MultipleLocator(2))
ax.xaxis.set_minor_locator(ticker.MultipleLocator(1))
ax.xaxis.set_major_formatter(ticker.FormatStrFormatter('%d'))
ax.tick_params(labelsize=14)
plt.show()
plt.close()
n_components = 3
output_file = get_aa_output_filename(SST_ANOM_INPUT_FILE, LAT_WEIGHTS, n_components, DELTA, N_INIT)
aa_ds = xr.open_dataset(output_file)
components = aa_ds['component'].values
n_samples = aa_ds.sizes[TIME_NAME]
projection = ccrs.PlateCarree(central_longitude=180)
wrap_lon = True
cmap = cmocean.cm.thermal
component_vmins = np.empty(n_components)
vmin = None
for i, component in enumerate(components):
component_vmin = aa_ds['dictionary'].sel(component=component).min().item()
if vmin is None or component_vmin < vmin:
vmin = component_vmin
component_vmins[:] = vmin
component_vmaxs = np.empty(n_components)
vmax = None
for i, component in enumerate(components):
component_vmax = aa_ds['dictionary'].sel(component=component).max().item()
if vmax is None or component_vmax > vmax:
vmax = component_vmax
component_vmaxs[:] = vmax
ncols = 2 if n_components % 2 == 0 else 3
nrows = int(np.ceil(n_components / ncols))
height_ratios = np.ones((nrows + 1))
height_ratios[-1] = 0.1
fig = plt.figure(constrained_layout=False, figsize=(6 * ncols, 3 * nrows))
gs = gridspec.GridSpec(ncols=ncols, nrows=nrows + 1, figure=fig,
wspace=0.09, hspace=0.12,
height_ratios=height_ratios)
lat = aa_ds[LAT_NAME]
lon = aa_ds[LON_NAME]
row_index = 0
col_index = 0
for i, component in enumerate(components):
archetype_data = aa_ds['archetypes'].sel(component=component).values
if wrap_lon:
archetype_data, archetype_lon = add_cyclic_point(archetype_data, coord=lon)
else:
archetype_lon = lon
lon_grid, lat_grid = np.meshgrid(archetype_lon, lat)
ax = fig.add_subplot(gs[row_index, col_index], projection=projection)
ax.coastlines()
ax.set_global()
ax_vmin = component_vmins[i]
ax_vmax = component_vmaxs[i]
cs = ax.contourf(lon_grid, lat_grid, archetype_data, # vmin=ax_vmin, vmax=ax_vmax,
cmap=cmap, transform=ccrs.PlateCarree())
cb = fig.colorbar(cs, pad=0.03, orientation='horizontal')
cb.set_label(r'Weighted SSTA/${}^\circ$C', fontsize=13)
ax.set_ylim([MIN_LATITUDE, MAX_LATITUDE])
ax.set_title('Archetype {}'.format(component + 1), fontsize=14)
ax.set_aspect('equal')
fig.canvas.draw()
col_index += 1
if col_index == ncols:
col_index = 0
row_index += 1
output_file = os.path.join(PLOTS_DIR, 'HadISST_sst.anom.{:d}_{:d}.trend_order{:d}.aa.{}.k{:d}.delta{:5.3e}.n_init{:d}.archetypes.unsorted.pdf'.format(
BASE_PERIOD_START_YEAR, BASE_PERIOD_END_YEAR, ANOMALY_TREND_ORDER, LAT_WEIGHTS,
n_components, DELTA, N_INIT))
plt.savefig(output_file, bbox_inches='tight')
plt.show()
plt.close()
aa_ds.close()
def to_1d_array(da):
"""Convert DataArray to flat array."""
flat_data = np.ravel(da.values)
missing_features = np.isnan(flat_data)
return flat_data[np.logical_not(missing_features)]
def pattern_correlation(state, eof):
"""Calculate pattern correlation between state and EOF."""
flat_state = to_1d_array(state)
flat_eof = to_1d_array(eof)
data = np.vstack([flat_state, flat_eof])
r = np.corrcoef(data, rowvar=True)
return r[0, 1]
def sort_states(ds, eofs_reference_file):
"""Sort states according to pattern correlation with EOFs."""
n_components = ds.sizes['component']
sort_order = []
with xr.open_dataset(eofs_reference_file) as eofs_ds:
n_eofs = eofs_ds.sizes['component']
for i in range(n_eofs):
correlations = np.empty((n_components,))
for k in range(n_components):
correlations[k] = pattern_correlation(
ds['archetypes'].sel(component=k),
eofs_ds['EOFs'].sel(component=i))
ordering = np.argsort(-np.abs(correlations))
for k in range(n_components):
if ordering[k] not in sort_order:
sort_order.append(ordering[k])
break
if np.size(sort_order) == n_components:
break
assert len(sort_order) <= n_components
assert np.size(np.unique(sort_order)) == np.size(sort_order)
if len(sort_order) < n_components:
unassigned = [i for i in range(n_components) if i not in sort_order]
sort_order += unassigned
assert len(sort_order) == n_components
assert np.size(np.unique(sort_order)) == np.size(sort_order)
sorted_ds = xr.zeros_like(ds)
for i in range(n_components):
sorted_ds = xr.where(sorted_ds['component'] == i,
ds.sel(component=sort_order[i]), sorted_ds)
for a in ds.attrs:
sorted_ds.attrs[a] = ds.attrs[a]
return sorted_ds
n_components = 3
output_file = get_aa_output_filename(SST_ANOM_INPUT_FILE, LAT_WEIGHTS, n_components, DELTA, N_INIT)
aa_ds = xr.open_dataset(output_file)
eofs_reference_file = os.path.join(RESULTS_DIR, 'HadISST_sst.anom.{:d}_{:d}.trend_order{:d}.pca.{}.k{:d}.nc'.format(
BASE_PERIOD_START_YEAR, BASE_PERIOD_END_YEAR, ANOMALY_TREND_ORDER, LAT_WEIGHTS, n_components))
aa_ds = sort_states(aa_ds, eofs_reference_file)
# Calculate angle between leading principal axis and vector between first and second archetypes
if n_components > 1:
with xr.open_dataset(eofs_reference_file) as eofs_ds:
first_eof = eofs_ds['EOFs'].sel(component=0).squeeze().fillna(0)
archetypes_difference = (aa_ds['archetypes'].sel(component=0) - aa_ds['archetypes'].sel(component=1)).squeeze().fillna(0)
overlap = first_eof.dot(archetypes_difference) / np.sqrt(first_eof.dot(first_eof) * archetypes_difference.dot(archetypes_difference))
print('cos(theta) = ', overlap)
components = aa_ds['component'].values
n_samples = aa_ds.sizes[TIME_NAME]
projection = ccrs.PlateCarree(central_longitude=180)
wrap_lon = True
cmap = cmocean.cm.thermal
component_vmins = np.empty(n_components)
vmin = None
for i, component in enumerate(components):
component_vmin = aa_ds['archetypes'].sel(component=component).min().item()
if vmin is None or component_vmin < vmin:
vmin = component_vmin
component_vmins[:] = vmin
component_vmaxs = np.empty(n_components)
vmax = None
for i, component in enumerate(components):
component_vmax = aa_ds['archetypes'].sel(component=component).max().item()
if vmax is None or component_vmax > vmax:
vmax = component_vmax
component_vmaxs[:] = vmax
ncols = 2 if n_components % 2 == 0 else 3
nrows = int(np.ceil(n_components / ncols))
height_ratios = np.ones((nrows + 1))
height_ratios[-1] = 0.1
fig = plt.figure(constrained_layout=False, figsize=(6 * ncols, 3 * nrows))
gs = gridspec.GridSpec(ncols=ncols, nrows=nrows + 1, figure=fig,
wspace=0.09, hspace=0.12,
height_ratios=height_ratios)
lat = aa_ds[LAT_NAME]
lon = aa_ds[LON_NAME]
row_index = 0
col_index = 0
for i, component in enumerate(components):
archetype_data = aa_ds['archetypes'].sel(component=component).values
if wrap_lon:
archetype_data, archetype_lon = add_cyclic_point(archetype_data, coord=lon)
else:
archetype_lon = lon
lon_grid, lat_grid = np.meshgrid(archetype_lon, lat)
ax = fig.add_subplot(gs[row_index, col_index], projection=projection)
ax.coastlines()
ax.set_global()
ax_vmin = component_vmins[i]
ax_vmax = component_vmaxs[i]
cs = ax.contourf(lon_grid, lat_grid, archetype_data, # vmin=ax_vmin, vmax=ax_vmax,
cmap=cmap, transform=ccrs.PlateCarree())
cb = fig.colorbar(cs, pad=0.03, orientation='horizontal')
cb.set_label(r'Weighted SSTA/${}^\circ$C', fontsize=13)
ax.set_ylim([MIN_LATITUDE, MAX_LATITUDE])
ax.set_title('Archetype {}'.format(component + 1), fontsize=14)
ax.set_aspect('equal')
fig.canvas.draw()
col_index += 1
if col_index == ncols:
col_index = 0
row_index += 1
output_file = os.path.join(PLOTS_DIR, 'HadISST_sst.anom.{:d}_{:d}.trend_order{:d}.aa.{}.k{:d}.delta{:5.3e}.n_init{:d}.archetypes.sorted.pdf'.format(
BASE_PERIOD_START_YEAR, BASE_PERIOD_END_YEAR, ANOMALY_TREND_ORDER, LAT_WEIGHTS,
n_components, DELTA, N_INIT))
plt.savefig(output_file, bbox_inches='tight')
plt.show()
plt.close()
aa_ds.close()
n_components = 3
output_file = get_aa_output_filename(SST_ANOM_INPUT_FILE, LAT_WEIGHTS, n_components, DELTA, N_INIT)
aa_ds = xr.open_dataset(output_file)
eofs_reference_file = os.path.join(RESULTS_DIR, 'HadISST_sst.anom.{:d}_{:d}.trend_order{:d}.pca.{}.k{:d}.nc'.format(
BASE_PERIOD_START_YEAR, BASE_PERIOD_END_YEAR, ANOMALY_TREND_ORDER, LAT_WEIGHTS, n_components))
aa_ds = sort_states(aa_ds, eofs_reference_file)
ncols = 2 if n_components % 2 == 0 else 3
nrows = int(np.ceil(n_components / ncols))
fig = plt.figure(constrained_layout=False, figsize=(6 * ncols, 3 * nrows))
gs = gridspec.GridSpec(ncols=ncols, nrows=nrows + 1, figure=fig,
wspace=0.2, hspace=0.12,
height_ratios=height_ratios)
row_index = 0
col_index = 0
for i, component in enumerate(components):
dictionary_data = aa_ds['dictionary'].sel(component=component).values
ax = fig.add_subplot(gs[row_index, col_index])
ax.plot(aa_ds[TIME_NAME], dictionary_data)
ax.grid(ls='--', color='gray', alpha=0.5)
ax.set_xlabel('Date')
ax.set_ylabel('Dictionary weight')
ax.set_title('Archetype {}'.format(component + 1), fontsize=14)
col_index += 1
if col_index == ncols:
col_index = 0
row_index += 1
plt.show()
plt.close()
aa_ds.close()
n_components = 3
output_file = get_aa_output_filename(SST_ANOM_INPUT_FILE, LAT_WEIGHTS, n_components, DELTA, N_INIT)
aa_ds = xr.open_dataset(output_file)
eofs_reference_file = os.path.join(RESULTS_DIR, 'HadISST_sst.anom.{:d}_{:d}.trend_order{:d}.pca.{}.k{:d}.nc'.format(
BASE_PERIOD_START_YEAR, BASE_PERIOD_END_YEAR, ANOMALY_TREND_ORDER, LAT_WEIGHTS, n_components))
aa_ds = sort_states(aa_ds, eofs_reference_file)
ncols = 2 if n_components % 2 == 0 else 3
nrows = int(np.ceil(n_components / ncols))
fig = plt.figure(constrained_layout=False, figsize=(6 * ncols, 3 * nrows))
gs = gridspec.GridSpec(ncols=ncols, nrows=nrows + 1, figure=fig,
wspace=0.2, hspace=0.12,
height_ratios=height_ratios)
row_index = 0
col_index = 0
for i, component in enumerate(components):
dictionary_data = aa_ds['weights'].sel(component=component).values
ax = fig.add_subplot(gs[row_index, col_index])
ax.plot(aa_ds[TIME_NAME], dictionary_data)
ax.grid(ls='--', color='gray', alpha=0.5)
ax.set_xlabel('Date')
ax.set_ylabel('Weight')
ax.set_title('Archetype {}'.format(component + 1), fontsize=14)
col_index += 1
if col_index == ncols:
col_index = 0
row_index += 1
plt.show()
plt.close()
aa_ds.close()
def get_latitude_weights(da, lat_weights='scos', lat_name=LAT_NAME):
"""Get latitude weights."""
if lat_weights == 'cos':
return np.cos(np.deg2rad(da[lat_name])).clip(0., 1.)
if lat_weights == 'scos':
return np.cos(np.deg2rad(da[lat_name])).clip(0., 1.) ** 0.5
if lat_weights == 'none':
return xr.ones_like(da[lat_name])
raise ValueError("Invalid weights descriptor '%r'" % lat_weights)
def weight_and_flatten_data(da, weights=None, sample_dim=TIME_NAME):
"""Apply weighting to data and convert to 2D array."""
feature_dims = [d for d in da.dims if d != sample_dim]
original_shape = [da.sizes[d] for d in da.dims if d != sample_dim]
if weights is not None:
weighted_da = (weights * da).transpose(*da.dims)
else:
weighted_da = da
if weighted_da.get_axis_num(sample_dim) != 0:
weighted_da = weighted_da.transpose(*([sample_dim] + feature_dims))
n_samples = weighted_da.sizes[sample_dim]
n_features = np.product(original_shape)
flat_data = weighted_da.data.reshape(n_samples, n_features)
return flat_data
def run_mds(da, archetypes_da, n_components=2, lat_weights=LAT_WEIGHTS, metric=True,
n_init=4, max_iter=300, verbose=0, eps=0.001, n_jobs=None,
random_state=None, lat_name=LAT_NAME, sample_dim=TIME_NAME):
"""Run MDS on given data."""
feature_dims = [d for d in da.dims if d != sample_dim]
original_shape = [da.sizes[d] for d in da.dims if d != sample_dim]
# Get requested latitude weights
weights = get_latitude_weights(da, lat_weights=lat_weights,
lat_name=lat_name)
# Convert input data array to plain 2D array
flat_data = weight_and_flatten_data(da, weights=weights, sample_dim=sample_dim)
n_samples, n_features = flat_data.shape
# Remove any features/columns with missing data
missing_features = np.any(np.isnan(flat_data), axis=0)
valid_data = flat_data[:, np.logical_not(missing_features)]
# Add the climatological point for reference
valid_data = np.vstack([valid_data, np.zeros(valid_data.shape[1])])
# Append archetypes to data to be projected
n_archetypes = archetypes_da.sizes['component']
flat_archetypes = np.reshape(archetypes_da.values, (n_archetypes, n_features))
valid_archetypes = flat_archetypes[:, np.logical_not(missing_features)]
valid_data = np.vstack([valid_data, valid_archetypes])
mds = MDS(n_components=n_components, metric=metric, n_init=n_init,
max_iter=max_iter, verbose=verbose, eps=eps, n_jobs=n_jobs,
random_state=random_state, dissimilarity='euclidean').fit(valid_data)
embedding_da = xr.DataArray(
mds.embedding_[:n_samples],
coords={sample_dim: da[sample_dim], 'component': np.arange(n_components)},
dims=[sample_dim, 'component'])
origin_da = xr.DataArray(
mds.embedding_[n_samples],
coords={'component': np.arange(n_components)},
dims=['component'])
archetypes_embed_da = xr.DataArray(
mds.embedding_[n_samples + 1:],
coords={'archetype': np.arange(archetypes_da.sizes['component']), 'component': np.arange(n_components)},
dims=['archetype', 'component'])
mds_ds = xr.Dataset(data_vars={'embedding': embedding_da, 'origin': origin_da, 'archetypes': archetypes_embed_da})
mds_ds.attrs['stress'] = '{:16.8e}'.format(mds.stress_)
return mds_ds
sst_anom_ds = xr.open_dataset(SST_ANOM_INPUT_FILE)
sst_anom_ds = sst_anom_ds.where(
(sst_anom_ds[TIME_NAME].dt.year >= START_YEAR) &
(sst_anom_ds[TIME_NAME].dt.year <= END_YEAR), drop=True)
sst_anom_ds = sst_anom_ds.where(
(sst_anom_ds[LAT_NAME] >= MIN_LATITUDE) &
(sst_anom_ds[LAT_NAME] <= MAX_LATITUDE), drop=True)
sst_anom_da = sst_anom_ds[ANOMALY_NAME]
if RESTRICT_TO_CLIMATOLOGY_BASE_PERIOD:
clim_base_period = [int(sst_anom_ds.attrs['base_period_start_year']),
int(sst_anom_ds.attrs['base_period_end_year'])]
sst_anom_da = sst_anom_da.where(
(sst_anom_da[TIME_NAME].dt.year >= clim_base_period[0]) &
(sst_anom_da[TIME_NAME].dt.year <= clim_base_period[1]), drop=True)
n_components = 3
output_file = get_aa_output_filename(SST_ANOM_INPUT_FILE, LAT_WEIGHTS, n_components, DELTA, N_INIT)
aa_ds = xr.open_dataset(output_file)
eofs_reference_file = os.path.join(RESULTS_DIR, 'HadISST_sst.anom.{:d}_{:d}.trend_order{:d}.pca.{}.k{:d}.nc'.format(
BASE_PERIOD_START_YEAR, BASE_PERIOD_END_YEAR, ANOMALY_TREND_ORDER, LAT_WEIGHTS, n_components))
aa_ds = sort_states(aa_ds, eofs_reference_file)
mds_2d_scos = run_mds(sst_anom_da, aa_ds['archetypes'], n_components=2, lat_weights='scos', random_state=0)
n_samples = aa_ds.sizes[TIME_NAME]
fig = plt.figure(figsize=(7, 5))
ax = plt.gca()
ax.plot(mds_2d_scos['embedding'].sel(component=0), mds_2d_scos['embedding'].sel(component=1), '.')
markers = itertools.cycle(('.', 's', 'x', 'o', '+'))
for i in range(n_components):
ax.plot(mds_2d_scos['archetypes'].sel(archetype=i, component=0),
mds_2d_scos['archetypes'].sel(archetype=i, component=1),
marker=next(markers), ls='none',
label='Archetype {:d}'.format(i + 1))
ax.plot(mds_2d_scos['origin'].sel(component=0), mds_2d_scos['origin'].sel(component=1), 'ko', markersize=8,
label='Mean state')
ax.grid(ls='--', color='gray', alpha=0.5)
ax.legend(fontsize=13)
ax.set_xlabel('Principal coordinate 1', fontsize=14)
ax.set_ylabel('Principal coordinate 2', fontsize=14)
ax.axes.tick_params(labelsize=13)
output_file = os.path.join(PLOTS_DIR, 'HadISST_sst.anom.{:d}_{:d}.trend_order{:d}.aa.{}.k{:d}.delta{:5.3e}.n_init{:d}.mds.pdf'.format(
BASE_PERIOD_START_YEAR, BASE_PERIOD_END_YEAR, ANOMALY_TREND_ORDER, LAT_WEIGHTS,
n_components, DELTA, N_INIT))
plt.savefig(output_file, bbox_inches='tight')
plt.show()
plt.close()
aa_ds.close()
n_components = 2
output_file = get_aa_output_filename(SST_ANOM_INPUT_FILE, LAT_WEIGHTS, n_components, DELTA, N_INIT)
aa_ds = xr.open_dataset(output_file)
eofs_reference_file = os.path.join(RESULTS_DIR, 'HadISST_sst.anom.{:d}_{:d}.trend_order{:d}.pca.{}.k{:d}.nc'.format(
BASE_PERIOD_START_YEAR, BASE_PERIOD_END_YEAR, ANOMALY_TREND_ORDER, LAT_WEIGHTS, n_components))
aa_ds = sort_states(aa_ds, eofs_reference_file)
mds_3d_scos = run_mds(sst_anom_da, aa_ds['archetypes'], n_components=3, lat_weights='scos', random_state=0)
n_samples = aa_ds.sizes[TIME_NAME]
fig = plt.figure(figsize=(7, 5))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(mds_3d_scos['embedding'].sel(component=0), mds_3d_scos['embedding'].sel(component=1),
mds_3d_scos['embedding'].sel(component=2), '.')
markers = itertools.cycle(('.', 's', 'x', 'o', '+'))
for i in range(n_components):
ax.scatter(mds_3d_scos['archetypes'].sel(archetype=i, component=0),
mds_3d_scos['archetypes'].sel(archetype=i, component=1),
mds_3d_scos['archetypes'].sel(archetype=i, component=2),
marker=next(markers),
label='Archetype {:d}'.format(i + 1))
#ax.plot(mds_2d_scos['origin'].sel(component=0), mds_2d_scos['origin'].sel(component=1), 'ko', markersize=8,
# label='Mean state')
ax.grid(ls='--', color='gray', alpha=0.5)
ax.legend(fontsize=13)
ax.set_xlabel('Principal coordinate 1', fontsize=14)
ax.set_ylabel('Principal coordinate 2', fontsize=14)
ax.set_zlabel('Principal coordinate 3', fontsize=14)
ax.axes.tick_params(labelsize=13)
#output_file = os.path.join(PLOTS_DIR, 'HadISST_sst.anom.{:d}_{:d}.trend_order{:d}.aa.{}.k{:d}.delta{:5.3e}.n_init{:d}.mds.pdf'.format(
# BASE_PERIOD_START_YEAR, BASE_PERIOD_END_YEAR, ANOMALY_TREND_ORDER, LAT_WEIGHTS,
# n_components, DELTA, N_INIT))
#plt.savefig(output_file, bbox_inches='tight')
plt.show()
plt.close()
aa_ds.close()
scos_weights = get_latitude_weights(sst_anom_da, lat_weights='scos')
flat_sst_anom_scos = weight_and_flatten_data(sst_anom_da, weights=scos_weights)
missing_features = np.any(np.isnan(flat_sst_anom_scos), axis=0)
valid_sst_anom_scos = flat_sst_anom_scos[:, np.logical_not(missing_features)]
magnitudes = np.linalg.norm(valid_sst_anom_scos, axis=1)
n_components = 3
output_file = get_aa_output_filename(SST_ANOM_INPUT_FILE, LAT_WEIGHTS, n_components, DELTA, N_INIT)
aa_ds = xr.open_dataset(output_file)
eofs_reference_file = os.path.join(RESULTS_DIR, 'HadISST_sst.anom.{:d}_{:d}.trend_order{:d}.pca.{}.k{:d}.nc'.format(
BASE_PERIOD_START_YEAR, BASE_PERIOD_END_YEAR, ANOMALY_TREND_ORDER, LAT_WEIGHTS, n_components))
aa_ds = sort_states(aa_ds, eofs_reference_file)
archetype_magnitudes = np.zeros(n_components)
for i in range(n_components):
archetype = aa_ds['archetypes'].sel(component=i).fillna(0)
archetype_magnitudes[i] = archetype.dot(archetype) ** 0.5
fig = plt.figure(figsize=(7, 5))
ax = plt.gca()
ax.hist(magnitudes, bins=20)
linestyles = itertools.cycle(('-', '--', ':', '-.'))
colors = itertools.cycle(('red', 'green', 'blue'))
for i in range(n_components):
ax.axvline(archetype_magnitudes[i], color=next(colors), ls=next(linestyles),
label='Archetype {:d}'.format(i + 1))
ax.legend()
ax.set_xlabel('Magnitude')
ax.set_ylabel('Frequency')
plt.show()
plt.close()
def simplex_plot(da, weights_da, archetypes_da, sample_dim=TIME_NAME):
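    """Plot samples in barycentric (simplex) coordinates defined by the archetypes.

    Each archetype is placed at a vertex of a regular polygon on the unit circle,
    each sample is mapped to the convex combination of those vertices given by its
    mixture weights, and points are coloured by the norm of the residual between
    the weighted data and its reconstruction weights.dot(archetypes).
    """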
feature_dims = [d for d in da.dims if d != sample_dim]
original_shape = [da.sizes[d] for d in da.dims if d != sample_dim]
reconstruction = weights_da.dot(archetypes_da)
residuals = weight_and_flatten_data((da - reconstruction).fillna(0), sample_dim=sample_dim)
magnitudes = np.linalg.norm(residuals, axis=1)
n_components = archetypes_da.sizes['component']
archetype_coords = np.zeros((n_components, 2))
for i in range(n_components):
        archetype_coords[i, 0] = np.cos(2 * np.pi * (i + 1) / n_components)
        archetype_coords[i, 1] = np.sin(2 * np.pi * (i + 1) / n_components)
sample_weights = weight_and_flatten_data(weights_da, sample_dim=sample_dim)
sample_coords = sample_weights.dot(archetype_coords)
fig = plt.figure(figsize=(7, 5))
ax = plt.gca()
ax.plot(archetype_coords[:, 0], archetype_coords[:, 1], 'ko', ls='-')
cs = ax.scatter(sample_coords[:, 0], sample_coords[:, 1], c=magnitudes, alpha=0.75)
cb = plt.colorbar(cs)
ax.axes.tick_params(tick1On=False, tick2On=False)
plt.show()
plt.close()
n_components = 3
output_file = get_aa_output_filename(SST_ANOM_INPUT_FILE, LAT_WEIGHTS, n_components, DELTA, N_INIT)
aa_ds = xr.open_dataset(output_file)
eofs_reference_file = os.path.join(RESULTS_DIR, 'HadISST_sst.anom.{:d}_{:d}.trend_order{:d}.pca.{}.k{:d}.nc'.format(
BASE_PERIOD_START_YEAR, BASE_PERIOD_END_YEAR, ANOMALY_TREND_ORDER, LAT_WEIGHTS, n_components))
aa_ds = sort_states(aa_ds, eofs_reference_file)
sst_anom_scos_da = (scos_weights * sst_anom_da).transpose(*sst_anom_da.dims)
simplex_plot(sst_anom_scos_da, aa_ds['weights'], aa_ds['archetypes'])
###Output
_____no_output_____
|
DigitalBiomarkers-HumanActivityRecognition/10_code/50_deep_learning/53_tensorflow_models/53_tensorflow_Duke_Data/.ipynb_checkpoints/50_TF_Dense_no_windows-checkpoint.ipynb
|
###Markdown
HAR Densenet notes: Still need to normalize features
###Code
import pandas as pd
import numpy as np
import tensorflow as tf
import random
from numpy import mean
from numpy import std
from numpy import dstack
from pandas import read_csv
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Dropout
from keras.layers import LSTM
from keras.utils import to_categorical
from keras.utils import np_utils
from sklearn.model_selection import train_test_split
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
random.seed(321)
tf.__version__
###Output
_____no_output_____
###Markdown
Prepare data
###Code
df = pd.read_csv('../../../../10_code/40_usable_data_for_models/Duke_Data/plain_data.csv')
df
le2 = LabelEncoder()
df['Subject_ID'] = le2.fit_transform(df['Subject_ID'])
df.tail(3)
df = pd.get_dummies(df, prefix='SID', columns = ['Subject_ID'], drop_first = True)
df
y = df['Activity']
X = df.drop(['Activity', 'Round'], axis =1)
print(y.shape, X.shape)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 42)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
le = LabelEncoder()
y_train = le.fit_transform(y_train)
y_test = le.transform(y_test)
#y_val = le.transform(y_test)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
X_train = X_train.values
X_test = X_test.values
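# Hedged sketch (not part of the original notebook): the markdown note above says the
# features still need to be normalized. One option is to standardize them with sklearn's
# StandardScaler; the scaled copies below are illustrative only and are not swapped into
# the model fit further down. Scaling the one-hot SID dummies together with the sensor
# features is a simplification made here for brevity.
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit the scaling statistics on the training split only
X_test_scaled = scaler.transform(X_test)        # apply the same statistics to the test split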
y_train_dummy = np_utils.to_categorical(y_train)
y_test_dummy = np_utils.to_categorical(y_test)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
X_train.shape
model = Sequential()
model.add(Dense(32, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(128, activation='relu'))
model.add(Dense(4, activation='softmax')) #4 outputs are possible
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
from keras.callbacks import ModelCheckpoint
filepath="models/weights-improvement-{epoch:02d}-{val_accuracy:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
#Using test data as val data for now, we will be using LOOCV so don't need to worry about creating a validation set
model.fit(X_train, y_train_dummy, epochs = 50, validation_data = (X_test, y_test_dummy), batch_size = 32, verbose = 1, callbacks=callbacks_list)
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) multiple 2048
_________________________________________________________________
dense_1 (Dense) multiple 2112
_________________________________________________________________
dropout (Dropout) multiple 0
_________________________________________________________________
dense_2 (Dense) multiple 8320
_________________________________________________________________
dense_3 (Dense) multiple 516
=================================================================
Total params: 12,996
Trainable params: 12,996
Non-trainable params: 0
_________________________________________________________________
###Markdown
Ignore everything below here since the test set was used for validation
###Code
accuracy = model.evaluate(X_test, y_test_dummy, batch_size = 32, verbose = 1)
print(accuracy)
y_pred = np.argmax(model.predict(X_test), axis=-1)
pd.unique(y_pred)
# col 1 = y_pred
# col 2 = y_test ground truth labels
print(np.concatenate((y_pred.reshape(-1, 1), y_test.reshape(-1,1)),1))
from sklearn.metrics import confusion_matrix, accuracy_score, f1_score
confusion_matrix(y_test, y_pred)
accuracy_score(y_test, y_pred)
f1_score(y_test, y_pred, average = 'weighted')
###Output
_____no_output_____
|
docs/sources/notebooks/SwissHouseRSlaTankDHW.ipynb
|
###Markdown
SwissHouseRSlaTankDHW notebook example In this example, the usage of the model "SwissHouseRSlaTankDhw-v0" is demonstrated. First, we import Energym and create the simulation environment by specifying the model, a weather file and the number of simulation days.
###Code
import energym
weather = "CH_BS_Basel"
env = energym.make("SwissHouseRSlaTankDhw-v0", weather=weather, simulation_days=20)
###Output
the initial variables are {'uFlowDHW': 0, 'uHP': 0, 'uRSla': 0, 'uValveDHW': 0}
###Markdown
The control inputs can be inspected using the `get_inputs_names()` method.
###Code
inputs = env.get_inputs_names()
outputs = env.get_outputs_names()
print("inputs:", inputs)
print("outputs:", outputs)
###Output
inputs: ['uFlowDHW', 'uHP', 'uRSla', 'uValveDHW']
outputs: ['TOut.T', 'heaPum.COP', 'heaPum.COPCar', 'heaPum.P', 'heaPum.QCon_flow', 'heaPum.QEva_flow', 'heaPum.TConAct', 'heaPum.TEvaAct', 'preHea.Q_flow', 'sla.QTot', 'sla.heatPortEmb[1].T', 'sla.m_flow', 'temRoo.T', 'weaBus.HDifHor', 'weaBus.HDirNor', 'weaBus.HGloHor', 'weaBus.HHorIR']
###Markdown
To run the simulation, a number of steps is specified (here 288 steps per day for 5 days), a control input is specified and passed to the simulation model with the `step()` method. To generate some plots later on, we save all the outputs in lists.
###Code
from scipy import signal
import math
steps = 288*5
out_list = []
outputs = env.get_output()
controls = []
hour = 0
for i in range(steps):
control = {}
control['uHP'] = [0.5*(signal.square(0.1*i)+1.0)]
control['uRSla'] = [0.5*(math.cos(0.01*i)+1.0)]
control['uFlowDHW'] = [0.1]
control['uValveDHW'] = [0.0]
controls +=[ {p:control[p][0] for p in control} ]
outputs = env.step(control)
_,hour,_,_ = env.get_date()
out_list.append(outputs)
###Output
_____no_output_____
###Markdown
Since the outputs are given as dictionaries and are collected in lists, we can simply load them as a pandas.DataFrame.
###Code
import pandas as pd
out_df = pd.DataFrame(out_list)
out_df
###Output
_____no_output_____
###Markdown
To generate plots, we can directly get the data from the DataFrames, by using the key names. Displayed are the room temperature, the supply temperature and the return temperature, as well as the external temperature, and the heat pump energy.
###Code
import matplotlib.pyplot as plt
%matplotlib notebook
f, (ax1,ax2,ax3) = plt.subplots(3,figsize=(10,15))#
ax1.plot(out_df['temRoo.T']-273.15, 'r')
ax1.plot(out_df['sla.heatPortEmb[1].T']-273.15, 'b--')
ax1.plot(out_df['heaPum.TEvaAct']-273.15, 'orange')
ax1.set_ylabel('Temp')
ax1.set_xlabel('Steps')
ax2.plot(out_df['TOut.T']-273.15, 'r')
ax2.set_ylabel('Temp')
ax2.set_xlabel('Steps')
ax3.plot(out_df['heaPum.QCon_flow'], 'g')
ax3.set_ylabel('Energy')
ax3.set_xlabel('Steps')
plt.subplots_adjust(hspace=0.4)
plt.show()
###Output
_____no_output_____
###Markdown
To end the simulation, the `close()` method is called. It deletes files that were produced during the simulation and stores some information about the simulation in the *energym_runs* folder.
###Code
env.close()
###Output
_____no_output_____
|
deployment/TextClassificationWithMMLSpark.ipynb
|
###Markdown
Text classification on Spark with MMLSpark This notebook shows how to make a text classification web service using MMLSpark serving and deploy it to a Spark cluster. Get data
###Code
# get the text data from the github repo and unzip it
from fit_and_store_pipeline import unzip_file_here
import urllib
import os
if not os.path.isfile('./text_data/attack_data.csv'):
if not os.path.isfile('./text_data.zip'):
urllib.request.urlretrieve('https://activelearning.blob.core.windows.net/activelearningdemo/text_data.zip', 'text_data.zip')
unzip_file_here('text_data.zip')
if not os.path.isfile('miniglove_6B_50d_w2v.txt'):
unzip_file_here('miniglove_6B_50d_w2v.zip')
print('Data files here')
# ensure workers spawned use the same environment/executable
import os
import sys
os.environ["PYSPARK_PYTHON"] = sys.executable
# make a train-test data pair
from fit_and_store_pipeline import create_train_test_split
# requires training_set_01.csv and test_set_01.csv to be present
training_data, test_data = create_train_test_split()
# if pyspark is missing on your machine, you could do
# !{sys.executable} -m pip install pyspark
from pyspark.sql import SparkSession
# configure Spark session to use mmlspark v0.13 (DSVM comes with 0.12)
sparkSB = SparkSession.builder.appName("MyApp")\
.config("spark.jars.packages", "Azure:mmlspark:0.13")\
.config("spark.pyspark.python", sys.executable)\
.config("spark.pyspark.driver.python", sys.executable)
spark = sparkSB.getOrCreate()
import mmlspark
spark
# put data in the spark format
train_sdf = spark.createDataFrame(training_data)
train_sdf = train_sdf\
.withColumn("label", train_sdf["is_attack"].cast('integer'))\
.select(["comment", "label"])
test_sdf = spark.createDataFrame(test_data)
test_sdf = test_sdf\
.withColumn("label", test_sdf["is_attack"].cast('integer'))\
.select(["comment", "label"])
# What have we?
# train_sdf.limit(10).toPandas()
# train_sdf.groupBy("label").count().toPandas()
# make an ML-Lib pipeline involving preprocessor and vectorizer
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, Word2Vec
from pyspark.ml.classification import RandomForestClassifier
# comment is the text field
tokenizer = Tokenizer(inputCol="comment", outputCol="words")
partitions = train_sdf.rdd.getNumPartitions()
word2vec = Word2Vec(maxIter=4, seed=44, inputCol="words", outputCol="features"
# , numPartitions=partitions
)
rfc = RandomForestClassifier(labelCol="label")
textClassifier = Pipeline(stages = [tokenizer, word2vec, rfc]).fit(train_sdf)
# if you are going to try a couple different models, pre-featurize first
textFeaturizer = Pipeline(stages = [tokenizer, word2vec]).fit(train_sdf)
ptrain = textFeaturizer.transform(train_sdf).select(["label", "features"])
ptest = textFeaturizer.transform(test_sdf).select(["label", "features"])
ptrain.limit(5).toPandas()
# test prediction on some new data
import pandas as pd
test_attacks = ['You are scum.', 'I like your shoes.', 'You are pxzx.',
'Your mother was a hamster and your father smelt of elderberries',
'One bag of hagfish slime, please']
ta_sdf = spark.createDataFrame(pd.DataFrame({"comment" : test_attacks}))
prediction = textClassifier.transform(ta_sdf)
prediction.toPandas()
# test prediction on the larger test set
scored_test = textClassifier.transform(test_sdf)
scored_test.groupBy(["label", "prediction"]).count()\
.toPandas().pivot(index="label", columns="prediction")
###Output
_____no_output_____
###Markdown
Deploy the model as a Spark Streaming job
###Code
# now deploy the trained classifier as a streaming job
# define the interface to be like the model's input
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import *
import uuid
serving_inputs = spark.readStream.server() \
.address("localhost", 9977, "text_api") \
.load()\
.withColumn("variables", from_json(col("value"), test_sdf.schema))\
.select("id","variables.*")
# says to extract "variables" from the "value" field of json-encoded webservice input
serving_outputs = textClassifier.transform(serving_inputs) \
.withColumn("prediction", col("prediction").cast("string"))
server = serving_outputs.writeStream \
.server() \
.option("name", "text_api") \
.queryName("mml_text_query") \
.option("replyCol", "prediction") \
.option("checkpointLocation", "checkpoints-{}".format(uuid.uuid1())) \
.start()
# if we want to change something above (like the port), we'll need
# to stop the active server
server.stop()
###Output
_____no_output_____
###Markdown
Test web service
###Code
# inputs and outputs - schema
serving_inputs
serving_outputs
import requests
import json
import time
# calling the service
data = pd.DataFrame({ "comment" : test_attacks })
for instance in range(len(test_attacks)):
row_as_dict = data.to_dict('records')[instance]
r = requests.post(data=json.dumps(row_as_dict), url="http://localhost:9977/text_api")
time.sleep(0.2)
print("Response to : '{}' is {}".format(test_attacks[instance], r.text))
###Output
Response to : 'You are scum.' is 1.0
Response to : 'I like your shoes.' is 0.0
Response to : 'You are pxzx.' is 1.0
Response to : 'Your mother was a hamster and your father smelt of elderberries' is 0.0
Response to : 'One bag of hagfish slime, please' is 0.0
|
archive/FCCeeDriftChamber.ipynb
|
###Markdown
FCCee: Full Simulation of IDEA Driftchamber
- [Overview](#overview)
- [Generate and Simulate Events](#generate-events)
- [Analyze Events](#analyze-events)
- [Plot events](#plot-events)
- [Homework exercise](#homework-exercise)
Overview
--------
- visualize and use the Driftchamber model in FCCSW
- simulate the particle passage in Geant4
- run digitization and get wire signal
- run Hough Transform for a first track reconstruction
- produce plots
Part I: Simulations with the IDEA detector model in FCCSW
----------------------------------------------------
This tutorial is based on the FCC Note http://cds.cern.ch/record/2670936 and describes the use of the FCCee IDEA Driftchamber in the FCC software framework.
###Code
import ROOT
ROOT.enableJSVis()
# Unfortunately this way of displaying the detector won't work until dd4hep v1-11 is installed in LCG releases
# In the meantime, find a similar display here: http://hep-fcc.github.io/FCCSW/geo/geo-ee.html
# load the dd4hep detector model
#import dd4hep
#import os
#fcc_det_path = os.path.join(os.environ.get("FCC_DETECTORS", ""), "share/FCCSW/Detector/DetFCCeeIDEA/compact/FCCee_DectMaster.xml")
#print fcc_det_path
#description = dd4hep.Detector.getInstance()
#description.fromXML(fcc_det_path)
#c = ROOT.TCanvas("c_detector_display", "", 600,600)
#description.manager().SetVisLevel(6)
#description.manager().SetVisOption(1)
#vol = description.manager().GetTopVolume()
#vol.Draw()
!ls $FCCSWBASEDIR/share/FCCSW/Detector/DetFCCeeIDEA/compact
###Output
_____no_output_____
###Markdown
From the detector display or the command line, check that the detector subsystems are as you would expect them from the specifications in the Conceptual Design Report.
###Code
!fccrun $FCCSWBASEDIR/share/FCCSW/Examples/options/geant_fullsim_fccee_pgun.py --detectors $FCCSWBASEDIR/share/FCCSW/Detector/DetFCCeeIDEA/compact/FCCee_DectMaster.xml --etaMin -3.5 --etaMax 3.5 -n 20000
###Output
_____no_output_____
###Markdown
You can see the created files:
###Code
import ROOT
f = ROOT.TFile("root://eospublic.cern.ch//eos/experiment/fcc/ee/tutorial/fccee_idea_pgun.root")
events = f.Get("events")
c = ROOT.TCanvas("c_positionedHits_DCH_xy", "", 700, 600)
# draw hits for first five events
events.Draw("positionedHits_DCH.position.x:positionedHits_DCH.position.y", "", "", 10, 0)
c.Draw()
%%writefile mergeDCHits.py
import os
from Gaudi.Configuration import *
import GaudiKernel.SystemOfUnits as units
from Configurables import ApplicationMgr, FCCDataSvc, PodioOutput
podioevent = FCCDataSvc("EventDataSvc", input="root://eospublic.cern.ch//eos/experiment/fcc/ee/tutorial/fccee_idea_pgun.root")
from Configurables import PodioInput, ReadTestConsumer
podioinput = PodioInput("PodioReader", collections=["positionedHits_DCH"], OutputLevel=DEBUG)
# Parses the given xml file
from Configurables import GeoSvc
geoservice = GeoSvc("GeoSvc", detectors=[os.environ.get("FCC_DETECTORS", "") + '/share/FCCSW/Detector/DetFCCeeIDEA/compact/FCCee_DectMaster.xml',])
from Configurables import CreateDCHHits
createhits = CreateDCHHits("CreateDCHHits",
readoutName = "DriftChamberCollection",
EdepCut = 100*1e-9,
DCACut = 0.8,
OutputLevel=INFO)
createhits.positionedHits.Path = "positionedHits_DCH"
createhits.mergedHits.Path = "merged_DCH"
from Configurables import PodioOutput
out = PodioOutput("out")
out.OutputLevel=DEBUG
out.outputCommands = ["keep *"]
out.filename="mergedDCHits.root"
ApplicationMgr( TopAlg = [
podioinput,
createhits,
out,
],
EvtSel = 'NONE',
EvtMax = 20000,
ExtSvc = [podioevent, geoservice ],
OutputLevel = INFO
)
!fccrun mergeDCHits.py
!rootls -t mergedDCHits.root
###Output
_____no_output_____
###Markdown
By now, we have produced the two files `fccee_idea_pgun.root` and `mergedDCHits.root`. You can try to put them in a "test" folder on the shared disk space on eos. The files can already be found under the path `/eos/experiment/fcc/ee/tutorial`. To use files on eos, you can simply prepend `root://eospublic.cern.ch//eos/experiment/fcc/ee/tutorial/` when using TFile, or use `xrdcp root://eospublic.cern.ch/ `. And again, check that your files are present in your current directory:
###Code
! xrdcp root://eospublic.cern.ch//eos/experiment/fcc/ee/tutorial/mergedDCHits.root mergedDCHits3.root
import ROOT
f = ROOT.TFile("root://eospublic.cern.ch//eos/experiment/fcc/ee/tutorial/mergedDCHits.root")
events = f.Get("events")
# draw hits for first five events
events.Draw("DCHitInfo.hit_start.Perp():DCHitInfo.hit_start.z()", "DCHitInfo.layerId==5&&DCHitInfo.wireId==7", "")
c = ROOT.TCanvas("c_DCH_xy", "", 700, 600)
g = ROOT.TGraph(events.GetSelectedRows(), events.GetV2(), events.GetV1())
g.SetMarkerStyle(4)
g.SetTitle("DriftChamber Hits on one Wire;x;z")
g.Draw("AP")
c.Draw()
import ROOT
import numpy as np
f = ROOT.TFile("mergedDCHits.root")
events = f.Get("events")
c = ROOT.TCanvas("c_DCH_id", "", 700, 600)
events.Draw("DCHitInfo.hit_start.x():DCHitInfo.hit_start.y()", "", "")
dat_x = events.GetV1()
dat_y = events.GetV2()
x = []
y = []
for i in range(events.GetSelectedRows()):
x.append(dat_x[i])
y.append(dat_y[i])
events.Draw("DCHitInfo.hit_start.z():DCHitInfo.hit_start.z()", "", "")
dat_z = events.GetV1()
z = []
for i in range(events.GetSelectedRows()):
z.append(dat_z[i])
events.Draw("DCHitInfo.wireId:DCHitInfo.layerId", "", "")
dat_wid = events.GetV1()
dat_lid = events.GetV2()
wid = []
lid = []
for i in range(events.GetSelectedRows()):
lid.append(dat_lid[i])
wid.append(dat_wid[i])
c.Draw()
lid = np.array(lid)
wid = np.array(wid)
x = np.array(x)
y = np.array(y)
z = np.array(z)
%matplotlib notebook
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for i in range(500):
cond = (lid ==1 ) * (wid == i)
f_x = x[cond]
f_y = y[cond]
f_z = z[cond]
ax.scatter(f_x, f_y, f_z)
plt.show()
###Output
_____no_output_____
|
NY_taxi_trip.ipynb
|
###Markdown
Machine learning practice
Data: https://www.kaggle.com/c/nyc-taxi-trip-duration/data
1. Information stage: collect and prepare the data
* Choose the feature and target columns: X (N columns are possible) & Y
* Check whether the chosen columns have any problems.
2. Training stage: for the machine
* Split the data
* Train (Linear Regression)
* Check the model scores (train, test)
3. Service stage: serving customers
* Save to and load from a pickle file
* predict
=> Analysis: give your own opinion based on the model scores (at the very top of the notebook)
###Code
import sklearn
import pandas as pd
import numpy as np
data = pd.read_csv('./files/train.csv')
data
data.info()
data2 = data.copy()
data2['dif_longitude'] = np.abs(data2['pickup_longitude'] - data2['dropoff_longitude'])
data2['dif_latitude'] = np.abs(data2['pickup_latitude'] - data2['dropoff_latitude'])
data2['dist'] = np.sqrt(np.power(data2['dif_longitude'],2) + np.power(data2['dif_latitude'],2))
data2['dist_m'] = data2['dif_longitude'] - data2['dif_latitude']
# data2['dropoff_datetime'] - data2['pickup_datetime']
data2
###Output
_____no_output_____
###Markdown
Information stage
###Code
x = data2[['passenger_count','dist']]
y = data2[['trip_duration']]
x.shape, y.shape
###Output
_____no_output_____
###Markdown
Training stage: data split
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(x,y)
X_train.shape, X_test.shape, Y_train.shape, Y_test.shape
###Output
_____no_output_____
###Markdown
Training: Linear Regression
###Code
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(X_train, Y_train)
lr.coef_, lr.intercept_
lr.score(X_train,Y_train)
lr.score(X_test,Y_test)
###Output
_____no_output_____
###Markdown
Service stage
###Code
import pickle
pickle.dump(lr, open('./saves/ny_taxi_trip.pkl', 'wb'))
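# Hedged sketch (not part of the original notebook): the intro cell mentions loading the
# pickled model back and calling predict for the service stage. The passenger count and
# distance values below are made-up illustrative inputs, not values taken from the data.
loaded_model = pickle.load(open('./saves/ny_taxi_trip.pkl', 'rb'))
sample = pd.DataFrame({'passenger_count': [2], 'dist': [0.05]})
print(loaded_model.predict(sample))  # predicted trip_duration, same units as the training target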
###Output
_____no_output_____
|
Projects/Project2/Test/Testing_prep2_NN.ipynb
|
###Markdown
Source: https://github.com/UdiBhaskar/Deep-Learning/blob/master/DNN%20in%20python%20from%20scratch.ipynb and https://static.latexstudio.net/article/2018/0912/neuralnetworksanddeeplearning.pdf
Developing a code for doing neural networks with back propagation
One can identify a set of key steps when using neural networks to solve supervised learning problems:
1. Collect and pre-process data
2. Define model and architecture
3. Choose cost function and optimizer
4. Train the model
5. Evaluate model performance on test data
6. Adjust hyperparameters (if necessary, network architecture)
Collect and pre-process data
$X = (n_{inputs}, n_{features})$
$Y = (n_{inputs}, n_{categories})$
```
# flatten the image
# the value -1 means dimension is inferred from the remaining dimensions: 8x8 = 64
n_inputs = len(inputs)
inputs = inputs.reshape(n_inputs, -1)
```
Test and train
```
train_size = 0.8
test_size = 1 - train_size
X_train, X_test, Y_train, Y_test = train_test_split(inputs, labels, train_size=train_size, test_size=test_size)
```
etc... week41
###Code
import numpy as np
layer_dims = [1,2,2]
parameters = {}
L = len(layer_dims)
for l in range(1, L):
parameters['W' + str(l)] = np.random.normal(0,np.sqrt(2.0/layer_dims[l-1]),(layer_dims[l], layer_dims[l-1]))
parameters['b' + str(l)] = np.random.normal(0,np.sqrt(2.0/layer_dims[l-1]),(layer_dims[l], 1))
display(parameters)
n_features = 5
n_categories = 10
hidden_layers_dims = [2,3,2]
layers_dims = [n_features] + hidden_layers_dims + [n_categories]
print(layers_dims)
for i in layers_dims:
print(i)
###Output
[5, 2, 3, 2, 10]
5
2
3
2
10
###Markdown
Algorithm
General steps to build a neural network (see the minimal end-to-end sketch after the predict() cell below):
- Define the neural network structure (# of input units, # of hidden units, etc.)
- Initialize the model's parameters
- Loop:
  - Implement forward propagation
  - Compute loss
  - Implement backward propagation to get the gradients
  - Update parameters
###Code
def weights_init(layer_dims,init_type='he_normal',seed=None):
"""
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
layer_dims lis is like [ no of input features,# of neurons in hidden layer-1,..,
# of neurons in hidden layer-n shape,output]
init_type -- he_normal --> N(0,sqrt(2/fanin))
he_uniform --> Uniform(-sqrt(6/fanin),sqrt(6/fanin))
xavier_normal --> N(0,2/(fanin+fanout))
xavier_uniform --> Uniform(-sqrt(6/fanin+fanout),sqrt(6/fanin+fanout))
seed -- random seed to generate weights
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
"""
np.random.seed(seed)
parameters = {}
opt_parameters = {}
L = len(layer_dims) # number of layers in the network
if init_type == 'he_normal':
for l in range(1, L):
parameters['W' + str(l)] = np.random.normal(0,np.sqrt(2.0/layer_dims[l-1]),(layer_dims[l], layer_dims[l-1]))
parameters['b' + str(l)] = np.random.normal(0,np.sqrt(2.0/layer_dims[l-1]),(layer_dims[l], 1))
elif init_type == 'he_uniform':
for l in range(1, L):
parameters['W' + str(l)] = np.random.uniform(-np.sqrt(6.0/layer_dims[l-1]),
np.sqrt(6.0/layer_dims[l-1]),
(layer_dims[l], layer_dims[l-1]))
parameters['b' + str(l)] = np.random.uniform(-np.sqrt(6.0/layer_dims[l-1]),
np.sqrt(6.0/layer_dims[l-1]),
(layer_dims[l], 1))
elif init_type == 'xavier_normal':
for l in range(1, L):
parameters['W' + str(l)] = np.random.normal(0,2.0/(layer_dims[l]+layer_dims[l-1]),
(layer_dims[l], layer_dims[l-1]))
parameters['b' + str(l)] = np.random.normal(0,2.0/(layer_dims[l]+layer_dims[l-1]),
(layer_dims[l], 1))
elif init_type == 'xavier_uniform':
for l in range(1, L):
parameters['W' + str(l)] = np.random.uniform(-(np.sqrt(6.0/(layer_dims[l]+layer_dims[l-1]))),
(np.sqrt(6.0/(layer_dims[l]+layer_dims[l-1]))),
(layer_dims[l], layer_dims[l-1]))
parameters['b' + str(l)] = np.random.uniform(-(np.sqrt(6.0/(layer_dims[l]+layer_dims[l-1]))),
(np.sqrt(6.0/(layer_dims[l]+layer_dims[l-1]))),
(layer_dims[l], 1))
return parameters
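# Hedged sketch (not part of the original notebook): the draft functions below call
# sigmoid/ReLU/Tanh/softmax, which are only defined further down as static methods of
# the DNNClassifier class. Minimal standalone versions mirroring those static methods
# are added here so that the intermediate cells can run on their own.
def sigmoid(X, derivative=False):
    '''Sigmoid activation and its derivative.'''
    s = 1 / (1 + np.exp(-np.array(X)))
    return s * (1 - s) if derivative else s
def ReLU(X, alpha=0, derivative=False):
    '''(Leaky) ReLU activation and its derivative.'''
    X = np.array(X, dtype=np.float64)
    if derivative:
        out = np.ones_like(X)
        out[X < 0] = alpha
        return out
    return np.where(X < 0, alpha * X, X)
def Tanh(X, derivative=False):
    '''Tanh activation and its derivative.'''
    X = np.array(X)
    return 1 - np.tanh(X) ** 2 if derivative else np.tanh(X)
def softmax(X):
    '''Column-wise softmax.'''
    return np.exp(X) / np.sum(np.exp(X), axis=0)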
def forward_propagation(X, hidden_layers,parameters,keep_proba=1,seed=None):
"""
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
hidden_layers -- List of hideden layers
weights -- Output of weights_init dict (parameters)
keep_prob -- probability of keeping a neuron active during drop-out, scalar
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_activation_forward() (there are L-1 of them, indexed from 0 to L-1)
"""
if seed != None:
np.random.seed(seed)
caches = []
A = X
L = len(hidden_layers)
for l,active_function in enumerate(hidden_layers,start=1):
A_prev = A
Z = np.dot(parameters['W' + str(l)],A_prev)+parameters['b' + str(l)]
if active_function == "sigmoid":
A = sigmoid(Z)
elif active_function == "relu":
A = ReLU(Z)
elif active_function == "tanh":
A = Tanh(Z)
elif active_function == "softmax":
A = softmax(Z)
        if keep_proba != 1 and l != L and l != 1:
            D = np.random.rand(A.shape[0],A.shape[1])
            D = (D<keep_proba)
            A = np.multiply(A,D)
            A = A / keep_proba
            cache = ((A_prev, parameters['W' + str(l)],parameters['b' + str(l)],D), Z)
            caches.append(cache)
else:
cache = ((A_prev, parameters['W' + str(l)],parameters['b' + str(l)]), Z)
#print(A.shape)
caches.append(cache)
return A, caches
def compute_cost(A, Y, parameters, lamda=0,penality=None):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A -- post-activation, output of forward propagation
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function
"""
m = Y.shape[1]
cost = np.squeeze(-np.sum(np.multiply(np.log(A),Y))/m)
L = len(parameters)//2
if penality == 'l2' and lamda != 0:
sum_weights = 0
for l in range(1, L):
sum_weights = sum_weights + np.sum(np.square(parameters['W' + str(l)]))
        cost = cost + sum_weights * (lamda/(2*m))
elif penality == 'l1' and lamda != 0:
sum_weights = 0
for l in range(1, L):
sum_weights = sum_weights + np.sum(np.abs(parameters['W' + str(l)]))
        cost = cost + sum_weights * (lamda/(2*m))
return cost
def back_propagation(AL, Y, caches, hidden_layers, keep_prob=1, penality=None,lamda=0):
"""
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
hidden_layers -- hidden layer names
keep_prob -- probabaility for dropout
penality -- regularization penality 'l1' or 'l2' or None
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
"""
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape)
# Initializing the backpropagation
dZL = AL - Y
cache = caches[L-1]
linear_cache, activation_cache = cache
AL, W, b = linear_cache
grads["dW" + str(L)] = np.dot(dZL,AL.T)/m
grads["db" + str(L)] = np.sum(dZL,axis=1,keepdims=True)/m
grads["dA" + str(L-1)] = np.dot(W.T,dZL)
# Loop from l=L-2 to l=0
v_dropout = 0
for l in reversed(range(L-1)):
cache = caches[l]
active_function = hidden_layers[l]
linear_cache, Z = cache
try:
A_prev, W, b = linear_cache
except:
A_prev, W, b, D = linear_cache
v_dropout = 1
m = A_prev.shape[1]
if keep_prob != 1 and v_dropout == 1:
dA_prev = np.multiply(grads["dA" + str(l + 1)],D)
dA_prev = dA_prev/keep_prob
v_dropout = 0
else:
dA_prev = grads["dA" + str(l + 1)]
v_dropout = 0
if active_function == "sigmoid":
dZ = np.multiply(dA_prev,sigmoid(Z,derivative=True))
elif active_function == "relu":
dZ = np.multiply(dA_prev,ReLU(Z,derivative=True))
elif active_function == "tanh":
dZ = np.multiply(dA_prev,Tanh(Z,derivative=True))
grads["dA" + str(l)] = np.dot(W.T,dZ)
if penality == 'l2':
grads["dW" + str(l + 1)] = (np.dot(dZ,A_prev.T)/m) + ((lambd * W)/m)
elif penality == 'l1':
grads["dW" + str(l + 1)] = (np.dot(dZ,A_prev.T)/m) + ((lambd * np.sign(W+10**-8))/m)
else:
grads["dW" + str(l + 1)] = (np.dot(dZ,A_prev.T)/m)
grads["db" + str(l + 1)] = np.sum(dZ,axis=1,keepdims=True)/m
return grads
def update_parameters(parameters, grads,learning_rate,iter_no,method = 'SGD',opt_parameters=None,beta1=0.9,beta2=0.999):
"""
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
method -- method for updation of weights
'SGD','SGDM','RMSP','ADAM'
learning rate -- learning rate alpha value
beta1 -- weighted avg parameter for SGDM and ADAM
beta2 -- weighted avg parameter for RMSP and ADAM
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural network
if method == 'SGD':
for l in range(L):
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate*grads["dW" + str(l + 1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate*grads["db" + str(l + 1)]
elif method == 'SGDM':
for l in range(L):
opt_parameters['vdb'+str(l+1)] = beta1*opt_parameters['vdb'+str(l+1)] + (1-beta1)*grads["db" + str(l + 1)]
opt_parameters['vdw'+str(l+1)] = beta1*opt_parameters['vdw'+str(l+1)] + (1-beta1)*grads["dW" + str(l + 1)]
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate*opt_parameters['vdw'+str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate*opt_parameters['vdb'+str(l+1)]
elif method == 'RMSP':
for l in range(L):
opt_parameters['sdb'+str(l+1)] = beta2*opt_parameters['sdb'+str(l+1)] + (1-beta2)*np.square(grads["db" + str(l + 1)])
opt_parameters['sdw'+str(l+1)] = beta2*opt_parameters['sdw'+str(l+1)] + (1-beta2)*np.square(grads["dW" + str(l + 1)])
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - \
learning_rate*(grads["dW" + str(l + 1)]/(np.sqrt(opt_parameters['sdw'+str(l+1)])+10**-8))
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - \
learning_rate*(grads["db" + str(l + 1)]/(np.sqrt(opt_parameters['sdb'+str(l+1)])+10**-8))
elif method == 'ADAM':
for l in range(L):
opt_parameters['vdb'+str(l+1)] = beta1*opt_parameters['vdb'+str(l+1)] + (1-beta1)*grads["db" + str(l + 1)]
opt_parameters['vdw'+str(l+1)] = beta1*opt_parameters['vdw'+str(l+1)] + (1-beta1)*grads["dW" + str(l + 1)]
opt_parameters['sdb'+str(l+1)] = beta2*opt_parameters['sdb'+str(l+1)] + (1-beta2)*np.square(grads["db" + str(l + 1)])
opt_parameters['sdw'+str(l+1)] = beta2*opt_parameters['sdw'+str(l+1)] + (1-beta2)*np.square(grads["dW" + str(l + 1)])
            learningrate = learning_rate * np.sqrt((1-beta2**iter_no)/((1-beta1**iter_no)+10**-8))
            # use the bias-corrected learning rate computed above in the ADAM update
            parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - \
                 learningrate*(opt_parameters['vdw'+str(l+1)]/\
                                (np.sqrt(opt_parameters['sdw'+str(l+1)])+10**-8))
            parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - \
                 learningrate*(opt_parameters['vdb'+str(l+1)]/\
                                (np.sqrt(opt_parameters['sdb'+str(l+1)])+10**-8))
return parameters,opt_parameters
def predict(parameters, X,hidden_layers,return_prob=False):
"""
Using the learned parameters, predicts a class for each example in X
Arguments:
parameters -- python dictionary containing your parameters
X -- input data of size (n_x, m)
Returns
predictions -- vector of predictions of our model (red: 0 / blue: 1)
"""
A, cache = forward_propagation(X,hidden_layers,parameters,seed=3)
if return_prob == True:
return A
else:
return np.argmax(A, axis=0)
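# Hedged sketch (not part of the original notebook): a minimal end-to-end training loop
# wiring the draft functions above together, following the recipe in the "Algorithm"
# markdown cell (define structure -> initialize parameters -> loop over forward pass,
# cost, backward pass, parameter update). The toy data, layer sizes and learning rate
# are illustrative assumptions only.
X_demo = np.random.rand(5, 200)                      # (n_features, n_samples)
Y_demo = np.eye(3)[np.random.randint(0, 3, 200)].T   # one-hot labels, (n_categories, n_samples)
demo_hidden_layers = ['relu', 'tanh', 'softmax']     # last layer must be softmax
demo_layer_dims = [5, 4, 4, 3]
demo_params = weights_init(demo_layer_dims, init_type='he_normal', seed=0)
for it in range(1, 201):
    AL_demo, demo_caches = forward_propagation(X_demo, demo_hidden_layers, demo_params)
    demo_cost = compute_cost(AL_demo, Y_demo, demo_params)
    demo_grads = back_propagation(AL_demo, Y_demo, demo_caches, demo_hidden_layers)
    demo_params, _ = update_parameters(demo_params, demo_grads, learning_rate=0.05,
                                       iter_no=it, method='SGD')
    if it % 50 == 0:
        print('iteration', it, 'cost', demo_cost)
print('predicted classes (first 10 samples):', predict(demo_params, X_demo, demo_hidden_layers)[:10])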
###Output
_____no_output_____
###Markdown
All in one object The code was inspired by the source: https://github.com/UdiBhaskar/Deep-Learning/blob/master/DNN%20in%20python%20from%20scratch.ipynb . I really appreciated its completeness and tidiness. The idea behind it is to create an object 'DNNClassifier' to perform ... When initializing it, you are essentially building the architecture of the DNN (number of layers, number of neurons for each layer, activation functions), while when fitting it (fit()), X and y are provided so as to set up the specific case to analyse. The methods of the class are:---For any further details on the class and its usage look at the documentation of the code [link to the class]. For further info on the optimizers: https://ruder.io/optimizing-gradient-descent/
###Code
class DNNClassifier(object):
'''
Parameters: layer_dims -- List Dimensions of layers including input and output layer
hidden_layers -- List of hidden layers
'relu','sigmoid','tanh','softplus','arctan','elu','identity','softmax'
Note: 1. last layer must be softmax
2. For relu and elu need to mention alpha value as below
['tanh',('relu',alpha1),('elu',alpha2),('relu',alpha3),'softmax']
need to give a tuple for relu and elu if you want to mention alpha
if not default alpha is 0
init_type -- init_type -- he_normal --> N(0,sqrt(2/fanin))
he_uniform --> Uniform(-sqrt(6/fanin),sqrt(6/fanin))
xavier_normal --> N(0,2/(fanin+fanout))
xavier_uniform --> Uniform(-sqrt(6/fanin+fanout),sqrt(6/fanin+fanout))
learning_rate -- Learning rate
optimization_method -- optimization method 'SGD','SGDM','RMSP','ADAM'
batch_size -- Batch size to update weights
max_epoch -- Max epoch number
Note : Max_iter = max_epoch * (size of traing / batch size)
tolarance -- if abs(previous cost - current cost ) < tol training will be stopped
if None -- No check will be performed
keep_proba -- probability for dropout
if 1 then there is no dropout
penality -- regularization penality
values taken 'l1','l2',None(default)
lamda -- l1 or l2 regularization value
beta1 -- SGDM and adam optimization param
beta2 -- RMSP and adam optimization value
seed -- Random seed to generate randomness
verbose -- takes 0 or 1
'''
def __init__(self,layer_dims,hidden_layers,init_type='he_normal',learning_rate=0.1,
optimization_method = 'SGD',batch_size=64,max_epoch=100,tolarance = 0.00001,
keep_proba=1,penality=None,lamda=0,beta1=0.9,
beta2=0.999,seed=None,verbose=0):
self.layer_dims = layer_dims
self.hidden_layers = hidden_layers
self.init_type = init_type
self.learning_rate = learning_rate
self.optimization_method = optimization_method
self.batch_size = batch_size
self.keep_proba = keep_proba
self.penality = penality
self.lamda = lamda
self.beta1 = beta1
self.beta2 = beta2
self.seed = seed
self.max_epoch = max_epoch
self.tol = tolarance
self.verbose = verbose
@staticmethod
def weights_init(layer_dims,init_type='he_normal',seed=None):
"""
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
layer_dims lis is like [ no of input features,# of neurons in hidden layer-1,..,
# of neurons in hidden layer-n shape,output]
init_type -- he_normal --> N(0,sqrt(2/fanin))
he_uniform --> Uniform(-sqrt(6/fanin),sqrt(6/fanin))
xavier_normal --> N(0,2/(fanin+fanout))
xavier_uniform --> Uniform(-sqrt(6/fanin+fanout),sqrt(6/fanin+fanout))
seed -- random seed to generate weights
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
"""
np.random.seed(seed)
parameters = {}
opt_parameters = {}
L = len(layer_dims) # number of layers in the network
if init_type == 'he_normal':
for l in range(1, L):
parameters['W' + str(l)] = np.random.normal(0,np.sqrt(2.0/layer_dims[l-1]),(layer_dims[l], layer_dims[l-1]))
parameters['b' + str(l)] = np.random.normal(0,np.sqrt(2.0/layer_dims[l-1]),(layer_dims[l], 1))
elif init_type == 'he_uniform':
for l in range(1, L):
parameters['W' + str(l)] = np.random.uniform(-np.sqrt(6.0/layer_dims[l-1]),
np.sqrt(6.0/layer_dims[l-1]),
(layer_dims[l], layer_dims[l-1]))
parameters['b' + str(l)] = np.random.uniform(-np.sqrt(6.0/layer_dims[l-1]),
np.sqrt(6.0/layer_dims[l-1]),
(layer_dims[l], 1))
elif init_type == 'xavier_normal':
for l in range(1, L):
parameters['W' + str(l)] = np.random.normal(0,2.0/(layer_dims[l]+layer_dims[l-1]),
(layer_dims[l], layer_dims[l-1]))
parameters['b' + str(l)] = np.random.normal(0,2.0/(layer_dims[l]+layer_dims[l-1]),
(layer_dims[l], 1))
elif init_type == 'xavier_uniform':
for l in range(1, L):
parameters['W' + str(l)] = np.random.uniform(-(np.sqrt(6.0/(layer_dims[l]+layer_dims[l-1]))),
(np.sqrt(6.0/(layer_dims[l]+layer_dims[l-1]))),
(layer_dims[l], layer_dims[l-1]))
parameters['b' + str(l)] = np.random.uniform(-(np.sqrt(6.0/(layer_dims[l]+layer_dims[l-1]))),
(np.sqrt(6.0/(layer_dims[l]+layer_dims[l-1]))),
(layer_dims[l], 1))
return parameters
@staticmethod
def sigmoid(X,derivative=False):
'''Compute Sigmaoid and its derivative'''
if derivative == False:
out = 1 / (1 + np.exp(-np.array(X)))
elif derivative == True:
s = 1 / (1 + np.exp(-np.array(X)))
out = s*(1-s)
return out
@staticmethod
def ReLU(X,alpha=0,derivative=False):
'''Compute ReLU function and derivative'''
X = np.array(X,dtype=np.float64)
if derivative == False:
return np.where(X<0,alpha*X,X)
elif derivative == True:
X_relu = np.ones_like(X,dtype=np.float64)
X_relu[X < 0] = alpha
return X_relu
@staticmethod
def Tanh(X,derivative=False):
'''Compute tanh values and derivative of tanh'''
X = np.array(X)
if derivative == False:
return np.tanh(X)
if derivative == True:
return 1 - (np.tanh(X))**2
@staticmethod
def softplus(X,derivative=False):
'''Compute tanh values and derivative of tanh'''
X = np.array(X)
if derivative == False:
return np.log(1+np.exp(X))
if derivative == True:
return 1 / (1 + np.exp(-np.array(X)))
@staticmethod
def arctan(X,derivative=False):
'''Compute tan^-1(X) and derivative'''
if derivative == False:
return np.arctan(X)
if derivative == True:
return 1/ (1 + np.square(X))
@staticmethod
def identity(X,derivative=False):
'''identity function and derivative f(x) = x'''
X = np.array(X)
if derivative == False:
return X
if derivative == True:
return np.ones_like(X)
@staticmethod
def elu(X,alpha=0,derivative=False):
'''Exponential Linear Unit'''
X = np.array(X,dtype=np.float64)
if derivative == False:
return np.where(X<0,alpha*(np.exp(X)-1),X)
elif derivative == True:
return np.where(X<0,alpha*(np.exp(X)),1)
@staticmethod
def softmax(X):
"""Compute softmax values for each sets of scores in x."""
return np.exp(X) / np.sum(np.exp(X),axis=0)
@staticmethod
def forward_propagation(X, hidden_layers,parameters,keep_prob=1,seed=None):
""""
Arguments:
X -- data, numpy array of shape (input size, number of examples)
hidden_layers -- List of hideden layers
weights -- Output of weights_init dict (parameters)
keep_prob -- probability of keeping a neuron active during drop-out, scalar
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_activation_forward() (there are L-1 of them, indexed from 0 to L-1)
"""
if seed != None:
np.random.seed(seed)
caches = []
A = X
L = len(hidden_layers)
for l,active_function in enumerate(hidden_layers,start=1):
A_prev = A
Z = np.dot(parameters['W' + str(l)],A_prev)+parameters['b' + str(l)]
if type(active_function) is tuple:
if active_function[0] == "relu":
A = DNNClassifier.ReLU(Z,active_function[1])
elif active_function[0] == 'elu':
A = DNNClassifier.elu(Z,active_function[1])
else:
if active_function == "sigmoid":
A = DNNClassifier.sigmoid(Z)
elif active_function == "identity":
A = DNNClassifier.identity(Z)
elif active_function == "arctan":
A = DNNClassifier.arctan(Z)
elif active_function == "softplus":
A = DNNClassifier.softplus(Z)
elif active_function == "tanh":
A = DNNClassifier.Tanh(Z)
elif active_function == "softmax":
A = DNNClassifier.softmax(Z)
elif active_function == "relu":
A = DNNClassifier.ReLU(Z)
elif active_function == 'elu':
A = DNNClassifier.elu(Z)
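            # Inverted dropout: randomly zero out units with probability (1 - keep_prob)
            # and rescale the survivors by 1/keep_prob so the expected activation is unchanged.
            # It is applied only to intermediate layers (skipped for the first hidden layer and the output layer).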
if keep_prob != 1 and l != L and l != 1:
D = np.random.rand(A.shape[0],A.shape[1])
D = (D<keep_prob)
A = np.multiply(A,D)
A = A / keep_prob
cache = ((A_prev, parameters['W' + str(l)],parameters['b' + str(l)],D), Z)
caches.append(cache)
else:
cache = ((A_prev, parameters['W' + str(l)],parameters['b' + str(l)]), Z)
#print(A.shape)
caches.append(cache)
return A, caches
@staticmethod
def compute_cost(A, Y, parameters, lamda=0,penality=None):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A -- post-activation, output of forward propagation
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function
"""
m = Y.shape[1]
cost = np.squeeze(-np.sum(np.multiply(np.log(A),Y))/m)
L = len(parameters)//2
if penality == 'l2' and lamda != 0:
sum_weights = 0
for l in range(1, L):
sum_weights = sum_weights + np.sum(np.square(parameters['W' + str(l)]))
cost = cost + sum_weights * (lamda/(2*m))
elif penality == 'l1' and lamda != 0:
sum_weights = 0
for l in range(1, L):
sum_weights = sum_weights + np.sum(np.abs(parameters['W' + str(l)]))
cost = cost + sum_weights * (lamda/(2*m))
return cost
@staticmethod
def back_propagation(AL, Y, caches, hidden_layers, keep_prob=1, penality=None,lamda=0):
"""
Implement the backward propagation
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
hidden_layers -- hidden layer names
        keep_prob -- keep probability for dropout
        penality -- regularization penalty: 'l1', 'l2' or None
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
"""
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape)
# Initializing the backpropagation
dZL = AL - Y
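        # For a softmax (or sigmoid) output layer with cross-entropy loss,
        # the gradient w.r.t. the pre-activation simplifies to dZ = A - Y.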
cache = caches[L-1]
linear_cache, activation_cache = cache
        A_prev_L, W, b = linear_cache   # A_prev_L: activations feeding the output layer
        grads["dW" + str(L)] = np.dot(dZL, A_prev_L.T)/m
grads["db" + str(L)] = np.sum(dZL,axis=1,keepdims=True)/m
grads["dA" + str(L-1)] = np.dot(W.T,dZL)
# Loop from l=L-2 to l=0
v_dropout = 0
for l in reversed(range(L-1)):
cache = caches[l]
active_function = hidden_layers[l]
linear_cache, Z = cache
try:
A_prev, W, b = linear_cache
except:
A_prev, W, b, D = linear_cache
v_dropout = 1
m = A_prev.shape[1]
if keep_prob != 1 and v_dropout == 1:
dA_prev = np.multiply(grads["dA" + str(l + 1)],D)
dA_prev = dA_prev/keep_prob
v_dropout = 0
else:
dA_prev = grads["dA" + str(l + 1)]
v_dropout = 0
if type(active_function) is tuple:
if active_function[0] == "relu":
dZ = np.multiply(dA_prev,DNNClassifier.ReLU(Z,active_function[1],derivative=True))
elif active_function[0] == 'elu':
dZ = np.multiply(dA_prev,DNNClassifier.elu(Z,active_function[1],derivative=True))
else:
if active_function == "sigmoid":
dZ = np.multiply(dA_prev,DNNClassifier.sigmoid(Z,derivative=True))
elif active_function == "relu":
dZ = np.multiply(dA_prev,DNNClassifier.ReLU(Z,derivative=True))
elif active_function == "tanh":
dZ = np.multiply(dA_prev,DNNClassifier.Tanh(Z,derivative=True))
elif active_function == "identity":
dZ = np.multiply(dA_prev,DNNClassifier.identity(Z,derivative=True))
elif active_function == "arctan":
dZ = np.multiply(dA_prev,DNNClassifier.arctan(Z,derivative=True))
elif active_function == "softplus":
dZ = np.multiply(dA_prev,DNNClassifier.softplus(Z,derivative=True))
elif active_function == 'elu':
dZ = np.multiply(dA_prev,DNNClassifier.elu(Z,derivative=True))
grads["dA" + str(l)] = np.dot(W.T,dZ)
if penality == 'l2':
grads["dW" + str(l + 1)] = (np.dot(dZ,A_prev.T)/m) + ((lamda * W)/m)
elif penality == 'l1':
grads["dW" + str(l + 1)] = (np.dot(dZ,A_prev.T)/m) + ((lamda * np.sign(W+10**-8))/m)
else:
grads["dW" + str(l + 1)] = (np.dot(dZ,A_prev.T)/m)
grads["db" + str(l + 1)] = np.sum(dZ,axis=1,keepdims=True)/m
return grads
@staticmethod
def update_parameters(parameters, grads,learning_rate,iter_no,method = 'SGD',opt_parameters=None,beta1=0.9,beta2=0.999):
"""
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
method -- method for updation of weights
'SGD','SGDM','RMSP','ADAM'
learning rate -- learning rate alpha value
beta1 -- weighted avg parameter for SGDM and ADAM
beta2 -- weighted avg parameter for RMSP and ADAM
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
opt_parameters
"""
L = len(parameters) // 2 # number of layers in the neural network
if method == 'SGD':
for l in range(L):
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate*grads["dW" + str(l + 1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate*grads["db" + str(l + 1)]
opt_parameters = None
elif method == 'SGDM':
for l in range(L):
opt_parameters['vdb'+str(l+1)] = beta1*opt_parameters['vdb'+str(l+1)] + (1-beta1)*grads["db" + str(l + 1)]
opt_parameters['vdw'+str(l+1)] = beta1*opt_parameters['vdw'+str(l+1)] + (1-beta1)*grads["dW" + str(l + 1)]
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate*opt_parameters['vdw'+str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate*opt_parameters['vdb'+str(l+1)]
elif method == 'RMSP':
for l in range(L):
opt_parameters['sdb'+str(l+1)] = beta2*opt_parameters['sdb'+str(l+1)] + \
(1-beta2)*np.square(grads["db" + str(l + 1)])
opt_parameters['sdw'+str(l+1)] = beta2*opt_parameters['sdw'+str(l+1)] + \
(1-beta2)*np.square(grads["dW" + str(l + 1)])
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - \
learning_rate*(grads["dW" + str(l + 1)]/(np.sqrt(opt_parameters['sdw'+str(l+1)])+10**-8))
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - \
learning_rate*(grads["db" + str(l + 1)]/(np.sqrt(opt_parameters['sdb'+str(l+1)])+10**-8))
elif method == 'ADAM':
for l in range(L):
opt_parameters['vdb'+str(l+1)] = beta1*opt_parameters['vdb'+str(l+1)] + (1-beta1)*grads["db" + str(l + 1)]
opt_parameters['vdw'+str(l+1)] = beta1*opt_parameters['vdw'+str(l+1)] + (1-beta1)*grads["dW" + str(l + 1)]
opt_parameters['sdb'+str(l+1)] = beta2*opt_parameters['sdb'+str(l+1)] + \
(1-beta2)*np.square(grads["db" + str(l + 1)])
opt_parameters['sdw'+str(l+1)] = beta2*opt_parameters['sdw'+str(l+1)] + \
(1-beta2)*np.square(grads["dW" + str(l + 1)])
                # bias-corrected step size; kept in a local variable so the base
                # learning rate is not repeatedly rescaled across layers
                lr_t = learning_rate * np.sqrt((1-beta2**iter_no)/((1-beta1**iter_no)+10**-8))
                parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - \
                                lr_t*(opt_parameters['vdw'+str(l+1)]/\
                                      (np.sqrt(opt_parameters['sdw'+str(l+1)])+10**-8))
                parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - \
                                lr_t*(opt_parameters['vdb'+str(l+1)]/\
                                      (np.sqrt(opt_parameters['sdb'+str(l+1)])+10**-8))
return parameters,opt_parameters
def fit(self,X,y):
'''
X -- data, numpy array of shape (input size, number of examples)
y -- lables, numpy array of shape (no of classes,n)
'''
np.random.seed(self.seed)
self.grads = {}
self.costs = []
M = X.shape[1]
opt_parameters = {}
if self.verbose == 1:
            print('Initializing weights...')
self.parameters = self.weights_init(self.layer_dims,self.init_type,self.seed)
self.iter_no = 0
idx = np.arange(0,M)
if self.optimization_method != 'SGD':
for l in range(1, len(self.layer_dims)):
opt_parameters['vdw' + str(l)] = np.zeros((self.layer_dims[l], self.layer_dims[l-1]))
opt_parameters['vdb' + str(l)] = np.zeros((self.layer_dims[l], 1))
opt_parameters['sdw' + str(l)] = np.zeros((self.layer_dims[l], self.layer_dims[l-1]))
opt_parameters['sdb' + str(l)] = np.zeros((self.layer_dims[l], 1))
if self.verbose == 1:
print('Starting Training...')
for epoch_no in range(1,self.max_epoch+1):
np.random.shuffle(idx)
X = X[:,idx]
y = y[:,idx]
for i in range(0,M, self.batch_size):
self.iter_no = self.iter_no + 1
X_batch = X[:,i:i + self.batch_size]
y_batch = y[:,i:i + self.batch_size]
# Forward propagation:
AL, cache = self.forward_propagation(X_batch,self.hidden_layers,self.parameters,self.keep_proba,self.seed)
#cost
cost = self.compute_cost(AL, y_batch, self.parameters,self.lamda,self.penality)
self.costs.append(cost)
if self.tol != None:
try:
if abs(cost - self.costs[-2]) < self.tol:
return self
except:
pass
#back prop
grads = self.back_propagation(AL, y_batch, cache,self.hidden_layers,self.keep_proba,self.penality,self.lamda)
#update params
self.parameters,opt_parameters = self.update_parameters(self.parameters,grads,self.learning_rate,
self.iter_no-1,self.optimization_method,
opt_parameters,self.beta1,self.beta2)
if self.verbose == 1:
if self.iter_no % 100 == 0:
print("Cost after iteration {}: {}".format(self.iter_no, cost))
return self
def predict(self,X,proba=False):
'''predicting values
        arguments: X -- input data
                   proba -- if False, return the predicted class labels
                            if True, return the class probabilities
'''
out, _ = self.forward_propagation(X,self.hidden_layers,self.parameters,self.keep_proba,self.seed)
if proba == True:
return out.T
else:
return np.argmax(out, axis=0)
###Output
_____no_output_____
|
notebooks/drive trigger on off.ipynb
|
###Markdown
Analyze
###Code
def getPSD(tdata):
    fs = fft.rfftfreq(len(tdata), DT)
    z = fft.rfft(tdata)           # FFT of the time-domain data
    Z = real(z * conj(z))         # power spectrum
    Z /= float(len(tdata) * RATE)
    Z[1:] *= 2.                   # one-sided spectrum: double every bin except DC
    # assert ((sum(Z)* 1./DT) - sum(tdata**2)) < 1e-6
    return Z, fs
figure()
Z, fs = getPSD(indata)
plot(fs / 1e3, Z**0.5)
xlabel("Frequency (kHz)")
ylabel("PSD (V / $\sqrt{\mathsf{Hz}}$ )")
Zdrive, _ = getPSD(sig)
mask = Zdrive > mean(Zdrive)
figure()
plot(fs[mask] / 1e3, Z[mask] / Zdrive[mask])
xlabel("Frequency (kHz)")
ylabel("Normalized Response")
###Output
_____no_output_____
|
_notebooks/2022_02_09_Image_classification.ipynb
|
###Markdown
Bird species classification> In this project we will use the fastai package to classify bird species.- toc: true - badges: true- comments: true- categories: [jupyter]- image: images/goldfinch.jpg
###Code
#hide
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Not connected to a GPU')
else:
print(gpu_info)
#hide
from google.colab import drive
drive.mount('/content/drive')
import os
os.environ['KAGGLE_CONFIG_DIR'] = "/content/drive/MyDrive/Kaggle"
%cd /content/drive/MyDrive/Kaggle
#hide
#!kaggle datasets download -d gpiosenka/100-bird-species
#hide
#!unzip \*.zip && rm *.zip
#hide
#skip
! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab
import torch
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from fastai.vision.all import *
#from nbdev.showdoc import *
set_seed(2)
%matplotlib inline
path = Path('/content/drive/MyDrive/Kaggle')
bs = 16 # uncomment this line if you run out of memory even after clicking Kernel->Restart
# Generic container to quickly build Datasets and DataLoaders
birds = DataBlock(blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
splitter=RandomSplitter(seed=42),
get_y=parent_label,
item_tfms=Resize(460),
batch_tfms=aug_transforms(size=224, min_scale=0.75))
dls = birds.dataloaders(path, valid_pct=0.2)
dls.show_batch(max_n=9, figsize=(9,6))
print(dls.vocab)
len(dls.vocab),dls.c
###Output
['AFRICAN CROWNED CRANE', 'AFRICAN FIREFINCH', 'ALBATROSS', 'ALEXANDRINE PARAKEET', 'AMERICAN AVOCET', 'AMERICAN BITTERN', 'AMERICAN COOT', 'AMERICAN GOLDFINCH', 'AMERICAN KESTREL', 'AMERICAN PIPIT', 'AMERICAN REDSTART', 'ANHINGA', 'ANNAS HUMMINGBIRD', 'ANTBIRD', 'ARARIPE MANAKIN', 'ASIAN CRESTED IBIS', 'BALD EAGLE', 'BALD IBIS', 'BALI STARLING', 'BALTIMORE ORIOLE', 'BANANAQUIT', 'BANDED BROADBILL', 'BANDED PITA', 'BAR-TAILED GODWIT', 'BARN OWL', 'BARN SWALLOW', 'BARRED PUFFBIRD', 'BAY-BREASTED WARBLER', 'BEARDED BARBET', 'BEARDED BELLBIRD', 'BEARDED REEDLING', 'BELTED KINGFISHER', 'BIRD OF PARADISE', 'BLACK & YELLOW bROADBILL', 'BLACK BAZA', 'BLACK FRANCOLIN', 'BLACK SKIMMER', 'BLACK SWAN', 'BLACK TAIL CRAKE', 'BLACK THROATED BUSHTIT', 'BLACK THROATED WARBLER', 'BLACK VULTURE', 'BLACK-CAPPED CHICKADEE', 'BLACK-NECKED GREBE', 'BLACK-THROATED SPARROW', 'BLACKBURNIAM WARBLER', 'BLONDE CRESTED WOODPECKER', 'BLUE COAU', 'BLUE GROUSE', 'BLUE HERON', 'BLUE THROATED TOUCANET', 'BOBOLINK', 'BORNEAN BRISTLEHEAD', 'BORNEAN LEAFBIRD', 'BORNEAN PHEASANT', 'BRANDT CORMARANT', 'BROWN CREPPER', 'BROWN NOODY', 'BROWN THRASHER', 'BULWERS PHEASANT', 'CACTUS WREN', 'CALIFORNIA CONDOR', 'CALIFORNIA GULL', 'CALIFORNIA QUAIL', 'CANARY', 'CAPE GLOSSY STARLING', 'CAPE MAY WARBLER', 'CAPPED HERON', 'CAPUCHINBIRD', 'CARMINE BEE-EATER', 'CASPIAN TERN', 'CASSOWARY', 'CEDAR WAXWING', 'CERULEAN WARBLER', 'CHARA DE COLLAR', 'CHESTNET BELLIED EUPHONIA', 'CHIPPING SPARROW', 'CHUKAR PARTRIDGE', 'CINNAMON TEAL', 'CLARKS NUTCRACKER', 'COCK OF THE ROCK', 'COCKATOO', 'COLLARED ARACARI', 'COMMON FIRECREST', 'COMMON GRACKLE', 'COMMON HOUSE MARTIN', 'COMMON LOON', 'COMMON POORWILL', 'COMMON STARLING', 'COUCHS KINGBIRD', 'CRESTED AUKLET', 'CRESTED CARACARA', 'CRESTED NUTHATCH', 'CRIMSON SUNBIRD', 'CROW', 'CROWNED PIGEON', 'CUBAN TODY', 'CUBAN TROGON', 'CURL CRESTED ARACURI', 'D-ARNAUDS BARBET', 'DARK EYED JUNCO', 'DOUBLE BARRED FINCH', 'DOUBLE BRESTED CORMARANT', 'DOWNY WOODPECKER', 'EASTERN BLUEBIRD', 'EASTERN MEADOWLARK', 'EASTERN ROSELLA', 'EASTERN TOWEE', 'ELEGANT TROGON', 'ELLIOTS PHEASANT', 'EMPEROR PENGUIN', 'EMU', 'ENGGANO MYNA', 'EURASIAN GOLDEN ORIOLE', 'EURASIAN MAGPIE', 'EVENING GROSBEAK', 'FAIRY BLUEBIRD', 'FIRE TAILLED MYZORNIS', 'FLAME TANAGER', 'FLAMINGO', 'FRIGATE', 'GAMBELS QUAIL', 'GANG GANG COCKATOO', 'GILA WOODPECKER', 'GILDED FLICKER', 'GLOSSY IBIS', 'GO AWAY BIRD', 'GOLD WING WARBLER', 'GOLDEN CHEEKED WARBLER', 'GOLDEN CHLOROPHONIA', 'GOLDEN EAGLE', 'GOLDEN PHEASANT', 'GOLDEN PIPIT', 'GOULDIAN FINCH', 'GRAY CATBIRD', 'GRAY KINGBIRD', 'GRAY PARTRIDGE', 'GREAT GRAY OWL', 'GREAT KISKADEE', 'GREAT POTOO', 'GREATOR SAGE GROUSE', 'GREEN BROADBILL', 'GREEN JAY', 'GREEN MAGPIE', 'GREY PLOVER', 'GROVED BILLED ANI', 'GUINEA TURACO', 'GUINEAFOWL', 'GYRFALCON', 'HARLEQUIN DUCK', 'HARPY EAGLE', 'HAWAIIAN GOOSE', 'HELMET VANGA', 'HIMALAYAN MONAL', 'HOATZIN', 'HOODED MERGANSER', 'HOOPOES', 'HORNBILL', 'HORNED GUAN', 'HORNED LARK', 'HORNED SUNGEM', 'HOUSE FINCH', 'HOUSE SPARROW', 'HYACINTH MACAW', 'IMPERIAL SHAQ', 'INCA TERN', 'INDIAN BUSTARD', 'INDIAN PITTA', 'INDIAN ROLLER', 'INDIGO BUNTING', 'IWI', 'JABIRU', 'JAVA SPARROW', 'KAGU', 'KAKAPO', 'KILLDEAR', 'KING VULTURE', 'KIWI', 'KOOKABURRA', 'LARK BUNTING', 'LAZULI BUNTING', 'LILAC ROLLER', 'LONG-EARED OWL', 'MAGPIE GOOSE', 'MALABAR HORNBILL', 'MALACHITE KINGFISHER', 'MALAGASY WHITE EYE', 'MALEO', 'MALLARD DUCK', 'MANDRIN DUCK', 'MANGROVE CUCKOO', 'MARABOU STORK', 'MASKED BOOBY', 'MASKED LAPWING', 'MIKADO PHEASANT', 'MOURNING DOVE', 'MYNA', 'NICOBAR PIGEON', 'NOISY 
FRIARBIRD', 'NORTHERN CARDINAL', 'NORTHERN FLICKER', 'NORTHERN FULMAR', 'NORTHERN GANNET', 'NORTHERN GOSHAWK', 'NORTHERN JACANA', 'NORTHERN MOCKINGBIRD', 'NORTHERN PARULA', 'NORTHERN RED BISHOP', 'NORTHERN SHOVELER', 'OCELLATED TURKEY', 'OKINAWA RAIL', 'ORANGE BRESTED BUNTING', 'ORIENTAL BAY OWL', 'OSPREY', 'OSTRICH', 'OVENBIRD', 'OYSTER CATCHER', 'PAINTED BUNTIG', 'PALILA', 'PARADISE TANAGER', 'PARAKETT AKULET', 'PARUS MAJOR', 'PATAGONIAN SIERRA FINCH', 'PEACOCK', 'PELICAN', 'PEREGRINE FALCON', 'PHILIPPINE EAGLE', 'PINK ROBIN', 'POMARINE JAEGER', 'PUFFIN', 'PURPLE FINCH', 'PURPLE GALLINULE', 'PURPLE MARTIN', 'PURPLE SWAMPHEN', 'PYGMY KINGFISHER', 'QUETZAL', 'RAINBOW LORIKEET', 'RAZORBILL', 'RED BEARDED BEE EATER', 'RED BELLIED PITTA', 'RED BROWED FINCH', 'RED FACED CORMORANT', 'RED FACED WARBLER', 'RED FODY', 'RED HEADED DUCK', 'RED HEADED WOODPECKER', 'RED HONEY CREEPER', 'RED NAPED TROGON', 'RED TAILED HAWK', 'RED TAILED THRUSH', 'RED WINGED BLACKBIRD', 'RED WISKERED BULBUL', 'REGENT BOWERBIRD', 'RING-NECKED PHEASANT', 'ROADRUNNER', 'ROBIN', 'ROCK DOVE', 'ROSY FACED LOVEBIRD', 'ROUGH LEG BUZZARD', 'ROYAL FLYCATCHER', 'RUBY THROATED HUMMINGBIRD', 'RUDY KINGFISHER', 'RUFOUS KINGFISHER', 'RUFUOS MOTMOT', 'SAMATRAN THRUSH', 'SAND MARTIN', 'SANDHILL CRANE', 'SATYR TRAGOPAN', 'SCARLET CROWNED FRUIT DOVE', 'SCARLET IBIS', 'SCARLET MACAW', 'SCARLET TANAGER', 'SHOEBILL', 'SHORT BILLED DOWITCHER', 'SMITHS LONGSPUR', 'SNOWY EGRET', 'SNOWY OWL', 'SORA', 'SPANGLED COTINGA', 'SPLENDID WREN', 'SPOON BILED SANDPIPER', 'SPOONBILL', 'SPOTTED CATBIRD', 'SRI LANKA BLUE MAGPIE', 'STEAMER DUCK', 'STORK BILLED KINGFISHER', 'STRAWBERRY FINCH', 'STRIPED OWL', 'STRIPPED MANAKIN', 'STRIPPED SWALLOW', 'SUPERB STARLING', 'SWINHOES PHEASANT', 'TAIWAN MAGPIE', 'TAKAHE', 'TASMANIAN HEN', 'TEAL DUCK', 'TIT MOUSE', 'TOUCHAN', 'TOWNSENDS WARBLER', 'TREE SWALLOW', 'TROPICAL KINGBIRD', 'TRUMPTER SWAN', 'TURKEY VULTURE', 'TURQUOISE MOTMOT', 'UMBRELLA BIRD', 'VARIED THRUSH', 'VENEZUELIAN TROUPIAL', 'VERMILION FLYCATHER', 'VICTORIA CROWNED PIGEON', 'VIOLET GREEN SWALLOW', 'VULTURINE GUINEAFOWL', 'WALL CREAPER', 'WATTLED CURASSOW', 'WHIMBREL', 'WHITE BROWED CRAKE', 'WHITE CHEEKED TURACO', 'WHITE NECKED RAVEN', 'WHITE TAILED TROPIC', 'WHITE THROATED BEE EATER', 'WILD TURKEY', 'WILSONS BIRD OF PARADISE', 'WOOD DUCK', 'YELLOW BELLIED FLOWERPECKER', 'YELLOW CACIQUE', 'YELLOW HEADED BLACKBIRD', 'images to test']
###Markdown
The figure below shows the difference between an image that has been zoomed, interpolated, rotated, and then interpolated again (the approach used by most other deep learning libraries), shown on the right, and an image that has been zoomed and rotated as a single operation and then interpolated only once (the fastai approach), shown on the left.
###Code
#hide_input
#id interpolations
#caption A comparison of fastai's data augmentation strategy (left) and the traditional approach (right).
dblock1 = DataBlock(blocks=(ImageBlock(), CategoryBlock()),
get_y=parent_label,
item_tfms=Resize(460))
dls1 = dblock1.dataloaders([(Path.cwd()/'images to test'/'3.jpg')]*100, bs=8)
dls1.train.get_idxs = lambda: Inf.ones
x,y = dls1.valid.one_batch()
_,axs = subplots(1, 2)
x1 = TensorImage(x.clone())
x1 = x1.affine_coord(sz=224)
x1 = x1.rotate(draw=30, p=1.)
x1 = x1.zoom(draw=1.2, p=1.)
x1 = x1.warp(draw_x=-0.2, draw_y=0.2, p=1.)
tfms = setup_aug_tfms([Rotate(draw=30, p=1, size=224), Zoom(draw=1.2, p=1., size=224),
Warp(draw_x=-0.2, draw_y=0.2, p=1., size=224)])
x = Pipeline(tfms)(x)
#x.affine_coord(coord_tfm=coord_tfm, sz=size, mode=mode, pad_mode=pad_mode)
TensorImage(x[0]).show(ctx=axs[0])
TensorImage(x1[0]).show(ctx=axs[1]);
###Output
_____no_output_____
###Markdown
Training: resnet34
###Code
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(2)
learn.model
###Output
_____no_output_____
###Markdown
Improving our model
###Code
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.lr_find(start_lr=1e-5, end_lr=1e1)
learn.fine_tune??
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fit_one_cycle(3, 3e-3)
learn.unfreeze()
learn.lr_find()
learn.fit_one_cycle(10, lr_max=9e-5)
learn.recorder.plot_loss()
###Output
_____no_output_____
|
Chapter_11/crnn_gan/simple_gan.ipynb
|
###Markdown
Music Generation using GANs In this notebook, we will use a simplified version of [C-RNN-GAN](https://arxiv.org/abs/1611.09904) to generate music. We will focus upon:+ Utilities to prepare the generator model+ Utilities to prepare the discriminator model+ Data preparation+ Utility to prepare a custom training loop We will make use of the same dataset we used in the LSTM-based music generation notebook. [](https://colab.research.google.com/github/PacktPublishing/Hands-On-Generative-AI-with-Python-and-TensorFlow-2/blob/master/Chapter_10/crnn_gan/simple_gan.ipynb) Import Libraries
###Code
import sys
import matplotlib.pyplot as plt
import numpy as np
import pickle
import glob
from music21 import converter, instrument, note, chord, stream
from tensorflow.keras.layers import Input, Dense, Reshape, Dropout, LSTM, Bidirectional
from tensorflow.keras.layers import BatchNormalization, Activation, ZeroPadding2D
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
###Output
_____no_output_____
###Markdown
Extract MIDI Dataset
###Code
!unzip midi_dataset.zip
###Output
_____no_output_____
###Markdown
Prepare Generator
###Code
def build_generator(latent_dim,seq_shape):
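    # A simple MLP generator: it maps a latent noise vector of size `latent_dim`
    # to a (sequence_length, 1) sequence of note values squashed into [-1, 1] by tanh,
    # matching the normalization used in prepare_sequences().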
model = Sequential()
model.add(Dense(256, input_dim=latent_dim))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.8))
model.add(Dense(512))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.8))
model.add(Dense(1024))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.8))
model.add(Dense(np.prod(seq_shape), activation='tanh'))
model.add(Reshape(seq_shape))
model.summary()
noise = Input(shape=(latent_dim,))
seq = model(noise)
return Model(noise, seq)
###Output
_____no_output_____
###Markdown
Prepare Discriminator
###Code
def build_discriminator(seq_shape):
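    # The discriminator reads the note sequence with (bidirectional) LSTM layers
    # and outputs a single probability that the sequence is real rather than generated.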
model = Sequential()
model.add(LSTM(512, input_shape=seq_shape, return_sequences=True))
model.add(Bidirectional(LSTM(512)))
model.add(Dense(512))
model.add(LeakyReLU(alpha=0.2))
model.add(Dense(256))
model.add(LeakyReLU(alpha=0.2))
model.add(Dense(1, activation='sigmoid'))
model.summary()
seq = Input(shape=seq_shape)
validity = model(seq)
return Model(seq, validity)
###Output
_____no_output_____
###Markdown
Data Preparation Utility
###Code
def prepare_sequences(notes, n_vocab):
""" Prepare the sequences used by the Neural Network """
sequence_length = 100
# Get all pitch names
pitchnames = sorted(set(item for item in notes))
# Create a dictionary to map pitches to integers
note_to_int = dict((note, number) for number, note in enumerate(pitchnames))
network_input = []
network_output = []
# create input sequences and the corresponding outputs
for i in range(0, len(notes) - sequence_length, 1):
sequence_in = notes[i:i + sequence_length]
sequence_out = notes[i + sequence_length]
network_input.append([note_to_int[char] for char in sequence_in])
network_output.append(note_to_int[sequence_out])
n_patterns = len(network_input)
# Reshape the input into a format compatible with LSTM layers
network_input = np.reshape(network_input, (n_patterns, sequence_length, 1))
# Normalize input between -1 and 1
network_input = (network_input - float(n_vocab)/2) / (float(n_vocab)/2)
network_output = to_categorical(network_output)
return (network_input, network_output)
###Output
_____no_output_____
###Markdown
Utility to Transform Model output to MIDI
###Code
def create_midi(prediction_output, filename):
""" convert the output from the prediction to notes and create a midi file
from the notes """
offset = 0
output_notes = []
# create note and chord objects based on the values generated by the model
for item in prediction_output:
pattern = item[0]
# pattern is a chord
if ('.' in pattern) or pattern.isdigit():
notes_in_chord = pattern.split('.')
notes = []
for current_note in notes_in_chord:
new_note = note.Note(int(current_note))
new_note.storedInstrument = instrument.Piano()
notes.append(new_note)
new_chord = chord.Chord(notes)
new_chord.offset = offset
output_notes.append(new_chord)
# pattern is a note
else:
new_note = note.Note(pattern)
new_note.offset = offset
new_note.storedInstrument = instrument.Piano()
output_notes.append(new_note)
# increase offset each iteration so that notes do not stack
offset += 0.5
midi_stream = stream.Stream(output_notes)
midi_stream.write('midi', fp='{}.mid'.format(filename))
###Output
_____no_output_____
###Markdown
Utilities for Training GAN
###Code
def generate(latent_dim, generator, input_notes,filename='gan_final'):
# Get pitch names and store in a dictionary
notes = input_notes
pitchnames = sorted(set(item for item in notes))
int_to_note = dict((number, note) for number, note in enumerate(pitchnames))
# Use random noise to generate sequences
noise = np.random.normal(0, 1, (1, latent_dim))
predictions = generator.predict(noise)
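    # Rescale the generator's tanh output from roughly [-1, 1] back to integer note indices;
    # 242 appears to be a hard-coded constant chosen for the original dataset's vocabulary size,
    # so out-of-range indices are possible if the vocabulary differs.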
pred_notes = [x*242+242 for x in predictions[0]]
pred_notes = [int_to_note[int(x)] for x in pred_notes]
create_midi(pred_notes, filename)
def plot_loss(disc_loss, gen_loss):
plt.plot(disc_loss, c='red')
plt.plot(gen_loss, c='blue')
plt.title("GAN Loss per Epoch")
plt.legend(['Discriminator', 'Generator'])
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.savefig('GAN_Loss_per_Epoch_final.png', transparent=True)
plt.show()
plt.close()
def train(latent_dim,
notes,
generator,
discriminator,
gan,
epochs,
batch_size=128,
sample_interval=50):
disc_loss =[]
gen_loss = []
n_vocab = len(set(notes))
X_train, y_train = prepare_sequences(notes, n_vocab)
# ground truths
real = np.ones((batch_size, 1))
fake = np.zeros((batch_size, 1))
for epoch in range(epochs):
idx = np.random.randint(0, X_train.shape[0], batch_size)
real_seqs = X_train[idx]
noise = np.random.normal(0, 1, (batch_size, latent_dim))
# generate a batch of new note sequences
gen_seqs = generator.predict(noise)
# train the discriminator
d_loss_real = discriminator.train_on_batch(real_seqs, real)
d_loss_fake = discriminator.train_on_batch(gen_seqs, fake)
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
# train the Generator
noise = np.random.normal(0, 1, (batch_size, latent_dim))
g_loss = gan.train_on_batch(noise, real)
# visualize progress
if epoch % sample_interval == 0:
print ("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch,
d_loss[0],
100*d_loss[1],
g_loss))
disc_loss.append(d_loss[0])
gen_loss.append(g_loss)
generate(latent_dim, generator, notes,filename='gan_epoch'+str(epoch))
generate(latent_dim, generator, notes)
plot_loss(disc_loss,gen_loss)
###Output
_____no_output_____
###Markdown
Prepare Generator, Discriminator and GAN Models
###Code
rows = 100
seq_length = rows
seq_shape = (seq_length, 1)
latent_dim = 1000
optimizer = Adam(0.0002, 0.5)
# Build and compile the discriminator
discriminator = build_discriminator(seq_shape)
discriminator.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
# Build the generator
generator = build_generator(latent_dim,seq_shape)
# The generator takes noise as input and generates note sequences
z = Input(shape=(latent_dim,))
generated_seq = generator(z)
# For the combined model we will only train the generator
discriminator.trainable = False
# The discriminator takes generated images as input and determines validity
validity = discriminator(generated_seq)
# The combined model (stacked generator and discriminator)
# Trains the generator to fool the discriminator
gan = Model(z, validity)
gan.compile(loss='binary_crossentropy', optimizer=optimizer)
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm (LSTM) (None, 100, 512) 1052672
_________________________________________________________________
bidirectional (Bidirectional (None, 1024) 4198400
_________________________________________________________________
dense (Dense) (None, 512) 524800
_________________________________________________________________
leaky_re_lu (LeakyReLU) (None, 512) 0
_________________________________________________________________
dense_1 (Dense) (None, 256) 131328
_________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, 256) 0
_________________________________________________________________
dense_2 (Dense) (None, 1) 257
=================================================================
Total params: 5,907,457
Trainable params: 5,907,457
Non-trainable params: 0
_________________________________________________________________
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_3 (Dense) (None, 256) 256256
_________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, 256) 0
_________________________________________________________________
batch_normalization (BatchNo (None, 256) 1024
_________________________________________________________________
dense_4 (Dense) (None, 512) 131584
_________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, 512) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 512) 2048
_________________________________________________________________
dense_5 (Dense) (None, 1024) 525312
_________________________________________________________________
leaky_re_lu_4 (LeakyReLU) (None, 1024) 0
_________________________________________________________________
batch_normalization_2 (Batch (None, 1024) 4096
_________________________________________________________________
dense_6 (Dense) (None, 100) 102500
_________________________________________________________________
reshape (Reshape) (None, 100, 1) 0
=================================================================
Total params: 1,022,820
Trainable params: 1,019,236
Non-trainable params: 3,584
_________________________________________________________________
###Markdown
Prepare Dataset
###Code
from tqdm.notebook import tqdm
def get_notes():
""" Get all the notes and chords from the midi files """
notes = []
for file in tqdm(glob.glob("midi_dataset/*.mid")):
midi = converter.parse(file)
print("Parsing %s" % file)
notes_to_parse = None
try: # file has instrument parts
s2 = instrument.partitionByInstrument(midi)
notes_to_parse = s2.parts[0].recurse()
except: # file has notes in a flat structure
notes_to_parse = midi.flat.notes
for element in notes_to_parse:
if isinstance(element, note.Note):
notes.append(str(element.pitch))
elif isinstance(element, chord.Chord):
notes.append('.'.join(str(n) for n in element.normalOrder))
return notes
# Load and convert the data
notes = get_notes()
###Output
_____no_output_____
###Markdown
Train the GAN
###Code
train(latent_dim, notes, generator, discriminator, gan,epochs=300, batch_size=64, sample_interval=5)
###Output
0 [D loss: 0.700599, acc.: 2.34%] [G loss: 0.690003]
5 [D loss: 0.330439, acc.: 87.50%] [G loss: 0.965924]
10 [D loss: 0.169127, acc.: 95.31%] [G loss: 2.610085]
15 [D loss: 0.522524, acc.: 78.12%] [G loss: 1.376299]
20 [D loss: 0.284567, acc.: 91.41%] [G loss: 2.609391]
25 [D loss: 0.248017, acc.: 91.41%] [G loss: 2.243087]
30 [D loss: 0.337728, acc.: 88.28%] [G loss: 2.095068]
35 [D loss: 0.339113, acc.: 81.25%] [G loss: 2.083154]
40 [D loss: 0.351858, acc.: 84.38%] [G loss: 1.952475]
45 [D loss: 0.273926, acc.: 90.62%] [G loss: 2.543951]
50 [D loss: 0.248093, acc.: 90.62%] [G loss: 2.172482]
55 [D loss: 0.271015, acc.: 91.41%] [G loss: 2.190949]
60 [D loss: 0.356364, acc.: 81.25%] [G loss: 1.983520]
65 [D loss: 0.405131, acc.: 81.25%] [G loss: 1.702689]
70 [D loss: 0.283381, acc.: 87.50%] [G loss: 2.024445]
75 [D loss: 0.393422, acc.: 80.47%] [G loss: 1.576271]
80 [D loss: 0.443313, acc.: 78.91%] [G loss: 1.448712]
85 [D loss: 0.478743, acc.: 78.91%] [G loss: 1.748747]
90 [D loss: 0.419174, acc.: 82.03%] [G loss: 1.563135]
95 [D loss: 0.605185, acc.: 66.41%] [G loss: 1.500393]
100 [D loss: 0.445336, acc.: 79.69%] [G loss: 1.744036]
105 [D loss: 0.475579, acc.: 77.34%] [G loss: 1.837940]
110 [D loss: 0.466865, acc.: 78.91%] [G loss: 1.504704]
115 [D loss: 0.546925, acc.: 74.22%] [G loss: 1.405013]
120 [D loss: 0.368323, acc.: 77.34%] [G loss: 3.156564]
125 [D loss: 0.614834, acc.: 69.53%] [G loss: 1.279521]
130 [D loss: 0.495636, acc.: 75.00%] [G loss: 1.788097]
135 [D loss: 0.559726, acc.: 74.22%] [G loss: 1.299424]
140 [D loss: 0.498714, acc.: 77.34%] [G loss: 1.562320]
145 [D loss: 0.526421, acc.: 75.00%] [G loss: 1.254598]
150 [D loss: 0.619119, acc.: 66.41%] [G loss: 1.231451]
155 [D loss: 0.586559, acc.: 68.75%] [G loss: 1.293758]
160 [D loss: 0.565761, acc.: 70.31%] [G loss: 1.328592]
165 [D loss: 0.588010, acc.: 67.97%] [G loss: 1.293877]
170 [D loss: 0.539475, acc.: 75.78%] [G loss: 1.601530]
175 [D loss: 0.604363, acc.: 66.41%] [G loss: 1.113365]
180 [D loss: 0.615736, acc.: 67.97%] [G loss: 1.073653]
185 [D loss: 0.567547, acc.: 75.00%] [G loss: 1.203989]
190 [D loss: 0.665460, acc.: 58.59%] [G loss: 1.032968]
195 [D loss: 0.594707, acc.: 66.41%] [G loss: 1.124367]
200 [D loss: 0.631250, acc.: 60.16%] [G loss: 1.083897]
205 [D loss: 0.596987, acc.: 70.31%] [G loss: 1.108447]
210 [D loss: 0.632303, acc.: 64.84%] [G loss: 0.894515]
215 [D loss: 0.613716, acc.: 64.84%] [G loss: 0.939993]
220 [D loss: 0.675116, acc.: 56.25%] [G loss: 0.993536]
225 [D loss: 0.627448, acc.: 62.50%] [G loss: 1.263386]
230 [D loss: 2.464319, acc.: 31.25%] [G loss: 3.088368]
235 [D loss: 0.617016, acc.: 64.84%] [G loss: 0.955980]
240 [D loss: 0.667406, acc.: 63.28%] [G loss: 0.843850]
245 [D loss: 0.622891, acc.: 67.97%] [G loss: 0.858379]
250 [D loss: 0.702244, acc.: 61.72%] [G loss: 0.918469]
255 [D loss: 0.653216, acc.: 64.84%] [G loss: 0.759386]
260 [D loss: 0.706601, acc.: 55.47%] [G loss: 0.739120]
265 [D loss: 0.641720, acc.: 64.84%] [G loss: 0.885466]
270 [D loss: 0.658520, acc.: 62.50%] [G loss: 0.761149]
275 [D loss: 0.672581, acc.: 60.16%] [G loss: 0.765445]
280 [D loss: 0.674952, acc.: 57.81%] [G loss: 0.743806]
285 [D loss: 0.665785, acc.: 61.72%] [G loss: 0.747912]
290 [D loss: 0.694079, acc.: 50.00%] [G loss: 0.719472]
295 [D loss: 0.695304, acc.: 52.34%] [G loss: 0.741878]
|
furniture/model-ensemble.ipynb
|
###Markdown
**Model**
###Code
from keras.applications.inception_resnet_v2 import InceptionResNetV2
from keras.applications.xception import Xception
from keras.applications.inception_v3 import InceptionV3
from keras.applications.nasnet import NASNetLarge
input_tensor = Input(shape=(INPUT_SIZE[0], INPUT_SIZE[1], 3)) # input image
print("Building base model for InceptionResNetV2...")
inceptionresnet_base = InceptionResNetV2(input_tensor=input_tensor, weights='imagenet', include_top=False)
features_iresnet = GlobalAveragePooling2D()(inceptionresnet_base.output)
print("Building base model for Xception...")
xception_base = Xception(input_tensor=input_tensor, weights='imagenet', include_top=False)
features_xception = GlobalAveragePooling2D()(xception_base.output)
'''print("Building base model for InceptionV3...")
inception_base = InceptionV3(input_tensor=input_tensor, weights='imagenet', include_top=False)
features_inception = MaxPooling2D((2,2))(inception_base.output)
features_inception = GlobalAveragePooling2D()(features_inception)'''
print("Building base model for NASNetLarge...")
nasnet_base = NASNetLarge(input_tensor=input_tensor, weights='imagenet', include_top=False)
features_nasnet = GlobalAveragePooling2D()(nasnet_base.output)
print("Done!")
features_list = [features_iresnet, features_xception, features_nasnet]
x = Concatenate(axis=1)(features_list)
x = Dense(1024, activation='relu', kernel_regularizer=regularizers.l2(0.00))(x)
x = BatchNormalization()(x)
predictions = Dense(128, activation='softmax')(x)
model = Model(inputs=input_tensor, outputs=predictions)
def set_trainable(boolean):
global xception_base, inceptionresnet_base, inception_base
for layer in xception_base.layers[:46]:
layer.trainable = False
for layer in xception_base.layers[46:]:
layer.trainable = boolean[0]
for layer in inceptionresnet_base.layers[:712]:
layer.trainable = False
for layer in inceptionresnet_base.layers[712:]:
layer.trainable = boolean[1]
for layer in nasnet_base.layers[:720]:
layer.trainable = False
for layer in nasnet_base.layers[720:]:
layer.trainable = boolean[2]
# default
set_trainable([False, False, False])
###Output
_____no_output_____
###Markdown
**Training**
###Code
tensorboard = callbacks.TensorBoard(log_dir='./logs', histogram_freq=0, batch_size=16,
write_grads=True , write_graph=True)
checkpoints = callbacks.ModelCheckpoint("inceptionresnet-{val_loss:.3f}-{val_acc:.3f}.h5",
monitor='val_loss', verbose=1, save_best_only=True,
save_weights_only=False, mode='auto', period=0)
reduce_on_plateau = callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=6, verbose=1,
mode='auto', min_delta=0.0001, cooldown=0, min_lr=0)
adadelta = optimizers.Adadelta(lr=2.0, rho=0.95, epsilon=None, decay=0.1)
sgd_warmup = optimizers.SGD(lr=0.01, momentum=0.1, decay=0.0, nesterov=False)
sgd = optimizers.SGD(lr=0.1, momentum=0.5, decay=0.1, nesterov=False)
!nvidia-settings -a [gpu:0]/GPUFanControlState=1
!nvidia-settings -a [fan:0]/GPUTargetFanSpeed=90
!rm -R logs
model.compile(loss='categorical_crossentropy',
optimizer=sgd_warmup,
metrics=['acc'])
print("Training Progress:")
model_log = model.fit_generator(train_aug_generator, validation_data=validation_generator,
epochs=1, workers=5, use_multiprocessing=True,
callbacks=[checkpoints])
model.compile(loss='categorical_crossentropy',
optimizer=adadelta,
metrics=['acc'])
print("Training Progress:")
model_log = model.fit_generator(train_aug_generator, validation_data=validation_generator,
epochs=1, workers=5, use_multiprocessing=True,
callbacks=[checkpoints])
model.compile(loss='categorical_crossentropy',
optimizer=sgd,
metrics=['acc'])
print("Training Progress:")
model_log = model.fit_generator(train_aug_generator, validation_data=validation_generator,
epochs=3, workers=5, use_multiprocessing=True,
callbacks=[checkpoints])
!nvidia-settings -a [gpu:0]/GPUFanControlState=0
###Output
_____no_output_____
###Markdown
**Fine-tuning**
###Code
BATCH_SIZE = 50
train_datagen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_generator = train_datagen.flow_from_directory('data2/train',
target_size=INPUT_SIZE,
batch_size=BATCH_SIZE)
train_aug_datagen = ImageDataGenerator(rotation_range=3,
width_shift_range=0.1,
height_shift_range=0.1,
rescale=1./255,
shear_range=0.1,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
train_aug_generator = train_aug_datagen.flow_from_directory('data2/train',
target_size=INPUT_SIZE,
batch_size=BATCH_SIZE,
class_mode='categorical')
train_maxaug_datagen = ImageDataGenerator(rotation_range=3,
width_shift_range=0.1,
height_shift_range=0.1,
rescale=1./255,
shear_range=0.1,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
train_maxaug_generator = train_maxaug_datagen.flow_from_directory('train_aug',
target_size=INPUT_SIZE,
batch_size=BATCH_SIZE,
class_mode='categorical')
validation_datagen = ImageDataGenerator(rescale=1./255)
validation_generator = validation_datagen.flow_from_directory(
'data2/validation',
target_size=INPUT_SIZE,
batch_size=BATCH_SIZE,
class_mode='categorical')
!nvidia-settings -a [gpu:0]/GPUFanControlState=1
!nvidia-settings -a [fan:0]/GPUTargetFanSpeed=90
options = [[[True, False, False], train_aug_generator],
[[False, True, False], train_aug_generator],
[[False, False, True], train_aug_generator],
[[True, False, False], train_maxaug_generator],
[[False, True, False], train_maxaug_generator],
[[False, False, True], train_maxaug_generator],
]
sgd = optimizers.SGD(lr=0.1, momentum=0.5, decay=0.1, nesterov=False)
for option in options:
set_trainable(option[0])
model.compile(optimizer=sgd, loss='categorical_crossentropy',metrics=['acc'])
print("Training Progress for", option,":")
model_log = model.fit_generator(option[1], validation_data=validation_generator,
epochs=3,
callbacks=[checkpoints])
!nvidia-settings -a [gpu:0]/GPUFanControlState=0
###Output
_____no_output_____
###Markdown
**Evaluation**TODO: Fix class mapping
###Code
"""from keras.models import load_model
from sklearn.metrics import classification_report, confusion_matrix
import matplotlib.pyplot as plt
import numpy as np
%config InlineBackend.figure_format = 'retina'
import itertools, pickle
from glob import glob
class_names = glob("train_aug/*") # Reads all the folders in which images are present
class_names = sorted(class_names) # Sorting them
fixed_classes = []
for class_name in class_names:
fixed_classes.append(class_name[10:])
name_id_map = dict(zip(range(len(class_names)), fixed_classes))
og_classes = [str(x) for x in range(1,129)]
"""validation_datagen = ImageDataGenerator(rescale=1./255)
validation_generator = validation_datagen.flow_from_directory(
'data2/validation', shuffle=False,
target_size=(299, 299),
batch_size=batch_size,
class_mode='categorical')
"""Y_pred = model.predict_generator(validation_generator, 6322 // batch_size+1)
y_pred = np.argmax(Y_pred, axis=1)
"""corr_preds = []
for pred in y_pred:
corr_preds.append(int(name_id_map[pred]))
"""print('Classification Report')
print(classification_report(validation_generator.classes, y_pred, target_names=og_classes))
###Output
_____no_output_____
|
notebooks/cv2PYNQ - Get Started.ipynb
|
###Markdown
cv2PYNQ: Get Started This Jupyter notebook serves as a quick-start guide to the cv2PYNQ library. It demonstrates the library's capabilities as well as its limitations and the points to pay attention to. This notebook was created based on [this](https://github.com/Xilinx/PYNQ-ComputerVision/tree/master/notebooks/computer_vision) template. Include cv2PYNQ
###Code
import cv2pynq as cv2
###Output
_____no_output_____
###Markdown
The video subsystem with HDMI The library uses the video subsystem from the base PYNQ design. If you want to learn all about its capabilities, use the notebooks at https://github.com/Xilinx/PYNQ/tree/master/boards/Pynq-Z1/base/notebooks/video provided by Xilinx as an introduction. You can access the video subsystem simply via *cv2.video*. It contains the HDMI-in and HDMI-out interfaces. CAUTION: hdmi_in.start() will take some time and will fail if no incoming video signal is detected.
###Code
hdmi_in = cv2.video.hdmi_in
hdmi_out = cv2.video.hdmi_out
hdmi_in.configure(cv2.PIXEL_GRAY)
hdmi_out.configure(hdmi_in.mode)
hdmi_in.start()
hdmi_out.start()
print(hdmi_in.mode)
###Output
VideoMode: width=1920 height=1080 bpp=8
###Markdown
Run the original OpenCV Sobel 5x5
###Code
import cv2 as openCV
import time
iterations = 10
start = time.time()
for i in range(iterations):
inframe = hdmi_in.readframe()
outframe = hdmi_out.newframe()
openCV.Sobel(inframe,-1,1,0,ksize=5,dst=outframe)
inframe.freebuffer()
hdmi_out.writeframe(outframe)
end = time.time()
print("Frames per second using OpenCV: " + str(iterations / (end - start)))
###Output
Frames per second using OpenCV: 2.7751992189247194
###Markdown
Run the cv2PYNQ Sobel 5x5 in the Programmable Logic
###Code
import time
iterations = 10
start = time.time()
for i in range(iterations):
inframe = hdmi_in.readframe()
outframe = hdmi_out.newframe()
cv2.Sobel(inframe,-1,1,0,ksize=5,dst=outframe)
inframe.freebuffer()
hdmi_out.writeframe(outframe)
end = time.time()
print("Frames per second using cv2PYNQ: " + str(iterations / (end - start)))
###Output
Frames per second using cv2PYNQ: 59.75132486181549
###Markdown
cv2PYNQ and contiguous memory The video subsystem returns images as [contiguous memory arrays](https://pynq.readthedocs.io/en/latest/pynq_libraries/xlnk.html). This allows the cv2PYNQ library to stream the data directly through the hardware. If the image is a normal numpy ndarray and no destination is given, the library must execute two extra copy operations. This results in a noticeable drop in frame rate, but it is still faster than the software version.
###Code
import numpy as np
image = np.ndarray(shape=(1080,1920),dtype=np.uint8)
iterations = 10
start = time.time()
for i in range(iterations):
sobel = cv2.Sobel(image,-1,1,0,ksize=5)
end = time.time()
print("Frames per second using cv2PYNQ without CMA: " + str(iterations / (end - start)))
###Output
Frames per second using cv2PYNQ without CMA: 16.418144610118855
###Markdown
The solution to this problem is to allocate contiguous memory arrays and use them as images. Don't forget to free them after use.
###Code
from pynq import Xlnk
xlnk = Xlnk()
image_buffer = xlnk.cma_array(shape=(1080,1920), dtype=np.uint8)
return_buffer = xlnk.cma_array(shape=(1080,1920), dtype=np.uint8)
iterations = 10
start = time.time()
for i in range(iterations):
cv2.Sobel(image_buffer,-1,1,0,ksize=5,dst=return_buffer)
end = time.time()
print("Frames per second using cv2PYNQ with CMA: " + str(iterations / (end - start)))
image_buffer.close()
return_buffer.close()
###Output
Frames per second using cv2PYNQ with CMA: 65.67885150201688
###Markdown
Clean up HDMI driversNOTE: This is needed to reset the HDMI drivers in a clean state. If this is not run, subsequent executions of this notebook may show visual artifacts on the HDMI out (usually a shifted output image)
###Code
hdmi_out.close()
hdmi_in.close()
###Output
_____no_output_____
###Markdown
Clean up cv2PYNQNOTE: This cleanup is needed because the library allocates contiguous memory and must free it. Otherwise, it may allocate all the available contiguous memory after including it a few times. The only solution is a reboot of the device, therefore do the cleanup ;)
###Code
cv2.close()
###Output
_____no_output_____
|
notebook_final.ipynb
|
###Markdown
1. Introduction to Baby Names Data What's in a name? That which we call a rose, By any other name would smell as sweet. In this project, we will explore a rich dataset of first names of babies born in the US that spans a period of more than 100 years! This surprisingly simple dataset can help us uncover so many interesting stories, and that is exactly what we are going to be doing. Let us start by reading the data.
###Code
# Import modules
import pandas as pd
# Read names into a dataframe: bnames
bnames = pd.read_csv('datasets/names.csv.gz')
print(bnames.head())
###Output
name sex births year
0 Mary F 7065 1880
1 Anna F 2604 1880
2 Emma F 2003 1880
3 Elizabeth F 1939 1880
4 Minnie F 1746 1880
###Markdown
2. Exploring Trends in NamesOne of the first things we want to do is to understand naming trends. Let us start by figuring out the top five most popular male and female names for this decade (born 2011 and later). Do you want to make any guesses? Go on, be a sport!!
###Code
# get the all the names in the last decade
bnames_2010 = bnames.loc[bnames['year'] > 2010]
# sum up all of the people based on name and sex
bnames_2010_agg = bnames_2010.groupby(['sex', 'name'],as_index = False)['births'].sum()
#sort all the names based on births with female first then get the top 5 of each and remove
# the indexing
bnames_top5 = bnames_2010_agg.sort_values(['sex', 'births'],ascending=[True, False]).\
groupby('sex').head().reset_index(drop=True)
print(bnames_top5)
###Output
sex name births
0 F Emma 121375
1 F Sophia 117352
2 F Olivia 111691
3 F Isabella 103947
4 F Ava 94507
5 M Noah 110280
6 M Mason 105104
7 M Jacob 104722
8 M Liam 103250
9 M William 99144
###Markdown
3. Proportion of BirthsWhile the number of births is a useful metric, making comparisons across years becomes difficult, as one would have to control for population effects. One way around this is to normalize the number of births by the total number of births in that year.
###Code
bnames2 = bnames.copy()
# Compute the proportion of births by year and add it as a new column
total_births_by_year = bnames2.groupby('year')['births'].transform(sum)
bnames2['prop_births'] = bnames2['births']/total_births_by_year
print(bnames2.head())
###Output
name sex births year prop_births
0 Mary F 7065 1880 0.035065
1 Anna F 2604 1880 0.012924
2 Emma F 2003 1880 0.009941
3 Elizabeth F 1939 1880 0.009624
4 Minnie F 1746 1880 0.008666
###Markdown
4. Popularity of Names Now that we have the proportion of births, let us plot the popularity of a name through the years. How about plotting the popularity of the names Alex and Elizabeth, and inspecting the underlying trends for any interesting patterns!
###Code
# Set up matplotlib for plotting in the notebook.
%matplotlib inline
import matplotlib.pyplot as plt
def plot_trends(name, sex):
# -- YOUR CODE HERE --
data = bnames[(bnames.name == name) & (bnames.sex == sex)]
ax = data.plot(x='year',y='births')
ax.set_xlim(1880,2016)
return ax
# -- YOUR CODE HERE --
plot_trends('Alex','M')
plot_trends('Elizabeth','F')
###Output
_____no_output_____
###Markdown
5. Trendy vs. Stable Names Based on the plots we created earlier, we can see that Elizabeth is a fairly stable name, while Alex is not. An interesting question to ask would be: what are the top 5 stable and top 5 trendiest names? A stable name is one whose proportion across years does not vary drastically, while a trendy name is one whose popularity peaks for a short period and then dies down. There are many ways to measure trendiness. A simple measure would be to look at the maximum proportion of births for a name, normalized by the sum of the proportions of births across years. For example, if the name Joe had the proportions 0.1, 0.2, 0.1, 0.1, then the trendiness measure would be 0.2/(0.1 + 0.2 + 0.1 + 0.1), which equals 0.4. Let us use this idea to figure out the top 10 trendiest names in this dataset, among names with at least 1000 births.
###Code
# top10_trendy_names | A Data Frame of the top 10 most trendy names
names = pd.DataFrame()
name_and_sex= bnames.groupby(['name', 'sex'])
names['total'] = name_and_sex['births'].sum()
names['max'] = name_and_sex['births'].max()
names['trendiness'] = names['max']/names['total']
top10_trendy_names = names.loc[names['total'] >= 1000].sort_values('trendiness', ascending=False).head(10).reset_index()
print(top10_trendy_names)
###Output
name sex total max trendiness
0 Christop M 1082 1082 1.000000
1 Royalty F 1057 581 0.549669
2 Kizzy F 2325 1116 0.480000
3 Aitana F 1203 564 0.468828
4 Deneen F 3602 1604 0.445308
5 Moesha F 1067 426 0.399250
6 Marely F 2527 1004 0.397309
7 Kanye M 1304 507 0.388804
8 Tennille F 2172 769 0.354052
9 Kadijah F 1411 486 0.344437
###Markdown
6. Bring in Mortality Data So, what more is in a name? Well, with some further work, it is possible to predict the age of a person based on their name (Whoa! Really????). For this, we will need actuarial data that can tell us the chances that someone is still alive, based on when they were born. Fortunately, the SSA provides detailed actuarial life tables by birth cohorts.

| year | age | qx      | lx    | dx  | Lx    | Tx      | ex    | sex |
|------|-----|---------|-------|-----|-------|---------|-------|-----|
| 1910 | 39  | 0.00283 | 78275 | 222 | 78164 | 3129636 | 39.98 | F   |
| 1910 | 40  | 0.00297 | 78053 | 232 | 77937 | 3051472 | 39.09 | F   |
| 1910 | 41  | 0.00318 | 77821 | 248 | 77697 | 2973535 | 38.21 | F   |
| 1910 | 42  | 0.00332 | 77573 | 257 | 77444 | 2895838 | 37.33 | F   |
| 1910 | 43  | 0.00346 | 77316 | 268 | 77182 | 2818394 | 36.45 | F   |
| 1910 | 44  | 0.00351 | 77048 | 270 | 76913 | 2741212 | 35.58 | F   |

You can read the documentation for the life tables to understand what the different columns mean. The key column of interest to us is lx, which provides the number of people born in a year who live up to a given age. The probability of being alive can be derived as lx divided by 100,000. Given that 2016 is the latest year in the baby names dataset, we are interested only in the subset of this data that will help us answer the question, "What percentage of people born in Year X are still alive in 2016?" Let us use this data and plot it to get a sense of the mortality distribution!
###Code
# Read lifetables from datasets/lifetables.csv
lifetables= pd.read_csv('datasets/lifetables.csv')
# Extract subset relevant to those alive in 2016
lifetables_2016 = lifetables[lifetables['age'] + lifetables['year'] == 2016]
# Plot the mortality distribution: year vs. lx
lifetables_2016.plot(x='year',y='lx')
###Output
_____no_output_____
###Markdown
7. Smoothen the Curve!We are almost there. There is just one small glitch. The cohort life tables are provided only for every decade. In order to figure out the distribution of people alive, we need the probabilities for every year. One way to fill up the gaps in the data is to use some kind of interpolation. Let us keep things simple and use linear interpolation to fill out the gaps in values of lx, between the years 1900 and 2016.
###Code
# Create smoothened lifetable_2016_s by interpolating values of lx
import numpy as np
year = np.arange(1900, 2016)
male_and_female = {"M": pd.DataFrame(), "F": pd.DataFrame()}
for sex in ["M", "F"]:
d = lifetables_2016[lifetables_2016['sex']==sex][["year", "lx"]]
male_and_female[sex] = d.set_index('year').reindex(year).interpolate().reset_index()
male_and_female[sex]['sex'] = sex
lifetable_2016_s = pd.concat(male_and_female, ignore_index = True)
lifetable_2016_s[(lifetable_2016_s.sex=="M")].plot(x='year',y='lx',label='lx Male')
lifetable_2016_s[(lifetable_2016_s.sex=="F")].plot(x='year',y='lx',label='lx Female')
###Output
_____no_output_____
###Markdown
8. Distribution of People Alive by Name Now that we have all the required data, we need a few helper functions for our analysis. The first function, get_data, takes name and sex as inputs and returns a data frame with the distribution of the number of births and the number of people alive by year. The second function, plot_data, accepts the same arguments as get_data but returns a line plot of the distribution of the number of births, overlaid with an area plot of the number alive by year. The third function, plot_range, accepts the same arguments as plot_data plus two integers (a start and an end year), which lets you focus on a specific period for the name you are interested in. Using these functions, we will plot the distribution of births for boys named Kanye.
###Code
def get_data(name, sex):
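    # Join the name's yearly birth counts with the interpolated life table and
    # estimate how many people with this name are expected to still be alive.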
name_sex = ((bnames['name'] == name) & (bnames['sex'] == sex))
data = bnames[name_sex].merge(lifetable_2016_s)
data['n_alive'] = data['lx']/(10**5)*data['births']
return data
def plot_data(name, sex):
fig, ax = plt.subplots()
dat = get_data(name, sex)
dat.plot(x = 'year' , y = 'births', ax = ax,
color = 'black')
dat.plot(x = 'year', y = 'n_alive',
kind = 'area', ax = ax,
color = 'red', alpha = 0.5)
ax.set_xlim(1900, 2016)
fig.suptitle('Plot for '+name, fontsize=13)
return ax
def plot_range(name, sex,start,end):
fig, ax = plt.subplots()
dat = get_data(name, sex)
dat.plot(x = 'year' , y = 'births', ax = ax,
color = 'black')
dat.plot(x = 'year', y = 'n_alive',
kind = 'area', ax = ax,
color = 'red', alpha = 0.5)
ax.set_xlim(start, end)
fig.suptitle('Plot for '+name, fontsize=13)
return ax
# Plot the distribution of births and number alive for Joseph and Brittany
plot_data('Kanye', 'M')
plot_range('Kanye', 'M',2002,2016)
###Output
_____no_output_____
###Markdown
9. Estimate AgeIn this section, we want to figure out the probability that a person with a certain name is alive, as well as the quantiles of their age distribution. In particular, we will estimate the age of a female named Gertrude. Any guesses on how old a person with this name is? How about a male named Alex?
###Code
# Import modules
from wquantiles import quantile
# Function to estimate age quantiles
def estimate_age(name, sex):
data = get_data(name, sex)
qs = [0.75, 0.5, 0.25]
quantiles = [2018 - int(quantile(data.year, data.n_alive, q)) for q in qs]
result = dict(zip(['q25', 'q50', 'q75'], quantiles))
result['p_alive'] = round(data.n_alive.sum()/data.births.sum()*100, 2)
result['sex'] = sex
result['name'] = name
return pd.Series(result)
# Estimate the age of Gertrude
print(estimate_age('Gertrude','F'))
print(estimate_age('Alex','M'))
###Output
name Gertrude
p_alive 18.73
q25 72
q50 82
q75 91
sex F
dtype: object
name Alex
p_alive 87.82
q25 15
q50 23
q75 32
sex M
dtype: object
###Markdown
10. Median Age of Top 10 Female and Male Names In the previous section, we estimated the age of a female named Gertrude and a male named Alex. Let's go one step further this time and compute the 25th, 50th and 75th percentiles of age, and the probability of being alive, for the top 10 most common female and male names of all time. This should give us some interesting insights into how these names stack up in terms of median ages!
###Code
# Create median_ages: DataFrame with Top 10 Female names,
# age percentiles and probability of being alive
def get_median_age(sex):
    error = "sex must be 'M' or 'F' (the dataset records only these two values)"
if sex == 'M':
s_query = 'sex == "M"'
elif sex =='F':
s_query = 'sex == "F"'
else :
return error
top_10_names = bnames.groupby(['name', 'sex'], as_index = False).agg({'births': np.sum}).\
sort_values('births', ascending = False).query(s_query).head(10).reset_index(drop = True)
estimates = pd.concat([estimate_age(name, sex) for name in top_10_names.name], axis = 1)
median_ages = estimates.T.sort_values('q50', ascending = False).reset_index(drop = True)
return median_ages
print(get_median_age('M'))
print(get_median_age('F'))
###Output
name p_alive q25 q50 q75 sex
0 Richard 66.66 46 60 70 M
1 Charles 59.36 39 58 70 M
2 Robert 63.31 42 57 69 M
3 James 65.15 39 56 68 M
4 John 62.7 39 56 67 M
5 Thomas 70.7 35 55 66 M
6 William 61.32 33 54 68 M
7 David 80.95 35 52 62 M
8 Michael 86.79 31 46 59 M
9 Joseph 69.77 26 42 60 M
name p_alive q25 q50 q75 sex
0 Dorothy 35.81 66 77 87 F
1 Barbara 70.61 60 68 76 F
2 Mary 54.41 55 66 76 F
3 Linda 83.43 59 66 71 F
4 Margaret 49.47 53 66 77 F
5 Patricia 76.75 56 65 73 F
6 Susan 85.8 54 61 67 F
7 Elizabeth 74.49 25 40 60 F
8 Jennifer 96.35 33 40 46 F
9 Sarah 86.05 22 32 40 F
###Markdown
11. Find other informationWould you like to know when your name was popular? Use the get_max_year function to get the year in which your name was most popular. How many babies named Alex were born in 1998? Use the get_year_info function to find out. Do you want to see information about your name over a certain period of time? Use the get_total_births_range function to find out.
###Code
def get_max_year(name, sex):
data = bnames[(bnames.name == name) & (bnames.sex == sex)]
sort = data.sort_values( by = 'births', ascending=False)
year = sort.iloc[0]['year']
births = sort.iloc[0]['births']
result = name + ' was the most popular name in '+ str(year)+' with '+ str(births)+' births'
return result
print(get_max_year('Alexander','M'))
def get_year_info(name,year):
data = bnames[(bnames.name == name) & (bnames.year == year)].reset_index(drop=True)
size= data.shape[0]
if (size == 1):
if(data.loc[0]['sex'] == 'M'):
output = 'There were '+ str(data.iloc[0]['births']) + ' males born in '+str(data.iloc[0]['year'])
else:
output = 'There were '+ str(data.iloc[0]['births']) + ' females born in '+str(data.iloc[0]['year'])
else:
females = 'There were '+ str(data.iloc[0]['births']) + ' females born in '+str(data.iloc[0]['year'])+' named '+ name
males = 'There were '+ str(data.iloc[1]['births']) + ' males born in ' +str(data.iloc[1]['year'])+' named ' + name
output= females + ' \n' +males
return output
print(get_year_info('Alex',1998))
def get_total_births_range(name, sex, start, end):
output=''
data = bnames[(bnames.name == name) & (bnames.year > start-1)& (bnames.year < end+1) &(bnames.sex == sex)].reset_index(drop=True)
for i in range(0,data.shape[0]):
output+= 'There were '+ str(data.loc[i]['births'])+' '+ name+' born in the year ' +str(data.iloc[i]['year'])+ '\n'
return output
print(get_total_births_range('Kanye', "M", 2000, 2016))
print(get_max_year('Kanye','M'))
plot_data('Kanye', 'M')
plot_range('Kanye', 'M',2002,2016)
print(get_total_births_range('Kanye', "M", 2000, 2016))
###Output
Kanye was the most popular name in 2004 with 507 births
There were 5 Kanye born in the year 2002
There were 87 Kanye born in the year 2003
There were 507 Kanye born in the year 2004
There were 202 Kanye born in the year 2005
There were 101 Kanye born in the year 2006
There were 53 Kanye born in the year 2007
There were 81 Kanye born in the year 2008
There were 64 Kanye born in the year 2009
There were 30 Kanye born in the year 2010
There were 35 Kanye born in the year 2011
There were 34 Kanye born in the year 2012
There were 40 Kanye born in the year 2013
There were 22 Kanye born in the year 2014
There were 26 Kanye born in the year 2015
There were 17 Kanye born in the year 2016
|
[homework,adv]knn.ipynb
|
###Markdown
Deep Learning School (MIPT, Phystech School of Applied Mathematics and Computer Science)Base track. Fall 2020Homework assignment. The sklearn library and classification with KNN Based on the [MIPT Machine Learning course](https://github.com/ml-mipt/ml-mipt) and the [Open Machine Learning Course](https://habr.com/ru/company/ods/blog/322626/). --- K Nearest Neighbors (KNN) The k nearest neighbors method (kNN) is a very popular classification method, also sometimes used in regression problems. It is one of the most intuitive approaches to classification. On an intuitive level, the idea is: look at your neighbors; whichever class dominates among them is yours. Formally, the method rests on the compactness hypothesis: if the distance metric between examples is chosen well, similar examples lie in the same class far more often than in different ones. To classify each object of the test set, we perform the following steps in order:* compute the distance to every object of the training set* select the k objects of the training set with the smallest distances* assign the class that occurs most frequently among the $k$ nearest neighbors (a tiny from-scratch sketch of this rule is shown right after the imports below) We will work with a subsample of the [forest cover type data from the UCI repository](http://archive.ics.uci.edu/ml/datasets/Covertype). There are 7 different classes. Each object is described by 54 features, 40 of which are binary. A description of the data is available at the link. Data processing
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
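###Markdown
To make the three classification steps above concrete, here is a tiny from-scratch sketch of the kNN decision rule (illustrative only: the helper name and toy arrays are made up for this example, and the actual exercises below use sklearn's KNeighborsClassifier).
###Code
from collections import Counter
def knn_predict_one(x, X_train, y_train, k=5):
    # Euclidean distance from x to every object of the training set
    dists = np.sqrt(((X_train - x) ** 2).sum(axis=1))
    # indices of the k training objects with the smallest distances
    nearest = np.argsort(dists)[:k]
    # majority vote among the k nearest neighbors
    return Counter(y_train[nearest]).most_common(1)[0][0]
# toy example: two clusters, the query point lies next to the second one
X_toy = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
y_toy = np.array([0, 0, 1, 1])
print(knn_predict_one(np.array([4.8, 5.1]), X_toy, y_toy, k=3))  # expected: 1
###Output
_____no_output_____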
###Markdown
Link to the dataset (it is in the folder): https://drive.google.com/drive/folders/16TSz1P-oTF8iXSQ1xrt0r_VO35xKmUes?usp=sharing
###Code
all_data = pd.read_csv('forest_dataset.csv')
all_data.head()
all_data.shape
###Output
_____no_output_____
###Markdown
Let's extract the class label values into the variable `labels` and the feature descriptions into the variable `feature_matrix`. Since the data are numeric and contain no missing values, we convert them to `numpy` format using the `.values` attribute.
###Code
labels = all_data[all_data.columns[-1]].values
feature_matrix = all_data[all_data.columns[:-1]].values
all_data[all_data.columns[-1]]
###Output
_____no_output_____
###Markdown
A few words about sklearn **[sklearn](https://scikit-learn.org/stable/index.html)** is a convenient library for getting started with machine learning. It implements most of the standard algorithms for building models and working with datasets. It has detailed documentation in English, which you will have to work with. `sklearn` assumes that your data come as pairs $(X, y)$, where $X$ is the feature matrix and $y$ is the vector of true target values, or simply as $X$ if the targets are unknown. Let's get acquainted with the helper function [train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html). It can be used to split a dataset into training and test parts.
###Code
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Let's return to the dataset. We will now work with all 7 cover types (the data are already in the variables `feature_matrix` and `labels`, unless you have redefined them). Let's split the sample into training and test parts using `train_test_split`.
###Code
train_feature_matrix, test_feature_matrix, train_labels, test_labels = train_test_split(
feature_matrix, labels, test_size=0.2, random_state=42)
###Output
_____no_output_____
###Markdown
The `test_size` parameter controls what fraction of the sample becomes the test set. You can read more about it in the [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html). The core objects of `sklearn` are the so-called `estimators`, which are essentially *models*. They are divided into **classifiers** and **regressors**. Examples of classifiers are the [nearest neighbors method](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html) and [logistic regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html). What logistic regression is and how it works is not important right now. Every model in `sklearn` must have at least 2 methods (more about methods and classes in Python will come in later lessons) -- `fit` and `predict`. The `fit(X, y)` method is responsible for training the model and takes the training sample as a *feature matrix* $X$ and a *vector of answers* $y$. After `fit`, the trained model's `predict(X)` method can be called; it returns the model's predictions for all objects of the matrix $X$ as a vector. You can call `fit` on the same model several times; each time it is retrained from scratch on the dataset you pass in. Models also have *hyperparameters*, which are usually set when the model is created. Let's look at all of this using logistic regression as an example.
###Code
from sklearn.linear_model import LogisticRegression
# create the model, specifying the hyperparameter C
clf = LogisticRegression(C=1)
# train the model
clf.fit(train_feature_matrix, train_labels)
# predict on the test set
y_pred = clf.predict(test_feature_matrix)
###Output
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
###Markdown
Now we would like to measure the quality of our model. For this you can use the `score(X, y)` method, which computes some quality function on the sample $X, y$; which one exactly depends on the model. You can also use one of the functions from the `metrics` module, for example [accuracy_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html), which, as the name suggests, computes the accuracy of the predictions.
###Code
from sklearn.metrics import accuracy_score
accuracy_score(test_labels, y_pred)
###Output
_____no_output_____
###Markdown
Finally, the last thing worth mentioning is hyperparameter search over a grid. Models have many hyperparameters that can be changed, and model quality depends substantially on them, so we would like to find the best parameters in that sense. The simplest way is to try all reasonable combinations. This can be done with the [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) class, which performs a search over a grid and evaluates model quality with cross-validation (CV). For logistic regression, for example, you can vary the parameters `C` and `penalty`. Let's do that. Note that the search can take a long time. See the documentation for the meaning of the parameters.
###Code
from sklearn.model_selection import GridSearchCV
# create the model again, this time specifying the solver
clf = LogisticRegression(solver='saga')
# describe the grid over which we will search
param_grid = {
'C': np.arange(1, 5), # you can also pass a plain list, e.g. [1, 2, 3, 4]
'penalty': ['l1', 'l2'],
}
# create the GridSearchCV object
search = GridSearchCV(clf, param_grid, n_jobs=-1, cv=5, refit=True, scoring='accuracy')
# run the search
search.fit(feature_matrix, labels)
# print the best parameters
print(search.best_params_)
###Output
{'C': 2, 'penalty': 'l1'}
###Markdown
In this case, the search tries every possible pair of C and penalty values from the given sets.
###Code
accuracy_score(labels, search.best_estimator_.predict(feature_matrix))
###Output
_____no_output_____
###Markdown
Note that we pass the whole dataset to GridSearchCV, not only its training part. This is acceptable because the search uses cross-validation anyway. However, sometimes a separate *validation* part is still held out, since the hyperparameters may have overfitted to the sample during the search. In the tasks you will repeat this for the nearest neighbors method. Training the model The quality of classification/regression with the nearest neighbors method depends on several parameters:* the number of neighbors `n_neighbors`* the distance metric between objects `metric`* the neighbor weights (the neighbors of a test example can contribute with different weights, e.g. the farther the example, the smaller the coefficient of its "vote") `weights` Train sklearn's `KNeighborsClassifier` on the dataset.
###Code
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
clf = KNeighborsClassifier()
feature_matrix=all_data[all_data.columns[:-1]].values
label_matrix=all_data[all_data.columns[-1]].values
train_feature_matrix, test_feature_matrix, train_label_matrix, test_label_matrix = train_test_split(
feature_matrix, label_matrix, test_size=0.2, random_state=42)
clf.fit(train_feature_matrix,train_label_matrix)
prediction = clf.predict(test_feature_matrix)
accuracy_score(test_label_matrix,prediction)
###Output
_____no_output_____
###Markdown
Question 1:* What score did you get? Let's tune the parameters of our model * Search over a grid from `1` to `10` for the number of neighbors* Also try different metrics: `['manhattan', 'euclidean']`* Try different strategies for computing the weights: `['uniform', 'distance']`
###Code
from sklearn.model_selection import GridSearchCV
params = {
'n_neighbors': np.arange(1, 10),
'metric': ['manhattan', 'euclidean'],
'weights': ['uniform','distance']
}
clf_grid = GridSearchCV(clf, params, cv=5, scoring='accuracy', n_jobs=-1)
clf_grid.fit(feature_matrix, labels)
###Output
_____no_output_____
###Markdown
Let's print the best parameters
###Code
clf_grid.best_params_
###Output
_____no_output_____
###Markdown
Question 2:* Which metric should be used? Question 3:* How many n_neighbors should be used? Question 4:* Which type of weights should be used? Using the optimal number of neighbors you found, compute the class membership probabilities for the test set (`.predict_proba`).
###Code
optimal_clf = KNeighborsClassifier(n_neighbors=4)
optimal_clf.fit(feature_matrix,labels)
pred_prob = optimal_clf.predict_proba(feature_matrix)
print(pred_prob[3])
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
unique, freq = np.unique(test_labels, return_counts=True)
freq = list(map(lambda x: x / len(test_labels),freq))
pred_freq = pred_prob.mean(axis=0)
print(pred_freq[2])
plt.figure(figsize=(10, 8))
plt.bar(range(1, 8), pred_freq, width=0.4, align="edge", label='prediction')
plt.bar(range(1, 8), freq, width=-0.4, align="edge", label='real')
plt.ylim(0, 0.54)
plt.legend()
plt.show()
###Output
0.057975
|
week12/_test_imports_week14.ipynb
|
###Markdown
Necessary Packages By WeekNote: it is possible a few others might be added, but this should get you started. **PLEASE NOTE** this assumes you have installed Python & Jupyter Notebook using Anaconda. You are welcome to use JupyterLab instead of Jupyter Notebook, however *we will not support JupyterLab ourselves in this class.* See https://github.com/jnaiman/IS-452AO-Fall2019/blob/master/installation_directions.md for more details about installing Anaconda (you can skip the PyCharm installation part). Make sure you see the same plots as are saved in this notebook - if something doesn't display, something has gone wrong. Note: anything involving randomly selected numbers will look a little different. **Please do not worry if you run into some things you have trouble installing -- we will help you debug in class!**
###Code
import h5py
###Output
_____no_output_____
###Markdown
If the above doesn't work try uncommenting:
###Code
#!conda install -c anaconda h5py --yes
#import h5py
###Output
_____no_output_____
###Markdown
Week 13
###Code
import yt
###Output
_____no_output_____
###Markdown
If the above doesn't work try uncommenting:
###Code
#!conda install -c conda-forge yt --yes
#import yt
###Output
_____no_output_____
###Markdown
More info here: http://www2.compute.dtu.dk/projects/GEL/PyGEL/
###Code
from PyGEL3D import gel
from PyGEL3D import js
###Output
_____no_output_____
###Markdown
You will probably have to pip install:
###Code
#!pip install PyGEL3D
#from PyGEL3D import gel
#from PyGEL3D import js
import ipyvolume
###Output
_____no_output_____
###Markdown
You will probably have to install this:
###Code
#!conda install -c conda-forge ipyvolume --yes
#import ipyvolume
###Output
_____no_output_____
###Markdown
Week 14
###Code
import ipyvolume
###Output
_____no_output_____
###Markdown
If that doesn't work, try uncommenting:
###Code
#!conda install -c conda-forge ipyvolume
###Output
_____no_output_____
###Markdown
Or you can do:
###Code
#!pip install ipyvolume
###Output
_____no_output_____
###Markdown
Note: you may need to uncomment the following:
###Code
#!jupyter nbextension enable --py --sys-prefix ipyvolume
#!jupyter nbextension enable --py --sys-prefix widgetsnbextension
###Output
_____no_output_____
|
multiresolution-blending-opencv.ipynb
|
###Markdown
Multi-Resolution Image Blending using OpenCV & PIL The goal is to blend 2 images seamlessly using Gaussian & Laplacian pyramids. This is a form of image in-painting.
###Code
%reload_ext autoreload
%autoreload 2
import cv2 as cv
import numpy as np
A = cv.imread('Hand.png', cv.IMREAD_REDUCED_COLOR_4)
# A = cv.resize(A, tuple((x//4 for x in A.shape[:2])))
print(A.shape)
B = cv.imread('Veles-Mask-Template.png', cv.IMREAD_REDUCED_COLOR_4)
# B = cv.resize(B, tuple((x//4 for x in A.shape[:2])))
print(B.shape)
M = cv.imread('Mask.png', cv.IMREAD_REDUCED_GRAYSCALE_4)
# M = M // 255
print(M.shape)
from pyramid import cv_laplacian, cv_pyramid, cv_multiresolution_blend, cv_reconstruct_laplacian
from helper import cv2pil, multiply_nn_mnn
from mpl_toolkits.axes_grid1 import ImageGrid
import matplotlib.pyplot as plt
gpA = cv_pyramid(A.copy(), scale=5)
gpB = cv_pyramid(B.copy(), scale=5)
gpM = cv_pyramid(M.copy(), scale=5)
lpA = cv_laplacian(gpA, scale=5)
lpB = cv_laplacian(gpB, scale=5)
lpM = cv_laplacian(gpM, scale=5)
# cv2pil(lpA[0]).show()
# cv2pil(lpB[0]).show()
# cv2pil(gpM[0]).show()
# lpM0 = gpM[0] // 255
# result0 = multiply_nn_mnn(lpM0, lpB[0]) + multiply_nn_mnn((1 - lpM0) , lpA[0])
# result0 = result0.astype(np.uint8)
# cv2pil(result0).show()
# TODO: Implement `reconstruct_laplacian` for cv2
# blended = cv.add(cv.pyrUp(gpA[1]), result0)
# cv2pil(blended).show()
# [cv2pil(x).show() for x in lpA]
blended_pyramid = cv_multiresolution_blend(gpM, lpA, lpB)
fig = plt.figure(figsize=(10, 10))
grid = ImageGrid(fig, 111, # similar to subplot(111)
nrows_ncols=(3, 3), # creates 2x2 grid of axes
axes_pad=0.5, # pad between axes in inch.
)
for i in range(len(blended_pyramid)):
ax = grid[i]
ax.imshow(blended_pyramid[i])
plt.show()
blended_image = cv_reconstruct_laplacian(blended_pyramid)
blended_image = cv2pil(blended_image)
blended_image.save('cv_blended_image.png')
# blended_image.show()
plt.imshow(blended_image)
###Output
_____no_output_____
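###Markdown
The pyramid helpers used above (cv_pyramid, cv_laplacian, cv_multiresolution_blend, cv_reconstruct_laplacian) come from the local `pyramid` module, which is not shown here. Below is a rough, hypothetical sketch of what such helpers can look like with plain OpenCV calls; the function names and default number of levels are assumptions for illustration, not the module's actual API.
###Code
def gaussian_pyramid(img, levels=5):
    # repeatedly blur + downsample; level 0 is the original image
    pyr = [img]
    for _ in range(levels):
        pyr.append(cv.pyrDown(pyr[-1]))
    return pyr
def laplacian_pyramid(gauss_pyr):
    # each Laplacian level is G_i minus the upsampled G_{i+1}; the coarsest level is kept as-is
    lap = []
    for i in range(len(gauss_pyr) - 1):
        up = cv.pyrUp(gauss_pyr[i + 1], dstsize=gauss_pyr[i].shape[1::-1])
        lap.append(cv.subtract(gauss_pyr[i], up))
    lap.append(gauss_pyr[-1])
    return lap
def collapse_laplacian(lap):
    # reconstruct the image: start at the coarsest level, repeatedly upsample and add the next level
    img = lap[-1]
    for level in reversed(lap[:-1]):
        img = cv.add(cv.pyrUp(img, dstsize=level.shape[1::-1]), level)
    return img
# The blend itself combines the two Laplacian pyramids level by level, weighted by the
# Gaussian pyramid of the mask, and then collapses the result with collapse_laplacian.
###Output
_____no_output_____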
|
lightning/analyze_lightning_signal.ipynb
|
###Markdown
Analysis of lightning discharges as seen on a Software Defined Radio Experimental Setup: RTL-SDR connected to a Linux laptop running Ubuntu. The radio is tuned to 30MHz and ~4s of time series (I,Q) data are collected at 2Ms/s. The radio is connected to a "rabbit ears" dipole antenna inside my home. 30MHz was chosen as it is fairly quiet and as low a frequency as the device will go without additional hardware. Noise is around 0.3 units, so only data containing peaks above 0.5 units are saved using a numpy save file. These files are saved and timestamped for future analysis. This notebook shows what we can see during a lightning storm on a very simple (~$35) SDR setup. There is a lot we can do to improve this! Our goal is to equip Sage nodes with SDRs in the future to test these as an affordable, ubiquitous lightning detection network. Future work includes using an upconverter to access lower frequencies.
###Code
#Import the goodness
import numpy as np
from matplotlib import pyplot as plt
from scipy import signal
%matplotlib inline
#load a time series file. Collected at 1626 UTC on the 19th of July
ts = np.load('/users/scollis/data/sage_lightning/data/event_200719_1626_00_0p6896.npy', allow_pickle=True).item()
#Create a time array using the known sampling frequency
dt = 1./2.048e6
xtime = np.arange(len(ts['sig'])) * dt
#Quick and dirty look at the data, note we use np.abs as the signal is complex
myfi = plt.figure(figsize=[15,5])
plt.plot(xtime, np.abs(ts['sig']))
plt.ylabel('Signal (Arb)')
plt.xlabel('Time since start (s)')
#Quick code to find the element where the maximum of the signal is
maxme = np.where(np.abs(ts['sig']) == np.max(np.abs(ts['sig'])))[0][0]
t_max = xtime[maxme]
print(t_max)
#What time window do we want to look at?
window = 4e-3
#Zoom in to the max signal times
myfi = plt.figure(figsize=[15,5])
plt.plot(xtime, np.abs(ts['sig']))
plt.ylabel('Signal (Arb)')
plt.xlabel('Time since start (s)')
t0 = t_max - window/2.
t1 = t_max + window/2.
plt.xlim([t0,t1])
#Code Courtesy of Eric Bruning TTU
#Plot the time series and rolling FFT of the signal
fs = 2.048e6
sub = slice(int(t0*fs),int(t1*fs))
dt = 1./fs
xtime = np.arange(len(ts['sig'])) * dt
myfi, axs = plt.subplots(2,1, figsize=[15,10], sharex=True)
axs[0].plot(xtime[sub]-t0, np.abs(ts['sig'])[sub])
_,_,_,sgimg = axs[1].specgram(ts['sig'][sub], Fs=fs, vmin=-100, vmax=-70)
axs[0].set_ylabel('Signal (Arb)')
axs[1].set_ylabel('Spectrum (Hz)')
axs[1].set_xlabel('Time since start (s)')
plt.colorbar(sgimg, orientation='horizontal', ax=axs[1])
###Output
_____no_output_____
###Markdown
You can see some coherent signals from radio etc., but the lightning impulse is broadband, filling the whole bandpass window.
###Code
#The next bit of code is designed to look for multiple peaks in the data.
#First, smooth the data using a Savitzky-Golay filter, otherwise we get too many peaks
filtered_data = signal.savgol_filter(np.abs(ts['sig']),5 ,2)
#Now use SciPy's peak finder to find peaks. These must be at least .25 units above noise and
# at least 10,000 elements apart
locs, props = signal.find_peaks(filtered_data, height = .25, distance = 10000, width = 1)
locs
###Output
_____no_output_____
###Markdown
So we have three distinct peaks in this time series.
###Code
#Lets make Eric's code into a function
def plot_pulse(ts, time, window):
t0 = time - window/2.
t1 = time + window/2.
fs = 2.048e6
sub = slice(int(t0*fs),int(t1*fs))
dt = 1./fs
xtime = np.arange(len(ts['sig'])) * dt
myfi, axs = plt.subplots(2,1, figsize=[15,10], sharex=True)
axs[0].plot(xtime[sub]-t0, np.abs(ts['sig'])[sub])
_,_,_,sgimg = axs[1].specgram(ts['sig'][sub], Fs=fs, vmin=-100, vmax=-70)
axs[0].set_ylabel('Signal (Arb)')
axs[1].set_ylabel('Spectrum (Hz)')
axs[1].set_xlabel('Time since start (s)')
plt.colorbar(sgimg, orientation='horizontal', ax=axs[1])
#Now lets loop over all peaks and plot them
for loc in locs:
plot_pulse(ts, xtime[loc], 1e-3)
###Output
_____no_output_____
|
sdkv1/ch9/dist_training/Object Detection with Pascal VOC - Distributed training.ipynb
|
###Markdown
The results are in a format that is similar to the .lst format, with the addition of a confidence score for each detected object. The format of the output can be represented as `[class_index, confidence_score, xmin, ymin, xmax, ymax]`. Typically, we don't consider low-confidence predictions. We have provided an additional script to easily visualize the detection outputs. You can visualize the high-confidence predictions with bounding boxes by filtering out low-confidence detections using the script below:
###Code
print(detections)
def visualize_detection(img_file, dets, classes=[], thresh=0.6):
"""
visualize detections in one image
Parameters:
----------
img : numpy.array
image, in bgr format
dets : numpy.array
ssd detections, numpy.array([[id, score, x1, y1, x2, y2]...])
each row is one object
classes : tuple or list of str
class names
thresh : float
score threshold
"""
import random
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img=mpimg.imread(img_file)
plt.imshow(img)
height = img.shape[0]
width = img.shape[1]
colors = dict()
for det in dets:
(klass, score, x0, y0, x1, y1) = det
if score < thresh:
continue
cls_id = int(klass)
if cls_id not in colors:
colors[cls_id] = (random.random(), random.random(), random.random())
xmin = int(x0 * width)
ymin = int(y0 * height)
xmax = int(x1 * width)
ymax = int(y1 * height)
rect = plt.Rectangle((xmin, ymin), xmax - xmin,
ymax - ymin, fill=False,
edgecolor=colors[cls_id],
linewidth=3.5)
plt.gca().add_patch(rect)
class_name = str(cls_id)
if classes and len(classes) > cls_id:
class_name = classes[cls_id]
plt.gca().text(xmin, ymin - 2,
'{:s} {:.3f}'.format(class_name, score),
bbox=dict(facecolor=colors[cls_id], alpha=0.5),
fontsize=12, color='white')
plt.show()
###Output
_____no_output_____
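###Markdown
As a quick illustration of the output layout described above, the snippet below (a sketch; the helper name and the image size in the usage comment are made up for this example) keeps only the rows whose confidence exceeds a threshold and converts the fractional corners into pixel coordinates.
###Code
def filter_detections(dets, img_width, img_height, thresh=0.3):
    # each row of dets is [class_index, confidence_score, xmin, ymin, xmax, ymax], corners in [0, 1]
    kept = []
    for klass, score, x0, y0, x1, y1 in dets:
        if score < thresh:
            continue
        kept.append((int(klass), float(score),
                     int(x0 * img_width), int(y0 * img_height),
                     int(x1 * img_width), int(y1 * img_height)))
    return kept
# e.g. (hypothetical image size): filter_detections(detections['prediction'], 640, 480, thresh=0.3)
###Output
_____no_output_____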
###Markdown
For the sake of this notebook, we trained the model with only a few (10) epochs. This implies that the results might not be optimal. To achieve better detection results, you can try to tune the hyperparameters and train the model for more epochs. In our tests, the mAP can reach 0.79 on the Pascal VOC dataset after training the algorithm with `learning_rate=0.0005`, `image_shape=512` and `mini_batch_size=16` for 240 epochs.
###Code
%matplotlib inline
object_categories = ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat',
'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person',
'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']
# Setting a threshold of 0.30 will only plot detection results that have a confidence score greater than 0.30.
threshold = 0.30
# Visualize the detections.
visualize_detection(file_name, detections['prediction'], object_categories, threshold)
od_predictor.delete_endpoint()
###Output
_____no_output_____
|
examples/plot-mcmc-trace-plots.ipynb
|
###Markdown
Inference plots - Trace plotsThis example builds on the [adaptive covariance MCMC example](https://github.com/pints-team/pints/blob/master/examples/sampling-adaptive-covariance-mcmc.ipynb), and shows you a different way to plot the results.Inference plots:* [Predicted time series](https://github.com/pints-team/pints/blob/master/examples/plot-mcmc-predicted-time-series.ipynb)* __Trace plots__* [Autocorrelation](https://github.com/pints-team/pints/blob/master/examples/plot-mcmc-autocorrelation.ipynb)* [Pairwise scatterplots](https://github.com/pints-team/pints/blob/master/examples/plot-mcmc-pairwise-scatterplots.ipynb)* [Pairwise scatterplots with KDE](https://github.com/pints-team/pints/blob/master/examples/plot-mcmc-pairwise-kde-plots.ipynb) Setting up an MCMC routineSee the adaptive covariance MCMC example for details.
###Code
from __future__ import print_function
import pints
import pints.toy as toy
import numpy as np
import matplotlib.pyplot as plt
# Load a forward model
model = toy.LogisticModel()
# Create some toy data
real_parameters = [0.015, 500]
times = np.linspace(0, 1000, 100)
org_values = model.simulate(real_parameters, times)
# Add noise
noise = 50
values = org_values + np.random.normal(0, noise, org_values.shape)
real_parameters = np.array(real_parameters + [noise])
# Get properties of the noise sample
noise_sample_mean = np.mean(values - org_values)
noise_sample_std = np.std(values - org_values)
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = pints.GaussianLogLikelihood(problem)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0.01, 400, noise*0.1],
[0.02, 600, noise*100]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Perform sampling using MCMC, with three chains
xs = [
real_parameters * 1.1,
real_parameters * 1.15,
real_parameters * 0.9,
]
mcmc = pints.MCMCController(log_posterior, 3, xs)
mcmc.set_max_iterations(6000)
mcmc.set_log_to_screen(False)
###Output
_____no_output_____
###Markdown
TracesThe plots below show the chains generated by three independent runs of the adaptive MCMC routine (each started from a slightly different point). All three chains require an initial period (usually discarded as 'burn-in') before they converge to the same parameter values. These initial samples distort the shape of the accompanying histograms.
###Code
print('Running...')
chains = mcmc.run()
print('Done!')
# Show histogram and traces
plt.figure(figsize=(14, 9))
nparam = len(real_parameters)
for i, real in enumerate(real_parameters):
# Add trace subplot
plt.subplot(nparam, 2, 2 + 2 * i)
plt.xlabel('Iteration')
plt.ylabel('Parameter ' + str(i + 1))
plt.axhline(real)
plt.plot(chains[0,:,i], alpha=0.5)
plt.plot(chains[1,:,i], alpha=0.5)
plt.plot(chains[2,:,i], alpha=0.5)
# Add histogram subplot
plt.subplot(nparam, 2, 1 + 2 * i)
plt.xlabel('Parameter ' + str(i + 1))
plt.ylabel('Frequency')
plt.axvline(real)
plt.hist(chains[0,:,i], label='p' + str(i + 1), bins=40, alpha=0.5)
plt.hist(chains[1,:,i], label='p' + str(i + 1), bins=40, alpha=0.5)
plt.hist(chains[2,:,i], label='p' + str(i + 1), bins=40, alpha=0.5)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
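###Markdown
A common follow-up to the observation above is to drop the burn-in portion of each chain before summarising the posterior. A minimal sketch is shown below (the 50% cut-off is an arbitrary illustrative choice, not a recommendation).
###Code
# discard the first half of every chain as burn-in (arbitrary illustrative cut-off)
burn_in = chains.shape[1] // 2
post_burn = chains[:, burn_in:, :]
# pool the remaining samples from all chains and summarise each parameter
pooled = post_burn.reshape(-1, post_burn.shape[2])
for i in range(pooled.shape[1]):
    mean = np.mean(pooled[:, i])
    lo, hi = np.percentile(pooled[:, i], [2.5, 97.5])
    print('Parameter %d: mean %.4g, 95%% interval [%.4g, %.4g]' % (i + 1, mean, lo, hi))
###Output
_____no_output_____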
|
notebooks/6.2_Population.ipynb
|
###Markdown
NEW Regrid to GFDL
###Code
# Open population density data (raster 5 = 2020 population density)
ds = xr.open_dataset('../data/processed/population_density.nc').sel(raster=5)
ds = ds.rename({'UN WPP-Adjusted Population Density, v4.11 (2000, 2005, 2010, 2015, 2020): 1 degree':'pop_dens'})
ds = ds.where(np.isfinite(ds['pop_dens']),0)
# Regrid population density data first to be 0-360, not -180-180
ds = ds.assign_coords({'longitude':ds['longitude']%360})
# Rename coordinates
ds = ds.rename({'longitude':'lon','latitude':'lat'})
# Open land area file with desired gridding, convert m^2 to km^2
ds_area = xr.open_dataset('../data/processed/GFDL/esm2m.land_area')['land_area']/(10**6)
# Regrid population density to match land area
ds_regrid = ds.interp_like(ds_area)
# Convert density to population count
ds_pop_regrid = (ds_regrid * ds_area).rename({'pop_dens':'population'})
ds_pop_regrid.to_netcdf('../data/processed/GFDL/population_regrid_esm2m_2.nc')
###Output
_____no_output_____
###Markdown
NEW Regrid to CESM2
###Code
# Open population density data (raster 5 = 2020 population density)
ds = xr.open_dataset('../data/processed/population_density.nc').sel(raster=5)
ds = ds.rename({'UN WPP-Adjusted Population Density, v4.11 (2000, 2005, 2010, 2015, 2020): 1 degree':'pop_dens'})
ds = ds.where(np.isfinite(ds['pop_dens']),0)
# Regrid population density data first to be 0-360, not -180-180
ds = ds.assign_coords({'longitude':ds['longitude']%360})
# Rename coordinates
ds = ds.rename({'longitude':'lon','latitude':'lat'})
# Open land area file with desired gridding
ds_area = xr.open_dataarray('../data/processed/CESM2/cesm2.land_area').isel(ensemble=0)
# Regrid population density to match land area
ds_regrid = ds.interp_like(ds_area)
# Convert density to population count
ds_pop_regrid = (ds_regrid * ds_area).rename({'pop_dens':'population'})
ds_pop_regrid.to_netcdf('../data/processed/CESM2/population_regrid_cesm2_2.nc')
###Output
_____no_output_____
###Markdown
Population Maps
###Code
pop_dens = xr.open_dataarray('../data/processed/population_density.nc').sel(raster=5)
pop_dens = pop_dens.rename({'longitude':'lon','latitude':'lat'})
crs = ccrs.Robinson()
fig,ax = plt.subplots(figsize=(10,10),subplot_kw={'projection':crs})
# Specify variables
X = pop_dens['lon']
Y = pop_dens['lat']
Z = pop_dens.squeeze()
Z, X = add_cyclic_point(Z,coord=X)
cmap = plt.cm.get_cmap('YlOrBr', 12)
im = ax.contourf(X,Y,Z,levels=[0,25,50,100,200,400,800],colors=cmap(np.linspace(0,1,7)),transform=ccrs.PlateCarree(),extend='max')
ax.coastlines()
ax.add_feature(cfeature.OCEAN,zorder=1,facecolor='lightskyblue')
ax.set_extent([-140,160,-50,50],crs=ccrs.PlateCarree())
lf.grid(ax)
cbar = plt.colorbar(im,ax=ax,orientation='horizontal',fraction=0.05,pad=0.05)
cbar.set_label('Population Density [capita/sq.km]')
ax.set_title('UN WPP-adjusted Population Density, 2020',fontweight='bold')
###Output
_____no_output_____
|
notebooks/PBMC-multiome-ATAC.ipynb
|
###Markdown
Prelim Dataset downloaded from: https://support.10xgenomics.com/single-cell-multiome-atac-gex/datasets/1.0.0/pbmc_unsorted_10k Data is available at `s3://fh-pi-setty-m-eco-public/single-cell-primers/multiome/` ArchR preprocessing script: https://github.com/settylab/single-cell-primers/blob/main/scripts/PBMC-mulitome-ATAC-ArchR-preprocessing.R Review the notebook `PBMC-RNA-standalone.ipynb` for setup instructions.
###Code
import os
import pandas as pd
import numpy as np
import scanpy as sc
import pyranges as pr
import warnings
import palantir
import phenograph
import harmony
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_style('ticks')
matplotlib.rcParams['figure.figsize'] = [4, 4]
matplotlib.rcParams['figure.dpi'] = 100
matplotlib.rcParams['image.cmap'] = 'Spectral_r'
warnings.filterwarnings(action="ignore", module="matplotlib", message="findfont")
###Output
_____no_output_____
###Markdown
Utility functions
###Code
def log_transform(ad, ps=0.1):
ad.X.data = np.log2(ad.X.data + ps) - np.log2(ps)
def pyranges_from_strings(pos_list):
# Chromosome and positions
chr = pos_list.str.split(':').str.get(0)
start = pd.Series(pos_list.str.split(':').str.get(1)).str.split('-').str.get(0)
end = pd.Series(pos_list.str.split(':').str.get(1)).str.split('-').str.get(1)
# Create ranges
gr = pr.PyRanges(chromosomes=chr, starts=start, ends=end)
return gr
###Output
_____no_output_____
###Markdown
Load data ATAC
###Code
data_dir = os.path.expanduser('data/multiome/ArchR/pbmc_multiome_atac/export/')
###Output
_____no_output_____
###Markdown
Load all the exported results from ArchR Peaks data
###Code
# Peaks data
from scipy.io import mmread
counts = mmread(data_dir + 'peak_counts/counts.mtx')
# Cell and peak information
cells = pd.read_csv(data_dir + 'peak_counts/cells.csv', index_col=0).iloc[:, 0]
peaks = pd.read_csv(data_dir + 'peak_counts/peaks.csv', index_col=0)
peaks.index = peaks['seqnames'] + ':' + peaks['start'].astype(str) + '-' + peaks['end'].astype(str)
peaks.head()
ad = sc.AnnData(counts.T)
ad.obs_names = cells
ad.var_names = peaks.index
for col in peaks.columns:
ad.var[col] = peaks[col]
ad.X = ad.X.tocsr()
ad
###Output
_____no_output_____
###Markdown
SVD
###Code
ad.obsm['X_svd'] = pd.read_csv(data_dir + 'svd.csv', index_col=0).loc[ad.obs_names, : ].values
###Output
_____no_output_____
###Markdown
Metadata
###Code
cell_meta = pd.read_csv(data_dir + 'cell_metadata.csv', index_col=0).loc[ad.obs_names, : ]
for col in cell_meta.columns:
ad.obs[col] = cell_meta[col].values
ad
###Output
_____no_output_____
###Markdown
Gene scores
###Code
# Gene scores
gene_scores = pd.read_csv(data_dir + 'gene_scores.csv', index_col=0).T
ad.obsm['GeneScores'] = gene_scores.loc[ad.obs_names, :].values
ad.uns['GeneScoresColums'] = gene_scores.columns.values
ad
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
# Leiden and UMAP
warnings.filterwarnings('ignore')
sc.pp.neighbors(ad, use_rep='X_svd')
sc.tl.umap(ad)
sc.tl.leiden(ad)
warnings.filterwarnings('default')
# Phenograph
ad.obs['phenograph'], _, _ = phenograph.cluster(ad.obsm['X_svd'])
ad.obs['phenograph'] = ad.obs['phenograph'].astype(str)
# Diffusion maps
warnings.filterwarnings('ignore')
dm_res = palantir.utils.run_diffusion_maps(pd.DataFrame(ad.obsm['X_svd'], index=ad.obs_names))
warnings.filterwarnings('default')
ad.obsp['DM_kernel'] = dm_res['kernel']
ad.obsm['DM_EigenVectors'] = dm_res['EigenVectors'].values
ad.uns['DM_EigenValues'] = dm_res['EigenValues'].values
###Output
Determing nearest neighbor graph...
###Markdown
Visualizations
###Code
sc.pl.scatter(ad, basis='umap', color=['leiden', 'phenograph'])
###Output
/usr/local/anaconda3/envs/singlecell/lib/python3.8/site-packages/anndata/_core/anndata.py:1228: FutureWarning: The `inplace` parameter in pandas.Categorical.reorder_categories is deprecated and will be removed in a future version. Reordering categories will always return a new Categorical object.
c.reorder_categories(natsorted(c.categories), inplace=True)
... storing 'Sample' as categorical
/usr/local/anaconda3/envs/singlecell/lib/python3.8/site-packages/anndata/_core/anndata.py:1228: FutureWarning: The `inplace` parameter in pandas.Categorical.reorder_categories is deprecated and will be removed in a future version. Reordering categories will always return a new Categorical object.
c.reorder_categories(natsorted(c.categories), inplace=True)
... storing 'Clusters' as categorical
/usr/local/anaconda3/envs/singlecell/lib/python3.8/site-packages/anndata/_core/anndata.py:1228: FutureWarning: The `inplace` parameter in pandas.Categorical.reorder_categories is deprecated and will be removed in a future version. Reordering categories will always return a new Categorical object.
c.reorder_categories(natsorted(c.categories), inplace=True)
... storing 'phenograph' as categorical
/usr/local/anaconda3/envs/singlecell/lib/python3.8/site-packages/anndata/_core/anndata.py:1228: FutureWarning: The `inplace` parameter in pandas.Categorical.reorder_categories is deprecated and will be removed in a future version. Reordering categories will always return a new Categorical object.
c.reorder_categories(natsorted(c.categories), inplace=True)
... storing 'seqnames' as categorical
/usr/local/anaconda3/envs/singlecell/lib/python3.8/site-packages/anndata/_core/anndata.py:1228: FutureWarning: The `inplace` parameter in pandas.Categorical.reorder_categories is deprecated and will be removed in a future version. Reordering categories will always return a new Categorical object.
c.reorder_categories(natsorted(c.categories), inplace=True)
... storing 'strand' as categorical
/usr/local/anaconda3/envs/singlecell/lib/python3.8/site-packages/anndata/_core/anndata.py:1228: FutureWarning: The `inplace` parameter in pandas.Categorical.reorder_categories is deprecated and will be removed in a future version. Reordering categories will always return a new Categorical object.
c.reorder_categories(natsorted(c.categories), inplace=True)
... storing 'GroupReplicate' as categorical
/usr/local/anaconda3/envs/singlecell/lib/python3.8/site-packages/anndata/_core/anndata.py:1228: FutureWarning: The `inplace` parameter in pandas.Categorical.reorder_categories is deprecated and will be removed in a future version. Reordering categories will always return a new Categorical object.
c.reorder_categories(natsorted(c.categories), inplace=True)
... storing 'nearestGene' as categorical
/usr/local/anaconda3/envs/singlecell/lib/python3.8/site-packages/anndata/_core/anndata.py:1228: FutureWarning: The `inplace` parameter in pandas.Categorical.reorder_categories is deprecated and will be removed in a future version. Reordering categories will always return a new Categorical object.
c.reorder_categories(natsorted(c.categories), inplace=True)
... storing 'peakType' as categorical
/usr/local/anaconda3/envs/singlecell/lib/python3.8/site-packages/anndata/_core/anndata.py:1228: FutureWarning: The `inplace` parameter in pandas.Categorical.reorder_categories is deprecated and will be removed in a future version. Reordering categories will always return a new Categorical object.
c.reorder_categories(natsorted(c.categories), inplace=True)
... storing 'nearestTSS' as categorical
###Markdown
Save
###Code
ad
ad.write(data_dir + '../../../pbmc_multiome_atac.h5ad')
###Output
_____no_output_____
|
Projekty/Projekt1/Grupa1/GassowskaKozminskiPrzybylek/KamienMilowy3/Sick_FinalPart.ipynb
|
###Markdown
Stage III Cross-validation, hyperparameter tuning, final modelling A short summary of the work so far: while searching for the optimal classifier for the dataset of patients examined for endocrine problems in Australia, we carried out:* exploratory data analysis, during which we learned several important and interesting things, such as the class imbalance of the data, the influence of hormone levels on other indicators (e.g. the correlation between TT4 and FTI), the predominance of women among the patients, and the fact that most patients were in the 55-75 age range* in-depth feature engineering: we removed empty columns, filled in missing data, converted binary (de facto boolean) variables to 0/1, and encoded the categorical variables* imputation - we applied a range of imputation methods: filling with the median, mode and mean; using the KNN Imputer and Iterative Imputer algorithms; we also examined how different treatments of the value 455 in the age field affect classifier quality (it did not matter much, so we treat this value as unknown)* encoding the categorical variables, of which there were two: sex and referral_source, in different ways. Sex was encoded with target encoding and one hot encoding, but we ultimately settled on random filling with probability proportional to the frequency of each sex among the complete records. The referral_source variable was also encoded with TE and OHE* training models: XGBoost and RandomForest. XGBoost tree ensembles achieved high and consistent Accuracy scores. For the geometric mean of TPR and TNR (its use is explained later in this notebook), slightly lower results were obtained with Iterative Imputer imputation (by about 2.3 percentage points). At the same time, that imputation gave the best Random Forest results, but only about 0.15 percentage points better than median imputation, so we will use the dataset with Target Encoding of referral_source and median imputation of the missing values. What we considered still worth doing:* examine the influence of standardization (normalization) of the continuous variables on classifier quality* use automated tuning methods and CV* also train a simple model, e.g. logistic regression, and an autoML model, and compare the other models against them* try dropping one of the highly correlated columns and the potentially useless referral_source information, and train the models on that* of course, check the models' performance on the test data :)* measure model quality with the ROC curve (AUC) and the Precision/Recall curve* analyse feature importance, compare it with our initial observations and assumptions, and check the models on the k best columns rather than the whole set
###Code
import numpy as np
import pandas as pd
import sklearn
import imblearn
import seaborn as sns
import matplotlib.pyplot as plt
import math
import random
import warnings
warnings.filterwarnings('ignore')
from io import StringIO
import requests
from pandas.testing import assert_frame_equal
!pip install category_encoders
import category_encoders as ce
import missingno as mno
from sklearn.model_selection import train_test_split, cross_val_score, KFold, GridSearchCV, RandomizedSearchCV
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import make_scorer, roc_curve, auc, precision_recall_curve, average_precision_score, classification_report
from sklearn.neighbors import KNeighborsClassifier
from sklearn.feature_selection import SelectKBest
from imblearn.metrics import geometric_mean_score
from scipy.stats import poisson, expon, randint
from xgboost import XGBClassifier
! pip install tpot
from tpot import TPOTClassifier
import time
import copy
# load the data
whole_data = pd.read_csv('whole-sick-data.csv').drop('Unnamed: 0', axis = 1) # data before feature engineering - it still has categorical variables, missing values etc.
# unfortunately the first column (row indices) is also read in, hence the drop
train_data0 = pd.read_csv('train-data-aftersec-milestone.csv').drop('Unnamed: 0', axis = 1)
# data with sex encoded numerically, 'referral_source' target-encoded, the age value '455' treated as NaN, and all missing values filled with the iterative imputer
test_data0 = pd.read_csv('test-data-aftersec-milestone.csv').drop('Unnamed: 0', axis = 1)
###Output
_____no_output_____
###Markdown
After loading the data, let's check the size and degree of imbalance, as well as the target proportions in the test set and in the whole dataset, to confirm (and remind ourselves) what datasets we are working with.
###Code
print('Training data shape: ' + str(train_data0.shape))
print('Test data shape: ' + str(test_data0.shape))
print('Whole data shape: {0} and its imbalance ratio: {1:.2f}'.format(whole_data.shape, sum(~whole_data.Thyroid_disease)/sum(whole_data.Thyroid_disease)))
print("Target proportions in the whole dataset:\n" + str(whole_data.Thyroid_disease.value_counts(True)))
print("\nTarget proportions in the test set:\n" + str(test_data0['Thyroid_disease'].value_counts(True)))
###Output
Target proportions in the whole dataset:
False 0.938759
True 0.061241
Name: Thyroid_disease, dtype: float64
Target proportions in the test set:
0 0.939073
1 0.060927
Name: Thyroid_disease, dtype: float64
###Markdown
The proportions are correct, so we move on to separating the target variable from our datasets (we had merged them when saving each set). Splitting again into the data and the **labels** (target variable)
###Code
train_target = train_data0['Thyroid_disease'].to_numpy()
train_data = train_data0.drop('Thyroid_disease', axis = 1)
test_target = test_data0['Thyroid_disease'].to_numpy()
test_data = test_data0.drop('Thyroid_disease', axis = 1)
###Output
_____no_output_____
###Markdown
IMPORTANT: Drawing on the recent lectures, we will try to use the geometric mean (GM) of TPR and TNR as our metric, since it is effective for evaluating classifiers on imbalanced data. A small caveat: when the geometric mean comes out as 0, it means that at least one of TPR, TNR equals 0, although the other may still be high. It is therefore worth applying another metric as well - we also used graphical measures. Examining the influence of standardization and normalization on the performance of the classification algorithmsIt is commonly assumed that it is worth standardizing/normalizing the data before modelling, so we will check whether this always gives better results. For this check we used the classifiers: Random Forest, Logistic Regression and XGBoost.
###Code
GMscore = make_scorer(geometric_mean_score, average = 'binary')
def examineTransformingInfluence(model, name, score):
standarizer = StandardScaler()
minmax = MinMaxScaler()
standarizing_pipeline = make_pipeline(standarizer, model)
normalizing_pipeline = make_pipeline(minmax, model)
CVscores_no_transform = cross_val_score(model, train_data, train_target, cv = KFold(5), scoring=score)
CVscores_standarization = cross_val_score(standarizing_pipeline, train_data, train_target, cv = KFold(5), scoring=score)
CVscores_normalization = cross_val_score(normalizing_pipeline, train_data, train_target, cv = KFold(5), scoring=score)
print(name + ":")
print('Mean score without transformation: {0:.4f}, standard deviation: {1:.4f}'.format(CVscores_no_transform.mean(), CVscores_no_transform.std()))
print('Mean score after standardization: {0:.4f}, standard deviation: {1:.4f}'.format(CVscores_standarization.mean(), CVscores_standarization.std()))
print('Mean score after normalization: {0:.4f}, standard deviation: {1:.4f}\n'.format(CVscores_normalization.mean(), CVscores_normalization.std()))
lr_model = LogisticRegression(random_state=58, max_iter=2000)
xgb = XGBClassifier()
rf = RandomForestClassifier(random_state = 354)
examineTransformingInfluence(lr_model, 'Logistic regression', GMscore)
examineTransformingInfluence(xgb, 'XGBoost', GMscore)
examineTransformingInfluence(rf, 'Random Forest', GMscore)
###Output
Logistic regression:
Mean score without transformation: 0.7515, standard deviation: 0.0690
Mean score after standardization: 0.7816, standard deviation: 0.0738
Mean score after normalization: 0.2892, standard deviation: 0.0788
XGBoost:
Mean score without transformation: 0.9418, standard deviation: 0.0277
Mean score after standardization: 0.9418, standard deviation: 0.0277
Mean score after normalization: 0.9418, standard deviation: 0.0277
Random Forest:
Mean score without transformation: 0.9058, standard deviation: 0.0525
Mean score after standardization: 0.9058, standard deviation: 0.0525
Mean score after normalization: 0.9058, standard deviation: 0.0525
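###Markdown
As a quick sanity check on the metric itself, the cell below (an illustrative sketch with made-up labels) computes sqrt(TPR * TNR) by hand on a tiny imbalanced example and compares it with imblearn's geometric_mean_score.
###Code
# made-up labels for illustration: 8 negatives, 2 positives
y_true_toy = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
y_pred_toy = np.array([0, 0, 0, 0, 0, 0, 1, 0, 1, 0])
tpr = ((y_pred_toy == 1) & (y_true_toy == 1)).sum() / (y_true_toy == 1).sum()  # 1/2
tnr = ((y_pred_toy == 0) & (y_true_toy == 0)).sum() / (y_true_toy == 0).sum()  # 7/8
print(np.sqrt(tpr * tnr))                                              # ~0.6614
print(geometric_mean_score(y_true_toy, y_pred_toy, average='binary'))  # should match
###Output
_____no_output_____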
###Markdown
A short conclusion - in the following steps it is worth considering standardization. Automated search for optimal parameters for different modelsWe wrote a tuner function which, for five models (Random Forest, K Neighbors, Logistic Regression, XGBoost and GradientBoostingClassifier), checks whether standardization improves the model and selects the best hyperparameters (trying both grid search and random search).
###Code
def tuner(data_obj, target_obj):
"""
The function takes as input a data frame of explanatory variables and the target variable.
It performs hyperparameter tuning and compares the quality of the classification algorithms with and without standardization of the data.
For each of the classifiers Random Forest, K Neighbors, Logistic Regression, XGBoost and GradientBoostingClassifier it searches for the optimal set of hyperparameters,
both over a grid of candidate values and randomly, using the indicated values.
While running, the function prints the best-tuned set for each model, both for the random search and for the grid search.
It also reports whether applying standardization was beneficial in that case.
The returned value is a dictionary of five objects of type RandomizedSearchCV or GridSearchCV - whichever was better for each model.
The best estimator can be extracted from each such object via its .best_estimator_ attribute.
"""
modeldict = {
'RandomForestClassifier': RandomForestClassifier(random_state=12),
'KNeighborsClassifier': KNeighborsClassifier(),
'LogisticRegression': LogisticRegression(random_state=342),
'XGBClassifier': XGBClassifier(random_state=23),
'GradientBoostingClassifier': GradientBoostingClassifier(random_state=67)
}
params = {
'RandomForestClassifier':{
"randomforestclassifier__n_estimators": [100, 200, 500, 1000],
"randomforestclassifier__max_features": ["auto", "log2"],
"randomforestclassifier__criterion": ['gini', 'entropy'],
"randomforestclassifier__max_depth": [None, 3, 7]
},
'KNeighborsClassifier': {
'kneighborsclassifier__n_neighbors': [3, 5, 7, 9],
'kneighborsclassifier__weights': ['uniform', 'distance'],
'kneighborsclassifier__algorithm': ['ball_tree', 'kd_tree', 'brute']
},
'LogisticRegression': {
'logisticregression__solver': ['newton-cg', 'sag', 'lbfgs'],
'logisticregression__max_iter': [100, 300]
},
'XGBClassifier': {
'xgbclassifier__max_depth': [3, 5, 8],
'xgbclassifier__learning_rate': [0.05, 0.1, 0.5],
'xgbclassifier__booster': ['gbtree', 'dart']
},
'GradientBoostingClassifier': {
'gradientboostingclassifier__n_estimators': [100, 400, 800],
'gradientboostingclassifier__learning_rate': [0.1, 0.5],
'gradientboostingclassifier__min_samples_split': [1, 3],
'gradientboostingclassifier__max_depth': [3, 8, 10],
'gradientboostingclassifier__ccp_alpha': [0.0, 0.9]
}
}
params_random = {
'RandomForestClassifier':{
"randomforestclassifier__n_estimators": randint(low = 50, high = 1300),#random.sample(range(50, 1300), 3),
"randomforestclassifier__max_features": ["auto", "log2"],
"randomforestclassifier__criterion": ['gini', 'entropy'],
"randomforestclassifier__max_depth": [None] + random.sample(range(1, 15), 2)
},
'KNeighborsClassifier': {
'kneighborsclassifier__n_neighbors': randint(low = 1, high = 15),#random.sample(range(1, 15), 3),
'kneighborsclassifier__weights': ['uniform', 'distance'],
'kneighborsclassifier__algorithm': ['ball_tree', 'kd_tree', 'brute']
},
'LogisticRegression': {
'logisticregression__solver': ['newton-cg', 'sag', 'lbfgs'],
'logisticregression__max_iter': randint(low = 100, high = 800),#random.sample(range(100, 800), 3)
},
'XGBClassifier': {
'xgbclassifier__max_depth': [None] + random.sample(range(2, 12), 3),
'xgbclassifier__learning_rate': expon(0.08),
'xgbclassifier__booster': ['gbtree', 'dart']
},
'GradientBoostingClassifier': {
'gradientboostingclassifier__n_estimators': randint(low = 50, high = 1300),#random.sample(range(50, 1300), 2),
'gradientboostingclassifier__learning_rate': expon(0.08),
'gradientboostingclassifier__min_samples_split': randint(low = 1, high = 8),#random.sample(range(1, 8), 2),
'gradientboostingclassifier__max_depth': randint(low = 2, high = 15),#random.sample(range(2, 15), 2),
'gradientboostingclassifier__ccp_alpha': expon(0.06)
}
}
A = {}
standarizer = StandardScaler()
for modelname in modeldict.keys():
model = modeldict[modelname]
param = params[modelname]
param_random = params_random[modelname]
standarizing_pipeline = make_pipeline(standarizer, model)
simple_pipeline = make_pipeline(model)
searcher = GridSearchCV(simple_pipeline, param_grid=param, cv=5, scoring = GMscore, n_jobs = -1)
searcher_with_standarization = GridSearchCV(standarizing_pipeline, param_grid=param, cv = 5, scoring = GMscore)
searcher.fit(data_obj, target_obj)
searcher_with_standarization.fit(data_obj, target_obj)
if searcher.best_score_ >= searcher_with_standarization.best_score_:
GSCV = searcher
gstnd = "NOT standarized"
else:
GSCV = searcher_with_standarization
gstnd = "STANDARIZED"
randomsearcher = RandomizedSearchCV(simple_pipeline, param_distributions=param_random, cv = 5, n_jobs=-1, random_state=59, scoring = GMscore)
randomsearcher_with_standarization = RandomizedSearchCV(standarizing_pipeline, param_distributions= param_random, cv = 5, n_jobs=-1, random_state=59, scoring = GMscore)
randomsearcher.fit(data_obj, target_obj)
randomsearcher_with_standarization.fit(data_obj, target_obj)
if randomsearcher.best_score_ >= randomsearcher_with_standarization.best_score_:
RSCV = randomsearcher
rstnd = "NOT standarized"
else:
RSCV = randomsearcher_with_standarization
rstnd = "STANDARIZED"
print(modelname + ':')
print("Random Search CV:\nBest parameters are:\n{0}\n{1}\nThe score: {2:.6f}".format(RSCV.best_params_, rstnd, RSCV.best_score_))
print("Grid Search CV:\nBest parameters are:\n{0}\n{1}\nThe score: {2:.6f}\n".format(GSCV.best_params_, gstnd, GSCV.best_score_))
best_model = GSCV if GSCV.best_score_ >= RSCV.best_score_ else RSCV
A.update({modelname: best_model})
return A
###Output
_____no_output_____
###Markdown
Testing on our chosen dataset
###Code
# takes over 15 minutes - time to brew a cup of tea. Or five.
dict_of_best_classifiers = tuner(train_data, train_target)
###Output
RandomForestClassifier:
Random Search CV:
Best parameters are:
{'randomforestclassifier__criterion': 'gini', 'randomforestclassifier__max_depth': 10, 'randomforestclassifier__max_features': 'auto', 'randomforestclassifier__n_estimators': 630}
STANDARIZED
The score: 0.915979
Grid Search CV:
Best parameters are:
{'randomforestclassifier__criterion': 'gini', 'randomforestclassifier__max_depth': None, 'randomforestclassifier__max_features': 'auto', 'randomforestclassifier__n_estimators': 200}
NOT standarized
The score: 0.922336
KNeighborsClassifier:
Random Search CV:
Best parameters are:
{'kneighborsclassifier__algorithm': 'kd_tree', 'kneighborsclassifier__n_neighbors': 5, 'kneighborsclassifier__weights': 'distance'}
STANDARIZED
The score: 0.718243
Grid Search CV:
Best parameters are:
{'kneighborsclassifier__algorithm': 'ball_tree', 'kneighborsclassifier__n_neighbors': 5, 'kneighborsclassifier__weights': 'distance'}
STANDARIZED
The score: 0.718243
LogisticRegression:
Random Search CV:
Best parameters are:
{'logisticregression__max_iter': 277, 'logisticregression__solver': 'newton-cg'}
STANDARIZED
The score: 0.774083
Grid Search CV:
Best parameters are:
{'logisticregression__max_iter': 100, 'logisticregression__solver': 'newton-cg'}
STANDARIZED
The score: 0.774083
XGBClassifier:
Random Search CV:
Best parameters are:
{'xgbclassifier__booster': 'dart', 'xgbclassifier__learning_rate': 0.30293307560200333, 'xgbclassifier__max_depth': 9}
NOT standarized
The score: 0.933395
Grid Search CV:
Best parameters are:
{'xgbclassifier__booster': 'gbtree', 'xgbclassifier__learning_rate': 0.5, 'xgbclassifier__max_depth': 3}
NOT standarized
The score: 0.936878
GradientBoostingClassifier:
Random Search CV:
Best parameters are:
{'gradientboostingclassifier__ccp_alpha': 2.6374840925043395, 'gradientboostingclassifier__learning_rate': 0.25182157172314906, 'gradientboostingclassifier__max_depth': 13, 'gradientboostingclassifier__min_samples_split': 6, 'gradientboostingclassifier__n_estimators': 107}
NOT standarized
The score: 0.000000
Grid Search CV:
Best parameters are:
{'gradientboostingclassifier__ccp_alpha': 0.0, 'gradientboostingclassifier__learning_rate': 0.5, 'gradientboostingclassifier__max_depth': 3, 'gradientboostingclassifier__min_samples_split': 3, 'gradientboostingclassifier__n_estimators': 100}
NOT standarized
The score: 0.926539
###Markdown
Conclusions:- in every case, grid search for the optimal parameters gave results no worse than random search (sometimes even better)- standardizing the input data is beneficial for Logistic Regression, K Neighbors and sometimes also for Random Forest- the best model overall turned out to be XGBoost. Let's look at the results for the best model.
###Code
pd.DataFrame(dict_of_best_classifiers['XGBClassifier'].cv_results_)
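# (Hedged aside, not part of the original notebook) Besides the full cv_results_ table,
# the fitted search object exposes the winning pipeline and its cross-validated score
# directly through standard scikit-learn attributes:
best_xgb_search = dict_of_best_classifiers['XGBClassifier']
best_xgb_pipeline = best_xgb_search.best_estimator_   # the refitted winning pipeline
best_xgb_cv_score = best_xgb_search.best_score_       # its mean cross-validation score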
###Output
_____no_output_____
###Markdown
Testing on the dataset without the `_measured` columns. A check whether the results are better in that case.
###Code
no_measured_train_data = train_data.drop(['TSH_measured', 'T3_measured', 'TT4_measured', 'T4U_measured', 'FTI_measured'], axis = 1)
no_measured_test_data = test_data.drop(['TSH_measured', 'T3_measured', 'TT4_measured', 'T4U_measured', 'FTI_measured'], axis = 1)
no_measured_dict_of_best_classifiers = tuner(no_measured_train_data, train_target)
###Output
RandomForestClassifier:
Random Search CV:
Best parameters are:
{'randomforestclassifier__criterion': 'entropy', 'randomforestclassifier__max_depth': None, 'randomforestclassifier__max_features': 'log2', 'randomforestclassifier__n_estimators': 553}
NOT standarized
The score: 0.903882
Grid Search CV:
Best parameters are:
{'randomforestclassifier__criterion': 'gini', 'randomforestclassifier__max_depth': None, 'randomforestclassifier__max_features': 'auto', 'randomforestclassifier__n_estimators': 1000}
STANDARIZED
The score: 0.903894
KNeighborsClassifier:
Random Search CV:
Best parameters are:
{'kneighborsclassifier__algorithm': 'kd_tree', 'kneighborsclassifier__n_neighbors': 5, 'kneighborsclassifier__weights': 'distance'}
STANDARIZED
The score: 0.703292
Grid Search CV:
Best parameters are:
{'kneighborsclassifier__algorithm': 'ball_tree', 'kneighborsclassifier__n_neighbors': 5, 'kneighborsclassifier__weights': 'distance'}
STANDARIZED
The score: 0.703292
LogisticRegression:
Random Search CV:
Best parameters are:
{'logisticregression__max_iter': 305, 'logisticregression__solver': 'sag'}
STANDARIZED
The score: 0.781375
Grid Search CV:
Best parameters are:
{'logisticregression__max_iter': 300, 'logisticregression__solver': 'sag'}
STANDARIZED
The score: 0.781375
XGBClassifier:
Random Search CV:
Best parameters are:
{'xgbclassifier__booster': 'gbtree', 'xgbclassifier__learning_rate': 0.23566172442592875, 'xgbclassifier__max_depth': 7}
NOT standarized
The score: 0.930358
Grid Search CV:
Best parameters are:
{'xgbclassifier__booster': 'gbtree', 'xgbclassifier__learning_rate': 0.5, 'xgbclassifier__max_depth': 3}
NOT standarized
The score: 0.933912
GradientBoostingClassifier:
Random Search CV:
Best parameters are:
{'gradientboostingclassifier__ccp_alpha': 2.6374840925043395, 'gradientboostingclassifier__learning_rate': 0.25182157172314906, 'gradientboostingclassifier__max_depth': 13, 'gradientboostingclassifier__min_samples_split': 6, 'gradientboostingclassifier__n_estimators': 107}
NOT standarized
The score: 0.000000
Grid Search CV:
Best parameters are:
{'gradientboostingclassifier__ccp_alpha': 0.0, 'gradientboostingclassifier__learning_rate': 0.5, 'gradientboostingclassifier__max_depth': 3, 'gradientboostingclassifier__min_samples_split': 3, 'gradientboostingclassifier__n_estimators': 400}
NOT standarized
The score: 0.926719
###Markdown
Conclusion - removing the `_measured` columns turned out to have no positive effect on the algorithms' performance - only Logistic Regression achieved slightly higher scores. The optimal parameter values did change - for example, the number of estimators for Random Forest with Grid Search rose from 200 to 1000. XGBoost was once again by far the best. Testing with dimensionality reduction: dropping TT4 (correlated with FTI) and referral_source. Recall that the correlation coefficient between FTI and TT4 was 0.79, and referral_source only identifies the hospital the data came from, which should be of no use for assessing the disease.
###Code
reduced_train_data = train_data.drop(['TT4_measured', 'TT4', 'referral_source'], axis = 1)
reduced_dict_of_best_classificators = tuner(reduced_train_data, train_target)
###Output
RandomForestClassifier:
Random Search CV:
Best parameters are:
{'randomforestclassifier__criterion': 'entropy', 'randomforestclassifier__max_depth': None, 'randomforestclassifier__max_features': 'log2', 'randomforestclassifier__n_estimators': 553}
STANDARIZED
The score: 0.890266
Grid Search CV:
Best parameters are:
{'randomforestclassifier__criterion': 'gini', 'randomforestclassifier__max_depth': None, 'randomforestclassifier__max_features': 'auto', 'randomforestclassifier__n_estimators': 200}
NOT standarized
The score: 0.895893
KNeighborsClassifier:
Random Search CV:
Best parameters are:
{'kneighborsclassifier__algorithm': 'ball_tree', 'kneighborsclassifier__n_neighbors': 4, 'kneighborsclassifier__weights': 'distance'}
STANDARIZED
The score: 0.727376
Grid Search CV:
Best parameters are:
{'kneighborsclassifier__algorithm': 'ball_tree', 'kneighborsclassifier__n_neighbors': 5, 'kneighborsclassifier__weights': 'distance'}
STANDARIZED
The score: 0.714999
LogisticRegression:
Random Search CV:
Best parameters are:
{'logisticregression__max_iter': 166, 'logisticregression__solver': 'lbfgs'}
NOT standarized
The score: 0.740157
Grid Search CV:
Best parameters are:
{'logisticregression__max_iter': 100, 'logisticregression__solver': 'lbfgs'}
NOT standarized
The score: 0.746458
XGBClassifier:
Random Search CV:
Best parameters are:
{'xgbclassifier__booster': 'dart', 'xgbclassifier__learning_rate': 0.30293307560200333, 'xgbclassifier__max_depth': 4}
NOT standarized
The score: 0.916240
Grid Search CV:
Best parameters are:
{'xgbclassifier__booster': 'gbtree', 'xgbclassifier__learning_rate': 0.05, 'xgbclassifier__max_depth': 5}
NOT standarized
The score: 0.930577
GradientBoostingClassifier:
Random Search CV:
Best parameters are:
{'gradientboostingclassifier__ccp_alpha': 2.6374840925043395, 'gradientboostingclassifier__learning_rate': 0.25182157172314906, 'gradientboostingclassifier__max_depth': 13, 'gradientboostingclassifier__min_samples_split': 6, 'gradientboostingclassifier__n_estimators': 107}
NOT standarized
The score: 0.000000
Grid Search CV:
Best parameters are:
{'gradientboostingclassifier__ccp_alpha': 0.0, 'gradientboostingclassifier__learning_rate': 0.1, 'gradientboostingclassifier__max_depth': 3, 'gradientboostingclassifier__min_samples_split': 3, 'gradientboostingclassifier__n_estimators': 100}
NOT standarized
The score: 0.924744
###Markdown
Modifying the datasets once again did not improve the models. The vast majority of the best scores were lower than on the dataset with all columns (the exception being K Neighbors, which performs poorly anyway). If the functions' running time decreased after removing the columns, it did so only slightly - we did not run precise timings. One could say that these columns do not have much influence after all, which raises the question of how much they matter for assessing thyroid disease - for now we will keep working on the basic, unchanged dataset, but we will return to the topic of column selection at the end. ULTIMATELY UNUSED - let's return to our first set of best models and summarize them
###Code
# def removefromdict(slownik, klucz):
# r = copy.deepcopy(slownik)
# del r[klucz]
# return r
# selected_classifiers = removefromdict(dict_of_best_classifiers, 'KNeighborsClassifier')
# selected_classifiers = removefromdict(selected_classifiers, 'GradientBoostingClassifier')
# #selected_classifiers1 = dict_of_best_classifiers
# best_pipelines = [selected_classifiers[j].best_estimator_ for j in selected_classifiers.keys() ]
###Output
_____no_output_____
###Markdown
Evaluating the best classifiers on the test data. We already know which parameters give the best results for our 5 models, so it is time to check their performance on the test data, i.e. on what we actually want to predict. We will use the following metrics: the geometric mean (GM), the ROC curve and the Precision-Recall curve.
###Code
def test_models(data_test, target_test, pplns):
"""
    The function takes test data, a list of target values and a list of Pipeline objects, evaluates them and visualises their performance.
    It prints the geometric mean scores and shows two plots: the ROC curve (with AUC) and the precision-recall curve.
"""
i=0
precision = [0]*len(pplns)
recall = [0]*len(pplns)
listap = [0]*len(pplns)
tpr = [0]*len(pplns)
fpr = [0]*len(pplns)
listauc = [0]*len(pplns)
listaucpred = [0]*len(pplns)
names = [0]*len(pplns)
for pipe in pplns:
pipename = pipe.steps[0][0] if pipe.steps[0][0] != 'standardscaler' else pipe.steps[1][0]
names[i] = pipename
y_pred = pipe.predict(data_test)
#print("Classification report of model {0}:\n{1}".format(pipename, classification_report(target_test, y_pred)))
print('GM score for {0} model: {1:.6f}\n'.format(pipename, geometric_mean_score(target_test, y_pred, average='binary')))
probabilities = pipe.predict_proba(data_test)[:, 1]
precision[i] = dict()
recall[i] = dict()
listap[i] = dict()
tpr[i] = dict()
fpr[i] = dict()
listauc[i] = dict()
fpr[i], tpr[i], _ = roc_curve(target_test, probabilities)
listauc[i] = auc(fpr[i], tpr[i])
precision[i], recall[i], _ = precision_recall_curve(target_test, probabilities)
listaucpred[i] = auc(recall[i], precision[i])
listap[i] = average_precision_score(target_test, probabilities)
i+=1
lw = 2
    colors = ['red', 'green', 'blue', 'orange', 'yellow'] ### extend this list if there are more pipelines
labels = []
lines = []
    ## First plot - Precision/Recall
plt.figure(figsize=(10, 6))
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.01])
plt.xlabel('Precision')
plt.ylabel('Recall')
plt.title('Precision-recall curve')
f_scores = np.linspace(0.2, 0.8, num=4)
for f_score in f_scores:
x = np.linspace(0.01, 1)
y = f_score * x / (2 * x - f_score)
l, = plt.plot(x[y >= 0], y[y >= 0], color='gray', alpha=0.2)
plt.annotate('f1={0:0.1f}'.format(f_score), xy=(0.9, y[45] + 0.02))
labels.append('iso-f1 curves')
lines.append(l)
for j in range(i):
l, = plt.plot(precision[j], recall[j], color=colors[j], lw=lw)
lines.append(l)
labels.append('Precision-recall for {0} (AP = {1:0.2f}, AUC = {2:.2f})'.format(names[j], listap[j], listaucpred[j]))
plt.legend(lines, labels, loc=(0, 0.08), prop=dict(size=9))
plt.show()
    ## Second plot - ROC
plt.figure(figsize=(10, 6))
for j in range(i):
plt.plot(fpr[j], tpr[j], color=colors[j], lw=lw, label='{0} ROC curve (area = {1:.2f})'.format(names[j], listauc[j]))
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlim([-0.01, 1.01])
plt.ylim([0.0, 1.01])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
best_pipelines1 = [dict_of_best_classifiers[j].best_estimator_ for j in dict_of_best_classifiers.keys()]
test_models(test_data, test_target, best_pipelines1)
###Output
GM score for randomforestclassifier model: 0.894321
GM score for kneighborsclassifier model: 0.569023
GM score for logisticregression model: 0.688632
GM score for xgbclassifier model: 0.892416
GM score for gradientboostingclassifier model: 0.918825
###Markdown
On each of the three metrics, the least effective models are K Neighbors and Logistic Regression. On the graphical metrics Random Forest (red) looks best, although in terms of the geometric mean it comes second, behind the Gradient Boosting Classifier. Feature importance analysis for the best models. We already know which model is best on the test set, but it turned out that removing some columns barely hurts the results, so let's look at which columns actually influence the predictions.
###Code
def featureimportances(data_test, target_test, pplns):
i = 0
colors = ['red', 'green', 'blue', 'orange', 'yellow']
for pipe in pplns:
pipename = pipe.steps[0][0] if pipe.steps[0][0] != 'standardscaler' else pipe.steps[1][0]
try:
rf_importances = pipe.steps[0][1].feature_importances_ if pipe.steps[0][0] != 'standardscaler' else pipe.steps[1][1].feature_importances_
        except AttributeError:
print('Feature importance not available for model: ' + pipename)
i+=1
continue
indices = np.argsort(rf_importances)[::-1]
plt.figure()
plt.title("Feature importance for " + pipename)
        plt.bar(train_data.columns[indices][0:10], rf_importances[indices][0:10],
                color=colors[i], align="center")
plt.xticks(rotation=80, size = 12)
plt.show()
i += 1
featureimportances(test_data, test_target, best_pipelines1)
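# (Hedged aside, not part of the original notebook) The same information as a sorted table,
# shown here for the random-forest pipeline; the classifier is the last step of each pipeline.
rf_pipe = [p for p in best_pipelines1 if p.steps[-1][0] == 'randomforestclassifier'][0]
rf_importance_table = pd.Series(rf_pipe.steps[-1][1].feature_importances_,
                                index=train_data.columns).sort_values(ascending=False)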
###Output
_____no_output_____
###Markdown
For the models checked one can see that the same columns have the greatest influence. Let's now check whether models that use only the k best columns from the feature importance ranking give better or worse results. Selecting the k best variables with SelectKBest(). The study is run for the three best models: Random Forest, XGBClassifier and Gradient Boosting Classifier. We check the ROC and Precision-Recall curves and the GM score for 3, 5, 7, 9, 11 and all columns.
###Code
rf_gbc_xgb = [j.steps[0] for j in best_pipelines1 if j.steps[0][0] in ['randomforestclassifier', 'gradientboostingclassifier', 'xgbclassifier']]
#select_k_ppln = [Pipeline([
# ('select', SelectKBest(k = list_of_k)),
# ('model', m)]) for m in rf_gbc for list_of_k in [3, 5, 7, 9]]
def check_selecting(datatrain, targettrain, datatest, targettest, list_of_model, list_of_k):
"""modyfikacja funkcji test_models na potrzeby oceny modeli po wybraniu k modeli"""
i=0
precision = [0]*(len(list_of_model)*len(list_of_k))
recall = [0]*(len(list_of_model)*len(list_of_k))
listap = [0]*(len(list_of_model)*len(list_of_k))
tpr = [0]*(len(list_of_model)*len(list_of_k))
fpr = [0]*(len(list_of_model)*len(list_of_k))
listauc = [0]*(len(list_of_model)*len(list_of_k))
listaucpred = [0]*(len(list_of_model)*len(list_of_k))
names = [0]*(len(list_of_model)*len(list_of_k))
for model in list_of_model:
for k in list_of_k:
ppl = Pipeline([
('select', SelectKBest(k = k)),
('model', model[1])
])
t0 = time.time()
ppl.fit(datatrain, targettrain)
y_pred = ppl.predict(datatest)
t1 = time.time()
score = geometric_mean_score(targettest, y_pred, average = 'binary')
print(" GM score for {0}: {1:.6f}; Number of selected features: {2}, time of working: {3:.6f}s.\n".format(model[0], score, k, t1-t0))
pipename = model[0]
names[i] = pipename
probabilities = ppl.predict_proba(datatest)[:, 1]
precision[i] = dict()
recall[i] = dict()
listap[i] = dict()
tpr[i] = dict()
fpr[i] = dict()
listauc[i] = dict()
fpr[i], tpr[i], _ = roc_curve(targettest, probabilities)
listauc[i] = auc(fpr[i], tpr[i])
precision[i], recall[i], _ = precision_recall_curve(targettest, probabilities)
listaucpred[i] = auc(recall[i], precision[i])
listap[i] = average_precision_score(targettest, probabilities)
i+=1
lw = 2
    colors = ['red', 'green', 'blue', 'pink', 'yellow', 'brown'] ### extend this list if there are more pipelines
labels = []
lines = []
    ## First plot - Precision/Recall
plt.figure(figsize=(10, 6))
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.01])
plt.xlabel('Precision')
plt.ylabel('Recall')
plt.title('Precision-recall curve')
f_scores = np.linspace(0.2, 0.8, num=4)
for f_score in f_scores:
x = np.linspace(0.01, 1)
y = f_score * x / (2 * x - f_score)
l, = plt.plot(x[y >= 0], y[y >= 0], color='gray', alpha=0.2)
plt.annotate('f1={0:0.1f}'.format(f_score), xy=(0.9, y[45] + 0.02))
labels.append('iso-f1 curves')
lines.append(l)
for j in range(i):
        l, = plt.plot(precision[j], recall[j], color=colors[j % len(colors)], lw=lw)
lines.append(l)
        labels.append('Prec-recall for {0}, {3} columns. (AP = {1:0.2f}, AUC = {2:.3f})'.format(names[j], listap[j], listaucpred[j], list_of_k[j % len(list_of_k)]))
plt.legend(lines, labels, loc=(0, 0.08), prop=dict(size=9))
plt.show()
    ## Second plot - ROC
plt.figure(figsize=(10, 6))
for j in range(i):
        plt.plot(fpr[j], tpr[j], color=colors[j % len(colors)], lw=lw, label='{0} - {2} columns - ROC curve (area = {1:.3f})'.format(names[j], listauc[j], list_of_k[j % len(list_of_k)]))
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlim([-0.01, 1.01])
plt.ylim([0.0, 1.01])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
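# (Hedged aside, not part of the original notebook) SelectKBest can also be inspected directly
# to see which columns survive for a given k (scikit-learn's default score function, f_classif):
selector = SelectKBest(k=7).fit(train_data, train_target)
selected_columns = train_data.columns[selector.get_support()]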
i = 0
check_selecting(train_data, train_target, test_data, test_target, rf_gbc_xgb, [3, 5, 7, 9, 11, 'all'])
###Output
GM score for randomforestclassifier: 0.837982; Number of selected features: 3, time of working: 0.446007s.
GM score for randomforestclassifier: 0.853642; Number of selected features: 5, time of working: 0.510625s.
GM score for randomforestclassifier: 0.893686; Number of selected features: 7, time of working: 0.513846s.
GM score for randomforestclassifier: 0.905683; Number of selected features: 9, time of working: 0.555553s.
GM score for randomforestclassifier: 0.893686; Number of selected features: 11, time of working: 0.519911s.
GM score for randomforestclassifier: 0.894321; Number of selected features: all, time of working: 0.580486s.
###Markdown
For RandomForest and XGB the best results came from the model trained on the full data frame. Gradient Boosting turned out to be the best. **The Gradient Boosting Classifier with 7 selected variables reached a GM score of 0.925, the best seen so far, although the graphical Precision-Recall metric showed that selecting 9 columns would be even better.** Automatic model selection - comparing an automatic classifier with the models already evaluated
###Code
tpot = TPOTClassifier(generations=5,verbosity=2, random_state=234, scoring=GMscore, n_jobs = -2)
tpot.fit(train_data, train_target)
tpot.score(test_data, test_target)
###Output
_____no_output_____
|
notebook/kfold-augmentation.ipynb
|
###Markdown
Model features:- augmentation (6 images generated)- k-fold cross-validation
###Code
import sys
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import cv2
import random
from tqdm import tqdm
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
import keras
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Dense, Dropout, Flatten
# import third-party library
sys.path.append('./my_lib/')
from data_augmentation import DataAugmentation
# import data
csv_train = pd.read_csv('../input/labels.csv')
csv_test = pd.read_csv('../input/sample_submission.csv')
# read training CSV
csv_train.head(10)
# read test csv
csv_test.head(10)
# reduce data size
# csv_train = csv_train.head(1000)
# csv_test = csv_test.head(1000)
# Generate Labels
targets_series = pd.Series(csv_train['breed'])
#print(targets_series)
one_hot = pd.get_dummies(targets_series, sparse = True)
#print(one_hot)
#labels = np.asarray(one_hot)
y_train = np.asarray(one_hot)
y_train =y_train.tolist()
#print(type(y_train))
#print(labels)
n_check = random.randint(0, len(y_train)-1)
#n_check = random.randint(0, len(labels)-1)
print(csv_train['breed'][n_check], 'is encoded as', ''.join((str(i) for i in y_train[n_check])))
im_size = 90
x_train = []
x_test = []
for i, (f, breed) in enumerate(tqdm(csv_train.values)):
img = cv2.imread('../input/train/{}.jpg'.format(f))
x_train.append(cv2.resize(img, (im_size, im_size)))
###Output
100%|██████████| 1000/1000 [00:02<00:00, 360.37it/s]
###Markdown
Use an external module to perform data augmentation. The module can apply:- [ ] Inversion- [ ] Sobel derivative- [ ] Scharr derivative- [ ] Laplacian- [ ] Blur- [ ] Gaussian blur [disabled]- [ ] Median blur- [ ] Bilateral blur- [x] Horizontal flips- [x] Rotation (only the last two options are enabled here; a rough sketch of what they do appears after the augmentation loop below)
###Code
for i, images in enumerate(tqdm(DataAugmentation(x_train,
options={'inverse': False,
'sobel_derivative': False,
'scharr_derivative': False,
'laplacian': False,
'blur': False,
'gaussian_blur': False,
'median_blur': False,
'bilateral_blur': False,
'horizontal_flips': True,
'rotation': True,
'shuffle_result': False}))):
for image in images:
if i == 0:
plt.imshow(image, cmap = 'gray', interpolation = 'bicubic')
plt.show()
x_train.append(image)
y_train.append(y_train[i])
print('dataset became:', len(x_train))
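# (Hedged sketch, not from the original notebook) Roughly what the two enabled options
# (horizontal flips and rotation) could look like for a single image using plain OpenCV;
# the external DataAugmentation module's exact behaviour may differ.
def simple_flip_and_rotate(img, angle=15):
    h, w = img.shape[:2]
    flipped = cv2.flip(img, 1)  # flipCode=1 -> horizontal flip
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)  # rotate about the image centre
    rotated = cv2.warpAffine(img, M, (w, h))
    return [flipped, rotated]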
# check train
n_check = random.randint(0, len(y_train)-1)
print('label:', ''.join((str(i) for i in y_train[n_check])))
plt.imshow(x_train[n_check], cmap = 'gray', interpolation = 'bicubic')
plt.show()
for f in tqdm(csv_test['id'].values):
img = cv2.imread('../input/test/{}.jpg'.format(f))
x_test.append(cv2.resize(img, (im_size, im_size)))
# build np array and normalise them
X_train = np.array(x_train, np.float32) / 255.
y_train = np.array(y_train, np.uint8)
X_test = np.array(x_test, np.float32) / 255.
#array of classes of 1 diension for sklearn
y_train_onedim = np.array([np.argmax(i) for i in y_train], np.uint32)
print("x_train shape:", X_train.shape)
print("y_train shape:", y_train.shape)
print("y_train_onedim shape:", y_train_onedim.shape)
print("x_test shape:", X_test.shape)
num_classes = y_train.shape[1]
classes = csv_test.columns.values[1:]
# Create the base pre-trained model
base_model = VGG16(weights="imagenet", include_top=False, input_shape=(im_size, im_size, 3))
# Add a new top layer
x = base_model.output
x = Flatten()(x)
predictions = Dense(num_classes, activation='softmax')(x)
# This is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
# First: train only the top layers (which were randomly initialized)
for layer in base_model.layers:
layer.trainable = False
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
callbacks_list = [keras.callbacks.EarlyStopping(monitor='val_acc', patience=5, verbose=1)]
model.summary()
from sklearn.model_selection import StratifiedKFold
# Instantiate the cross validator
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
# Collect per-epoch metrics across all folds (the keys must exist before using += below)
history_data = {'loss': [], 'val_loss': [], 'acc': [], 'val_acc': []}
# Loop through the indices the split() method returns
for index, (train_indices, val_indices) in enumerate(skf.split(X_train, y_train_onedim)):
print("Training on fold " + str(index+1) + "/5...")
# Generate batches from indices
xtrain, xval = X_train[train_indices], X_train[val_indices]
ytrain, yval = y_train[train_indices], y_train[val_indices]
# Debug message I guess
# print("Training new iteration on " + str(xtrain.shape[0]) + " training samples, " + str(xval.shape[0]) + " validation samples, this may be a while...")
history = model.fit(xtrain, ytrain, epochs=20, batch_size=48, validation_data=(xval, yval), # callbacks=callbacks_list,
verbose=1)
history_data['loss'] += history.history['loss']
history_data['val_loss'] += history.history['val_loss']
history_data['acc'] += history.history['acc']
history_data['val_acc'] += history.history['val_acc']
# history = model.fit(X_train, Y_train, epochs=5, batch_size=32, validation_data=(X_valid, Y_valid), callbacks=callbacks_list, verbose=1)
# list all data in history
print(history_data.keys())
# summarize history for accuracy
plt.plot(history_data['acc'])
plt.plot(history_data['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history_data['loss'])
plt.plot(history_data['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
preds = model.predict(X_test, verbose=1)
# check predict
n_check = random.randint(0, len(X_test)-1)
plt.imshow(X_test[n_check], cmap = 'gray_r', interpolation = 'bicubic')
plt.show()
pre = model.predict(np.array([X_test[n_check]]))
arg_max = np.argmax(pre)
print(np.max(pre), arg_max, one_hot.columns[arg_max])
###Output
_____no_output_____
|
notebooks/variational_models/2022-02-12-variational-inference.ipynb
|
###Markdown
Variational Inference Goals: G1: Given probability distributions $p$ and $q$, find the divergence (measure of similarity) between themLet us first look at G1. Look at the illustration below. We have a normal distribution $p$ and two other normal distributions $q_1$ and $q_2$. Which of $q_1$ and $q_2$, would we consider closer to $p$? $q_2$, right? To understand the notion of similarity, we use a metric called the KL-divergence given as $D_{KL}(a || b)$ where $a$ and $b$ are the two distributions. For G1, we can say $q_2$ is closer to $p$ compared to $q_1$ as:$D_{KL}(q_2 || p) \lt D_{KL}(q_1 || p)$For the above example, we have the values as $D_{KL}(q_2|| p) = 0.07$ and $D_{KL}(q_1|| p)= 0.35$ G2: assuming $p$ to be fixed, can we find optimum parameters of $q$ to make it as close as possible to $p$The following GIF shows the process of finding the optimum set of parameters for a normal distribution $q$ so that it becomes as close as possible to $p$. This is equivalent of minimizing $D_{KL}(q || p)$The following GIF shows the above but for a two-dimensional distribution. G3: finding the "distance" between two distributions of different familiesThe below image shows the KL-divergence between distribution 1 (mixture of Gaussians) and distribution 2 (Gaussian) G4: optimizing the "distance" between two distributions of different familiesThe below GIF shows the optimization of the KL-divergence between distribution 1 (mixture of Gaussians) and distribution 2 (Gaussian) G5: Approximating the KL-divergence G6: Implementing variational inference for linear regression Basic Imports
###Code
import numpy as np
import matplotlib.pyplot as plt
import torch
import seaborn as sns
import pandas as pd
dist =torch.distributions
sns.reset_defaults()
sns.set_context(context="talk", font_scale=1)
%matplotlib inline
%config InlineBackend.figure_format='retina'
###Output
_____no_output_____
###Markdown
Creating distributions Creating $p\sim\mathcal{N}(1.00, 4.00)$
###Code
p = dist.Normal(1, 4)
z_values = torch.linspace(-5, 15, 200)
prob_values_p = torch.exp(p.log_prob(z_values))
plt.plot(z_values, prob_values_p, label=r"$p\sim\mathcal{N}(1.00, 4.00)$")
sns.despine()
plt.legend()
plt.xlabel("x")
plt.ylabel("PDF")
###Output
_____no_output_____
###Markdown
Creating $q\sim\mathcal{N}(loc, scale)$
###Code
def create_q(loc, scale):
return dist.Normal(loc, scale)
###Output
_____no_output_____
###Markdown
Generating a few qs for different location and scale value
###Code
q = {}
q[(0, 1)] = create_q(0.0, 1.0)
for loc in [0, 1]:
for scale in [1, 2]:
q[(loc, scale)] = create_q(float(loc), float(scale))
plt.plot(z_values, prob_values_p, label=r"$p\sim\mathcal{N}(1.00, 4.00)$", lw=3)
plt.plot(
z_values,
torch.exp(create_q(0.0, 2.0).log_prob(z_values)),
label=r"$q_1\sim\mathcal{N}(0.00, 2.00)$",
lw=2,
linestyle="--",
)
plt.plot(
z_values,
torch.exp(create_q(1.0, 3.0).log_prob(z_values)),
label=r"$q_2\sim\mathcal{N}(1.00, 3.00)$",
lw=2,
linestyle="-.",
)
plt.legend(bbox_to_anchor=(1.04, 1), borderaxespad=0)
plt.xlabel("x")
plt.ylabel("PDF")
sns.despine()
plt.tight_layout()
plt.savefig(
"dkl.png",
dpi=150,
)
#### Computing KL-divergence
q_0_2_dkl = dist.kl_divergence(create_q(0.0, 2.0), p)
q_1_3_dkl = dist.kl_divergence(create_q(1.0, 3.0), p)
print(f"D_KL (q(0, 2)||p) = {q_0_2_dkl:0.2f}")
print(f"D_KL (q(1, 3)||p) = {q_1_3_dkl:0.2f}")
###Output
D_KL (q(0, 2)||p) = 0.35
D_KL (q(1, 3)||p) = 0.07
###Markdown
As mentioned earlier, clearly, $q_2\sim\mathcal{N}(1.00, 3.00)$ seems closer to $p$ Optimizing the KL-divergence between q and pWe could create a grid of (loc, scale) pairs and find the best, as shown below.
###Code
plt.plot(z_values, prob_values_p, label=r"$p\sim\mathcal{N}(1.00, 4.00)$", lw=5)
for loc in [0, 1]:
for scale in [1, 2]:
q_d = q[(loc, scale)]
kl_d = dist.kl_divergence(q[(loc, scale)], p)
plt.plot(
z_values,
torch.exp(q_d.log_prob(z_values)),
label=rf"$q\sim\mathcal{{N}}({loc}, {scale})$"
+ "\n"
+ rf"$D_{{KL}}(q||p)$ = {kl_d:0.2f}",
)
plt.legend(bbox_to_anchor=(1.04, 1), borderaxespad=0)
plt.xlabel("x")
plt.ylabel("PDF")
sns.despine()
###Output
_____no_output_____
###Markdown
Or, we could use continuous optimization to find the best loc and scale parameters for q.
###Code
loc = torch.tensor(8.0, requires_grad=True)
scale = torch.tensor(0.1, requires_grad=True)
loc_array = []
scale_array = []
loss_array = []
opt = torch.optim.Adam([loc, scale], lr=0.05)
for i in range(401):
scale_softplus = torch.functional.F.softplus(scale)
to_learn = dist.Normal(loc=loc, scale=scale_softplus)
loss = dist.kl_divergence(to_learn, p)
loss_array.append(loss.item())
loc_array.append(to_learn.loc.item())
scale_array.append(to_learn.scale.item())
loss.backward()
if i % 100 == 0:
print(
f"Iteration: {i}, Loss: {loss.item():0.2f}, Loc: {loc.item():0.2f}, Scale: {scale_softplus.item():0.2f}"
)
opt.step()
opt.zero_grad()
plt.plot(torch.tensor(scale_array))
plt.plot(torch.tensor(loc_array))
plt.plot(torch.tensor(loss_array))
###Output
_____no_output_____
###Markdown
After training, we are able to recover the scale and loc very close to that of $p$ Animation!
###Code
from matplotlib import animation
fig = plt.figure(tight_layout=True, figsize=(8, 4))
ax = fig.gca()
def animate(i):
ax.clear()
ax.plot(z_values, prob_values_p, label=r"$p\sim\mathcal{N}(1.00, 4.00)$", lw=5)
to_learn_q = dist.Normal(loc = loc_array[i], scale=scale_array[i])
loss = loss_array[i]
ax.plot(
z_values,
torch.exp(to_learn_q.log_prob(z_values)),
label=rf"$q\sim \mathcal{{N}}({loc:0.2f}, {scale:0.2f})$",
)
ax.set_title(rf"Iteration: {i}, $D_{{KL}}(q||p)$: {loss:0.2f}")
ax.legend(bbox_to_anchor=(1.1, 1), borderaxespad=0)
ax.set_ylim((0, 1))
ax.set_xlim((-5, 15))
ax.set_xlabel("x")
ax.set_ylabel("PDF")
sns.despine()
ani = animation.FuncAnimation(fig, animate, frames=350)
plt.close()
ani.save("kl_qp.gif", writer="imagemagick", fps=60)
###Output
_____no_output_____
###Markdown
 Finding the KL divergence for two distributions from different familiesLet us rework our example with `p` coming from a mixture of Gaussian distribution and `q` being Normal.
###Code
p_s = dist.MixtureSameFamily(
mixture_distribution=dist.Categorical(probs=torch.tensor([0.5, 0.5])),
component_distribution=dist.Normal(
loc=torch.tensor([-0.2, 1]), scale=torch.tensor([0.4, 0.5]) # One for each component.
),
)
p_s
plt.plot(z_values, torch.exp(p_s.log_prob(z_values)))
sns.despine()
###Output
_____no_output_____
###Markdown
Let us create two Normal distributions q_1 and q_2 and plot them to see which looks closer to p_s.
###Code
q_1 = create_q(3, 1)
q_2 = create_q(3, 4.5)
prob_values_p_s = torch.exp(p_s.log_prob(z_values))
prob_values_q_1 = torch.exp(q_1.log_prob(z_values))
prob_values_q_2 = torch.exp(q_2.log_prob(z_values))
plt.plot(z_values, prob_values_p_s, label=r"MOG")
plt.plot(z_values, prob_values_q_1, label=r"$q_1\sim\mathcal{N} (3, 1.0)$")
plt.plot(z_values, prob_values_q_2, label=r"$q_2\sim\mathcal{N} (3, 4.5)$")
sns.despine()
plt.legend()
plt.xlabel("x")
plt.ylabel("PDF")
plt.tight_layout()
plt.savefig(
"dkl-different.png",
dpi=150,
)
try:
dist.kl_divergence(q_1, p_s)
except NotImplementedError:
print(f"KL divergence not implemented between {q_1.__class__} and {p_s.__class__}")
###Output
KL divergence not implemented between <class 'torch.distributions.normal.Normal'> and <class 'torch.distributions.mixture_same_family.MixtureSameFamily'>
###Markdown
As we see above, we cannot compute the KL divergence directly. The core idea now is to use Monte Carlo sampling: draw samples from q and average the log-density ratio to estimate the expectation. The following function does that.
###Code
def kl_via_sampling(q, p, n_samples=100000):
# Get samples from q
sample_set = q.sample([n_samples])
# Use the definition of KL-divergence
return torch.mean(q.log_prob(sample_set) - p.log_prob(sample_set))
dist.kl_divergence(q_1, q_2)
kl_via_sampling(q_1, q_2)
kl_via_sampling(q_1, p_s), kl_via_sampling(q_2, p_s)
###Output
_____no_output_____
###Markdown
As we can see from KL divergence calculations, `q_1` is closer to our Gaussian mixture distribution. Optimizing the KL divergence for two distributions from different familiesWe saw that we can calculate the KL divergence between two different distribution families via sampling. But, as we did earlier, will we be able to optimize the parameters of our target surrogate distribution? The answer is no, not directly, because we have introduced sampling into the computation. However, there is still a way -- by reparameterization! Our surrogate q in this case is parameterized by `loc` and `scale`. The key idea here is to generate samples from a standard normal distribution (loc=0, scale=1) and then apply an affine transformation on the generated samples to get the samples generated from q. See my other post on sampling from a normal distribution to understand this better.The loss can now be thought of as a function of `loc` and `scale`.
###Code
n_samples = 1000
def loss(loc, scale):
q = dist.Normal(loc=loc, scale=scale)
std_normal = dist.Normal(loc=0.0, scale=1.0)
sample_set = std_normal.sample([n_samples])
sample_set = loc + scale * sample_set
return torch.mean(q.log_prob(sample_set) - p_s.log_prob(sample_set))
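# (Hedged aside, not from the original notebook) For a Normal distribution the manual
# loc + scale * eps construction above is effectively what rsample() does internally,
# so an equivalent reparameterized loss could also be written as:
def loss_rsample(loc, scale):
    q = dist.Normal(loc=loc, scale=scale)
    sample_set = q.rsample([n_samples])  # differentiable w.r.t. loc and scale
    return torch.mean(q.log_prob(sample_set) - p_s.log_prob(sample_set))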
###Output
_____no_output_____
###Markdown
Having defined the loss above, we can now optimize `loc` and `scale` to minimize the KL-divergence.
###Code
# optimizer = tf.optimizers.Adam(learning_rate=0.05)  # unused; the torch optimizer below is used instead
loc = torch.tensor(8.0, requires_grad=True)
scale = torch.tensor(0.1, requires_grad=True)
loc_array = []
scale_array = []
loss_array = []
opt = torch.optim.Adam([loc, scale], lr=0.05)
for i in range(401):
scale_softplus = torch.functional.F.softplus(scale)
to_learn = dist.Normal(loc=loc, scale=scale_softplus)
loss_value = loss(loc, scale_softplus)
loss_array.append(loss_value.item())
loc_array.append(to_learn.loc.item())
scale_array.append(to_learn.scale.item())
loss_value.backward()
if i % 100 == 0:
print(
f"Iteration: {i}, Loss: {loss_value.item():0.2f}, Loc: {loc.item():0.2f}, Scale: {scale_softplus.item():0.2f}"
)
opt.step()
opt.zero_grad()
q_s = dist.Normal(loc=loc, scale=scale_softplus)
q_s
prob_values_p_s = torch.exp(p_s.log_prob(z_values))
prob_values_q_s = torch.exp(q_s.log_prob(z_values))
plt.plot(z_values, prob_values_p_s.detach(), label=r"p")
plt.plot(z_values, prob_values_q_s.detach(), label=r"q")
sns.despine()
plt.legend()
plt.xlabel("x")
plt.ylabel("PDF")
prob_values_p_s = torch.exp(p_s.log_prob(z_values))
fig = plt.figure(tight_layout=True, figsize=(8, 4))
ax = fig.gca()
n_iter = 300
def a(iteration):
ax.clear()
loc = loc_array[iteration]
scale = scale_array[iteration]
q_s = dist.Normal(loc=loc, scale=scale)
prob_values_q_s = torch.exp(q_s.log_prob(z_values))
ax.plot(z_values, prob_values_p_s, label=r"p")
ax.plot(z_values, prob_values_q_s, label=r"q")
ax.set_title(f"Iteration {iteration}, Loss: {loss_array[iteration]:0.2f}")
ax.set_ylim((-0.05, 1.05))
ax.legend()
ani_mg = animation.FuncAnimation(fig, a, frames=n_iter)
plt.close()
plt.plot(loc_array, label="loc")
plt.plot(scale_array, label="scale")
plt.xlabel("Iterations")
sns.despine()
plt.legend()
ani_mg.save("kl_qp_mg.gif", writer="imagemagick")
###Output
_____no_output_____
###Markdown
KL-Divergence and ELBOLet us consider linear regression. We have parameters $\theta \in R^D$ and we define a prior over them. Let us assume we define prior $p(\theta)\sim \mathcal{N}_D(\mu, \Sigma)$. Now, given our dataset $D = \{X, y\}$ and a parameter vector $\theta$, we can define our likelihood as $p(D|\theta)$ or $p(y|X, \theta) = \prod_{i=1}^{n} p(y_i|x_i, \theta) = \prod_{i=1}^{n} \mathcal{N}(y_i|x_i^T\theta, \sigma^2) $As per Bayes' rule, we can obtain the posterior over $\theta$ as:$p(\theta|D) = \dfrac{p(D|\theta)p(\theta)}{p(D)}$Now, in general $p(D)$ is hard to compute. So, in variational inference, our aim is to use a surrogate distribution $q(\theta)$ such that it is very close to $p(\theta|D)$. We do so by minimizing the KL divergence between $q(\theta)$ and $p(\theta|D)$.Aim: $$q^*(\theta) = \underset{q(\theta) \in \mathcal{Q}}{\mathrm{argmin~}} D_{KL}[q(\theta)||p(\theta|D)]$$Now, $$D_{KL}[q(\theta)||p(\theta|D)] = \mathbb{E}_{q(\theta)}[\log\frac{q(\theta)}{p(\theta|D)}]$$Now, $$ = \mathbb{E}_{q(\theta)}[\log\frac{q(\theta)p(D)}{p(\theta, D)}]$$Now, $$ = \mathbb{E}_{q(\theta)}[\log q(\theta)]- \mathbb{E}_{q(\theta)}[\log p(\theta, D)] + \mathbb{E}_{q(\theta)}[\log p(D)] $$$$= \mathbb{E}_{q(\theta)}[\log q(\theta)]- \mathbb{E}_{q(\theta)}[\log p(\theta, D)] + \log p(D) $$Now, $p(D) \in [0, 1]$. Thus, $\log p(D) \in [-\infty, 0]$Now, let us look at the quantities:$$\underbrace{D_{KL}[q(\theta)||p(\theta|D)]}_{\geq 0} = \underbrace{\mathbb{E}_{q(\theta)}[\log q(\theta)]- \mathbb{E}_{q(\theta)}[\log p(\theta, D)]}_{-\text{ELBO(q)}} + \underbrace{\log p(D)}_{\leq 0}$$Thus, we know that $\log p(D) \geq \text{ELBO(q)}$Thus, finally we can rewrite the optimisation from$$q^*(\theta) = \underset{q(\theta) \in \mathcal{Q}}{\mathrm{argmin~}} D_{KL}[q(\theta)||p(\theta|D)]$$to$$q^*(\theta) = \underset{q(\theta) \in \mathcal{Q}}{\mathrm{argmax~}} \text{ELBO(q)}$$Now, given our linear regression problem setup, we want to maximize the ELBO.We can do so by the following. As a simple example, let us assume $\theta \in R^2$- Assume some q. Say, a Normal distribution. So, $q\sim \mathcal{N}_2$- Draw samples from q. Say N samples. - Initialize ELBO = 0.0- For each sample: - Let us assume the drawn sample is $[\theta_1, \theta_2]^T$ - Compute log_prob of prior on $[\theta_1, \theta_2]^T$ or `lp = p.log_prob(θ1, θ2)` - Compute log_prob of likelihood on $[\theta_1, \theta_2]^T$ or `ll = l.log_prob(θ1, θ2)` - Compute log_prob of q on $[\theta_1, \theta_2]^T$ or `lq = q.log_prob(θ1, θ2)` - ELBO = ELBO + (ll + lp - lq)- Return ELBO/N
###Code
prior = dist.Normal(loc = 0., scale = 1.)
p = dist.Normal(loc = 5., scale = 1.)
samples = p.sample([1000])
mu = torch.tensor(1.0, requires_grad=True)
def surrogate_sample(mu):
std_normal = dist.Normal(loc = 0., scale=1.)
sample_std_normal = std_normal.sample()
return mu + sample_std_normal
samples_from_surrogate = surrogate_sample(mu)
samples_from_surrogate
def logprob_prior(mu):
return prior.log_prob(mu)
lp = logprob_prior(samples_from_surrogate)
def log_likelihood(mu, samples):
di = dist.Normal(loc=mu, scale=1)
return torch.sum(di.log_prob(samples))
ll = log_likelihood(samples_from_surrogate, samples)
surrogate = dist.Normal(loc=mu, scale=1.0)  # the surrogate q implied by surrogate_sample: N(mu, 1)
ls = surrogate.log_prob(samples_from_surrogate)
def elbo_loss(mu, data_samples):
    # Build the surrogate q(theta) = N(mu, 1) implied by surrogate_sample above
    surrogate = dist.Normal(loc=mu, scale=1.0)
    samples_from_surrogate = surrogate_sample(mu)
    lp = logprob_prior(samples_from_surrogate)
    ll = log_likelihood(samples_from_surrogate, data_samples)
    ls = surrogate.log_prob(samples_from_surrogate)
    return -lp - ll + ls
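# (Hedged sketch, not part of the original notebook) A multi-sample Monte Carlo estimate of
# the negative ELBO, following the N-sample recipe in the markdown above; it reuses
# logprob_prior and log_likelihood defined earlier, and n_mc is a new (hypothetical) name.
def elbo_loss_multi_sample(mu, data_samples, n_mc=32):
    surrogate_q = dist.Normal(loc=mu, scale=1.0)
    total = 0.0
    for _ in range(n_mc):
        theta = surrogate_q.rsample()             # reparameterized sample, keeps gradients w.r.t. mu
        lp = logprob_prior(theta)                 # log p(theta)
        ll = log_likelihood(theta, data_samples)  # log p(D | theta)
        lq = surrogate_q.log_prob(theta)          # log q(theta)
        total = total + (lq - lp - ll)            # negative ELBO contribution
    return total / n_mc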
mu = torch.tensor(1.0, requires_grad=True)
loc_array = []
loss_array = []
opt = torch.optim.Adam([mu], lr=0.02)
for i in range(2000):
loss_val = elbo_loss(mu, samples)
loss_val.backward()
loc_array.append(mu.item())
loss_array.append(loss_val.item())
if i % 100 == 0:
print(
f"Iteration: {i}, Loss: {loss_val.item():0.2f}, Loc: {mu.item():0.3f}"
)
opt.step()
opt.zero_grad()
plt.plot(loss_array)
from numpy.lib.stride_tricks import sliding_window_view
plt.plot(np.average(sliding_window_view(loss_array, window_shape = 10), axis=1))
###Output
_____no_output_____
###Markdown
Linear Regression
###Code
true_theta_0 = 3.
true_theta_1 = 4.
x = torch.linspace(-5, 5, 100)
y_true = true_theta_0 + true_theta_1*x
y_noisy = y_true + torch.normal(mean = torch.zeros_like(x), std = torch.ones_like(x))
plt.plot(x, y_true)
plt.scatter(x, y_noisy, s=20, alpha=0.5)
# Design matrix [1, x] and a prior over theta (theta_prior is defined again in the next cell)
x_dash = torch.vstack((torch.ones_like(x), x)).t()
theta_prior = dist.MultivariateNormal(loc=torch.tensor([0., 0.]), covariance_matrix=torch.eye(2))
y_pred = x_dash@theta_prior.sample()
plt.plot(x, y_pred, label="Fit")
plt.scatter(x, y_noisy, s=20, alpha=0.5, label='Data')
plt.legend()
theta_prior = dist.MultivariateNormal(loc = torch.tensor([0., 0.]), covariance_matrix=torch.eye(2))
def likelihood(theta, x, y):
x_dash = torch.vstack((torch.ones_like(x), x)).t()
d = dist.Normal(loc=x_dash@theta, scale=torch.ones_like(x))
return torch.sum(d.log_prob(y))
likelihood(theta_prior.sample(), x, y_noisy)
loc = torch.tensor([-1., 1.], requires_grad=True)
surrogate_mvn = dist.MultivariateNormal(loc = loc, covariance_matrix=torch.eye(2))
surrogate_mvn
surrogate_mvn.sample()
def surrogate_sample_mvn(loc):
std_normal_mvn = dist.MultivariateNormal(loc = torch.zeros_like(loc), covariance_matrix=torch.eye(loc.shape[0]))
sample_std_normal = std_normal_mvn.sample()
return loc + sample_std_normal
def elbo_loss(loc, x, y):
samples_from_surrogate_mvn = surrogate_sample_mvn(loc)
lp = theta_prior.log_prob(samples_from_surrogate_mvn)
ll = likelihood(samples_from_surrogate_mvn, x, y_noisy)
ls = surrogate_mvn.log_prob(samples_from_surrogate_mvn)
return -lp - ll + ls
loc.shape, x.shape, y_noisy.shape
elbo_loss(loc, x, y_noisy)
loc = torch.tensor([-1., 1.], requires_grad=True)
loc_array = []
loss_array = []
opt = torch.optim.Adam([loc], lr=0.02)
for i in range(10000):
loss_val = elbo_loss(loc, x, y_noisy)
loss_val.backward()
    loc_array.append(loc.detach().clone())
loss_array.append(loss_val.item())
if i % 1000 == 0:
print(
f"Iteration: {i}, Loss: {loss_val.item():0.2f}, Loc: {loc}"
)
opt.step()
opt.zero_grad()
learnt_surrogate = dist.MultivariateNormal(loc = loc, covariance_matrix=torch.eye(2))
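# (Hedged aside, not part of the original notebook) This conjugate Gaussian model also has an
# exact posterior, which is a handy sanity check on the learnt variational loc: with prior
# N(0, I) and unit noise variance, Sigma_post = (I + X^T X)^{-1} and mu_post = Sigma_post X^T y.
exact_post_cov = torch.linalg.inv(torch.eye(2) + x_dash.t() @ x_dash)
exact_post_mean = exact_post_cov @ x_dash.t() @ y_noisy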
y_samples_surrogate = x_dash@learnt_surrogate.sample([500]).t()
plt.plot(x, y_samples_surrogate, alpha = 0.02, color='k');
plt.scatter(x, y_noisy, s=20, alpha=0.5)
x_dash@learnt_surrogate.loc.detach().t()
theta_sd = torch.linalg.cholesky(learnt_surrogate.covariance_matrix)
#y_samples_surrogate = x_dash@learnt_surrogate.loc.t()
#plt.plot(x, y_samples_surrogate, alpha = 0.02, color='k');
#plt.scatter(x, y_noisy, s=20, alpha=0.5)
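# (Hedged sketch, not part of the original notebook) Overlay the posterior-mean fit from the
# learnt surrogate on the noisy data.
y_mean_fit = x_dash @ learnt_surrogate.loc.detach()
plt.plot(x, y_mean_fit, color='r', label='posterior mean fit')
plt.scatter(x, y_noisy, s=20, alpha=0.5, label='data')
plt.legend()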
###Output
_____no_output_____
|
13_python_data.ipynb
|
###Markdown
Data science in Python- Course GitHub repo: https://github.com/pycam/python-data-science- Python website: https://www.python.org/ Session 1.3: Creating functions and modules to write reusable code- [Building reusable and modular code with functions](Building-reusable-and-modular-code-with-functions) - [Exercise 1.3.1](Exercise-1.3.1) - [Exercise 1.3.2](Exercise-1.3.2)- [Create your own module](Create-your-own-module) - [Exercise 1.3.3](Exercise-1.3.3) Mind map Building reusable and modular code with functions So far, we’ve used Python to explore and manipulate individual datasets by hand, much like we would do in a spreadsheet. The beauty of using a programming language like Python, though, comes from the ability to automate data processing through the use of loops and functions.Suppose now that we would like to calculate the average GDP per capita, its median and standard deviation for all continents over all the years. We could write specific conditions for each case, and write the same code over again for the different situation but that would be time consuming, error prone and hard to maintain. A more elegant solution would be to create a **reusable tool** that performs this task with minimum input from the user. To do this, we are going to turn the code we’ve already written into a **function**.Functions are reusable, self-contained pieces of code that are called with a single command. They can be designed to accept arguments as input and return values, but they don’t need to do either. Variables declared inside functions only exist while the function is running and if a variable within the function (a local variable) has the same name as a variable somewhere else in the code, the local variable hides but doesn’t overwrite the other.Every method used in Python (for example, **`print()`**) is a function, and the libraries we import (say, `csv` or `os`) are a collection of functions. We will first use functions that are housed within the same code that uses them, and then create our own module to write functions that can be used by different programs. Function definitionFunctions are declared following this general structure:
###Code
def this_is_the_function_name(input_argument1, input_argument2):
# The body of the function is indented
# This function prints the two arguments to screen
print('The function arguments are:', input_argument1, input_argument2, '(this is done inside the function!)')
# And returns their product
return input_argument1 * input_argument2
###Output
_____no_output_____
###Markdown
The function declaration starts with the word **`def`**, followed by the function name and any arguments in parenthesis, and ends with a colon. The body of the function is indented just like loops are. If the function returns something when it is called, it includes a **`return`** statement at the end.Once the `return` statement is reached the operation of the function ends, and anything on the return line is passed back as output. Function callThis is how we call the function:
###Code
product_of_inputs = this_is_the_function_name(2, 5)
print('Their product is:', product_of_inputs, '(this is done outside the function!)')
###Output
_____no_output_____
###Markdown
Function argumentsIf we change the values of the arguments when calling the function, then its output changes:
###Code
product_of_inputs = this_is_the_function_name(4, 7)
print('Their product is:', product_of_inputs, '(this is done outside the function!)')
###Output
_____no_output_____
###Markdown
If we call the function by giving it the wrong number of arguments (not 2), we get a `TypeError`:
###Code
product_of_inputs = this_is_the_function_name(4)
###Output
_____no_output_____
###Markdown
The arguments we have passed to the function so far have all been **mandatory**, if we do not supply them or if supply the wrong number of arguments Python will throw an error.**Mandatory arguments are assumed to come in the same order as the arguments in the function definition**, but you can also opt to specify the arguments using the argument names as _keywords_, supplying the values corresponding to each keyword with a `=` sign.
###Code
product_of_inputs = this_is_the_function_name(input_argument1=3, input_argument2=2)
product_of_inputs = this_is_the_function_name(input_argument2=3, input_argument1=2)
###Output
_____no_output_____
###Markdown
**BEWARE!** Unnamed (positional) arguments must come before named (keyword) arguments, otherwise we will get a `SyntaxError`:
###Code
product_of_inputs = this_is_the_function_name(3, input_argument2=2)
product_of_inputs = this_is_the_function_name(input_argument2=2, 3)
###Output
_____no_output_____
###Markdown
Function returned valuesIf we call the function by not assigning the function call to a variable (`product_of_inputs =`), we are unable to retrieve the output of the function passed back via the `return` statement, but the code within the function is still executed:
###Code
this_is_the_function_name(4, 7)
###Output
_____no_output_____
###Markdown
The function written so far has returned only a single value, however it is possible to pass back more than one value via the `return` statement. In the following example, we change the function that takes two arguments and passes back three values: the total, the difference and the product of these two arguments. The return values are really passed back inside a single tuple, which can be caught as a single collection of values.
###Code
def this_is_the_function_name_returning_multiple_values(input_argument1, input_argument2):
total = input_argument1 + input_argument2
difference = input_argument1 - input_argument2
product = input_argument1 * input_argument2
return total, difference, product
returned_collection = this_is_the_function_name_returning_multiple_values(2, 4)
print(returned_collection)
total_of_inputs, difference_of_inputs, product_of_inputs = this_is_the_function_name_returning_multiple_values(2, 4)
print(total_of_inputs, difference_of_inputs, product_of_inputs)
###Output
_____no_output_____
###Markdown
There can be more than one `return` statement in a function, although typically there is only one, at the end of the function. The `return` keyword immediately exits the function, and no more of the code in that function will be run once the function has returned.
###Code
def this_is_the_function_name(input_argument1, input_argument2):
# The body of the function is indented
# This is a variable inside the function
variable_inside_function = '(this is done inside the function!)'
# And returns their product
return input_argument1 * input_argument2
# This function does not print the two arguments to screen (no code executed after return statement)
print('The function arguments are:', input_argument1, input_argument2, variable_inside_function)
product_of_inputs = this_is_the_function_name(4, 7)
print('Their product is:', product_of_inputs, '(this is done outside the function!)')
###Output
_____no_output_____
###Markdown
Function variable scopeIf we declare a variable inside the function, it is a local variable only visible within the function, we are therefore unable to access it outside the function:
###Code
def this_is_the_function_name(input_argument1, input_argument2):
# The body of the function is indented
# This is a variable inside the function
variable_inside_function = '(this is done inside the function!)'
# This function prints the two arguments to screen
print('The function arguments are:', input_argument1, input_argument2, variable_inside_function)
# And returns their product
return input_argument1 * input_argument2
product_of_inputs = this_is_the_function_name(5, 2)
print(variable_inside_function)
print(product_of_inputs)
###Output
_____no_output_____
###Markdown
When a variable is declared both inside and outside the function using the same name, only the value of the outside variable (the global one) is visible and accessible, changing it within the function does not change it outside:
###Code
variable_inside_and_outside_function = 'this is a variable created outside the function'
def this_is_the_function_name(input_argument1, input_argument2):
# The body of the function is indented
# This is a variable inside the function
variable_inside_function = '(this is done inside the function!)'
# This is a variable created outside and modified inside the function
variable_inside_and_outside_function = 'this is a variable changed inside the function'
print(variable_inside_and_outside_function)
# This function prints the two arguments to screen
print('The function arguments are:', input_argument1, input_argument2, variable_inside_function)
# And returns their product
return input_argument1 * input_argument2
###Output
_____no_output_____
###Markdown
**BEWARE!** When using Jupyter Notebooks and modifying a function, you MUST re-run that cell in order for the changed function to be available to the rest of the code. Nothing will visibly happen when you do this, though, because simply defining a function without calling it doesn’t produce an output. Any cells that use the now-changed functions will also have to be re-run for their output to change.
###Code
product_of_inputs = this_is_the_function_name(10, 3)
print(variable_inside_and_outside_function)
###Output
_____no_output_____
###Markdown
Function documentationThe text between the two sets of triple double quotes is called a **docstring** and contains the documentation for the function. It does nothing when the function is running and is therefore not necessary, but it is good practice to include docstrings as a reminder of what the code does. Docstrings in functions also become part of their ‘official’ documentation:
###Code
def this_is_the_function_name(input_argument1, input_argument2):
"""
This is the documentation of the function.
Returns the product of the two arguments.
input_argument1 --- first input argument
input_argument2 --- second input argument
"""
# The body of the function is indented
# This function prints the two arguments to screen
print('The function arguments are:', input_argument1, input_argument2, '(this is done inside the function!)')
# And returns their product
return input_argument1 * input_argument2
help(this_is_the_function_name)
###Output
_____no_output_____
###Markdown
Exercise 1.3.1- Write a function that takes two arguments and returns their mean. - Give your function a meaningful name, and a good documentation. - Call your function multiple times with different values, and once using the keyword arguments with their associated values. - Print the result of these different function calls.- Write another function that takes a list as argument and returns the mean and the median of all the numbers in the list. Writing our own functionWe can now turn our code for calculating the average GDP per capita, its median and standard deviation for all continents over all the years into a function. Here is the original code we wrote:
###Code
import os
import statistics as stats
import csv
eu_gdppercap_1962 = []
americas_gdppercap_1962 = []
with open(os.path.join('data', 'gapminder.csv')) as f:
reader = csv.DictReader(f, delimiter = ",")
for data in reader:
if data['year'] == "1962":
if data['continent'] == "Europe":
eu_gdppercap_1962.append(float(data['gdpPercap']))
if data['continent'] == 'Americas':
americas_gdppercap_1962.append(float(data['gdpPercap']))
print('European GDP per Capita in 1962')
print(eu_gdppercap_1962)
print('average:', stats.mean(eu_gdppercap_1962))
print('median:', stats.median(eu_gdppercap_1962))
print('standard deviation:', stats.stdev(eu_gdppercap_1962))
print('American GDP per Capita in 1962')
print(americas_gdppercap_1962)
print('average:', stats.mean(americas_gdppercap_1962))
print('median:', stats.median(americas_gdppercap_1962))
print('standard deviation:', stats.stdev(americas_gdppercap_1962))
###Output
_____no_output_____
###Markdown
Let’s first write a function that filters data for a continent and a specific year, and calculates the average, median and standard deviation of the GDP of the countries of this continent:
###Code
import statistics as stats
import csv
def gdp_stats_by_continent_and_year(gapminder_filepath, continent, year):
"""
Returns a dictionary of the average, median and standard deviation of GDP per capita
for all countries of the selected continent for a given year.
gapminder_filepath --- gapminder file path with multi-continent and multi-year data
continent --- continent for which data is extracted
year --- year for which data is extracted
"""
gdppercap = []
with open(gapminder_filepath) as f:
reader = csv.DictReader(f, delimiter = ",")
for data in reader:
if data['continent'] == continent and data['year'] == year:
gdppercap.append(float(data['gdpPercap']))
print(continent, 'GDP per Capita in', year)
return {'mean': stats.mean(gdppercap), 'median': stats.median(gdppercap), 'stdev': stats.stdev(gdppercap)}
help(gdp_stats_by_continent_and_year)
import os
gdp_stats = gdp_stats_by_continent_and_year(os.path.join('data', 'gapminder.csv'), 'Europe', '1962')
print(gdp_stats)
import os
gdp_stats = gdp_stats_by_continent_and_year(os.path.join('data', 'gapminder.csv'), 'Europe', '2007')
print(gdp_stats['mean'])
import os
gdp_stats = gdp_stats_by_continent_and_year(os.path.join('data', 'gapminder.csv'), 'Americas', '1962')
print(gdp_stats)
import os
gdp_stats = gdp_stats_by_continent_and_year(os.path.join('data', 'gapminder.csv'), 'Africa', '1962')
print(gdp_stats)
###Output
_____no_output_____
###Markdown
Function arguments with default valuesThe functions we wrote demand that we give them a value for every argument. Ideally, we would like these functions to be as flexible and independent as possible. Let’s modify the function `gdp_stats_by_continent_and_year` so that the `continent` and `year` default to `Europe` and `1952` if they are not supplied by the user. We can do this by assigning some value to the named argument with the `=` operator in the function definition.Any arguments in the function without default values (here, `gapminder_filepath`) is a required argument and MUST come before the argument with default values (which are optional in the function call).
###Code
import statistics as stats
import csv
def gdp_stats_by_continent_and_year(gapminder_filepath, continent='Europe', year='1952'):
"""
Returns a dictionary of the average, median and standard deviation of GDP per capita
for all countries of the selected continent for a given year.
gapminder_filepath --- gapminder file path with multi-continent and multi-year data
continent --- continent for which data is extracted
year --- year for which data is extracted
"""
gdppercap = []
with open(gapminder_filepath) as f:
reader = csv.DictReader(f, delimiter = ",")
for data in reader:
if data['continent'] == continent and data['year'] == year:
gdppercap.append(float(data['gdpPercap']))
print(continent, 'GDP per Capita in', year)
return {'mean': stats.mean(gdppercap), 'median': stats.median(gdppercap), 'stdev': stats.stdev(gdppercap)}
import os
gdp_stats = gdp_stats_by_continent_and_year(os.path.join('data', 'gapminder.csv'))
print(gdp_stats)
###Output
_____no_output_____
###Markdown
Exercise 1.3.2- Generalise the code written for exercise 1.1.3 for finding which European countries have the largest population in 1952 and 2007 by creating a function that finds which country on a defined continent has the largest population for a given year. Provide default values for certain arguments. Create your own moduleSo far we have been writing Python code in files as executable scripts without knowing that they are also modules from which we are able to call the different functions defined in them.A module is a file containing Python definitions and statements. The file name is the module name with the suffix `.py` appended. Create a file called `this_is_the_module_name.py` in the current directory with the function `this_is_the_function_name()` written earlier as its contents:
###Code
def this_is_the_function_name(input_argument1, input_argument2):
"""
This is the documentation of the function.
Returns the product of the two arguments.
input_argument1 --- first input argument
input_argument2 --- second input argument
"""
# The body of the function is indented
# This function prints the two arguments to screen
print('The function arguments are:', input_argument1, input_argument2, '(this is done inside the function!)')
# And returns their product
return input_argument1 * input_argument2
###Output
_____no_output_____
###Markdown
Now open a terminal window, start the Python interpreter from the directory where you've created the `this_is_the_module_name.py` file and import it:```python3Python 3.6.4 (default, Jan 21 2018, 20:11:12) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)] on darwinType "help", "copyright", "credits" or "license" for more information.>>> import this_is_the_module_name>>> product_of_inputs = this_is_the_module_name.this_is_the_function_name(10, 3)The function arguments are: 10 3 (this is done inside the function!)>>> print(product_of_inputs)30>>>```If you wish to import it into this notebook, below is what you need to do. If you wish to edit the module file and change the code or add another function, you will have to restart the notebook to have these changes taken into account using the restart the kernel button in the menu bar.
###Code
import this_is_the_module_name
product_of_inputs = this_is_the_module_name.this_is_the_function_name(10, 3)
###Output
_____no_output_____
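###Markdown
As an alternative to restarting the kernel after editing the module file, Python's standard `importlib` module can also reload it in place. This is a small optional sketch, not part of the original exercise:
###Code
import importlib
import this_is_the_module_name
# Re-import the latest version of the module without restarting the kernel
importlib.reload(this_is_the_module_name)
product_of_inputs = this_is_the_module_name.this_is_the_function_name(10, 3)
###Output
_____no_output_____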
|
Secret/mathematics-for-machine-learning-cousera/course1 - linear algebra/week5/PageRank.ipynb
|
###Markdown
PageRankIn this notebook, you'll build on your knowledge of eigenvectors and eigenvalues by exploring the PageRank algorithm.The notebook is in two parts, the first is a worksheet to get you up to speed with how the algorithm works - here we will look at a micro-internet with fewer than 10 websites and see what it does and what can go wrong.The second is an assessment which will test your application of eigentheory to this problem by writing code and calculating the page rank of a large network representing a sub-section of the internet. Part 1 - Worksheet IntroductionPageRank (developed by Larry Page and Sergey Brin) revolutionized web search by generating aranked list of web pages based on the underlying connectivity of the web. The PageRank algorithm isbased on an ideal random web surfer who, when reaching a page, goes to the next page by clicking on alink. The surfer has equal probability of clicking any link on the page and, when reaching a page with nolinks, has equal probability of moving to any other page by typing in its URL. In addition, the surfer mayoccasionally choose to type in a random URL instead of following the links on a page. The PageRank isthe ranked order of the pages from the most to the least probable page the surfer will be viewing.
###Code
# Before we begin, let's load the libraries.
%pylab notebook
import numpy as np
import numpy.linalg as la
from readonly.PageRankFunctions import *
np.set_printoptions(suppress=True)
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
PageRank as a linear algebra problemLet's imagine a micro-internet, with just 6 websites (**A**vocado, **B**ullseye, **C**atBabel, **D**romeda, **e**Tings, and **F**aceSpace).Each website links to some of the others, and this forms a network as shown,The design principle of PageRank is that important websites will be linked to by important websites.This somewhat recursive principle will form the basis of our thinking.Imagine we have 100 *Procrastinating Pat*s on our micro-internet, each viewing a single website at a time.Each minute the Pats follow a link on their website to another site on the micro-internet.After a while, the websites that are most linked to will have more Pats visiting them, and in the long run, each minute for every Pat that leaves a website, another will enter keeping the total numbers of Pats on each website constant.The PageRank is simply the ranking of websites by how many Pats they have on them at the end of this process.We represent the number of Pats on each website with the vector,$$\mathbf{r} = \begin{bmatrix} r_A \\ r_B \\ r_C \\ r_D \\ r_E \\ r_F \end{bmatrix}$$And say that the number of Pats on each website in minute $i+1$ is related to those at minute $i$ by the matrix transformation$$ \mathbf{r}^{(i+1)} = L \,\mathbf{r}^{(i)}$$with the matrix $L$ taking the form,$$ L = \begin{bmatrix}L_{A→A} & L_{B→A} & L_{C→A} & L_{D→A} & L_{E→A} & L_{F→A} \\L_{A→B} & L_{B→B} & L_{C→B} & L_{D→B} & L_{E→B} & L_{F→B} \\L_{A→C} & L_{B→C} & L_{C→C} & L_{D→C} & L_{E→C} & L_{F→C} \\L_{A→D} & L_{B→D} & L_{C→D} & L_{D→D} & L_{E→D} & L_{F→D} \\L_{A→E} & L_{B→E} & L_{C→E} & L_{D→E} & L_{E→E} & L_{F→E} \\L_{A→F} & L_{B→F} & L_{C→F} & L_{D→F} & L_{E→F} & L_{F→F} \\\end{bmatrix}$$where the columns represent the probability of leaving a website for any other website, and sum to one.The rows determine how likely you are to enter a website from any other, though these need not add to one.The long time behaviour of this system is when $ \mathbf{r}^{(i+1)} = \mathbf{r}^{(i)}$, so we'll drop the superscripts here, and that allows us to write,$$ L \,\mathbf{r} = \mathbf{r}$$which is an eigenvalue equation for the matrix $L$, with eigenvalue 1 (this is guaranteed by the probabilistic structure of the matrix $L$).Complete the matrix $L$ below; we've left out the column for which websites the *FaceSpace* website (F) links to.Remember, this is the probability to click on another website from this one, so each column should add to one (by scaling by the number of links).
###Code
# Replace the ??? here with the probability of clicking a link to each website when leaving Website F (FaceSpace).
L = np.array([[0, 1/2, 1/3, 0, 0, 0 ],
[1/3, 0, 0, 0, 1/2, 0 ],
[1/3, 1/2, 0, 1, 0, 1/2 ],
[1/3, 0, 1/3, 0, 1/2, 1/2 ],
[0, 0, 0, 0, 0, 0 ],
[0, 0, 1/3, 0, 0, 0 ]])
###Output
_____no_output_____
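###Markdown
As an optional sanity check (a small sketch), every column of $L$ should sum to one, since each column holds the probabilities of leaving one website:
###Code
L.sum(axis=0) # each entry should come out as 1.0 if the matrix is filled in correctly
###Output
_____no_output_____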
###Markdown
In principle, we could use a linear algebra library, as below, to calculate the eigenvalues and vectors.And this would work for a small system. But this gets unmanageable for large systems.And since we only care about the principal eigenvector (the one with the largest eigenvalue, which will be 1 in this case), we can use the *power iteration method* which will scale better, and is faster for large systems.Use the code below to peek at the PageRank for this micro-internet.
###Code
eVals, eVecs = la.eig(L) # Gets the eigenvalues and vectors
order = np.absolute(eVals).argsort()[::-1] # Orders them by their eigenvalues
eVals = eVals[order]
eVecs = eVecs[:,order]
r = eVecs[:, 0] # Sets r to be the principal eigenvector
100 * np.real(r / np.sum(r)) # Make this eigenvector sum to one, then multiply by 100 Procrastinating Pats
###Output
_____no_output_____
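###Markdown
A quick optional check of the claim above: the principal eigenvalue should be (approximately) 1.
###Code
np.real(eVals[0]) # the largest eigenvalue of L, expected to be very close to 1
###Output
_____no_output_____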
###Markdown
We can see from this list, the number of Procrastinating Pats that we expect to find on each website after long times.Putting them in order of *popularity* (based on this metric), the PageRank of this micro-internet is:**C**atBabel, **D**romeda, **A**vocado, **F**aceSpace, **B**ullseye, **e**TingsReferring back to the micro-internet diagram, is this what you would have expected?Convince yourself that based on which pages seem important given which others link to them, that this is a sensible ranking.Let's now try to get the same result using the Power-Iteration method that was covered in the video.This method will be much better at dealing with large systems.First let's set up our initial vector, $\mathbf{r}^{(0)}$, so that we have our 100 Procrastinating Pats equally distributed on each of our 6 websites.
###Code
r = 100 * np.ones(6) / 6 # Sets up this vector (6 entries of 1/6 × 100 each)
r # Shows its value
###Output
_____no_output_____
###Markdown
Next, let's update the vector to the next minute, with the matrix $L$.Run the following cell multiple times, until the answer stabilises.
###Code
r = L @ r # Apply matrix L to r
r # Show its value
# Re-run this cell multiple times to converge to the correct answer.
###Output
_____no_output_____
###Markdown
We can automate applying this matrix multiple times as follows,
###Code
r = 100 * np.ones(6) / 6 # Sets up this vector (6 entries of 1/6 × 100 each)
for i in np.arange(100) : # Repeat 100 times
r = L @ r
r
###Output
_____no_output_____
###Markdown
Or even better, we can keep running until we get to the required tolerance.
###Code
r = 100 * np.ones(6) / 6 # Sets up this vector (6 entries of 1/6 × 100 each)
lastR = r
r = L @ r
i = 0
while la.norm(lastR - r) > 0.01 :
lastR = r
r = L @ r
i += 1
print(str(i) + " iterations to convergence.")
r
###Output
18 iterations to convergence.
###Markdown
See how the PageRank order is established fairly quickly, and the vector converges on the value we calculated earlier after a few tens of repeats.Congratulations! You've just calculated your first PageRank! Damping ParameterThe system we just studied converged fairly quickly to the correct answer.Let's consider an extension to our micro-internet where things start to go wrong.Say a new website is added to the micro-internet: *Geoff's* Website.This website is linked to by *FaceSpace* and only links to itself.Intuitively, only *FaceSpace*, which is in the bottom half of the page rank, links to this website amongst the two others it links to,so we might expect *Geoff's* site to have a correspondingly low PageRank score.Build the new $L$ matrix for the expanded micro-internet, and use Power-Iteration on the Procrastinating Pat vector.See what happens…
###Code
# We'll call this one L2, to distinguish it from the previous L.
L2 = np.array([[0, 1/2, 1/3, 0, 0, 0, 0 ],
[1/3, 0, 0, 0, 1/2, 0, 0 ],
[1/3, 1/2, 0, 1, 0, 0, 0 ],
[1/3, 0, 1/3, 0, 1/2, 0, 0 ],
[0, 0, 0, 0, 0, 0, 0 ],
[0, 0, 1/3, 0, 0, 1, 0 ],
[0, 0, 0, 0, 0, 0, 1 ]])
r = 100 * np.ones(7) / 7 # Sets up this vector (7 entries of 1/7 × 100 each)
lastR = r
r = L2 @ r
i = 0
while la.norm(lastR - r) > 0.01 :
lastR = r
r = L2 @ r
i += 1
print(str(i) + " iterations to convergence.")
r
###Output
46 iterations to convergence.
###Markdown
That's no good! *Geoff* seems to be taking all the traffic on the micro-internet, and somehow coming at the top of the PageRank.This behaviour can be understood, because once a Pat gets to *Geoff's* Website, they can't leave, as all links head back to Geoff.To combat this, we can add a small probability that the Procrastinating Pats don't follow any link on a webpage, but instead visit a website on the micro-internet at random.We'll say the probability of them following a link is $d$ and the probability of choosing a random website is therefore $1-d$.We can use a new matrix to work out where the Pats visit each minute.$$ M = d \, L + \frac{1-d}{n} \, J $$where $J$ is an $n\times n$ matrix where every element is one.If $d$ is one, we have the case we had previously, whereas if $d$ is zero, we will always visit a random webpage and therefore all webpages will be equally likely and equally ranked.For this extension to work best, $1-d$ should be somewhat small - though we won't go into a discussion about exactly how small.Let's retry this PageRank with this extension.
###Code
d = 0.5 # Feel free to play with this parameter after running the code once.
M = d * L2 + (1-d)/7 * np.ones([7, 7]) # np.ones() is the J matrix, with ones for each entry.
r = 100 * np.ones(7) / 7 # Sets up this vector (7 entries of 1/7 × 100 each)
lastR = r
r = M @ r
i = 0
while la.norm(lastR - r) > 0.01 :
lastR = r
r = M @ r
i += 1
print(str(i) + " iterations to convergence.")
r
###Output
8 iterations to convergence.
###Markdown
This is certainly better, the PageRank gives sensible numbers for the Procrastinating Pats that end up on each webpage.This method still predicts Geoff has a high ranking webpage however.This could be seen as a consequence of using a small network. We could also get around the problem by not counting self-links when producing the L matrix (and if a website has no outgoing links, make it link to all websites equally) - a small sketch of this idea appears further below, just before the assessment code.We won't look further down this route, as this is in the realm of improvements to PageRank, rather than eigenproblems.You are now in a good position, having gained an understanding of PageRank, to produce your own code to calculate the PageRank of a website with thousands of entries.Good Luck! Part 2 - AssessmentIn this assessment, you will be asked to produce a function that can calculate the PageRank for an arbitrarily large probability matrix.This, the final assignment of the course, will give less guidance than previous assessments.You will be expected to utilise code from earlier in the worksheet and re-purpose it to your needs. How to submitEdit the code in the cell below to complete the assignment.Once you are finished and happy with it, press the *Submit Assignment* button at the top of this notebook.Please don't change any of the function names, as these will be checked by the grading script.If you have further questions about submissions or programming assignments, here is a [list](https://www.coursera.org/learn/linear-algebra-machine-learning/discussions/weeks/1/threads/jB4klkn5EeibtBIQyzFmQg) of Q&A. You can also raise an issue on the discussion forum. Good luck!
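Before diving into the assessment code, here is the promised small sketch of the self-link / dangling-page idea mentioned above. It is not required for the assessment, and the helper name `link_matrix_from_adjacency` is made up for illustration: it builds a column-stochastic link matrix from an adjacency matrix, ignoring self-links and sending pages with no outgoing links to every page with equal probability.
###Code
def link_matrix_from_adjacency(A):
    # A[i, j] = 1 if page j links to page i, 0 otherwise (an illustrative convention)
    A = np.array(A, dtype=float)
    np.fill_diagonal(A, 0)              # ignore self-links
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        col_sum = A[:, j].sum()
        if col_sum == 0:
            L[:, j] = 1 / n             # page with no outgoing links: jump to any page equally
        else:
            L[:, j] = A[:, j] / col_sum # scale by the number of outgoing links
    return L
###Output
_____no_output_____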
###Code
# PACKAGE
# Here are the imports again, just in case you need them.
# There is no need to edit or submit this cell.
import numpy as np
import numpy.linalg as la
from readonly.PageRankFunctions import *
np.set_printoptions(suppress=True)
# GRADED FUNCTION
# Complete this function to provide the PageRank for an arbitrarily sized internet.
# I.e. the principal eigenvector of the damped system, using the power iteration method.
# (Normalisation doesn't matter here)
# The function's inputs are the linkMatrix, and d, the damping parameter - as defined in this worksheet.
def pageRank(linkMatrix, d) :
n = linkMatrix.shape[0]
M = d * linkMatrix + (1-d)/n * np.ones([n, n])
    r = 100 * np.ones(n) / n # Sets up this vector (n entries of 1/n × 100 each)
last = r
r = M @ r
while la.norm(last - r) > 0.01 :
last = r
r = M @ r
return r
###Output
_____no_output_____
###Markdown
Test your code before submissionTo test the code you've written above, run the cell (select the cell above, then press the play button [ ▶| ] or press shift-enter).You can then use the code below to test out your function.You don't need to submit this cell; you can edit and run it as much as you like.
###Code
# Use the following function to generate internets of different sizes.
generate_internet(5)
# Test your PageRank method against the built in "eig" method.
# You should see yours is a lot faster for large internets
L = generate_internet(100)
pageRank(L, 1)
# Do note, this is calculating the eigenvalues of the link matrix, L,
# without any damping. It may give different results than your pageRank function.
# If you wish, you could modify this cell to include damping.
# (There is no credit for this though)
eVals, eVecs = la.eig(L) # Gets the eigenvalues and vectors
order = np.absolute(eVals).argsort()[::-1] # Orders them by their eigenvalues
eVals = eVals[order]
eVecs = eVecs[:,order]
r = eVecs[:, 0]
100 * np.real(r / np.sum(r))
# You may wish to view the PageRank graphically.
# This code will draw a bar chart, for each (numbered) website on the generated internet,
# The height of each bar will be the score in the PageRank.
# Run this code to see the PageRank for each internet you generate.
# Hopefully you should see what you might expect
# - there are a few clusters of important websites, but most on the internet are rubbish!
%pylab notebook
r = pageRank(generate_internet(100), 0.9)
plt.bar(arange(r.shape[0]), r);
###Output
Populating the interactive namespace from numpy and matplotlib
|
Practice Projects/cnn/cifar10-classification/cifar10_cnn.ipynb
|
###Markdown
Convolutional Neural Networks---In this notebook, we train a CNN to classify images from the CIFAR-10 database. 1. Load CIFAR-10 Database
###Code
import keras
from keras.datasets import cifar10
# load the pre-shuffled train and test data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
###Output
Using TensorFlow backend.
###Markdown
2. Visualize the First 36 Training Images
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
fig = plt.figure(figsize=(20,5))
for i in range(36):
ax = fig.add_subplot(3, 12, i + 1, xticks=[], yticks=[])
ax.imshow(np.squeeze(x_train[i]))
###Output
_____no_output_____
###Markdown
3. Rescale the Images by Dividing Every Pixel in Every Image by 255
###Code
# rescale [0,255] --> [0,1]
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
###Output
_____no_output_____
###Markdown
4. Break Dataset into Training, Testing, and Validation Sets
###Code
from keras.utils import np_utils
# one-hot encode the labels
num_classes = len(np.unique(y_train))
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# break training set into training and validation sets
(x_train, x_valid) = x_train[5000:], x_train[:5000]
(y_train, y_valid) = y_train[5000:], y_train[:5000]
# print shape of training set
print('x_train shape:', x_train.shape)
# print number of training, validation, and test images
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
print(x_valid.shape[0], 'validation samples')
###Output
x_train shape: (45000, 32, 32, 3)
45000 train samples
10000 test samples
5000 validation samples
###Markdown
5. Define the Model Architecture
###Code
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=2, padding='same', activation='relu',
input_shape=(32, 32, 3)))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(500, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(10, activation='softmax'))
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 32, 32, 16) 208
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 16, 16, 16) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 16, 16, 32) 2080
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 8, 8, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 8, 8, 64) 8256
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 4, 4, 64) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 4, 4, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 1024) 0
_________________________________________________________________
dense_1 (Dense) (None, 500) 512500
_________________________________________________________________
dropout_2 (Dropout) (None, 500) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 5010
=================================================================
Total params: 528,054
Trainable params: 528,054
Non-trainable params: 0
_________________________________________________________________
###Markdown
6. Compile the Model
###Code
# compile the model
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
7. Train the Model
###Code
from keras.callbacks import ModelCheckpoint
# train the model
checkpointer = ModelCheckpoint(filepath='model.weights.best.hdf5', verbose=1,
save_best_only=True)
hist = model.fit(x_train, y_train, batch_size=32, epochs=100,
validation_data=(x_valid, y_valid), callbacks=[checkpointer],
verbose=2, shuffle=True)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
Epoch 00000: val_loss improved from inf to 1.35820, saving model to model.weights.best.hdf5
46s - loss: 1.6192 - acc: 0.4140 - val_loss: 1.3582 - val_acc: 0.5166
Epoch 2/100
Epoch 00001: val_loss improved from 1.35820 to 1.22245, saving model to model.weights.best.hdf5
53s - loss: 1.2881 - acc: 0.5402 - val_loss: 1.2224 - val_acc: 0.5644
Epoch 3/100
Epoch 00002: val_loss improved from 1.22245 to 1.12096, saving model to model.weights.best.hdf5
49s - loss: 1.1630 - acc: 0.5879 - val_loss: 1.1210 - val_acc: 0.6046
Epoch 4/100
Epoch 00003: val_loss improved from 1.12096 to 1.10724, saving model to model.weights.best.hdf5
56s - loss: 1.0928 - acc: 0.6160 - val_loss: 1.1072 - val_acc: 0.6134
Epoch 5/100
Epoch 00004: val_loss improved from 1.10724 to 0.97377, saving model to model.weights.best.hdf5
52s - loss: 1.0413 - acc: 0.6382 - val_loss: 0.9738 - val_acc: 0.6596
Epoch 6/100
Epoch 00005: val_loss improved from 0.97377 to 0.95501, saving model to model.weights.best.hdf5
50s - loss: 1.0090 - acc: 0.6484 - val_loss: 0.9550 - val_acc: 0.6768
Epoch 7/100
Epoch 00006: val_loss improved from 0.95501 to 0.94448, saving model to model.weights.best.hdf5
49s - loss: 0.9967 - acc: 0.6561 - val_loss: 0.9445 - val_acc: 0.6828
Epoch 8/100
Epoch 00007: val_loss did not improve
61s - loss: 0.9934 - acc: 0.6604 - val_loss: 1.1300 - val_acc: 0.6376
Epoch 9/100
Epoch 00008: val_loss improved from 0.94448 to 0.91779, saving model to model.weights.best.hdf5
49s - loss: 0.9858 - acc: 0.6672 - val_loss: 0.9178 - val_acc: 0.6882
Epoch 10/100
Epoch 00009: val_loss did not improve
50s - loss: 0.9839 - acc: 0.6658 - val_loss: 0.9669 - val_acc: 0.6748
Epoch 11/100
Epoch 00010: val_loss improved from 0.91779 to 0.91570, saving model to model.weights.best.hdf5
49s - loss: 1.0002 - acc: 0.6624 - val_loss: 0.9157 - val_acc: 0.6936
Epoch 12/100
Epoch 00011: val_loss did not improve
54s - loss: 1.0001 - acc: 0.6659 - val_loss: 1.1442 - val_acc: 0.6646
Epoch 13/100
Epoch 00012: val_loss did not improve
56s - loss: 1.0161 - acc: 0.6633 - val_loss: 0.9702 - val_acc: 0.6788
Epoch 14/100
Epoch 00013: val_loss did not improve
46s - loss: 1.0316 - acc: 0.6568 - val_loss: 0.9937 - val_acc: 0.6766
Epoch 15/100
Epoch 00014: val_loss did not improve
54s - loss: 1.0412 - acc: 0.6525 - val_loss: 1.1574 - val_acc: 0.6190
Epoch 16/100
Epoch 00015: val_loss did not improve
55s - loss: 1.0726 - acc: 0.6462 - val_loss: 1.0492 - val_acc: 0.6790
Epoch 17/100
Epoch 00016: val_loss did not improve
48s - loss: 1.0891 - acc: 0.6387 - val_loss: 1.0739 - val_acc: 0.6528
Epoch 18/100
Epoch 00017: val_loss did not improve
46s - loss: 1.1152 - acc: 0.6337 - val_loss: 1.0672 - val_acc: 0.6610
Epoch 19/100
Epoch 00018: val_loss did not improve
47s - loss: 1.1392 - acc: 0.6258 - val_loss: 1.5400 - val_acc: 0.5742
Epoch 20/100
Epoch 00019: val_loss did not improve
47s - loss: 1.1565 - acc: 0.6207 - val_loss: 1.0309 - val_acc: 0.6636
Epoch 21/100
Epoch 00020: val_loss did not improve
44s - loss: 1.1711 - acc: 0.6159 - val_loss: 1.4559 - val_acc: 0.5736
Epoch 22/100
Epoch 00021: val_loss did not improve
44s - loss: 1.1802 - acc: 0.6132 - val_loss: 1.1716 - val_acc: 0.6288
Epoch 23/100
Epoch 00022: val_loss did not improve
44s - loss: 1.2012 - acc: 0.6033 - val_loss: 1.3916 - val_acc: 0.6222
Epoch 24/100
Epoch 00023: val_loss did not improve
47s - loss: 1.2319 - acc: 0.5964 - val_loss: 1.5698 - val_acc: 0.5688
Epoch 25/100
Epoch 00024: val_loss did not improve
50s - loss: 1.2479 - acc: 0.5914 - val_loss: 1.2740 - val_acc: 0.6038
Epoch 26/100
Epoch 00025: val_loss did not improve
58s - loss: 1.2616 - acc: 0.5870 - val_loss: 1.2803 - val_acc: 0.5496
Epoch 27/100
Epoch 00026: val_loss did not improve
57s - loss: 1.2908 - acc: 0.5792 - val_loss: 1.0756 - val_acc: 0.6432
Epoch 28/100
Epoch 00027: val_loss did not improve
55s - loss: 1.3248 - acc: 0.5667 - val_loss: 1.2289 - val_acc: 0.5800
Epoch 29/100
Epoch 00028: val_loss did not improve
57s - loss: 1.3258 - acc: 0.5633 - val_loss: 1.3088 - val_acc: 0.5756
Epoch 30/100
Epoch 00029: val_loss did not improve
46s - loss: 1.3381 - acc: 0.5586 - val_loss: 1.2569 - val_acc: 0.6044
Epoch 31/100
Epoch 00030: val_loss did not improve
55s - loss: 1.3507 - acc: 0.5545 - val_loss: 1.3436 - val_acc: 0.5562
Epoch 32/100
Epoch 00031: val_loss did not improve
61s - loss: 1.3643 - acc: 0.5513 - val_loss: 1.2951 - val_acc: 0.5646
Epoch 33/100
Epoch 00032: val_loss did not improve
69s - loss: 1.3873 - acc: 0.5426 - val_loss: 1.4049 - val_acc: 0.6066
Epoch 34/100
Epoch 00033: val_loss did not improve
53s - loss: 1.3842 - acc: 0.5415 - val_loss: 1.8164 - val_acc: 0.5640
Epoch 35/100
Epoch 00034: val_loss did not improve
48s - loss: 1.4187 - acc: 0.5303 - val_loss: 1.7554 - val_acc: 0.5616
Epoch 36/100
Epoch 00035: val_loss did not improve
57s - loss: 1.4278 - acc: 0.5268 - val_loss: 1.9956 - val_acc: 0.5072
Epoch 37/100
Epoch 00036: val_loss did not improve
58s - loss: 1.4365 - acc: 0.5216 - val_loss: 1.8344 - val_acc: 0.4748
Epoch 38/100
Epoch 00037: val_loss did not improve
64s - loss: 1.4529 - acc: 0.5205 - val_loss: 1.2752 - val_acc: 0.5690
Epoch 39/100
Epoch 00038: val_loss did not improve
62s - loss: 1.4726 - acc: 0.5111 - val_loss: 1.7092 - val_acc: 0.5600
Epoch 40/100
Epoch 00039: val_loss did not improve
70s - loss: 1.4673 - acc: 0.5107 - val_loss: 1.2288 - val_acc: 0.5698
Epoch 41/100
Epoch 00040: val_loss did not improve
68s - loss: 1.4872 - acc: 0.5083 - val_loss: 1.4082 - val_acc: 0.5162
Epoch 42/100
Epoch 00041: val_loss did not improve
69s - loss: 1.4983 - acc: 0.5003 - val_loss: 1.5808 - val_acc: 0.4818
Epoch 43/100
Epoch 00042: val_loss did not improve
79s - loss: 1.5211 - acc: 0.4957 - val_loss: 1.2271 - val_acc: 0.5882
Epoch 44/100
Epoch 00043: val_loss did not improve
95s - loss: 1.5474 - acc: 0.4867 - val_loss: 3.7681 - val_acc: 0.3394
Epoch 45/100
Epoch 00044: val_loss did not improve
80s - loss: 1.5432 - acc: 0.4854 - val_loss: 1.3349 - val_acc: 0.5830
Epoch 46/100
Epoch 00045: val_loss did not improve
63s - loss: 1.5615 - acc: 0.4785 - val_loss: 1.4494 - val_acc: 0.5332
Epoch 47/100
Epoch 00046: val_loss did not improve
47s - loss: 1.5731 - acc: 0.4752 - val_loss: 1.4689 - val_acc: 0.4648
Epoch 48/100
Epoch 00047: val_loss did not improve
49s - loss: 1.5832 - acc: 0.4694 - val_loss: 1.6045 - val_acc: 0.3992
Epoch 49/100
Epoch 00048: val_loss did not improve
50s - loss: 1.6000 - acc: 0.4670 - val_loss: 3.0627 - val_acc: 0.3648
Epoch 50/100
Epoch 00049: val_loss did not improve
73s - loss: 1.5988 - acc: 0.4655 - val_loss: 1.4299 - val_acc: 0.5020
Epoch 51/100
Epoch 00050: val_loss did not improve
52s - loss: 1.6025 - acc: 0.4623 - val_loss: 1.6269 - val_acc: 0.4766
Epoch 52/100
Epoch 00051: val_loss did not improve
53s - loss: 1.6104 - acc: 0.4601 - val_loss: 1.4260 - val_acc: 0.5390
Epoch 53/100
Epoch 00052: val_loss did not improve
51s - loss: 1.6203 - acc: 0.4569 - val_loss: 1.3396 - val_acc: 0.5366
Epoch 54/100
Epoch 00053: val_loss did not improve
50s - loss: 1.6354 - acc: 0.4500 - val_loss: 1.6159 - val_acc: 0.4512
Epoch 55/100
Epoch 00054: val_loss did not improve
53s - loss: 1.6552 - acc: 0.4433 - val_loss: 1.7258 - val_acc: 0.4468
Epoch 56/100
Epoch 00055: val_loss did not improve
47s - loss: 1.6696 - acc: 0.4363 - val_loss: 1.4365 - val_acc: 0.4938
Epoch 57/100
Epoch 00056: val_loss did not improve
46s - loss: 1.6605 - acc: 0.4368 - val_loss: 2.5907 - val_acc: 0.3732
Epoch 58/100
Epoch 00057: val_loss did not improve
50s - loss: 1.6720 - acc: 0.4336 - val_loss: 1.5503 - val_acc: 0.4274
Epoch 59/100
Epoch 00058: val_loss did not improve
68s - loss: 1.6897 - acc: 0.4281 - val_loss: 1.5233 - val_acc: 0.4362
Epoch 60/100
Epoch 00059: val_loss did not improve
73s - loss: 1.7099 - acc: 0.4201 - val_loss: 1.4141 - val_acc: 0.5124
Epoch 61/100
Epoch 00060: val_loss did not improve
71s - loss: 1.7182 - acc: 0.4182 - val_loss: 1.5190 - val_acc: 0.4486
Epoch 62/100
Epoch 00061: val_loss did not improve
72s - loss: 1.7177 - acc: 0.4179 - val_loss: 1.4966 - val_acc: 0.4860
Epoch 63/100
Epoch 00062: val_loss did not improve
59s - loss: 1.7079 - acc: 0.4228 - val_loss: 1.6089 - val_acc: 0.4384
Epoch 64/100
Epoch 00063: val_loss did not improve
50s - loss: 1.7101 - acc: 0.4147 - val_loss: 1.6014 - val_acc: 0.4430
Epoch 65/100
Epoch 00064: val_loss did not improve
49s - loss: 1.7180 - acc: 0.4144 - val_loss: 2.2502 - val_acc: 0.3712
Epoch 66/100
Epoch 00065: val_loss did not improve
50s - loss: 1.7190 - acc: 0.4140 - val_loss: 1.3967 - val_acc: 0.4964
Epoch 67/100
Epoch 00066: val_loss did not improve
50s - loss: 1.7262 - acc: 0.4082 - val_loss: 1.5334 - val_acc: 0.4650
Epoch 68/100
Epoch 00067: val_loss did not improve
50s - loss: 1.7432 - acc: 0.4032 - val_loss: 1.7911 - val_acc: 0.3588
Epoch 69/100
Epoch 00068: val_loss did not improve
50s - loss: 1.7309 - acc: 0.4054 - val_loss: 1.6592 - val_acc: 0.3892
Epoch 70/100
Epoch 00069: val_loss did not improve
50s - loss: 1.7581 - acc: 0.3977 - val_loss: 1.6551 - val_acc: 0.4056
Epoch 71/100
Epoch 00070: val_loss did not improve
50s - loss: 1.7619 - acc: 0.3930 - val_loss: 1.5855 - val_acc: 0.4670
Epoch 72/100
Epoch 00071: val_loss did not improve
55s - loss: 1.7690 - acc: 0.3918 - val_loss: 1.5534 - val_acc: 0.4350
Epoch 73/100
Epoch 00072: val_loss did not improve
77s - loss: 1.7910 - acc: 0.3890 - val_loss: 1.5390 - val_acc: 0.4692
Epoch 74/100
Epoch 00073: val_loss did not improve
68s - loss: 1.7941 - acc: 0.3853 - val_loss: 1.4875 - val_acc: 0.4764
Epoch 75/100
Epoch 00074: val_loss did not improve
71s - loss: 1.8069 - acc: 0.3816 - val_loss: 1.6594 - val_acc: 0.3990
Epoch 76/100
Epoch 00075: val_loss did not improve
63s - loss: 1.8160 - acc: 0.3776 - val_loss: 1.6119 - val_acc: 0.3804
Epoch 77/100
Epoch 00076: val_loss did not improve
52s - loss: 1.8073 - acc: 0.3793 - val_loss: 1.5836 - val_acc: 0.4578
Epoch 78/100
Epoch 00077: val_loss did not improve
72s - loss: 1.8185 - acc: 0.3731 - val_loss: 1.6415 - val_acc: 0.4004
Epoch 79/100
Epoch 00078: val_loss did not improve
78s - loss: 1.8229 - acc: 0.3724 - val_loss: 1.7005 - val_acc: 0.3834
Epoch 80/100
Epoch 00079: val_loss did not improve
61s - loss: 1.8316 - acc: 0.3664 - val_loss: 1.8900 - val_acc: 0.2996
Epoch 81/100
Epoch 00080: val_loss did not improve
50s - loss: 1.8274 - acc: 0.3656 - val_loss: 1.6902 - val_acc: 0.3794
Epoch 82/100
Epoch 00081: val_loss did not improve
50s - loss: 1.8448 - acc: 0.3609 - val_loss: 1.9591 - val_acc: 0.3094
Epoch 83/100
Epoch 00082: val_loss did not improve
48s - loss: 1.8468 - acc: 0.3566 - val_loss: 1.6827 - val_acc: 0.4108
Epoch 84/100
Epoch 00083: val_loss did not improve
48s - loss: 1.9039 - acc: 0.3516 - val_loss: 1.5814 - val_acc: 0.4456
Epoch 85/100
Epoch 00084: val_loss did not improve
68s - loss: 1.8499 - acc: 0.3550 - val_loss: 1.8199 - val_acc: 0.3736
Epoch 86/100
Epoch 00085: val_loss did not improve
77s - loss: 1.8404 - acc: 0.3556 - val_loss: 1.7326 - val_acc: 0.3518
Epoch 87/100
Epoch 00086: val_loss did not improve
59s - loss: 1.8509 - acc: 0.3513 - val_loss: 1.6321 - val_acc: 0.4042
Epoch 88/100
Epoch 00087: val_loss did not improve
51s - loss: 1.8580 - acc: 0.3502 - val_loss: 2.8168 - val_acc: 0.3208
Epoch 89/100
Epoch 00088: val_loss did not improve
60s - loss: 1.8760 - acc: 0.3392 - val_loss: 1.6616 - val_acc: 0.4156
Epoch 90/100
Epoch 00089: val_loss did not improve
61s - loss: 1.8682 - acc: 0.3462 - val_loss: 1.6725 - val_acc: 0.3900
Epoch 91/100
Epoch 00090: val_loss did not improve
57s - loss: 1.8900 - acc: 0.3312 - val_loss: 1.6851 - val_acc: 0.3424
Epoch 92/100
Epoch 00091: val_loss did not improve
54s - loss: 1.8889 - acc: 0.3363 - val_loss: 1.6296 - val_acc: 0.4230
Epoch 93/100
Epoch 00092: val_loss did not improve
56s - loss: 1.9040 - acc: 0.3343 - val_loss: 1.7510 - val_acc: 0.3306
Epoch 94/100
Epoch 00093: val_loss did not improve
50s - loss: 1.9041 - acc: 0.3266 - val_loss: 1.7218 - val_acc: 0.3582
Epoch 95/100
Epoch 00094: val_loss did not improve
48s - loss: 1.8978 - acc: 0.3224 - val_loss: 1.6739 - val_acc: 0.3970
Epoch 96/100
Epoch 00095: val_loss did not improve
48s - loss: 1.9173 - acc: 0.3180 - val_loss: 1.7337 - val_acc: 0.3526
Epoch 97/100
Epoch 00096: val_loss did not improve
48s - loss: 1.9016 - acc: 0.3204 - val_loss: 1.7351 - val_acc: 0.3452
Epoch 98/100
Epoch 00097: val_loss did not improve
48s - loss: 1.9117 - acc: 0.3170 - val_loss: 2.2827 - val_acc: 0.2592
Epoch 99/100
Epoch 00098: val_loss did not improve
48s - loss: 1.9319 - acc: 0.3049 - val_loss: 2.9560 - val_acc: 0.3060
Epoch 100/100
Epoch 00099: val_loss did not improve
48s - loss: 1.9390 - acc: 0.3070 - val_loss: 1.9106 - val_acc: 0.3102
###Markdown
8. Load the Model with the Best Validation Accuracy
###Code
# load the weights that yielded the best validation accuracy
model.load_weights('model.weights.best.hdf5')
###Output
_____no_output_____
###Markdown
9. Calculate Classification Accuracy on Test Set
###Code
# evaluate and print test accuracy
score = model.evaluate(x_test, y_test, verbose=0)
print('\n', 'Test accuracy:', score[1])
###Output
Test accuracy: 0.68
###Markdown
10. Visualize Some PredictionsThis may give you some insight into why the network is misclassifying certain objects.
###Code
# get predictions on the test set
y_hat = model.predict(x_test)
# define text labels (source: https://www.cs.toronto.edu/~kriz/cifar.html)
cifar10_labels = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
# plot a random sample of test images, their predicted labels, and ground truth
fig = plt.figure(figsize=(20, 8))
for i, idx in enumerate(np.random.choice(x_test.shape[0], size=32, replace=False)):
ax = fig.add_subplot(4, 8, i + 1, xticks=[], yticks=[])
ax.imshow(np.squeeze(x_test[idx]))
pred_idx = np.argmax(y_hat[idx])
true_idx = np.argmax(y_test[idx])
ax.set_title("{} ({})".format(cifar10_labels[pred_idx], cifar10_labels[true_idx]),
color=("green" if pred_idx == true_idx else "red"))
###Output
_____no_output_____
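###Markdown
To dig a little deeper into the misclassifications, here is an optional sketch (not part of the original notebook) that computes per-class accuracy from the predictions above:
###Code
# per-class accuracy, using y_hat, y_test and cifar10_labels from the cells above
pred_classes = np.argmax(y_hat, axis=1)
true_classes = np.argmax(y_test, axis=1)
for i, label in enumerate(cifar10_labels):
    mask = (true_classes == i)
    print('{:<12s} {:.3f}'.format(label, (pred_classes[mask] == i).mean()))
###Output
_____no_output_____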
|
ch04/theta-misspecification.ipynb
|
###Markdown
Chapter 4: Theta-Misspecification
###Code
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import numpy as np
import torch
from torch.optim import Adam
from pytorchltr.datasets import MSLR30K
from model import MLPScoreFunc
from train import train_ranker
# experiment settings
batch_size = 32
hidden_layer_sizes = (10, 10)
learning_rate = 0.0001
n_epochs = 100
# Load the MSLR30K dataset (only the first run takes a long time)
train = MSLR30K(split="train")
test = MSLR30K(split="test")
###Output
_____no_output_____
###Markdown
Train the MLPRanker on data with pow_true=1.0, using pow_used=0.0 (same as the naive estimator)
###Code
torch.manual_seed(12345)
score_fn = MLPScoreFunc(
input_size=train[0].features.shape[1],
hidden_layer_sizes=hidden_layer_sizes,
)
optimizer = Adam(score_fn.parameters(), lr=learning_rate)
ndcg_score_list_naive = train_ranker(
score_fn=score_fn,
optimizer=optimizer,
estimator="ips",
train=train,
test=test,
batch_size=batch_size,
pow_true=1.0,
pow_used=0.0,
n_epochs=n_epochs,
)
###Output
100%|██████████| 100/100 [13:23<00:00, 8.04s/it]
###Markdown
Train the MLPRanker on data with pow_true=1.0, using pow_used=0.5 (underestimating the bias)
###Code
torch.manual_seed(12345)
score_fn = MLPScoreFunc(
input_size=train[0].features.shape[1],
hidden_layer_sizes=hidden_layer_sizes,
)
optimizer = Adam(score_fn.parameters(), lr=learning_rate)
ndcg_score_list_ips_small_pow = train_ranker(
score_fn=score_fn,
optimizer=optimizer,
estimator="ips",
train=train,
test=test,
batch_size=batch_size,
pow_true=1.0,
pow_used=0.5,
n_epochs=n_epochs,
)
###Output
100%|██████████| 100/100 [13:25<00:00, 8.05s/it]
###Markdown
Train the MLPRanker on data with pow_true=1.0, using pow_used=1.0
###Code
torch.manual_seed(12345)
score_fn = MLPScoreFunc(
input_size=train[0].features.shape[1],
hidden_layer_sizes=hidden_layer_sizes,
)
optimizer = Adam(score_fn.parameters(), lr=learning_rate)
ndcg_score_list_ips = train_ranker(
score_fn=score_fn,
optimizer=optimizer,
estimator="ips",
train=train,
test=test,
batch_size=batch_size,
pow_true=1.0,
pow_used=1.0,
n_epochs=n_epochs,
)
###Output
100%|██████████| 100/100 [13:11<00:00, 7.91s/it]
###Markdown
Train the MLPRanker on data with pow_true=1.0, using pow_used=1.5 (overestimating the bias)
###Code
torch.manual_seed(12345)
score_fn = MLPScoreFunc(
input_size=train[0].features.shape[1],
hidden_layer_sizes=hidden_layer_sizes,
)
optimizer = Adam(score_fn.parameters(), lr=learning_rate)
ndcg_score_list_ips_large_pow = train_ranker(
score_fn=score_fn,
optimizer=optimizer,
estimator="ips",
train=train,
test=test,
batch_size=batch_size,
pow_true=1.0,
pow_used=1.5,
n_epochs=n_epochs,
)
###Output
100%|██████████| 100/100 [13:05<00:00, 7.85s/it]
###Markdown
Plot the learning curves (Figure 4.18)
###Code
plt.subplots(1, figsize=(8,6))
plt.plot(range(n_epochs), ndcg_score_list_naive, label="pow_used=0.0", linewidth=3, linestyle="dotted")
plt.plot(range(n_epochs), ndcg_score_list_ips_small_pow, label="pow_used=0.5", linewidth=3, linestyle="dashed")
plt.plot(range(n_epochs), ndcg_score_list_ips, label="pow_used=1.0", linewidth=3)
plt.plot(range(n_epochs), ndcg_score_list_ips_large_pow, label="pow_used=1.5", linewidth=3, linestyle="dashdot")
plt.title(f"Test nDCG@10 Curve Of IPS Estimator with Different Levels of Misspecification", fontdict=dict(size=15))
plt.xlabel("Number of Epochs", fontdict=dict(size=20))
plt.ylabel("Test nDCG@10", fontdict=dict(size=20))
plt.tight_layout()
plt.legend(loc="best", fontsize=20)
plt.show()
###Output
_____no_output_____
|
20181024_IE_DS.ipynb
|
###Markdown
Stop Reinventing Pandas The following post was first presented as a talk for the [IE@DS](https://www.facebook.com/groups/173376299978861/) community. It will also be presented at [PyData meetup](https://www.meetup.com/PyData-Tel-Aviv/) in December. All the resources for this post, including a runnable notebook, can be found in the [github repo](https://github.com/DeanLa/dont_reinvent_pandas)   This notebook aims to show some nice ways modern Pandas makes your life easier. It is not about efficiency. I'm pretty sure using Pandas' built-in methods will be more efficient than reinventing pandas, but the main goal is to make the code easier to read, and more important - easier to write.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use(['classic', 'ggplot', 'seaborn-poster', 'dean.style'])
%load_ext autoreload
%autoreload 2
import my_utils
###Output
_____no_output_____
###Markdown
First Hacks! Reading the data and doing a few housekeeping tasks is the first place we can make our code more readable.
###Code
df_io = pd.read_csv('./data.csv',index_col=0,parse_dates=['date_'])
df_io.head()
df = df_io.copy().sort_values('date_').set_index('date_').drop(columns='val_updated')
df.head()
###Output
_____no_output_____
###Markdown
Beautiful pipes!One-line method chaining is hard to read and prone to human error; chaining each method on its own line makes it a lot more readable.
###Code
df_io\
.copy()\
.sort_values('date_')\
.set_index('date_')\
.drop(columns='val_updated')\
.head()
###Output
_____no_output_____
###Markdown
But it has a problem. You can't comment out a step, or even add a comment in between.
###Code
# This block will result in an error
df_io\
.copy()\ # This is an inline comment
# This is a regular comment
.sort_values('date_')\
# .set_index('date_')\
.drop(columns='val_updated')\
.head()
###Output
_____no_output_____
###Markdown
Even an unnoticeable space character may break everything
###Code
# This block will result in an error
df_io\
.copy()\
.sort_values('date_')\
.set_index('date_')\
.drop(columns='val_updated')\
.head()
###Output
_____no_output_____
###Markdown
The Penny DropsI like those "penny dropping" moments, when you realize you knew everything that is presented, yet it is presented in a new way you never thought of.
###Code
# We can split these values inside ()
users = (134856, 195373, 295817, 294003, 262166, 121066, 129678, 307120,
258759, 277922, 220794, 192312, 318486, 314631, 306448, 297059,206892,
169046, 181703, 146200, 199876, 247904, 250884, 282989, 234280, 202520,
138064, 133577, 301053, 242157)
# Penny Drop: We can also Split here
df = (df_io
.copy() # This is an inline comment
# This is a regular comment
.sort_values('date_')
.set_index('date_')
.drop(columns='val_updated')
)
df.head()
###Output
_____no_output_____
###Markdown
Map with dictA dict acts as a mapping with $f(key) = value$, therefore you can pass it to `.map` just like a function. In this example I want to map int key codes into letters.
###Code
df.event_type.map(lambda x: x+3).head()
# A dict also works as the mapping, just like the lambda above
df['event_type'] = df.event_type.map({
1:'A',
5:'B',
7:'C'
})
df.head()
###Output
_____no_output_____
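###Markdown
One thing to keep in mind with dict mapping (a quick illustrative check, not from the original talk): keys that are missing from the dict come back as NaN, so make sure the dict covers every value you expect.
###Code
pd.Series([1, 5, 7, 9]).map({1: 'A', 5: 'B', 7: 'C'}) # 9 is not a key in the dict, so it becomes NaN
###Output
_____no_output_____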
###Markdown
Time Series ResampleTask: How many events happen each hour? The Old Way
###Code
bad = df.copy()
bad['day'] = bad.index.date
bad['hour'] = bad.index.hour
(bad
.groupby(['day','hour'])
.count()
)
###Output
_____no_output_____
###Markdown
* Many lines of code* unneeded columns* Index is not a time anymore* **missing rows** (Did you notice?) A Better Way
###Code
df.resample('H').count() # H is for Hour
###Output
_____no_output_____
###Markdown
But it's even better on non-round intervals
###Code
rs = df.resample('10T').count()
# T is for Minute, and pandas understands 10T; it will also understand 11T if you wonder
rs.head()
###Output
_____no_output_____
###Markdown
[Complete list of Pandas' time abbreviations](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Period.strftime.html) Slice EasilyPandas will automatically turn strings into timestamps, and it will understand what you want it to do.
###Code
# Take only timestamps in the hour of 21:00.
rs.loc['2018-10-09 21',:]
# Take all timestamps before 18:31
rs.loc[:'2018-10-09 18:31',:]
###Output
_____no_output_____
###Markdown
Time Windows: Rolling, Expanding, EWMIf your DataFrame is indexed on a time index (which ours is), the rolling, expanding and ewm window methods work on it directly.
###Code
fig, ax = plt.subplots()
rs.plot(ax=ax,linestyle='--')
(rs
.rolling(6)
.mean()
.rename(columns = {'event_type':'rolling mean'})
.plot(ax=ax)
)
rs.expanding(6).mean().rename(columns = {'event_type':'expanding mean'}).plot(ax=ax)
rs.ewm(6).mean().rename(columns = {'event_type':'ewm mean'}).plot(ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
With ApplyIntuitively, windows are like GroupBy, so you can apply anything you want after the grouping, e.g.: geometric mean.
###Code
fig, ax = plt.subplots()
rs.plot(ax=ax,linestyle='--')
(rs
.rolling(6)
.apply(lambda x: np.power(np.product(x),1/len(x)) )
.rename(columns = {'event_type':'Rolling Geometric Mean'})
.plot(ax=ax)
)
plt.show()
###Output
_____no_output_____
###Markdown
Combine with GroupBy 🤯Pandas has no problem with groupby and resample together. It's as simple as chaining `groupby` and `resample`. In our specific case, we want to count events in an interval per event type.
###Code
per_event = (df
.groupby('event_type')
.resample('15T')
.apply('count')
.rename(columns={'event_type':'amount'})
)
per_event
###Output
_____no_output_____
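###Markdown
A roughly equivalent spelling (a sketch, not from the original talk) uses `pd.Grouper` on the time index inside a single `groupby`; empty intervals may be handled slightly differently than with `resample`.
###Code
(df
 .groupby(['event_type', pd.Grouper(freq='15T')]) # the Grouper bins the DatetimeIndex
 .size()
 .to_frame('amount')
 .head()
)
###Output
_____no_output_____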
###Markdown
Stack, Unstack Unstack In this case, working with a wide format indexed on intervals, with event types as columns, will make a lot more sense. The Old wayPivot table in modern pandas is more robust than it used to be. Still, it requires you to specify everything.
###Code
pt = pd.pivot_table(per_event,values = 'amount',columns='event_type',index='date_')
pt.head()
###Output
_____no_output_____
###Markdown
A better wayWhen you have just one column of values, unstack does the same easily
###Code
pt = per_event.unstack('event_type')
pt.columns = pt.columns.droplevel() # Unstack creates a multiindex on columns
pt.head()
###Output
_____no_output_____
###Markdown
StackAnd some extra tricks
###Code
pt.stack().head()
###Output
_____no_output_____
###Markdown
This looks kind of like what we had expected, but:* It's a series, not a DataFrame* The levels of the index are reversed* The main sort is on the date, yet it used to be on the event type
###Code
stack_back = (pt
.stack()
.to_frame('amount') # Turn Series to DF without calling the DF constructor
.swaplevel() # Swaps the levels of the index
.sort_index() # Sort by index, makes so much sense yet I used to do .reset_index().sort_values()
)
stack_back.head()
stack_back.equals(per_event)
###Output
_____no_output_____
###Markdown
ClipLet's say we know from domain knowledge that an event takes place a minimum of 5 and a maximum of 12 times at each timestamp. We would like to enforce that. In a real world example, we many times want to turn negative numbers into zeroes or cap truly big numbers at some known max. The Old WayIterate over columns and change values that meet the condition.
###Code
cl = pt.copy()
lb = 5
ub = 12
# Needed A loop of 3 lines
for col in ['A','B','C']:
cl['clipped_{}'.format(col)] = cl[col]
cl.loc[cl[col] < lb,'clipped_{}'.format(col)] = lb
cl.loc[cl[col] > ub,'clipped_{}'.format(col)] = ub
my_utils.plot_clipped(cl) # my_utils can be found in the github repo
###Output
_____no_output_____
###Markdown
A better way`.clip(lb,ub)`
###Code
cl = pt.copy()
# Beautiful One Liner
cl[['clipped_A','clipped_B','clipped_C']] = cl.clip(5,12)
my_utils.plot_clipped(cl) # my_utils can be found in the github repo
###Output
_____no_output_____
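###Markdown
And for the common real-world case mentioned above of turning negative numbers into zeroes, you can clip from one side only (a quick sketch; the subtraction is just to create some negatives to demonstrate on):
###Code
(pt - 10).clip(lower=0).head() # only a lower bound: negatives become 0, large values are untouched
###Output
_____no_output_____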
###Markdown
ReindexNow I have 3 event types from 17:00 to 23:00. Let's imagine I know that I actually have 5 event types. I also know that the period was from 16:00 to 00:00.
###Code
etypes = list('ABCDZ') # New columns
# Define a date range - Pandas will automatically make this into an index
idx = pd.date_range(start='2018-10-09 16:00:00',end='2018-10-09 23:59:00',freq='15T')
type(idx)
pt.reindex(idx, columns=etypes, fill_value=0).head()
### Let's put this in a function - This will help us later.
def get_all_types_and_timestamps(df, min_date='2018-10-09 16:00:00',
max_date='2018-10-09 23:59:00', etypes=list('ABCDZ')):
ret = df.copy()
time_idx = pd.date_range(start=min_date,end=max_date,freq='15T')
    # Indices work like sets. This is good practice so we don't override our intended index
idx = ret.index.union(time_idx)
etypes = df.columns.union(set(etypes))
ret = ret.reindex(idx, columns=etypes, fill_value=0)
return ret
###Output
_____no_output_____
###Markdown
Method Chaining AssignAssign is for creating new columns on the dataframe. This is instead of `df[new_col] = function(df[old_col])`. They are both one-liners, but `.assign` doesn't break the flow.
###Code
pt.assign(mean_all = pt.mean(axis=1)).head()
###Output
_____no_output_____
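###Markdown
`assign` also accepts a callable, which receives the intermediate dataframe. Inside a longer chain this is the safer form (a small sketch), because it doesn't refer back to the outer `pt` variable:
###Code
pt.assign(mean_all=lambda df: df.mean(axis=1)).head()
###Output
_____no_output_____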
###Markdown
PipeThink R's %>% (Or rather, avoid thinking about R), `.pipe` is a method that accepts a function. `pipe`, by default, assumes the first argument of this function is a dataframe and passes the current dataframe down the pipeline. The function should return a dataframe also, if you want to continue with the pipe. Yet, it can also return any other value if you put it in the last step. This is incredibly valuable because it takes you one step away from "sql" where you do things "in reverse". $f(g(h(df)))$ = `df.pipe(h).pipe(g).pipe(f)`
###Code
def do_something(df, col='A', n = 200):
ret = df.copy()
# A dataframe is mutable, if you don't copy it first, this is prone to many errors.
# I always copy when I enter a function, even if I'm sure it shouldn't change anything.
ret[col] = ret[col] + n
return ret
do_something(do_something(do_something(pt), 'B', 100), 'C',500).head()
(pt
.pipe(do_something)
.pipe(do_something, col='B', n=100)
.pipe(do_something, col='C', n=500)
.head(5))
###Output
_____no_output_____
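###Markdown
And as noted above, the last step of a pipe doesn't have to return a dataframe. A quick sketch where the final step returns a single number:
###Code
(pt
 .pipe(do_something)
 .pipe(lambda df: df['A'].mean()) # the last step returns a scalar, ending the chain
)
###Output
_____no_output_____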
###Markdown
You can always do this with multiple lines of `df = do_something(df)` but I think this method is more elegant. Beautiful Code Tells a StoryYour code is not just about making the computer do things. It's about telling a story of what you wish to happen. Sometimes other people will want to read your code. Most of the time, it is you 3 months in the future who will want to read it. Some say good code documents itself. I'm not that extreme, yet storytelling with code may save you from many lines of unnecessary comments.The next and final block tells the story in one block. It's elegant, it tells a story. If you build utility functions and `pipe` them while following meaningful naming, they help tell a story. If you `assign` columns with meaningful names, they tell a story. You `drop`, you `apply`, you `read`, you `groupby` and you `resample` - they all tell a story.(Well... Maybe they could have gone with better naming for `resample`)
###Code
import my_utils
df = (pd
.read_csv ('./data.csv', index_col=0, parse_dates=['date_'])
.assign (event_type=lambda df: df.event_type.map({1: 'A', 5: 'B', 7: 'C'}))
.sort_values ('date_')
.set_index ('date_')
.drop (columns='val_updated')
.groupby ('event_type')
.resample ('15T')
.apply ('count')
.rename (columns={'event_type': 'amount'})
.unstack ('event_type')
.pipe (my_utils.remove_multi_index)
.pipe (get_all_types_and_timestamps) # Remember this from before?
.assign (mean_event=lambda df: df.mean(axis=1))
.loc [:, ['mean_event']]
.pipe (my_utils.make_sliding_time_windows, steps_back=6)
.dropna ()
)
df.head()
###Output
_____no_output_____
|
8 3 5 RANDOM FOREST - bagging - bootstrap - random supspace.ipynb
|
###Markdown
RANDOM FOREST
###Code
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import mean_squared_error, r2_score
import matplotlib.pyplot as plt
from sklearn import model_selection
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
df = pd.read_csv("verisetleri\Hitters.csv")
df = df.dropna()
dms = pd.get_dummies(df[['League', 'Division', 'NewLeague']])
y = df["Salary"]
X_ = df.drop(['Salary', 'League', 'Division', 'NewLeague'], axis=1).astype('float64')
X = pd.concat([X_, dms[['League_N', 'Division_W', 'NewLeague_N']]], axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
rf_model = RandomForestRegressor(random_state=42).fit(X_train, y_train)
rf_model.get_params()
# n_estimators --> number of trees to use. Values like 250, 500, 1000 are typically tried.
# max_features --> number of features to consider at each split
y_pred = rf_model.predict(X_test)
np.sqrt(mean_squared_error(y_test, y_pred))
# MODEL TUNING
# The most important parameters:
# 1 - the number of trees
# 2 - the number of features to consider at each split
# 3 and 4 - the min_samples_split and max_depth parameters
rf_params = {"max_depth" : [5, 8, 10],
"max_features" : [2, 5, 10],
"n_estimators" : [200, 500, 1000, 2000],
"min_samples_split" : [2, 10, 80, 100]}
rf_cv_model = GridSearchCV(rf_model, rf_params, cv=10, n_jobs=-1, verbose=2).fit(X_train, y_train)
rf_cv_model.best_params_
rf_final_model = RandomForestRegressor(random_state=42, max_depth=8, max_features=2, min_samples_split=2, n_estimators=200).fit(X_train, y_train)
rf_final_model.get_params()
y_pred = rf_final_model.predict(X_test)
np.sqrt(mean_squared_error(y_test, y_pred))
###Output
_____no_output_____
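###Markdown
As an optional extra check (a sketch, not part of the original flow), the tuned model's cross-validated RMSE on the training set can be computed with the already imported model_selection module:
###Code
cv_rmse = np.sqrt(-model_selection.cross_val_score(rf_final_model, X_train, y_train,
                                                   cv=10, scoring="neg_mean_squared_error")).mean()
cv_rmse
###Output
_____no_output_____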
###Markdown
VARIABLE IMPORTANCE LEVELS AND THEIR VISUALIZATION
###Code
# This gives us a way to see which variables we should consider or focus on during the modeling process.
rf_final_model.feature_importances_*100
# rf_final_model.feature_importances_ holds the computed importances of the variables. Multiplying these scores by 100 puts them in a more comparable form.
Importance = pd.DataFrame({'Importance':rf_final_model.feature_importances_*100},
index=X_train.columns)
# Here an Importance dataframe is created. The values in the 'Importance' column come from the tuned model's feature_importances_ attribute, multiplied by 100.
# index=X_train.columns uses the variable names as the index.
Importance.sort_values(by='Importance',
axis=0,
ascending=True).plot(kind='barh',
color='r')
plt.xlabel('Variable Importance')
plt.gca().legend_ = None
# If we have 100 variables, we may not always want to work with all of them. We may also try to derive new variables from the existing ones in order to reduce the error. Being able to access the variable importances like this gives us a decision-support point for the choices and decisions we make.
# Notably, the variables with the highest importance here are the ones related to player performance.
# end
###Output
_____no_output_____
|
assocs/assoc_mags.ipynb
|
###Markdown
Inspect by condition categories
###Code
COND_CATS = {
'temperature',
'supplement',
'strain-description',
'taxonomy-id',
'base-media',
'carbon-source',
'nitrogen-source',
'phosphorous-source',
'sulfur-source',
'calcium-source'
}
COND_TO_COND_TYPE_d = dict()
for cc in COND_CATS:
for cond in all_AVA_muts[cc].unique():
COND_TO_COND_TYPE_d[cond] = cc
CC_FT_assoc_d = dict()
for cc in COND_CATS:
d = {ft:set() for ft in FEAT_TYPES}
CC_FT_assoc_d[cc] = d
# CC_FT_assoc_d
for _, m in all_AVA_muts.iterrows():
for ftc in FEAT_TYPE_COL:
for f in m[ftc]:
for cond in f['significantly associated conditions']:
cc = COND_TO_COND_TYPE_d[cond]
ft = ftc
f_id = f['name']
if ftc == "genomic features":
f_id = f['RegulonDB ID']
ft = f["feature type"]
if ft == "unknown":
ft = "intergenic"
if ftc == "operons":
f_id = f['RegulonDB ID']
CC_FT_assoc_d[cc][ft].add(f_id)
# CC_FT_assoc_d
C_FT_F_assoc_d = dict()
for c in COND_TO_COND_TYPE_d.keys():
d = {ft:set() for ft in FEAT_TYPES}
C_FT_F_assoc_d[c] = d
for _, m in all_AVA_muts.iterrows():
for ftc in FEAT_TYPE_COL:
for f in m[ftc]:
for c in f['significantly associated conditions']:
ft = ftc
f_id = f['name']
if ftc == "genomic features":
f_id = f['RegulonDB ID']
ft = f["feature type"]
if ft == "unknown":
ft = "intergenic"
if ftc == "operons":
f_id = f['RegulonDB ID']
C_FT_F_assoc_d[c][ft].add(f_id)
# C_FT_F_assoc_d
df = pd.DataFrame()
for c, ft_f_assoc_d in C_FT_F_assoc_d.items():
for ft, feats in ft_f_assoc_d.items():
if c not in df.index:
srs = pd.Series({ft:0 for ft in FEAT_TYPES}, name=c)
df = df.append(srs)
df.at[c,ft] += len(feats)
df["associated features"] = df.apply(lambda r: r[FEAT_TYPES].sum(), axis=1)
df = df[df["associated features"]!=0]
df["condition type"] = df.apply(lambda r: COND_TO_COND_TYPE_d[r.name], axis=1)
df.head()
# getting medians to sort upcoming boxplot
medians_d = {}
for c_t, tdf in df.groupby(["condition type"]):
medians_d[c_t] = tdf["associated features"].median()
medians_df = pd.DataFrame.from_dict(medians_d, orient="index", columns=["median"])
medians_df = medians_df.sort_values(by="median", ascending=False)
medians_df
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
from adjustText import adjust_text
%matplotlib inline
plt.rcParams["figure.dpi"] = 300
sns.set_palette("muted")
sns.set_context("paper")
# sns.set_style("ticks")
sns.set_style("whitegrid")
plt.rcParams['font.sans-serif'] = ["FreeSans"]
boxplot_kwargs = {
'boxprops': {'edgecolor': 'k', 'linewidth': 0.75},
'whiskerprops': {'color': 'k', 'linewidth': 0.75},
'medianprops': {'color': 'orange', 'linewidth': 1},
'capprops': {'color': 'k', 'linewidth': 0.75},
'flierprops': {'marker': '.', 'markerfacecolor': "black", 'markeredgecolor': "black",
# "markersize": 2 # only include if including the stripplot
}
}
plt.figure(
figsize=(2.5, 1.5)
)
ax = sns.boxplot(
data=df, x="associated features", y="condition type",
order=medians_df.index,
color="white",
**boxplot_kwargs
)
ax.set_xlabel('associated features per condition', fontname="FreeSans", fontsize=9)
ax.set_ylabel('conditions', fontname="FreeSans", fontsize=9)
ax.tick_params(axis='both', which='both', length=0)
ax.set_xlim(0,)
df2 = pd.DataFrame()
for c, r in df.iterrows():
for ftc in FEAT_TYPES:
srs = pd.Series({"condition type": r["condition type"],
"feature type": ftc, "associated count": r[ftc]}, name=c)
df2 = df2.append(srs)
df2["associated count"] = df2["associated count"].astype(int)
df2
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
from adjustText import adjust_text
%matplotlib inline
plt.rcParams["figure.dpi"] = 200
sns.set_palette("tab20")
sns.set_context("paper")
sns.set_style("ticks")
# sns.set_style("whitegrid")
plt.rcParams['font.sans-serif'] = ["FreeSans"]
boxplot_kwargs = {
'boxprops': {'edgecolor': 'k', 'linewidth': 0.75},
'whiskerprops': {'color': 'k', 'linewidth': 0.75},
'medianprops': {'color': 'black', 'linewidth': 0.75},
'capprops': {'color': 'k', 'linewidth': 0.75},
'flierprops': {'marker': 'd', 'markerfacecolor': "black", 'markeredgecolor': "black", "markersize":1}
}
plt.figure(
# figsize=(3, 2)
)
sns.boxplot(
data=df2, x="associated count", y="condition type",
hue="feature type",
# color="white",
**boxplot_kwargs
)
# sns.stripplot(
# data=df, x="genomic features", y="cond type",
# color="0.8",
# alpha=0.5,
# linewidth=1,
# # facecolors=None
# )
sns.despine(
# offset=10,
trim=True
)
CC_FT_assoc_cnt_df = pd.DataFrame(columns=["condition type", "feature type", "associated count"])
for cc, ft_d in CC_FT_assoc_d.items():
for ft, f_set in ft_d.items():
CC_FT_assoc_cnt_df = CC_FT_assoc_cnt_df.append({"condition type": cc,
"feature type": ft,
"associated count": len(f_set)
}, ignore_index=True)
CC_FT_assoc_cnt_df = CC_FT_assoc_cnt_df[CC_FT_assoc_cnt_df["associated count"] != 0]
CC_FT_assoc_cnt_df.head()
CC_FT_assoc_cnt_df["feature type"] = CC_FT_assoc_cnt_df.apply(lambda r: r["feature type"].replace("regulators", "regulons") , axis=1)
# CC_FT_assoc_cnt_df["feature type"] = CC_FT_assoc_cnt_df.apply(lambda r: r["feature type"].replace("EC numbers", "reactions") , axis=1)
CC_FT_assoc_cnt_df["feature type"] = CC_FT_assoc_cnt_df.apply(lambda r: r["feature type"][:-1] if r["feature type"][-1] == 's' else r["feature type"] ,axis=1)
p = {
'gene':"#72C4B3",
'operon':"#A7A0CB",
'pathway':"#F65E54",
'imodulon':"#6397C2",
'regulon':"#FA9A47",
'attenuator terminator':"#9BD44C",
'intergenic':"#F9B8DA",
'promoter':"#CAC9CA",
'terminator':"#A35DA6",
'TFBS':"#BAE4B0",
'RBS':"#FFE953",
'reaction': "#F781BF",
'product': "#D9AF77"
}
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
from adjustText import adjust_text
%matplotlib inline
plt.rcParams["figure.dpi"] = 200
# sns.set_palette("deep")
sns.set_context("paper")
sns.set_style("ticks")
plt.rcParams['font.sans-serif'] = ["FreeSans"]
plt.figure(figsize=(4, 2.5))
ax = sns.swarmplot(x="associated count", y="condition type", hue="feature type", data=CC_FT_assoc_cnt_df, palette=p)
plt.legend(
# title='title',
bbox_to_anchor=(1.05, 1), loc='upper left',
# prop=fontP
)
CC_FT_assoc_cnt_df.head()
# TODO: order columns according to which have the largest sums on average per CC
CC_FT_assoc_cnt_mat = pd.DataFrame(columns=CC_FT_assoc_cnt_df["feature type"].unique(), index=CC_FT_assoc_cnt_df["condition type"].unique())
CC_FT_assoc_cnt_mat = CC_FT_assoc_cnt_mat.fillna(0)
for _, r in CC_FT_assoc_cnt_df.iterrows():
CC_FT_assoc_cnt_mat.at[r["condition type"], r["feature type"]] = r["associated count"]
CC_FT_assoc_cnt_mat
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams["figure.dpi"] = 300
sns.set_palette("Set2")
sns.set_context("paper")
sns.set_style("whitegrid")
plt.rcParams['font.sans-serif'] = ["FreeSans"]
plt.rcParams['legend.handlelength'] = 1
plt.rcParams['legend.handleheight'] = 1.125
plt.rcParams['legend.handletextpad'] = 0.1
plt.rcParams['legend.labelspacing'] = 0.1
df = CC_FT_assoc_cnt_mat.copy()
# sorting
df["sum"] = df.apply(lambda r: r.sum(), axis=1)
df.sort_values(by="sum", inplace=True)
df.drop(columns=["sum"], inplace=True)
df = df.T
df["sum"] = df.apply(lambda r: r.sum(), axis=1)
df.sort_values(by="sum", ascending=False, inplace=True)
df.drop(columns=["sum"], inplace=True)
df = df.T
# display(df)
ax = df.plot.barh(
figsize=(4, 1.5),
width=1,
color=p,
stacked=True
)
# sns.despine(ax=ax, top=True, right=True, left=True, bottom=False)
# leg = ax.legend(
# loc='center left',
# bbox_to_anchor=(1, 0.4),
# frameon=False,
# title="feature type",
# labelspacing=0
# )
leg = ax.legend(
bbox_to_anchor=(0., 1, 1., .102),
loc=3,
ncol=4,
frameon=False,
fontsize=7,
labelspacing=0
)
leg._legend_box.align = "left"
ax.tick_params(axis='both', which='both', length=0)
ax.set_xlabel('associations', fontname="FreeSans", fontsize=9) # TODO: call out in figure description that the associations are statistically significant.
ax.set_ylabel('condition type', fontname="FreeSans", fontsize=9)
ax.grid(axis='y')
# ax.set_title("Mutations to the GlpK subunit binding sites\nare very specific")
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams["figure.dpi"] = 300
sns.set_palette("muted")
sns.set_context("paper")
sns.set_style("whitegrid")
plt.rcParams['font.sans-serif'] = ["FreeSans"]
plt.rcParams['legend.handlelength'] = 1
plt.rcParams['legend.handleheight'] = 1.125
plt.rcParams['legend.handletextpad'] = 0.1
plt.rcParams['legend.labelspacing'] = 0.1
df = CC_FT_assoc_cnt_mat.copy()
# sorting
df["sum"] = df.apply(lambda r: r.sum(), axis=1)
df.sort_values(by="sum", inplace=True)
df.drop(columns=["sum"], inplace=True)
df = df.T
df["sum"] = df.apply(lambda r: r.sum(), axis=1)
df.sort_values(by="sum", ascending=True, inplace=True)
df.drop(columns=["sum"], inplace=True)
display(df)
ax = df.plot.barh(
figsize=(2, 2),
width=1,
# color=p,
stacked=True
)
# sns.despine(ax=ax, top=True, right=True, left=True, bottom=False)
leg = ax.legend(loc='center left', bbox_to_anchor=(1, 0.4), frameon=False, title="condition type", labelspacing=0)
leg._legend_box.align = "left"
ax.tick_params(axis='both', which='both', length=0)
for tick in ax.get_xticklabels():
tick.set_fontname("FreeSans")
for tick in ax.get_yticklabels():
tick.set_fontname("FreeSans")
ax.set_xlabel('associations', fontname="FreeSans", fontsize=9)
ax.set_ylabel('feature type', fontname="FreeSans", fontsize=9)
ax.grid(axis='y')
# ax.set_title("Mutations to the GlpK subunit binding sites\nare very specific")
plt.savefig("fx.svg")
###Output
_____no_output_____
|
data/3 - RL-TodosLosFeatures.ipynb
|
###Markdown
Reading the files
###Code
%matplotlib inline
from pathlib import Path
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
id = '15eBo8goFc4q--lIGREc9Dknn9GqBybrs'
downloaded = drive.CreateFile({'id': id})
downloaded.GetContentFile('test_values_selected_features_remix.csv')
test_values1 = pd.read_csv('test_values_selected_features_remix.csv', encoding='latin-1', index_col='building_id')
test_values1[test_values1.select_dtypes('O').columns] = test_values1[test_values1.select_dtypes('O').columns].astype('category')
id = '1F6ZTDoJc-aaiD-rpriXq5zmv2i84OSR4'
downloaded = drive.CreateFile({'id': id})
downloaded.GetContentFile('train_values_selected_features_remix.csv')
train_values1 = pd.read_csv('train_values_selected_features_remix.csv', encoding='latin-1', index_col='building_id')
train_values1[train_values1.select_dtypes('O').columns] = train_values1[train_values1.select_dtypes('O').columns].astype('category')
id = '1kVPCXPedaMcZZIuj2wFjhJoQ36nD5kbq'
downloaded = drive.CreateFile({'id': id})
downloaded.GetContentFile('train_values_complete_features_remix.csv')
train_values2 = pd.read_csv('train_values_complete_features_remix.csv', encoding='latin-1', index_col='building_id')
train_values2[train_values2.select_dtypes('O').columns] = train_values2[train_values2.select_dtypes('O').columns].astype('category')
id = '1XBWMAHR6sItnHT2dq3hX2nk9WrzNSF7J'
downloaded = drive.CreateFile({'id': id})
downloaded.GetContentFile('test_values_complete_features_remix.csv')
test_values2 = pd.read_csv('test_values_complete_features_remix.csv', encoding='latin-1', index_col='building_id')
test_values2[test_values2.select_dtypes('O').columns] = test_values2[test_values2.select_dtypes('O').columns].astype('category')
id='1RUtolRcQlR3RGULttM4ZoQaK_Ouow4gc'
downloaded = drive.CreateFile({'id': id})
downloaded.GetContentFile('train_labels.csv')
train_labels = pd.read_csv('train_labels.csv', encoding='latin-1', dtype={'building_id': 'int64', 'damage_grade': 'int64'}, index_col='building_id')
id = '1JDpXEXjP0QCLF7qbpMgWRdNCf5w3TURK'
downloaded = drive.CreateFile({'id': id})
downloaded.GetContentFile('index.csv')
df_index = pd.read_csv('index.csv', encoding='latin-1', index_col='building_id')
###Output
_____no_output_____
###Markdown
Sorting
###Code
id = '1kt2VFhgpfRS72wtBOBy1KDat9LanfMZU'
downloaded = drive.CreateFile({'id': id})
downloaded.GetContentFile('test_values.csv')
test_values = pd.read_csv('test_values.csv', encoding='latin-1', dtype = {'building_id': 'int64', 'geo_level_2_id': 'int64', 'geo_level_3_id': 'int64',\
'count_floors_pre_eq': 'int64', 'age': 'int64', 'area_percentage': 'int64', \
'height_percentage': 'int64', 'land_surface_condition': 'category',\
'foundation_type': 'category', 'roof_type': 'category', 'ground_floor_type': 'category',\
'other_floor_type': 'category', 'position': 'category', 'plan_configuration': 'category',\
'has_superstructure_adobe_mud': 'boolean', 'has_superstructure_mud_mortar_stone': 'boolean', \
'has_superstructure_stone_flag': 'boolean', 'has_superstructure_cement_mortar_stone': 'boolean',\
'has_superstructure_mud_mortar_brick': 'boolean', 'has_superstructure_cement_mortar_brick': 'boolean',\
'has_superstructure_timber': 'boolean', 'has_superstructure_bamboo': 'boolean',\
'has_superstructure_rc_non_engineered': 'boolean', 'has_superstructure_rc_engineered': 'boolean',\
'has_superstructure_other': 'boolean', 'legal_ownership_status': 'category', 'count_families': 'int64', \
'has_secondary_use': 'boolean', 'has_secondary_use_agriculture': 'boolean', 'has_secondary_use_hotel': 'boolean', \
'has_secondary_use_rental': 'boolean', 'has_secondary_use_institution': 'boolean', 'has_secondary_use_school': 'boolean',\
'has_secondary_use_industry': 'boolean', 'has_secondary_use_health_post': 'boolean', \
'has_secondary_use_gov_office': 'boolean', 'has_secondary_use_use_police': 'boolean', 'has_secondary_use_other': 'boolean'})
test_values2 = test_values2.reset_index()
test_values = test_values['building_id']
test_values = test_values.to_frame()
test_values2 = test_values.merge(test_values2, on = 'building_id').set_index('building_id')
df = train_values2.merge(train_labels, left_index = True, right_index= True)
train_values2 = df.drop(columns='damage_grade')
train_labels = pd.DataFrame(df['damage_grade'])
test_values = pd.read_csv('test_values.csv', encoding='latin-1', dtype = {'building_id': 'int64', 'geo_level_2_id': 'int64', 'geo_level_3_id': 'int64',\
'count_floors_pre_eq': 'int64', 'age': 'int64', 'area_percentage': 'int64', \
'height_percentage': 'int64', 'land_surface_condition': 'category',\
'foundation_type': 'category', 'roof_type': 'category', 'ground_floor_type': 'category',\
'other_floor_type': 'category', 'position': 'category', 'plan_configuration': 'category',\
'has_superstructure_adobe_mud': 'boolean', 'has_superstructure_mud_mortar_stone': 'boolean', \
'has_superstructure_stone_flag': 'boolean', 'has_superstructure_cement_mortar_stone': 'boolean',\
'has_superstructure_mud_mortar_brick': 'boolean', 'has_superstructure_cement_mortar_brick': 'boolean',\
'has_superstructure_timber': 'boolean', 'has_superstructure_bamboo': 'boolean',\
'has_superstructure_rc_non_engineered': 'boolean', 'has_superstructure_rc_engineered': 'boolean',\
'has_superstructure_other': 'boolean', 'legal_ownership_status': 'category', 'count_families': 'int64', \
'has_secondary_use': 'boolean', 'has_secondary_use_agriculture': 'boolean', 'has_secondary_use_hotel': 'boolean', \
'has_secondary_use_rental': 'boolean', 'has_secondary_use_institution': 'boolean', 'has_secondary_use_school': 'boolean',\
'has_secondary_use_industry': 'boolean', 'has_secondary_use_health_post': 'boolean', \
'has_secondary_use_gov_office': 'boolean', 'has_secondary_use_use_police': 'boolean', 'has_secondary_use_other': 'boolean'})
test_values1 = test_values1.reset_index()
test_values = test_values['building_id']
test_values = test_values.to_frame()
test_values1 = test_values.merge(test_values1, on = 'building_id').set_index('building_id')
df = train_values1.merge(train_labels, left_index = True, right_index= True)
train_values1 = df.drop(columns='damage_grade')
train_labels = pd.DataFrame(df['damage_grade'])
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
train_values = train_values2.copy()
test_values = test_values2.copy()
###Output
_____no_output_____
###Markdown
Continuing the regression
###Code
index = df_index.index
cols = train_values.index.tolist()
not_in_index = []
for col in cols:
if col not in index: not_in_index.append(col)
valid_values = train_values.loc[index]
train_values = train_values.loc[not_in_index]
valid_labels = train_labels.loc[index]
train_labels = train_labels.loc[not_in_index]
idx1 = train_values.shape[0]
idx2 = valid_values.shape[0]
data_df = pd.concat([train_values, valid_values, test_values], sort=False)
cat_features = ['geo_level_1_id', 'geo_level_2_id', 'geo_level_3_id', 'land_surface_condition',
'foundation_type', 'roof_type', 'ground_floor_type', 'other_floor_type',
'position', 'plan_configuration', 'legal_ownership_status',
'count_families', 'range_age']
num_features = ['count_floors_pre_eq', 'max_mean_area_percentage_id_1', 'mean_area_geo_level_1_id']
data_cat = pd.DataFrame(index = data_df.index,
data = data_df,
columns = cat_features)
data_num = data_df.drop(columns = cat_features)
num_features = data_num.columns
data_num.shape
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import MinMaxScaler
enc = OneHotEncoder(drop='first')
enc.fit(data_cat)
data_cat_encoded = enc.transform(data_cat)
type(data_cat_encoded)
scaler = MinMaxScaler()
scaler.fit(data_num[:idx2])
data_num_scaled = scaler.transform(data_num)
type(data_num)
data_cat_encoded
from scipy.sparse import coo_matrix, hstack
data = hstack((data_cat_encoded,data_num_scaled))
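# Final design matrix: the one-hot encoded categorical block stacked next to the min-max scaled numeric block (kept sparse).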
data_cat_encoded
data = data.astype(dtype='float16')
X_train = data.tocsr()[:idx1]
X_valid = data.tocsr()[idx1:idx2+idx1]
X_test = data.tocsr()[idx2+idx1:]
y_train = train_labels['damage_grade'].values
y_valid = valid_labels['damage_grade'].values
train_values = train_values.astype(dtype= {'geo_level_1_id': 'category', 'geo_level_2_id': 'category',
'geo_level_3_id': 'category', 'count_families':'category',
'count_materials': 'category'})
test_values = test_values.astype(dtype= {'geo_level_1_id': 'category', 'geo_level_2_id': 'category',
'geo_level_3_id': 'category', 'count_families':'category',
'count_materials': 'category'})
#idx = train_values.shape[0]
#data_df = pd.concat([train_values, test_values], sort=False)
#data_cat = pd.DataFrame(index = data_df.index,
# data = data_df,
# columns = cat_features)
#data_num = data_df.drop(columns = cat_features)
#data = data.astype(dtype='float16')
#X_train = data.tocsr()[:idx]
#X_test = data.tocsr()[idx:]
# for the undersample, use train_labels2
#y_train = train_labels['damage_grade']
#from sklearn.model_selection import train_test_split
#X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size=0.3, random_state=9)
###Output
_____no_output_____
###Markdown
Regression
###Code
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression(C = 0.9, verbose = 100, solver ='liblinear', max_iter=500, n_jobs=-1, random_state = 5)
log_reg.fit(X_train, y_train)
from sklearn.metrics import f1_score
# TEST SCORE
y_pred = log_reg.predict(X_valid)
f1_score(y_valid, y_pred, average='micro')
from sklearn.metrics import f1_score, accuracy_score, confusion_matrix, classification_report
pred = y_pred
score = f1_score(y_valid, pred, average='micro')
#score = accuracy_score(y_valid_split, pred)
cm = confusion_matrix(y_valid, pred)
report = classification_report(y_valid, pred)
print("f1_micro: ", score, "\n\n")
print(cm, "\n\n")
print(report, "\n\n")
# TRAIN SCORE
y_pred = log_reg.predict(X_train)
f1_score(y_train, y_pred, average='micro')
###Output
_____no_output_____
###Markdown
Some things that can be tried; SMOTE takes quite a while. If I want to apply SMOTE
###Code
from imblearn.over_sampling import SMOTE
sm = SMOTE(random_state=27)
X_train, y_train = sm.fit_sample(X_train, y_train)
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/externals/six.py:31: FutureWarning: The module is deprecated in version 0.21 and will be removed in version 0.23 since we've dropped support for Python 2.7. Please rely on the official version of six (https://pypi.org/project/six/).
"(https://pypi.org/project/six/).", FutureWarning)
/usr/local/lib/python3.7/dist-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.neighbors.base module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.neighbors. Anything that cannot be imported from sklearn.neighbors is now part of the private API.
warnings.warn(message, FutureWarning)
/usr/local/lib/python3.7/dist-packages/sklearn/utils/deprecation.py:87: FutureWarning: Function safe_indexing is deprecated; safe_indexing is deprecated in version 0.22 and will be removed in version 0.24.
warnings.warn(msg, category=FutureWarning)
###Markdown
If I want oversampling
###Code
pip install -U imbalanced-learn
from imblearn.over_sampling import RandomOverSampler
sampling_strategy = "not majority"
ros = RandomOverSampler(sampling_strategy=sampling_strategy)
X_train, y_train = ros.fit_resample(X_train, y_train)
###Output
_____no_output_____
###Markdown
If I want undersampling
###Code
from imblearn.under_sampling import RandomUnderSampler
sampling_strategy = "not minority"
rus = RandomUnderSampler(sampling_strategy=sampling_strategy)
X_train, y_train = rus.fit_resample(X_train, y_train)
###Output
_____no_output_____
###Markdown
Ridge
###Code
from sklearn.linear_model import RidgeClassifier
#penalty = 'l1'
ridge = RidgeClassifier(alpha= 10, max_iter=500, random_state = 5)
ridge.fit(X_train, y_train)
from sklearn.metrics import f1_score
# TEST SCORE
y_pred = ridge.predict(X_valid)
f1_score(y_valid, y_pred, average='micro')
from sklearn.metrics import f1_score, accuracy_score, confusion_matrix, classification_report
pred = y_pred
score = f1_score(y_valid, pred, average='micro')
#score = accuracy_score(y_valid_split, pred)
cm = confusion_matrix(y_valid, pred)
report = classification_report(y_valid, pred)
print("f1_micro: ", score, "\n\n")
print(cm, "\n\n")
print(report, "\n\n")
# TRAIN SCORE
y_pred = ridge.predict(X_train)
f1_score(y_train, y_pred, average='micro')
from sklearn.metrics import f1_score, accuracy_score, confusion_matrix, classification_report
pred = y_pred
score = f1_score(y_train, pred, average='micro')
#score = accuracy_score(y_valid_split, pred)
cm = confusion_matrix(y_train, pred)
report = classification_report(y_train, pred)
print("f1_micro: ", score, "\n\n")
print(cm, "\n\n")
print(report, "\n\n")
X_test.shape
###Output
_____no_output_____
###Markdown
Linear Discriminant
###Code
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
lda = LinearDiscriminantAnalysis()
lda.fit(X_train.toarray(), y_train)
from sklearn.metrics import f1_score
# TEST SCORE
y_pred = lda.predict(X_valid.toarray())
f1_score(y_valid, y_pred, average='micro')
from sklearn.metrics import f1_score, accuracy_score, confusion_matrix, classification_report
pred = y_pred
score = f1_score(y_valid, pred, average='micro')
#score = accuracy_score(y_valid_split, pred)
cm = confusion_matrix(y_valid, pred)
report = classification_report(y_valid, pred)
print("f1_micro: ", score, "\n\n")
print(cm, "\n\n")
print(report, "\n\n")
# TRAIN SCORE
y_pred = lda.predict(X_train.toarray())
f1_score(y_train, y_pred, average='micro')
from sklearn.metrics import f1_score, accuracy_score, confusion_matrix, classification_report
pred = y_pred
score = f1_score(y_train, pred, average='micro')
#score = accuracy_score(y_valid_split, pred)
cm = confusion_matrix(y_train, pred)
report = classification_report(y_train, pred)
print("f1_micro: ", score, "\n\n")
print(cm, "\n\n")
print(report, "\n\n")
###Output
f1_micro: 0.7799196448640144
[[109313 7685 1642]
[ 19091 77204 22345]
[ 4378 23190 91072]]
precision recall f1-score support
1 0.82 0.92 0.87 118640
2 0.71 0.65 0.68 118640
3 0.79 0.77 0.78 118640
accuracy 0.78 355920
macro avg 0.78 0.78 0.78 355920
weighted avg 0.78 0.78 0.78 355920
###Markdown
Feature Selection
###Code
import joblib
from mlxtend.feature_selection import SequentialFeatureSelector as sfs
log_reg = LogisticRegression(C = 1.0, n_jobs=-1, random_state = 5)
sfs1 = sfs(log_reg,
k_features=100,
forward=True,
floating=False,
verbose=2,
scoring='f1_micro',
cv=3)
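# Note: forward sequential selection up to 100 features with 3-fold CV refits the model many times; on this large sparse matrix it can take a very long time.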
# Perform SFFS
sfs1 = sfs1.fit(X_train, y_train)
feat_cols = list(sfs1.k_feature_idx_)
print(feat_cols)
###Output
_____no_output_____
###Markdown
Submission
###Code
probas = pd.DataFrame(log_reg.predict_proba(X_valid), index = valid_labels['damage_grade'].index, columns=['proba_1', 'proba_2', 'proba_3'])
y_predicted = log_reg.predict(X_valid)
probas['predicted_RL_7432'] = y_predicted
probas
probas.to_csv('3_7432_RL_valid_probas.csv')
predicted_df = pd.DataFrame(y_pred.astype(np.int64), index = valid_labels.index, columns=['damage_grade'])
predicted_df.to_csv('mas_cortito.csv')
from google.colab import files
files.download("mas_cortito.csv")
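# --- Added sketch (not in the original notebook): generating a test-set submission.
# Assumes the usual DrivenData format with a building_id index and a damage_grade
# column; the file name below is arbitrary.
test_pred = log_reg.predict(X_test)
submission = pd.DataFrame({'damage_grade': test_pred.astype(np.int64)}, index=test_values.index)
submission.index.name = 'building_id'
submission.to_csv('submission_log_reg.csv')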
###Output
_____no_output_____
|
old_versions/1main.ipynb
|
###Markdown
Network inference of categorical variables: non-sequential data
###Code
import sys
import numpy as np
from scipy import linalg
from sklearn.preprocessing import OneHotEncoder
import matplotlib.pyplot as plt
%matplotlib inline
import inference
import fem
# setting parameter:
np.random.seed(1)
n = 20 # number of positions
m = 3 # number of values at each position
l = int(((n*m)**2)) # number of samples
g = 2.
nm = n*m
def itab(n,m):
i1 = np.zeros(n)
i2 = np.zeros(n)
for i in range(n):
i1[i] = i*m
i2[i] = (i+1)*m
return i1.astype(int),i2.astype(int)
# generate coupling matrix w0:
def generate_interactions(n,m,g):
nm = n*m
w = np.random.normal(0.0,g/np.sqrt(nm),size=(nm,nm))
i1tab,i2tab = itab(n,m)
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
w[i1:i2,:] -= w[i1:i2,:].mean(axis=0)
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
w[i1:i2,i1:i2] = 0. # no self-interactions
for i in range(nm):
for j in range(nm):
if j > i: w[i,j] = w[j,i]
return w
w0 = inference.generate_interactions(n,m,g)
plt.imshow(w0,cmap='rainbow',origin='lower')
plt.clim(-0.5,0.5)
plt.colorbar(fraction=0.045, pad=0.05,ticks=[-0.5,0,0.5])
plt.show()
#print(w0)
def generate_sequences_old(w,n,m,l):
i1tab,i2tab = itab(n,m)
# initial s (categorical variables)
s_ini = np.random.randint(0,m,size=(l,n)) # integer values
#print(s_ini)
# onehot encoder
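# (n_values is the legacy scikit-learn API, deprecated in 0.20 in favour of `categories`; newer versions infer the categories automatically.)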
enc = OneHotEncoder(n_values=m)
s = enc.fit_transform(s_ini).toarray()
#print(s)
ntrial = 2*m
nrepeat = 10*n*m
for t in range(l):
for irepeat in range(nrepeat): # update for entire positions
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
h = np.sum(s[t,:]*w[i1:i2,:],axis=1) # h[i1:i2]
k = np.random.randint(0,m)
for itrial in range(ntrial): # update at each position
k2 = np.random.randint(0,m)
while k2 == k:
k2 = np.random.randint(0,m)
if np.exp(h[k2]- h[k]) > np.random.rand():
k = k2
s[t,i1:i2] = 0.
s[t,i1+k] = 1.
return s
def generate_sequences(w,n,m,l):
i1tab,i2tab = itab(n,m)
# initial s (categorical variables)
s_ini = np.random.randint(0,m,size=(l,n)) # integer values
#print(s_ini)
# onehot encoder
enc = OneHotEncoder(n_values=m)
s = enc.fit_transform(s_ini).toarray()
print(s)
nrepeat = 1000
for irepeat in range(nrepeat):
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
h = s.dot(w[i1:i2,:].T) # h[t,i1:i2]
h_old = (s[:,i1:i2]*h).sum(axis=1) # h[t,i0]
k = np.random.randint(0,m,size=l)
for t in range(l):
if np.exp(h[t,k[t]] - h_old[t]) > np.random.rand():
s[t,i1:i2] = 0.
s[t,i1+k[t]] = 1.
return s
s = generate_sequences(w0,n,m,l)
print(s.shape)
s_inverse = np.argmax(s.reshape(-1,m),axis=1).reshape(-1,n)
print(s_inverse.shape)
y = (s_inverse.T).copy()
print(y.shape)
model = fem.discrete.model()
x1, x2 = y[:, :-1], y[:, 1:]
model.fit(x1, x2)
w_fit_flat = np.hstack([wi for wi in model.w.values()]).flatten()
plt.scatter(w0,w_fit_flat.reshape(nm,nm))
plt.plot([-1.0,1.0],[-1.0,1.0],'r--')
plt.show()
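# Below: a manual fit for a single position (i = 0), estimating its couplings directly from the one-hot samples instead of using the fem model above.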
i1tab,i2tab = itab(n,m)
nloop = 10
nm1 = nm - m
wini = np.random.normal(0.0,1./np.sqrt(nm),size=(nm,nm1))
i = 0
i1,i2 = i1tab[i],i2tab[i]
#x = np.hstack([s[:,:i1],s[:,i2:]])
x = s[:,i2:].copy()
y = s.copy()
# covariance[ia,ib]
cab_inv = np.empty((m,m,nm1,nm1))
eps = np.empty((m,m,l))
for ia in range(m):
for ib in range(m):
if ib != ia:
eps[ia,ib,:] = y[:,i1+ia] - y[:,i1+ib]
which_ab = eps[ia,ib,:] !=0.
xab = x[which_ab]
# ----------------------------
xab_av = np.mean(xab,axis=0)
dxab = xab - xab_av
cab = np.cov(dxab,rowvar=False,bias=True)
cab_inv[ia,ib,:,:] = linalg.pinv(cab,rcond=1e-15)
w = wini[i1:i2,:].copy()
for iloop in range(nloop):
h = np.dot(x,w.T)
for ia in range(m):
wa = np.zeros(nm1)
for ib in range(m):
if ib != ia:
which_ab = eps[ia,ib,:] !=0.
eps_ab = eps[ia,ib,which_ab]
xab = x[which_ab]
# ----------------------------
xab_av = np.mean(xab,axis=0)
dxab = xab - xab_av
h_ab = h[which_ab,ia] - h[which_ab,ib]
ha = np.divide(eps_ab*h_ab,np.tanh(h_ab/2.), out=np.zeros_like(h_ab), where=h_ab!=0)
dhdx = (ha - ha.mean())[:,np.newaxis]*dxab
dhdx_av = dhdx.mean(axis=0)
wab = cab_inv[ia,ib,:,:].dot(dhdx_av) # wa - wb
wa += wab
w[ia,:] = wa/m
plt.scatter(w0[i1:i2,i2:],w)
###Output
_____no_output_____
|
Project/notebooks/0.0 Creating directory (product, link, id).ipynb
|
###Markdown
Extracting the product id from each weblink. This is a necessary step: the ids serve as the reference for web scraping the reviews.
###Code
#Import necessary libraries
import json
import re
import os
import pandas as pd
with open('../data/raw_data/initial.json', 'r') as file:
raw_data= json.load(file)
df=pd.DataFrame.from_dict(raw_data)
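# The product_id is the token that follows 'grid:' in each product weblink.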
df['product_id']=df.weblink.str.partition(sep='grid:')[2]
df.columns
df_links_id=df[['product_name', 'weblink', 'product_id', 'num_reviews']].copy()
with open('../data/processed_data/combined_data.json', 'r') as file:
raw_data= json.load(file)
selected=pd.DataFrame.from_dict(raw_data)
selected_df= pd.merge(selected, df_links_id, on='product_name', how='left')
selected_df.product_id = selected_df.product_id.str.upper()
selected_df.columns
directory= selected_df[['brand', 'product_name']]
directory
datapath_data = os.path.join('../data/raw_data', 'productid_directory.csv')
if not os.path.exists(datapath_data):
directory.to_csv(datapath_data)
datapath_data = os.path.join('../data/raw_data', 'pre_selection.json')
if not os.path.exists(datapath_data):
selected_df.to_json(datapath_data)
datapath_data = os.path.join('../data/raw_data', 'data_links_id.json')
if not os.path.exists(datapath_data):
df_links_id.to_json(datapath_data)
###Output
_____no_output_____
###Markdown
______________________________________________________________________ Creating full list of product_ids scraped
###Code
#Storing each raw json file in a dataframe
with open('../data/raw_data/cleansers_full.json', 'r') as file:
raw_data= json.load(file)
cleansers_raw=pd.DataFrame.from_dict(raw_data)
with open('../data/raw_data/eye_products.json', 'r') as file:
raw_data= json.load(file)
eyeproducts_raw=pd.DataFrame.from_dict(raw_data)
with open('../data/raw_data/moisturizers_full.json', 'r') as file:
raw_data= json.load(file)
moisturizers_raw=pd.DataFrame.from_dict(raw_data)
with open('../data/raw_data/treatments_full.json', 'r') as file:
raw_data= json.load(file)
treatments_raw=pd.DataFrame.from_dict(raw_data)
full_df= pd.concat([cleansers_raw, eyeproducts_raw, moisturizers_raw, treatments_raw])
full_df['product_id']=full_df.weblink.str.partition(sep='grid:')[2]
#Also need to fix number of reviews
full_df.drop(full_df[full_df['num_reviews'].isnull()].index, inplace=True)
#Cleaning num_reviews column
full_df.loc[full_df.num_reviews.str.endswith('K'), 'num_reviews']=full_df.loc[full_df.num_reviews.str.endswith('K'), 'num_reviews'].str.strip('K').astype('float64')*1000
full_df['num_reviews']=full_df['num_reviews'].astype('int64')
full_df.reset_index(inplace=True)
full_df.info()
datapath3 = os.path.join('../data/raw_data', 'preprocessed_full.json')
if not os.path.exists(datapath3):
full_df.to_json(datapath3)
full_df
with open('../data/raw_data/full_reviews.json', 'r') as file:
raw_data= json.load(file)
reviews=pd.DataFrame.from_dict(raw_data)
reviews
###Output
_____no_output_____
|
optimize/OptTesting.ipynb
|
###Markdown
recomend-intake optimize.py testing
###Code
import scipy
import scipy.stats
import matplotlib
import pylab
import seaborn
import pandas
%matplotlib inline
import optimize
from micro_reqs import ALL_MICROS
# RDA and UL vectors
columns, RDA, UL = [], [], []
for n, _, _, r, u in ALL_MICROS:
columns.append(n) #string
RDA.append(r)
UL.append(u)
RDA, UL = scipy.array(RDA), scipy.array(UL)
# Food details and food as nutrient vectors
food_details = list(optimize.read_foods("foods.jsontxt.gz"))
foods = scipy.array([optimize.extract_nutrients(f) for f in food_details])
# Food normalized as percent of RDA
foods_rda = pandas.DataFrame(
foods / RDA, # foods is F x N matrix, RDA is 1 x N vector, division operator does point-wise div per row
columns=columns
)
print(foods_rda[:12][["Vitamin B6", "Vitamin B12", "Magnesium", "Calcium"]])
seaborn.regplot("Magnesium", "Calcium", foods_rda)
rnds = scipy.stats.binom.rvs(1420.0, 5/1420.0, size=10000)
rnds = pandas.DataFrame(rnds, columns=['R'])
seaborn.distplot(rnds)
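# --- Added sketch (hypothetical, not taken from optimize.py): one way to frame the
# intake recommendation as a linear program with scipy.optimize.linprog. Choose
# non-negative serving amounts x (one per food) so that total nutrients reach the
# RDA without exceeding the UL, while minimizing total servings. This assumes the
# RDA/UL vectors are finite; the real optimizer may use a different formulation.
import numpy as np
from scipy.optimize import linprog
A_ub = np.vstack([-foods.T, foods.T])      # -F^T x <= -RDA  and  F^T x <= UL
b_ub = np.concatenate([-RDA, UL])
lp = linprog(c=np.ones(foods.shape[0]), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
# lp.x would then hold the recommended amount of each food (if lp.success is True).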
for food in sorted(food_details, key=lambda f: f['descrip'].lower()):
print(food['descrip'])
###Output
Alfalfa seeds, sprouted, raw
Apples, canned, sweetened, sliced, drained, heated
Apples, canned, sweetened, sliced, drained, unheated
Apples, dried, sulfured, stewed, without added sugar
Apples, dried, sulfured, uncooked
Apples, raw, with skin
Apples, raw, without skin
Apples, raw, without skin, cooked, boiled
Apples, raw, without skin, cooked, microwave
Apricots, canned, water pack, with skin, solids and liquids
Apricots, dried, sulfured, stewed, without added sugar
Apricots, dried, sulfured, uncooked
Apricots, raw
Artichokes, (globe or french), cooked, boiled, drained, without salt
Artichokes, (globe or french), frozen, cooked, boiled, drained, without salt
Arugula, raw
Asparagus, canned, drained solids
Asparagus, cooked, boiled, drained
Asparagus, frozen, cooked, boiled, drained, without salt
Asparagus, raw
Avocados, raw, all commercial varieties
Balsam-pear (bitter gourd), leafy tips, cooked, boiled, drained, without salt
Balsam-pear (bitter gourd), pods, cooked, boiled, drained, without salt
Bamboo shoots, canned, drained solids
Bamboo shoots, raw
Bananas, raw
Barley, pearled, cooked
Barley, pearled, raw
Beans, baked, canned, no salt added
Beans, black turtle, mature seeds, cooked, boiled, without salt
Beans, black, mature seeds, canned, low sodium
Beans, black, mature seeds, raw
Beans, great northern, mature seeds, canned, low sodium
Beans, kidney, all types, mature seeds, cooked, boiled, without salt
Beans, kidney, red, mature seeds, canned, solids and liquids
Beans, kidney, red, mature seeds, cooked, boiled, without salt
Beans, kidney, red, mature seeds, raw
Beans, mature, red kidney, canned, solids and liquid, low sodium
Beans, mung, mature seeds, sprouted, canned, drained solids
Beans, navy, mature seeds, canned
Beans, pink, mature seeds, cooked, boiled, without salt
Beans, pink, mature seeds, raw
Beans, pinto, mature seeds, canned, solids and liquids, low sodium
Beans, pinto, mature seeds, cooked, boiled, without salt
Beans, pinto, mature seeds, raw
Beans, shellie, canned, solids and liquids
Beans, snap, green, canned, no salt added, drained solids
Beans, snap, green, canned, regular pack, drained solids
Beans, snap, green, cooked, boiled, drained, without salt
Beans, snap, green, frozen, cooked, boiled, drained without salt
Beans, snap, green, raw
Beans, snap, yellow, canned, regular pack, drained solids
Beans, snap, yellow, cooked, boiled, drained, without salt
Beans, snap, yellow, frozen, cooked, boiled, drained, without salt
Beans, white, mature seeds, cooked, boiled, without salt
Beans, white, mature seeds, raw
Beet greens, cooked, boiled, drained, without salt
Beet greens, raw
Beets, canned, drained solids
Beets, canned, no salt added, solids and liquids
Beets, cooked, boiled, drained
Beets, pickled, canned, solids and liquids
Beets, raw
Blackberries, frozen, unsweetened
Blackberries, raw
Blueberries, dried, sweetened
Blueberries, frozen, sweetened
Blueberries, frozen, unsweetened
Blueberries, raw
Boysenberries, frozen, unsweetened
Breadfruit, raw
Broadbeans (fava beans), mature seeds, cooked, boiled, without salt
Broadbeans (fava beans), mature seeds, raw
Broccoli raab, cooked
Broccoli raab, raw
Broccoli, chinese, cooked
Broccoli, cooked, boiled, drained, without salt
Broccoli, frozen, chopped, cooked, boiled, drained, without salt
Broccoli, frozen, chopped, unprepared
Broccoli, frozen, spears, cooked, boiled, drained, without salt
Broccoli, raw
Brussels sprouts, cooked, boiled, drained, without salt
Brussels sprouts, frozen, cooked, boiled, drained, without salt
Brussels sprouts, raw
Burdock root, cooked, boiled, drained, without salt
Burdock root, raw
Cabbage, chinese (pak-choi), cooked, boiled, drained, without salt
Cabbage, chinese (pak-choi), raw
Cabbage, chinese (pe-tsai), raw
Cabbage, cooked, boiled, drained, without salt
Cabbage, japanese style, fresh, pickled
Cabbage, mustard, salted
Cabbage, raw
Cabbage, red, cooked, boiled, drained, without salt
Cabbage, red, raw
Cabbage, savoy, raw
Carambola, (starfruit), raw
Carrots, canned, no salt added, solids and liquids
Carrots, canned, regular pack, drained solids
Carrots, cooked, boiled, drained, without salt
Carrots, frozen, cooked, boiled, drained, without salt
Carrots, frozen, unprepared
Carrots, raw
Cassava, raw
Cauliflower, cooked, boiled, drained, without salt
Cauliflower, frozen, cooked, boiled, drained, without salt
Cauliflower, frozen, unprepared
Cauliflower, green, raw
Cauliflower, raw
Celeriac, raw
Celery, cooked, boiled, drained, without salt
Celery, raw
Cereals ready-to-eat, KASHI GO LEAN CRUNCH!, Honey Almond Flax
Cereals ready-to-eat, KASHI GOLEAN
Cereals ready-to-eat, KASHI GOLEAN CRUNCH!
Cereals ready-to-eat, KASHI GOOD FRIENDS
Cereals ready-to-eat, KASHI HEART TO HEART, Honey Toasted Oat
Cereals ready-to-eat, KASHI Honey Sunshine
Cereals ready-to-eat, KASHI, HEART TO HEART, Oat Flakes & Blueberry Clusters
Chard, swiss, cooked, boiled, drained, without salt
Chard, swiss, raw
Chayote, fruit, raw
Cherries, sour, red, canned, water pack, solids and liquids (includes USDA commodity red tart cherries, canned)
Cherries, sour, red, frozen, unsweetened
Cherries, sour, red, raw
Cherries, tart, dried, sweetened
Chickpeas (garbanzo beans, bengal gram), mature seeds, canned, solids and liquids, low sodium
Chickpeas (garbanzo beans, bengal gram), mature seeds, cooked, boiled, without salt
Chickpeas (garbanzo beans, bengal gram), mature seeds, raw
Chicory greens, raw
Chives, raw
Chrysanthemum, garland, cooked, boiled, drained, without salt
Collards, cooked, boiled, drained, without salt
Collards, frozen, chopped, cooked, boiled, drained, without salt
Collards, raw
Corn bran, crude
Cowpeas (blackeyes), immature seeds, cooked, boiled, drained, without salt
Cowpeas (blackeyes), immature seeds, frozen, cooked, boiled, drained, without salt
Cowpeas, common (blackeyes, crowder, southern), mature seeds, cooked, boiled, without salt
Cowpeas, common (blackeyes, crowder, southern), mature seeds, raw
Cranberries, dried, sweetened
Cranberries, raw
Cranberry sauce, canned, sweetened
Cress, garden, cooked, boiled, drained, without salt
Cress, garden, raw
Cucumber, peeled, raw
Cucumber, with peel, raw
Currants, red and white, raw
Currants, zante, dried
Dandelion greens, cooked, boiled, drained, without salt
Dandelion greens, raw
Dates, deglet noor
Drumstick leaves, cooked, boiled, drained, without salt
Eggplant, cooked, boiled, drained, without salt
Eggplant, pickled
Eggplant, raw
Endive, raw
Fennel, bulb, raw
Figs, canned, water pack, solids and liquids
Figs, dried, stewed
Figs, dried, uncooked
Figs, raw
Garlic, raw
Ginger root, raw
Grape leaves, raw
Grapefruit, raw, pink and red and white, all areas
Grapefruit, raw, white, all areas
Grapefruit, sections, canned, water pack, solids and liquids
Grapes, american type (slip skin), raw
Grapes, canned, thompson seedless, water pack, solids and liquids
Grapes, red or green (European type, such as Thompson seedless), raw
Guava sauce, cooked
Guavas, common, raw
Hearts of palm, raw
Jerusalem-artichokes, raw
Jute, potherb, cooked, boiled, drained, without salt
Kale, cooked, boiled, drained, without salt
Kale, frozen, cooked, boiled, drained, without salt
Kiwifruit, green, raw
Kohlrabi, cooked, boiled, drained, without salt
Kohlrabi, raw
Kumquats, raw
Leeks, (bulb and lower leaf-portion), raw
Lemon peel, raw
Lemons, raw, without peel
Lentils, mature seeds, cooked, boiled, without salt
Lentils, raw
Lettuce, butterhead (includes boston and bibb types), raw
Lettuce, cos or romaine, raw
Lettuce, green leaf, raw
Lettuce, iceberg (includes crisphead types), raw
Lima beans, immature seeds, canned, no salt added, solids and liquids
Lima beans, immature seeds, cooked, boiled, drained, without salt
Lima beans, immature seeds, frozen, baby, cooked, boiled, drained, without salt
Lima beans, immature seeds, frozen, fordhook, cooked, boiled, drained, without salt
Lima beans, immature seeds, frozen, fordhook, unprepared
Lima beans, immature seeds, raw
Lima beans, large, mature seeds, cooked, boiled, without salt
Lima beans, large, mature seeds, raw
Limes, raw
Litchis, dried
Litchis, raw
Loganberries, frozen
Lotus root, cooked, boiled, drained, without salt
Mangos, raw
Maraschino cherries, canned, drained
Melons, cantaloupe, raw
Melons, casaba, raw
Melons, honeydew, raw
Miso
Mulberries, raw
Mung beans, mature seeds, cooked, boiled, without salt
Mung beans, mature seeds, raw
Mung beans, mature seeds, sprouted, cooked, boiled, drained, with salt
Mung beans, mature seeds, sprouted, cooked, boiled, drained, without salt
Mung beans, mature seeds, sprouted, raw
Mungo beans, mature seeds, cooked, boiled, without salt
Mushrooms, canned, drained solids
Mushrooms, shiitake, cooked, without salt
Mushrooms, shiitake, dried
Mushrooms, white, cooked, boiled, drained, without salt
Mushrooms, white, raw
Mustard greens, cooked, boiled, drained, without salt
Mustard greens, frozen, cooked, boiled, drained, without salt
Mustard greens, raw
Natto
Nopales, cooked, without salt
Nopales, raw
Nuts, almond butter, plain, with salt added
Nuts, almond paste
Nuts, almonds
Nuts, almonds, blanched
Nuts, almonds, dry roasted, with salt added
Nuts, almonds, dry roasted, without salt added
Nuts, almonds, oil roasted, with salt added
Nuts, almonds, oil roasted, without salt added
Nuts, brazilnuts, dried, unblanched
Nuts, cashew butter, plain, with salt added
Nuts, cashew nuts, dry roasted, with salt added
Nuts, cashew nuts, dry roasted, without salt added
Nuts, cashew nuts, oil roasted, with salt added
Nuts, cashew nuts, oil roasted, without salt added
Nuts, chestnuts, european, roasted
Nuts, coconut cream, canned, sweetened
Nuts, coconut meat, dried (desiccated), not sweetened
Nuts, coconut meat, dried (desiccated), sweetened, flaked, packaged
Nuts, coconut meat, dried (desiccated), sweetened, shredded
Nuts, coconut meat, raw
Nuts, coconut milk, raw (liquid expressed from grated meat and water)
Nuts, hazelnuts or filberts
Nuts, macadamia nuts, dry roasted, with salt added
Nuts, mixed nuts, dry roasted, with peanuts, with salt added
Nuts, mixed nuts, oil roasted, with peanuts, with salt added
Nuts, mixed nuts, oil roasted, without peanuts, with salt added
Nuts, pecans
Nuts, pine nuts, dried
Nuts, pistachio nuts, dry roasted, with salt added
Nuts, pistachio nuts, dry roasted, without salt added
Nuts, walnuts, black, dried
Nuts, walnuts, english
Oat bran, raw
Okra, cooked, boiled, drained, without salt
Okra, frozen, cooked, boiled, drained, without salt
Okra, frozen, unprepared
Okra, raw
Olives, pickled, canned or bottled, green
Olives, ripe, canned (jumbo-super colossal)
Olives, ripe, canned (small-extra large)
Onions, cooked, boiled, drained, with salt
Onions, cooked, boiled, drained, without salt
Onions, frozen, chopped, cooked, boiled, drained, without salt
Onions, frozen, whole, cooked, boiled, drained, without salt
Onions, frozen, whole, unprepared
Onions, raw
Onions, spring or scallions (includes tops and bulb), raw
Onions, young green, tops only
Oranges, raw, all commercial varieties
Papad
Papayas, raw
Parsley, fresh
Parsnips, cooked, boiled, drained, without salt
Passion-fruit, (granadilla), purple, raw
Peaches, canned, water pack, solids and liquids
Peaches, dried, sulfured, stewed, without added sugar
Peaches, dried, sulfured, uncooked
Peaches, frozen, sliced, sweetened
Peaches, raw
Peanut butter, chunk style, without salt
Peanut butter, chunky, vitamin and mineral fortified
Peanut butter, reduced sodium
Peanut butter, smooth style, with salt
Peanut butter, smooth style, without salt
Peanut butter, smooth, reduced fat
Peanut butter, smooth, vitamin and mineral fortified
Peanuts, all types, cooked, boiled, with salt
Peanuts, all types, dry-roasted, with salt
Peanuts, all types, dry-roasted, without salt
Peanuts, all types, raw
Pears, asian, raw
Pears, canned, water pack, solids and liquids
Pears, dried, sulfured, stewed, without added sugar
Pears, dried, sulfured, uncooked
Pears, raw
Peas and carrots, canned, no salt added, solids and liquids
Peas and carrots, frozen, cooked, boiled, drained, without salt
Peas and onions, frozen, cooked, boiled, drained, without salt
Peas, edible-podded, boiled, drained, without salt
Peas, edible-podded, frozen, cooked, boiled, drained, without salt
Peas, edible-podded, raw
Peas, green (includes baby and lesuer types), canned, drained solids, unprepared
Peas, green, canned, no salt added, solids and liquids
Peas, green, cooked, boiled, drained, without salt
Peas, green, frozen, cooked, boiled, drained, without salt
Peas, green, frozen, unprepared
Peas, green, raw
Peas, green, split, mature seeds, raw
Peas, split, mature seeds, cooked, boiled, without salt
Pepper, banana, raw
Peppers, hot chile, sun-dried
Peppers, jalapeno, raw
Peppers, serrano, raw
Persimmons, japanese, raw
Pickles, cucumber, dill or kosher dill
Pickles, cucumber, sour
Pickles, cucumber, sour, low sodium
Pickles, cucumber, sweet (includes bread and butter pickles)
Pigeonpeas, immature seeds, cooked, boiled, drained, without salt
Pigeonpeas, immature seeds, raw
Pimento, canned
Pineapple, canned, water pack, solids and liquids
Pineapple, frozen, chunks, sweetened
Pineapple, raw, all varieties
Plantains, cooked
Plantains, raw
Plums, canned, purple, water pack, solids and liquids
Plums, dried (prunes), stewed, without added sugar
Plums, dried (prunes), uncooked
Plums, raw
Poi
Pokeberry shoots, (poke), cooked, boiled, drained, without salt
Pomegranates, raw
Potatoes, baked, flesh and skin, without salt
Potatoes, baked, flesh, without salt
Potatoes, baked, skin, without salt
Potatoes, boiled, cooked in skin, flesh, with salt
Potatoes, boiled, cooked in skin, flesh, without salt
Potatoes, boiled, cooked without skin, flesh, with salt
Potatoes, boiled, cooked without skin, flesh, without salt
Potatoes, canned, drained solids, no salt added
Potatoes, flesh and skin, raw
Potatoes, frozen, whole, unprepared
Pumpkin flowers, cooked, boiled, drained, without salt
Pumpkin leaves, cooked, boiled, drained, without salt
Pumpkin, canned, without salt
Pumpkin, cooked, boiled, drained, without salt
Pumpkin, raw
Quinoa, cooked
Radicchio, raw
Radishes, hawaiian style, pickled
Radishes, oriental, cooked, boiled, drained, without salt
Radishes, oriental, raw
Radishes, raw
Raisins, golden seedless
Raisins, seedless
Raspberries, frozen, red, sweetened
Raspberries, raw
Rhubarb, frozen, cooked, with sugar
Rhubarb, frozen, uncooked
Rhubarb, raw
Rutabagas, cooked, boiled, drained, without salt
Rutabagas, raw
Rye grain
Salsify, cooked, boiled, drained, without salt
Sauerkraut, canned, low sodium
Sauerkraut, canned, solids and liquids
Seaweed, agar, dried
Seaweed, agar, raw
Seaweed, irishmoss, raw
Seaweed, kelp, raw
Seaweed, laver, raw
Seaweed, spirulina, dried
Seaweed, wakame, raw
Seeds, flaxseed
Seeds, pumpkin and squash seed kernels, dried
Seeds, pumpkin and squash seed kernels, roasted, with salt added
Seeds, pumpkin and squash seed kernels, roasted, without salt
Seeds, sesame butter, tahini, from roasted and toasted kernels (most common type)
Seeds, sesame seed kernels, dried (decorticated)
Seeds, sesame seed kernels, toasted, with salt added (decorticated)
Seeds, sesame seed kernels, toasted, without salt added (decorticated)
Seeds, sesame seeds, whole, dried
Seeds, sunflower seed kernels, dried
Seeds, sunflower seed kernels, dry roasted, with salt added
Seeds, sunflower seed kernels, dry roasted, without salt
Seeds, sunflower seed kernels, oil roasted, with salt added
Seeds, sunflower seed kernels, oil roasted, without salt
Soursop, raw
Soy protein isolate
Soy sauce made from hydrolyzed vegetable protein
Soy sauce made from soy (tamari)
Soybean, curd cheese
Soybeans, mature cooked, boiled, without salt
Soybeans, mature seeds, roasted, salted
Soybeans, mature seeds, sprouted, cooked, steamed
Spinach, canned, regular pack, drained solids
Spinach, cooked, boiled, drained, without salt
Spinach, frozen, chopped or leaf, cooked, boiled, drained, without salt
Spinach, frozen, chopped or leaf, unprepared
Spinach, raw
Squash, summer, all varieties, cooked, boiled, drained, without salt
Squash, summer, all varieties, raw
Squash, summer, crookneck and straightneck, canned, drained, solid, without salt
Squash, summer, crookneck and straightneck, cooked, boiled, drained, without salt
Squash, summer, crookneck and straightneck, frozen, cooked, boiled, drained, without salt
Squash, summer, scallop, cooked, boiled, drained, without salt
Squash, summer, zucchini, includes skin, cooked, boiled, drained, without salt
Squash, summer, zucchini, includes skin, frozen, cooked, boiled, drained, without salt
Squash, summer, zucchini, includes skin, raw
Squash, winter, all varieties, cooked, baked, without salt
Squash, winter, all varieties, raw
Strawberries, frozen, sweetened, sliced
Strawberries, frozen, sweetened, whole
Strawberries, frozen, unsweetened
Strawberries, raw
Sweet potato leaves, cooked, steamed, without salt
Sweet potato, canned, vacuum pack
Sweet potato, cooked, baked in skin, flesh, without salt
Sweet potato, cooked, boiled, without skin
Sweet potato, raw, unprepared
Tamarinds, raw
Tangerines, (mandarin oranges), raw
Tofu yogurt
Tofu, soft, prepared with calcium sulfate and magnesium chloride (nigari)
Tomatillos, raw
Tomatoes, green, raw
Tomatoes, red, ripe, canned, stewed
Tomatoes, red, ripe, cooked
Tomatoes, red, ripe, raw, year round average
Tomatoes, sun-dried
Turnip greens and turnips, frozen, cooked, boiled, drained, without salt
Turnip greens, canned, no salt added
Turnip greens, cooked, boiled, drained, without salt
Turnip greens, frozen, cooked, boiled, drained, without salt
Turnips, cooked, boiled, drained, without salt
Turnips, frozen, cooked, boiled, drained, without salt
Turnips, raw
Waterchestnuts, chinese, (matai), raw
Waterchestnuts, chinese, canned, solids and liquids
Watercress, raw
Watermelon, raw
Waxgourd, (chinese preserving melon), cooked, boiled, drained, without salt
Yam, cooked, boiled, drained, or baked, with salt
Yam, cooked, boiled, drained, or baked, without salt
Yam, raw
Yambean (jicama), raw
|
src/literary/notebook/finder.ipynb
|
###Markdown
Notebook Finder
###Code
%load_ext literary.module
import os
import pathlib
import sys
import traceback
import typing as tp
from importlib.machinery import FileFinder
from inspect import getclosurevars
from traitlets import Bool, Type
from ..core.exporter import LiteraryExporter
from ..core.project import ProjectOperator
T = tp.TypeVar("T")
def _get_loader_details(hook) -> tuple:
"""Return the loader_details for a given FileFinder closure
:param hook: FileFinder closure
:returns: loader_details tuple
"""
try:
namespace = getclosurevars(hook)
except TypeError as err:
raise ValueError from err
try:
return namespace.nonlocals["loader_details"]
except KeyError as err:
raise ValueError from err
def _find_file_finder(path_hooks: list) -> tp.Tuple[int, tp.Any]:
"""Find the FileFinder closure in a list of path hooks
:param path_hooks: path hooks
:returns: index of hook and the hook itself
"""
for i, hook in enumerate(path_hooks):
try:
_get_loader_details(hook)
except ValueError:
continue
return i, hook
raise ValueError
def _extend_file_finder(finder: T, *loader_details) -> T:
"""Extend an existing file finder with new loader details
:param finder: existing FileFinder instance
:param loader_details:
:return:
"""
return FileFinder.path_hook(*_get_loader_details(finder), *loader_details)
def inject_loaders(
path_hooks: list, *loader_details
):
"""Inject a set of loaders into a list of path hooks
:param path_hooks: list of path hooks
:param loader_details: FileFinder loader details
:return:
"""
i, finder = _find_file_finder(path_hooks)
new_finder = _extend_file_finder(finder, *loader_details)
path_hooks[i] = new_finder
# To fix cached path finders
sys.path_importer_cache.clear()
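# Hypothetical usage sketch (not part of this module): each loader_details tuple takes the
# same (Loader, [suffixes]) form that FileFinder.path_hook expects. SourceFileLoader is used
# here only as a stand-in for the package's notebook loader, and the call is left commented
# out so it does not execute as part of the module.
# from importlib.machinery import SourceFileLoader
# inject_loaders(sys.path_hooks, (SourceFileLoader, [".py"]))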
###Output
_____no_output_____
|
docs/Action Sequence Graph.ipynb
|
###Markdown
Action Sequence Graph Tutorial This tutorial covers the use of the Action Sequence Graph in the FxnBlock class, which is useful for representing a Function's progress through a sequence of actions (e.g., modes of operation).
###Code
# for use in development - makes sure git version is used instead of pip-installed version
import sys, os
sys.path.insert(1,os.path.join(".."))
from fmdtools.modeldef import *
import fmdtools.resultdisp as rd
import fmdtools.faultsim.propagate as prop
###Output
_____no_output_____
###Markdown
Action sequence graphs are used within a function block to represent the actions that the function performs and their sequence. Actions in an ASG are represented by `Action` blocks, which are similar to function and component blocks in that they have flow connections, modes, and behaviors. Flow connections are routed to the action in the function block definition and represent the *shared variables* between the actions. Modes are similar to function modes and are instantiated (as they are in components) at both the Function and Action level. Using the `name=` option enables one to tag these modes as action modes at the function level while using the same local name. Below we define three actions for use in a given model:
- Perceive, the user's perception abilities/behaviors. In this action the user perceives the hazard (unless their perception fails).
- Act, the action the user performs to mitigate the hazard.
- Done, the user's state when they are done performing the action.
###Code
class Perceive(Action):
def __init__(self, name, flows):
super().__init__(name,flows)
self.assoc_modes({'failed'}, exclusive=True, name="perceive_")
def behavior(self,time):
if not self.in_mode('failed'):
self.Hazard.percieved = self.Hazard.present
self.Outcome.num_perceptions+=self.Hazard.percieved
else: self.Hazard.percieved = False; self.remove_fault('failed', 'nom')
def percieved(self):
return self.Hazard.percieved
class Act(Action):
def __init__(self, name, flows):
super().__init__(name,flows)
self.assoc_modes({'failed', 'unable'}, exclusive=True, name="act_")
def behavior(self,time):
if not self.in_mode('failed', 'unable'):
self.Outcome.num_actions+=1
self.Hazard.mitigated=True
elif self.in_mode('failed'):
self.Hazard.mitigated=False; self.remove_fault('failed', 'nom')
else: self.Hazard.mitigated=False
def acted(self):
return not self.in_mode('failed')
class Done(Action):
def __init__(self, name, flows):
super().__init__(name,flows)
def behavior(self,time):
if not self.Hazard.present: self.Hazard.mitigated=False
def ready(self):
return not self.Hazard.present
###Output
_____no_output_____
###Markdown
To proceed through the sequence of actions, *conditions* must be met between each action. For these actions, we have defined the following conditions:
- Perceive.percieved: perception is done if the hazard is perceived
- Act.acted: the action is complete if the action was performed
- Done.ready: the hazard is no longer present, so the mitigation is over (and the mitigated state is reset to False)
To create the overall ASG structure, the following adds the flows, actions, and conditions to the function block.
###Code
class DetectHazard(FxnBlock):
def __init__(self,name, flows):
super().__init__(name, flows)
self.add_flow('Outcome', {'num_perceptions':0, 'num_actions':0})
self.add_act("Perceive", Perceive, self.Outcome, self.Hazard)
self.add_act("Act", Act, self.Outcome, self.Hazard)
self.add_act("Done", Done, self.Outcome, self.Hazard)
self.add_cond("Perceive","Act", "Percieved", self.Perceive.percieved)
self.add_cond("Act","Done", "Acted", self.Act.acted)
self.add_cond("Done", "Perceive", "Ready", self.Done.ready)
self.assoc_modes(exclusive=True)
self.build_ASG(initial_action="Perceive", asg_proptype='dynamic')
###Output
_____no_output_____
###Markdown
Note the use of the following methods:- add_flow adds an *internal flow*--a flow used to connect actions that does not leave the function. Here *Outcome* is an internal flow, while *Hazard* is an external flow.
###Code
help(FxnBlock.add_flow)
###Output
Help on function add_flow in module fmdtools.modeldef:
add_flow(self, flowname, flowdict={}, flowtype='')
Adds a flow with given attributes to the Function Block
Parameters
----------
flowname : str
Unique flow name to give the flow in the function
flowattributes : dict, Flow, set or empty set
Dictionary of flow attributes e.g. {'value':XX}, or the Flow object.
If a set of attribute names is provided, each will be given a value of 1
If an empty set is given, it will be represented w- {flowname: 1}
###Markdown
- add_act adds the action to the function and hands it the given flows and parameters. Here the actions are "Perceive", "Act", and "Done"
###Code
help(FxnBlock.add_act)
###Output
Help on function add_act in module fmdtools.modeldef:
add_act(self, name, action, *flows, duration=0.0, **params)
Associate an Action with the Function Block for use in the Action Sequence Graph
Parameters
----------
name : str
Internal Name for the Action
action : Action
Action class to instantiate
*flows : flow
Flows (optional) which connect the actions
**params : any
parameters to instantiate the Action with.
###Markdown
- add_cond specifies the conditions for going from one action to another.
###Code
help(FxnBlock.add_cond)
###Output
Help on function add_cond in module fmdtools.modeldef:
add_cond(self, start_action, end_action, name='auto', condition='pass')
Associates a Condition with the Function Block for use in the Action Sequence Graph
Parameters
----------
start_action : str
Action where the condition is checked
end_action : str
Action that the condition leads to.
name : str
Name for the condition. Defaults to numbered conditions if none are provided.
condition : method
Method in the class to use as a condition. Defaults to self.condition_pass if none are provided
###Markdown
- build_ASG finally constructs the structure of the ASG (see: self.action_graph and self.flow_graph) and determines the settings for the simulation. In DetectHazard, default options are used, with the first action specified as "Perceive" and the actions specified to propagate in the dynamic step (rather than the static step)
###Code
help(FxnBlock.build_ASG)
###Output
Help on function build_ASG in module fmdtools.modeldef:
build_ASG(self, initial_action='auto', state_rep='finite-state', max_action_prop='until_false', mode_rep='replace', asg_proptype='dynamic', per_timestep=False, asg_pos={})
Constructs the Action Sequence Graph with the given parameters.
Parameters
----------
initial_action : str/list
Initial action to set as active. Default is 'auto'
- 'auto' finds the starting node of the graph and uses it
- 'ActionName' sets the given action as the first active action
- providing a list of actions will set them all to active (if multi-state rep is used)
state_rep : 'finite-state'/'multi-state'
How the states of the system are represented. Default is 'finite-state'
- 'finite-state' means only one action in the system can be active at once (i.e., a finite state machine)
- 'multi-state' means multiple actions can be performed at once
max_action_prop : 'until_false'/'manual'/int
How actions progress. Default is 'until_false'
- 'until_false' means actions are simulated until all outgoing conditions are false
- providing an integer places a limit on the number of actions that can be performed per timestep
mode_rep : 'replace'/'independent'
How actions are used to represent modes. Default is 'replace.'
- 'replace' uses the actions to represent the operational modes of the system (only compatible with 'exclusive' representation)
- 'independent' keeps the actions and function-level mode seperate
asg_proptype : 'static'/'dynamic'/'manual'
Which propagation step to execute the Action Sequence Graph in. Default is 'dynamic'
- 'manual' means that the propagation is performed manually (defined in a behavior method)
per_timestep : bool
Defines whether the action sequence graph is reset to the initial state each time-step (True) or stays in the current action (False). Default is False
asg_pos : dict, optional
Positions of the nodes of the action/flow graph {node: [x,y]}. Default is {}
###Markdown
Below we first instantiate the function to show how it (and the ASG) simulates on its own.
###Code
Hazard = Flow({"present":False, "percieved":False, "mitigated":False},'Hazard')
ex_fxn = DetectHazard('DetectHazard', [Hazard])
###Output
_____no_output_____
###Markdown
We can now view the ASG using show_ASG():
###Code
help(FxnBlock.show_ASG)
fig = ex_fxn.show_ASG()
#%matplotlib qt
#rd.graph.set_pos(ex_fxn, 'combined')
#%matplotlib inline
###Output
_____no_output_____
###Markdown
As shown, the "Percieve" action is active (green), while the inactive actions are shown in blue. This action is active because it is the initial action here.If we update the action, we can see the ASG progress between states:
###Code
ex_fxn.Hazard.present=True
ex_fxn.updatefxn('dynamic', time= 1)
fig = ex_fxn.show_ASG()
ex_fxn.Hazard
ex_fxn.Outcome
###Output
_____no_output_____
###Markdown
As shown, each of the actions is progressed through in a single timestep until the ASG reaches the "Done" action.
###Code
ex_fxn.Hazard.present= False
ex_fxn.updatefxn('dynamic', time= 2)
fig = ex_fxn.show_ASG()
ex_fxn.Outcome
ex_fxn.Hazard
###Output
_____no_output_____
###Markdown
As shown, now that the hazard is no longer present, the "Ready" condition is triggered and the ASG goes back to the "Percieve" action. Below, this function is placed in a model so we can see how it behaves in the context of a full simulation.
###Code
class ProduceHazard(FxnBlock):
def __init__(self,name, flows):
super().__init__(name, flows)
def dynamic_behavior(self,time):
if not time%4: self.Hazard.present=True
else: self.Hazard.present=False
class PassHazard(FxnBlock):
def __init__(self,name, flows):
super().__init__(name, flows, states={'hazards_mitigated':0, 'hazards_propagated':0})
def dynamic_behavior(self,time):
if self.Hazard.present and self.Hazard.mitigated: self.hazards_mitigated+=1
elif self.Hazard.present and not self.Hazard.mitigated: self.hazards_propagated+=1
class HazardModel(Model):
def __init__(self, params={}, modelparams={'times':[0,60], 'tstep':1}, valparams={}):
super().__init__(params,modelparams,valparams)
self.add_flow("Hazard", {"present":False, "percieved":False, "mitigated":False})
self.add_fxn("ProduceHazard", ['Hazard'], ProduceHazard)
self.add_fxn("DetectHazard",['Hazard'], DetectHazard)
self.add_fxn("PassHazard", ['Hazard'], PassHazard)
self.build_model()
mdl = HazardModel()
endstate, resgraph, mdlhist = prop.nominal(mdl)
###Output
_____no_output_____
###Markdown
Below we look at the states of the functions/flows to see how this model simulated.
###Code
restab = rd.tabulate.hist(mdlhist)
restab['DetectHazard']
###Output
_____no_output_____
###Markdown
As shown, the ASG alternates between the "Percieve" action (when the hazard is not present) and the "Done" action (when the hazard is present).
###Code
restab['Hazard']
###Output
_____no_output_____
###Markdown
As a result, all of the present hazards (above) are also perceived and mitigated.
###Code
restab['PassHazard']
###Output
_____no_output_____
###Markdown
And as a result no hazards are propagated. Or, in plot form:
###Code
fig, axs = rd.plot.mdlhists(mdlhist, fxnflowvals={'DetectHazard':'Outcome', 'PassHazard':'all'}, figsize=(10,5))
###Output
_____no_output_____
###Markdown
As shown, perceptions and actions track the hazards mitigated.
###Code
fig, axs = rd.plot.mdlhists(mdlhist, fxnflowvals={'Hazard':'all', 'DetectHazard':'mode'}, figsize=(10,5))
###Output
_____no_output_____
###Markdown
And the mode tracks the presence of the hazard. ASGs can also be viewed using the `resultdisp.graph` module. Below we will simulate a fault and see how it tracks in the model.
###Code
endstate_fault, resgraph_fault, mdlhist_fault = prop.one_fault(mdl, 'DetectHazard','perceive_failed', time=4)
###Output
_____no_output_____
###Markdown
As shown, this fault results in the hazard not being perceived (and thus the hazard propagating)
###Code
fig, axs = rd.plot.mdlhists(mdlhist_fault, fxnflowvals={'Hazard':'all', 'DetectHazard':'mode'}, figsize=(10,5), time_slice=[4])
fig, axs = rd.plot.mdlhists(mdlhist_fault, fxnflowvals={'DetectHazard':'Outcome', 'PassHazard':'all'}, figsize=(10,5), time_slice=[4])
###Output
_____no_output_____
###Markdown
As shown, this only shows up in the PassHazard function (since the fault is removed in one timestep).
###Code
fig = rd.graph.show(resgraph_fault)
###Output
_____no_output_____
###Markdown
To see this in more detail, we will process the results history and then use `graph.result_from` at the time of the fault.
###Code
reshist, diff, summary = rd.process.hist(mdlhist_fault)
rd.tabulate.hist(reshist)
###Output
_____no_output_____
###Markdown
Below, we show the state of the model at the given time.
###Code
fig = rd.graph.result_from(mdl, reshist, 4)
###Output
_____no_output_____
###Markdown
We can also view the state of the function's ASG itself (here via `result_from` with `gtype='combined'`). See below:
###Code
fig = rd.graph.result_from(mdl.fxns['DetectHazard'], reshist, 4, gtype='combined')
###Output
_____no_output_____
###Markdown
Note the lack of a fault at this time-step, despite it being instantiated here. This is because the fault was removed at the end of the same time-step it was added in. The 'act_unable' fault, on the other hand, stays throughout the simulation and thus shows up:
###Code
endstate_unable, resgraph_unable, mdlhist_unable = prop.one_fault(mdl, 'DetectHazard','act_unable', time=4)
reshist_unable, diff_unable, summary_unable = rd.process.hist(mdlhist_unable)
fig, axs = rd.plot.mdlhists(mdlhist_unable, fxnflowvals={'Hazard':'all', 'DetectHazard':'mode'}, figsize=(10,5), time_slice=[4])
fig = rd.graph.result_from(mdl, reshist_unable, 4)
fig = rd.graph.result_from(mdl.fxns['DetectHazard'], reshist_unable, 4, gtype='combined')
fig = rd.graph.result_from(mdl.fxns['DetectHazard'], reshist_unable, 6, gtype='combined')
###Output
_____no_output_____
|
nbextensions/usability/highlighter/export_highlights.ipynb
|
###Markdown
Exporting the notebookAs suggested by @juhasch, it is interesting to keep the highlights when exporting the notebook to another format. We give and explain below some possibilities: Short version- Html export:```bash jupyter nbconvert FILE --config JUPYTER_DATA_DIR/extensions/highlight_html_cfg.py ```- LaTeX export:```bash jupyter nbconvert FILE --config JUPYTER_DATA_DIR/extensions/highlight_latex_cfg.py ```where JUPYTER_DATA_DIR can be found from the output of```bash jupyter --paths```eg `~/.local/share/jupyter` in my case. Seems to be `c:\users\NAME\AppData\Roaming\jupyter` under Windows. Examples can be found here: [initial notebook](tst_highlights.ipynb), [html version](tst_highlights.html), [pdf version](tst_highlights.pdf) (after an additional LaTeX $\rightarrow$ pdf compilation). Html exportThis is quite easy. Actually, highlight formatting embedded in markdown cells is preserved while converting with the standard```bashjupyter nbconvert file.ipynb```However, the css file is missing and must be added. Here we have several possibilities- Embed the css *within* the notebook. For that, consider the last cell of the present notebook. This code reads the css file `highlighter.css` in the extension directory and displays the corresponding style. The resulting `<style> ... </style>` section will then be present in the cell output and interpreted by the web browser. Drawbacks of this solution are that the user still has to execute this cell and that it is not language agnostic. - Use a **template file** to link or include the css file during conversion. Such a file is provided as `templates/highlighter.tpl`. It was chosen here to *include* the css content in the produced html file rather than linking it. This avoids the necessity to keep the css file with the html files. - This works directly if the css resides in the same directory as the file the user is attempting to convert --thus it requires the user to copy `highlighter.css` into the current directory. Then the conversion is simply ```bash jupyter nbconvert file.ipynb --template highlighter```- Two problems remain with this approach. First, it can be annoying to have to systematically copy the css file into the current directory. Second, the data within the html tags is not converted (and thus markdown remains unmodified). A solution is to use a pair of preprocessor/postprocessor that modify the html tags and enable the subsequent markdown to html converter to operate on the included data. Also, a config file is provided which redefines the template path to enable direct inclusion of the css file in the extension directory. Unfortunately, it seems that the *full path* to the config file has to be provided. This file resides in the extensions subdirectory of the jupyter_data_dir. The path can be found by looking at the output of```bash jupyter --paths```Then the command to issue for converting the notebook to html is```bash jupyter nbconvert FILE --config JUPYTER_DATA_DIR/extensions/highlight_html_cfg.py ```For instance```bashjupyter nbconvert tst_highlights.ipynb --config ~/.local/share/jupyter/extensions/highlight_html_cfg.py ``` LaTeX exportThis is a bit more complicated since the direct conversion removes all html formatting present in markdown cells. Thus we again use a **preprocessor** which runs before the markdown $\rightarrow$ LaTeX conversion. In turn, it appears that we also need to postprocess the result. Three LaTeX commands, namely *highlighta, highlightb, highlightc*, and three environments *highlightA, highlightB, highlightC* are defined. 
Highlighting html markup is then transformed into the corresponding LaTeX commands and the text for completely highlighted cells is put in the adequate LaTeX environment. Pre and PostProcessor classes are defined in the file `pp_highlighter.py` located in the `extensions` directory. A LaTeX template, which includes the necessary packages and the definitions of commands/environments, is provided as `highlighter.tplx` in the template directory. The template inherits from `article.ltx`. For more complex scenarios, typically if the latex template file has been customized, the user should modify their own template or inherit from their base template rather than from article. Finally, a config file fixes the different options for the conversion. Then the command to issue is simply ```bash jupyter nbconvert FILE --config JUPYTER_DATA_DIR/extensions/highlight_latex_cfg.py ```e.g. ```bashjupyter nbconvert tst_highlights.ipynb --config ~/.local/share/jupyter/extensions/highlight_latex_cfg.py ``` Configuring pathsFor those who have not taken the extension from the `IPython-notebook-extensions` repository or have not configured extensions via its `setup.py` utility, a file `set_paths.py` is present in the extension directory (it is merely a verbatim copy of the relevant parts in setup.py). This file configures the paths to the `templates` and `extension` directories. It should be executed by something like```bashpython3 set_paths.py```Additionally, you may also have to execute `mv_paths.py` if you installed from the original repo via `jupyter nbextension install ..````bashpython3 mv_paths.py``` Example for embedding the css within the notebook before conversion
###Code
from IPython.core.display import display, HTML
from jupyter_core.paths import jupyter_config_dir, jupyter_data_dir
import os
csspath=os.path.join(jupyter_data_dir(),'nbextensions','usability',
'highlighter','highlighter.css')
HTML('<style>'+open(csspath, "r").read()+'</style>')
###Output
_____no_output_____
|
Machine Learning/decision tree algorithm.ipynb
|
###Markdown
Prediction using Decision Tree Algorithm Objective: Create the Decision Tree classifier and visualize it graphically. Author- Kuwar Kapur Importing The Libraries
###Code
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score
from six import StringIO
from IPython.display import Image
from sklearn import tree
import pydotplus
###Output
_____no_output_____
###Markdown
READING THE DATA AND FINDING OUT THE UNIQUE SPECIES
###Code
df=pd.read_csv('iris.csv')
df['Species'].unique()
###Output
_____no_output_____
###Markdown
GATHERING INFO ABOUT THE DATASET
###Code
df.describe()
###Output
_____no_output_____
###Markdown
USING LABEL ENCODER FOR CATEGORICAL FEATURES
###Code
lr=LabelEncoder()
df['Species']=lr.fit_transform(df['Species'])
###Output
_____no_output_____
###Markdown
SPLITTING THE DATA FOR TRAINING AND TESTING
###Code
X=df.drop(['Species','Id'],axis=1)
y=df['Species']
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=0)
###Output
_____no_output_____
###Markdown
FINDING OUT THE ACCURACY USING DECISION TREE CLASSIFIER
###Code
DT= DecisionTreeClassifier()
Z=DT.fit(X_train,y_train)
predict=DT.predict(X_test)
print(classification_report(y_test,predict))
cm2=accuracy_score(y_test,predict)
print(cm2)
print(confusion_matrix(y_test,predict))
###Output
precision recall f1-score support
0 1.00 1.00 1.00 11
1 1.00 1.00 1.00 13
2 1.00 1.00 1.00 6
accuracy 1.00 30
macro avg 1.00 1.00 1.00 30
weighted avg 1.00 1.00 1.00 30
1.0
[[11 0 0]
[ 0 13 0]
[ 0 0 6]]
###Markdown
VISUALISING THE DECISION TREE
###Code
new_col=df.select_dtypes(include=float).columns
dot_data = StringIO()
tree.export_graphviz(DT, out_file=dot_data, feature_names=new_col,
filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
###Output
_____no_output_____
|
notebooks/Introduction to sympy.ipynb
|
###Markdown
Introduction to sympyA Python library for symbolic computations Table of Contents1 Python set-up2 Numbers2.1 Integers2.2 Floats or Reals2.3 Rationals or Fractions2.4 Surds2.5 Useful constants2.6 Complex numbers2.7 Miscellaneous2.8 Be a little careful when dividing by zero and of arithmetic with infinities3 Symbols3.1 Import symbols from sympy.abc3.2 Define one symbol at a time3.3 Define multiple symbols in one line of code3.4 Set the attributes of a symbol3.5 Check the assumptions/properties of a symbol3.6 Symbolic functions4 Functions4.1 In-line arithmetic4.2 Absolute Values4.3 Factorials4.4 Trig functions4.5 Exponential and logarithmic functions5 Expressions5.1 Creating an expression5.2 Creating expressions from strings5.3 Substituting values into an expression5.4 Simplifying expressions5.4.1 Finding factors5.4.2 Expanding out5.4.3 Collecting terms5.4.4 Canceling common factors, expressing as $\frac{p}{q}$5.4.5 Trig simplification5.4.6 Trig expansions5.4.7 Power simplifications5.4.8 Log simplifications5.4.9 Rewriting functions5.5 Solving expressions5.5.1 Example quadratic solution5.5.2 Generalised quadratic solution5.5.3 Quadratic with a complex solution5.5.4 Manipulating expressions to re-arrange terms5.6 Plotting expressions6 Equations6.1 Creating equations6.2 Solving equations6.2.1 Rearranging terms6.2.2 Exponential example6.2.3 Quadratic example6.2.4 A trigonometric example6.3 Solving systems of equations6.3.1 Two linear equations6.3.2 A linear and cubic system with three point-solutions6.3.3 A system of equations with no solutions7 Limits7.1 Simple example7.2 More complicated examples7.2.1 $f(x) = x^n$7.2.2 $f(x)=a^x$7.2.3 $f(x)=sin(x)$7.3 Limits, where the direction in which we approach the limit is important8 Derivatives8.1 First, second and subsequent derivatives8.2 Partial derivatives9 Integrals9.1 Definite Integrals9.2 Indefinite integrals9.3 sympy cannot evaluate some integrals10 Sums10.1 Infinite sums10.2 Finite sums11 Taylor series expansion11.1 A finite Taylor series11.2 Infinite Taylor series12 Matrices / Linear Algebra13 The End Python set-upInstall with pip or conda (as appropriate to your system)
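For a minimal sketch of how that install typically looks from inside a notebook, see the cell below (these are ordinary pip/conda commands, not part of sympy itself; uncomment whichever matches your set-up).
###Code
# If sympy is not already installed, uncomment one of these lines and run the cell:
# !pip install sympy
# !conda install sympy
###Output
_____no_output_____
###Markdown
With sympy available, we can import it and check versions: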
###Code
from platform import python_version
python_version() # version of python on my machine
import sympy as sp
sp.__version__ # version of sympy on my machine
# This makes the notebook easier to read ...
sp.init_printing(use_unicode=True)
###Output
_____no_output_____
###Markdown
Numbers Integers
###Code
sp.Integer(5) # this is a sympy integer
###Output
_____no_output_____
###Markdown
Floats or Reals
###Code
sp.Float(1 / 2) # this is a sympy float
###Output
_____no_output_____
###Markdown
Rationals or FractionsHint: using Rationals in calculus should be preferred over using floats, as it will yield easier to understand symbolic answers.
###Code
y = sp.Rational(1, 3) # this is a sympy Rational
y
# get the numeric value for an expression to n decimal places
y.n(22)
# we can do the usual maths with Rationals
sp.Rational(3, 4) + sp.Rational(1, 3)
# Note: if we divide sympy Integers, we also get a Rational
sp.Integer(1) / sp.Integer(4)
# We get a sympy Rational even when one of the numerator or denominator is a python integer.
sp.Integer(5) / 2
# getting the numerator and denominator
numerator, denominator = sp.fraction(sp.Rational(-55, 10))
numerator, denominator
# or ...
r = sp.Rational(-55, 10)
numerator = r.numerator
denominator = r.denominator
numerator, denominator
# It is a little challenging to represent an improper Rational
# as a mixed fraction or mixed number (whole number plus fraction)
def mixed_number(rational: sp.Rational):
numerator, denominator = sp.fraction(rational)
whole = sp.Abs(numerator) // sp.Abs(denominator)
part = (
sp.Rational(sp.Abs(numerator) % sp.Abs(denominator),
sp.Abs(denominator))
)
with sp.evaluate(False):
# Use the context manager to avoid simplification back to
# a Rational. And make sure we have the correct sign ...
mixed_number = whole + part if rational >= 0 else (- whole - part)
return mixed_number
mixed_number(sp.Rational(-55, 10))
_.n()
###Output
_____no_output_____
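###Markdown
As a quick illustration of the earlier hint about preferring Rationals (a small sketch using only calls already shown above): exact Rational arithmetic stays exact, while mixing in a float forces an approximate floating point answer.
###Code
# exact: Rational arithmetic keeps the exact value 1/2
display(sp.Rational(1, 3) + sp.Rational(1, 6))
# approximate: the float "infects" the result
display(sp.Rational(1, 3) + sp.Float(0.5))
###Output
_____no_output_____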
###Markdown
Surds
###Code
sp.sqrt(8)
# Note, surds are automatically simplified if possible
# if you don't want the simplification
sp.sqrt(8, evaluate=False)
# or you can use this context manager to avoid evaluation
with sp.evaluate(False):
y = sp.sqrt(8)
y
sp.N(_) # numeric value for the last calculation
sp.cbrt(3) # cube roots
sp.real_root(4, 6) # nth real root
# Use a context manager to prevent auto-simplification
with sp.evaluate(False):
t = sp.Integer(-2) ** sp.Rational(-1, 2)
t
# same as previous cell with simplification
1 / sp.sqrt(-2)
###Output
_____no_output_____
###Markdown
Useful constantsRemember, these constants are symbols, not their approximate values
###Code
sp.pi # use sp.pi for 𝜋
sp.E # capital E for the base of the natural logarithm (Euler's number)
sp.I # capital I for the square root of -1
sp.I ** 2
sp.oo # oo (two lower-case letters o) for infinity
-sp.oo # negative infinity
# This is the "not a number" construct for sympy
sp.nan # in a result, this typically means undefined ...
###Output
_____no_output_____
###Markdown
Complex numbers
###Code
z = 3 + 4 * sp.I # Construct complex numbers
z
sp.re(z), sp.im(z) # get the real and imaginary parts of z
sp.Abs(z) # the Absolute value of a complex number
# It's distance from the origin on the Argand plane
# To obtain the complex conjugate
t = sp.conjugate(4 + 5j) # from a python complex number (ugly)
s = (4 + 5 * sp.I).conjugate() # from a sympy complex number (better)
display(t, s)
###Output
_____no_output_____
###Markdown
Miscellaneous
###Code
sp.prime(5) # get the nth prime number
sp.pi.evalf(5) # evaluate to a sympy Float with 5 significant digits
###Output
_____no_output_____
###Markdown
Be a little careful when dividing by zero and of arithmetic with infinitiesThe results may differ with your expectations
###Code
# This is undefined ...
sp.oo - sp.oo
# This is also undefined ...
sp.Integer(0) / sp.Integer(0)
# I would have pegged this one as undefined
# or as complex infinity. But not as real infinity???
sp.oo / 0
sp.Rational(1, 0) # yields complex infinity
sp.Integer(1) / sp.Integer(0) # Also yields complex infinity
###Output
_____no_output_____
###Markdown
SymbolsYou must define symbols before using them with sympy. To avoid confusion, match the name of the symbol to the python variable name.There are a number of ways to create a sympy symbol ... Import symbols from sympy.abc
###Code
# The quick and easy way to get English and Greek letter names
from sympy.abc import a, b, c, x, n, alpha, beta, gamma, delta
alpha
delta
###Output
_____no_output_____
###Markdown
Define one symbol at a time
###Code
a = sp.Symbol('a') # defining a symbol, one at a time
###Output
_____no_output_____
###Markdown
Define multiple symbols in one line of code
###Code
x, y, z = sp.symbols('x y z') # define multiple symbols at a time
###Output
_____no_output_____
###Markdown
Set the attributes of a symbol
###Code
# you can set attributes for a symbol
i, j, k = sp.symbols('i, j, k', integer=True, positive=True)
###Output
_____no_output_____
###Markdown
Check the assumptions/properties of a symbol
###Code
i.assumptions0
###Output
_____no_output_____
###Markdown
Symbolic functions
###Code
# We can also declare that symbols are [undefined] functions
x = sp.Symbol('x')
f = sp.Function('f')
f
# Including as functions that take arguments
g = sp.Function('g')(x)
g
x, y = sp.symbols('x y')
h = sp.Function('h')(x, y)
h
# And we can do multiple functions at once
f, g = sp.symbols('f g', function=True)
f.assumptions0
###Output
_____no_output_____
###Markdown
Functions In-line arithmeticsympy recognizes the usual python in-line operators, and applies the proper order of operations
###Code
# python operators work as expected
x, y = sp.symbols('x y')
x + x - 2 * y * x / x ** 3
# Note: some simplification ocurred
with sp.evaluate(False):
y = sp.E ** (sp.I * sp.pi) + 1
y
# The .doit() method evaluates an expression ...
y.doit() # Thank you Euler
# Note: While this might look like a Rational,
# This is not a Rational, rather, it is a division
p, q = sp.symbols('p q', integer=True)
frac = p / q
frac
# Use sp.numer() and sp.denom() to get the numerator, denominator
sp.numer(frac), sp.denom(frac)
###Output
_____no_output_____
###Markdown
Absolute Values
###Code
# Absolute value
x = sp.Symbol('x')
sp.Abs(x)
###Output
_____no_output_____
###Markdown
Factorials
###Code
sp.factorial(4, evaluate=False) # factorials
###Output
_____no_output_____
###Markdown
Trig functions
###Code
from sympy.abc import theta
sp.sin(theta) # also cos, tan,
sp.asin(1) # also acos, atan
# secant example (which is the reciprocol of cos(𝜃))
sp.sec(theta) # also csc and cot for cosecant and cotangent
###Output
_____no_output_____
###Markdown
Exponential and logarithmic functions
###Code
sp.exp(x)
# Which is the same as ...
sp.E ** x
sp.E ** x == sp.exp(x)
sp.log(x) # log to the base of e
sp.log(x, 10) # log to the base of 10
###Output
_____no_output_____
###Markdown
Expressions Creating an expression
###Code
# We start by defining the symbols used in the expression
x = sp.Symbol('x')
# Then we use those symbols to create an expression.
y = x + x + x * x # Note: This assigns a sympy expression
# to the python variable y.
# This does not create an equation.
# sympy will collect simple polynomial terms automatically
y
###Output
_____no_output_____
###Markdown
Creating expressions from stringsNote: the string should be a valid python string
###Code
y = sp.simplify('m ** 2 + 2 * m - 8')
y
###Output
_____no_output_____
###Markdown
Substituting values into an expression
###Code
x, y = sp.symbols('x y')
f = x ** 2 + y
f
f.subs(x, 4)
x, a, b, c = sp.symbols('x a b c')
quadratic = a * (x ** 2) + b * x + c
quadratic
# multiple substitutions with a dictionary
quadratic.subs({a:1, b:2, c:1})
###Output
_____no_output_____
###Markdown
Simplifying expressionsNote: Often the .simplify() method is all you need. However, simplifying with the .simplify() method is not well defined. You may need to call a specific simplification method that is well defined to achieve a desired result. Note: beyond some minimal simplification, sympy does not automatically simplify your expressions. Note: this is not a complete list of simplification functions
###Code
# Often the .simplify() method is all you need ...
x = sp.Symbol('x')
y = ((2 * x) / (x - 1)) - ((x ** 2 - 1) / (x - 1) ** 2)
y
# This expression can be simplified to the number 1
y.simplify()
# Another example ...
x = sp.Symbol('x')
with sp.evaluate(False):
# this context manager prevents any
# automatic simplification, resulting
# in a very ugly expression
y = (2 / (x + 3)) / (1 + (3 / x))
y
y.simplify()
###Output
_____no_output_____
###Markdown
Finding factors
###Code
x = sp.Symbol('x')
y = x ** 2 - 1
y
y.factor()
###Output
_____no_output_____
###Markdown
Expanding out
###Code
x = sp.Symbol('x')
y = (x + 1) * (x - 1)
y
y.expand() # for polynomials, this is the opposite to .factor()
###Output
_____no_output_____
###Markdown
Collecting terms
###Code
x, y, z = sp.symbols('x y z')
expr = x * y + 3 * x ** 2 + 2 * x ** 3 + z * x ** 2
expr
expr.collect(x)
###Output
_____no_output_____
###Markdown
Canceling common factors, expressing as $\frac{p}{q}$
###Code
x = sp.Symbol('x')
y = (x**2 - 1) / (x - 1)
y
y.cancel()
###Output
_____no_output_____
###Markdown
Trig simplification
###Code
x = sp.Symbol('x')
y = (sp.tan(x) ** 2) / ( sp.sec(x) ** 2 )
y
y.trigsimp()
###Output
_____no_output_____
###Markdown
Trig expansions
###Code
x, y = sp.symbols('x y')
f = sp.sin(x + y)
f
sp.expand_trig(f)
###Output
_____no_output_____
###Markdown
Power simplifications
###Code
x, a, b = sp.symbols('x a b')
y = x ** a * x ** b
y
y.powsimp()
###Output
_____no_output_____
###Markdown
Log simplifications
###Code
# Note: positive constraint in the next line ...
a, b = sp.symbols('a b', positive=True)
y = sp.log(a * b)
y
sp.expand_log(y)
y = sp.log(a / b)
y
sp.expand_log(y)
###Output
_____no_output_____
###Markdown
Rewriting functions
###Code
# rewite an expression in terms of a particular function
x = sp.Symbol('x')
y = sp.tan(x)
y
# express our tan(x) function in terms of sin
y.rewrite(sp.sin)
# express our tan(x) function in terms of the exponetial function
y.rewrite(sp.exp)
###Output
_____no_output_____
###Markdown
Solving expressionsSolving these expressions assumes they are equations that are equal to zero. Example quadratic solution
###Code
# solve a quadratic equation
x = sp.Symbol('x')
sp.solve(x ** 2 - 1, x) # solve the expression with respect to x
# yields two possible solutions
###Output
_____no_output_____
###Markdown
Generalised quadratic solution
###Code
# More generally ...
a, b, c, x = sp.symbols('a b c x')
y = a * x ** 2 + b * x + c # standard quadratic equation
sp.solve(y, x) # yields a list of two possible solutions
###Output
_____no_output_____
###Markdown
Quadratic with a complex solution
###Code
# and if the only solutions are complex ...
sp.solve(x ** 2 + 2 * x + 10, x)
# yields a list of two possible solutions
###Output
_____no_output_____
###Markdown
Manipulating expressions to re-arrange terms
###Code
# rearrange terms ...
x, y = sp.symbols('x y')
f = x ** 2 - 2 * x * y + 3
sp.solve(f, y) # solve for y = ...; yields one possible solution
###Output
_____no_output_____
###Markdown
Plotting expressions
###Code
x = sp.Symbol('x')
expr = sp.sin(x) ** 2 + sp.cos(x)
expr
plot = sp.plot(expr, show=True)
print(type(plot)) # inspect the type of the plot object returned by sp.plot
# plot multiple lines at once
sp.plot(sp.sin(x), sp.cos(x), legend=True, show=True)
from sympy.plotting import plot3d
x, y = sp.symbols('x y')
plot = plot3d(x**2 + y**2, show=True)
###Output
_____no_output_____
###Markdown
Equations Creating equations
###Code
# Note: we use sp.Eq(left_expr, right_expr)
# to create an equation in sympy
x, y = sp.symbols('x y')
sp.Eq(y, 3 * x + 2)
###Output
_____no_output_____
###Markdown
Solving equations Rearranging terms
###Code
x, y = sp.symbols('x y')
eqn = sp.Eq(x ** 2 + 2 * x * y - 1, 0)
eqn # this is our equation
solution = sp.solve(eqn, y)
solution # yields a list of solutions,
# in this case a list of 1 ...
# Which we can turn back into an equation
sp.Eq(y, solution[0])
###Output
_____no_output_____
###Markdown
Exponential example
###Code
a, b, x = sp.symbols('a b x')
eq = sp.Eq(a ** b, sp.E ** x)
eq # our equation
sp.Eq(x, sp.solve(eq, x)[0])
###Output
_____no_output_____
###Markdown
Quadratic example
###Code
y = sp.Symbol('y')
eq = sp.Eq(x ** 2 - 2 * x, 0)
sp.solve(eq)
y = sp.Symbol('y')
eq = sp.Eq(x ** 2 - 2 * x, y)
sp.solve(eq)
y = sp.Symbol('y')
eq = sp.Eq(x ** 2 - 2 * x, y)
sp.solve(eq, x) # solve for x
###Output
_____no_output_____
###Markdown
A trigonometric example
###Code
x = sp.Symbol('x')
eq = sp.Eq(sp.sin(x) ** 2 + sp.cos(x), 0)
sp.solve(eq)
# solveset allows us to capture the set of infinite solutions
sp.solveset(eq)
###Output
_____no_output_____
###Markdown
Solving systems of equations Two linear equations
###Code
x, y = sp.symbols('x y')
eq1 = sp.Eq(y, 3 * x + 2) # an equation
eq1
eq2 = sp.Eq(y, -2 * x - 1)
eq2
sp.solve([eq1, eq2], [x, y])
###Output
_____no_output_____
###Markdown
A linear and cubic system with three point-solutions
###Code
eq1 = sp.Eq(y, x)
eq2 = sp.Eq(y, sp.Rational(1, 10) * x ** 3)
sp.solve([eq1, eq2], [x, y])
###Output
_____no_output_____
###Markdown
A system of equations with no solutions
###Code
eq1 = sp.Eq(y, x)
eq2 = sp.Eq(y, x + 1)
sp.solve([eq1, eq2], [x, y])
###Output
_____no_output_____
###Markdown
Limits Simple example
###Code
x = sp.Symbol('x')
expr = (x * x - 1)/(x - 1)
expr
# the graph of this expression looks like the straight line y = x + 1 (with a hole at x = 1)
sp.plot(expr, xlim=(-4,4), ylim=(-2,6))
# But our expression is not defined when x is 1
# as it evaluates to zero divided by zero
expr.subs(x, 1)
# The limit as x approaches 1
sp.limit(expr, x, 1) # using the global limit() function
expr.limit(x, 1) # using the .limit() method
# We can display the limit with the Limit() function
# Note: by default, this is the limit approached from
# the positive side.
lim = sp.Limit(expr, x, 1)
lim
# And we can use the .doit() method to calculate the limit
lim.doit()
###Output
_____no_output_____
###Markdown
More complicated examplesCalculate the derivative from first principles (using limits), for a selection of functions$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$ $f(x) = x^n$
###Code
# This is our generic function x to the power of n
def f(x, n):
return x ** n
x, h, n = sp.symbols('x h n')
f(x, n) # what is our function f(x, n) ...
# Apply the limit to our function ...
# Note: our arguments x, h and n are sympy symbols
lim = sp.Limit((f(x + h, n) - f(x, n))/h, h, 0)
lim
# Calculate the limit ...
lim.doit()
###Output
_____no_output_____
###Markdown
$f(x)=a^x$
###Code
# Let's change the function to an exponential: a ** x
def f(a, x):
return a ** x
x, h, a = sp.symbols('x h a')
f(a, x) # what is our function f(x) ...
# Apply the limit to our function ...
lim = sp.Limit((f(a, x + h) - f(a, x))/h, h, 0)
lim
# Calculate the limit ...
lim.doit()
###Output
_____no_output_____
###Markdown
$f(x)=sin(x)$
###Code
# One last example, the derivative of sin(x)
def f(x):
return sp.sin(x)
x, h = sp.symbols('x h')
f(x) # Our function f(x) = sin(x)
# Apply the limit to our function ...
lim = sp.Limit((f(x + h) - f(x))/h, h, 0)
lim
# And evaluating the limit
lim.doit()
###Output
_____no_output_____
###Markdown
Limits, where the direction in which we approach the limit is important
###Code
x = sp.Symbol('x')
expr = 1 / x
expr
sp.plot(expr, xlim=(-8, 8), ylim=(-8, 8))
# Let's display the limit from the positve direction
lim = sp.Limit(expr, x, 0, '+')
lim
# And calculate it ...
lim.doit()
# And the limit from the negative direction
expr.limit(x, 0, '-')
# We can also do the limit from both directions
expr.limit(x, 0, '+-') # which yields complex infinity
###Output
_____no_output_____
###Markdown
Derivatives First, second and subsequent derivativesFor one-variable expressions ... sympy has multiple ways to get the ordinary derivative* using the `.diff()` method on an expression* using the `diff()` function on an expression* using the combined `Derivative().doit()` function/method on an expression
###Code
# Let's generate a polynomial ...
x = sp.Symbol('x')
y = 3 * x ** 4 + 2 * x ** 2 - x - 1
y
# provide the symbolic formula for the first derivative
y_dash = sp.Derivative(y, x)
y_dash
# And calculate the differential ...
y_dash.doit()
# Also ... using the .diff() method
y.diff(x) # differentiate with respect to x
# Also using the diff() function
sp.diff(y, x) # differentiate with respect to x
# provide the symbolic formula for the second derivative
y_2dash = sp.Derivative(y, x, 2)
y_2dash
# second derivative can also be done like this
y_2dash.doit()
# Also ...
y.diff(x, x) # differentiate twice with respect to x
# Also ...
y.diff(x, 2)
# And the formula for the third Derivative
y_3dash = sp.Derivative(y, x, 3)
y_3dash
# third derivative (and so on ...)
y_3dash.doit()
# Also ...
y.diff(x, 3)
# Also ...
y.diff(x, x, x)
# Also ...
sp.diff(y, x, 3)
# Generalisations ...
a, x = sp.symbols('a x')
(x ** a).diff(x).simplify()
# Generalisations ...
a, x = sp.symbols('a x')
(a ** x).diff(x)
###Output
_____no_output_____
###Markdown
Partial derivativesAs with the above differentials, there are multiple ways to do this ...
###Code
x, y = sp.symbols('x y')
g = x* y ** 2 + x ** 3
g
# The first partial derivative of the expression with respect to x
partial_x = sp.Derivative(g, x, 1)
partial_x
# Calculate ...
partial_x.doit()
# And the first-order partial derivative with respect to y
partial_y = sp.Derivative(g, y, 1)
partial_y.doit()
###Output
_____no_output_____
###Markdown
Integrals Definite Integrals
###Code
# Definite integral using the integrate() function
# The tuple contains (with_respect_to, lower_limit, upper_limit)
x = sp.Symbol('x')
f = sp.sin(x)
y = sp.Integral(f, (x, 0, sp.pi / 2))
y
# we can then calculate it as follows
y.doit()
# We can calculate the definite interval using the .integrate() method
x = sp.Symbol('x')
f.integrate((x, 0, sp.pi / 2))
# We can calculate the definite interval using the integrate() function
sp.integrate(f, (x, 0, sp.pi / 2))
###Output
_____no_output_____
###Markdown
Indefinite integrals ***Caution***: sympy does not yield the constant of integration (the "+ C") that arises from the indefinite integral. So technically, we are getting the anti-derivative, rather than the indefinite integral. Note: the constant of integration is netted out when the definite integral is calculated.
###Code
x = sp.Symbol('x')
y = x ** 2 + 2 * x
y.integrate(x) # integrate with respect to x
x = sp.Symbol('x')
sp.log(x).integrate(x)
x = sp.Symbol('x')
sp.sin(x).integrate(x)
###Output
_____no_output_____
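###Markdown
If you do want the "+ C" to appear explicitly, one option (a small workaround sketch, not a built-in sympy feature) is to add your own symbolic constant to the anti-derivative:
###Code
x, C = sp.symbols('x C')
(x ** 2 + 2 * x).integrate(x) + C # the anti-derivative plus an explicit constant of integration
###Output
_____no_output_____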
###Markdown
sympy cannot evaluate some integrals If `integrate()` is unable to compute an integral, it returns an unevaluated Integral object.
###Code
# for example ... sympy cannot calculate this integral
sp.log(sp.sin(x)).integrate(x)
###Output
_____no_output_____
###Markdown
SumsSums can be achieved with either the summation function or the Sum constructor Infinite sums
###Code
# A sum from the sum constructor
# Note: the second term is the tuple: (index, lower_bound, upper_bound)
# Where: the range is from lower_bound to upper_bound inclusive
x, n = sp.symbols('x n')
s = sp.Sum(6 * (x ** -2), (x, 1, sp.oo)) # sum constructor
s # display the sum
s.doit() # evaluate the sum
s.n() # approximate value
# A sum using the summation function
x = sp.symbols('x')
s = sp.summation(90 / x ** 4, (x, 1, sp.oo))
s
# A sum using the summation function
# with a defined python function
n = sp.symbols('n')
def f(n): return 945 / (n ** 6) # a defined function
s = sp.summation(f(n), (n, 1, sp.oo))
s
# And another example that sums to one
x = sp.symbols('x')
with sp.evaluate(False):
expr = 1 / (2 ** x)
expr
s = sp.Sum(expr, (x, 1, sp.oo))
s
s.doit()
###Output
_____no_output_____
###Markdown
Finite sums
###Code
x, a, b, c, n = sp.symbols('x a b c n')
quad = a * (x ** 2) + b * x + c
quad
quad_sum = sp.summation(1 / quad, (x, 0, n))
quad_sum
quad_sum.subs(n, 10)
quad_sum.subs({a:1, b:2, c:1, n:10})
_.n() # previous value approximated ...
quad_sum = sp.summation(1 / quad, (x, 0, 10))
quad_sum
quad_sum.subs({a:1, b:2, c:1})
_.n() # previous value approximated ...
###Output
_____no_output_____
###Markdown
Taylor series expansionTaylor series is a technique to approximate a function at/near a point using polynomials. The technique requires that multiple n-order derivatives can be found for the function. It works well with exponential and trigonometric functions. The series used to approximate the function over a range, using a specified number of polynomial terms, or it can be expressed as an infinite series.Why do it? Mathematically, polynomials are sometimes easier to work with. These approximations are remarkably accurate with just a small number of terms. A finite Taylor series
###Code
x = sp.Symbol('x')
s = sp.series(sp.cos(x), x, x0=0, n=6) # at x=0, for six polynomial terms
# noting that the odd powers for our
# polynomial are zero.
s # note the denominators are 0!, 2!, 4!, 6! ... (where 0! equals 1)
s
# We can remove the Big-O notation
s.removeO()
# We can compare cos(0.5) with our taylor-approximation of cos(0.5)
# and see for this point-value it is accurate to three decimal places.
point = 0.5
print(f'cos={sp.cos(point)} Taylor={s.removeO().subs(x, point).n()}')
# This Taylor series looks like it provides
# a workable approximation for cos(x) between
# at least x=-π/4 and +π/4
sp.plot(sp.cos(x), s.removeO(),
legend=True,
show=True, ylim=(-1,1.5))
# Let's try plotting a few more terms, which expands the
# range for which our approximation provides good results.
x = sp.Symbol('x')
s_6 = sp.series(sp.cos(x), x, x0=0, n=6).removeO()
s_16 = sp.series(sp.cos(x), x, x0=0, n=16).removeO()
sp.plot(sp.cos(x), s_6, s_16,
#legend=True,
show=True, ylim=(-1.3,1.3))
# Let's compare the two plotted approximations at π/4
p = sp.pi / 4
print(f'cos({p})={sp.cos(p.evalf())}; ')
print(f'n=6--> {s_6.subs(x, p).n()}, ')
print(f'n=16--> {s_16.subs(x, p).n()}')
###Output
cos(pi/4)=0.707106781186548;
n=6--> 0.707429206709773,
n=16--> 0.707106781186547
###Markdown
Infinite Taylor series
###Code
# Instead of specifying n for the order of the polynomial, we set n=None
t_log = sp.series(sp.log(x), x=x, x0=1, n=None)
t_log # this returns a python generator for the series.
# Let's display the first 8 terms of this series
# Note: generators can only be used once in Python ...
lst = []
for i in range(8):
lst.append(next(t_log))
display(lst)
sum(lst) # which we can sum in python ...
###Output
_____no_output_____
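###Markdown
As a quick sanity check (just an illustration), we can compare this 8-term partial sum to sympy's own log at a point near the expansion point x0=1:
###Code
partial = sum(lst) # the 8-term partial sum from above
partial.subs(x, sp.Rational(3, 2)).n(), sp.log(sp.Rational(3, 2)).n()
###Output
_____no_output_____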
|
lesson01/Part1_introToPython_notes_lesson01.ipynb
|
###Markdown
Day1, Part 1: An introduction to Python programmingReferences: * https://sites.google.com/a/ucsc.edu/krumholz/teaching-and-courses/python-15/class-1 Right now we are going to go over a few basics of programming using Python and also some fun tidbits about how to mark up our "jupyter notebooks" with text explainations and figures of what we are doing. If you have been programming for a while, this will serve as a bit of a review, but since most folks don't primarily program in Python, this will be an opportunity to learn a bit more detail about the nuances of Python.If you are very new to programming, this might all seem like gibberish right now. This is completely normal! Programming is one of those things that is hard to understand in the abstract, but gets much more easy the more you do it and can see what sorts of outputs you get.This part will be a little dry, but soon we'll get to using the things we learn here to do more fun stuff.Also! These notes are online. Hopefully I won't go to fast, but if I do please feel free to raise your hand and tell me to slow down gosh darn it! If you just want to make sure you got something - feel free to refer back to these notes. BEWARE: you will learn this better if you try to follow along in class and not copy directly from here - but I assume you can figure out for yourself what methods you want to employ best in this class. 1. Introduction to jupyter notebooks* code vs comments* markdown "cheat sheet"* running a cell* using latex to do math equations: $-G M_1 m_2/r^2$ or $- \frac{G M m}{r^2}$* latex math "cheat sheet" 1. Using Python as a calculatorIt is possible to interact with python in many ways. Let's start with something simple - using this notebook+python as a calculator
###Code
# lets add 2+3
2+3
# ALSO: note what I did there with the "#" -> this is called a comment,
# and it allows us to add in notes to ourselves OR OTHERS
# Commenting is SUPER important to (1) remember what you did and
# (2) tell others who are working with your code what you did
#In fact, comments are part of a "good coding practice"
# Python even has the nicities to tell you all about
# what is good coding practice:
import this
###Output
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
###Markdown
The above sentences will become clearer the more you program in Python! :)
###Code
2*3
2-3
4/2
# lets say I want to write 2 raised to the power of 3, i.e. 2*2*2 = 8
# python has special syntax for that:
2**3
###Output
_____no_output_____
###Markdown
Python knows the basic arithmetic operations plus (+), minus (-), times (*), divide (/), and raise to a power (**). It also understands parentheses, and follows the normal rules for order of operations:
###Code
1+2*3
(1+2)*3
###Output
_____no_output_____
###Markdown
2. Simple variablesWe can also define variables to store numbers, and we can perform arithmetic on those variables. Variables are just names for boxes that can store values, and on which you can perform various operations. For example:
###Code
a=4
a+1
a
# note that a itself hasn't changed
a/2
# now I can change the value of a by reassigning it to its original value + 1
a = a+1
# short hand is a += 1
a
a**2
###Output
_____no_output_____
###Markdown
There's a subtle but important point to notice here, which is the meaning of the equal sign. In mathematics, the statement that a = b is a statement that two things are equal, and it can be either true or false. In python, as in almost all other programming languages, a = b means something different. It means that the value of the variable a should be changed to whatever value b has. Thus the statement we made a = a + 1 is not an assertion (which would obviously be false) that a is equal to itself plus one. It is an instruction to the computer to take the variable a, add 1 to it, and then store the result back into the variable a. In this example, it therefore changes the value of a from 4 to 5. One more point regarding assignments: the fact that = means something different in programming than it does in mathematics implies that the statements a = b and b = a will have very different effects. The first one causes the computer to forget whatever is stored in a and replace it by whatever is stored in b. The second statement has the opposite effect: the computer forgets what is stored in b, and replaces it by whatever is stored in a. For example, I can use a double equals sign to "test" whether or not a is equal to some value:
###Code
a == 5
a == 6
###Output
_____no_output_____
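###Markdown
To see that the order of an assignment matters (a quick illustration of the a = b versus b = a point above), here is a small example with two fresh variables:
###Code
p = 4
q = 7
p = q # p is overwritten with the value currently stored in q; q is unchanged
p, q
###Output
_____no_output_____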
###Markdown
More on this other form of an equal sign when we get to flow control -> stay tuned! The variable a that we have defined is an integer, or int for short. We can find this out by asking python:
###Code
type(a)
###Output
_____no_output_____
###Markdown
Integers are exactly what they sound like: they hold whole numbers, and operations like addition, subtraction, and multiplication involving them and other whole numbers will always yield whole numbers. Division, however, deserves a closer look:
###Code
a/2
###Output
_____no_output_____
###Markdown
In Python 3 (which we are using here), dividing two integers with a single slash performs "true" division, so 5/2 gives 2.5 rather than a whole number. (In older Python 2 code you may see 5/2 give 2, because integer division there rounded down to the nearest integer; in Python 3 you get that rounding-down behaviour with the double slash, a//2.) We can also make the floating point nature of the result explicit by changing the 2 to 2.0, or even just 2., since the trailing zero is assumed:
###Code
a/2.
###Output
_____no_output_____
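###Markdown
To see the two kinds of division side by side (a quick illustration of the Python 3 behaviour described above):
###Code
a / 2, a // 2 # true division gives a float, floor division rounds down to a whole number
###Output
_____no_output_____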
###Markdown
If we assign this to a variable, we will have a new type of variable: a floating point number, or float for short.
###Code
b = a/2.
type(b)
###Output
_____no_output_____
###Markdown
A floating point variable is capable of holding real numbers. Why have different types of variables for integers versus non-integer real numbers? In mathematics there is no need to make the distinction, of course: all integers are real numbers, so it would seem that there should be no reason to have a separate type of variable to hold integers. However, this ignores the way computers work. On a computer, operations involving integers are exact: 1 + 1 is exactly 2. However, operations on real numbers are necessarily inexact. I say necessarily because a real number is capable of having an arbitrary number of decimal places. The number pi contains infinitely many digits, and never repeats, but my computer only comes with a finite amount of memory and processor power. Even rational numbers run into this problem, because their decimal representation (or to be exact their representation in binary) may be an infinitely repeating sequence. Thus it is not possible to perform operations on arbitrary real numbers to exact precision. Instead, arithmetic operations on floating point numbers are approximate, with the level of accuracy determined by factors like how much memory one wants to devote to storing digits, and how much processor time one wants to spend manipulating them. On most computers a python floating point number is accurate to about 1 in 10^15, but this depends on both the architecture and on the operations you perform. That's enough accuracy for many purposes, but there are plenty of situations (for example counting things) when we really want to do things precisely, and we want 1 + 1 to be exactly 2. That's what integers are there for. A third type of very useful variable is strings, abbreviated str. A string is a sequence of characters, and one can declare that something is a string by putting characters in quotation marks (either " or ' is fine):
###Code
c = 'alice'
type(c)
###Output
_____no_output_____
###Markdown
The quotation marks are important here. To see why, try issuing the command without them:
###Code
c=alice
###Output
_____no_output_____
###Markdown
This is an error message, complaining that the computer doesn't know what alice is. The problem is that, without the quotation marks, python thinks that alice is the name of a variable, and complains when it can't find a variable by that name. Putting the quotation marks tells python that we mean a string, not a variable named alice.Obviously we can't add strings in the same sense that we add numbers, but we can still do operations on them. The plus operation concatenates two strings together:
###Code
d = 'bob'
c+d
###Output
_____no_output_____
###Markdown
There are a vast number of other things we can do with strings as well, which we'll discuss later.In addition to integers, floats, and strings, there are three other types of variables worth mentioning. Here we'll just mention the variable type of Boolean variable (named after George Boole), which represents a logical value. Boolean variables can be either True or False:
###Code
g=True
type(g)
###Output
_____no_output_____
###Markdown
Boolean variables can have logic operations performed on them, like not, and, and or:
###Code
not g
h = False
g and h
g or h
###Output
_____no_output_____
###Markdown
The final type of variable is None. This is a special value that is used to designate something that has not been assigned yet, or is otherwise undefined.
###Code
j = None
j
# note: nothing prints out since this variable isn't anything!
###Output
_____no_output_____
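###Markdown
A common pattern (just a small illustration) is to test whether a variable is still None using the "is" keyword:
###Code
j is None # True, since we haven't given j a "real" value yet
###Output
_____no_output_____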
###Markdown
3. One dimensional arrays with numpyThe variables we have dealt with so far are fairly simple. They represent single values. However, for scientific or numeric purposes we often want to deal with large collections of numbers. We can try to do this with a "natively" supported Python data structure called a list:
###Code
myList = [1, 2,3, 4]
myList
###Output
_____no_output_____
###Markdown
You can do some basic things with lists, like add to an individual element:
###Code
myList[0] += 5
myList
# so now you can see that the first element of the list is now 1+5 = 6
###Output
_____no_output_____
###Markdown
You can also do fun things with lists like combine groups of objects with different types:
###Code
myList = ["Bob", "Linda", 5, 6, True]
myList
###Output
_____no_output_____
###Markdown
However, lists don't support adding a number to every element at once, or vector operations like dot products. Formally, an array is a collection of objects that all have the same type: a collection of integers, or floats, or bools, or anything else. In the simplest case, these are simply arranged into a numbered list, one after another. Think of an array as a box with a bunch of numbered compartments, each of which can hold something. For example, here is a box with eight compartments.  We can turn this abstract idea into code with the numpy package, as Python doesn't natively support these types of objects. Let's start by importing numpy:
###Code
import numpy as np # here we are importing as "np" just for brevity - you'll see that a lot
# note: if you get an error here try:
#!pip install numpy
# *before* you try to import anything
###Output
_____no_output_____
###Markdown
We can start by initializing an empty array with 8 entries, to match our image above. There are several ways of doing this.
###Code
# we can start by calling the "empty" function
array1 = np.empty(8)
array1
# here you can see the array is filled with arbitrary leftover values (often zeros or
# tiny near-zero numbers), since np.empty does not initialize its entries
# we can also specifically initial it with zeros:
array2 = np.zeros(8)
array2
# so this looks a little nicer
# we can also create a *truely* empty array like so:
array3 = np.array([])
array3
# then, to add to this array, we can "append" to the end of it like so:
array3 = np.append(array3, 0)
array3
###Output
_____no_output_____
###Markdown
Of course, we'd have to do the above 8 times and if we do this by hand it would be a little time consuming. We'll talk about how to do such an operation more efficiently using a "for loop" a little later in class. Let's say we want to fill our array with the following elements: We can do this using a few of the different methods we discussed. We could start by calling "np.empty" or "np.zeros" and then fill each element one at a time, or we can convert a *list* type into an *array* type on initialization of our array. For example:
###Code
array4 = np.array([10,11,12,13,14,15,16,17])
array4
###Output
_____no_output_____
###Markdown
There are even functions in numpy we can use to create new types of arrays. For example, we could have also created this same array as follows:
###Code
array5=np.arange(10,18,1)
array5
# note that here I had to specify a stop value of 18, one more than the last value I wanted (17),
# since np.arange excludes the stop value
###Output
_____no_output_____
###Markdown
We can also make a similar array with different spacing:
###Code
array6 = np.arange(10,18,2)
array6
###Output
_____no_output_____
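###Markdown
As an aside (not needed for the rest of this lesson), numpy also has np.linspace, which is handy when you know how many points you want rather than the step size:
###Code
np.linspace(10, 17, 8) # 8 evenly spaced values from 10 to 17, including both endpoints
###Output
_____no_output_____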
###Markdown
We can compare operations with arrays and lists, for example:
###Code
myList = [5, 6, 7]
myArray = np.array([5,6,7])
myList, myArray
###Output
_____no_output_____
###Markdown
So, things look very similar. Lets try some operations with them:
###Code
myList[0], myArray[0]
###Output
_____no_output_____
###Markdown
So, they look very much the same... lets try some more complicated things.
###Code
myList[0] + 5, myArray[0] + 5
myArray + 5
myList + 5
###Output
_____no_output_____
###Markdown
So here we can see that while we can add a number to an array, we can't just add a number to a list. What does adding a number to myArray look like?
###Code
myArray, myArray+5
###Output
_____no_output_____
###Markdown
So we can see that adding a number to an array just increases all elements by that number.
###Code
# we can also increament element by element
myArray + [1, 2, 3]
###Output
_____no_output_____
###Markdown
There are also several differences in "native" operations with arrays vs lists. We can learn more about this by using tab completion.
###Code
# type "myList." or "myArray." and hit <TAB> in the notebook to see the available methods;
# dir() lists the same information programmatically:
len(dir(myList)), len(dir(myArray))
###Output
_____no_output_____
###Markdown
As you can see myArray supports a lot more operations. For example:
###Code
# we can sum all elements of our array
myArray, myArray.sum()
# or we can take the mean value:
myArray.mean()
# or take things like the standard deviation, which
# is just a measurement of how much the array varies
# overall from the mean
myArray.std()
###Output
_____no_output_____
###Markdown
We can do some more interesting things with arrays, for example, specify what their "type" is:
###Code
myFloatArray = np.zeros([5])
myFloatArray
###Output
_____no_output_____
###Markdown
Compare this by if we force this array to be integer type:
###Code
myIntArray = np.zeros([5],dtype='int')
myIntArray
# we can see that there are no decimals after the zeros
# this is because this array will only deal with
# whole,"integer" numbers
###Output
_____no_output_____
###Markdown
How do we access elements of an array? Lets see a few different ways.
###Code
myArray = np.arange(0,10,1)
myArray
# just the first few elements:
myArray[0:5]
# all but the first element:
myArray[1:]
# in reverse:
myArray[::-1]
# all but the last 2 elements:
myArray[:-2]
# elements between 2 & 7
myArray[2:7]
# note: 2 *is* included, but 7 is not
###Output
_____no_output_____
###Markdown
Multiple dimension arraysArrays aren't just a list of numbers, we can also make *matrices* out of arrays. This is just a fancy way of saying "arrays with multiple dimensions". For example, let's say we want to make a set of numbers that has entries along a row *and* column. Something that looks like this: We can do this with the following call:
###Code
x2d=np.array([[10,11,12,13,14,15,16,17], [20,21,22,23,24,25,26,27], [30, 31, 32, 33, 34, 35, 36, 37]])
x2d
###Output
_____no_output_____
###Markdown
Now note, we can use some of the same array calls we used before:
###Code
x2d[0,:]
x2d[:,0]
x2d[1:,:]
###Output
_____no_output_____
###Markdown
We can even use functions like zeros to make multi-dimensional arrays with all entries equal to zero:
###Code
x2d2 = np.zeros((3,7))
x2d2
###Output
_____no_output_____
###Markdown
Once you've defined an array, you have a few ways to get info about the array. We used "mean" above, but you might also want to check that its shape is what you think it should be (I do this a lot!):
###Code
x2d.shape
###Output
_____no_output_____
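###Markdown
A few other handy attributes for checking what you are working with (a quick illustration):
###Code
x2d.ndim, x2d.size, x2d.dtype # number of dimensions, total number of elements, and element type
###Output
_____no_output_____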
###Markdown
There are *many* ways to manipulate arrays and call them and if you're feeling overwhelmed at this point, that is ok! We'll get plenty of practice using these in the future. 4. DictionariesThere is one more data type that you might come across in Python a dictionary/directory. I think it might be called a "directory" but I've always said dictionary and it might be hard to change at this point :)For brevity I'll just be calling it a "dict" anyway!
###Code
# the calling sequence is a little weird, but it's essentially a way to "name" components of our dict
myDict = {"one":1, "A string":"My String Here", "array":np.array([5,6,6])}
###Output
_____no_output_____
###Markdown
Now we can call each of these things by name:
###Code
myDict["one"]
myDict["A string"]
myDict["array"]
###Output
_____no_output_____
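###Markdown
A couple of other things you will often do with dicts (a small illustration): add a brand new entry after the fact, and list all of the "names" (keys) currently stored.
###Code
myDict["pi-ish"] = 3.14 # adding a new key/value pair
list(myDict.keys()) # all of the keys currently in the dict
###Output
_____no_output_____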
###Markdown
In this course we'll be dealing mostly with arrays, and a little bit with lists & dicts - so if data structures like dicts or "sets" (which we didn't cover, but you can read up on them on your own if you're super curious) seem weird, that is ok! You will have many opportunities to work with these sorts of things as you go on in your programming life. 5. Simple plots with arraysLet's now start to make some plots with our arrays. To do this, we have to import a plotting library - this is just like what we did to import our numpy library to help us do stuff with arrays.
###Code
import matplotlib.pyplot as plt
# note: there is actually a *lot* of matplotlib sub-libraries
# we'll just start with pyplot
# again, if this doesn't work you can try
#!pip install matplotlib
# OR
#!conda install matplotlib
# lets try a simple plot
plt.plot([5,6], [7,8])
# ok neat! we see a plot with
# x going from 5-6, and y going from 7-8
###Output
_____no_output_____
###Markdown
Let's combine our numpy array stuff with plots:
###Code
x = np.arange(0,2*np.pi,0.01)
x
# so, here we see that x goes from 0 to 2*PI in steps of 0.01
# now, lets make a plot of sin(x)
y = np.sin(x)
plt.plot(x,y)
###Output
_____no_output_____
###Markdown
Ok, that's pretty sweet, but maybe we can't recall what we are plotting and we want to put some labels on our plot.
###Code
plt.plot(x,y)
plt.xlabel('x value from 0 to 2 PI')
plt.ylabel('sin(x)')
plt.title('My first plot!')
###Output
_____no_output_____
###Markdown
Finally, note we get this text that gives us info about our plot that we might not want to show each time. We can explicitly "show" this plot:
###Code
plt.plot(x,y)
plt.xlabel('x value from 0 to 2 PI')
plt.ylabel('sin(x)')
plt.title('My first plot!')
plt.show()
###Output
_____no_output_____
###Markdown
So, now let's say we want to save our lovely plot. Let's try that!
###Code
plt.plot(x,y)
plt.xlabel('x value from 0 to 2 PI')
plt.ylabel('sin(x)')
plt.title('My first plot!')
plt.savefig('myAMAZINGsinplot.png')
###Output
_____no_output_____
###Markdown
Well, it seems like nothing happened. Why? So, basically our plot gets shown to us, but it *also* gets saved to disk. But where? Well by default, since we didn't give a full path, it saved to whatever directory we are running this jupyter notebook from. Let's figure out where that is:
###Code
# there are a few ways to figure this out
# we'll use this opportunity to do a little command-line
!pwd # MAC
# so the "!" is called an "escape" character - this just
# shunts us out of this notebook and gives us access to our
# main "terminal" or "shell" - basically the underlying
# structure of our computer
# note, in windows this equivalent is !cd
# see: https://www.lemoda.net/windows/windows2unix/windows2unix.html
###Output
/Users/jillnaiman1/csci-p-14110/lesson01
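###Markdown
If you'd rather stay inside Python (and not worry about the Mac/Windows difference), the `os` module can tell us the same thing - a minimal added sketch:
###Code
import os
# the pure-Python way to ask for the current working directory
os.getcwd()
###Output
_____no_output_____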
###Markdown
We have several options at this point to open our image. We can save it in a directory where we easily know where it is, or we can open it up from here. For example:
###Code
!open /Users/jillnaiman1/csci-p-14110/lesson01/myAMAZINGsinplot.png
# the above is for macs!!
###Output
_____no_output_____
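###Markdown
Another option that works on any operating system is to display the saved image right back inside the notebook - a small added sketch, assuming the png is in the current directory:
###Code
from IPython.display import Image
# show the saved png inline in the notebook
Image(filename='myAMAZINGsinplot.png')
###Output
_____no_output_____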
###Markdown
So we can change the location of where this saves so that we are saving all of our files to one place. There are several ways to do this. You can open a file browser as you normally would, make a new folder, figure out where it is on disk, and save to there. We can also do this from the command line. First we'll make a new directory - this command is the same on both Macs & Windows. But before that, let's remind ourselves where we are:
###Code
!pwd
###Output
/Users/jillnaiman1/csci-p-14110/lesson01
###Markdown
Usually it will be something like "/Users", then your user name, and then maybe another directory. These are subfolders nested inside other folders. We can check out what is in a particular folder using "ls" (or "dir" on a Windows machine) like so:
###Code
!ls
###Output
_example_assignment.md index.md~
_introToPython_notes_lesson01.ipynb lecture01.md
[1m[34mimages[m[m myAMAZINGsinplot.png
index.md myFirstProgram.ipynb
###Markdown
So this is now telling me what is in the current directory that I found with !pwd. Let's say we want to make a new folder for this class in our "home" directory - this is what the directory/folder with your user name is generally called. We should first check out what is already in there; for that we can use "ls" or "dir" like so:
###Code
!ls /Users/jillnaiman1/
###Output
#test.txt#
[1m[34mApplications[m[m
[1m[34mApplications (Parallels)[m[m
[1m[34mArepoCode[m[m
[1m[34mArepoCodePython[m[m
[1m[34mArepoModsLocal[m[m
[1m[34mArepoRuns[m[m
[1m[34mArepo_ICs[m[m
[1m[34mBTBackground[m[m
[1m[34mBasic-Chat[m[m
[1m[34mBlotto[m[m
[1m[34mCreative Cloud Files[m[m
[1m[34mData-digging[m[m
[1m[34mDesktop[m[m
[1m[34mDocuments[m[m
[1m[34mDownloads[m[m
[1m[34mDropbox[m[m
[1m[34mEclipseSoundscapes[m[m
[1m[34mFLASH4[m[m
[1m[34mFLASH4.2[m[m
[1m[34mFLASH4.4[m[m
FLASH4.4.tar
[1m[34mFLASH4_Runs[m[m
[1m[34mFreeRoutingNew[m[m
[1m[34mGadget-2.0.7[m[m
[1m[34mGlowdeck_Bluetooth[m[m
Glowdeck_Bluetooth.zip
HOPE
[1m[34mHoudiniProjects[m[m
[1m[34mInClassPost[m[m
Le_siII_lowercase.txt
Le_siI_lowercase.txt
Le_siV_lowercase.txt
Le_siX_lowercase.txt
[1m[34mLibrary[m[m
Microchip Bluetooth DFU Utility Installer.exe
Miniconda2-latest-MacOSX-x86_64.sh
Miniconda3-latest-MacOSX-x86_64.sh
[1m[34mMovies[m[m
[1m[34mMusic[m[m
[1m[34mMyCSClass_summer2019[m[m
[1m[34mMyFirstPost[m[m
[1m[34mMyNewDir[m[m
[1m[34mMyPosty[m[m
[1m[34mNuPyCEE[m[m
[1m[34mOpenSpace[m[m
[1m[34mOpenVdbInstall[m[m
[1m[34mPanoptes-Front-End[m[m
[1m[34mPictures[m[m
[1m[34mPost1[m[m
[1m[34mPublic[m[m
[1m[34mPulledARImages[m[m
README.md
[1m[34mSites[m[m
[1m[34mSnowLeopard_Lion_Mountain_Lion_Mavericks_Yosemite_El-Captain_06.07.2016[m[m
[1m[34mTactileSun[m[m
[1m[34mUntitled Folder[m[m
[1m[34mUntitled Folder 1[m[m
Untitled.ipynb
Untitled1.ipynb
Untitled2.ipynb
Untitled3.ipynb
Untitled4.ipynb
Untitled5.ipynb
Untitled6.ipynb
Untitled7.ipynb
Untitled8.ipynb
Untitled9.ipynb
VC_redist.x86.exe_old
[1m[34m__MACOSX[m[m
[1m[34magbpaper[m[m
[1m[34maggregation-for-caesar[m[m
allthree.mtl
allthree.obj
[1m[34manaconda3[m[m
[1m[34marduino_sketches[m[m
[1m[34marepo-snap-util[m[m
[1m[34marepoInstall[m[m
areposnap.tar.gz
[1m[34mastroblend-dev[m[m
[1m[34mastroblend-stable[m[m
[1m[34mastroblend-website[m[m
[1m[34mastronaiman[m[m
[1m[34mavriot-website[m[m
backblue.gif
[1m[34mblender-2.75a[m[m
blender-2.75a.tar
[1m[34mblender-2.76[m[m
[1m[34mblender-2.76b-OSX_10.6-x86_64[m[m
[1m[34mbqplot[m[m
[1m[34mbrainbit[m[m
calculate_lxprof_lowercase.pro
[1m[34mcfa_yt_workshop[m[m
[1m[34mcfa_yt_workshop_home[m[m
cghistoplot_lowercase.pro
[1m[34mcorgBuildFile[m[m
[1m[34mcorgWebsiteBuild[m[m
crashblender.blend
[1m[34mcsci-p-14110[m[m
[1m[34mcube_cascade[m[m
[1m[34mcube_cascade_old[m[m
[1m[34mdata[m[m
[1m[34mdgstripping[m[m
[1m[34mdmv_ill[m[m
dsd_ds9.pdf
[1m[34mdylans_tools[m[m
fade.gif
[1m[34mfhack[m[m
[1m[34mfigure[m[m
[1m[34mfitbit_app[m[m
[1m[34mflash_runs[m[m
flashr.tar
flattenedGalaxbaked.blend
[1m[34mforEmacs[m[m
[1m[34mforLisa[m[m
forLisa.tar.gz
[1m[34mgabrielasData[m[m
galaxy0030_Projection_y_density.png
[1m[34mgalaxySnaps[m[m
[1m[34mgenericPlanetFiles[m[m
[1m[34mgits_for_viz_class[m[m
[1m[34mglue-master[m[m
[1m[34mhdf5Install[m[m
[1m[32mhello[m[m
hello.c
[1m[34mhoudini_yt-tmp[m[m
[1m[34mhoudini_yt-website[m[m
[1m[34mhoudini_yt_dev[m[m
[1m[34mhts-cache[m[m
[1m[34miOS_BrainWeather[m[m
[1m[34miOS_BrainWeather_myAWS[m[m
[1m[34miOS_BrainWeather_old[m[m
[1m[34midl_files[m[m
[1m[34midyllPosts[m[m
[1m[34millustrisData[m[m
[1m[34millustris_python[m[m
[1m[34mimageParse[m[m
index.html
install_arepo_packages.sh
install_arepo_packages.sh~
install_hdf5.sh
install_hdf5.sh~
install_script.sh
install_script.sh~
install_script_miniconda.sh
install_script_yt20160511.sh
install_script_yt20160511.sh~
install_script_yt20160511_t2.sh
install_script_yt20160511_t2.sh~
install_script_ytblender.sh
install_script_ytblender.sh~
install_script_ytblender_mypythonversion.sh
install_script_ytblender_new.sh
install_script_ytblender_new.sh~
install_yt3.sh
[1m[34mios_tmp[m[m
itssuchabeautifulday.png
[1m[34mjnaiman-openauth[m[m
jnaiman-openauth.zip
[1m[34mjnaiman.github.io[m[m
junk.gdbm
library-repos-install.sh
linux-3.19.3.tar.xz
[1m[34mlocalpy[m[m
[1m[34mmacports[m[m
[1m[34mmelodyDFUtool[m[m
melodyDFUtool.zip
[1m[34mmelodysmart-ios-swift[m[m
[1m[34mmesa-r10108[m[m
[1m[34mminiconda3[m[m
[1m[34mmitsuba-3401706c2f1d[m[m
[1m[34mmpi4py-1.3.1[m[m
mpi4py-1.3.1.tar
multirun.sh
multirun.sh~
[1m[34mmy-idyll-post[m[m
myTrial.step
[1m[34mnaimanconsulting[m[m
[1m[34mnode_modules[m[m
[1m[34mopenChampaignProject[m[m
[1m[34mopenvdb[m[m
[1m[34mopenvdb_packages[m[m
package-lock.json
[1m[34mparsync[m[m
passwd_iam
[1m[34mperiodic_kdtree[m[m
playing_with_class_data.log
playing_with_class_data.tex
[1m[34mprototype_aie_website[m[m
[1m[34mprototype_aie_website_tmp[m[m
[1m[34mpulsarPython[m[m
[1m[34mpython-fitbit[m[m
[1m[34mpython-virtual-environments[m[m
[1m[34mpywwt[m[m
[1m[32mrsync_parallel.sh[m[m
[1m[34ms3-drive[m[m
[1m[34ms3fs-fuse[m[m
[1m[34mscaffolding-interactives[m[m
solverlibs.py
solverlibs.pyc
[1m[34mspeakermic_aa1.stl[m[m
[1m[34mspring2019online[m[m
[1m[34mstarless_grRender[m[m
[1m[34mstarless_grRender_mod[m[m
[1m[34mstarrender[m[m
[1m[34mstarrender.bfg-report[m[m
[1m[34mstellarModels[m[m
[1m[34msurfaces[m[m
[1m[34mtempIOS[m[m
[1m[34mtempleton2018[m[m
testyt.py
testyt.py~
[1m[34mtmpARImages[m[m
[1m[34mtmpToPeter[m[m
[1m[34mtmp_aie_website[m[m
[1m[34mtmpchemEvPaper[m[m
tmpchemEvPaper.tar.gz
[1m[34mtmppython[m[m
trial.iges
trial.step
untitled-earbudBuildCombine-earbud_back_taper.stl
untitled-earbud_back_taper.stl
untitled.hipnc
vcredist_x86.exe
[1m[34mvogelsbergerlabtools[m[m
[1m[34mvowpal_wabbit[m[m
[1m[34mvowpal_wabbit2[m[m
[1m[34mvowpal_wabbit3[m[m
[1m[34mwebcam-pulse-detector[m[m
[1m[34mweight-loss[m[m
[1m[32mwinetricks[m[m
[1m[34mworkshop2016[m[m
[1m[34mwwt-web-client[m[m
[1m[34mwwt-web-client_OLD[m[m
[1m[34mwwt-web-client_cp[m[m
[1m[34mwwt-website[m[m
[1m[34myt[m[m
[1m[34myt_for_python3_jn[m[m
yt_native.png
[1m[34myt_surfaces[m[m
[1m[34mytini-repo[m[m
[1m[34mytini_practice[m[m
###Markdown
So now we can see a list of all the stuff that is in there. To make a new directory we can use the command "mkdir" on both Macs and Windows:
###Code
!mkdir /Users/jillnaiman1/MyNewDir
###Output
mkdir: /Users/jillnaiman1/MyNewDir: File exists
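###Markdown
Note the "File exists" message - the directory was already there, so `mkdir` complains. A pure-Python alternative that works on both Macs & Windows and doesn't mind if the folder already exists (a minimal added sketch - swap in your own path):
###Code
import os
# make the folder if needed; exist_ok=True means no complaint if it is already there
os.makedirs('/Users/jillnaiman1/MyNewDir', exist_ok=True)
###Output
_____no_output_____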
###Markdown
Now I can save my image to this directory like so:
###Code
plt.plot(x,y)
plt.xlabel('x value from 0 to PI')
plt.ylabel('sin(x)')
plt.title('My first plot!')
plt.savefig('/Users/jillnaiman1/MyNewDir/myAMAZINGsinplot.png')
###Output
_____no_output_____
###Markdown
Now I can see this file in that directory with "ls":
###Code
!ls /Users/jillnaiman1/MyNewDir
###Output
myAMAZINGsinplot.png
###Markdown
And I can also open it from there:
###Code
!open /Users/jillnaiman1/MyNewDir/myAMAZINGsinplot.png
###Output
_____no_output_____
|
examples/hybrid/GeneralizedLinearModel.ipynb
|
###Markdown
Generalized Linear Model
###Code
from pearl.bayesnet import from_yaml
from pearl.data import VariableData, BayesianNetworkDataset
from pearl.common import NodeValueType
import torch
import matplotlib.pyplot as plt
from pyro.optim import Adam
import graphviz
###Output
_____no_output_____
###Markdown
In this notebook we define and train a simple generalized linear model. Let us consider a model with 5 variables. The dependent variable 'E' has discrete parents 'A', 'B' and continuous parents 'C', 'D'. For each combination of discrete parents, 'E' is sampled from a categorical distribution (with 2 categories) obtained by applying a softmax function to linear combinations of the continuous parents 'C', 'D'. The exact data generating mechanism is shown below.
###Code
N = 5000
# 'A' and 'B' are sampled from categorical distributions
# 'C' and 'D' are sampled from Normal distributions
a = torch.distributions.Categorical(torch.tensor([0.3, 0.7])).sample((N,)).float()
b = torch.distributions.Categorical(torch.tensor([0.6, 0.4])).sample((N,)).float()
c = torch.distributions.Normal(0., 1.).sample((N,)).float()
d = torch.distributions.Normal(3, 1.).sample((N,)).float()
parents_stack = torch.stack([c,d], dim=-1).unsqueeze(-1).expand(-1, -1, 2)
mask1 = torch.eq(a, 0.) & torch.eq(b, 0.)
mask2 = torch.eq(a, 0.) & torch.eq(b, 1.)
mask3 = torch.eq(a, 1.) & torch.eq(b, 0.)
mask4 = torch.eq(a, 1.) & torch.eq(b, 1.)
# 'E' is sampled using a generalized linear model.
# For each combination of values of 'A' and 'B', we use a softmax function applied to two linear functions of 'C' and 'D' to generate a categorical distribution.
# 'E' is finally sampled from this categorical distribution.
e = torch.empty((N,))
bias = torch.tensor([
[
[
[0.0, 0.0]
],
[
[0.0, 0.0]
]
],
[
[
[0.0, 0.0]
],
[
[0.0, 0.0]
]
],
])
weights = torch.tensor(
[
[
[
[1.0, 1.25],
[1.0, 1.50],
],
[
[1.0, 1.75],
[1.0, 2.00],
],
],
[
[
[2.00, 1.0],
[1.75, 1.0],
],
[
[1.50, 1.0],
[1.25, 1.0],
]
],
],
)
e[mask1] = torch.distributions.Categorical(
torch.softmax(
torch.sum(parents_stack * weights[0][0], dim=-2) + bias[0][0],
dim=-1
)
).sample().float()[mask1]
e[mask2] = torch.distributions.Categorical(
torch.softmax(
torch.sum(parents_stack * weights[0][1], dim=-2) + bias[0][1],
dim=-1
)
).sample().float()[mask2]
e[mask3] = torch.distributions.Categorical(
torch.softmax(
torch.sum(parents_stack * weights[1][0], dim=-2) + bias[1][0],
dim=-1
)
).sample().float()[mask3]
e[mask4] = torch.distributions.Categorical(
torch.softmax(
torch.sum(parents_stack * weights[1][1], dim=-2) + bias[1][1],
dim=-1
)
).sample().float()[mask4]
assert a.shape == b.shape == c.shape == d.shape == e.shape == (N,)
###Output
_____no_output_____
###Markdown
Declarative model specification The declarative specification of the model and the graph structure are shown below.
###Code
! cat glmodel.yaml
model = from_yaml('glmodel.yaml')
model.write_dot('glmodel.dot')
graphviz.Source.from_file('glmodel.dot')
###Output
_____no_output_____
###Markdown
Model training We will now train the model defined using pearl on the generated data and see how well it recovers the parameters of the generalized linear model. First we need to package the tensors as a `BayesianNetworkDataset` and divide it into a train/test split.
###Code
variable_dict = {
'A': VariableData(
NodeValueType.CATEGORICAL,
a,
['a', 'b'],
),
'B': VariableData(
NodeValueType.CATEGORICAL,
b,
['a', 'b'],
),
'C': VariableData(
NodeValueType.CONTINUOUS,
c,
),
'D': VariableData(
NodeValueType.CONTINUOUS,
d,
),
'E': VariableData(
NodeValueType.CATEGORICAL,
e,
['a', 'b'],
)
}
dataset = BayesianNetworkDataset(variable_dict)
train_dataset, test_dataset = dataset.split((4000, 1000))
###Output
_____no_output_____
###Markdown
Next we will train the model using SVI to optimize the parameters.
###Code
adam = Adam({'lr': 0.005, 'betas': (0.95, 0.995)})
losses=model.train(
dataset=train_dataset,
optimizer=adam,
num_steps=10000,
)
plt.plot(losses)
###Output
_____no_output_____
###Markdown
Examine parameters
###Code
# CPD over A is Cat(0.3, 0.7 )
alphas = model.get_node_object('A').guide_alpha
print(alphas / torch.sum(alphas, dim=-1, keepdim=True))
# CPD over B is Cat(0.6, 0.4 )
alphas = model.get_node_object('B').guide_alpha
print(alphas / torch.sum(alphas, dim=-1, keepdim=True))
# CPD over C is N(0., 1.)
node_object = model.get_node_object('C')
mean = node_object.guide_mean_mean
scale = node_object.guide_scale
print(mean)
print(scale)
# CPD over C is N(3., 1.)
node_object = model.get_node_object('D')
mean = node_object.guide_mean_mean
scale = node_object.guide_scale
print(mean)
print(scale)
# CPD over E
node_object = model.get_node_object('E')
node_weights = node_object.guide_weights_mean
node_bias = node_object.guide_bias_mean
print(node_bias)
print(node_weights)
###Output
tensor([[[-0.1524, 0.1524],
[ 0.4244, -0.4244]],
[[ 0.1649, -0.1648],
[ 0.0865, -0.0865]]], requires_grad=True)
tensor([[[[-0.1215, 0.1215],
[-0.2178, 0.2178]],
[[-0.4403, 0.4403],
[-0.7369, 0.7369]]],
[[[ 0.5110, -0.5110],
[ 0.3410, -0.3410]],
[[ 0.2721, -0.2721],
[ 0.1013, -0.1013]]]], requires_grad=True)
###Markdown
The bias vector is close to zero since we did not use any bias term in the data generating mechanism for 'E'. While the exact weights may not be identical (the softmax is invariant to adding the same constant to every logit, so the weights are only identified up to such shifts), it's important that they induce the same categorical distribution through the softmax function. We verify that by computing the softmax of the two weight tensors.
###Code
print(torch.softmax(torch.sum(weights, dim=-2), dim=-1))
print(torch.softmax(torch.sum(node_weights, dim=-2) + node_bias, dim=-1))
###Output
tensor([[[0.3208, 0.6792],
[0.1480, 0.8520]],
[[0.8520, 0.1480],
[0.6792, 0.3208]]])
tensor([[[0.2722, 0.7278],
[0.1816, 0.8184]],
[[0.8843, 0.1157],
[0.7150, 0.2850]]], grad_fn=<SoftmaxBackward>)
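###Markdown
As a quick sanity check of the shift-invariance point above (an added illustration, not part of the original analysis): adding the same constant to every logit leaves the softmax output unchanged.
###Code
z = torch.tensor([1.0, 2.0, 3.0])
# softmax of some logits...
print(torch.softmax(z, dim=-1))
# ...is unchanged when the same constant is added to every logit
print(torch.softmax(z + 10.0, dim=-1))
###Output
_____no_output_____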
###Markdown
Predictions We can use the trained model to answer various probabilistic queries. The standard query is to predict 'E' conditioned on the remaining variables.
###Code
_, map_assignments, _ = model.predict(
dataset=test_dataset,
target_variables=['E'],
)
acc = float(torch.eq(test_dataset['E'], map_assignments['E']).sum())
print(f'accuracy is {acc / len(test_dataset)}')
###Output
accuracy is 0.826
|
ExampleExperiments/hog3/hog3.ipynb
|
###Markdown
Goethe Universität, FB 05 Psychologie (Department of Psychology). Winter semester 2017/2018. PsyMSc 4: Python für Psychologen (Python for Psychologists), Dr. Jona Sassenhagen. Anonymous
###Code
%matplotlib inline
import pandas as pd
import matplotlib as mpl
import numpy as np
import seaborn as sns
from scipy import stats
from glob import glob
from scipy.stats.stats import pearsonr
from scipy.stats import ttest_ind
from scipy.stats import f_oneway as anova
all_dfs = list()
for ii, file in enumerate(glob("./csv/*")):
if file.endswith(".csv"):
try:
df = pd.read_csv(file)
all_dfs.append(df)
except Exception:
pass
df = pd.concat(all_dfs)
df
###Output
_____no_output_____
###Markdown
kicking out outliers: invalid trials
###Code
df= df.query("rt != -999") # kicking out invalid trials (without response)
df
list_all_results = [] # list in which all results are stored
list_discussion = [] # list in which all significant effects are stored
###Output
_____no_output_____
###Markdown
accuracy-outliers
###Code
# exclusion of all subjects with an accuracy below 90 %
bad_subj = []
for subj in df["subject"].unique():
if df.query("subject == @subj").mean()["correct"] < 0.90:
bad_subj.append(subj)
else:
continue
df = df.query("subject not in @bad_subj")
print("Following subjects are excluded from further analysis because their mean accuracy is below 90 %: ", bad_subj)
# mean rt etc. for each subject
subject_means_all = df.groupby("subject").mean()
subject_means_all
overall_mean = subject_means_all["rt"].mean()
print("mean rt overall:", overall_mean)
overall_sd = subject_means_all["rt"].std()
print("sd rt overall:", overall_sd)
###Output
mean rt overall: 0.3971274934977512
sd rt overall: 0.04763491215296197
###Markdown
rt/accuracy distribution
###Code
fig, axs = mpl.pyplot.subplots(ncols = 2)
sns.distplot(subject_means_all["rt"], ax = axs[0]).set_title("rt")
sns.distplot(subject_means_all["correct"], ax = axs[1]).set_title("accuracy")
fig, axs = mpl.pyplot.subplots(ncols = 2)
sns.boxplot(y = subject_means_all["rt"], ax = axs[0]).set_title("rt")
sns.boxplot(y = subject_means_all["correct"], ax = axs[1]).set_title("accuracy")
###Output
_____no_output_____
###Markdown
creating a subset for only one house of Hogwarts
###Code
# further analysis will be performed only on data of this certain house
house = "Hufflepuff"
df = df.query("house == @house")
subject_means_all = df.groupby("subject").mean()
###Output
_____no_output_____
###Markdown
descriptive analysis: age, temperature
###Code
# maximum, minimum, mean, standard deviation for age and temperature
list_age_temp = ["age", "temperature"]
for col in list_age_temp:
mean = subject_means_all[col].mean()
sd = subject_means_all[col].std()
max_ = subject_means_all[col].max()
min_ = subject_means_all[col].min()
print( """{} is in a range from {} to {}. Mean {} is {} with a standard deviation of {}."""
.format(col, min_, max_, col, round(mean,2), round(sd,2)),"\n")
###Output
age is in a range from 22.0 to 44.0. Mean age is 27.88 with a standard deviation of 5.42.
temperature is in a range from 34.0 to 43.0. Mean temperature is 38.38 with a standard deviation of 2.37.
###Markdown
valence
###Code
# creates a new column with neutral/positive/negative for trials in which the target appeared on the side of the neutral/positive/negative cue
# boolean version doesn't work because the variable isn't dichotomous --> neutral, negative, positive
"""'
' *!!! Instructor Notes !!!*
The original student's code below is correct, but unnecessarily slow (and not idiomatic Python/Pandas).
A much faster replacement has been added so as not to slow down code execution on slow computers.
original version:'
df["cong_side_val"] = "neutral"
for index, (neutral_side, target_side) in enumerate(zip(df["cue_neutral_side"],df["target_side"])):
if neutral_side != target_side:
df["cong_side_val"].iloc[index] = df["cue_valence"].iloc[index]
df
"""
df["cong_side_val"] = df["cue_valence"]
df.loc[df["cue_neutral_side"] == df["target_side"], "cong_side_val"] = "neutral"
replace_idx = df["cue_neutral_side"]
# mean rts for the newly defined valence categories
sub_neutral = df.query("cong_side_val =='neutral'")
sub_positive = df.query("cong_side_val == 'positive'")
sub_negative = df.query("cong_side_val == 'negative'")
subj_neutral_means = sub_neutral.groupby("subject").mean()["rt"]
subj_positive_means = sub_positive.groupby("subject").mean()["rt"]
subj_negative_means = sub_negative.groupby("subject").mean()["rt"]
print("mean_rt_neutral:",subj_neutral_means.mean(),"sd_rt_neutral:",subj_neutral_means.std())
print("mean_rt_positive:",subj_positive_means.mean(),"sd_rt_positive:",subj_positive_means.std())
print("mean_rt_negative:",subj_negative_means.mean(), "sd_rt_negative:",subj_negative_means.std())
# plotting rts to to have a look at the distribution
fig, axs = mpl.pyplot.subplots(ncols = 3)
sns.boxplot(y = subj_neutral_means, ax = axs[0]).set_title("neutral")
sns.boxplot(y = subj_positive_means, ax = axs[1]).set_title("positive")
sns.boxplot(y = subj_negative_means, ax = axs[2]).set_title("negative")
fig, axs = mpl.pyplot.subplots(ncols = 3)
sns.distplot(subj_neutral_means, ax = axs[0]).set_title("neutral")
sns.distplot(subj_positive_means, ax = axs[1]).set_title("positive")
sns.distplot(subj_negative_means, ax = axs[2]).set_title("negative")
###Output
mean_rt_neutral: 0.3943348251670536 sd_rt_neutral: 0.033709865556719565
mean_rt_positive: 0.4049403695219734 sd_rt_positive: 0.07685557698627946
mean_rt_negative: 0.39258582559564026 sd_rt_negative: 0.05730546741865581
###Markdown
cross vs. finger-hole
###Code
# mean rts for fixation sign categories
sub_cross = df.query("fixation == 'cross'")
sub_hole = df.query("fixation == 'hole'")
subj_cross_means = sub_cross.groupby("subject").mean()["rt"]
subj_hole_means = sub_hole.groupby("subject").mean()["rt"]
print("mean_rt_cross:",subj_cross_means.mean(),"sd_rt_cross:",subj_cross_means.std())
print("mean_rt_hole:",subj_hole_means.mean(),"sd_rt_hole:",subj_hole_means.std())
# plotting rt distribution for finger vs. cross trials
fig, axs = mpl.pyplot.subplots(ncols = 2)
sns.boxplot(y = subj_cross_means, ax = axs[0]).set_title("cross")
sns.boxplot(y = subj_hole_means, ax = axs[1]).set_title("hole")
fig, axs = mpl.pyplot.subplots(ncols = 2)
sns.distplot(subj_cross_means, ax = axs[0]).set_title("cross")
sns.distplot(subj_hole_means, ax = axs[1]).set_title("hole")
###Output
mean_rt_cross: 0.3924249147857533 sd_rt_cross: 0.04438673225797699
mean_rt_hole: 0.4021177184297322 sd_rt_hole: 0.06563617577409724
###Markdown
Distributions regarding valence and fixation sign seem to be slightly right-skewed. We assume that this also goes for the other subsets involved in further analysis... further checking of normal distribution of rt
###Code
# creates series with mean rt/age/accuracy per subject
mean_rt_subj = subject_means_all["rt"]
mean_age_subj = subject_means_all["age"]
mean_accuracy_subj = subject_means_all["correct"]
# Kolmogorov Smirnov test to check on whether rts are normally distributed over all trials
# we won't repeat this for each subset involved in further analysis...
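# (added note: stats.kstest(m, "norm") compares against a *standard* normal N(0, 1) unless loc/scale
# are supplied via the args parameter, so unstandardized variables such as rt or age will
# almost inevitably come out as "not normal" here)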
means = [mean_rt_subj, mean_age_subj, mean_accuracy_subj]
variable = ["RT", "age", "accuracy"]
for m, var in zip(means, variable):
ks, p = stats.kstest(m, "norm")
print( "According to the Kolmogorov-Smirnov test the distribution of {} is ".format(var) +("not normal." if p < 0.05 else "normal.")+ "( ks =",ks, ", p =", p, ").")
if p < 0.05:
print("Though according to KS-test {} is not normally distributed we do not think that normal distribution is violated in a way that exceeds the robustness of the used statistical methods.".format(var),"\n")
###Output
According to the Kolmogorov-Smirnov test the distribution of RT is not normal.( ks = 0.6334204742802089 , p = 1.206466038183862e-10 ).
Though according to KS-test RT is not normally distributed we do not think that normal distribution is violated in a way that exceeds the robustness of the used statistical methods.
According to the Kolmogorov-Smirnov test the distribution of age is not normal.( ks = 1.0 , p = 0.0 ).
Though according to KS-test age is not normally distributed we do not think that normal distribution is violated in a way that exceeds the robustness of the used statistical methods.
According to the Kolmogorov-Smirnov test the distribution of accuracy is not normal.( ks = 0.8263912196613754 , p = 0.0 ).
Though according to KS-test accuracy is not normally distributed we do not think that normal distribution is violated in a way that exceeds the robustness of the used statistical methods.
###Markdown
inference statistics: correlations
###Code
scatter_age_rt = sns.regplot(x= mean_age_subj, y= mean_rt_subj)
scatter_age_rt
###Output
_____no_output_____
###Markdown
homoscedasticity should be more or less given
###Code
# correlation rt and age
r, p = pearsonr(mean_rt_subj, mean_age_subj)
statistic = ("r=",r,"p=",p)
result = ("RT and age " + ("do" if p < .05 else "do not") + " correlate significantly.")
list_all_results.append([result, statistic])
if p < .05:
list_discussion.append("a correlation of reaction time and age")
print(result)
print(statistic)
scatter_acc_rt = sns.regplot(x= mean_accuracy_subj, y= mean_rt_subj)
scatter_acc_rt
###Output
_____no_output_____
###Markdown
homoscedasticity....
###Code
# correlation rt and accuracy
r, p = pearsonr(mean_rt_subj, mean_accuracy_subj)
statistic = ("r=",r,"p=",p)
result = ("RT and accuracy " + ("do" if p < .05 else "do not") + " correlate significantly.")
list_all_results.append([result, statistic])
if p < .05:
list_discussion.append("a correlation of reaction time and accuracy")
print(result)
print(statistic)
###Output
RT and accuracy do not correlate significantly.
('r=', -0.14252628314886848, 'p=', 0.48733054033522605)
###Markdown
t-tests
###Code
# temperature for each subject in a df with mean rt, median serves as cutoff for cold/warm
mean_rts_temp= df.groupby(["subject", "temperature"]).mean()["rt"].to_frame()
temp_cutoff = df.groupby("subject").mean()["temperature"].median()
print("cut-off for temperature:",temp_cutoff, "Celsius Degree")
# creating warm/cold subsets according to cutoff and checking on homoscedasticity
cold = mean_rts_temp.query("temperature < @temp_cutoff")["rt"]
warm = mean_rts_temp.query("temperature >= @temp_cutoff")["rt"]
mean_rt_cold= cold.mean()
mean_rt_warm = warm.mean()
lev, p = stats.levene(cold, warm)
message = "According to the Levene test homogenity of variance is"
print(message, "not given" if p < 0.05 else "given.", "( p =", p,")")
# ttest for differences in rt between people preferring cold/warm water
t,p = ttest_ind(cold,warm)
statistic = ("t=",t,"p=",p)
result = ("RTs between people preferring cold and people preferring warm water "
+ ("do" if p < .05 else "do not") + " differ significantly.")
print(result)
print(statistic)
if p< .05:
result_2 = ("""People preferring cold water (mean rt = {} s) react """.format(round(mean_rt_cold, 2))
+ ("quicker " if mean_rt_cold < mean_rt_warm else "slowlier ")+
"""than people preferring warm water (mean rt =", {} s").""".format(round(mean_rt_warm, 2)))
list_all_results.append([result, statistic, result_2])
list_discussion.append("the preferred shower temperature")
print(result_2)
else:
list_all_results.append([result, statistic])
###Output
RTs between people preferring cold and people preferring warm water do not differ significantly.
('t=', -1.1633374398039285, 'p=', 0.256128880826745)
###Markdown
cross vs. finger-hole
###Code
# creating subsets for ttest and checking on homoscedasticity
finger_hole_rts = df.query("fixation == 'hole'")
cross_rts = df.query("fixation == 'cross'")
finger_mean_rts_subj = finger_hole_rts.groupby("subject").mean()["rt"] # mean rt in trials with finger-hole for each subject
cross_mean_rts_subj = cross_rts.groupby("subject").mean()["rt"]
mean_rt_finger = finger_mean_rts_subj.mean()
mean_rt_cross = cross_mean_rts_subj.mean()
lev, p = stats.levene(finger_mean_rts_subj, cross_mean_rts_subj)
message = "According to the Levene test homogenity of variance is"
print(message, "not given" if p < 0.05 else "given.", "( p =", p,")")
# checking on whether rt differs between trials with cross and trials with finger hole as fixation sign
t,p = ttest_ind(finger_mean_rts_subj, cross_mean_rts_subj)
statistic = ("t=",t,"p=",p)
result = ("RTs for different kinds of fixation-signs(cross vs. finger-hole) " + ("do" if p < .05 else "do not")
+ " differ significantly.")
print(result)
print(statistic)
if p< .05:
result_2 = ("""Subjects react """ + ("quicker " if mean_rt_cold < mean_rt_warm else "slowlier ")+
"""when the fixation-sign is a finger-hole (mean rt = {} ms)""".format(round(mean_rt_finger, 2))
+""" than when it is a cross (mean rt =", {} ms")""".format(round(mean_rt_cross, 2)))
list_all_results.append([result, statistic, result_2])
list_discussion.append("the kind of the fixation sign")
print(result_2)
else:
list_all_results.append([result, statistic])
###Output
RTs for different kinds of fixation-signs(cross vs. finger-hole) do not differ significantly.
('t=', 0.6237569786862032, 'p=', 0.5356226612446039)
###Markdown
finger-direction congruent with target-side vs. incongruent
###Code
# creating a new column/variable which is "congruent" when fixation direction is congruent with target side or "incongruent" respectively
# boolean version doesn't work since the variable isn't dichotomous --> incongruent, congruent, None (trials with cross-fixation)
"""'
' *!!! Instructor Notes !!!*
Again, the original student's code in following cell is correct, but unnecessarily slow and unidiomatic.
A much faster replacement has been added to not slow down code execution on slow computers.
original version:'
df["cong_side_fix_dir"] = "incongruent"
for index, (fixation_direction, target_side) in enumerate(zip(df["fixation_direction"],df["target_side"])):
if (fixation_direction == target_side):
df["cong_side_fix_dir"].iloc[index] = "congruent"
elif fixation_direction == "None":
df["cong_side_fix_dir"].iloc[index] = df["fixation_direction"].iloc[index]
df
"""
def mapper(columns):
fixation_direction, target_side = columns
if fixation_direction == None:
return fixation_direction
elif fixation_direction == target_side:
return "congruent"
else:
return "incongruent"
df["cong_side_fix_dir"] = df[["fixation_direction", "target_side"]].apply(mapper, axis=1)
# creating subsets for ttest and checking on homoscedasticity
hole_dir_cong_rts = df.query("target_side == fixation_direction")
hole_dir_incong_rts = df.query("target_side != fixation_direction and fixation_direction != None")
hole_dir_cong_rts_subj = hole_dir_cong_rts.groupby("subject").mean()["rt"] # mean rt in trials with congruent traget side and finger direction for each subject
hole_dir_incong_rts_subj = hole_dir_incong_rts.groupby("subject").mean()["rt"]
lev, p = stats.levene(hole_dir_cong_rts_subj, hole_dir_incong_rts_subj)
message = "According to the Levene test homogenity of variance is"
print(message, "not given" if p < 0.05 else "given.", "( p =", p,")")
# checks on whether rt for trials in which finger-hole-direction and target side are congruent differs from rt of incongruent trials
t,p = ttest_ind(hole_dir_cong_rts_subj, hole_dir_incong_rts_subj)
statistic = ("t=",t,"p=",p)
result = ("RTs for trials in which finger-hole-direction and target-side are congruent or incongruent respectively " + ("do" if p < .05 else "do not") + " differ significantly.")
list_all_results.append([result, statistic])
if p < .05:
list_discussion.append("congruence of finger-hole direction and target side")
print(result)
print(statistic)
###Output
RTs for trials in which finger-hole-direction and target-side are congruent or incongruent respectively do not differ significantly.
('t=', 0.7008179530101276, 'p=', 0.4866653856484685)
###Markdown
ANOVA: neutral vs. positive vs. negative (overall, 40 ms, 1000 ms) - data preparation
###Code
# lists with mean rts for trials with negative/positive/neutral cue words that appeared on the same side as the target for each subject
durations = ["overall", 0.04, 1.00]
cong_valences = []
cong_valences_short = []
cong_valences_long = []
for dur in durations:
for cond in ("neutral", "negative", "positive"):
if dur == "overall":
rt_val = df.query("cong_side_val == '" + cond + "'")
mean_rt_val_subj = rt_val.groupby("subject").mean()["rt"]
cong_valences.append(mean_rt_val_subj)
elif dur == 0.04:
rt_val = df.query("cong_side_val == '" + cond + "' & duration == 0.04")
mean_rt_val_subj = rt_val.groupby("subject").mean()["rt"]
cong_valences_short.append(mean_rt_val_subj)
else:
rt_val = df.query("cong_side_val == '" + cond + "' & duration == 1.00")
mean_rt_val_subj = rt_val.groupby("subject").mean()["rt"]
cong_valences_long.append(mean_rt_val_subj)
neutral, negative, positive = cong_valences
neutral_short, negative_short, positive_short = cong_valences_short
neutral_long, negative_long, positive_long = cong_valences_long
# homoscedasticity...
valences = [[neutral, negative, positive], [neutral_short, negative_short, positive_short], [neutral_long, negative_long, positive_long]]
durations = ["(overall)", "(40 ms)", "(1000 ms)"]
for val, dur in zip(valences, durations):
lev, p = stats.levene(val[0], val[1], val[2])
print("According to the Levene test homogenity of variance is " + ("not given " if p < 0.05 else "given ") + "{}.(p = {})".format(dur, p))
###Output
According to the Levene test homogeneity of variance is given (overall).(p = 0.7588507712972801)
According to the Levene test homogeneity of variance is given (40 ms).(p = 0.706997809192438)
According to the Levene test homogeneity of variance is given (1000 ms).(p = 0.7630220483681297)
###Markdown
ANOVAs
###Code
# anovas to check on effects of valence (overall, 40 ms, 1000 ms)
for val, dur in zip(valences, durations):
F, p = anova(val[0],val[1],val[2])
statistic = ("F =", F,"p =", p)
result = ("RTs to words with different valences (neutral, negative, positive) that are congruent with target side " + ("do" if p < .05 else "do not") + " differ significantly {}.".format(dur))
list_all_results.append([result, statistic])
if p < .05:
list_discussion.append("the valence of cue words that were presented on the same side as the target {}".format(dur))
print(result)
print(statistic,"\n")
###Output
RTs to words with different valences (neutral, negative, positive) that are congruent with target side do not differ significantly (overall).
('F =', 0.3375820574298715, 'p =', 0.7145719214526068)
RTs to words with different valences (neutral, negative, positive) that are congruent with target side do not differ significantly (40 ms).
('F =', 1.166559782696912, 'p =', 0.31702244739027224)
RTs to words with different valences (neutral, negative, positive) that are congruent with target side do not differ significantly (1000 ms).
('F =', 0.08239181284254808, 'p =', 0.9209942987233681)
###Markdown
overview of results / discussion
###Code
# overview results
for index in list_all_results:
for i in index:
print(i,"\n")
print("\n")
#discussion
length_all = len(list_all_results)
length = len(list_discussion)
print("We could obtain "+ ("no" if length == 0 else str(length)) + """ significant effect(s) among the students of {}""".format(house)
+ (", which is at total odds with what is indicated by Rowling (1997,1998,1999,2000,2003,2005,2007,2016)." if length ==0 else "."))
if length > 1:
last = list_discussion.pop()
print("The obtained significant effects referred to"+ (",".join(list_discussion))+"and ", last,", which is indeed "
+ ("perfectly in line with what we expected and the observations of Rowling (1997,1998,1999,2000,2003,2005,2007,2016)." if length == length_all else
"quite nice and compatible with the reports of Rowling(1997,1998,1999,2000,2003,2005,2007,2016)."))
elif length ==1:
print("The obtained significant effect referred to "+ (",".join(list_discussion))+", which is indeed "
+ ("perfectly in line with what we expected and the observations of Rowling (1997,1998,1999,2000,2003,2005,2007,2016)."))
else:
print("\n","What a bummer...")
###Output
We could obtain no significant effect(s) among the students of Hufflepuff, which is at total odds with what is indicated by Rowling (1997,1998,1999,2000,2003,2005,2007,2016).
What a bummer...
|
notebooks/yahoo-api-example.ipynb
|
###Markdown
Yahoo API ExampleThis notebook is an example of using the Yahoo API to get fantasy sports data.
###Code
from rauth import OAuth2Service
import webbrowser
import json
###Output
_____no_output_____
###Markdown
**Prerequisite**First we need to create a Yahoo app at https://developer.yahoo.com/apps/ and select "Fantasy Sports - Read" for API Permissions. Then we can get the Client ID (Consumer Key) and Client Secret (Consumer Secret).
###Code
clientId= "dj0yJmk9M3gzSWJZYzFmTWZtJmQ9WVdrOU9YcGxTMHB4TXpnbWNHbzlNQS0tJnM9Y29uc3VtZXJzZWNyZXQmeD1kZg--"
clinetSecrect="dbd101e179b3d129668965de65d05c02df42333d"
###Output
_____no_output_____
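###Markdown
Hardcoding credentials in a notebook is convenient for a quick demo, but you may prefer to load them from environment variables instead - a minimal added sketch (the variable names `YAHOO_CLIENT_ID` / `YAHOO_CLIENT_SECRET` are just example names):
###Code
import os
# optionally read the app credentials from environment variables instead of hardcoding them
clientId = os.environ.get("YAHOO_CLIENT_ID", clientId)
clinetSecrect = os.environ.get("YAHOO_CLIENT_SECRET", clinetSecrect)
###Output
_____no_output_____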
###Markdown
**Step 1: Create an OAuth object**
###Code
oauth = OAuth2Service(client_id = clientId,
client_secret = clinetSecrect,
name = "yahoo",
access_token_url = "https://api.login.yahoo.com/oauth2/get_token",
authorize_url = "https://api.login.yahoo.com/oauth2/request_auth",
base_url = "http://fantasysports.yahooapis.com/fantasy/v2/")
###Output
_____no_output_____
###Markdown
**Step 2: Generate the authorize URL, and then get the verify code**For this script, the redirect_uri is set to 'oob', and we open a page in the browser to get the verify code. For a web app server, we can instead set the redirect URI to our callback domain during Yahoo app creation.
###Code
params = {
'response_type': 'code',
'redirect_uri': 'oob'
}
authorize_url = oauth.get_authorize_url(**params)
webbrowser.open(authorize_url)
code = input('Enter code: ')
###Output
Enter code: gnybkmd
###Markdown
**Step 3: Get session with the code**
###Code
data = {
'code': code,
'grant_type': 'authorization_code',
'redirect_uri': 'oob'
}
oauth_session = oauth.get_auth_session(data=data,
decoder= lambda payload : json.loads(payload.decode('utf-8')))
###Output
_____no_output_____
###Markdown
**Example to get user Info**
###Code
user_url='https://fantasysports.yahooapis.com/fantasy/v2/users;use_login=1'
resp = oauth_session.get(user_url, params={'format': 'json'})
resp.json()
user_guid=resp.json()['fantasy_content']['users']['0']['user'][0]['guid']
user_guid
###Output
_____no_output_____
###Markdown
**Example to query the NBA teams of the logged-in user.**
###Code
team_url = 'https://fantasysports.yahooapis.com/fantasy/v2/users;use_login=1/games;game_keys=nba/teams'
resp = oauth_session.get(team_url, params={'format': 'json'})
teams = resp.json()['fantasy_content']['users']['0']['user'][1]['games']['0']['game'][1]['teams']
teams
team_count = int(teams['count'])
team_count
for idx in range(0,team_count):
team = teams[str(idx)]['team'][0][19]['managers']
print(team, '\n')
###Output
[{'manager': {'manager_id': '2', 'nickname': '邪', 'guid': 'EQMHXVGZ65XDJ5G57ZRRBKXUTM', 'is_current_login': '1', 'image_url': 'https://ct.yimg.com/cy/4556/23861899267_82a6e0_64sq.jpg'}}]
[{'manager': {'manager_id': '17', 'nickname': '邪', 'guid': 'EQMHXVGZ65XDJ5G57ZRRBKXUTM', 'is_current_login': '1', 'image_url': 'https://ct.yimg.com/cy/4556/23861899267_82a6e0_64sq.jpg'}}]
[{'manager': {'manager_id': '5', 'nickname': '邪', 'guid': 'EQMHXVGZ65XDJ5G57ZRRBKXUTM', 'is_current_login': '1', 'image_url': 'https://ct.yimg.com/cy/4556/23861899267_82a6e0_64sq.jpg'}}]
###Markdown
**Example to get the NBA leagues of the logged-in user**
###Code
league_url = 'https://fantasysports.yahooapis.com/fantasy/v2/users;use_login=1/games;game_keys=nba/leagues'
resp = oauth_session.get(league_url, params={'format': 'json'})
leagues = resp.json()['fantasy_content']['users']['0']['user'][1]['games']['0']['game'][1]['leagues']
leagues
league_count = int(leagues['count'])
league_count
for idx in range(0,league_count):
league = leagues[str(idx)]['league'][0]
print(league, '\n')
###Output
{'league_key': '375.l.573', 'league_id': '573', 'name': 'Never Ending', 'url': 'https://basketball.fantasysports.yahoo.com/nba/573', 'draft_status': 'postdraft', 'num_teams': 20, 'edit_key': '2018-03-13', 'weekly_deadline': 'intraday', 'league_update_timestamp': '1520922304', 'scoring_type': 'head', 'league_type': 'private', 'renew': '364_817', 'renewed': '', 'iris_group_chat_id': 'TJA2CBKGARGW7H3THMJ2WKTA4Y', 'allow_add_to_dl_extra_pos': 1, 'is_pro_league': '0', 'is_cash_league': '0', 'current_week': 21, 'start_week': '1', 'start_date': '2017-10-17', 'end_week': '23', 'end_date': '2018-04-01', 'game_code': 'nba', 'season': '2017'}
{'league_key': '375.l.1039', 'league_id': '1039', 'name': 'Alpha2017', 'url': 'https://basketball.fantasysports.yahoo.com/nba/1039', 'draft_status': 'postdraft', 'num_teams': 18, 'edit_key': '2018-03-14', 'weekly_deadline': '', 'league_update_timestamp': '1520922491', 'scoring_type': 'head', 'league_type': 'private', 'renew': '364_24740', 'renewed': '', 'iris_group_chat_id': 'AYFAMW6K7FAMBEKTQHCF33OSMU', 'allow_add_to_dl_extra_pos': 0, 'is_pro_league': '0', 'is_cash_league': '0', 'current_week': 21, 'start_week': '1', 'start_date': '2017-10-17', 'end_week': '23', 'end_date': '2018-04-01', 'game_code': 'nba', 'season': '2017'}
{'league_key': '375.l.15031', 'league_id': '15031', 'name': 'New Beginning 2017', 'url': 'https://basketball.fantasysports.yahoo.com/nba/15031', 'draft_status': 'postdraft', 'num_teams': 18, 'edit_key': '2018-03-14', 'weekly_deadline': '', 'league_update_timestamp': '1520922528', 'scoring_type': 'head', 'league_type': 'private', 'renew': '364_24682', 'renewed': '', 'iris_group_chat_id': '2PBWBVXAMZGHXE3LVVUZ5SYEHQ', 'allow_add_to_dl_extra_pos': 0, 'is_pro_league': '0', 'is_cash_league': '0', 'current_week': 21, 'start_week': '1', 'start_date': '2017-10-17', 'end_week': '24', 'end_date': '2018-04-11', 'game_code': 'nba', 'season': '2017'}
###Markdown
**Example to get league settings**
###Code
settings_url = 'https://fantasysports.yahooapis.com/fantasy/v2/game/nba/leagues;league_keys=375.l.1039/settings'
resp = oauth_session.get(settings_url, params={'format': 'json'})
settings = resp.json()['fantasy_content']['game'][1]['leagues']['0']['league'][1]['settings'][0]
settings
stat_categories = settings['stat_categories']['stats']
for category in stat_categories:
print(category['stat'], '\n')
###Output
{'stat_id': 9004003, 'enabled': '1', 'name': 'Field Goals Made / Field Goals Attempted', 'display_name': 'FGM/A', 'sort_order': '1', 'position_type': 'P', 'stat_position_types': [{'stat_position_type': {'position_type': 'P', 'is_only_display_stat': '1'}}], 'is_only_display_stat': '1'}
{'stat_id': 5, 'enabled': '1', 'name': 'Field Goal Percentage', 'display_name': 'FG%', 'sort_order': '1', 'position_type': 'P', 'stat_position_types': [{'stat_position_type': {'position_type': 'P'}}]}
{'stat_id': 9007006, 'enabled': '1', 'name': 'Free Throws Made / Free Throws Attempted', 'display_name': 'FTM/A', 'sort_order': '1', 'position_type': 'P', 'stat_position_types': [{'stat_position_type': {'position_type': 'P', 'is_only_display_stat': '1'}}], 'is_only_display_stat': '1'}
{'stat_id': 8, 'enabled': '1', 'name': 'Free Throw Percentage', 'display_name': 'FT%', 'sort_order': '1', 'position_type': 'P', 'stat_position_types': [{'stat_position_type': {'position_type': 'P'}}]}
{'stat_id': 10, 'enabled': '1', 'name': '3-point Shots Made', 'display_name': '3PTM', 'sort_order': '1', 'position_type': 'P', 'stat_position_types': [{'stat_position_type': {'position_type': 'P'}}]}
{'stat_id': 12, 'enabled': '1', 'name': 'Points Scored', 'display_name': 'PTS', 'sort_order': '1', 'position_type': 'P', 'stat_position_types': [{'stat_position_type': {'position_type': 'P'}}]}
{'stat_id': 13, 'enabled': '1', 'name': 'Offensive Rebounds', 'display_name': 'OREB', 'sort_order': '1', 'position_type': 'P', 'stat_position_types': [{'stat_position_type': {'position_type': 'P'}}]}
{'stat_id': 15, 'enabled': '1', 'name': 'Total Rebounds', 'display_name': 'REB', 'sort_order': '1', 'position_type': 'P', 'stat_position_types': [{'stat_position_type': {'position_type': 'P'}}]}
{'stat_id': 16, 'enabled': '1', 'name': 'Assists', 'display_name': 'AST', 'sort_order': '1', 'position_type': 'P', 'stat_position_types': [{'stat_position_type': {'position_type': 'P'}}]}
{'stat_id': 17, 'enabled': '1', 'name': 'Steals', 'display_name': 'ST', 'sort_order': '1', 'position_type': 'P', 'stat_position_types': [{'stat_position_type': {'position_type': 'P'}}]}
{'stat_id': 18, 'enabled': '1', 'name': 'Blocked Shots', 'display_name': 'BLK', 'sort_order': '1', 'position_type': 'P', 'stat_position_types': [{'stat_position_type': {'position_type': 'P'}}]}
{'stat_id': 19, 'enabled': '1', 'name': 'Turnovers', 'display_name': 'TO', 'sort_order': '0', 'position_type': 'P', 'stat_position_types': [{'stat_position_type': {'position_type': 'P'}}]}
{'stat_id': 20, 'enabled': '1', 'name': 'Assist/Turnover Ratio', 'display_name': 'A/T', 'sort_order': '1', 'position_type': 'P', 'stat_position_types': [{'stat_position_type': {'position_type': 'P'}}]}
###Markdown
**Get all teams of a league**
###Code
teams_url = 'https://fantasysports.yahooapis.com/fantasy/v2/league/375.l.573/teams'
resp = oauth_session.get(teams_url, params={'format': 'json'})
league_teams = resp.json()['fantasy_content']['league'][1]['teams']
league_teams
league_team_count = int(league_teams['count'])
league_team_count
for idx in range(0,league_team_count):
league_team = league_teams[str(idx)]['team'][0]
print(league_team, '\n')
team_logo = league_team[5]['team_logos'][0]['team_logo']['url']
# print('team_log', team_logo)
###Output
[{'team_key': '375.l.573.t.1'}, {'team_id': '1'}, {'name': 'C1-szrocky'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/1'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://ct.yimg.com/cy/4345/25942278979_a8f154_192sq.jpg?ct=fantasy'}}]}, {'division_id': '3'}, {'waiver_priority': 14}, {'faab_balance': '10'}, {'number_of_moves': '58'}, {'number_of_trades': '1'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '1'}}, [], {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '200'}, {'auction_budget_spent': 195}, {'managers': [{'manager': {'manager_id': '1', 'nickname': 'Rocky', 'guid': 'GH5XETJTNQTKCUZH6MZUNH2PBM', 'is_commissioner': '1', 'email': '[email protected]', 'image_url': 'https://ct.yimg.com/cy/4725/37939417090_dd288c_64sq.jpg'}}]}]
[{'team_key': '375.l.573.t.2'}, {'team_id': '2'}, {'name': 'C2-真邪门'}, {'is_owned_by_current_login': 1}, {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/2'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://ct.yimg.com/cy/4725/38954867636_d47b60_192sq.jpg?ct=fantasy'}}]}, {'division_id': '3'}, {'waiver_priority': 20}, {'faab_balance': '2'}, {'number_of_moves': '62'}, {'number_of_trades': '6'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '1'}}, [], {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '199'}, {'auction_budget_spent': 199}, {'managers': [{'manager': {'manager_id': '2', 'nickname': '邪', 'guid': 'EQMHXVGZ65XDJ5G57ZRRBKXUTM', 'is_current_login': '1', 'image_url': 'https://ct.yimg.com/cy/4556/23861899267_82a6e0_64sq.jpg'}}]}]
[{'team_key': '375.l.573.t.3'}, {'team_id': '3'}, {'name': 'C4-阴有时有雨'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/3'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://s.yimg.com/dh/ap/fantasy/img/nba/icon_08_100.png'}}]}, {'division_id': '3'}, {'waiver_priority': 19}, {'faab_balance': '16'}, {'number_of_moves': '55'}, {'number_of_trades': '5'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '1'}}, {'clinched_playoffs': 1}, {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '196'}, {'auction_budget_spent': 196}, {'managers': [{'manager': {'manager_id': '3', 'nickname': 'Josh', 'guid': '6FHWG57T5PNK2FM3NXF5EEWA3U', 'image_url': 'https://ct.yimg.com/cy/4585/38729776096_c363ac_64sq.jpg'}}]}]
[{'team_key': '375.l.573.t.4'}, {'team_id': '4'}, {'name': 'A3-Marsmnky'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/4'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://s.yimg.com/dh/ap/fantasy/img/nba/icon_01_100.png'}}]}, {'division_id': '1'}, {'waiver_priority': 10}, {'faab_balance': '0'}, {'number_of_moves': '61'}, {'number_of_trades': '10'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '1'}}, {'clinched_playoffs': 1}, {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '200'}, {'auction_budget_spent': 193}, {'managers': [{'manager': {'manager_id': '4', 'nickname': 'Mars T', 'guid': 'R5ZTKEUC5DMSGCQTV3TIFWLMSI', 'email': '[email protected]', 'image_url': 'https://s.yimg.com/wm/modern/images/default_user_profile_pic_64.png'}}]}]
[{'team_key': '375.l.573.t.5'}, {'team_id': '5'}, {'name': 'A2-苏打绿'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/5'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://ct.yimg.com/cy/4335/23391236588_d43959_192sq.jpg?ct=fantasy'}}]}, {'division_id': '1'}, {'waiver_priority': 11}, {'faab_balance': '11'}, {'number_of_moves': '52'}, {'number_of_trades': '5'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '2'}}, {'clinched_playoffs': 1}, {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '199'}, {'auction_budget_spent': 198}, {'managers': [{'manager': {'manager_id': '5', 'nickname': 'fan', 'guid': 'FOHURZVIRU6NBP3UOSOUM5OEEA', 'email': '[email protected]', 'image_url': 'https://s.yimg.com/wm/modern/images/default_user_profile_pic_64.png'}}]}]
[{'team_key': '375.l.573.t.6'}, {'team_id': '6'}, {'name': 'C5-Gray Potato'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/6'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://s.yimg.com/dh/ap/fantasy/img/nba/icon_05_100.png'}}]}, {'division_id': '3'}, {'waiver_priority': 1}, {'faab_balance': '0'}, {'number_of_moves': '13'}, {'number_of_trades': '4'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '0'}}, [], {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '200'}, {'auction_budget_spent': 200}, {'managers': [{'manager': {'manager_id': '6', 'nickname': 'Panwenjie', 'guid': '3NUR5O5PS6P33EEKBFEW4QRC2E', 'email': '[email protected]', 'image_url': 'https://ct.yimg.com/cy/4680/38119766965_3e7117_64sq.jpg'}}]}]
[{'team_key': '375.l.573.t.7'}, {'team_id': '7'}, {'name': 'D1-pippo'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/7'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://s.yimg.com/dh/ap/fantasy/img/nba/icon_03_100.png'}}]}, {'division_id': '4'}, {'waiver_priority': 9}, {'faab_balance': '0'}, {'number_of_moves': '42'}, {'number_of_trades': '5'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '0'}}, [], {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '201'}, {'auction_budget_spent': 201}, {'managers': [{'manager': {'manager_id': '7', 'nickname': 'pippo', 'guid': 'SRJTDXM7NCKJSWAWFGHDKMGP4U', 'is_commissioner': '1', 'email': '[email protected]', 'image_url': 'https://s.yimg.com/wv/images/e1444b788688b4f24aa132deec3df84d_64.jpeg'}}]}]
[{'team_key': '375.l.573.t.8'}, {'team_id': '8'}, {'name': 'B5-Sin'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/8'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://s.yimg.com/dh/ap/fantasy/img/nba/icon_01_100.png'}}]}, {'division_id': '2'}, {'waiver_priority': 6}, {'faab_balance': '43'}, {'number_of_moves': '34'}, {'number_of_trades': 0}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '1'}}, {'clinched_playoffs': 1}, {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '201'}, {'auction_budget_spent': 201}, {'managers': [{'manager': {'manager_id': '8', 'nickname': 'sin', 'guid': 'VYOYNLQKG4KJPP653GLEWIWTH4', 'email': '[email protected]', 'image_url': 'https://ct.yimg.com/cy/4730/27437247689_968ccd_64sq.jpg'}}]}]
[{'team_key': '375.l.573.t.9'}, {'team_id': '9'}, {'name': 'B2-Jordan'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/9'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://s.yimg.com/dh/ap/fantasy/img/nba/icon_01_100.png'}}]}, {'division_id': '2'}, {'waiver_priority': 8}, {'faab_balance': '12'}, {'number_of_moves': '64'}, {'number_of_trades': '4'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '0'}}, [], {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '196'}, {'auction_budget_spent': 195}, {'managers': [{'manager': {'manager_id': '9', 'nickname': 'Jordan', 'guid': 'PNSGFB66AWDITQED3PN5DGTTEI', 'image_url': 'https://s.yimg.com/wm/modern/images/default_user_profile_pic_64.png'}}]}]
[{'team_key': '375.l.573.t.10'}, {'team_id': '10'}, {'name': 'B1-F.E.D.S'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/10'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://ct.yimg.com/cy/4449/26108207669_bb6800_192sq.jpg?ct=fantasy'}}]}, {'division_id': '2'}, {'waiver_priority': 17}, {'faab_balance': '30'}, {'number_of_moves': '50'}, {'number_of_trades': '2'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '1'}}, {'clinched_playoffs': 1}, {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '198'}, {'auction_budget_spent': 198}, {'managers': [{'manager': {'manager_id': '10', 'nickname': '民', 'guid': 'JO6JSRTPVO2SYDUXW6HOM5HLK4', 'is_commissioner': '1', 'email': '[email protected]', 'image_url': 'https://s.yimg.com/wm/modern/images/default_user_profile_pic_64.png'}}]}]
[{'team_key': '375.l.573.t.11'}, {'team_id': '11'}, {'name': 'A4-dragonball'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/11'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://ct.yimg.com/cy/1687/25344958385_85325603cf_192sq.jpg?ct=fantasy'}}]}, {'division_id': '1'}, {'waiver_priority': 4}, {'faab_balance': '35'}, {'number_of_moves': '25'}, {'number_of_trades': '6'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '0'}}, [], {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '200'}, {'auction_budget_spent': 200}, {'managers': [{'manager': {'manager_id': '11', 'nickname': 'Zhu Heng', 'guid': 'K2RQUB4V4LHEX6UL6YYFLN64OU', 'email': '[email protected]', 'image_url': 'https://s.yimg.com/wm/modern/images/default_user_profile_pic_64.png'}}]}]
[{'team_key': '375.l.573.t.12'}, {'team_id': '12'}, {'name': 'C3-Lydia'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/12'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://s.yimg.com/dh/ap/fantasy/img/nba/icon_01_100.png'}}]}, {'division_id': '3'}, {'waiver_priority': 2}, {'faab_balance': '0'}, {'number_of_moves': '28'}, {'number_of_trades': '4'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '0'}}, [], {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '200'}, {'auction_budget_spent': 198}, {'managers': [{'manager': {'manager_id': '12', 'nickname': 'Lydia', 'guid': 'HFMFRGGN5E7EI62HEGE7FT7A6Q', 'email': '[email protected]', 'image_url': 'https://ct.yimg.com/cy/4473/37602919364_731a05_64sq.jpg'}}]}]
[{'team_key': '375.l.573.t.13'}, {'team_id': '13'}, {'name': 'B4-苦菜花'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/13'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://s.yimg.com/dh/ap/fantasy/img/nba/icon_05_100.png'}}]}, {'division_id': '2'}, {'waiver_priority': 15}, {'faab_balance': '0'}, {'number_of_moves': '35'}, {'number_of_trades': '7'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '3'}}, [], {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '206'}, {'auction_budget_spent': 206}, {'managers': [{'manager': {'manager_id': '13', 'nickname': '虚拟', 'guid': 'BJLLFAFT6WB6YTIHMM64WTLNHA', 'email': '[email protected]', 'image_url': 'https://ct.yimg.com/cy/4498/37586704474_f93cd5_64sq.jpg'}}]}]
[{'team_key': '375.l.573.t.14'}, {'team_id': '14'}, {'name': 'D4-lebronjames'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/14'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://s.yimg.com/dh/ap/fantasy/img/nba/icon_01_100.png'}}]}, {'division_id': '4'}, {'waiver_priority': 7}, {'faab_balance': '15'}, {'number_of_moves': '36'}, {'number_of_trades': '1'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '0'}}, [], {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '200'}, {'auction_budget_spent': 197}, {'managers': [{'manager': {'manager_id': '14', 'nickname': 'ted', 'guid': 'T2ZPHMM7ZSFY3PY3R2GXAVAVXQ', 'image_url': 'https://s.yimg.com/wm/modern/images/default_user_profile_pic_64.png'}}]}]
[{'team_key': '375.l.573.t.15'}, {'team_id': '15'}, {'name': 'D5-unbe'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/15'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://ct.yimg.com/cy/1656/25214693492_db83b88754_192sq.jpg?ct=fantasy'}}]}, {'division_id': '4'}, {'waiver_priority': 5}, {'faab_balance': '37'}, {'number_of_moves': '29'}, {'number_of_trades': '2'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '0'}}, [], {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '204'}, {'auction_budget_spent': 204}, {'managers': [{'manager': {'manager_id': '15', 'nickname': 'unbelievable', 'guid': 'HGM27Q7E525UDSCWKAVEHRMPK4', 'email': '[email protected]', 'image_url': 'https://s.yimg.com/wm/modern/images/default_user_profile_pic_64.png'}}]}]
[{'team_key': '375.l.573.t.16'}, {'team_id': '16'}, {'name': 'A1-阿木'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/16'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://s.yimg.com/dh/ap/fantasy/img/nba/icon_09_100.png'}}]}, {'division_id': '1'}, {'waiver_priority': 16}, {'faab_balance': '3'}, {'number_of_moves': '60'}, {'number_of_trades': '1'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '0'}}, [], {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '201'}, {'auction_budget_spent': 199}, {'managers': [{'manager': {'manager_id': '16', 'nickname': 'Huang', 'guid': 'KBQ3YFFP6AMJVVMCCDRKICNF74', 'is_commissioner': '1', 'email': '[email protected]', 'image_url': 'https://ct.yimg.com/cy/4595/27369545699_a5cc12_64sq.jpg'}}]}]
[{'team_key': '375.l.573.t.17'}, {'team_id': '17'}, {'name': 'D2-天王'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/17'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://s.yimg.com/dh/ap/fantasy/img/nba/icon_01_100.png'}}]}, {'division_id': '4'}, {'waiver_priority': 3}, {'faab_balance': '0'}, {'number_of_moves': '28'}, {'number_of_trades': '3'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '0'}}, [], {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '202'}, {'auction_budget_spent': 200}, {'managers': [{'manager': {'manager_id': '17', 'nickname': 'Chris', 'guid': 'HOI76OLNR7DUGA3N4IQFS4IWIQ', 'email': '[email protected]', 'image_url': 'https://s.yimg.com/wm/modern/images/default_user_profile_pic_64.png'}}]}]
[{'team_key': '375.l.573.t.18'}, {'team_id': '18'}, {'name': 'A5 - 鸡基'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/18'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://ct.yimg.com/cy/4447/26043029009_caa5f2_192sq.jpg?ct=fantasy'}}]}, {'division_id': '1'}, {'waiver_priority': 12}, {'faab_balance': '12'}, {'number_of_moves': '41'}, {'number_of_trades': '2'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '1'}}, [], {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '200'}, {'auction_budget_spent': 198}, {'managers': [{'manager': {'manager_id': '18', 'nickname': 'fd', 'guid': 'KHV7E4QK5YRS4COW2QCHAKEDCU', 'email': '[email protected]', 'image_url': 'https://ct.yimg.com/cy/4575/38153422014_9ce52c_64sq.jpg'}}, {'manager': {'manager_id': '21', 'nickname': 'Mars T', 'guid': 'R5ZTKEUC5DMSGCQTV3TIFWLMSI', 'is_comanager': '1', 'email': '[email protected]', 'image_url': 'https://s.yimg.com/wm/modern/images/default_user_profile_pic_64.png'}}]}]
[{'team_key': '375.l.573.t.19'}, {'team_id': '19'}, {'name': 'B3-xiuxian'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/19'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://s.yimg.com/dh/ap/fantasy/img/nba/icon_07_100.png'}}]}, {'division_id': '2'}, {'waiver_priority': 18}, {'faab_balance': '14'}, {'number_of_moves': '37'}, {'number_of_trades': '1'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '2'}}, {'clinched_playoffs': 1}, {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '198'}, {'auction_budget_spent': 198}, {'managers': [{'manager': {'manager_id': '19', 'nickname': 'LeiChen', 'guid': 'I6HNSBFPTRRXLMQ7P2SR6J57GQ', 'email': '[email protected]', 'image_url': 'https://ct.yimg.com/cy/4690/37713810903_a49284_64sq.jpg'}}]}]
[{'team_key': '375.l.573.t.20'}, {'team_id': '20'}, {'name': 'D3-CP9'}, [], {'url': 'https://basketball.fantasysports.yahoo.com/nba/573/20'}, {'team_logos': [{'team_logo': {'size': 'large', 'url': 'https://s.yimg.com/dh/ap/fantasy/img/nba/icon_01_100.png'}}]}, {'division_id': '4'}, {'waiver_priority': 13}, {'faab_balance': '22'}, {'number_of_moves': '51'}, {'number_of_trades': '9'}, {'roster_adds': {'coverage_type': 'week', 'coverage_value': 21, 'value': '2'}}, [], {'league_scoring_type': 'head'}, [], [], {'has_draft_grade': 0}, {'auction_budget_total': '199'}, {'auction_budget_spent': 199}, {'managers': [{'manager': {'manager_id': '20', 'nickname': 'Richard', 'guid': 'KZG45PTB6KA2BZHEUJFLGF26WI', 'email': '[email protected]', 'image_url': 'https://s.yimg.com/wm/modern/images/default_user_profile_pic_64.png'}}]}]
###Markdown
**Example to get team stats of week 2**
###Code
stat_url = 'https://fantasysports.yahooapis.com/fantasy/v2/team/375.l.1039.t.17/stats;type=week;week=2'
resp = oauth_session.get(stat_url, params={'format': 'json'})
team_stats = resp.json()['fantasy_content']['team'][1]['team_stats']['stats']
team_stats
###Output
_____no_output_____
###Markdown
**Example to get team stats of whole season**
###Code
stat_url = 'https://fantasysports.yahooapis.com/fantasy/v2/team/375.l.1039.t.17/stats'
resp = oauth_session.get(stat_url, params={'format': 'json'})
team_stats = resp.json()['fantasy_content']['team'][1]['team_stats']['stats']
team_stats
###Output
_____no_output_____
###Markdown
**Example to get game stat categories**
###Code
stat_url = 'https://fantasysports.yahooapis.com/fantasy/v2/game/nba/stat_categories'
resp = oauth_session.get(stat_url, params={'format': 'json'})
stat_categories = resp.json()['fantasy_content']['game'][1]['stat_categories']['stats']
stat_categories
###Output
_____no_output_____
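###Markdown
The raw response above nests each category under its own key. A rough post-processing sketch (not part of the original calls) is shown below: it builds a stat_id-to-name lookup, assuming each entry wraps its fields in a 'stat' dict the same way the team payloads wrap theirs; the keys 'stat_id', 'display_name', and 'name' are assumptions to adjust if the payload differs.
###Code
# Hypothetical post-processing of stat_categories: map stat ids to readable names.
# Assumes entries look like {'stat': {'stat_id': ..., 'display_name': ...}}; adjust keys if the payload differs.
stat_map = {}
for entry in stat_categories:
    stat = entry.get('stat', {}) if isinstance(entry, dict) else {}
    if 'stat_id' in stat:
        stat_map[stat['stat_id']] = stat.get('display_name') or stat.get('name')
stat_map
###Output
_____no_output_____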
|
notebooks/figures_bandits_may_2019_arfl_talk.ipynb
|
###Markdown
TwoHigh
###Code
env_name = 'BanditOneHigh2-v0'
n_arm = 2
tie_break = 'next' # round robin tie break strategy
tie_threshold = 0.000001 # epsilon in the paper
lr = .001
# Run bandit exps
result_meta = meta_bandit(
env_name=env_name,
num_episodes=n_arm*50,
lr=lr,
tie_threshold=tie_threshold,
tie_break=tie_break,
seed_value=179,
)
plot_meta(env_name, result_meta)
env_name = 'BanditOneHigh2-v0'
# Run bandit exps
result_ep = epsilon_bandit(
env_name=env_name,
num_episodes=n_arm*100,
lr=lr,
epsilon=0.2,
epsilon_decay_tau=0.0,
seed_value=179,
)
plot_epsilon(env_name, result_ep)
env_name = 'BanditOneHigh2-v0'
# Run bandit exps
result_decay = epsilon_bandit(
env_name=env_name,
num_episodes=n_arm*100,
lr=lr,
epsilon=0.2,
epsilon_decay_tau=0.04,
seed_value=179,
)
plot_epsilon(env_name, result_decay)
###Output
_____no_output_____
###Markdown
OneHigh121
###Code
env_name = 'BanditOneHigh121-v0'
n_arm = 121
tie_break = 'next' # round robin tie break strategy
tie_threshold = 1*0.001 # epsilon in the paper
lr = .00001
# Run bandit exps
result_meta = meta_bandit(
env_name=env_name,
num_episodes=n_arm*10,
lr=lr,
tie_threshold=tie_threshold,
tie_break=tie_break,
seed_value=179,
)
plot_meta(env_name, result_meta)
env_name = 'BanditOneHigh121-v0'
# Run bandit exps
result_decay = epsilon_bandit(
env_name=env_name,
num_episodes=n_arm*5,
lr=lr,
epsilon=0.5,
epsilon_decay_tau=0.04,
seed_value=179,
)
plot_epsilon(env_name, result_decay)
###Output
_____no_output_____
###Markdown
TwoHardAndSparse
###Code
env_name = 'BanditHardAndSparse2-v0'
n_arm = 2
tie_break = 'next' # round robin tie break strategy
tie_threshold = 0.0000001 # epsilon in the paper
lr = .015
# Run bandit exps
result_meta = meta_bandit(
env_name=env_name,
num_episodes=n_arm*1000,
lr=lr,
tie_threshold=tie_threshold,
tie_break=tie_break,
seed_value=179,
)
plot_meta(env_name, result_meta)
# Run bandit exps
result_decay = epsilon_bandit(
env_name=env_name,
num_episodes=n_arm*1000,
lr=lr,
epsilon=0.5,
epsilon_decay_tau=0.001,
seed_value=179,
)
plot_epsilon(env_name, result_decay)
###Output
_____no_output_____
|
IdokoSparkifyProject.ipynb
|
###Markdown
Sparkify Project Workspace
Import the needed tools
###Code
# import libraries
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, col, concat, desc, explode, lit, min, max, split, udf
from pyspark.sql.types import IntegerType
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.classification import LogisticRegression, DecisionTreeClassifier, RandomForestClassifier, GBTClassifier, NaiveBayes
from pyspark.ml.evaluation import MulticlassClassificationEvaluator, BinaryClassificationEvaluator, RegressionEvaluator
from pyspark.ml.feature import CountVectorizer, IDF, Normalizer, PCA, RegexTokenizer, StandardScaler, StopWordsRemover, StringIndexer, VectorAssembler
from pyspark.ml.regression import LinearRegression
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.feature import PCA
from pyspark.ml.linalg import Vectors
import re
import datetime
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import pyplot
from pyspark.sql.functions import *
import random
from pyspark.sql import Row
from sklearn import neighbors
# create a Spark session
spark = SparkSession.builder \
.master("local") \
.config("spark.driver.memory", "15g") \
.appName("Creating Features") \
.getOrCreate()
sc = spark.sparkContext
###Output
_____no_output_____
###Markdown
1. INTRODUCTION
The current wave of digitization sweeping across industries has led to an explosion of data, as information is tracked using many different tools. This, together with the advent of social media, has greatly emboldened the inclination and march towards artificial intelligence, grounded in the ability to obtain insights from massive amounts of data, sometimes in real time, while powering products that rely on such data. Technologies for wrangling big data are currently championed by Apache Hadoop and Spark: a single computer cannot handle such massive datasets, which calls for multiple computers that can be hosted online on computing platforms such as Amazon Web Services (AWS), Microsoft Azure, IBM Watson, and Google Cloud. This brings us to the goal of this write-up, which applies a big data technology (Spark) to a subset (128 MB) of a larger dataset of about 12 GB. The dataset, provided by Udacity, comes from a pseudo music-streaming service called 'Sparkify' whose users stream either as guests or at free or paid levels. Some users, after using the service for a while as paid subscribers, end up cancelling their subscription. The goal is therefore to predict the churn of such users, defined here as a user being tracked on the 'Cancellation Confirmation' page, using the different features measured while the user has been on the platform. The end product of this project would be a model that can be deployed on the Sparkify app/platform to predict whether a given user is likely to cancel their subscription, so that the service can move swiftly to prevent that.
The prediction of churn users for the Sparkify dataset will be handled using machine learning techniques. First, the dataset will be explored to obtain insights into the behaviour of the users/subscribers. Then features will be engineered that most strongly explain these behaviours. This will be followed by modelling, which involves first defining a baseline model and then working to improve it. Metrics peculiar to classification tasks, such as F-score and accuracy, will be used to measure the performance of the different models while improving them. The F1-score in particular is used as the deciding metric. F1 is defined as the harmonic mean of precision and recall: precision is the ratio of true positives to all positive predictions, while recall is the ratio of true positive predictions to all positive labels. Hence the F1-score captures how well the model can distinguish between the two classes in its predictions while quantifying the prediction of positives (defined here as churn, or label == 1). This is in contrast to accuracy, which does not capture how well predictions were made for the users who will cancel their subscription.
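As a toy illustration of the metric (the numbers below are made up for the example, not taken from the Sparkify data): if a model flags 8 users as churners, 6 of whom really churn, and misses 4 actual churners, then precision = 6/8 = 0.75, recall = 6/10 = 0.60, and F1 is their harmonic mean (about 0.67).
###Code
# Toy illustration of precision, recall and F1 (made-up counts, not from the dataset)
tp, fp, fn = 6, 2, 4
precision = tp / (tp + fp)   # 0.75
recall = tp / (tp + fn)      # 0.60
f1 = 2 * precision * recall / (precision + recall)
print(round(precision, 3), round(recall, 3), round(f1, 3))
###Output
_____no_output_____
###Markdown
With the evaluation metric settled, the first step is to load the event data.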
###Code
def load_data(path):
'''
    Function to load a JSON dataset into a Spark dataframe given its path.
Args:
path- the path where the data is stored
Returns:
A dataframe of the dataset.
'''
return spark.read.json(path)
df = load_data('mini_sparkify_event_data.json')
df.persist()
#Size of dataset
df.count()
###Output
_____no_output_____
###Markdown
2. BUSINESS AND DATA UNDERSTANDING
Take a look at the descriptive statistics of the dataframe
###Code
df.describe().show()
df.printSchema()
###Output
root
|-- artist: string (nullable = true)
|-- auth: string (nullable = true)
|-- firstName: string (nullable = true)
|-- gender: string (nullable = true)
|-- itemInSession: long (nullable = true)
|-- lastName: string (nullable = true)
|-- length: double (nullable = true)
|-- level: string (nullable = true)
|-- location: string (nullable = true)
|-- method: string (nullable = true)
|-- page: string (nullable = true)
|-- registration: long (nullable = true)
|-- sessionId: long (nullable = true)
|-- song: string (nullable = true)
|-- status: long (nullable = true)
|-- ts: long (nullable = true)
|-- userAgent: string (nullable = true)
|-- userId: string (nullable = true)
###Markdown
Find out where userIds are null or non-existent
###Code
df.select("*").where(col("userId").isNull()).count()
df.select("*").where(col("sessionId").isNull()).count()
#Used this to take a look at all possible user ids and found empty-string users
df.select('userId').groupby('userId').count().show(4)
df.select("userId").where(col("userId")=='').count()
#Check to see if these are just invalid data or have a correlation with users being logged in or out.
df.select("userId",'auth').where(col("userId")=='').where(col('auth') == 'Logged In').count()
#Find out the different categories of the column 'auth'
df.select('auth').groupby('auth').count().collect()
#Find out the different categories of the column 'auth'
df.select('level').groupby('level').count().collect()
df.select("userId",'auth').where(col("userId")=='').where(col('auth').isin(['Guest', 'Logged Out'])).count()
#How many distinct users
df.select('userId').distinct().count()
###Output
_____no_output_____
###Markdown
Looks like the only empty userIds are where the users are either Logged out or simply guests who have not registered.
###Code
8249 + 97  # presumably the two empty-userId counts returned by the cells above
df.select('sessionId').distinct().count()
df.select('userId').describe().collect()
#maximum and minimum value of sessionId
print(df.agg(max(col('sessionId'))).collect())
print(df.agg(min(col('sessionId'))).collect())
df.persist()
#Used this to take a look at all possible 'userAgent'
#df.select('userAgent').groupby('userAgent').count().collect()
#Used this to take a look at all possible 'ItemInSession'
#df.select('ItemInSession').groupby('ItemInSession').count().collect()
df.select('ItemInSession').describe().collect()
#The different levels used in streaming by users.
df.select('level').groupby('level').count().collect()
###Output
_____no_output_____
###Markdown
Define Churn: users who landed on the 'Cancellation Confirmation' page
###Code
df.show(5)
###Output
+----------------+---------+---------+------+-------------+--------+---------+-----+--------------------+------+--------+-------------+---------+--------------------+------+-------------+--------------------+------+-----+
| artist| auth|firstName|gender|itemInSession|lastName| length|level| location|method| page| registration|sessionId| song|status| ts| userAgent|userId|Churn|
+----------------+---------+---------+------+-------------+--------+---------+-----+--------------------+------+--------+-------------+---------+--------------------+------+-------------+--------------------+------+-----+
| Martha Tilston|Logged In| Colin| M| 50| Freeman|277.89016| paid| Bakersfield, CA| PUT|NextSong|1538173362000| 29| Rockpools| 200|1538352117000|Mozilla/5.0 (Wind...| 30| 0|
|Five Iron Frenzy|Logged In| Micah| M| 79| Long|236.09424| free|Boston-Cambridge-...| PUT|NextSong|1538331630000| 8| Canada| 200|1538352180000|"Mozilla/5.0 (Win...| 9| 0|
| Adam Lambert|Logged In| Colin| M| 51| Freeman| 282.8273| paid| Bakersfield, CA| PUT|NextSong|1538173362000| 29| Time For Miracles| 200|1538352394000|Mozilla/5.0 (Wind...| 30| 0|
| Enigma|Logged In| Micah| M| 80| Long|262.71302| free|Boston-Cambridge-...| PUT|NextSong|1538331630000| 8|Knocking On Forbi...| 200|1538352416000|"Mozilla/5.0 (Win...| 9| 0|
| Daft Punk|Logged In| Colin| M| 52| Freeman|223.60771| paid| Bakersfield, CA| PUT|NextSong|1538173362000| 29|Harder Better Fas...| 200|1538352676000|Mozilla/5.0 (Wind...| 30| 0|
+----------------+---------+---------+------+-------------+--------+---------+-----+--------------------+------+--------+-------------+---------+--------------------+------+-------------+--------------------+------+-----+
only showing top 5 rows
###Markdown
Behaviour Analysis:
* Now, I want to explore the behaviour of these two groups of users to find out the actions that may have preceded their decision to cancel their subscription.
###Code
def prepare_data(df):
'''
    Function to prepare the given dataframe, divide users into churn and non-churn groups,
    and return the original dataframe with a new label column, as a Spark dataframe.
Args:
df- the original dataframe
Returns:
df - dataframe of the dataset with new column of churn added
    stayed - dataframe of the non-churn users' activities only.
all_cancelled - dataframe of the churn user's activities only.
'''
#Define a udf for cancelled
canceled = udf(lambda x: 1 if x == 'Cancellation Confirmation' else 0)
#define a new column 'churn' where 1 indicates cancellation of subscription, 0 otherwise
df = df.withColumn('Churn', canceled(df.page))
#Dataframe of all that cancelled
cancelled_df = df.select('page', 'userId','Churn').where(col('churn')==1)
#List of cancelled
list_cancelled = cancelled_df.select('userId').distinct().collect()#list of cancelled users
#Put in a list format
gb = []#temporary variable to store lists
for row in list_cancelled:
gb.append(row[0])
canc_list = [x for x in gb if x != '']#remove the invalid users
#Total number of users who canceled
print(f"The number of churned users is: {len(canc_list)}")
#List of staying users
all_users = df.select('userId').distinct().collect()
gh = []#a temporary variable to store all users
for row in all_users:
gh.append(row[0])
stayed_list = set(gh)-set(gb)#list of users staying
stayed_list = [x for x in stayed_list if x != '']#remove the invalid users
#Total number of users who did not cancel
print(f"The number of staying users is: {len(stayed_list)}")
    #Store both cancelled and staying users in new dataframes containing all actions they undertook
all_cancelled = df.select("*").where(col('userId').isin(canc_list))
stayed = df.select('*').where(col('userId').isin(stayed_list))
#Redefine a udf for churn
churned = udf(lambda x: 0 if x in stayed_list else 1, IntegerType())
    #Create a new column, which will be our label column, to track all users that eventually cancelled their subscription
df = df.withColumn('label', churned(col('userId')))
return df, stayed, all_cancelled
#prepare_data(df)
df, stayed, all_cancelled = prepare_data(df)
all_cancelled.persist()
df.persist()
# Total Number of streams for each group
print('Below is a size of the two groups of users among the full dataset:')
print('Cancelled Users:', all_cancelled.count())
print('Stayed Users:', stayed.count())
###Output
Below is a size of the two groups of users among the full dataset:
Cancelled Users: 44864
Stayed Users: 233290
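###Markdown
The numbers above are event counts; before drawing conclusions about imbalance it may also help to check the balance at the user level. A quick sketch, using the label column created above:
###Code
# Event counts overstate the imbalance; check how many distinct users fall in each class
df.select('userId', 'label').distinct().groupby('label').count().show()
###Output
_____no_output_____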
###Markdown
The counts above show that events from users who stayed far outnumber events from users who cancelled. This leaves us with an imbalanced dataset and a bias against the churn users, which will ultimately affect our modelling.
Downgrade page visits:
###Code
#Obtain a dataframe of label and how many times they visited the downgrade page
downgrades = df.select('label','page').where(col('page')=='Downgrade').groupby('label').count()
#Compute percentage of downgrade visits/events to all events by each group
downgrades_df = pd.DataFrame({'1':(downgrades.select('count').collect()[0][0]/all_cancelled.count())*100,
'0':(downgrades.select('count').collect()[1][0]/stayed.count())*100}, index = [1])
#Visualization
downgrades_df.plot(kind = 'bar', figsize = (8,5))
plt.ylabel('% of Downgrades to all events')
plt.xlabel('Cancelled and non cancelled users')
plt.title('Comparing visits to Downgrades page among two groups of users')
plt.show()
###Output
_____no_output_____
###Markdown
Cancelled users appear to have visited the downgrade page slightly more than the users who did not cancel.
Dislikes
###Code
#Obtain a dataframe of label and how many times they appeared to thumbs down
dislikes = df.select('label','page').where(col('page')=='Thumbs Down').groupby('label').count()
dis_df = pd.DataFrame({'1':(dislikes.select('count').collect()[0][0]/all_cancelled.count())*100,
'0':(dislikes.select('count').collect()[1][0]/stayed.count())*100}, index = [1])
#Visualization
dis_df.plot(kind = 'bar', figsize = (8,5))
plt.ylabel('% of Thumbs down to all events')
plt.xlabel('Cancelled and non cancelled users')
plt.title('Comparing Thumbs down among two groups of users')
plt.show()
###Output
_____no_output_____
###Markdown
Cancelled users appeared to press thumbs down on songs more than the users who haven't cancelled.
Liked Songs
###Code
#Obtain a dataframe of label and how many times they pressed thumbs up on songs
likes = df.select('label','page').where(col('page')=='Thumbs Up').groupby('label').count()
#Percentage of thumbs down events compared to all events by each group
likes_df = pd.DataFrame({'1':(likes.select('count').collect()[0][0]/all_cancelled.count())*100,
'0':(likes.select('count').collect()[1][0]/stayed.count())*100}, index = [1])
#Visualization
likes_df.plot(kind = 'bar', figsize = (8,5))
plt.ylabel('% of Thumbs Up to all events')
plt.xlabel('Cancelled and non cancelled users')
plt.title('Comparing Thumbs Up among two groups of users')
plt.show()
###Output
_____no_output_____
###Markdown
Users who have not cancelled their subscriptions appear to have given a thumbs up to more songs.
Adding to playlist
###Code
#Obtain a dataframe of label and how many times they added to their playlist
added = df.select('label','page').where(col('page')=='Add to Playlist').groupby('label').count()
#Percentage of playlist-add events compared to all events by each group
added_df = pd.DataFrame({'1':(added.select('count').collect()[0][0]/all_cancelled.count())*100,
                         '0':(added.select('count').collect()[1][0]/stayed.count())*100}, index = [1])
#Visualization
added_df.plot(kind = 'bar', figsize = (8,5))
plt.ylabel('% of Added to Playlist for all events')
plt.xlabel('Cancelled and non cancelled users')
plt.title('Comparing Playlist add among two groups of users')
plt.show()
###Output
_____no_output_____
###Markdown
Again, users who stayed seem to have added to their playlists more often.
Error Page
###Code
#Obtain a dataframe of label and how many times they ended in Error page
error = df.select('label','page').where(col('page')=='Error').groupby('label').count()
#Percentage of error-page events compared to all events by each group
error_df = pd.DataFrame({'1':(error.select('count').collect()[0][0]/all_cancelled.count())*100,
                         '0':(error.select('count').collect()[1][0]/stayed.count())*100}, index = [1])
#Visualization
error_df.plot(kind = 'bar', figsize = (8,5))
plt.ylabel('% of Error page events among all events')
plt.xlabel('Cancelled and non cancelled users')
plt.title('Error page events among the two groups of users')
plt.show()
###Output
_____no_output_____
###Markdown
It looks like the users who have not cancelled landed on the error page a higher percentage of the time than cancelled users, so it is unlikely to be a factor that makes users want to leave.
Length of time a song played
###Code
#Length of time music played for each group
print('Cancelled Users:', df.select('label','length').where(col('label') == 1).where(col('length').isNotNull()).count())
print('Stayed Users:', df.select('label','length').where(col('label') == 0).where(col('length').isNotNull()).count())
#Obtain a dataframe of label and the count of length records for each group
lengths = df.select('label','length').groupby('label').count()
lengths_df = lengths.toPandas().set_index('label')
#Visualization
lengths_df.plot(kind = 'bar', figsize = (8,5))
plt.ylabel('Lengths among all two groups')
plt.xlabel('Cancelled and non cancelled users')
plt.title('Comparing lengths of play for the two groups of users')
plt.show()
#Aggregate length by average per user
mhy = (df.select('userId', 'length', 'label').groupby('userId', 'label').agg({'length':'avg'})).select('label', col('avg(length)').alias('av_length'))
avlength_df = mhy.toPandas().set_index('label')
#Visualization
avlength_df.plot(kind = 'hist', figsize = (8,5))
plt.ylabel('Lengths among all two groups')
plt.xlabel('Cancelled and non cancelled users')
plt.title('Comparing lengths of play for the two groups of users')
plt.show()
bh = df.select('label', 'userid').groupby('userId','label').count().select('userId', 'label')
bh.count()
#Number of times the 'NextSong' page was arrived at for each group
print('Cancelled Users:', df.select('label','length','page').where(col('label') == 1).where(col('page')=='NextSong').count())
print('Stayed Users:', df.select('label','length','page').where(col('label') == 0).where(col('page')=='NextSong').count())
###Output
Cancelled Users: 36394
Stayed Users: 191714
###Markdown
It looks like the number of times a user was found on the 'NextSong' page indicates valid plays, where the length of play is not None. Drawing from this, we can get the average length of plays for each group and compare the two as follows:
###Code
#Pull the average length of songs played by all users who cancelled
df.select('label','length').where(col('label') == 1).where(col('length').isNotNull()).agg(avg(col('length'))).collect()
#Pull the average length of songs played by all users who did not cancel
df.select('label','length').where(col('label') == 0).where(col('length').isNotNull()).agg(avg(col('length'))).collect()
###Output
_____no_output_____
###Markdown
It does look like the users who did not cancel have a slightly higher average song length, though the difference is very small.
###Code
df.head()
df.select('page').distinct().show(22)
df.select('auth').distinct().show(22)
df.count()
###Output
_____no_output_____
###Markdown
Logouts and Logins
###Code
logOuts = df.select('userId','page').where(col('page')=='Logout').groupby('userId').count()
logOuts2 = df.select('userId','auth').where(col('auth')=='Logged Out').groupby('userId').count()
logIns = df.select('userId','auth').where(col('auth')=='Logged In').groupby('userId').count()
logIns2 = df.select('userId','page').where(col('page')=='Login').groupby('userId').count()
logOuts.show(3)
logOuts2.show()
logIns2.show()
logIns.count()
logIns.show(3)
###Output
+------+-----+
|userId|count|
+------+-----+
|100010| 381|
|200002| 474|
| 125| 10|
+------+-----+
only showing top 3 rows
###Markdown
Decided to use the ratio of the count of each user's visits to the Logout page over the count of the user's 'Logged In' authentications as the feature of importance, given that certain users will have this ratio higher; a sketch of computing it follows below.
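A minimal sketch of how such a ratio could be computed, reusing the logOuts and logIns dataframes built above; the column names n_logouts, n_logins, and logout_ratio are illustrative, and the engineered features later in this notebook keep only the raw logout counts.
###Code
# Sketch: ratio of Logout page events to 'Logged In' events per user (illustrative column names)
logout_counts = logOuts.withColumnRenamed('count', 'n_logouts')
login_counts = logIns.withColumnRenamed('count', 'n_logins')
logout_ratio = (logout_counts
                .join(login_counts, 'userId', 'outer')
                .fillna(0)
                # +1 in the denominator guards against division by zero
                .withColumn('logout_ratio', col('n_logouts') / (col('n_logins') + lit(1))))
logout_ratio.show(5)
###Output
_____no_output_____
###Markdown
Gender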
###Code
#How many females in the dataset in general
(df.select('userId','gender').where(col('gender')=='F').groupby('userId').count()).toPandas().shape
#How many Males in the dataset in general
(df.select('userId','gender').where(col('gender')=='M').groupby('userId').count()).toPandas().shape
#Distribution of gender among the two groups - non churn users
df.select('label','gender').where(col('label')==0).groupby('gender').count().collect()
#Distribution of gender among the two groups - churn users
df.select('label','gender').where(col('label')==1).groupby('gender').count().collect()
#Obtain a dataframe of label and gender for comparison of gender among the two groups
authen1 = df.select('label','gender').where(col('label')=='1').groupby('gender').count().toPandas()
authen0 = df.select('label','gender').where(col('label')=='0').groupby('gender').count().toPandas()
#Rename columns and set level column to index
authen0_df = authen0.rename(columns={'count':'0'}).set_index('gender')
authen1_df = authen1.rename(columns = {'count':'1'}).set_index('gender')
#Concatenate the two dataframes
auth_df = pd.concat([authen1_df, authen0_df], axis = 1)
sum_1 = np.sum(auth_df['1'])
#Convert to percentage proportions
get_percent1 = lambda x: (x/sum_1)*100
get_percent0 = lambda x: (x/stayed.count())*100
#Apply the functions to the respective columns
auth_df['1'] = auth_df['1'].apply(get_percent1)
auth_df['0'] = auth_df['0'].apply(get_percent0)
#Visualization
auth_df.plot(kind = 'bar', figsize = (8,5))
plt.ylabel('Proportions of male and female among all users')
plt.xlabel('Cancelled and non cancelled males and females')
plt.title('Comparing percentage of each gender among the two groups of users')
plt.show()
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:12: FutureWarning: Sorting because non-concatenation axis is not aligned. A future version
of pandas will change to not sort by default.
To accept the future behavior, pass 'sort=False'.
To retain the current behavior and silence the warning, pass 'sort=True'.
if sys.path[0] == '':
###Markdown
Males are more likely to cancel than females. Also, the invalid users ('') appear to have been labelled as cancelled users.
###Code
#Percentage proportions of males and females
auth_df.head()
###Output
_____no_output_____
###Markdown
Authentication
###Code
df.select('label','auth').where(col('label')==1).groupby('auth').count().collect()
df.select('label','auth').where(col('label')==0).groupby('auth').count().collect()
#Obtain a dataframe of label and auth for comparison of authentication among the two groups
authen1 = df.select('label','auth').where(col('label')=='1').groupby('auth').count().toPandas()
authen0 = df.select('label','auth').where(col('label')=='0').groupby('auth').count().toPandas()
#Rename columns and set level column to index
authen0_df = authen0.rename(columns={'count':'0'}).set_index('auth')
authen1_df = authen1.rename(columns = {'count':'1'}).set_index('auth')
#Concatenate the two dataframes
auth_df = pd.concat([authen1_df, authen0_df], axis = 1)
sum_1 = np.sum(auth_df['1'])
#Convert to percentage proportions
get_percent1 = lambda x: (x/sum_1)*100
get_percent0 = lambda x: (x/stayed.count())*100
#Apply the functions to the respective columns
auth_df['1'] = auth_df['1'].apply(get_percent1)
auth_df['0'] = auth_df['0'].apply(get_percent0)
#Visualization
auth_df.plot(kind = 'bar', figsize = (8,5))
plt.ylabel('% of logins and logouts among all users')
plt.xlabel('Cancelled and non cancelled authentications')
plt.title('Comparing proportion of each authentication type among the two groups of users')
plt.show()
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:12: FutureWarning: Sorting because non-concatenation axis is not aligned. A future version
of pandas will change to not sort by default.
To accept the future behavior, pass 'sort=False'.
To retain the current behavior and silence the warning, pass 'sort=True'.
if sys.path[0] == '':
###Markdown
Users who keep logging out and logging back in are more likely to cancel.
Levels: Free or Paid
###Code
#Obtain a dataframe of label and levels for comparison
authen1 = df.select('label','level').where(col('label')=='1').groupby('level').count().toPandas()
authen0 = df.select('label','level').where(col('label')=='0').groupby('level').count().toPandas()
#Rename columns and set level column to index
authen0_df = authen0.rename(columns={'count':'0'}).set_index('level')
authen1_df = authen1.rename(columns = {'count':'1'}).set_index('level')
#Concatenate the two dataframes
auth_df = pd.concat([authen1_df, authen0_df], axis = 1)
sum_1 = np.sum(auth_df['1'])
#Convert to percentages
get_percent1 = lambda x: (x/sum_1)*100
get_percent0 = lambda x: (x/stayed.count())*100
#Apply the functions to the respective columns
auth_df['1'] = auth_df['1'].apply(get_percent1)
auth_df['0'] = auth_df['0'].apply(get_percent0)
#Visualization
auth_df.plot(kind = 'bar', figsize = (8,5))
plt.ylabel('% of level events among all users')
plt.xlabel('Cancelled and non cancelled users and levels')
plt.title('Comparing percentage of each level among the two groups of users')
plt.show()
###Output
_____no_output_____
###Markdown
Cancelled users appear to have streamed Sparkify songs more on the free level.
Method - PUT or GET
###Code
df.select('label','method').where(col('label')==1).groupby('method').count().collect()
df.select('label','method').where(col('label')==0).groupby('method').count().collect()
#Obtain a dataframe of label and methods for comparison
meth1 = df.select('label','method').where(col('label')==1).groupby('method').count().toPandas()
meth0 = df.select('label','method').where(col('label')==0).groupby('method').count().toPandas()
#Rename columns and set level column to index
meth0_df = meth0.rename(columns={'count':'0'}).set_index('method')
meth1_df = meth1.rename(columns = {'count':'1'}).set_index('method')
#Concatenate the two dataframes
meth_df = pd.concat([meth1_df, meth0_df], axis = 1)
sum_1 = np.sum(meth_df['1'])
#Convert to percentages
get_percent1 = lambda x: (x/sum_1)*100
get_percent0 = lambda x: (x/stayed.count())*100
#Apply the functions to the respective columns
meth_df['1'] = meth_df['1'].apply(get_percent1)
meth_df['0'] = meth_df['0'].apply(get_percent0)
#Visualization
meth_df.plot(kind = 'bar', figsize = (8,5))
plt.ylabel('% of different methods among all users')
plt.xlabel('Cancelled and non cancelled users and methods')
plt.title('Comparing percentage of each method among the two groups of users')
plt.show()
###Output
_____no_output_____
###Markdown
Cancelled users appear to use the 'GET' method more, while non-cancelled users use the 'PUT' method more.
3. DATA PREPROCESSING AND FEATURE ENGINEERING
Getting the features we need to perform modelling
###Code
features_list = ['likes', 'dislikes', 'friend_adds', 'playlist_adds', 'downgraded', 'upgraded', 'NumSongs', 'length'\
                ,'gender', 'method', 'status', 'level']#list of possible features
###Output
_____no_output_____
###Markdown
**Some of the columns have sub-features which are not specific to users, so a given user may have more than one value for such a feature. Because of this, I will engineer them as aggregates of either count, sum, or average, depending on which makes more sense. On the other hand, some columns are purely categorical (e.g. GENDER) and it will not make sense to aggregate them in any way; for these, I will simply encode them. Other columns have variables that I can engineer using aggregate sums for each user, e.g. length of music. Further features will be engineered using insights from my exploratory analysis while avoiding duplication of features, i.e. having two features that explain similar events.**
1. **Length:** Aggregate length by total length per user, knowing from the exploratory analysis that users who did not cancel have a slightly higher average song length.
2. **Likes:** Aggregate likes by total count of thumbs-up per user, given that users who did not cancel have a higher tendency to thumbs-up songs compared to users who later cancelled.
3. **Dislikes:** Aggregate dislikes by total count of the Thumbs Down page per user, given that users who did not cancel have a lower tendency to thumbs-down songs compared to users who later cancelled.
4. **Added_friends:** Aggregate added_friends by total count of the Add Friend page per user, given that users who did not cancel have a higher tendency to add friends (measured by the percentage ratio of the Add Friend page to all other pages) compared to users who later cancelled.
5. **Playlist_adds:** Aggregate playlist_adds by total count of the Add to Playlist page per user, given that users who did not cancel have a higher percentage ratio of playlist adds compared to users who later cancelled.
6. **Downgraded:** Aggregate downgraded by total count of the Downgrade page per user, knowing that users who did not cancel have a lower percentage ratio of downgrades compared to users who later cancelled.
7. **Upgraded:** Aggregate upgraded by total count of the Upgrade page per user, knowing that users who did not cancel have a higher percentage ratio of upgrades compared to users who later cancelled.
8. **NumSongs:** Aggregate songs by total count of songs per user, knowing that users who did not cancel have a higher average number of songs compared to users who later cancelled. The 'NextSong' page is akin to valid plays where length of play is not None, so it should be synonymous with the count of songs and will not add extra benefit to the model if included.
9. **Method_Put:** Aggregate the PUT method by the number of times each user used it. A user may have used more than one method, so treating it as simply categorical would not clearly reflect the effect of this feature.
10. **Method_Get:** Aggregate the GET method by the number of times each user used it. Same logic as the PUT method.
11. **Gender:** Group the users according to gender, as no user appeared to have more than one gender. Thus, gender is a purely categorical feature.
12. **Status_307:** Aggregate status by the number of times each user used status 307, since certain users used more than one type of status.
13. **Status_200:** Aggregate status by the number of times each user used status 200.
14. **Status_404:** Aggregate status by the number of times each user used status 404 to stream a song.
15. **Level_paid:** Aggregate level by the number of times each user used the Paid level to stream a song.
16. **Level_free:** Aggregate level by the number of times each user used the Free level to stream a song.
Create/engineer all necessary features and combine them into a list
###Code
def engineer_features(df):
'''
    Create all the features needed for the user-churn classification problem for
    Sparkify, and combine them into a list of (name, dataframe) tuples.
Args:
df: The dataframe of the events recorded for users on the streaming platform
Returns:
    returns a list of (name, dataframe) tuples for the features needed for modelling.
'''
dat_f = []#define list to combine all features
#Aggregate length by average length of play per user
length = (df.select('userId', 'length').groupby('userId').agg({'length':'avg'})).select('userId', col('avg(length)')\
.alias('count'))
dat_f.append(('length', length))
#Aggregate dislikes by total count per user
dislikes = df.select('userId','page').where(col('page')=='Thumbs Down').groupby('userId').count()
dat_f.append(('dislikes', dislikes))
likes = df.select('userId','page').where(col('page')=='Thumbs Up').groupby('userId').count()
dat_f.append(('likes', likes))
#Aggregate added friends by total count per user
friend_adds = df.select('userId','page').where(col('page')=='Add Friend').groupby('userId').count()
dat_f.append(('friend_adds', friend_adds))
#Aggregate playlist_adds by total count per user
playlist_adds = df.select('userId','page').where(col('page')=='Add to Playlist').groupby('userId').count()
dat_f.append(('playlist_adds', playlist_adds))
#Aggregate downgrades by total count per user
downgraded = df.select('userId','page').where(col('page')=='Downgrade').groupby('userId').count()
dat_f.append(('downgraded', downgraded))
#Aggregate upgraded by total count per user
upgraded = df.select('userId','page').where(col('page')=='Upgrade').groupby('userId').count()
dat_f.append(('upgraded', upgraded))
#Aggregate users by number of features
NumSongs = (df.select('userId', 'song').groupby('userId').agg({'song':'count'})).select('userId', col('count(song)').alias('count'))
dat_f.append(('NumSongs', NumSongs))
#Aggregate method_put by total count per user
method_put = df.select('method', 'userid').where(col('method') == 'PUT').groupby('userId').count()
dat_f.append(('method_put', method_put))
#Aggregate method_get by total count per user
method_get = df.select('method', 'userid').where(col('method') == 'GET').groupby('userId').count()
dat_f.append(('method_get', method_get))
#Group gender per user
gender = (df.select('userId','gender').groupby('userId','gender').count()).select(\
'userId', col('gender').alias('count'))
dat_f.append(('gender', gender))
    #Aggregate status_307 per user
status_307 = df.select('status', 'userid').where(col('status') == 307).groupby('userId').count()
dat_f.append(('status_307', status_307))
    #Aggregate status_404 per user
status_404 = df.select('status', 'userid').where(col('status') == 404).groupby('userId').count()
dat_f.append(('status_404', status_404))
    #Aggregate status_200 per user
status_200 = df.select('status', 'userid').where(col('status') == 200).groupby('userId').count()
dat_f.append(('status_200', status_200))
#Aggregate for each user, how many times it streamed sparkify songs using level -FREE
level_free = df.select('level', 'userid').where(col('level') == 'free').groupby('userId').count()
dat_f.append(('level_free', level_free))
#Aggregate for each user, how many times it streamed sparkify songs using level -PAID
level_paid = df.select('level', 'userid').where(col('level') == 'paid').groupby('userId').count()
dat_f.append(('level_paid', level_paid))
#Aggregate number/count of of log-outs
logOuts = df.select('userId','page').where(col('page')=='Logout').groupby('userId').count()
dat_f.append(('logOuts', logOuts))
#Group gender per user
#gender = (df.select('userId','gender').groupby('userId','gender').count()).select('userId', col('gender').alias('count'))
return dat_f
def Concat_DataFrames(dat_f, how = 'outer'):
'''
Concatenates all feature dataframes into one dataframe where each column is a feature, an only have one userId column
Assumes all new dataframes to have a 'userId' column, and a 'count' column
Args:
data_f: list_dataframes - The list of dataframes to concatenate
how - 'outer' the type of join to use
Returns:
returns a dataframe of the features needed.
'''
#Combine all the features into a given dataframe that will be the features dataframe.
#Get the first dataframe and rename count to its actual name
feature_df0 = dat_f[0][1].withColumnRenamed('count', dat_f[0][0])
feature_df0 = feature_df0.withColumnRenamed('userId', 'userId1')
#for the remaining dataframe, join using 'outer' to retain the number of users in the first dataframe
for name, dataframe in dat_f[1:]:
#rename column 'count' to actual name
dataframe = dataframe.withColumnRenamed('count', name)#rename column 'count' to actual name
#join two dataframes into one and delete the new userId
feature_df0 = feature_df0.join(dataframe, feature_df0.userId1 == dataframe.userId, how)\
.drop(dataframe.userId)#join two dataframes into one and delete the new userId
return feature_df0
#Use the above function to create a features_dataframe
feature_df = Concat_DataFrames(engineer_features(df), how = 'outer')#Create the features dataframe
feature_df.cache()
###Output
_____no_output_____
###Markdown
Report descriptive statistics of each feature to be used in modelling
###Code
#Features engineered
feats = engineer_features(df)
#Get and report the descriptive statistics of each feature
#for name, dataframe in feats:
# print(name)
# dataframe.describe().show()
#First batch
feature_df.select('userId1', 'length', 'dislikes', 'likes', 'friend_adds', 'playlist_adds').describe().show()
#Second batch
feature_df.select('downgraded', 'upgraded', 'NumSongs', 'gender', 'method_put', 'method_get').describe().show()
#Third batch
feature_df.select('status_307', 'status_404', 'status_200', 'level_free', 'level_paid').describe().show()
###Output
+-------+------------------+------------------+------------------+------------------+------------------+
|summary| status_307| status_404| status_200| level_free| level_paid|
+-------+------------------+------------------+------------------+------------------+------------------+
| count| 224| 118| 226| 196| 166|
| mean|117.99107142857143|2.1864406779661016|1149.6106194690265|297.64285714285717|1374.4698795180723|
| stddev|237.57047556804517|1.4319262774221952| 1244.97869992402|348.09607427128066| 1309.055160884664|
| min| 1| 1| 6| 4| 1|
| max| 3246| 7| 8909| 2617| 7779|
+-------+------------------+------------------+------------------+------------------+------------------+
###Markdown
Null elements
###Code
#Since some users will not have values for certain features, they were captured as NULL. In actual sense, these null elements
# are zero values, considering aggregates were used. Hence, I will replace these null elements with zeros for all columns.
feature_df = feature_df.fillna(0)
feature_df.show(4)
feature_df.hint('skew').count()
###Output
_____no_output_____
###Markdown
Categorical columns:
###Code
feature_df.persist()
###Output
_____no_output_____
###Markdown
The only truly categorical feature at this point is gender, and this can simply be taken care of using string indexing. Other feature scaling and selection steps will include: a. vector assembling of all features into one vector; b. feature scaling using the chosen scaler. For all these, I will pipeline them into one series of steps and execute them alongside the modeling.
Distribution of features / Choosing a Scaler
###Code
#Data Visualizations
#using histograms to find distributions of each field
feature_df1 = feature_df.toPandas().set_index('userId1')
feature_df1.iloc[:,:].hist(figsize = (8,5))
pyplot.show()
###Output
_____no_output_____
###Markdown
The distribution of all the features deviates from normal and can best be described as log-normal. In addition, there appear to be some outliers in the features. Hence, a Normalizer and a MinMaxScaler will be chosen over a standard scaler to effectively scale the features.
Combine the feature and label dataframes into df_ready and remove the user ''
###Code
#Create the label column for all users
label = (df.select('label', 'userid').groupby('userId','label').count()).select(\
'userId', 'label')
#Combine
df_ready = feature_df.join(label, feature_df.userId1 == label.userId, how = 'outer').drop(label.userId)#join two dataframes
df_ready = df_ready.select('*').where(col('userId1') != '')#Remove the invalid user
df_ready = df_ready.withColumnRenamed('userId1', 'Id')#rename the userId1 to Id
#df_ready.count()
df_ready.persist()
from pyspark.ml.feature import MinMaxScaler
###Output
_____no_output_____
###Markdown
Principal Component Analysis
The goal here is to apply PCA and determine the number of k features to use when building the PCA step for pipeline training. I will first test all 16 k-values (features) and, from the explained variance, choose the optimal number of k features to use in all my modelling going forward.
###Code
Indexer1 = StringIndexer(inputCol = 'gender', outputCol = 'gender_indexed')
#Assemble features into one
assembler = VectorAssembler(inputCols=['length', 'likes', 'dislikes', 'friend_adds', 'playlist_adds', 'downgraded', \
'upgraded','NumSongs', 'method_put', 'method_get', 'status_307', 'status_404',\
'status_200', 'level_free', 'level_paid', 'logOuts', 'gender_indexed'], outputCol="features")
#use MinMaxScaler to scale features (Normalizer kept as a commented-out alternative)
#NScaler = Normalizer(inputCol = 'features', outputCol = 'scaledFeatures')
NScaler = MinMaxScaler(inputCol = 'features', outputCol = 'scaledFeatures')
#Apply principal component analysis to reduce number of features to the most important features
#pca = PCA(k= 10, inputCol="scaledFeatures", outputCol="pcaFeatures")
#Define pipeline and get a new train set
pipeline1 = Pipeline(stages=[Indexer1, assembler, NScaler])#pipeline the transformations
train_Vect = pipeline1.fit(train_set).transform(train_set)#Fit and transform the train set
train_Vect.head()
#PCA_fitted = pipeline1.fit(train_set)
#Apply principal component analysis to reduce number of features to the most important features
#pca = PCA(k= 10, inputCol="scaledFeatures", outputCol="pcaFeatures")
#train_PCA = PCA_fitted.transform(train_set).select("pcaFeatures")
#train_PCA.show(truncate=False)
del(PCA_fitted)
#Apply principal component analysis to reduce number of features to the most important features
pca = PCA(k= 16, inputCol="scaledFeatures", outputCol="pcaFeatures")
PCA_fitted = pca.fit(train_Vect)
PCA_fitted.explainedVariance
###Output
_____no_output_____
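###Markdown
A small helper sketch for reading the vector above: compute the cumulative explained variance and pick the smallest k that crosses a chosen threshold (the 0.999 threshold is an assumption used only for illustration).
###Code
import numpy as np
# Cumulative explained variance and the smallest k reaching the chosen threshold
explained = PCA_fitted.explainedVariance.toArray()
cumulative = np.cumsum(explained)
k_opt = int(np.argmax(cumulative >= 0.999) + 1)
print('smallest k reaching 99.9% of variance:', k_opt)
print('cumulative variance by component:', np.round(cumulative, 4))
###Output
_____no_output_____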
###Markdown
The 10 most important features account for more than 99.9 percent of the variance, so I will limit my k value to 10 features.
Weighting:
Define a udf that assigns a weight of 1 to every minority-class (churn) sample, and the ratio of majority samples to the total number of samples to every majority-class sample. This is in a bid to mitigate the effect of the sampling bias against the minority class in the dataset and in the predictions made by the different machine learning models.
###Code
#ratio of sampling of majority class over the total samples
w_factor = stayed.count()/df.count()
w_factor
#Define a udf to apply weighting
weigh_col = udf(lambda x: 1 if x==1 else w_factor)
#Appy the udf on the dataframe
df_ready = df_ready.withColumn('classWeights', weigh_col(df_ready.label).cast('float'))
df_ready.show(5)
###Output
+------+------------------+--------+-----+-----------+-------------+----------+--------+--------+----------+----------+------+----------+----------+----------+----------+----------+-------+-----+------------+
| Id| length|dislikes|likes|friend_adds|playlist_adds|downgraded|upgraded|NumSongs|method_put|method_get|gender|status_307|status_404|status_200|level_free|level_paid|logOuts|label|classWeights|
+------+------------------+--------+-----+-----------+-------------+----------+--------+--------+----------+----------+------+----------+----------+----------+----------+----------+-------+-----+------------+
|100010| 243.421444909091| 5| 17| 4| 7| 0| 2| 275| 313| 68| F| 31| 0| 350| 381| 0| 5| 0| 0.81427574|
|200002|242.91699209302305| 6| 21| 4| 8| 5| 2| 387| 432| 42| M| 37| 0| 437| 120| 354| 5| 0| 0.81427574|
| 125|261.13913750000006| 0| 0| 0| 0| 0| 0| 8| 9| 2| M| 1| 0| 10| 11| 0| 0| 1| 1.0|
| 124|248.17653659965674| 41| 171| 74| 118| 41| 0| 4079| 4548| 277| F| 351| 6| 4468| 0| 4825| 59| 0| 0.81427574|
| 51|247.88055082899118| 21| 100| 28| 52| 23| 0| 2111| 2338| 126| M| 175| 1| 2288| 0| 2464| 24| 1| 1.0|
+------+------------------+--------+-----+-----------+-------------+----------+--------+--------+----------+----------+------+----------+----------+----------+----------+----------+-------+-----+------------+
only showing top 5 rows
###Markdown
Modeling
Split the full dataset into train, test, and validation sets. Test out several of the machine learning methods you learned. Evaluate the accuracy of the various models, tuning parameters as necessary. Determine your winning model based on test accuracy and report results on the validation set.
First I will define a baseline model using logistic regression. Then I will use grid search to tune the best-performing parameters for various models, and further apply the best-performing model to the validation set; a sketch of a three-way split is shown below.
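A minimal sketch of the three-way split described above (the variable names train_s, valid_s, and test_s are hypothetical; the cells that follow use a simpler 75/25 train/test split instead):
###Code
# Possible three-way split matching the plan above (hypothetical names; later cells use a 75/25 split instead)
train_s, valid_s, test_s = df_ready.randomSplit([0.6, 0.2, 0.2], seed=5)
print(train_s.count(), valid_s.count(), test_s.count())
###Output
_____no_output_____
###Markdown
Split into Train and Test datasets:
Split the dataset into train and test sets in the ratio (0.75, 0.25).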
###Code
train_set, test_set = df_ready.randomSplit([0.75,0.25], seed = 5)
train_set.persist()
###Output
_____no_output_____
###Markdown
Baseline Model - LogisticRegression
###Code
#LogisticRegression
#Apply string Indexer on the genger column
Indexer1 = StringIndexer(inputCol = 'gender', outputCol = 'gender_indexed')
#Assemble features into one
assembler = VectorAssembler(inputCols=['length', 'likes', 'dislikes', 'friend_adds', 'playlist_adds','logOuts', 'downgraded', \
'upgraded','NumSongs', 'method_put', 'method_get', 'status_307', 'status_404',\
'status_200', 'level_free', 'level_paid', 'gender_indexed'], outputCol="features")
#use MinMaxScaler to scale features (Normalizer kept as a commented-out alternative)
#NScaler = Normalizer(inputCol = 'pcaFeatures', outputCol = 'scaledFeatures')
NScaler = MinMaxScaler(inputCol = 'features', outputCol = 'scaledFeatures')
#Apply principal component analysis to reduce number of features to the most important features
pca = PCA(k=10, inputCol="scaledFeatures", outputCol="pcaFeatures")
#modeler = LogisticRegression(fitIntercept = False, family="multinomial", featuresCol = 'scaledFeatures')
#1
modeler = LogisticRegression(fitIntercept = False, family="multinomial", featuresCol = 'pcaFeatures', maxIter = 10,\
regParam=0.3, elasticNetParam=0.8, weightCol = 'classWeights')
#2
#modeler = LogisticRegression(fitIntercept = False, family="multinomial", featuresCol = 'scaledFeatures', maxIter = 10,\
# regParam=0.3, elasticNetParam=0.5)
#Design a pipeline to run through all the modeling process
pipeline = Pipeline(stages=[Indexer1, assembler, NScaler, pca, modeler])
del(best_model)
best_model = pipeline.fit(train_set)
best_model.save("LRmodel_3")
path = "LRmodel_3"
LR_mod = PipelineModel.load(path)
def evaluate_model(built_model, model_name, testing_set, evaluator = MulticlassClassificationEvaluator()):
'''
    Function to evaluate the performance of a model; it prints the important score metrics.
    Args:
    built_model - the trained model whose performance is to be evaluated.
    testing_set - the dataset the model is to be tested on; could be train, test, or validation.
    evaluator - the type of evaluator; default is MulticlassClassificationEvaluator.
Returns:
Print important metrics including Accuracy and Recall
'''
#Number of possible predictions
gh = testing_set.count()
#make predictions
pred_results = built_model.transform(testing_set).select("features", "label", "prediction")
#define an object of the evaluator
#evaluator = BinaryClassificationEvaluator(metric = a)
evaluator = evaluator
#compute all the important metrics
TPTN = pred_results.filter(pred_results.label == pred_results.prediction).count()#True Positive and True Negative
print(f"The Accuracy of {model_name} is: {TPTN/gh}")
#True Positives
TP = pred_results.filter(pred_results.label == pred_results.prediction).where(col('prediction') == 1).count()
    #False Negatives
FN = pred_results.filter(pred_results.label != pred_results.prediction).where(col('prediction') == 0).count()
#Recall
Recall = TP/(TP+FN)
print(f"The Recall of {model_name} is: {Recall}")
F1 = evaluator.evaluate(pred_results)
print(f"The F1-score of {model_name} is: {F1}")
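# One possible way to invoke the helper above (hedged example; assumes LR_mod and test_set from the surrounding cells)
evaluate_model(LR_mod, 'Logistic Regression baseline', test_set)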
#Use model to make predictions
pred_results = LR_mod.transform(test_set)\
.select("id", "label", "prediction")
#Find out if there is a prediction of a churn user
pred_results.select('*').where(col('label') == 1).show()
#Define the evaluator object
evaluator = MulticlassClassificationEvaluator()
TPTN = pred_results.filter(pred_results.label == pred_results.prediction).count()#True Positive and True Negative
TP = pred_results.filter(pred_results.label == pred_results.prediction).where(col('prediction') == 1).count()#TruePositivesOnly
FN = pred_results.filter(pred_results.label != pred_results.prediction).where(col('prediction') == 0).count()#FalseNegatives
#Recall
Recall = TP/(TP+FN)
Recall
#Total predictables
gh = test_set.count()#Total
#F1 score
F1 = evaluator.evaluate(pred_results)
F1
#Accuracy
Accuracy = TPTN/gh
Accuracy
###Output
_____no_output_____
###Markdown
RandomForest - Cross Validation and Grid search
###Code
#Apply string Indexer on the gender column
Indexer1 = StringIndexer(inputCol = 'gender', outputCol = 'gender_indexed')
#Assemble features into one
assembler = VectorAssembler(inputCols=['length', 'likes', 'dislikes', 'friend_adds', 'playlist_adds','logOuts', 'downgraded', \
'upgraded','NumSongs', 'method_put', 'method_get', 'status_307', 'status_404',\
'status_200', 'level_free', 'level_paid', 'gender_indexed'], outputCol="features")
#Normalizer not needed in RandomForests
NScaler = MinMaxScaler(inputCol = 'features', outputCol = 'scaledFeatures')
#Apply principal component analysis to reduce number of features to the most important features
pca = PCA(k=10, inputCol="scaledFeatures", outputCol="pcaFeatures")
#modeler = RandomForestClassifier(labelCol="label", featuresCol="features", numTrees = 8, maxDepth = 4)
modeler = RandomForestClassifier(featuresCol = 'pcaFeatures')#RandomForest Classifier
#Design a pipeline to run through all the modeling process
pipeline = Pipeline(stages=[Indexer1, assembler, NScaler, pca, modeler])
paramGrid = []
#RandomForest
paramGrid.append((ParamGridBuilder() \
.addGrid(modeler.numTrees, [5, 8, 10, 15])\
.addGrid(modeler.maxDepth, [4,8]) \
.build()))
#Apply cross validation to search the parameter grid for the best performing combination
cross_val = CrossValidator(estimator = pipeline,
estimatorParamMaps = paramGrid[0],
evaluator = MulticlassClassificationEvaluator(),
numFolds = 3)
#Set threshold parameters to prevent crash of jobs
spark.conf.set("spark.sql.broadcastTimeout", 50000)
spark.conf.set("spark.driver.memory", "10g")
#Fit model to test set
best_model2 = cross_val.fit(train_set)
best_model2.bestModel.save('best_rF_model1')#Save model
path = "best_rF_model1"
RF_mod = PipelineModel.load(path)#Load model
###Output
_____no_output_____
###Markdown
Obtain the best performing parameters from the parameter grid
###Code
java_model = RF_mod.stages[-1]._java_obj
#best Model's number of trees
java_model.getNumTrees()
#Best model's max depth
java_model.getMaxDepth()
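#A version-agnostic alternative sketch (using only the public Params API): the last pipeline stage is the
#fitted RandomForest model itself, and explainParams()/extractParamMap() list whatever parameters the
#Python wrapper exposes in your Spark version, without reaching into the private _java_obj.
rf_best = RF_mod.stages[-1]
print(rf_best.explainParams())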
#Use the F1-score to judge how well the model distinguishes between the two classes in its predictions,
#i.e. how often it predicts a 1 as 1 and a 0 as 0.
#make predictions
pred_results = RF_mod.transform(test_set)\
.select('id', "label", "prediction")
#Define an evaluator object
evaluator = MulticlassClassificationEvaluator()
TPTN = pred_results.filter(pred_results.label == pred_results.prediction).count()#True Positive and True Negative
TP = pred_results.filter(pred_results.label == pred_results.prediction).where(col('prediction') == 1).count() #TruePositives
FN = pred_results.filter(pred_results.label != pred_results.prediction).where(col('prediction') == 0).count()#FalseNegatives
gh = test_set.count()#Total
#Recall
Recall = TP/(TP+FN)
Recall
#F1 score
F1 = evaluator.evaluate(pred_results)
F1
#Accuracy
Accuracy = TPTN/gh
Accuracy
pred_results.filter(pred_results.prediction == 1).select('id', 'label', 'prediction').show()
pred_results.filter(pred_results.prediction == 1).select('id', 'label', 'prediction').show()
###Output
+------+-----+----------+
| id|label|prediction|
+------+-----+----------+
| 139| 0| 1.0|
|300014| 0| 1.0|
|100012| 1| 1.0|
+------+-----+----------+
###Markdown
Decision TreesCross validation and grid search
###Code
#Encoder the gender column
Indexer1 = StringIndexer(inputCol = 'gender', outputCol = 'gender_indexed')
#Assemble features into one
assembler = VectorAssembler(inputCols=['length', 'likes', 'dislikes', 'friend_adds', 'playlist_adds', 'downgraded', \
'upgraded','NumSongs', 'method_put', 'method_get', 'status_307', 'status_404',\
'status_200', 'level_free', 'level_paid', 'gender_indexed'], outputCol="features")
#certain parameters as numClasses is equal to 2,
modeler = DecisionTreeClassifier(labelCol="label", featuresCol="features")#DecisionTree Classifier
pipeline = Pipeline(stages=[Indexer1, assembler, modeler])
paramGrid = []
#DecisionTree
paramGrid.append((ParamGridBuilder() \
.addGrid(modeler.maxDepth, [1,2,3, 4]) \
.addGrid(modeler.maxBins, [2,10,32]) \
.build()))
spark.conf.set("spark.sql.broadcastTimeout", 50000)
spark.conf.set("spark.driver.memory", "10g")
#Apply cross validation to search the parameter grid for the best performing combination
cross_val = CrossValidator(estimator = pipeline,
estimatorParamMaps = paramGrid[0],
evaluator = BinaryClassificationEvaluator(),
numFolds = 3)
#Fit model to test set
DT_model = cross_val.fit(train_set)
# Save and load model
DT_model.bestModel.save("DTreeModel_CV")
#path = "DTreeModel_CV"
#test_mod = PipelineModel.load(pathModel)
#Get the model's best performing parameters
java_model = DT_model.bestModel.stages[-1]._java_obj
#best Model's max depth choice
java_model.getMaxDepth()
#Model's best performing Max Bins
java_model.getMaxBins()
path = "DTreeModel_CV"
DT_mod = PipelineModel.load(path)
#Make predictions
pred_results = DT_mod.transform(test_set)\
.select('id', "label", "prediction")
#F1-score
F1 = evaluator.evaluate(pred_results)
F1
TP = pred_results.filter(pred_results.label == pred_results.prediction).where(col('prediction') == 1).count() #TruePositives
TPTN = pred_results.filter(pred_results.label == pred_results.prediction).count()#True Positive and True Negative
FN = pred_results.filter(pred_results.label != pred_results.prediction).where(col('prediction') == 0).count()#FalseNegatives
#Recall
Recall = TP/(TP+FN)
Recall
#Accuracy
Accuracy = TPTN/gh
Accuracy
#How much of the churn users predictions were accurate
pred_results.select("label", "prediction").where(col('prediction') == 1).show()
###Output
+-----+----------+
|label|prediction|
+-----+----------+
| 0| 1.0|
| 0| 1.0|
| 0| 1.0|
| 1| 1.0|
| 1| 1.0|
+-----+----------+
###Markdown
Repeat after new features were engineered, this time applying PCA
###Code
#Apply a StringIndexer to the gender column
Indexer1 = StringIndexer(inputCol = 'gender', outputCol = 'gender_indexed')
#Assemble features into one
assembler = VectorAssembler(inputCols=['length', 'likes', 'dislikes', 'friend_adds', 'playlist_adds','logOuts', 'downgraded', \
'upgraded','NumSongs', 'method_put', 'method_get', 'status_307', 'status_404',\
'status_200', 'level_free', 'level_paid', 'gender_indexed'], outputCol="features")
#Normalizer not needed in RandomForests
#NScaler = MinMaxScaler(inputCol = 'features', outputCol = 'scaledFeatures')
#Apply principal component analysis to reduce number of features to the most important features
pca = PCA(k=10, inputCol="features", outputCol="pcaFeatures")
#modeler = NaiveBayes(labelCol="label", featuresCol="pcaFeatures", weightCol = 'classWeights')
#modeler = RandomForestClassifier(featuresCol = 'pcaFeatures')#RandomForest Classifier
modeler = DecisionTreeClassifier(labelCol="label", featuresCol="pcaFeatures", maxDepth = 3, maxBins = 32)
#Design a pipeline to run through all the modeling process
pipeline2 = Pipeline(stages=[Indexer1, assembler, pca, modeler])
DT_mod = pipeline2.fit(train_set)
#make predictions
pred_results = DT_mod.transform(test_set)\
.select('id', "label", "prediction")
#How much of the churn users predictions were accurate
pred_results.select("label", "prediction").where(col('prediction') == 1).show()
evaluate_model(DT_mod, 'DecisionTree', test_set, evaluator = MulticlassClassificationEvaluator())
#feature_imp = RF_mod.featureImportances.values.tolist()
###Output
_____no_output_____
###Markdown
GradientBoostingTree
###Code
#Apply a StringIndexer to the gender column
Indexer1 = StringIndexer(inputCol = 'gender', outputCol = 'gender_indexed')
#Assemble features into one
assembler = VectorAssembler(inputCols=['length', 'likes', 'dislikes', 'friend_adds', 'playlist_adds','logOuts', 'downgraded', \
'upgraded','NumSongs', 'method_put', 'method_get', 'status_307', 'status_404',\
'status_200', 'level_free', 'level_paid', 'gender_indexed'], outputCol="features")
#Normalizer not needed in RandomForests
#NScaler = MinMaxScaler(inputCol = 'features', outputCol = 'scaledFeatures')
#Apply principal component analysis to reduce number of features to the most important features
pca = PCA(k=10, inputCol="features", outputCol="pcaFeatures")
#modeler = RandomForestClassifier(labelCol="label", featuresCol="features", numTrees = 8, maxDepth = 4)
modeler = GBTClassifier(featuresCol = 'pcaFeatures', maxIter = 10)#GradBoostTree Classifier
#Design a pipeline to run through all the modeling process
pipeline = Pipeline(stages=[Indexer1, assembler, pca, modeler])
paramGrid = []
#GradientBoostedTree
paramGrid.append((ParamGridBuilder() \
.addGrid(modeler.maxDepth, [4,5, 8]) \
.build()))
#Apply cross validation to search the parameter grid for the best performing combination
cross_val = CrossValidator(estimator = pipeline,
estimatorParamMaps = paramGrid[0],
evaluator = MulticlassClassificationEvaluator(),
numFolds = 3)
#Fit model to test set
GBT_model = cross_val.fit(train_set)
# Save and load model
GBT_model.bestModel.save("GBTreeModel_CV")
path = "GBTreeModel_CV"
GBT_mod = PipelineModel.load(path)
#Get the model's best performing parameters
java_model = GBT_model.bestModel.stages[-1]._java_obj
#best Model's max depth choice
java_model.getMaxDepth()
#make predictions
pred_results = GBT_mod.transform(test_set)\
.select('id', "label", "prediction")
#How much of the churn users predictions were accurate
pred_results.select("label", "prediction").where(col('prediction') == 1).show()
evaluate_model(GBT_mod, 'GradBoostTree', test_set, evaluator = MulticlassClassificationEvaluator())
###Output
The Accuracy of GradBoostTree is: 0.8113207547169812
The Recall of GradBoostTree is: 0.6
The F1-score of GradBoostTree is: 0.8176509025565629
###Markdown
This is a very good model, even though its F1-score is slightly lower than the Decision Tree's because it makes more false-positive predictions.
###Code
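#To back up the comparison above, a confusion-matrix style breakdown can be printed straight from the
#GBT predictions produced in the previous cell (a minimal sketch reusing that pred_results DataFrame):
pred_results.groupBy('label', 'prediction').count().orderBy('label', 'prediction').show()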
#help(GBTClassifier)
###Output
_____no_output_____
###Markdown
OverSampling of churn Users using SMOTE
###Code
#adapted this SMOTESampling code online from https://github.com/Angkirat/Smote-for-Spark/blob/master/PythonCode.py
def SmoteSampling(vectorized, k = 5, minorityClass = 1, majorityClass = 0, percentageOver = 200, percentageUnder = 100):
'''
    Function to apply the Synthetic Minority Oversampling Technique (SMOTE) to the train set using PySpark,
    so as to increase the share of the minority class (label = 1) relative to the majority class (label = 0).
    Args:
        vectorized - the train set already in vectorized form (i.e. after VectorAssembler has been applied)
        k - the number of nearest neighbours used when generating synthetic minority samples
        minorityClass - the label value that constitutes the minority class
        majorityClass - the label value that constitutes the majority class
        percentageOver - how much (in percent) the minority class should be increased by
        percentageUnder - the percentage of the majority class to keep when sampling it
    Returns:
        A new DataFrame combining the sampled majority class with the original and synthetic minority rows
'''
    if percentageUnder > 100 or percentageUnder < 10:
        raise ValueError("Percentage Under must be in the range 10 - 100")
    if percentageOver < 100:
        raise ValueError("Percentage Over must be at least 100")
dataInput_min = vectorized[vectorized['label'] == minorityClass]
dataInput_maj = vectorized[vectorized['label'] == majorityClass]
feature = dataInput_min.select('features')
feature = feature.rdd
feature = feature.map(lambda x: x[0])
feature = feature.collect()
feature = np.asarray(feature)
nbrs = neighbors.NearestNeighbors(n_neighbors=k, algorithm='auto').fit(feature)
neighbours = nbrs.kneighbors(feature)
gap = neighbours[0]
neighbours = neighbours[1]
min_rdd = dataInput_min.drop('label').rdd
pos_rddArray = min_rdd.map(lambda x : list(x))
pos_ListArray = pos_rddArray.collect()
min_Array = list(pos_ListArray)
newRows = []
nt = len(min_Array)
nexs = int(percentageOver/100)
for i in range(nt):
for j in range(nexs):
neigh = random.randint(1,k)
difs = min_Array[neigh][0] - min_Array[i][0]
newRec = (min_Array[i][0]+random.random()*difs)
newRows.insert(0,(newRec))
newData_rdd = sc.parallelize(newRows)
newData_rdd_new = newData_rdd.map(lambda x: Row(features = x, label = 1))
new_data = newData_rdd_new.toDF()
new_data_minor = dataInput_min.unionAll(new_data)
new_data_major = dataInput_maj.sample(False, (float(percentageUnder)/float(100)))
return new_data_major.unionAll(new_data_minor)
Indexer1 = StringIndexer(inputCol = 'gender', outputCol = 'gender_indexed')
#Assemble features into one
assembler = VectorAssembler(inputCols=['length', 'likes', 'dislikes', 'friend_adds', 'playlist_adds','logOuts', 'downgraded', \
'upgraded','NumSongs', 'method_put', 'method_get', 'status_307', 'status_404',\
'status_200', 'level_free', 'level_paid', 'gender_indexed'], outputCol="features")
#Define a pipeline that applies the StringIndexer and VectorAssembler (and any other needed
#transformations) before applying SMOTE
pipeline1 = Pipeline(stages = [Indexer1, assembler])
train_set1 = pipeline1.fit(train_set).transform(train_set)#transform the train set using the indexer and assembler
train_set2 = train_set1.select('features', 'label')#bring out only the feature and label columns
new_train = SmoteSampling(train_set2, k = 2, minorityClass = 1, majorityClass = 0)#apply Smote to the new train data
test_set1 = pipeline1.fit(test_set).transform(test_set)#transform test_set to be in the form of train set, without adding rows
new_train.persist()
###Output
_____no_output_____
###Markdown
RandomForestClassifier performance testing using SMOTE train set.
###Code
#spark.conf.set("spark.sql.broadcastTimeout", 5000)
modeler = RandomForestClassifier(labelCol="label", featuresCol="features", numTrees = 8, maxDepth = 4)
best_model = modeler.fit(new_train)#fit on the SMOTE-balanced train set, as the section heading describes
#best_model.save(sc, 'best_rF_model')
test_set1 = pipeline1.fit(test_set).transform(test_set)
pred_results = best_model.transform(test_set1)
pred_results.select("label", "prediction").where(col('prediction') == 1).show()#print number of predictions of users as churn
###Output
+-----+----------+
|label|prediction|
+-----+----------+
| 0| 1.0|
| 0| 1.0|
| 0| 1.0|
| 0| 1.0|
| 0| 1.0|
| 0| 1.0|
| 0| 1.0|
| 1| 1.0|
+-----+----------+
###Markdown
DecisionTree performance testing using SMOTE train set.
###Code
Indexer1 = StringIndexer(inputCol = 'gender', outputCol = 'gender_indexed')
#Assemble features into one
assembler = VectorAssembler(inputCols=['length', 'likes', 'dislikes', 'friend_adds', 'playlist_adds', 'downgraded', 'upgraded',\
'NumSongs', 'method_put', 'method_get', 'status_307', 'status_404',\
'status_200', 'level_free', 'level_paid', 'gender_indexed'], outputCol="features")
#Apply a pipeline of just the indexer and assembler to prepare the test set
pipeline1 = Pipeline(stages = [Indexer1, assembler])
#Transform the test set
test_set1 = pipeline1.fit(test_set).transform(test_set)
new_train.count()
#Define modeling technique
modeler = DecisionTreeClassifier(labelCol="label", featuresCol="features", maxBins = 32, maxDepth = 3)
#Fit model to the new train set
best_model2 = modeler.fit(new_train)
pred_results1 = best_model2.transform(test_set1)#make predictions on the test_set
pred_results1.select("label", "prediction").where(col('prediction') == 1).show()#print number of predictions of users as churn
#Apply principal component analysis to reduce number of features to the most important features
pca = PCA(k=10, inputCol="features", outputCol="pcaFeatures")
#modeler = NaiveBayes(labelCol="label", featuresCol="pcaFeatures", weightCol = 'classWeights')
#modeler = RandomForestClassifier(featuresCol = 'pcaFeatures')#RandomForest Classifier
modeler = DecisionTreeClassifier(labelCol="label", featuresCol="pcaFeatures", maxDepth = 3, maxBins = 32)
#Design a pipeline to run through all the modeling process
pipeline2 = Pipeline(stages=[pca, modeler])
#Fit model to the new train set
newDT_mod = pipeline2.fit(new_train)
pred_results2 = newDT_mod.transform(test_set1)#make predictions on the test_set
#How much of the churn users predictions were accurate
pred_results2.select("label", "prediction").where(col('label') == 1).show()
evaluate_model(newDT_mod, 'DecTree', test_set1, evaluator = MulticlassClassificationEvaluator())
###Output
The Accuracy of DecTree is: 0.6981132075471698
The Recall of DecTree is: 0.2
The F1-score of DecTree is: 0.69811320754717
###Markdown
Gradient Boosting Tree using SMOTE
###Code
#Apply principal component analysis to reduce number of features to the most important features
pca = PCA(k=10, inputCol="features", outputCol="pcaFeatures")
#modeler = NaiveBayes(labelCol="label", featuresCol="pcaFeatures", weightCol = 'classWeights')
#modeler = RandomForestClassifier(featuresCol = 'pcaFeatures')#RandomForest Classifier
modeler = GBTClassifier(featuresCol = 'pcaFeatures', maxIter = 10, maxDepth = 4)#Gradient-boosted tree classifier
#Design a pipeline to run through all the modeling process
pipeline2 = Pipeline(stages=[pca, modeler])
#Fit model to the new train set
newDT_mod = pipeline2.fit(new_train)
pred_results2 = newDT_mod.transform(test_set1)#make predictions on the test_set
#How much of the churn users predictions were accurate
pred_results2.select("label", "prediction").where(col('label') == 1).show()
evaluate_model(newDT_mod, 'GradBoostTree', test_set1, evaluator = MulticlassClassificationEvaluator())
###Output
The Accuracy of GradBoostTree is: 0.6981132075471698
The Recall of GradBoostTree is: 0.5
The F1-score of GradBoostTree is: 0.7216255442670536
|
Deep Learning/Convolutional Neural Networks/CNN.ipynb
|
###Markdown
CNN - Saket Tiwari - 29 Jun 2019
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import keras
from keras.layers import Dense,Activation, Dropout, MaxPooling2D, Convolution2D, Flatten
from keras.models import Sequential
from keras.utils import np_utils
from keras.datasets import cifar10
(X_train,y_train),(X_test,y_test) =cifar10.load_data()
print(X_train.shape,y_train.shape)
print(X_test.shape,y_test.shape)
plt.imshow(X_train[0])
np.unique(y_train)
#one hot encoding
y_train= np_utils.to_categorical(y_train)
y_test= np_utils.to_categorical(y_test)
print(y_train.shape,y_test.shape)
model = Sequential()
model.add(Convolution2D(32, 3, 3, input_shape=(32,32,3)))
model.add(Activation('relu'))
#output -> (30,30,32)
model.add(Convolution2D(64,3,3))
model.add(Activation('relu'))
#output ->(28,28,64)
model.add(MaxPooling2D(pool_size=(2,2)))
#output ->(14,14,64)
model.add(Convolution2D(16,3,3))
model.add(Activation('relu'))
#output -> (12,12,16)
model.add(Flatten())
#12*12*16
#Regularisation using Dropout
model.add(Dropout(0.25))
#output layer
#10->no of classes
model.add(Dense(10))
model.add(Activation('softmax'))
model.summary()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train,y_train,batch_size=100,epochs=1,validation_data=(X_test,y_test))
###Output
Train on 50000 samples, validate on 10000 samples
Epoch 1/1
50000/50000 [==============================] - 89s 2ms/step - loss: 2.3027 - acc: 0.0986 - val_loss: 2.3026 - val_acc: 0.1000
|
word2vec-nlp-tutorial/.ipynb_checkpoints/tutorial-part-1-checkpoint.ipynb
|
###Markdown
Bag of Words Meets Bags of Popcorn* https://www.kaggle.com/c/word2vec-nlp-tutorial [Natural language processing - Wikipedia (Korean)](https://ko.wikipedia.org/wiki/%EC%9E%90%EC%97%B0_%EC%96%B8%EC%96%B4_%EC%B2%98%EB%A6%AC)* Natural language processing (NLP) refers to the techniques for mechanically analysing the language humans speak so that a computer can understand it, or for expressing such a representation back in a language humans can understand. (Source: Wikipedia) Kaggle competitions related to NLP* [Sentiment Analysis on Movie Reviews | Kaggle](https://www.kaggle.com/c/sentiment-analysis-on-movie-reviews)* [Toxic Comment Classification Challenge | Kaggle](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge)* [Spooky Author Identification | Kaggle](https://www.kaggle.com/c/spooky-author-identification) Tutorial overview Part 1 * Covers basic natural language processing, aimed at beginners. Parts 2 and 3* Look at how to train a model with Word2Vec and how to use word vectors for sentiment analysis.* Part 3 does not provide a recipe; instead it experiments with several ways of using Word2Vec.* Part 3 also tries clustering with the K-means algorithm.* We pursue this goal on the IMDB sentiment analysis data set of 100,000 reviews, a mix of positive and negative ones. Evaluation - ROC curve (Receiver-Operating Characteristic curve)* A graph with the TPR (True Positive Rate) and FPR (False Positive Rate) on its two axes* Sensitivity, TPR - the proportion of actual 1 cases predicted as 1 - e.g. diagnosing a cancer patient as having cancer* FPR (1 - specificity) - the proportion of actual 0 cases wrongly predicted as 1 - e.g. diagnosing a patient without cancer as having cancer * Both X and Y lie in the range [0, 1], and the curve runs from (0, 0) to (1, 1).* The closer the area under the ROC curve is to 1 (the closer the curve gets to the top-left corner), the better the performance* References: * [New Sight :: ROC curve, AUC, sensitivity, specificity](http://newsight.tistory.com/53) * [Computing the AUC of a ROC curve](http://adnoctum.tistory.com/121) * [Receiver operating characteristic - Wikipedia](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) Use Google's Word2Vec for movie reviews* By analysing natural-language text we can find out which words were used and how often, classify what kind of text it is, do sentiment analysis (positive or negative), and summarise its content.* Sentiment analysis is a hard problem in machine learning: sarcasm, ambiguity, irony and wordplay can mislead computers and people alike. Here we work through a sentiment-analysis tutorial using Word2Vec.* Google's Word2Vec helps capture the meaning of words and the relationships between them.* A good deal of NLP functionality is implemented in the nltk module, which is made up of corpora, functions and algorithms.* Try a word-embedding model yourself: [Korean Word2Vec](http://w.elnn.kr/search/) BOW (bag of words)* The simplest approach, but effective and therefore widely used* It discards the structure of the input text - chapters, paragraphs, sentences, formatting - and only counts how many times each word appears in the corpus.* Because only the counts matter and structure is ignored, you can think of it as a bag holding the text.* The drawback of BOW is that word order is completely ignored. For example, take two sentences with opposite meanings: - `it's bad, not good at all.` - `it's good, not bad at all.` * The two sentences above mean the opposite of each other but are represented identically.* To compensate, n-grams are used: BOW uses single tokens, whereas an n-gram uses n tokens at a time.* [Bag-of-words model - Wikipedia](https://en.wikipedia.org/wiki/Bag-of-words_model) Part 1 What is NLP?NLP (natural language processing) is a set of techniques for approaching text problems.In this tutorial we load the IMDB movie reviews, clean them, apply a simple BOW (Bag of Words) model, and predict whether or not a review recommends the movie, measuring the accuracy. Before you start: this tutorial is written in Python; if you are already familiar with NLP you can skip ahead to Part 2.
###Code
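# Since the competition is scored by the area under the ROC curve, here is a tiny sketch of how that
# metric behaves, using scikit-learn's roc_auc_score on made-up labels and scores (illustrative only):
from sklearn.metrics import roc_auc_score
y_true_demo = [0, 0, 1, 1]
print(roc_auc_score(y_true_demo, [0.1, 0.4, 0.35, 0.8]))  # one mis-ranked pair -> 0.75
print(roc_auc_score(y_true_demo, [0.5, 0.5, 0.5, 0.5]))   # uninformative scores -> 0.5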
import pandas as pd
"""
header = 0 indicates that the first line of the file contains the column names,
delimiter = \t means the fields are separated by tabs, and
quoting = 3 tells the parser to ignore double quotes.
"""
# QUOTE_MINIMAL (0), QUOTE_ALL (1),
# QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
# Training data, which includes the sentiment label
train = pd.read_csv('data/labeledTrainData.tsv',
header=0, delimiter='\t', quoting=3)
# Test data, which has no label
test = pd.read_csv('data/testData.tsv',
header=0, delimiter='\t', quoting=3)
train.shape
train.tail(3)
test.shape
test.tail()
train.columns.values
# There is no 'sentiment' label here; this is what we will predict with machine learning.
test.columns.values
train.info()
train.describe()
train['sentiment'].value_counts()
# HTML tags are mixed into the reviews, so the text needs cleaning
train['review'][0][:700]
###Output
_____no_output_____
###Markdown
데이터 정제 Data Cleaning and Text Preprocessing기계가 텍스트를 이해할 수 있도록 텍스트를 정제해 준다.신호와 소음을 구분한다. 아웃라이어데이터로 인한 오버피팅을 방지한다.1. BeautifulSoup(뷰티풀숩)을 통해 HTML 태그를 제거2. 정규표현식으로 알파벳 이외의 문자를 공백으로 치환3. NLTK 데이터를 사용해 불용어(Stopword)를 제거4. 어간추출(스테밍 Stemming)과 음소표기법(Lemmatizing)의 개념을 이해하고 SnowballStemmer를 통해 어간을 추출 텍스트 데이터 전처리 이해하기(출처 : [트위터 한국어 형태소 분석기](https://github.com/twitter/twitter-korean-text))**정규화 normalization (입니닼ㅋㅋ -> 입니다 ㅋㅋ, 샤릉해 -> 사랑해)*** 한국어를 처리하는 예시입니닼ㅋㅋㅋㅋㅋ -> 한국어를 처리하는 예시입니다 ㅋㅋ**토큰화 tokenization*** 한국어를 처리하는 예시입니다 ㅋㅋ -> 한국어Noun, 를Josa, 처리Noun, 하는Verb, 예시Noun, 입Adjective, 니다Eomi ㅋㅋKoreanParticle**어근화 stemming (입니다 -> 이다)*** 한국어를 처리하는 예시입니다 ㅋㅋ -> 한국어Noun, 를Josa, 처리Noun, 하다Verb, 예시Noun, 이다Adjective, ㅋㅋKoreanParticle**어구 추출 phrase extraction** * 한국어를 처리하는 예시입니다 ㅋㅋ -> 한국어, 처리, 예시, 처리하는 예시Introductory Presentation: [Google Slides](https://docs.google.com/presentation/d/10CZj8ry03oCk_Jqw879HFELzOLjJZ0EOi4KJbtRSIeU/) * 뷰티풀숩이 설치되지 않았다면 우선 설치해 준다.```!pip install BeautifulSoup4```
###Code
# Check the installation and version
!pip show BeautifulSoup4
from bs4 import BeautifulSoup
example1 = BeautifulSoup(train['review'][0], "html5lib")
print(train['review'][0][:700])
example1.get_text()[:700]
# Remove special characters using a regular expression
import re
# Replace anything that is not a lower- or upper-case letter with a space.
letters_only = re.sub('[^a-zA-Z]', ' ', example1.get_text())
letters_only[:700]
# Convert everything to lower case.
lower_case = letters_only.lower()
# Split the text into words => tokenization
words = lower_case.split()
print(len(words))
words[:10]
###Output
437
###Markdown
Stopword Removal. Words that appear very frequently in a corpus generally do not contribute to the learning or prediction process, because they do not distinguish one text from another. For example particles, suffixes, and words such as i, me, my, it, this, that, is, are appear frequently but contribute little to finding the actual meaning. Stopwords include terms like "to" or "the", so it is a good idea to remove them in the preprocessing step. NLTK comes with 153 predefined English stopwords; lists are defined for 17 languages, and Korean is not among them. NLTK data installation * http://corazzon.github.io/nltk_data_install
###Code
import nltk
from nltk.corpus import stopwords
stopwords.words('english')[:10]
# Tokens with the stopwords removed
words = [w for w in words if not w in stopwords.words('english')]
print(len(words))
words[:10]
###Output
219
###Markdown
Stemming (stem extraction, morphological analysis). Source: [어간 추출 - Wikipedia (Korean)](https://ko.wikipedia.org/wiki/%EC%96%B4%EA%B0%84_%EC%B6%94%EC%B6%9C)* Stemming is the process of removing affixes from an inflected word and isolating the word's stem* It lets us treat forms such as "message", "messages" and "messaging" (plural, progressive, and so on) as the same underlying word.* Stemming (morphological analysis): here we use the stemmers provided by NLTK. The Porter stemmer is conservative, while the Lancaster stemmer is more aggressive; because its rules are more aggressive, the Lancaster stemmer produces more homonymous stems. [Reference: 모두의 데이터 과학 with 파이썬 (Gilbut)](http://www.gilbut.co.kr/book/bookView.aspx?bookcode=BN001787)
###Code
# Example use of the Porter stemmer
stemmer = nltk.stem.PorterStemmer()
print(stemmer.stem('maximum'))
print("The stemmed form of running is: {}".format(stemmer.stem("running")))
print("The stemmed form of runs is: {}".format(stemmer.stem("runs")))
print("The stemmed form of run is: {}".format(stemmer.stem("run")))
# Example use of the Lancaster stemmer
from nltk.stem.lancaster import LancasterStemmer
lancaster_stemmer = LancasterStemmer()
print(lancaster_stemmer.stem('maximum'))
print("The stemmed form of running is: {}".format(lancaster_stemmer.stem("running")))
print("The stemmed form of runs is: {}".format(lancaster_stemmer.stem("runs")))
print("The stemmed form of run is: {}".format(lancaster_stemmer.stem("run")))
# Words before processing
words[:10]
from nltk.stem.snowball import SnowballStemmer
stemmer = SnowballStemmer('english')
words = [stemmer.stem(w) for w in words]
# Words after processing
words[:10]
###Output
_____no_output_____
###Markdown
Lemmatization. In linguistics, lemmatization is the process of grouping together the inflected forms of a word so that they can be analysed as a single item, identified by the word's lemma or dictionary form. For example, a homograph has different meanings depending on context; in Korean: 1) *배*가 맛있다 (the pear is tasty). 2) *배*를 타는 것이 재미있다 (riding the boat is fun). 3) 평소보다 두 *배*로 많이 먹어서 *배*가 아프다 (I ate twice as much as usual, so my stomach hurts).The word 배 means something different in each of the three sentences above, and lemmatization identifies the intended meaning of a word by looking at the surrounding context.In English, "meet" refers to a meeting when used as "meeting" but means "to meet" when used as "meet"; lemmatization extracts the meaning appropriate to whether the word is used as a noun or a verb.* References: - [Stemming and lemmatization](https://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html) - [Lemmatisation - Wikipedia](https://en.wikipedia.org/wiki/Lemmatisation)
###Code
from nltk.stem import WordNetLemmatizer
wordnet_lemmatizer = WordNetLemmatizer()
print(wordnet_lemmatizer.lemmatize('fly'))
print(wordnet_lemmatizer.lemmatize('flies'))
words = [wordnet_lemmatizer.lemmatize(w) for w in words]
# Words after processing
words[:10]
###Output
fly
fly
###Markdown
* The original tutorial only introduces Stemming and Lemmatizing, so stemming code was added separately here. String processing* Based on what we briefly looked at above, let's now process the strings.
###Code
def review_to_words( raw_review ):
    # 1. Remove HTML
    review_text = BeautifulSoup(raw_review, 'html.parser').get_text()
    # 2. Replace non-alphabetic characters with spaces
    letters_only = re.sub('[^a-zA-Z]', ' ', review_text)
    # 3. Convert to lower case and split into words
    words = letters_only.lower().split()
    # 4. In Python, membership tests on a set are much faster than on a list,
    # so convert the stopwords to a set.
    stops = set(stopwords.words('english'))
    # 5. Remove stopwords
    meaningful_words = [w for w in words if not w in stops]
    # 6. Stemming
    stemming_words = [stemmer.stem(w) for w in meaningful_words]
    # 7. Join back into a single space-separated string and return the result
    return( ' '.join(stemming_words) )
clean_review = review_to_words(train['review'][0])
clean_review
# Apply the preprocessing we did for the first review to the entire text data.
# Get the total number of reviews
num_reviews = train['review'].size
num_reviews
"""
clean_train_reviews = []
캐글 튜토리얼에는 range가 xrange로 되어있지만
여기에서는 python3를 사용하기 때문에 range를 사용했다.
"""
# for i in range(0, num_reviews):
# clean_train_reviews.append( review_to_words(train['review'][i]))
"""
하지만 위 코드는 어느 정도 실행이 되고 있는지 알 수가 없어서
5000개 단위로 상태를 찍도록 개선했다.
"""
# clean_train_reviews = []
# for i in range(0, num_reviews):
# if (i + 1)%5000 == 0:
# print('Review {} of {} '.format(i+1, num_reviews))
# clean_train_reviews.append(review_to_words(train['review'][i]))
"""
그리고 코드를 좀 더 간결하게 하기 위해 for loop를 사용하는
대신 apply를 사용하도록 개선
"""
# %time train['review_clean'] = train['review'].apply(review_to_words)
"""
코드는 한 줄로 간결해 졌지만 여전히 오래 걸림
"""
# CPU times: user 1min 15s, sys: 2.3 s, total: 1min 18s
# Wall time: 1min 20s
# 참고 : https://gist.github.com/yong27/7869662
# http://www.racketracer.com/2016/07/06/pandas-in-parallel/
from multiprocessing import Pool
import numpy as np
def _apply_df(args):
df, func, kwargs = args
return df.apply(func, **kwargs)
def apply_by_multiprocessing(df, func, **kwargs):
    # Pop the workers parameter from the keyword arguments
    workers = kwargs.pop('workers')
    # Create a process pool with the requested number of workers
    pool = Pool(processes=workers)
    # Split the dataframe into as many chunks as there are workers and apply the function to each chunk
    result = pool.map(_apply_df, [(d, func, kwargs)
            for d in np.array_split(df, workers)])
    pool.close()
    # Concatenate the partial results and return them
    return pd.concat(list(result))
%time clean_train_reviews = apply_by_multiprocessing(\
train['review'], review_to_words, workers=4)
%time clean_test_reviews = apply_by_multiprocessing(\
test['review'], review_to_words, workers=4)
###Output
CPU times: user 116 ms, sys: 139 ms, total: 255 ms
Wall time: 51.6 s
###Markdown
Word cloud- A visualisation method you can use when you have word-frequency data- It is hard to get much information out of it, because laying words out by correlation or similarity would be more meaningful than simply showing their frequencies.
###Code
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt
# %matplotlib inline is needed for the plots to be displayed inside the notebook.
%matplotlib inline
def displayWordCloud(data = None, backgroundcolor = 'white', width=800, height=600 ):
wordcloud = WordCloud(stopwords = STOPWORDS,
background_color = backgroundcolor,
width = width, height = height).generate(data)
plt.figure(figsize = (15 , 10))
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
# Draw a word cloud of all the words in the training data.
%time displayWordCloud(' '.join(clean_train_reviews))
# Draw a word cloud of all the words in the test data.
%time displayWordCloud(' '.join(clean_test_reviews))
# Number of words per review
train['num_words'] = clean_train_reviews.apply(lambda x: len(str(x).split()))
# Number of unique words per review
train['num_uniq_words'] = clean_train_reviews.apply(lambda x: len(set(str(x).split())))
# Look at the first review
x = clean_train_reviews[0]
x = str(x).split()
print(len(x))
x[:10]
import seaborn as sns
fig, axes = plt.subplots(ncols=2)
fig.set_size_inches(18, 6)
print('Mean number of words per review :', train['num_words'].mean())
print('Median number of words per review :', train['num_words'].median())
sns.distplot(train['num_words'], bins=100, ax=axes[0])
axes[0].axvline(train['num_words'].median(), linestyle='dashed')
axes[0].set_title('Distribution of the number of words per review')
print('Mean number of unique words per review :', train['num_uniq_words'].mean())
print('Median number of unique words per review :', train['num_uniq_words'].median())
sns.distplot(train['num_uniq_words'], bins=100, color='g', ax=axes[1])
axes[1].axvline(train['num_uniq_words'].median(), linestyle='dashed')
axes[1].set_title('Distribution of the number of unique words per review')
###Output
Mean number of words per review : 119.52356
Median number of words per review : 89.0
Mean number of unique words per review : 94.05756
Median number of unique words per review : 74.0
###Markdown
[Bag-of-words model - Wikipedia](https://en.wikipedia.org/wiki/Bag-of-words_model)Suppose we have the following two sentences,```(1) John likes to watch movies. Mary likes movies too.(2) John also likes to watch football games.```Tokenizing the two sentences and putting the tokens into a bag gives:```[ "John", "likes", "to", "watch", "movies", "Mary", "too", "also", "football", "games"]```Then, following the order of this array, count how many times each token appears in the bag:```(1) [1, 2, 1, 1, 2, 1, 1, 0, 0, 0](2) [1, 1, 1, 1, 0, 0, 0, 1, 1, 1]```=> This converts the text into a form that a machine learning algorithm can understand.If we instead fill the bag of words with bigrams, using n-grams, we get:```[ "John likes", "likes to", "to watch", "watch movies", "Mary likes", "likes movies", "movies too",]```=> Here we do this work with CountVectorizer. Creating features with scikit-learn's CountVectorizer* Tokens are extracted with a regular expression.* Everything is converted to lower case, so good, Good and gOod all become the same feature.* Because it can create many meaningless features, only tokens that appear in at least two documents are used.* min_df specifies the minimum number of documents a token must appear in.
###Code
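# A minimal sketch of what CountVectorizer does to the two toy sentences above (unigrams only, default
# tokenizer); the variable names here are just for illustration:
from sklearn.feature_extraction.text import CountVectorizer
toy_docs = ["John likes to watch movies. Mary likes movies too.",
            "John also likes to watch football games."]
toy_vect = CountVectorizer()
toy_counts = toy_vect.fit_transform(toy_docs)
print(toy_vect.get_feature_names())  # the 'bag' of unigram tokens
print(toy_counts.toarray())          # per-document token counts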
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline
# Parameter values changed from the tutorial;
# changing the parameter values alone makes a big difference to the Kaggle score
vectorizer = CountVectorizer(analyzer = 'word',
                             tokenizer = None,
                             preprocessor = None,
                             stop_words = None,
                             min_df = 2, # minimum number of documents a token must appear in
ngram_range=(1, 3),
max_features = 20000
)
vectorizer
# Use a pipeline to improve the speed
# Reference: https://stackoverflow.com/questions/28160335/plot-a-document-tfidf-2d-graph
pipeline = Pipeline([
('vect', vectorizer),
])
%time train_data_features = pipeline.fit_transform(clean_train_reviews)
train_data_features
train_data_features.shape
vocab = vectorizer.get_feature_names()
print(len(vocab))
vocab[:10]
# Inspect the vectorized features
import numpy as np
dist = np.sum(train_data_features, axis=0)
for tag, count in zip(vocab, dist):
print(count, tag)
pd.DataFrame(dist, columns=vocab)
pd.DataFrame(train_data_features[:10].toarray(), columns=vocab).head()
###Output
_____no_output_____
###Markdown
[Random forest - Wikipedia (Korean)](https://ko.wikipedia.org/wiki/%EB%9E%9C%EB%8D%A4_%ED%8F%AC%EB%A0%88%EC%8A%A4%ED%8A%B8)The most essential characteristic of a random forest is that it is built from trees that each differ slightly from one another because of randomness. This makes the predictions of the individual trees decorrelated, which in turn improves the generalization performance. The randomization also makes the forest robust to noisy data.
###Code
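# Once the forest below has been fitted, its per-token feature importances show which words drive the
# sentiment predictions. A small sketch (assuming the `forest` and `vocab` objects created further down):
def top_important_tokens(fitted_forest, vocabulary, n=10):
    '''Return the n tokens with the largest feature importances from a fitted random forest.'''
    pairs = sorted(zip(vocabulary, fitted_forest.feature_importances_),
                   key=lambda p: p[1], reverse=True)
    return pairs[:n]
# Example usage, after the training cell below has run: top_important_tokens(forest, vocab)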
from sklearn.ensemble import RandomForestClassifier
# Use a random forest classifier
forest = RandomForestClassifier(
n_estimators = 100, n_jobs = -1, random_state=2018)
forest
%time forest = forest.fit(train_data_features, train['sentiment'])
from sklearn.model_selection import cross_val_score
%time score = np.mean(cross_val_score(\
forest, train_data_features, \
train['sentiment'], cv=10, scoring='roc_auc'))
# Check the first of the cleaned test reviews
clean_test_reviews[0]
# Vectorize the test data
%time test_data_features = pipeline.transform(clean_test_reviews)
test_data_features = test_data_features.toarray()
test_data_features
# The numbers show how many times each vectorized token appears in the document
test_data_features[5][:100]
# We can look up each token in the vocabulary that was built during vectorization.
# vocab = vectorizer.get_feature_names()
vocab[8], vocab[2558], vocab[2559], vocab[2560]
# Make predictions on the test data.
result = forest.predict(test_data_features)
result[:10]
# Put the predictions into a dataframe so they can be saved.
output = pd.DataFrame(data={'id':test['id'], 'sentiment':result})
output.head()
output.to_csv('data/tutorial_1_BOW_{0:.5f}.csv'.format(score), index=False, quoting=3)
output_sentiment = output['sentiment'].value_counts()
print(output_sentiment[0] - output_sentiment[1])
output_sentiment
fig, axes = plt.subplots(ncols=2)
fig.set_size_inches(12,5)
sns.countplot(train['sentiment'], ax=axes[0])
sns.countplot(output['sentiment'], ax=axes[1])
###Output
_____no_output_____
###Markdown
We are now ready for a first submission.You could try cleaning the reviews differently, choosing a different number of vocabulary words for the 'Bag of Words' representation, trying Porter stemming, and so on.If you want to try NLP on a different dataset, the Rotten Tomatoes data is also a good choice.* Competition using the Rotten Tomatoes dataset: [Sentiment Analysis on Movie Reviews | Kaggle](https://www.kaggle.com/c/sentiment-analysis-on-movie-reviews)
###Code
# Nudge the score up bit by bit by adjusting the parameters.
# Kaggle score 0.84476 using uni-grams
print(436/578)
# Kaggle score 0.84608 using tri-grams
print(388/578)
# Kaggle score 0.84780 after adding stemming
print(339/578)
# With the random forest's max_depth = 5 and
# CountVectorizer's tokenizer=nltk.word_tokenize, the Kaggle score is 0.81460
print(546/578)
# Random forest max_depth put back to None
# Kaggle score 0.85272 after changing CountVectorizer max_features to 10000
print(321/578)
# Kaggle score 0.85044 after specifying tokenizer=nltk.word_tokenize for CountVectorizer
print(326/578)
# Kaggle score 0.85612 after changing CountVectorizer max_features to 10000
print(305/578)
# 0.85884
print(296/578)
print(310/578)
###Output
0.754325259515571
0.671280276816609
0.5865051903114187
0.9446366782006921
0.5553633217993079
0.5640138408304498
0.527681660899654
0.5121107266435986
0.5363321799307958
|
Exemplo de estudo - 156.ipynb
|
###Markdown
Curitiba - 156 Database. This is a template iPython notebook that processes the Curitiba 156 data. It contains some basic functionality and is meant to serve as a starting point for any other data analyses.
###Code
import qgrid
import pandas as pd
import matplotlib.pyplot as plt
from curitiba_dados_abertos.datasources import DS156
###Output
_____no_output_____
###Markdown
Reading the data. At this point the data is imported through the `curitiba-dados-abertos` library, available on PyPI, which takes care of all the calls related to the chosen datasource. In the example above, we use the 156 database (DS-156)
###Code
ds156 = DS156(download_folder='source_data') #Instantiate the DS156 element
ds156.get_info()
###Output
_____no_output_____
###Markdown
The function below imports the dataframe directly into Pandas. It downloads the database and loads it right away. It can take a `date_prefix` parameter, which receives a date in the format made available by the transparency portal. If this parameter is `None`, the latest available file is downloaded. The list of available items can be accessed through the `ds156.list_available_items()` method.
###Code
data = ds156.get_pandas_dataframe(date_prefix=None)
ds156.list_available_items()
###Output
_____no_output_____
###Markdown
Presenting the data. At this point the dataset has been minimally cleaned and is ready for use. The data-cleaning code can be found at the following URL [](https://github.com/CodeForCuritiba/curitiba-dados-abertos/blob/master/curitiba_dados_abertos/datasources/ds_156.pyL9-L15)Now we will select slices of the dataset. Data grouped by subject
###Code
data_assunto = data[['ASSUNTO']].groupby(['ASSUNTO']).size().reset_index(name='counts').sort_values(['counts'], ascending=False)
qgrid.show_grid(data_assunto, show_toolbar=True)
###Output
_____no_output_____
###Markdown
Grouping by type of agency
###Code
data_orgaos = data[['ORGAO']].groupby(['ORGAO']).size().reset_index(name='counts').sort_values(['counts'], ascending=False)
qgrid.show_grid(data_orgaos)
###Output
_____no_output_____
###Markdown
Viewing the full grid
###Code
qgrid.show_grid(data, show_toolbar=True)
###Output
_____no_output_____
###Markdown
Chart of requests per neighbourhood
###Code
data_top_bairros = data.groupby('BAIRRO_ASS')['BAIRRO_ASS'] \
.count().reset_index(name='count') \
.sort_values(['count'], ascending=False).head(8)
plt.figure(figsize=(18,4))
plt.bar(data_top_bairros['BAIRRO_ASS'], data_top_bairros['count'], align='center', alpha=0.5)
for a,b in zip(data_top_bairros['BAIRRO_ASS'], data_top_bairros['count']):
plt.text(a, b, str(b))
plt.show()
###Output
_____no_output_____
|
Chapter05/TrainingNeuralNetwork.ipynb
|
###Markdown
Training a Neural Network*Curtis Miller*In this notebook I demonstrate how to train the neural network known as the **multilayer perceptron (MLP)**. We will use an MLP to classify the iris dataset and also a dataset of handwritten digits, in order to detect different characters.Neural networks have a lot of parameters to set when training. These include:* How many hidden layers to have* How many neurons to include in each layer* The activation functions of neurons in the hidden layers* The value of the regularization term that controls overfitting (referred to as $\alpha$)Issues when training a neural network are also acute. These are choices related to the actual optimization algorithm that estimates the parameters used for prediction. For neural networks this fitting process is very involved.MLPs are online algorithms just like perceptrons. This is especially advantageous for training on large datasets that don't necessarily fit into memory. Additionally, MLPs are *not* linear classifiers/regressors. This suggests that MLPs are most popular for learning problems that require fitting data that isn't linearly separable.MLPs can be used for classification and regression. This notebook focuses on classification.First, let's load in the datasets we will use.
###Code
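# Because there are so many knobs, a common approach is to search over a small grid of them with
# cross-validation. A minimal sketch (the grid values are arbitrary examples, not recommendations):
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
mlp_param_grid = {'hidden_layer_sizes': [(20,), (50,), (20, 20)],
                  'activation': ['logistic', 'relu'],
                  'alpha': [0.1, 1.0]}
mlp_search = GridSearchCV(MLPClassifier(max_iter=1000), mlp_param_grid, cv=3)
# Example usage, once the iris training split below exists:
# mlp_search.fit(iris_data_train, species_train); print(mlp_search.best_params_)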
from sklearn.datasets import load_iris, load_digits
from sklearn.model_selection import train_test_split
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# First, the iris dataset
iris_obj = load_iris()
iris_data_train, iris_data_test, species_train, species_test = train_test_split(iris_obj.data, iris_obj.target)
# Next, the digits dataset
digits_obj = load_digits()
print(digits_obj.DESCR)
digits_obj.data.shape
digits_data_train, digits_data_test, number_train, number_test = train_test_split(digits_obj.data, digits_obj.target)
number_train[:5]
digits_data_train[0, :]
digits_data_train[0, :].reshape((8, 8))
plt.imshow(digits_data_train[0, :].reshape((8, 8)))
###Output
_____no_output_____
###Markdown
Fitting a MLP to the Iris DataMLP models are implemented via the `MLPClassifier` object in **scikit-learn**. The MLP classifier I train:* Has one hidden layer with 20 neurons* Uses the logistic activation function for the hidden layers* Uses a regularization parameter of $\alpha = 1$I demonstrate its use below.
###Code
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score
mlp_iris = MLPClassifier(hidden_layer_sizes=(20,), # A tuple with the number of neurons for each hidden layer
activation='logistic', # Which activation function to use
alpha=1, # Regularization parameter
max_iter=1000) # Maximum number of iterations taken by the solver
mlp_iris = mlp_iris.fit(iris_data_train, species_train)
mlp_iris.predict(iris_data_train[:1,:])
species_pred_train = mlp_iris.predict(iris_data_train)
accuracy_score(species_pred_train, species_train)
species_pred_test = mlp_iris.predict(iris_data_test)
accuracy_score(species_pred_test, species_test)
###Output
_____no_output_____
###Markdown
The classifier has extremely high accuracy for this dataset. Fitting a MLP to the Digits DatasetLet's now see how the MLP classifier performs for the digits dataset. Again there is only one hidden layer, this one with 50 neurons.
###Code
mlp_digits = MLPClassifier(hidden_layer_sizes=(50,),
activation='logistic',
alpha=1)
mlp_digits = mlp_digits.fit(digits_data_train, number_train)
mlp_digits.predict(digits_data_train[[0], :])
number_pred_train = mlp_digits.predict(digits_data_train)
accuracy_score(number_pred_train, number_train)
number_pred_test = mlp_digits.predict(digits_data_test)
accuracy_score(number_pred_test, number_test)
###Output
_____no_output_____
|
_360-in-525/2018/04/jp/360-in-525-04_12.ipynb
|
###Markdown
12. Non-parametric Estimation and Testing [Mathematical Statistical and Computational Foundations for Data Scientists](https://lamastex.github.io/scalable-data-science/360-in-525/2018/04/)©2018 Raazesh Sainudiin. [Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) Topics- Non-parametric Estimation- Glivenko-Cantelli Theorem- Dvoretsky-Kiefer-Wolfowitz Inequality- Hypothesis Testing- Permutation Testing- Permutation Testing with Shells Data Inference and Estimation: The Big PictureThe Big Picture is about inference and estimation, and especially inference and estimation problems where computational techniques are helpful. Point estimationSet estimationParametric MLE of finitely many parametersdoneConfidence intervals, via the central limit theoremNon-parametric (infinite-dimensional parameter space)about to see ... about to see ... One/Many-dimensional Integrals (finite-dimensional)coming up ... coming up ...So far we have seen parametric models, for example- $X_1, X_2, \ldots, X_n \overset{IID}{\sim} Bernoulli (\theta)$, $\theta \in [0,1]$- $X_1, X_2, \ldots, X_n \overset{IID}{\sim} Exponential (\lambda)$, $\lambda \in (0,\infty)$- $X_1, X_2, \ldots, X_n \overset{IID}{\sim} Normal(\mu^*, \sigma)$, $\mu \in \mathbb{R}$, $\sigma \in (0,\infty)$In all these cases **the parameter space** (the space within which the parameter(s) can take values) is **finite dimensional**:- for the $Bernoulli$, $\theta \in [0,1] \subseteq \mathbb{R}^1$- for the $Exponential$, $\lambda \in (0, \infty) \subseteq \mathbb{R}^1$- for the $Normal$, $\mu \in \mathbb{R}^1$, $\sigma \in (0,\infty) \subseteq \mathbb{R}^1$, so $(\mu, \sigma) \subseteq \mathbb{R}^2$For parametric experiments, we can use the maximum likelihood principle and estimate the parameters using the **Maximum Likelihood Estimator (MLE)**, for instance. Non-parametric estimationSuppose we don't know what the distribution function (DF) is? We are not trying to estimate some fixed but unknown parameter $\theta^*$ for some RV we are assuming to be $Bernoulli(\theta^*)$, we are trying to estimate the DF itself. In real life, data does not come neatly labeled "I am a realisation of a $Bernoulli$ RV", or "I am a realisation of an $Exponential$ RV": an important part of inference and estimation is to make inferences about the DF itself from our observations. Observations from some unknown processConsider the following non-parametric product experiment:$$X_1, X_2, \ldots, X_n\ \overset{IID}{\sim} F^* \in \{\text{all DFs}\}$$We want to produce a point estimate for $F^*$, which is a allowed to be any DF ("lives in the set of all DFs"), i.e., $F^* \in \{\text{all DFs}\}$Crucially, $\{\text{all DFs}\}$, i.e., the set of all distribution functions over $\mathbb{R}$ is infinite dimensional.We have already seen an estimate, made using the data, of a distribution function: the empirical or data-based distribution function (or empirical cumulative distribution function). This can be formalized as the following process of adding indicator functions of the half-lines beginning at the data points $[X_1,+\infty),[X_2,+\infty),\ldots,[X_n,+\infty)$:$$\widehat{F}_n (x) = \frac{1}{n} \sum_{i=1}^n \mathbf{1}_{[X_i,+\infty)}(x)$$where,$$\mathbf{1}_{[X_i,+\infty)}(x) := \begin{cases} & 1 \quad \text{ if } X_i \leq x \\ & 0 \quad \text{ if }X_i > x \end{cases}$$ First let us evaluate a set of functions that will help us conceptualize faster:
###Code
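# The indicator-function definition of the EDF above translates almost literally into code.
# A minimal sketch of evaluating the EDF of a data list at a single point x:
def edfAt(x, myDataList):
    '''Empirical distribution function of myDataList evaluated at the point x.'''
    return sum(1 for xi in myDataList if xi <= x) / len(myDataList)
# e.g. edfAt(3, [1, 2, 2, 4, 5]) returns 3/5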
def makeEMFHidden(myDataList):
'''Make an empirical mass function from a data list.
Param myDataList, list of data to make emf from.
Return list of tuples comprising (data value, relative frequency) ordered by data value.'''
sortedUniqueValues = sorted(list(set(myDataList)))
freqs = [myDataList.count(i) for i in sortedUniqueValues]
relFreqs = [ZZ(fr)/len(myDataList) for fr in freqs] # use a list comprehension
return zip(sortedUniqueValues, relFreqs)
from pylab import array
def makeEDFHidden(myDataList, offset=0):
'''Make an empirical distribution function from a data list.
Param myDataList, list of data to make ecdf from.
Param offset is an offset to adjust the edf by, used for doing confidence bands.
Return list of tuples comprising (data value, cumulative relative frequency) ordered by data value.'''
sortedUniqueValues = sorted(list(set(myDataList)))
freqs = [myDataList.count(i) for i in sortedUniqueValues]
from pylab import cumsum
cumFreqs = list(cumsum(freqs)) #
cumRelFreqs = [ZZ(i)/len(myDataList) for i in cumFreqs] # get cumulative relative frequencies as rationals
    if offset > 0: # an upper band
        cumRelFreqs = [min(i+offset, 1) for i in cumRelFreqs] # use a list comprehension
    if offset < 0: # a lower band
        cumRelFreqs = [max(i+offset, 0) for i in cumRelFreqs] # use a list comprehension
return zip(sortedUniqueValues, cumRelFreqs)
# EPMF plot
def epmfPlot(samples):
'''Returns an empirical probability mass function plot from samples data.'''
epmf_pairs = makeEMFHidden(samples)
epmf = point(epmf_pairs, rgbcolor = "blue", pointsize="20")
for k in epmf_pairs: # for each tuple in the list
kkey, kheight = k # unpack tuple
epmf += line([(kkey, 0),(kkey, kheight)], rgbcolor="blue", linestyle=":")
# padding
epmf += point((0,1), rgbcolor="black", pointsize="0")
return epmf
# ECDF plot
def ecdfPlot(samples):
'''Returns an empirical probability mass function plot from samples data.'''
ecdf_pairs = makeEDFHidden(samples)
ecdf = point(ecdf_pairs, rgbcolor = "red", faceted = false, pointsize="20")
for k in range(len(ecdf_pairs)):
x, kheight = ecdf_pairs[k] # unpack tuple
previous_x = 0
previous_height = 0
if k > 0:
previous_x, previous_height = ecdf_pairs[k-1] # unpack previous tuple
ecdf += line([(previous_x, previous_height),(x, previous_height)], rgbcolor="grey")
ecdf += points((x, previous_height),rgbcolor = "white", faceted = true, pointsize="20")
ecdf += line([(x, previous_height),(x, kheight)], rgbcolor="grey", linestyle=":")
# padding
ecdf += line([(ecdf_pairs[0][0]-0.2, 0),(ecdf_pairs[0][0], 0)], rgbcolor="grey")
max_index = len(ecdf_pairs)-1
ecdf += line([(ecdf_pairs[max_index][0], ecdf_pairs[max_index][1]),(ecdf_pairs[max_index][0]+0.2, ecdf_pairs[max_index][1])],rgbcolor="grey")
return ecdf
def calcEpsilon(alphaE, nE):
'''Return confidence band epsilon calculated from parameters alphaE > 0 and nE > 0.'''
return sqrt(1/(2*nE)*log(2/alphaE))
###Output
_____no_output_____
###Markdown
Let us continue with the conceptsWe can remind ourselves of this for a small sample of $de\,Moivre(k=5)$ RVs:
###Code
deMs=[randint(1,5) for i in range(20)] # randint can be used to uniformly sample integers in a specified range
deMs
sortedUniqueValues = sorted(list(set(deMs)))
freqs = [deMs.count(i) for i in sortedUniqueValues]
from pylab import cumsum
cumFreqs = list(cumsum(freqs)) #
cumRelFreqs = [ZZ(i)/len(deMs) for i in cumFreqs] # get cumulative relative frequencies as rationals
zip(sortedUniqueValues, cumRelFreqs)
show(ecdfPlot(deMs), figsize=[6,3]) # use hidden ecdfPlot function to plot
###Output
_____no_output_____
###Markdown
We can use the empirical cumulative distribution function $\widehat{F}_n$ for our non-parametric estimate because this kind of estimation is possible in infinite-dimensional contexts due to the following two theorems:- Glivenko-Cantelli Theorem (*Fundamental Theorem of Statistics*)- Dvoretsky-Kiefer-Wolfowitz (DKW) Inequality Glivenko-Cantelli TheoremLet $X_1, X_2, \ldots, X_n \overset{IID}{\sim} F^* \in \{\text{all DFs}\}$and the empirical distribution function (EDF) is $\widehat{F}_n(x) := \displaystyle\frac{1}{n} \sum_{i=1}^n \mathbf{1}_{[X_i,+\infty)}(x)$, then$$\sup_x { | \widehat{F}_n(x) - F^*(x) | } \overset{P}{\rightarrow} 0$$Remember that the EDF is a statistic of the data, a statistic is an RV, and (from our work the convergence of random variables), $\overset{P}{\rightarrow}$ means "converges in probability". The proof is beyond the scope of this course, but we can gain an appreciation of what it means by looking at what happens to the ECDF for $n$ simulations from:- $de\,Moivre(1/5,1/5,1/5,1/5,1/5)$ and - $Uniform(0,1)$ as $n$ increases:
###Code
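# To see the theorem numerically: for Uniform(0,1) samples, F*(x) = x on [0,1], so the sup distance
# sup_x |EDF(x) - F*(x)| can be computed exactly from the order statistics. A small sketch:
def supEDFDistanceUniform(n):
    '''Largest |EDF - F*| over x for n simulated Uniform(0,1) points, where F*(x) = x.'''
    xs = sorted(random() for i in range(n))
    return max(max((i + 1)/n - xs[i], xs[i] - i/n) for i in range(n))
for sampleSize in [10, 100, 1000]:
    print(sampleSize, supEDFDistanceUniform(sampleSize))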
@interact
def _(n=(10,(0..200))):
    '''Interactive function to plot the ecdf of observations from de Moivre(5).'''
if (n > 0):
us = [randint(1,5) for i in range(n)]
p=ecdfPlot(us) # use hidden ecdfPlot function to plot
#p+=line([(-0.2,0),(0,0),(1,1),(1.2,1)],linestyle=':')
p.show(figsize=[8,2])
@interact
def _(n=(10,(0..200))):
'''Interactive function to plot ecdf for obs from Uniform(0,1).'''
if (n > 0):
us = [random() for i in range(n)]
p=ecdfPlot(us) # use hidden ecdfPlot function to plot
p+=line([(-0.2,0),(0,0),(1,1),(1.2,1)],linestyle='-')
p.show(figsize=[3,3],aspect_ratio=1)
###Output
_____no_output_____
###Markdown
It is clear, that as $n$ increases, the ECDF $\widehat{F}_n$ gets closer and closer to the true DF $F^*$, $\displaystyle\sup_x { | \widehat{F}_n(x) - F^*(x) | } \overset{P}{\rightarrow} 0$.This will hold no matter what the (possibly unknown) $F^*$ is. Thus, $\widehat{F}_n$ is a point estimate of $F^*$.We need to add the DKW Inequality be able to get confidence sets or a 'confidence band' that traps $F^*$ with high probability. Dvoretsky-Kiefer-Wolfowitz (DKW) InequalityLet $X_1, X_2, \ldots, X_n \overset{IID}{\sim} F^* \in \{\text{all DFs}\}$and the empirical distribution function (EDF) is $\widehat{F}_n(x) := \displaystyle\frac{1}{n} \sum_{i=1}^n \mathbf{1}_{[X_i,+\infty)}(x)$,then, for any $\varepsilon > 0$,$$P\left( \sup_x { | \widehat{F}_n(x) - F^*(x) | > \varepsilon }\right) \leq 2 \exp(-2n\varepsilon^2) $$We can use this inequality to get a $1-\alpha$ confidence band $C_n(x) := \left[\underline{C}_n(x), \overline{C}_n(x)\right]$ about our point estimate $\widehat{F}_n$ of our possibly unknown $F^*$ such that the $F^*$ is 'trapped' by the band with probability at least $1-\varepsilon$.$$\begin{eqnarray} \underline{C}_{\, n}(x) &=& \max \{ \widehat{F}_n(x)-\varepsilon_n, 0 \}, \notag \\ \overline{C}_{\, n}(x) &=& \min \{ \widehat{F}_n(x)+\varepsilon_n, 1 \}, \notag \\ \varepsilon_n &=& \sqrt{ \frac{1}{2n} \log \left( \frac{2}{\alpha}\right)} \\ \end{eqnarray}$$and$$P\left(\underline{C}_n(x) \leq F^*(x) \leq \overline{C}_n(x)\right) \geq 1-\alpha$$ YouTry in classTry this out for a simple sample from the $Uniform(0,1)$, which you can generate using random. First we will just make the point estimate for $F^*$, the EDF $\widehat{F}_n$
###Code
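# Before working through the example, it helps to see how fast the DKW band half-width epsilon_n
# shrinks with n; a small sketch using the formula above with alpha = 0.05:
for nDemo in [10, 100, 1000, 10000]:
    print(nDemo, sqrt(1/(2*nDemo)*log(2/0.05)))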
n=10
uniformSample = [random() for i in range(n)]
print(uniformSample)
###Output
[0.8449449930165663, 0.959800833217879, 0.09902605599111358, 0.03939099670996371, 0.4642490289974345, 0.504813888254253, 0.738258498331617, 0.599178550184525, 0.4507203192074971, 0.7148587186955275]
###Markdown
In one of the assessments, you did a question that took you through the steps for getting the list of points that you would plot for an empirical distribution function (EDF). We will do exactly the same thing here.First we find the unique values in the sample, in order from smallest to largest, and get the frequency with which each unique value occurs:
###Code
sortedUniqueValuesUniform = sorted(list(set(uniformSample)))
print(sortedUniqueValuesUniform)
freqsUniform = [uniformSample.count(i) for i in sortedUniqueValuesUniform]
freqsUniform
###Output
_____no_output_____
###Markdown
Then we accumulate the frequences to get the cumulative frequencies:
###Code
from pylab import cumsum
cumFreqsUniform = list(cumsum(freqsUniform)) # accumulate
cumFreqsUniform
###Output
_____no_output_____
###Markdown
And the the relative cumlative frequencies:
###Code
# cumulative rel freqs as rationals
cumRelFreqsUniform = [ZZ(i)/len(uniformSample) for i in cumFreqsUniform]
cumRelFreqsUniform
###Output
_____no_output_____
###Markdown
And finally zip these up with the sorted unique values to get a list of points we can plot:
###Code
ecdfPointsUniform = zip(sortedUniqueValuesUniform, cumRelFreqsUniform)
ecdfPointsUniform
###Output
_____no_output_____
###Markdown
Here is a function that you can just use to do a ECDF plot:
###Code
# ECDF plot given a list of points to plot
def ecdfPointsPlot(listOfPoints, colour='grey', lines_only=False):
'''Returns an empirical probability mass function plot from a list of points to plot.
Param listOfPoints is the list of points to plot.
Param colour is used for plotting the lines, defaulting to grey.
Param lines_only controls wether only lines are plotted (true) or points are added (false, the default value).
Returns an ecdf plot graphic.'''
ecdfP = point((0,0), pointsize="0")
if not lines_only: ecdfP = point(listOfPoints, rgbcolor = "red", faceted = false, pointsize="20")
for k in range(len(listOfPoints)):
x, kheight = listOfPoints[k] # unpack tuple
previous_x = 0
previous_height = 0
if k > 0:
previous_x, previous_height = listOfPoints[k-1] # unpack previous tuple
ecdfP += line([(previous_x, previous_height),(x, previous_height)], rgbcolor=colour)
ecdfP += line([(x, previous_height),(x, kheight)], rgbcolor=colour, linestyle=":")
if not lines_only:
ecdfP += points((x, previous_height),rgbcolor = "white", faceted = true, pointsize="20")
# padding
max_index = len(listOfPoints)-1
ecdfP += line([(listOfPoints[0][0]-0.2, 0),(listOfPoints[0][0], 0)], rgbcolor=colour)
ecdfP += line([(listOfPoints[max_index][0], listOfPoints[max_index][1]),(listOfPoints[max_index][0]+0.2, listOfPoints[max_index][1])],rgbcolor=colour)
return ecdfP
###Output
_____no_output_____
###Markdown
This makes the plot of the $\widehat{F}_{10}$, the point estimate for $F^*$ for these $n=10$ simulated samples.
###Code
show(ecdfPointsPlot(ecdfPointsUniform), figsize=[6,3])
###Output
_____no_output_____
###Markdown
What about adding those confidence bands? You will do essentially the same thing, but adjusting for the required $\varepsilon$. First we need to decide on an $\alpha$ and calculate the $\varepsilon$ corresponding to this alpha. Here is some of our code to calculate the $\varepsilon$ corresponding to $\alpha=0.05$ (95% confidence bands), using a hidden function calcEpsilon:
###Code
alpha = 0.05
epsilon = calcEpsilon(alpha, n)
epsilon
###Output
_____no_output_____
###Markdown
See if you can write your own code to do this calculation, $\varepsilon_n = \sqrt{ \frac{1}{2n} \log \left( \frac{2}{\alpha}\right)}$. For completeness, do the whole thing: assign the value 0.05 to a variable named alpha, and then use this and the variable called n that we have already declared to calculate a value for $\varepsilon$. Call the variable to which you assign the value for $\varepsilon$ epsilon so that it replaces the value we calculated in the cell above (you should get the same value as us!). Now we need to use this to adjust the EDF plot. In the two cells below we first of all do the adjustment for $\underline{C}_{\,n}(x) =\max \{ \widehat{F}_n(x)-\varepsilon_n, 0 \}$, and then use zip again to get the points to actually plot for the lower boundary of the 95% confidence band. (The upper boundary $\overline{C}_{\,n}(x) =\min \{ \widehat{F}_n(x)+\varepsilon_n, 1 \}$ is left for you to do in the YouTry further below.)
###Code
# heights for the lower band
cumRelFreqsUniformLower = [max(crf - epsilon, 0) for crf in cumRelFreqsUniform]
print(cumRelFreqsUniformLower)
ecdfPointsUniformLower = zip(sortedUniqueValuesUniform, cumRelFreqsUniformLower)
ecdfPointsUniformLower
###Output
_____no_output_____
###Markdown
We carefully gave our `ecdfPointsPlot` function the flexibility to be able to plot bands, by having a colour parameter (which defaults to 'grey') and a `lines_only` parameter (which defaults to `false`). Here we can plot the lower bound of the confidence interval by adding `ecdfPointsPlot(ecdfPointsUniformLower, colour='green', lines_only=true)` to the previous plot:
###Code
pointEstimate = ecdfPointsPlot(ecdfPointsUniform)
lowerBound = ecdfPointsPlot(ecdfPointsUniformLower, colour='green', lines_only=true)
show(pointEstimate + lowerBound, figsize=[6,3])
###Output
_____no_output_____
###Markdown
YouTry You try writing the code to create the list of points needed for plotting the upper band $\overline{C}_{\,n}(x) =\min \{ \widehat{F}_n(x)+\varepsilon_n, 1 \}$. You will need to first of all get the upper heights (call them say `cumRelFreqsUniformUpper`) and then `zip` them up with the `sortedUniqueValuesUniform` to get the points to plot.
###Code
# heights for the upper band
###Output
_____no_output_____
###Markdown
Once you have done this you can add them to the plot by altering the code below:
###Code
pointEstimate = ecdfPointsPlot(ecdfPointsUniform)
lowerBound = ecdfPointsPlot(ecdfPointsUniformLower,colour='green', lines_only=true)
show(pointEstimate + lowerBound, figsize=[6,3])
###Output
_____no_output_____
###Markdown
(end of YouTry)--- If we are doing lots of collections of EDF points we may as well define a function to do it, rather than repeating the same code again and again. We use an offset parameter to give us the flexibility to use this to make points for confidence bands as well.
###Code
def makeEDFPoints(myDataList, offset=0):
    '''Make a list of empirical distribution plotting points from a data list.
Param myDataList, list of data to make ecdf from.
Param offset is an offset to adjust the edf by, used for doing confidence bands.
Return list of tuples comprising (data value, cumulative relative frequency(with offset)) ordered by data value.'''
sortedUniqueValues = sorted(list(set(myDataList)))
freqs = [myDataList.count(i) for i in sortedUniqueValues]
from pylab import cumsum
cumFreqs = list(cumsum(freqs))
cumRelFreqs = [ZZ(i)/len(myDataList) for i in cumFreqs] # get cumulative relative frequencies as rationals
if offset > 0: # an upper band
cumRelFreqs = [min(i+offset ,1) for i in cumRelFreqs]
if offset < 0: # a lower band
cumRelFreqs = [max(i+offset, 0) for i in cumRelFreqs]
return zip(sortedUniqueValues, cumRelFreqs)
###Output
_____no_output_____
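###Markdown
As a quick usage example of makeEDFPoints (just a re-run of what we did by hand above, assuming uniformSample, epsilon and ecdfPointsPlot are still in scope), the point estimate and both 95% bands for the earlier uniform sample can now be built in a few lines:
###Code
# point estimate and 95% bands for the earlier uniform sample, now via makeEDFPoints
checkPoints = makeEDFPoints(uniformSample)
checkLower = makeEDFPoints(uniformSample, offset=-epsilon)
checkUpper = makeEDFPoints(uniformSample, offset=epsilon)
show(ecdfPointsPlot(checkPoints)
     + ecdfPointsPlot(checkLower, colour='green', lines_only=true)
     + ecdfPointsPlot(checkUpper, colour='green', lines_only=true), figsize=[6,3])
###Output
_____no_output_____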
###Markdown
NZ Earthquakes

Now we will try looking at the Earthquakes data we have used before to get a confidence band around an EDF for that. We start by bringing in the data and the function we wrote earlier to parse that data.
###Code
def getLonLatMagDepTimes(NZEQCsvFileName):
'''returns longitude, latitude, magnitude, depth and the origin time as unix time
    for each observed earthquake in the csv file named NZEQCsvFileName'''
from datetime import datetime
import time
from dateutil.parser import parse
import numpy as np
with open(NZEQCsvFileName) as f:
reader = f.read()
dataList = reader.split('\n')
myDataAccumulatorList =[]
for data in dataList[1:-1]:
dataRow = data.split(',')
myTimeString = dataRow[2] # origintime
# let's also grab longitude, latitude, magnitude, depth
myDataString = [dataRow[4],dataRow[5],dataRow[6],dataRow[7]]
try:
myTypedTime = time.mktime(parse(myTimeString).timetuple())
myFloatData = [float(x) for x in myDataString]
myFloatData.append(myTypedTime) # append the processed timestamp
myDataAccumulatorList.append(myFloatData)
except TypeError, e: # error handling for type incompatibilities
print 'Error: Error is ', e
#return np.array(myDataAccumulatorList)
return myDataAccumulatorList
myProcessedList = getLonLatMagDepTimes('data/earthquakes.csv')
def interQuakeTimes(quakeTimes):
    '''Return a list of inter-earthquake times in seconds from earthquake origin times
Date and time elements are expected to be in the 5th column of the array
Return a list of inter-quake times in seconds. NEEDS sorted quakeTimes Data'''
import numpy as np
retList = []
if len(quakeTimes) > 1:
retList = [quakeTimes[i]-quakeTimes[i-1] for i in range(1,len(quakeTimes))]
#return np.array(retList)
return retList
interQuakesSecs = interQuakeTimes(sorted([x[4] for x in myProcessedList]))
len(interQuakesSecs)
interQuakesSecs[0:10]
###Output
_____no_output_____
###Markdown
There is a lot of data here, so let's use an interactive plot to do the non-parametric DF estimation just for some of the last data:
###Code
@interact
def _(takeLast=(500,(0..min(len(interQuakesSecs),1999))), alpha=(0.05)):
'''Interactive function to plot the edf estimate and confidence bands for inter earthquake times.'''
if takeLast > 0 and alpha > 0 and alpha < 1:
lastInterQuakesSecs = interQuakesSecs[len(interQuakesSecs)-takeLast:len(interQuakesSecs)]
interQuakePoints = makeEDFPoints(lastInterQuakesSecs)
p=ecdfPointsPlot(interQuakePoints, lines_only=true)
epQuakes = calcEpsilon(alpha, len(lastInterQuakesSecs))
interQuakePointsLower = makeEDFPoints(lastInterQuakesSecs, offset=-epQuakes)
lowerQuakesBound = ecdfPointsPlot(interQuakePointsLower, colour='green', lines_only=true)
interQuakePointsUpper = makeEDFPoints(lastInterQuakesSecs, offset=epQuakes)
upperQuakesBound = ecdfPointsPlot(interQuakePointsUpper, colour='green', lines_only=true)
show(p + lowerQuakesBound + upperQuakesBound, figsize=[6,3])
else:
print "check your input values"
###Output
_____no_output_____
###Markdown
What if we are not interested in estimating $F^*$ itself, but we are interested in scientifically investigating whether two distributions are the same? For example, perhaps, whether the distribution of earthquake magnitudes was the same in April as it was in March. Then, we should attempt to reject a falsifiable hypothesis ...

Hypothesis Testing

A formal definition of hypothesis testing is beyond our current scope. Here we will look in particular at a non-parametric hypothesis test called a permutation test. First, a quick review. The outcomes of a hypothesis test, in general, are:

| 'true state of nature' | Do not reject $H_0$ | Reject $H_0$ |
|---|---|---|
| $H_0$ is true | OK | Type I error |
| $H_0$ is false | Type II error | OK |

So, we want a small probability that we reject $H_0$ when $H_0$ is true (minimise Type I error). Similarly, we want to minimise the probability that we fail to reject $H_0$ when $H_0$ is false (Type II error). The P-value is one way to conduct a desirable hypothesis test. The scale of the evidence against $H_0$ is stated in terms of the P-value. The following interpretation of P-values is commonly used:

- P-value $\in (0, 0.01]$: Very strong evidence against $H_0$
- P-value $\in (0.01, 0.05]$: Strong evidence against $H_0$
- P-value $\in (0.05, 0.1]$: Weak evidence against $H_0$
- P-value $\in (0.1, 1]$: Little or no evidence against $H_0$

Permutation Testing

A Permutation Test is a **non-parametric exact** method for testing whether two distributions are the same based on samples from each of them. What do we mean by "non-parametric exact"? It is non-parametric because we do not impose any parametric assumptions. It is exact because it works for any sample size.

Formally, we suppose that:

$$ X_1,X_2,\ldots,X_m \overset{IID}{\sim} F^* \quad \text{and} \quad X_{m+1}, X_{m+2},\ldots,X_{m+n} \overset{IID}{\sim} G^* \enspace , $$

are two sets of independent samples where the possibly unknown DFs $F^*,\,G^* \in \{ \text{all DFs} \}$. (Notice that we have written it so that the subscripts on the $X$s run from 1 to $m+n$.)

Now, consider the following hypothesis test:

$$H_0: F^*=G^* \quad \text{versus} \quad H_1: F^* \neq G^* \enspace . $$

Our test statistic uses the observations in both samples. We want a test statistic that is a sensible one for the test, i.e., one that will be large when $F^*$ is 'too different' from $G^*$. So, let our test statistic $T(X_1,\ldots,X_m,X_{m+1},\ldots,X_{m+n})$ be, say:

$$T:=T(X_1,\ldots,X_m,X_{m+1},\ldots,X_{m+n})= \text{abs} \left( \frac{1}{m} \sum_{i=1}^m X_i - \frac{1}{n} \sum_{i=m+1}^{m+n} X_i \right) \enspace .$$

(In words, we have chosen a test statistic that is the absolute value of the difference in the sample means. Note the limitation of this: if $F^*$ and $G^*$ have the same mean but different variances, our test statistic $T$ will not be large.)

Then the idea of a permutation test is as follows:

- Let $N:=m+n$ be the pooled sample size and consider all $N!$ permutations of the observed data $x_{obs}:=(x_1,x_2,\ldots,x_m,x_{m+1},x_{m+2},\ldots,x_{m+n})$.
- For each permutation of the data compute the statistic $T(\text{permuted data } x)$ and denote these $N!$ values of $T$ by $t_1,t_2,\ldots,t_{N!}$.
- Under $H_0: X_1,\ldots,X_m,X_{m+1},\ldots,X_{m+n} \overset{IID}{\sim}F^*=G^*$, each of the permutations of $x= (x_1,x_2,\ldots,x_m,x_{m+1},x_{m+2},\ldots,x_{m+n})$ has the same joint probability $\prod_{i=1}^{m+n} f(x_i)$, where $f(x_i)$ is the density function corresponding to $F^*=G^*$, $f(x_i)=dF(x_i)=dG(x_i)$.
- Therefore, the transformation of the data by our statistic $T$ also has the same probability over the values of $T$, namely $\{t_1,t_2,\ldots,t_{N!}\}$. Let $\mathbf{P}_0$ be this permutation distribution under the null hypothesis. $\mathbf{P}_0$ is discrete and uniform over $\{t_1,t_2,\ldots,t_{N!}\}$.
- Let $t_{obs} := T(x_{obs})$ be the observed value of the test statistic.
- Assuming we reject $H_0$ when $T$ is large, the P-value = $\mathbf{P}_0 \left( T \geq t_{obs} \right)$.
- Saying that $\mathbf{P}_0$ is discrete and uniform over $\{t_1, t_2, \ldots, t_{N!}\}$ says that each possible permutation has an equal probability of occurring (under the null hypothesis). There are $N!$ possible permutations and so the probability of any individual permutation is $\frac{1}{N!}$:

$$\text{P-value} = \mathbf{P}_0 \left( T \geq t_{obs} \right) = \frac{1}{N!} \left( \sum_{j=1}^{N!} \mathbf{1} (t_j \geq t_{obs}) \right), \qquad \mathbf{1} (t_j \geq t_{obs}) = \begin{cases} 1 & \text{if } \quad t_j \geq t_{obs} \\ 0 & \text{otherwise} \end{cases}$$

This will make more sense if we look at some real data.

Permutation Testing with Shell Data

In 2008, Guo Yaozong and Chen Shun collected data on the diameters of coarse venus shells from New Brighton beach for a course project. They recorded the diameters for two samples of shells, one from each side of the New Brighton Pier. The data is given in the following two cells.
###Code
leftSide = [52, 54, 60, 60, 54, 47, 57, 58, 61, 57, 50, 60, 60, 60, 62, 44, 55, 58, 55,\
60, 59, 65, 59, 63, 51, 61, 62, 61, 60, 61, 65, 43, 59, 58, 67, 56, 64, 47,\
64, 60, 55, 58, 41, 53, 61, 60, 49, 48, 47, 42, 50, 58, 48, 59, 55, 59, 50, \
47, 47, 33, 51, 61, 61, 52, 62, 64, 64, 47, 58, 58, 61, 50, 55, 47, 39, 59,\
64, 63, 63, 62, 64, 61, 50, 62, 61, 65, 62, 66, 60, 59, 58, 58, 60, 59, 61,\
55, 55, 62, 51, 61, 49, 52, 59, 60, 66, 50, 59, 64, 64, 62, 60, 65, 44, 58, 63]
rightSide = [58, 54, 60, 55, 56, 44, 60, 52, 57, 58, 61, 66, 56, 59, 49, 48, 69, 66, 49,\
72, 49, 50, 59, 59, 59, 66, 62, 44, 49, 40, 59, 55, 61, 51, 62, 52, 63, 39,\
63, 52, 62, 49, 48, 65, 68, 45, 63, 58, 55, 56, 55, 57, 34, 64, 66, 54, 65,\
61, 56, 57, 59, 58, 62, 58, 40, 43, 62, 59, 64, 64, 65, 65, 59, 64, 63, 65,\
62, 61, 47, 59, 63, 44, 43, 59, 67, 64, 60, 62, 64, 65, 59, 55, 38, 57, 61,\
52, 61, 61, 60, 34, 62, 64, 58, 39, 63, 47, 55, 54, 48, 60, 55, 60, 65, 41,\
61, 59, 65, 50, 54, 60, 48, 51, 68, 52, 51, 61, 57, 49, 51, 62, 63, 59, 62,\
54, 59, 46, 64, 49, 61]
len(leftSide), len(rightSide)
###Output
_____no_output_____
###Markdown
$(115 + 139)!$ is a very big number. Let's start small, and take a subselection of the shell data to demonstrate the permutation test concept: the first two shells from the left of the pier and the first one from the right:
###Code
leftSub = [52, 54]
rightSub = [58]
totalSample = leftSub + rightSub
totalSample
###Output
_____no_output_____
###Markdown
So now we are testing the hypotheses

$$\begin{array}{lcl}H_0&:& X_1,X_2,X_3 \overset{IID}{\sim} F^*=G^* \\H_1&:&X_1, X_2 \overset{IID}{\sim} F^*, \,\,X_3 \overset{IID}{\sim} G^*, F^* \neq G^*\end{array}$$

with the test statistic

$$\begin{array}{lcl}T(X_1,X_2,X_3) &=& \text{abs} \left(\displaystyle\frac{1}{2}\displaystyle\sum_{i=1}^2X_i - \displaystyle\frac{1}{1}\displaystyle\sum_{i=2+1}^3X_i\right) \\ &=&\text{abs}\left(\displaystyle\frac{X_1+ X_2}{2} - \displaystyle\frac{X_3}{1}\right)\end{array}$$

Our observed data is $x_{obs} = (x_1, x_2, x_3) = (52, 54, 58)$ and the realisation of the test statistic for this data is $t_{obs} = \text{abs}\left(\displaystyle\frac{52+54}{2} - \frac{58}{1}\right) = \text{abs}\left(53 - 58\right) = \text{abs}(-5) = 5$.

Now we need to tabulate the permutations and their probabilities. There are 3! = 6 possible permutations of three items. For larger samples, you could use the `factorial` function to calculate this:
###Code
factorial(3)
###Output
_____no_output_____
###Markdown
We said that under the null hypothesis (the samples have the same DF) each permutation is equally likely, so each permutation has probability $\displaystyle\frac{1}{6}$. There is a way in Python (the language under the hood in Sage) to get all the permutations of a sequence:
###Code
list(Permutations(totalSample))
###Output
_____no_output_____
###Markdown
We can tabulate the permutations, their probabilities, and the value of the test statistic that would be associated with that permutation:

| Permutation | $t$ | $\mathbf{P}_0(T=t)$ Probability under Null |
|---|---|---|
| (52, 54, 58) | 5 | $\frac{1}{6}$ |
| (52, 58, 54) | 1 | $\frac{1}{6}$ |
| (54, 52, 58) | 5 | $\frac{1}{6}$ |
| (54, 58, 52) | 4 | $\frac{1}{6}$ |
| (58, 52, 54) | 1 | $\frac{1}{6}$ |
| (58, 54, 52) | 4 | $\frac{1}{6}$ |
###Code
allPerms = list(Permutations(totalSample))
for p in allPerms:
t = abs((p[0] + p[1])/2 - p[2]/1)
print p, " has t = ", t
###Output
[52, 54, 58] has t = 5
[52, 58, 54] has t = 1
[54, 52, 58] has t = 5
[54, 58, 52] has t = 4
[58, 52, 54] has t = 1
[58, 54, 52] has t = 4
###Markdown
To calculate the P-value for our test statistic $t_{obs} = 5$, we need to look at how many permutations would give rise to test statistics that are at least as big, and add up their probabilities.

$$\begin{array}{lcl}\text{P-value} &=& \mathbf{P}_0(T \geq t_{obs}) \\&=&\mathbf{P}_0(T \geq 5)\\&=&\frac{1}{6} + \frac{1}{6} \\&=&\frac{2}{6}\\ &=&\frac{1}{3} \\ &\approx & 0.333\end{array}$$

We could write ourselves a little bit of code to do this in SageMath. As you can see, we could easily improve this to make it more flexible so that we could use it for different numbers of samples, but it will do for now.
###Code
allPerms = list(Permutations(totalSample))
pProb = 1/len(allPerms)
pValue = 0
tobs = 5
for p in allPerms:
t = abs((p[0] + p[1])/2 - p[2]/1)
if t >= tobs:
pValue = pValue + pProb
pValue
###Output
_____no_output_____
###Markdown
This means that there is little or no evidence against the null hypothesis (that the shell diameter observations are from the same DF).

Pooled sample size

The lowest possible P-value for a pooled sample of size $N=m+n$ is $\displaystyle\frac{1}{N!}$. Can you see why this is? So with our small sub-samples the smallest possible P-value would be $\frac{1}{6} \approx 0.167$. If we are looking for P-value $\leq 0.01$ to constitute very strong evidence against $H_0$, then we have to have a large enough pooled sample for this to be possible. Since $5! = 5 \times 4 \times 3 \times 2 \times 1 = 120$, it is good to have $N \geq 5$.

YouTry in class

Try copying and pasting our code and then adapting it to deal with a sub-sample (52, 54, 60) from the left of the pier and (58, 54) from the right side of the pier.
###Code
leftSub = [52, 54, 60]
rightSub = [58, 54]
totalSample = leftSub + rightSub
totalSample
###Output
_____no_output_____
###Markdown
You will have to think about:
- calculating the value of the test statistic for the observed data and for all the permutations of the total sample
- calculating the probability of each permutation
- calculating the P-value by adding the probabilities for the permutations with test statistics at least as large as the observed value of the test statistic

(add more cells if you need them)

(end of You Try)

--- We can use the sample function and the Python method for making permutations to experiment with a larger sample, say 5 of each.
###Code
n, m = 5, 5
leftSub = sample(leftSide, n)
rightSub = sample(rightSide,m)
totalSample = leftSub + rightSub
leftSub; rightSub; totalSample
tobs = abs(mean(leftSub) - mean(rightSub))
tobs
###Output
_____no_output_____
###Markdown
We have met sample briefly already: it is part of the Python random module and it does exactly what you would expect from the name: it samples a specified number of elements randomly from a sequence.
###Code
#define a helper function for calculating the tstat from a permutation
def tForPerm(perm, samplesize1, samplesize2):
    '''Calculates the t statistic for a permutation of data given the sample sizes to split the permutation into.
Param perm is the permutation of data to be split into the two samples.
Param samplesize1, samplesize2 are the two sample sizes.
Returns the absolute value of the difference in the means of the two samples split out from perm.'''
sample1 = [perm[i] for i in range(samplesize1)]
sample2 = [perm[samplesize1+j] for j in range(samplesize2)]
return abs(mean(sample1) - mean(sample2))
allPerms = list(Permutations(totalSample))
pProb = 1/len(allPerms)
pValue = 0
tobs = abs(mean(leftSub) - mean(rightSub))
for p in allPerms:
t = tForPerm(p, n, m)
if t >= tobs:
pValue = pValue + pProb
pValue
n+m
factorial(n+m) # how many permutations is it checking
###Output
_____no_output_____
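###Markdown
Pulling the pieces above together, a small helper for the whole exact permutation test might look like the sketch below. This is an addition of ours rather than code from the original notebook; it simply wraps the loop we have already written (reusing tForPerm), and it will be slow for anything but small pooled samples since it visits every permutation of the pooled data.
###Code
def permTestPValue(sample1, sample2):
    '''Sketch of an exact permutation test of H0: F*=G* using the absolute difference in sample means.
    Param sample1, sample2 are the two lists of observations.
    Returns the exact P-value.'''
    m, n = len(sample1), len(sample2)
    pooled = sample1 + sample2
    tobs = abs(mean(sample1) - mean(sample2))   # observed test statistic
    allPerms = list(Permutations(pooled))       # all permutations of the pooled data
    pProb = 1/len(allPerms)                     # each permutation equally likely under H0
    pValue = 0
    for p in allPerms:
        if tForPerm(p, m, n) >= tobs:           # reuse the helper defined above
            pValue = pValue + pProb
    return pValue
# for example, on the tiny sub-samples from earlier;
# this should reproduce the 1/3 we calculated by hand above
permTestPValue([52, 54], [58])
###Output
_____no_output_____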
|
notebooks/Riverside-demo.ipynb
|
###Markdown
Running Eazy-py on the Riverside test catalogs
###Code
%matplotlib inline
import glob
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator
from astropy.table import Table
from astropy.utils.exceptions import AstropyWarning
import warnings
np.seterr(all='ignore')
warnings.simplefilter('ignore', category=AstropyWarning)
# https://github.com/gbrammer/eazy-py
import eazy
###Output
_____no_output_____
###Markdown
Prepare catalogs
###Code
## Replace extraneous tabs with spaces
os.system('perl -pi -e "s/\t/ /g" ../data/raw/CANDELS_GDSS_worksho*[p1].dat')
## Make flux columns
files = glob.glob('../data/raw/*[p1].dat')
for file in files:
cat = Table.read(file, format='ascii.commented_header')
for err_col in cat.colnames:
if err_col.startswith('e'):
mag_col = err_col[1:]
flux = 10**(-0.4*(cat[mag_col]-25))
flux_err = np.log(10)/2.5*cat[err_col]
bad = (cat[mag_col] < -90)
bad |= cat[mag_col] > 90
flux[bad] = -99
flux_err[bad] = -99
cat['flux_'+mag_col] = flux
cat['err_'+mag_col] = flux_err
# For translate file
#print('flux_{0:<11s} F00\nerr_{0:<12s} E00'.format(mag_col))
cat.write(file.replace('.dat','.flux.fits'), overwrite=True)
# Link templates and filter files
# EAZYCODE is an environment variable that points to the eazy-photoz distribution
eazy.symlink_eazy_inputs(path='/work/03565/stevans/maverick/software/eazypy/eazy-photoz', path_is_env=False)
### filter translation file
# Not sure about CTIO_U
trans = """ID id
zz z_spec
flux_CTIO_U F107
err_CTIO_U E107
flux_VIMOS_U F103
err_VIMOS_U E103
flux_ACS_F435W F233
err_ACS_F435W E233
flux_ACS_F606W F236
err_ACS_F606W E236
flux_ACS_F775W F238
err_ACS_F775W E238
flux_ACS_F814W F239
err_ACS_F814W E239
flux_ACS_F850LP F240
err_ACS_F850LP E240
flux_WFC3_F098M F201
err_WFC3_F098M E201
flux_WFC3_F105W F202
err_WFC3_F105W E202
flux_WFC3_F125W F203
err_WFC3_F125W E203
flux_WFC3_F160W F205
err_WFC3_F160W E205
flux_ISAAC_KS F37
err_ISAAC_KS E37
flux_HAWKI_KS F269
err_HAWKI_KS E269
flux_IRAC_CH1 F18
err_IRAC_CH1 E18
flux_IRAC_CH2 F19
err_IRAC_CH2 E19
flux_IRAC_CH3 F20
err_IRAC_CH3 E20
flux_IRAC_CH4 F21
err_IRAC_CH4 E21"""
fp = open('../data/raw/zphot.translate.gdss','w')
fp.write(trans)
fp.close()
###Output
_____no_output_____
###Markdown
Run the photo-z fits
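(A quick note on units: the flux columns were built above with a magnitude zeropoint of 25, i.e. $f = 10^{-0.4\,(m - 25)}$, equivalently $m = 25 - 2.5\log_{10} f$. The PRIOR_ABZP = 25 parameter below matches this zeropoint; tying the two together is our reading of the code, not a statement from the original notebook.)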
###Code
# Galactic extinction
EBV = {'aegis':0.0066, 'cosmos':0.0148, 'goodss':0.0069, 'uds':0.0195, 'goodsn':0.0103}['goodss']
roots = ['../data/raw/CANDELS_GDSS_workshop', '../data/raw/CANDELS_GDSS_workshop_z1'][:1]
for root in roots:
print('\n####\n')
params = {}
params['CATALOG_FILE'] = '{0}.flux.fits'.format(root)
params['MAIN_OUTPUT_FILE'] = '{0}.eazypy'.format(root)
params['PRIOR_FILTER'] = 205
params['PRIOR_ABZP'] = 25
params['MW_EBV'] = EBV
params['Z_MAX'] = 12
params['Z_STEP'] = 0.01
params['TEMPLATES_FILE'] = 'templates/fsps_full/tweak_fsps_QSF_12_v3.param'
params['VERBOSITY'] = 1
ez = eazy.photoz.PhotoZ(param_file=None,
translate_file='zphot.translate.gdss',
zeropoint_file=None, params=params,
load_prior=True, load_products=False, n_proc=-1)
for iter in range(2):
ez.fit_parallel(n_proc=4)
ez.error_residuals()
print('Get physical parameters')
ez.standard_output()
# Outputs for the z=1 catalog
zout = Table.read('{0}.zout.fits'.format(params['MAIN_OUTPUT_FILE']))
zout['ssfr'] = zout['SFR']/zout['mass']
print(zout.colnames)
###Output
['id', 'z_spec', 'nusefilt', 'numpeaks', 'z_phot', 'z_phot_chi2', 'z_phot_risk', 'z_min_risk', 'min_risk', 'z_chi2_noprior', 'chi2_noprior', 'z025', 'z160', 'z500', 'z840', 'z975', 'restU', 'restU_err', 'restB', 'restB_err', 'restV', 'restV_err', 'restJ', 'restJ_err', 'Lv', 'MLv', 'Av', 'mass', 'SFR', 'LIR', 'line_flux_Ha', 'line_EW_Ha', 'line_flux_O3', 'line_EW_O3', 'line_flux_Hb', 'line_EW_Hb', 'line_flux_O2', 'line_EW_O2', 'line_flux_Lya', 'line_EW_Lya', 'ssfr']
###Markdown
Diagnostic plots
###Code
fig = ez.zphot_zspec(zmin=0.7, zmax=1.4, minor=0.1, skip=0)
### UVJ
uv = -2.5*np.log10(zout['restU']/zout['restV'])
vj = -2.5*np.log10(zout['restV']/zout['restJ'])
uverr = 2.5*np.sqrt((zout['restU_err']/zout['restU'])**2+(zout['restV_err']/zout['restV'])**2)
vjerr = 2.5*np.sqrt((zout['restV_err']/zout['restV'])**2+(zout['restJ_err']/zout['restJ'])**2)
for show in ['ssfr', 'MLv', 'Av']:
fig = plt.figure(figsize=[5,5])
ax = fig.add_subplot(111)
ax.errorbar(vj, uv, xerr=vjerr, yerr=uverr, color='k',
alpha=0.1, marker='.', capsize=0, linestyle='None')
if show == 'ssfr':
sc = ax.scatter(vj, uv, c=np.log10(zout['ssfr']),
vmin=-13, vmax=-8.5, zorder=10, cmap='RdYlBu')
label = 'log sSFR'
ticks = np.arange(-13,-8,2)
elif show == 'MLv':
sc = ax.scatter(vj, uv, c=np.log10(zout['MLv']),
vmin=-1, vmax=1, zorder=10, cmap='magma')
label = r'$\log\ M/L_V$'
ticks = np.arange(-1,1.1,1)
elif show == 'Av':
sc = ax.scatter(vj, uv, c=zout['Av'], vmin=0,
vmax=2.5, zorder=10, cmap='plasma')
label = r'$A_V$'
ticks = np.arange(0,2.1,1)
# Colorbar
cax = fig.add_axes((0.18, 0.88, 0.2, 0.03))
cb = plt.colorbar(sc, cax=cax, orientation='horizontal')
cb.set_label(label)
cb.set_ticks(ticks)
ax.set_xlim(-0.3, 2.6)
ax.set_ylim(-0.3, 2.6)
ax.grid()
ax.set_xlabel(r'$V-J$'); ax.set_ylabel(r'$U-V$')
ax.set_title('Riverside z=1 sample')
fig.tight_layout(pad=0.1)
#plt.savefig('Riverside_z1_{0}.pdf'.format(show))
# Show SED
id_i = ez.cat['id'][4]
fig = ez.show_fit(id_i, show_fnu=0)
# nu-Fnu scaling for far-IR
fig = ez.show_fit(id_i, show_fnu=2)
fig.axes[0].set_xlim(0.2, 1200)
fig.axes[0].set_ylim(1, 100)
fig.axes[0].loglog()
fig.axes[0].set_xticklabels([0.1, 1, 10, 100, 1000])
fig.axes[1].set_xlim(0, 2)
###Output
_____no_output_____
|
MVP_cognitive_search_bertsquad.ipynb
|
###Markdown
MVP Cognitive Search Application using "bertsquad" model

Importing Libraries
###Code
import os
import pandas as pd
from ast import literal_eval
from cdqa.utils.converters import pdf_converter
from cdqa.pipeline import QAPipeline
from cdqa.utils.download import download_model
###Output
_____no_output_____
###Markdown
Downloading Pre-trained model: BERT fine-tuned on SQuAD (Stanford Question Answering Dataset)
###Code
download_model(model = 'bert-squad_1.1', dir='./models')
###Output
Downloading trained model...
bert_qa.joblib already downloaded
###Markdown
Converting text from PDF into Pandas dataframe
###Code
df = pdf_converter(directory_path='./PDF_docs/')
df.head(4)
###Output
2021-03-05 17:50:33,772 [MainThread ] [WARNI] Failed to see startup log message; retrying...
###Markdown
Using pre-trained language model: bertsquad
###Code
cdqa_pipeline = QAPipeline(reader='./models/bert_qa.joblib', max_df = 1.0)
cdqa_pipeline.fit_retriever(df=df)
###Output
_____no_output_____
###Markdown
Questions from technical information document:
###Code
question_1 = 'What is the shelf life of Golpanol ALS?'
prediction = cdqa_pipeline.predict(question_1)
print('Question : {}'.format(question_1))
print('Answer: {}'.format(prediction[0]))
question_2 = 'What is the appearance of Golpanol ALS?'
prediction = cdqa_pipeline.predict(question_2)
print('Question : {}'.format(question_2))
print('Answer: {}'.format(prediction[0]))
question_3 = 'What is Golpanol ALS pH value?'
prediction = cdqa_pipeline.predict(question_3)
print('Question : {}'.format(question_3))
print('Answer: {}'.format(prediction[0]))
###Output
Question : What is Golpanol ALS pH value?
Answer: 2 years
###Markdown
Questions from product specification document:
###Code
question_4 = 'What is the PRD number of Golpanol ALS?'
prediction = cdqa_pipeline.predict(question_4)
print('Question : {}'.format(question_4))
print('Answer: {}'.format(prediction[0]))
question_5 = 'What is the density value of Golpanol ALS?'
prediction = cdqa_pipeline.predict(question_5)
print('Question : {}'.format(question_5))
print('Answer: {}'.format(prediction[0]))
question_6 = 'What is the chemical nature of Golpanol ALS?'
prediction = cdqa_pipeline.predict(question_6)
print('Question : {}'.format(question_6))
print('Answer: {}'.format(prediction[0]))
###Output
Question : What is the chemical nature of Golpanol ALS?
Answer: Solubility
|
notebooks/graphics/Oil_transport/Oil_Cargo_WA_import.ipynb
|
###Markdown
Washington oil IMPORT to Salish Sea facilities
- Crude oil import is counted if the receiver is a refinery or terminal
- All non-ATBs and non-tugs are categorized as tankers
- Oil volumes are organized by vessel type and by fuel type
- 93.15% of all volume imports to the selected refineries and terminals is accounted for using the specified oil type classifications
- Total Import in this analysis: 5,726,027,289 gallons

Known issues
- I need to fix my ATB and barge calculation as there is currently no import via these ships, but I know that's wrong (i.e. I'm missing dilbit from Westridge)
- I need to convert to liters for our intents and purposes
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# User inputs
file_dir = '/Users/rmueller/Documents/UBC/MIDOSS/Data/DeptOfEcology/'
file_name = 'MuellerTrans4-30-20.xlsx'
# Import columns are: (E) 'StartDateTime, (H) Receiver, (O) Region, (P) Product, (Q) Quantity in Gallons, (R) Transfer Type (Fueling, Cargo, or Other)', (W) Deliverer type
df = pd.read_excel(f'{file_dir}{file_name}',sheet_name='Vessel Oil Transfer',
usecols="E,H,O,P,Q,R,X")
###Output
_____no_output_____
###Markdown
Extract oil cargo transfer data for the marine import approximation
###Code
# Get all cargo fuel transfers
bool_cargo = df['TransferType']=='Cargo'
cargo_data = df[bool_cargo]
oil_traffic = {}
oil_traffic['receiver']={}
for receiver in cargo_data.Receiver:
    # create a list of all receiving entities
    if receiver not in oil_traffic['receiver']:
        oil_traffic['receiver'][f'{receiver}'] = f'{receiver}'
###Output
_____no_output_____
###Markdown
Evaluate marine oil import By vessel type
###Code
# Eli Seely (DOE) recommends using "Facility" in "DelivererTypeDescription"[W] to flag all refineries and terminals
# and "Region"[O] (which is identified by county) to ID all non-Salish-Sea traffic
non_Salish_counties = ['Klickitat','Clark','Cowlitz','Wahkakum','Pacific','Grays Harbor']
# create dataset of export cargo in which export cargo is defined as
# all cargo being transferred from (land-based) facility
cargo_import = cargo_data[cargo_data.ReceiverTypeDescription.str.contains('Facility')]
# remove transfers from non-Salish-Sea locations
print('Removing oil transfers from: ')
for counties in non_Salish_counties:
display(counties)
cargo_import = cargo_import[~cargo_import.Region.str.contains(f'{counties}')]
# need to re-set indexing in order to use row-index as data_frame index
cargo_import.reset_index(drop=True, inplace=True)
# introduce dictionary for cargo traffic
oil_traffic['cargo'] = {}
# ATB fuel export
oil_traffic['cargo']['atb'] = {}
oil_traffic['cargo']['atb']['percent_volume_import'] = {} # percentage of total crude export by oil type
oil_traffic['cargo']['atb']['volume_import_total'] = 0
oil_traffic['cargo']['atb']['volume_import'] = [] # a vector of volumes ordered to pair with oil_type
oil_traffic['cargo']['atb']['oil_type'] = [] # a vector of oil types ordered to pair with volume_export
# barge fuel export
oil_traffic['cargo']['barge'] = {}
oil_traffic['cargo']['barge']['percent_volume_import'] = {}
oil_traffic['cargo']['barge']['volume_import_total'] = 0
oil_traffic['cargo']['barge']['volume_import'] = []
oil_traffic['cargo']['barge']['oil_type'] = []
# tanker fuel export
oil_traffic['cargo']['tanker'] = {}
oil_traffic['cargo']['tanker']['percent_volume_import'] = {}
oil_traffic['cargo']['tanker']['volume_import_total'] = 0
oil_traffic['cargo']['tanker']['volume_import'] = []
oil_traffic['cargo']['tanker']['oil_type'] = []
# total
oil_traffic['cargo']['import_total'] = {}
oil_traffic['cargo']['import_total']['all'] = 0
oil_traffic['cargo']['import_total']['atb_barge_tankers'] = 0
# identify ship traffic
[nrows,ncols] = cargo_import.shape
# total up volume of oil transferred onto ATB BARGES, non-ATB barges, and other vessels
# create counter for vessel-type
atb_counter = 0
barge_counter = 0
tanker_counter = 0
for rows in range(nrows):
# Add up all oil import to refineries and terminals, regardless of vessel-type
oil_traffic['cargo']['import_total']['all'] += cargo_import.TransferQtyInGallon[rows]
# ATB
if 'ATB' in cargo_import.Receiver[rows]:
oil_traffic['cargo']['atb']['volume_import_total'] += cargo_import.TransferQtyInGallon[rows]
oil_traffic['cargo']['atb']['volume_import'].append(cargo_import.TransferQtyInGallon[rows])
oil_traffic['cargo']['atb']['oil_type'].append(cargo_import.Product[rows])
atb_counter += 1
# Barge
elif ('BARGE' in cargo_import.Receiver[rows] or \
'Barge' in cargo_import.Receiver[rows] or \
'PB' in cargo_import.Receiver[rows] or \
'YON' in cargo_import.Receiver[rows] or \
'DLB' in cargo_import.Receiver[rows]):
oil_traffic['cargo']['barge']['volume_import_total'] += cargo_import.TransferQtyInGallon[rows]
oil_traffic['cargo']['barge']['volume_import'].append(cargo_import.TransferQtyInGallon[rows])
oil_traffic['cargo']['barge']['oil_type'].append(cargo_import.Product[rows])
barge_counter += 1
# Tanker
else:
oil_traffic['cargo']['tanker']['volume_import_total'] += cargo_import.TransferQtyInGallon[rows]
oil_traffic['cargo']['tanker']['volume_import'].append(cargo_import.TransferQtyInGallon[rows])
oil_traffic['cargo']['tanker']['oil_type'].append(cargo_import.Product[rows])
tanker_counter += 1
oil_traffic['cargo']['import_total']['atb_barge_tankers'] = oil_traffic['cargo']['atb']['volume_import_total'] + oil_traffic['cargo']['barge']['volume_import_total'] + oil_traffic['cargo']['tanker']['volume_import_total']
atb_barge_tanker_percent = 100 * oil_traffic['cargo']['import_total']['atb_barge_tankers']/oil_traffic['cargo']['import_total']['all']
print('Volume of oil import captured by ATB, barge, and tank traffic used here: ' + str(atb_barge_tanker_percent) + '%')
print('Total Import in this analysis: ' + str(oil_traffic['cargo']['import_total']['all']) + ' gallons')
# Calculate percent of total transport by vessel type
atb_percent = 100*oil_traffic['cargo']['atb']['volume_import_total']/oil_traffic['cargo']['import_total']['all']
barge_percent = 100*oil_traffic['cargo']['barge']['volume_import_total']/oil_traffic['cargo']['import_total']['all']
tanker_percent = 100*oil_traffic['cargo']['tanker']['volume_import_total']/oil_traffic['cargo']['import_total']['all']
print(atb_percent)
print(barge_percent)
print(tanker_percent)
volume_export_byvessel = [oil_traffic['cargo']['atb']['volume_import_total'], oil_traffic['cargo']['barge']['volume_import_total'], oil_traffic['cargo']['tanker']['volume_import_total']]
#colors = ['b', 'g', 'r', 'c', 'm', 'y']
labels = [f'ATB ({atb_percent:3.1f}%)', f'Tow Barge ({barge_percent:3.1f}%)',f'Tanker ({tanker_percent:3.1f}%)']
plt.gca().axis("equal")
plt.pie(volume_export_byvessel, labels= labels)
plt.title('Types of vessels by volume transporting oil as cargo to Salish Sea facilities')
###Output
_____no_output_____
###Markdown
By oil type within vessel type classification
###Code
oil_traffic['cargo']['atb']['CRUDE']=0
oil_traffic['cargo']['atb']['GASOLINE']=0
oil_traffic['cargo']['atb']['JET FUEL/KEROSENE']=0
oil_traffic['cargo']['atb']['DIESEL/MARINE GAS OIL']=0
oil_traffic['cargo']['atb']['DIESEL LOW SULPHUR (ULSD)']=0
oil_traffic['cargo']['atb']['BUNKER OIL/HFO']=0
oil_traffic['cargo']['atb']['other']=0
oil_traffic['cargo']['barge']['CRUDE']=0
oil_traffic['cargo']['barge']['GASOLINE']=0
oil_traffic['cargo']['barge']['JET FUEL/KEROSENE']=0
oil_traffic['cargo']['barge']['DIESEL/MARINE GAS OIL']=0
oil_traffic['cargo']['barge']['DIESEL LOW SULPHUR (ULSD)']=0
oil_traffic['cargo']['barge']['BUNKER OIL/HFO']=0
oil_traffic['cargo']['barge']['other']=0
oil_traffic['cargo']['tanker']['CRUDE']=0
oil_traffic['cargo']['tanker']['GASOLINE']=0
oil_traffic['cargo']['tanker']['JET FUEL/KEROSENE']=0
oil_traffic['cargo']['tanker']['DIESEL/MARINE GAS OIL']=0
oil_traffic['cargo']['tanker']['DIESEL LOW SULPHUR (ULSD)']=0
oil_traffic['cargo']['tanker']['BUNKER OIL/HFO']=0
oil_traffic['cargo']['tanker']['other']=0
oil_types = ['CRUDE', 'GASOLINE', 'JET FUEL/KEROSENE','DIESEL/MARINE GAS OIL',
'DIESEL LOW SULPHUR (ULSD)', 'BUNKER OIL/HFO', 'other']
percent_check = 0
for oil_name in range(len(oil_types)):
# ATBs
for rows in range(len(oil_traffic['cargo']['atb']['volume_import'])):
if oil_types[oil_name] in oil_traffic['cargo']['atb']['oil_type'][rows]:
oil_traffic['cargo']['atb'][oil_types[oil_name]] += oil_traffic['cargo']['atb']['volume_import'][rows]
# Barges
for rows in range(len(oil_traffic['cargo']['barge']['volume_import'])):
if oil_types[oil_name] in oil_traffic['cargo']['barge']['oil_type'][rows]:
oil_traffic['cargo']['barge'][oil_types[oil_name]] += oil_traffic['cargo']['barge']['volume_import'][rows]
# Tankers (non-ATB or Barge)
for rows in range(len(oil_traffic['cargo']['tanker']['volume_import'])):
if oil_types[oil_name] in oil_traffic['cargo']['tanker']['oil_type'][rows]:
oil_traffic['cargo']['tanker'][oil_types[oil_name]] += oil_traffic['cargo']['tanker']['volume_import'][rows]
# calculate percentages based on total oil cargo exports
oil_traffic['cargo']['atb']['percent_volume_import'][oil_types[oil_name]] = 100 * oil_traffic['cargo']['atb'][oil_types[oil_name]]/oil_traffic['cargo']['import_total']['all']
oil_traffic['cargo']['barge']['percent_volume_import'][oil_types[oil_name]] = 100 * oil_traffic['cargo']['barge'][oil_types[oil_name]]/oil_traffic['cargo']['import_total']['all']
oil_traffic['cargo']['tanker']['percent_volume_import'][oil_types[oil_name]] = 100 * oil_traffic['cargo']['tanker'][oil_types[oil_name]]/oil_traffic['cargo']['import_total']['all']
for name in ['atb', 'barge', 'tanker']:
percent_check += oil_traffic['cargo'][f'{name}']['percent_volume_import'][oil_types[oil_name]]
percent_check
###Output
_____no_output_____
###Markdown
Plot up ATB fuel types
###Code
atb_volume_import = [oil_traffic['cargo']['atb']['CRUDE'],
oil_traffic['cargo']['atb']['GASOLINE'],
oil_traffic['cargo']['atb']['JET FUEL/KEROSENE'],
oil_traffic['cargo']['atb']['DIESEL/MARINE GAS OIL'],
oil_traffic['cargo']['atb']['DIESEL LOW SULPHUR (ULSD)'],
oil_traffic['cargo']['atb']['BUNKER OIL/HFO']]
labels = []
for ii in range(len(oil_types)-1):
labels.append(f'{oil_types[ii]} ({oil_traffic["cargo"]["atb"]["percent_volume_import"][oil_types[ii]]:3.1f}) %')
plt.gca().axis("equal")
plt.pie(atb_volume_import, labels= labels)
plt.title('Types of oil transport by volume for ATBs to Salish Sea facilities')
###Output
_____no_output_____
###Markdown
Plot up Barge fuel types
###Code
# barge_volume_export = [oil_traffic['cargo']['barge']['CRUDE'],
# oil_traffic['cargo']['barge']['GASOLINE'],
# oil_traffic['cargo']['barge']['JET FUEL/KEROSENE'],
# oil_traffic['cargo']['barge']['DIESEL/MARINE GAS OIL'],
# oil_traffic['cargo']['barge']['DIESEL LOW SULPHUR (ULSD)'],
# oil_traffic['cargo']['barge']['BUNKER OIL/HFO'],
# oil_traffic['cargo']['barge']['other']]
barge_volume_import = [oil_traffic['cargo']['barge']['JET FUEL/KEROSENE'],
oil_traffic['cargo']['barge']['DIESEL/MARINE GAS OIL'],
oil_traffic['cargo']['barge']['DIESEL LOW SULPHUR (ULSD)'],
oil_traffic['cargo']['barge']['BUNKER OIL/HFO']]
labels = []
for ii in [2,3,4,5]:
labels.append(f'{oil_types[ii]} ({oil_traffic["cargo"]["barge"]["percent_volume_import"][oil_types[ii]]:3.1f}) %')
plt.gca().axis("equal")
plt.pie(barge_volume_import, labels= labels)
plt.title('Types of oil transport by volume for barges to Salish Sea facilities')
###Output
_____no_output_____
###Markdown
Plot up Tanker fuel types
###Code
tanker_volume_import = [oil_traffic['cargo']['tanker']['CRUDE'],
oil_traffic['cargo']['tanker']['GASOLINE'],
oil_traffic['cargo']['tanker']['JET FUEL/KEROSENE'],
oil_traffic['cargo']['tanker']['DIESEL/MARINE GAS OIL'],
oil_traffic['cargo']['tanker']['DIESEL LOW SULPHUR (ULSD)'],
oil_traffic['cargo']['tanker']['BUNKER OIL/HFO'],
oil_traffic['cargo']['tanker']['other']]
labels = []
for ii in range(len(oil_types)):
labels.append(f'{oil_types[ii]} ({oil_traffic["cargo"]["tanker"]["percent_volume_import"][oil_types[ii]]:3.1f}) %')
plt.gca().axis("equal")
plt.pie(tanker_volume_import, labels= labels)
plt.title('Types of oil transport by volume for tankers to Salish Sea facilities\n with percent of net import')
###Output
_____no_output_____
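###Markdown
One of the known issues above is that volumes are still in gallons. A minimal sketch of the conversion (assuming the DOE quantities are US gallons; the constant name and the example below are ours, not from the original notebook):
###Code
GALLONS_TO_LITRES = 3.785411784  # litres per US gallon (exact definition)
# e.g. convert the total import volume; the same factor applies to the per-vessel vectors
total_import_litres = oil_traffic['cargo']['import_total']['all'] * GALLONS_TO_LITRES
print('Total import: ' + str(total_import_litres) + ' litres')
###Output
_____no_output_____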
|
Experiment Scripts/Data_Visualisation_Code/Medical_Expending_Data_Visualisation.ipynb
|
###Markdown
Medical_Expending_Data_Visualisation
###Code
## Importing important libraries
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rc('xtick', labelsize=12)
matplotlib.rc('ytick', labelsize=30)
matplotlib.rcParams.update({'font.size': 28})
import math
import datetime as dt
import os
import sys
###Output
_____no_output_____
###Markdown
Utility Functions
###Code
## Visualization function
def Visualize(dataset,List_of_count_to_print,title1,ylab,vx=50,vy=30,w=.80):
    n = 0
    for i in List_of_count_to_print:
        # take a fresh per-country subset each time so later countries are not filtered away
        df = dataset[dataset['Country'] == i]
labels = df['Date']
conf = df['Confirmed']
Recov = df['Recovered']
Death = df['Deaths']
#high = max(conf)
#low = min(conf)
x = np.arange(len(labels)) # the x label locations
width = w # the width of the bars
fig, ax = plt.subplots(figsize=(vx,vy))
rects1 = ax.bar(x - width, conf, width, label='confirmed')
rects2 = ax.bar(x , Recov, width, label='Recovered')
rects3 = ax.bar(x + width , Death, width, label='Death')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel(ylab)
ax.set_title(title1)
ax.set_xticks(x)
plt.xticks(rotation=90)
#plt.ylim([math.ceil(low-0.5*(high-low)), math.ceil(high+0.5*(high-low))])
ax.set_xticklabels(labels)
ax.legend()
n = n + 1
plt.show()
## function to check the list of countries available
def count_avalaible(dataframe,country_coul_rep = 'Country'):
x = 0
for i in set(dataframe.loc[:,country_coul_rep]):
print(i,end=' | ')
x = x + 1
if(x > 6):
x = 0
print()
print("\n\n##Total No of Countries = " + str(len(set(dataframe.loc[:,country_coul_rep]))))
###Output
_____no_output_____
###Markdown
Loading Medical_Expending_Data Data
###Code
Medical_Expending_Countires_Wise = pd.read_csv('../../Medical Expending/API_SH.XPD.CHEX.PC.CD_DS2_en_csv_v2_1218407/API_SH.XPD.CHEX.PC.CD_DS2_en_csv_v2_1218407.csv')
Medical_Expending_Countires_Wise
## Check the list of countries available
## Columns renaming for Uniformity
Medical_Expending_Countires_Wise = Medical_Expending_Countires_Wise.rename(columns={'Country Name': 'Country'})
count_avalaible(Medical_Expending_Countires_Wise,'Country')
## Analysing the data Structure
Country_to_look_for = 'India'
ylab = "Expeniding in $"
xlab = "Countries"
filter1 = Medical_Expending_Countires_Wise['Country'] == Country_to_look_for
Medical_Expending_Countires_Wise_country_specific = Medical_Expending_Countires_Wise[filter1]
Medical_Expending_Countires_Wise_country_specific
#Medical_Expending_Countires_Wise ## Uncomment this to view for all countires at once
## Visualisation
df = Medical_Expending_Countires_Wise
labels = df['Country Code']
prev_2015 = df['2015']
prev_2016 = df['2016']
prev_2017 = df['2017']
title1 = 'Expending on Health for years 2015-2017'
#high = int(max(prev_2018))
#low = 0
x = np.arange(len(labels)) # the x label locations
width = .50 # the width of the bars
fig, ax = plt.subplots(figsize=(60,20))
rects1 = ax.bar(x-width/2, prev_2015, width, label='prev_2015')
rects2 = ax.bar(x, prev_2016, width, label='prev_2016')
rects3 = ax.bar(x+width/2, prev_2017, width, label='prev_2017')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel(ylab)
ax.set_xlabel(xlab)
ax.set_title(title1)
ax.set_xticks(x)
#ax.set_yticks(y)
plt.xticks(rotation=90)
#plt.ylim([math.ceil(low-0.5*(high-low)), math.ceil(high+0.5*(high-low))])
ax.set_xticklabels(labels)
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Cleaning Medical Expending Data (Preprocessing)
###Code
Medical_Expending_Countires_Wise = Medical_Expending_Countires_Wise.fillna(0)
Medical_Expending_Countires_Wise_imp_features_only = Medical_Expending_Countires_Wise.drop(Medical_Expending_Countires_Wise.columns.difference(['Country','2015','2016','2017']), 1)
Medical_Expending_Countires_Wise_imp_features_only = Medical_Expending_Countires_Wise_imp_features_only.replace('United States', 'US')
Medical_Expending_Countires_Wise_imp_features_only
## Column match
print('-----------------------------------------------------------------')
countries = ['Afghanistan','Italy' , 'Kuwait', 'India', 'South Africa' ,'US','Bangladesh', 'Brazil','Egypt',
'United Kingdom','Sri Lanka', 'Chile' , 'Norway', 'New Zealand' ,'Switzerland','Ireland','Argentina',
'Australia', 'Canada', 'China','Slovenia','North Macedonia','Zimbabwe','Sweden','Netherlands','Pakistan']
k = 0
match = []
for i in set(Medical_Expending_Countires_Wise_imp_features_only.loc[:,'Country']):
if(i in countries):
k +=1
match.append(i)
print(i)
print(k)
print("-------Not Matching --------------------")
for i in countries:
if(i not in match ):
print(i)
###Output
-----------------------------------------------------------------
Zimbabwe
United Kingdom
Kuwait
US
North Macedonia
Argentina
Brazil
Australia
India
Ireland
Switzerland
Bangladesh
Netherlands
Chile
Sweden
Canada
Norway
South Africa
Sri Lanka
Pakistan
New Zealand
China
Slovenia
Afghanistan
Italy
25
-------Not Matching --------------------
Egypt
###Markdown
Writing the cleaned data in Cleaned Folder
###Code
Medical_Expending_Countires_Wise_imp_features_only.to_csv('../Pre_Processed_Data/Medical_Expending_Countires_Wise_Processed.csv')
###Output
_____no_output_____
###Markdown
Visualisation After Cleaning
###Code
## Visualisation
df = Medical_Expending_Countires_Wise
labels = df['Country Code']
prev_2015 = df['2015']
prev_2016 = df['2016']
prev_2017 = df['2017']
title1 = 'Expending on Health for years 2015-2017'
#high = int(max(prev_2018))
#low = 0
x = np.arange(len(labels)) # the x label locations
width = .50 # the width of the bars
fig, ax = plt.subplots(figsize=(60,20))
rects1 = ax.bar(x-width/2, prev_2015, width, label='prev_2015')
rects2 = ax.bar(x, prev_2016, width, label='prev_2016')
rects3 = ax.bar(x+width/2, prev_2017, width, label='prev_2017')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel(ylab)
ax.set_xlabel(xlab)
ax.set_title(title1)
ax.set_xticks(x)
#ax.set_yticks(y)
plt.xticks(rotation=90)
#plt.ylim([math.ceil(low-0.5*(high-low)), math.ceil(high+0.5*(high-low))])
ax.set_xticklabels(labels)
ax.legend()
plt.show()
###Output
_____no_output_____
|
notebooks/3.01 Time Series with Pandas.ipynb
|
###Markdown
Looking at your data
###Code
data.head()
data.info()
no_st = 5  # data.shape[1]
data.T.head(no_st)
data[2050:].T # gives the last five rows but transposed so we can see all stocks
data[data.columns[:5]].plot(figsize=(10, 12), subplots=True);
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
data.describe().T.round(2)
data.mean()
data.aggregate([min, max, np.mean, np.std, np.median]).T.round(2)
###Output
_____no_output_____
###Markdown
Changes over time, rolling statistics
###Code
data.diff().T.head(no_st)
data.diff().mean()
###Output
_____no_output_____
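###Markdown
The section heading above also mentions rolling statistics, which the cells so far do not show. A small sketch of what that could look like (the 20-period window and the two-column subset are illustrative choices of ours, not from the original notebook):
###Code
# 20-period rolling mean and standard deviation for the first two columns
data[data.columns[:2]].rolling(window=20).mean().plot(figsize=(10, 6));
data[data.columns[:2]].rolling(window=20).std().tail()
###Output
_____no_output_____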
###Markdown
Percentage change
###Code
data.pct_change().round(3).T.head(no_st)
rets = np.log(data/data.shift(1))
rets.head(no_st).round(3)
###Output
_____no_output_____
###Markdown
Example of lambda function and slicing
###Code
rets[rets.columns[:5]].cumsum().apply(np.exp).plot(figsize=(10, 6));
rets[rets.columns[:5]].cumsum().apply(np.exp).hist(figsize=(10, 6), bins=50);
rets[rets.columns[:5]].cumsum().apply(np.exp).boxplot(figsize=(10, 6));
###Output
_____no_output_____
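###Markdown
For comparison with the heading's mention of lambda functions, the same cumulative-return calculation can be written with an explicit lambda instead of passing np.exp directly (an illustrative variant of the cell above, added by us):
###Code
# apply an anonymous function column-wise; equivalent to .apply(np.exp) above
rets[rets.columns[:5]].cumsum().apply(lambda col: np.exp(col)).plot(figsize=(10, 6));
###Output
_____no_output_____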
###Markdown
Resampling First we resample the data to weekly time intervals.
###Code
data.iloc[:, :10].resample('1w', label='right').mean().plot(figsize=(10, 5), legend=False);
###Output
_____no_output_____
|
evaluations/sars-cov-2/3-index-genomes.case-200.ipynb
|
###Markdown
1. Parameters
###Code
# Defaults
cases_dir = 'cases/unset'
reference_file = 'references/NC_045512.gbk.gz'
input_files_all = 'input/input-files.tsv'
iterations = 3
mincov = 10
ncores = 32
number_samples = 10
build_tree = False
# Parameters
cases_dir = "cases/case-200"
iterations = 3
number_samples = 200
build_tree = True
from pathlib import Path
from shutil import rmtree
from os import makedirs
import imp
fp, pathname, description = imp.find_module('gdi_benchmark', ['../../lib'])
gdi_benchmark = imp.load_module('gdi_benchmark', fp, pathname, description)
cases_dir_path = Path(cases_dir)
if cases_dir_path.exists():
rmtree(cases_dir_path)
if not cases_dir_path.exists():
makedirs(cases_dir_path)
input_files_all = Path(input_files_all)
reference_file = Path(reference_file)
case_name = str(cases_dir_path.name)
reference_name = reference_file.name.split('.')[0]
cases_input = cases_dir_path / 'input-files-case.tsv'
index_path = cases_dir_path / 'index'
benchmark_path = cases_dir_path / 'index-info.tsv'
output_tree = cases_dir_path / 'tree.tre'
###Output
_____no_output_____
###Markdown
2. Create subset input
###Code
import pandas as pd
all_input_df = pd.read_csv(input_files_all, sep='\t')
all_input_total = len(all_input_df)
subset_input_df = all_input_df.head(number_samples)
subset_input_total = len(subset_input_df)
subset_input_df.to_csv(cases_input, sep='\t', index=False)
print(f'Wrote {subset_input_total}/{all_input_total} samples to {cases_input}')
###Output
Wrote 200/100000 samples to cases/case-200/input-files-case.tsv
###Markdown
2. Index genomes
###Code
!gdi --version
###Output
gdi, version 0.3.0.dev12
###Markdown
2.1. Index reads
###Code
results_handler = gdi_benchmark.BenchmarkResultsHandler(name=case_name)
benchmarker = gdi_benchmark.IndexBenchmarker(benchmark_results_handler=results_handler,
index_path=index_path, input_files_file=cases_input,
reference_file=reference_file, mincov=mincov,
build_tree=build_tree,
ncores=ncores)
benchmark_df = benchmarker.benchmark(iterations=iterations)
benchmark_df
benchmark_df.to_csv(benchmark_path, sep='\t', index=False)
###Output
_____no_output_____
###Markdown
3. Export trees
###Code
if build_tree:
!gdi --project-dir {index_path} export tree {reference_name} > {output_tree}
print(f'Wrote tree to {output_tree}')
else:
print(f'build_tree={build_tree} so no tree to export')
###Output
Wrote tree to cases/case-200/tree.tre
|
Lesson02/Exercise15.ipynb
|
###Markdown
Creating contour plot
###Code
import seaborn as sns
mpg_df = sns.load_dataset("mpg")
# contour plot
sns.set_style("white")
# generate KDE plot: first two parameters are arrays of X and Y coordinates of data points
# parameter shade is set to True so that the contours are filled with a color gradient based on number of data points
sns.kdeplot(mpg_df.weight, mpg_df.mpg, shade=True)
###Output
_____no_output_____
|
session-2/numpy/02-Python-for-Data-Analysis-NumPy/03-Numpy Operations.ipynb
|
###Markdown
___ ___ NumPy Operations

Arithmetic

You can easily perform array with array arithmetic, or scalar with array arithmetic. Let's see some examples:
###Code
import numpy as np
arr = np.arange(0,10)
arr + arr
arr * arr
arr - arr
# Warning on division by zero, but not an error!
# Just replaced with nan
arr/arr
# Also a warning, but not an error; 1/0 is replaced with infinity
1/arr
arr**3
###Output
_____no_output_____
###Markdown
Universal Array Functions

Numpy comes with many [universal array functions](http://docs.scipy.org/doc/numpy/reference/ufuncs.html), which are essentially just mathematical operations you can use to perform the operation across the array. Let's show some common ones:
###Code
#Taking Square Roots
np.sqrt(arr)
#Calcualting exponential (e^)
np.exp(arr)
np.max(arr) #same as arr.max()
np.sin(arr)
np.log(arr)
###Output
/Users/marci/anaconda/lib/python3.5/site-packages/ipykernel/__main__.py:1: RuntimeWarning: divide by zero encountered in log
if __name__ == '__main__':
|
notebooks/Order_stars.ipynb
|
###Markdown
Order stars[AMath 586, Spring Quarter 2019](http://staff.washington.edu/rjl/classes/am586s2019/) at the University of Washington. For other notebooks, see [Index.ipynb](Index.ipynb) or the [Index of all notebooks on Github](https://github.com/rjleveque/amath586s2019/blob/master/notebooks/Index.ipynb).Plot the region of relative stability, also called the order star, for various 1-step methods.The general approach is to apply the method to $u' = \lambda u$ with time step $k$ to obtain $U^{n+1} = R(z) U^n$, where $R(z)$ is a rational function of $z=k\lambda$ (a polynomial for an explicit method). Then evaluate $|R(z)/e^z|$ on a grid of points in the complex plane and do a filled contour plot that shows the regions where $|R(z)/e^z| \leq 1$.
###Code
%pylab inline
seterr(divide='ignore', invalid='ignore') # suppress divide by zero warnings
from ipywidgets import interact
def plotOS(R, axisbox = [-10, 10, -10, 10], npts=500):
"""
Compute |R(z)| over a fine grid on the region specified by axisbox
and do a contour plot to show the region of absolute stability.
"""
xa, xb, ya, yb = axisbox
x = linspace(xa,xb,npts)
y = linspace(ya,yb,npts)
X,Y = meshgrid(x,y)
Z = X + 1j*Y
Rval = R(Z)
Rrel = abs(Rval / exp(Z))
# plot interior, exterior, as green and white:
levels = [-1e9,1,1e9]
CS1 = contourf(X, Y, Rrel, levels, colors = ('g', 'w'))
# plot boundary as a black curve:
CS2 = contour(X, Y, Rrel, [1,], colors = ('k',), linewidths = (2,))
title('Order Star')
grid(True)
plot([xa,xb],[0,0],'k') # x-axis
plot([0,0],[ya,yb],'k') # y-axis
axis('scaled') # scale x and y same so that circles are circular
axis(axisbox) # set limits
R = lambda z: 1+z
plotOS(R, axisbox=[-5,5,-5,5])
###Output
_____no_output_____
###Markdown
Theta method
###Code
def plotOS_theta(theta):
R = lambda z: (1. + (1-theta)*z) / (1-theta*z)
figure(figsize=(6,6))
plotOS(R, npts=200) # use fewer points so interact works well
title("Order star for theta-method with theta = %4.2f" % theta)
###Output
_____no_output_____
###Markdown
For $\theta=1/2$ this is the Trapezoid method, which is second order accurate, and so the order star has 3 sectors inside the region of relative stability near $z=0$:
###Code
plotOS_theta(0.5)
###Output
_____no_output_____
###Markdown
For all other $\theta$, the method is only first order accurate and there are only 2 sectors inside/outside the order star near $z=0$:
###Code
interact(plotOS_theta, theta=(0,1,.1));
###Output
_____no_output_____
###Markdown
Taylor series methods

For this class of methods we can easily increase the order and observe how the structure near the origin varies:
###Code
def plotOS_TS(r):
def R(z):
# return Rz = 1 + z + 0.5z^2 + ... + (1/r!) z^r
Rz = 1.
term = 1.
for j in range(1,r+1):
term = term * z/float(j)
Rz = Rz + term
return Rz
figure(figsize=(6,6))
plotOS(R, npts=300)
title('Taylor series method r = %i' % r)
interact(plotOS_TS, r=(1,20,1));
###Output
_____no_output_____
###Markdown
Note that the way this computation is done, it is subject to a lot of cancellation near $z=0$, so the plots above do not look correct in this region, particularly for $r>15$ or so -- there should be $r+1$ green sectors and $r+1$ white sectors approaching the origin.

Here's a better version for the particular case of high-degree Taylor series methods:
###Code
def plotOS_TS(r):
def R(z):
# return Rz = 1 + z + 0.5z^2 + ... + (1/r!) z^r
from math import factorial
Rz = 1.
term = 1.
for j in range(1,r+1):
term = term * z/float(j)
Rz = Rz + term
# for small z when r is large,
# it's better to compute the remainder
remainder = 0.
term = 1./factorial(r)
for j in range(r+1, 2*r):
term = term * z/float(j)
remainder = remainder + term
remainder *= z**r
# Define this so that contours of |R(z)/exp(z)| = 1
# look right near origin:
Rz_smallz = where(real(remainder/exp(z))>0., 0.5*exp(z), 2*exp(z))
Rz = where(abs(z**(r+1)/factorial(r+1)) > 1e-10, Rz, Rz_smallz)
return Rz
figure(figsize=(6,6))
plotOS(R, npts=500)
title('Taylor series method r = %i' % r)
interact(plotOS_TS, r=(1,30,1));
###Output
_____no_output_____
|
libraries/matplotlib/Demo.ipynb
|
###Markdown
*This is a **Markdown**-formatted cell*
###Code
# We can install packages right inside the Notebook
!pip3 install numpy bokeh
from ipywidgets import interact
import numpy as np
from bokeh.io import push_notebook, show, output_notebook
from bokeh.plotting import figure
output_notebook()
x = np.linspace(0, 2*np.pi, 2000)
y = np.sin(x)
p = figure(title='simple line example', plot_height=300, plot_width=600, y_range=(-5,5))
r = p.line(x, y, color='#2222aa', line_width=3)
def update(f, w=1, A=1, phi=0):
if f == "sin": func = np.sin
elif f == "cos": func = np.cos
elif f == "tan": func = np.tan
r.data_source.data['y'] = A * func(w * x + phi)
push_notebook()
show(p, notebook_handle=True)
interact(update, f=["sin", "cos", "tan"], w=(0,100), A=(1,5), phi=(0, 20, 0.1))
###Output
_____no_output_____
|
B1-1/BALI/Actividades/BALI_U2_A2_BERC.ipynb
|
###Markdown
Activity 2. Matrix representation

This notebook serves to check the exercises of _Activity 2_ of _Unit 2_ of the _Linear Algebra_ course of the _UnADM_. The use of this material in activities related to the _UnADM_ must be governed by the institution's [code of ethics](https://www.unadmexico.mx/images/descargables/codigo_de_etica_de_estudiantes_de_la_unadm.pdf). For any other purpose, please follow the guidelines set out in the [readme.md](../../../readme.md) file of this repository.

Exercises

Exercise 1

Let $A = \begin{pmatrix} 5 & 4 & 3 \\ 7 & 8 & 6 \\ 6 & 3 & 1 \end{pmatrix}$ and $B = \begin{pmatrix} -3&-5 &-3 \\ 4 & 5 & 6 \\ 7 & 8 & 3 \end{pmatrix}$; carry out the following operations:
- A+B
- A-B
- AxB
- 3A + 2B
###Code
# Declaration of matrices A, B
A = matrix([[ 5, 4, 3],
[ 7, 8, 6],
[ 6, 3, 1]])
B = matrix([[-3,-5,-3],
[ 4, 5, 6],
[ 7, 8, 3]])
print(A, end='\n\n')
print(B)
A+B
A-B
A*B
3*A + 2*B
###Output
_____no_output_____
###Markdown
Exercise 2

For each system of equations find its coefficient matrix and its augmented matrix
- S1 $\begin{pmatrix} x_1 - x_2 +4x_3 = 7 \\ 4x_1 +2x_2 -2x_3 = 10 \\ 2x_1 +3x_2 + x_3 = 23\end{pmatrix}$
- S2 $\begin{pmatrix} 2x_1 + 3x_2 + x_3 = 4 \\ 4x_1 + 2x_2 -2x_3 =10 \\ x_1 - 3x_2 -3x_3 = 3\end{pmatrix}$
- S3 $\begin{pmatrix} 2x_1 - 6x_2 + 10x_3 + 7x_4 = 1 \\ -4x_1 - 3x_2 + 20x_3 +14x_4 = 1 \\ 10x_1 - 9x_2 + 15x_3 +13x_4 =-1 \\ 3x_1 + 8x_2 - 30x_3 + 3x_4 = 1\end{pmatrix}$
###Code
def GenerarMatriz(equsys, vars):
A=matrix([[equ.lhs().coefficient(v) for v in vars] for equ in equsys])
b=matrix([[equ.rhs()] for equ in equsys])
return (A,b)
def showEquations(obj):
for eq in obj:
print(eq)
def work(obj, vars):
showEquations(obj)
return GenerarMatriz(obj, vars)
"""Sistema 1
x_1 − x_2 + 4x_3 =7
4x_1 + 2x_2 − 2x_3 =10
2x_1 + 3x_2 + x_3 =23
"""
xs = var('x1,x2,x3')
eq1 = x1 - x2 + 4*x3 == 7
eq2 = 4*x1 + 2*x2 - 2*x3 == 10
eq3 = 2*x1 + 3*x2 + x3 == 23
ret = work([eq1, eq2, eq3], xs)
ret
"""Sistema 2
2x_1 + 3x_2 + x_3 = 4
4x_1 + 2x_2 −2x_3 = 10
x_1 − 3x_2 −3x_3 = 3
"""
xs = var('x1,x2,x3')
eq1 = 2*x1 + 3*x2 + x3 == 4
eq2 = 4*x1 + 2*x2 - 2*x3 == 10
eq3 = 1*x1 - 3*x2 - 3*x3 == 3
ret = work([eq1, eq2, eq3], xs)
ret
"""Sistema 3
2x_1 - 6x_2 + 10x_3 + 7x_4 = 1
-4x_1- 3x_2 + 20x_3 +14x_4 = 1
10x_1- 9x_2 + 15x_3 +13x_4 =-1
3x_1 + 8x_2 - 30x_3 + 3x_4 = 1
"""
xs = var('x1,x2,x3,x4')
eq1 = 2*x1 - 6*x2 + 10*x3 + 7*x4 == 1
eq2 = -4*x1 - 3*x2 + 20*x3 +14*x4 == 1
eq3 = 10*x1 - 9*x2 + 15*x3 +13*x4 ==-1
eq4 = 3*x1 + 8*x2 - 30*x3 + 3*x4 == 1
ret = work([eq1, eq2, eq3, eq4], xs)
ret
###Output
2*x1 - 6*x2 + 10*x3 + 7*x4 == 1
-4*x1 - 3*x2 + 20*x3 + 14*x4 == 1
10*x1 - 9*x2 + 15*x3 + 13*x4 == -1
3*x1 + 8*x2 - 30*x3 + 3*x4 == 1
|
courses/Convolutional Neural Networks/WEEK 1/Convolution_model_Step_by_Step_v2a.ipynb
|
###Markdown
Convolutional Neural Networks: Step by StepWelcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation. **Notation**:- Superscript $[l]$ denotes an object of the $l^{th}$ layer. - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.- Superscript $(i)$ denotes an object from the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example input. - Subscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer. - $n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$. - $n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$. We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started! Updates If you were working on the notebook before this update...* The current notebook is version "v2a".* You can find your original work saved in the notebook with the previous version name ("v2") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* clarified example used for padding function. Updated starter code for padding function.* `conv_forward` has additional hints to help students if they're stuck.* `conv_forward` places code for `vert_start` and `vert_end` within the `for h in range(...)` loop; to avoid redundant calculations. Similarly updated `horiz_start` and `horiz_end`. **Thanks to our mentor Kevin Brown for pointing this out.*** `conv_forward` breaks down the `Z[i, h, w, c]` single line calculation into 3 lines, for clarity.* `conv_forward` test case checks that students don't accidentally use n_H_prev instead of n_H, use n_W_prev instead of n_W, and don't accidentally swap n_H with n_W* `pool_forward` properly nests calculations of `vert_start`, `vert_end`, `horiz_start`, and `horiz_end` to avoid redundant calculations.* `pool_forward' has two new test cases that check for a correct implementation of stride (the height and width of the previous layer's activations should be large enough relative to the filter dimensions so that a stride can take place). * `conv_backward`: initialize `Z` and `cache` variables within unit test, to make it independent of unit testing that occurs in the `conv_forward` section of the assignment.* **Many thanks to our course mentor, Paul Mielke, for proposing these test cases.** 1 - PackagesLet's first import all the packages that you will need during this assignment. - [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
###Code
import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
###Output
_____no_output_____
###Markdown
2 - Outline of the AssignmentYou will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed:- Convolution functions, including: - Zero Padding - Convolve window - Convolution forward - Convolution backward (optional)- Pooling functions, including: - Pooling forward - Create mask - Distribute value - Pooling backward (optional) This notebook will ask you to implement these functions from scratch in `numpy`. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:**Note** that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation. 3 - Convolutional Neural NetworksAlthough programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below. In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself. 3.1 - Zero-PaddingZero-padding adds zeros around the border of an image: **Figure 1** : **Zero-Padding** Image (3 channels, RGB) with a padding of 2. The main benefits of padding are the following:- It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer. - It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels as the edges of an image.**Exercise**: Implement the following function, which pads all the images of a batch of examples X with zeros. [Use np.pad](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html). Note if you want to pad the array "a" of shape $(5,5,5,5,5)$ with `pad = 1` for the 2nd dimension, `pad = 3` for the 4th dimension and `pad = 0` for the rest, you would do:```pythona = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), mode='constant', constant_values = (0,0))```
###Code
# GRADED FUNCTION: zero_pad
def zero_pad(X, pad):
"""
Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image,
as illustrated in Figure 1.
Argument:
X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
pad -- integer, amount of padding around each image on vertical and horizontal dimensions
Returns:
X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
"""
### START CODE HERE ### (≈ 1 line)
X_pad = np.pad(X, ((0,0), (pad,pad), (pad,pad), (0,0)), mode='constant', constant_values = (0,0))
### END CODE HERE ###
return X_pad
np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print ("x.shape =\n", x.shape)
print ("x_pad.shape =\n", x_pad.shape)
print ("x[1,1] =\n", x[1,1])
print ("x_pad[1,1] =\n", x_pad[1,1])
fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
###Output
x.shape =
(4, 3, 3, 2)
x_pad.shape =
(4, 7, 7, 2)
x[1,1] =
[[ 0.90085595 -0.68372786]
[-0.12289023 -0.93576943]
[-0.26788808 0.53035547]]
x_pad[1,1] =
[[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]]
###Markdown
**Expected Output**:```x.shape = (4, 3, 3, 2)x_pad.shape = (4, 7, 7, 2)x[1,1] = [[ 0.90085595 -0.68372786] [-0.12289023 -0.93576943] [-0.26788808 0.53035547]]x_pad[1,1] = [[ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.]]``` 3.2 - Single step of convolution In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which: - Takes an input volume - Applies a filter at every position of the input- Outputs another volume (usually of different size) **Figure 2** : **Convolution operation** with a filter of 3x3 and a stride of 1 (stride = amount you move the window each time you slide) In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output. Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation. **Exercise**: Implement conv_single_step(). [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.sum.html). **Note**: The variable b will be passed in as a numpy array. If we add a scalar (a float or integer) to a numpy array, the result is a numpy array. In the special case when a numpy array contains a single value, we can cast it as a float to convert it to a scalar.
###Code
# GRADED FUNCTION: conv_single_step
def conv_single_step(a_slice_prev, W, b):
"""
Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation
of the previous layer.
Arguments:
a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)
Returns:
Z -- a scalar value, the result of convolving the sliding window (W, b) on a slice x of the input data
"""
### START CODE HERE ### (≈ 2 lines of code)
# Element-wise product between a_slice_prev and W. Do not add the bias yet.
s = np.multiply(a_slice_prev,W)
# Sum over all entries of the volume s.
Z = np.sum(s)
# Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
Z = Z + float(b)
### END CODE HERE ###
return Z
np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)
Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
###Output
Z = [[[-6.99908945]]]
###Markdown
**Expected Output**: **Z** -6.99908945068 3.3 - Convolutional Neural Networks - Forward passIn the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume: **Exercise**: Implement the function below to convolve the filters `W` on an input activation `A_prev`. This function takes the following inputs:* `A_prev`, the activations output by the previous layer (for a batch of m inputs); * Weights are denoted by `W`. The filter window size is `f` by `f`.* The bias vector is `b`, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding. **Hint**: 1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do:```pythona_slice_prev = a_prev[0:2,0:2,:]```Notice how this gives a 3D slice that has height 2, width 2, and depth 3. Depth is the number of channels. This will be useful when you will define `a_slice_prev` below, using the `start/end` indexes you will define.2. To define a_slice you will need to first define its corners `vert_start`, `vert_end`, `horiz_start` and `horiz_end`. This figure may be helpful for you to find out how each of the corner can be defined using h, w, f and s in the code below. **Figure 3** : **Definition of a slice using vertical and horizontal start/end (with a 2x2 filter)** This figure shows only a single channel. **Reminder**:The formulas relating the output shape of the convolution to the input shape is:$$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$$$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$$$ n_C = \text{number of filters used in the convolution}$$For this exercise, we won't worry about vectorization, and will just implement everything with for-loops. Additional Hints if you're stuck* You will want to use array slicing (e.g.`varname[0:1,:,3:5]`) for the following variables: `a_prev_pad` ,`W`, `b` Copy the starter code of the function and run it outside of the defined function, in separate cells. Check that the subset of each array is the size and dimension that you're expecting. * To decide how to get the vert_start, vert_end; horiz_start, horiz_end, remember that these are indices of the previous layer. Draw an example of a previous padded layer (8 x 8, for instance), and the current (output layer) (2 x 2, for instance). The output layer's indices are denoted by `h` and `w`. * Make sure that `a_slice_prev` has a height, width and depth.* Remember that `a_prev_pad` is a subset of `A_prev_pad`. Think about which one should be used within the for loops.
###Code
# GRADED FUNCTION: conv_forward
def conv_forward(A_prev, W, b, hparameters):
"""
Implements the forward propagation for a convolution function
Arguments:
A_prev -- output activations of the previous layer,
numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
b -- Biases, numpy array of shape (1, 1, 1, n_C)
hparameters -- python dictionary containing "stride" and "pad"
Returns:
Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward() function
"""
### START CODE HERE ###
# Retrieve dimensions from A_prev's shape (≈1 line)
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape (≈1 line)
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters" (≈2 lines)
stride = hparameters['stride']
pad = hparameters['pad']
# Compute the dimensions of the CONV output volume using the formula given above.
# Hint: use int() to apply the 'floor' operation. (≈2 lines)
n_H = int((n_H_prev - f + 2*pad)/stride)+1
n_W = int((n_W_prev - f + 2*pad)/stride)+1
# Initialize the output volume Z with zeros. (≈1 line)
Z = np.zeros((m,n_H,n_W,n_C))
# Create A_prev_pad by padding A_prev
A_prev_pad = zero_pad(A_prev,pad)
for i in range(m): # loop over the batch of training examples
a_prev_pad = A_prev_pad[i] # Select ith training example's padded activation
for h in range(n_H): # loop over vertical axis of the output volume
# Find the vertical start and end of the current "slice" (≈2 lines)
vert_start = h*stride
vert_end = h*stride+f
for w in range(n_W): # loop over horizontal axis of the output volume
# Find the horizontal start and end of the current "slice" (≈2 lines)
horiz_start = w*stride
horiz_end = w*stride+f
for c in range(n_C): # loop over channels (= #filters) of the output volume
# Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
a_slice_prev = a_prev_pad[vert_start:vert_end,horiz_start:horiz_end,:]
# Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈3 line)
weights = W[:,:,:,c]
biases = b[:,:,:,c]
Z[i, h, w, c] = conv_single_step(a_slice_prev,weights,biases)
### END CODE HERE ###
# Making sure your output shape is correct
assert(Z.shape == (m, n_H, n_W, n_C))
# Save information in "cache" for the backprop
cache = (A_prev, W, b, hparameters)
return Z, cache
np.random.seed(1)
A_prev = np.random.randn(10,5,7,4)
W = np.random.randn(3,3,4,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 1,
"stride": 2}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =\n", np.mean(Z))
print("Z[3,2,1] =\n", Z[3,2,1])
print("cache_conv[0][1][2][3] =\n", cache_conv[0][1][2][3])
###Output
Z's mean =
0.692360880758
Z[3,2,1] =
[ -1.28912231 2.27650251 6.61941931 0.95527176 8.25132576
2.31329639 13.00689405 2.34576051]
cache_conv[0][1][2][3] =
[-1.1191154 1.9560789 -0.3264995 -1.34267579]
###Markdown
**Expected Output**:```Z's mean = 0.692360880758Z[3,2,1] = [ -1.28912231 2.27650251 6.61941931 0.95527176 8.25132576 2.31329639 13.00689405 2.34576051]cache_conv[0][1][2][3] = [-1.1191154 1.9560789 -0.3264995 -1.34267579]``` Finally, CONV layer should also contain an activation, in which case we would add the following line of code:```python Convolve the window to get back one output neuronZ[i, h, w, c] = ... Apply activationA[i, h, w, c] = activation(Z[i, h, w, c])```You don't need to do it here. 4 - Pooling layer The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to its position in the input. The two types of pooling layers are: - Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.- Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$. This specifies the height and width of the $f \times f$ window you would compute a *max* or *average* over. 4.1 - Forward PoolingNow, you are going to implement MAX-POOL and AVG-POOL, in the same function. **Exercise**: Implement the forward pass of the pooling layer. Follow the hints in the comments below.**Reminder**:As there's no padding, the formulas binding the output shape of the pooling to the input shape is:$$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$$$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$$$ n_C = n_{C_{prev}}$$
###Code
# GRADED FUNCTION: pool_forward
def pool_forward(A_prev, hparameters, mode = "max"):
"""
Implements the forward pass of the pooling layer
Arguments:
A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
hparameters -- python dictionary containing "f" and "stride"
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters
"""
# Retrieve dimensions from the input shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve hyperparameters from "hparameters"
f = hparameters["f"]
stride = hparameters["stride"]
# Define the dimensions of the output
n_H = int(1 + (n_H_prev - f) / stride)
n_W = int(1 + (n_W_prev - f) / stride)
n_C = n_C_prev
# Initialize output matrix A
A = np.zeros((m, n_H, n_W, n_C))
### START CODE HERE ###
for i in range(m): # loop over the training examples
for h in range(n_H): # loop on the vertical axis of the output volume
# Find the vertical start and end of the current "slice" (≈2 lines)
vert_start = h*stride
vert_end = h*stride+f
for w in range(n_W): # loop on the horizontal axis of the output volume
# Find the vertical start and end of the current "slice" (≈2 lines)
horiz_start = w*stride
horiz_end = w*stride+f
for c in range (n_C): # loop over the channels of the output volume
# Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]
# Compute the pooling operation on the slice.
# Use an if statement to differentiate the modes.
# Use np.max and np.mean.
if mode == "max":
A[i, h, w, c] = np.max(a_prev_slice)
elif mode == "average":
A[i, h, w, c] = np.mean(a_prev_slice)
### END CODE HERE ###
# Store the input and hparameters in "cache" for pool_backward()
cache = (A_prev, hparameters)
# Making sure your output shape is correct
assert(A.shape == (m, n_H, n_W, n_C))
return A, cache
# Case 1: stride of 1
np.random.seed(1)
A_prev = np.random.randn(2, 5, 5, 3)
hparameters = {"stride" : 1, "f": 3}
A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A.shape = " + str(A.shape))
print("A =\n", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A.shape = " + str(A.shape))
print("A =\n", A)
###Output
mode = max
A.shape = (2, 3, 3, 3)
A =
[[[[ 1.74481176 1.6924546 2.18557541]
[ 1.74481176 1.6924546 2.18557541]
[ 1.74481176 1.6924546 2.18557541]]
[[ 1.74481176 1.6924546 2.18557541]
[ 1.74481176 1.6924546 2.18557541]
[ 1.74481176 1.6924546 2.18557541]]
[[ 1.74481176 1.6924546 2.18557541]
[ 1.74481176 1.6924546 2.18557541]
[ 1.74481176 1.6924546 2.18557541]]]
[[[ 1.96710175 1.12141771 1.27375593]
[ 1.96710175 1.12141771 1.27375593]
[ 1.96710175 1.12141771 1.27375593]]
[[ 1.96710175 1.12141771 1.27375593]
[ 1.96710175 1.12141771 1.27375593]
[ 1.96710175 1.12141771 1.27375593]]
[[ 1.96710175 1.12141771 1.27375593]
[ 1.96710175 1.12141771 1.27375593]
[ 1.96710175 1.12141771 1.27375593]]]]
mode = average
A.shape = (2, 3, 3, 3)
A =
[[[[-0.08747826 0.19315856 0.07307572]
[-0.08747826 0.19315856 0.07307572]
[-0.08747826 0.19315856 0.07307572]]
[[-0.08747826 0.19315856 0.07307572]
[-0.08747826 0.19315856 0.07307572]
[-0.08747826 0.19315856 0.07307572]]
[[-0.08747826 0.19315856 0.07307572]
[-0.08747826 0.19315856 0.07307572]
[-0.08747826 0.19315856 0.07307572]]]
[[[ 0.13127224 0.11964292 -0.01802191]
[ 0.13127224 0.11964292 -0.01802191]
[ 0.13127224 0.11964292 -0.01802191]]
[[ 0.13127224 0.11964292 -0.01802191]
[ 0.13127224 0.11964292 -0.01802191]
[ 0.13127224 0.11964292 -0.01802191]]
[[ 0.13127224 0.11964292 -0.01802191]
[ 0.13127224 0.11964292 -0.01802191]
[ 0.13127224 0.11964292 -0.01802191]]]]
###Markdown
** Expected Output**```mode = maxA.shape = (2, 3, 3, 3)A = [[[[ 1.74481176 0.90159072 1.65980218] [ 1.74481176 1.46210794 1.65980218] [ 1.74481176 1.6924546 1.65980218]] [[ 1.14472371 0.90159072 2.10025514] [ 1.14472371 0.90159072 1.65980218] [ 1.14472371 1.6924546 1.65980218]] [[ 1.13162939 1.51981682 2.18557541] [ 1.13162939 1.51981682 2.18557541] [ 1.13162939 1.6924546 2.18557541]]] [[[ 1.19891788 0.84616065 0.82797464] [ 0.69803203 0.84616065 1.2245077 ] [ 0.69803203 1.12141771 1.2245077 ]] [[ 1.96710175 0.84616065 1.27375593] [ 1.96710175 0.84616065 1.23616403] [ 1.62765075 1.12141771 1.2245077 ]] [[ 1.96710175 0.86888616 1.27375593] [ 1.96710175 0.86888616 1.23616403] [ 1.62765075 1.12141771 0.79280687]]]]mode = averageA.shape = (2, 3, 3, 3)A = [[[[ -3.01046719e-02 -3.24021315e-03 -3.36298859e-01] [ 1.43310483e-01 1.93146751e-01 -4.44905196e-01] [ 1.28934436e-01 2.22428468e-01 1.25067597e-01]] [[ -3.81801899e-01 1.59993515e-02 1.70562706e-01] [ 4.73707165e-02 2.59244658e-02 9.20338402e-02] [ 3.97048605e-02 1.57189094e-01 3.45302489e-01]] [[ -3.82680519e-01 2.32579951e-01 6.25997903e-01] [ -2.47157416e-01 -3.48524998e-04 3.50539717e-01] [ -9.52551510e-02 2.68511000e-01 4.66056368e-01]]] [[[ -1.73134159e-01 3.23771981e-01 -3.43175716e-01] [ 3.80634669e-02 7.26706274e-02 -2.30268958e-01] [ 2.03009393e-02 1.41414785e-01 -1.23158476e-02]] [[ 4.44976963e-01 -2.61694592e-03 -3.10403073e-01] [ 5.08114737e-01 -2.34937338e-01 -2.39611830e-01] [ 1.18726772e-01 1.72552294e-01 -2.21121966e-01]] [[ 4.29449255e-01 8.44699612e-02 -2.72909051e-01] [ 6.76351685e-01 -1.20138225e-01 -2.44076712e-01] [ 1.50774518e-01 2.89111751e-01 1.23238536e-03]]]]```
###Code
# Case 2: stride of 2
np.random.seed(1)
A_prev = np.random.randn(2, 5, 5, 3)
hparameters = {"stride" : 2, "f": 3}
A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A.shape = " + str(A.shape))
print("A =\n", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A.shape = " + str(A.shape))
print("A =\n", A)
###Output
mode = max
A.shape = (2, 2, 2, 3)
A =
[[[[ 1.74481176 1.6924546 2.18557541]
[ 1.74481176 1.6924546 2.18557541]]
[[ 1.74481176 1.6924546 2.18557541]
[ 1.74481176 1.6924546 2.18557541]]]
[[[ 1.96710175 1.12141771 1.27375593]
[ 1.96710175 1.12141771 1.27375593]]
[[ 1.96710175 1.12141771 1.27375593]
[ 1.96710175 1.12141771 1.27375593]]]]
mode = average
A.shape = (2, 2, 2, 3)
A =
[[[[-0.08747826 0.19315856 0.07307572]
[-0.08747826 0.19315856 0.07307572]]
[[-0.08747826 0.19315856 0.07307572]
[-0.08747826 0.19315856 0.07307572]]]
[[[ 0.13127224 0.11964292 -0.01802191]
[ 0.13127224 0.11964292 -0.01802191]]
[[ 0.13127224 0.11964292 -0.01802191]
[ 0.13127224 0.11964292 -0.01802191]]]]
###Markdown
**Expected Output:** ```mode = maxA.shape = (2, 2, 2, 3)A = [[[[ 1.74481176 0.90159072 1.65980218] [ 1.74481176 1.6924546 1.65980218]] [[ 1.13162939 1.51981682 2.18557541] [ 1.13162939 1.6924546 2.18557541]]] [[[ 1.19891788 0.84616065 0.82797464] [ 0.69803203 1.12141771 1.2245077 ]] [[ 1.96710175 0.86888616 1.27375593] [ 1.62765075 1.12141771 0.79280687]]]]mode = averageA.shape = (2, 2, 2, 3)A = [[[[-0.03010467 -0.00324021 -0.33629886] [ 0.12893444 0.22242847 0.1250676 ]] [[-0.38268052 0.23257995 0.6259979 ] [-0.09525515 0.268511 0.46605637]]] [[[-0.17313416 0.32377198 -0.34317572] [ 0.02030094 0.14141479 -0.01231585]] [[ 0.42944926 0.08446996 -0.27290905] [ 0.15077452 0.28911175 0.00123239]]]]``` Congratulations! You have now implemented the forward passes of all the layers of a convolutional network. The remainder of this notebook is optional, and will not be graded. 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED)In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like. When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you can calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we will briefly present them below. 5.1 - Convolutional layer backward pass Let's start by implementing the backward pass for a CONV layer. 5.1.1 - Computing dA:This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:$$ dA += \sum _{h=0} ^{n_H} \sum_{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that at each time, we multiply the the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices. In code, inside the appropriate for-loops, this formula translates into:```pythonda_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]``` 5.1.2 - Computing dW:This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:$$ dW_c += \sum _{h=0} ^{n_H} \sum_{w=0} ^ {n_W} a_{slice} \times dZ_{hw} \tag{2}$$Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$. 
In code, inside the appropriate for-loops, this formula translates into:```pythondW[:,:,:,c] += a_slice * dZ[i, h, w, c]``` 5.1.3 - Computing db:This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:$$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost. In code, inside the appropriate for-loops, this formula translates into:```pythondb[:,:,:,c] += dZ[i, h, w, c]```**Exercise**: Implement the `conv_backward` function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above.
###Code
def conv_backward(dZ, cache):
"""
Implement the backward propagation for a convolution function
Arguments:
dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward(), output of conv_forward()
Returns:
dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),
numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
dW -- gradient of the cost with respect to the weights of the conv layer (W)
numpy array of shape (f, f, n_C_prev, n_C)
db -- gradient of the cost with respect to the biases of the conv layer (b)
numpy array of shape (1, 1, 1, n_C)
"""
### START CODE HERE ###
# Retrieve information from "cache"
(A_prev, W, b, hparameters) = cache
# Retrieve dimensions from A_prev's shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters"
stride = hparameters['stride']
pad = hparameters['pad']
# Retrieve dimensions from dZ's shape
(m, n_H, n_W, n_C) = dZ.shape
# Initialize dA_prev, dW, db with the correct shapes
dA_prev = np.zeros(A_prev.shape)
dW = np.zeros(W.shape)
db = np.zeros(b.shape)
# Pad A_prev and dA_prev
A_prev_pad = zero_pad(A_prev,pad)
dA_prev_pad = zero_pad(dA_prev,pad)
for i in range(m): # loop over the training examples
# select ith training example from A_prev_pad and dA_prev_pad
a_prev_pad = A_prev_pad[i]
da_prev_pad = dA_prev_pad[i]
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over the channels of the output volume
# Find the corners of the current "slice"
vert_start = h*stride
vert_end = h*stride+f
horiz_start = w*stride
horiz_end = w*stride+f
# Use the corners to define the slice from a_prev_pad
a_slice = a_prev_pad[vert_start:vert_end,horiz_start:horiz_end,:]
# Update gradients for the window and the filter's parameters using the code formulas given above
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c]*dZ[i,h,w,c]
dW[:,:,:,c] += a_slice*dZ[i,h,w,c]
db[:,:,:,c] += dZ[i,h,w,c]
# Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])
dA_prev[i, :, :, :] = da_prev_pad[ pad:-pad, pad:-pad, : ]
### END CODE HERE ###
# Making sure your output shape is correct
assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))
return dA_prev, dW, db
# We'll run conv_forward to initialize the 'Z' and 'cache_conv',
# which we'll use to test the conv_backward function
np.random.seed(1)
A_prev = np.random.randn(10,4,4,3)
W = np.random.randn(2,2,3,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 2,
"stride": 2}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
# Test conv_backward
dA, dW, db = conv_backward(Z, cache_conv)
print("dA_mean =", np.mean(dA))
print("dW_mean =", np.mean(dW))
print("db_mean =", np.mean(db))
###Output
dA_mean = 1.45794346971
dW_mean = 1.75193344294
db_mean = 7.25946501714
###Markdown
** Expected Output: ** **dA_mean** 1.45243777754 **dW_mean** 1.72699145831 **db_mean** 7.83923256462 5.2 Pooling layer - backward passNext, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagation the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer. 5.2.1 Max pooling - backward pass Before jumping into the backpropagation of the pooling layer, you are going to build a helper function called `create_mask_from_window()` which does the following: $$ X = \begin{bmatrix}1 && 3 \\4 && 2\end{bmatrix} \quad \rightarrow \quad M =\begin{bmatrix}0 && 0 \\1 && 0\end{bmatrix}\tag{4}$$As you can see, this function creates a "mask" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask. **Exercise**: Implement `create_mask_from_window()`. This function will be helpful for pooling backward. Hints:- [np.max()]() may be helpful. It computes the maximum of an array.- If you have a matrix X and a scalar x: `A = (X == x)` will return a matrix A of the same size as X such that:```A[i,j] = True if X[i,j] = xA[i,j] = False if X[i,j] != x```- Here, you don't need to consider cases where there are several maxima in a matrix.
###Code
def create_mask_from_window(x):
"""
Creates a mask from an input matrix x, to identify the max entry of x.
Arguments:
x -- Array of shape (f, f)
Returns:
mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
"""
### START CODE HERE ### (≈1 line)
mask = (x==np.max(x))
### END CODE HERE ###
return mask
np.random.seed(1)
x = np.random.randn(2,3)
mask = create_mask_from_window(x)
print('x = ', x)
print("mask = ", mask)
###Output
x = [[ 1.62434536 -0.61175641 -0.52817175]
[-1.07296862 0.86540763 -2.3015387 ]]
mask = [[ True False False]
[False False False]]
###Markdown
**Expected Output:** **x =**[[ 1.62434536 -0.61175641 -0.52817175] [-1.07296862 0.86540763 -2.3015387 ]] **mask =**[[ True False False] [False False False]] Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost. 5.2.2 - Average pooling - backward pass In max pooling, for each input window, all the "influence" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this.For example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like: $$ dZ = 1 \quad \rightarrow \quad dZ =\begin{bmatrix}1/4 && 1/4 \\1/4 && 1/4\end{bmatrix}\tag{5}$$This implies that each position in the $dZ$ matrix contributes equally to output because in the forward pass, we took an average. **Exercise**: Implement the function below to equally distribute a value dz through a matrix of dimension shape. [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ones.html)
###Code
def distribute_value(dz, shape):
"""
Distributes the input value in the matrix of dimension shape
Arguments:
dz -- input scalar
shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz
Returns:
a -- Array of size (n_H, n_W) for which we distributed the value of dz
"""
### START CODE HERE ###
# Retrieve dimensions from shape (≈1 line)
(n_H, n_W) = shape
# Compute the value to distribute on the matrix (≈1 line)
average = dz / (n_H * n_W)
# Create a matrix where every entry is the "average" value (≈1 line)
a = np.ones(shape) * average
### END CODE HERE ###
return a
a = distribute_value(2, (2,2))
print('distributed value =', a)
###Output
distributed value = [[ 0.5 0.5]
[ 0.5 0.5]]
###Markdown
**Expected Output**: distributed_value =[[ 0.5 0.5] [ 0.5 0.5]] 5.2.3 Putting it together: Pooling backward You now have everything you need to compute backward propagation on a pooling layer.**Exercise**: Implement the `pool_backward` function in both modes (`"max"` and `"average"`). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an `if/elif` statement to see if the mode is equal to `'max'` or `'average'`. If it is equal to 'average' you should use the `distribute_value()` function you implemented above to create a matrix of the same shape as `a_slice`. Otherwise, the mode is equal to '`max`', and you will create a mask with `create_mask_from_window()` and multiply it by the corresponding value of dA.
###Code
def pool_backward(dA, cache, mode = "max"):
"""
Implements the backward pass of the pooling layer
Arguments:
dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev
"""
### START CODE HERE ###
# Retrieve information from cache (≈1 line)
(A_prev, hparameters) = cache
# Retrieve hyperparameters from "hparameters" (≈2 lines)
stride = hparameters['stride']
f = hparameters['f']
# Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)
m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape
m, n_H, n_W, n_C = dA.shape
# Initialize dA_prev with zeros (≈1 line)
dA_prev = np.zeros(A_prev.shape)
for i in range(m): # loop over the training examples
# select training example from A_prev (≈1 line)
a_prev = A_prev[i]
for h in range(n_H): # loop on the vertical axis
for w in range(n_W): # loop on the horizontal axis
for c in range(n_C): # loop over the channels (depth)
# Find the corners of the current "slice" (≈4 lines)
vert_start = h*stride
vert_end = h*stride+f
horiz_start = w*stride
horiz_end = w*stride+f
# Compute the backward propagation in both modes.
if mode == "max":
# Use the corners and "c" to define the current slice from a_prev (≈1 line)
a_prev_slice = a_prev[vert_start:vert_end,horiz_start:horiz_end,c]
# Create the mask from a_prev_slice (≈1 line)
mask = create_mask_from_window(a_prev_slice)
# Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += np.multiply(dA[i, h, w, c], mask)
elif mode == "average":
# Get the value a from dA (≈1 line)
da = dA[i,h,w,c]
# Define the shape of the filter as fxf (≈1 line)
shape = (f,f)
# Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += distribute_value(da,shape)
### END CODE ###
# Making sure your output shape is correct
assert(dA_prev.shape == A_prev.shape)
return dA_prev
np.random.seed(1)
A_prev = np.random.randn(5, 5, 3, 2)
hparameters = {"stride" : 1, "f": 2}
A, cache = pool_forward(A_prev, hparameters)
dA = np.random.randn(5, 4, 2, 2)
dA_prev = pool_backward(dA, cache, mode = "max")
print("mode = max")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
print()
dA_prev = pool_backward(dA, cache, mode = "average")
print("mode = average")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
###Output
mode = max
mean of dA = 0.145713902729
dA_prev[1,1] = [[ 0. 0. ]
[ 5.05844394 -1.68282702]
[ 0. 0. ]]
mode = average
mean of dA = 0.145713902729
dA_prev[1,1] = [[ 0.08485462 0.2787552 ]
[ 1.26461098 -0.25749373]
[ 1.17975636 -0.53624893]]
|
notebooks/tg/mera/general/real/mnist_0_1.ipynb
|
###Markdown
Imports
###Code
import math
import pandas as pd
import pennylane as qml
import time
from keras.datasets import mnist
from matplotlib import pyplot as plt
from pennylane import numpy as np
from pennylane.templates import AmplitudeEmbedding, AngleEmbedding
from pennylane.templates.subroutines import ArbitraryUnitary
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
###Output
_____no_output_____
###Markdown
Model Params
###Code
np.random.seed(131)
initial_params = np.random.random([66])
INITIALIZATION_METHOD = 'Angle'
BATCH_SIZE = 20
EPOCHS = 400
STEP_SIZE = 0.01
BETA_1 = 0.9
BETA_2 = 0.99
EPSILON = 0.00000001
TRAINING_SIZE = 0.78
VALIDATION_SIZE = 0.07
TEST_SIZE = 1-TRAINING_SIZE-VALIDATION_SIZE
initial_time = time.time()
###Output
_____no_output_____
###Markdown
Import dataset
###Code
(train_X, train_y), (test_X, test_y) = mnist.load_data()
examples = np.append(train_X, test_X, axis=0)
examples = examples.reshape(70000, 28*28)
classes = np.append(train_y, test_y)
x = []
y = []
for (example, label) in zip(examples, classes):
if label == 0:
x.append(example)
y.append(-1)
elif label == 1:
x.append(example)
y.append(1)
x = np.array(x)
y = np.array(y)
# Normalize pixels values
x = x / 255
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=TEST_SIZE, shuffle=True)
validation_indexes = np.random.randint(0, len(X_train), size=(math.floor(len(X_train)*VALIDATION_SIZE),))  # indexes in [0, len(X_train))
X_validation = [X_train[n] for n in validation_indexes]
y_validation = [y_train[n] for n in validation_indexes]
pca = PCA(n_components=8)
pca.fit(X_train)
X_train = pca.transform(X_train)
X_validation = pca.transform(X_validation)
X_test = pca.transform(X_test)
preprocessing_time = time.time()
###Output
_____no_output_____
###Markdown
Circuit creation
###Code
device = qml.device("default.qubit", wires=8)
def unitary(params, wire1, wire2):
# qml.RZ(0, wires=wire1)
qml.RY(params[0], wires=wire1)
# qml.RZ(0, wires=wire1)
# qml.RZ(0, wires=wire2)
qml.RY(params[1], wires=wire2)
# qml.RZ(0, wires=wire2)
qml.CNOT(wires=[wire2, wire1])
# qml.RZ(0, wires=wire1)
qml.RY(params[2], wires=wire2)
qml.CNOT(wires=[wire1, wire2])
qml.RY(params[3], wires=wire2)
qml.CNOT(wires=[wire2, wire1])
# qml.RZ(0, wires=wire1)
qml.RY(params[4], wires=wire1)
# qml.RZ(0, wires=wire1)
# qml.RZ(0, wires=wire2)
qml.RY(params[5], wires=wire2)
# qml.RZ(0, wires=wire2)
@qml.qnode(device)
def circuit(features, params):
# Load state
if INITIALIZATION_METHOD == 'Amplitude':
AmplitudeEmbedding(features=features, wires=range(8), normalize=True, pad_with=0.)
else:
AngleEmbedding(features=features, wires=range(8), rotation='Y')
# First layer
unitary(params[0:6], 1, 2)
unitary(params[6:12], 3, 4)
unitary(params[12:18], 5, 6)
# Second layer
unitary(params[18:24], 0, 1)
unitary(params[24:30], 2, 3)
unitary(params[30:36], 4, 5)
unitary(params[36:42], 6, 7)
# Third layer
unitary(params[42:48], 2, 5)
# Fourth layer
unitary(params[48:54], 1, 2)
unitary(params[54:60], 5, 6)
# Fifth layer
unitary(params[60:66], 2, 5)
# Measurement
return qml.expval(qml.PauliZ(5))
###Output
_____no_output_____
###Markdown
Circuit example
###Code
features = X_train[0]
print(f"Inital parameters: {initial_params}\n")
print(f"Example features: {features}\n")
print(f"Expectation value: {circuit(features, initial_params)}\n")
print(circuit.draw())
###Output
Inital parameters: [0.65015361 0.94810917 0.38802889 0.64129616 0.69051205 0.12660931
0.23946678 0.25415707 0.42644165 0.83900255 0.74503365 0.38067928
0.26169292 0.05333379 0.43689638 0.20897912 0.59441102 0.09890353
0.22409353 0.5842624 0.95908107 0.20988382 0.66133746 0.50261295
0.32029143 0.12506485 0.80688893 0.98696002 0.54304141 0.23132314
0.60351254 0.17669598 0.88653747 0.58902228 0.72117264 0.27567029
0.78811469 0.1326223 0.39971595 0.62982409 0.42404345 0.16187284
0.52034418 0.6070413 0.5808057 0.82111597 0.98499188 0.93449492
0.90305486 0.3380262 0.78324429 0.74373474 0.58058546 0.43266356
0.66792795 0.23668741 0.45173663 0.91999741 0.96687301 0.76905057
0.32671177 0.62283984 0.19160224 0.24832171 0.11683869 0.01032549]
Example features: [-3.63840234 -2.87217609 -1.38022087 -0.4780986 -0.25409499 0.27703028
0.07474095 0.079672 ]
Expectation value: 0.08266765070062759
0: ──RY(-3.64)───RY(0.224)────────────────────────────────────────────────────────────╭X─────────────╭C─────────────╭X──RY(0.661)───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
1: ──RY(-2.87)───RY(0.65)────╭X─────────────╭C─────────────╭X──RY(0.691)───RY(0.584)──╰C──RY(0.959)──╰X──RY(0.21)───╰C──RY(0.503)──RY(0.903)──────────────────────────────────────────────────────────╭X─────────────╭C─────────────╭X──RY(0.581)───────────────────────────────────────────────────────────┤
2: ──RY(-1.38)───RY(0.948)───╰C──RY(0.388)──╰X──RY(0.641)──╰C──RY(0.127)───RY(0.32)───╭X─────────────╭C─────────────╭X──RY(0.543)──RY(0.52)───╭X─────────────╭C─────────────╭X──RY(0.985)──RY(0.338)──╰C──RY(0.783)──╰X──RY(0.744)──╰C──RY(0.433)──RY(0.327)──╭X─────────────╭C─────────────╭X──RY(0.117)───┤
3: ──RY(-0.478)──RY(0.239)───╭X─────────────╭C─────────────╭X──RY(0.745)───RY(0.125)──╰C──RY(0.807)──╰X──RY(0.987)──╰C──RY(0.231)─────────────│──────────────│──────────────│─────────────────────────────────────────────────────────────────────────────────│──────────────│──────────────│───────────────┤
4: ──RY(-0.254)──RY(0.254)───╰C──RY(0.426)──╰X──RY(0.839)──╰C──RY(0.381)───RY(0.604)──╭X─────────────╭C─────────────╭X──RY(0.721)─────────────│──────────────│──────────────│─────────────────────────────────────────────────────────────────────────────────│──────────────│──────────────│───────────────┤
5: ──RY(0.277)───RY(0.262)───╭X─────────────╭C─────────────╭X──RY(0.594)───RY(0.177)──╰C──RY(0.887)──╰X──RY(0.589)──╰C──RY(0.276)──RY(0.607)──╰C──RY(0.581)──╰X──RY(0.821)──╰C──RY(0.934)──RY(0.668)──╭X─────────────╭C─────────────╭X──RY(0.967)──RY(0.623)──╰C──RY(0.192)──╰X──RY(0.248)──╰C──RY(0.0103)──┤ ⟨Z⟩
6: ──RY(0.0747)──RY(0.0533)──╰C──RY(0.437)──╰X──RY(0.209)──╰C──RY(0.0989)──RY(0.788)──╭X─────────────╭C─────────────╭X──RY(0.424)──RY(0.237)──────────────────────────────────────────────────────────╰C──RY(0.452)──╰X──RY(0.92)───╰C──RY(0.769)───────────────────────────────────────────────────────────┤
7: ──RY(0.0797)──RY(0.133)────────────────────────────────────────────────────────────╰C──RY(0.4)────╰X──RY(0.63)───╰C──RY(0.162)───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
###Markdown
Accuracy test definition
###Code
def measure_accuracy(x, y, circuit_params):
class_errors = 0
for example, example_class in zip(x, y):
predicted_value = circuit(example, circuit_params)
if (example_class > 0 and predicted_value <= 0) or (example_class <= 0 and predicted_value > 0):
class_errors += 1
return 1 - (class_errors/len(y))
###Output
_____no_output_____
###Markdown
Training
###Code
params = initial_params
opt = qml.AdamOptimizer(stepsize=STEP_SIZE, beta1=BETA_1, beta2=BETA_2, eps=EPSILON)
test_accuracies = []
best_validation_accuracy = 0.0
best_params = []
for i in range(len(X_train)):
features = X_train[i]
expected_value = y_train[i]
def cost(circuit_params):
value = circuit(features, circuit_params)
return ((expected_value - value) ** 2)/len(X_train)
params = opt.step(cost, params)
if i % BATCH_SIZE == 0:
print(f"epoch {i//BATCH_SIZE}")
if i % (10*BATCH_SIZE) == 0:
current_accuracy = measure_accuracy(X_validation, y_validation, params)
test_accuracies.append(current_accuracy)
print(f"accuracy: {current_accuracy}")
if current_accuracy > best_validation_accuracy:
print("best accuracy so far!")
best_validation_accuracy = current_accuracy
best_params = params
if len(test_accuracies) == 30:
print(f"test_accuracies: {test_accuracies}")
if np.allclose(best_validation_accuracy, test_accuracies[0]):
params = best_params
break
del test_accuracies[0]
print("Optimized rotation angles: {}".format(params))
training_time = time.time()
###Output
Optimized rotation angles: [ 1.27708731 0.0418406 1.17656788 0.89938174 1.45642609 -0.49868312
0.37890149 -0.02206522 0.12904403 0.07320608 0.94279088 0.53608522
-1.75211862 0.43277391 0.85323566 0.97988847 0.85899252 -0.23636722
-0.41831644 1.35017644 -0.27600096 0.04688274 0.66133746 0.24953542
-0.305001 0.32282209 1.11461573 1.81793655 0.21440776 0.23132314
0.75891848 0.44127749 -0.20394899 0.07579767 0.72117264 -0.1991202
0.45284395 -1.22641762 0.11191774 -0.27693122 -0.04148221 0.16187284
0.19171052 0.13225081 0.85987269 1.44920939 0.67502418 1.28414991
0.64997733 0.0280585 1.37091801 1.84556485 0.58058546 0.47868408
1.01758294 -0.22883824 1.58664478 -0.66013907 1.09476426 0.76905057
0.37273229 0.75073109 0.00424176 0.63528275 0.11683869 0.06182167]
###Markdown
Testing
###Code
accuracy = measure_accuracy(X_test, y_test, params)
print(accuracy)
test_time = time.time()
print(f"pre-processing time: {preprocessing_time-initial_time}")
print(f"training time: {training_time - preprocessing_time}")
print(f"test time: {test_time - training_time}")
print(f"total time: {test_time - initial_time}")
###Output
pre-processing time: 2.911926031112671
training time: 2276.3546538352966
test time: 49.9194598197937
total time: 2329.186039686203
|
8_1_get_feature_embeddings.ipynb
|
###Markdown
###Code
from IPython.display import Image
###Output
_____no_output_____
###Markdown
Part I: What is feature embedding? 1. [An embedding is a translation of a high-dimensional vector into a low-dimensional space. Ideally, an embedding captures some of the semantics of the input by placing semantically similar inputs close together in the embedding space.](https://cloud.google.com/solutions/machine-learning/overview-extracting-and-serving-feature-embeddings-for-machine-learningwhat_is_an_embedding) Ultimate goal of computer vision model embeddings
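As a toy illustration of that definition (made-up data, not part of the original notebook): an embedding is just a lookup table from discrete ids to low-dimensional vectors, and similarity is measured in that vector space.
```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["cat", "dog", "car"]                    # hypothetical toy vocabulary
emb_dim = 4                                      # low-dimensional embedding space
table = rng.normal(size=(len(vocab), emb_dim))   # one embedding vector per token

def embed(word):
    return table[vocab.index(word)]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# After training, semantically similar inputs should end up close together;
# with random vectors this only demonstrates the lookup + similarity machinery.
print(cosine(embed("cat"), embed("dog")))
print(cosine(embed("cat"), embed("car")))
```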
###Code
Image(url = 'https://raw.githubusercontent.com/mangye16/Unsupervised_Embedding_Learning/master/fig/Motivation.png')
Image(url = 'https://raw.githubusercontent.com/mangye16/Unsupervised_Embedding_Learning/master/fig/Pipeline.png')
###Output
_____no_output_____
###Markdown
[Ye et al., 2019, arXiv](https://arxiv.org/pdf/1904.03436.pdf)
###Code
Image(url = 'https://raw.githubusercontent.com/nmningmei/BOLD5000_autoencoder/master/figures/autoencoder%20phase%204.jpg')
###Output
_____no_output_____
###Markdown
Ultimate goal of word embeddings: looks good, never works. These are the only examples that work.
###Code
Image(url = 'https://s3-ap-south-1.amazonaws.com/av-blog-media/wp-content/uploads/2017/06/06062705/Word-Vectors.png')
###Output
_____no_output_____
###Markdown
[Hunter Heidenreich ](http://hunterheidenreich.com/blog/intro-to-word-embeddings/) Let's start with the difficult one: train a word embedding model
###Code
Image(url = 'https://jaxenter.com/wp-content/uploads/2018/08/image-2-768x632.png')
###Output
_____no_output_____
###Markdown
What is shown above is called "skip-gram" $\rightarrow$ mapping 1 word to many words. The other way around is called "Continuous Bag of Words" (CBOW). [source: Tommaso Teofili, August 17, 2018](https://jaxenter.com/deep-learning-search-word2vec-147782.html)
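For reference, a minimal gensim sketch of training both variants (the toy corpus and all parameter values are made up; the parameter names assume gensim 4.x, where `size` became `vector_size`):
```python
from gensim.models import Word2Vec

toy_corpus = [["the", "cat", "sat", "on", "the", "mat"],
              ["the", "dog", "sat", "on", "the", "rug"]]

# sg=1 -> skip-gram (predict context from the word); sg=0 -> CBOW (predict the word from its context)
skipgram = Word2Vec(toy_corpus, vector_size=16, window=2, min_count=1, sg=1)
cbow     = Word2Vec(toy_corpus, vector_size=16, window=2, min_count=1, sg=0)

print(skipgram.wv["cat"].shape)      # (16,)
print(cbow.wv.most_similar("cat"))   # nearest neighbours in the toy embedding space
```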
###Code
Image(url = 'https://miro.medium.com/max/778/0*Yl7I7bH52zk8m_8R.')
Image(url = 'https://miro.medium.com/max/778/0*CldH-gf1GuWjqNjt.')
###Output
_____no_output_____
###Markdown
From [Introduction to Word Vectors](https://medium.com/@jayeshbahire/introduction-to-word-vectors-ea1d4e4b84bf) [Classical Word2Vec - the 2013 paper](https://www.tensorflow.org/tutorials/representation/word2vec) pro: 1. basic usage 2. local co-occurrence 3. old 4. **it is the reason people use 300-D as the representation space**; con: 1. does not handle rare words 2. trained on a wiki corpus. [GloVe](https://nlp.stanford.edu/projects/glove/) pro: 1. captures more global co-occurrence 2. handles rare words; con: 1. expensive to train 2. limited languages 3. trained on a wiki corpus. [FastText](https://fasttext.cc/docs/en/pretrained-vectors.html), now supports 157 languages, and they are easy to download; pro: 1. easy to train 2. more global co-occurrence 3. language support; con: 1. does not handle rare words 2. trained on wiki. Example of how to use a pre-trained word embedding:
```python
# for example, load the FastText model in memory
# fasttext_link: http://dcc.uchile.cl/~jperez/word-embeddings/fasttext-sbwc.vec.gz
import gensim  # only available in Cajal02 here
fasttest_model = gensim.models.keyedvectors.KeyedVectors.load_word2vec_format(fasttext_downloaded_file_name)
for word in words:
    word_vector_representation = fasttest_model.get_vector(word)
```
Move on to computer vision for images. Videos are just many frames of images, aren't they?
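A self-contained version of that snippet, as a hedged sketch (the path is a placeholder for whatever pre-trained `.vec` file was downloaded, and the word list is made up):
```python
from gensim.models import KeyedVectors

# Placeholder: point this at the downloaded, unzipped pre-trained vectors
vectors_path = "fasttext-sbwc.vec"

# limit= loads only the most frequent entries to keep memory use manageable (optional)
wv = KeyedVectors.load_word2vec_format(vectors_path, limit=100000)

words = ["casa", "perro", "gato"]    # example words, assumed to be in the vocabulary
for word in words:
    if word in wv:
        vec = wv.get_vector(word)
        print(word, vec.shape, vec[:5])
```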
###Code
Image(url = 'https://www.jneurosci.org/content/jneuro/35/27/10005/F1.large.jpg?width=800&height=600&carousel=1')
###Output
_____no_output_____
###Markdown
[Güçlü and Gerven, 2015](https://www.jneurosci.org/content/35/27/10005.full) Very important note: almost all the pretrained computer vision models are trained by supervised fashion to classify some classes 1. One of the most common dataset they are classifying on is the [ImageNet](www.image-net.org) 2. [different computer vision models are better in encoding information to explain variance](http://www.brain-score.org/leaderboard) in brain responses: DenseNet169 is better in V4 but worse in V1, while MobileNetV2 is better in V1 but worse in V4 and IT. Part II: live coding section require:1. zip file of your stimuli (words in text form, images are zipped, videos are segmented into images, sound waves, sound waves?)2. stimulus files are ready to use3. open a new colab from your Google Drive
###Code
###Output
_____no_output_____
|
code/firstLookatInstacartDataset.ipynb
|
###Markdown
Find out if there are any products in the train sample that have never been bought before
###Code
products[~products['product_id'].isin(order_products__prior['product_id'])]
###Output
_____no_output_____
|
python tasks/K Means/Market segmentation/Market segmentation example.ipynb
|
###Markdown
Market segmentation example Import the relevant libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from sklearn.cluster import KMeans
###Output
_____no_output_____
###Markdown
Load the data
###Code
data = pd.read_csv ('3.12. Example.csv')
data
###Output
_____no_output_____
###Markdown
Plot the data
###Code
plt.scatter(data['Satisfaction'],data['Loyalty'])
plt.xlabel('Satisfaction')
plt.ylabel('Loyalty')
###Output
_____no_output_____
###Markdown
Select the features
###Code
x = data.copy()
###Output
_____no_output_____
###Markdown
Clustering
###Code
kmeans = KMeans(2)
kmeans.fit(x)
###Output
_____no_output_____
###Markdown
Clustering results
###Code
clusters = x.copy()
clusters['cluster_pred']=kmeans.fit_predict(x)
plt.scatter(clusters['Satisfaction'],clusters['Loyalty'],c=clusters['cluster_pred'],cmap='rainbow')
plt.xlabel('Satisfaction')
plt.ylabel('Loyalty')
###Output
_____no_output_____
###Markdown
Standardize the variables
###Code
from sklearn import preprocessing
x_scaled = preprocessing.scale(x)
x_scaled
###Output
_____no_output_____
###Markdown
Take advantage of the Elbow method
###Code
wcss =[]
for i in range(1,10):
kmeans = KMeans(i)
kmeans.fit(x_scaled)
wcss.append(kmeans.inertia_)
wcss
plt.plot(range(1,10),wcss)
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
###Output
_____no_output_____
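As a follow-up sketch (not in the original notebook): once a number of clusters has been read off the elbow plot above — 4 is assumed here purely for illustration — refit on the standardized features and plot the segments. This reuses `x`, `x_scaled`, `KMeans`, and `plt` from the cells above.
```python
kmeans_new = KMeans(4)   # assumed elbow choice; adjust after inspecting the WCSS plot

clusters_new = x.copy()
clusters_new['cluster_pred'] = kmeans_new.fit_predict(x_scaled)

plt.scatter(clusters_new['Satisfaction'], clusters_new['Loyalty'],
            c=clusters_new['cluster_pred'], cmap='rainbow')
plt.xlabel('Satisfaction')
plt.ylabel('Loyalty')
```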
|
Project 1/Project_1 - Clean Code Hyperspectral.ipynb
|
###Markdown
Hyperspectral Images Load data
###Code
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import scipy.io as sio
import os
hsimg_load = sio.loadmat('SanBarHyperIm.mat')
hsimg_data = hsimg_load['SanBarIm88x400']
# hsimg_load = sio.loadmat('PaviaHyperIm.mat')
# hsimg_data = hsimg_load['PaviaHyperIm']
himage_display = hsimg_data[:,:,0:3]
plt.imshow(himage_display)
plt.show()
hsimg_data[:,:,0:3].shape
###Output
_____no_output_____
###Markdown
PCA - Number of Components Decision
###Code
from sklearn.decomposition import PCA
data = hsimg_data.reshape(hsimg_data.shape[0] * hsimg_data.shape[1], hsimg_data.shape[2])
pca = PCA().fit(data)
cum_var = np.cumsum(pca.explained_variance_ratio_)
eigenvalues = pca.explained_variance_
count = 0
for var in cum_var:
count += 1
if var >= 0.95:
n_components = count
answer = "We need about "+ str(n_components) + " components to retain 95% of the variance"
print(answer)
break
plt.figure(1)
plt.plot(cum_var)
plt.xlabel('Number of Components')
plt.ylabel('Cumulative Explained Variance')
plt.figure(2)
plt.plot(eigenvalues)
plt.xlabel('Number of Components')
plt.ylabel('Eigenvalues')
plt.show()
# Minimum Noise Fraction (MNF) --> similar to PCA but removes noise from bands
###Output
We need about 3 components to retain 95% of the variance
###Markdown
PCA - Actually run PCA
###Code
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
from skimage.transform import rescale
from sklearn.cluster import KMeans
import numpy as np
import time
#Reshape to 2D - one column per components
data = hsimg_data.reshape(hsimg_data.shape[0] * hsimg_data.shape[1], hsimg_data.shape[2])
#Using PCA
pca = PCA(n_components=n_components)
reduced_data = pca.fit_transform(data)
#Since my data is not between [0,1], I rescale the data
min_max_scaler = MinMaxScaler()
reduced_data_scaled = min_max_scaler.fit_transform(reduced_data)
#Turn data back into 3 dimensions to control the downsampling of the data
reduced_data_3D = reduced_data_scaled.reshape(hsimg_data[:,:,0:3].shape)
#Flatten my data again for algorithm input
img_data = reduced_data_3D.reshape(reduced_data_3D.shape[0] * reduced_data_3D.shape[1], 3)
plt.imshow(reduced_data_3D)
plt.show()
###Output
_____no_output_____
###Markdown
Do Kmeans
###Code
from sklearn.cluster import KMeans
n_clusters = 9
# Initializing KMeans
kmeans = KMeans(n_clusters=n_clusters)
# Fitting with inputs
t0 = time.time()
# Run algorithm
kmeans = kmeans.fit(img_data)
clusters = kmeans.cluster_centers_[kmeans.predict(img_data)]
t1 = time.time()
# Reshape the data into 3D
# img_clustered = clusters.reshape(img_r.shape)
img_clustered = clusters.reshape(reduced_data_3D.shape)
# Plot the data
plt.imshow(img_clustered)
title = 'KMeans clustering time to do: %.2fs' % (t1 - t0)
print(title)
plt.show()
###Output
KMeans clustering time to do: 2.14s
###Markdown
Mask the result - This is the general code that should go into the evaluation section; it repeats for each algorithm
###Code
# Load mask data
htruth_load = sio.loadmat('PaviaGrTruthMask.mat')
htruth_mask = htruth_load['PaviaGrTruthMask']
# create mask with same dimensions as image
mask = np.zeros_like(img_clustered)
# copy your image_mask to all dimensions (i.e. colors) of your image
for i in range(3):
mask[:,:,i] = htruth_mask.copy()
# apply the mask to your image
masked_image = img_clustered*mask
plt.imshow(masked_image)
plt.show()
###Output
_____no_output_____
###Markdown
Run martin index and load ground truth
###Code
from collections import defaultdict
def martinIndex(groundTruth, segmentedImage):
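    # Computes a Martin-style segmentation error index: both images are turned into
    # hash maps from label -> set of pixel coordinates, ground-truth regions are
    # weighted by their relative size, and a weighted (1 - Jaccard overlap) score is
    # accumulated over all region pairs. Lower values indicate better agreement.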
def imageToHashSegmented(arr):
myHash = {}
for i in range(len(arr)):
for j in range(len(arr[0])):
tempTuple = tuple(arr[i][j].tolist())
if tempTuple in myHash:
myHash[tempTuple].add((i,j))
else:
myHash[tempTuple] = {(i,j)}
return myHash
def imageToHash(arr):
myHash = {}
for i in range(len(arr)):
for j in range(len(arr[0])):
if arr[i][j] in myHash:
myHash[arr[i][j]].add((i,j))
else:
myHash[arr[i][j]] = {(i,j)}
return myHash
def WJ(hashGround):
totalPixels = len(groundTruth) * len(groundTruth[0])
wjHash = defaultdict(int)
for x in hashGround:
wjHash[x] = len(hashGround[x])/totalPixels
return wjHash
def WJI(hashGround, hashSegmented):
wjiHash = defaultdict(int)
wjiHashDen = defaultdict(int)
for j in hashGround:
for i in hashSegmented:
if len(hashGround[j].intersection(hashSegmented[i])) > 0:
intersection = 1
else:
intersection = 0
wjiHash[(j,i)] = len(hashSegmented[i]) * intersection
wjiHashDen[j] += len(hashSegmented[i]) * intersection
for j in hashGround:
for i in hashSegmented:
wjiHash[(j,i)] /= wjiHashDen[j]
return wjiHash
def EGS(hashGround, hashSegmented):
martinIndex = 0
wji = WJI(hashGround, hashSegmented)
wj = WJ(hashGround)
for j in hashGround:
innerSum = 1
for i in hashSegmented:
innerSum -= (len(hashGround[j].intersection(hashSegmented[i])) / len(hashGround[j].union(hashSegmented[i]))) * wji[(j,i)]
innerSum *= wj[j]
martinIndex += innerSum
return martinIndex
if segmentedImage[0][0].size>1:
return EGS(imageToHash(groundTruth), imageToHashSegmented(segmentedImage))
return EGS(imageToHash(groundTruth), imageToHash(segmentedImage))
hgtruth_load = sio.loadmat('PaviaGrTruth.mat')
hgtruth_mask = hgtruth_load['PaviaGrTruth']
martinIndex(hgtruth_mask, masked_image)*1000
###Output
_____no_output_____
###Markdown
SOM
###Code
from minisom import MiniSom
n_clusters = 9
t0 = time.time()
#Run Algorithm
som = MiniSom(1, n_clusters, 3, sigma=0.1, learning_rate=0.2) # 1 x n_clusters map with 3-dimensional (RGB) inputs
som.random_weights_init(img_data)
starting_weights = som.get_weights().copy() # saving the starting weights
som.train_random(img_data, 100)
qnt = som.quantization(img_data) # quantize each pixels of the image
clustered = np.zeros(reduced_data_3D.shape)
for i, q in enumerate(qnt): # place the quantized values into a new image
    clustered[np.unravel_index(i, (reduced_data_3D.shape[0], reduced_data_3D.shape[1]))] = q  # positional shape argument works across numpy versions
t1 = time.time()
# Plot image
plt.imshow(clustered)
title = 'Self-Organizing Map clustering time to do: %.2fs' % (t1 - t0)
print(title)
plt.show()
###Output
Self-Organizing Map clustering time to do: 4.22s
###Markdown
Mask the result - This is the general code that should go into the evaluation section; it repeats for each algorithm
###Code
# create mask with same dimensions as image
mask = np.zeros_like(img_clustered)
# copy your image_mask to all dimensions (i.e. colors) of your image
for i in range(3):
mask[:,:,i] = htruth_mask.copy()
# apply the mask to your image
masked_image = clustered*mask
plt.imshow(masked_image)
plt.show()
martinIndex(hgtruth_mask, masked_image)*1000
###Output
_____no_output_____
###Markdown
FCM
###Code
import skfuzzy
def rgb2gray(rgb):
return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])
#Turn into grayscale
img_gray = rgb2gray(reduced_data_3D)
# Reshape data
img_flat = img_gray.reshape((1, -1))
n_clusters = 9
t0 = time.time()
# Run algorithm
fzz = skfuzzy.cluster.cmeans(img_flat, c = n_clusters, m = 2, error=0.005, maxiter=1000)
t1 = time.time()
#Find clustering from fuzzy segmentation
img_clustered = np.argmax(fzz[1], axis=0).astype(float)
img_clustered.shape = img_gray.shape
plt.imshow(img_clustered)
title = 'Fuzzy C-Means clustering time to do: %.2fs' % (t1 - t0)
print(title)
plt.show()
img_clustered.shape
# Unlike the previous methods, the commented-out section below does not work because the matrix is
# 2D, e.g. (640, 300), and not 3D, e.g. (640, 300, 3)
# # create mask with same dimensions as image
# mask = np.zeros_like(img_clustered)
# copy your image_mask to all dimensions (i.e. colors) of your image
# for i in range(3):
# mask[:,:,i] = htruth_mask.copy()
# So I applied the mask directly and it works, but the result is still not 3D
# apply the mask to your image
masked_image = img_clustered*htruth_mask
plt.imshow(masked_image)
plt.show()
masked_image
img_clustered
# This does not work; see the issue described above
martinIndex(hgtruth_mask, masked_image)*1000
# masked_image.shape
# np.savetxt("foo.csv", masked_image, delimiter=",")
###Output
_____no_output_____
###Markdown
Spectral
###Code
import time
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
from sklearn.feature_extraction import image
from sklearn.cluster import spectral_clustering
def rgb2gray(rgb):
return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])
# img_r = rescale(reduced_data_3D,0.1,mode='reflect')
#Turn into grayscale
img_gray = rgb2gray(reduced_data_3D)
graph = image.img_to_graph(img_gray)#, mask=mask)
beta = 5
eps = 1e-6
graph.data = np.exp(-beta * graph.data / graph.data.std()) + eps
n_clusters = 9
t0 = time.time()
img_clustered = spectral_clustering(graph, n_clusters=n_clusters, assign_labels = 'discretize')
t1 = time.time()
img_clustered = img_clustered.reshape(img_gray.shape)
title = 'Spectral clustering time to do: %.2fs' % (t1 - t0)
print(title)
plt.imshow(img_clustered)
plt.show()
img_clustered
# mask = rescale(htruth_mask,0.1,mode='reflect')
# apply the mask to your image
masked_image = img_clustered*htruth_mask
plt.imshow(masked_image)
plt.show()
martinIndex(hgtruth_mask, masked_image)*1000
hgtruth_mask
plt.imshow(hgtruth_mask)
plt.show()
###Output
_____no_output_____
###Markdown
GMM
###Code
from sklearn import mixture
n_clusters = 9
gmm = mixture.GaussianMixture(n_components=n_clusters, covariance_type='full')
t0 = time.time()
# Run the algorithm
img_gmm = gmm.fit(img_data)
img_clustered = gmm.means_[gmm.predict(img_data)].astype(float)  # map each pixel to its cluster mean (analogous to cluster_centers_ for KMeans)
t1 = time.time()
# Reshape the data
img_clustered.shape = reduced_data_3D.shape
# Plot the data
plt.imshow(img_clustered)
title = 'Gaussian Mixture Model clustering time to do: %.2fs' % (t1 - t0)
print(title)
plt.show()
# create mask with same dimensions as image
mask = np.zeros_like(img_clustered)
# copy your image_mask to all dimensions (i.e. colors) of your image
for i in range(3):
mask[:,:,i] = htruth_mask.copy()
# apply the mask to your image
masked_image = img_clustered*mask
plt.imshow(masked_image)
plt.show()
martinIndex(hgtruth_mask, masked_image)*1000
###Output
_____no_output_____
|
docs/notebook-examples/using-vast-mocs-example.ipynb
|
###Markdown
Using MOC files of the VAST Pilot SurveyThis notebook gives an example of how to use vast-tools in a notebook environment to interact with the MOC, or STMOC, files of the VAST Pilot survey.MOC stands for `Multi-Order Coverage map` and is a standard in astronomy for describing survey coverage; see this link for a tutorial on HiPS and MOCs:* http://cds.unistra.fr/adass2018/I also recommend looking at the mocpy docs, as all interaction with the MOCs here is performed using mocpy:* https://cds-astro.github.io/mocpy/Below are the imports required for this example. The main import required from vast-tools is the VASTMOCS class. Astropy objects are also imported, as well as `matplotlib.pyplot`. Also imported is `World2ScreenMPL` from mocpy which allows us to create `wcs` objects on which we can plot MOC coverages (see https://cds-astro.github.io/mocpy/examples/examples.html).
###Code
from vasttools.moc import VASTMOCS
from astropy.time import Time
from mocpy import World2ScreenMPL
import matplotlib.pyplot as plt
from astropy import units as u
from astropy.coordinates import Angle, SkyCoord
###Output
_____no_output_____
###Markdown
Included in VASTMOCS are:* A MOC file for each pilot survey tile (sometimes called 'field').* A MOC file for each pilot survey defined field (a collection of tiles).* A MOC file for each pilot survey epoch.* An STMOC for the current VAST Pilot Survey (this is a MOC file with observation time information attached).The first task is to initialise a VASTMOCS object:
###Code
vast_mocs = VASTMOCS()
###Output
_____no_output_____
###Markdown
Tile MOCSLet's start by loading and plotting a MOC of a tile, which is loaded as below. We will load both the `TILES` and `COMBINED` version and plot both to see the difference between the two.
###Code
field_moc = vast_mocs.load_pilot_tile_moc('VAST_0012-06A', itype='tiles')
field_moc_comb = vast_mocs.load_pilot_tile_moc('VAST_0012-06A', itype='combined')
###Output
_____no_output_____
###Markdown
As stated in the mocpy documentation, in order to plot a MOC file we need to use `World2ScreenMPL`. This is done below. For convenience we also load a dataframe containing tile/field centres which is available in vast-tools.
###Code
from vasttools.survey import load_field_centres
FIELD_CENTRES = load_field_centres()
# set the field name as the index for convenience
FIELD_CENTRES = FIELD_CENTRES.set_index('field')
fig = plt.figure(figsize=(10,10))
with World2ScreenMPL(
fig,
fov=15 * u.deg,
center=SkyCoord(
FIELD_CENTRES.loc['VAST_0012-06A']['centre-ra'],
FIELD_CENTRES.loc['VAST_0012-06A']['centre-dec'], unit='deg', frame='icrs'
),
coordsys="icrs",
rotation=Angle(0, u.degree),
) as wcs:
ax = fig.add_subplot(111, projection=wcs)
ax.set_title("VAST Pilot Field 0012-06A Area")
ax.grid(color="black", linestyle="dotted")
field_moc_comb.fill(ax=ax, wcs=wcs, alpha=0.7, fill=True, linewidth=0, color="C1", label='COMBINED')
field_moc_comb.border(ax=ax, wcs=wcs, alpha=0.5, color="black")
field_moc.fill(ax=ax, wcs=wcs, alpha=0.9, fill=True, linewidth=0, color="#00bb00", label='TILE')
field_moc.border(ax=ax, wcs=wcs, alpha=0.5, color="black")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
You can also combine MOC files to create a new MOC, below I load the field to the North and add it to the already loaded field above.
###Code
# Load moc to the north
north_field_moc = vast_mocs.load_pilot_tile_moc('VAST_0012+00A', itype='tiles')
# Add the mocs together
sum_moc = field_moc.union(north_field_moc)
fig = plt.figure(figsize=(10,10))
with World2ScreenMPL(
fig,
fov=20. * u.deg,
center=SkyCoord(
FIELD_CENTRES.loc['VAST_0012-06A']['centre-ra'],
FIELD_CENTRES.loc['VAST_0012-06A']['centre-dec'] + 3, unit='deg', frame='icrs'
),
coordsys="icrs",
rotation=Angle(0, u.degree),
) as wcs:
ax = fig.add_subplot(111, projection=wcs)
ax.set_title("Combined VAST Pilot Field 0012-06A and 0012+00A Area")
ax.grid(color="black", linestyle="dotted")
sum_moc.fill(ax=ax, wcs=wcs, alpha=0.9, fill=True, linewidth=0, color="#00bb00")
sum_moc.border(ax=ax, wcs=wcs, alpha=0.5, color="black")
plt.show()
###Output
_____no_output_____
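###Markdown
mocpy also supports other set operations. As a small sketch (assuming the `intersection` method and the `sky_fraction` property are available in your installed mocpy version), we can estimate how much of the sky the two neighbouring tiles share and how much the combined MOC covers.
###Code
# Overlap between the two neighbouring tiles
overlap_moc = field_moc.intersection(north_field_moc)
# sky_fraction is the fraction of the full sky covered by a MOC
print('Overlap sky fraction: {:.6f}'.format(overlap_moc.sky_fraction))
print('Combined sky fraction: {:.6f}'.format(sum_moc.sky_fraction))
###Output
_____no_output_____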
###Markdown
VAST Pilot Field MOCsYou can also load MOCs for each of the defined VAST Pilot fields, as listed on this wiki page: https://github.com/askap-vast/vast-project/wiki/Pilot-Survey-Planning.
###Code
# First we load the ellipical frame from astropy to make the sky plot look nicer
from astropy.visualization.wcsaxes.frame import EllipticalFrame
vast_pilot_field_1_moc = vast_mocs.load_pilot_field_moc('1')
fig = plt.figure(figsize=(12, 6))
with World2ScreenMPL(
fig,
fov=320 * u.deg,
center=SkyCoord(0, 0., unit='deg', frame='icrs'),
coordsys="icrs",
rotation=Angle(0, u.degree),
) as wcs:
ax = fig.add_subplot(111, projection=wcs, frame_class=EllipticalFrame)
ax.set_title("VAST Pilot Survey Field 1")
ax.grid(color="black", linestyle="dotted")
vast_pilot_field_1_moc.fill(ax=ax, wcs=wcs, alpha=0.9, fill=True, linewidth=0, color="#00bb00")
vast_pilot_field_1_moc.border(ax=ax, wcs=wcs, alpha=0.5, color="black")
plt.show()
###Output
_____no_output_____
###Markdown
We can add another field to the MOC.
###Code
vast_pilot_field_4_moc = vast_mocs.load_pilot_field_moc('4')
fig = plt.figure(figsize=(12, 6))
with World2ScreenMPL(
fig,
fov=320 * u.deg,
center=SkyCoord(0, 0., unit='deg', frame='icrs'),
coordsys="icrs",
rotation=Angle(0, u.degree),
) as wcs:
ax = fig.add_subplot(111, projection=wcs, frame_class=EllipticalFrame)
ax.set_title("VAST Pilot Survey Fields 1 and 4")
ax.grid(color="black", linestyle="dotted")
vast_pilot_field_1_moc.fill(ax=ax, wcs=wcs, alpha=0.9, fill=True, linewidth=0, color="#00bb00")
vast_pilot_field_1_moc.border(ax=ax, wcs=wcs, alpha=0.5, color="black")
vast_pilot_field_4_moc.fill(ax=ax, wcs=wcs, alpha=0.9, fill=True, linewidth=0, color="C1")
vast_pilot_field_4_moc.border(ax=ax, wcs=wcs, alpha=0.5, color="black")
plt.show()
###Output
_____no_output_____
###Markdown
Epoch MOCsEpoch MOCs are loaded and manipulated in the same way.
###Code
# Load the epoch one MOC
epoch1_moc = vast_mocs.load_pilot_epoch_moc('1')
fig = plt.figure(figsize=(12, 6))
with World2ScreenMPL(
fig,
fov=320 * u.deg,
center=SkyCoord(0, 0., unit='deg', frame='icrs'),
coordsys="icrs",
rotation=Angle(0, u.degree),
) as wcs:
ax = fig.add_subplot(111, projection=wcs, frame_class=EllipticalFrame)
ax.set_title("VAST Pilot Survey Epoch 1")
ax.grid(color="black", linestyle="dotted")
epoch1_moc.fill(ax=ax, wcs=wcs, alpha=0.9, fill=True, linewidth=0, color="#00bb00")
epoch1_moc.border(ax=ax, wcs=wcs, alpha=0.5, color="black")
plt.show()
###Output
_____no_output_____
###Markdown
VAST Pilot STMOCThe STMOC (a Space-Time MOC) file is a MOC file containing the current entire VAST Pilot survey with time information attached. You can use it to match survey areas that are time sensitive. In the example below we query 4 time ranges and plot the coverage of the Pilot survey within these time ranges, example taken from the mocpy example notebook (https://github.com/cds-astro/mocpy/blob/master/notebooks/Space%20%26%20Time%20coverages.ipynb).
###Code
# load the stmoc
vast_stmoc = vast_mocs.load_pilot_stmoc()
def add_to_plot(fig, id, wcs, title, moc):
ax = fig.add_subplot(id, projection=wcs, frame_class=EllipticalFrame)
ax.grid(color="black", linestyle="dotted")
ax.set_title(title)
ax.set_xlabel('lon')
ax.set_ylabel('lat')
moc.fill(ax=ax, wcs=wcs, alpha=0.9, fill=True, linewidth=0, color="#00bb00")
moc.border(ax=ax, wcs=wcs, linewidth=1, color="black")
fig = plt.figure(figsize=(18, 9))
plt.subplots_adjust(hspace=0.3)
time_ranges = Time([
[["2019-10-29", "2019-10-30"]],
[["2019-12-19", "2019-12-20"]],
[["2020-01-16", "2020-01-18"]],
[["2020-02-01", "2020-08-01"]]
], format='iso', scale='tdb', out_subfmt="date")
with World2ScreenMPL(fig,
fov=320 * u.deg,
center=SkyCoord(0, 0, unit='deg', frame='icrs'),
coordsys="icrs",
rotation=Angle(0, u.degree),
projection="AIT") as wcs:
for i in range(0, 4):
moc_vast = vast_stmoc.query_by_time(time_ranges[i])
title = "VAST Pilot observations between \n{0} and {1}".format(time_ranges[i][0, 0].iso, time_ranges[i][0, 1].iso)
id_subplot = int("22" + str(i+1))
add_to_plot(fig, id_subplot, wcs, title, moc_vast)
plt.show()
###Output
/Users/adam/GitHub/vast-tools/vasttools/moc.py:66: ResourceWarning: unclosed file <_io.FileIO name='/Users/adam/GitHub/vast-tools/vasttools/./data/mocs/VAST_PILOT.stmoc.fits' mode='rb' closefd=True>
stmoc = STMOC.from_fits(stmoc_path)
|
coupled_process_elements/model_basicCh_steady_solution.ipynb
|
###Markdown
 terrainbento model BasicCh steady-state solution This model shows example usage of the BasicCh model from the TerrainBento package.Instead of the linear flux law for hillslope erosion and transport, BasicCh uses a nonlinear hillslope sediment flux law:$\frac{\partial \eta}{\partial t} = - KQ^{1/2}S - \nabla q_h$ $q_h = -DS \left[ 1 + \left( \frac{S}{S_c} \right)^2 + \left( \frac{S}{S_c} \right)^4 + ... \left( \frac{S}{S_c} \right)^{2(N-1)} \right]$where $Q$ is the local stream discharge, $S$ is the local slope, $K$ is the erodibility by water, $D$ is the regolith transport efficiency, and $S_c$ is the critical slope. $q_h$ represents the hillslope sediment flux per unit width. $N$ is the number of terms in the Taylor Series expansion. Refer to [Barnhart et al. (2019)](https://www.geosci-model-dev-discuss.net/gmd-2018-204/) for further explanation. For detailed information about creating a BasicCh model, see [the detailed documentation](https://terrainbento.readthedocs.io/en/latest/source/terrainbento.derived_models.model_basicCh.html).This notebook (a) shows the initialization and running of this model, (b) saves a NetCDF file of the topography, which we will use to make an oblique Paraview image of the landscape, and (c) creates a slope-area plot at steady state.
###Code
# import required modules
import os
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams["font.size"] = 20
matplotlib.rcParams["pdf.fonttype"] = 42
%matplotlib inline
from landlab import imshow_grid
from landlab.io.netcdf import write_netcdf
from terrainbento import BasicCh
np.random.seed(42)
# create the parameter dictionary needed to instantiate the model
params = {
# create the Clock.
"clock": {"start": 0,
"step": 10,
"stop": 1e7},
# Create the Grid.
"grid": {"grid": {"RasterModelGrid":[(100, 160), {"xy_spacing": 10}]},
"fields": {"at_node": {"topographic__elevation":{"random":[{"where":"CORE_NODE"}]}}}},
# Set up Boundary Handlers
"boundary_handlers":{"NotCoreNodeBaselevelHandler": {"modify_core_nodes": True,
"lowering_rate": -0.001}},
# Parameters that control output.
"output_interval": 1e4,
"save_first_timestep": True,
"output_prefix": "output_netcdfs/basicCh.",
"fields":["topographic__elevation"],
# Parameters that control process and rates.
"water_erodibility" : 0.001,
"m_sp" : 0.5,
"n_sp" : 1.0,
"regolith_transport_parameter" : 0.2,
"critical_slope" : 0.07, # unitless
}
# we can use an output writer to run until the model reaches steady state.
class run_to_steady(object):
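    # Output writer used to stop the model early: once the largest elevation change
    # at the core nodes over one output interval falls below `tolerance`, the clock's
    # stop time is set to the current model time; otherwise the stop time is extended
    # by another output interval.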
def __init__(self, model):
self.model = model
self.last_z = self.model.z.copy()
self.tolerance = 0.00001
def run_one_step(self):
if model.model_time > 0:
diff = (self.model.z[model.grid.core_nodes]
- self.last_z[model.grid.core_nodes])
if max(abs(diff)) <= self.tolerance:
self.model.clock.stop = model._model_time
print("Model reached steady state in " + str(model._model_time) + " time units\n")
else:
self.last_z = self.model.z.copy()
if model._model_time <= self.model.clock.stop - self.model.output_interval:
self.model.clock.stop += self.model.output_interval
# initialize the model using the Model.from_dict() constructor.
# We also pass the output writer here.
model = BasicCh.from_dict(params, output_writers={"class": [run_to_steady]})
# to run the model as specified, we execute the following line:
model.run()
# MAKE SLOPE-AREA PLOT
# plot nodes that are not on the boundary or adjacent to it
core_not_boundary = np.array(model.grid.node_has_boundary_neighbor(model.grid.core_nodes)) == False
plotting_nodes = model.grid.core_nodes[core_not_boundary]
# assign area_array and slope_array
area_array = model.grid.at_node["drainage_area"][plotting_nodes]
slope_array = model.grid.at_node["topographic__steepest_slope"][plotting_nodes]
# instantiate figure and plot
fig = plt.figure(figsize=(6, 3.75))
slope_area = plt.subplot()
# plot the data
slope_area.scatter(area_array, slope_array, marker="o", c="k",
label = "Model BasicCh")
# make axes log and set limits
slope_area.set_xscale("log")
slope_area.set_yscale("log")
slope_area.set_xlim(9*10**1, 3*10**5)
slope_area.set_ylim(1e-3, 1e-1)
# set x and y labels
slope_area.set_xlabel(r"Drainage area [m$^2$]")
slope_area.set_ylabel("Channel slope [-]")
slope_area.legend(scatterpoints=1,prop={"size":12})
slope_area.tick_params(axis="x", which="major", pad=7)
# save out an output figure
output_figure = os.path.join("output_figures/maintext_taylor_hillslopes_slope_area.pdf")
fig.savefig(output_figure, bbox_inches="tight", dpi=1000) # save figure
# Save stack of all netcdfs for Paraview to use.
model.save_to_xarray_dataset(filename="output_netcdfs/basicCh.nc",
time_unit="years",
reference_time="model start",
space_unit="meters")
# remove temporary netcdfs
model.remove_output_netcdfs()
# make a plot of the final steady state topography
imshow_grid(model.grid, "topographic__elevation")
###Output
_____no_output_____
|
Pynq-ZU/base/notebooks/rpi/SenseHat/01_character.ipynb
|
###Markdown
Sense HAT for PYNQ:Character displayThis notebook illustrates how to interact with the [Sense HAT](https://www.raspberrypi.org/products/sense-hat/) and display characters on the LED matrix of the Sense HAT.This example notebook includes the following steps.1. import Python libraries2. select the RPi switch and use the Microblaze library3. configure the I2C device4. convert characters5. wait for user input and display it on the Sense HAT LED matrix 1. Sense HAT IntroductionThe Sense HAT, which is a fundamental part of the [Astro Pi](https://astro-pi.org/) mission, allows your board to sense the world around it.It has an 8×8 RGB LED matrix, a five-button joystick and includes the following sensors:* Gyroscope* Accelerometer* Magnetometer* Temperature* Barometric pressure* Humidity 2. Prepare the overlayDownload the overlay first, then select the shared pin to be connected to the RPi header (by default, the pins will be connected to PMODA instead).
###Code
from pynq.overlays.base import BaseOverlay
from pynq.lib import MicroblazeLibrary
from PIL import Image, ImageDraw, ImageFont, ImageShow, ImageColor
import time
import numpy as np
base = BaseOverlay('base.bit')
lib = MicroblazeLibrary(base.RPI, ['i2c','gpio', 'xio_switch','circular_buffer'])
###Output
_____no_output_____
###Markdown
3. Configure the I2C device and GPIO deviceInitialize the I2C device and set the I2C pins of the RPi header. Since the PYNQ-ZU board does not have a pull-up on the Reset_N pin of the HAT (GPIO25), set that pin to 1.
###Code
device = lib.i2c_open_device(1)
lib.set_pin(2, lib.SDA1)
lib.set_pin(3, lib.SCL1)
gpio25=lib.gpio_open(25)
lib.gpio_set_direction(gpio25, 0);
gpio25.write(1)
LED_MATRIX_ADDRESS = 0x46
###Output
_____no_output_____
###Markdown
4. Convert charactersRender characters and organize images into dictionary
###Code
text = ' +-*/!"><0123456789.=)(ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz?,;:\''
img = Image.new('RGB', (len(text)*5, 8), color = (0,0,0))
d = ImageDraw.Draw(img)
d.text((0,-1), text, fill=(255,255,255),font=ImageFont.truetype("/usr/share/fonts/truetype/ttf-bitstream-vera/VeraMono.ttf",8))
text_pixels = list(map(list, img.rotate(-90, expand=True).getdata()))
text_dict = {}
for index, s in enumerate(text):
start = index * 40
end = start + 40
char = text_pixels[start:end]
text_dict[s] = char
###Output
_____no_output_____
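###Markdown
As a quick sanity check (a sketch; it assumes matplotlib is installed on the board), we can reshape one dictionary entry back into a 5x8 RGB glyph and display it. Depending on the rotation convention the glyph may appear mirrored, which is fine for this check.
###Code
import matplotlib.pyplot as plt
# Each entry holds 5 columns x 8 rows of RGB values, stored column by column
glyph = np.array(text_dict['A'], dtype=np.uint8).reshape(5, 8, 3)
# swap the axes so that rows run vertically when displayed
plt.imshow(np.transpose(glyph, (1, 0, 2)))
plt.show()
###Output
_____no_output_____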
###Markdown
5. DisplayWait for input and display on the LED matrix
###Code
buf = bytearray(8*8*3+1) # Display size 8x8 grid (3 color) plus end char
buf[0] = 0
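# Frame buffer layout (inferred from the indexing below): byte 0 is the register
# offset written to the LED matrix controller, then each of the 8 display rows
# takes 24 bytes - 8 red, 8 green and 8 blue values.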
done = False
while not done:
display = input("Please input a string(e.g.'HELLO,XILINX!') and press enter. Input 'C' to terminate.\n")
if(display == "c" or display == "C" ):
done = True
else:
display = display + " "
for value in display:
for i in range(0,len(buf)) :
buf[i] = 0;
for y in range(0,8) :
for x in range(2,7) :
buf[1+x+8*0+3*8*y] = int(text_dict[value][8*(6-x)+(y)][0]/20); #R
buf[1+x+8*1+3*8*y] = int(text_dict[value][8*(6-x)+(y)][1]/20); #G
buf[1+x+8*2+3*8*y] = int(text_dict[value][8*(6-x)+(y)][2]/20); #B
lib.i2c_write(device, LED_MATRIX_ADDRESS, buf, len(buf))
time.sleep(0.5)
device.close()
###Output
_____no_output_____
|
math/Matrix.ipynb
|
###Markdown
Matrix functionsA set of functions for matrices, written without numpy.
###Code
import math
def formatMatrix(M):
return '[' + '\n '.join([' '.join(['{:7.3f}'.format(c) for c in row]) for row in M]) + ' ]'
def zeros (rows, cols):
    return [[0]*cols for i in range(rows)]
def identity (rows):
M = zeros(rows, rows)
for r in range(rows):
M[r][r] = 1.0
return M
def copyMatrix (M):
return [[v for v in col] for col in M]
print ( formatMatrix( identity(3) ))
###Output
[ 1.000 0.000 0.000
0.000 1.000 0.000
0.000 0.000 1.000 ]
###Markdown
Matrix transpose
###Code
def transpose(A):
rowsA = len(A)
colsA = len(A[0])
return [[A[i][j] for i in range(rowsA)] for j in range(colsA)]
print (formatMatrix(transpose([[1,2,3],[4,5,6]])))
###Output
[ 1.000 4.000
2.000 5.000
3.000 6.000 ]
###Markdown
Matrix multiplication
###Code
def scale(A, scale):
return [[v*scale for v in col] for col in A]
def dot(A, B):
rowsA = len(A)
colsA = len(A[0])
rowsB = len(B)
colsB = len(B[0])
if colsA != rowsB:
raise Exception('Number of A columns must equal number of B rows.')
C = zeros(rowsA, colsB)
for i in range(rowsA):
for j in range(colsB):
C[i][j] = sum([A[i][k] * B[k][j] for k in range(colsA)])
return C
A = [[1,2,3],[4,5,6]]
B = [[1,0],[1,1],[0,1]]
C = dot(A,B)
print ( formatMatrix( C ))
print ( formatMatrix( scale(A, 2.0) ))
###Output
[ 3.000 5.000
9.000 11.000 ]
[ 2.000 4.000 6.000
8.000 10.000 12.000 ]
###Markdown
Invert a square matrix using Gauss-Jordan elimination.
###Code
def inverse(A):
rowsA = len(A)
colsA = len(A[0])
if rowsA != colsA:
raise Exception('Matrix must be square')
AM = copyMatrix(A)
IM = identity(rowsA)
for fd in range(rowsA):
fdScaler = 1.0 / AM[fd][fd]
for j in range(rowsA):
AM[fd][j] *= fdScaler
IM[fd][j] *= fdScaler
for i in list(range(rowsA))[0:fd] + list(range(rowsA))[fd+1:]:
crScaler = AM[i][fd]
for j in range(rowsA):
AM[i][j] = AM[i][j] - crScaler * AM[fd][j]
IM[i][j] = IM[i][j] - crScaler * IM[fd][j]
return IM
A = [[5,4,3,2,1],[4,3,2,1,5],[3,2,9,5,4],[2,1,5,4,3],[1,2,3,4,5]]
AI = inverse(A)
print ('A ::')
print ( formatMatrix( A ))
print ('Inverse ::')
print ( formatMatrix( AI ))
print ('A * inverse :: ')
print ( formatMatrix(dot( A, AI )))
###Output
A ::
[ 5.000 4.000 3.000 2.000 1.000
4.000 3.000 2.000 1.000 5.000
3.000 2.000 9.000 5.000 4.000
2.000 1.000 5.000 4.000 3.000
1.000 2.000 3.000 4.000 5.000 ]
Inverse ::
[ 0.022 0.200 -0.333 0.733 -0.378
0.244 -0.200 0.333 -0.933 0.444
-0.056 -0.000 0.333 -0.333 -0.056
0.122 -0.200 -0.333 0.533 0.122
-0.167 0.200 0.000 -0.000 0.033 ]
A * inverse ::
[ 1.000 0.000 -0.000 -0.000 -0.000
0.000 1.000 0.000 -0.000 0.000
0.000 0.000 1.000 -0.000 -0.000
0.000 0.000 0.000 1.000 -0.000
0.000 0.000 -0.000 -0.000 1.000 ]
###Markdown
Solve the Ax=b linear system by taking the inverse of A and multiplying it by b.
###Code
def solve(A, b):
Inv = inverse(A)
return dot(Inv, b)
# 4x + 3y = -13
# -10x -2y = 5
A = [[4, 3], [-10, -2]]
b = [[-13], [5]]
x = solve(A, b)
print ( formatMatrix(x) )
A = [[4, 3], [-10, -2]]
b = [[-2], [1]]
x = solve(A, b)
print ( formatMatrix(x) )
A = [[4, 3], [-10, -2]]
b = [[-13,-2], [5,1]]
x = solve(A, b)
print ( formatMatrix(x) )
###Output
[ 0.500
-5.000 ]
[ 0.045
-0.727 ]
[ 0.500 0.045
-5.000 -0.727 ]
###Markdown
Orthogonalization (Gram-Schmidt with normalization).
###Code
def orthogonize (A):
def _dot(a,b):
return sum([ia * ib for ia,ib in zip(a,b)])
def _proj(a,b):
coef = _dot(b,a) / _dot(a,a)
return [ia * coef for ia in a]
rowsA = len(A)
colsA = len(A[0])
B = []
for i in range(rowsA):
temp_vec = A[i]
for inB in B :
proj_vec = _proj(inB, A[i])
temp_vec = [x-y for x,y in zip(temp_vec, proj_vec)]
coef = 1.0/math.sqrt(_dot(temp_vec,temp_vec))
temp_vec = [it*coef for it in temp_vec]
B.append(temp_vec)
return B
#tests
A = [
[1,0,0],
[0,1,0],
[0,0,1]
]
print ( formatMatrix(orthogonize(A)))
A = [
[1.1,0,0],
[0,1.2,0],
[0,0,1.1]
]
print ( formatMatrix(orthogonize(A)))
A = [
[1,0.1,0],
[0,1,0],
[0,0,1]
]
print ( formatMatrix(orthogonize(A)))
###Output
[ 1.000 0.000 0.000
0.000 1.000 0.000
0.000 0.000 1.000 ]
[ 1.000 0.000 0.000
0.000 1.000 0.000
0.000 0.000 1.000 ]
[ 0.995 0.100 0.000
-0.100 0.995 0.000
0.000 0.000 1.000 ]
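###Markdown
A quick check: the rows returned by orthogonize should be orthonormal, so multiplying the result by its transpose should give (approximately) the identity matrix.
###Code
Q = orthogonize(A)
print ( formatMatrix( dot(Q, transpose(Q)) ))
###Output
_____no_output_____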
|
chapter6/chapter6_walkthrough.ipynb
|
###Markdown
Chapter 6: Inference for categorical data Guided Practice 6.2We get that $SE_{\hat{p}} = \sqrt{\frac{\hat{p} \times (1 - \hat{p})}{n}} = 0.0159$. Guided Practice 6.4A plausible pair of hypotheses would be $H_{0}:$ support and opposition are the same fraction ($= 0.50$) and $H_{A}:$ support and opposition are in different proportions ($\ne 0.50$). Guided Practice 6.5Let's check the success-failure condition with the null value: $np_{0} = 412.5 \approx 413 = n(1 - p_{0})$, so we can use the normal distribution. Guided Practice 6.8We basically want $1.645 \times \sqrt{\frac{p(1 - p)}{n}} \lt 0.01$; we can conveniently create a Python routine which, taking as input `p`, the confidence level and the margin of error, will output the minimum sample size `n` to achieve that. We run the function with the three proportions found previously and see what the respective `n` is.
###Code
from scipy import stats
def get_sample_size(p, confidence_level, error_margin):
z = round(stats.norm.interval(confidence_level)[-1], 2)
return round((z ** 2) * ((p * (1 - p)) / (error_margin ** 2)))
for p in [0.017, 0.062, 0.013]:
print(f"With p = {p}, we need a sample size of n = {get_sample_size(p, 0.90, 0.01)}.")
###Output
With p = 0.017, we need a sample size of n = 449.
With p = 0.062, we need a sample size of n = 1564.
With p = 0.013, we need a sample size of n = 345.
###Markdown
Guided Practice 6.10We can again leverage the previously created function to answer this question.
###Code
get_sample_size(0.70, 0.95, 0.05)
###Output
_____no_output_____
###Markdown
Exercise 6.1 - Vegetarian college students.* (a) Since $np = 60 \times 0.08 = 4.8$ and $n(1 - p) = 60 \times 0.92 = 55.2$, the normal distribution won't work well.* (b) Yes, since the sample proportion is closer to 0 than 1 and the sample size is not very big for such a proportion.* (c) A sample size of $n = 125$ would give us $SE_{\hat{p}} = 0.024$, therefore we can compute $Z = 1.67$, which means that value is quite unusual.* (d) With a sample size of $n = 250$ we get $SE_{\hat{p}} = 0.017$ and therefore $Z = 2.33$, so the proportion becomes more unusual.* (e) We reduced the standard error by about 29%: doubling the sample size shrinks the standard error by a factor of $\frac{1}{\sqrt{2}} \approx 0.71$. Exercise 6.2 - Young Americans, Part I.* (a) Maybe only slightly left skewed due to $p$ being closer to 1 and the sample size being 20.* (b) Since $np = 30.8$ and $n(1 - p) = 9.2$, we fail the success-failure condition so the normal approximation won't work well.* (c) Since $SE_{\hat{p}} = 0.054$ we have that $Z = 1.47$ so the observation would be considered unusual.* (d) In this case, $Z = 2.08$, so the observation is more unusual. Exercise 6.3 - Orange tabbies.* (a) It is left skewed since the sample size is small and the proportion is close to 1.* (b) True, since that would halve the _standard error_ (remember that $n$ is under the square root too).* (c) True, since we have a big sample size which makes it easier to approximate it to a normal distribution. Also, $np = 140 * 0.90 = 126$ and $n(1 - p) = 14$.* (d) True, a bigger sample size would increase the _success-failure condition_ values. Exercise 6.4 - Young Americans, Part II.* (a) True, since we have a very small sample size and a proportion closer to 0. * (b) True, since we need to meet the following:$$\begin{cases}np \ge 10 \\ n(1 - p) \ge 10 \end{cases} = \begin{cases}n \ge 13.33 \\ n \ge 40 \end{cases} => n \ge 40$$* (c) We have $SE_{\hat{p}} = 0.057$ so $Z = -0.88$, therefore it is not considered unusual.* (d) We now have $Z = -1.53$ so the sample proportion is unusual.* (e) False, a sample size $3n$ will decrease the _standard error_ by a factor $\frac{1}{\sqrt{3}}$. Exercise 6.5 - Gender equality.* (a) False, confidence is about the population not the sample. * (b) True, since a confidence interval built on some data gives us the degree of confidence about a population.* (c) True, since this is another definition of confidence interval.* (d) True, since quadrupling the sample size will halve the _standard error_.* (e) True, since we still get a low margin of error. Exercise 6.6 - Elderly drivers.* (a) We have $SE_{\hat{p}} = 0.015$ and so our 95% confidence interval should be $[63.08\%,\ 68.91\%]$, so the values are correct.* (b) No, since the interval is lower than 70%. Exercise 6.7 - Fireworks on July 4th.For a 95% confidence interval we have a margin of error of $1.96 * SE_{\hat{p}} = 0.04 = 4\%$. Exercise 6.8 - Life rating in Greece.* (a) The parameter of interest is the proportion of Greek people living in a condition poor enough to be considered "suffering". The value of this proportion is $\hat{p} = 0.25$.* (b) We may want to check the _success-failure condition_, having $np = 0.25 * 1000 = 250$ and $n(1 - p) = 0.75 * 1000 = 750$.* (c) The 95% confidence interval is $[22.32\%,\ 27.68\%]$.* (d) A higher confidence level would give us a wider confidence interval.* (e) A larger sample would shrink the standard error, thus shrinking the confidence interval itself.
Exercise 6.9 - Study abroad.* (a) An optional web survey may be completed by those who choose to do so, therefore this is not a representative sample of the population of interest.* (b) The 90% confidence interval is $[52.89\%,\ 57.11\%]$. We are confident that the proportion of students sure to spend a study period abroad is between 52.89% and 57.11%.* (c) It means that by taking multiple samples and calculating the sample parameter of interest, and then building a 90% confidence interval, 90% of these intervals will capture the true population parameter.* (d) If this sample were representative, then yes since the interval is above 50%. Exercise 6.10 - Legalization of marijuana, Part I.* (a) It is a sample statistic since it refers to a sample.* (b) We are 95% confident that the true population parameter (the fraction of US residents wanting to legalize Cannabis) lies within the interval $[58.59\%,\ 63.41\%]$.* (c) Let's check the _success-failure_ condition: $np = 1578 * 0.61 \approx 963$ and $n(1 - p) = 1578 * 0.39 \approx 615$, therefore the normal model is reasonable.* (d) I am 95% confident that this statement is true. Exercise 6.11 - National Health Plan, Part I.* (a) The null hypothesis is that half of the Independents support a National Health Plan, while the alternative hypothesis tells us that the proportion is bigger (a one-tailed test is necessary here). We have that $SE_{\hat{p}} = 0.02$. The found _p-value_ is $p = 0.006 \lt 0.05$. Therefore we can conclude there's enough evidence for us to reject the null hypothesis.* (b) As found before, a 95% confidence interval does not include 0.5 (50%) but a 99% confidence interval might. Exercise 6.12 - Is college worth it? Part I.* (a) The null hypothesis is that half of the participants said they did not go to college because they could not afford it, while the alternative hypothesis is that the proportion of those who did not go to college because they could not afford it is less than 0.5, i.e. is in "minority". We found a p-value of 0.25 (one sided), therefore we can't reject the null hypothesis with this data, so we cannot confirm the statement.* (b) Sure, since the p-value is greater than 0.05. Exercise 6.13 - Taste test.* (a) Given H0: p = 0.5 and HA: p $\ne$ 0.5 we find a p-value of 0.0036 (two tailed test), therefore we can reject the null hypothesis. People are better at detecting the difference than random guessing would be.* (b) In this context the p-value is the probability of seeing a 53% success rate if we used random guessing, and this probability is 0.0036. Exercise 6.14 - Is college worth it? Part II.* (a) The 90% confidence interval is $[43.48\%,\ 52.52\%]$.* (b) Using our previously defined script we get $n = 2984$. Exercise 6.15 - National Health Plan, Part II.An appropriate sample size would be $n = 6657$. Exercise 6.16 - Legalize Marijuana, Part II.We need to use a sample size of $n = 2285$. Guided Practice 6.13We define $p_{fo} = \frac{145}{12933} = 0.011$ as the fraction of heart attacks in the group receiving fish oil (treatment group) and $p_{pl} = \frac{200}{12938} = 0.016$ as the fraction of heart attacks in the control group. We have $p_{fo} - p_{pl} = 0.011 - 0.016 = -0.005$ with $SE = 0.0014$ (using the formula implemented in the function below). So our confidence interval is $[-0.0077,\ -0.0023]$, which means that we're 95% confident that the difference between the treatment and control heart attack frequency is between -0.77% and -0.23%, a small but statistically significant reduction since the interval does not contain 0.
Guided Practice 6.14It is an experimental study since we have both a treatment and a control group. Guided Practice 6.15H0 : pm - pnm = 0.03 and HA : pm - pnm $\ne$ 0.03, with pm being the proportion receiving mammogram and pnm the proportion in the control group. Guided Practice 6.19H0 : pnew - pold = 0 and HA : pnew - pold $\ge$ 0.03. Guided Practice 6.21We remember that $Var[\alpha A + \beta B] = \alpha^{2} Var[{A}] + \beta^{2} Var[{B}]$ and, since $Var[\alpha X] = \alpha^{2}Var[X]$ so that $Var[-X] = (-1)^2Var[X] = Var[X]$, we get that $SE_{\hat{p}_1 - \hat{p}_2}^{2} = SE_{\hat{p}_{1}}^{2} + SE_{\hat{p}_{2}}^{2}$. Exercise 6.17 - Social experiment, Part I.Let's calculate the pooled proportion as $\hat{p}_{yes} = \frac{20}{45} = 0.44$, and now let's check the success-failure conditions as$$n_{pro} * \hat{p}_{yes} = 20 * 0.44 = 8.8 \\$$$$n_{pro} * (1 - \hat{p}_{yes}) = 20 * 0.56 = 11.2 \\$$$$n_{con} * \hat{p}_{yes} = 25 * 0.44 = 11 \\$$$$n_{con} * (1 - \hat{p}_{yes}) = 25 * 0.56 = 14 \\$$The first quantity is only 8.8, therefore the success-failure condition is not met. Exercise 6.18 - Heart transplant success.We have that $\hat{p}_{survived} = \frac{28}{103} = 0.2718$ and we fail to have $n_{survived} \times \hat{p}_{survived} \ge 10$. The confidence interval won't be accurate as it won't be symmetric around the estimate. Exercise 6.19 - Gender and color preference.* (a) False, since that is an interval.* (b) True, since we are 95% confident that the difference lies in that range.* (c) True, since this is another definition of a confidence interval.* (d) True, since our confidence interval is above 0.* (e) False, as it is simply the negation. Exercise 6.20 - Government shutdown.* (a) With significance level of 5%, we need a p-value less than 0.05 to reject the null hypothesis. However, from the given confidence interval we can't deduce that there are any significant differences, so this is false.* (b) False, it's the opposite: we are 95% confident that 16% to -2% of those who make less than 40,000$ are personally affected.* (c) False, a 90% confidence interval would be narrower.* (d) True. We get that $SE = 0.05$ so roughly yes, we get that interval. Exercise 6.21 - National Health Plan, Part III.* (a) First of all we have $p_{D} - p_{I} = 0.79 - 0.55 = 0.24$ and $SE_{p_{D} - p_{I}} = 0.03$ (calculated using the function defined below). Now, the 95% confidence interval is $[0.181,\ 0.299]$.* (b) True, since the 95% confidence interval is definitely greater than 0.
###Code
def standard_error(p, n):
return ((p * (1 - p)) / n) ** 0.5
def standard_error_difference(p1, p2, n1, n2):
return (standard_error(p1, n1) ** 2 + standard_error(p2, n2) ** 2) ** 0.5
###Output
_____no_output_____
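###Markdown
As a quick sanity check of the helper above (a sketch, reusing the numbers from Exercise 6.8 (c): $\hat{p} = 0.25$, $n = 1000$), we can reproduce the interval computed by hand.
###Code
p_hat, n = 0.25, 1000
se = standard_error(p_hat, n)
(p_hat - 1.96 * se, p_hat + 1.96 * se)
###Output
_____no_output_____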
###Markdown
Exercise 6.22 - Sleep deprivation, CA vs. OR, Part I.Since $p_{OR} - p_{CA} = 0.088 - 0.080 = 0.008$ and $SE_{p_{OR} - p_{CA}} = 0.005$ we have that the 95% confidence interval is $[-0.002,\ 0.018]$. The observed difference might be due to chance since the confidence interval shows that the difference in proportion can go below 0. Exercise 6.23 - Offshore drilling, Part I.* (a) Respectively, $p_{grad} = 0.2374 = 23.74\%$ and $p_{no\ grad} = 0.3368 = 33.68\%$.* (b) $H_{0} : p_{no\ grad} = p_{grad}$ and $H_{A} : p_{no\ grad} \ne p_{grad}$. We obtain $SE_{p_{no\ grad} - p_{grad}} = 0.0314$. We get a p-value of $pval = 0.0015$ (two-tailed test), thus there's convincing evidence that the proportion of non-college graduates who do not have an opinion is greater than the proportion of college graduates who do not have an opinion. Exercise 6.24 - Sleep deprivation, CA vs. OR, Part II.* (a) The success-failure condition is met and so we can conduct a hypothesis test with $H_{0} : p_{Ca} = p_{Or}$ and $H_{A} : p_{Ca} \ne p_{Or}$. We get $pval = 0.099$, which tells us there's no strong evidence supporting the alternative hypothesis.* (b) A Type II error would have been made. Exercise 6.25 - Offshore drilling, Part II.* (a) Respectively $35.16\%$ and $33.93\%$.* (b) We get a p-value of 0.71, showing that there is no strong evidence that the difference is not due to chance. Exercise 6.26 - Full body scan, Part I.* (a) $H_{0} : p_{r} = p_{d}$ and $H_{A} : p_{r} \ne p_{d}$. We find a p-value of $0.498$, therefore we can't reject the null hypothesis.* (b) We would make a Type II error, i.e. failing to reject a null hypothesis that should in fact be rejected. Exercise 6.27 - Sleep deprived transportation workers.We check if we meet the success-failure condition using the script below.
###Code
def success_failure_two(p1, p2, n1, n2):
p_p = (p1 * n1 + p2 * n2) / (n1 + n2)
return all(
x >= 10 for x in [p_p * n1, p_p * n2, (1 - p_p) * n1, (1 - p_p) * n2]
)
success_failure_two(35 / 203, 35 / 292, 203, 292)
###Output
_____no_output_____
###Markdown
Let's now create a routine which outputs the p-value for us (it will be rudimentary at first).
###Code
def p_val_calc_diff(p1, p2, n1, n2):
if not success_failure_two(p1, p2, n1, n2):
return -1
else:
se = standard_error_difference(p1, p2, n1, n2)
z = (p1 - p2) / se
print(z)
if z < 0:
return stats.norm.cdf(z) * 2
else:
return (1 - stats.norm.cdf(z)) * 2
p_val_calc_diff(35 / 292, 35 / 203, 292, 203)
###Output
-1.6109110850818888
###Markdown
We get a p-value greater than 0.05, so we fail to reject the null hypothesis. Exercise 6.28 - Prenatal vitamins and Autism.* (a) $H_{0} : p_{v} = p_{nv}$ and $H_{A} : p_{v} \ne p_{nv}$.* (b) We can check using the routine defined above, and since we get a p-value less than 0.05, we can reject the null hypothesis in favour of the alternative.* (c) Since we have not proven causation, we might use a softer term. Exercise 6.29 - HIV in sub-Saharan Africa.* (a) Let's use pandas for this (see below).* (b) $H_{0} : p_{n} = p_{l}$ and $H_{A} : p_{n} \ne p_{l}$.* (c) We find a p-value of 0.003, therefore we reject the null hypothesis.
###Code
import pandas as pd
df = pd.DataFrame({
"outcome": ["virologic failure"] * 26 + ["success"] * 94 + ["virologic failure"] * 10 + ["success"] * 110,
"medicine": ["nevaprine"] * 120 + ["lopinavir"] * 120
})
pd.crosstab(df.medicine, df.outcome, margins=True)
###Output
_____no_output_____
###Markdown
Exercise 6.30 - An apple a day keeps the doctor away.It depends on the number of students and the purpose of this analysis. Guided Practice 6.23We would expect 33 of the jurors to be Hispanic and 24.75 from other races. Guided Practice 6.24 * (a) The center shifts to the right (the mean grows with the degrees of freedom).* (b) The variability increases.* (c) The shape becomes more normal-like. Guided Practice 6.28
###Code
1 - stats.chi2.cdf(11.7, df=7)
###Output
_____no_output_____
###Markdown
Guided Practice 6.29
###Code
1 - stats.chi2.cdf(10, df=4)
###Output
_____no_output_____
###Markdown
Guided Practice 6.30
###Code
1 - stats.chi2.cdf(9.21, df=3)
###Output
_____no_output_____
###Markdown
Guided Practice 6.34Let's use, as above, Python.
###Code
import numpy as np
obs = np.array([717, 369, 155, 69, 28, 14, 10])
exp = np.array([743, 338, 153, 70, 32, 14, 12])
x2 = stats.chisquare(obs, exp, ddof=6)
x2.statistic
###Output
_____no_output_____
###Markdown
Guided Practice 6.35We need to use $n - 1 = 7 - 1 = 6$ degrees of freedom. Exercise 6.31 - True or false, Part I.* (a) False. The chi-square has only one parameter, i.e. the degrees of freedom.* (b) True. The more the degrees of freedom the more it is normal, though.* (c) True.* (d) False. It becomes more normal. Exercise 6.32 - True or false, Part II.* (a) True. It becomes more normal shaped.* (b) True. We would get a p-value of 0.8197.* (c) False. Only the right tail.* (d) True, as it becomes more normal (and the tails get larger). Exercise 6.33 - Open source textbook.* (a) $H_{0}$: the distribution of students follows the professor's prediction. $H_{A}$: the distribution of students does not follow the professor's prediction.* (b) $76$, $31$ and $19$.* (c) Observations are independent and each bin has at least five cases. They are both met.* (d) $X^{2} = 1.825$ with $pval = 0.40$.* (e) The distribution of students roughly follows the professor's prediction, so we can't reject the null hypothesis. Exercise 6.34 - Barking deer.* (a) $H_{0}$: the distribution of the sites follows the percentages as shown in the study. $H_{A}$: the distribution of the sites does not follow the percentages.* (b) The chi-square test can be used to assess if the observed distribution matches the expected one.* (c) The conditions are met.* (d) The resulting p-value is very small (around 1.813e-61), therefore we can reject the null hypothesis, i.e. that the sites follow that distribution. Guided Practice 6.39We would expect $73 * (1 - 0.2785) = 52.6 \approx 53$ to hide the freezing problem from that group. Guided Practice 6.42We can use numpy and pandas to achieve this.
###Code
import pandas as pd
df = pd.DataFrame({
"lifestyle": np.array([319, 380]) * 234 / 699,
"met": np.array([319, 380]) * 232 / 699,
"rosi": np.array([319, 380]) * 233 / 699
}, index=["Failure", "Success"]).T
df
###Output
_____no_output_____
###Markdown
Guided Practice 6.43Again, we can use Python for it.
###Code
emp_df = pd.DataFrame({
"lifestyle": np.array([109, 125]),
"met": np.array([120, 112]),
"rosi": np.array([90, 143])
}, index=["Failure", "Success"]).T
chi2, p, dof, exp = stats.chi2_contingency(emp_df)
chi2
###Output
_____no_output_____
###Markdown
Guided Practice 6.44We already calculated `p` above. And we can reject the null hypothesis.
###Code
p
###Output
_____no_output_____
###Markdown
Exercise 6.35 - Quitters.* (a) We can create a two-way table with pandas.
###Code
two_way_635 = pd.DataFrame({
"Patch with Support": [40, 110],
"Patch without Support": [30, 120],
}, index=["Quit", "No Quit"]).T
two_way_635
###Output
_____no_output_____
###Markdown
* (b) * (i) If being part of a support group does not affect the outcome of the experiment, we would observe a pooled probability of quitting of $p = 70/300 \approx 0.23$. Therefore the probability of not quitting is $0.77$. So to answer the first question, we would expect $0.23 \times 150 = 35$ people to quit in the "patch + support" group. * (ii) Similarly, I would expect $0.77 \times 150 \approx 115$ people to not quit. Exercise 6.36 - Full body scan, Part II.* (a) I would expect $\approx 48$ Republicans to not support the use of full-body scans.* (b) Around $297$ Democrats would be expected to support the use of full-body scans.* (c) I would expect $21$ Independents to not answer. Exercise 6.37 - Offshore drilling, Part III.Let's build the two-way table and use Python to perform the necessary calculations.
###Code
college_grad_oil_drill = pd.DataFrame({
"Support": [154, 132],
"Oppose": [180, 126],
"Do not know": [104, 131]
}, index=["Yes", "No"]).T
significance_level = 0.05
chi2, p, dof, exp = stats.chi2_contingency(college_grad_oil_drill)
print(f"We got a chi2 statistics of {chi2} with a p-value of {p} (degrees of freedom {dof} and expected distribution {exp}).")
if p < significance_level:
print("We can reject the null hypothesis, therefore there is a significant difference.")
else:
print("No significant difference.")
###Output
We got a chi2 statistics of 11.460816627638106 with a p-value of 0.003245751677744377 (degrees of freedom 2 and expected distribution [[151.47279323 134.52720677]
[162.06529625 143.93470375]
[124.46191052 110.53808948]]).
We can reject the null hypothesis, therefore there is a significant difference.
###Markdown
Exercise 6.38 - Parasitic worm.* (a) $H_{0}$: differences in results are due to chance. $H_{A}$: there's a significant difference among the treatments.* (b) Given the small p-value, we can reject the null hypothesis, and therefore the results we got are not due to chance. Exercise 6.39 - Active learning.No, the surveys are not independent as they are conducted on the same students. Exercise 6.40 - Website experiment.* (a) Let's make a dataframe to answer such question.
###Code
n_partecipants = 701
visitor_df = pd.DataFrame({
"Position 1": [0.138, 0.183],
"Position 2": [0.146, 0.185],
"Position 3": [0.121, 0.227]
}, index=["Download", "No Download"]).T
visitor_df * n_partecipants
###Output
_____no_output_____
###Markdown
* (b) The probability of being in a group, given the equal chance of belonging to any of the three, is roughly 0.333. We then expect roughly 233 participants in each group. Let's test the goodness of fit using the chi-square distribution. We obtain a p-value of 0.675, which means that the fluctuation from the expected values is likely due to randomness, so we cannot reject the null hypothesis that the groups are balanced.
###Code
c = stats.chisquare(visitor_df.sum(axis=1) * 701, [701 * 1 / 3] * 3)
c
###Output
_____no_output_____
###Markdown
Exercise 6.41 - Shipping holiday gifts.* (a) We would expect each shipping method to be uniformly distributed among the age segments. * (b) We can't use the chi-square test since the expected distribution for the "Not Sure" shipping method does not have at least 5 in all bins. Exercise 6.42 - The Civil War.* (a) $H_{0}: \hat{p} = 0.5$ and $H_{A}: \hat{p} > 0.5$. We get $Z = 4.69$, which translates to a p-value of $pval = 1.37 \times 10^{-6}$.* (b) The p-value means that the chance of getting such a result by random chance alone is very small, therefore we can reject the null hypothesis.* (c) $[0.5390,\ 0.5810]$. Exercise 6.43 - College smokers.* (a) Since $np = 40$ and $n(1 - p) = 160$, we can use the normal approximation. The 95% confidence interval is $[0.1446,\ 0.2554]$.* (b) We would need a sample size of $1537$. Exercise 6.44 - Acetaminophen and liver damage.* (a) Let's use $\hat{p} = 0.50$, so we get $n = 3393$ and this means we need to have 60,860 dollars set aside.* (b) With fewer participants we get a greater standard error and thus a wider interval. Exercise 6.45 - Life after college.* (a) We really want to find the proportion of all the 4500 students in the class under consideration. The point estimate is $\hat{p} = 0.87$.* (b) We can easily see that the conditions are met.* (c) The 95% confidence interval is $[0.84,\ 0.90]$.* (d) It means that out of 100 samples on which we build a 95% confidence interval, approximately 95 of them will contain the true population parameter.* (e) Of course a 99% confidence interval is wider, since in order to have greater confidence, we need a wider interval. Exercise 6.46 - Diabetes and unemployment.* (a) Let's leverage Python to create a two-way table.
###Code
diab_unemp_df = pd.DataFrame({
"Employed": [47774 * 0.0015, 47774 * (1 - 0.0015)],
"Unemployed": [5885 * 0.0025, 5885 * (1 - 0.0025)]
}, index=["Diabetes", "No Diabetes"]).T
diab_unemp_df
###Output
_____no_output_____
###Markdown
* (b) We want to test whether there is a difference in the proportions of unemployed and employed people suffering from diabetes. Our H0 is that there is no difference, while our HA is that there is a significant difference.* (c) Using a large sample size, we get a very low standard error and thus a very low p-value, which does not necessarily translate into a practically significant result, but rather an overestimated one. Exercise 6.47 - Rock-paper-scissors.Again, let's make easy work of this by using Python.
###Code
stats.chisquare([43, 21, 35], [33, 33, 33])
###Output
_____no_output_____
###Markdown
If the outcomes have equal chances, we would expect each to have probability 1/3, and thus out of 99 trials we would expect 33 of each option. With this, we got a p-value of 0.02, suggesting that there is actually a bias in choosing the option. Exercise 6.48 - 2010 Healthcare Law.* (a) False. The confidence interval is built on the sample, but it serves to estimate a population proportion.* (b) True. This is the definition of the 95% confidence interval.* (c) True.* (d) True. Exercise 6.49 - Browsing on the mobile device.* (a) We have $H_{0}: p_{US} - p_{CN} = 0$ and $H_{A}: p_{US} - p_{CN} \ne 0$. We get a $Z=-16$ and a very small p-value, indicating that there's a statistical difference.* (b) The proportion of Chinese who surf the web only on their phone is statistically greater than the American proportion.* (c) $[0.1545,\ 0.1855]$.
###Code
0.17 - 1.96 * standard_error(0.17, 2254), 0.17 + 1.96 * standard_error(0.17, 2254)
###Output
_____no_output_____
|
behavioral_analysis/matjags-dn/dn_bm_04_parameter_estimation.ipynb
|
###Markdown
Parameter estimation
###Code
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import scipy.io
import os
import sys
from sklearn.neighbors import KernelDensity
path_root = os.environ.get('DECIDENET_PATH')
path_code = os.path.join(path_root, 'code')
if path_code not in sys.path:
    sys.path.append(path_code)
# plt.style.use('fast')
plt.rcParams.update({
'font.size': 14,
})
path_out = os.path.join(path_root, 'data/main_fmri_study/derivatives/jags')
path_vba = os.path.join(path_out, 'vba')
path_jags_output = os.path.join(path_out, 'jags_output')
# Load posterior model probabilities
path_pmp = os.path.join(path_vba, 'pmp_HLM_sequential_split.mat')
mat = scipy.io.loadmat(path_pmp, squeeze_me=True)
pmp = mat['pmp']
modelnames = ['PICI', 'PICD', 'PDCI', 'PDCD']
# Load posterior samples for hierarchical model with PDCI submodel
path_h_pdci = os.path.join(path_jags_output, 'H_pdci.mat')
mat = scipy.io.loadmat(path_h_pdci, variable_names=['samples', 'nSamples', 'nChains'], squeeze_me=True)
samples = mat['samples']
n_samples, n_chains = mat['nSamples'], mat['nChains']
n_subjects, n_conditions = 32, 2
n_prederrsign = 2
sublabels = [f'm{sub:02}' for sub in range(2, n_subjects+2)]
# Load samples for relevant behavioral parameters
alpha_pdci = samples['alpha_pdci'].item()
beta_pdci = samples['beta_pdci'].item()
alpha_pdci = np.reshape(alpha_pdci, (n_chains * n_samples, n_subjects, n_prederrsign))
beta_pdci = np.reshape(beta_pdci, (n_chains * n_samples, n_subjects))
###Output
_____no_output_____
###Markdown
Simple posterior examination
###Code
# Describe marginalized distribution of behavioral parameters
print(f'ME alpha+ = {np.mean(alpha_pdci[:, :, 0])}')
print(f'SD alpha+ = {np.std(alpha_pdci[:, :, 0])}')
print(f'ME alpha- = {np.mean(alpha_pdci[:, :, 1])}')
print(f'SD alpha- = {np.std(alpha_pdci[:, :, 1])}')
print(f'ME beta = {np.mean(beta_pdci)}')
print(f'SD beta = {np.std(beta_pdci)}\n')
# Calculate Bayes Factor alpha+ > alpha-
pdci_idx = (np.argmax(pmp, axis=0) == modelnames.index('PDCI'))
bf_plus_minus = (np.sum(alpha_pdci[:, :, 0] > alpha_pdci[:, :, 1])
/ np.sum(alpha_pdci[:, :, 1] > alpha_pdci[:, :, 0]))
bf_plus_minus_only_pdci = (np.sum(alpha_pdci[:, pdci_idx, 0] > alpha_pdci[:, pdci_idx, 1])
/ np.sum(alpha_pdci[:, pdci_idx, 1] > alpha_pdci[:, pdci_idx, 0]))
print(f'BF(alpha+ > alpha-) = {bf_plus_minus}')
print(f'BF(alpha+ > alpha-; PDCI subjects) = {bf_plus_minus_only_pdci}')
###Output
ME alpha+ = 0.7366027877084681
SD alpha+ = 0.2749347221528298
ME alpha- = 0.41500499592687495
SD alpha- = 0.14884168292299235
ME beta = 0.1254196657501562
SD beta = 0.043113740888508764
BF(alpha+ > alpha-) = 4.782279121454966
BF(alpha+ > alpha-; PDCI subjects) = 37.89855665881871
###Markdown
Point parameter estimationEstimate learning rate values as point estimates of the posterior probability distribution. First, the MCMC samples are converted into a continuous function using kernel density estimation (KDE) with a Gaussian kernel. Then, the argument maximizing the estimated distribution, $\hat{\alpha} = \arg\max_{g} \hat{f}_{\mathrm{KDE}}(g)$, is taken as the point estimate of $\alpha$. The parameter `n_gridpoints` controls the resolution of the grid over the parameter space (i.e. the gridpoints at which the KDE distribution is evaluated). Smoothing of the KDE distribution is adjusted by the `kernel_width` parameter.
###Code
n_gridpoints = 1001
kernel_width = 0.01
gridpoints = np.linspace(0, 1, n_gridpoints)
alpha_pdci_mle = np.zeros((n_subjects, 2))
beta_pdci_mle = np.zeros((n_subjects, 1))
for sub in range(n_subjects):
# Calculate parameters as maximum likelihood
kde = KernelDensity(bandwidth=kernel_width, kernel='gaussian')
# Learning rates
for j in range(n_prederrsign):
kde.fit(alpha_pdci[:, sub, j].flatten()[:, np.newaxis])
alpha_pdci_mle[sub, j] = gridpoints[np.argmax(
kde.score_samples(gridpoints[:, np.newaxis]))]
# Precision
kde.fit(beta_pdci[:, sub].reshape((n_samples * n_chains, 1)))
beta_pdci_mle[sub] = gridpoints[np.argmax(
kde.score_samples(gridpoints[:, np.newaxis]))]
# Save to file
np.save(os.path.join(path_out, 'parameter_estimates/alpha_pdci_mle_3digits'),
alpha_pdci_mle)
np.save(os.path.join(path_out, 'parameter_estimates/beta_pdci_mle_3digits'),
beta_pdci_mle)
fig, ax = plt.subplots(nrows=1, ncols=1, facecolor='w', figsize=(7, 7))
ax.plot([0, 1], [0, 1], 'k--', alpha=.5)
sns.kdeplot(
alpha_pdci_mle[:, 0],
alpha_pdci_mle[:, 1],
cmap="bone_r",
shade=True,
shade_lowest=False,
ax=ax,
alpha=.75,
bw=.1
)
ax.scatter(
alpha_pdci_mle[:, 0],
alpha_pdci_mle[:, 1],
color='black',
marker='x'
)
ax.set_xlim([0, 1])
ax.set_ylim([0, 1])
ax.grid(alpha=.4)
ax.set_title('Learning rates for PDCI model')
ax.set_xlabel(r'$\alpha_{+}$')
ax.set_ylabel(r'$\alpha_{-}$')
plt.tight_layout()
###Output
_____no_output_____
|
Coursera/The Arduino Platform and C Programming/Week-2/Quiz/Module-2-Quiz.ipynb
|
###Markdown
1. What is the name of the library which contains the printf() function? Ans: stdio.h 2. What does the '\n' character mean? Ans: newline 3. What type of data is surrounded by double quotes in a program? Ans: a string 4. What C type is one byte long? Ans: char 5. Does the following statement evaluate to True or False?
###Code
(10 || (5-2)) && ((6 / 2) - (1 + 2))
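// Step-by-step evaluation (added for clarity):
//   (10 || (5 - 2))      -> (10 || 3) -> 1 (true)
//   ((6 / 2) - (1 + 2))  -> (3 - 3)   -> 0 (false)
//   1 && 0               -> 0, i.e. False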
###Output
_____no_output_____
###Markdown
Ans: False 6. What does the following program print to the screen?
###Code
int main (){
int x = 0, y = 1;
if (x || !y)
printf("1");
else if (y && x)
printf("2");
else
printf("3");
}
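// Trace (added for clarity): with x = 0, y = 1
//   (x || !y) -> (0 || 0) -> false, so the first branch is skipped
//   (y && x)  -> (1 && 0) -> false, so the second branch is skipped
//   the final else runs, printing "3"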
###Output
_____no_output_____
###Markdown
Ans: 3 7. What does the following program print to the screen?
###Code
int main (){
int x = 0, z = 2;
while (x < 3) {
printf ("%i ", x);
x = x + z;
}
}
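// Trace (added for clarity): x starts at 0 and grows by z = 2 each pass
//   x = 0 -> prints "0 ", x becomes 2
//   x = 2 -> prints "2 ", x becomes 4
//   x = 4 -> loop ends, so the program prints "0 2"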
###Output
_____no_output_____
###Markdown
Ans: 0 2 8. What does the following program print to the screen?
###Code
int foo (int q) {
int x = 1;
return (q + x);
}
int main (){
int x = 0;
while (x < 3) {
printf ("%i ", x);
x = x + foo(x);
}
}
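// Trace (added for clarity): foo(q) returns q + 1
//   x = 0 -> prints "0 ", x becomes 0 + foo(0) = 1
//   x = 1 -> prints "1 ", x becomes 1 + foo(1) = 3
//   x = 3 -> loop ends, so the program prints "0 1"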
###Output
_____no_output_____
|
Big-Data-Clusters/CU8/Public/content/monitor-k8s/tsg066-get-kubernetes-events.ipynb
|
###Markdown
TSG066 - Get BDC event (Kubernetes)===================================Description-----------View the Kubernetes events for the big data clusterSteps----- Common functionsDefine helper functions used in this notebook.
###Code
# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows
import sys
import os
import re
import json
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {} # Output in stderr known to be transient, therefore automatically retry
error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help
install_hint = {} # The SOP to help install the executable if it cannot be found
first_run = True
rules = None
debug_logging = False
def run(cmd, return_output=False, no_output=False, retry_count=0, base64_decode=False, return_as_json=False):
"""Run shell command, stream stdout, print stderr and optionally return output
NOTES:
1. Commands that need this kind of ' quoting on Windows e.g.:
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}
Need to actually pass in as '"':
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name}
The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:
`iter(p.stdout.readline, b'')`
The shlex.split call does the right thing for each platform, just use the '"' pattern for a '
"""
MAX_RETRIES = 5
output = ""
retry = False
global first_run
global rules
if first_run:
first_run = False
rules = load_rules()
# When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see:
#
# ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')
#
if platform.system() == "Windows" and cmd.startswith("azdata sql query"):
cmd = cmd.replace("\n", " ")
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
#
user_provided_exe_name = cmd_actual[0].lower()
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
# When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`
#
if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ:
cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc")
# To aid supportability, determine which binary file will actually be executed on the machine
#
which_binary = None
# Special case for CURL on Windows. The version of CURL in Windows System32 does not work to
# get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance
# of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost
# always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
# look for the 2nd installation of CURL in the path)
if platform.system() == "Windows" and cmd.startswith("curl "):
path = os.getenv('PATH')
for p in path.split(os.path.pathsep):
p = os.path.join(p, "curl.exe")
if os.path.exists(p) and os.access(p, os.X_OK):
if p.lower().find("system32") == -1:
cmd_actual[0] = p
which_binary = p
break
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
if which_binary == None:
which_binary = shutil.which(cmd_actual[0])
# Display an install HINT, so the user can click on a SOP to install the missing binary
#
if which_binary == None:
print(f"The path used to search for '{cmd_actual[0]}' was:")
print(sys.path)
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which causes Jupyter to hang forever, to
# workaround this, use no_output=True
#
# Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait
#
wait = True
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)"\: "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
wait = False
break # otherwise infinite hang, have not worked out why yet.
else:
print(line, end='')
if rules is not None:
apply_expert_rules(line)
if wait:
p.wait()
except FileNotFoundError as e:
if install_hint is not None:
display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()
if not no_output:
for line in iter(p.stderr.readline, b''):
try:
line_decoded = line.decode()
except UnicodeDecodeError:
# NOTE: Sometimes we get characters back that cannot be decoded(), e.g.
#
# \xa0
#
# For example see this in the response from `az group create`:
#
# ERROR: Get Token request returned http error: 400 and server
# response: {"error":"invalid_grant",# "error_description":"AADSTS700082:
# The refresh token has expired due to inactivity.\xa0The token was
# issued on 2018-10-25T23:35:11.9832872Z
#
# which generates the exception:
#
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte
#
print("WARNING: Unable to decode stderr line, printing raw bytes:")
print(line)
line_decoded = ""
pass
else:
# azdata emits a single empty line to stderr when doing an hdfs cp, don't
# print this empty "ERR:" as it confuses.
#
if line_decoded == "":
continue
print(f"STDERR: {line_decoded}", end='')
if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
exit_code_workaround = 1
# inject HINTs to next TSG/SOP based on output in stderr
#
if user_provided_exe_name in error_hints:
for error_hint in error_hints[user_provided_exe_name]:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))
# apply expert rules (to run follow-on notebooks), based on output
#
if rules is not None:
apply_expert_rules(line_decoded)
# Verify if a transient error, if so automatically retry (recursive)
#
if user_provided_exe_name in retry_hints:
for retry_hint in retry_hints[user_provided_exe_name]:
if line_decoded.find(retry_hint) != -1:
if retry_count < MAX_RETRIES:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, retry_count=retry_count)
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
# WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so
# don't wait here, if success known above
#
if wait:
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
else:
if exit_code_workaround !=0 :
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed.\n')
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
def load_json(filename):
"""Load a json file from disk and return the contents"""
with open(filename, encoding="utf8") as json_file:
return json.load(json_file)
def load_rules():
"""Load any 'expert rules' from the metadata of this notebook (.ipynb) that should be applied to the stderr of the running executable"""
# Load this notebook as json to get access to the expert rules in the notebook metadata.
#
try:
j = load_json("tsg066-get-kubernetes-events.ipynb")
except:
pass # If the user has renamed the book, we can't load ourself. NOTE: Is there a way in Jupyter, to know your own filename?
else:
if "metadata" in j and \
"azdata" in j["metadata"] and \
"expert" in j["metadata"]["azdata"] and \
"expanded_rules" in j["metadata"]["azdata"]["expert"]:
rules = j["metadata"]["azdata"]["expert"]["expanded_rules"]
rules.sort() # Sort rules, so they run in priority order (the [0] element). Lowest value first.
# print (f"EXPERT: There are {len(rules)} rules to evaluate.")
return rules
def apply_expert_rules(line):
"""Determine if the stderr line passed in, matches the regular expressions for any of the 'expert rules', if so
inject a 'HINT' to the follow-on SOP/TSG to run"""
global rules
for rule in rules:
notebook = rule[1]
cell_type = rule[2]
output_type = rule[3] # i.e. stream or error
output_type_name = rule[4] # i.e. ename or name
output_type_value = rule[5] # i.e. SystemExit or stdout
details_name = rule[6] # i.e. evalue or text
expression = rule[7].replace("\\*", "*") # Something escaped *, and put a \ in front of it!
if debug_logging:
print(f"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.")
if re.match(expression, line, re.DOTALL):
if debug_logging:
print("EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{4}'".format(output_type_name, output_type_value, expression, notebook))
match_found = True
display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.'))
print('Common functions defined successfully.')
# Hints for binary (transient fault) retry, (known) error and install guide
#
retry_hints = {'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond']}
error_hints = {'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb']]}
install_hint = {'kubectl': ['SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb']}
###Output
_____no_output_____
###Markdown
Get the Kubernetes namespace for the big data clusterGet the namespace of the Big Data Cluster using the kubectl command line interface.**NOTE:**If there is more than one Big Data Cluster in the target Kubernetes cluster, then either:- set \[0\] to the correct value for the big data cluster.- set the environment variable AZDATA\_NAMESPACE before starting Azure Data Studio.
###Code
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = run(f'kubectl get namespace --selector=MSSQL_CLUSTER -o jsonpath={{.items[0].metadata.name}}', return_output=True)
except:
from IPython.display import Markdown
print(f"ERROR: Unable to find a Kubernetes namespace with label 'MSSQL_CLUSTER'. SQL Server Big Data Cluster Kubernetes namespaces contain the label 'MSSQL_CLUSTER'.")
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print(f'The SQL Server Big Data Cluster Kubernetes namespace is: {namespace}')
###Output
_____no_output_____
###Markdown
Show the Kubernetes events for the Big Data Cluster namespace
###Code
run(f'kubectl get events -n {namespace}')
###Output
_____no_output_____
###Markdown
Show the Kubernetes events for the system namespace
###Code
run(f'kubectl get events -n kube-system')
###Output
_____no_output_____
###Markdown
Show the Kubernetes events in the default namespace
###Code
run(f'kubectl get events')
print('Notebook execution complete.')
###Output
_____no_output_____
|
Stackoverflow_simple_data_retriever.ipynb
|
###Markdown
Importing packages
###Code
from models import MulticlassLogReg
import tensorflow_federated
from utils_federated.datasets import stackoverflow_tag_prediction
import nest_asyncio
import numpy as np
from numpy.random import default_rng
from sklearn.datasets import dump_svmlight_file, load_svmlight_file
from prep_data import DATASET_PATH
import pickle
###Output
_____no_output_____
###Markdown
Downloading data & helper functions
###Code
nest_asyncio.apply()
train_fl, test_fl = stackoverflow_tag_prediction.get_federated_datasets(train_client_batch_size=500)
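# The helper below gathers the single 500-example batch of a given client into dense arrays:
# X holds the 10000-dimensional feature vectors, y the 500-dimensional tag label vectors.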
def retrieve_train_data_for_client(client_id, n=500, d_x = 10000, d_y =500):
one_client_data = train_fl.create_tf_dataset_for_client(client_id)
X = np.empty(shape = (n, d_x))
y = np.empty(shape = (n, d_y))
for element in one_client_data:
X = element[0].numpy()
y = element[1].numpy()
return X, y
###Output
_____no_output_____
###Markdown
Clustered data One cluster
###Code
client_num = 50
exp_keyword = 'SO_LR_one_cluster_{}'.format(client_num)
n_train_samples = 90
n_test_samples = 300
n_val_samples = 110
assert n_train_samples + n_test_samples + n_val_samples == 500
X = np.empty(shape = (n_train_samples * client_num, 10000))
y = np.empty(shape = (n_train_samples * client_num, 500))
participating_clients = []
rng = default_rng()
cluster = 3
counter = 0
max_labels_used = 0
with open('datasets/clients_cluster_{}.data'.format(cluster), 'rb') as file:
selected_clients = pickle.load(file)
X_ = None
y_ = None
train_indices = None
val_indices = None
test_indices = None
for client_id in selected_clients:
if counter == client_num:
break
X_, y_ = retrieve_train_data_for_client(client_id)
if X_.shape[0] != 500:
continue
max_labels_used = max(max_labels_used, np.array([1 for x in y_.sum(axis=0) if x > 0]).sum())
participating_clients.append(client_id)
y_sum = y_.sum(axis=0)
assert y_sum.shape[0] == 500
all_indices = set(list(range(500)))
participating_labels = [i for i in range(500) if y_sum[i] > 0]
unique_datapts = set()
for label in participating_labels:
for ind in range(500):
if y_[ind, label] == 1.:
unique_datapts.add(ind)
break
rest_inds = list(all_indices - unique_datapts)
rng.shuffle(rest_inds)
all_indices = list(unique_datapts) + rest_inds
train_indices = all_indices[:n_train_samples]
val_indices = all_indices[n_train_samples : n_train_samples + n_val_samples]
test_indices = all_indices[n_train_samples + n_val_samples:]
X[counter * n_train_samples : (counter + 1) * n_train_samples, :] = X_[train_indices, :]
y[counter * n_train_samples : (counter + 1) * n_train_samples, :] = y_[train_indices, :]
with open('datasets/{}_val{}_X.npy'.format(exp_keyword, client_id), 'wb') as file_1:
np.save(file_1, X_[val_indices, :])
with open('datasets/{}_val{}_y.npy'.format(exp_keyword, client_id), 'wb') as file_2:
np.save(file_2, y_[val_indices, :])
with open('datasets/{}_test{}_X.npy'.format(exp_keyword, client_id), 'wb') as file_3:
np.save(file_3, X_[test_indices, :])
with open('datasets/{}_test{}_y.npy'.format(exp_keyword, client_id), 'wb') as file_4:
np.save(file_4, y_[test_indices, :])
counter += 1
print('# clients added {}, # clients required {}'.format(counter, client_num))
with open('datasets/{}.npz'.format(exp_keyword), 'wb') as file:
np.savez(file, X=X, y=y)
with open(DATASET_PATH + 'list_clients_{}.data'.format(exp_keyword), 'wb') as file:
# store the data as binary data stream
pickle.dump(participating_clients, file)
###Output
_____no_output_____
###Markdown
Two client datasets
###Code
common_clients = sorted(list(set(train_fl.client_ids).intersection(set(test_fl.client_ids))))
participating_clients = []
for i in range(1000):
client_id = common_clients[i]
train_d = train_fl.create_tf_dataset_for_client(client_id)
test_d = test_fl.create_tf_dataset_for_client(client_id)
tr_flag = False
te_flag = False
for d in train_d:
if d[0].shape[0] == 500:
tr_flag = True
break
for d in test_d:
if d[0].shape[0] > 400:
te_flag = True
break
if tr_flag and te_flag:
participating_clients.append(client_id)
if len(participating_clients) > 10:
break
participating_clients
###Output
_____no_output_____
###Markdown
First client
###Code
X_, y_ = retrieve_train_data_for_client(participating_clients[0])
label_dist = y_.sum(axis=0)
sorted(range(len(label_dist)), key = lambda k: label_dist[k], reverse=True)[:10]
###Output
_____no_output_____
###Markdown
Second client
###Code
X_, y_ = retrieve_train_data_for_client(participating_clients[1])
label_dist = y_.sum(axis=0)
sorted(range(len(label_dist)), key = lambda k: label_dist[k], reverse=True)[:10]
###Output
_____no_output_____
###Markdown
Merging datasets
###Code
X = np.empty(shape=(1000, 10000))
y = np.empty(shape=(1000, 500))
for i in range(2):
curr_client_id = participating_clients[i]
X_, y_ = retrieve_train_data_for_client(curr_client_id)
X[i * 500 : (i + 1) * 500, :] = X_
y[i * 500 : (i + 1) * 500, :] = y_
with open('datasets/SO_LR_two_workers_100.npz', 'wb') as file:
np.savez(file, X=X, y=y)
with open(DATASET_PATH + 'list_clients_SO_LR_two_workers_100.data', 'wb') as file:
# store the data as binary data stream
pickle.dump(participating_clients[:2], file)
TEST_DATA_SIZE = 300
VAL_DATA_SIZE = 100
for client_id in participating_clients:
test_data = test_fl.create_tf_dataset_for_client(client_id)
X_test = np.empty(shape = (TEST_DATA_SIZE, 10000))
y_test = np.empty(shape = (TEST_DATA_SIZE, 500))
X_val = np.empty(shape = (VAL_DATA_SIZE, 10000))
y_val = np.empty(shape = (VAL_DATA_SIZE, 500))
write_flag = False
for element in test_data:
if element[0].shape[0] < TEST_DATA_SIZE + VAL_DATA_SIZE:
continue
X_test = element[0].numpy()[:TEST_DATA_SIZE, :]
y_test = element[1].numpy()[:TEST_DATA_SIZE, :]
X_val = element[0].numpy()[TEST_DATA_SIZE : TEST_DATA_SIZE + VAL_DATA_SIZE, :]
y_val = element[1].numpy()[TEST_DATA_SIZE : TEST_DATA_SIZE + VAL_DATA_SIZE, :]
write_flag = True
if not write_flag:
raise RuntimeError
with open('datasets/SO_LR_two_workers_100_test{}_X.npy'.format(client_id), 'wb') as file_1:
np.save(file_1, X_test)
with open('datasets/SO_LR_two_workers_100_test{}_y.npy'.format(client_id), 'wb') as file_2:
np.save(file_2, y_test)
with open('datasets/SO_LR_two_workers_100_val{}_X.npy'.format(client_id), 'wb') as file_3:
np.save(file_3, X_val)
with open('datasets/SO_LR_two_workers_100_val{}_y.npy'.format(client_id), 'wb') as file_4:
np.save(file_4, y_val)
###Output
_____no_output_____
|
13_mosaic_sparsity_regularizer/old codes and plots/older/interpretable codes loss entropy/Synthetic_elliptical_blobs_interpretable_200_100_k01.ipynb
|
###Markdown
**Focus Net**
###Code
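# NOTE: the imports below do not appear in this notebook as saved; they are the dependencies
# that this cell and the following cells rely on. Data objects such as `train_loader` and
# `batch` are assumed to be defined elsewhere (their definition is not included here).
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from scipy.stats import entropy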
class Focus_deep(nn.Module):
'''
deep focus network averaged at zeroth layer
input : elemental data
'''
def __init__(self,inputs,output,K,d):
super(Focus_deep,self).__init__()
self.inputs = inputs
self.output = output
self.K = K
self.d = d
self.linear1 = nn.Linear(self.inputs,200) #,self.output)
self.linear2 = nn.Linear(200,self.output)
def forward(self,z):
batch = z.shape[0]
x = torch.zeros([batch,self.K],dtype=torch.float64)
y = torch.zeros([batch,self.d], dtype=torch.float64)
x,y = x.to("cuda"),y.to("cuda")
for i in range(self.K):
x[:,i] = self.helper(z[:,i] )[:,0] # self.d*i:self.d*i+self.d
x = F.softmax(x,dim=1) # alphas
x1 = x[:,0]
for i in range(self.K):
x1 = x[:,i]
y = y+torch.mul(x1[:,None],z[:,i]) # self.d*i:self.d*i+self.d
return y , x
def helper(self,x):
x = F.relu(self.linear1(x))
x = self.linear2(x)
return x
###Output
_____no_output_____
###Markdown
**Classification Net**
###Code
class Classification_deep(nn.Module):
'''
input : elemental data
deep classification module data averaged at zeroth layer
'''
def __init__(self,inputs,output):
super(Classification_deep,self).__init__()
self.inputs = inputs
self.output = output
self.linear1 = nn.Linear(self.inputs,100)
self.linear2 = nn.Linear(100,self.output)
def forward(self,x):
x = F.relu(self.linear1(x))
x = self.linear2(x)
return x
###Output
_____no_output_____
###Markdown
###Code
where = Focus_deep(5,1,9,5).double()
what = Classification_deep(5,3).double()
where = where.to("cuda")
what = what.to("cuda")
def calculate_attn_loss(dataloader,what,where,criter,k):
what.eval()
where.eval()
r_loss = 0
alphas = []
lbls = []
pred = []
fidices = []
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels, fidx = data
lbls.append(labels)
fidices.append(fidx)
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
avg,alpha = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
alphas.append(alpha.cpu().numpy())
ent = np.sum(entropy(alpha.cpu().detach().numpy(), base=2, axis=1))/batch
# mx,_ = torch.max(alpha,1)
# entropy = np.mean(-np.log2(mx.cpu().detach().numpy()))
# print("entropy of batch", entropy)
loss = criter(outputs, labels) + k*ent
r_loss += loss.item()
alphas = np.concatenate(alphas,axis=0)
pred = np.concatenate(pred,axis=0)
lbls = np.concatenate(lbls,axis=0)
fidices = np.concatenate(fidices,axis=0)
#print(alphas.shape,pred.shape,lbls.shape,fidices.shape)
analysis = analyse_data(alphas,lbls,pred,fidices)
return r_loss/i,analysis
def analyse_data(alphas,lbls,predicted,f_idx):
'''
analysis data is created here
'''
batch = len(predicted)
amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0
for j in range (batch):
focus = np.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
amth +=1
else:
alth +=1
if(focus == f_idx[j] and predicted[j] == lbls[j]):
ftpt += 1
elif(focus != f_idx[j] and predicted[j] == lbls[j]):
ffpt +=1
elif(focus == f_idx[j] and predicted[j] != lbls[j]):
ftpf +=1
elif(focus != f_idx[j] and predicted[j] != lbls[j]):
ffpf +=1
#print(sum(predicted==lbls),ftpt+ffpt)
return [ftpt,ffpt,ftpf,ffpf,amth,alth]
print("--"*40)
criterion = nn.CrossEntropyLoss()
optimizer_where = optim.Adam(where.parameters(),lr =0.001)
optimizer_what = optim.Adam(what.parameters(), lr=0.001)
acti = []
loss_curi = []
analysis_data = []
epochs = 1000
k=0.1
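# k weights the entropy regularizer on the attention weights (alphas):
# total loss = cross-entropy + k * (average entropy of alpha over the batch)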
running_loss,anlys_data = calculate_attn_loss(train_loader,what,where,criterion,k)
loss_curi.append(running_loss)
analysis_data.append(anlys_data)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
what.train()
where.train()
for i, data in enumerate(train_loader, 0):
# get the inputs
inputs, labels,_ = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_where.zero_grad()
optimizer_what.zero_grad()
# forward + backward + optimize
avg, alpha = where(inputs)
outputs = what(avg)
ent = np.sum(entropy(alpha.cpu().detach().numpy(), base=2, axis=1))/batch #entropy(alpha.cpu().numpy(), base=2, axis=1)
# mx,_ = torch.max(alpha,1)
# entropy = np.mean(-np.log2(mx.cpu().detach().numpy()))
# print("entropy of batch", entropy)
loss = criterion(outputs, labels) + k*ent
# loss = criterion(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_where.step()
optimizer_what.step()
running_loss,anls_data = calculate_attn_loss(train_loader,what,where,criterion,k)
analysis_data.append(anls_data)
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.05:
break
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in train_loader:
images, labels,_ = data
images = images.double()
images, labels = images.to("cuda"), labels.to("cuda")
avg, alpha = where(images)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 train images: %d %%' % ( 100 * correct / total))
analysis_data = np.array(analysis_data)
plt.figure(figsize=(6,6))
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0],label="ftpt")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1],label="ffpt")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2],label="ftpf")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3],label="ffpf")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.savefig("trends_synthetic_300_300.png",bbox_inches="tight")
plt.savefig("trends_synthetic_300_300.pdf",bbox_inches="tight")
analysis_data[-1,:2]/3000
running_loss,anls_data = calculate_attn_loss(train_loader,what,where,criterion,k)
print(running_loss, anls_data)
what.eval()
where.eval()
alphas = []
max_alpha =[]
alpha_ftpt=[]
alpha_ffpt=[]
alpha_ftpf=[]
alpha_ffpf=[]
argmax_more_than_half=0
argmax_less_than_half=0
cnt =0
with torch.no_grad():
for i, data in enumerate(train_loader, 0):
inputs, labels, fidx = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
avg, alphas = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
batch = len(predicted)
mx,_ = torch.max(alphas,1)
max_alpha.append(mx.cpu().detach().numpy())
for j in range (batch):
cnt+=1
focus = torch.argmax(alphas[j]).item()
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if (focus == fidx[j].item() and predicted[j].item() == labels[j].item()):
alpha_ftpt.append(alphas[j][focus].item())
# print(focus, fore_idx[j].item(), predicted[j].item() , labels[j].item() )
elif (focus != fidx[j].item() and predicted[j].item() == labels[j].item()):
alpha_ffpt.append(alphas[j][focus].item())
elif (focus == fidx[j].item() and predicted[j].item() != labels[j].item()):
alpha_ftpf.append(alphas[j][focus].item())
elif (focus != fidx[j].item() and predicted[j].item() != labels[j].item()):
alpha_ffpf.append(alphas[j][focus].item())
np.sum(entropy(alphas.cpu().numpy(), base=2, axis=1))/batch
np.mean(-np.log2(mx.cpu().detach().numpy()))
a = np.array([[0.1,0.9], [0.5, 0.5]])
-0.1*np.log2(0.1)-0.9*np.log2(0.9)
entropy([9/10, 1/10], base=2), entropy([0.5, 0.5], base=2), entropy(a, base=2, axis=1)
np.mean(-np.log2(a))
max_alpha = np.concatenate(max_alpha,axis=0)
print(max_alpha.shape, cnt)
np.array(alpha_ftpt).size, np.array(alpha_ffpt).size, np.array(alpha_ftpf).size, np.array(alpha_ffpf).size
plt.figure(figsize=(6,6))
_,bins,_ = plt.hist(max_alpha,bins=50,color ="c")
plt.title("alpha values histogram")
plt.savefig("attention_model_2_hist")
plt.figure(figsize=(6,6))
_,bins,_ = plt.hist(np.array(alpha_ftpt),bins=50,color ="c")
plt.title("alpha values in ftpt")
plt.savefig("attention_model_2_hist")
plt.figure(figsize=(6,6))
_,bins,_ = plt.hist(np.array(alpha_ffpt),bins=50,color ="c")
plt.title("alpha values in ffpt")
plt.savefig("attention_model_2_hist")
###Output
_____no_output_____
|
Project_Capstone.ipynb
|
###Markdown
IBM Data Science Certification - Project CapstoneThis notebook is developed as the Project Capstone of Luiz Claudio Boechat in order to complete the IBM Data Science Certification Track, held on the Coursera platform. Summary1. Data Acquisition2. Data Visualization3. Neighborhood Exploration4. Clustering Neighborhoods 1. Data AcquisitionThis is the first part of the Project. This assignment consists of acquiring the list of neighborhoods of Toronto and loading it into a Data Frame.As the first step, all necessary libraries and classes are imported into the current environment.
###Code
#importing libraries
from bs4 import BeautifulSoup
from IPython.core.display import HTML
import pandas as pd
import requests
print("Imported dependencies!")
###Output
Imported dependencies!
###Markdown
Considering the link to the data, the HTML content is requested and parsed using the `BeautifulSoup` class.
###Code
#HTTP request to the URL
data_url = 'https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
data_page = requests.get(data_url)
#HTML parsing with BeautifulSoup
data_soup = BeautifulSoup(data_page.text, 'html.parser')
#extracting html table with data
data_html_table = data_soup.find_all('table', class_='wikitable')
data_html_table_str = str(data_html_table[0])
HTML(data_html_table_str)
###Output
_____no_output_____
###Markdown
The html content is converted into a Data Frame.
###Code
#converting html table into data frame
df = pd.read_html(data_html_table_str)[0]
print('Read data frame with shape: ', df.shape)
print('Data Frame Head:')
df.head()
###Output
Read data frame with shape: (180, 3)
Data Frame Head:
###Markdown
Processing the Data Frame in order to remove unassigned Postal Codes (which is the same as removing empty Neighborhoods).
###Code
#processing data frame
#renaming column PostalCode
df.rename(columns={'Postal Code': 'PostalCode'}, inplace=True)
#dropping rows with empty neighborhood
df.dropna(subset=['Neighborhood'], inplace=True)
print('Data Frame Head:')
df.head()
print('Data Frame description:')
df.describe(include='all')
print('Is there any unassigned Neighborhood?', any(df['Neighborhood'].isna()))
print('Is there any unassigned Borough?', any(df['Borough'] == 'Not assigned'))
print('Is there any duplicated Postal Code?', any(df['PostalCode'].duplicated()))
print('Shape of data frame: ', df.shape)
###Output
Shape of data frame: (103, 3)
###Markdown
_Note:_ It is not necessary to combine neighborhood names according to postal code, nor to assign borough names to neighborhoods, because:* data was extracted directly from the Wikipedia URL, where all empty Neighborhoods have 'Not Assigned' boroughs;* no empty neighborhoods remained in the data frame after dropping rows;* no duplicate Postal Codes remained. Acquiring coordinates for each postal code using the Geospatial CSV file.
###Code
#reading CSV file
df_coords = pd.read_csv('https://cocl.us/Geospatial_data')
#renaming columns
df_coords.rename(columns={'Postal Code': 'PostalCode'}, inplace=True)
print('Shape of coordinates\' data frame: ', df.shape)
print('Coordinates data frame head:')
df_coords.head()
###Output
Shape of coordinates' data frame: (103, 3)
Coordinates data frame head:
###Markdown
Joining the two data frames.
###Code
#testing if all postal codes from one data frame is in another
print('Are all postal codes from df in df_coords?', all(df['PostalCode'].isin(df_coords['PostalCode'])))
print('Are all postal codes from df_coords in df?', all(df_coords['PostalCode'].isin(df['PostalCode'])))
#joining data frames on Postal Code value
df = df.merge(df_coords, on='PostalCode')
print('Read data frame with shape: ', df.shape)
print('Data Frame Head:')
df.head()
###Output
Read data frame with shape: (103, 5)
Data Frame Head:
###Markdown
2. Data Visualization With the data frame containing all neighborhoods and coordinates, it is possible to plot a map using the `folium` library.
###Code
#installing libraries
!pip install folium
!pip install geopy
#importing libraries
import folium
from folium import plugins
from geopy.geocoders import Nominatim # convert an address into latitude and longitude values
print("Imported dependencies!")
###Output
Collecting folium
[?25l Downloading https://files.pythonhosted.org/packages/a4/f0/44e69d50519880287cc41e7c8a6acc58daa9a9acf5f6afc52bcc70f69a6d/folium-0.11.0-py2.py3-none-any.whl (93kB)
[K |████████████████████████████████| 102kB 8.7MB/s ta 0:00:011
[?25hRequirement already satisfied: numpy in /opt/conda/envs/Python36/lib/python3.6/site-packages (from folium) (1.15.4)
Collecting branca>=0.3.0 (from folium)
Downloading https://files.pythonhosted.org/packages/13/fb/9eacc24ba3216510c6b59a4ea1cd53d87f25ba76237d7f4393abeaf4c94e/branca-0.4.1-py3-none-any.whl
Requirement already satisfied: requests in /opt/conda/envs/Python36/lib/python3.6/site-packages (from folium) (2.21.0)
Requirement already satisfied: jinja2>=2.9 in /opt/conda/envs/Python36/lib/python3.6/site-packages (from folium) (2.10)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /opt/conda/envs/Python36/lib/python3.6/site-packages (from requests->folium) (3.0.4)
Requirement already satisfied: idna<2.9,>=2.5 in /opt/conda/envs/Python36/lib/python3.6/site-packages (from requests->folium) (2.8)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/envs/Python36/lib/python3.6/site-packages (from requests->folium) (2020.4.5.1)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /opt/conda/envs/Python36/lib/python3.6/site-packages (from requests->folium) (1.24.1)
Requirement already satisfied: MarkupSafe>=0.23 in /opt/conda/envs/Python36/lib/python3.6/site-packages (from jinja2>=2.9->folium) (1.1.0)
Installing collected packages: branca, folium
Successfully installed branca-0.4.1 folium-0.11.0
Requirement already satisfied: geopy in /opt/conda/envs/Python36/lib/python3.6/site-packages (1.18.1)
Requirement already satisfied: geographiclib<2,>=1.49 in /opt/conda/envs/Python36/lib/python3.6/site-packages (from geopy) (1.49)
Imported dependencies!
###Markdown
The map is centered on Toronto's coordinates.
###Code
address = 'Toronto, Ontario'
geolocator = Nominatim(user_agent="ny_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geograpical coordinate of {} are {}, {}.'.format(address, latitude, longitude))
# create map of Manhattan using latitude and longitude values
map_toronto = folium.Map(location=[latitude, longitude], zoom_start=11)
# add markers to map
for lat, lng, label in zip(df['Latitude'], df['Longitude'], df['Neighborhood']):
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_toronto)
map_toronto
###Output
_____no_output_____
###Markdown
3. Neighborhood Exploration
###Code
# @hidden_cell
CLIENT_ID = '3HBP2YBLOSMGO5OQU0C5ISMEV2XD2STJYTFDKRCS0QBZNHLS' # your Foursquare ID
CLIENT_SECRET = 'D5WMNK0E2ZQVT2YCWI3ITUMI0WZUZZPKY1HYOKQD32UDIR5N' # your Foursquare Secret
VERSION = '20180604'
###Output
_____no_output_____
###Markdown
Using the Foursquare API, all neighborhoods are explored in order to find their venues. For each neighborhood, the API is used to query venue names, locations, and categories.
###Code
#function used for getting neighborhood venues
def getNeighborhoodVenues(names, latitudes, longitudes, radius=500, limit=15):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(CLIENT_ID, CLIENT_SECRET, VERSION, lat, lng, radius, limit)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(name, lat, lng, v['venue']['name'], v['venue']['location']['lat'], v['venue']['location']['lng'], v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood', 'Neighborhood Latitude', 'Neighborhood Longitude', 'Venue', 'Venue Latitude', 'Venue Longitude', 'Venue Category']
return(nearby_venues)
###Output
_____no_output_____
###Markdown
For the exploration, only neighborhoods whose boroughs are located in Toronto are considered.
###Code
#filtering Toronto neighborhoods
toronto_df = df[df['Borough'].str.contains('Toronto')]
#querying venues
toronto_venues = getNeighborhoodVenues(names=toronto_df['Neighborhood'], latitudes=toronto_df['Latitude'], longitudes=toronto_df['Longitude'])
toronto_venues.head()
#venues per neighborhood
toronto_venues.groupby('Neighborhood').count()['Venue']
###Output
_____no_output_____
###Markdown
Evaluating categories of venues for each neighborhood:
###Code
# one hot encoding
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
toronto_onehot['Neighborhood'] = toronto_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1])
toronto_onehot = toronto_onehot[fixed_columns]
toronto_onehot.head()
###Output
_____no_output_____
###Markdown
Grouping the rows by neighborhood and evaluating the frequency of each type of venue.
###Code
toronto_grouped = toronto_onehot.groupby('Neighborhood').mean().reset_index()
toronto_grouped
###Output
_____no_output_____
###Markdown
Top 10 most common venue categories of each neighborhood:
###Code
#importing dependencies
import numpy as np
#auxiliary function for sorting most common categories in descending order
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
#number of top venues
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = toronto_grouped['Neighborhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted
###Output
_____no_output_____
###Markdown
4. Clustering Neighborhoods Running the k-Means algorithm to group neighborhoods into 5 clusters.
###Code
#importing dependencies
from sklearn.cluster import KMeans
#number of clusters
kclusters = 5
toronto_grouped_clustering = toronto_grouped.drop('Neighborhood', 1)
#run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
#check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
###Output
_____no_output_____
###Markdown
Creating a new data frame including the labels generated by KMeans.
###Code
#add clustering labels
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
#merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
toronto_merged = toronto_df
toronto_merged = toronto_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighborhood')
toronto_merged.head()
###Output
_____no_output_____
###Markdown
Cluster Visualization:
###Code
#importing dependencies
import matplotlib.cm as cm
import matplotlib.colors as colors
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighborhood'], toronto_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
Clusters
###Code
#cluster 1
toronto_merged.loc[toronto_merged['Cluster Labels'] == 0, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
#cluster 2
toronto_merged.loc[toronto_merged['Cluster Labels'] == 1, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
#cluster 3
toronto_merged.loc[toronto_merged['Cluster Labels'] == 2, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
#cluster 4
toronto_merged.loc[toronto_merged['Cluster Labels'] == 3, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
#cluster 5
toronto_merged.loc[toronto_merged['Cluster Labels'] == 4, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
|
xgboost_building.ipynb
|
###Markdown
RAMP on predicting cyclist traffic in ParisAuthors: *Roman Yurchak (Symerio)*; also partially inspired by the air_passengers starting kit. IntroductionThe dataset was collected with cyclist counters installed by Paris city council in multiple locations. It contains hourly information about cyclist traffic, as well as the following features, - counter name - counter site name - date - counter installation date - latitude and longitude Available features are quite scarce. However, **we can also use any external data that can help us to predict the target variable.**
###Code
from pathlib import Path
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn
###Output
_____no_output_____
###Markdown
Loading the data with pandasFirst, download the data files, - [train.parquet](https://github.com/rth/bike_counters/releases/download/v0.1.0/train.parquet) - [test.parquet](https://github.com/rth/bike_counters/releases/download/v0.1.0/test.parquet)and put them into the data folder.Data is stored in [Parquet format](https://parquet.apache.org/), an efficient columnar data format. We can load the train set with pandas,
###Code
data = pd.read_parquet(Path('data') / 'train.parquet')
data_test =pd.read_parquet(Path('data') / 'test.parquet')
data_test
###Output
_____no_output_____
###Markdown
We can check general information about the different columns, and in particular the number of unique entries in each column, as shown at the top of the next cell. Adding external data: Covid and holidays
###Code
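# Quick inspection described above: column info and number of unique entries per column
# (assumes `data` is the training dataframe loaded earlier).
data.info()
print(data.nunique())
# Assumption: `df_ext` used below is the external data table (weather / holiday calendar),
# e.g. read earlier with pd.read_csv('external_data.csv', parse_dates=['date']).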
import datetime
date = pd.to_datetime(df_ext['date'])
df_ext.loc[:, ['date_only']] = date
new_date = [dt.date() for dt in df_ext['date_only']]
df_ext.loc[:, ['date_only']] = new_date
mask = ((df_ext['date_only'] >= pd.to_datetime('2020/10/30'))
& (df_ext['date_only'] <= pd.to_datetime('2020/12/15'))
| (df_ext['date_only'] >= pd.to_datetime('2021/04/03'))
& (df_ext['date_only'] <= pd.to_datetime('2021/05/03')))
df_ext['confi'] =0
df_ext.loc[mask,['confi']] = 1
df_ext[df_ext['date_only'] == pd.to_datetime('2020/10/30')]
from vacances_scolaires_france import SchoolHolidayDates
d = SchoolHolidayDates()
for each_row in range(df_ext.shape[0]):
if d.is_holiday_for_zone(df_ext.loc[each_row,'date_only'], 'C') == True :
df_ext.loc[each_row,'holiday'] = 1
else :
df_ext.loc[each_row,'holiday'] = 0
df_ext["holiday"] = df_ext["holiday"].astype(int)
df_ext
## Delete date_only
#df_ext = df_ext.drop('date_only', 1)
# df_ext
data_2 = data.iloc[:10000]
data_2.to_csv('data_r_2.csv')
###Output
_____no_output_____
###Markdown
Parameter tuning XGBoost
###Code
def _merge_external_data(X):
file_path = Path(__file__).parent / 'external_data.csv'
df_ext = pd.read_csv(file_path, parse_dates=['date'])
X = X.copy()
# When using merge_asof left frame need to be sorted
X['orig_index'] = np.arange(X.shape[0])
X = pd.merge_asof(X.sort_values('date'), df_ext[['date', 't','confi']].sort_values('date'), on='date')
# Sort back to the original order
X = X.sort_values('orig_index')
del X['orig_index']
return X
from xgboost import XGBRegressor
import xgboost as xgb
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import FunctionTransformer, OneHotEncoder  # needed by the pipeline below
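# Assumption: `X_train` and `y_train` come from a train/target split of `data` prepared in a
# cell not included here; they are reused by all of the tuning cells below.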
date_encoder = FunctionTransformer(_encode_dates)
date_cols = _encode_dates(X_train[['date']]).columns.tolist()
categorical_encoder = OneHotEncoder(handle_unknown="ignore")
categorical_cols = ["counter_name", "site_name"]
preprocessor = ColumnTransformer([
('date', StandardScaler(), date_cols),
('cat', categorical_encoder, categorical_cols),
])
regressor = XGBRegressor()
pipe = make_pipeline(date_encoder, preprocessor, regressor)
pipe.fit(X_train, y_train)
pip install tensorflow
from os import pread
import pandas as pd
import numpy as np
from pathlib import Path
from sklearn.preprocessing import FunctionTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import OrdinalEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.compose import ColumnTransformer
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from keras.wrappers.scikit_learn import KerasRegressor
def _encode_dates(X):
X = X.copy() # modify a copy of X
# Encode the date information from the DateOfDeparture columns
X.loc[:, "year"] = X["date"].dt.year
X.loc[:, "month"] = X["date"].dt.month
X.loc[:, "day"] = X["date"].dt.day
X.loc[:, "weekday"] = X["date"].dt.weekday
X.loc[:, "hour"] = X["date"].dt.hour
# Finally we can drop the original columns from the dataframe
return X.drop(columns=["date"])
def counters_done(X):
mask1 = (X['counter_name']=='152 boulevard du Montparnasse E-O') & (X['date'] >= pd.to_datetime('2021/01/26')) & (X['date'] <= pd.to_datetime('2021/02/24'))
mask1bis = (X['counter_name']=='152 boulevard du Montparnasse O-E') & (X['date'] >= pd.to_datetime('2021/01/26')) & (X['date'] <= pd.to_datetime('2021/02/24'))
mask2 = (X['counter_name']=='20 Avenue de Clichy SE-NO') & (X['date'] >= pd.to_datetime('2021/05/06')) & (X['date'] <= pd.to_datetime('2021/07/21'))
mask2bis = (X['counter_name']=='20 Avenue de Clichy NO-SE') & (X['date'] >= pd.to_datetime('2021/05/06')) & (X['date'] <= pd.to_datetime('2021/07/21'))
X.drop(X[mask1].index, inplace=True)
X.drop(X[mask1bis].index, inplace=True)
X.drop(X[mask2].index, inplace=True)
X.drop(X[mask2bis].index, inplace=True)
return X
def confinement(X):
date = pd.to_datetime(X['date'])
X.loc[:, ['date_only']] = date
new_date = [dt.date() for dt in X['date_only']]
X.loc[:, ['date_only']] = new_date
mask = ((X['date_only'] >= pd.to_datetime('2020/10/30').date())
& (X['date_only'] <= pd.to_datetime('2020/12/15').date())
| (X['date_only'] >= pd.to_datetime('2021/04/03').date())
& (X['date_only'] <= pd.to_datetime('2021/05/03').date()))
X['confi'] = np.where(mask, 1, 0)
return X
def curfew(X):
date = pd.to_datetime(X['date'])
X.loc[:, ['date_only']] = date
new_date = [dt.date() for dt in X['date_only']]
X.loc[:, ['date_only']] = new_date
X.loc[:, ['hour_only']] = date
new_hour = [dt.hour for dt in X['hour_only']]
X.loc[:, ['hour_only']] = new_hour
mask = (
#First curfew
(X['date_only'] >= pd.to_datetime('2020/12/15').date())
& (X['date_only'] < pd.to_datetime('2021/01/16').date())
& ((X['hour_only'] >= 20) | (X['hour_only'] <= 6))
|
# Second curfew
(X['date_only'] >= pd.to_datetime('2021/01/16').date())
& (X['date_only'] < pd.to_datetime('2021/03/20').date())
& ((X['hour_only'] >= 18) | (X['hour_only'] <= 6))
|
# Third curfew
(X['date_only'] >= pd.to_datetime('2021/03/20').date())
& (X['date_only'] < pd.to_datetime('2021/05/19').date())
& ((X['hour_only'] >= 19) | (X['hour_only'] <= 6))
|
# Fourth curfew
(X['date_only'] >= pd.to_datetime('2021/05/19').date())
& (X['date_only'] < pd.to_datetime('2021/06/9').date())
& ((X['hour_only'] >= 21) | (X['hour_only'] <= 6))
|
# Fifth curfew
(X['date_only'] >= pd.to_datetime('2021/06/9').date())
& (X['date_only'] < pd.to_datetime('2021/06/20').date())
& ((X['hour_only'] >= 21) | (X['hour_only'] <= 6))
)
X['curfew'] = np.where(mask, 1, 0)
return X.drop(columns=['hour_only', 'date_only'])
def _merge_external_data(X):
file_path = Path().parent / 'external_data.csv'
df_ext = pd.read_csv(file_path, parse_dates=['date'])
X = X.copy()
# When using merge_asof left frame need to be sorted
X['orig_index'] = np.arange(X.shape[0])
X = pd.merge_asof(X.sort_values('date'), df_ext[['date','t','u', 'rr3']].sort_values('date'), on='date')
# Sort back to the original order
X = X.sort_values('orig_index')
del X['orig_index']
return X
def to_df(X):
X = pd.DataFrame.sparse.from_spmatrix(X)
return X
def build_and_compile_model():
#X = pd.DataFrame.sparse.from_spmatrix(X)
#norm = tf.keras.layers.Normalization(axis=-1)
#norm.adapt(np.array(X))
model = keras.Sequential([
#norm,
layers.Dense(128, activation='elu'),
layers.Dense(128, activation='elu'),
layers.Dense(1)
])
model.compile(loss='mean_squared_error',
optimizer=tf.keras.optimizers.Adam(0.0005))
return model
def get_estimator():
confinement_encoder = FunctionTransformer(confinement)
curfew_encoder = FunctionTransformer(curfew)
date_encoder = FunctionTransformer(_encode_dates)
merge = FunctionTransformer(_merge_external_data, validate=False)
build_compile = FunctionTransformer(build_and_compile_model)
df_encode = FunctionTransformer(to_df)
date_cols = ['hour', 'weekday']
# 'month', 'day', 'weekday', 'year',
categorical_encoder = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=100)
categorical_cols = ["counter_name"]
numeric_cols = ['t']
#period_cols = ['curfew']
preprocessor = ColumnTransformer(
[
("date", StandardScaler(), date_cols),
("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
('numeric', StandardScaler(), numeric_cols),
#('period', 'passthrough', period_cols),
]
)
regressor = KerasRegressor(build_fn=build_and_compile_model,verbose=1, epochs=15, batch_size=45, validation_split=0.07)
pipe = make_pipeline(merge, confinement_encoder, curfew_encoder, date_encoder, preprocessor, df_encode, regressor)
return pipe
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import PolynomialFeatures
from catboost import CatBoostRegressor  # needed for the regressor defined below
date_encoder = FunctionTransformer(_encode_dates)
date_cols = ['year', 'month', 'day', 'weekday']
numeric_cols = ['t','u']
hour_col = ['hour']
period_cols = ['confi','curfew']
categorical_cols = ["counter_name"]
preprocessor = ColumnTransformer([
('date', StandardScaler(), date_cols),
('cat', OneHotEncoder(handle_unknown='ignore'), categorical_cols),
('numeric', StandardScaler(), numeric_cols),
('period', 'passthrough', period_cols),
('hour', PolynomialFeatures(degree=2), hour_col)
])
regressor = CatBoostRegressor()
pipe = make_pipeline(FunctionTransformer(confinement, validate=False),FunctionTransformer(curfew, validate=False),FunctionTransformer(_merge_external_data, validate=False), date_encoder, preprocessor, regressor)
pipe = get_estimator()
pipe
###Output
_____no_output_____
###Markdown
Deconstructing the pipeline
###Code
for param_name in pipe.get_params().keys():
print(param_name)
model = make_pipeline(date_encoder, preprocessor, regressor)
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Useful links for hyperparameter tuning: https://www.analyticsvidhya.com/blog/2016/03/complete-guide-parameter-tuning-xgboost-with-codes-python/ and https://xgboost.readthedocs.io/en/latest/python/python_api.html 1st step: tune n_estimators
###Code
regressor = XGBRegressor()
model = make_pipeline(date_encoder, preprocessor, regressor)
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Here we keep the relatively high default learning rate of 0.3 to perform the hyperparameter tuning
###Code
from sklearn.model_selection import GridSearchCV  # not yet imported at this point in the notebook
import time

param_grid = {'xgbregressor__n_estimators' : np.arange(200,350,50)}
model_grid_search = GridSearchCV(model, param_grid=param_grid, n_jobs=4, cv=5)# cv = 5 fold cross-validation
#n_jobs = nb of core on the computer
start = time.time()
model_grid_search.fit(X_train, y_train)
elapsed_time = time.time() - start
print(
f"The accuracy score using a {model_grid_search.__class__.__name__} is "
f"{model_grid_search.score(X_train, y_train):.2f} in "
f"{elapsed_time:.3f} seconds"
)
print(f"The best set of parameters is: {model_grid_search.best_params_}")
###Output
The accuracy score using a GridSearchCV is 0.94 in 603.766 seconds
The best set of parameters is: {'xgbregressor__n_estimators': 300}
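To see how the candidates compared, rather than just the winner, the cross-validation results can be inspected (a sketch; it assumes the fitted `model_grid_search` from the cell above and pandas imported as `pd`). Note that `score` for a regression pipeline is R², despite the "accuracy" wording in the printout.
cv_res = pd.DataFrame(model_grid_search.cv_results_)
cv_res[['param_xgbregressor__n_estimators', 'mean_test_score', 'std_test_score']]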
###Markdown
Step 2: Tune max_depth and min_child_weight
###Code
#import time
#from sklearn.model_selection import GridSearchCV
#param_grid = {'xgbregressor__max_depth' : range(9,12)}
#model_grid_search = GridSearchCV(model, param_grid=param_grid, n_jobs=4, cv=5)# cv = 5 fold cross-validation
# n_jobs = nb of core on the computer
#start = time.time()
#model_grid_search.fit(X_train, y_train)
#elapsed_time = time.time() - start
#print(
# f"The accuracy score using a {model_grid_search.__class__.__name__} is "
# f"{model_grid_search.score(X_train, y_train):.2f} in "
# f"{elapsed_time:.3f} seconds"
#)
#print(f"The best set of parameters is: {model_grid_search.best_params_}")
###Output
_____no_output_____
###Markdown
We will choose max_depth = 9
###Code
regressor = XGBRegressor(max_depth=9)
model = make_pipeline(date_encoder, preprocessor, regressor)
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
min_child_weight
###Code
#param_grid = {'xgbregressor__min_child_weight' : np.arange(0.2,0.5,0.1)}
#model_grid_search = GridSearchCV(model, param_grid=param_grid, n_jobs=4, cv=5)# cv = 5 fold cross-validation
# n_jobs = nb of core on the computer
#start = time.time()
#model_grid_search.fit(X_train, y_train)
#elapsed_time = time.time() - start
#print(
# f"The accuracy score using a {model_grid_search.__class__.__name__} is "
# f"{model_grid_search.score(X_train, y_train):.2f} in "
# f"{elapsed_time:.3f} seconds"
#)
#print(f"The best set of parameters is: {model_grid_search.best_params_}")
###Output
_____no_output_____
###Markdown
We will choose min_child_weight = 0.2 Step 3: Tune gamma
###Code
# regressor = XGBRegressor(max_depth=9, min_child_weight=0.2)
#model = make_pipeline(date_encoder, preprocessor, regressor)
# model.fit(X_train, y_train)
# param_grid = {'xgbregressor__gamma' : np.arange(0,0.2,0.1)}
# model_grid_search = GridSearchCV(model, param_grid=param_grid, n_jobs=4, cv=5)# cv = 5 fold cross-validation
# n_jobs = nb of core on the computer
#start = time.time()
#model_grid_search.fit(X_train, y_train)
#elapsed_time = time.time() - start
#print(
# f"The accuracy score using a {model_grid_search.__class__.__name__} is "
# f"{model_grid_search.score(X_train, y_train):.2f} in "
# f"{elapsed_time:.3f} seconds"
#)
#print(f"The best set of parameters is: {model_grid_search.best_params_}")
###Output
_____no_output_____
###Markdown
We choose gamma = 0.1 Step 4: Tune subsample and colsample_bytree Subsample
###Code
regressor = XGBRegressor(max_depth=9, min_child_weight=0.2, gamma = 0.1)
model = make_pipeline(date_encoder, preprocessor, regressor)
model.fit(X_train, y_train)
from sklearn.model_selection import GridSearchCV
import time
param_grid = {'xgbregressor__subsample' : np.arange(0.5,1,0.1)}
model_grid_search = GridSearchCV(model, param_grid=param_grid, n_jobs=4, cv=5)# cv = 5 fold cross-validation
#n_jobs = nb of core on the computer
start = time.time()
model_grid_search.fit(X_train, y_train)
elapsed_time = time.time() - start
print(
f"The accuracy score using a {model_grid_search.__class__.__name__} is "
f"{model_grid_search.score(X_train, y_train):.2f} in "
f"{elapsed_time:.3f} seconds"
)
print(f"The best set of parameters is: {model_grid_search.best_params_}")
###Output
The accuracy score using a GridSearchCV is 0.93 in 518.901 seconds
The best set of parameters is: {'xgbregressor__subsample': 0.8999999999999999}
###Markdown
Let's choose subsample = 0.9 colsample_bytree
###Code
regressor = XGBRegressor(max_depth=9, learning_rate=0.3, min_child_weight=0.2, gamma = 0.1, subsample=0.9)
model = make_pipeline(date_encoder, preprocessor, regressor)
model.fit(X_train, y_train)
from sklearn.model_selection import GridSearchCV
import time
# works better when left at its default value
param_grid = {
'xgbregressor__colsample_bytree' : np.arange(0.5,1.0,0.1),
}
model_grid_search = GridSearchCV(model, param_grid=param_grid, n_jobs=4, cv=5)# cv = 5 fold cross-validation
#n_jobs = nb of core on the computer
start = time.time()
model_grid_search.fit(X_train, y_train)
elapsed_time = time.time() - start
print(
f"The accuracy score using a {model_grid_search.__class__.__name__} is "
f"{model_grid_search.score(X_train, y_train):.2f} in "
f"{elapsed_time:.3f} seconds"
)
print(f"The best set of parameters is: {model_grid_search.best_params_}")
###Output
The accuracy score using a GridSearchCV is 0.94 in 567.597 seconds
The best set of parameters is: {'xgbregressor__colsample_bytree': 0.6}
###Markdown
Here we choose colsample_bytree=0.6 Step 5: Tuning Regularization Parameters reg_alpha (the L1/lasso term of XGBoost's regularization)
###Code
regressor = XGBRegressor(max_depth=9, learning_rate=0.3, min_child_weight=0.2, gamma = 0.1, subsample=0.9, colsample_bytree=0.6)
model = make_pipeline(date_encoder, preprocessor, regressor)
model.fit(X_train, y_train)
from sklearn.model_selection import GridSearchCV
import time
# works better when left at its default value
param_grid = {
'xgbregressor__reg_alpha' : [1e-5, 1e-2, 0.1, 1, 100]
}
model_grid_search = GridSearchCV(model, param_grid=param_grid, n_jobs=4, cv=5)# cv = 5 fold cross-validation
#n_jobs = nb of core on the computer
start = time.time()
model_grid_search.fit(X_train, y_train)
elapsed_time = time.time() - start
print(
f"The accuracy score using a {model_grid_search.__class__.__name__} is "
f"{model_grid_search.score(X_train, y_train):.2f} in "
f"{elapsed_time:.3f} seconds"
)
print(f"The best set of parameters is: {model_grid_search.best_params_}")
from sklearn.model_selection import GridSearchCV
import time
# works better when left at its default value
param_grid = {
'xgbregressor__reg_alpha' : np.arange(2.8, 3.2 , 1)
}
model_grid_search = GridSearchCV(model, param_grid=param_grid, n_jobs=4, cv=5)# cv = 5 fold cross-validation
#n_jobs = nb of core on the computer
start = time.time()
model_grid_search.fit(X_train, y_train)
elapsed_time = time.time() - start
print(
f"The accuracy score using a {model_grid_search.__class__.__name__} is "
f"{model_grid_search.score(X_train, y_train):.2f} in "
f"{elapsed_time:.3f} seconds"
)
print(f"The best set of parameters is: {model_grid_search.best_params_}")
###Output
The accuracy score using a GridSearchCV is 0.95 in 405.342 seconds
The best set of parameters is: {'xgbregressor__reg_alpha': 3}
###Markdown
We will choose reg_alpha = 3 reg_lambda
###Code
regressor = XGBRegressor(max_depth=9, learning_rate=0.3, min_child_weight=0.2, gamma = 0.1, subsample=0.9, colsample_bytree=0.6, reg_alpha=3)
model = make_pipeline(date_encoder, preprocessor, regressor)
from sklearn.model_selection import GridSearchCV
import time
# works better when left at its default value
param_grid = {
'xgbregressor__reg_lambda' : np.arange(1, 5, 1)
}
model_grid_search = GridSearchCV(model, param_grid=param_grid, n_jobs=4, cv=5)# cv = 5 fold cross-validation
#n_jobs = nb of core on the computer
start = time.time()
model_grid_search.fit(X_train, y_train)
elapsed_time = time.time() - start
print(
f"The accuracy score using a {model_grid_search.__class__.__name__} is "
f"{model_grid_search.score(X_train, y_train):.2f} in "
f"{elapsed_time:.3f} seconds"
)
print(f"The best set of parameters is: {model_grid_search.best_params_}")
param_grid = {
'xgbregressor__reg_lambda' : np.arange(0.7, 1.3, 0.1)
}
model_grid_search = GridSearchCV(model, param_grid=param_grid, n_jobs=4, cv=5)# cv = 5 fold cross-validation
#n_jobs = nb of core on the computer
start = time.time()
model_grid_search.fit(X_train, y_train)
elapsed_time = time.time() - start
print(
f"The accuracy score using a {model_grid_search.__class__.__name__} is "
f"{model_grid_search.score(X_train, y_train):.2f} in "
f"{elapsed_time:.3f} seconds"
)
print(f"The best set of parameters is: {model_grid_search.best_params_}")
###Output
The accuracy score using a GridSearchCV is 0.95 in 609.759 seconds
The best set of parameters is: {'xgbregressor__reg_lambda': 0.9999999999999999}
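Collecting the values selected above into a single tuned estimator (a sketch; `reg_lambda` stays at its default of 1, consistent with the search result above):
tuned_regressor = XGBRegressor(n_estimators=300, max_depth=9, learning_rate=0.3,
                               min_child_weight=0.2, gamma=0.1, subsample=0.9,
                               colsample_bytree=0.6, reg_alpha=3, reg_lambda=1)
tuned_model = make_pipeline(date_encoder, preprocessor, tuned_regressor)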
###Markdown
Randomized parameter tuning
###Code
pipe
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint as sp_randint
params = {
'xgbregressor__n_estimators' : [100],
'xgbregressor__max_depth' : [-1, 1, 2, 3, 4, 5, 6, 9],
'xgbregressor__min_child_weight' : [1e-5, 1e-3, 1e-2, 1e-1, 1, 1e1, 1e2, 1e3, 1e4],
'xgbregressor__gamma': [0.05,0.1, 0.2],
'xgbregressor__subsample' : [0.2, 0.5, 0.7, 0.9],
'xgbregressor__colsample_bytree' : [0.2, 0.5, 0.7, 0.9],
'xgbregressor__reg_alpha' : [0, 1e-1, 1, 2, 5, 7, 10, 50, 100],
'xgbregressor__reg_lambda' : [0, 1e-1, 1, 5, 10, 20, 50, 100],
'xgbregressor__learning_rate' : [0.3],
}
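# Note (assumption about intent): the 'xgbregressor__' prefixes only match a pipeline whose final
# step is an XGBRegressor; for the CatBoost or Keras pipelines built above, the prefix must be
# renamed to that step's name (e.g. 'catboostregressor__' or 'kerasregressor__').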
grid_search = RandomizedSearchCV(pipe, param_distributions=params, n_jobs=4, cv=5, random_state=42, scoring='neg_mean_squared_error', n_iter=200)
result_gs = grid_search.fit(X_train, y_train)
print(f"Best params : {result_gs.best_params_}")
for param_name in pipe.get_params().keys():
print(param_name)
###Output
memory
steps
verbose
functiontransformer-1
functiontransformer-2
functiontransformer-3
functiontransformer-4
columntransformer
catboostregressor
functiontransformer-1__accept_sparse
functiontransformer-1__check_inverse
functiontransformer-1__func
functiontransformer-1__inv_kw_args
functiontransformer-1__inverse_func
functiontransformer-1__kw_args
functiontransformer-1__validate
functiontransformer-2__accept_sparse
functiontransformer-2__check_inverse
functiontransformer-2__func
functiontransformer-2__inv_kw_args
functiontransformer-2__inverse_func
functiontransformer-2__kw_args
functiontransformer-2__validate
functiontransformer-3__accept_sparse
functiontransformer-3__check_inverse
functiontransformer-3__func
functiontransformer-3__inv_kw_args
functiontransformer-3__inverse_func
functiontransformer-3__kw_args
functiontransformer-3__validate
functiontransformer-4__accept_sparse
functiontransformer-4__check_inverse
functiontransformer-4__func
functiontransformer-4__inv_kw_args
functiontransformer-4__inverse_func
functiontransformer-4__kw_args
functiontransformer-4__validate
columntransformer__n_jobs
columntransformer__remainder
columntransformer__sparse_threshold
columntransformer__transformer_weights
columntransformer__transformers
columntransformer__verbose
columntransformer__verbose_feature_names_out
columntransformer__date
columntransformer__cat
columntransformer__numeric
columntransformer__period
columntransformer__hour
columntransformer__date__copy
columntransformer__date__with_mean
columntransformer__date__with_std
columntransformer__cat__categories
columntransformer__cat__drop
columntransformer__cat__dtype
columntransformer__cat__handle_unknown
columntransformer__cat__sparse
columntransformer__numeric__copy
columntransformer__numeric__with_mean
columntransformer__numeric__with_std
columntransformer__hour__degree
columntransformer__hour__include_bias
columntransformer__hour__interaction_only
columntransformer__hour__order
catboostregressor__loss_function
###Markdown
Visualizing the results :
###Code
pipe = get_estimator()  # NB: a freshly built pipeline is unfitted; fit it (pipe.fit(X_train, y_train)) or reuse the fitted pipeline from above before predicting below
mask = ((X_test['counter_name'] == '152 boulevard du Montparnasse E-O')
& (X_test['date'] > pd.to_datetime('2021/09/01'))
& (X_test['date'] < pd.to_datetime('2021/09/15')))
df_viz = X_test.loc[mask].copy()
df_viz['bike_count'] = np.exp(y_test[mask.values]) - 1
df_viz['bike_count (predicted)'] = np.exp(pipe.predict(X_test[mask])) - 1
fig, ax = plt.subplots(figsize=(12, 4))
df_viz.plot(x='date', y='bike_count', ax=ax)
df_viz.plot(x='date', y='bike_count (predicted)', ax=ax)
ax.set_title('Predictions with NN - 152 boulevard du Montparnasse E-O')
ax.set_ylabel('bike_count')
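# A hedged follow-up (sketch): quantify the fit over this two-week window, for example
# from sklearn.metrics import mean_squared_error
# rmse = mean_squared_error(df_viz['bike_count'], df_viz['bike_count (predicted)'], squared=False)
# print('RMSE over the window: {:.1f} bikes per hour'.format(rmse))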
###Output
5/5 [==============================] - 0s 2ms/step
|
Moringa_Data_Science_Prep_W3_Independent_Project_2019_07_KELVIN_NJUNGE_DataReport_.ipynb
|
###Markdown
MTN COTE D'IVOIRE INFRASTRUCTURE UPGRADE **STRATEGY** Data Preparation Importing libraries
###Code
#Import the pandas library
import pandas as pd
# as well as the Numpy library
import numpy as np
###Output
_____no_output_____
###Markdown
Loading Datasets
###Code
df1 = pd.read_csv('/content/Telcom_dataset.csv')
df1.head()
df2 = pd.read_csv('/content/Telcom_dataset2.csv')
df2.head()
df3 = pd.read_csv('/content/Telcom_dataset3.csv')
df3.head()
###Output
_____no_output_____
###Markdown
Cleaning Datasets Changing column names
###Code
# Changing the columns names
# previewing the column names
# change to lower case
df1.columns
df1.columns = map(str.lower, df1.columns)
df1.columns
columns = ['product', 'value', 'date_time', 'cell_on_site', 'dw_a_number_int',
'dw_b_number_int', 'country_a', 'country_b', 'cell_id', 'site_id']
# changing columns titles for dataset 1
df1.columns = columns
df1.head()
# changing columns titles for dataset 2
df2.columns = columns
df2.head()
# changing columns titles for dataset 3
df3.columns = columns
df3.head()
###Output
_____no_output_____
###Markdown
Concatenating the three datasets
###Code
# combining the three datasets using concatenation
df_combine = pd.concat([df1, df2, df3], ignore_index=True)
df_combine.head()
###Output
_____no_output_____
###Markdown
Dropping unnecessary columns
###Code
# Dropping unnecessary columns
df_combine_cleaned = df_combine.drop(columns = ['country_a' , 'country_b' ,'cell_on_site'])
df_combine_cleaned.head()
# Loading other files with the datasets description
df4 = pd.read_excel('CDR_description.xlsx')
df4
## Loading the cells geo datasets
df5 = pd.read_csv('cells_geo.csv', delimiter =";" , index_col = 0)
df5.head()
# Loading other files with the datasets description
df6 = pd.read_excel('cells_geo_description.xlsx')
df6
###Output
_____no_output_____
###Markdown
Splitting the date and time column
###Code
# to split the date and time column into separate columns
df_combine_cleaned.head()
# create a new column and split the date_time column into 2
df_combine_cleaned[['date', 'time']] = df_combine.date_time.str.split(expand=True)
df_combine_cleaned.head()
# the sum for each product billing price
grouped = df_combine.groupby(['product'])['value'].sum()
grouped
df_combine_cleaned
# Which city was used most over the three days? (cell_id is used as a proxy here; mapping
# cell_id to an actual city would need the cells_geo table loaded above.)
#Inference: infers City with most calls over the last 3 days
#Action : Will count the number of calls per city. The one with maximum number of calls is the most used city
df_combine_cleaned.groupby('cell_id').count().nlargest(5,'product')
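# A hedged extra (sketch, using only columns created above): total billed value per product per day
# df_combine_cleaned.groupby(['date', 'product'])['value'].sum().unstack(fill_value=0)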
###Output
Object `days` not found.
|
apps/image-augmentation-3d/image-augementation-3d.ipynb
|
###Markdown
Image Augmentation for 3D images Powered by **Analytics Zoo/Spark** for deep learning, running on **Intel** architecture. In this demo, we will show some image processing methods on meniscus data.
###Code
import os
import numpy as np
import matplotlib.pyplot as plt
import h5py
from math import pi
from zoo.common.nncontext import *
from zoo.feature.common import *
from zoo.feature.image3d.transformation import *
%matplotlib inline
sc = init_nncontext("Image Augmentation 3D Example")
###Output
_____no_output_____
###Markdown
Load sample data using the `h5py` library. We expand the dimensions so the array matches the expected 3D image shape.
###Code
image = h5py.File(os.getenv("ANALYTICS_ZOO_HOME")+"/apps/image-augmentation-3d/image/meniscus_full.mat")['meniscus_im']
sample = np.array(image)
sample = np.expand_dims(sample,3)
###Output
_____no_output_____
###Markdown
Shape of sample.
###Code
sample.shape
###Output
_____no_output_____
###Markdown
Create LocalImageSet
###Code
image_list=[sample]
image_set = LocalImageSet(image_list=image_list)
###Output
creating: createLocalImageSet
###Markdown
Create DistributedImageSet
###Code
data_rdd = sc.parallelize([sample])
image_set = DistributedImageSet(image_rdd=data_rdd)
###Output
creating: createDistributedImageSet
###Markdown
Image Transformation Cropping
###Code
start_loc = [13,80,125]
patch = [5, 40, 40]
crop = Crop3D(start=start_loc, patch_size=patch)
cropped_imageset = crop(image_set)
crop_data = cropped_imageset.get_image(key="imageTensor").first()
crop_data.shape
###Output
creating: createCrop3D
###Markdown
Rotate 30 degrees
###Code
yaw = 0.0
pitch = 0.0
roll = pi/6
rotate_30 = Rotate3D([yaw, pitch, roll])
rotate_30_imageset = rotate_30(cropped_imageset)
rotate_30_data = rotate_30_imageset.get_image(key="imageTensor").first()
rotate_30_data.shape
###Output
creating: createRotate3D
###Markdown
Rotate 90 degrees
###Code
yaw = 0.0
pitch = 0.0
roll = pi/2
rotate_90 = Rotate3D([yaw, pitch, roll])
rotate_90_imageset = rotate_90(rotate_30_imageset)
rotate_90_data = rotate_90_imageset.get_image(key="imageTensor").first()
rotate_90_data.shape
###Output
creating: createRotate3D
###Markdown
Random affine transformation
###Code
random = np.random.rand(3, 3)
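# Note: np.random.rand(3, 3) gives an arbitrary (generally non-orthogonal) matrix, so the transform
# below is a random shear/scale rather than a pure rotation. A hedged alternative (sketch) would be
# to pass a proper rotation matrix instead, e.g. about the z-axis:
# angle = pi / 6
# rot_z = np.array([[np.cos(angle), -np.sin(angle), 0.],
#                   [np.sin(angle), np.cos(angle), 0.],
#                   [0., 0., 1.]])
# affine_rot = AffineTransform3D(rot_z)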
affine = AffineTransform3D(random)
affine_imageset = affine(rotate_90_imageset)
affine_data = affine_imageset.get_image(key="imageTensor").first()
affine_data.shape
###Output
creating: createAffineTransform3D
###Markdown
Pipeline of 3D transformers
###Code
image_set = DistributedImageSet(image_rdd=data_rdd)
start_loc = [13,80,125]
patch = [5, 40, 40]
yaw = 0.0
pitch = 0.0
roll_30 = pi / 6
roll_90 = pi / 2
transformer = ChainedPreprocessing(
[Crop3D(start_loc, patch),
Rotate3D([yaw, pitch, roll_30]),
Rotate3D([yaw, pitch, roll_90]),
AffineTransform3D(random)])
transformed = transformer(image_set)
pipeline_data = transformed.get_image(key="imageTensor").first()
###Output
creating: createDistributedImageSet
creating: createCrop3D
creating: createRotate3D
creating: createRotate3D
creating: createAffineTransform3D
creating: createChainedPreprocessing
###Markdown
Show Results
###Code
fig = plt.figure(figsize=[10,10])
y = fig.add_subplot(2,3,1)
y.add_patch(plt.Rectangle((start_loc[2]-1,start_loc[1]-1),
patch[1],
patch[1], fill=False,
edgecolor='red', linewidth=1.5)
)
y.text(start_loc[2]-45, start_loc[1]-15,
'Cropped Region',
bbox=dict(facecolor='green', alpha=0.5),
color='white')
y.imshow(sample[15,:,:,0],cmap='gray')
y.set_title('Original Image')
y = fig.add_subplot(2,3,2)
y.imshow(crop_data[2,:,:,0],cmap='gray')
y.set_title('Cropped Image')
y = fig.add_subplot(2,3,3)
y.imshow(rotate_30_data[2,:,:,0],cmap='gray')
y.set_title('Rotate 30 Deg')
y = fig.add_subplot(2,3,4)
y.imshow(rotate_90_data[2,:,:,0],cmap='gray')
y.set_title('Rotate 90 Deg')
y = fig.add_subplot(2,3,5)
y.imshow(affine_data[2,:,:,0],cmap='gray')
y.set_title('Random Affine Transformation')
y = fig.add_subplot(2,3,6)
y.imshow(pipeline_data[2,:,:,0],cmap='gray')
y.set_title('Pipeline Transformation')
###Output
_____no_output_____
|
Lesson 4/Lesson 4 All Exercises.ipynb
|
###Markdown
Exercise 1: Load and examine superstore sales data from an Excel file
###Code
# Imports (assumed; the extracted notebook does not show an import cell)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_excel("Sample - Superstore.xls")
df.head(10)
df.drop('Row ID',axis=1,inplace=True)
df.shape
###Output
_____no_output_____
###Markdown
Exercise 2: Subsetting the DataFrame
###Code
df_subset = df.loc[[i for i in range(5,10)],['Customer ID','Customer Name','City','Postal Code','Sales']]
df_subset
###Output
_____no_output_____
###Markdown
Exercise 3: An example use case – determining statistics on sales and profit for records 100-199
###Code
df_subset = df.loc[[i for i in range(100,200)],['Sales','Profit']]
df_subset.describe()
df_subset.plot.box()
plt.title("Boxplot of sales and profit",fontsize=15)
plt.ylim(0,500)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 4: A useful function – unique
###Code
df['State'].unique()
df['State'].nunique()
df['Country'].unique()
df.drop('Country',axis=1,inplace=True)
###Output
_____no_output_____
###Markdown
Exercise 5: Conditional Selection and Boolean Filtering
###Code
df_subset = df.loc[[i for i in range (10)],['Ship Mode','State','Sales']]
df_subset
df_subset>100
df_subset[df_subset>100]
df_subset[df_subset['Sales']>100]
df_subset[(df_subset['State']!='California') & (df_subset['Sales']>100)]
###Output
_____no_output_____
###Markdown
Exercise 6: Setting and re-setting index
###Code
matrix_data = np.matrix('22,66,140;42,70,148;30,62,125;35,68,160;25,62,152')
row_labels = ['A','B','C','D','E']
column_headings = ['Age', 'Height', 'Weight']
df1 = pd.DataFrame(data=matrix_data, index=row_labels, columns=column_headings)
print("\nThe DataFrame\n",'-'*25, sep='')
print(df1)
print("\nAfter resetting index\n",'-'*35, sep='')
print(df1.reset_index())
print("\nAfter resetting index with 'drop' option TRUE\n",'-'*45, sep='')
print(df1.reset_index(drop=True))
print("\nAdding a new column 'Profession'\n",'-'*45, sep='')
df1['Profession'] = "Student Teacher Engineer Doctor Nurse".split()
print(df1)
print("\nSetting 'Profession' column as index\n",'-'*45, sep='')
print (df1.set_index('Profession'))
###Output
The DataFrame
-------------------------
Age Height Weight
A 22 66 140
B 42 70 148
C 30 62 125
D 35 68 160
E 25 62 152
After resetting index
-----------------------------------
index Age Height Weight
0 A 22 66 140
1 B 42 70 148
2 C 30 62 125
3 D 35 68 160
4 E 25 62 152
After resetting index with 'drop' option TRUE
---------------------------------------------
Age Height Weight
0 22 66 140
1 42 70 148
2 30 62 125
3 35 68 160
4 25 62 152
Adding a new column 'Profession'
---------------------------------------------
Age Height Weight Profession
A 22 66 140 Student
B 42 70 148 Teacher
C 30 62 125 Engineer
D 35 68 160 Doctor
E 25 62 152 Nurse
Setting 'Profession' column as index
---------------------------------------------
Age Height Weight
Profession
Student 22 66 140
Teacher 42 70 148
Engineer 30 62 125
Doctor 35 68 160
Nurse 25 62 152
###Markdown
Exercise 7: GroupBy method
###Code
df_subset = df.loc[[i for i in range (10)],['Ship Mode','State','Sales']]
df_subset
byState = df_subset.groupby('State')
byState
print("\nGrouping by 'State' column and listing mean sales\n",'-'*50, sep='')
print(byState.mean())
print("\nGrouping by 'State' column and listing total sum of sales\n",'-'*50, sep='')
print(byState.sum())
print(pd.DataFrame(df_subset.groupby('State').describe().loc['California']).transpose())
df_subset.groupby('Ship Mode').describe().loc[['Second Class','Standard Class']]
pd.DataFrame(byState.describe().loc['California'])
byStateCity=df.groupby(['State','City'])
byStateCity.describe()['Sales']
###Output
_____no_output_____
###Markdown
Exercise 8: Missing values in Pandas
###Code
df_missing=pd.read_excel("Sample - Superstore.xls",sheet_name="Missing")
df_missing
df_missing.isnull()
for c in df_missing.columns:
miss = df_missing[c].isnull().sum()
if miss>0:
print("{} has {} missing value(s)".format(c,miss))
else:
print("{} has NO missing value!".format(c))
###Output
Customer has 1 missing value(s)
Product has 2 missing value(s)
Sales has 1 missing value(s)
Quantity has 1 missing value(s)
Discount has NO missing value!
Profit has 1 missing value(s)
###Markdown
Exercise 9: Filling missing values with `fillna()`
###Code
df_missing.fillna('FILL')
df_missing[['Customer','Product']].fillna('FILL')
df_missing['Sales'].fillna(method='ffill')
df_missing['Sales'].fillna(method='bfill')
df_missing['Sales'].fillna(df_missing.mean()['Sales'])
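# A hedged alternative (sketch): fill every numeric column with its own column mean in one call
# df_missing.fillna(df_missing.mean(numeric_only=True))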
###Output
_____no_output_____
###Markdown
Exercise 10: Dropping missing values with `dropna()`
###Code
df_missing.dropna(axis=0)
df_missing.dropna(axis=1)
df_missing.dropna(axis=1,thresh=10)
###Output
_____no_output_____
###Markdown
Exercise 11: Outlier detection using simple statistical test
###Code
df_sample = df[['Customer Name','State','Sales','Profit']].sample(n=50).copy()
# Assign a wrong (negative value) in few places
df_sample['Sales'].iloc[5]=-1000.0
df_sample['Sales'].iloc[15]=-500.0
df_sample.plot.box()
plt.title("Boxplot of sales and profit", fontsize=15)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.grid(True)
###Output
_____no_output_____
###Markdown
Exercise 12: Concatenation
###Code
df_1 = df[['Customer Name','State','Sales','Profit']].sample(n=4)
df_2 = df[['Customer Name','State','Sales','Profit']].sample(n=4)
df_3 = df[['Customer Name','State','Sales','Profit']].sample(n=4)
df_1
df_2
df_3
df_cat1 = pd.concat([df_1,df_2,df_3], axis=0)
df_cat1
df_cat2 = pd.concat([df_1,df_2,df_3], axis=1)
df_cat2
###Output
_____no_output_____
###Markdown
Exercise 13: Merging by a common key
###Code
df_1=df[['Customer Name','Ship Date','Ship Mode']][0:4]
df_1
df_2=df[['Customer Name','Product Name','Quantity']][0:4]
df_2
pd.merge(df_1,df_2,on='Customer Name',how='inner')
pd.merge(df_1,df_2,on='Customer Name',how='inner').drop_duplicates()
df_3=df[['Customer Name','Product Name','Quantity']][2:6]
df_3
pd.merge(df_1,df_3,on='Customer Name',how='inner').drop_duplicates()
pd.merge(df_1,df_3,on='Customer Name',how='outer').drop_duplicates()
###Output
_____no_output_____
###Markdown
Exercise 14: Join method
###Code
df_1=df[['Customer Name','Ship Date','Ship Mode']][0:4]
df_1.set_index(['Customer Name'],inplace=True)
df_1
df_2=df[['Customer Name','Product Name','Quantity']][2:6]
df_2.set_index(['Customer Name'],inplace=True)
df_2
df_1.join(df_2,how='left').drop_duplicates()
df_1.join(df_2,how='right').drop_duplicates()
df_1.join(df_2,how='inner').drop_duplicates()
df_1.join(df_2,how='outer').drop_duplicates()
###Output
_____no_output_____
###Markdown
Miscellaneous useful methods Exercise 15: Randomized sampling - `sample` method
###Code
df.sample(n=5)
df.sample(frac=0.001)
df.sample(frac=0.001,replace=True)
###Output
_____no_output_____
###Markdown
Exercise 16: Pandas `value_counts` method to return unique records
###Code
df['Customer Name'].value_counts()[:10]
###Output
_____no_output_____
###Markdown
Exercise 17: Pivot table functionality - `pivot_table`
###Code
df_sample=df.sample(n=100)
df_sample.pivot_table(values=['Sales','Quantity','Profit'],index=['Region','State'],aggfunc='mean')
###Output
_____no_output_____
###Markdown
Exercise 18: Sorting by particular column
###Code
df_sample=df[['Customer Name','State','Sales','Quantity']].sample(n=15)
df_sample
df_sample.sort_values(by='Sales')
df_sample.sort_values(by=['State','Sales'])
###Output
_____no_output_____
###Markdown
Exercise 19: Flexibility for user-defined function with `apply` method
###Code
def categorize_sales(price):
if price < 50:
return "Low"
elif price < 200:
return "Medium"
else:
return "High"
df_sample=df[['Customer Name','State','Sales']].sample(n=100)
df_sample.head(10)
df_sample['Sales Price Category']=df_sample['Sales'].apply(categorize_sales)
df_sample.head(10)
df_sample['Customer Name Length']=df_sample['Customer Name'].apply(len)
df_sample.head(10)
df_sample['Discounted Price']=df_sample['Sales'].apply(lambda x:0.85*x if x>200 else x)
df_sample.head(10)
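# A hedged extra (sketch): apply also works row-wise (axis=1) when several columns are needed at once
# df_sample['Summary'] = df_sample.apply(
#     lambda row: "{} ({}): {:.2f}".format(row['Customer Name'], row['State'], row['Sales']), axis=1)
# df_sample.head(10)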
###Output
_____no_output_____
|
Core 1/data analysis/ipython notebook/Assignment 2 Model Solutions.ipynb
|
###Markdown
Question 1: Poiseuille's method for determining viscosityThe volume flow rate, ${\displaystyle\frac{{\rm d}V}{{\rm d}t}}$, of fluid flowing smoothly through a horizontal tube of length $L$ and radius $r$ is given by Poiseuille's equation:\begin{equation}\frac{{\rm d}V}{{\rm d}t}=\frac{\pi\rho g h r^4}{8\eta L},\end{equation}where $\eta$ and $\rho$ are the viscosity and density, respectively, of the fluid, $h$ is the head of pressure across the tube, and $g$ the acceleration due to gravity. In an experiment the graph of the flow rate versus height has a slope measured to 7%, the length is known to 0.5%, and the radius to 8%. Required:(i) What is the fractional precision to which the viscosity is known? (ii) If more experimental time is available, should this be devoted to >(a) collecting more flow-rate data, >(b) measuring the length, >(c) the radius of the tube? (i) What is the fractional precision to which the viscosity is known? Express your answer as a DECIMAL.
###Code
# Imports (assumed; the extracted notebook does not show an import cell)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit, minimize
def one_i():
'Returns fractional precision of viscosity'
fractional_precision = np.sqrt((16*0.08**2)+(0.07**2)+(0.005**2))
percentage_error = fractional_precision*100
return fractional_precision
one_i()
###Output
_____no_output_____
###Markdown
(ii) If more experimental time is available, should this be devoted to >(a) collecting more flow-rate data, >(b) measuring the length, >(c) the radius of the tube?
###Code
def one_ii():
'''Your function should return a string of A,B or C'''
comment = 'C'
return comment
one_ii()
###Output
_____no_output_____
###Markdown
Question 2: Functional error approach for Van der Waals calculationThe Van der Waals equation of state is a correction to the ideal gas law, given by the equation,\begin{equation}(P+\frac{a}{V_m^2})(V_m-b) = RT,\end{equation}where $P$ is the pressure, $V_m$ is the molar volume, $T$ is the absolute temperature, $R$ is the universal gas constant with $a$ and $b$ being species-specific Van der Waals coefficents. A sample of Nitrogen was measured in an experiment as,>Molar Volume $V_m$ = $(2.000 \pm 0.003)$x$10^{-4}m^3mol^{-1}$>Absolute Temperature $T$ = $298.0\pm0.2K$and the constants are, $a$ = $(1.408$x$10^{-1}) m^6mol^{-2}Pa$$b$ = $(3.913$x$10^{-5}) m^3mol^{-1}$$R$ = $(8.3145) JK^{-1}mol^{-1}$Required:>(i) From the given data, calculate the pressure giving your answer in MPa.>(ii) Calculate the uncertainty in the pressure by using the functional approach for error propagation.>(iii) Repeat the calculations above for >>$V_m = (2.000\pm0.003)\times10^{-3}\,{\rm m}^{3}\,{\rm mol}^{-1}$ and $T=400.0 \pm 0.2K$. (i) From the given data, calculate the pressure giving your answer in MPa.
###Code
def two_i():
'''Your function should return the pressure in MPa'''
R = 8.3145
T,alpha_t = 298,0.2
v_m,alpha_vm = 2*10**-4,0.003*10**-4
b = 3.913*10**-5
a = 1.408*10**-1
# Note: the answer is divided by 10**6 in order to have the correct units
pressure = (((R*T)/(v_m-b))-(a/(v_m**2)))/10**6
return pressure
two_i()
###Output
_____no_output_____
###Markdown
(ii) Calculate the uncertainty in the pressure by using the functional approach for error propagation.
###Code
def two_ii():
'''Your function should return the uncertainty'''
R = 8.3145
T,alpha_t = 298,0.2
v_m,alpha_vm = 2*10**-4,0.003*10**-4
b = 3.913*10**-5
a = 1.408*10**-1
pressure = two_i()*10**6 # convert back from MPa to Pa before propagating the errors
alpha_v= ((R*T)/((v_m+alpha_vm)-b))-(a/((v_m+alpha_vm)**2))-pressure
alpha_t = ((R*(T+alpha_t))/(v_m-b))-(a/(v_m**2))-pressure
uncertainty = (np.sqrt((alpha_v**2)+(alpha_t**2)))/10**6
return uncertainty
two_ii()
###Output
_____no_output_____
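As a cross-check (a sketch, assuming the errors are small enough for first-order propagation), the calculus-based formula $\alpha_P^2 = (\partial P/\partial V_m)^2\alpha_{V_m}^2 + (\partial P/\partial T)^2\alpha_T^2$ should agree with the functional approach:
R = 8.3145
T_c, alpha_T = 298.0, 0.2
Vm_c, alpha_Vm = 2e-4, 0.003e-4
a, b = 1.408e-1, 3.913e-5
dP_dV = -R*T_c/(Vm_c - b)**2 + 2*a/Vm_c**3 # partial derivative of P with respect to molar volume
dP_dT = R/(Vm_c - b) # partial derivative of P with respect to temperature
print(np.sqrt((dP_dV*alpha_Vm)**2 + (dP_dT*alpha_T)**2)/1e6) # uncertainty in MPa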
###Markdown
(iii) Repeat the calculations above for >$V_m = (2.000\pm0.003)\times10^{-3}\,{\rm m}^{3}\,{\rm mol}^{-1}$ and $T=400.0 \pm 0.2K$.
###Code
def two_iii():
'''Your function should return both the pressure and the uncertainty'''
pressure_2 = 0
uncertainty_2 = 0
R = 8.3145
T,alpha_t = 400,0.2
v_m,alpha_vm = 2*10**-3,0.003*10**-3
b = 3.913*10**-5
a = 1.408*10**-1
pressure_2 = (((R*T)/(v_m-b))-(a/(v_m**2)))
alpha_v= ((R*T)/((v_m+alpha_vm)-b))-(a/((v_m+alpha_vm)**2))-pressure_2
alpha_t = ((R*(T+alpha_t))/(v_m-b))-(a/(v_m**2))-pressure_2
uncertainty_2 = (np.sqrt((alpha_v**2)+(alpha_t**2)))/1000000
pressure_2 = pressure_2/10**6
return pressure_2,uncertainty_2
two_iii()
###Output
_____no_output_____
###Markdown
Question 3: Reverse Engineering the Incredible GoalThe separation of the posts is 7.32m, and the ball is struck from a point 22m from the near post and 29m from the far post.Required:> (i) Plot a graph of $\alpha_\theta$ on the y-axis vs $\alpha_L$ on the x-axis for the range of values presented in the `alpha_ls` array.>(ii) To what (common) precision must these three lengths be known to justify quoting the angle to 11 significant figures? *[Hint: use the functional approach using the values for the errors in the length measurements as the provided `alpha_ls` array]* (i) Plot a graph of $\alpha_\theta$ on the y-axis vs $\alpha_L$ on the x-axis for the range of values presented in the `alpha_ls` array.
###Code
def three_i():
'''Your function should plot the graph and return the alpha_thetas array.'''
a = 22
b = 29
c = 7.32
alpha_ls = [0.1, 0.01, 0.001, 0.0001, 1e-05, 1e-06, 1e-07, 1e-08, 1e-09, 1e-10, 1e-11, 1e-12]
alpha_thetas = []
# Cosine rule
theta_rad = np.arccos(((b**2)+(c**2)-(a**2))/(2*b*c))
for i in alpha_ls:
error = i
# Perform the functional approach
alpha_az = np.arccos(((b**2)+(c**2)-((a+error)**2))/(2*b*c))-theta_rad
alpha_bz = np.arccos((((b+error)**2)+(c**2)-(a**2))/(2*(b+error)*c))-theta_rad
alpha_cz = np.arccos(((b**2)+((c+error)**2)-(a**2))/(2*b*(c+error)))-theta_rad
alpha_theta = np.sqrt(((alpha_az)**2)+((alpha_bz)**2)+((alpha_cz)**2))
alpha_thetas.append(alpha_theta)
plt.plot(alpha_ls,alpha_thetas)
#Plot on a log scale
plt.yscale('log')
plt.xscale('log')
plt.xlabel(r'$\alpha_L$')
plt.ylabel(r'$\alpha_\theta$')
plt.show()
return alpha_thetas
three_i()
###Output
_____no_output_____
###Markdown
(ii) To what (common) precision must these three lengths be known to justify quoting the angle to 11 significant figures? For the angle to be quoted to 11 significant figures, the error must be in the 10th decimal place (since the angle is in radians and we are assuming the angle to be >1 radian). This is the case first when the error in the angle is 7x$10^{-10}$, which corresponds to a common precision in the lenghts of $10^{-9}$. If the angle were <1 radian then the precision would be $10^{-10}$. Question 4: Linear Regression/Weighted FitThe data plotted in Fig 6.1(d) relating to the degradation of the signal to noise ratio from a frequency to voltage converter near harmonics of the mains frequency are listed below.\begin{equation}\begin{array}{lcccccc}\hline{\rm frequency~(Hz)} &10&20&30&40&50&60\\{\rm voltage~(mV)} &16&45&64&75&70&115\\{\rm error~(mV)} &5&5&5&5&30&5\\\hline{\rm frequency~(Hz)} &70&80&90&100&110&\\{\rm voltage~(mV)} &142&167&183&160&221&\\{\rm error~(mV)} &5&5&5&30&5&\\\hline\end{array} \end{equation}This data is also contained in the file 'linear_regression.csv'.Required: > (i) Calculate the best-fit gradient and intercept and associated errors using a weighted fit. You may use the `curve_fit` function.
###Code
data = pd.read_csv('linear_regression.csv')
frequencies = data.iloc[:,0]
voltages = data.iloc[:,1]
voltage_errors = data.iloc[:,2]
def f(frequencies, a,b):
return a*frequencies+b
def best_fit_params():
'''Your function should return the gradient,gradient_error,intercept,intercept_error'''
gradient = 0
gradient_error = 0
intercept = 0
intercept_error = 0
# Using the curve_fit function
popt, pcov = curve_fit(f, frequencies, voltages, sigma = voltage_errors)
perr = np.sqrt(np.diag(pcov))
gradient = popt[0]
gradient_error = perr[0]
intercept = popt[1]
intercept_error = perr[1]
return(gradient,gradient_error,intercept,intercept_error)
best_fit_params()
###Output
_____no_output_____
###Markdown
Question 5: Error bars from a $\chi^2$ minimisation|--|Unweighted | Weighted || --- | --- | --- || Gradient | (1.9$\pm$0.2)mV/Hz | (2.03$\pm$0.05)mV/Hz || Intercept | (0$\pm$1)x10mV | (-1$\pm$3)mV |Required:>(i) For the data set of the previous question, write your own code to perform a $\chi^2$ minimisation. You may use the `mininmize` function, but not the `curve_fit` function.>(ii) Verify that $\chi^{2}_{\rm{min}}$ is obtained for the same values of the parameters as are listed in the table above. >(iii) By following the procedure of $\chi^2\rightarrow\chi^2_{\rm min}+1$ outlined in Section 6.5 in Hughes and Hase and the figure above, check that the error bars for $m$ and $c$ are in agreement with the table above. Include explicitly the first 5 steps of the procedure shown in the figure above for the calculation of the slope. (i) For the data set of the previous question, write code to perform a $\chi^2$ minimisation.
###Code
data = pd.read_csv('linear_regression.csv')
frequencies = data.iloc[:,0]
voltages = data.iloc[:,1]
voltage_errors = data.iloc[:,2]
# Note: normalising the data can reduce the dependence on the minimiser's starting position;
# it is not applied here, so the assignments below are effectively no-ops.
voltages = voltages
voltage_errors = voltage_errors
def chisqfunc(x):
'Linear chi squared fitting function'
a,b = x
model = a + b*frequencies
chisq = np.sum(((voltages - model)/voltage_errors)**2)
return chisq
# Set starting position from which to minimize
x0 = np.array([0,0])
# Use scipy.optimize.minimize
result = minimize(chisqfunc, x0)
a,b=result.x
# Good practice to plot graph to show the fit to check its quality
plt.errorbar(frequencies,voltages, label = 'data', yerr=voltage_errors,fmt='o',capsize=2.5)
plt.plot(frequencies,a+b*frequencies, label = 'chi squared fit')
plt.xlabel('Frequency (Hz)')
plt.ylabel('Voltage (mV)')
plt.legend()
plt.show()
print ('The gradient is {} mV/Hz and the intecept is {} mV'.format(b,a))
###Output
_____no_output_____
###Markdown
Note: the fit is good and the only points further away from the line are the two with very large error bars. (ii) Verify that $\chi^{2}_{\rm{min}}$ is obtained for the same values of the parameters as are listed in the table above.
###Code
data = pd.read_csv('linear_regression.csv')
def five_ii():
'''Your function must return the gradient and intercept'''
gradient = b
intercept = a
return gradient,intercept
five_ii()
###Output
_____no_output_____
###Markdown
(iii) By following the procedure of $\chi^2\rightarrow\chi^2_{\rm min}+1$ outlined in Section 6.5 in Hughes and Hase and the figure above, check that the error bars for $m$ and $c$ are in agreement with the table above. Include explicitly the first 5 steps of the procedure shown in the figure above for the calculation of the slope.
###Code
def search(precision, z_min, f_min):
z_new = z_min
f_new = f_min
m_stop = 0
c_stop = 0
while(m_stop == 0 or c_stop == 0):
m_stop = 1
c_stop = 1
# There are more optimal ways than doing a while(1) function but this suffices for this solution
while (1):
# changing gradient
z_new[1] = z_new[1] + (1 / precision)
f_new = chisqfunc(z_new)
if (f_new - f_min) >= 1 :
z_new[1] = z_new[1] - (1 / precision)
break;
else :
m_stop = 0
while (1):
# changing intercept
z_new[0] = z_new[0] - (1 / precision)
f_new = chisqfunc(z_new)
if (f_new - f_min) >= 1 :
z_new[0] = z_new[0] + (1 / precision)
break;
else:
c_stop = 0
return z_new
def five_iii():
'''
Adapted from one of the submitted homework solutions - thank you to whoever did this
The function has been adapted to make it slightly more efficient but I liked the custom search function
'''
error_m = 0
error_c = 0
x0 = [0,0]
result_weight = minimize(chisqfunc, x0)
precision = 1000
result_shift_weight = search(1000, result_weight.x, result_weight.fun)
result_weight = minimize(chisqfunc, x0)
error_m_weight = round(abs(result_shift_weight[1] - result_weight.x[1]), 2)
error_c_weight = round(abs(result_shift_weight[0] - result_weight.x[0]), 0)
error_m = error_m_weight
error_c = error_c_weight
return error_m, error_c
five_iii()
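# A hedged cross-check (sketch): near the minimum, chi^2 is approximately quadratic, so the
# parameter covariance is about 2*(Hessian of chi^2)^-1 and BFGS's inverse-Hessian estimate
# gives rough error bars directly (a numerical approximation only):
# res = minimize(chisqfunc, np.array([0, 0]))
# print(np.sqrt(2 * np.diag(res.hess_inv))) # approx. [intercept error, gradient error]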
###Output
_____no_output_____
###Markdown
Question 6- Strategies for error bars\begin{equation}\begin{array}{lccccc}\hlinex &1&2&3&4&5\\y &51&103&150&199&251\\\alpha_{y} &1&1&2&2&3\\\hlinex &6&7&8&9&10\\y &303&347&398&452&512\\\alpha_{y} &3&4&5&6&7\\\hline\end{array} \end{equation}Required:>(i) Calculate the weighted best-fit values of the slope, intercept, and their uncertainties.>(ii) If the data set had been homoscedastic, with all the errors equal, $\alpha_{y}=4$, calculate the weighted best-fit values of the slope, intercept, and their uncertainties.>(iii) If the experimenter took greater time to collect the first and last data points, for which $\alpha_{y}=1$, at the expense of all of the other data points, for which $\alpha_{y}=8$, calculate the weighted best-fit values of the slope, intercept, and their uncertainties.>(iv) Comment on your results.>(v) Plot the original data from the table including error bars. On the same plot, show the fitted function calculated in (i). (i) Calculate the weighted best-fit values of the slope, intercept, and their uncertainties.
###Code
xs = [1,2,3,4,5,6,7,8,9,10]
ys = [51,103,150,199,251,303,347,398,452,512]
ay = [1,1,2,2,3,3,4,5,6,7]
def f(x,a,b):
return(a*x+b)
def six_i():
'Returns slope, intercept, slope uncertainty and intercept uncertainty'
errors = [1,1,2,2,3,3,4,5,6,7]
slope = 0
intercept = 0
slope_uncertainty = 0
intercept_uncertainty = 0
popt, pcov = curve_fit(f, xs, ys, sigma = errors,p0 = [49,1.7])
perr = np.sqrt(np.diag(pcov))
slope = popt[0]
intercept = popt[1]
slope_uncertainty = perr[0]
intercept_uncertainty = perr[1]
return slope,intercept,slope_uncertainty,intercept_uncertainty
six_i()
###Output
_____no_output_____
###Markdown
(ii) If the data set had been homoscedastic, with all the errors equal, $\alpha_{y}=4$, calculate the weighted best-fit values of the slope, intercept, and their uncertainties.
###Code
def six_ii():
errors = [4,4,4,4,4,4,4,4,4,4]
slope = 0
intercept = 0
slope_uncertainty = 0
intercept_uncertainty = 0
popt, pcov = curve_fit(f, xs, ys, sigma = errors)
perr = np.sqrt(np.diag(pcov))
slope = popt[0]
intercept = popt[1]
slope_uncertainty = perr[0]
intercept_uncertainty = perr[1]
return slope,intercept,slope_uncertainty,intercept_uncertainty
six_ii()
###Output
_____no_output_____
###Markdown
(iii) If the experimenter took greater time to collect the first and last data points, for which $\alpha_{y}=1$, at the expense of all of the other data points, for which $\alpha_{y}=8$, calculate the weighted best-fit values of the slope, intercept, and their uncertainties.
###Code
def six_iii():
errors = [1,8,8,8,8,8,8,8,8,1]
slope = 0
intercept = 0
slope_uncertainty = 0
intercept_uncertainty = 0
popt, pcov = curve_fit(f, xs, ys, sigma = errors)
perr = np.sqrt(np.diag(pcov))
slope = popt[0]
intercept = popt[1]
slope_uncertainty = perr[0]
intercept_uncertainty = perr[1]
# Note: the error on the fit parameters is lower here because the first and last data points were measured with much smaller uncertainties.
return slope,intercept,slope_uncertainty,intercept_uncertainty
six_iii()
###Output
_____no_output_____
###Markdown
(iv) Comment on your results. Comparing the uncertainties of the fits found in parts (i), (ii) and (iii), we find that the uncertainty on the slope is smallest for (iii). I.e. having small uncertainties for the extremal points is important for a good determination of the slope. Precise measurements of the data points close to x=0 reduce the uncertainty on the intercept. This is why we find relatively small (and similar) uncertainties on the intercept in scenarios (i) and (iii), where the uncertainty on the y-position of the x=1 data point is 1 in both cases. (v) Plot the original data from the table including error bars. On the same plot, show the fitted function calculated in (i).
###Code
m,c,_,_ = six_i()
plt.errorbar(xs,ys,yerr=ay,fmt='o',capsize=2.5, label = 'data')
plt.plot(np.arange(11),m*np.arange(11)+c, label = 'weighted fit')
plt.legend()
plt.show()
###Output
_____no_output_____
|
doc/AdminBackground/PHY321/CM_Jupyter_Notebooks/Answers/.ipynb_checkpoints/CM_Notebook2_Answers-checkpoint.ipynb
|
###Markdown
Classical Mechanics - Week 2 Last week we:- Analytically mapped 1D motion over some time- Gained practice with functions- Reviewed vectors and matrices in Python This week we will:- Practice using Python syntax and variable manipulation- Utilize analytical solutions to create more refined functions- Work in 3 Dimensions
###Code
## As usual, here are some useful packages we will be using. Feel free to use more and experiment as you wish.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
%matplotlib inline
###Output
_____no_output_____
###Markdown
We previously plotted our variable as an equation. However, this week we will begin storing the position information within vectors, implemented through arrays in code. Let's get some practice with this. The cell below creates two arrays, one containing the times to be analyzed and the other containing the x and y components of the position vector at each point in time. The second array is initially empty. Then it sets the initial position to x = 2 and y = 1. Take a look at the code and comments to get a better understanding of what's happening.
###Code
tf = 4 #length of value to be analyzed
dt = .001 # step sizes
t = np.arange(0.0,tf,dt) # Creates an evenly spaced time array going from 0 to 3.999, with step sizes .001
p = np.zeros((len(t), 2)) # Creates an empty array of [x,y] arrays (our vectors). Array size is same as the one for time.
p[0] = [2.0,1.0] # This sets the inital position to be x = 2 and y = 1
###Output
_____no_output_____
###Markdown
Below we are printing specific values in our array to see what's being stored where. The first number in p[] selects which array (row) we are looking at, while the number after the "," selects which element within that array we get back. Notice how the indices start at 0. Feel free to mess around with this as much as you want.
###Code
print(p[0]) # Prints the first array
print(p[0,:]) # Same as above, these commands are interchangeable
print(p[3999]) # Prints the 4000th array
print(p[0,0]) # Prints the first value of the first array
print(p[0,1]) # Prints the second value of first array
print(p[:,0]) # Prints the first value of all the arrays
# Try running this cell. Notice how it gives an error since we did not implement a third dimension into our arrays
print(p[:,2])
###Output
_____no_output_____
###Markdown
In the cell below we want to manipulate the arrays. Our goal is to make each vector's x component equal to that vector's index in the array, with the y value twice that index, EXCEPT the first vector, which we have already set, i.e. $p[0] = [2,1], p[1] = [1,2], p[2] = [2,4], p[3] = [3,6], ...$ The skeleton code has already been provided for you, along with hints. Your job is to complete the code, execute it, and then run the checker in the cell below it. We will be using a for loop and an if statement in the checker code. If your code is working, the cell with the checker should print "Success!" If "There is an error in your code" appears, look to see where the error in your code is and re-run the checker cell until you get the success message.
###Code
for i in range(1,len(p)): # fill every row after the first (indices 1 to 3999)
p[i] = [i,2*i] # What equation should you put in the x and y components?
# Checker cell to make sure your code is performing correctly
c = 0
for i in range(0,3999):
if i == 0:
if p[i,0] != 2.0:
c += 1
if p[i,1] != 1.0:
c += 1
else:
if p[i,0] != 1.0*i:
c += 1
if p[i,1] != 2.0*i:
c += 1
if c == 0:
print("Success!")
else:
print("There is an error in your code")
###Output
Success!
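As an aside, the same fill can be done without an explicit loop using NumPy broadcasting (a sketch, equivalent to the loop above):
idx = np.arange(1, len(p)) # indices 1 .. 3999
p[1:] = np.column_stack((idx, 2*idx)) # x = i, y = 2*i for every row after the first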
###Markdown
Last week: We made basic plots of a ball moving in 1D space using Physics I equations, coded within a function. However, this week we will be working with slightly more advanced concepts for the same problem. You learned to derive the equations of motion starting from the force in class/reading and how to solve the integrals analytically. Now let's use those concepts to analyze such phenomena. Assume we have a soccer ball moving in 3 dimensions with the following trajectory:- $x(t) = 10t\cos{45^{\circ}} $- $y(t) = 10t\sin{45^{\circ}} $- $z(t) = 10t - \dfrac{9.81}{2}t^2$ Now let's create a 3D plot using these equations. In the cell below write the equations into their respective labels. The time we want to analyze is already provided for you along with $x(t)$. **Important Concept:** Numpy comes with many mathematical functions, among them the trigonometric functions sine, cosine and tangent. We are going to utilize these this week. Additionally, these functions work with radians, so we will also be using a function from numpy that converts degrees to radians.
###Code
tf = 2.04 # The final time to be evaluated
dt = 0.1 # The time step size
t = np.arange(0,tf,dt) # The time array
theta_deg = 45 # Degrees
theta_rad = np.radians(theta_deg) # Converts our degrees to its radians counterpart
x = 10*t*np.cos(theta_rad) # Equation for our x component, utilizing np.cos() and our calculated radians
y = 10*t*np.sin(theta_rad) # Put the y equation here
z = 10*t-9.81/2*t**2# Put the z equation here
## Once you have entered the proper equations in the cell above, run this cell to plot in 3D
fig = plt.axes(projection='3d')
fig.set_xlabel('x')
fig.set_ylabel('y')
fig.set_zlabel('z')
fig.scatter(x,y,z)
###Output
_____no_output_____
###Markdown
Q1.) How would you express $x(t)$, $y(t)$, $z(t)$ for this problem as a single vector, $\vec{r}(t)$? &9989; Double click this cell, erase its content, and put your answer to the above question here. In the cell below we will put the equations into a single vector, $\vec{r}$. Fix any bugs you find and comment the fixes you made in the line(s) below the $\vec{r}$ array. Comments are made by putting a before inputs in Python(***Hints:*** Compare the equations used in $\vec{r}$ to the ones above. Also, don't be afraid to run the cell and see what error message comes up)
###Code
r = np.array((10*t*np.cos(theta_rad), 10*t*np.sin(theta_rad), 10*t - 9.81/2*t**2))
## Run this code to plot using our r array
fig = plt.axes(projection='3d')
fig.set_xlabel('x')
fig.set_ylabel('y')
fig.set_zlabel('z')
fig.scatter(r[0],r[1],r[2])
###Output
_____no_output_____
###Markdown
Q2.) What do you think the benefits and/or disadvantages are from expressing our 3 equations as a single array/vector? This can be both from a computational and physics stand point. &9989; Double click this cell, erase its content, and put your answer to the above question here. The cell bellow prints the maximum $x$ component from our $\vec{r}$ vector using the numpy package. Use the numpy package to also print the maximum $y$ and $z$ components **FROM** our $\vec{r}$.
###Code
print("Maximum x value is: ", np.max(r[0]))
print("Maximum y value is: ", np.max(r[1]))
print("Maximum z value is: ", np.max(r[2]))
## Put the code for printing out our maximum y and z values here
###Output
Maximum x value is: 14.142135623730951
Maximum y value is: 14.14213562373095
Maximum z value is: 5.095
###Markdown
Complete Taylor Question 1.35 before moving further. (Recall that the golf ball is hit due east at an angle $\theta$ with respect to the horizontal, and the coordinate directions are $x$ measured east, $y$ north, and $z$ vertically up.) Q3.) What is the analytical solution for our theoretical golf ball's position $\vec{r}(t)$ over time from Taylor Question 1.35? Also what is the formula for the time $t_f$ when the golf ball returns to the ground? &9989; Double click this cell, erase its content, and put your answers to the above questions here. Using what we learned in this notebook and the previous one, set up a function called Golfball in the cell below that utilizes our analytical solution from above. This function should take in an initial velocity, vi, and the angle $\theta$ that the golfball was hit in degrees. It should then return a 3D graph of the motion. Also include code in the function to print the maximum $x$, $y$, and $z$ as above. (A skeleton with hints has already been provided for you)
###Code
def Golfball(vi, theta_deg):
theta_rad = np.radians(theta_deg)
vix = vi*np.cos(theta_rad) # Put formulae to obtain initial velocity components here.
viz = vi*np.sin(theta_rad)
# g is already given to you here
g = 9.81 # in m/s^2
# Set up the time array
tf = 2*viz/g # Use the formula for tf from Taylor (1.35) to determine the length of the time array
dt = 0.1 # Choose the time step size to be 0.1 seconds.
t = np.arange(0,tf,dt) # Define the time array here
# Code for position vector here
r = np.array((vix*t, np.zeros_like(t), viz*t - 0.5*g*t**2)) # meters; y stays zero since the ball is hit due east
## Put code to print maximum x, y, z values here
print("Maximum x value is: ", np.max(r[0]))
print("Maximum y value is: ", np.max(r[1]))
print("Maximum z value is: ", np.max(r[2]))
## Put code for plotting in 3d here
fig = plt.axes(projection='3d')
fig.set_xlabel('x')
fig.set_ylabel('y')
fig.set_zlabel('z')
fig.scatter(r[0],r[1],r[2])
###Output
_____no_output_____
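Before calling the function, a quick analytic cross-check (a sketch, assuming the drag-free model above): the maximum height should be $v_i^2\sin^2\theta/(2g)$ and the range $v_i^2\sin(2\theta)/g$; the maximum $x$ printed by Golfball falls slightly short of the analytic range only because the 0.1 s time grid stops just before the landing time.
vi_check, theta_check, g = 90.0, np.radians(30.0), 9.81
print("Analytic max height:", vi_check**2 * np.sin(theta_check)**2 / (2*g), "m")
print("Analytic range:", vi_check**2 * np.sin(2*theta_check) / g, "m")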
###Markdown
In the cell below create lines of code that asks the user to input an initial velocity (in m/s) and the angle (in degrees). Then use these inputs on the created Golfball function. Play around with the values to see if your function is properly working. Hint: If you get stuck, look at the previous notebook. We did something very similar to this last time.
###Code
Vi = float(input("Please enter initial velocity (m/s) "))
Theta_deg = float(input("Please enter initial angle (degrees) "))
# Now below, we will call upon the BallToss function and plug in our asked values
Golfball(Vi, Theta_deg)
###Output
Please enter initial velocity (m/s) 90
Please enter initial angle (degrees) 30
Maximum x value is: 709.2748056994552
Maximum y value is: 0
Maximum z value is: 103.21019999999997
###Markdown
Call the Golfball function in the empty cell provided below and produce a graph of the solution for initial conditions $v_i=90$ m/s and $\theta=30^\circ$.
###Code
Golfball(90, 30)
###Output
Maximum x value is: 709.2748056994552
Maximum y value is: 0
Maximum z value is: 103.21019999999997
###Markdown
Q4.) Given initial values of $v_i = 90 m/s$, $\theta = 30^{\circ}$, what would our maximum x, y and z components be? &9989; Double click this cell, erase its content, and put your answer to the above question here. Call the Golfball function in the empty cell provided below and produce a graph of the solution for initial conditions $v_i=45$ m/s and $\theta=45^\circ$.
###Code
Golfball(45, 45)
###Output
Maximum x value is: 203.6467529817257
Maximum y value is: 0
Maximum z value is: 51.59617649086283
###Markdown
Q5.) Given initial values of $v_i = 45 m/s$, $\theta = 45^{\circ}$, what would our maximum x, y and z components be? &9989; Double click this cell, erase its content, and put your answer to the above question here. Notebook Wrap-up. Run the cell below and copy-paste your answers into their corresponding cells.Rename the notebook to CM_Week2_[Insert Your Name Here].ipynb and submit it to D2L dropbox.
###Code
from IPython.display import HTML
HTML(
"""
<iframe
src="https://forms.gle/gLVojgAUYao9K8VR7"
width="100%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
###Output
_____no_output_____
|
code/section1/video1_5.ipynb
|
###Markdown
Mastering PyTorch Supervised learning Visualize the training in Visdom Accompanying notebook to Video 1.5
###Code
# Include libraries
import numpy as np
from PIL import Image
import os
import random
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from torch.autograd import Variable
from torchvision import transforms
import torchvision.transforms.functional as TF
from utils import get_image_name, get_number_of_cells, \
split_data, download_data, SEED
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
%matplotlib inline
root = './'
download_data(root=root)
data_paths = os.path.join('./', 'data_paths.txt')
if not os.path.exists(data_paths):
!wget http://pbialecki.de/mastering_pytorch/data_paths.txt
if not os.path.isfile(data_paths):
print('data_paths.txt missing!')
# Setup Globals
use_cuda = torch.cuda.is_available()
data_paths = os.path.join('./', 'data', 'data_paths.txt')
np.random.seed(SEED)
torch.manual_seed(SEED)
if use_cuda:
torch.cuda.manual_seed(SEED)
print('Using: {}'.format(torch.cuda.get_device_name(0)))
print_steps = 10
# Utility functions
def weights_init(m):
'''
Initialize the weights of each Conv2d layer using xavier_uniform
("Understanding the difficulty of training deep feedforward
neural networks" - Glorot, X. & Bengio, Y. (2010))
'''
if isinstance(m, nn.Conv2d):
nn.init.xavier_uniform(m.weight.data)
elif isinstance(m, nn.ConvTranspose2d):
nn.init.xavier_uniform(m.weight.data)
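# Usage (sketch): model.apply(weights_init) applies this initializer recursively to every submodule.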
def dice_loss(y_target, y_pred, smooth=0.0):
y_target = y_target.view(-1)
y_pred = y_pred.view(-1)
intersection = (y_target * y_pred).sum()
dice_coef = (2. * intersection + smooth) / (
y_target.sum() + y_pred.sum() + smooth)
return 1. - dice_coef
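# Usage (sketch): with outputs already squashed by a sigmoid, as in ResUNet below,
# loss = dice_loss(mask, output) can serve directly as the training criterion; the `smooth`
# term guards against division by zero when prediction and target are both empty.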
class CellDataset(Dataset):
def __init__(self, image_paths, target_paths, size, train=False):
self.image_paths = image_paths
self.target_paths = target_paths
self.size = size
resize_size = [s+10 for s in self.size]
self.resize_image = transforms.Resize(
size=resize_size, interpolation=Image.BILINEAR)
self.resize_mask = transforms.Resize(
size=resize_size, interpolation=Image.NEAREST)
self.train = train
def transform(self, image, mask):
# Resize
image = self.resize_image(image)
mask = self.resize_mask(mask)
# Perform data augmentation
if self.train:
# Random cropping
i, j, h, w = transforms.RandomCrop.get_params(
image, output_size=self.size)
image = TF.crop(image, i, j, h, w)
mask = TF.crop(mask, i, j, h, w)
# Random horizontal flipping
if random.random() > 0.5:
image = TF.hflip(image)
mask = TF.hflip(mask)
# Random vertical flipping
if random.random() > 0.5:
image = TF.vflip(image)
mask = TF.vflip(mask)
else:
center_crop = transforms.CenterCrop(self.size)
image = center_crop(image)
mask = center_crop(mask)
# Transform to tensor
image = TF.to_tensor(image)
mask = TF.to_tensor(mask)
return image, mask
def __getitem__(self, index):
image = Image.open(self.image_paths[index])
mask = Image.open(self.target_paths[index])
x, y = self.transform(image, mask)
return x, y
def __len__(self):
return len(self.image_paths)
def get_random_sample(dataset):
'''
Get a random sample from the specified dataset.
'''
data, target = dataset[int(np.random.choice(len(dataset), 1))]
data.unsqueeze_(0)
target.unsqueeze_(0)
if use_cuda:
data = data.cuda()
target = target.cuda()
data = Variable(data)
target = Variable(target)
return data, target
class BaseConv(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, padding,
stride):
super(BaseConv, self).__init__()
self.act = nn.ReLU()
        # Note: nn.Conv2d's positional order is (in, out, kernel_size, stride, padding),
        # so padding and stride are passed by keyword to land in the intended slots.
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size,
                               stride=stride, padding=padding)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size,
                               stride=stride, padding=padding)
        self.downsample = None
        if in_channels != out_channels:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size,
                          stride=stride, padding=padding)
            )
def forward(self, x):
residual = x
out = self.act(self.conv1(x))
out = self.conv2(out)
if self.downsample:
residual = self.downsample(x)
out += residual
out = self.act(out)
return out
class DownConv(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, padding,
stride):
super(DownConv, self).__init__()
self.pool1 = nn.MaxPool2d(kernel_size=2)
self.conv_block = BaseConv(in_channels, out_channels, kernel_size,
padding, stride)
def forward(self, x):
x = self.pool1(x)
x = self.conv_block(x)
return x
class UpConv(nn.Module):
def __init__(self, in_channels, in_channels_skip, out_channels,
kernel_size, padding, stride):
super(UpConv, self).__init__()
self.conv_trans1 = nn.ConvTranspose2d(
in_channels, in_channels, kernel_size=2, padding=0, stride=2)
self.conv_block = BaseConv(
in_channels=in_channels + in_channels_skip,
out_channels=out_channels,
kernel_size=kernel_size,
padding=padding,
stride=stride)
def forward(self, x, x_skip):
x = self.conv_trans1(x)
x = torch.cat((x, x_skip), dim=1)
x = self.conv_block(x)
return x
class ResUNet(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, padding,
stride):
super(ResUNet, self).__init__()
self.init_conv = BaseConv(in_channels, out_channels, kernel_size, padding, stride)
self.down1 = DownConv(out_channels, 2 * out_channels, kernel_size,
padding, stride)
self.down2 = DownConv(2 * out_channels, 4 * out_channels, kernel_size,
padding, stride)
self.down3 = DownConv(4 * out_channels, 8 * out_channels, kernel_size,
padding, stride)
self.up3 = UpConv(8 * out_channels, 4 * out_channels, 4 * out_channels,
kernel_size, padding, stride)
self.up2 = UpConv(4 * out_channels, 2 * out_channels, 2 * out_channels,
kernel_size, padding, stride)
self.up1 = UpConv(2 * out_channels, out_channels, out_channels,
kernel_size, padding, stride)
        self.out = nn.Conv2d(out_channels, 1, kernel_size,
                             stride=stride, padding=padding)
def forward(self, x):
# Encoder
x = self.init_conv(x)
x1 = self.down1(x)
x2 = self.down2(x1)
x3 = self.down3(x2)
# Decoder
x_up = self.up3(x3, x2)
x_up = self.up2(x_up, x1)
x_up = self.up1(x_up, x)
x_out = F.sigmoid(self.out(x_up))
return x_out
def train(epoch, visualize=False):
'''
Main training loop
'''
global win_loss
global win_images
# Set model to train mode
model.train()
# Iterate training set
for batch_idx, (data, mask) in enumerate(train_loader):
if use_cuda:
data = data.cuda()
mask = mask.cuda()
data = Variable(data)
mask = Variable(mask.squeeze())
optimizer.zero_grad()
output = model(data)
loss_mask = criterion(output.squeeze(), mask)
loss_dice = dice_loss(mask, output.squeeze())
loss = loss_mask + loss_dice
loss.backward()
optimizer.step()
if batch_idx % print_steps == 0:
loss_mask_data = loss_mask.data[0]
loss_dice_data = loss_dice.data[0]
train_losses.append(loss_mask_data)
print(
'Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}\tLoss(dice): {:.6f}'.
format(epoch, batch_idx * len(data),
len(train_loader.dataset), 100. * batch_idx / len(
train_loader), loss_mask_data, loss_dice_data))
x_idx = (epoch - 1) * len(train_loader) + batch_idx
losses = [loss_mask_data, loss_dice_data]
win_loss = visualize_losses(losses, x_idx, win_loss)
if visualize:
# Visualize some images in Visdom
nb_images = 4
images_pred = output.data[:nb_images].cpu()
images_target = mask.data[:nb_images].cpu().unsqueeze(1)
images_input = data.data[:nb_images].cpu()
images = torch.zeros(3 * images_pred.size(0), *images_pred.size()[1:])
images[::3] = images_input
images[1::3] = images_pred
images[2::3] = images_target
# Resize images to fit in visdom
images = resize_tensors(images)
images = make_grid(images, nrow=3, pad_value=0.5)
win_images = visualize_images(
images.numpy(), win_images, title='Training: input - prediction - target')
def validate():
'''
Validation loop
'''
global win_eval_images
# Set model to eval mode
model.eval()
# Setup val_loss
val_mask_loss = 0
val_dice_loss = 0
# Disable gradients (to save memory)
with torch.no_grad():
# Iterate validation set
for data, mask in val_loader:
if use_cuda:
data = data.cuda()
mask = mask.cuda()
data = Variable(data)
mask = Variable(mask.squeeze())
output = model(data)
val_mask_loss += F.binary_cross_entropy(output.squeeze(), mask).data[0]
val_dice_loss += dice_loss(mask, output.squeeze()).data[0]
# Calculate mean of validation loss
val_mask_loss /= len(val_loader)
val_dice_loss /= len(val_loader)
val_losses.append(val_mask_loss)
print('Validation\tLoss: {:.6f}\tLoss(dice): {:.6f}'.format(val_mask_loss, val_dice_loss))
# Visualize some images in Visdom
nb_images = 4
images_pred = output.data[:nb_images].cpu()
images_target = mask.data[:nb_images].cpu().unsqueeze(1)
images_input = data.data[:nb_images].cpu()
images = torch.zeros(3 * images_pred.size(0), *images_pred.size()[1:])
images[::3] = images_input
images[1::3] = images_pred
images[2::3] = images_target
# Resize images to fit in visdom
images = resize_tensors(images)
images = make_grid(images, nrow=3, pad_value=0.5)
win_eval_images = visualize_images(
images.numpy(), win_eval_images, title='Validation: input - prediction - target')
# Get train data folders and split to training / validation set
with open(data_paths, 'r') as f:
data_paths = f.readlines()
image_paths = [line.split(',')[0].strip() for line in data_paths]
target_paths = [line.split(',')[1].strip() for line in data_paths]
# Split data into train/validation datasets
im_path_train, im_path_val, tar_path_train, tar_path_val = split_data(
image_paths, target_paths)
# Create datasets
train_dataset = CellDataset(
image_paths=im_path_train,
target_paths=tar_path_train,
size=(96, 96),
train=True
)
val_dataset = CellDataset(
image_paths=im_path_val,
target_paths=tar_path_val,
size=(96, 96),
train=False
)
# Wrap in DataLoader
train_loader = DataLoader(
dataset=train_dataset,
batch_size=32,
num_workers=12,
shuffle=True
)
val_loader = DataLoader(
dataset=val_dataset,
batch_size=64,
num_workers=12,
shuffle=True
)
# Create model
model = ResUNet(
in_channels=1, out_channels=32, kernel_size=3, padding=1, stride=1)
# Initialize weights
model.apply(weights_init)
# Push to GPU, if available
if use_cuda:
model.cuda()
# Create optimizer and scheduler
optimizer = optim.SGD(model.parameters(), lr=1e-3)
# Create criterion
criterion = nn.BCELoss()
###Output
_____no_output_____
###Markdown
Create visdom helper functions
###Code
from visdom import Visdom
from torchvision.utils import make_grid
# Setup visdom
viz = Visdom(port=6006)
win_loss = None
win_images = None
win_eval_loss = None
win_eval_images = None
def visualize_losses(losses, x_idx, win):
if not win:
win = viz.line(
Y=np.column_stack(losses),
X=np.column_stack([x_idx] * len(losses)),
opts=dict(
showlegend=True,
xlabel='iteration',
ylabel='BCELoss',
ytype='log',
title='Losses',
legend=['Loss(mask)', 'Loss(dice)']))
else:
win = viz.line(
Y=np.column_stack(losses),
X=np.column_stack([x_idx] * len(losses)),
opts=dict(showlegend=True),
win=win,
update='append')
return win
def visualize_images(images, win, title=''):
if not win:
win = viz.images(tensor=images, opts=dict(title=title))
else:
win = viz.images(tensor=images, win=win, opts=dict(title=title))
return win
def resize_tensors(tensors, size=(128, 128)):
to_pil = transforms.ToPILImage()
res = transforms.Resize(size=size)
to_tensor = transforms.ToTensor()
images = torch.stack([to_tensor(res(to_pil(t))) for t in tensors])
return images
# Start training
train_losses, val_losses = [], []
epochs = 30
for epoch in range(1, epochs):
train(epoch, visualize=True)
validate()
###Output
_____no_output_____
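###Markdown
 Note (an addition, not part of the original video notebook): the Visdom client created above with Visdom(port=6006) assumes a Visdom server is already listening on that port. A common way to start one, usually from a separate terminal since the server process keeps running, is sketched below; the port number is only chosen to match the client call above.
###Code
# Start a Visdom server on the port used by the client above.
# This is normally launched from a separate terminal session:
# python -m visdom.server -port 6006
###Output
_____no_output_____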
|
File Loader Script.ipynb
|
###Markdown
File Loader ScriptThe following is a file loader script that tweaks the existing sklearn.datasets load_files function to recursively load a nested directory structure. The following script expects you to have installed sklearn. If it is not already installed, run: pip install sklearn or sudo pip install sklearn Load text files with categories as subfolder names. Individual samples are assumed to be files stored in a hierarchical folder structure such as the following: container_folder/ Intermediate_folder1/...... category_1_folder/ file_1.txt file_2.txt ... file_42.txt Intermediate_folder1/...... category_2_folder/ file_43.txt file_44.txt ... The folder names are used as supervised signal label names. The individual file names are not important. This function does not try to extract features into a numpy array or scipy sparse matrix. In addition, if load_content is false it does not try to load the files in memory. To use text files in a scikit-learn classification or clustering algorithm, you will need to use the `sklearn.feature_extraction.text` module to build a feature extraction transformer that suits your problem. If you set load_content=True, you should also specify the encoding of the text using the 'encoding' parameter. For many modern text files, 'utf-8' will be the correct encoding. If you leave encoding equal to None, then the content will be made of bytes instead of Unicode, and you will not be able to use most functions in `sklearn.feature_extraction.text`. Similar feature extractors should be built for other kinds of unstructured data input such as images, audio, video, ... Parameters ---------- container_path : string or unicode Path to the main folder holding one subfolder per category level : integer Specify the depth of the directory structure that you wish to store as your category labels description : string or unicode, optional (default=None) A paragraph describing the characteristics of the dataset: its source, reference, etc. categories : A collection of strings or None, optional (default=None) If None (default), load all the categories. If not None, list of category names to load (other categories ignored). exclude : A collection of strings or None, optional (default=None) If None (default), load all the categories. If not None, category names listed in exclude are ignored. OneVsAll : A string (default=None) Creates a one-vs-all classifier with the parameter in the string. E.g. OneVsAll=['Agro'] will create a binary classifier where documents are classified into two classes, i.e. Agro vs Non-Agro. Only supported for level 1 for now. cat : Often given together with the categories parameter. If cat=1, then the categories parameter takes a list of level 1 classes; if cat=2, then the categories parameter takes a list of level 2 classes. load_content : boolean, optional (default=True) Whether or not to load the content of the different files. If true, a 'data' attribute containing the text information is present in the data structure returned. If not, a filenames attribute gives the path to the files. encoding : string or None (default is None) If None, do not try to decode the content of the files (e.g. for images or other non-text content). If not None, encoding to use to decode text files to Unicode if load_content is True. decode_error : {'strict', 'ignore', 'replace'}, optional Instruction on what to do if a byte sequence is given to analyze that contains characters not of the given `encoding`. Passed as keyword argument 'errors' to bytes.decode. 
shuffle : bool, optional (default=False) Whether or not to shuffle the data: might be important for models that make the assumption that the samples are independent and identically distributed (i.i.d.), such as stochastic gradient descent. random_state : int, RandomState instance or None, optional (default=0) If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by `np.random`. Returns ------- data : Bunch Dictionary-like object, the interesting attributes are: either data, the raw text data to learn, or 'filenames', the files holding it, 'target', the classification labels (integer index), 'target_names', the meaning of the labels, and 'DESCR', the full description of the dataset.
###Code
import os
import shutil
from os import environ
from os.path import dirname
from os.path import join
from os.path import exists
from os.path import expanduser
from os.path import isdir
from os import listdir
import glob
import numpy as np
from sklearn.utils import check_random_state
class MyError(Exception):
pass
class Bunch(dict):
def __init__(self, **kwargs):
dict.__init__(self, kwargs)
def __setattr__(self, key, value):
self[key] = value
def __getattr__(self, key):
try:
return self[key]
except KeyError:
raise AttributeError(key)
def __getstate__(self):
return self.__dict__
def load_files(container_path,level, description=None, OneVsAll=None, categories=None,
load_content=True,cat=0,exclude= None,encoding=None,shuffle=False,
decode_error='strict', random_state=0):
target = []
target_names = []
filenames = []
if level==1:
folders = [f for f in sorted(listdir(container_path))
if isdir(join(container_path, f))]
#print folders
elif level==2:
depth2= sorted(glob.glob(container_path+"/*/*"))
l0= [f[:f.rfind('/')] for f in depth2]
f1=[f[f.rfind('/')+1:] for f in l0]
folders = [f[(f.rfind('/'))+1:] for f in depth2]
else:
# quit the function and any function(s) that may have called it
raise MyError('Select Between Level 1 and Level 2')
if categories is not None and level==1:
folders = [f for f in folders if f in categories]
if categories is not None and level==2 and cat==1:
# when categories parameter takes a list of level 1 classes
folders=[f for f in folders if f.split('.')[0] in categories]
if categories is not None and level ==2 and cat ==2:
# when categories parameters takes a list of level 2 classes
folders=[f for f in folders if f in categories]
# print folders
if exclude is not None:
folders = [f for f in folders if f not in exclude]
if OneVsAll is None:
for label, folder in enumerate(folders):
target_names.append(folder)
if level==1:
folder_path = join(container_path, folder)
elif level==2:
for i in f1:
if folder.split('.')[0] == i:
folder_path = join(container_path,i,folder)
documents = [join(root, name) for root,dirs, files in sorted(os.walk(folder_path)) for name in files]
target.extend(len(documents)*[label])
filenames.extend(documents)
#print filenames
elif OneVsAll is not None and level==1:
if len(OneVsAll)==1:
target_names=[f for f in OneVsAll]
ComplemetClass = 'Non-'+OneVsAll[0]
target_names.append(ComplemetClass)
for label, folder in enumerate(folders):
folder_path = join(container_path, folder)
documents = [join(root, name) for root,dirs, files in sorted(os.walk(folder_path)) for name in files]
if folder==OneVsAll[0]:
label=0
target.extend(len(documents)*[label])
elif folder!=OneVsAll[0]:
label=1
target.extend(len(documents)*[label])
filenames.extend(documents)
else:
            raise MyError('OneVsAll can only be a list of size 1')
# convert to array for fancy indexing
filenames = np.array(filenames)
target = np.array(target)
if shuffle:
random_state = check_random_state(random_state)
indices = np.arange(filenames.shape[0])
random_state.shuffle(indices)
filenames = filenames[indices]
target = target[indices]
if load_content:
data = []
for filename in filenames:
with open(filename, 'rb') as f:
data.append(f.read())
if encoding is not None:
data = [d.decode(encoding, decode_error) for d in data]
return Bunch(data=data,
filenames=filenames,
target_names=target_names,
target=target,
DESCR=description)
return Bunch(filenames=filenames,
target_names=target_names,
target=target,
DESCR=description)
###Output
_____no_output_____
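###Markdown
 A usage sketch (an addition; the corpus path and category names below are hypothetical, only the keyword names come from the function defined above): load a two-level corpus, restrict it to one level-1 category, and decode the files as UTF-8.
###Code
# Hypothetical corpus layout: ./corpus/<level1>/<level1>.<subcategory>/<files>
# (the code above extracts the level-1 class with f.split('.')[0])
dataset = load_files(
    'corpus/',            # container_path (hypothetical)
    level=2,              # use the second directory level as labels
    categories=['Agro'],  # hypothetical level-1 category name
    cat=1,                # 'categories' refers to level-1 classes here
    load_content=True,
    encoding='utf-8',
    shuffle=True,
    random_state=42,
)
print(dataset.target_names)
print(len(dataset.data), 'documents loaded')
###Output
_____no_output_____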
|
notebooks/Inference_tacotron2_New_Zealand_English.ipynb
|
###Markdown
**Inference with the trained speech model and the trained vocoder model**First change your directory to the one containing the TensorFlowTTS files and install the dependencies required to run the code.
###Code
cd /content/drive/My Drive/projectFiles/TensorFlowTTS
!pip install TensorFlowTTS
###Output
Collecting TensorFlowTTS
[?25l Downloading https://files.pythonhosted.org/packages/24/4b/68860f419d848d2e4fb57c03194563aed2f4317227f93b91839482aa0723/TensorFlowTTS-0.9-py3-none-any.whl (112kB)
[K |███ | 10kB 16.0MB/s eta 0:00:01
[K |█████▉ | 20kB 15.5MB/s eta 0:00:01
[K |████████▊ | 30kB 7.6MB/s eta 0:00:01
[K |███████████▋ | 40kB 8.6MB/s eta 0:00:01
[K |██████████████▋ | 51kB 4.1MB/s eta 0:00:01
[K |█████████████████▌ | 61kB 4.7MB/s eta 0:00:01
[K |████████████████████▍ | 71kB 4.8MB/s eta 0:00:01
[K |███████████████████████▎ | 81kB 4.9MB/s eta 0:00:01
[K |██████████████████████████▏ | 92kB 5.3MB/s eta 0:00:01
[K |█████████████████████████████▏ | 102kB 5.6MB/s eta 0:00:01
[K |████████████████████████████████| 112kB 5.6MB/s
[?25hRequirement already satisfied: h5py>=2.10.0 in /usr/local/lib/python3.6/dist-packages (from TensorFlowTTS) (2.10.0)
Collecting tensorflow-gpu>=2.3.0
[?25l Downloading https://files.pythonhosted.org/packages/18/99/ac32fd13d56e40d4c3e6150030132519997c0bb1f06f448d970e81b177e5/tensorflow_gpu-2.3.1-cp36-cp36m-manylinux2010_x86_64.whl (320.4MB)
[K |████████████████████████████████| 320.4MB 39kB/s
[?25hCollecting unidecode>=1.1.1
[?25l Downloading https://files.pythonhosted.org/packages/d0/42/d9edfed04228bacea2d824904cae367ee9efd05e6cce7ceaaedd0b0ad964/Unidecode-1.1.1-py2.py3-none-any.whl (238kB)
[K |████████████████████████████████| 245kB 43.8MB/s
[?25hCollecting pyworld>=0.2.10
[?25l Downloading https://files.pythonhosted.org/packages/af/88/003eef396c966cf00088900167831946b80b8e7650843905cb9590c2d9ca/pyworld-0.2.12.tar.gz (222kB)
[K |████████████████████████████████| 225kB 47.3MB/s
[?25hCollecting librosa>=0.7.0
[?25l Downloading https://files.pythonhosted.org/packages/26/4d/c22d8ca74ca2c13cd4ac430fa353954886104321877b65fa871939e78591/librosa-0.8.0.tar.gz (183kB)
[K |████████████████████████████████| 184kB 45.1MB/s
[?25hRequirement already satisfied: tqdm>=4.26.1 in /usr/local/lib/python3.6/dist-packages (from TensorFlowTTS) (4.41.1)
Collecting pypinyin
[?25l Downloading https://files.pythonhosted.org/packages/db/50/58b16cb56aeb003246d76ce3648f8e449605d7595e444a9b7c87bd543db8/pypinyin-0.40.0-py2.py3-none-any.whl (1.3MB)
[K |████████████████████████████████| 1.3MB 40.8MB/s
[?25hCollecting g2pM
[?25l Downloading https://files.pythonhosted.org/packages/af/21/dc5b497f09a94a9605e0b8a94ad0e01ae73a2b65109bf5bd325b0814b6a8/g2pM-0.1.2.5-py3-none-any.whl (1.7MB)
[K |████████████████████████████████| 1.7MB 41.4MB/s
[?25hRequirement already satisfied: PyYAML>=3.12 in /usr/local/lib/python3.6/dist-packages (from TensorFlowTTS) (3.13)
Requirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from TensorFlowTTS) (7.1.2)
Requirement already satisfied: setuptools>=38.5.1 in /usr/local/lib/python3.6/dist-packages (from TensorFlowTTS) (50.3.2)
Collecting jamo>=0.4.1
Downloading https://files.pythonhosted.org/packages/ac/cc/49812faae67f9a24be6ddaf58a2cf7e8c3cbfcf5b762d9414f7103d2ea2c/jamo-0.4.1-py3-none-any.whl
Collecting g2p-en
[?25l Downloading https://files.pythonhosted.org/packages/d7/d9/b77dc634a7a0c0c97716ba97dd0a28cbfa6267c96f359c4f27ed71cbd284/g2p_en-2.1.0-py3-none-any.whl (3.1MB)
[K |████████████████████████████████| 3.1MB 37.9MB/s
[?25hCollecting textgrid
Downloading https://files.pythonhosted.org/packages/9f/9e/04fb27ec5ac287b203afd5b228bc7c4ec5b7d3d81c4422d57847e755b0cc/TextGrid-1.5-py3-none-any.whl
Collecting soundfile>=0.10.2
Downloading https://files.pythonhosted.org/packages/eb/f2/3cbbbf3b96fb9fa91582c438b574cff3f45b29c772f94c400e2c99ef5db9/SoundFile-0.10.3.post1-py2.py3-none-any.whl
Requirement already satisfied: scikit-learn>=0.22.0 in /usr/local/lib/python3.6/dist-packages (from TensorFlowTTS) (0.22.2.post1)
Requirement already satisfied: matplotlib>=3.1.0 in /usr/local/lib/python3.6/dist-packages (from TensorFlowTTS) (3.2.2)
Collecting inflect>=4.1.0
Downloading https://files.pythonhosted.org/packages/a8/d7/9ee314763935ce36e3023103ea2c689e6026147230037503a7772cdad6c1/inflect-5.0.2-py3-none-any.whl
Requirement already satisfied: numba<=0.48 in /usr/local/lib/python3.6/dist-packages (from TensorFlowTTS) (0.48.0)
Collecting tensorflow-addons>=0.10.0
[?25l Downloading https://files.pythonhosted.org/packages/b3/f8/d6fca180c123f2851035c4493690662ebdad0849a9059d56035434bff5c9/tensorflow_addons-0.11.2-cp36-cp36m-manylinux2010_x86_64.whl (1.1MB)
[K |████████████████████████████████| 1.1MB 41.6MB/s
[?25hRequirement already satisfied: numpy>=1.7 in /usr/local/lib/python3.6/dist-packages (from h5py>=2.10.0->TensorFlowTTS) (1.18.5)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from h5py>=2.10.0->TensorFlowTTS) (1.15.0)
Requirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu>=2.3.0->TensorFlowTTS) (0.10.0)
Requirement already satisfied: keras-preprocessing<1.2,>=1.1.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu>=2.3.0->TensorFlowTTS) (1.1.2)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu>=2.3.0->TensorFlowTTS) (1.33.2)
Requirement already satisfied: google-pasta>=0.1.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu>=2.3.0->TensorFlowTTS) (0.2.0)
Requirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu>=2.3.0->TensorFlowTTS) (3.12.4)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu>=2.3.0->TensorFlowTTS) (1.1.0)
Requirement already satisfied: tensorflow-estimator<2.4.0,>=2.3.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu>=2.3.0->TensorFlowTTS) (2.3.0)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu>=2.3.0->TensorFlowTTS) (0.35.1)
Requirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu>=2.3.0->TensorFlowTTS) (1.6.3)
Requirement already satisfied: gast==0.3.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu>=2.3.0->TensorFlowTTS) (0.3.3)
Requirement already satisfied: tensorboard<3,>=2.3.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu>=2.3.0->TensorFlowTTS) (2.3.0)
Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu>=2.3.0->TensorFlowTTS) (1.12.1)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu>=2.3.0->TensorFlowTTS) (3.3.0)
Requirement already satisfied: cython>=0.24.0 in /usr/local/lib/python3.6/dist-packages (from pyworld>=0.2.10->TensorFlowTTS) (0.29.21)
Requirement already satisfied: audioread>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from librosa>=0.7.0->TensorFlowTTS) (2.1.9)
Requirement already satisfied: scipy>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from librosa>=0.7.0->TensorFlowTTS) (1.4.1)
Requirement already satisfied: joblib>=0.14 in /usr/local/lib/python3.6/dist-packages (from librosa>=0.7.0->TensorFlowTTS) (0.17.0)
Requirement already satisfied: decorator>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from librosa>=0.7.0->TensorFlowTTS) (4.4.2)
Requirement already satisfied: resampy>=0.2.2 in /usr/local/lib/python3.6/dist-packages (from librosa>=0.7.0->TensorFlowTTS) (0.2.2)
Collecting pooch>=1.0
[?25l Downloading https://files.pythonhosted.org/packages/ce/11/d7a1dc8173a4085759710e69aae6e070d0d432db84013c7c343e4e522b76/pooch-1.2.0-py3-none-any.whl (47kB)
[K |████████████████████████████████| 51kB 6.3MB/s
[?25hCollecting distance>=0.1.3
[?25l Downloading https://files.pythonhosted.org/packages/5c/1a/883e47df323437aefa0d0a92ccfb38895d9416bd0b56262c2e46a47767b8/Distance-0.1.3.tar.gz (180kB)
[K |████████████████████████████████| 184kB 42.0MB/s
[?25hRequirement already satisfied: nltk>=3.2.4 in /usr/local/lib/python3.6/dist-packages (from g2p-en->TensorFlowTTS) (3.2.5)
Requirement already satisfied: cffi>=1.0 in /usr/local/lib/python3.6/dist-packages (from soundfile>=0.10.2->TensorFlowTTS) (1.14.3)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.1.0->TensorFlowTTS) (2.4.7)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.1.0->TensorFlowTTS) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.1.0->TensorFlowTTS) (1.3.1)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.1.0->TensorFlowTTS) (2.8.1)
Requirement already satisfied: llvmlite<0.32.0,>=0.31.0dev0 in /usr/local/lib/python3.6/dist-packages (from numba<=0.48->TensorFlowTTS) (0.31.0)
Requirement already satisfied: typeguard>=2.7 in /usr/local/lib/python3.6/dist-packages (from tensorflow-addons>=0.10.0->TensorFlowTTS) (2.7.1)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (1.0.1)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (1.17.2)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (1.7.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (0.4.2)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (2.23.0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (3.3.3)
Collecting appdirs
Downloading https://files.pythonhosted.org/packages/3b/00/2344469e2084fb287c2e0b57b72910309874c3245463acd6cf5e3db69324/appdirs-1.4.4-py2.py3-none-any.whl
Requirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from pooch>=1.0->librosa>=0.7.0->TensorFlowTTS) (20.4)
Requirement already satisfied: pycparser in /usr/local/lib/python3.6/dist-packages (from cffi>=1.0->soundfile>=0.10.2->TensorFlowTTS) (2.20)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (4.6)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (0.2.8)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (4.1.1)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (1.3.0)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (2020.6.20)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (3.0.4)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (2.0.0)
Requirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.6/dist-packages (from rsa<5,>=3.1.4; python_version >= "3"->google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (0.4.8)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (3.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard<3,>=2.3.0->tensorflow-gpu>=2.3.0->TensorFlowTTS) (3.4.0)
Building wheels for collected packages: pyworld, librosa, distance
Building wheel for pyworld (setup.py) ... [?25l[?25hdone
Created wheel for pyworld: filename=pyworld-0.2.12-cp36-cp36m-linux_x86_64.whl size=609445 sha256=35a39ba3e49ca882c42219b8258d9b2308b9117d90cee9a206c2ba4a2f98bc18
Stored in directory: /root/.cache/pip/wheels/d0/e4/1c/a508000462b83164d5eba9a4b46f39b4b1645ac952bbe71551
Building wheel for librosa (setup.py) ... [?25l[?25hdone
Created wheel for librosa: filename=librosa-0.8.0-cp36-none-any.whl size=201376 sha256=9cb1d80ad6dbc9bfd5098d6b1288ba440a3d31c1b2d5dd70d6fb2366a376aac1
Stored in directory: /root/.cache/pip/wheels/ee/10/1e/382bb4369e189938d5c02e06d10c651817da8d485bfd1647c9
Building wheel for distance (setup.py) ... [?25l[?25hdone
Created wheel for distance: filename=Distance-0.1.3-cp36-none-any.whl size=16262 sha256=46ea4e165a691245e81630bacb1f80c55e17976336cd1e922939e9c88b2eba26
Stored in directory: /root/.cache/pip/wheels/d5/aa/e1/dbba9e7b6d397d645d0f12db1c66dbae9c5442b39b001db18e
Successfully built pyworld librosa distance
Installing collected packages: tensorflow-gpu, unidecode, pyworld, soundfile, appdirs, pooch, librosa, pypinyin, g2pM, jamo, inflect, distance, g2p-en, textgrid, tensorflow-addons, TensorFlowTTS
Found existing installation: librosa 0.6.3
Uninstalling librosa-0.6.3:
Successfully uninstalled librosa-0.6.3
Found existing installation: inflect 2.1.0
Uninstalling inflect-2.1.0:
Successfully uninstalled inflect-2.1.0
Found existing installation: tensorflow-addons 0.8.3
Uninstalling tensorflow-addons-0.8.3:
Successfully uninstalled tensorflow-addons-0.8.3
Successfully installed TensorFlowTTS-0.9 appdirs-1.4.4 distance-0.1.3 g2p-en-2.1.0 g2pM-0.1.2.5 inflect-5.0.2 jamo-0.4.1 librosa-0.8.0 pooch-1.2.0 pypinyin-0.40.0 pyworld-0.2.12 soundfile-0.10.3.post1 tensorflow-addons-0.11.2 tensorflow-gpu-2.3.1 textgrid-1.5 unidecode-1.1.1
###Markdown
**Load Model**
###Code
import tensorflow as tf
import yaml
import numpy as np
import matplotlib.pyplot as plt
import IPython.display as ipd
from tensorflow_tts.inference import TFAutoModel
from tensorflow_tts.inference import AutoConfig
from tensorflow_tts.inference import AutoProcessor
###Output
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /root/nltk_data...
[nltk_data] Unzipping taggers/averaged_perceptron_tagger.zip.
[nltk_data] Downloading package cmudict to /root/nltk_data...
[nltk_data] Unzipping corpora/cmudict.zip.
###Markdown
**Load Tacotron-2 trained model from previous notebook**
###Code
tacotron2_config = AutoConfig.from_pretrained('examples/tacotron2/conf/tacotron2.v1.yaml')
tacotron2 = TFAutoModel.from_pretrained(
config=tacotron2_config,
pretrained_path="/content/drive/My Drive/projectFiles/Final_Project_Main/TensorFlowTTS/examples/tacotron2/exp/train.tacotron2.v1/checkpoints/model-20000.h5",
training=False,
name="tacotron2"
)
###Output
_____no_output_____
###Markdown
**Download and load the MelGAN vocoder**
###Code
melgan_config = AutoConfig.from_pretrained('examples/melgan/conf/melgan.v1.yaml')
melgan = TFAutoModel.from_pretrained(
config=melgan_config,
pretrained_path="melgan-1M6.h5",
name="melgan"
)
###Output
_____no_output_____
###Markdown
**Download and load the processor for LJ Speech database**
###Code
processor = AutoProcessor.from_pretrained(pretrained_path="./ljspeech_mapper.json")
###Output
_____no_output_____
###Markdown
**Inference**
###Code
def do_synthesis(input_text, text2mel_model, vocoder_model, text2mel_name, vocoder_name):
input_ids = processor.text_to_sequence(input_text)
# text2mel part
if text2mel_name == "TACOTRON":
_, mel_outputs, stop_token_prediction, alignment_history = text2mel_model.inference(
tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
tf.convert_to_tensor([len(input_ids)], tf.int32),
tf.convert_to_tensor([0], dtype=tf.int32)
)
else:
raise ValueError("Only TACOTRON, FASTSPEECH, FASTSPEECH2 are supported on text2mel_name")
# vocoder part
if vocoder_name == "MELGAN" or vocoder_name == "MELGAN-STFT":
audio = vocoder_model(mel_outputs)[0, :, 0]
elif vocoder_name == "MB-MELGAN":
audio = vocoder_model(mel_outputs)[0, :, 0]
else:
raise ValueError("Only MELGAN, MELGAN-STFT and MB_MELGAN are supported on vocoder_name")
if text2mel_name == "TACOTRON":
return mel_outputs.numpy(), alignment_history.numpy(), audio.numpy()
else:
return mel_outputs.numpy(), audio.numpy()
def visualize_attention(alignment_history):
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111)
ax.set_title(f'Alignment steps')
im = ax.imshow(
alignment_history,
aspect='auto',
origin='lower',
interpolation='none')
fig.colorbar(im, ax=ax)
xlabel = 'Decoder timestep'
plt.xlabel(xlabel)
plt.ylabel('Encoder timestep')
plt.tight_layout()
plt.show()
plt.close()
def visualize_mel_spectrogram(mels):
mels = tf.reshape(mels, [-1, 80]).numpy()
fig = plt.figure(figsize=(10, 8))
ax1 = fig.add_subplot(311)
ax1.set_title(f'Predicted Mel-after-Spectrogram')
im = ax1.imshow(np.rot90(mels), aspect='auto', interpolation='none')
fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1)
plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
**Enter the text you want to convert to speech and get the audio and alignment and mel-spectrogram graphs**
###Code
input_text = "Bill got in the habit of asking himself 'Is that thought true?' And if he wasn't absolutely certain it was, he just let it go."
tacotron2.setup_window(win_front=10, win_back=10)
mels, alignment_history, audios = do_synthesis(input_text, tacotron2, melgan, "TACOTRON", "MELGAN")
visualize_attention(alignment_history[0])
visualize_mel_spectrogram(mels[0])
ipd.Audio(audios, rate=22050)
###Output
_____no_output_____
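###Markdown
 Optionally, the synthesized waveform can be written to disk. This is an added sketch, not part of the original notebook; it assumes the soundfile package installed above and the 22050 Hz sample rate used for playback, and the output filename is arbitrary.
###Code
import soundfile as sf

# Save the generated waveform (a float numpy array) as a 22.05 kHz WAV file.
sf.write('generated_speech.wav', audios, 22050)
###Output
_____no_output_____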
|
Chapter07/Exercise11/7_11.ipynb
|
###Markdown
Exercise 7.11 In Section 7.7, it was mentioned that GAMs are generally fit using a backfitting approach. The idea behind backfitting is actually quite simple. We will now explore backfitting in the context of multiple linear regression. Suppose that we would like to perform multiple linear regression, but we do not have software to do so. Instead, we only have software to perform simple linear regression. Therefore, we take the following iterative approach: we repeatedly hold all but one coefficient estimate fixed at its current value, and update only that coefficient estimate using a simple linear regression. The process is continued until convergence—that is, until the coefficient estimates stop changing. We now try this out on a toy example. Generate a response $Y$ and two predictors $X_1$ and $X_2$, with $n = 100$. Initialize $\hat{\beta}_1$ to take on a value of your choice. It does not matter what value you choose. Keeping $\hat{\beta}_1$ fixed, fit the model $$ Y - \hat{\beta}_1 X_1 = \beta_0 + \beta_2 X_2 + \epsilon \,. $$ You can do this as follows: > a=y-beta1*x1> beta2=lm(a~x2)\$coef[2] Keeping $\hat{\beta}_2$ fixed, fit the model $$ Y - \hat{\beta}_2 X_2 = \beta_0 + \beta_1 X_1 + \epsilon \,. $$ You can do this as follows: > a=y-beta2*x2> beta1=lm(a~x1)\$coef[2] Write a for loop to repeat 3 and 4 $1,000$ times. Report the estimates of $\hat{\beta}_0$, $\hat{\beta}_1$, and $\hat{\beta}_2$ at each iteration of the for loop. Create a plot in which each of these values is displayed, with $\hat{\beta}_0$, $\hat{\beta}_1$, and $\hat{\beta}_2$ each shown in a different color. Compare your answer in 5 to the results of simply performing multiple linear regression to predict $Y$ using $X_1$ and $X_2$. Use the abline() function to overlay those multiple linear regression coefficient estimates on the plot obtained in 5. On this data set, how many backfitting iterations were required in order to obtain a "good" approximation to the multiple regression coefficient estimates?
###Code
%matplotlib notebook
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# https://stackoverflow.com/questions/34398054/ipython-notebook-cell-multiple-outputs
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import statsmodels.api as sm
###Output
_____no_output_____
###Markdown
Exercise 7.11.1 Generate a response $Y$ and two predictors $X_1$ and $X_2$, with $n = 100$.
###Code
np.random.seed(42)
n = 100
x1 = np.random.normal(size=n, loc=-2, scale=4)
x2 = np.random.normal(size=n, loc=5, scale=3)
gaussian_noise = np.random.normal(size=n, loc=0, scale=2)
beta0 = 5.4
beta1 = 2.5
beta2 = -1.3
y = beta0 + beta1 * x1 + beta2 * x2 + gaussian_noise
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
_ = ax.scatter(x1, x2, y, s=2)
x_pred1 = np.linspace(-15, 5, num=201)
x_pred2 = np.linspace(0, 14, num=141)
x1_pred2, x2_pred2 = np.meshgrid(x_pred1, x_pred2)
y_pred = beta0 + beta1 * x1_pred2 + beta2 * x2_pred2
_ = ax.plot_surface(x1_pred2, x2_pred2, y_pred, alpha=0.2)
_ = ax.set_xlabel(r'$X_1$')
_ = ax.set_ylabel(r'$X_2$')
_ = ax.set_zlabel(r'$Y$')
###Output
_____no_output_____
###Markdown
Exercise 7.11.2 Initialize $\hat{\beta}_1$ to take on a value of your choice. It does not matter what value you choose.
###Code
beta1_hat = 5.0
###Output
_____no_output_____
###Markdown
Exercise 7.11.3 Keeping $\hat{\beta}_1$ fixed, fit the model$$Y - \hat{\beta}_1 X_1 = \beta_0 + \beta_2 X_2 + \epsilon \,.$$You can do this as follows: > a=y-beta1*x1> beta2=lm(a~x2)\$coef[2]
###Code
a = y - beta1_hat * x1
x2_intercept = sm.add_constant(x2)
model = sm.OLS(a, x2_intercept)
fitted = model.fit()
beta2_hat = fitted.params[1]
beta2_hat
###Output
_____no_output_____
###Markdown
Exercise 7.11.4 Keeping $\hat{\beta}_2$ fixed, fit the model$$Y - \hat{\beta}_2 X_2 = \beta_0 + \beta_1 X_1 + \epsilon \,.$$You can do this as follows: > a=y-beta2*x2> beta1=lm(a~x1)\$coef[2]
###Code
a = y - beta2_hat * x2
x1_intercept = sm.add_constant(x1)
model = sm.OLS(a, x1_intercept)
fitted = model.fit()
beta1_hat = fitted.params[1]
beta1_hat
###Output
_____no_output_____
###Markdown
Exercise 7.11.5 Write a for loop to repeat 3 and 4 $1,000$ times. Report the estimates of $\hat{\beta}_0$, $\hat{\beta}_1$, and $\hat{\beta}_2$ at each iteration of the for loop. Create a plot in which each of these values is displayed, with $\hat{\beta}_0$, $\hat{\beta}_1$, and $\hat{\beta}_2$ each shown in a different color.
###Code
max_iter = 1000
beta0_hat_arr = np.zeros(shape=(max_iter, ))
beta1_hat_arr = np.zeros(shape=(max_iter, ))
beta2_hat_arr = np.zeros(shape=(max_iter, ))
beta1_hat = 5.0 # initialize
for i in range(max_iter):
a = y - beta1_hat * x1
fitted = sm.OLS(a, x2_intercept).fit()
beta2_hat = fitted.params[1]
a = y - beta2_hat * x2
fitted = sm.OLS(a, x1_intercept).fit()
beta1_hat = fitted.params[1]
a = y - beta1_hat*x1 - beta2_hat*x2
beta0_hat = np.mean(a)
beta0_hat_arr[i] = beta0_hat
beta1_hat_arr[i] = beta1_hat
beta2_hat_arr[i] = beta2_hat
iterations = np.arange(max_iter)
fig, ax = plt.subplots(1, 1, constrained_layout=True, figsize=(8, 4))
_ = ax.scatter(iterations[:10], beta0_hat_arr[:10], c='r', label=r'$\beta_0$')
_ = ax.scatter(iterations[:10], beta1_hat_arr[:10], c='b', label=r'$\beta_1$')
_ = ax.scatter(iterations[:10], beta2_hat_arr[:10], c='g', label=r'$\beta_2$')
_ = ax.set_xlabel('iterations')
_ = ax.legend(loc=4)
###Output
_____no_output_____
###Markdown
Exercise 7.11.6 Compare your answer in 5 to the results of simply performing multiple linear regression to predict $Y$ using $X_1$ and $X_2$. Use the abline() function to overlay those multiple linear regression coefficient estimates on the plot obtained in 5.
###Code
df_x = pd.DataFrame({
'X1': x1,
'X2': x2,
})
df_x.insert(0, 'Intercept', 1)
df_y = pd.DataFrame({'Y': y})
model = sm.OLS(df_y, df_x)
fitted = model.fit()
fig, ax = plt.subplots(1, 1, constrained_layout=True, figsize=(8, 4))
_ = ax.scatter(iterations[:10], beta0_hat_arr[:10], c='r', label=r'$\beta_0$')
_ = ax.scatter(iterations[:10], beta1_hat_arr[:10], c='b', label=r'$\beta_1$')
_ = ax.scatter(iterations[:10], beta2_hat_arr[:10], c='g', label=r'$\beta_2$')
_ = ax.axhline(y=fitted.params.iloc[0])
_ = ax.axhline(y=fitted.params.iloc[1])
_ = ax.axhline(y=fitted.params.iloc[2])
_ = ax.set_xlabel('iterations')
_ = ax.legend(loc='lower right')
###Output
_____no_output_____
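###Markdown
 The final part of the exercise asks how many backfitting iterations were required to obtain a "good" approximation. A minimal check (an added sketch; it assumes the beta arrays from the loop in 7.11.5 and the multiple-regression fit from 7.11.6 are still in memory, and the tolerance is an arbitrary choice): find the first iteration at which all three backfitting estimates agree with the multiple-regression coefficients to within the tolerance.
###Code
tol = 1e-6  # arbitrary closeness threshold
multi_betas = fitted.params.values  # [intercept, beta1, beta2] from 7.11.6
close = (
    (np.abs(beta0_hat_arr - multi_betas[0]) < tol)
    & (np.abs(beta1_hat_arr - multi_betas[1]) < tol)
    & (np.abs(beta2_hat_arr - multi_betas[2]) < tol)
)
first_good = int(np.argmax(close)) if close.any() else None
print('First iteration within tolerance of the multiple regression fit:', first_good)
###Output
_____no_output_____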
|