lstm_stock_predictor_fng.ipynb
###Markdown
# LSTM Stock Predictor Using Fear and Greed Index

In this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price.

You will need to:

1. Prepare the data for training and testing
2. Build and train a custom LSTM RNN
3. Evaluate the performance of the model

## Data Preparation

In this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.

You will need to:

1. Use the `window_data` function to generate the X and y values for the model.
2. Split the data into 70% training and 30% testing
3. Apply the MinMaxScaler to the X and y values
4. Reshape the X_train and X_test data for the model.

Note: The required input format for the LSTM is:

```python
reshape((X_train.shape[0], X_train.shape[1], 1))
```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y (a small demonstration follows the function)
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
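# --- Illustrative aside (not part of the original assignment) ---
# A tiny, hypothetical frame makes the windowing behaviour easy to see: with a
# window of 3, each row of X holds 3 consecutive feature values and y holds the
# value of the target column on the following day.
_demo = pd.DataFrame({"feature": range(10), "target": range(100, 110)})
_demo_X, _demo_y = window_data(_demo, 3, 0, 1)
print(_demo_X.shape, _demo_y.shape)   # (6, 3) and (6, 1): len(df) - window - 1 samples
print(_demo_X[0], _demo_y[0])         # [0 1 2] predicts 103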
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
X.shape, y.shape
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(X))
X_train = X[: split]
X_test = X[split:]
y_train = y[: split]
y_test = y[split:]
X_train
X.shape
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
scaler = MinMaxScaler()
# Fit the MinMaxScaler object with the training feature data X_train
scaler.fit(X_train)
# Scale the features training and testing sets
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# Fit the MinMaxScaler object with the training target data y_train
y_scaler = MinMaxScaler()
y_scaler.fit(y_train)
# Scale the target training and testing sets
y_train = y_scaler.transform(y_train)
y_test = y_scaler.transform(y_test)
X_train.shape, y_train.shape
# Reshape the features for the model
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
print (f"X_train sample values:\n{X_train[:5]} \n")
print (f"X_test sample values:\n{X_test[:5]}")
X_train.shape
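# --- Illustrative aside ---
# The LSTM expects input shaped as (samples, time steps, features); after the reshape
# above, each sample is a 10-step window with a single feature (the FNG value).
assert X_train.shape[1:] == (window_size, 1)
assert X_test.shape[1:] == (window_size, 1)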
###Output
_____no_output_____
###Markdown
---

## Build and Train the LSTM RNN

In this section, you will design a custom LSTM RNN and fit (train) it using the training data.

You will need to:

1. Define the model architecture
2. Compile the model
3. Fit the model to the training data

Hints: You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
model = Sequential()
number_units = 5
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train.shape[1], 1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# Compile the model
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
model.summary()
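# --- Illustrative aside ---
# The parameter counts in the summary can be reproduced by hand: an LSTM layer has
# 4 gates, each with an input kernel, a recurrent kernel and a bias, i.e.
# 4 * (input_dim * units + units * units + units).
def lstm_params(input_dim, units):
    return 4 * (input_dim * units + units * units + units)
print(lstm_params(1, number_units))             # layer 1: 140 when number_units = 5
print(lstm_params(number_units, number_units))  # layers 2 and 3: 220 each
print(number_units * 1 + 1)                     # dense output layer: 6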
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
model.fit(X_train, y_train, epochs=10, shuffle=False, batch_size=1, verbose=1)
pd.DataFrame(model.history.history).plot();
###Output
_____no_output_____
###Markdown
---

## Model Performance

In this section, you will evaluate the model using the test data.

You will need to:

1. Evaluate the model using the `X_test` and `y_test` data.
2. Use the X_test data to make predictions
3. Create a DataFrame of Real (y_test) vs predicted values.
4. Plot the Real vs predicted values as a line chart

Hints: Remember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
model.evaluate(X_test, y_test)
# Apply the inverse transform to the scaled test loss reported above and take the square root;
# this is only a rough way to express the error on the price scale (see the RMSE computed
# from the recovered prices further below)
np.sqrt(y_scaler.inverse_transform([[0.14951150119304657]]))
# Make some predictions
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = y_scaler.inverse_transform(predicted)
real_prices = y_scaler.inverse_transform(y_test.reshape(-1, 1))
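# --- Illustrative aside ---
# A more direct way to express the error in dollar terms than inverting the scaled
# loss above: compute the RMSE between the recovered real and predicted prices.
rmse = np.sqrt(np.mean((predicted_prices - real_prices) ** 2))
print(f"Test RMSE in price units: {rmse:.2f}")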
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
stocks.plot(title="Actual Vs. Predicted Prices")
###Output
_____no_output_____
###Markdown
# LSTM Stock Predictor Using Fear and Greed Index

In this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price.

You will need to:

1. Prepare the data for training and testing
2. Build and train a custom LSTM RNN
3. Evaluate the performance of the model

## Data Preparation

In this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.

You will need to:

1. Use the `window_data` function to generate the X and y values for the model.
2. Split the data into 70% training and 30% testing
3. Apply the MinMaxScaler to the X and y values
4. Reshape the X_train and X_test data for the model.

Note: The required input format for the LSTM is:

```python
reshape((X_train.shape[0], X_train.shape[1], 1))
```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(X))
X_train = X[: split]
X_test = X[split:]
y_train = y[: split]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
x_train_scaler = MinMaxScaler()
x_test_scaler = MinMaxScaler()
y_train_scaler = MinMaxScaler()
y_test_scaler = MinMaxScaler()
# Fit the scaler for the Training Data
x_train_scaler.fit(X_train)
y_train_scaler.fit(y_train)
# Scale the training data
X_train = x_train_scaler.transform(X_train)
y_train = y_train_scaler.transform(y_train)
# Fit the scaler for the Testing Data
x_test_scaler.fit(X_test)
y_test_scaler.fit(y_test)
# Scale the y_test data
X_test = x_test_scaler.transform(X_test)
y_test = y_test_scaler.transform(y_test)
# Reshape the features for the model
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
###Output
_____no_output_____
###Markdown
---

## Build and Train the LSTM RNN

In this section, you will design a custom LSTM RNN and fit (train) it using the training data.

You will need to:

1. Define the model architecture
2. Compile the model
3. Fit the model to the training data

Hints: You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
model = Sequential()
number_units = 30
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train.shape[1], 1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# Compile the model
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
model.summary()
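# --- Illustrative aside (optional; not used in the fit call below) ---
# If you want training to stop automatically once the loss plateaus, a Keras
# EarlyStopping callback can be passed to fit; a minimal sketch:
from tensorflow.keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor="loss", patience=3, restore_best_weights=True)
# e.g. model.fit(X_train, y_train, epochs=50, shuffle=False, batch_size=1,
#                callbacks=[early_stop], verbose=1)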
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
model.fit(X_train, y_train, epochs=10, shuffle=False, batch_size=1, verbose=1)
###Output
Epoch 1/10
372/372 [==============================] - 10s 11ms/step - loss: 0.0944
Epoch 2/10
372/372 [==============================] - 4s 11ms/step - loss: 0.1019
Epoch 3/10
372/372 [==============================] - 4s 11ms/step - loss: 0.1051
Epoch 4/10
372/372 [==============================] - 4s 10ms/step - loss: 0.1048
Epoch 5/10
372/372 [==============================] - 4s 10ms/step - loss: 0.1020
Epoch 6/10
372/372 [==============================] - 4s 10ms/step - loss: 0.1036
Epoch 7/10
372/372 [==============================] - 4s 10ms/step - loss: 0.0994
Epoch 8/10
372/372 [==============================] - 4s 10ms/step - loss: 0.0943
Epoch 9/10
372/372 [==============================] - 4s 10ms/step - loss: 0.0978
Epoch 10/10
372/372 [==============================] - 4s 10ms/step - loss: 0.1170
###Markdown
---

## Model Performance

In this section, you will evaluate the model using the test data.

You will need to:

1. Evaluate the model using the `X_test` and `y_test` data.
2. Use the X_test data to make predictions
3. Create a DataFrame of Real (y_test) vs predicted values.
4. Plot the Real vs predicted values as a line chart

Hints: Remember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
model.evaluate(X_test, y_test)
# Make some predictions
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = y_test_scaler.inverse_transform(predicted)
real_prices = y_test_scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
stocks.plot()
###Output
Bad key "text.kerning_factor" on line 4 in
C:\Users\chris\anaconda3\envs\pyvizenv\lib\site-packages\matplotlib\mpl-data\stylelib\_classic_test_patch.mplstyle.
You probably need to get an updated matplotlibrc file from
http://github.com/matplotlib/matplotlib/blob/master/matplotlibrc.template
or from the matplotlib source distribution
###Markdown
# LSTM Stock Predictor Using Fear and Greed Index

In this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price.

You will need to:

1. Prepare the data for training and testing
2. Build and train a custom LSTM RNN
3. Evaluate the performance of the model

## Data Preparation

In this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.

You will need to:

1. Use the `window_data` function to generate the X and y values for the model.
2. Split the data into 70% training and 30% testing
3. Apply the MinMaxScaler to the X and y values
4. Reshape the X_train and X_test data for the model.

Note: The required input format for the LSTM is:

```python
reshape((X_train.shape[0], X_train.shape[1], 1))
```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
# YOUR CODE HERE!
split = int(0.7 * len(X))
X_train = X[: split - 1]
X_test = X[split:]
y_train = y[: split - 1]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
# YOUR CODE HERE!
scaler = MinMaxScaler()
scaler.fit(X)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
scaler.fit(y)
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
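# --- Illustrative aside (does not change the results above) ---
# Fitting the scaler on the full X and y lets the test set influence the min/max.
# A leakage-free variant fits on the training slice only; shown with new, hypothetical
# names so the arrays used above are left untouched.
_x_scaler_train_only = MinMaxScaler().fit(X[:split])
_X_test_alt = _x_scaler_train_only.transform(X[split:])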
# Reshape the features for the model
# YOUR CODE HERE!
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
###Output
_____no_output_____
###Markdown
---

## Build and Train the LSTM RNN

In this section, you will design a custom LSTM RNN and fit (train) it using the training data.

You will need to:

1. Define the model architecture
2. Compile the model
3. Fit the model to the training data

Hints: You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
# YOUR CODE HERE!
model = Sequential()
number_units = 30
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units = number_units,
return_sequences = True,
input_shape = (X_train.shape[1],1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(
units = number_units,
return_sequences = True,
))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(
units = number_units,
return_sequences = False,
))
model.add(Dropout(dropout_fraction))
model.add(Dense(1))
# Compile the model
# YOUR CODE HERE!
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
# YOUR CODE HERE!
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
# YOUR CODE HERE!
epochs = 10
batch_size = 10
model.fit(X_train, y_train, epochs=epochs, shuffle=False, batch_size=batch_size, verbose=1)
###Output
Epoch 1/10
38/38 [==============================] - 4s 8ms/step - loss: 0.0901
Epoch 2/10
38/38 [==============================] - 0s 8ms/step - loss: 0.0737
Epoch 3/10
38/38 [==============================] - 0s 8ms/step - loss: 0.0565
Epoch 4/10
38/38 [==============================] - 0s 9ms/step - loss: 0.0579
Epoch 5/10
38/38 [==============================] - 0s 8ms/step - loss: 0.0563
Epoch 6/10
38/38 [==============================] - 0s 9ms/step - loss: 0.0490
Epoch 7/10
38/38 [==============================] - 0s 8ms/step - loss: 0.0514
Epoch 8/10
38/38 [==============================] - 0s 9ms/step - loss: 0.0491
Epoch 9/10
38/38 [==============================] - 0s 8ms/step - loss: 0.0481
Epoch 10/10
38/38 [==============================] - 0s 8ms/step - loss: 0.0466
###Markdown
---

## Model Performance

In this section, you will evaluate the model using the test data.

You will need to:

1. Evaluate the model using the `X_test` and `y_test` data.
2. Use the X_test data to make predictions
3. Create a DataFrame of Real (y_test) vs predicted values.
4. Plot the Real vs predicted values as a line chart

Hints: Remember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
# YOUR CODE HERE!
model.evaluate(X_test, y_test)
# Make some predictions
# YOUR CODE HERE!
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
# YOUR CODE HERE!
stocks.hvplot.line(xlabel="Date",
ylabel="Price")
###Output
_____no_output_____
###Markdown
# LSTM Stock Predictor Using Fear and Greed Index

In this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price.

You will need to:

1. Prepare the data for training and testing
2. Build and train a custom LSTM RNN
3. Evaluate the performance of the model

## Data Preparation

In this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.

You will need to:

1. Use the `window_data` function to generate the X and y values for the model.
2. Split the data into 70% training and 30% testing
3. Apply the MinMaxScaler to the X and y values
4. Reshape the X_train and X_test data for the model.

Note: The required input format for the LSTM is:

```python
reshape((X_train.shape[0], X_train.shape[1], 1))
```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
# YOUR CODE HERE!
split = int(0.7 * len(X))
X_train = X[: split -1]
X_test = X[split:]
y_train = y[: split -1]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
# YOUR CODE HERE!
scaler = MinMaxScaler()
scaler.fit(X)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
scaler.fit(y)
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
# Reshape the features for the model
# YOUR CODE HERE!
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
print (f"X_train sample values:\n{X_train[:5]} \n")
print (f"X_test sample values:\n{X_test[:5]}")
###Output
X_train sample values:
[[[0.25287356]
[0.08045977]
[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.31395349]
[0.24418605]
[0.40697674]
[0.52325581]]
[[0.08045977]
[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.24418605]
[0.40697674]
[0.52325581]
[0.25581395]]
[[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.25287356]
[0.40697674]
[0.52325581]
[0.25581395]
[0.38372093]]
[[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.25287356]
[0.4137931 ]
[0.52325581]
[0.25581395]
[0.38372093]
[0.30232558]]
[[0.03448276]
[0. ]
[0.32183908]
[0.25287356]
[0.4137931 ]
[0.52873563]
[0.25581395]
[0.38372093]
[0.30232558]
[0.53488372]]]
X_test sample values:
[[[0.36781609]
[0.43678161]
[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.39534884]
[0.37209302]
[0.3372093 ]
[0.62790698]]
[[0.43678161]
[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37209302]
[0.3372093 ]
[0.62790698]
[0.65116279]]
[[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37931034]
[0.3372093 ]
[0.62790698]
[0.65116279]
[0.58139535]]
[[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37931034]
[0.34482759]
[0.62790698]
[0.65116279]
[0.58139535]
[0.58139535]]
[[0.45977011]
[0.40229885]
[0.40229885]
[0.37931034]
[0.34482759]
[0.63218391]
[0.65116279]
[0.58139535]
[0.58139535]
[0.60465116]]]
###Markdown
---

## Build and Train the LSTM RNN

In this section, you will design a custom LSTM RNN and fit (train) it using the training data.

You will need to:

1. Define the model architecture
2. Compile the model
3. Fit the model to the training data

Hints: You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
# YOUR CODE HERE!
model = Sequential()
number_units = 5
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train.shape[1], 1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# Compile the model
# YOUR CODE HERE!
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
# YOUR CODE HERE!
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
# YOUR CODE HERE!
model.fit(X_train, y_train, epochs=10, shuffle=False, batch_size=1, verbose=1)
###Output
Epoch 1/10
371/371 [==============================] - 5s 4ms/step - loss: 0.1603
Epoch 2/10
371/371 [==============================] - 1s 4ms/step - loss: 0.0963
Epoch 3/10
371/371 [==============================] - 1s 4ms/step - loss: 0.0999
Epoch 4/10
371/371 [==============================] - 1s 4ms/step - loss: 0.0943
Epoch 5/10
371/371 [==============================] - 1s 4ms/step - loss: 0.0931
Epoch 6/10
371/371 [==============================] - 1s 4ms/step - loss: 0.0876
Epoch 7/10
371/371 [==============================] - 1s 4ms/step - loss: 0.0883
Epoch 8/10
371/371 [==============================] - 1s 4ms/step - loss: 0.0875
Epoch 9/10
371/371 [==============================] - 1s 4ms/step - loss: 0.0817
Epoch 10/10
371/371 [==============================] - 1s 4ms/step - loss: 0.0812
###Markdown
---

## Model Performance

In this section, you will evaluate the model using the test data.

You will need to:

1. Evaluate the model using the `X_test` and `y_test` data.
2. Use the X_test data to make predictions
3. Create a DataFrame of Real (y_test) vs predicted values.
4. Plot the Real vs predicted values as a line chart

Hints: Remember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
# YOUR CODE HERE!
model.evaluate(X_test, y_test)
# Make some predictions
# YOUR CODE HERE!
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
# YOUR CODE HERE!
stocks.plot(title="Actual Vs. Predicted Gold Prices")
###Output
_____no_output_____
###Markdown
# LSTM Stock Predictor Using Fear and Greed Index

In this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price.

You will need to:

1. Prepare the data for training and testing
2. Build and train a custom LSTM RNN
3. Evaluate the performance of the model

## Data Preparation

In this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.

You will need to:

1. Use the `window_data` function to generate the X and y values for the model.
2. Split the data into 70% training and 30% testing
3. Apply the MinMaxScaler to the X and y values
4. Reshape the X_train and X_test data for the model.

Note: The required input format for the LSTM is:

```python
reshape((X_train.shape[0], X_train.shape[1], 1))
```
###Code
import numpy as np
import pandas as pd
%matplotlib inline
from hvplot import hvPlot
import hvplot.pandas
import tensorflow as tf
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(X))
X_train = X[: split]
X_test = X[split:]
y_train = y[: split]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
x_scaler = MinMaxScaler()
y_scaler = MinMaxScaler()
x_scaler.fit(X_train)
X_train = x_scaler.transform(X_train)
X_test = x_scaler.transform(X_test)
y_scaler.fit(y_train)
y_train = y_scaler.transform(y_train)
y_test = y_scaler.transform(y_test)
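# --- Illustrative aside ---
# Sanity check: inverse_transform undoes the scaling, so the scaled training targets
# map back to the original closing prices.
assert np.allclose(y_scaler.inverse_transform(y_train), y[:split])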
# Reshape the features for the model
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
###Output
_____no_output_____
###Markdown
---

## Build and Train the LSTM RNN

In this section, you will design a custom LSTM RNN and fit (train) it using the training data.

You will need to:

1. Define the model architecture
2. Compile the model
3. Fit the model to the training data

Hints: You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
model = Sequential()
number_units = 30
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train.shape[1], 1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# Compile the model
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
model.fit(X_train, y_train, epochs=10, shuffle=False, batch_size=1, verbose=1)
###Output
Epoch 1/10
372/372 [==============================] - 6s 6ms/step - loss: 0.0322
Epoch 2/10
372/372 [==============================] - 2s 6ms/step - loss: 0.0340
Epoch 3/10
372/372 [==============================] - 2s 6ms/step - loss: 0.0352
Epoch 4/10
372/372 [==============================] - 3s 7ms/step - loss: 0.0352
Epoch 5/10
372/372 [==============================] - 3s 7ms/step - loss: 0.0351
Epoch 6/10
372/372 [==============================] - 2s 6ms/step - loss: 0.0349
Epoch 7/10
372/372 [==============================] - 3s 7ms/step - loss: 0.0351
Epoch 8/10
372/372 [==============================] - 2s 6ms/step - loss: 0.0336
Epoch 9/10
372/372 [==============================] - 2s 7ms/step - loss: 0.0364
Epoch 10/10
372/372 [==============================] - 2s 6ms/step - loss: 0.0434
###Markdown
---

## Model Performance

In this section, you will evaluate the model using the test data.

You will need to:

1. Evaluate the model using the `X_test` and `y_test` data.
2. Use the X_test data to make predictions
3. Create a DataFrame of Real (y_test) vs predicted values.
4. Plot the Real vs predicted values as a line chart

Hints: Remember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
model.evaluate(X_test, y_test)
# Make some predictions
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = y_scaler.inverse_transform(predicted)
real_prices = y_scaler.inverse_transform(y_test.reshape(-1, 1))
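# --- Illustrative aside ---
# The evaluate() score above is an MSE on the scaled targets; an error measure in
# dollar terms is often easier to interpret, e.g. the mean absolute error.
from sklearn.metrics import mean_absolute_error
print(f"Test MAE in price units: {mean_absolute_error(real_prices, predicted_prices):.2f}")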
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
stocks.hvplot()
###Output
_____no_output_____
###Markdown
# LSTM Stock Predictor Using Fear and Greed Index

In this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price.

You will need to:

1. Prepare the data for training and testing
2. Build and train a custom LSTM RNN
3. Evaluate the performance of the model

## Data Preparation

In this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.

You will need to:

1. Use the `window_data` function to generate the X and y values for the model.
2. Split the data into 70% training and 30% testing
3. Apply the MinMaxScaler to the X and y values
4. Reshape the X_train and X_test data for the model.

Note: The required input format for the LSTM is:

```python
reshape((X_train.shape[0], X_train.shape[1], 1))
```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(X))
X_train = X[: split]
X_test = X[split:]
y_train = y[: split]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
scaler = MinMaxScaler()
scaler.fit(X)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
scaler.fit(y)
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
# Reshape the features for the model
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
print (f"X_train sample values:\n{X_train[:4]} \n")
print (f"X_test sample values:\n{X_test[:4]}")
###Output
X_train sample values:
[[[0.25287356]
[0.08045977]
[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.31395349]
[0.24418605]
[0.40697674]
[0.52325581]]
[[0.08045977]
[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.24418605]
[0.40697674]
[0.52325581]
[0.25581395]]
[[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.25287356]
[0.40697674]
[0.52325581]
[0.25581395]
[0.38372093]]
[[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.25287356]
[0.4137931 ]
[0.52325581]
[0.25581395]
[0.38372093]
[0.30232558]]]
X_test sample values:
[[[0.36781609]
[0.43678161]
[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.39534884]
[0.37209302]
[0.3372093 ]
[0.62790698]]
[[0.43678161]
[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37209302]
[0.3372093 ]
[0.62790698]
[0.65116279]]
[[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37931034]
[0.3372093 ]
[0.62790698]
[0.65116279]
[0.58139535]]
[[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37931034]
[0.34482759]
[0.62790698]
[0.65116279]
[0.58139535]
[0.58139535]]]
###Markdown
---

## Build and Train the LSTM RNN

In this section, you will design a custom LSTM RNN and fit (train) it using the training data.

You will need to:

1. Define the model architecture
2. Compile the model
3. Fit the model to the training data

Hints: You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
model = Sequential()
number_units = 30
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(units=number_units, return_sequences=True, input_shape=(X_train.shape[1], 1)))
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units, return_sequences=False))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# Compile the model
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
model.fit(X_train, y_train, epochs=50, shuffle=False, batch_size=2, verbose=1)
###Output
Epoch 1/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0412
Epoch 2/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0428
Epoch 3/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0419
Epoch 4/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0411
Epoch 5/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0423
Epoch 6/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0408
Epoch 7/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0405
Epoch 8/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0408
Epoch 9/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0406
Epoch 10/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0408
Epoch 11/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0405
Epoch 12/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0408
Epoch 13/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0396
Epoch 14/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0392
Epoch 15/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0403
Epoch 16/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0401
Epoch 17/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0391
Epoch 18/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0390
Epoch 19/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0384
Epoch 20/50
186/186 [==============================] - 2s 9ms/step - loss: 0.0407
Epoch 21/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0390
Epoch 22/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0384
Epoch 23/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0388
Epoch 24/50
186/186 [==============================] - 2s 9ms/step - loss: 0.0378
Epoch 25/50
186/186 [==============================] - 2s 9ms/step - loss: 0.0388
Epoch 26/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0383
Epoch 27/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0388
Epoch 28/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0380
Epoch 29/50
186/186 [==============================] - 2s 9ms/step - loss: 0.0368
Epoch 30/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0373
Epoch 31/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0370
Epoch 32/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0373
Epoch 33/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0373
Epoch 34/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0373
Epoch 35/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0366
Epoch 36/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0365
Epoch 37/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0378
Epoch 38/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0378
Epoch 39/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0367
Epoch 40/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0377
Epoch 41/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0361
Epoch 42/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0358
Epoch 43/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0378
Epoch 44/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0365
Epoch 45/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0363
Epoch 46/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0364
Epoch 47/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0369
Epoch 48/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0377
Epoch 49/50
186/186 [==============================] - 1s 8ms/step - loss: 0.0368
Epoch 50/50
186/186 [==============================] - 2s 8ms/step - loss: 0.0378
###Markdown
---

## Model Performance

In this section, you will evaluate the model using the test data.

You will need to:

1. Evaluate the model using the `X_test` and `y_test` data.
2. Use the X_test data to make predictions
3. Create a DataFrame of Real (y_test) vs predicted values.
4. Plot the Real vs predicted values as a line chart

Hints: Remember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
model.evaluate(X_test, y_test)
# Make some predictions
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
stocks.plot(title="The Real vs. Predicted Values")
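# --- Illustrative aside (not part of the original assignment) ---
# The comments above suggest experimenting with window sizes from 1 to 10. A minimal
# sweep is sketched here, assuming the `window_data` helper and `df` defined above; each
# size gets its own split, scalers fit on the training slice, and a small fresh model.
results = {}
for size in range(1, 11):
    Xw, yw = window_data(df, size, feature_column, target_column)
    cut = int(0.7 * len(Xw))
    x_sc = MinMaxScaler().fit(Xw[:cut])
    y_sc = MinMaxScaler().fit(yw[:cut])
    Xtr = x_sc.transform(Xw[:cut]).reshape(-1, size, 1)
    Xte = x_sc.transform(Xw[cut:]).reshape(-1, size, 1)
    ytr, yte = y_sc.transform(yw[:cut]), y_sc.transform(yw[cut:])
    m = Sequential([LSTM(10, input_shape=(size, 1)), Dense(1)])
    m.compile(optimizer="adam", loss="mean_squared_error")
    m.fit(Xtr, ytr, epochs=10, batch_size=4, shuffle=False, verbose=0)
    results[size] = m.evaluate(Xte, yte, verbose=0)
print(results)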
###Output
_____no_output_____
###Markdown
# LSTM Stock Predictor Using Fear and Greed Index

In this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price.

You will need to:

1. Prepare the data for training and testing
2. Build and train a custom LSTM RNN
3. Evaluate the performance of the model

## Data Preparation

In this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.

You will need to:

1. Use the `window_data` function to generate the X and y values for the model.
2. Split the data into 70% training and 30% testing
3. Apply the MinMaxScaler to the X and y values
4. Reshape the X_train and X_test data for the model.

Note: The required input format for the LSTM is:

```python
reshape((X_train.shape[0], X_train.shape[1], 1))
```
###Code
import numpy as np
import pandas as pd
# import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(X))
X_train = X[:split]
X_test = X[split:]
y_train = y[:split]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
scaler = MinMaxScaler()
# Fit the MinMaxScaler object with the feature data X
scaler.fit(X)
# Scale the features training and testing sets
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# Fit the MinMaxScaler object with the target data y
scaler.fit(y)
# Scale the target training and testing sets
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
# Reshape the features for the model
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
###Output
_____no_output_____
###Markdown
---

## Build and Train the LSTM RNN

In this section, you will design a custom LSTM RNN and fit (train) it using the training data.

You will need to:

1. Define the model architecture
2. Compile the model
3. Fit the model to the training data

Hints: You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
# Define the LSTM RNN model
model = Sequential()
# Initial model setup
number_units = 10
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train.shape[1],1))
)
model.add(Dropout(dropout_fraction))
#Layer 2
model.add(LSTM(
units=number_units,
return_sequences=True)
)
model.add(Dropout(dropout_fraction))
#Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
#Output layer
model.add(Dense(1))
# Compile the model
model.compile(optimizer="adam",loss="mean_squared_error")
# Summarize the model
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
model.fit(X_train, y_train, epochs=10,
shuffle=False, batch_size=90,
verbose=1
)
###Output
Epoch 1/10
5/5 [==============================] - 6s 20ms/step - loss: 0.1381
Epoch 2/10
5/5 [==============================] - 0s 13ms/step - loss: 0.1054
Epoch 3/10
5/5 [==============================] - 0s 14ms/step - loss: 0.0824
Epoch 4/10
5/5 [==============================] - 0s 13ms/step - loss: 0.0643
Epoch 5/10
5/5 [==============================] - 0s 13ms/step - loss: 0.0506
Epoch 6/10
5/5 [==============================] - 0s 13ms/step - loss: 0.0427
Epoch 7/10
5/5 [==============================] - 0s 13ms/step - loss: 0.0426
Epoch 8/10
5/5 [==============================] - 0s 13ms/step - loss: 0.0411
Epoch 9/10
5/5 [==============================] - 0s 13ms/step - loss: 0.0414
Epoch 10/10
5/5 [==============================] - 0s 13ms/step - loss: 0.0428
###Markdown
---

## Model Performance

In this section, you will evaluate the model using the test data.

You will need to:

1. Evaluate the model using the `X_test` and `y_test` data.
2. Use the X_test data to make predictions
3. Create a DataFrame of Real (y_test) vs predicted values.
4. Plot the Real vs predicted values as a line chart

Hints: Remember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
model.evaluate(X_test,y_test,verbose=0)
# Make some predictions
predicted=model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
stocks.plot()
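# --- Illustrative aside (not part of the original assignment) ---
# A single window can also be fed to the trained model on its own: here the final test
# window is reshaped to (1, time steps, features) and the prediction is converted back
# to a price with the scaler, which was last fit on the target values y.
last_window = X_test[-1].reshape(1, window_size, 1)
next_scaled = model.predict(last_window)
print(scaler.inverse_transform(next_scaled))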
###Output
_____no_output_____
###Markdown
# LSTM Stock Predictor Using Fear and Greed Index

In this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price.

You will need to:

1. Prepare the data for training and testing
2. Build and train a custom LSTM RNN
3. Evaluate the performance of the model

## Data Preparation

In this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.

You will need to:

1. Use the `window_data` function to generate the X and y values for the model.
2. Split the data into 70% training and 30% testing
3. Apply the MinMaxScaler to the X and y values
4. Reshape the X_train and X_test data for the model.

Note: The required input format for the LSTM is:

```python
reshape((X_train.shape[0], X_train.shape[1], 1))
```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
x = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
x.append(features)
y.append(target)
return np.array(x), np.array(y).reshape(-1, 1)
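# Tiny worked example (hypothetical toy frame, not assignment data): with 6 rows
# and a window of 2, window_data returns 6 - 2 - 1 = 3 samples of 2 features each.
_toy = pd.DataFrame({"fng": [10, 20, 30, 40, 50, 60], "close": [1, 2, 3, 4, 5, 6]})
_tx, _ty = window_data(_toy, 2, 0, 1)
print(_tx.shape, _ty.shape)  # expected: (3, 2) (3, 1)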
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
x, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(x))
x_train = x[: split]
x_test = x[split:]
y_train = y[: split]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
x_train_scaler = MinMaxScaler()
x_test_scaler = MinMaxScaler()
y_train_scaler = MinMaxScaler()
y_test_scaler = MinMaxScaler()
# Fit the scaler for the Training Data
x_train_scaler.fit(x_train)
y_train_scaler.fit(y_train)
# Scale the training data
x_train = x_train_scaler.transform(x_train)
y_train = y_train_scaler.transform(y_train)
# Fit the scaler for the Testing Data
x_test_scaler.fit(x_test)
y_test_scaler.fit(y_test)
# Scale the y_test data
x_test = x_test_scaler.transform(x_test)
y_test = y_test_scaler.transform(y_test)
# Reshape the features for the model
x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], 1))
x_test = x_test.reshape((x_test.shape[0], x_test.shape[1], 1))
###Output
_____no_output_____
###Markdown
--- Build and Train the LSTM RNNIn this section, you will design a custom LSTM RNN and fit (train) it using the training data.You will need to:1. Define the model architecture2. Compile the model3. Fit the model to the training data Hints:You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
# Define the LSTM RNN model.
model = Sequential()
# Initial model setup
number_units = 5
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(x_train.shape[1], 1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# Compile the model
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
# YOUR CODE HERE!
model.fit(x_train, y_train, epochs=10, shuffle=False, batch_size=1, verbose=1)
###Output
Epoch 1/10
372/372 [==============================] - 4s 5ms/step - loss: 0.0854
Epoch 2/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0476
Epoch 3/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0499
Epoch 4/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0481
Epoch 5/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0482
Epoch 6/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0462
Epoch 7/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0496
Epoch 8/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0475
Epoch 9/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0438
Epoch 10/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0456
###Markdown
--- Model PerformanceIn this section, you will evaluate the model using the test data. You will need to:1. Evaluate the model using the `X_test` and `y_test` data.2. Use the X_test data to make predictions3. Create a DataFrame of Real (y_test) vs predicted values. 4. Plot the Real vs predicted values as a line chart HintsRemember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
model.evaluate(x_test, y_test, verbose=1)
# Make some predictions
predicted = model.predict(x_test)
# Recover the original prices instead of the scaled version
predicted_prices = y_test_scaler.inverse_transform(predicted)
real_prices = y_test_scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
stocks.plot(title="Actual vs. Predicted BTC Closing Prices")
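# Optional sketch of the window-size experiment suggested earlier (the helper name
# build_fng_model and this sweep are my own additions, not part of the original
# solution): rebuild the windows for sizes 1-10, retrain a fresh copy of the same
# three-layer architecture, and compare the scaled test losses.
def build_fng_model(time_steps, units=5, dropout=0.2):
    m = Sequential()
    m.add(LSTM(units=units, return_sequences=True, input_shape=(time_steps, 1)))
    m.add(Dropout(dropout))
    m.add(LSTM(units=units, return_sequences=True))
    m.add(Dropout(dropout))
    m.add(LSTM(units=units))
    m.add(Dropout(dropout))
    m.add(Dense(1))
    m.compile(optimizer="adam", loss="mean_squared_error")
    return m
results = {}
for w in range(1, 11):
    Xw, yw = window_data(df, w, feature_column, target_column)
    s = int(0.7 * len(Xw))
    x_sc = MinMaxScaler().fit(Xw[:s])
    y_sc = MinMaxScaler().fit(yw[:s])
    Xw_train = x_sc.transform(Xw[:s]).reshape(-1, w, 1)
    Xw_test = x_sc.transform(Xw[s:]).reshape(-1, w, 1)
    yw_train, yw_test = y_sc.transform(yw[:s]), y_sc.transform(yw[s:])
    m = build_fng_model(w)
    m.fit(Xw_train, yw_train, epochs=10, shuffle=False, batch_size=1, verbose=0)
    results[w] = m.evaluate(Xw_test, yw_test, verbose=0)
print(pd.Series(results, name="scaled_test_mse").sort_values())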
###Output
_____no_output_____
###Markdown
LSTM Stock Predictor Using Fear and Greed IndexIn this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price. You will need to:1. Prepare the data for training and testing2. Build and train a custom LSTM RNN3. Evaluate the performance of the model Data PreparationIn this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.You will need to:1. Use the `window_data` function to generate the X and y values for the model.2. Split the data into 70% training and 30% testing3. Apply the MinMaxScaler to the X and y values4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:```pythonreshape((X_train.shape[0], X_train.shape[1], 1))```
###Code
!pip install hvplot
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
from google.colab import drive
#drive.mount("/content/gdrive")
df = pd.read_csv('/content/gdrive/My Drive/btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
from google.colab import drive
#drive.mount("/content/gdrive")
df2 = pd.read_csv('/content/gdrive/My Drive/btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(X))
X_train = X[: split]
X_test = X[split:]
y_train = y[: split]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
scaler = MinMaxScaler()
scaler.fit(X)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
scaler.fit(y)
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
# Reshape the features for the model
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
print (f"X_train sample values:\n{X_train[:5]} \n")
print (f"X_test sample values:\n{X_test[:5]}")
###Output
X_train sample values:
[[[0.25287356]
[0.08045977]
[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.31395349]
[0.24418605]
[0.40697674]
[0.52325581]]
[[0.08045977]
[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.24418605]
[0.40697674]
[0.52325581]
[0.25581395]]
[[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.25287356]
[0.40697674]
[0.52325581]
[0.25581395]
[0.38372093]]
[[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.25287356]
[0.4137931 ]
[0.52325581]
[0.25581395]
[0.38372093]
[0.30232558]]
[[0.03448276]
[0. ]
[0.32183908]
[0.25287356]
[0.4137931 ]
[0.52873563]
[0.25581395]
[0.38372093]
[0.30232558]
[0.53488372]]]
X_test sample values:
[[[0.36781609]
[0.43678161]
[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.39534884]
[0.37209302]
[0.3372093 ]
[0.62790698]]
[[0.43678161]
[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37209302]
[0.3372093 ]
[0.62790698]
[0.65116279]]
[[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37931034]
[0.3372093 ]
[0.62790698]
[0.65116279]
[0.58139535]]
[[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37931034]
[0.34482759]
[0.62790698]
[0.65116279]
[0.58139535]
[0.58139535]]
[[0.45977011]
[0.40229885]
[0.40229885]
[0.37931034]
[0.34482759]
[0.63218391]
[0.65116279]
[0.58139535]
[0.58139535]
[0.60465116]]]
###Markdown
--- Build and Train the LSTM RNNIn this section, you will design a custom LSTM RNN and fit (train) it using the training data.You will need to:1. Define the model architecture2. Compile the model3. Fit the model to the training data Hints:You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
model = Sequential()
number_units = 30
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train.shape[1], 1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# Compile the model
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
model.fit(X_train, y_train, epochs=10, shuffle=False, batch_size=1, verbose=1)
###Output
Epoch 1/10
372/372 [==============================] - 6s 7ms/step - loss: 0.0667
Epoch 2/10
372/372 [==============================] - 3s 7ms/step - loss: 0.0717
Epoch 3/10
372/372 [==============================] - 3s 7ms/step - loss: 0.0743
Epoch 4/10
372/372 [==============================] - 3s 7ms/step - loss: 0.0777
Epoch 5/10
372/372 [==============================] - 3s 7ms/step - loss: 0.0724
Epoch 6/10
372/372 [==============================] - 3s 7ms/step - loss: 0.0735
Epoch 7/10
372/372 [==============================] - 3s 7ms/step - loss: 0.0708
Epoch 8/10
372/372 [==============================] - 3s 7ms/step - loss: 0.0682
Epoch 9/10
372/372 [==============================] - 3s 8ms/step - loss: 0.0642
Epoch 10/10
372/372 [==============================] - 3s 7ms/step - loss: 0.0712
###Markdown
--- Model PerformanceIn this section, you will evaluate the model using the test data. You will need to:1. Evaluate the model using the `X_test` and `y_test` data.2. Use the X_test data to make predictions3. Create a DataFrame of Real (y_test) vs predicted values. 4. Plot the Real vs predicted values as a line chart HintsRemember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
model.evaluate(X_test, y_test)
# Make some predictions
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
stocks.plot()
###Output
_____no_output_____
###Markdown
LSTM Stock Predictor Using Fear and Greed IndexIn this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price. You will need to:1. Prepare the data for training and testing2. Build and train a custom LSTM RNN3. Evaluate the performance of the model Data PreparationIn this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.You will need to:1. Use the `window_data` function to generate the X and y values for the model.2. Split the data into 70% training and 30% testing3. Apply the MinMaxScaler to the X and y values4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:```pythonreshape((X_train.shape[0], X_train.shape[1], 1))```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB'))
# # df[2] = pd.Series({"A": 6, "B": 7}, index=[2])
# df.loc[2] = [7, 7]
# df
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('Resources/btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('Resources/btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
# df.loc[pd.to_datetime("2020-12-30")] = [0, 0]
# df.tail()
# from datetime import timedelta
# pd.to_datetime("2020-12-30") + timedelta(days=3)
# pd.to_datetime( datetime.now().strftime("%Y-%m-%d") )
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(X))
X_train = X[: split]
X_test = X[split:]
y_train = y[: split]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
scaler = MinMaxScaler()
scaler.fit(X)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
scaler.fit(y)
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
# Reshape the features for the model
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
print (f"X_train sample values:\n{X_train[:5]} \n")
print (f"X_test sample values:\n{X_test[:5]}")
###Output
X_train sample values:
[[[0.25287356]
[0.08045977]
[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.31395349]
[0.24418605]
[0.40697674]
[0.52325581]]
[[0.08045977]
[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.24418605]
[0.40697674]
[0.52325581]
[0.25581395]]
[[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.25287356]
[0.40697674]
[0.52325581]
[0.25581395]
[0.38372093]]
[[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.25287356]
[0.4137931 ]
[0.52325581]
[0.25581395]
[0.38372093]
[0.30232558]]
[[0.03448276]
[0. ]
[0.32183908]
[0.25287356]
[0.4137931 ]
[0.52873563]
[0.25581395]
[0.38372093]
[0.30232558]
[0.53488372]]]
X_test sample values:
[[[0.36781609]
[0.43678161]
[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.39534884]
[0.37209302]
[0.3372093 ]
[0.62790698]]
[[0.43678161]
[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37209302]
[0.3372093 ]
[0.62790698]
[0.65116279]]
[[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37931034]
[0.3372093 ]
[0.62790698]
[0.65116279]
[0.58139535]]
[[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37931034]
[0.34482759]
[0.62790698]
[0.65116279]
[0.58139535]
[0.58139535]]
[[0.45977011]
[0.40229885]
[0.40229885]
[0.37931034]
[0.34482759]
[0.63218391]
[0.65116279]
[0.58139535]
[0.58139535]
[0.60465116]]]
###Markdown
--- Build and Train the LSTM RNNIn this section, you will design a custom LSTM RNN and fit (train) it using the training data.You will need to:1. Define the model architecture2. Compile the model3. Fit the model to the training data Hints:You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
model = Sequential()
number_units = 5
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train.shape[1], 1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# Compile the model
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
model.fit(X_train, y_train, epochs=10, shuffle=False, batch_size=1, verbose=1)
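# Optional sketch (the EarlyStopping usage here is my own addition, not part of the
# original solution): the dropout layers above help with overfitting, and a
# validation split with early stopping is another way to decide when to stop
# training. The fit call is left commented out so the model trained above is not
# modified by this cell.
from tensorflow.keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor="val_loss", patience=3, restore_best_weights=True)
# model.fit(X_train, y_train, epochs=50, shuffle=False, batch_size=1,
#           validation_split=0.1, callbacks=[early_stop], verbose=1)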
###Output
Epoch 1/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0573
Epoch 2/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0347
Epoch 3/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0362
Epoch 4/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0349
Epoch 5/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0349
Epoch 6/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0334
Epoch 7/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0356
Epoch 8/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0339
Epoch 9/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0312
Epoch 10/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0321
###Markdown
--- Model PerformanceIn this section, you will evaluate the model using the test data. You will need to:1. Evaluate the model using the `X_test` and `y_test` data.2. Use the X_test data to make predictions3. Create a DataFrame of Real (y_test) vs predicted values. 4. Plot the Real vs predicted values as a line chart HintsRemember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
model.evaluate(X_test, y_test)
# Make some predictions
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
stocks.plot()
###Output
_____no_output_____
###Markdown
LSTM Stock Predictor Using Fear and Greed IndexIn this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price. You will need to:1. Prepare the data for training and testing2. Build and train a custom LSTM RNN3. Evaluate the performance of the model Data PreparationIn this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.You will need to:1. Use the `window_data` function to generate the X and y values for the model.2. Split the data into 70% training and 30% testing3. Apply the MinMaxScaler to the X and y values4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:```pythonreshape((X_train.shape[0], X_train.shape[1], 1))```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(X))
X_train = X[: split]
X_test = X[split:]
y_train = y[: split]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
# Create a MinMaxScaler object
scaler = MinMaxScaler()
# Fit the MinMaxScaler object with the features data X
scaler.fit(X)
# Scale the features training and testing sets
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# Fit the MinMaxScaler object with the target data Y
scaler.fit(y)
# Scale the target training and testing sets
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
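# Note (my observation, not part of the original solution): fitting the scaler on
# the full X and y lets the test period influence the scaling. A leakage-free
# variant would fit separate scalers on the training slices only, for example:
#   x_scaler = MinMaxScaler().fit(X_train); y_scaler = MinMaxScaler().fit(y_train)
# and then transform both the training and testing sets with those fitted scalers.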
# Reshape the features for the model
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
# Print some sample data after reshaping the datasets
print (f"X_train sample values:\n{X_train[:3]} \n")
print (f"X_test sample values:\n{X_test[:3]}")
###Output
X_train sample values:
[[[0.25287356]
[0.08045977]
[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.31395349]
[0.24418605]
[0.40697674]
[0.52325581]]
[[0.08045977]
[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.24418605]
[0.40697674]
[0.52325581]
[0.25581395]]
[[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.25287356]
[0.40697674]
[0.52325581]
[0.25581395]
[0.38372093]]]
X_test sample values:
[[[0.36781609]
[0.43678161]
[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.39534884]
[0.37209302]
[0.3372093 ]
[0.62790698]]
[[0.43678161]
[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37209302]
[0.3372093 ]
[0.62790698]
[0.65116279]]
[[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37931034]
[0.3372093 ]
[0.62790698]
[0.65116279]
[0.58139535]]]
###Markdown
--- Build and Train the LSTM RNNIn this section, you will design a custom LSTM RNN and fit (train) it using the training data.You will need to:1. Define the model architecture2. Compile the model3. Fit the model to the training data Hints:You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
model = Sequential()
# Initial model setup
number_units = 30
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train.shape[1], 1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# Compile the model
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
model.fit(X_train, y_train, epochs=10, shuffle=False, batch_size=1, verbose=1)
###Output
Epoch 1/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0248
Epoch 2/10
372/372 [==============================] - 2s 4ms/step - loss: 0.0303
Epoch 3/10
372/372 [==============================] - 2s 4ms/step - loss: 0.0292
Epoch 4/10
372/372 [==============================] - 2s 4ms/step - loss: 0.0284
Epoch 5/10
372/372 [==============================] - 2s 4ms/step - loss: 0.0302
Epoch 6/10
372/372 [==============================] - 2s 4ms/step - loss: 0.0366
Epoch 7/10
372/372 [==============================] - 2s 4ms/step - loss: 0.0417
Epoch 8/10
372/372 [==============================] - 2s 4ms/step - loss: 0.0403
Epoch 9/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0387
Epoch 10/10
372/372 [==============================] - 2s 5ms/step - loss: 0.0376
###Markdown
--- Model PerformanceIn this section, you will evaluate the model using the test data. You will need to:1. Evaluate the model using the `X_test` and `y_test` data.2. Use the X_test data to make predictions3. Create a DataFrame of Real (y_test) vs predicted values. 4. Plot the Real vs predicted values as a line chart HintsRemember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
model.evaluate(X_test, y_test)
# Make some predictions
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
stocks.plot(title="Real vs. Predicted Values")
###Output
_____no_output_____
###Markdown
LSTM Stock Predictor Using Fear and Greed IndexIn this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price. You will need to:1. Prepare the data for training and testing2. Build and train a custom LSTM RNN3. Evaluate the performance of the model Data PreparationIn this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.You will need to:1. Use the `window_data` function to generate the X and y values for the model.2. Split the data into 70% training and 30% testing3. Apply the MinMaxScaler to the X and y values4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:```pythonreshape((X_train.shape[0], X_train.shape[1], 1))```
###Code
import numpy as np
import pandas as pd
#import hvplot.pandas
%matplotlib inline
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
# YOUR CODE HERE!
split = int(0.7 * len(X))
X_train = X[: split - 1]
X_test = X[split:]
y_train = y[: split - 1]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
# YOUR CODE HERE!
scaler = MinMaxScaler()
scaler.fit(X)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
scaler.fit(y)
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
# Reshape the features for the model
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
print (f"X_train sample values:\n{X_train[:1]} \n")
print (f"X_test sample values:\n{X_test[:1]}")
###Output
X_train sample values:
[[[0.25287356]
[0.08045977]
[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.31395349]
[0.24418605]
[0.40697674]
[0.52325581]]]
X_test sample values:
[[[0.36781609]
[0.43678161]
[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.39534884]
[0.37209302]
[0.3372093 ]
[0.62790698]]]
###Markdown
--- Build and Train the LSTM RNNIn this section, you will design a custom LSTM RNN and fit (train) it using the training data.You will need to:1. Define the model architecture2. Compile the model3. Fit the model to the training data Hints:You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
# YOUR CODE HERE!
model = Sequential()
number_units = 10
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train.shape[1], 1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# Compile the model
# YOUR CODE HERE!
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
# YOUR CODE HERE!
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
# YOUR CODE HERE!
model.fit(X_train, y_train, epochs=10, shuffle=False, batch_size=1, verbose=1)
###Output
Epoch 1/10
371/371 [==============================] - 6s 7ms/step - loss: 0.0980
Epoch 2/10
371/371 [==============================] - 2s 7ms/step - loss: 0.0887
Epoch 3/10
371/371 [==============================] - 3s 7ms/step - loss: 0.0871
Epoch 4/10
371/371 [==============================] - 2s 7ms/step - loss: 0.0846
Epoch 5/10
371/371 [==============================] - 3s 7ms/step - loss: 0.0843
Epoch 6/10
371/371 [==============================] - 2s 7ms/step - loss: 0.0844
Epoch 7/10
371/371 [==============================] - 2s 7ms/step - loss: 0.0800
Epoch 8/10
371/371 [==============================] - 3s 7ms/step - loss: 0.0835
Epoch 9/10
371/371 [==============================] - 3s 7ms/step - loss: 0.0793
Epoch 10/10
371/371 [==============================] - 2s 7ms/step - loss: 0.0786
###Markdown
--- Model PerformanceIn this section, you will evaluate the model using the test data. You will need to:1. Evaluate the model using the `X_test` and `y_test` data.2. Use the X_test data to make predictions3. Create a DataFrame of Real (y_test) vs predicted values. 4. Plot the Real vs predicted values as a line chart HintsRemember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
# YOUR CODE HERE!
model.evaluate(X_test, y_test)
# Make some predictions
# YOUR CODE HERE!
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
stocks.plot()
###Output
_____no_output_____
###Markdown
LSTM Stock Predictor Using Fear and Greed IndexIn this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price. You will need to:1. Prepare the data for training and testing2. Build and train a custom LSTM RNN3. Evaluate the performance of the model Data PreparationIn this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.You will need to:1. Use the `window_data` function to generate the X and y values for the model.2. Split the data into 70% training and 30% testing3. Apply the MinMaxScaler to the X and y values4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:```pythonreshape((X_train.shape[0], X_train.shape[1], 1))```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
%matplotlib inline
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of fear and greed index values and a target of the 11th day closing price
# Try a window size anywhere from 1 to 10 and see how the model performance changes
window_size = 1
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
# YOUR CODE HERE!
split = int(0.7 * len(X))
X_train = X[: split - 1]
X_test = X[split:]
y_train = y[: split - 1]
y_test = y[split:]
# Use MinMaxScaler to scale the data between 0 and 1.
# YOUR CODE HERE!
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(X)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
scaler.fit(y)
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
# Reshape the features for the model
# YOUR CODE HERE!
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
print (f"X_train sample values:\n{X_train[:5]} \n")
print (f"X_test sample values:\n{X_test[:5]}")
###Output
X_train sample values:
[[[0.25287356]]
[[0.08045977]]
[[0.36781609]]
[[0.18390805]]
[[0.03448276]]]
X_test sample values:
[[[0.40229885]]
[[0.37931034]]
[[0.34482759]]
[[0.63218391]]
[[0.65517241]]]
###Markdown
--- Build and Train the LSTM RNNIn this section, you will design a custom LSTM RNN and fit (train) it using the training data.You will need to:1. Define the model architecture2. Compile the model3. Fit the model to the training data Hints:You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# YOUR CODE HERE!
model = Sequential()
number_units = 20
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train.shape[1], 1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# Compile the model
# YOUR CODE HERE!
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
# YOUR CODE HERE!
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
# YOUR CODE HERE!
model.fit(X_train, y_train, epochs=40, shuffle=False, batch_size=1, verbose=1)
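# Optional sketch (my addition): capture the History object returned by fit to
# inspect the 40-epoch loss curve, which in the run below bottoms out early and
# then drifts upward, e.g.
#   history = model.fit(X_train, y_train, epochs=40, shuffle=False, batch_size=1, verbose=1)
#   pd.DataFrame(history.history)["loss"].plot(title="Training loss per epoch")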
###Output
Epoch 1/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0610
Epoch 2/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0325
Epoch 3/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0312
Epoch 4/40
377/377 [==============================] - 1s 2ms/step - loss: 0.0291
Epoch 5/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0272
Epoch 6/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0272
Epoch 7/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0270
Epoch 8/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0282
Epoch 9/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0264
Epoch 10/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0272
Epoch 11/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0268
Epoch 12/40
377/377 [==============================] - 1s 2ms/step - loss: 0.0278
Epoch 13/40
377/377 [==============================] - 1s 2ms/step - loss: 0.0286
Epoch 14/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0280
Epoch 15/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0282
Epoch 16/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0290
Epoch 17/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0305
Epoch 18/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0298
Epoch 19/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0301
Epoch 20/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0303
Epoch 21/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0307
Epoch 22/40
377/377 [==============================] - 1s 2ms/step - loss: 0.0320
Epoch 23/40
377/377 [==============================] - 1s 2ms/step - loss: 0.0309
Epoch 24/40
377/377 [==============================] - 1s 2ms/step - loss: 0.0317
Epoch 25/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0311
Epoch 26/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0321
Epoch 27/40
377/377 [==============================] - 1s 2ms/step - loss: 0.0313
Epoch 28/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0313
Epoch 29/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0314
Epoch 30/40
377/377 [==============================] - 1s 2ms/step - loss: 0.0327
Epoch 31/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0327
Epoch 32/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0338
Epoch 33/40
377/377 [==============================] - 1s 2ms/step - loss: 0.0308
Epoch 34/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0322
Epoch 35/40
377/377 [==============================] - 1s 2ms/step - loss: 0.0334
Epoch 36/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0341
Epoch 37/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0327
Epoch 38/40
377/377 [==============================] - 1s 2ms/step - loss: 0.0343
Epoch 39/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0325
Epoch 40/40
377/377 [==============================] - 1s 3ms/step - loss: 0.0324
###Markdown
--- Model PerformanceIn this section, you will evaluate the model using the test data. You will need to:1. Evaluate the model using the `X_test` and `y_test` data.2. Use the X_test data to make predictions3. Create a DataFrame of Real (y_test) vs predicted values. 4. Plot the Real vs predicted values as a line chart HintsRemember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
# YOUR CODE HERE!
model.evaluate(X_test, y_test)
# Make some predictions
# YOUR CODE HERE!
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
})
stocks.head()
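# Note (my addition): unlike the earlier runs, this DataFrame keeps the default
# integer index. To plot against dates instead, the same slice of the source index
# used in the other cells could be attached, e.g.
#   stocks.index = df.index[-len(real_prices):]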
import matplotlib.pyplot as plt
# Plot the real vs predicted values as a line chart
# YOUR CODE HERE!
plt.plot(stocks)
plt.title('Model Using FNG', fontsize=20)
plt.legend(['Real','Predicted'],loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
LSTM Stock Predictor Using Fear and Greed IndexIn this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price. You will need to:1. Prepare the data for training and testing2. Build and train a custom LSTM RNN3. Evaluate the performance of the model Data PreparationIn this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.You will need to:1. Use the `window_data` function to generate the X and y values for the model.2. Split the data into 70% training and 30% testing3. Apply the MinMaxScaler to the X and y values4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:```pythonreshape((X_train.shape[0], X_train.shape[1], 1))```
###Code
import numpy as np
import pandas as pd
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 3
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(X))
X_train = X[: split]
X_test = X[split:]
y_train = y[: split]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
scaler = MinMaxScaler()
scaler.fit(X)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
scaler.fit(y)
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
# Reshape the features for the model
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
print (f"X_train sample values:\n{X_train[:5]} \n")
print (f"X_test sample values:\n{X_test[:5]}")
###Output
X_train sample values:
[[[0.25287356]
[0.08045977]
[0.36781609]]
[[0.08045977]
[0.36781609]
[0.18390805]]
[[0.36781609]
[0.18390805]
[0.03448276]]
[[0.18390805]
[0.03448276]
[0. ]]
[[0.03448276]
[0. ]
[0.32183908]]]
X_test sample values:
[[[0.40229885]
[0.40229885]
[0.37931034]]
[[0.40229885]
[0.37931034]
[0.34482759]]
[[0.37931034]
[0.34482759]
[0.63218391]]
[[0.34482759]
[0.63218391]
[0.65517241]]
[[0.63218391]
[0.65517241]
[0.5862069 ]]]
###Markdown
--- Build and Train the LSTM RNNIn this section, you will design a custom LSTM RNN and fit (train) it using the training data.You will need to:1. Define the model architecture2. Compile the model3. Fit the model to the training data Hints:You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
model = Sequential()
# Layer 1
model.add(LSTM(units=30, return_sequences=True, input_shape=(X_train.shape[1], 1)))
model.add(Dropout(0.2))
# Layer 2
model.add(LSTM(units=30, return_sequences=True))
model.add(Dropout(0.2))
# Layer 3
model.add(LSTM(units=30))
model.add(Dropout(0.2))
# Output layer
model.add(Dense(units=1, activation='sigmoid'))
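# Note (my addition): a sigmoid output keeps predictions in (0, 1), which matches
# the MinMax-scaled target here; the other runs use the default linear output.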
# Compile the model
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
model.fit(X_train, y_train, epochs=10, shuffle=False, batch_size=1, verbose=1)
###Output
Epoch 1/10
377/377 [==============================] - 2s 5ms/step - loss: 0.0434
Epoch 2/10
377/377 [==============================] - 2s 5ms/step - loss: 0.0330
Epoch 3/10
377/377 [==============================] - 2s 4ms/step - loss: 0.0334
Epoch 4/10
377/377 [==============================] - 2s 4ms/step - loss: 0.0320
Epoch 5/10
377/377 [==============================] - 2s 4ms/step - loss: 0.0325
Epoch 6/10
377/377 [==============================] - 2s 5ms/step - loss: 0.0327
Epoch 7/10
377/377 [==============================] - 2s 4ms/step - loss: 0.0323
Epoch 8/10
377/377 [==============================] - 1s 4ms/step - loss: 0.0324
Epoch 9/10
377/377 [==============================] - 2s 4ms/step - loss: 0.0320
Epoch 10/10
377/377 [==============================] - 2s 4ms/step - loss: 0.0324
###Markdown
--- Model PerformanceIn this section, you will evaluate the model using the test data. You will need to:1. Evaluate the model using the `X_test` and `y_test` data.2. Use the X_test data to make predictions3. Create a DataFrame of Real (y_test) vs predicted values. 4. Plot the Real vs predicted values as a line chart HintsRemember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
model.evaluate(X_test, y_test)
# Make some predictions
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
stocks.plot()
###Output
_____no_output_____
###Markdown
LSTM Stock Predictor Using Fear and Greed IndexIn this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price. You will need to:1. Prepare the data for training and testing2. Build and train a custom LSTM RNN3. Evaluate the performance of the model Data PreparationIn this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.You will need to:1. Use the `window_data` function to generate the X and y values for the model.2. Split the data into 70% training and 30% testing3. Apply the MinMaxScaler to the X and y values4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:```pythonreshape((X_train.shape[0], X_train.shape[1], 1))```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
%matplotlib inline
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
# YOUR CODE HERE!
split = int(0.7 * len(X))
X_train = X[: split - 1]
X_test = X[split:]
y_train = y[: split - 1]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
# YOUR CODE HERE!
# Create a MinMaxScaler object
scaler = MinMaxScaler()
# Fit the MinMaxScaler object with the features data X
scaler.fit(X)
# Scale the features training and testing sets
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# Fit the MinMaxScaler object with the target data y
scaler.fit(y)
# Scale the target training and testing sets
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
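# Note: the same scaler object is re-fit on y here, overwriting the earlier X fit, so the
# inverse_transform calls later in this notebook map predictions back to closing prices.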
# Reshape the features for the model
# YOUR CODE HERE!
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
# Print some sample data after reshaping the datasets
print(f"X_train sample values:\n{X_train[:3]} \n")
print(f"X_test sample values:\n{X_test[:3]}")
###Output
X_train sample values:
[[[0.25287356]
[0.08045977]
[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.31395349]
[0.24418605]
[0.40697674]
[0.52325581]]
[[0.08045977]
[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.24418605]
[0.40697674]
[0.52325581]
[0.25581395]]
[[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.25287356]
[0.40697674]
[0.52325581]
[0.25581395]
[0.38372093]]]
X_test sample values:
[[[0.36781609]
[0.43678161]
[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.39534884]
[0.37209302]
[0.3372093 ]
[0.62790698]]
[[0.43678161]
[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37209302]
[0.3372093 ]
[0.62790698]
[0.65116279]]
[[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37931034]
[0.3372093 ]
[0.62790698]
[0.65116279]
[0.58139535]]]
###Markdown
--- Build and Train the LSTM RNNIn this section, you will design a custom LSTM RNN and fit (train) it using the training data.You will need to:1. Define the model architecture2. Compile the model3. Fit the model to the training data Hints:You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
# YOUR CODE HERE!
# Define the LSTM RNN Model
model = Sequential()
# Initial model setup
number_units = 30
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train.shape[1], 1)))
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# Compile the model
# YOUR CODE HERE!
model.compile(optimizer='adam', loss='mean_squared_error')
# Summarize the model
# YOUR CODE HERE!
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
# YOUR CODE HERE!
model.fit(X_train, y_train, epochs=10, shuffle=False, batch_size=100, verbose=1)
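# With batch_size=100, Keras runs ceil(n_train_samples / 100) batches per epoch,
# which is why the log below shows only 4 steps per epoch.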
###Output
Epoch 1/10
4/4 [==============================] - 0s 20ms/step - loss: 0.1493
Epoch 2/10
4/4 [==============================] - 0s 13ms/step - loss: 0.1105
Epoch 3/10
4/4 [==============================] - 0s 12ms/step - loss: 0.0793
Epoch 4/10
4/4 [==============================] - 0s 13ms/step - loss: 0.0550
Epoch 5/10
4/4 [==============================] - 0s 12ms/step - loss: 0.0446
Epoch 6/10
4/4 [==============================] - 0s 12ms/step - loss: 0.0386
Epoch 7/10
4/4 [==============================] - 0s 13ms/step - loss: 0.0355
Epoch 8/10
4/4 [==============================] - 0s 14ms/step - loss: 0.0335
Epoch 9/10
4/4 [==============================] - 0s 12ms/step - loss: 0.0333
Epoch 10/10
4/4 [==============================] - 0s 12ms/step - loss: 0.0357
###Markdown
--- Model PerformanceIn this section, you will evaluate the model using the test data. You will need to:1. Evaluate the model using the `X_test` and `y_test` data.2. Use the X_test data to make predictions3. Create a DataFrame of Real (y_test) vs predicted values. 4. Plot the Real vs predicted values as a line chart HintsRemember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
# YOUR CODE HERE!
model.evaluate(X_test, y_test, verbose=1)
# Make some predictions
# YOUR CODE HERE!
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# had to change this ^ from the original starter code
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
# YOUR CODE HERE!
stocks.plot(title="Actual vs. Predicted BTC Prices Based on Sentiment")
###Output
_____no_output_____
###Markdown
LSTM Stock Predictor Using Fear and Greed IndexIn this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price. You will need to:1. Prepare the data for training and testing2. Build and train a custom LSTM RNN3. Evaluate the performance of the model Data PreparationIn this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.You will need to:1. Use the `window_data` function to generate the X and y values for the model.2. Split the data into 70% training and 30% testing3. Apply the MinMaxScaler to the X and y values4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:```pythonreshape((X_train.shape[0], X_train.shape[1], 1))```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
# YOUR CODE HERE!
split = int(0.7 * len(X))
X_train = X[: split - 1]
X_test = X[split:]
y_train = y[: split - 1]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
# YOUR CODE HERE!
scaler = MinMaxScaler()
scaler.fit(X)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
scaler.fit(y)
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
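# Note: the scaler is fit on the full X and y (train + test) before transforming, which leaks
# the test period's min/max into the training data; fitting on the training split only avoids this.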
# Reshape the features for the model
# YOUR CODE HERE!
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
###Output
_____no_output_____
###Markdown
--- Build and Train the LSTM RNNIn this section, you will design a custom LSTM RNN and fit (train) it using the training data.You will need to:1. Define the model architecture2. Compile the model3. Fit the model to the training data Hints:You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
# YOUR CODE HERE!
model = Sequential()
number_units = 10
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units = number_units,
return_sequences = True,
input_shape = (X_train.shape[1],1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(
units = number_units,
return_sequences = True,
))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(
units = number_units,
return_sequences = False,
))
model.add(Dropout(dropout_fraction))
model.add(Dense(1))
# Compile the model
# YOUR CODE HERE!
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
# YOUR CODE HERE!
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
# YOUR CODE HERE!
epochs = 50
batch_size = 1
model.fit(X_train, y_train, epochs=epochs, shuffle=False, batch_size=batch_size, verbose=1)
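# With batch_size=1 there is one weight update per training sample, so each of the 50 epochs
# in the log below runs 371 steps (one per sample in the training set).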
###Output
Epoch 1/50
371/371 [==============================] - 12s 8ms/step - loss: 0.0327
Epoch 2/50
371/371 [==============================] - 3s 8ms/step - loss: 0.0308
Epoch 3/50
371/371 [==============================] - 3s 8ms/step - loss: 0.0298
Epoch 4/50
371/371 [==============================] - 3s 8ms/step - loss: 0.0291
Epoch 5/50
371/371 [==============================] - 3s 8ms/step - loss: 0.0298
Epoch 6/50
371/371 [==============================] - 3s 8ms/step - loss: 0.0299
Epoch 7/50
371/371 [==============================] - 3s 8ms/step - loss: 0.0289
Epoch 8/50
371/371 [==============================] - 3s 8ms/step - loss: 0.0303
Epoch 9/50
371/371 [==============================] - 3s 8ms/step - loss: 0.0299
Epoch 10/50
371/371 [==============================] - 3s 8ms/step - loss: 0.0304
Epoch 11/50
371/371 [==============================] - 3s 8ms/step - loss: 0.0333
Epoch 12/50
371/371 [==============================] - 3s 8ms/step - loss: 0.0348
Epoch 13/50
371/371 [==============================] - 3s 8ms/step - loss: 0.0347
Epoch 14/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0371
Epoch 15/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0367
Epoch 16/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0370
Epoch 17/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0381
Epoch 18/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0377
Epoch 19/50
371/371 [==============================] - 2s 6ms/step - loss: 0.0376
Epoch 20/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0373
Epoch 21/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0379
Epoch 22/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0355
Epoch 23/50
371/371 [==============================] - 2s 6ms/step - loss: 0.0356
Epoch 24/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0369
Epoch 25/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0366
Epoch 26/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0355
Epoch 27/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0360
Epoch 28/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0385
Epoch 29/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0373
Epoch 30/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0372
Epoch 31/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0373
Epoch 32/50
371/371 [==============================] - 2s 6ms/step - loss: 0.0371
Epoch 33/50
371/371 [==============================] - 2s 6ms/step - loss: 0.0372
Epoch 34/50
371/371 [==============================] - 2s 7ms/step - loss: 0.0371
Epoch 35/50
371/371 [==============================] - 2s 6ms/step - loss: 0.0353
Epoch 36/50
371/371 [==============================] - 2s 6ms/step - loss: 0.0364
Epoch 37/50
371/371 [==============================] - 2s 6ms/step - loss: 0.0360
Epoch 38/50
371/371 [==============================] - 2s 6ms/step - loss: 0.0351
Epoch 39/50
371/371 [==============================] - 2s 6ms/step - loss: 0.0371
Epoch 40/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0363
Epoch 41/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0362
Epoch 42/50
371/371 [==============================] - 2s 6ms/step - loss: 0.0366
Epoch 43/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0350
Epoch 44/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0352
Epoch 45/50
371/371 [==============================] - 2s 6ms/step - loss: 0.0361
Epoch 46/50
371/371 [==============================] - 2s 5ms/step - loss: 0.0364
Epoch 47/50
371/371 [==============================] - 2s 6ms/step - loss: 0.0367
Epoch 48/50
371/371 [==============================] - 2s 6ms/step - loss: 0.0358
Epoch 49/50
371/371 [==============================] - 2s 6ms/step - loss: 0.0355
Epoch 50/50
371/371 [==============================] - 2s 6ms/step - loss: 0.0343
###Markdown
--- Model PerformanceIn this section, you will evaluate the model using the test data. You will need to:1. Evaluate the model using the `X_test` and `y_test` data.2. Use the X_test data to make predictions3. Create a DataFrame of Real (y_test) vs predicted values. 4. Plot the Real vs predicted values as a line chart HintsRemember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
# YOUR CODE HERE!
model.evaluate(X_test, y_test)
# Make some predictions
# YOUR CODE HERE!
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
# YOUR CODE HERE!
stocks.hvplot.line(xlabel="Date",
ylabel="Price")
###Output
_____no_output_____
###Markdown
LSTM Stock Predictor Using Fear and Greed IndexIn this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price. You will need to:1. Prepare the data for training and testing2. Build and train a custom LSTM RNN3. Evaluate the performance of the model Data PreparationIn this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.You will need to:1. Use the `window_data` function to generate the X and y values for the model.2. Split the data into 70% training and 30% testing3. Apply the MinMaxScaler to the X and y values4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:```pythonreshape((X_train.shape[0], X_train.shape[1], 1))```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv("btc_sentiment.csv", index_col = "date", infer_datetime_format = True, parse_dates = True)
df = df.drop(columns = "fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv("btc_historic.csv", index_col = "Date", infer_datetime_format = True, parse_dates = True)["Close"]
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how = "inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(X))
X_train = X[: split - 1]
X_test = X[split:]
y_train = y[: split - 1]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
scaler = MinMaxScaler()
scaler.fit(X)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
scaler.fit(y)
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
# Reshape the features for the model
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
###Output
_____no_output_____
###Markdown
--- Build and Train the LSTM RNNIn this section, you will design a custom LSTM RNN and fit (train) it using the training data.You will need to:1. Define the model architecture2. Compile the model3. Fit the model to the training data Hints:You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
model = Sequential()
number_units = 30
dropout_fraction = 0.2
# Layer 1
model.add(LSTM( units = number_units, return_sequences = True, input_shape = (X_train.shape[1],1)))
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units = number_units, return_sequences = True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units = number_units, return_sequences = False))
model.add(Dropout(dropout_fraction))
model.add(Dense(1))
# Compile the model
model.compile(optimizer = "adam", loss = "mean_squared_error")
# Summarize the model
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
model.fit(X_train, y_train, epochs = 10, batch_size = 10, shuffle = False, verbose = 1)
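# With batch_size=10, each epoch runs ceil(n_train_samples / 10) batches, hence the 38 steps
# per epoch shown in the log below.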
###Output
Epoch 1/10
38/38 [==============================] - 5s 13ms/step - loss: 0.0901
Epoch 2/10
38/38 [==============================] - 1s 16ms/step - loss: 0.0737
Epoch 3/10
38/38 [==============================] - 0s 12ms/step - loss: 0.0565
Epoch 4/10
38/38 [==============================] - 0s 11ms/step - loss: 0.0579
Epoch 5/10
38/38 [==============================] - 0s 11ms/step - loss: 0.0563
Epoch 6/10
38/38 [==============================] - 0s 12ms/step - loss: 0.0490
Epoch 7/10
38/38 [==============================] - 0s 12ms/step - loss: 0.0514
Epoch 8/10
38/38 [==============================] - 0s 12ms/step - loss: 0.0491
Epoch 9/10
38/38 [==============================] - 0s 11ms/step - loss: 0.0481
Epoch 10/10
38/38 [==============================] - 0s 11ms/step - loss: 0.0466
###Markdown
--- Model PerformanceIn this section, you will evaluate the model using the test data. You will need to:1. Evaluate the model using the `X_test` and `y_test` data.2. Use the X_test data to make predictions3. Create a DataFrame of Real (y_test) vs predicted values. 4. Plot the Real vs predicted values as a line chart HintsRemember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
model.evaluate(X_test, y_test, verbose = 1)
# Make some predictions
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
stocks.hvplot.line(xlabel = "Date", ylabel = "Price", title = "Actual vs. Predicted BTC Closing Prices")
###Output
_____no_output_____
###Markdown
LSTM Stock Predictor Using Fear and Greed IndexIn this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price. You will need to:1. Prepare the data for training and testing2. Build and train a custom LSTM RNN3. Evaluate the performance of the model Data PreparationIn this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.You will need to:1. Use the `window_data` function to generate the X and y values for the model.2. Split the data into 70% training and 30% testing3. Apply the MinMaxScaler to the X and y values4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:```pythonreshape((X_train.shape[0], X_train.shape[1], 1))```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
# YOUR CODE HERE!
split = int(0.7 * len(X))
X_train = X[: split - 1]
X_test = X[split:]
y_train = y[: split - 1]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
# YOUR CODE HERE!
scaler = MinMaxScaler()
scaler.fit(X)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
scaler.fit(y)
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
# Reshape the features for the model
# YOUR CODE HERE!
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
###Output
_____no_output_____
###Markdown
--- Build and Train the LSTM RNNIn this section, you will design a custom LSTM RNN and fit (train) it using the training data.You will need to:1. Define the model architecture2. Compile the model3. Fit the model to the training data Hints:You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
# YOUR CODE HERE!
model = Sequential()
number_units = 33
dropout_fraction = 0.205
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train.shape[1], 1))
)
model.add(Dropout(dropout_fraction))
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
model.add(Dense(1))
# Compile the model
# YOUR CODE HERE!
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
# YOUR CODE HERE!
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
# YOUR CODE HERE!
model.fit(X_train, y_train, epochs=32, shuffle=False, batch_size=1, verbose=1)
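# (Hedged sketch, not executed:) the log below shows the training loss drifting upward after the
# first few epochs; a Keras EarlyStopping callback could halt training when that happens.
# The callback configuration here is illustrative, not part of the original notebook:
# from tensorflow.keras.callbacks import EarlyStopping
# early_stop = EarlyStopping(monitor="loss", patience=3, restore_best_weights=True)
# model.fit(X_train, y_train, epochs=32, shuffle=False, batch_size=1,
#           callbacks=[early_stop], verbose=1)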
###Output
Epoch 1/32
371/371 [==============================] - 9s 9ms/step - loss: 0.0191
Epoch 2/32
371/371 [==============================] - 3s 9ms/step - loss: 0.0227
Epoch 3/32
371/371 [==============================] - 3s 9ms/step - loss: 0.0235
Epoch 4/32
371/371 [==============================] - 3s 9ms/step - loss: 0.0239
Epoch 5/32
371/371 [==============================] - 3s 9ms/step - loss: 0.0247
Epoch 6/32
371/371 [==============================] - 3s 9ms/step - loss: 0.0271
Epoch 7/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0252
Epoch 8/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0246
Epoch 9/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0256
Epoch 10/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0266
Epoch 11/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0282
Epoch 12/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0407
Epoch 13/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0399
Epoch 14/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0406
Epoch 15/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0427
Epoch 16/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0468
Epoch 17/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0436
Epoch 18/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0444
Epoch 19/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0459
Epoch 20/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0450
Epoch 21/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0444
Epoch 22/32
371/371 [==============================] - 3s 9ms/step - loss: 0.0445
Epoch 23/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0443
Epoch 24/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0444
Epoch 25/32
371/371 [==============================] - 3s 8ms/step - loss: 0.0441
Epoch 26/32
371/371 [==============================] - 3s 9ms/step - loss: 0.0445
Epoch 27/32
371/371 [==============================] - 4s 10ms/step - loss: 0.0444
Epoch 28/32
371/371 [==============================] - 3s 9ms/step - loss: 0.0442
Epoch 29/32
371/371 [==============================] - 3s 9ms/step - loss: 0.0444
Epoch 30/32
371/371 [==============================] - 3s 9ms/step - loss: 0.0443
Epoch 31/32
371/371 [==============================] - 3s 9ms/step - loss: 0.0443
Epoch 32/32
371/371 [==============================] - 3s 9ms/step - loss: 0.0437
###Markdown
--- Model PerformanceIn this section, you will evaluate the model using the test data. You will need to:1. Evaluate the model using the `X_test` and `y_test` data.2. Use the X_test data to make predictions3. Create a DataFrame of Real (y_test) vs predicted values. 4. Plot the Real vs predicted values as a line chart HintsRemember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
# YOUR CODE HERE!
model.evaluate(X_test, y_test)
# Make some predictions
# YOUR CODE HERE!
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.tail()
# Plot the real vs predicted values as a line chart
# YOUR CODE HERE!
stocks.plot()
###Output
_____no_output_____
###Markdown
LSTM Stock Predictor Using Fear and Greed IndexIn this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price. You will need to:1. Prepare the data for training and testing2. Build and train a custom LSTM RNN3. Evaluate the performance of the model Data PreparationIn this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.You will need to:1. Use the `window_data` function to generate the X and y values for the model.2. Split the data into 70% training and 30% testing3. Apply the MinMaxScaler to the X and y values4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:```pythonreshape((X_train.shape[0], X_train.shape[1], 1))```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
%matplotlib inline
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of fear and greed index values and a target of the 11th day closing price
# Try a window size anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(X))
X_train = X[: split -1]
X_test = X[split:]
y_train = y[: split -1]
y_test = y[split:]
# Use MinMaxScaler to scale the data between 0 and 1.
# Importing the MinMaxScaler from sklearn
from sklearn.preprocessing import MinMaxScaler
# Create a MinMaxScaler object
scaler = MinMaxScaler()
# Fit the MinMaxScaler object with the features data X
scaler.fit(X)
# Scale the features training and testing sets
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# Fit the MinMaxScaler object with the target data y
scaler.fit(y)
# Scale the target training and testing sets
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
# Reshape the features for the model
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
# Print some sample data after reshaping the datasets
print(f"X_train sample values:\n{X_train[:3]} \n")
print(f"X_test sample values:\n{X_test[:3]}")
###Output
X_train sample values:
[[[0.25287356]
[0.08045977]
[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.31395349]
[0.24418605]
[0.40697674]
[0.52325581]]
[[0.08045977]
[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.24418605]
[0.40697674]
[0.52325581]
[0.25581395]]
[[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.25287356]
[0.40697674]
[0.52325581]
[0.25581395]
[0.38372093]]]
X_test sample values:
[[[0.36781609]
[0.43678161]
[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.39534884]
[0.37209302]
[0.3372093 ]
[0.62790698]]
[[0.43678161]
[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37209302]
[0.3372093 ]
[0.62790698]
[0.65116279]]
[[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37931034]
[0.3372093 ]
[0.62790698]
[0.65116279]
[0.58139535]]]
###Markdown
--- Build and Train the LSTM RNNIn this section, you will design a custom LSTM RNN and fit (train) it using the training data.You will need to:1. Define the model architecture2. Compile the model3. Fit the model to the training data Hints:You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Define the LSTM RNN Model
model = Sequential()
# Initial model setup
number_units = 30
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train.shape[1], 1)))
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')
# Summarize the model
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
model.fit(X_train, y_train, epochs=10, shuffle=False, batch_size=100, verbose=1)
###Output
Train on 371 samples
Epoch 1/10
371/371 [==============================] - 5s 12ms/sample - loss: 0.1472
Epoch 2/10
371/371 [==============================] - 0s 378us/sample - loss: 0.1135
Epoch 3/10
371/371 [==============================] - 0s 382us/sample - loss: 0.0827
Epoch 4/10
371/371 [==============================] - 0s 404us/sample - loss: 0.0567
Epoch 5/10
371/371 [==============================] - 0s 606us/sample - loss: 0.0457
Epoch 6/10
371/371 [==============================] - 0s 354us/sample - loss: 0.0424
Epoch 7/10
371/371 [==============================] - 0s 282us/sample - loss: 0.0360
Epoch 8/10
371/371 [==============================] - 0s 261us/sample - loss: 0.0335
Epoch 9/10
371/371 [==============================] - 0s 276us/sample - loss: 0.0334
Epoch 10/10
371/371 [==============================] - 0s 420us/sample - loss: 0.0363
###Markdown
--- Model PerformanceIn this section, you will evaluate the model using the test data. You will need to:1. Evaluate the model using the `X_test` and `y_test` data.2. Use the X_test data to make predictions3. Create a DataFrame of Real (y_test) vs predicted values. 4. Plot the Real vs predicted values as a line chart HintsRemember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
model.evaluate(X_test, y_test, verbose=1)
# Make some predictions
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
})
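# Note: no date index is attached here, so the chart below plots against integer row positions
# rather than dates.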
stocks.head()
# Plot the real vs predicted values as a line chart
stocks.plot(title="Actual vs. Predicted BTC Prices Based on Sentiment")
###Output
_____no_output_____
###Markdown
LSTM Stock Predictor Using Fear and Greed IndexIn this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price. You will need to:1. Prepare the data for training and testing2. Build and train a custom LSTM RNN3. Evaluate the performance of the model Data PreparationIn this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.You will need to:1. Use the `window_data` function to generate the X and y values for the model.2. Split the data into 70% training and 30% testing3. Apply the MinMaxScaler to the X and y values4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:```pythonreshape((X_train.shape[0], X_train.shape[1], 1))```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv("Resources/btc_sentiment.csv", index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv("Resources/btc_historic.csv", index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(X))
X_train = X[: split]
X_test = X[split:]
y_train = y[: split]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
scaler = MinMaxScaler()
scaler.fit(X)
X_train_scaler = scaler.transform(X_train)
X_test_scaler = scaler.transform(X_test)
scaler.fit(y)
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
# Reshape the features for the model
X_train = X_train_scaler.reshape((X_train_scaler.shape[0], X_train_scaler.shape[1], 1))
X_test = X_test_scaler.reshape((X_test_scaler.shape[0], X_test_scaler.shape[1], 1))
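# Note: despite the names, X_train_scaler and X_test_scaler hold the scaled feature arrays
# (not scaler objects); they are reshaped back into X_train and X_test for the LSTM input.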
print (f"X_train sample values:\n{X_train[:5]} \n")
print (f"X_test sample values:\n{X_test[:5]}")
###Output
X_train sample values:
[[[0.25287356]
[0.08045977]
[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.31395349]
[0.24418605]
[0.40697674]
[0.52325581]]
[[0.08045977]
[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.24418605]
[0.40697674]
[0.52325581]
[0.25581395]]
[[0.36781609]
[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.25287356]
[0.40697674]
[0.52325581]
[0.25581395]
[0.38372093]]
[[0.18390805]
[0.03448276]
[0. ]
[0.32183908]
[0.25287356]
[0.4137931 ]
[0.52325581]
[0.25581395]
[0.38372093]
[0.30232558]]
[[0.03448276]
[0. ]
[0.32183908]
[0.25287356]
[0.4137931 ]
[0.52873563]
[0.25581395]
[0.38372093]
[0.30232558]
[0.53488372]]]
X_test sample values:
[[[0.36781609]
[0.43678161]
[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.39534884]
[0.37209302]
[0.3372093 ]
[0.62790698]]
[[0.43678161]
[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37209302]
[0.3372093 ]
[0.62790698]
[0.65116279]]
[[0.34482759]
[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37931034]
[0.3372093 ]
[0.62790698]
[0.65116279]
[0.58139535]]
[[0.45977011]
[0.45977011]
[0.40229885]
[0.40229885]
[0.37931034]
[0.34482759]
[0.62790698]
[0.65116279]
[0.58139535]
[0.58139535]]
[[0.45977011]
[0.40229885]
[0.40229885]
[0.37931034]
[0.34482759]
[0.63218391]
[0.65116279]
[0.58139535]
[0.58139535]
[0.60465116]]]
###Markdown
--- Build and Train the LSTM RNNIn this section, you will design a custom LSTM RNN and fit (train) it using the training data.You will need to:1. Define the model architecture2. Compile the model3. Fit the model to the training data Hints:You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
model = Sequential()
number_units = 30
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train.shape[1], 1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# Compile the model
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiement with the batch size, but a smaller batch size is recommended
model.fit(X_train, y_train, epochs=10, shuffle=False, batch_size=4, verbose=1)
###Output
Epoch 1/10
93/93 [==============================] - 3s 6ms/step - loss: 0.1437
Epoch 2/10
93/93 [==============================] - 0s 5ms/step - loss: 0.1190
Epoch 3/10
93/93 [==============================] - 0s 5ms/step - loss: 0.1187
Epoch 4/10
93/93 [==============================] - 0s 5ms/step - loss: 0.1177
Epoch 5/10
93/93 [==============================] - 0s 5ms/step - loss: 0.1047
Epoch 6/10
93/93 [==============================] - 0s 5ms/step - loss: 0.1076
Epoch 7/10
93/93 [==============================] - 0s 5ms/step - loss: 0.0977
Epoch 8/10
93/93 [==============================] - 0s 5ms/step - loss: 0.1008
Epoch 9/10
93/93 [==============================] - 0s 5ms/step - loss: 0.0969
Epoch 10/10
93/93 [==============================] - 0s 5ms/step - loss: 0.0933
###Markdown
--- Model PerformanceIn this section, you will evaluate the model using the test data. You will need to:1. Evaluate the model using the `X_test` and `y_test` data.2. Use the X_test data to make predictions3. Create a DataFrame of Real (y_test) vs predicted values. 4. Plot the Real vs predicted values as a line chart HintsRemember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
model.evaluate(X_test, y_test)
# Batch size of 4 minimizes the loss function value.
# Make some predictions
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
stocks.plot()
# The model struggles to predict future prices based on fng values.
# Between the two, LSTM predictions based on the closing prices show better trend following, just not at the same scale.
###Output
_____no_output_____
###Markdown
LSTM Stock Predictor Using Fear and Greed IndexIn this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price. You will need to:1. Prepare the data for training and testing2. Build and train a custom LSTM RNN3. Evaluate the performance of the model Data PreparationIn this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.You will need to:1. Use the `window_data` function to generate the X and y values for the model.2. Split the data into 70% training and 30% testing3. Apply the MinMaxScaler to the X and y values4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:```pythonreshape((X_train.shape[0], X_train.shape[1], 1))```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
sentiment_df = pd.read_csv('btc_sentiment.csv', index_col = "date", infer_datetime_format = True, parse_dates = True)
sentiment_df = sentiment_df.drop(columns = "fng_classification")
sentiment_df.head()
# Load the historical closing prices for Bitcoin
btc_hist_df = pd.read_csv('btc_historic.csv', index_col = "Date", infer_datetime_format = True, parse_dates = True)['Close']
btc_hist_df = btc_hist_df.sort_index()
btc_hist_df.tail()
# Join the data into a single DataFrame
fng_df = sentiment_df.join(btc_hist_df, how = "inner")
fng_df.tail()
fng_df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size_10 = 10
window_size_9 = 9
window_size_8 = 8
window_size_7 = 7
window_size_6 = 6
window_size_5 = 5
window_size_4 = 4
window_size_3 = 3
window_size_2 = 2
window_size_1 = 1
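# Only one of these window sizes is passed to window_data below; judging by the loss summary at
# the end of this notebook, the others appear to have been substituted in by hand on separate
# runs (see the sweep sketch near that summary for a way to automate this).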
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(fng_df, window_size_1, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
split = int(len(X) * 0.70)
X_train = X[:split -1]
X_test = X[split:]
y_train = y[:split -1]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
scaler = MinMaxScaler()
scaler.fit(X)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
scaler.fit(y)
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
# Reshape the features for the model
X_train = X_train.reshape((X_train.shape[0],X_train.shape[1],1))
X_test = X_test.reshape((X_test.shape[0],X_test.shape[1],1))
print (f"X_train sample values:\n{X_train[:5]} \n")
print (f"X_test sample values:\n{X_test[:5]} \n")
###Output
X_train sample values:
[[[0.25287356]]
[[0.08045977]]
[[0.36781609]]
[[0.18390805]]
[[0.03448276]]]
X_test sample values:
[[[0.40229885]]
[[0.37931034]]
[[0.34482759]]
[[0.63218391]]
[[0.65517241]]]
###Markdown
--- Build and Train the LSTM RNNIn this section, you will design a custom LSTM RNN and fit (train) it using the training data.You will need to:1. Define the model architecture2. Compile the model3. Fit the model to the training data Hints:You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
model = Sequential()
number_units = 30
dropout_fraction = .2
# Layer 1
model.add(LSTM(units = number_units, return_sequences = True, input_shape = (X_train.shape[1],1)))
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units = number_units, return_sequences = True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units = number_units))
model.add(Dropout(dropout_fraction))
# Output Layer
model.add(Dense(1))
# Compile the model
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
# Summarize the model
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
model.fit(X_train, y_train, epochs = 10, shuffle = False, batch_size = 1, verbose = 1)
###Output
Epoch 1/10
377/377 [==============================] - 5s 2ms/step - loss: 0.1389
Epoch 2/10
377/377 [==============================] - 1s 2ms/step - loss: 0.0852
Epoch 3/10
377/377 [==============================] - 1s 3ms/step - loss: 0.0813
Epoch 4/10
377/377 [==============================] - 1s 3ms/step - loss: 0.0761
Epoch 5/10
377/377 [==============================] - 1s 3ms/step - loss: 0.0745
Epoch 6/10
377/377 [==============================] - 1s 2ms/step - loss: 0.0732
Epoch 7/10
377/377 [==============================] - 1s 2ms/step - loss: 0.0701
Epoch 8/10
377/377 [==============================] - 1s 2ms/step - loss: 0.0693
Epoch 9/10
377/377 [==============================] - 1s 3ms/step - loss: 0.0686
Epoch 10/10
377/377 [==============================] - 1s 3ms/step - loss: 0.0695
###Markdown
--- Model PerformanceIn this section, you will evaluate the model using the test data. You will need to:1. Evaluate the model using the `X_test` and `y_test` data.2. Use the X_test data to make predictions3. Create a DataFrame of Real (y_test) vs predicted values. 4. Plot the Real vs predicted values as a line chart HintsRemember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
model.evaluate(X_test,y_test)
# Make some predictions
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
fng = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
fng.tail()
# Plot the real vs predicted values as a line chart
fng.plot()
print("The fng model with a 10 day window and a batch size of 1 has a loss of 0.12418720871210098")
print("The fng model with a 9 day window and a batch size of 1 has a loss of 0.1161537617444992")
print("The fng model with a 8 day window and a batch size of 1 has a loss of 0.1266108751296997")
print("The fng model with a 7 day window and a batch size of 1 has a loss of 0.13030321896076202")
print("The fng model with a 6 day window and a batch size of 1 has a loss of 0.11609522998332977")
print("The fng model with a 5 day window and a batch size of 1 has a loss of 0.10695990920066833")
print("The fng model with a 4 day window and a batch size of 1 has a loss of 0.12075154483318329")
print("The fng model with a 3 day window and a batch size of 1 has a loss of 0.1109219640493393")
print("The fng model with a 2 day window and a batch size of 1 has a loss of 0.11845260113477707")
print("The fng model with a 1 day window and a batch size of 1 has a loss of 0.11144669353961945")
###Output
The fng model with a 10 day window and a batch size of 1 has a loss of 0.12418720871210098
The fng model with a 9 day window and a batch size of 1 has a loss of 0.1161537617444992
The fng model with a 8 day window and a batch size of 1 has a loss of 0.1266108751296997
The fng model with a 7 day window and a batch size of 1 has a loss of 0.13030321896076202
The fng model with a 6 day window and a batch size of 1 has a loss of 0.11609522998332977
The fng model with a 5 day window and a batch size of 1 has a loss of 0.10695990920066833
The fng model with a 4 day window and a batch size of 1 has a loss of 0.12075154483318329
The fng model with a 3 day window and a batch size of 1 has a loss of 0.1109219640493393
The fng model with a 2 day window and a batch size of 1 has a loss of 0.11845260113477707
The fng model with a 1 day window and a batch size of 1 has a loss of 0.11144669353961945
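###Markdown
The window-size results above read like they were collected by re-running the notebook by hand. The cell below is an optional sketch (an addition, not part of the original homework) of how that sweep could be automated; it assumes the `window_data` function, the `df` DataFrame and the `feature_column`/`target_column` indices defined earlier in this notebook, and it rebuilds the same three-layer LSTM for every window size.
###Code
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

def fng_loss_for_window(window_size, epochs=10, batch_size=1):
    # Rebuild the rolling-window dataset for this window size
    X, y = window_data(df, window_size, feature_column, target_column)
    split = int(0.7 * len(X))
    X_train, X_test = X[:split], X[split:]
    y_train, y_test = y[:split], y[split:]
    # Scale features and target with separate scalers fitted on the training data
    x_scaler = MinMaxScaler().fit(X_train)
    y_scaler = MinMaxScaler().fit(y_train)
    X_train = x_scaler.transform(X_train).reshape(-1, window_size, 1)
    X_test = x_scaler.transform(X_test).reshape(-1, window_size, 1)
    y_train, y_test = y_scaler.transform(y_train), y_scaler.transform(y_test)
    # Same architecture as above, rebuilt fresh for each run
    model = Sequential([
        LSTM(30, return_sequences=True, input_shape=(window_size, 1)), Dropout(0.2),
        LSTM(30, return_sequences=True), Dropout(0.2),
        LSTM(30), Dropout(0.2),
        Dense(1),
    ])
    model.compile(optimizer="adam", loss="mean_squared_error")
    model.fit(X_train, y_train, epochs=epochs, shuffle=False, batch_size=batch_size, verbose=0)
    return model.evaluate(X_test, y_test, verbose=0)

for w in range(10, 0, -1):
    print("Window size {}: test loss {:.5f}".format(w, fng_loss_for_window(w)))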
###Markdown
LSTM Stock Predictor Using Fear and Greed IndexIn this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price. You will need to:1. Prepare the data for training and testing2. Build and train a custom LSTM RNN3. Evaluate the performance of the model Data PreparationIn this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.You will need to:1. Use the `window_data` function to generate the X and y values for the model.2. Split the data into 70% training and 30% testing3. Apply the MinMaxScaler to the X and y values4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:```pythonreshape((X_train.shape[0], X_train.shape[1], 1))```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(X))
X_train = X[: split - 1]
X_test = X[split:]
y_train = y[: split - 1]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
x_scaler = MinMaxScaler()
x_scaler.fit(X)
X_train = x_scaler.transform(X_train)
X_test = x_scaler.transform(X_test)
y_scaler = MinMaxScaler()
y_scaler.fit(y)
y_train = y_scaler.transform(y_train)
y_test = y_scaler.transform(y_test)
# Reshape the features for the model
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
###Output
_____no_output_____
###Markdown
--- Build and Train the LSTM RNNIn this section, you will design a custom LSTM RNN and fit (train) it using the training data.You will need to:1. Define the model architecture2. Compile the model3. Fit the model to the training data Hints:You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
model = Sequential()
number_units = 30
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train.shape[1], 1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# Compile the model
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
model.fit(X_train, y_train, epochs=15, shuffle=False, batch_size=1, verbose=1)
###Output
Epoch 1/15
371/371 [==============================] - 5s 6ms/step - loss: 0.0225
Epoch 2/15
371/371 [==============================] - 2s 6ms/step - loss: 0.0239
Epoch 3/15
371/371 [==============================] - 2s 6ms/step - loss: 0.0244
Epoch 4/15
371/371 [==============================] - 2s 6ms/step - loss: 0.0261
Epoch 5/15
371/371 [==============================] - 2s 6ms/step - loss: 0.0258
Epoch 6/15
371/371 [==============================] - 2s 6ms/step - loss: 0.0246
Epoch 7/15
371/371 [==============================] - 2s 6ms/step - loss: 0.0239
Epoch 8/15
371/371 [==============================] - 2s 6ms/step - loss: 0.0240
Epoch 9/15
371/371 [==============================] - 2s 6ms/step - loss: 0.0250
Epoch 10/15
371/371 [==============================] - 2s 6ms/step - loss: 0.0245
Epoch 11/15
371/371 [==============================] - 2s 6ms/step - loss: 0.0261
Epoch 12/15
371/371 [==============================] - 2s 6ms/step - loss: 0.0305
Epoch 13/15
371/371 [==============================] - 2s 6ms/step - loss: 0.0430
Epoch 14/15
371/371 [==============================] - 2s 6ms/step - loss: 0.0444
Epoch 15/15
371/371 [==============================] - 2s 6ms/step - loss: 0.0462
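###Markdown
Note that the training loss above drifts upward after roughly the tenth epoch (0.0225 at epoch 1 versus 0.0462 at epoch 15), which suggests the extra epochs are not helping. One optional safeguard, sketched below under the assumption that the compiled `model` and the scaled, reshaped `X_train`/`y_train` from the cells above are still in scope, is to hold out part of the training data and stop once the validation loss stops improving.
###Code
from tensorflow.keras.callbacks import EarlyStopping

# Stop when validation loss has not improved for two epochs, keeping the best weights seen so far
early_stop = EarlyStopping(monitor="val_loss", patience=2, restore_best_weights=True)
model.fit(
    X_train, y_train,
    epochs=15,
    batch_size=1,
    shuffle=False,
    validation_split=0.1,   # holds out the last 10% of the (unshuffled) training windows
    callbacks=[early_stop],
    verbose=1,
)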
###Markdown
--- Model PerformanceIn this section, you will evaluate the model using the test data. You will need to:1. Evaluate the model using the `X_test` and `y_test` data.2. Use the X_test data to make predictions3. Create a DataFrame of Real (y_test) vs predicted values. 4. Plot the Real vs predicted values as a line chart HintsRemember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
model.evaluate(X_test, y_test)
# Make some predictions
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = y_scaler.inverse_transform(predicted)
real_prices = y_scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
# plot using hvplot
hvplot.show(stocks.hvplot())
# Plot using pandas plot function
stocks.plot();
###Output
_____no_output_____
###Markdown
LSTM Stock Predictor Using Fear and Greed IndexIn this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price. You will need to:1. Prepare the data for training and testing2. Build and train a custom LSTM RNN3. Evaluate the performance of the model Data PreparationIn this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.You will need to:1. Use the `window_data` function to generate the X and y values for the model.2. Split the data into 70% training and 30% testing3. Apply the MinMaxScaler to the X and y values4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:```pythonreshape((X_train.shape[0], X_train.shape[1], 1))```
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('./data/btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(X))
X_train = X[:split]
X_test = X[split:]
y_train = y[:split]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
x_scaler = MinMaxScaler()
x_scaler.fit(X)
X_train = x_scaler.transform(X_train)
X_test = x_scaler.transform(X_test)
y_scaler = MinMaxScaler()
y_scaler.fit(y)
y_train = y_scaler.transform(y_train)
y_test = y_scaler.transform(y_test)
# Reshape the features for the model
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
###Output
_____no_output_____
###Markdown
--- Build and Train the LSTM RNNIn this section, you will design a custom LSTM RNN and fit (train) it using the training data.You will need to:1. Define the model architecture2. Compile the model3. Fit the model to the training data Hints:You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
model = Sequential()
number_units = 20
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units = number_units,
return_sequences = True,
input_shape = (X_train.shape[1], 1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output Layer
model.add(Dense(1))
# Compile the model
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
# Summarize the model
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
model.fit(X_train, y_train, epochs=10, shuffle=False, batch_size=8, verbose=1)
###Output
_____no_output_____
###Markdown
--- Model PerformanceIn this section, you will evaluate the model using the test data. You will need to:1. Evaluate the model using the `X_test` and `y_test` data.2. Use the X_test data to make predictions3. Create a DataFrame of Real (y_test) vs predicted values. 4. Plot the Real vs predicted values as a line chart HintsRemember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
model.evaluate(X_test, y_test, verbose=1)
# Make some predictions
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = y_scaler.inverse_transform(predicted)  # y_test_scaler was never defined; y_scaler is the target scaler fitted above
real_prices = y_scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
stocks.hvplot.line(title="Actual Vs. Predicted BTC Prices (FNG indicator)", ylabel="Price", xlabel="Date")
###Output
_____no_output_____ |
cornoterminal.ipynb | ###Markdown
###Code
velocidade = float(input('Digite a velocidade do veiculo: '))
print('O limite de velocidade da rodovia é de 80Km')
if velocidade > 80:
print('Você excedeu o limite de velcidade {} {} {}Km'.format('\033[1;31;43m', velocidade-80, '\33[m'))
multa= (velocidade - 80) *7
print('Você foi multado em R$ {:.2f}'.format(multa))
print('\033[32;43mTenha um bom dia! Diriga com segurança!\033[m')
# ANSI color standard
# style: 0 none, 1 bold, 4 underline, 7 inverse
# background colors: https://raccoon.ninja/pt/dev-pt/tabela-de-cores-ansi-python/
# video lesson on colors: https://www.youtube.com/watch?v=0hBIhkcA8O8
# Python colors in the terminal: https://wiki.python.org.br/CoresNoTerminal
casa = float(input('Entre com o valor do imóvel R$ '))
sal= float(input('Entre com o valor do salário R$ '))
anos = float(input('Em quantos anos será o financiamento '))
ent = float(input('Qual o valor da entrada R$ '))
tj= float(input('Digite a taxa de juros '))
ms = anos*12
# number of months in the loan
fin = casa - ent
# amount to be financed
tj = tj / 100
# monthly interest rate: converts the percentage typed above (e.g. 1) into a decimal (0.01)
# compound interest of 1% per month: M = C(1+i)^t
# M is the accumulated amount \ C is the principal \ i the fixed rate
# t the number of time periods
# installment calculation with compound interest: http://www.matematicadidatica.com.br/CalculoPrestacao.aspx
cf = tj/(1-(1/((1+tj)**ms)))
# the coefficient is the result of the formula above
pres = fin*cf
# pres = installment amount
minimo = sal*(30/100)
print(pres)
if pres <= minimo:
print("Emprestimo concedido")
else:
print ("\033[1;31;43m Emprestimo Negado\033[m")
###Output
Entre com o valor do imóvel R$ 4000000
Entre com o valor do salário R$ 5000
Em quantos anos será o financiamento 10
Qual o valor da entrada R$ 30000
Digite a taxa de juros 1
56957.96651582717
[1;31;43m Emprestimo Negado[m
|
FinMath/Actuarial Mathematics/To Try KaTeX.ipynb | ###Markdown
Annuities Review- $\newcommand{\ffrac}{\displaystyle \frac}\newcommand{\Tran}[1]{{#1}^{\mathrm{T}}}\newcommand{\d}[1]{\displaystyle{#1}}\newcommand{\Var}[1]{\mathrm{Var}\left[#1\right]}\newcommand{\using}[1]{\stackrel{\mathrm{#1}}{=}}\newcommand{\EE}[1]{\mathbb{E}\left[ #1 \right]}\newcommand{\I}[1]{\mathrm{I}\left( #1 \right)}$The **effective annual rate of interest**, $i$, *Compound Interest*- **annual accumulation factor**, $1+i$- **discount factor**, $\boxed{ \upsilon = \ffrac{1} {1+i} }$- **force of interest** (instantaneous rate of growth of value), $\boxed{ \delta = \log(1+i) }$- **nominal annual rate of interest compounded $p$ times per year**, $\boxed{ i^{\,(p)} = p\left( \left( 1+i \right)^{1/p} -1 \right) \Leftrightarrow 1+i = \left( 1+ \ffrac{i^{\,(p)}} {p} \right)^p }$- **periodic rate**, $\ffrac{i^{\,(p)}} {p}$- **The effective rate of discount per year**: $\boxed{ d = 1 - \upsilon = i\upsilon = 1 - e^{-\delta} }$- **the nominal rate of discount compounded $p$ times per year**: $\boxed{ d^{\,(p)} = p\left( 1 - \upsilon^{1/p} \right) \Leftrightarrow \left( 1 - \ffrac{d^{\,(p)}} {p} \right)^p = \upsilon }$The present value of an annuity-certain of $1$ payable annually in advance for $n$ years:$$\ddot{a}_{\overline{n}|} = \sum_{k=0}^{n-1} \upsilon^k = \frac{1 - \upsilon^n} {d}$$The present value of an annuity-certain of $1$ payable annually in arrear for $n$ years:$${a}_{\overline{n}|} = \sum_{k=0}^{n-1} \upsilon^{k+1} = \frac{\upsilon - \upsilon^{n+1}} {d} = \ddot{a}_{\overline{n}|} + \upsilon^n - 1 = \frac{1 - \upsilon^n} {i} = \upsilon \ddot{a}_{\overline{n}|}$$Another situation is to discount the value to a future point. The value at the time of the last payment:$${s}_{\overline{n}|} = {a}_{\overline{n}|} (1+i)^n$$And that of one period later$$\ddot{s}_{\overline{n}|} = \ddot{a}_{\overline{n}|} (1+i)^n$$The present value of an annuity-certain payable continuously at rate $1$ per year for $n$ years:$$\bar{a}_{\overline{n}|} = \int_{0}^{n} \upsilon^t \; \mathrm{d}t = \frac{1 - \upsilon^n} {\delta}$$And when it's divided into $m$ parts,$$\ddot{a}_{\overline{n}|}^{(m)} = \frac{1} {m} \sum_{k=0}^{mn-1} \upsilon^{\frac{k} {m}} = \frac{1 - \upsilon^n} {d^{(m)}}$$$${a}_{\overline{n}|}^{(m)} = \frac{1} {m} \sum_{k=0}^{mn-1} \upsilon^{\frac{k+1} {m}} = \frac{1 - \upsilon^n} {i^{(m)}} = \ddot{a}_{\overline{n}|}^{(m)} + \frac{1} {m}\upsilon^n - \frac{1} {m}$$***Relations$$d^{(m)} = \frac{m\cdot i^{(m)}} {m+i^{(m)}}$$$$\lim_{m\to\infty} i^{(m)} = \delta = \lim_{m \to\infty} d^{(m)}$$$$\bar{a}_{\overline{n}|} = \lim_{m\to\infty}a_{\overline{n}|}^{(m)} = \lim_{m\to\infty} \ddot{a} _{\overline{n}|} ^{(m)} = \frac{1 - \upsilon^n} {\delta}$$***Another way to separate them into groups is by the first time of the annuity payment. 
- Annuity-due: $\ddot{a}_{\overline{n}|} = \ffrac{1 - \upsilon^n} {d}$, $\ddot{a}_{\overline{n}|}^{(m)} = \ffrac{1 - \upsilon^n} {d^{(m)}}$- Annuity-immediate: ${a}_{\overline{n}|} = \ffrac{1 - \upsilon^n} {i}$, ${a}_{\overline{n}|}^{(m)} = \ffrac{1 - \upsilon^n} {i^{(m)}}$- Continuously: $\bar{a}_{\overline{n}|} = \ffrac{1 - \upsilon^n} {\delta}$***Arithmetically increasing annuityStandard: the payment increases by $1$ each year.- Annuity-due: $\left( I\ddot{a} \right)_{\overline{n}|} = \ffrac{\ddot{a}_{\overline{n}|} - n \upsilon^n} {d}$- Annuity-immediate: $\left( I{a} \right)_{\overline{n}|} = \ffrac{{a}_{\overline{n}|} - n \upsilon^{n+1}} {d}$Geometrically increasing annuity: for payments starting at year $0$ at amount $P$ with increase rate $r$,$$PV = \sum_{k=0}^{n-1} P (1+r)^k \upsilon^k = P \sum_{k=0}^{n-1} \left( \upsilon^* \right)^k = P \ddot{a}_{\overline{n}|{i^*}}$$here $\upsilon^* = \ffrac{1+r} {1+i} = \ffrac{1} {1+i^*}$ so that $i^* = \ffrac{i-r} {1+r} > 0$ when $i>r$
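A quick numeric sanity check of the present-value formulas above (this cell is an addition; the rate $i = 5\%$ and term $n = 10$ are arbitrary choices):
###Code
import numpy as np

i = 0.05                    # effective annual interest rate (assumed for illustration)
n = 10                      # term in years (assumed for illustration)
v = 1 / (1 + i)             # discount factor
d = 1 - v                   # effective annual rate of discount
delta = np.log(1 + i)       # force of interest

a_due = (1 - v**n) / d      # annuity-due (payments in advance)
a_imm = (1 - v**n) / i      # annuity-immediate (payments in arrear)
a_bar = (1 - v**n) / delta  # continuously payable annuity

# Cross-check against the summation definition and the relation a = v * a-due
assert np.isclose(a_due, sum(v**k for k in range(n)))
assert np.isclose(a_imm, v * a_due)
print(a_due, a_imm, a_bar)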
###Code
ASDasdSD
###Output
_____no_output_____ |
create_clean_dimensions_with_pandas.ipynb | ###Markdown
Create Dimension Tables I94ADDR
###Code
import pandas as pd
with open("I94_SAS_Labels_Descriptions.SAS") as f:
content = f.readlines()
content = [x.strip().replace("'","") for x in content[981:1035]]
df_addr=pd.DataFrame()
for line in content:
value = line.split("=")[0].strip()
i94addrl = line.split("=")[-1].strip()
df_addr=df_addr.append(
{"state_code" : value, "state_name": i94addrl}, ignore_index=True
)
df_addr.head()
states = list(set(df_addr['state_code'].values))
df_addr.to_csv("dimensions/us_states.csv", index=False)
###Output
_____no_output_____
###Markdown
I94PORT
###Code
import pandas as pd
with open("I94_SAS_Labels_Descriptions.SAS") as f:
content = f.readlines()
content = [x.strip().replace("'","") for x in content[302:962]]
df_port_locations=pd.DataFrame()
for line in content:
port_code = line.split("=")[0].strip()
port_city = line.split("=")[1].strip().split(",")[0].strip()
port_state = line.split("=")[1].strip().split(",")[-1].strip()
if port_state == port_city or port_state not in states:
continue
else:
if " " in port_state:
port_state = port_state.split(" ")[0]
df_port_locations=df_port_locations.append(
{"port_code" : port_code, "municipality": port_city, "state_code": port_state}, ignore_index=True
)
df_port_locations.head()
municipality_port = list(set(df_port_locations['municipality'].values))
df_port_locations.head()
df_port_locations.to_csv("dimensions/us_ports.csv", index=False)
###Output
_____no_output_____
###Markdown
I94CIT & I94RES
###Code
with open("I94_SAS_Labels_Descriptions.SAS") as f:
content = f.readlines()
content = [x.strip().replace("'","") for x in content[10:299]]
df_cit_res=pd.DataFrame()
for line in content:
value = line.split("=")[0].strip()
i94cntyl = line.split("=")[-1].strip()
if "INVALID" in i94cntyl or "Not Reported" in i94cntyl or "Collapsed" in i94cntyl or value in i94cntyl:
continue
df_cit_res=df_cit_res.append(
{"country_code" : value, "country_name": i94cntyl}, ignore_index=True
)
df_cit_res.head()
df_cit_res.to_csv("dimensions/countries.csv", index=False)
###Output
_____no_output_____
###Markdown
Airports
###Code
import pandas as pd
df_ac = pd.read_csv('airport-codes_csv.csv')
df_ac=df_ac[df_ac['iso_country']=='US']
df_ac=df_ac.dropna(subset=['iata_code'])
new=df_ac["coordinates"].str.split(",", n = 1, expand = True)
df_ac["latitude"]= new[1]
df_ac["longitude"]= new[0]
new=df_ac["iso_region"].str.split("-", n = 1, expand = True)
df_ac["state_code"]= new[1]
df_ac = df_ac.drop(['coordinates', 'iso_country', 'continent', 'iso_region'], axis=1)
df_ac = df_ac.rename(columns={"ident": "id"})
df_ac.head()
df_ac[df_ac['iata_code']=='FCA']
df_ac.to_csv("dimensions/us_airport_codes.csv", index=False)
###Output
_____no_output_____
###Markdown
Temperature Data
###Code
df_temper = pd.read_csv('../../data2/GlobalLandTemperaturesByCity.csv')
df_temper=df_temper[df_temper['Country']=='United States']
df_temper = df_temper.dropna()
df_temper = df_temper.drop_duplicates(['dt', 'City', 'Country'],keep= 'first')
df_temper =df_temper.drop(columns=['AverageTemperatureUncertainty', 'Latitude', 'Longitude', 'Country'])
df_temper = df_temper.rename(columns=
{
"AverageTemperature": "avg_temp",
"City": "city"
})
df_temper.head()
df_temper.columns
df_temper.shape
df_temper.to_csv("dimensions/us_temperature.csv", index=False)
###Output
_____no_output_____
###Markdown
Demographics Data
###Code
import pandas as pd
df_uscd = pd.read_csv('us-cities-demographics.csv', delimiter=';')
df_uscd['City'] = df_uscd['City'].apply(lambda x: x.upper())
df_uscd['State'] = df_uscd['State'].apply(lambda x: x.upper())
df_uscd.columns = [i.lower().replace(" ", "_").replace("-", "_") for i in df_uscd.columns]
df_uscd =df_uscd.drop(columns=['state'])
df_uscd.head()
len(pd.unique(df_uscd['city']))
dem_citiy = set(df_uscd['city'].values)
por_cities = set(municipality_port)
print(len(dem_citiy))
print(len(por_cities))
print(len(por_cities.intersection(dem_citiy)))
df_uscd[df_uscd['city']=='COLUMBIA']
df_dem_gen = df_uscd[['city','state_code','median_age', 'male_population', 'female_population', 'total_population', 'number_of_veterans', 'foreign_born', 'average_household_size']].drop_duplicates()
df_dem_gen.head()
df_dem_gen.to_csv("dimensions/us-cities-demographics_general.csv", index=False)
df_dem_race = df_uscd[['city','state_code','race', 'count']].drop_duplicates()
df_dem_race.head()
df_dem_race.to_csv("dimensions/us-cities-demographics_race.csv", index=False)
len(pd.unique(df_dem_gen['city']))
len(df_dem_gen['city'])
###Output
_____no_output_____ |
Project_2/Soccer/ls88/student-final-solutions.ipynb | ###Markdown
[L&S 88] Open Science -- Project 1, part 1--- Instructors Eric Van Dusen and Josh QuanIn this notebook we will be covering different approaches to Exploratory Data Analysis (EDA), exploring how different techniques and approachs can lead to different results and conclusions about data.We will be exploring a controversial dataset which has led many data scientists down different analytical paths. This notebook contains autograder tests from [Gofer Grader](https://github.com/data-8/Gofer-Grader). Some of the tests check that you answered correctly, but some are not comprehensive. Autograder cells look like this:```pythoncheck('tests/q1-1.py')```If you pass the tests, the cell will output `All tests passed!`.*Estimated Time: 120 minutes*--- Topics Covered- Exploratory Data Analysis- Understanding past studies with data Table of Contents1 - [Introduction to Study](section1) 2 - [Introduction to EDA](section2) 3 - [More EDA and Visualizations](section3)4 - [More Practice](section4)5 - [Free Response](section5)**Dependencies:**Please consult the `datascience` library [documentation](http://data8.org/datascience/tables.html) for useful help on functions and visualizations throughout the assignment, as needed.
###Code
#Just run me
# if this cell errors, uncomment the line below and rerun
# !pip install gofer-grader
from gofer.ok import check
from datascience import *
import numpy as np
import pandas as pd
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import warnings
warnings.simplefilter('ignore', FutureWarning)
###Output
_____no_output_____
###Markdown
--- 1. Introduction to the Study Creator: ODD AndersenCredit: AFP/Getty Images Nothing frustrates both soccer fans and players as much as being [red-carded](https://en.wikipedia.org/wiki/Penalty_cardRed_card). In soccer, receiving a red card from the referee means that the player awarded the red card is expelled from the game, and consequently his team must play with one fewer player for the remainder of the game.Due to the inherently subjective nature of referees' judgments, questions involving the fairness of red card decisions crop up frequently, especially when soccer players with darker complexions are red-carded.For the remainder of this project, we will explore a dataset on red-cards and skin color and attempt to understand how different approachs to analysis can lead to different conclusions to the general question: "Are referees more likely to give red cards to darker-skinned players?" --- The Data In this notebook, you'll be working with a dataset containing entries for many European soccer players, containing variables such as club, position, games, and skin complexion.Important to note about this dataset is that it was generated as the result of an [observational study](https://en.wikipedia.org/wiki/Observational_study), rather than a [randomized controlled experiment](https://en.wikipedia.org/wiki/Randomized_controlled_trial). In an observational study, entities' independent variables (such as race, height, zip code) are observed, rather than controlled as in the randomized controlled experiment. Though data scientists often prefer the control and accuracy of controlled experiments, often performing one is either too costly or poses ethical questions (e.g., testing trial drugs and placebo treatments on cancer patients at random). Though our dataset was generated organically--in the real world rather than in a laboratory--it is statistically more challenging to prove causation among variables for these kinds of observational studies (more on this in Question 2).Please read this summary of the [dataset's description](https://osf.io/9yh4x/) to familiarize yourself with the context of the data:*"...we obtained data and profile photos from all soccer players (N = 2053) playing in the first male divisions of England, Germany, France and Spain in the 2012-2013 season and all referees (N = 3147) that these players played under in their professional career. We created a dataset of player dyads including the number of matches players and referees encountered each other and our dependent variable, the number of red cards given to a player by a particular referee throughout all matches the two encountered each other."**...implicit bias scores for each referee country were calculated using a race implicit association test (IAT), with higher values corresponding to faster white | good, black | bad associations. Explicit bias scores for each referee country were calculated using a racial thermometer task, with higher values corresponding to greater feelings of warmth toward whites versus blacks."*Run the cell below to load in the data into a `Table` object from the `datascience` library used in Data 8.
###Code
# Just run me
data = pd.read_csv("CrowdstormingDataJuly1st.csv").dropna()
data = Table.from_df(data)
###Output
_____no_output_____
###Markdown
Here are some of the important fields in our data set that we will focus on:|Variable Name | Description ||--------------|------------||`player` | player's name ||`club` | player's soccer club (team) ||`leagueCountry`| country of player club (England, Germany, France, and Spain) ||`height` | player height (in cm) ||`games`| number of games in the player-referee dyad ||`position` | detailed player position ||`goals`| goals scored by a player in the player-referee dyad ||`yellowCards`| number of yellow cards player received from referee ||`yellowReds`| number of yellow-red cards player received from referee ||`redCards`| number of red cards player received from referee ||`rater1`| skin rating of photo by rater 1 (5-point scale ranging from very light skin to very dark skin ||`rater2`| skin rating of photo by rater 2 (5-point scale ranging from very light skin to very dark skin ||`meanIAT`| mean implicit bias score (using the race IAT) for referee country, higher values correspond to faster white good, black bad associations ||`meanExp`| mean explicit bias score (using a racial thermometer task) for referee country, higher values correspond to greater feelings of warmth toward whites versus blacks | As you can see on the table above, two of the variables we will be exploring is the ratings on skin tone (1-5) measured by two raters, Lisa and Shareef. For context, we have added a series of images that were given to them so that you can better understand their perspective on skin tones. Keep in mind that this might affect our hypothesis and drive our conclusions. Note: On the following images, the only two were the rating for the two raters coincide is image 3 on the top and image 6 on the bottom.
###Code
# Run this cell to peek at the data
data
###Output
_____no_output_____
###Markdown
Question 1.1: What is the shape of data? Save the number of variables (columns) as `num_columns` and the number of players (rows) as `num_rows`.
###Code
num_columns = data.num_columns
num_rows = data.num_rows
print("Our dataset has {0} variables and {1} players.".format(num_columns, num_rows))
check('tests/q1-1.py')
###Output
_____no_output_____
###Markdown
Question 1.2: Which columns should we focus our analysis on? Drop the columns which contain variables which we're not going to analyze. You might consider using the `Table.drop` method to create a transformed copy of our `data`.
###Code
cols_to_drop = ["birthday", "victories", "ties", "defeats", "goals"\
, "photoID", "Alpha_3", "nIAT", "nExp"]
data = data.drop(cols_to_drop)
# Make sure data no longer contains those columns
data
check('tests/q1-2.py')
###Output
_____no_output_____
###Markdown
Question 1.3: Let's break down our remaining variables by type. Create a table with each of the variables' names and their classifications as either "categorical" or "quantitative" variables. In order to do this, use their Python types. *Hint*: Python's `type()` function might be helpful.
###Code
python_types = []
# Get the Python type of each variable by looking at the first value in each column
for column in data:
column_type = type(data[column].item(0))
python_types.append(column_type)
label_classifications = []
numeric_categorical_vars = ["refNum", "refCountry"] # Numerical variables that aren't quantitative
# Loop through the array of variable Python types and classify them as quantitative or categorical
for index in np.arange(len(python_types)):
if python_types[index] == str: # If label is a string...
label_classifications.append("categorical")
elif data.labels[index] in numeric_categorical_vars: # If label is a categorical numerical variable...
label_classifications.append("categorical")
else:
label_classifications.append("quantitative") # If label isn't categorical...
# Create a table with the data's labels and label_classifications array
variables = Table().with_columns("variable name", data.labels\
, "classification", label_classifications)
variables.show()
check('tests/q1-3.py')
###Output
_____no_output_____
###Markdown
Question 1.4: If we're trying to examine the relationship between red cards given and skin color, which variables ought we to consider? Classify the ones you choose as either independent or dependent variables and explain your choices. Independent Variables (variables that may correlate or cause red cards): ***YOUR ANSWER HERE*** Dependent Variables (variables which indicate red cards): ***YOUR ANSWER HERE*** --- 2. Introduction to EDA An overview of the Data Science Process with EDA highlighted. By Farcaster at English Wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=40129394 Exploratory data analysis (EDA) is the stage of the data science process that takes the processed, cleaned dataset and attempts to learn a bit about its structure. In so doing, the data scientist ties to understand basic properties of the data. They seek to:- Craft hypotheses about the data- Make and assess assumptions for future statistical inference- Select useful statistical tools and techniques- Create explanatory of suggestive data visualizations Question 2.1: First, let's compute the minimum, maximum, mean and standard deviation of our data to understand how our data are distributed. *Hint*: look up the `Numpy` documentation for the relevant functions.
###Code
stats = data.stats(ops=(min, max, np.mean, np.std))
stats
check('tests/q2-1.py')
###Output
_____no_output_____
###Markdown
Question 2.2: Now let's take our `statistics` table and enhance it a bit. First, drop the columns with categorical variables. *Hint*: Use the `variables` table we created in Question 1.3.
###Code
categorical_vars = variables.where("classification", "categorical")
cols_to_drop = categorical_vars.column("variable name")
stats = stats.drop(cols_to_drop)
stats
check('tests/q2-2.py')
###Output
_____no_output_____
###Markdown
Question 2.3: Now that we have some information about the distribution of our data, let's try to get rid of some statistical outliers. Assume that data points with variable values more than 2 standard deviations (SDs) away from those variables' means are statistical outliers. In other words, only data points whose variables' values are within 2 standard deviations on either side of the corresponding means are valid. Get rid of the outliers accordingly.Formally, we can describe the set of outliers for the $i$th variable, $O_i$, as: $$O_i = \{\text{values} \mid \text{values} < \mu_i - 2 \sigma_i \ \text{ or } \ \text{values} > \mu_i + 2 \sigma_i\}$$In words, $O_i$ is the union of all values of the $i$th variable lying more than 2 standard deviations above or below the mean.*Hint*: You'll need to look up values in your `stats` table to find the means and SDs for each variable.
###Code
# Just run me to drop remaining categorical variables
data = data.drop("player", "position", "leagueCountry", "club", "playerShort", "refNum", "refCountry")
for variable in data.labels:
data = data.where(variable, are.above_or_equal_to(
stats.column(variable)[2] - 2 * stats.column(variable)[3] # mean - 2 * SD
)).where(variable, are.below_or_equal_to(
stats.column(variable)[2] + 2 * stats.column(variable)[3] # mean + 2 * SD
))
data
check('tests/q2-3.py')
###Output
_____no_output_____
###Markdown
3. More EDA and Visualizations Hypotheses: Two types of general hypotheses can be made about the data. Either: $H_A:$ Referees give red cards to darker skinned players with higher (or lower) frequency.or $H_0:$ Referees give red cards to all players at similar frequencies.Where $H_A$ and $H_0$ denote a "alternative" hypothesis and a ["null" hypothesis](https://en.wikipedia.org/wiki/Null_hypothesis), respectively.As mentioned before, we typically cannot prove causation in an observational study such as our dataset. Then we can only "reject" our null hypothesis if the our independent variable(s) and dependent variable have a statistically significant correlation, or "fail to reject" (basically, to accept the null) if there is no statistically significant correlation between the variables. Question 3.1: Scatter plots:To analyze the correlation between independent and dependent variables, we may use a scatter plot as a simple form of data visualization between one numerical "x" (independent) variable and one numerical "y" (dependent) variable. Below are a few scatterplot examples a data scientist might generate when asking the questions,"How are implicit and explicit bias correlated?", and "Is a player's height correlated with the number of yellow cards he receives?", respectively.
###Code
# Just run this. You don't need to understand this cell
data_df = pd.read_csv("CrowdstormingDataJuly1st.csv")
meanExp = []
meanIAT = []
for index, row in data_df.iterrows():
if row["meanExp"] not in meanExp:
meanExp.append(row["meanExp"])
meanIAT.append(row["meanIAT"])
exps = np.nan_to_num(meanExp)
iats = np.nan_to_num(meanIAT)
# Run to create a table of means
means = Table().with_columns("meanExps", exps, "meanIATs", iats)
means
# Run to display scatter plot meaEXPS vs meanIATS
means.select("meanIATs", "meanExps").scatter( "meanExps", fit_line=True)
###Output
_____no_output_____
###Markdown
What do you observe from the scatter plot? Why might these two variables be related in this way? Why might this be a coincidence? ***YOUR ANSWER HERE***
###Code
# Run to display scatter plot
height_yellowCards = data.select("yellowCards", "height")
height_yellowCards.scatter("height", fit_line=True)
###Output
_____no_output_____
###Markdown
What do you observe from this scatter plot? Why might these two variables be related in this way? Why might this be a coincidence? ***YOUR ANSWER HERE*** Question 3.2: Histograms:Histograms are a data visualization tool that helps one understand the shape of a single variable's distribution of values. Each bin in a histogram has a width such that the sum of all bin_widths * bin_heights = 1. Histograms are used to plot the empirical (observed) distribution of values. Below is an example histogram a data scientist might generate when asking the questions, "what is the empirical distribution of the `goals` variable?"
###Code
# Run to display histogram of skin colors, as measure by rater 1
goals = data.select("rater1")
goals.hist("rater1", bins=np.arange(0, 1, 0.2))
# Run to display histogram of skin colors, as measure by rater 2
goals = data.select("rater2")
goals.hist("rater2", bins=np.arange(0, 1, 0.2))
###Output
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning:
The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.
alternative="'density'", removal="3.1")
###Markdown
What do you observe from the histograms? Why could this be? ***YOUR ANSWER HERE*** Question 3.3: Now create a histogram with the empirical distribution of red cards and a histogram with the empirical distribution of yellow cards. Then create a histogram that displays both simultaneously and describe your findings and offer an explanation.
###Code
yellows = data.select("yellowCards")
reds = data.select("redCards")
yellows.hist("yellowCards", bins=np.arange(0, 5, 1))
reds.hist("redCards", bins=np.arange(0, 3))
data.hist("yellowCards", "redCards", bins=np.arange(0, 5, 1))
###Output
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning:
The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.
alternative="'density'", removal="3.1")
###Markdown
*Describe and explain your findings* ***YOUR ANSWER HERE*** Question 3.4: Box plots:Box plots are a data visualization that also allows a data scientist to understand the distribution of a particular variable or variables. In particular, it presents data in percentiles (25, 50, 75%) to give a more standardized picture of the spread of the data. Below is an example of box plots in a side-by-side layout describing the distribution of mean and explicit biases, respectively. Please refer to [this article](https://pro.arcgis.com/en/pro-app/help/analysis/geoprocessing/charts/box-plot.htm) for more information on what the components of a box plot mean.
###Code
# Run to create boxplot. We will be using the table of "means" we created in 3.1
means.select("meanIATs", "meanExps").boxplot()
###Output
_____no_output_____
###Markdown
What do you observe from the box plots? Why might each distribution may be shaped like this? ***YOUR ANSWER HERE*** Question 3.5: Now create a pair of side-by-side box plots analyzing the distribution of two comparable variables (i.e., red and yellow cards). Then describe your findings and offer an explanation.
###Code
### Create an illustrative data visualization ###
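# One possible answer (added as a sketch, not the original author's solution):
# side-by-side box plots of the two comparable card counts.
data.select("redCards", "yellowCards").boxplot()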
###Output
_____no_output_____
###Markdown
*Describe your findings and explain why the data may be distributed this way.* ***YOUR ANSWER HERE*** --- 4. More Practice
###Code
# Just run me to reload our dropped variables into our data table
data = pd.read_csv("CrowdstormingDataJuly1st.csv").dropna()
data = Table.from_df(data)
###Output
_____no_output_____
###Markdown
Observe below how we're able to use a pivot table to make an insightful series of bar plots on the number of red cards awarded by referees officiating in different leagues across Europe. The number to the left of the plots' y axes represents the number of red cards awarded in those kinds of games. The labels of the plots' y axes is the number of games in that particular referee/league combination for the given number of red cards.
###Code
agg = data.pivot("redCards", "leagueCountry")
agg
agg.bar("leagueCountry", overlay=False)
###Output
_____no_output_____
###Markdown
Question 4.1:Interpret what you see. *** YOUR ANSWER HERE *** Observe below how we are again able to use a pivot table to make a similar bar plot--this time aggregating the number of games with certain amounts of red cards given by referees of different countries. Note: the referees' countries are anonimized as random, numerical IDs.
###Code
agg = data.pivot("redCards", "refCountry")
agg
agg.bar("refCountry", overlay=False)
###Output
_____no_output_____
###Markdown
Question 4.2:Interpret each plot. Explain what the peaks in these bar plots represent. ***YOUR ANSWER HERE *** Observe below the further use of pivot tables to break down the distribution of red cards by player position.
###Code
agg = data.pivot("redCards", "position")
agg
agg.bar("position", overlay=False, width=20)
###Output
_____no_output_____
###Markdown
Question 4.3:Interpret each plot. What [positions](https://en.wikipedia.org/wiki/Association_football_positions) stand out and why might this be? ***YOUR ANSWER HERE*** Observe a scatter plot between victories and games. Intuitively, the two variables are positively correlate and the best fit line has a slope of about 1. This slope is consistent with the fact--with the exception of ties--a win by one team must be accompanied by a loss for the opposing team.
###Code
data.scatter("victories", "games", fit_line=True)
###Output
_____no_output_____
###Markdown
Observe a histogram of the number of games each player has appeared in the dataset.
###Code
data.hist("games", bins=np.arange(1, 20))
###Output
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning:
The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.
alternative="'density'", removal="3.1")
|
(LA) Determinant_of_Matrix.ipynb | ###Markdown
###Code
#solving for the determinant of the given matrix
import numpy as np #numerical python
A = np.array([[1,2,-1],[4,6,-2],[-1,3,3]]) #the given array
print("Matrix A") #displaying the matrix
print(A)
print("\n")
print("Determinant of Matrix A") #displaying the determinant of matrix
print("|A| =",round(np.linalg.det(A)))
###Output
Matrix A
[[ 1 2 -1]
[ 4 6 -2]
[-1 3 3]]
Determinant of Matrix A
|A| = -14
|
notebooks/munasinghe_nuwan-13104409-week2_randomforest.ipynb | ###Markdown
If not using Kaggle data set for submission, split train datasets for training (80%), testing (10%) and validation (10%)and normalize features using MinMaxScaler. Else load full Kaggle data and predict using Kaggle test set for submission
###Code
if use_kaggle_data:
X_train, y_train, X_test = process_test_data.load_kaggle_train_and_test_data('../data/raw/train.csv', '../data/raw/test.csv')
else:
X_train, y_train, X_test, y_test, X_valid, y_valid = \
process_test_data.split_and_normalize('../data/raw/train.csv', '../data/processed')
###Output
Original train shape: (8000, 21)
Concat shape: (8000, 20)
Files written to: ../data/processed
X_train shape: (6400, 19)
y_train shape: (6400, 1)
X_test shape: (800, 20)
y_test shape: (800, 1)
X_valid shape: (800, 20)
y_valid shape: (800, 1)
###Markdown
Check details of the data if required
###Code
# X_test.describe()
###Output
_____no_output_____
###Markdown
Selecting features using sequential feature selection if required
###Code
if run_feature_selection==True:
num_of_features_to_select = 7
features = process_test_data.sequential_feature_selection('../data/raw/train.csv', num_of_features_to_select) # ['GP', 'MIN', 'FGM', '3P Made', 'OREB', 'BLK', 'TOV']
X_train = X_train[features]
# Appending Id column since it should be kept
features.append('Id')
X_test = X_test[features]
###Output
_____no_output_____
###Markdown
Running parameter optimization if required
###Code
# Defining pre-identified best parameters if parameter optimization is not going to run
hyp_params = {
'n_estimators': 200,
'max_depth': 12,
'criterion': 'entropy',
'class_weight': None,
'max_features': 'auto'
}
def hyperparameter_tuning(params):
from sklearn.model_selection import cross_val_score
clf = RandomForestClassifier(**params, n_jobs=-1)
acc = cross_val_score(clf, X_train, y_train.values.ravel(), scoring="accuracy", cv=10).mean()
return {"loss": -acc, "status": STATUS_OK}
if run_parameter_optimization==True:
n_estimators=[100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
criterian=["gini", "entropy"]
class_weight=["balanced_subsample", "balanced", None]
max_features=["auto", "sqrt", "log2"]
space = {
"n_estimators": hp.choice("n_estimators", n_estimators),
"max_depth": hp.quniform("max_depth", 1, 30,1),
"criterion": hp.choice("criterion", ["gini", "entropy"]),
"class_weight": hp.choice("class_weight", class_weight),
"max_features": hp.choice("max_features", max_features),
}
# Initialize trials object
trials = Trials()
best = fmin(fn=hyperparameter_tuning, space = space, algo=tpe.suggest, max_evals=100, trials=trials)
hyp_params['n_estimators'] = n_estimators[best['n_estimators']]
    hyp_params['max_depth'] = int(best['max_depth'])  # quniform returns a float; the classifier expects an integer depth
hyp_params['criterion'] = criterian[best['criterion']]
hyp_params['class_weight'] = class_weight[best['class_weight']]
hyp_params['max_features'] = max_features[best['max_features']]
print("Best: {}".format(best))
###Output
100%|████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [1:06:40<00:00, 40.00s/trial, best loss: -0.8359375]
Best: {'class_weight': 2, 'criterion': 1, 'max_depth': 28.0, 'max_features': 1, 'n_estimators': 7}
###Markdown
Training the random forest
###Code
rf = RandomForestClassifier(n_estimators=hyp_params['n_estimators'], n_jobs=1, random_state = 44, max_features=hyp_params['max_features'], \
oob_score=True, class_weight=hyp_params['class_weight'], max_depth=hyp_params['max_depth'], criterion=hyp_params['criterion'])
# Converting column y values to 1d array
rf.fit(X_train, y_train.values.ravel())
###Output
_____no_output_____
###Markdown
Predicting using trained random forest
###Code
# Selecting columns to train
test_X = X_test.loc[:, 'GP':'TOV']
# Selecting Ids for CSV
test_X_Ids = X_test.loc[:,'Id']
if use_kaggle_data==True:
# Predicting probabilities for kaggle submission and selecting probability of class 1.
pred = rf.predict_proba(test_X)[:,1]
else:
# Predicting classes (1 or 0) for calculating accuracy
pred = rf.predict(test_X)
# Probabilities for calculating ROC
rf_probs = rf.predict_proba(test_X)[:,1]
# Data frame with ID for csv writing. In Kaggle mode pred will contains probabilities and else contains classes
result = pd.DataFrame(data = {'Id': test_X_Ids, 'TARGET_5Yrs': pred})
# Extracting values for calculating stats
result_values = result[['TARGET_5Yrs']]
###Output
_____no_output_____
###Markdown
Saving the trained model and writing the result to a CSV file
###Code
joblib.dump(rf, "../models/nuwan_random_forest_v13.joblib", compress=3)
###Output
_____no_output_____
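###Markdown
For completeness, a short sketch of loading the persisted model back for later scoring. The path matches the `joblib.dump` call above; `new_data` is a hypothetical stand-in for a DataFrame with the same feature columns, in the same order, as the training set.
###Code
from joblib import load

rf_restored = load("../models/nuwan_random_forest_v13.joblib")
# probabilities = rf_restored.predict_proba(new_data)[:, 1]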
###Markdown
Show stats related to performance of the model if not using Kaggle dataset
###Code
if use_kaggle_data==False:
visualize.show_random_forest_stats(rf, test_X, y_test, rf_probs)
# visualize.show_feature_importance(rf, X_train) # Uncomment to see feature importance if required
else:
result.to_csv("../data/external/submission_nuwan_v13.csv", index = False)
print("Kaggle dataset and no stats. Writing to a file.")
###Output
Average absolute error: 16.75%
ROC: 0.69202
|
examples/tutorials/6_Creating_Generic_HDF5.ipynb | ###Markdown
Creating Generic HDF5 FilesThe DL1 files shown in the previous tutorials are created and read by subclasses to the `HDF5Writer` and `HDF5Reader` classes, respectively. These classes can be used for more custom purposes, such as the storage of some data in a tabular format. I personally find this very useful, and many of my personal scripts store data into a HDF5 file as a intermediary step (using `HDF5Writer`), while a second script will create the plot from this file (using `HDF5Reader`). Reminder about HDF5 and DataFramesThe .h5 extension is used by HDF5 files https://support.hdfgroup.org/HDF5/whatishdf5.html.Inside the HDF5 files are HDFStores, which are the format pandas DataFrames are stored inside HDF5 files. You can read about HDFStores here: https://pandas.pydata.org/pandas-docs/stable/io.htmlhdf5-pytables.Pandas DataFrames are a tabular data structure widely used by data scientists for Python analysis: https://pandas.pydata.org/pandas-docs/stable/dsintro.htmldataframe. They allow easy querying, sorting, grouping, and processing of data. HDF5Writer Example The most straight-forward way to write to a HDF5 file is via the `write` method:
###Code
import pandas as pd
import numpy as np
from CHECLabPy.core.io import HDF5Writer
x = np.arange(100)
y2 = x**2
df2 = pd.DataFrame(dict(
x=x,
y=y2,
))
y5 = x**5
df5 = pd.DataFrame(dict(
x=x,
y=y5,
))
metadata_2 = dict(
size=x.size,
power=2,
)
metadata_5 = dict(
size=x.size,
power=5,
)
with HDF5Writer("refdata/data1.h5") as writer:
writer.write(data_2=df2, data_5=df5)
writer.add_metadata(key='data_2', **metadata_2)
writer.add_metadata(key='data_5', **metadata_5)
# Add a second metadata field for the data_5 table
writer.add_metadata(key='data_5', name='test', **metadata_5)
###Output
_____no_output_____
###Markdown
However, if you are instead iterating through a dataset, and cannot hold the entire result in memory for storage, you can instead use the `append` method. This is used in the extract_dl1 script.
###Code
import pandas as pd
import numpy as np
from CHECLabPy.core.io import HDF5Writer
metadata = dict(
size=100*3,
)
with HDF5Writer("refdata/data2.h5") as writer:
for x in range(100):
power = np.array([2, 4, 5])
y = x**power
df = pd.DataFrame(dict(
x=x,
y=y,
power=power,
))
writer.append(df, key='data')
writer.add_metadata(key='data', **metadata)
###Output
_____no_output_____
###Markdown
If you are processing data from a TIO or DL1 file, you may wish to store the pixel mapping inside the HDF5 file with your results, which could be useful for plotting the results later:
###Code
# Plotting a camera image of charge extracted per pixel for the nth event
import pandas as pd
from CHECLabPy.core.io import HDF5Writer
from CHECLabPy.core.io import DL1Reader
dl1_path = "refdata/Run17473_dl1.h5"
reader = DL1Reader(dl1_path)
pixel, charge = reader.select_columns(['pixel', 'charge_cc'])
df = pd.DataFrame(dict(
pixel=pixel,
charge=charge,
))
with HDF5Writer("refdata/data3.h5") as writer:
writer.write(data=df)
writer.add_mapping(reader.mapping)
###Output
_____no_output_____
###Markdown
HDF5Reader ExampleIt is possible to see what contents of a file are accessible with the `dataframe_keys` and `metadata_keys` attributes:
###Code
from CHECLabPy.core.io import HDF5Reader
with HDF5Reader("refdata/data1.h5") as reader:
print(reader.dataframe_keys)
print(reader.metadata_keys)
###Output
_____no_output_____
###Markdown
Reading the data back from the file is achieved as follows:
###Code
from CHECLabPy.core.io import HDF5Reader
with HDF5Reader("refdata/data1.h5") as reader:
df_2 = reader.read("data_2")
df_5 = reader.read("data_5")
metadata_2 = reader.get_metadata(key='data_2')
metadata_5 = reader.get_metadata(key='data_5', name='test')
print(df_2)
print(metadata_2)
from CHECLabPy.core.io import HDF5Reader
with HDF5Reader("refdata/data2.h5") as reader:
df = reader.read("data")
metadata = reader.get_metadata(key='data')
print(df)
print(metadata)
from CHECLabPy.core.io import HDF5Reader
with HDF5Reader("refdata/data3.h5") as reader:
df = reader.read("data")
mapping = reader.get_mapping()
print(df)
print(mapping)
###Output
_____no_output_____ |
GAN02-dcgan-svhn/DCGAN_Exercises.ipynb | ###Markdown
Deep Convolutional GANsIn this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the [original paper here](https://arxiv.org/pdf/1511.06434.pdf).You'll be training DCGAN on the [Street View House Numbers](http://ufldl.stanford.edu/housenumbers/) (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST. So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what [you saw previously](https://github.com/udacity/deep-learning/tree/master/gan_mnist) are in the generator and discriminator, otherwise the rest of the implementation is the same.
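###Markdown
Before the implementation below, here is a compact sketch (an addition, not part of the exercise code, written against the same TF1-style API this notebook imports) of the kind of building block described above: a strided convolution followed by batch normalization and a leaky ReLU, the basic pattern used throughout a DCGAN discriminator.
###Code
import tensorflow as tf

def conv_block(x, filters, alpha=0.2, training=True):
    """Strided conv -> batch norm -> leaky ReLU."""
    x = tf.layers.conv2d(x, filters, kernel_size=5, strides=2, padding='same')
    x = tf.layers.batch_normalization(x, training=training)
    return tf.maximum(alpha * x, x)   # leaky ReLU

# Example: a batch of 32x32x3 SVHN images shrinking to 16x16x64 feature maps
images = tf.placeholder(tf.float32, (None, 32, 32, 3))
features = conv_block(images, 64)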
###Code
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
###Output
A subdirectory or file data already exists.
###Markdown
Getting the dataHere you can download the SVHN dataset. Run the cell below and it'll download to your machine.
###Code
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
###Output
_____no_output_____
###Markdown
These SVHN files are `.mat` files typically used with Matlab. However, we can load them in with `scipy.io.loadmat` which we imported above.
###Code
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
###Output
_____no_output_____
###Markdown
Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
###Code
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
###Output
_____no_output_____
###Markdown
Here we need to do a bit of preprocessing to get the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
###Code
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
            idx = np.arange(len(self.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), y
###Output
_____no_output_____
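###Markdown
As a quick optional sanity check (an added sketch, assuming the SVHN `.mat` files were loaded above), we can confirm that a scaled batch has the expected shape and lands in the -1 to 1 range expected by the generator's tanh output:
###Code
# Sanity check: one scaled batch should be (batch, 32, 32, 3) with values in [-1, 1]
check_dataset = Dataset(trainset, testset)
x_check, y_check = next(check_dataset.batches(16))
print(x_check.shape, x_check.min(), x_check.max())
###Output
_____no_output_____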
###Markdown
Network InputsHere, just creating some placeholders like normal.
###Code
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
###Output
_____no_output_____
###Markdown
GeneratorHere you'll build the generator network. The input will be our noise vector `z` as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32, which is the size of our SVHN images.What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is transposed convolution > batch norm > leaky ReLU.You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:Note that the final layer there is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3. >**Exercise:** Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one.
###Code
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x1 = tf.layers.dense(z, 4*4*512)
# 4 * 4 * 512
x1 = tf.reshape(x1, (-1, 4, 4, 512))
x1 = tf.layers.batch_normalization(x1, training=training)
x1 = tf.maximum(alpha * x1, x1) # leaky relu
# 8 * 8 * 256
x2 = tf.layers.conv2d_transpose(x1, filters=256, kernel_size=5, strides=2, padding="same")
x2 = tf.layers.batch_normalization(x2, training=training)
x2 = tf.maximum(alpha * x2, x2) # leaky relu
# 16 * 16 * 128
x3 = tf.layers.conv2d_transpose(x2, filters=128, kernel_size=5, strides=2, padding="same")
x3 = tf.layers.batch_normalization(x3, training=training)
x3 = tf.maximum(alpha * x3, x3) # leaky relu
# 32 * 32 * 3
        logits = tf.layers.conv2d_transpose(x3, filters=output_dim, kernel_size=5, strides=2, padding="same")
out = tf.tanh(logits)
return out
###Output
_____no_output_____
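###Markdown
A quick shape check (an added sketch, not part of the original exercise) confirms that the generator maps a noise vector to a 32x32x3 image as described above:
###Code
# Build a throwaway graph and confirm the generator output shape is (?, 32, 32, 3)
tf.reset_default_graph()
z_check = tf.placeholder(tf.float32, (None, 100))
g_check = generator(z_check, output_dim=3)
print(g_check.get_shape().as_list())
tf.reset_default_graph()
###Output
_____no_output_____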
###Markdown
DiscriminatorHere you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The inputs to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, or 64 filters in the first layer, then doubling the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.You'll also want to use batch normalization with `tf.layers.batch_normalization` on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set `training` to `True`.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the `training` parameter appropriately.>**Exercise:** Build the convolutional network for the discriminator. The input is a 32x32x3 image; the output is a sigmoid plus the logits. Again, use Leaky ReLU activations and batch normalization on all the layers except the first.
###Code
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
# 16x16x64
x1 = tf.layers.conv2d(x, filters=64, kernel_size=5, strides=2, padding='same')
        # no batch normalization on the first convolutional layer, per the instructions above
x1 = tf.maximum(alpha * x1, x1)
# 8x8x128
x2 = tf.layers.conv2d(x1, filters=128, kernel_size=5, strides=2, padding='same')
x2 = tf.layers.batch_normalization(x2, training=True)
x2 = tf.maximum(alpha * x2, x2)
# 4x4x256
x3 = tf.layers.conv2d(x2, filters=256, kernel_size=5, strides=2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=True)
x3 = tf.maximum(alpha * x3, x3)
# fn
flat = tf.reshape(x3, (-1, 4 * 4 * 256))
logits = tf.layers.dense(flat, 1)
out = tf.sigmoid(logits)
return out, logits
###Output
_____no_output_____
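###Markdown
Similarly, a quick check (an added sketch) confirms that the discriminator reduces a batch of 32x32x3 images to one sigmoid probability and one logit per image:
###Code
# Confirm the discriminator returns shapes (?, 1) for both the probability and the logit
tf.reset_default_graph()
x_check = tf.placeholder(tf.float32, (None, 32, 32, 3))
d_out_check, d_logits_check = discriminator(x_check)
print(d_out_check.get_shape().as_list(), d_logits_check.get_shape().as_list())
tf.reset_default_graph()
###Output
_____no_output_____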
###Markdown
Model LossCalculating the loss like before, nothing new here.
###Code
def model_loss(input_real, input_z, output_dim, alpha=0.2):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
"""
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
###Output
_____no_output_____
###Markdown
OptimizersNot much new here, but notice how the train operations are wrapped in a `with tf.control_dependencies` block so the batch normalization layers can update their population statistics.
###Code
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
###Output
_____no_output_____
###Markdown
Building the modelHere we can use the functions we defined above to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
###Code
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=alpha)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
###Output
_____no_output_____
###Markdown
Here is a function for displaying generated images.
###Code
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
###Output
_____no_output_____
###Markdown
And another function we can use to train our network. Notice when we call `generator` to create the samples to display, we set `training` to `False`. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the `net.input_real` placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the `tf.control_dependencies` block we created in `model_opt`.
###Code
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
# At the end of each epoch, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
###Output
_____no_output_____
###Markdown
HyperparametersGANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read [the DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf) to see what worked for them.>**Exercise:** Find hyperparameters to train this GAN. The values found in the DCGAN paper work well, or you can experiment on your own. In general, you want the discriminator loss to be around 0.3; this means it is correctly classifying images as fake or real about 50% of the time.
###Code
real_size = (32,32,3)
z_size = 100
learning_rate = 0.001
batch_size = 64
epochs = 1
alpha = 0.01
beta1 = 0.9
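# For reference, the original DCGAN paper reports stable training with roughly
# learning_rate = 0.0002, beta1 = 0.5, and a leaky ReLU slope (alpha) of 0.2;
# these are reasonable alternatives to try if the settings above train poorly.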
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
# Load the data and train the network here
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
###Output
_____no_output_____ |
1-Lessons/Lesson08/.ipynb_checkpoints/ENGR-1330-Lesson08-checkpoint.ipynb | ###Markdown
Download this page as a jupyter notebook at [Lesson 8](http://54.243.252.9/engr-1330-webroot/1-Lessons/Lesson08/ENGR-1330-Lesson08.ipynb) ENGR 1330 Computational Thinking with Data Science Copyright © 2021 Theodore G. Cleveland and Farhang ForghanparastLast GitHub Commit Date: 10 September 2021 Lesson 8 Matrix Arithmetic:- The Matrix:A data structure - Matrix Definition - Matrix Arithmetic - Multiply a matrix by a scalar . - Matrix addition (and subtraction) - Multiply a matrix - Identity matrix - Matrix Inverse - Gauss-Jordan method of finding $A^{−1}$
###Code
# Script block to identify host, user, and kernel
import sys
! hostname; ! whoami; ! pwd;
print(sys.executable)
%%html
<!-- Script Block to set tables to left alignment -->
<style>
table {margin-left: 0 !important;}
</style>
###Output
_____no_output_____
###Markdown
--- Objectives1. Demonstrate matrices as a list of lists2. Demonstrate matrix arithmetic using fundamental arithmetic and list manipulation --- Matrices and VectorsA matrix is a rectangular array of numbers.\begin{gather}\begin{pmatrix}1 & 5 & 7 & 2\\2 & 9 & 17 & 5 \\11 & 15 & 8 & 3 \\\end{pmatrix}\end{gather}The size of a matrix is referred to in terms of the number of rows and the number of columns. The enclosing parentheses are optional above, but become meaningful when writing multiple matrices next to each other.The above matrix is 3 by 4. When we are discussing matrices we will often refer to specific numbers in the matrix.To refer to a specific element of a matrix we refer to the row number (i) and the column number (j).We will often call a specific element of the matrix the $a_{i,j}$-th element of the matrix.For example, the $a_{2,3}$ element in the above matrix is 17. We have seen in Python that we would refer to the element as $a_{matrix}[i][j]$ or whatever the name of the matrix is in the program, with the caveat that the elements start counting at 0. For instance in the matrix above $a_{matrix}[0][0]$ contains the value 1.A vector is really just a matrix with a single column, but vectors and matrices are often treated as different kinds of entities.In python core, a matrix is simply a list of lists - each row is a list, and the matrix is a collection of rows. A vector is simply a collection (list) of elements (we can force a vector to be a matrix, but we would have a structure that builds a collection of single element lists).For small matrices we can build them with explicit code, but larger ones are usually kept in files (hence the file handling lesson prior to this lesson). Also, processing matrices as lists is ultimately cumbersome, so we will later employ the `numpy` package that greatly facilitates matrix manipulation - here we will learn about matrix manipulation for the pedagogical aspect.To complete this section, let's create the matrix above and access its contents.
###Code
amatrix = [[1 , 5 , 7 , 2],
[2 , 9 , 17 , 5],
[11 , 15 , 8 , 3]]
print('rows = ',len(amatrix),'cols = ',len(amatrix[0]))
for i in range(len(amatrix)): # print by row
print(amatrix[i][:])
###Output
rows = 3 cols = 4
[1, 5, 7, 2]
[2, 9, 17, 5]
[11, 15, 8, 3]
###Markdown
--- --- Matrix ArithmeticAnalysis of many problems in engineering result in systems of simultaneous equations.We typically represent systems of equations with a matrix. For example the two-equation system,\begin{gather}\begin{matrix}2x_1 & ~+~~3x_2 \\~\\4x_1 & ~-~~3x_2 \\\end{matrix}\begin{matrix}=~8\\~\\=~-2\\\end{matrix}\end{gather}Could be represented by set of vectors and matrices(Usually called ``vector-matrix'' form. Additionally, a vector is really just a matrix with column rank = 1 (a single column matrix).)\begin{gather}\mathbf{A} =\begin{pmatrix}2 & ~3 \\~\\4 & -3 \\\end{pmatrix}~\mathbf{x} =\begin{pmatrix}x_1\\~\\x_2\\\end{pmatrix}~\mathbf{b} =\begin{pmatrix}~8\\~\\-2\\\end{pmatrix}\end{gather}and the linear system then written as\begin{gather}\mathbf{A} \cdot \mathbf{x} = \mathbf{b}\end{gather}So the "algebra" is considerably simplified, at least for writing things, however we now have to be able to do things like multiplication (indicated by $ ~\cdot $) as well as the concept of addition and subtraction, and division (multiplication by an inverse). There are also several kinds of matrix multiplication -- the inner (or dot) product as required by the linear system, the vector (cross product), the exterior (wedge), and outer (tensor) product are a few of importance in both mathematics and engineering. --- Matrix Arithmetic Multiply a matrix by a scalarA scalar multiple of a matrix is simply each element of the matrix multiplied by the scalar value. Consider the matrix $\mathbf{A}$ below.\begin{gather}\mathbf{A}=\begin{pmatrix}1 & 5 & 7 \\2 & 9 & 3 \\4 & 4 & 8 \\\end{pmatrix}\end{gather}If the scalar is say 2, then $2 \times \mathbf{A}$ is computed by doubling each element of $\mathbf{A}$, as\begin{gather}2\mathbf{A}=\begin{pmatrix}2 & 10 & 14\\4 & 18 & 6 \\8 & 8 & 16 \\\end{pmatrix}\end{gather}
###Code
amatrix = [[1 , 5 , 7 ],
[2 , 9 , 3],
[4 , 4 , 8 ]]
MyScalar = input("Enter scalar value for multiply matrix \n")
MyScalar = float(MyScalar) # force a float
# now perform element-by-element multiplication
for i in range(0,len(amatrix),1):
for j in range(0,len(amatrix[0]),1):
amatrix[i][j] = MyScalar * amatrix[i][j] # this will change contents of amatrix
for i in range(len(amatrix)): # print by row
print(amatrix[i][:])
###Output
Enter scalar value for multiply matrix
2
###Markdown
--- Matrix addition (and subtraction)Matrix addition and subtraction are also element-by-element operations.In order to add or subtract two matrices they must be the same size and shape.This requirement means that they must have the same number of rows and columns.To add or subtract a matrix we simply add or subtract the corresponding elements from each matrix.For example consider the two matrices $\mathbf{A}$ and $\mathbf{B}$ below\begin{gather}\mathbf{A}=\begin{pmatrix}1 & 5 & 7 \\2 & 9 & 3 \\\end{pmatrix}~ \mathbf{B}=\begin{pmatrix}3 & -2 & 1 \\-2 & 1 & 1 \\\end{pmatrix}\end{gather}For example the sum of these two matrices is the matrix named $\mathbf{A+B}$, shown below:\begin{gather}\mathbf{A+B}=\begin{pmatrix}1+3 & 5-2 & 7+1 \\2-2 & 9+1 & 3+1 \\\end{pmatrix}=\begin{pmatrix}4 & 3 & 8 \\0 & 10 & 4 \\\end{pmatrix}\end{gather}Now to do the operation in Python, we need to read in the matrices, perform the addition, and write the result.
###Code
amatrix = [[1 , 5 , 7 ],
[2 , 9 , 3]]
bmatrix = [[3 , -2 , 1 ],
[-2 , 1 , 1]]
cmatrix = [[0 for j in range(len(amatrix[0]))] for i in range(len(amatrix))] # 2D list to receive input; explicit sizing
for i in range(len(amatrix)):
for j in range(len(amatrix[0])):
cmatrix[i][j]= amatrix[i][j] + bmatrix[i][j]
for i in range(len(cmatrix)): # print by row
print(cmatrix[i][:])
###Output
[4, 3, 8]
[0, 10, 4]
###Markdown
In the code example above I added a third matrix to store the result -- generally we don't want to clobber existing matrices. Also notice the construction of the third matrix: because I already know the required size, I can use a nested list comprehension to create and fill the matrix with zeros in the correct size and shape. Sometimes it doesn't matter if we clobber an existing matrix, in which case something like:
###Code
amatrix = [[1 , 5 , 7 ],
[2 , 9 , 3]]
bmatrix = [[3 , -2 , 1 ],
[-2 , 1 , 1]]
for i in range(len(amatrix)):
for j in range(len(amatrix[0])):
amatrix[i][j]= amatrix[i][j] + bmatrix[i][j] # amatrix is replaced with the sum
for i in range(len(amatrix)): # print by row
print(amatrix[i][:])
###Output
[4, 3, 8]
[0, 10, 4]
###Markdown
would work just fine. Subtraction is performed in a similar fashion, except the subtraction operator is used.
###Code
amatrix = [[1 , 5 , 7 ],
[2 , 9 , 3]]
bmatrix = [[3 , -2 , 1 ],
[-2 , 1 , 1]]
cmatrix = [[0 for j in range(len(amatrix[0]))] for i in range(len(amatrix))] # 2D list to receive input; explicit sizing
for i in range(len(amatrix)):
for j in range(len(amatrix[0])):
cmatrix[i][j]= amatrix[i][j] - bmatrix[i][j] # subtract bmatrix from amatrix
for i in range(len(cmatrix)): # print by row
print(cmatrix[i][:])
###Output
[-2, 7, 6]
[4, 8, 2]
###Markdown
Matrix multiplicationMatrix multiplication is more complex than addition and subtraction. There are several types of multiplication with respect to matrices; usually, when matrix multiplication is mentioned without further qualification, the implied meaning is the inner (or dot) product of the matrix and a vector (or another matrix) of the correct shapes.If two matrices such as a matrix $\mathbf{A}$ (size L x m) and a matrix $\mathbf{B}$ ( size m x n) are multiplied together, the resulting matrix $\mathbf{C}$ has a size of L x n. The order of multiplication of matrices is important (Matrix multiplication is not commutative; $\mathbf{A}~\mathbf{B} ~\ne~ \mathbf{B}~\mathbf{A}$.). To obtain $\mathbf{C}$ = $\mathbf{A}$ $\mathbf{B}$, the number of columns in $\mathbf{A}$ must be the same as the number of rows in $\mathbf{B}$. In order to carry out the matrix operations for multiplication of matrices, the $i,j$-th element of $\mathbf{C}$ is simply equal to the scalar (dot or inner) product of row $i$ of $\mathbf{A}$ and column $j$ of $\mathbf{B}$.Consider the example below \begin{gather}\mathbf{A}=\begin{pmatrix}1 & 5 & 7 \\2 & 9 & 3 \\\end{pmatrix}~ \mathbf{B}=\begin{pmatrix}3 & -2 \\-2 & 1 \\1 & 1 \\\end{pmatrix}\end{gather}Suppose we wish to compute the inner product $\mathbf{A}~\mathbf{B}$. First, we would evaluate whether the operation is even possible: $\mathbf{A}$ has two rows and three columns, and $\mathbf{B}$ has three rows and two columns. By our implied multiplication "rules", for the multiplication to be defined the first matrix must have the same number of columns as the second matrix has rows (in this case it does), and the result matrix will have the same number of rows as the first matrix, and the same number of columns as the second matrix (in this case the result will be a 2X2 matrix).\begin{gather}\mathbf{C}=\mathbf{A}\mathbf{B}=\begin{pmatrix}c_{1,1} & c_{1,2} \\c_{2,1} & c_{2,2} \\\end{pmatrix}\end{gather}And each element of $\mathbf{C}$ is the dot product of the corresponding row vector of $\mathbf{A}$ and column vector of $\mathbf{B}$.\newpage\begin{gather}c_{1,1} =\begin{pmatrix}1 & 5 & 7 \\\end{pmatrix}\cdot\begin{pmatrix}3 \\-2 \\1 \\\end{pmatrix}=\begin{pmatrix}(1)(3) +(5)(-2) + (7)(1)\\\end{pmatrix}= 0\end{gather}\begin{gather}c_{1,2} =\begin{pmatrix}1 & 5 & 7 \\\end{pmatrix}\cdot\begin{pmatrix}-2 \\1 \\1 \\\end{pmatrix}=\begin{pmatrix}(1)(-2) +(5)(1) + (7)(1)\\\end{pmatrix}= 10\end{gather}\begin{gather}c_{2,1} =\begin{pmatrix}2 & 9 & 3 \\\end{pmatrix}\cdot\begin{pmatrix}3 \\-2 \\1 \\\end{pmatrix}=\begin{pmatrix}(2)(3) +(9)(-2) + (3)(1)\\\end{pmatrix}= -9\end{gather}\begin{gather}c_{2,2} =\begin{pmatrix}2 & 9 & 3 \\\end{pmatrix}\cdot\begin{pmatrix}-2 \\1 \\1 \\\end{pmatrix}=\begin{pmatrix}(2)(-2) +(9)(1) + (3)(1)\\\end{pmatrix}= 8\end{gather}Making the substitutions results in:\begin{gather}\mathbf{C}=\mathbf{A}\mathbf{B}=\begin{pmatrix}0 & 10 \\-9 & 8 \\\end{pmatrix}\end{gather}So in an algorithmic sense we will have to deal with three matrices: the two source matrices and the destination matrix. We will also have to manage element-by-element multiplication and be able to correctly store through rows and columns.Here is the process in a script. The `bmatrix` object must be a 2D list, or it won't work.
###Code
amatrix = [[1 , 5 , 7 ],
[2 , 9 , 3]]
bmatrix = [[3 , -2 ],
[-2 , 1 ],
[ 1 , 1]]
# destination matrix, rows count same a amatrix, columns count same as bmatrix
cmatrix = [[0 for j in range(len(bmatrix[0]))] for i in range(len(amatrix))] # 2D list to receive input; explicit sizing
# now for the multiplication
for i in range(0,len(amatrix)):
for j in range(0,len(bmatrix[0])):
for k in range(0,len(amatrix[0])):
cmatrix[i][j]=cmatrix[i][j]+amatrix[i][k]*bmatrix[k][j]
for i in range(len(cmatrix)): # print by row
print(cmatrix[i][:])
###Output
[0, 10]
[-9, 8]
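###Markdown
As a short added example that ties back to the two-equation linear system at the start of this lesson, we can treat the proposed solution $x_1 = 1,~x_2 = 2$ as a single-column matrix and reuse the same triple loop to verify that $\mathbf{A} \cdot \mathbf{x}$ reproduces the right-hand side $\mathbf{b}$:
###Code
amatrix = [[2 , 3 ],
           [4 , -3 ]]
xmatrix = [[1],
           [2]]  # proposed solution of the two-equation system
bmatrix = [[0 for j in range(len(xmatrix[0]))] for i in range(len(amatrix))]
for i in range(0,len(amatrix)):
    for j in range(0,len(xmatrix[0])):
        for k in range(0,len(amatrix[0])):
            bmatrix[i][j]=bmatrix[i][j]+amatrix[i][k]*xmatrix[k][j]
for i in range(len(bmatrix)): # print by row -- expect [8] and [-2]
    print(bmatrix[i][:])
###Output
_____no_output_____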
###Markdown
Identity matrixIn computational linear algebra we often need to make use of a special matrix called the "Identity Matrix". The Identity Matrix is a square matrix with all zeros except the $i,i$-th (diagonal) elements, which are equal to 1:\begin{gather}\mathbf{I}_{3\times3}=\begin{pmatrix}1 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\\\end{pmatrix}\end{gather}Usually we don't bother with the size subscript used above and just stipulate that the matrix is sized as appropriate.Multiplying any matrix by (a correctly sized) identity matrix results in no change in the matrix: $\mathbf{I}\mathbf{A} = \mathbf{A}$. Using our script from above:
###Code
amatrix = [[1 , 5 , 7 ],
[2 , 9 , 3],
[4 , 4 , 8 ]]
print('A matrix')
for i in range(len(amatrix)): # print by row
print(amatrix[i][:])
bmatrix = [[1 , 0 , 0 ],
[0 , 1 , 0],
[0 , 0 , 1 ]]
print('B matrix')
for i in range(len(bmatrix)): # print by row
print(bmatrix[i][:])
# destination matrix, rows count same a amatrix, columns count same as bmatrix
cmatrix = [[0 for j in range(len(bmatrix[0]))] for i in range(len(amatrix))] # 2D list to receive input; explicit sizing
# now for the multiplication
for i in range(0,len(amatrix)):
for j in range(0,len(bmatrix[0])):
for k in range(0,len(amatrix[0])):
cmatrix[i][j]=cmatrix[i][j]+amatrix[i][k]*bmatrix[k][j]
print('C = AB matrix')
for i in range(len(cmatrix)): # print by row
print(cmatrix[i][:])
###Output
A matrix
[1, 5, 7]
[2, 9, 3]
[4, 4, 8]
B matrix
[1, 0, 0]
[0, 1, 0]
[0, 0, 1]
C = AB matrix
[1, 5, 7]
[2, 9, 3]
[4, 4, 8]
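###Markdown
Rather than typing an identity matrix in by hand, we can build one of any size with the same nested list construction used for the workspace matrices above (a small added sketch):
###Code
# Build an n x n identity matrix: 1 on the diagonal, 0 everywhere else
n = 4
imatrix = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
for i in range(len(imatrix)): # print by row
    print(imatrix[i][:])
###Output
_____no_output_____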
###Markdown
Matrix InverseIn many practical computational and theoretical operations we employ the concept of the inverse of a matrix.The inverse is somewhat analogous to "dividing" by the matrix. Consider our linear system \begin{gather}\mathbf{A} \cdot \mathbf{x} = \mathbf{b}\end{gather}If we wished to solve for $\mathbf{x}$ we would "divide" both sides of the equation by $\mathbf{A}$.Instead of division (which is essentially left undefined for matrices) we multiply by the inverse of the matrix (The matrix inverse is the multiplicative inverse of the matrix -- we are defining a division operation, just calling it something else.).The inverse of a matrix $\mathbf{A}$ is denoted by $\mathbf{A}^{-1}$ and by definition is a matrix such that when $\mathbf{A}^{-1}$ and $\mathbf{A}$ are multiplied together, the identity matrix $\mathbf{I}$ results, i.e. $\mathbf{A}^{-1} \mathbf{A} = \mathbf{I}$.Let's consider the matrices below\begin{gather}\mathbf{A}=\begin{pmatrix}2 & 3 \\4 & -3 \\\end{pmatrix}\end{gather}\begin{gather}\mathbf{A}^{-1}=\begin{pmatrix}\frac{1}{6} & \frac{1}{6} \\~\\\frac{2}{9} & -\frac{1}{9} \\\end{pmatrix}\end{gather}We can check that the matrices are indeed inverses of each other using our Python code, performing the multiplication and then reporting the result. The result is the identity matrix regardless of the order of operation.
###Code
amatrix = [[2 , 3 ],
[4 , -3 ]]
print('A matrix')
for i in range(len(amatrix)): # print by row
print(amatrix[i][:])
bmatrix = [[1/6 , 1/6 ],
[2/9 , -1/9]]
print('A-inverse matrix')
for i in range(len(bmatrix)): # print by row
print(bmatrix[i][:])
# destination matrix, rows count same a amatrix, columns count same as bmatrix
cmatrix = [[0 for j in range(len(bmatrix[0]))] for i in range(len(amatrix))] # 2D list to receive input; explicit sizing
# now for the multiplication
for i in range(0,len(amatrix)):
for j in range(0,len(bmatrix[0])):
for k in range(0,len(amatrix[0])):
cmatrix[i][j]=cmatrix[i][j]+amatrix[i][k]*bmatrix[k][j]
print('C = A*A-inverse matrix')
for i in range(len(cmatrix)): # print by row
print(cmatrix[i][:])
###Output
A matrix
[2, 3]
[4, -3]
A-inverse matrix
[0.16666666666666666, 0.16666666666666666]
[0.2222222222222222, -0.1111111111111111]
C = A*A-inverse matrix
[1.0, 0.0]
[0.0, 1.0]
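###Markdown
With the inverse in hand we can actually solve the linear system from the start of the lesson (an added example): multiplying $\mathbf{A}^{-1}$ by $\mathbf{b}$ should recover the solution $x_1 = 1,~x_2 = 2$.
###Code
ainverse = [[1/6 , 1/6 ],
            [2/9 , -1/9]]
bvector = [[8],
           [-2]]
xvector = [[0 for j in range(len(bvector[0]))] for i in range(len(ainverse))]
for i in range(0,len(ainverse)):
    for j in range(0,len(bvector[0])):
        for k in range(0,len(ainverse[0])):
            xvector[i][j]=xvector[i][j]+ainverse[i][k]*bvector[k][j]
for i in range(len(xvector)): # print by row -- expect values very close to 1 and 2
    print(xvector[i][:])
###Output
_____no_output_____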
###Markdown
Gauss-Jordan method of finding $\mathbf{A}^{-1}$ (Optional)There are a number of methods that can be used to find the inverse of a matrix using elementary row operations.An elementary row operation is any one of the three operations listed below:1. Multiply or divide an entire row by a constant.2. Add or subtract a multiple of one row to/from another.3. Exchange the position of any 2 rows.The Gauss-Jordan method of inverting a matrix can be divided into 4 main steps. In order to find the inverse we will be working with the original matrix, augmented with the identity matrix -- this new matrix is called the augmented matrix (because no-one has tried to think of a cooler name yet). \begin{gather}\mathbf{A} | \mathbf{I} =\begin{pmatrix}2 & 3 & | & 1 & 0 \\4 & -3 & | & 0 & 1 \\\end{pmatrix}\end{gather}We will perform elementary row operations based on the left partition to convert it to an identity matrix -- we perform the same operations on the right partition and the result when we are done is the inverse of the original matrix.So here goes -- in the theory here, we also get to do infinite-precision arithmetic, no rounding/truncation errors. > Divide row one by the $a_{1,1}$ value to force a $1$ in the $a_{1,1}$ position. This is elementary row operation 1 in our list above.\begin{gather}\mathbf{A} | \mathbf{I} =\begin{pmatrix}2/2 & 3/2 & | & 1/2 & 0 \\4 & -3 & | & 0 & 1 \\\end{pmatrix}=\begin{pmatrix}1 & 3/2 & | & 1/2 & 0 \\4 & -3 & | & 0 & 1 \\\end{pmatrix}\end{gather}> For all rows below the first row, replace $row_j$ with $row_j - a_{j,1}*row_1$.This happens to be elementary row operation 2 in our list above.\begin{gather}\mathbf{A} | \mathbf{I} =\begin{pmatrix}1 & 3/2 & | & 1/2 & 0 \\4 - 4(1) & -3 - 4(3/2) & | & 0-4(1/2) & 1-4(0) \\\end{pmatrix}=\begin{pmatrix}1 & 3/2 & | & 1/2 & 0 \\0 & -9 & | & -2 & 1 \\\end{pmatrix}\end{gather}> Now multiply $row_2$ by $ \frac{1}{ a_{2,2}} $. This is again elementary row operation 1 in our list above.\begin{gather}\mathbf{A} | \mathbf{I} =\begin{pmatrix}1 & 3/2 & | & 1/2 & 0 \\0 & -9/-9 & | & -2/-9 & 1/-9 \\\end{pmatrix}=\begin{pmatrix}1 & 3/2 & | & 1/2 & 0 \\0 & 1 & | & 2/9 & -1/9 \\\end{pmatrix}\end{gather}> For all rows above and below this current row, replace $row_j$ with $row_j - a_{j,2}*row_2$.This happens to again be elementary row operation 2 in our list above.What we are doing is systematically converting the left matrix into an identity matrix by multiplication of constants and addition to eliminate off-diagonal values and force 1 on the diagonal.\begin{gather}\mathbf{A} | \mathbf{I} = \\\begin{pmatrix}1 & 3/2 - (3/2)(1) & | & 1/2 - (3/2)(2/9) & 0-(3/2)(-1/9) \\0 & 1 & | & 2/9 & -1/9 \\\end{pmatrix}= \\\begin{pmatrix}1 & 0 & | & 1/6 & 1/6 \\0 & 1 & | & 2/9 & -1/9 \\\end{pmatrix}\end{gather}> As far as this example is concerned we are done and have found the inverse.With more than a 2X2 system there will be many operations moving up and down the matrix to eliminate the off-diagonal terms.So the next logical step is to build an algorithm to perform these operations for us.The code for inversion is a bit long, but is included as a monolithic block so we don't break things.The first part reads in the matrix from a file named "A.txt", and then builds some workspaces for the inversion process.One of the workspaces is a matrix called "bmatrix" which is an identity matrix and is also the augmented portion of the system depicted in the 2X2 example.
The actual inverse gets stored in a matrix named "xmatrix", which is really a column-by-column collection of solutions to a linear system where the right hand side is the different columns of the identity matrix.
###Code
# InvertASystem.py
# Code to read A and b
# Then solve Ax = b for x by Gaussian elimination with back substitution
#
print ("invert a matrix by Gaussian elimination - requires diagionally dominant system")
amatrix = [] # null list to store matrix reads
rowNumA = 0
colNumA = 0
afile = open("A.txt","r") # connect and read file for MATRIX A
for line in afile:
amatrix.append([float(n) for n in line.strip().split()])
rowNumA += 1
afile.close() # Disconnect the file
colNumA = len(amatrix[0])
bvector = [0 for i in range(rowNumA)] # will use as rhs in linear solver
cmatrix = [[0 for j in range(colNumA)]for i in range(rowNumA)]
dmatrix = [[0 for j in range(colNumA)]for i in range(rowNumA)]
bmatrix = [[0 for j in range(colNumA)]for i in range(rowNumA)]
xmatrix = [[0 for j in range(colNumA)]for i in range(rowNumA)]
xvector = [0 for i in range(rowNumA)]
for i in range(0,rowNumA,1):
bmatrix[i][i] = 1.0 #augmented partition
print (amatrix[i][0:colNumA], bmatrix[i][0:colNumA])
print ("-----------------------------")
dmatrix = [[amatrix[i][j] for j in range(colNumA)]for i in range(rowNumA)] # copy amatrix into dmatrix -- this is a static copy
# outer wrapper loop
for jcol in range(rowNumA):
xvector = [0 for i in range(rowNumA)] # empty column of the inverse
for i in range(rowNumA):
bvector[i]=bmatrix[i][jcol]
amatrix = [[dmatrix[i][j] for j in range(colNumA)]for i in range(rowNumA)]
cmatrix = [[dmatrix[i][j] for j in range(colNumA)]for i in range(rowNumA)]
for k in range(rowNumA-1): # build the diagonal -- assume diagonally dominant
l = k+1
for i in range(l,rowNumA):
for j in range(colNumA):
cmatrix[i][j]=amatrix[i][j]-amatrix[k][j]*amatrix[i][k]/amatrix[k][k]
bvector[i] = bvector[i]-bvector[k]*amatrix[i][k]/amatrix[k][k]
bmatrix[i][jcol] = bmatrix[i][jcol]-bmatrix[k][jcol]*amatrix[i][k]/amatrix[k][k]
for i in range(rowNumA):
for j in range(colNumA):
amatrix[i][j] = cmatrix[i][j]
# gaussian reduction done
# now for the back substitution
for k in range(rowNumA-1,-1,-1):
sum = 0.0
sum1 = 0.0
for i in range(rowNumA):
if i == k:
continue
else:
sum = sum + amatrix[k][i]*xvector[i]
sum1 = sum1 + amatrix[k][i]*xmatrix[i][jcol]
xvector[k]=(bvector[k]-sum)/amatrix[k][k]
xmatrix[k][jcol]=(bmatrix[k][jcol]-sum1)/amatrix[k][k]
# end of wrapper
print ("[ A-Matrix ]|[ A-Inverse ]")
print ("_____________________________________________________")
for i in range(0,rowNumA,1):
print (dmatrix[i][0:colNumA],"|", xmatrix[i][0:colNumA])
print ("_____________________________________________________")
ofile = open("A-Matrix.txt","w") # "w" clobbers content already there!
for i in range(0,rowNumA,1):
message = ' '.join(map(repr, dmatrix[i][0:colNumA])) + "\n"
ofile.write(message)
ofile.close()
ofile = open("A-Inverse.txt","w") # "w" clobbers content already there!
for i in range(0,rowNumA,1):
message = ' '.join(map(repr, xmatrix[i][0:colNumA])) + "\n"
ofile.write(message)
ofile.close()
###Output
invert a matrix by Gaussian elimination - requires diagonally dominant system
[4.0, 1.5, 0.7, 1.2, 0.5] [1.0, 0, 0, 0, 0]
[1.0, 6.0, 0.9, 1.4, 0.7] [0, 1.0, 0, 0, 0]
[0.5, 1.0, 3.9, 3.2, 0.9] [0, 0, 1.0, 0, 0]
[0.2, 2.0, 0.2, 7.5, 1.9] [0, 0, 0, 1.0, 0]
[1.7, 0.9, 1.2, 2.3, 4.9] [0, 0, 0, 0, 1.0]
-----------------------------
[ A-Matrix ]|[ A-Inverse ]
_____________________________________________________
[4.0, 1.5, 0.7, 1.2, 0.5] | [0.27196423630168165, -0.05581183146290884, -0.032853102922602934, -0.016869919448735553, -0.0072026931722172435]
[1.0, 6.0, 0.9, 1.4, 0.7] | [-0.036786468827077756, 0.18691841183385363, -0.032062455842026744, -0.011456196435011407, -0.012617687833839365]
[0.5, 1.0, 3.9, 3.2, 0.9] | [-0.025949127789423248, -0.0013334022990376664, 0.26826513178341493, -0.10875073215127727, -0.004266180002777282]
[0.2, 2.0, 0.2, 7.5, 1.9] | [0.027047195749338872, -0.05063248905238324, 0.01649816113355711, 0.1486518640705042, -0.05619749842697155]
[1.7, 0.9, 1.2, 2.3, 4.9] | [-0.0939389748254409, 0.009124153146082323, -0.05615458031041434, -0.03518550386250331, 0.23632125710787594]
_____________________________________________________
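###Markdown
As a final added check that reuses the triple-loop multiplication from earlier in the lesson, we can multiply the original matrix by the computed inverse and confirm the product is, up to floating-point round-off, the identity matrix:
###Code
# Multiply A (dmatrix) by A-inverse (xmatrix); the result should be (nearly) the identity
check = [[0 for j in range(colNumA)] for i in range(rowNumA)]
for i in range(rowNumA):
    for j in range(colNumA):
        for k in range(colNumA):
            check[i][j] = check[i][j] + dmatrix[i][k]*xmatrix[k][j]
for i in range(rowNumA):
    print([round(check[i][j], 10) for j in range(colNumA)])
###Output
_____no_output_____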
|
src/experiments.ipynb | ###Markdown
Dependencies, embeddings, transformers
###Code
! git clone https://github.com/josipjukic/Adversarial-NLP.git --quiet
% cd /content/Adversarial-NLP/src
% mkdir .vector_cache
% cp '/content/drive/My Drive/Master Thesis/glove/glove.6B.100d.txt.pt' .vector_cache/
! pip install transformers --quiet
###Output
_____no_output_____
###Markdown
Experiments
###Code
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext import data
from torchtext import datasets
import spacy
from data_utils import (load_data, set_seed_everywhere)
from training import run_experiment
from metrics import init_tqdms
SEED = 42
set_seed_everywhere(SEED)
LOAD_PATH = '/content/drive/My Drive/Master Thesis/AG'
MAX_VOCAB_SIZE = 25_000
EMBEDDINGS_FILE = 'glove.6B.100d'
splits, fields = load_data(LOAD_PATH,
MAX_VOCAB_SIZE=MAX_VOCAB_SIZE,
EMBEDDINGS_FILE=EMBEDDINGS_FILE,
float_label=False)
TEXT, LABEL, *_ = fields
from argparse import Namespace
from data_utils import expand_paths
from embeddings import get_embeddings
from models import PackedRNN
dataset_name = 'AG'
rnn_type = 'LSTM'
bidirectional = True
identifier = ('bi' if bidirectional else '') + rnn_type
args = Namespace(
# Data and Path hyper parameters
model_save_file=f'{identifier}.torch',
train_state_file=f'stats_{identifier}.json',
save_dir=f'/content/drive/My Drive/Master Thesis/torch_models/{dataset_name}',
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token],
UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token],
# Model hyper parameters
input_dim = len(TEXT.vocab),
embedding_dim=100,
hidden_dim=256,
output_dim = len(LABEL.vocab) if len(LABEL.vocab) > 2 else 1,
num_layers=2,
bidirectional=bidirectional,
rnn_type=rnn_type,
# Training hyper parameter
seed=SEED,
learning_rate=0.001,
dropout_p=0.5,
batch_size=64,
num_epochs=20,
early_stopping_criteria=5,
# Runtime option
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
)
expand_paths(args)
model = PackedRNN(
args.embedding_dim,
args.hidden_dim,
args.output_dim,
args.num_layers,
get_embeddings(TEXT, args),
args.bidirectional,
args.dropout_p,
args.PAD_IDX,
args.rnn_type,
args.device
)
model = model.to(args.device)
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
splits,
batch_size=args.batch_size,
sort_within_batch=True,
sort_key = lambda x: len(x.text),
device=args.device)
iterator = dict(train=train_iterator, valid=valid_iterator, test=test_iterator)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=args.learning_rate)
run_experiment(args, model, iterator, optimizer, criterion, init_tqdms(args, iterator))
###Output
_____no_output_____
###Markdown
The model training has been completed; the stats below describe the training process (Softmax model in red, EDL model in blue):It is evident that the EDL model training faced a 'hurdle' until ~4k steps into training (see the `train/accuracy` and `train/loss` graphs), after which accuracy reached the levels described in the paper. This might be related to the training warm-up phase implemented in the paper's model, which was not part of this implementation. Either way, the model is ready for evaluation and experiments, so let's get to it!In the analysis below we will experiment with the Softmax and EDL models, specifically:* test the behaviour of the models when the input image is rotated* test the EDL model's uncertainty given out-of-sample, unusual inputs Imports
###Code
%load_ext autoreload
%autoreload 2
from typing import Tuple
from pathlib import Path
from matplotlib import pyplot as plt
import numpy as np
from torch import Tensor
from torchvision.transforms.functional import rotate
from pytorch_lightning import LightningModule
from dataset.mnist_data_module import MNISTDataModule
from model.lenet_softmax import LeNetSoftmax
from model.lenet_edl import LeNetEDL
from utilities import image_utils
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Let's import the champion models for further analysis:
###Code
sm_checkpoint_path = '../output/LeNetEDL-relu-mse-epoch=18-validation/accuracy=0.983.ckpt'
sm_model = LeNetSoftmax.load_from_checkpoint(sm_checkpoint_path)
edl_checkpoint_path = '../output/LeNetEDL-relu-mse-epoch=18-validation/accuracy=0.983.ckpt'
edl_model = LeNetEDL.load_from_checkpoint(edl_checkpoint_path)
data_module = MNISTDataModule()
train_dataloader = data_module.train_dataloader()
data_batch = next(iter(train_dataloader))
def compare_model_results(
data_batch: Tuple[Tensor, Tensor],
num_img: int,
rotate_angle: float,
sm_model: LeNetSoftmax,
edl_model: LeNetEDL
) -> None:
"""
Compares model results for rotated images
"""
batch_tensor_image = data_batch[0][num_img:num_img+1]
label = data_batch[1][num_img:num_img+1]
print('='*60)
print('Expected label:', int(label))
print(f'Rotation angle: {rotate_angle}')
rotated_image = rotate(batch_tensor_image, rotate_angle)
show_model_results(rotated_image, sm_model, edl_model)
def show_model_results(image: Tensor, sm_model: LeNetSoftmax, edl_model: LeNetEDL) -> None:
"""
Shows comparison of model results on a given image tensor
"""
image_utils.show_tensor_image(image[0])
plt.show()
print('-'*60)
print('SOFTMAX:')
print_model_results(sm_model, image)
print('-'*60)
print('EDL:')
print_model_results(edl_model, image)
print()
def print_model_results(model: LightningModule, image: Tensor):
"""
Displays the model results
"""
predicted_classes, predicted_probabilities, uncertainty = model.predict(image)
predicted_class = int(predicted_classes[0])
predicted_probability = float(predicted_probabilities[0][predicted_class])
print(f'Predicted class: {predicted_class} (probability={predicted_probability:.1%})')
if uncertainty:
uncertainty_val = float(uncertainty[0][0])
print(f'Epistemic uncertainty: {uncertainty_val:.1%}')
###Output
_____no_output_____
###Markdown
Vulnerability to image rotation
###Code
for _ in range(5):
num_img = np.random.randint(0, 63)
rotate_angle = np.random.randint(0,359)
compare_model_results(data_batch, num_img, rotate_angle, sm_model, edl_model)
###Output
============================================================
Expected label: 7
Rotation angle: 6
###Markdown
The above model comparison shows exactly why understanding epistemic uncertainty is important: even for highly rotated images, the Softmax model makes predictions with near 100% probability. By measuring the uncertainty of the EDL model, we can get a sense of where the model is not sure of its predictions. Vulnerability to unusual out-of-sample images
###Code
out_of_sample_image_path = Path('../data/images/out_of_sample')
for image_path in out_of_sample_image_path.iterdir():
print('='*60)
image = image_utils.read_image(image_path)
plt.show()
image_tensor = image_utils.convert_to_model_input(image)
show_model_results(image_tensor, sm_model, edl_model)
###Output
============================================================
###Markdown
Experiments - Evolutionary Algorithm TSP
###Code
%load_ext autoreload
%autoreload 2
import numpy as np
from tsp_evolutionary_algorithm import TSPEvolutionaryAlgorithm
from Reporter import Reporter
# The dataset filenames
datasets = ['tour29.csv', 'tour100.csv', 'tour194.csv', 'tour929.csv']
benchmarks = [(27200, 30350.13), (7350, 8636.5), (9000, 11385.01), (95300, 113683.58)]
limits = [(26000, 31000), [7000, 9500], [8500, 12000], [94000, 120000]]
###Output
_____no_output_____
###Markdown
Sequence of Experiments 1. Typical Convergence Graph
###Code
dataset_idx = 0
file = open('datasets/' + datasets[dataset_idx])
distance_matrix = np.loadtxt(file, delimiter=",")
file.close()
reporter = Reporter(datasets[dataset_idx][:-4])
ga = TSPEvolutionaryAlgorithm(distance_matrix, lambda_=10, mu=10, k=4,
recombination_probability=0.9,
mutation_probability=0.9,
local_search_probability=1,
mutation_strength=1,
fitness_sharing_alpha=1,
fitness_sharing_sigma=len(distance_matrix)//5)
while not ga.converged(improvement_criterion=True, improvement_threshold=200):
ga.update()
# extract results of current generation
mean_objective = ga.mean_objective
best_objective = ga.best_objective
best_solution = ga.best_solution
time_left = reporter.report(mean_objective,
best_objective,
best_solution)
print(ga.state, round(time_left))
if time_left < 0:
break
print('Converged!')
from matplotlib import pyplot as plt
def plot_convergence_graph(ga: TSPEvolutionaryAlgorithm, optimal, greedy, limits):
best_fitnesses = ga.best_history
mean_fitnesses = ga.mean_history
fig = plt.figure('Convergence Graph', figsize=(6,6), dpi= 100, facecolor='w', edgecolor='k')
plt.axhline(y=optimal, color='r', linestyle='--')
plt.axhline(y=greedy, color='y', linestyle='--')
plt.plot(best_fitnesses, 'g', alpha=1, lw=1)
plt.plot(mean_fitnesses, 'b', alpha=0.5, lw=1)
plt.plot(0, best_fitnesses[0], 'gx')
plt.plot(0, mean_fitnesses[0], 'bx')
plt.xlabel('Generation')
plt.ylabel('Fitness')
plt.legend(['Approximate Optimal', 'Greedy Heuristic', 'Best Fitness', 'Mean Fitness'],
bbox_to_anchor=(1, 1), loc=1, borderaxespad=0)
plt.grid(True)
plt.ylim(limits)
plt.show()
plot_convergence_graph(ga,
optimal=benchmarks[dataset_idx][0],
greedy=benchmarks[dataset_idx][1],
limits=limits[dataset_idx])
###Output
_____no_output_____
###Markdown
2. Best Tour Length and Best Sequence of Cities
###Code
print(f'Best solution:\n\tfitness = {ga.best_solution.fitness}')
print(f'\troute = {ga.best_solution.route}')
np.argmin(ga.best_solution.route)  # position of city 0 in the best route (573 for this run)
route = ga.best_solution.route
print(route[573:] + route[:573])  # rotate the route so that it starts at city 0
###Output
[0, 18, 26, 51, 67, 65, 78, 131, 132, 102, 96, 88, 72, 59, 36, 32, 27, 12, 14, 9, 2, 3, 4, 10, 17, 13, 30, 29, 19, 28, 35, 33, 20, 38, 37, 46, 44, 50, 52, 53, 55, 54, 60, 63, 66, 70, 69, 74, 68, 73, 82, 86, 95, 98, 229, 251, 261, 288, 298, 299, 302, 294, 310, 316, 320, 334, 323, 301, 300, 290, 286, 289, 278, 266, 243, 248, 245, 249, 252, 255, 257, 267, 262, 276, 263, 258, 204, 153, 237, 203, 180, 165, 161, 147, 143, 130, 109, 105, 108, 104, 120, 124, 126, 133, 145, 156, 146, 154, 159, 174, 182, 185, 187, 199, 214, 227, 233, 222, 211, 208, 206, 193, 190, 186, 176, 173, 179, 172, 181, 184, 192, 205, 202, 194, 189, 196, 207, 215, 217, 221, 225, 234, 228, 216, 226, 235, 239, 240, 242, 246, 247, 241, 238, 232, 230, 224, 223, 210, 198, 213, 201, 188, 177, 175, 169, 167, 163, 160, 148, 141, 139, 137, 144, 151, 171, 168, 162, 157, 155, 140, 128, 138, 164, 166, 183, 195, 209, 197, 212, 218, 220, 231, 219, 191, 170, 150, 106, 112, 111, 121, 107, 116, 129, 113, 103, 114, 119, 123, 136, 134, 135, 142, 149, 158, 152, 125, 127, 122, 118, 117, 115, 110, 100, 91, 58, 61, 101, 97, 90, 94, 75, 42, 47, 56, 57, 49, 15, 34, 64, 71, 76, 79, 178, 287, 308, 307, 330, 332, 338, 347, 359, 362, 354, 358, 365, 383, 520, 533, 597, 519, 507, 549, 588, 626, 575, 621, 649, 631, 625, 548, 483, 425, 372, 335, 336, 322, 321, 291, 312, 275, 277, 280, 279, 295, 311, 319, 317, 328, 349, 374, 364, 369, 465, 494, 529, 542, 578, 612, 637, 647, 648, 754, 816, 744, 638, 759, 796, 825, 833, 855, 849, 856, 866, 869, 888, 891, 892, 889, 885, 874, 880, 879, 886, 882, 876, 862, 842, 837, 836, 831, 828, 824, 779, 745, 822, 821, 819, 829, 840, 850, 860, 867, 895, 902, 896, 900, 901, 905, 908, 914, 919, 928, 926, 923, 921, 924, 925, 927, 922, 920, 918, 913, 907, 894, 899, 916, 917, 915, 912, 910, 909, 911, 906, 904, 903, 898, 893, 890, 897, 887, 884, 863, 853, 843, 830, 804, 826, 834, 835, 854, 857, 861, 870, 881, 871, 873, 877, 878, 883, 872, 865, 864, 851, 846, 847, 859, 845, 841, 827, 832, 818, 811, 627, 628, 645, 723, 765, 783, 787, 813, 814, 812, 802, 803, 788, 792, 790, 807, 815, 805, 806, 808, 809, 797, 774, 768, 776, 770, 757, 758, 752, 748, 756, 762, 767, 760, 771, 784, 775, 786, 791, 798, 801, 799, 800, 785, 782, 789, 780, 769, 750, 749, 739, 731, 727, 712, 703, 698, 696, 688, 689, 701, 711, 718, 724, 734, 737, 728, 715, 699, 694, 684, 697, 713, 740, 753, 766, 747, 742, 702, 683, 658, 642, 668, 677, 670, 666, 674, 657, 656, 659, 669, 673, 676, 681, 693, 680, 678, 671, 652, 651, 654, 667, 655, 644, 639, 643, 650, 653, 660, 665, 695, 709, 710, 720, 733, 735, 746, 751, 755, 764, 773, 778, 781, 772, 761, 763, 716, 706, 708, 719, 736, 741, 738, 730, 717, 705, 714, 722, 732, 721, 704, 707, 700, 692, 690, 691, 682, 675, 664, 679, 672, 641, 640, 610, 528, 609, 595, 586, 580, 572, 579, 590, 603, 608, 607, 598, 589, 585, 577, 569, 568, 556, 547, 540, 557, 541, 532, 527, 518, 506, 499, 498, 480, 471, 463, 455, 448, 441, 454, 462, 492, 505, 526, 531, 536, 546, 555, 571, 576, 584, 593, 594, 602, 606, 611, 619, 620, 615, 633, 635, 634, 663, 686, 794, 777, 726, 662, 685, 743, 661, 810, 793, 725, 624, 623, 614, 622, 618, 617, 599, 581, 582, 604, 600, 605, 601, 583, 567, 566, 561, 554, 560, 553, 552, 539, 530, 524, 523, 510, 509, 522, 515, 521, 534, 543, 537, 544, 538, 535, 545, 551, 565, 564, 563, 559, 550, 508, 438, 443, 433, 426, 439, 434, 445, 444, 451, 466, 473, 467, 474, 484, 495, 496, 511, 516, 512, 502, 486, 476, 485, 475, 456, 452, 446, 457, 468, 447, 453, 477, 469, 487, 513, 488, 478, 458, 479, 490, 489, 497, 517, 525, 503, 504, 491, 
461, 470, 460, 459, 440, 435, 428, 427, 418, 417, 406, 400, 401, 413, 402, 407, 420, 421, 429, 422, 430, 431, 423, 416, 408, 409, 410, 419, 399, 395, 389, 387, 376, 378, 388, 386, 382, 394, 396, 398, 405, 404, 415, 414, 403, 393, 392, 391, 384, 385, 381, 377, 355, 357, 348, 345, 346, 343, 333, 352, 375, 341, 342, 339, 344, 340, 331, 337, 363, 373, 370, 366, 367, 379, 432, 449, 450, 493, 500, 573, 587, 596, 562, 464, 397, 360, 368, 371, 380, 390, 411, 436, 442, 481, 482, 472, 501, 570, 613, 574, 592, 591, 629, 795, 729, 838, 817, 823, 844, 858, 868, 875, 852, 848, 839, 820, 646, 630, 636, 687, 616, 632, 558, 514, 437, 424, 412, 356, 361, 353, 351, 350, 329, 325, 315, 309, 303, 304, 297, 306, 293, 273, 272, 256, 265, 264, 260, 269, 271, 281, 285, 270, 314, 318, 284, 274, 313, 292, 283, 259, 253, 250, 268, 282, 327, 326, 324, 244, 305, 296, 254, 236, 92, 93, 87, 83, 84, 85, 81, 45, 48, 80, 77, 89, 200, 99, 62, 39, 41, 40, 21, 7, 1, 5, 43, 31, 24, 23, 25, 22, 11, 16, 8, 6]
###Markdown
3. Results Interpretation 4. Histogram of Best and Mean Fitnesses for 1000 iterations (only for tour29)
###Code
dataset_idx = 0
file = open('datasets/' + datasets[dataset_idx])
distance_matrix = np.loadtxt(file, delimiter=",")
file.close()
best_fitnesses = []
mean_fitnesses = []
while len(best_fitnesses) < 1000:
    reporter = Reporter(datasets[dataset_idx][:-4] + f'_{len(best_fitnesses)}')  # label each run by its index
ga = TSPEvolutionaryAlgorithm(distance_matrix, lambda_=10, mu=5, k=3,
recombination_probability=0.9,
mutation_probability=0.1,
local_search_probability=1,#0.3,
mutation_strength=3,
fitness_sharing_alpha=1,
fitness_sharing_sigma=len(distance_matrix)//10)
while not ga.converged(max_iterations=50):
ga.update()
# extract results of current generation
mean_objective = ga.mean_objective
best_objective = ga.best_objective
best_solution = ga.best_solution
# time_left = reporter.report(mean_objective,
# best_objective,
# best_solution)
best_fitnesses.append(best_objective)
mean_fitnesses.append(mean_objective)
print(f'#{len(best_fitnesses)} converged!')
print(best_solution.route)
from matplotlib import pyplot as plt
def plot_histograms(best_fitnesses, mean_fitnesses):
fig = plt.figure('Histograms of 1000 runs', figsize=(12,4), dpi= 100, facecolor='w', edgecolor='k')
fig.add_subplot(1, 2, 1)
fig.add_subplot(1, 2, 2)
ax = fig.axes
labels = ['Best Fitnesses', 'Mean Fitnesses']
for idx in range(2):
ax[idx].set_xlabel(labels[idx])
ax[idx].set_ylabel("Count")
ax[idx].set_title(f'Histogram of {labels[idx]} for 1000 runs')
ax[idx].grid(True)
ax[idx].hist([best_fitnesses, mean_fitnesses][idx], bins=10)
plt.show()
""" Plot the histograms """
plot_histograms(best_fitnesses, mean_fitnesses)
best_mean = np.mean(best_fitnesses)
best_std = np.std(best_fitnesses)
mean_mean = np.mean(mean_fitnesses)
mean_std = np.std(mean_fitnesses)
print(f'Best fitness mean={best_mean}, std={best_std}')
print(f'Mean fitness mean={mean_mean}, std={mean_std}')
###Output
_____no_output_____
###Markdown
Leveraging World Events to Predict E-Commerce Consumer Demand under Anomaly
###Code
import sys
sys.path.append('.')
sys.path.append('../')
import os
import os.path as path
import datetime
import pandas as pd
import random
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm
import darts
from darts import TimeSeries
import cufflinks as cf
cf.go_offline()
from plotly.offline import plot, download_plotlyjs, init_notebook_mode, plot, iplot
from IPython.display import display, Math, Markdown
from IPython.display import display, Markdown, clear_output
import ipywidgets as widgets
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Import Functions
###Code
import config as proj_config
cache_path = proj_config.CACHE_DIR
data_path = proj_config.DATA_DIR
events_data_path = proj_config.EVENTS_DATASET_DIR
categories_path = cache_path + '/categories_events/'
from demand_prediction.general_functions import get_file_path, get_df_table, load_table_cache, save_table_cache, get_pred_dates
from demand_prediction.dataset_functions import split_data, create_events_df
from demand_prediction.ts_models import train_models, test_models, save_model, load_model
from demand_prediction.events_models import load_events_model, save_events_model, calc_events_ts
from demand_prediction.neural_prophet_model import NeuralProphetEvents, reformat_events_name, get_events_for_neural_prophet, get_neural_prophet_results
from demand_prediction.lstm_models import get_lstm_results
from demand_prediction.tcn_models import get_tcn_results
from demand_prediction.results_functions import get_all_k_metrics
###Output
Global seed set to 0
###Markdown
Datasets Events
###Code
world_events = get_df_table("events/world_events_dataset_from_1980")
world_events.head()
###Output
Total data size: 16766
###Markdown
Ecommerce Use the following random time series as an example or provide your own time series. Please make sure that the time series is a DataFrame that contains one column with the product sales, and that the index holds the dates.
###Code
dates_example = pd.date_range("2018-06-01", "2020-12-31",freq='d')
values_example = np.random.randint(100,2000,size=(len(dates_example)))
categ_data = pd.DataFrame({'date': dates_example, 'Quantity': values_example})
categ_data.index = categ_data['date']
categ_data = categ_data.drop(columns=['date'])
###Output
_____no_output_____
###Markdown
Time Series
###Code
leaf_name = 'Football Cards'
categ_data.iplot(title=leaf_name, xTitle='Date', yTitle='Sales', theme='white', colors=['steelblue'])
###Output
_____no_output_____
###Markdown
Events Dataset
###Code
data = create_events_df(categ_data, world_events, emb_only=True)
events_dates = list(set(data['date']))
###Output
_____no_output_____
###Markdown
Hyper-Parameters
###Code
ts_cache = True
neural_cache = True
lstm_cache = True
lstm_df_cache = True
tcn_cache = True
tcn_df_cache = True
results_cache = True
n_in = 365
window_size = 2
prediction_time = 30
device = 'cpu' # use 'cuda:2' if you have GPUs
total_pred = pd.DataFrame()
start_pred_list = get_pred_dates('2020-01-01', '2021-01-01')
for start_pred_time in tqdm(start_pred_list):
pred_path = cache_path + "/saved_results/final_results_" + leaf_name + "_" + str(start_pred_time) + "_predictions"
if pred_path and os.path.isfile(pred_path):
total_pred = pd.read_pickle(pred_path)
else:
X_train, X_test = split_data(data, start_pred_time)
time_series = TimeSeries.from_dataframe(categ_data, value_cols='Quantity')
train, test_ts = time_series.split_before(pd.Timestamp(start_pred_time))
test = test_ts[:prediction_time]
events_all = pd.concat([X_train, X_test])
train_df, test_df = train.pd_dataframe(), test.pd_dataframe()
train_dates, test_dates = train_df.index.values, test_df.index.values
res_prediction = test_models(test, test_name=leaf_name, start_pred_time=start_pred_time, train=train, use_cache=ts_cache)
lstm_predictions = get_lstm_results(train, test, train_df, test_df, events_all, start_pred_time, leaf_name, n_in, window_size, categ_data, device, lstm_df_cache, lstm_cache)
tcn_predictions = get_tcn_results(train, test, train_df, test_df, events_all, start_pred_time, leaf_name, n_in, window_size, categ_data, device, tcn_df_cache, tcn_cache)
neural_predictions = get_neural_prophet_results(train, test, events_all, leaf_name, events_dates, start_pred_time, neural_cache)
total_pred = pd.concat([total_pred, pd.concat([res_prediction, lstm_predictions, tcn_predictions, neural_predictions], axis=1)])
os.makedirs(os.path.dirname(pred_path), exist_ok=True)
total_pred.to_pickle(pred_path)
###Output
100%|██████████| 1/1 [00:00<00:00, 193.05it/s]
###Markdown
Prediction Plot
###Code
total_pred = total_pred[total_pred.index >= start_pred_list[0]]
pred_df = total_pred[['Real Quantity', 'LSTM', 'GAN - Event LSTM', 'Event LSTM', 'Weighted Event LSTM', 'ARIMA', 'Prophet', 'NeuralProphet', 'GAN - Event CNN']]
pred_df.iplot(title = leaf_name + " - All Models", xTitle='Date', yTitle='Sales', theme='white')
###Output
_____no_output_____
###Markdown
Metrics@K
###Code
get_all_k_metrics(total_pred)
###Output
_____no_output_____
###Markdown
Deep Co-segmentation Experiments Imports
###Code
from dataset import iCosegDataset, PASCALVOCCosegDataset, MSRCDataset, InternetDataset
from model import SiameseSegNet
import numpy as np
import os
import time
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
import torchvision
from tqdm import tqdm_notebook
###Output
_____no_output_____
###Markdown
Constants
###Code
## Debug
DEBUG = False
## Dataset
BATCH_SIZE = 2 * 1 # two images at a time for Siamese net
INPUT_CHANNELS = 3 # RGB
OUTPUT_CHANNELS = 2 # BG + FG channel
## Inference
CUDA = "0"
## Output Dir
OUTPUT_DIR = "./experiments"
os.system(f"rm -r {OUTPUT_DIR}")
os.makedirs(OUTPUT_DIR, exist_ok=True)
###Output
_____no_output_____
###Markdown
Metrics
###Code
def metrics(pmapA, pmapB, masksA, masksB):
    """Accumulate intersection/union pixel counts (for IoU) and correct-pixel counts (for pixel accuracy) over the pairs in a batch."""
    intersection_a, intersection_b, union_a, union_b, precision_a, precision_b = 0, 0, 0, 0, 0, 0
for idx in range(BATCH_SIZE//2):
pred_maskA = torch.argmax(pmapA[idx], dim=0).cpu().numpy()
pred_maskB = torch.argmax(pmapB[idx], dim=0).cpu().numpy()
masksA_cpu = masksA[idx].cpu().numpy()
masksB_cpu = masksB[idx].cpu().numpy()
intersection_a += np.sum(pred_maskA & masksA_cpu)
intersection_b += np.sum(pred_maskB & masksB_cpu)
union_a += np.sum(pred_maskA | masksA_cpu)
union_b += np.sum(pred_maskB | masksB_cpu)
precision_a += np.sum(pred_maskA == masksA_cpu)
precision_b += np.sum(pred_maskB == masksB_cpu)
return intersection_a, intersection_b, union_a, union_b, precision_a, precision_b
###Output
_____no_output_____
###Markdown
Experiments Load Deep Object Co-segmentation model trained on Pascal VOC
###Code
LOAD_CHECKPOINT = "/home/SharedData/intern_sayan/PASCAL_coseg/"
model = SiameseSegNet(input_channels=INPUT_CHANNELS,
output_channels=OUTPUT_CHANNELS,
gpu=CUDA)
if DEBUG:
print(model)
FloatTensor = torch.FloatTensor
LongTensor = torch.LongTensor
if CUDA is not None:
os.environ["CUDA_VISIBLE_DEVICES"] = CUDA
model = model.cuda()
FloatTensor = torch.cuda.FloatTensor
LongTensor = torch.cuda.LongTensor
if LOAD_CHECKPOINT:
model.load_state_dict(torch.load(os.path.join(LOAD_CHECKPOINT, "coseg_model_best.pth")))
###Output
_____no_output_____
###Markdown
iCoseg
###Code
root_dir = "/home/SharedData/intern_sayan/iCoseg/"
image_dir = os.path.join(root_dir, "images")
mask_dir = os.path.join(root_dir, "ground_truth")
dataset = iCosegDataset(image_dir=image_dir,
mask_dir=mask_dir)
dataloader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=False, num_workers=4, drop_last=True)
###Output
_____no_output_____
###Markdown
VOC + iCoseg [Car] iCoseg class indices = {5}
###Code
def infer():
model.eval()
intersection, union, precision = 0, 0, 0
t_start = time.time()
for batch_idx, batch in tqdm_notebook(enumerate(dataloader)):
images = batch["image"].type(FloatTensor)
labels = batch["label"].type(LongTensor)
masks = batch["mask"].type(FloatTensor)
if torch.equal(batch["label"], torch.from_numpy(np.array([5,5]))):
# pdb.set_trace()
pairwise_images = [(images[2*idx], images[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_labels = [(labels[2*idx], labels[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_masks = [(masks[2*idx], masks[2*idx+1]) for idx in range(BATCH_SIZE//2)]
# pdb.set_trace()
imagesA, imagesB = zip(*pairwise_images)
labelsA, labelsB = zip(*pairwise_labels)
masksA, masksB = zip(*pairwise_masks)
# pdb.set_trace()
imagesA, imagesB = torch.stack(imagesA), torch.stack(imagesB)
labelsA, labelsB = torch.stack(labelsA), torch.stack(labelsB)
masksA, masksB = torch.stack(masksA).long(), torch.stack(masksB).long()
# pdb.set_trace()
eq_labels = []
for idx in range(BATCH_SIZE//2):
if torch.equal(labelsA[idx], labelsB[idx]):
eq_labels.append(torch.ones(1).type(LongTensor))
else:
eq_labels.append(torch.zeros(1).type(LongTensor))
eq_labels = torch.stack(eq_labels)
# pdb.set_trace()
masksA = masksA * eq_labels.unsqueeze(1)
masksB = masksB * eq_labels.unsqueeze(1)
imagesA_v = torch.autograd.Variable(FloatTensor(imagesA))
imagesB_v = torch.autograd.Variable(FloatTensor(imagesB))
pmapA, pmapB, similarity = model(imagesA_v, imagesB_v)
# pdb.set_trace()
res_images, res_masks, gt_masks = [], [], []
for idx in range(BATCH_SIZE//2):
res_images.append(imagesA[idx])
res_images.append(imagesB[idx])
res_masks.append(torch.argmax((pmapA * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
res_masks.append(torch.argmax((pmapB * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
gt_masks.append(masksA[idx].reshape(1, 512, 512))
gt_masks.append(masksB[idx].reshape(1, 512, 512))
# pdb.set_trace()
images_T = torch.stack(res_images)
masks_T = torch.stack(res_masks)
gt_masks_T = torch.stack(gt_masks)
# metrics - IoU & precision
intersection_a, intersection_b, union_a, union_b, precision_a, precision_b = metrics(pmapA,
pmapB,
masksA,
masksB)
intersection += intersection_a + intersection_b
union += union_a + union_b
precision += (precision_a / (512 * 512)) + (precision_b / (512 * 512))
# pdb.set_trace()
torchvision.utils.save_image(images_T,
os.path.join(OUTPUT_DIR,f"car_iCoseg_{batch_idx}_images.png"),
nrow=2)
torchvision.utils.save_image(masks_T,
os.path.join(OUTPUT_DIR, f"car_iCoseg_{batch_idx}_masks.png"),
nrow=2)
torchvision.utils.save_image(gt_masks_T,
os.path.join(OUTPUT_DIR, f"car_iCoseg_{batch_idx}_gt_masks.png"),
nrow=2)
delta = time.time() - t_start
print(f"\nTime elapsed: [{delta} secs]\nPrecision : [{precision/(len(dataloader) * BATCH_SIZE)}]\nIoU : [{intersection/union}]")
infer()
###Output
_____no_output_____
###Markdown
VOC + iCoseg [People] iCoseg class indices = {1,4,26,27,28}
###Code
def infer():
model.eval()
intersection, union, precision = 0, 0, 0
t_start = time.time()
for batch_idx, batch in tqdm_notebook(enumerate(dataloader)):
images = batch["image"].type(FloatTensor)
labels = batch["label"].type(LongTensor)
masks = batch["mask"].type(FloatTensor)
if torch.equal(batch["label"], torch.from_numpy(np.array([1,1]))) or \
torch.equal(batch["label"], torch.from_numpy(np.array([4,4]))) or \
torch.equal(batch["label"], torch.from_numpy(np.array([26,26]))) or \
torch.equal(batch["label"], torch.from_numpy(np.array([26,27]))) or \
torch.equal(batch["label"], torch.from_numpy(np.array([27,27]))) or \
torch.equal(batch["label"], torch.from_numpy(np.array([27,28]))) or \
torch.equal(batch["label"], torch.from_numpy(np.array([28,28]))):
# pdb.set_trace()
pairwise_images = [(images[2*idx], images[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_labels = [(labels[2*idx], labels[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_masks = [(masks[2*idx], masks[2*idx+1]) for idx in range(BATCH_SIZE//2)]
# pdb.set_trace()
imagesA, imagesB = zip(*pairwise_images)
labelsA, labelsB = zip(*pairwise_labels)
masksA, masksB = zip(*pairwise_masks)
# pdb.set_trace()
imagesA, imagesB = torch.stack(imagesA), torch.stack(imagesB)
labelsA, labelsB = torch.stack(labelsA), torch.stack(labelsB)
masksA, masksB = torch.stack(masksA).long(), torch.stack(masksB).long()
# pdb.set_trace()
eq_labels = []
for idx in range(BATCH_SIZE//2):
if torch.equal(labelsA[idx], labelsB[idx]):
eq_labels.append(torch.ones(1).type(LongTensor))
else:
eq_labels.append(torch.zeros(1).type(LongTensor))
eq_labels = torch.stack(eq_labels)
# pdb.set_trace()
masksA = masksA * eq_labels.unsqueeze(1)
masksB = masksB * eq_labels.unsqueeze(1)
imagesA_v = torch.autograd.Variable(FloatTensor(imagesA))
imagesB_v = torch.autograd.Variable(FloatTensor(imagesB))
pmapA, pmapB, similarity = model(imagesA_v, imagesB_v)
# pdb.set_trace()
res_images, res_masks, gt_masks = [], [], []
for idx in range(BATCH_SIZE//2):
res_images.append(imagesA[idx])
res_images.append(imagesB[idx])
res_masks.append(torch.argmax((pmapA * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
res_masks.append(torch.argmax((pmapB * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
gt_masks.append(masksA[idx].reshape(1, 512, 512))
gt_masks.append(masksB[idx].reshape(1, 512, 512))
# pdb.set_trace()
images_T = torch.stack(res_images)
masks_T = torch.stack(res_masks)
gt_masks_T = torch.stack(gt_masks)
# metrics - IoU & precision
intersection_a, intersection_b, union_a, union_b, precision_a, precision_b = metrics(pmapA,
pmapB,
masksA,
masksB)
intersection += intersection_a + intersection_b
union += union_a + union_b
precision += (precision_a / (512 * 512)) + (precision_b / (512 * 512))
# pdb.set_trace()
torchvision.utils.save_image(images_T,
os.path.join(OUTPUT_DIR,f"people_iCoseg_{batch_idx}_images.png"),
nrow=2)
torchvision.utils.save_image(masks_T,
os.path.join(OUTPUT_DIR, f"people_iCoseg_{batch_idx}_masks.png"),
nrow=2)
torchvision.utils.save_image(gt_masks_T,
os.path.join(OUTPUT_DIR, f"people_iCoseg_{batch_idx}_gt_masks.png"),
nrow=2)
delta = time.time() - t_start
print(f"\nTime elapsed: [{delta} secs]\nPrecision : [{precision/(len(dataloader) * BATCH_SIZE)}]\nIoU : [{intersection/union}]")
infer()
###Output
_____no_output_____
###Markdown
VOC + iCoseg [Goose] iCoseg class indices = {10}
###Code
def infer():
model.eval()
intersection, union, precision = 0, 0, 0
t_start = time.time()
for batch_idx, batch in tqdm_notebook(enumerate(dataloader)):
images = batch["image"].type(FloatTensor)
labels = batch["label"].type(LongTensor)
masks = batch["mask"].type(FloatTensor)
if torch.equal(batch["label"], torch.from_numpy(np.array([10,10]))):
# pdb.set_trace()
pairwise_images = [(images[2*idx], images[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_labels = [(labels[2*idx], labels[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_masks = [(masks[2*idx], masks[2*idx+1]) for idx in range(BATCH_SIZE//2)]
# pdb.set_trace()
imagesA, imagesB = zip(*pairwise_images)
labelsA, labelsB = zip(*pairwise_labels)
masksA, masksB = zip(*pairwise_masks)
# pdb.set_trace()
imagesA, imagesB = torch.stack(imagesA), torch.stack(imagesB)
labelsA, labelsB = torch.stack(labelsA), torch.stack(labelsB)
masksA, masksB = torch.stack(masksA).long(), torch.stack(masksB).long()
# pdb.set_trace()
eq_labels = []
for idx in range(BATCH_SIZE//2):
if torch.equal(labelsA[idx], labelsB[idx]):
eq_labels.append(torch.ones(1).type(LongTensor))
else:
eq_labels.append(torch.zeros(1).type(LongTensor))
eq_labels = torch.stack(eq_labels)
# pdb.set_trace()
masksA = masksA * eq_labels.unsqueeze(1)
masksB = masksB * eq_labels.unsqueeze(1)
imagesA_v = torch.autograd.Variable(FloatTensor(imagesA))
imagesB_v = torch.autograd.Variable(FloatTensor(imagesB))
pmapA, pmapB, similarity = model(imagesA_v, imagesB_v)
# pdb.set_trace()
res_images, res_masks, gt_masks = [], [], []
for idx in range(BATCH_SIZE//2):
res_images.append(imagesA[idx])
res_images.append(imagesB[idx])
res_masks.append(torch.argmax((pmapA * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
res_masks.append(torch.argmax((pmapB * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
gt_masks.append(masksA[idx].reshape(1, 512, 512))
gt_masks.append(masksB[idx].reshape(1, 512, 512))
# pdb.set_trace()
images_T = torch.stack(res_images)
masks_T = torch.stack(res_masks)
gt_masks_T = torch.stack(gt_masks)
# metrics - IoU & precision
intersection_a, intersection_b, union_a, union_b, precision_a, precision_b = metrics(pmapA,
pmapB,
masksA,
masksB)
intersection += intersection_a + intersection_b
union += union_a + union_b
precision += (precision_a / (512 * 512)) + (precision_b / (512 * 512))
# pdb.set_trace()
torchvision.utils.save_image(images_T,
os.path.join(OUTPUT_DIR,f"goose_iCoseg_{batch_idx}_images.png"),
nrow=2)
torchvision.utils.save_image(masks_T,
os.path.join(OUTPUT_DIR, f"goose_iCoseg_{batch_idx}_masks.png"),
nrow=2)
torchvision.utils.save_image(gt_masks_T,
os.path.join(OUTPUT_DIR, f"goose_iCoseg_{batch_idx}_gt_masks.png"),
nrow=2)
delta = time.time() - t_start
print(f"\nTime elapsed: [{delta} secs]\nPrecision : [{precision/(len(dataloader) * BATCH_SIZE)}]\nIoU : [{intersection/union}]")
infer()
###Output
_____no_output_____
###Markdown
VOC + iCoseg [Airplane] iCoseg class indices = {12,13,14}
###Code
def infer():
model.eval()
intersection, union, precision = 0, 0, 0
t_start = time.time()
for batch_idx, batch in tqdm_notebook(enumerate(dataloader)):
images = batch["image"].type(FloatTensor)
labels = batch["label"].type(LongTensor)
masks = batch["mask"].type(FloatTensor)
if torch.equal(batch["label"], torch.from_numpy(np.array([12,12]))) or \
torch.equal(batch["label"], torch.from_numpy(np.array([12,13]))) or \
torch.equal(batch["label"], torch.from_numpy(np.array([13,13]))) or \
torch.equal(batch["label"], torch.from_numpy(np.array([13,14]))) or \
torch.equal(batch["label"], torch.from_numpy(np.array([14,14]))):
# pdb.set_trace()
pairwise_images = [(images[2*idx], images[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_labels = [(labels[2*idx], labels[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_masks = [(masks[2*idx], masks[2*idx+1]) for idx in range(BATCH_SIZE//2)]
# pdb.set_trace()
imagesA, imagesB = zip(*pairwise_images)
labelsA, labelsB = zip(*pairwise_labels)
masksA, masksB = zip(*pairwise_masks)
# pdb.set_trace()
imagesA, imagesB = torch.stack(imagesA), torch.stack(imagesB)
labelsA, labelsB = torch.stack(labelsA), torch.stack(labelsB)
masksA, masksB = torch.stack(masksA).long(), torch.stack(masksB).long()
# pdb.set_trace()
eq_labels = []
for idx in range(BATCH_SIZE//2):
if torch.equal(labelsA[idx], labelsB[idx]):
eq_labels.append(torch.ones(1).type(LongTensor))
else:
eq_labels.append(torch.zeros(1).type(LongTensor))
eq_labels = torch.stack(eq_labels)
# pdb.set_trace()
masksA = masksA * eq_labels.unsqueeze(1)
masksB = masksB * eq_labels.unsqueeze(1)
imagesA_v = torch.autograd.Variable(FloatTensor(imagesA))
imagesB_v = torch.autograd.Variable(FloatTensor(imagesB))
pmapA, pmapB, similarity = model(imagesA_v, imagesB_v)
# pdb.set_trace()
res_images, res_masks, gt_masks = [], [], []
for idx in range(BATCH_SIZE//2):
res_images.append(imagesA[idx])
res_images.append(imagesB[idx])
res_masks.append(torch.argmax((pmapA * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
res_masks.append(torch.argmax((pmapB * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
gt_masks.append(masksA[idx].reshape(1, 512, 512))
gt_masks.append(masksB[idx].reshape(1, 512, 512))
# pdb.set_trace()
images_T = torch.stack(res_images)
masks_T = torch.stack(res_masks)
gt_masks_T = torch.stack(gt_masks)
# metrics - IoU & precision
intersection_a, intersection_b, union_a, union_b, precision_a, precision_b = metrics(pmapA,
pmapB,
masksA,
masksB)
intersection += intersection_a + intersection_b
union += union_a + union_b
precision += (precision_a / (512 * 512)) + (precision_b / (512 * 512))
# pdb.set_trace()
torchvision.utils.save_image(images_T,
os.path.join(OUTPUT_DIR,f"airplane_iCoseg_{batch_idx}_images.png"),
nrow=2)
torchvision.utils.save_image(masks_T,
os.path.join(OUTPUT_DIR, f"airplane_iCoseg_{batch_idx}_masks.png"),
nrow=2)
torchvision.utils.save_image(gt_masks_T,
os.path.join(OUTPUT_DIR, f"airplane_iCoseg_{batch_idx}_gt_masks.png"),
nrow=2)
delta = time.time() - t_start
print(f"\nTime elapsed: [{delta} secs]\nPrecision : [{precision/(len(dataloader) * BATCH_SIZE)}]\nIoU : [{intersection/union}]")
infer()
###Output
_____no_output_____
###Markdown
Clean-up
###Code
del dataset
del dataloader
###Output
_____no_output_____
###Markdown
MSRC Dataloader
###Code
root_dir = "/home/SharedData/intern_sayan/MSRC_processed/"
image_dir = os.path.join(root_dir, "images")
mask_dir = os.path.join(root_dir, "GT")
dataset = MSRCDataset(image_dir=image_dir,
mask_dir=mask_dir)
dataloader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=False, num_workers=4, drop_last=True)
###Output
_____no_output_____
###Markdown
VOC + MSRC [Car] MSRC class indices = {2}
###Code
def infer():
model.eval()
intersection, union, precision = 0, 0, 0
t_start = time.time()
for batch_idx, batch in tqdm_notebook(enumerate(dataloader)):
images = batch["image"].type(FloatTensor)
labels = batch["label"].type(LongTensor)
masks = batch["mask"].type(FloatTensor)
if torch.equal(batch["label"], torch.from_numpy(np.array([2,2]))):
# pdb.set_trace()
pairwise_images = [(images[2*idx], images[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_labels = [(labels[2*idx], labels[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_masks = [(masks[2*idx], masks[2*idx+1]) for idx in range(BATCH_SIZE//2)]
# pdb.set_trace()
imagesA, imagesB = zip(*pairwise_images)
labelsA, labelsB = zip(*pairwise_labels)
masksA, masksB = zip(*pairwise_masks)
# pdb.set_trace()
imagesA, imagesB = torch.stack(imagesA), torch.stack(imagesB)
labelsA, labelsB = torch.stack(labelsA), torch.stack(labelsB)
masksA, masksB = torch.stack(masksA).long(), torch.stack(masksB).long()
# pdb.set_trace()
eq_labels = []
for idx in range(BATCH_SIZE//2):
if torch.equal(labelsA[idx], labelsB[idx]):
eq_labels.append(torch.ones(1).type(LongTensor))
else:
eq_labels.append(torch.zeros(1).type(LongTensor))
eq_labels = torch.stack(eq_labels)
# pdb.set_trace()
masksA = masksA * eq_labels.unsqueeze(1)
masksB = masksB * eq_labels.unsqueeze(1)
imagesA_v = torch.autograd.Variable(FloatTensor(imagesA))
imagesB_v = torch.autograd.Variable(FloatTensor(imagesB))
pmapA, pmapB, similarity = model(imagesA_v, imagesB_v)
# pdb.set_trace()
res_images, res_masks, gt_masks = [], [], []
for idx in range(BATCH_SIZE//2):
res_images.append(imagesA[idx])
res_images.append(imagesB[idx])
res_masks.append(torch.argmax((pmapA * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
res_masks.append(torch.argmax((pmapB * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
gt_masks.append(masksA[idx].reshape(1, 512, 512))
gt_masks.append(masksB[idx].reshape(1, 512, 512))
# pdb.set_trace()
images_T = torch.stack(res_images)
masks_T = torch.stack(res_masks)
gt_masks_T = torch.stack(gt_masks)
# metrics - IoU & precision
intersection_a, intersection_b, union_a, union_b, precision_a, precision_b = metrics(pmapA,
pmapB,
masksA,
masksB)
intersection += intersection_a + intersection_b
union += union_a + union_b
precision += (precision_a / (512 * 512)) + (precision_b / (512 * 512))
# pdb.set_trace()
torchvision.utils.save_image(images_T,
os.path.join(OUTPUT_DIR,f"car_MSRC_{batch_idx}_images.png"),
nrow=2)
torchvision.utils.save_image(masks_T,
os.path.join(OUTPUT_DIR, f"car_MSRC_{batch_idx}_masks.png"),
nrow=2)
torchvision.utils.save_image(gt_masks_T,
os.path.join(OUTPUT_DIR, f"car_MSRC_{batch_idx}_gt_masks.png"),
nrow=2)
delta = time.time() - t_start
print(f"\nTime elapsed: [{delta} secs]\nPrecision : [{precision/(len(dataloader) * BATCH_SIZE)}]\nIoU : [{intersection/union}]")
infer()
###Output
_____no_output_____
###Markdown
VOC + MSRC [Airplane] MSRC class indices = {10}
###Code
def infer():
model.eval()
intersection, union, precision = 0, 0, 0
t_start = time.time()
for batch_idx, batch in tqdm_notebook(enumerate(dataloader)):
images = batch["image"].type(FloatTensor)
labels = batch["label"].type(LongTensor)
masks = batch["mask"].type(FloatTensor)
if torch.equal(batch["label"], torch.from_numpy(np.array([10,10]))):
# pdb.set_trace()
pairwise_images = [(images[2*idx], images[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_labels = [(labels[2*idx], labels[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_masks = [(masks[2*idx], masks[2*idx+1]) for idx in range(BATCH_SIZE//2)]
# pdb.set_trace()
imagesA, imagesB = zip(*pairwise_images)
labelsA, labelsB = zip(*pairwise_labels)
masksA, masksB = zip(*pairwise_masks)
# pdb.set_trace()
imagesA, imagesB = torch.stack(imagesA), torch.stack(imagesB)
labelsA, labelsB = torch.stack(labelsA), torch.stack(labelsB)
masksA, masksB = torch.stack(masksA).long(), torch.stack(masksB).long()
# pdb.set_trace()
eq_labels = []
for idx in range(BATCH_SIZE//2):
if torch.equal(labelsA[idx], labelsB[idx]):
eq_labels.append(torch.ones(1).type(LongTensor))
else:
eq_labels.append(torch.zeros(1).type(LongTensor))
eq_labels = torch.stack(eq_labels)
# pdb.set_trace()
masksA = masksA * eq_labels.unsqueeze(1)
masksB = masksB * eq_labels.unsqueeze(1)
imagesA_v = torch.autograd.Variable(FloatTensor(imagesA))
imagesB_v = torch.autograd.Variable(FloatTensor(imagesB))
pmapA, pmapB, similarity = model(imagesA_v, imagesB_v)
# pdb.set_trace()
res_images, res_masks, gt_masks = [], [], []
for idx in range(BATCH_SIZE//2):
res_images.append(imagesA[idx])
res_images.append(imagesB[idx])
res_masks.append(torch.argmax((pmapA * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
res_masks.append(torch.argmax((pmapB * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
gt_masks.append(masksA[idx].reshape(1, 512, 512))
gt_masks.append(masksB[idx].reshape(1, 512, 512))
# pdb.set_trace()
images_T = torch.stack(res_images)
masks_T = torch.stack(res_masks)
gt_masks_T = torch.stack(gt_masks)
# metrics - IoU & precision
intersection_a, intersection_b, union_a, union_b, precision_a, precision_b = metrics(pmapA,
pmapB,
masksA,
masksB)
intersection += intersection_a + intersection_b
union += union_a + union_b
precision += (precision_a / (512 * 512)) + (precision_b / (512 * 512))
# pdb.set_trace()
torchvision.utils.save_image(images_T,
os.path.join(OUTPUT_DIR,f"airplane_MSRC_{batch_idx}_images.png"),
nrow=2)
torchvision.utils.save_image(masks_T,
os.path.join(OUTPUT_DIR, f"airplane_MSRC_{batch_idx}_masks.png"),
nrow=2)
torchvision.utils.save_image(gt_masks_T,
os.path.join(OUTPUT_DIR, f"airplane_MSRC_{batch_idx}_gt_masks.png"),
nrow=2)
delta = time.time() - t_start
print(f"\nTime elapsed: [{delta} secs]\nPrecision : [{precision/(len(dataloader) * BATCH_SIZE)}]\nIoU : [{intersection/union}]")
infer()
###Output
_____no_output_____
###Markdown
VOC + MSRC [Bird] MSRC class indices = {1}
###Code
def infer():
model.eval()
intersection, union, precision = 0, 0, 0
t_start = time.time()
for batch_idx, batch in tqdm_notebook(enumerate(dataloader)):
images = batch["image"].type(FloatTensor)
labels = batch["label"].type(LongTensor)
masks = batch["mask"].type(FloatTensor)
if torch.equal(batch["label"], torch.from_numpy(np.array([1,1]))):
# pdb.set_trace()
pairwise_images = [(images[2*idx], images[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_labels = [(labels[2*idx], labels[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_masks = [(masks[2*idx], masks[2*idx+1]) for idx in range(BATCH_SIZE//2)]
# pdb.set_trace()
imagesA, imagesB = zip(*pairwise_images)
labelsA, labelsB = zip(*pairwise_labels)
masksA, masksB = zip(*pairwise_masks)
# pdb.set_trace()
imagesA, imagesB = torch.stack(imagesA), torch.stack(imagesB)
labelsA, labelsB = torch.stack(labelsA), torch.stack(labelsB)
masksA, masksB = torch.stack(masksA).long(), torch.stack(masksB).long()
# pdb.set_trace()
eq_labels = []
for idx in range(BATCH_SIZE//2):
if torch.equal(labelsA[idx], labelsB[idx]):
eq_labels.append(torch.ones(1).type(LongTensor))
else:
eq_labels.append(torch.zeros(1).type(LongTensor))
eq_labels = torch.stack(eq_labels)
# pdb.set_trace()
masksA = masksA * eq_labels.unsqueeze(1)
masksB = masksB * eq_labels.unsqueeze(1)
imagesA_v = torch.autograd.Variable(FloatTensor(imagesA))
imagesB_v = torch.autograd.Variable(FloatTensor(imagesB))
pmapA, pmapB, similarity = model(imagesA_v, imagesB_v)
# pdb.set_trace()
res_images, res_masks, gt_masks = [], [], []
for idx in range(BATCH_SIZE//2):
res_images.append(imagesA[idx])
res_images.append(imagesB[idx])
res_masks.append(torch.argmax((pmapA * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
res_masks.append(torch.argmax((pmapB * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
gt_masks.append(masksA[idx].reshape(1, 512, 512))
gt_masks.append(masksB[idx].reshape(1, 512, 512))
# pdb.set_trace()
images_T = torch.stack(res_images)
masks_T = torch.stack(res_masks)
gt_masks_T = torch.stack(gt_masks)
# metrics - IoU & precision
intersection_a, intersection_b, union_a, union_b, precision_a, precision_b = metrics(pmapA,
pmapB,
masksA,
masksB)
intersection += intersection_a + intersection_b
union += union_a + union_b
precision += (precision_a / (512 * 512)) + (precision_b / (512 * 512))
# pdb.set_trace()
torchvision.utils.save_image(images_T,
os.path.join(OUTPUT_DIR,f"bird_MSRC_{batch_idx}_images.png"),
nrow=2)
torchvision.utils.save_image(masks_T,
os.path.join(OUTPUT_DIR, f"bird_MSRC_{batch_idx}_masks.png"),
nrow=2)
torchvision.utils.save_image(gt_masks_T,
os.path.join(OUTPUT_DIR, f"bird_MSRC_{batch_idx}_gt_masks.png"),
nrow=2)
delta = time.time() - t_start
print(f"\nTime elapsed: [{delta} secs]\nPrecision : [{precision/(len(dataloader) * BATCH_SIZE)}]\nIoU : [{intersection/union}]")
infer()
###Output
_____no_output_____
###Markdown
VOC + MSRC [Cat] MSRC class indices = {3}
###Code
def infer():
model.eval()
intersection, union, precision = 0, 0, 0
t_start = time.time()
for batch_idx, batch in tqdm_notebook(enumerate(dataloader)):
images = batch["image"].type(FloatTensor)
labels = batch["label"].type(LongTensor)
masks = batch["mask"].type(FloatTensor)
if torch.equal(batch["label"], torch.from_numpy(np.array([3,3]))):
# pdb.set_trace()
pairwise_images = [(images[2*idx], images[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_labels = [(labels[2*idx], labels[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_masks = [(masks[2*idx], masks[2*idx+1]) for idx in range(BATCH_SIZE//2)]
# pdb.set_trace()
imagesA, imagesB = zip(*pairwise_images)
labelsA, labelsB = zip(*pairwise_labels)
masksA, masksB = zip(*pairwise_masks)
# pdb.set_trace()
imagesA, imagesB = torch.stack(imagesA), torch.stack(imagesB)
labelsA, labelsB = torch.stack(labelsA), torch.stack(labelsB)
masksA, masksB = torch.stack(masksA).long(), torch.stack(masksB).long()
# pdb.set_trace()
eq_labels = []
for idx in range(BATCH_SIZE//2):
if torch.equal(labelsA[idx], labelsB[idx]):
eq_labels.append(torch.ones(1).type(LongTensor))
else:
eq_labels.append(torch.zeros(1).type(LongTensor))
eq_labels = torch.stack(eq_labels)
# pdb.set_trace()
masksA = masksA * eq_labels.unsqueeze(1)
masksB = masksB * eq_labels.unsqueeze(1)
imagesA_v = torch.autograd.Variable(FloatTensor(imagesA))
imagesB_v = torch.autograd.Variable(FloatTensor(imagesB))
pmapA, pmapB, similarity = model(imagesA_v, imagesB_v)
# pdb.set_trace()
res_images, res_masks, gt_masks = [], [], []
for idx in range(BATCH_SIZE//2):
res_images.append(imagesA[idx])
res_images.append(imagesB[idx])
res_masks.append(torch.argmax((pmapA * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
res_masks.append(torch.argmax((pmapB * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
gt_masks.append(masksA[idx].reshape(1, 512, 512))
gt_masks.append(masksB[idx].reshape(1, 512, 512))
# pdb.set_trace()
images_T = torch.stack(res_images)
masks_T = torch.stack(res_masks)
gt_masks_T = torch.stack(gt_masks)
# metrics - IoU & precision
intersection_a, intersection_b, union_a, union_b, precision_a, precision_b = metrics(pmapA,
pmapB,
masksA,
masksB)
intersection += intersection_a + intersection_b
union += union_a + union_b
precision += (precision_a / (512 * 512)) + (precision_b / (512 * 512))
# pdb.set_trace()
torchvision.utils.save_image(images_T,
os.path.join(OUTPUT_DIR,f"cat_MSRC_{batch_idx}_images.png"),
nrow=2)
torchvision.utils.save_image(masks_T,
os.path.join(OUTPUT_DIR, f"cat_MSRC_{batch_idx}_masks.png"),
nrow=2)
torchvision.utils.save_image(gt_masks_T,
os.path.join(OUTPUT_DIR, f"cat_MSRC_{batch_idx}_gt_masks.png"),
nrow=2)
delta = time.time() - t_start
print(f"\nTime elapsed: [{delta} secs]\nPrecision : [{precision/(len(dataloader) * BATCH_SIZE)}]\nIoU : [{intersection/union}]")
infer()
###Output
_____no_output_____
###Markdown
VOC + MSRC [Cow] MSRC class indices = {5}
###Code
def infer():
model.eval()
intersection, union, precision = 0, 0, 0
t_start = time.time()
for batch_idx, batch in tqdm_notebook(enumerate(dataloader)):
images = batch["image"].type(FloatTensor)
labels = batch["label"].type(LongTensor)
masks = batch["mask"].type(FloatTensor)
if torch.equal(batch["label"], torch.from_numpy(np.array([5,5]))):
# pdb.set_trace()
pairwise_images = [(images[2*idx], images[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_labels = [(labels[2*idx], labels[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_masks = [(masks[2*idx], masks[2*idx+1]) for idx in range(BATCH_SIZE//2)]
# pdb.set_trace()
imagesA, imagesB = zip(*pairwise_images)
labelsA, labelsB = zip(*pairwise_labels)
masksA, masksB = zip(*pairwise_masks)
# pdb.set_trace()
imagesA, imagesB = torch.stack(imagesA), torch.stack(imagesB)
labelsA, labelsB = torch.stack(labelsA), torch.stack(labelsB)
masksA, masksB = torch.stack(masksA).long(), torch.stack(masksB).long()
# pdb.set_trace()
eq_labels = []
for idx in range(BATCH_SIZE//2):
if torch.equal(labelsA[idx], labelsB[idx]):
eq_labels.append(torch.ones(1).type(LongTensor))
else:
eq_labels.append(torch.zeros(1).type(LongTensor))
eq_labels = torch.stack(eq_labels)
# pdb.set_trace()
masksA = masksA * eq_labels.unsqueeze(1)
masksB = masksB * eq_labels.unsqueeze(1)
imagesA_v = torch.autograd.Variable(FloatTensor(imagesA))
imagesB_v = torch.autograd.Variable(FloatTensor(imagesB))
pmapA, pmapB, similarity = model(imagesA_v, imagesB_v)
# pdb.set_trace()
res_images, res_masks, gt_masks = [], [], []
for idx in range(BATCH_SIZE//2):
res_images.append(imagesA[idx])
res_images.append(imagesB[idx])
res_masks.append(torch.argmax((pmapA * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
res_masks.append(torch.argmax((pmapB * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
gt_masks.append(masksA[idx].reshape(1, 512, 512))
gt_masks.append(masksB[idx].reshape(1, 512, 512))
# pdb.set_trace()
images_T = torch.stack(res_images)
masks_T = torch.stack(res_masks)
gt_masks_T = torch.stack(gt_masks)
# metrics - IoU & precision
intersection_a, intersection_b, union_a, union_b, precision_a, precision_b = metrics(pmapA,
pmapB,
masksA,
masksB)
intersection += intersection_a + intersection_b
union += union_a + union_b
precision += (precision_a / (512 * 512)) + (precision_b / (512 * 512))
# pdb.set_trace()
torchvision.utils.save_image(images_T,
os.path.join(OUTPUT_DIR,f"cow_MSRC_{batch_idx}_images.png"),
nrow=2)
torchvision.utils.save_image(masks_T,
os.path.join(OUTPUT_DIR, f"cow_MSRC_{batch_idx}_masks.png"),
nrow=2)
torchvision.utils.save_image(gt_masks_T,
os.path.join(OUTPUT_DIR, f"cow_MSRC_{batch_idx}_gt_masks.png"),
nrow=2)
delta = time.time() - t_start
print(f"\nTime elapsed: [{delta} secs]\nPrecision : [{precision/(len(dataloader) * BATCH_SIZE)}]\nIoU : [{intersection/union}]")
infer()
###Output
_____no_output_____
###Markdown
Clean-up
###Code
del dataset
del dataloader
###Output
_____no_output_____
###Markdown
Internet Dataloader
###Code
root_dir = "/home/SharedData/intern_sayan/internet_processed/"
image_dir = os.path.join(root_dir, "images", "Data")
mask_dir = os.path.join(root_dir, "GT", "Data")
dataset = InternetDataset(image_dir=image_dir,
mask_dir=mask_dir)
dataloader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=False, num_workers=4, drop_last=True)
###Output
_____no_output_____
###Markdown
VOC + Internet [Airplane] Internet class indices = {0}
###Code
def infer():
model.eval()
intersection, union, precision = 0, 0, 0
t_start = time.time()
for batch_idx, batch in tqdm_notebook(enumerate(dataloader)):
images = batch["image"].type(FloatTensor)
labels = batch["label"].type(LongTensor)
masks = batch["mask"].type(FloatTensor)
if torch.equal(batch["label"], torch.from_numpy(np.array([0,0]))):
# pdb.set_trace()
pairwise_images = [(images[2*idx], images[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_labels = [(labels[2*idx], labels[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_masks = [(masks[2*idx], masks[2*idx+1]) for idx in range(BATCH_SIZE//2)]
# pdb.set_trace()
imagesA, imagesB = zip(*pairwise_images)
labelsA, labelsB = zip(*pairwise_labels)
masksA, masksB = zip(*pairwise_masks)
# pdb.set_trace()
imagesA, imagesB = torch.stack(imagesA), torch.stack(imagesB)
labelsA, labelsB = torch.stack(labelsA), torch.stack(labelsB)
masksA, masksB = torch.stack(masksA).long(), torch.stack(masksB).long()
# pdb.set_trace()
eq_labels = []
for idx in range(BATCH_SIZE//2):
if torch.equal(labelsA[idx], labelsB[idx]):
eq_labels.append(torch.ones(1).type(LongTensor))
else:
eq_labels.append(torch.zeros(1).type(LongTensor))
eq_labels = torch.stack(eq_labels)
# pdb.set_trace()
masksA = masksA * eq_labels.unsqueeze(1)
masksB = masksB * eq_labels.unsqueeze(1)
imagesA_v = torch.autograd.Variable(FloatTensor(imagesA))
imagesB_v = torch.autograd.Variable(FloatTensor(imagesB))
pmapA, pmapB, similarity = model(imagesA_v, imagesB_v)
# pdb.set_trace()
res_images, res_masks, gt_masks = [], [], []
for idx in range(BATCH_SIZE//2):
res_images.append(imagesA[idx])
res_images.append(imagesB[idx])
res_masks.append(torch.argmax((pmapA * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
res_masks.append(torch.argmax((pmapB * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
gt_masks.append(masksA[idx].reshape(1, 512, 512))
gt_masks.append(masksB[idx].reshape(1, 512, 512))
# pdb.set_trace()
images_T = torch.stack(res_images)
masks_T = torch.stack(res_masks)
gt_masks_T = torch.stack(gt_masks)
# metrics - IoU & precision
intersection_a, intersection_b, union_a, union_b, precision_a, precision_b = metrics(pmapA,
pmapB,
masksA,
masksB)
intersection += intersection_a + intersection_b
union += union_a + union_b
precision += (precision_a / (512 * 512)) + (precision_b / (512 * 512))
# pdb.set_trace()
torchvision.utils.save_image(images_T,
os.path.join(OUTPUT_DIR,f"airplane_Internet_{batch_idx}_images.png"),
nrow=2)
torchvision.utils.save_image(masks_T,
os.path.join(OUTPUT_DIR, f"airplane_Internet_{batch_idx}_masks.png"),
nrow=2)
torchvision.utils.save_image(gt_masks_T,
os.path.join(OUTPUT_DIR, f"airplane_Internet_{batch_idx}_gt_masks.png"),
nrow=2)
delta = time.time() - t_start
print(f"\nTime elapsed: [{delta} secs]\nPrecision : [{precision/(len(dataloader) * BATCH_SIZE)}]\nIoU : [{intersection/union}]")
infer()
###Output
_____no_output_____
###Markdown
VOC + Internet [Car] Internet class indices = {1}
###Code
def infer():
model.eval()
intersection, union, precision = 0, 0, 0
t_start = time.time()
for batch_idx, batch in tqdm_notebook(enumerate(dataloader)):
images = batch["image"].type(FloatTensor)
labels = batch["label"].type(LongTensor)
masks = batch["mask"].type(FloatTensor)
if torch.equal(batch["label"], torch.from_numpy(np.array([1,1]))):
# pdb.set_trace()
pairwise_images = [(images[2*idx], images[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_labels = [(labels[2*idx], labels[2*idx+1]) for idx in range(BATCH_SIZE//2)]
pairwise_masks = [(masks[2*idx], masks[2*idx+1]) for idx in range(BATCH_SIZE//2)]
# pdb.set_trace()
imagesA, imagesB = zip(*pairwise_images)
labelsA, labelsB = zip(*pairwise_labels)
masksA, masksB = zip(*pairwise_masks)
# pdb.set_trace()
imagesA, imagesB = torch.stack(imagesA), torch.stack(imagesB)
labelsA, labelsB = torch.stack(labelsA), torch.stack(labelsB)
masksA, masksB = torch.stack(masksA).long(), torch.stack(masksB).long()
# pdb.set_trace()
eq_labels = []
for idx in range(BATCH_SIZE//2):
if torch.equal(labelsA[idx], labelsB[idx]):
eq_labels.append(torch.ones(1).type(LongTensor))
else:
eq_labels.append(torch.zeros(1).type(LongTensor))
eq_labels = torch.stack(eq_labels)
# pdb.set_trace()
masksA = masksA * eq_labels.unsqueeze(1)
masksB = masksB * eq_labels.unsqueeze(1)
imagesA_v = torch.autograd.Variable(FloatTensor(imagesA))
imagesB_v = torch.autograd.Variable(FloatTensor(imagesB))
pmapA, pmapB, similarity = model(imagesA_v, imagesB_v)
# pdb.set_trace()
res_images, res_masks, gt_masks = [], [], []
for idx in range(BATCH_SIZE//2):
res_images.append(imagesA[idx])
res_images.append(imagesB[idx])
res_masks.append(torch.argmax((pmapA * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
res_masks.append(torch.argmax((pmapB * similarity.unsqueeze(2).unsqueeze(2))[idx],
dim=0).reshape(1, 512, 512))
gt_masks.append(masksA[idx].reshape(1, 512, 512))
gt_masks.append(masksB[idx].reshape(1, 512, 512))
# pdb.set_trace()
images_T = torch.stack(res_images)
masks_T = torch.stack(res_masks)
gt_masks_T = torch.stack(gt_masks)
# metrics - IoU & precision
intersection_a, intersection_b, union_a, union_b, precision_a, precision_b = metrics(pmapA,
pmapB,
masksA,
masksB)
intersection += intersection_a + intersection_b
union += union_a + union_b
precision += (precision_a / (512 * 512)) + (precision_b / (512 * 512))
# pdb.set_trace()
torchvision.utils.save_image(images_T,
os.path.join(OUTPUT_DIR,f"car_Internet_{batch_idx}_images.png"),
nrow=2)
torchvision.utils.save_image(masks_T,
os.path.join(OUTPUT_DIR, f"car_Internet_{batch_idx}_masks.png"),
nrow=2)
torchvision.utils.save_image(gt_masks_T,
os.path.join(OUTPUT_DIR, f"car_Internet_{batch_idx}_gt_masks.png"),
nrow=2)
delta = time.time() - t_start
print(f"\nTime elapsed: [{delta} secs]\nPrecision : [{precision/(len(dataloader) * BATCH_SIZE)}]\nIoU : [{intersection/union}]")
infer()
###Output
_____no_output_____
###Markdown
Clean-up
###Code
del dataset
del dataloader
###Output
_____no_output_____ |
traffic_google_add_counters.ipynb | ###Markdown
Settings
###Code
# Imports assumed for this notebook (not shown in the original cells)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import pearsonr, spearmanr
import statsmodels.stats.multitest as multi
sns.set_style("white")
colors = ['#1b9e77','#d95f02','#7570b3','#e7298a','#66a61e','#e6ab02']
#flatui = ['#e41a1c','#377eb8','#4daf4a','#984ea3','#ff7f00','#ffff33']
#flatui = ['#7fc97f','#beaed4','#fdc086','#ffff99','#386cb0','#f0027f']
sns.set_palette(colors)
x_size, y_size = 12,8
plt.rcParams.update({'font.size': 12})
###Output
_____no_output_____
###Markdown
Read the data
###Code
df = pd.read_csv("data\\google_data_preprocessed.csv", encoding="utf8")
df = df[["date", "hour", "route_id", "workday", "weather", "pace"]]
#df = df[["date", "hour", "route_id", "workday", "pace"]]
###Output
_____no_output_____
###Markdown
Rescale per hour
###Code
df = df.groupby(["date", "hour", "route_id", "workday", "weather"], as_index=False).mean()
#df = df.groupby(["date", "hour", "route_id", "workday"], as_index=False).mean()
###Output
_____no_output_____
###Markdown
Counters per route
###Code
# Each line of counters_per_route.txt has the form: route_id;counter_id_1;counter_id_2;...
f = open("data\\counters_per_route.txt", encoding="utf8")
route_counters = {}
for l in f:
ss = l.strip().split(";")
route_id = ss[0]
route_id = int(route_id)
cs = ss[1:]
if cs != ['']:
route_counters[route_id] = cs
route_counters
###Output
_____no_output_____
###Markdown
Prepare the main df
###Code
df_orig = df.copy()
for counters in route_counters.values():
for counter_id in counters:
df[counter_id] = np.nan
###Output
_____no_output_____
###Markdown
Read counter data
###Code
df_counters = pd.read_csv("data\\counters.csv", encoding="utf8")
#df_counters = df_counters[["date", "time", "counter_id_direction", "count"]]
#df_counters.columns = ["date", "hour", "counter_id", "count"]
df_counters = df_counters[["date", "time", "counter_id", "count"]]
# Test: odstrani avtocesto
#df_counters = df_counters[df_counters["counter_id"] != 'HC-H3, LJ (S obvoznica) : LJ (Celovška - Dunajska)']
for route_id, counters in route_counters.items():
for counter_id in counters:
if not counter_id:
continue
print(counter_id, route_id)
df3 = df_counters[df_counters["counter_id"]==counter_id].copy()
df3 = df3.drop(columns=["counter_id"])
df3.columns = ["date", "hour", counter_id]
df3["route_id"] = route_id
df2 = pd.merge(df_orig, df3, how="left", on=["route_id", "date", "hour"])
df.update(df2)
df.to_csv("data\\counter_hours.csv", index=False)
###Output
_____no_output_____
###Markdown
Pairplots and correlations
###Code
#route_counters2 = {3:route_counters[3]}
def corrfunc(x,y, ax=None, **kws):
"""Plot the correlation coefficient in the top left hand corner of a plot."""
r_P, _ = pearsonr(x, y)
r_S, _ = spearmanr(x, y)
ax = ax or plt.gca()
    # LaTeX labels for the annotated Pearson and Spearman coefficients
    rho_P = '$r_P$'
    rho_S = '$r_S$'
ax.annotate(f'{rho_P} = {r_P:.2f}, {rho_S} = {r_S:.2f}', xy=(.1, .9), xycoords=ax.transAxes)
df_corr = pd.DataFrame(columns=["route_id", "counter_id", "Pearson_r", "Pearson_p", "Pearson_q", "Spearman_r", "Spearman_p", "Spearman_q"])
for i, (route_id, counters) in enumerate(route_counters.items()):
sns.set_palette([colors[i]])
df2 = df[df["route_id"]==route_id][["pace"] + counters].copy()
df2 = df2.dropna(axis='columns', how="all")
df2 = df2.dropna(axis="rows")
g = sns.pairplot(df2, y_vars = ["pace"], x_vars = df2.columns[1:], kind="reg", plot_kws=dict(scatter_kws=dict(s=10)))
g.map(corrfunc)
#g.map_lower(corrfunc)
g.fig.suptitle("route ID: " + str(int(route_id)), y=1) # y= some height>1
plt.savefig("figs\\pairplots\\route_"+str(int(route_id))+"_pairplot_small.pdf", bbox_inches="tight")
plt.savefig("figs\\pairplots\\route_"+str(int(route_id))+"_pairplot_small.png", bbox_inches="tight")
plt.savefig("figs\\pairplots\\route_"+str(int(route_id))+"_pairplot_small.svg", bbox_inches="tight")
plt.show()
g = sns.pairplot(df2, kind="reg", plot_kws=dict(scatter_kws=dict(s=10)))
#g.map(corrfunc)
g.map_lower(corrfunc)
g.map_upper(corrfunc)
g.fig.suptitle("route ID: " + str(int(route_id)), y=1) # y= some height>1
plt.savefig("figs\\pairplots\\route_"+str(int(route_id))+"_pairplot.pdf", bbox_inches="tight")
plt.savefig("figs\\pairplots\\route_"+str(int(route_id))+"_pairplot.png", bbox_inches="tight")
plt.savefig("figs\\pairplots\\route_"+str(int(route_id))+"_pairplot.svg", bbox_inches="tight")
plt.show()
Y = df2["pace"].values
for counter in df2.columns[1:]:
X = df2[counter].values
p_r,p_p = pearsonr(X,Y)
s_r,s_p = spearmanr(X,Y)
df_corr = df_corr.append(pd.DataFrame({"route_id": [route_id],
"counter_id": [counter],
"Pearson_r": [p_r],
"Pearson_p": [p_p],
"Spearman_r": [s_r],
"Spearman_p": [s_p]}), ignore_index=True, sort = False)
df_corr["Pearson_q"] = multi.multipletests(df_corr["Pearson_p"], method = 'fdr_bh')[1]
df_corr["Spearman_q"] = multi.multipletests(df_corr["Spearman_p"], method = 'fdr_bh')[1]
#s_r,s_p = spearmanr(X,Y)
#p_r,p_p = pearsonr(X,Y)
#df_corr.to_csv("regression_results\\correlations.csv", index=False)
df_corr2 = df_corr
df_corr2.columns = ["route ID", "counter ID", "r_P", "p(r_P)", "q(r_P)", "r_S", "p(r_S)", "q(r_S)"]
df_corr2[df_corr2.columns[2:]] = df_corr2[df_corr2.columns[2:]].applymap(lambda x: round(x,2))
df_corr2.to_csv("regression_results\\correlations.csv", index=False, sep="\t")
route_counters2 = {}
route_counters2[2] = route_counters[2]
for i, (route_id, counters) in enumerate(route_counters2.items()):
sns.set_palette([colors[i]])
df2 = df[df["route_id"]==route_id][["pace"] + counters].copy()
df2 = df2.dropna(axis='columns', how="all")
sns.pairplot(df2.dropna(axis="rows"), kind="reg", plot_kws=dict(scatter_kws=dict(s=10)))
plt.title("route ID: " + str(int(route_id)))
plt.show()
df[(~df['1935-230'].isna()) & (df['route_id']==2)]
df
###Output
_____no_output_____
###Markdown
Data export
###Code
df.to_csv("data\\pace_counters.csv", encoding="utf8", index=False)
###Output
_____no_output_____
###Markdown
Remove route info
###Code
#df2 = df.drop(columns=["route_id", "pace"]).copy()
#df2 = df2.groupby(['date','hour','weather'], as_index=False).max()
#df2 = df2.dropna(axis='columns', how="all")
#df2.to_csv("data\\counters_filtered.csv", encoding="utf8", index=False)
###Output
_____no_output_____ |
.ipynb_checkpoints/predictingWithHeadlines-checkpoint.ipynb | ###Markdown
Getting Tickers So, for some reason, I can't find a nice downloadable CSV file that already has all of the tickers on the NYSE. I did, however, find a source that can be scraped! So first, let's get all the tickers we are going to use.
###Code
os.system("curl --ftp-ssl anonymous:[email protected] "
"ftp://ftp.nasdaqtrader.com/SymbolDirectory/nasdaqlisted.txt "
"> nasdaq.lst")
#Reading from that file
with open('nasdaq.lst', 'r') as f:
file = f.read()
find_string = file.split('</html>')
ticker_string = find_string[len(find_string)-1]
rows = ticker_string.split('\n')
tickers = [row.split('|')[0] for row in rows][2:]
# `get` below is a project-specific helper (price/news scrapers) imported in a cell not shown here
data = dict()
for t in tickers:
# try:
print(f'----{t}----')
price_data = get.daily(t)
print(price_data)
news_data = get.News(t)
data.update({t:[price_data, news_data]})
print('------------')
# except AttributeError:
# pass
#In case this expirement needs to be replicated
# import pickle
# with open('stockReg.pkl', 'wb') as f:
# pickle.dump(data, f)
###Output
_____no_output_____
###Markdown
I haven't finished working on data processing/scraping for this one. However, I was able to figure out the sentiment analysis I wanted to use. Because WSJ is making scraping hard for me, I just decided to copy a page of headlines related to Nvidia into a list and then test my sentiment analysis method on that (a minimal scoring sketch is appended after the headline list below).
###Code
headlines = [
["Nvidia's bid for Arm signals loftier chip ambition"],
['What are Nvidai and arm? and why are they talking about getting together'],
['SoftBank Reportedly Nears Deal to Sell Chip Unit to Nvidia'],
['SoftBank Nears $40 Billion Deal to Sell Arm Holdings'],
['Here are the biggest stock-market losers on Thursday as the tech sector tanks'],
['Nvidia Has to Play This Game Perfectly'],
['These 74 stocks in the S&P 500 hit all-time records on Wednesday'],
['Nvidia gaming card upgrade is quite a deal, analysts say, as stock breaks more records']
]
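# --- Added sketch: scoring the headlines above ---
# The markdown above does not name the sentiment method, so this is an
# assumption: NLTK's VADER, a common lexicon-based scorer for short headlines.
# It requires `pip install nltk` plus a one-time download of the VADER lexicon.
import nltk
nltk.download('vader_lexicon', quiet=True)
from nltk.sentiment.vader import SentimentIntensityAnalyzer
sia = SentimentIntensityAnalyzer()
for h in headlines:
    # compound score lies in [-1, 1]: negative -> bearish tone, positive -> bullish
    print(f"{sia.polarity_scores(h[0])['compound']:+.3f}  {h[0]}")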
###Output
_____no_output_____ |
SC1v2/03_roots_of_1D_equations.ipynb | ###Markdown
ILI285 - Computación Científica I / INF285 - Computación Científica Roots of 1D equations [S]cientific [C]omputing [T]eam Version: 1.32 Table of Contents
* [Introduction](intro)
* [Bisection Method](bisection)
* [Cobweb Plot](cobweb)
* [Fixed Point Iteration](fpi)
* [Newton Method](nm)
* [Wilkinson Polynomial](wilkinson)
* [Acknowledgements](acknowledgements)
###Code
import numpy as np
import matplotlib.pyplot as plt
import sympy as sym
%matplotlib inline
from ipywidgets import interact
from ipywidgets import widgets
sym.init_printing()
from scipy import optimize
###Output
_____no_output_____
###Markdown
Introduction Hello again! In this document we're going to learn how to find a 1D equation's solution using numerical methods. First, let's start with the definition of a root: Definition: The function $f(x)$ has a root at $x = r$ if $f(r) = 0$. An example: Let's say we want to solve the equation $x + \log(x) = 3$. We can rearrange it as $x + \log(x) - 3 = 0$; that way, finding its solution amounts to finding the root of $f(x) = x + \log(x) - 3$. Now let's study some numerical methods to solve these kinds of problems. Defining a function $f(x)$
###Code
f = lambda x: x+np.log(x)-3
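# Added cross-check (uses the scipy.optimize imported at the top of this notebook):
# a bracketing solver should agree with the sympy root computed in the next cell.
# f(1) = -2 < 0 and f(3) = log(3) > 0, so [1, 3] brackets the root.
print(optimize.brentq(f, 1, 3))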
###Output
_____no_output_____
###Markdown
Finding $r$ using sympy
###Code
y = sym.Symbol('y')
fsym = lambda y: y+sym.log(y)-3
r_all=sym.solve(sym.Eq(fsym(y), 0), y)
r=r_all[0].evalf()
print(r)
print(r_all)
def find_root_manually(r=2.0):
x = np.linspace(1,3,1000)
plt.figure(figsize=(8,8))
plt.plot(x,f(x),'b-')
plt.grid()
plt.ylabel('$f(x)$',fontsize=16)
plt.xlabel('$x$',fontsize=16)
plt.title('What is r such that $f(r)='+str(f(r))+'$? $r='+str(r)+'$',fontsize=16)
plt.plot(r,f(r),'k.',markersize=20)
plt.show()
interact(find_root_manually,r=(1e-5,3,1e-3))
###Output
_____no_output_____
###Markdown
Bisection Method The bisection method finds a root of a function $f$, where $f$ is a **continuous** function. To know whether $f$ has a root, we check for an interval $[a,b]$ on which $f(a)\cdot f(b) < 0$. When these two conditions (continuity and the sign change) are satisfied, the Intermediate Value Theorem guarantees a value $r$ between $a$ and $b$ for which $f(r) = 0$. To summarize how this method works: start with the aforementioned interval (checking that it contains a root) and split it into two smaller intervals $[a,c]$ and $[c,b]$, where $c$ is the midpoint. Then check which of the two halves still contains a root, and keep splitting the "eligible" interval until the algorithm converges, i.e. until the interval is smaller than the requested tolerance. A quick estimate of how many halvings that takes is sketched below.
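This small added snippet (not part of the original notebook) estimates, from the bracket width and the tolerance, how many halvings the run further below will need:

```python
import numpy as np

# After n bisection steps the bracket has width (b - a) / 2**n, and the loop
# below stops once half of that width drops below tol, so roughly
# n >= log2((b - a) / tol) - 1 iterations are needed.
a, b, tol = 0.0, 3.0, 1e-13       # the bracket and tolerance used further down
n_needed = int(np.ceil(np.log2((b - a) / tol) - 1))
print(n_needed)                    # 44, which matches the printed run below
```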
###Code
def bisect(f, a, b, tol=1e-8):
fa = f(a)
fb = f(b)
i = 0
    # If there is no sign change (f(a)*f(b) >= 0), a root in [a,b] is not guaranteed
if np.sign(f(a)*f(b)) >= 0:
print('f(a)f(b)<0 not satisfied!')
return None
#Printing the evolution of the computation of the root
print(' i | a | c | b | fa | fc | fb | b-a')
print('----------------------------------------------------------------------------------------')
while(b-a)/2 > tol:
c = (a+b)/2.
fc = f(c)
print('%2d | %.7f | %.7f | %.7f | %.7f | %.7f | %.7f | %.7f' %
(i+1, a, c, b, fa, fc, fb, b-a))
# Did we find the root?
if fc == 0:
print('f(c)==0')
break
elif np.sign(fa*fc) < 0:
b = c
fb = fc
else:
a = c
fa = fc
i += 1
xc = (a+b)/2.
return xc
## Finding a root of cos(x). What about if you change the interval?
#f = lambda x: np.cos(x)
## Another function
#f = lambda x: x**3-2*x**2+(4/3)*x-(8/27)
## Computing the cubic root of 7.
#f = lambda x: x**3-7
#bisect(f,0,2)
f = lambda x: x*np.exp(x)-3
#f2 = lambda x: np.cos(x)-x
bisect(f,0,3,tol=1e-13)
###Output
i | a | c | b | fa | fc | fb | b-a
----------------------------------------------------------------------------------------
1 | 0.0000000 | 1.5000000 | 3.0000000 | -3.0000000 | 3.7225336 | 57.2566108 | 3.0000000
2 | 0.0000000 | 0.7500000 | 1.5000000 | -3.0000000 | -1.4122500 | 3.7225336 | 1.5000000
3 | 0.7500000 | 1.1250000 | 1.5000000 | -1.4122500 | 0.4652440 | 3.7225336 | 0.7500000
4 | 0.7500000 | 0.9375000 | 1.1250000 | -1.4122500 | -0.6060099 | 0.4652440 | 0.3750000
5 | 0.9375000 | 1.0312500 | 1.1250000 | -0.6060099 | -0.1077879 | 0.4652440 | 0.1875000
6 | 1.0312500 | 1.0781250 | 1.1250000 | -0.1077879 | 0.1687856 | 0.4652440 | 0.0937500
7 | 1.0312500 | 1.0546875 | 1.0781250 | -0.1077879 | 0.0280899 | 0.1687856 | 0.0468750
8 | 1.0312500 | 1.0429688 | 1.0546875 | -0.1077879 | -0.0404419 | 0.0280899 | 0.0234375
9 | 1.0429688 | 1.0488281 | 1.0546875 | -0.0404419 | -0.0063254 | 0.0280899 | 0.0117188
10 | 1.0488281 | 1.0517578 | 1.0546875 | -0.0063254 | 0.0108447 | 0.0280899 | 0.0058594
11 | 1.0488281 | 1.0502930 | 1.0517578 | -0.0063254 | 0.0022503 | 0.0108447 | 0.0029297
12 | 1.0488281 | 1.0495605 | 1.0502930 | -0.0063254 | -0.0020399 | 0.0022503 | 0.0014648
13 | 1.0495605 | 1.0499268 | 1.0502930 | -0.0020399 | 0.0001046 | 0.0022503 | 0.0007324
14 | 1.0495605 | 1.0497437 | 1.0499268 | -0.0020399 | -0.0009678 | 0.0001046 | 0.0003662
15 | 1.0497437 | 1.0498352 | 1.0499268 | -0.0009678 | -0.0004316 | 0.0001046 | 0.0001831
16 | 1.0498352 | 1.0498810 | 1.0499268 | -0.0004316 | -0.0001635 | 0.0001046 | 0.0000916
17 | 1.0498810 | 1.0499039 | 1.0499268 | -0.0001635 | -0.0000294 | 0.0001046 | 0.0000458
18 | 1.0499039 | 1.0499153 | 1.0499268 | -0.0000294 | 0.0000376 | 0.0001046 | 0.0000229
19 | 1.0499039 | 1.0499096 | 1.0499153 | -0.0000294 | 0.0000041 | 0.0000376 | 0.0000114
20 | 1.0499039 | 1.0499067 | 1.0499096 | -0.0000294 | -0.0000127 | 0.0000041 | 0.0000057
21 | 1.0499067 | 1.0499082 | 1.0499096 | -0.0000127 | -0.0000043 | 0.0000041 | 0.0000029
22 | 1.0499082 | 1.0499089 | 1.0499096 | -0.0000043 | -0.0000001 | 0.0000041 | 0.0000014
23 | 1.0499089 | 1.0499092 | 1.0499096 | -0.0000001 | 0.0000020 | 0.0000041 | 0.0000007
24 | 1.0499089 | 1.0499091 | 1.0499092 | -0.0000001 | 0.0000009 | 0.0000020 | 0.0000004
25 | 1.0499089 | 1.0499090 | 1.0499091 | -0.0000001 | 0.0000004 | 0.0000009 | 0.0000002
26 | 1.0499089 | 1.0499089 | 1.0499090 | -0.0000001 | 0.0000002 | 0.0000004 | 0.0000001
27 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000001 | 0.0000000 | 0.0000002 | 0.0000000
28 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000001 | -0.0000000 | 0.0000000 | 0.0000000
29 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | -0.0000000 | 0.0000000 | 0.0000000
30 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
31 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | -0.0000000 | 0.0000000 | 0.0000000
32 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
33 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | -0.0000000 | 0.0000000 | 0.0000000
34 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
35 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
36 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
37 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | -0.0000000 | 0.0000000 | 0.0000000
38 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | -0.0000000 | 0.0000000 | 0.0000000
39 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
40 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
41 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | -0.0000000 | 0.0000000 | 0.0000000
42 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | -0.0000000 | 0.0000000 | 0.0000000
43 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
44 | 1.0499089 | 1.0499089 | 1.0499089 | -0.0000000 | 0.0000000 | 0.0000000 | 0.0000000
###Markdown
It's very important to define a concept called the **convergence rate**, which measures how fast a method approaches the root. For bisection this rate is always 0.5, because each iteration keeps exactly half of the current interval, so the error bound is halved at every step. Cobweb Plot
###Code
def cobweb(x,g=None):
min_x = np.amin(x)
max_x = np.amax(x)
plt.figure(figsize=(10,10))
ax = plt.axes()
plt.plot(np.array([min_x,max_x]),np.array([min_x,max_x]),'b-')
for i in np.arange(x.size-1):
delta_x = x[i+1]-x[i]
head_length = np.abs(delta_x)*0.04
arrow_length = delta_x-np.sign(delta_x)*head_length
ax.arrow(x[i], x[i], 0, arrow_length, head_width=1.5*head_length, head_length=head_length, fc='k', ec='k')
ax.arrow(x[i], x[i+1], arrow_length, 0, head_width=1.5*head_length, head_length=head_length, fc='k', ec='k')
if g!=None:
y = np.linspace(min_x,max_x,1000)
plt.plot(y,g(y),'r')
plt.title('Cobweb diagram')
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Fixed Point Iteration To learn about the Fixed-Point Iteration we will first learn about the concept of a Fixed Point. A Fixed Point of a function $g$ is a real number $r$ such that $g(r) = r$. The Fixed-Point Iteration is based on the Fixed Point concept and works like this to find the root of a function:\begin{equation} x_{0} = \text{initial guess} \\ x_{i+1} = g(x_{i})\end{equation}To find an equation's solution using this method you'll have to rearrange the equation into the form $x = g(x)$. That way, you'll be iterating over the function $g(x)$, but you will **not** find $g$'s root; you will find the root of $f(x) = g(x) - x$ (or, equivalently, of $f(x) = x - g(x)$). In the following example, we'll find the solution of $f(x) = x - \cos(x)$ by iterating over the function $g(x) = \cos(x)$.
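Before looking at the implementation, here is a small added check (a sketch, not from the original notebook) of the local convergence condition discussed below the cobweb diagram: the iteration converges near $r$ when $S = |g'(r)| < 1$. For $g(x) = \cos(x)$:

```python
import numpy as np
from scipy import optimize

# The fixed point of g(x) = cos(x) is the root of x - cos(x) = 0.
r_fp = optimize.brentq(lambda x: x - np.cos(x), 0, 1)
S = abs(-np.sin(r_fp))    # |g'(r)| with g'(x) = -sin(x)
print(r_fp, S)            # r ~ 0.739, S ~ 0.67 < 1, so the iteration converges
```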
###Code
def fpi(g, x0, k, flag_cobweb=False):
x = np.empty(k+1)
x[0] = x0
error_i = np.nan
print(' i | x(i) | x(i+1) ||x(i+1)-x(i)| | e_i/e_{i-1}')
print('--------------------------------------------------------------')
for i in range(k):
x[i+1] = g(x[i])
error_iminus1 = error_i
error_i = abs(x[i+1]-x[i])
print('%2d | %.10f | %.10f | %.10f | %.10f' %
(i,x[i],x[i+1],error_i,error_i/error_iminus1))
if flag_cobweb:
cobweb(x,g)
return x[-1]
g = lambda x: np.cos(x)
fpi2(g, 2, 20, True) # Note: fpi2 is defined a few cells below; it adds an extra column to analyze quadratic convergence
###Output
i | x(i) | x(i+1) ||x(i+1)-x(i)| | e_i/e_{i-1} | e_i/e_{i-1}^2
-----------------------------------------------------------------------------
0 | 2.0000000000 | -0.4161468365 | 2.4161468365 | 0.0000000000 | 0.0000000000
1 | -0.4161468365 | 0.9146533259 | 1.3308001624 | 0.5507944063 | 0.2279639623
2 | 0.9146533259 | 0.6100652997 | 0.3045880261 | 0.2288758558 | 0.1719836398
3 | 0.6100652997 | 0.8196106080 | 0.2095453083 | 0.6879630527 | 2.2586674253
4 | 0.8196106080 | 0.6825058579 | 0.1371047501 | 0.6542964443 | 3.1224580961
5 | 0.6825058579 | 0.7759946131 | 0.0934887552 | 0.6818783095 | 4.9734112712
6 | 0.7759946131 | 0.7137247340 | 0.0622698791 | 0.6660681166 | 7.1245800092
7 | 0.7137247340 | 0.7559287136 | 0.0422039796 | 0.6777591376 | 10.8842211877
8 | 0.7559287136 | 0.7276347923 | 0.0282939213 | 0.6704088465 | 15.8849675651
9 | 0.7276347923 | 0.7467496017 | 0.0191148094 | 0.6755800739 | 23.8772161593
10 | 0.7467496017 | 0.7339005972 | 0.0128490045 | 0.6722015485 | 35.1665315532
11 | 0.7339005972 | 0.7425675503 | 0.0086669531 | 0.6745233116 | 52.4961534769
12 | 0.7425675503 | 0.7367348584 | 0.0058326919 | 0.6729806736 | 77.6490502525
13 | 0.7367348584 | 0.7406662639 | 0.0039314055 | 0.6740293405 | 115.5605938426
14 | 0.7406662639 | 0.7380191412 | 0.0026471227 | 0.6733273142 | 171.2688547776
15 | 0.7380191412 | 0.7398027782 | 0.0017836370 | 0.6738021758 | 254.5413469114
16 | 0.7398027782 | 0.7386015286 | 0.0012012496 | 0.6734832006 | 377.5898286760
17 | 0.7386015286 | 0.7394108086 | 0.0008092800 | 0.6736984720 | 560.8313921735
18 | 0.7394108086 | 0.7388657151 | 0.0005450935 | 0.6735536472 | 832.2875198866
19 | 0.7388657151 | 0.7392329181 | 0.0003672029 | 0.6736512865 | 1235.8453896737
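###Markdown
Before interpreting the diagram, here is a quick numerical check (a minimal sketch; the value of $r$ below is the fixed point of $\cos$, accurate to the digits shown): the ratio $e_i/e_{i-1}$ in the table settles near $0.67$, and for a linearly convergent FPI this ratio should approach $S=|g'(r)|$.
###Code
import numpy as np
# Minimal sketch: comparing the observed error ratio with |g'(r)| for g(x) = cos(x).
r = 0.7390851332151607       # fixed point of cos(x), assumed known to these digits
S = np.abs(-np.sin(r))       # |g'(r)|, approximately 0.6736
# This matches the e_i/e_{i-1} column above, which tends to roughly 0.6736.
###Output
_____no_output_____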
###Markdown
Let's quickly explain the Cobweb Diagram we have here. The blue line is the function $x$ and the red is the function $g(x)$. The point where they meet is $g$'s fixed point. In this particular example, we start at $y = x = 2$ (the top right corner) and then we "jump" **vertically** to $y = \cos(2) \approx -0.42$. After this, we jump **horizontally** to $x = \cos(2) \approx -0.42$. Then, we jump again **vertically** to $y = \cos\left(\cos(2)\right) \approx 0.91$ and so on. See the pattern here? We're just iterating over $x = \cos(x)$, getting closer to the center of the diagram where the fixed point resides, at $x \approx 0.739$. It's very important to mention that the algorithm will converge only if the rate of convergence $S < 1$, where $S = \left| g'(r) \right|$. If you want to use this method, you'll have to construct $g(x)$ starting from $f(x)$ accordingly. In this example, $g(x) = \cos(x) \Rightarrow g'(x) = -\sin(x)$ and $|-\sin(0.739)| \approx 0.67$. Another example. Source: https://divisbyzero.com/2008/12/18/sharkovskys-theorem/amp/?__twitter_impression=true
###Code
g = lambda x: -(3/2)*x**2+(11/2)*x-2
gp = lambda x: -3*x+11/2
a=-1/2.7
g2 = lambda x: x+a*(x-g(x))
#x=np.linspace(2,3,100)
#plt.plot(x,gp(x),'-')
#plt.plot(x,gp(x)*0+1,'r-')
#plt.plot(x,gp(x)*0-1,'g-')
#plt.grid(True)
#plt.show()
fpi(g2, 2.45, 12, True)
###Output
i | x(i) | x(i+1) ||x(i+1)-x(i)| | e_i/e_{i-1}
--------------------------------------------------------------
0 | 2.4500000000 | 2.4578703704 | 0.0078703704 | nan
1 | 2.4578703704 | 2.4573987149 | 0.0004716554 | 0.0599279835
2 | 2.4573987149 | 2.4574289190 | 0.0000302040 | 0.0640383807
3 | 2.4574289190 | 2.4574269922 | 0.0000019268 | 0.0637931300
4 | 2.4574269922 | 2.4574271151 | 0.0000001229 | 0.0638088397
5 | 2.4574271151 | 2.4574271073 | 0.0000000078 | 0.0638078348
6 | 2.4574271073 | 2.4574271078 | 0.0000000005 | 0.0638078595
7 | 2.4574271078 | 2.4574271078 | 0.0000000000 | 0.0638072307
8 | 2.4574271078 | 2.4574271078 | 0.0000000000 | 0.0638043463
9 | 2.4574271078 | 2.4574271078 | 0.0000000000 | 0.0634125082
10 | 2.4574271078 | 2.4574271078 | 0.0000000000 | 0.0618556701
11 | 2.4574271078 | 2.4574271078 | 0.0000000000 | 0.1111111111
###Markdown
Newton's Method For this method, we want to iteratively find some function $f(x)$'s root, that is, the number $r$ for which $f(r) = 0$. The algorithm is as follows:\begin{align*} x_0 &= initial\_guess, \\ x_{i+1} &= x_i - \cfrac{f(x_i)}{f'(x_i)},\end{align*}which means the iteration breaks down whenever $f'(x_i) = 0$, and it loses its quadratic convergence when $f'(r) = 0$, i.e. when the root has multiplicity greater than one. In that case you would have to use the modified version of this method, but for now let's focus on the unmodified version. Newton's (unmodified) method converges quadratically at simple roots.
###Code
def newton_method(f, fp, x0, rel_error=1e-8, m=1):
#Initialization of hybrid error and absolute
hybrid_error = 100
error_i = np.inf
print('i | x(i) | x(i+1) | |x(i+1)-x(i)| | e_i/e_{i-1} | e_i/e_{i-1}^2')
print('----------------------------------------------------------------------------------------')
#Iteration counter
i = 1
while (hybrid_error > rel_error and hybrid_error < 1e12 and i < 1e4):
#Newton's iteration
x1 = x0-m*f(x0)/fp(x0)
#Checking if root was found
if f(x1) == 0.0:
hybrid_error = 0.0
break
#Computation of hybrid error
hybrid_error = abs(x1-x0)/np.max([abs(x1),1e-12])
#Computation of absolute error
error_iminus1 = error_i
error_i = abs(x1-x0)
#Increasing counter
i += 1
#Showing some info
print("%d | %.10f | %.10f | %.20f | %.10f | %.10f" %
(i, x0, x1, error_i, error_i/error_iminus1, error_i/(error_iminus1**2)))
#Updating solution
x0 = x1
#Checking if solution was obtained
if hybrid_error < rel_error:
return x1
elif i>=1e4:
        print("Newton's Method diverged. Too many iterations!!")
return None
else:
        print("Newton's Method diverged!")
return None
f = lambda x: np.sin(x)
fp = lambda x: np.cos(x) # the derivative of f
newton_method(f, fp, 3.1,rel_error=1e-14)
f = lambda x: x**2
fp = lambda x: 2*x # the derivative of f
newton_method(f, fp, 3.1,rel_error=1e-2, m=2)
###Output
i | x(i) | x(i+1) | |x(i+1)-x(i)| | e_i/e_{i-1} | e_i/e_{i-1}^2
----------------------------------------------------------------------------------------
###Markdown
Wilkinson Polynomialhttps://en.wikipedia.org/wiki/Wilkinson%27s_polynomial**Final question: Why is the root far far away from $16$?**
###Code
x = sym.symbols('x', reals=True)
W=1
for i in np.arange(1,21):
W*=(x-i)
W # Printing W nicely
# Expanding the Wilkinson polynomial
We=sym.expand(W)
We
# Just computing the derivative
Wep=sym.diff(We,x)
Wep
# Lambdifying the polynomial so it can be evaluated numerically
P=sym.lambdify(x,We)
Pp=sym.lambdify(x,Wep)
# Using scipy function to compute a root
root = optimize.newton(P,16)
print(root)
###Output
_____no_output_____
###Markdown
Acknowledgements* _Material created by professor Claudio Torres_ (`[email protected]`) _and assistants: Laura Bermeo, Alvaro Salinas, Axel Simonsen and Martín Villanueva. DI UTFSM. March 2016._ v1.1.* _Update April 2020 - v1.32 - C.Torres_ : Re-ordering the notebook. Proposed Classwork Build a FPI that, given $x$, computes $\displaystyle \frac{1}{x}$. Write down your solution below or go and see the [solution](sol1)
###Code
print('Please try to think and solve before you see the solution!!!')
###Output
_____no_output_____
###Markdown
In class From the textbook
###Code
g1 = lambda x: 1-x**3
g2 = lambda x: (1-x)**(1/3)
g3 = lambda x: (1+2*x**3)/(1+3*x**2)
fpi(g3, 0.5, 10, True)
g1p = lambda x: -3*x**2
g2p = lambda x: -(1/3)*(1-x)**(-2/3)
g3p = lambda x: ((1+3*x**2)*(6*x**2)-(1+2*x**3)*6*x)/((1+3*x**2)**2)
r=0.6823278038280194
print(g3p(r))
###Output
0.0
###Markdown
Adding another implementation of FPI, including an extra column for analyzing quadratic convergence
###Code
def fpi2(g, x0, k, flag_cobweb=False):
x = np.empty(k+1)
x[0] = x0
error_i = np.inf
print(' i | x(i) | x(i+1) ||x(i+1)-x(i)| | e_i/e_{i-1} | e_i/e_{i-1}^2')
print('-----------------------------------------------------------------------------')
for i in range(k):
x[i+1] = g(x[i])
error_iminus1 = error_i
error_i = abs(x[i+1]-x[i])
print('%2d | %.10f | %.10f | %.10f | %.10f | %.10f' %
(i,x[i],x[i+1],error_i,error_i/error_iminus1,error_i/(error_iminus1**2)))
if flag_cobweb:
cobweb(x,g)
return x[-1]
###Output
_____no_output_____
###Markdown
Which function shows quadratic convergence? Why?
###Code
g1 = lambda x: (4./5.)*x+1./x
g2 = lambda x: x/2.+5./(2*x)
g3 = lambda x: (x+5.)/(x+1)
fpi2(g1, 3.0, 30, True)
###Output
i | x(i) | x(i+1) ||x(i+1)-x(i)| | e_i/e_{i-1} | e_i/e_{i-1}^2
-----------------------------------------------------------------------------
0 | 3.0000000000 | 2.7333333333 | 0.2666666667 | 0.0000000000 | 0.0000000000
1 | 2.7333333333 | 2.5525203252 | 0.1808130081 | 0.6780487805 | 2.5426829268
2 | 2.5525203252 | 2.4337859123 | 0.1187344129 | 0.6566696394 | 3.6317610455
3 | 2.4337859123 | 2.3579112134 | 0.0758746990 | 0.6390287123 | 5.3820008621
4 | 2.3579112134 | 2.3104331500 | 0.0474780634 | 0.6257430215 | 8.2470577167
5 | 2.3104331500 | 2.2811657946 | 0.0292673554 | 0.6164395368 | 12.9836706220
6 | 2.2811657946 | 2.2633049812 | 0.0178608134 | 0.6102639994 | 20.8513543865
7 | 2.2633049812 | 2.2524757347 | 0.0108292465 | 0.6063131795 | 33.9465604055
8 | 2.2524757347 | 2.2459365357 | 0.0065391990 | 0.6038461667 | 55.7606814775
9 | 2.2459365357 | 2.2419977848 | 0.0039387509 | 0.6023292551 | 92.1105557855
10 | 2.2419977848 | 2.2396289986 | 0.0023687862 | 0.6014054433 | 152.6893838578
11 | 2.2396289986 | 2.2382057226 | 0.0014232760 | 0.6008461352 | 253.6514828510
12 | 2.2382057226 | 2.2373510329 | 0.0008546897 | 0.6005087204 | 421.9200655626
13 | 2.2373510329 | 2.2368379579 | 0.0005130750 | 0.6003056077 | 702.3667490889
14 | 2.2368379579 | 2.2365300188 | 0.0003079392 | 0.6001835001 | 1169.7773139066
15 | 2.2365300188 | 2.2363452213 | 0.0001847974 | 0.6001101489 | 1948.7945766484
16 | 2.2363452213 | 2.2362343307 | 0.0001108907 | 0.6000661069 | 3247.1564738023
17 | 2.2362343307 | 2.2361677919 | 0.0000665388 | 0.6000396705 | 5411.0928445584
18 | 2.2361677919 | 2.2361278670 | 0.0000399249 | 0.6000238046 | 9017.6533878399
19 | 2.2361278670 | 2.2361039115 | 0.0000239555 | 0.6000142836 | 15028.5875811147
20 | 2.2361039115 | 2.2360895380 | 0.0000143735 | 0.6000085704 | 25046.8112110049
21 | 2.2360895380 | 2.2360809139 | 0.0000086242 | 0.6000051424 | 41743.8505759425
22 | 2.2360809139 | 2.2360757393 | 0.0000051745 | 0.6000030855 | 69572.2495166694
23 | 2.2360757393 | 2.2360726346 | 0.0000031047 | 0.6000018512 | 115952.9143770302
24 | 2.2360726346 | 2.2360707718 | 0.0000018628 | 0.6000011108 | 193254.0225505195
25 | 2.2360707718 | 2.2360696541 | 0.0000011177 | 0.6000006665 | 322089.2027604016
26 | 2.2360696541 | 2.2360689834 | 0.0000006706 | 0.6000004000 | 536814.5032072314
27 | 2.2360689834 | 2.2360685811 | 0.0000004024 | 0.6000002397 | 894690.0031417808
28 | 2.2360685811 | 2.2360683396 | 0.0000002414 | 0.6000001430 | 1491149.1692002052
29 | 2.2360683396 | 2.2360681948 | 0.0000001449 | 0.6000000883 | 2485247.7961213691
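###Markdown
One way to answer the question above (a minimal sketch; note that all three maps share the fixed point $r=\sqrt{5}$): compute $|g'(r)|$ for each candidate. The map with $g'(r)=0$ converges quadratically, the others only linearly.
###Code
import numpy as np
# Minimal sketch: linear rate S = |g'(r)| of each map at the common fixed point r = sqrt(5).
r = np.sqrt(5)
S1 = abs(4/5 - 1/r**2)        # g1'(x) = 4/5 - 1/x**2   -> 0.6, matching the ratios in the table above
S2 = abs(1/2 - 5/(2*r**2))    # g2'(x) = 1/2 - 5/(2x**2) -> 0, so g2 converges quadratically
S3 = abs(-4/(r + 1)**2)       # g3'(x) = -4/(x+1)**2     -> about 0.38, linear convergence
###Output
_____no_output_____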
###Markdown
Building a FPI to compute the cubic root of 7
###Code
# What is 'a'? Can we find another 'a'?
a = -3*(1.7**2)
print(a)
f = lambda x: x**3-7
g = lambda x: f(x)/a+x
r=fpi(g, 1.7, 14, True)
print(f(r))
###Output
i | x(i) | x(i+1) ||x(i+1)-x(i)| | e_i/e_{i-1}
--------------------------------------------------------------
0 | 1.7000000000 | 1.9407151096 | 0.2407151096 | nan
1 | 1.9407151096 | 1.9050217836 | 0.0356933259 | 0.1482803717
2 | 1.9050217836 | 1.9149952799 | 0.0099734962 | 0.2794218792
3 | 1.9149952799 | 1.9123789078 | 0.0026163720 | 0.2623324846
4 | 1.9123789078 | 1.9130779941 | 0.0006990863 | 0.2671968386
5 | 1.9130779941 | 1.9128920879 | 0.0001859062 | 0.2659273937
6 | 1.9128920879 | 1.9129415886 | 0.0000495007 | 0.2662670485
7 | 1.9129415886 | 1.9129284127 | 0.0000131759 | 0.2661767579
8 | 1.9129284127 | 1.9129319201 | 0.0000035074 | 0.2662008017
9 | 1.9129319201 | 1.9129309865 | 0.0000009337 | 0.2661944019
10 | 1.9129309865 | 1.9129312350 | 0.0000002485 | 0.2661961056
11 | 1.9129312350 | 1.9129311689 | 0.0000000662 | 0.2661956523
12 | 1.9129311689 | 1.9129311865 | 0.0000000176 | 0.2661957723
13 | 1.9129311865 | 1.9129311818 | 0.0000000047 | 0.2661957414
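###Markdown
A quick sketch to address the questions in the comments above (here $r=7^{1/3}$ is the exact root): since $g(x)=x+f(x)/a$, we have $g'(x)=1+3x^2/a$, so the linear rate is $S=\left|1+\frac{3r^2}{a}\right|$, and choosing $a=-3r^2$ would make it vanish.
###Code
import numpy as np
# Minimal sketch: the linear rate of g(x) = x + f(x)/a at the root r = 7**(1/3).
r = 7**(1/3)
a1 = -3*(1.7**2)             # the 'a' used above
S1 = abs(1 + 3*r**2/a1)      # about 0.266, matching the e_i/e_{i-1} column above
a2 = -3*r**2                 # another 'a': it cancels g'(r)...
S2 = abs(1 + 3*r**2/a2)      # ...so S = 0 and the iteration becomes quadratically convergent
###Output
_____no_output_____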
###Markdown
Playing with some roots
###Code
f = lambda x: 8*x**4-12*x**3+6*x**2-x
fp = lambda x: 32*x**3-36*x**2+12*x-1
x = np.linspace(-1,1,1000)
plt.figure(figsize=(10,10))
plt.title('What are we seeing with the semilogy plot? Is this function differentiable?')
plt.semilogy(x,np.abs(f(x)),'b-')
plt.semilogy(x,np.abs(fp(x)),'r-')
plt.grid()
plt.ylabel('$f(x)$',fontsize=16)
plt.xlabel('$x$',fontsize=16)
plt.show()
r=newton_method(f, fp, 0.3, rel_error=1e-8, m=1)
print([r,f(r)])
# Is this showing quadratic convergence? If not, can you fix it?
###Output
i | x(i) | x(i+1) | |x(i+1)-x(i)| | e_i/e_{i-1} | e_i/e_{i-1}^2
----------------------------------------------------------------------------------------
2 | 0.3000000000 | 0.3857142857 | 0.08571428571428552079 | 0.0000000000 | 0.0000000000
3 | 0.3857142857 | 0.4279843444 | 0.04227005870841460400 | 0.4931506849 | 5.7534246575
4 | 0.4279843444 | 0.4534159993 | 0.02543165489222543041 | 0.6016470208 | 14.2334086872
5 | 0.4534159993 | 0.4694946399 | 0.01607864055554952820 | 0.6322294252 | 24.8599404138
6 | 0.4694946399 | 0.4798882000 | 0.01039356016793291371 | 0.6464203321 | 40.2036683316
7 | 0.4798882000 | 0.4866871127 | 0.00679891264490944947 | 0.6541466576 | 62.9376890188
8 | 0.4866871127 | 0.4911655766 | 0.00447846388652911598 | 0.6587029604 | 96.8835745915
9 | 0.4911655766 | 0.4941281466 | 0.00296257004712746630 | 0.6615147788 | 147.7101960697
10 | 0.4941281466 | 0.4960932149 | 0.00196506826728892747 | 0.6632984996 | 223.8929338621
11 | 0.4960932149 | 0.4973989041 | 0.00130568918550205693 | 0.6644497839 | 338.1306364449
12 | 0.4973989041 | 0.4982674500 | 0.00086854596470975487 | 0.6652011630 | 509.4636383483
13 | 0.4982674500 | 0.4988456368 | 0.00057818680119792187 | 0.6656951096 | 766.4477606024
14 | 0.4988456368 | 0.4992307216 | 0.00038508477299548094 | 0.6660213831 | 1151.9138480345
15 | 0.4992307216 | 0.4994872795 | 0.00025655790749512519 | 0.6662374767 | 1730.1060012498
16 | 0.4994872795 | 0.4996582449 | 0.00017096535582211692 | 0.6663811593 | 2597.3908416937
17 | 0.4996582449 | 0.4997721891 | 0.00011394425784067019 | 0.6664757155 | 3898.3085915658
18 | 0.4997721891 | 0.4998481375 | 0.00007594838735824894 | 0.6665398397 | 5849.7010060220
19 | 0.4998481375 | 0.4998987634 | 0.00005062583760417905 | 0.6665821272 | 8776.7778935922
20 | 0.4998987634 | 0.4999325121 | 0.00003374875558825874 | 0.6666310561 | 13167.8029965013
21 | 0.4999325121 | 0.4999550059 | 0.00002249375282564747 | 0.6665061403 | 19749.0582583161
22 | 0.4999550059 | 0.4999700110 | 0.00001500514206059789 | 0.6670804190 | 29656.2527473931
23 | 0.4999700110 | 0.4999800060 | 0.00000999499746007215 | 0.6661048206 | 44391.7703605403
24 | 0.4999800060 | 0.4999866717 | 0.00000666569396912120 | 0.6669030178 | 66723.6805685757
25 | 0.4999866717 | 0.4999909946 | 0.00000432287884150062 | 0.6485264492 | 97293.1629087970
26 | 0.4999909946 | 0.4999941319 | 0.00000313735547630145 | 0.7257560508 | 167887.2060628857
27 | 0.4999941319 | 0.4999956097 | 0.00000147777639253333 | 0.4710261249 | 150134.7642908100
28 | 0.4999956097 | 0.4999970497 | 0.00000144001474572386 | 0.9744469820 | 659400.8314715016
29 | 0.4999970497 | 0.4999954553 | 0.00000159440131292099 | 1.1072117960 | 768889.2070839963
30 | 0.4999954553 | 0.4999967992 | 0.00000134384518901687 | 0.8428525354 | 528632.6149851314
31 | 0.4999967992 | 0.4999977022 | 0.00000090303564465044 | 0.6719789244 | 500041.9169360077
32 | 0.4999977022 | 0.4999950737 | 0.00000262848010768035 | 2.9107157876 | 3223256.8059490011
33 | 0.4999950737 | 0.4999964081 | 0.00000133432963278501 | 0.5076430401 | 193131.7793302043
34 | 0.4999964081 | 0.4999982008 | 0.00000179273154921056 | 1.3435447322 | 1006906.1641414912
35 | 0.4999982008 | 0.4999939138 | 0.00000428698813359496 | 2.3913162768 | 1333895.3497373802
36 | 0.4999939138 | 0.4999960369 | 0.00000212305952357328 | 0.4952333567 | 115520.1137200339
37 | 0.4999960369 | 0.4999978040 | 0.00000176715078742395 | 0.8323604533 | 392057.0497707436
38 | 0.4999978040 | 0.4999987633 | 0.00000095926573961957 | 0.5428318548 | 307179.1375397388
39 | 0.4999987633 | 0.4999957387 | 0.00000302458381729043 | 3.1530197445 | 3286909.5750255324
40 | 0.4999957387 | 0.4999965029 | 0.00000076425488204634 | 0.2526810061 | 83542.4049666695
41 | 0.4999965029 | 0.4999968812 | 0.00000037826747445457 | 0.4949493727 | 647623.4360170967
42 | 0.4999968812 | 0.4999992591 | 0.00000237792675222837 | 6.2863632557 | 16618831.0658740196
43 | 0.4999992591 | 0.5000161146 | 0.00001685544768070812 | 7.0882955772 | 2980872.1276002550
44 | 0.5000161146 | 0.5000106637 | 0.00000545084237957294 | 0.3233875767 | 19185.9381501386
45 | 0.5000106637 | 0.5000070840 | 0.00000357972935949302 | 0.6567295677 | 120482.2157677659
46 | 0.5000070840 | 0.5000046874 | 0.00000239665618528839 | 0.6695076484 | 187027.4484852393
47 | 0.5000046874 | 0.5000025819 | 0.00000210541208811588 | 0.8784789829 | 366543.5986752341
48 | 0.5000025819 | 0.4999998063 | 0.00000277564977962941 | 1.3183403835 | 626167.3859103953
49 | 0.4999998063 | 0.5003696288 | 0.00036982248520706085 | 133.2381656797 | 48002513.3781396449
50 | 0.5003696288 | 0.5002464495 | 0.00012317931164718132 | 0.3330768587 | 900.6398259087
51 | 0.5002464495 | 0.5001643129 | 0.00008213655673550146 | 0.6668048038 | 5413.2856799283
52 | 0.5001643129 | 0.5001095469 | 0.00005476602556908627 | 0.6667679745 | 8117.7979821256
53 | 0.5001095469 | 0.5000730351 | 0.00003651177553432028 | 0.6666866028 | 12173.3610546054
54 | 0.5000730351 | 0.5000486897 | 0.00002434541746509922 | 0.6667826231 | 18262.1253924256
55 | 0.5000486897 | 0.5000324687 | 0.00001622103474174796 | 0.6662869826 | 27368.0656125717
56 | 0.5000324687 | 0.5000216575 | 0.00001081118876600229 | 0.6664919309 | 41088.1267130889
57 | 0.5000216575 | 0.5000144386 | 0.00000721887681853772 | 0.6677227616 | 61762.1961847271
58 | 0.5000144386 | 0.5000096458 | 0.00000479276611831114 | 0.6639213050 | 91970.1667908346
59 | 0.5000096458 | 0.5000063645 | 0.00000328135142357855 | 0.6846466826 | 142850.0088969091
60 | 0.5000063645 | 0.5000040805 | 0.00000228400611013146 | 0.6960565375 | 212124.9593909390
61 | 0.5000040805 | 0.5000024135 | 0.00000166697042569552 | 0.7298449940 | 319545.9901534105
62 | 0.5000024135 | 0.5000071783 | 0.00000476480902644738 | 2.8583644635 | 1714706.1635887653
63 | 0.5000071783 | 0.5000052033 | 0.00000197501001863998 | 0.4144993026 | 86991.7976422348
64 | 0.5000052033 | 0.5000034947 | 0.00000170858392567474 | 0.8651013967 | 438023.8016830212
65 | 0.5000034947 | 0.5000012221 | 0.00000227260331253643 | 1.3301092667 | 778486.3516206152
66 | 0.5000012221 | 0.4999888330 | 0.00001238911739925852 | 5.4515089945 | 2398794.7938135080
67 | 0.4999888330 | 0.4999925798 | 0.00000374679111453391 | 0.3024259916 | 24410.6163381144
68 | 0.4999925798 | 0.4999951003 | 0.00000252055427996112 | 0.6727234593 | 179546.5609816919
69 | 0.4999951003 | 0.4999962565 | 0.00000115617629375953 | 0.4586992246 | 181983.4741264939
[0.4999965866182223, 0.0]
###Markdown
Solutions Problem: Build a FPI that, given $x$, computes $\displaystyle \frac{1}{x}$
###Code
# We are finding the 1/a
# Solution code:
a = 2.1
g = lambda x: 2*x-a*x**2
gp = lambda x: 2-2*a*x
r=fpi2(g, 0.7, 7, flag_cobweb=True)
print([r,1/a])
# Are we seeing quadratic convergence?
###Output
i | x(i) | x(i+1) ||x(i+1)-x(i)| | e_i/e_{i-1} | e_i/e_{i-1}^2
-----------------------------------------------------------------------------
0 | 0.7000000000 | 0.3710000000 | 0.3290000000 | 0.0000000000 | 0.0000000000
1 | 0.3710000000 | 0.4529539000 | 0.0819539000 | 0.2491000000 | 0.7571428571
2 | 0.4529539000 | 0.4750566054 | 0.0221027054 | 0.2696968100 | 3.2908355795
3 | 0.4750566054 | 0.4761877763 | 0.0011311709 | 0.0511779387 | 2.3154603813
4 | 0.4761877763 | 0.4761904762 | 0.0000026999 | 0.0023867984 | 2.1100246103
5 | 0.4761904762 | 0.4761904762 | 0.0000000000 | 0.0000056698 | 2.1000206476
6 | 0.4761904762 | 0.4761904762 | 0.0000000000 | 0.0000000000 | 0.0000000000
###Markdown
What is this plot telling us?
###Code
xx=np.linspace(0.2,0.8,1000)
plt.figure(figsize=(10,10))
plt.plot(xx,g(xx),'-',label=r'$g(x)$')
plt.plot(xx,gp(xx),'r-',label=r'$gp(x)$')
plt.plot(xx,xx,'g-',label=r'$x$')
plt.plot(xx,0*xx+1,'k--')
plt.plot(xx,0*xx-1,'k--')
plt.legend(loc='best')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
INF285 - Computación Científica Roots of 1D equations [S]cientific [C]omputing [T]eam Version: 1.34 Table of Contents* [Introduction](intro)* [Bisection Method](bisection)* [Fixed Point Iteration and Cobweb diagram](fpi)* [FPI - example from textbook](fpi-textbook-example)* [Newton Method](nm)* [Wilkinson Polynomial](wilkinson)* [Acknowledgements](acknowledgements)* [Extra Examples](extraexamples)
###Code
import numpy as np
import matplotlib.pyplot as plt
import sympy as sym
%matplotlib inline
from ipywidgets import interact
from ipywidgets import widgets
sym.init_printing()
from scipy import optimize
###Output
_____no_output_____
###Markdown
Introduction[Back to TOC](toc)Hello again! In this document we're going to learn how to find a 1D equation's solution using numerical methods. First, let's start with the definition of a root:Definition: The function $f(x)$ has a root in $x = r$ if $f(r) = 0$.An example: Let's say we want to solve the equation $x + \log(x) = 3$. We can rearrange the equation: $x + \log(x) - 3 = 0$. That way, to find its solution we can find the root of $f(x) = x + \log(x) - 3$. Now let's study some numerical methods to solve these kinds of problems. Defining a function $f(x)$
###Code
f = lambda x: x+np.log(x)-3
###Output
_____no_output_____
###Markdown
Finding $r$ using sympy
###Code
y = sym.Symbol('y')
fsym = lambda y: y+sym.log(y)-3
r_all=sym.solve(sym.Eq(fsym(y), 0), y)
r=r_all[0].evalf()
print(r)
print(r_all)
def find_root_manually(r=2.0):
x = np.linspace(1,3,1000)
plt.figure(figsize=(8,8))
plt.plot(x,f(x),'b-')
plt.grid()
plt.ylabel('$f(x)$',fontsize=16)
plt.xlabel('$x$',fontsize=16)
plt.title('What is r such that $f(r)='+str(f(r))+'$? $r='+str(r)+'$',fontsize=16)
plt.plot(r,f(r),'k.',markersize=20)
plt.show()
interact(find_root_manually,r=(1e-5,3,1e-3))
###Output
_____no_output_____
###Markdown
Bisection Method[Back to TOC](toc) The bisection method finds the root of a function $f$, where $f$ is a **continuous** function.If we want to know if this has a root, we have to check if there is an interval $[a,b]$ for which $f(a)\cdot f(b) < 0$. When these 2 conditions are satisfied, it means that there is a value $r$, between $a$ and $b$, for which $f(r) = 0$. To summarize how this method works, start with the aforementioned interval (checking that there's a root in it), and split it into two smaller intervals $[a,c]$ and $[c,b]$. Then, check which of the two intervals contains a root. Keep splitting each "eligible" interval until the algorithm converges or the tolerance is surpassed.
###Code
def bisect(f, a, b, tol=1e-5, maxNumberIterations=100):
fa = f(a)
fb = f(b)
i = 0
# Just checking if the sign is not negative => not root necessarily
if np.sign(f(a)*f(b)) >= 0:
print('f(a)f(b)<0 not satisfied!')
return None
#Printing the evolution of the computation of the root
print(' i | a | c | b | fa | fc | fb | b-a')
print('----------------------------------------------------------------------------------------')
while ((b-a)/2 > tol) and i<=maxNumberIterations:
c = (a+b)/2.
fc = f(c)
print('%2d | %.7f | %.7f | %.7f | %.7f | %.7f | %.7f | %.7f' %
(i+1, a, c, b, fa, fc, fb, b-a))
# Did we find the root?
if fc == 0:
print('f(c)==0')
break
elif np.sign(fa*fc) < 0:
b = c
fb = fc
else:
a = c
fa = fc
i += 1
xc = (a+b)/2.
return xc
# Initial example
f1 = lambda x: x+np.log(x)-3
# A different function, notice that x is multiplied by the exponential now and not added, as before.
f2 = lambda x: x*np.exp(x)-3
# This is the introductory example about Fixed Point Iteration
f3 = lambda x: np.cos(x)-x
bisect(f1,0.5,3)
# Initial example
f1 = lambda x: x+np.log(x)-3
# A different function, notice that x is multiplied by the exponential now and not added, as before.
f2 = lambda x: x*np.exp(x)-3
# This is the introductory example about Fixed Point Iteration
f3 = lambda x: np.cos(x)-x
bisect(f2,0.5,3,tol=1e-5)
###Output
i | a | c | b | fa | fc | fb | b-a
----------------------------------------------------------------------------------------
1 | 0.5000000 | 1.7500000 | 3.0000000 | -2.1756394 | 7.0705547 | 57.2566108 | 2.5000000
2 | 0.5000000 | 1.1250000 | 1.7500000 | -2.1756394 | 0.4652440 | 7.0705547 | 1.2500000
3 | 0.5000000 | 0.8125000 | 1.1250000 | -2.1756394 | -1.1690030 | 0.4652440 | 0.6250000
4 | 0.8125000 | 0.9687500 | 1.1250000 | -1.1690030 | -0.4476837 | 0.4652440 | 0.3125000
5 | 0.9687500 | 1.0468750 | 1.1250000 | -0.4476837 | -0.0177307 | 0.4652440 | 0.1562500
6 | 1.0468750 | 1.0859375 | 1.1250000 | -0.0177307 | 0.2167810 | 0.4652440 | 0.0781250
7 | 1.0468750 | 1.0664062 | 1.0859375 | -0.0177307 | 0.0978261 | 0.2167810 | 0.0390625
8 | 1.0468750 | 1.0566406 | 1.0664062 | -0.0177307 | 0.0396284 | 0.0978261 | 0.0195312
9 | 1.0468750 | 1.0517578 | 1.0566406 | -0.0177307 | 0.0108447 | 0.0396284 | 0.0097656
10 | 1.0468750 | 1.0493164 | 1.0517578 | -0.0177307 | -0.0034689 | 0.0108447 | 0.0048828
11 | 1.0493164 | 1.0505371 | 1.0517578 | -0.0034689 | 0.0036814 | 0.0108447 | 0.0024414
12 | 1.0493164 | 1.0499268 | 1.0505371 | -0.0034689 | 0.0001046 | 0.0036814 | 0.0012207
13 | 1.0493164 | 1.0496216 | 1.0499268 | -0.0034689 | -0.0016825 | 0.0001046 | 0.0006104
14 | 1.0496216 | 1.0497742 | 1.0499268 | -0.0016825 | -0.0007891 | 0.0001046 | 0.0003052
15 | 1.0497742 | 1.0498505 | 1.0499268 | -0.0007891 | -0.0003422 | 0.0001046 | 0.0001526
16 | 1.0498505 | 1.0498886 | 1.0499268 | -0.0003422 | -0.0001188 | 0.0001046 | 0.0000763
17 | 1.0498886 | 1.0499077 | 1.0499268 | -0.0001188 | -0.0000071 | 0.0001046 | 0.0000381
###Markdown
It's very important to define a concept called **convergence rate**. This rate measures how fast a method converges near a given point. The convergence rate for bisection is always 0.5, because the method halves the interval at every iteration. Fixed Point Iteration and Cobweb diagram[Back to TOC](toc) To learn about the Fixed-Point Iteration we will first learn about the concept of a Fixed Point. A Fixed Point of a function $g$ is a real number $r$ such that $g(r) = r$. The Fixed-Point Iteration is based on the Fixed Point concept and finds the root of a function as follows:\begin{align*} x_{0} &= initial\_guess \\ x_{i+1} &= g(x_{i})\end{align*}To find an equation's solution with this method you'll have to rearrange the equation into the form $x = g(x)$. That way, you'll be iterating over the function $g(x)$, but you will **not** be finding a root of $g$ itself; you will be finding a root of $f(x) = g(x) - x$ (or of $f(x) = x - g(x)$). In the following example, we'll find the solution of $f(x) = x - \cos(x)$ by iterating over the function $g(x) = \cos(x)$.
###Code
def cobweb(x,g=None):
min_x = np.amin(x)
max_x = np.amax(x)
plt.figure(figsize=(10,10))
ax = plt.axes()
plt.plot(np.array([min_x,max_x]),np.array([min_x,max_x]),'b-')
for i in np.arange(x.size-1):
delta_x = x[i+1]-x[i]
head_length = np.abs(delta_x)*0.04
arrow_length = delta_x-np.sign(delta_x)*head_length
ax.arrow(x[i], x[i], 0, arrow_length, head_width=1.5*head_length, head_length=head_length, fc='k', ec='k')
ax.arrow(x[i], x[i+1], arrow_length, 0, head_width=1.5*head_length, head_length=head_length, fc='k', ec='k')
if g!=None:
y = np.linspace(min_x,max_x,1000)
plt.plot(y,g(y),'r')
plt.title('Cobweb diagram')
plt.grid(True)
plt.show()
def fpi(g, x0, k, flag_cobweb=False):
x = np.empty(k+1)
x[0] = x0
error_i = np.inf
print(' i | x_i | x_{i+1} ||x_{i+1}-x_i| | e_{i+1}/e_i | e_{i+1}/e_i^2')
print('-----------------------------------------------------------------------------')
for i in range(k):
x[i+1] = g(x[i])
error_iminus1 = error_i
error_i = abs(x[i+1]-x[i])
print('%2d | %.10f | %.10f | %.10f | %.10f | %.10f' %
(i,x[i],x[i+1],error_i,error_i/error_iminus1,error_i/(error_iminus1**2)))
if flag_cobweb:
cobweb(x,g)
return x[-1]
g = lambda x: np.cos(x)
fpi(g, 2, 20, True)
###Output
i | x_i | x_{i+1} ||x_{i+1}-x_i| | e_{i+1}/e_i | e_{i+1}/e_i^2
-----------------------------------------------------------------------------
0 | 2.0000000000 | -0.4161468365 | 2.4161468365 | 0.0000000000 | 0.0000000000
1 | -0.4161468365 | 0.9146533259 | 1.3308001624 | 0.5507944063 | 0.2279639623
2 | 0.9146533259 | 0.6100652997 | 0.3045880261 | 0.2288758558 | 0.1719836398
3 | 0.6100652997 | 0.8196106080 | 0.2095453083 | 0.6879630527 | 2.2586674253
4 | 0.8196106080 | 0.6825058579 | 0.1371047501 | 0.6542964443 | 3.1224580961
5 | 0.6825058579 | 0.7759946131 | 0.0934887552 | 0.6818783095 | 4.9734112712
6 | 0.7759946131 | 0.7137247340 | 0.0622698791 | 0.6660681166 | 7.1245800092
7 | 0.7137247340 | 0.7559287136 | 0.0422039796 | 0.6777591376 | 10.8842211877
8 | 0.7559287136 | 0.7276347923 | 0.0282939213 | 0.6704088465 | 15.8849675651
9 | 0.7276347923 | 0.7467496017 | 0.0191148094 | 0.6755800739 | 23.8772161593
10 | 0.7467496017 | 0.7339005972 | 0.0128490045 | 0.6722015485 | 35.1665315532
11 | 0.7339005972 | 0.7425675503 | 0.0086669531 | 0.6745233116 | 52.4961534769
12 | 0.7425675503 | 0.7367348584 | 0.0058326919 | 0.6729806736 | 77.6490502525
13 | 0.7367348584 | 0.7406662639 | 0.0039314055 | 0.6740293405 | 115.5605938426
14 | 0.7406662639 | 0.7380191412 | 0.0026471227 | 0.6733273142 | 171.2688547776
15 | 0.7380191412 | 0.7398027782 | 0.0017836370 | 0.6738021758 | 254.5413469114
16 | 0.7398027782 | 0.7386015286 | 0.0012012496 | 0.6734832006 | 377.5898286760
17 | 0.7386015286 | 0.7394108086 | 0.0008092800 | 0.6736984720 | 560.8313921735
18 | 0.7394108086 | 0.7388657151 | 0.0005450935 | 0.6735536472 | 832.2875198866
19 | 0.7388657151 | 0.7392329181 | 0.0003672029 | 0.6736512865 | 1235.8453896737
###Markdown
Let's quickly explain the Cobweb Diagram we have here. The blue line is the function $x$ and the red is the function $g(x)$. The point where they meet is $g$'s fixed point. In this particular example, we start at $y = x = 2$ (the top right corner) and then we "jump" **vertically** to $y = \cos(2) \approx -0.42$. After this, we jump **horizontally** to $x = \cos(2) \approx -0.42$. Then, we jump again **vertically** to $y = \cos\left(\cos(2)\right) \approx 0.91$ and so on. See the pattern here? We're just iterating over $x = \cos(x)$, getting closer to the center of the diagram where the fixed point resides, at $x \approx 0.739$. It's very important to mention that the algorithm will converge only if the rate of convergence $S < 1$, where $S = \left| g'(r) \right|$. If you want to use this method, you'll have to construct $g(x)$ starting from $f(x)$ accordingly. In this example, $g(x) = \cos(x) \Rightarrow g'(x) = -\sin(x)$ and $|-\sin(0.739)| \approx 0.67$. Another example. Look at this web page to understand the context: https://divisbyzero.com/2008/12/18/sharkovskys-theorem/amp/?__twitter_impression=true
###Code
# Consider this funtion
g = lambda x: -(3/2)*x**2+(11/2)*x-2
# Here we compute the derivative of it.
gp = lambda x: -3*x+11/2
# We now plot the function itself (red), its derivative (magenta) and the function y=x (blue).
# We also plot the values -1 and 1 with green dashed curves.
# This analysis shows that the fixed point, which is the intersection between the red and blue curves,
# does not generate a convergent fixed-point iteration, since the derivative (magenta curve) has a value
# lower than -1 at the fixed point.
x=np.linspace(2,3,100)
plt.figure(figsize=(8,8))
plt.plot(x,g(x),'r-',label=r'$g(x)$')
plt.plot(x,x,'b-')
plt.plot(x,gp(x),'m-')
plt.plot(x,gp(x)*0+1,'g--')
plt.plot(x,gp(x)*0-1,'g--')
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
What is interesting about the previous example is that it generates a limit cycle! In the next cell we run the fixed-point iteration with initial guess equal to 1. The iteration oscillates, generating the following sequence: 1, 2, 3, 1, 2, 3, .... Which is nice!
###Code
fpi(g, 1, 12, True)
###Output
i | x_i | x_{i+1} ||x_{i+1}-x_i| | e_{i+1}/e_i | e_{i+1}/e_i^2
-----------------------------------------------------------------------------
0 | 1.0000000000 | 2.0000000000 | 1.0000000000 | 0.0000000000 | 0.0000000000
1 | 2.0000000000 | 3.0000000000 | 1.0000000000 | 1.0000000000 | 1.0000000000
2 | 3.0000000000 | 1.0000000000 | 2.0000000000 | 2.0000000000 | 2.0000000000
3 | 1.0000000000 | 2.0000000000 | 1.0000000000 | 0.5000000000 | 0.2500000000
4 | 2.0000000000 | 3.0000000000 | 1.0000000000 | 1.0000000000 | 1.0000000000
5 | 3.0000000000 | 1.0000000000 | 2.0000000000 | 2.0000000000 | 2.0000000000
6 | 1.0000000000 | 2.0000000000 | 1.0000000000 | 0.5000000000 | 0.2500000000
7 | 2.0000000000 | 3.0000000000 | 1.0000000000 | 1.0000000000 | 1.0000000000
8 | 3.0000000000 | 1.0000000000 | 2.0000000000 | 2.0000000000 | 2.0000000000
9 | 1.0000000000 | 2.0000000000 | 1.0000000000 | 0.5000000000 | 0.2500000000
10 | 2.0000000000 | 3.0000000000 | 1.0000000000 | 1.0000000000 | 1.0000000000
11 | 3.0000000000 | 1.0000000000 | 2.0000000000 | 2.0000000000 | 2.0000000000
###Markdown
However, we prefer convergent fixed-point iterations! Here is an interesting way to turn a non-convergent FPI into a convergent one.
###Code
# This is an "avocado" hidden in the code!
a=-1/2.7
g2 = lambda x: x+a*(x-g(x))
fpi(g2, 1, 14, True)
###Output
i | x_i | x_{i+1} ||x_{i+1}-x_i| | e_{i+1}/e_i | e_{i+1}/e_i^2
-----------------------------------------------------------------------------
0 | 1.0000000000 | 1.3703703704 | 0.3703703704 | 0.0000000000 | 0.0000000000
1 | 1.3703703704 | 1.8702941625 | 0.4999237921 | 1.3497942387 | 3.6444444444
2 | 1.8702941625 | 2.3033768846 | 0.4330827222 | 0.8662974818 | 1.7328590786
3 | 2.3033768846 | 2.4540725779 | 0.1506956933 | 0.3479605294 | 0.8034504993
4 | 2.4540725779 | 2.4576349017 | 0.0035623237 | 0.0236391875 | 0.1568670408
5 | 2.4576349017 | 2.4574138249 | 0.0002210768 | 0.0620597109 | 17.4211316367
6 | 2.4574138249 | 2.4574279552 | 0.0000141303 | 0.0639159592 | 289.1120393896
7 | 2.4574279552 | 2.4574270537 | 0.0000009015 | 0.0638009889 | 4515.1789387494
8 | 2.4574270537 | 2.4574271112 | 0.0000000575 | 0.0638083384 | 70777.8849634007
9 | 2.4574271112 | 2.4574271075 | 0.0000000037 | 0.0638078755 | 1109218.2194221437
10 | 2.4574271075 | 2.4574271078 | 0.0000000002 | 0.0638079298 | 17383734.4597040825
11 | 2.4574271078 | 2.4574271078 | 0.0000000000 | 0.0638079618 | 272438601.8737151027
12 | 2.4574271078 | 2.4574271078 | 0.0000000000 | 0.0638297872 | 4271125133.7568655014
13 | 2.4574271078 | 2.4574271078 | 0.0000000000 | 0.0642458101 | 67350420444.0673751831
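###Markdown
A quick sketch of why the trick works (the value of $r$ below is the fixed point reported in the table above): the relaxed map is $g_2(x)=x+a\,(x-g(x))$, so $g_2'(x)=1+a\,(1-g'(x))$ and its linear rate is $S=\left|1+a\,(1-g'(r))\right|$.
###Code
import numpy as np
# Minimal sketch: linear rate of the relaxed iteration g2(x) = x + a*(x - g(x)).
gp = lambda x: -3*x + 11/2       # derivative of g(x) = -(3/2)x**2 + (11/2)x - 2, as defined above
a = -1/2.7
r = 2.4574271078                 # fixed point, taken from the table above
S = np.abs(1 + a*(1 - gp(r)))    # about 0.0638, matching the e_{i+1}/e_i column above
###Output
_____no_output_____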
###Markdown
FPI - example from textbook[Back to TOC](toc) This example is from the textbook. We are trying to find a root of $f(x)=x^3+x-1$.
###Code
# These are the three functions proposed.
g1 = lambda x: 1-x**3
g2 = lambda x: (1-x)**(1/3)
g3 = lambda x: (1+2*x**3)/(1+3*x**2)
# Change the input function to evaluate different functions.
# Are the three functions convergent fixed point iterations?
fpi(g3, 0.75, 10, True)
# This is a "hack" to improve the convergence of g2!
a = -0.6
g4 = lambda x: x+a*(x-g2(x))
fpi(g4, 0.75, 10, True)
# Why does this hack work?
###Output
i | x_i | x_{i+1} ||x_{i+1}-x_i| | e_{i+1}/e_i | e_{i+1}/e_i^2
-----------------------------------------------------------------------------
0 | 0.7500000000 | 0.6779763150 | 0.0720236850 | 0.0000000000 | 0.0000000000
1 | 0.6779763150 | 0.6824480491 | 0.0044717341 | 0.0620869943 | 0.8620357907
2 | 0.6824480491 | 0.6823242405 | 0.0001238086 | 0.0276869349 | 6.1915431709
3 | 0.6823242405 | 0.6823279092 | 0.0000036687 | 0.0296324135 | 239.3404885671
4 | 0.6823279092 | 0.6823278007 | 0.0000001085 | 0.0295782485 | 8062.2187013180
5 | 0.6823278007 | 0.6823278039 | 0.0000000032 | 0.0295798537 | 272587.3459096886
6 | 0.6823278039 | 0.6823278038 | 0.0000000001 | 0.0295798272 | 9215295.8032791298
7 | 0.6823278038 | 0.6823278038 | 0.0000000000 | 0.0295799384 | 311541051.4653884768
8 | 0.6823278038 | 0.6823278038 | 0.0000000000 | 0.0295687236 | 10528180608.3659896851
9 | 0.6823278038 | 0.6823278038 | 0.0000000000 | 0.0294117647 | 354167948047.3809204102
###Markdown
Now that we found the root, let's compute the derivative of each $g(x)$ used previously and understand what exactly was going on.
###Code
g1p = lambda x: -3*x**2
g2p = lambda x: -(1/3)*(1-x)**(-2/3)
g3p = lambda x: ((1+3*x**2)*(6*x**2)-(1+2*x**3)*6*x)/((1+3*x**2)**2)
g4p = lambda x: 1+a*(1-g2p(x))
r=0.6823278038280194
print('What is the conclusion then?')
print([g1p(r), g2p(r), g3p(r), g4p(r)])
# Or it may be better to apply the absolute value.
print(np.abs([g1p(r), g2p(r), g3p(r), g4p(r)]))
###Output
[1.3967137 0.71596635 0. 0.02957981]
###Markdown
Newton's Method[Back to TOC](toc)Newton's method also finds a root of a function $f(x)$, but it requires its derivative, i.e. $f'(x)$. The algorithm is as follows:\begin{align*} x_0 &= \text{initial guess},\\ x_{i+1} &= x_i - \dfrac{f(x_i)}{f'(x_i)}.\end{align*}For roots with multiplicity equal to 1, Newton's method converges quadratically. However, when the multiplicity is larger than 1, it will show linear convergence. Fortunately, if we know the multiplicity of the root, say $m$, we can modify Newton's method as follows:\begin{align*} x_0 &= \text{initial guess},\\ x_{i+1} &= x_i - m\,\dfrac{f(x_i)}{f'(x_i)}.\end{align*}This modified version will also show quadratic convergence!
###Code
def newton_method(f, fp, x0, rel_error=1e-8, m=1, maxNumberIterations=100):
#Initialization of hybrid error and absolute
hybrid_error = 100
error_i = np.inf
print('i | x_i | x_{i+1} | |x_{i+1}-x_i| | e_{i+1}/e_i | e_{i+1}/e_i^2')
print('----------------------------------------------------------------------------------------')
#Iteration counter
i = 1
while (hybrid_error > rel_error and hybrid_error < 1e12 and i<=maxNumberIterations):
#Newton's iteration
x1 = x0-m*f(x0)/fp(x0)
#Checking if root was found
if f(x1) == 0.0:
hybrid_error = 0.0
break
#Computation of hybrid error
hybrid_error = abs(x1-x0)/np.max([abs(x1),1e-12])
#Computation of absolute error
error_iminus1 = error_i
error_i = abs(x1-x0)
#Increasing counter
i += 1
#Showing some info
print("%d | %.10f | %.10f | %.20f | %.10f | %.10f" %
(i, x0, x1, error_i, error_i/error_iminus1, error_i/(error_iminus1**2)))
#Updating solution
x0 = x1
#Checking if solution was obtained
if hybrid_error < rel_error:
return x1
elif i>=maxNumberIterations:
        print("Newton's Method did not converge. Too many iterations!!")
return None
else:
        print("Newton's Method did not converge!")
return None
###Output
_____no_output_____
###Markdown
First example, let's compute a root of $\sin(x)$, near $x_0=3.1$.
###Code
# Example funtion
f = lambda x: np.sin(x)
# The derivative of f
fp = lambda x: np.cos(x)
newton_method(f, fp, 3.1,rel_error=1e-15)
###Output
i | x_i | x_{i+1} | |x_{i+1}-x_i| | e_{i+1}/e_i | e_{i+1}/e_i^2
----------------------------------------------------------------------------------------
2 | 3.1000000000 | 3.1416166546 | 0.04161665458563579278 | 0.0000000000 | 0.0000000000
3 | 3.1416166546 | 3.1415926536 | 0.00002400099584720650 | 0.0005767161 | 0.0138578204
4 | 3.1415926536 | 3.1415926536 | 0.00000000000000444089 | 0.0000000002 | 0.0000077092
5 | 3.1415926536 | 3.1415926536 | 0.00000000000000000000 | 0.0000000000 | 0.0000000000
###Markdown
Now, we will look at an example where Newton's method shows linear convergence.
###Code
f = lambda x: x**2
fp = lambda x: 2*x # the derivative of f
newton_method(f, fp, 3.1, rel_error=1e-1, m=3, maxNumberIterations=10)
###Output
i | x_i | x_{i+1} | |x_{i+1}-x_i| | e_{i+1}/e_i | e_{i+1}/e_i^2
----------------------------------------------------------------------------------------
2 | 3.1000000000 | -1.5500000000 | 4.65000000000000035527 | 0.0000000000 | 0.0000000000
3 | -1.5500000000 | 0.7750000000 | 2.32500000000000017764 | 0.5000000000 | 0.1075268817
4 | 0.7750000000 | -0.3875000000 | 1.16249999999999986677 | 0.5000000000 | 0.2150537634
5 | -0.3875000000 | 0.1937500000 | 0.58124999999999993339 | 0.5000000000 | 0.4301075269
6 | 0.1937500000 | -0.0968750000 | 0.29062499999999996669 | 0.5000000000 | 0.8602150538
7 | -0.0968750000 | 0.0484375000 | 0.14531249999999998335 | 0.5000000000 | 1.7204301075
8 | 0.0484375000 | -0.0242187500 | 0.07265624999999999167 | 0.5000000000 | 3.4408602151
9 | -0.0242187500 | 0.0121093750 | 0.03632812499999999584 | 0.5000000000 | 6.8817204301
10 | 0.0121093750 | -0.0060546875 | 0.01816406249999999792 | 0.5000000000 | 13.7634408602
11 | -0.0060546875 | 0.0030273437 | 0.00908203124999999896 | 0.5000000000 | 27.5268817204
Newton's Method did not converge. Too many iterations!!
###Markdown
Recall that applying the iteration above with factor $m$ to a root of multiplicity $q$ yields a linear rate $S=\left|1-\dfrac{m}{q}\right|$ (for plain Newton, $m=1$, this is the familiar $S=\dfrac{q-1}{q}$). The table above was obtained with $m=3$ and shows $S=0.5$, consistent with $q=2$: indeed $f(x)=x^2$ has a double root at $0$, so using $m=2$ below recovers quadratic convergence.
###Code
newton_method(f, fp, 3.1, rel_error=1e-1, m=2, maxNumberIterations=10)
###Output
i | x_i | x_{i+1} | |x_{i+1}-x_i| | e_{i+1}/e_i | e_{i+1}/e_i^2
----------------------------------------------------------------------------------------
###Markdown
Wilkinson Polynomial[Back to TOC](toc)https://en.wikipedia.org/wiki/Wilkinson%27s_polynomial**Final question: Why is the root far far away from $16$?**
###Code
x = sym.symbols('x', reals=True)
W=1
for i in np.arange(1,21):
W*=(x-i)
W # Printing W nicely
# Expanding the Wilkinson polynomial
We=sym.expand(W)
We
# Just computing the derivative
Wep=sym.diff(We,x)
Wep
# Lambdifying the polynomial so it can be evaluated numerically
P=sym.lambdify(x,We)
Pp=sym.lambdify(x,Wep)
# Using scipy function to compute a root
root = optimize.newton(P,16)
print(root)
newton_method(P, Pp, 16.01, rel_error=1e-10, maxNumberIterations=10)
###Output
i | x_i | x_{i+1} | |x_{i+1}-x_i| | e_{i+1}/e_i | e_{i+1}/e_i^2
----------------------------------------------------------------------------------------
2 | 16.0100000000 | 16.0425006915 | 0.03250069153185108917 | 0.0000000000 | 0.0000000000
3 | 16.0425006915 | 16.0050204647 | 0.03748022678450269041 | 1.1532132093 | 35.4827283637
4 | 16.0050204647 | 16.0078597186 | 0.00283925384213290499 | 0.0757533795 | 2.0211558458
5 | 16.0078597186 | 15.9851271041 | 0.02273261449981944793 | 8.0065452981 | 2819.9469801818
6 | 15.9851271041 | 16.0029892675 | 0.01786216337692181355 | 0.7857505074 | 34.5648982597
7 | 16.0029892675 | 16.0136315293 | 0.01064226179553884322 | 0.5957991522 | 33.3553746866
8 | 16.0136315293 | 16.0001980846 | 0.01343344470737761753 | 1.2622734683 | 118.6095110764
9 | 16.0001980846 | 16.0178167527 | 0.01761866814365831146 | 1.3115525115 | 97.6333725295
10 | 16.0178167527 | 16.0287977529 | 0.01098100018750258755 | 0.6232593802 | 35.3749429376
11 | 16.0287977529 | 16.0248062796 | 0.00399147331818383577 | 0.3634890493 | 33.1016340099
Newton's Method did not converge. Too many iterations!!
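###Markdown
A sketch that hints at the answer to the final question (an illustration, not a full conditioning analysis): the expanded form of the polynomial is evaluated with massive cancellation in double precision, whereas the product form $(x-1)(x-2)\cdots(x-20)$ is benign. Comparing both near $x=16$ shows that the values Newton's method actually sees are dominated by rounding error, which is why the iterates above wander around $16$ instead of settling on the root.
###Code
import numpy as np
import sympy as sym
# Minimal sketch: evaluating Wilkinson's polynomial near x = 16 in two ways.
x = sym.symbols('x')
W = 1
for i in np.arange(1, 21):
    W *= (x - i)
P_product  = sym.lambdify(x, W)              # product form: (x-1)(x-2)...(x-20)
P_expanded = sym.lambdify(x, sym.expand(W))  # expanded form: the one used above
xs = 16 + np.linspace(-1e-6, 1e-6, 5)
vals_product  = [P_product(xi) for xi in xs]   # smooth values, changing sign only at x = 16
vals_expanded = [P_expanded(xi) for xi in xs]  # rounding error in the huge intermediate terms dominates
###Output
_____no_output_____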
###Markdown
Acknowledgements[Back to TOC](toc)* _Material created by professor Claudio Torres_ (`[email protected]`) _and assistants: Laura Bermeo, Alvaro Salinas, Axel Simonsen and Martín Villanueva. DI UTFSM. March 2016._ v1.1.* _Update April 2020 - v1.32 - C.Torres_ : Re-ordering the notebook.* _Update April 2021 - v1.33 - C.Torres_ : Updating format and re-re-ordering the notebook. Adding 'maxNumberIterations' to bisection, fpi and Newton's method. Adding more explanations.* _Update April 2021 - v1.33 - C.Torres_ : Updating description and solution of 'Proposed classwork'. Extra examples[Back to TOC](toc) Proposed Classwork 1. Build a FPI that, given $a$, computes $\displaystyle \frac{1}{a}$. The constraint is that you can't use a division in the 'final' FPI. Write down your solution below or go and see the [solution](sol1). _Hint: I strongly suggest using Newton's method._
###Code
print('Please try to solve it before you see the solution!!!')
###Output
Please try to solve it before you see the solution!!!
###Markdown
2. Build an algorithm that computes $\log(x_i)$ for $x_i=0.1\,i+0.5$, for $i\in\{0,1,2,\dots,10\}$. The only special function available is $\exp(x)$, in particular use _np.exp(x)_. You can also use $*$, $÷$, $+$, and $-$. It would be nice to use the result from the previous example to replace $÷$. In class Which function shows quadratic convergence? Why?
###Code
g1 = lambda x: (4./5.)*x+1./x
g2 = lambda x: x/2.+5./(2*x)
g3 = lambda x: (x+5.)/(x+1)
fpi(g1, 3.0, 10, True)
###Output
i | x_i | x_{i+1} ||x_{i+1}-x_i| | e_{i+1}/e_i | e_{i+1}/e_i^2
-----------------------------------------------------------------------------
0 | 3.0000000000 | 2.7333333333 | 0.2666666667 | 0.0000000000 | 0.0000000000
1 | 2.7333333333 | 2.5525203252 | 0.1808130081 | 0.6780487805 | 2.5426829268
2 | 2.5525203252 | 2.4337859123 | 0.1187344129 | 0.6566696394 | 3.6317610455
3 | 2.4337859123 | 2.3579112134 | 0.0758746990 | 0.6390287123 | 5.3820008621
4 | 2.3579112134 | 2.3104331500 | 0.0474780634 | 0.6257430215 | 8.2470577167
5 | 2.3104331500 | 2.2811657946 | 0.0292673554 | 0.6164395368 | 12.9836706220
6 | 2.2811657946 | 2.2633049812 | 0.0178608134 | 0.6102639994 | 20.8513543865
7 | 2.2633049812 | 2.2524757347 | 0.0108292465 | 0.6063131795 | 33.9465604055
8 | 2.2524757347 | 2.2459365357 | 0.0065391990 | 0.6038461667 | 55.7606814775
9 | 2.2459365357 | 2.2419977848 | 0.0039387509 | 0.6023292551 | 92.1105557855
###Markdown
Building a FPI to compute the cubic root of 7
###Code
# What is 'a'? Can we find another 'a'?
a = -3*(1.7**2)
print(a)
f = lambda x: x**3-7
g = lambda x: f(x)/a+x
r=fpi(g, 1.7, 14, True)
print(f(r))
###Output
i | x_i | x_{i+1} ||x_{i+1}-x_i| | e_{i+1}/e_i | e_{i+1}/e_i^2
-----------------------------------------------------------------------------
0 | 1.7000000000 | 1.9407151096 | 0.2407151096 | 0.0000000000 | 0.0000000000
1 | 1.9407151096 | 1.9050217836 | 0.0356933259 | 0.1482803717 | 0.6159994359
2 | 1.9050217836 | 1.9149952799 | 0.0099734962 | 0.2794218792 | 7.8284069055
3 | 1.9149952799 | 1.9123789078 | 0.0026163720 | 0.2623324846 | 26.3029612960
4 | 1.9123789078 | 1.9130779941 | 0.0006990863 | 0.2671968386 | 102.1249403998
5 | 1.9130779941 | 1.9128920879 | 0.0001859062 | 0.2659273937 | 380.3927776292
6 | 1.9128920879 | 1.9129415886 | 0.0000495007 | 0.2662670485 | 1432.2655048556
7 | 1.9129415886 | 1.9129284127 | 0.0000131759 | 0.2661767579 | 5377.2324999715
8 | 1.9129284127 | 1.9129319201 | 0.0000035074 | 0.2662008017 | 20203.5604717720
9 | 1.9129319201 | 1.9129309865 | 0.0000009337 | 0.2661944019 | 75894.1168915227
10 | 1.9129309865 | 1.9129312350 | 0.0000002485 | 0.2661961056 | 285109.6870567193
11 | 1.9129312350 | 1.9129311689 | 0.0000000662 | 0.2661956523 | 1071049.4838517620
12 | 1.9129311689 | 1.9129311865 | 0.0000000176 | 0.2661957723 | 4023544.1768357521
13 | 1.9129311865 | 1.9129311818 | 0.0000000047 | 0.2661957414 | 15114979.7521125861
###Markdown
Playing with some roots The following example proposes a particular function $f(x)$. The idea here is to first obtain an initial guess for Newton's method from the plot in semilogy scale. The plot of $f(x)$ (blue) shows that there seem to be 2 roots in the interval plotted. Now, the plot of $f'(x)$ (magenta) indicates that the derivative may have a 0 at one of those roots, which means that the multiplicity of that root may be higher than 1. How can we find that out?
###Code
f = lambda x: 8*x**4-12*x**3+6*x**2-x
fp = lambda x: 32*x**3-36*x**2+12*x-1
x = np.linspace(-1,1,10000)
plt.figure(figsize=(10,10))
plt.title('What are we seeing with the semilogy plot? Is this function differentiable?')
plt.semilogy(x,np.abs(f(x)),'b-',label=r'$|f(x)|$')
plt.semilogy(x,np.abs(fp(x)),'m-',label=r'$|fp(x)|$')
plt.grid()
plt.legend()
plt.xlabel(r'$x$',fontsize=16)
plt.show()
r=newton_method(f, fp, 0.4, rel_error=1e-8, m=1)
print([r,f(r)])
# Is this showing quadratic convergence? If not, can you fix it?
###Output
i | x_i | x_{i+1} | |x_{i+1}-x_i| | e_{i+1}/e_i | e_{i+1}/e_i^2
----------------------------------------------------------------------------------------
2 | 0.4000000000 | 0.4363636364 | 0.03636363636363626473 | 0.0000000000 | 0.0000000000
3 | 0.4363636364 | 0.4586595886 | 0.02229595222295999157 | 0.6131386861 | 16.8613138686
4 | 0.4586595886 | 0.4728665654 | 0.01420697676575599644 | 0.6371998210 | 28.5791705436
5 | 0.4728665654 | 0.4820874099 | 0.00922084451765831092 | 0.6490363622 | 45.6843403691
6 | 0.4820874099 | 0.4881331524 | 0.00604574254164202962 | 0.6556603932 | 71.1063278311
7 | 0.4881331524 | 0.4921210847 | 0.00398793230801725018 | 0.6596265522 | 109.1059613639
8 | 0.4921210847 | 0.4947614808 | 0.00264039610890853815 | 0.6620965214 | 166.0250150478
9 | 0.4947614808 | 0.4965138385 | 0.00175235769376258510 | 0.6636722755 | 251.3533000745
10 | 0.4965138385 | 0.4976786184 | 0.00116477990795793573 | 0.6646930088 | 379.3135449254
11 | 0.4976786184 | 0.4984536173 | 0.00077499883014725546 | 0.6653607474 | 571.2330225492
12 | 0.4984536173 | 0.4989696118 | 0.00051599450909012301 | 0.6658003716 | 859.0985504567
13 | 0.4989696118 | 0.4993133111 | 0.00034369932075445364 | 0.6660910430 | 1290.8878511100
14 | 0.4993133111 | 0.4995423124 | 0.00022900128606345715 | 0.6662837900 | 1938.5659202606
15 | 0.4995423124 | 0.4996949214 | 0.00015260904304309486 | 0.6664112926 | 2910.0766376211
16 | 0.4996949214 | 0.4997966348 | 0.00010171339535092194 | 0.6664965150 | 4367.3461398087
17 | 0.4997966348 | 0.4998644327 | 0.00006779791883326780 | 0.6665584076 | 6553.3001358038
18 | 0.4998644327 | 0.4999096253 | 0.00004519259103091811 | 0.6665778509 | 9831.8335188516
19 | 0.4999096253 | 0.4999397542 | 0.00003012888079562126 | 0.6666774378 | 14751.9189004836
20 | 0.4999397542 | 0.4999598336 | 0.00002007941249576595 | 0.6664506601 | 22119.9939244004
21 | 0.4999598336 | 0.4999732282 | 0.00001339460597526987 | 0.6670815682 | 33222.1656531949
22 | 0.4999732282 | 0.4999821551 | 0.00000892684249254039 | 0.6664505480 | 49755.1439166074
23 | 0.4999821551 | 0.4999881259 | 0.00000597078794178918 | 0.6688577677 | 74926.5788229439
24 | 0.4999881259 | 0.4999921287 | 0.00000400283269313961 | 0.6704027562 | 112280.4498807474
25 | 0.4999921287 | 0.4999947419 | 0.00000261325394618206 | 0.6528511548 | 163097.2875577784
26 | 0.4999947419 | 0.4999967498 | 0.00000200786547838172 | 0.7683392122 | 294016.2831700209
[0.49999718771149326, 0.0]
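###Markdown
The ratio $e_{i+1}/e_i$ above stabilizes around $2/3=(m-1)/m$, which points to a root of multiplicity $m=3$. One way to confirm it is to factor $f$ symbolically (a minimal sketch):
###Code
import sympy as sym
# Minimal sketch: factoring f reveals the multiplicity of the root near 0.5.
x = sym.symbols('x')
f_sym = 8*x**4 - 12*x**3 + 6*x**2 - x
f_factored = sym.factor(f_sym)   # x*(2*x - 1)**3: the root x = 1/2 has multiplicity 3
# Re-running newton_method with m=3 should recover quadratic convergence.
###Output
_____no_output_____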
###Markdown
Solutions Problem: Build a FPI that, given $a$, computes $\displaystyle \frac{1}{a}$
###Code
# We are finding the 1/a
# Solution code:
a = 2.1
g = lambda x: 2*x-a*x**2
gp = lambda x: 2-2*a*x
r=fpi(g, 0.7, 7, flag_cobweb=False)
print('Reciprocal found :',r)
print('Reciprocal computed explicitly: ', 1/a)
# Are we seeing quadratic convergence?
###Output
i | x_i | x_{i+1} ||x_{i+1}-x_i| | e_{i+1}/e_i | e_{i+1}/e_i^2
-----------------------------------------------------------------------------
0 | 0.7000000000 | 0.3710000000 | 0.3290000000 | 0.0000000000 | 0.0000000000
1 | 0.3710000000 | 0.4529539000 | 0.0819539000 | 0.2491000000 | 0.7571428571
2 | 0.4529539000 | 0.4750566054 | 0.0221027054 | 0.2696968100 | 3.2908355795
3 | 0.4750566054 | 0.4761877763 | 0.0011311709 | 0.0511779387 | 2.3154603813
4 | 0.4761877763 | 0.4761904762 | 0.0000026999 | 0.0023867984 | 2.1100246103
5 | 0.4761904762 | 0.4761904762 | 0.0000000000 | 0.0000056698 | 2.1000206476
6 | 0.4761904762 | 0.4761904762 | 0.0000000000 | 0.0000000000 | 0.0000000000
Reciprocal found : 0.47619047619047616
Reciprocal computed explicitly: 0.47619047619047616
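###Markdown
One way to arrive at this $g$, following the hint in the classwork statement, is to apply Newton's method to $f(x)=\frac{1}{x}-a$, whose root is $r=\frac{1}{a}$:\begin{align*} x_{i+1} &= x_i - \frac{f(x_i)}{f'(x_i)} = x_i - \frac{\frac{1}{x_i}-a}{-\frac{1}{x_i^2}} = x_i + x_i^2\left(\frac{1}{x_i}-a\right) = 2x_i - a\,x_i^2,\end{align*}so the final iteration needs no division at all. Being Newton's method at a simple root, it converges quadratically, which is consistent with the column $e_{i+1}/e_i^2$ above approaching $a=2.1$.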
###Markdown
What is this plot telling us? This plot shows that, even if we don't know the exact value of $g'(r)$, we can determine whether the FPI will converge by looking at the plot. Observing the curve of $g'(x)$ (magenta), we can see that $|g'(r)|$ will be less than 1, since near the fixed point the magenta curve lies between the black dashed lines located at $y=-1$ and $y=1$.
###Code
xx=np.linspace(0.2,0.8,1000)
plt.figure(figsize=(10,10))
plt.plot(xx,g(xx),'r-',label=r'$g(x)$')
plt.plot(xx,gp(xx),'m-',label=r'$gp(x)$')
plt.plot(xx,xx,'b-',label=r'$x$')
plt.plot(xx,0*xx+1,'k--')
plt.plot(xx,0*xx-1,'k--')
plt.legend(loc='best')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
INF285 - Computación Científica Roots of 1D equations [S]cientific [C]omputing [T]eam Version: 1.37 Table of Contents* [Introduction](intro)* [Bisection Method](bisection)* [Fixed Point Iteration and Cobweb diagram](fpi)* [FPI - example from textbook](fpi-textbook-example)* [Newton Method](nm)* [Wilkinson Polynomial](wilkinson)* [Acknowledgements](acknowledgements)* [Extra Examples](extraexamples)
###Code
import numpy as np
import matplotlib.pyplot as plt
import sympy as sym
%matplotlib inline
from ipywidgets import interact
from ipywidgets import widgets
sym.init_printing()
from scipy import optimize
import pandas as pd
pd.set_option("display.colheader_justify","center")
pd.options.display.float_format = '{:.10f}'.format
from colorama import Fore, Back, Style
# https://pypi.org/project/colorama/
# Fore: BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE, RESET.
# Back: BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE, RESET.
# Style: DIM, NORMAL, BRIGHT, RESET_ALL
textBold = lambda x: Style.BRIGHT+x+Style.RESET_ALL
textBoldH = lambda x: Style.BRIGHT+Back.YELLOW+x+Style.RESET_ALL
###Output
_____no_output_____
###Markdown
Introduction[Back to TOC](toc)In this document we're going to study how to find roots of 1D equations using numerical methods. First, let's start with the definition of a root: Definition: The function $f(x)$ has a root at $x = r$ if $f(r) = 0$. Example: Let's say we want to solve the equation $r + \log(r) = 3$. We can re-arrange the equation as follows: $r + \log(r) - 3 = 0$. Thus, solving the previous equation is equivalent to finding a root of $f(x) = x + \log(x) - 3$. This example shows how we can translate an equation into a root-finding problem! We will now study several numerical methods to find roots. We will start by defining a function $f(x)$ using a __lambda__ definition.
###Code
f = lambda x: x+np.log(x)-3
###Output
_____no_output_____
###Markdown
Notice that we have used the NumPy implementation for the logarithmic function. _**Quick question**: what is the base for this implementation? Is it the natural logarithm or logarithm base 10?_ But before we start working on a numerical implementation, we should always consider solving the problem algebraically first. This can be done with SymPy, i.e. using symbolic computation. For instance, we will start by defining a symbolic variable:
###Code
# Definition of symbolic variable
x = sym.Symbol('x')
# Defining 'symbolic' function
fsym = lambda x: x+sym.log(x)-3
# Finding the root 'symbolically' and obtaining the only root
r=sym.solve(sym.Eq(fsym(x), 0), x)[0]
print(textBoldH('Root obtained:'),r)
print(textBoldH('Numerical root:'),r.evalf())
###Output
[1m[43mRoot obtained:[0m LambertW(exp(3))
[1m[43mNumerical root:[0m 2.20794003156932
###Markdown
**Lambert W function**: https://en.wikipedia.org/wiki/Lambert_W_function We will now obtain the root 'manually'. This is not a recommended path, but it is useful initially.
###Code
def find_root_manually(r=2.0):
# Defining a vector to evaluate f(x) in a vectorized fashion
x = np.linspace(1,3,1000)
    # Creating the figure
plt.figure(figsize=(8,8))
# Plotting the function in a vectorized way
plt.plot(x,f(x),'b-')
# Plotting the x-axis.
# Quick question: Why do we have to multiply 'x' by '0'? What would happen if we only put '0' instead of 'x*0'?
plt.plot(x,x*0,'r--')
# Adding the background grid to the plot. We strongly recommend it!
plt.grid(True)
# Just adding labels.
plt.ylabel('$f(x)$',fontsize=16)
plt.xlabel('$x$',fontsize=16)
plt.title('$r='+str(r)+',\, f(r)='+str(f(r))+'$',fontsize=16)
plt.plot(r,f(r),'k.',markersize=20)
plt.show()
interact(find_root_manually,r=(1,3,1e-3))
###Output
_____no_output_____
###Markdown
Bisection Method[Back to TOC](toc) The bisection method finds the root of a function $f$. It requires that:1. $f$ be a **continuous** function.2. An interval $[a,b]$ such that $f(a)\cdot f(b) < 0$.If these 2 conditions are satisfied, it means that there is a value $r$, between $a$ and $b$, for which $f(r) = 0$.To summarize how this method works, start with the aforementioned interval (checking that there's a root in it), and split it into two smaller intervals $[a,c]$ and $[c,b]$. Then, check which of the two intervals contains a root. Keep splitting each "eligible" interval until the algorithm converges or the tolerance is achieved.
###Code
def bisect(f, a, b, tol=1e-5, maxNumberIterations=100):
# Evaluating the extreme points of the interval provided
fa = f(a)
fb = f(b)
# Iteration counter.
i = 0
    # Checking the sign condition: if f(a)*f(b) >= 0, a root is not guaranteed in [a,b]
if np.sign(f(a)*f(b)) >= 0:
print('f(a)f(b)<0 not satisfied!')
return None
# Output table to store the numerical evolution of the algorithm
output_table = []
    # Main loop: it will iterate until one of the two stopping criteria is satisfied:
    # the tolerance 'tol' is achieved or the max number of iterations is reached.
while ((b-a)/2 > tol) and i<=maxNumberIterations:
# Obtaining the midpoint of the interval. Quick question: What could happen if a different point is used?
c = (a+b)/2.
# Evaluating the mid point
fc = f(c)
# Saving the output data
output_table.append([i, a, c, b, fa, fc, fb, b-a])
# Did we find the root?
if fc == 0:
print('f(c)==0')
break
elif np.sign(fa*fc) < 0:
            # This first case considers that the new interval is defined by [a,c]
b = c
fb = fc
else:
            # This second case considers that the new interval is defined by [c,b]
a = c
fa = fc
# Increasing the iteration counter
i += 1
# Showing final output table
columns = ['$i$', '$a_i$', '$c_i$', '$b_i$', '$f(a_i)$', '$f(c_i)$', '$f(b_i)$', '$b_i-a_i$']
df = pd.DataFrame(data=output_table, columns=columns)
display(df)
    # Computing the best approximation obtained for the root, which is the midpoint of the final interval.
xc = (a+b)/2.
return xc
# Initial example
f1 = lambda x: x+np.log(x)-3
# A different function, notice that x is multiplied to the exponential now and not added, as before.
f2 = lambda x: x*np.exp(x)-3
# This is the introductory example about Fixed Point Iteration
f3 = lambda x: np.cos(x)-x
bisect(f1,1e-10,3) # Recall to change the 'tol'!
###Output
_____no_output_____
###Markdown
It's very important to define a concept called **convergence rate**. This rate shows how fast the convergence of a method is at a specified point. The convergence rate for bisection is always 0.5 because this method halves the interval at each iteration. In this particular case we observe $e_{i+1} \approx \dfrac{e_{i}}{2}$, why? where? Fixed Point Iteration and Cobweb diagram[Back to TOC](toc) To learn about the Fixed-Point Iteration we will first learn about the concept of a Fixed Point. A Fixed Point of a function $g$ is a real number $r$ such that $g(r) = r$. The Fixed-Point Iteration is based on the Fixed Point concept and works like this to find the root of a function:\begin{align*} x_{0} &= initial\_guess \\ x_{i+1} &= g(x_{i})\end{align*}To find an equation's root using this method, we have to rearrange the equation into the form $x = g(x)$. For example, if we want to obtain the root of $f(r)=0$, one could add a convenient zero this way, $f(r)+r=r$, i.e. we add $r$ on both sides. This way we have $g(r)=r+f(r)$ and the fixed-point iteration can be performed. In the following example, we'll find the root of $f(x) = x - \cos(x)$ by iterating over the function $g(x) = \cos(x)$.
###Code
# Just plotting the Cobweb diagram: https://en.wikipedia.org/wiki/Cobweb_plot
def cobweb(x,g=None):
min_x = np.amin(x)
max_x = np.amax(x)
plt.figure(figsize=(10,10))
ax = plt.axes()
plt.plot(np.array([min_x,max_x]),np.array([min_x,max_x]),'b-')
for i in np.arange(x.size-1):
delta_x = x[i+1]-x[i]
head_length = np.abs(delta_x)*0.04
arrow_length = delta_x-np.sign(delta_x)*head_length
ax.arrow(x[i], x[i], 0, arrow_length, head_width=1.5*head_length, head_length=head_length, fc='k', ec='k')
ax.arrow(x[i], x[i+1], arrow_length, 0, head_width=1.5*head_length, head_length=head_length, fc='k', ec='k')
if g!=None:
y = np.linspace(min_x,max_x,1000)
plt.plot(y,g(y),'r')
plt.title('Cobweb diagram')
plt.grid(True)
plt.show()
# This code performs the fixed point iteration.
def fpi(g, x0, k, flag_cobweb=False):
    # This is where we store all the approximations,
# this is technically not needed but we store them because we need them for the cobweb diagram at the end.
x = np.empty(k+1)
# Just starting the fixed point iteration from the 'initial guess'
x[0] = x0
# Initializing the error in NaN
error_i = np.nan
# Output table to store the numerical evolution of the algorithm
output_table = []
# Main loop
for i in range(k):
# Iteration
x[i+1] = g(x[i])
# Storing error from previous iteration
error_iminus1 = error_i
# Computing error for current iteration.
# Notice that from the theory we need to compute e_i=|x_i-r|, i.e. we need the root 'r'
# but we don't have it, so we approximate it by 'x_{i+1}'.
error_i = abs(x[i]-x[i+1])
output_table.append([i,x[i],x[i+1],error_i,error_i/error_iminus1,error_i/(error_iminus1**((1+np.sqrt(5))/2.)),error_i/(error_iminus1**2)])
# Showing final output table
columns = ['$i$', '$x_i$', '$x_{i+1}$', '$e_i$', r'$\frac{e_i}{e_{i-1}}$', r'$\frac{e_i}{e_{i-1}^\alpha}$', r'$\frac{e_i}{e_{i-1}^2}$']
df = pd.DataFrame(data=output_table, columns=columns)
display(df)
# Just showing cobweb if required
if flag_cobweb:
cobweb(x,g)
return x[-1]
# First example
g = lambda x: np.cos(x)
# Examples from classnotes
g1 = lambda x: -(3/2)*x+5/2
g2 = lambda x: -(1/2)*x+3/2
fpi(g, 1.1, 20, True)
# Suggestions:
# 1.- A very useful and simple 'limit cycle' example! Try it. Credit: anonymous student from class 20210922.
# fpi(lambda x: -x, 2, 10, True)
# 2.- Try the next example. Why do we see 1.0000 for e_{i+1}/e_i over 90 iterations?
# fpi(g, 1, 100, True)
# 3.- The following fixed-point iteration obtain sqrt(2). Credits: Anonymous student from class 20210921 and 20210922.
# gD = lambda x: (x+2)/(x+1)
# fpi(gD, 1, 10, True)
###Output
_____no_output_____
###Markdown
Let's quickly explain the Cobweb Diagram we have here. The blue line is the function $y=x$ and the red is the function $y=g(x)$. The point at which they meet is $r=g(r)$, i.e. the fixed point. In this particular example, we start at $y = x = 1.5$ (the top right corner) and then we "jump" **vertically** to $y = \cos(1.5) \approx 0.07$. After this, we jump **horizontally** to $x = \cos(1.5) \approx 0.07$. Then, we jump again **vertically** to $y = \cos\left(\cos(1.5)\right) \approx 0.997$ and so on. See the pattern here? We're just iterating over $x = \cos(x)$, getting closer to the center of the diagram where the fixed point resides, at $x \approx 0.739$. It's very important to mention that the algorithm will converge only if the rate of convergence $S < 1$, where $S = \left| g'(r) \right|$. If you want to use this method, you'll have to construct $g(x)$ starting from $f(x)$ accordingly. In this example, $g(x) = \cos(x) \Rightarrow g'(x) = -\sin(x)$ and $|-\sin(0.739)| \approx 0.67$.**Quick question:** Do you see the value 0.67 in the previous table? Another example. Look at this web page to understand the context: https://divisbyzero.com/2008/12/18/sharkovskys-theorem/amp/?__twitter_impression=true
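As a quick numerical sanity check (a small sketch; the value 0.739085 is just the approximate fixed point quoted above):

```python
# S = |g'(r)| = |-sin(r)| at the approximate fixed point of cos(x).
# The result should be about 0.67, matching the e_i/e_{i-1} column in the table above.
r_approx = 0.739085
print(np.abs(-np.sin(r_approx)))
```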
###Code
# Consider this function
g = lambda x: -(3/2)*x**2+(11/2)*x-2
# Here we compute the derivative of it.
gp = lambda x: -3*x+11/2
# We now plot the function itself (red), its derivative (magenta) and the function y=x (blue).
# We also plot the values -1 and 1 with green dashed curves.
# This analysis shows that the fixed point, which is the intersection between the red and blue curves,
# does not generate a convergent fixed-point iteration since the derivative (magenta curve) has a value
# lower than -1 around the fixed point.
x=np.linspace(2,3,100)
plt.figure(figsize=(8,8))
plt.plot(x,g(x),'r-',label=r'$g(x)$')
plt.plot(x,x,'b-')
plt.plot(x,gp(x),'m-')
plt.plot(x,gp(x)*0+1,'g--')
plt.plot(x,gp(x)*0-1,'g--')
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
What is interesting about the previous example is that it generates a limit cycle! In the next cell we evaluate the fixed-point iteration with initial guess equal to 1. The iteration oscillates, generating the following sequence: 1, 2, 3, 1, 2, 3, ... Which is nice!
###Code
fpi(g, 1, 12, True)
# Suggestion, try the following alternative.
# fpi(g, 2.5, 100, True)
###Output
_____no_output_____
###Markdown
However, we prefer **convergent** fixed-point iterations! Here is an interesting way to turn a non-convergent FPI into a convergent one.
###Code
# This is a "palta" hidden in the code! Think about it. Quick question: what is it doing?
a=-1/(1-(-1.72)) # a = -1 / (1 - g'(r)), where does "a" come from?
g2 = lambda x: x+a*(x-g(x))
fpi(g2, 1, 14, True)
###Output
_____no_output_____
###Markdown
FPI - example from textbook[Back to TOC](toc) This example is from the textbook. We are trying to find a root of $f(x)=x^3+x-1$.
###Code
# These are the three functions proposed.
g1 = lambda x: 1-x**3
g2 = lambda x: (1-x)**(1/3)
g3 = lambda x: (1+2*x**3)/(1+3*x**2)
# Change the input function to evaluate different functions.
# Are the three functions convergent fixed point iterations?
fpi(g3, 0.75, 10, True)
# This is a "hack" to improve the convergence of g2!
a = -0.6
g4 = lambda x: x+a*(x-g2(x))
fpi(g4, 0.75, 10, True)
# Why does this hack work?
###Output
_____no_output_____
###Markdown
Now that we have found the root, let's compute the derivative of each $g(x)$ used previously and understand what exactly was going on.
###Code
g1p = lambda x: -3*x**2
g2p = lambda x: -(1/3)*(1-x)**(-2/3)
g3p = lambda x: ((1+3*x**2)*(6*x**2)-(1+2*x**3)*6*x)/((1+3*x**2)**2)
g4p = lambda x: 1+a*(1-g2p(x))
r=0.6823278038280194
print('What is the conclusion then?')
print([g1p(r), g2p(r), g3p(r), g4p(r)])
# Or it may be better to apply the absolute value.
print(np.abs([g1p(r), g2p(r), g3p(r), g4p(r)]))
###Output
[1.3967137 0.71596635 0. 0.02957981]
###Markdown
Newton's Method[Back to TOC](toc)Newton's method also finds a root of a function $f(x)$, but it requires its derivative, i.e. $f'(x)$.The algorithm is as follows:\begin{align*} x_0 &= \text{initial guess},\\ x_{i+1} &= x_i - \dfrac{f(x_i)}{f'(x_i)}.\end{align*}For roots with multiplicity equal to 1, Newton's method converges quadratically. However, when the multiplicity is larger than 1, it will show linear convergence. Fortunately, if we know the multiplicity of the root, say $m$, we can modify Newton's method as follows:\begin{align*} x_0 &= \text{initial guess},\\ x_{i+1} &= x_i - m\,\dfrac{f(x_i)}{f'(x_i)}.\end{align*}This modified version will also show quadratic convergence!
###Code
def newton_method(f, fp, x0, rel_error=1e-8, m=1, maxNumberIterations=100):
#Initialization of hybrid error and absolute
hybrid_error = 100
error_i = np.inf
#print('i | x_i | x_{i+1} | |x_{i+1}-x_i| | e_{i+1}/e_i | e_{i+1}/e_i^2')
#print('----------------------------------------------------------------------------------------')
# Output table to store the numerical evolution of the algorithm
output_table = []
#Iteration counter
i = 0
while (hybrid_error > rel_error and hybrid_error < 1e12 and i<=maxNumberIterations):
#Newton's iteration
x1 = x0-m*f(x0)/fp(x0)
#Checking if root was found
if f(x1) == 0.0:
hybrid_error = 0.0
break
#Computation of hybrid error
hybrid_error = abs(x1-x0)/np.max([abs(x1),1e-12])
#Computation of absolute error
error_iminus1 = error_i
error_i = abs(x1-x0)
# Storing output data
output_table.append([i,x0,x1,error_i,error_i/error_iminus1,error_i/(error_iminus1**((1+np.sqrt(5))/2.)),error_i/(error_iminus1**2)])
#Updating solution
x0 = x1
#Increasing iteration counter
i += 1
# Showing final output table
columns = ['$i$', '$x_i$', '$x_{i+1}$', '$e_i$', r'$\frac{e_i}{e_{i-1}}$', r'$\frac{e_i}{e_{i-1}^\alpha}$', r'$\frac{e_i}{e_{i-1}^2}$']
df = pd.DataFrame(data=output_table, columns=columns)
display(df)
#Checking if solution was obtained
if hybrid_error < rel_error:
return x1
elif i>=maxNumberIterations:
        print("Newton's Method did not converge. Too many iterations!!")
return None
else:
        print("Newton's Method did not converge!")
return None
###Output
_____no_output_____
###Markdown
First example, let's compute a root of $\sin(x)$, near $x_0=3.1$.
###Code
# Example function
f = lambda x: np.sin(x)
# The derivative of f
fp = lambda x: np.cos(x)
newton_method(f, fp, 3.1,rel_error=1e-15)
###Output
_____no_output_____
###Markdown
Now, we will look at an example where Newton's method shows linear convergence.
###Code
f = lambda x: x**2
fp = lambda x: 2*x # the derivative of f
newton_method(f, fp, 3.1, rel_error=1e-1, m=1, maxNumberIterations=10)
###Output
_____no_output_____
###Markdown
So, in the previous example Newton's method showed linear convergence. But how can we use its outcome to improve the convergence? This can be fixed by understanding the following facts:1. Linear convergence definition: $e_{i+1}/e_i=S$2. Linear convergence is exhibited by Newton's method when the root has multiplicity greater than 1, with $S=(m-1)/m$. Connecting the previous two facts we get $$e_{i+1}/e_i=(m-1)/m.$$ From the table we obtain that $e_{i+1}/e_i\approx 0.5$, which implies the following equation,$$0.5=(m-1)/m.$$Solving for $m$ we get $m=2$. Knowing this is very useful because we can use it with Newton's method and recover its quadratic convergence (a short sketch follows below). Wilkinson Polynomial[Back to TOC](toc)https://en.wikipedia.org/wiki/Wilkinson%27s_polynomial**Final question: Why is the root far far away from $16$?**
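Before moving on to the Wilkinson example, here is the short sketch mentioned above: it simply reruns the $x^2$ case with the multiplicity correction, reusing `newton_method` and the `f`, `fp` defined in the previous cell (with `m=2`, the multiplicity we just derived).

```python
# Sketch: the x**2 example again, now with the multiplicity correction m=2.
# The modified iteration x_{i+1} = x_i - m*f(x_i)/f'(x_i) should recover quadratic convergence.
newton_method(f, fp, 3.1, rel_error=1e-8, m=2, maxNumberIterations=10)
```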
###Code
x = sym.symbols('x', reals=True)
W=1
for i in np.arange(1,21):
W*=(x-i)
W # Printing W nicely
# Expanding the Wilkinson polynomial
We=sym.expand(W)
We
# Just computing the derivative
Wep=sym.diff(We,x)
Wep
# Lambdifying the polynomial so it can be evaluated numerically (via sympy.lambdify)
P=sym.lambdify(x,We)
Pp=sym.lambdify(x,Wep)
# Using scipy function to compute a root
root = optimize.newton(P,16)
print(root)
newton_method(P, Pp, 16.01, rel_error=1e-10, maxNumberIterations=10)
###Output
_____no_output_____
###Markdown
Acknowledgements[Back to TOC](toc)* _Material created by professor Claudio Torres_ (`[email protected]`) _and assistants: Laura Bermeo, Alvaro Salinas, Axel Simonsen and Martín Villanueva. DI UTFSM. March 2016._ v1.1.* _Update April 2020 - v1.32 - C.Torres_ : Re-ordering the notebook.* _Update April 2021 - v1.33 - C.Torres_ : Updating format and re-re-ordering the notebook. Adding 'maxNumberIterations' to bisection, fpi and Newton's method. Adding more explanations.* _Update April 2021 - v1.33 - C.Torres_ : Updating description and solution of 'Proposed classwork'.* _Update September 2021 - v1.35 - C.Torres_ : Updating and commenting code more.* _Update September 2021 - v1.36 - C.Torres_ : Updating the way we show the output tables.* _Update September 2021 - v1.37 - C.Torres_ : Fixing typo suggested by Nicolás Tapia 2021-2. Thanks Nicolás! And removing extra code. Extra examples[Back to TOC](toc) Proposed Classwork1. Build a FPI such that given $a$ computes $\displaystyle \frac{1}{a}$. The constraint is that you can't use a division in the 'final' FPI. Write down your solution below or go and see the [solution](sol1). _Hint: I strongly suggest using Newton's method._
###Code
print('Please try to solve it before you see the solution!!!')
###Output
Please try to solve it before you see the solution!!!
###Markdown
2. Build an algorithm that computes $\log(x_i)$ for $x_i=0.1*i+0.5$, for $i\in\{0,1,2,\dots,10\}$. The only special function available is $\exp(x)$; in particular, use _np.exp(x)_. You can also use $*$, $÷$, $+$, and $-$. It would be nice to use the result from the previous example to replace $÷$ (one possible sketch is given right below). In class: Which function shows quadratic convergence? Why?
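One possible sketch for item 2 (just a suggestion, not this notebook's official solution): apply Newton's method to $\exp(y)-x=0$, which gives the iteration $y_{k+1}=y_k-1+x\,\exp(-y_k)$ and therefore only needs `np.exp`, $*$, $+$ and $-$.

```python
# Sketch: log(x) using only np.exp, *, + and -, via Newton's method on exp(y) - x = 0.
def log_via_exp(x, y0=0.0, k=50):
    y = y0
    for _ in range(k):
        y = y - 1 + x*np.exp(-y)   # Newton step rewritten so that no division is needed
    return y

xs = 0.1*np.arange(11) + 0.5
print([log_via_exp(xi) for xi in xs])
print(np.log(xs))  # reference values, for comparison only
```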
###Code
g1 = lambda x: (4./5.)*x+1./x
g2 = lambda x: x/2.+5./(2*x)
g3 = lambda x: (x+5.)/(x+1)
fpi(g1, 3.0, 10, True)
###Output
_____no_output_____
###Markdown
Building a FPI to compute the cube root of 7
###Code
# What is 'a'? Can we find another 'a'?
a = -3*(1.7**2)
print(a)
f = lambda x: x**3-7
g = lambda x: f(x)/a+x
r=fpi(g, 1.7, 14, True)
print(f(r))
###Output
_____no_output_____
###Markdown
Playing with some roots The following example proposes a particular function $f(x)$. The idea here is to first obtain an initial guess for Newton's method from the plot in semilogy scale. The plot of $f(x)$ (blue) shows that there seem to be 2 roots in the interval plotted. Now, the plot of $f'(x)$ (magenta) indicates that the derivative may also have a zero at one of the roots, which means that the multiplicity of that root may be higher than 1. **Do you see it?**
###Code
f = lambda x: 8*x**4-12*x**3+6*x**2-x
fp = lambda x: 32*x**3-36*x**2+12*x-1
x = np.linspace(-1,1,10000)
plt.figure(figsize=(10,10))
plt.title('What are we seeing with the semilogy plot? Is this function differentiable?')
plt.semilogy(x,np.abs(f(x)),'b-',label=r'$|f(x)|$')
plt.semilogy(x,np.abs(fp(x)),'m-',label=r'$|fp(x)|$')
plt.grid()
plt.legend()
plt.xlabel(r'$x$',fontsize=16)
plt.show()
r=newton_method(f, fp, 0.4, rel_error=1e-8, m=1)
print([r,f(r)])
# Is this showing quadratic convergence? If not, can you fix it?
###Output
_____no_output_____
###Markdown
SolutionsProblem: Build a FPI such that given $a$ computes $\displaystyle \frac{1}{a}$
###Code
# We are finding the reciprocal 1/a
# Solution code:
a = 2.1
g = lambda x: 2*x-a*x**2
gp = lambda x: 2-2*a*x
r=fpi(g, 0.7, 7, flag_cobweb=False)
print('Reciprocal found :',r)
print('Reciprocal computed explicitly: ', 1/a)
# Are we seeing quadratic convergence?
###Output
_____no_output_____
###Markdown
What is this plot telling us? This plot shows that, even if we don't know the exact value of $g'(r)$, we can determine whether the FPI will converge by looking at the plot. In this plot we observe that, when plotting $g'(x)$ (magenta), the value of $|g'(r)|$ will be less than 1 since the curve stays between the black lines located at $y=-1$ and $y=1$.
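As a small numerical complement (a sketch that just evaluates the `gp` and `a` defined in the solution cell above at the fixed point $r=1/a$):

```python
# At the fixed point r = 1/a we get g'(r) = 2 - 2*a*(1/a) = 0, so |g'(r)| = 0 < 1,
# which is also why the reciprocal iteration above converges quadratically.
print(gp(1/a))
```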
###Code
xx=np.linspace(0.2,0.8,1000)
plt.figure(figsize=(10,10))
plt.plot(xx,g(xx),'r-',label=r'$g(x)$')
plt.plot(xx,gp(xx),'m-',label=r'$gp(x)$')
plt.plot(xx,xx,'b-',label=r'$x$')
plt.plot(xx,0*xx+1,'k--')
plt.plot(xx,0*xx-1,'k--')
plt.legend(loc='best')
plt.grid()
plt.show()
###Output
_____no_output_____ |
docs/cli/commands/destroy.ipynb | ###Markdown
Destroy ❌ The destroyer is accessible through the `destroy` command (abbreviated `d`).
###Code
! D destroy --help
###Output
Usage: D destroy [OPTIONS] COMMAND [ARGS]...
Removes models, serializers, and other resources
Options:
--dry, --dry-run Display output without deleting files
--force Override any conflicting files.
--verbose Run in verbose mode.
--help Show this message and exit.
Commands:
admin Destroys an admin model or inline.
fixture Destroys a fixture.
form Destroys a form.
manager Destroys a model manager.
model Destroys a model.
resource Destroys a resource and its related modules.
serializer Destroys a serializer.
template Destroys a template.
test Destroys a TestCase.
view Destroys a view.
viewset Destroys a viewset.
###Markdown
Destroying Models
###Code
! D destroy model --help
! D destroy --force model ep
###Output
[32mSuccessfully deleted ep.py for Ep.[0m
###Markdown
Destroying Resources Suppose you have a resource (consisting of an admin, model, serializer, view, viewset, ...); you can delete all of its related files by running the `destroy resource` command. The following example shows how to destroy a resource called `Article`, which we'll first create with the `generator`.
###Code
! D generate resource article char:title text:content
###Output
[32mSuccessfully created article.py for Article.[0m
[32mSuccessfully created article.py for ArticleAdmin.[0m
[32mSuccessfully created article.py for ArticleInline.[0m
[32mSuccessfully created article.py for ArticleTestCase.[0m
[32mSuccessfully created article.py for ArticleSerializer.[0m
[32mSuccessfully created article.py for ArticleTestCase.[0m
[32mSuccessfully created article.py for ArticleViewSet.[0m
[32mSuccessfully created article.py for ArticleForm.[0m
[32mSuccessfully created article.html template.[0m
[32mSuccessfully created article_list.py for ArticleListView..[0m
###Markdown
And here's how you can essentially revert the creation of this resource. Note that we're passing the `--force` flag because we're running these examples in a Jupyter notebook and we are sure we want to delete these files. Ideally, you'd run the command without this flag so you can be warned before deleting files you may want to keep.
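If you only want to preview what would be removed, the `--dry`/`--dry-run` option listed in the help output above should do that, e.g. `D destroy --dry resource article` (mentioned here as a suggestion only; it is not executed in this notebook).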
###Code
! D destroy --force resource article
! D destroy --force resource article
###Output
[32mSuccessfully deleted article.py for Article.[0m
[32mSuccessfully deleted article.py for ArticleAdmin.[0m
[32mSuccessfully deleted article.py for ArticleInline.[0m
[32mSuccessfully deleted article.py for ArticleTestCase.[0m
[32mSuccessfully deleted article.py for ArticleSerializer.[0m
[32mSuccessfully deleted article.py for ArticleTestCase.[0m
[32mSuccessfully deleted article.py for ArticleViewSet.[0m
[32mSuccessfully deleted article.py for ArticleForm.[0m
[32mSuccessfully deleted article.html template.[0m
[32mSuccessfully deleted article_list.py for ArticleListView..[0m
[31mFile article_detail.py does not exist.[0m
[31mFile article.py does not exist.[0m
|
code/R02_Zaklady.ipynb | ###Markdown
R Basics... this will be a slightly boring but necessary part. No statistics yet, just data types, statements, and definitions.
###Code
x <- 5.5
x
class(x)
6.5 -> y # This looks fancy, but it is just decoration.
y
###Output
_____no_output_____
###Markdown
Assignment <-In R there are several assignment operators. *=* works, and so do *->* and even *>* (you don't want to know what they do). *=* is used for mapping parameters and in the *case* statement, so it is polite to use *<-* for assignment, although for people who also program in other languages it is sometimes hard not to write *=*. I apologize in advance. Numeric types*numeric* is the generic number type in R. But we also have other types:
###Code
y <- 5
class(y)
y <- as.integer(5)
class(y)
is.integer(y)
###Output
_____no_output_____
###Markdown
An integer must be explicitly defined as such. Strings
###Code
s <- "Peter"
s == 'Peter'
class(s)
substring(s, first=2, last=3)
s <- sub("ete", "avo", s)
s
substring(s, 1:3, 3:5)
sprintf("%s má %d rokov.", "Peter", 54)
###Output
_____no_output_____
###Markdown
substring is a *vectorized* function - for a vector argument it returns a vector.
###Code
1:5
sqrt(1:5)
vec <- c(1,2,3,4,5,6) # this is a comment noting that c() creates a vector
vec
sin(vec)
###Output
_____no_output_____
###Markdown
Apropos, do we have \[\pi\]?
###Code
apropos("pi")
###Output
_____no_output_____
###Markdown
Aha - it seems we do.
###Code
help("pi")
cos(pi)
###Output
_____no_output_____
###Markdown
Peek to the right. Logical values
###Code
class(TRUE)
c(T,T,F,F) == c(T,F,T,F)
c(T,T,F,F) & c(T,F,T,F)
c(T,T,F,F) | c(T,F,T,F)
###Output
_____no_output_____
###Markdown
Vectors
###Code
1:3 == c(1,2,3)
help(seq)
vec1 <- seq(from = 1, to = 10, by = 3)
vec1
vec2 <- seq(from = 1, by = 2, length.out = 5)
vec2
vec3 <- seq(from = 1, to = 10, length.out = 4)
vec3
vec4 <- seq(from = 0, by = 1, along.with = vec3)
vec4
all.equal(vec1, vec3)
ifelse(vec1>5,"Yes", "No")
###Output
_____no_output_____
###Markdown
The colon lets us conveniently create numeric sequences. A list is a vector with named elements.
###Code
vec <- 1:4
names(vec) <- c("jeden", "dva", "tri", "štyri")
vec
###Output
_____no_output_____
###Markdown
Data frame* Logically, it arises by joining lists with the same element names. * Physically, it is rather a list of vectors.
###Code
x <- data.frame(
v1 = c(1,2,3,4),
v2 = as.integer(c(0,0,1,1)),
v3 = c("a","b","a","b")
)
x
x$v1 # column
x$v2[2] # element
x[2,"v1"] # element
x[2,] # row
x$v3
is.factor(x$v3)
nrow(x) # number of rows
ncol(x) # number of columns
###Output
_____no_output_____
###Markdown
R statements
###Code
for(i in 1:10) {
if (i %% 2 == 0) {
print(i)
} else {
print(paste(i, "je", "neparne"))
}
}
###Output
[1] "1 je neparne"
[1] 2
[1] "3 je neparne"
[1] 4
[1] "5 je neparne"
[1] 6
[1] "7 je neparne"
[1] 8
[1] "9 je neparne"
[1] 10
###Markdown
Functions
###Code
factorial <- function(n)
{
if (n==0 | n==1)
{ return(1); }
else
{ return(n * factorial(n-1)); }
}
factorial(10)
###Output
_____no_output_____
###Markdown
__return__ is a function!Applying a function to a vector:* If the body of the function is vectorized, then we can apply the function to a single number as well as to a vector.
###Code
kvadrat <- function(x)
return(x**2)
kvadrat(1:5)
###Output
_____no_output_____
###Markdown
* If the body of the function is not vectorized (as with the min and max functions in the following example), we have to apply the function to the individual elements of the vector.
###Code
my_clip <- function(x, a, b) # clipping function
{
return(min(b, max(a,x)))
}
vec = seq(from = -10, to = 10, by = 1)
vec <- sapply(vec, my_clip, a = -5, b = 5); # extra arguments for the function are written after it...
vecx <- seq(from = 0, by = 1, along.with = vec)
plot(vecx, vec, xlab = "index")
###Output
_____no_output_____ |
old_notebooks/003_pandas_dataframe_melt_cleaned_for presentation.ipynb | ###Markdown
Import pandas so we can use this library
###Code
import catheat
import numpy as np
import pandas as pd
%matplotlib inline
###Output
_____no_output_____
###Markdown
Import metadata values for 96-well plates corresponding to wellID. Import with no index column set so that we can use the A-H column later for restructuring the dataset. Import values for buffer conditions
###Code
lonza_nucleofection_buffer_96w = pd.read_csv('lonza_nucleofection_buffer.csv') # read in csv; this name is reused below
lonza_nucleofection_buffer_96w.head()
lonza_nucleofection_buffer_96w.shape
catheat.heatmap(lonza_nucleofection_buffer_96w.set_index("Unnamed: 0"), palette='Set2')
###Output
_____no_output_____
###Markdown
import values for experimental or control designation
###Code
experimental_or_control_well= pd.read_csv('experimental_or_control.csv') # read in csv
print(experimental_or_control_well.shape)
experimental_or_control_well
catheat.heatmap(experimental_or_control_well.set_index("Unnamed: 0"), palette='Set1')
###Output
_____no_output_____
###Markdown
Change column names Change "Unnamed: 0" to a descriptive row-name column (here `row_letter`) so that we know these indexes correspond to this plate. Do so by passing the name of the column to be renamed into a dictionary mapping the current name to the desired new name. Change name of experimental or control well
###Code
experimental_or_control_well_renamed=experimental_or_control_well.rename(columns={"Unnamed: 0":"row_letter"})
print(experimental_or_control_well_renamed.shape)
print(experimental_or_control_well_renamed.head())
###Output
(8, 13)
row_letter 1 2 3 4 5 6 7 8 9 10 11 12
0 A exp exp exp exp exp exp exp exp exp exp exp exp
1 B exp exp exp exp exp exp exp exp exp exp exp exp
2 C exp exp exp exp exp exp exp exp exp exp exp exp
3 D exp exp exp exp exp exp exp exp exp exp exp exp
4 E exp exp exp exp exp exp exp exp exp exp exp exp
###Markdown
Change name for lonza nucleofection buffer
###Code
lonza_nucleofection_buffer_96w_renamed=lonza_nucleofection_buffer_96w.rename(columns={"Unnamed: 0":"row_letter"})
print(lonza_nucleofection_buffer_96w_renamed.shape)
lonza_nucleofection_buffer_96w_renamed.head()
###Output
_____no_output_____
###Markdown
Other option that did not work to ask Olga about Restructure the dataset based on the row labels of a 96-well plate. For the pd.melt function we pass id_vars, which is the variable for which we want to get the values corresponding to the other variables in the table. In this case, we want the values for all other variables, which are all other columns, because we did not specify a specific subset of variables. If we wanted to specify this subset we could use value_vars and pass in a list of the other columns (variables) that we want to see the values for relative to column 0 (a small sketch is shown below). For example, if we only selected column 1, we would see the values at each position in the id_vars column from the corresponding index position in column 1. We can name these columns for clarity. Print out information about the new data frame• `.shape` to get the dimensions of the dataframe. Here we have 96 rows and 3 columns. This is because we are getting values from every column in the plate (12) for each of the 8 row labels (A-H) in column "Unnamed: 0". • `.head()` to view the first entries in the dataframe. This requires parentheses when calling it, as .head is a function that can take arguments and is not intrinsic to the data frame, in that you can specify how many rows you want to see. • `.index` to get the index range (0-96), which means it goes from 0 up to 96, not including 96, in steps of 1, or `.index.values` to get the actual values of the index, a list from 0-95. • `.columns` to get the column names. • `type()` with the dataframe as the argument to get the type of the object. In this case we check that the object is still a dataframe. This is important because there are certain ways of selecting from a dataframe where the sliced object, if it is only one column, can become a Series rather than a DataFrame. tidy format of nucleofection buffer
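As a quick illustration of the `value_vars` option mentioned above (a sketch only; it assumes the plate columns were read in with the string labels "1" and "2"):

```python
# Sketch: melting only a subset of the plate columns with value_vars.
pd.melt(lonza_nucleofection_buffer_96w_renamed,
        id_vars="row_letter",
        value_vars=["1", "2"],          # hypothetical subset of plate columns
        var_name="column_num",
        value_name="lonza_nucleofection_buffer").head()
```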
###Code
lonza_nucleofection_buffer_96w_renamed_tidy = pd.melt(lonza_nucleofection_buffer_96w_renamed,
id_vars= "row_letter",
var_name="column_num",
value_name="lonza_nucleofection_buffer") #restructure data
lonza_nucleofection_buffer_96w_renamed_tidy["well_id"] = lonza_nucleofection_buffer_96w_renamed_tidy.row_letter + \
lonza_nucleofection_buffer_96w_renamed_tidy.column_num
print(lonza_nucleofection_buffer_96w_renamed_tidy.shape)
lonza_nucleofection_buffer_96w_renamed_tidy.head()
###Output
_____no_output_____
###Markdown
tidy format for experimental condition plate
###Code
experimental_or_control_well_renamed_tidy = pd.melt(experimental_or_control_well_renamed,
id_vars= "row_letter",
var_name="column_num",
value_name="experimental_or_control") #restructure data
experimental_or_control_well_renamed_tidy["well_id"] = experimental_or_control_well_renamed_tidy.row_letter + \
experimental_or_control_well_renamed_tidy.column_num
print(experimental_or_control_well_renamed_tidy.shape)
experimental_or_control_well_renamed_tidy.head()
###Output
(96, 4)
###Markdown
Merge tidy data frames Notice that inner, outer, right, and left joins all did the same thing here. Why?
###Code
merged_data = lonza_nucleofection_buffer_96w_renamed_tidy.merge(experimental_or_control_well_renamed_tidy, left_on="well_id" , right_on="well_id")
print(merged_data.shape)
merged_data.head()
merged_data_renamed=merged_data.rename(columns={"Unnamed: 0":"96_well_row_letter"})
print(merged_data_renamed.shape)
merged_data_renamed.head
merged_data_renamed
experimental_or_control_well_renamed_tidy.head()
import numpy as np
experimental_or_control_well_renamed_tidy_shuffled = pd.DataFrame(experimental_or_control_well_renamed_tidy,
index=np.random.permutation(experimental_or_control_well_renamed_tidy.index))
experimental_or_control_well_renamed_tidy_shuffled.head()
merged_data = lonza_nucleofection_buffer_96w_renamed_tidy.merge(experimental_or_control_well_renamed_tidy_shuffled,
left_on="well_id" , right_on="well_id")
print(merged_data.shape)
merged_data.head()
merged_data=lonza_nucleofection_buffer_96w_tidy.merge(experimental_or_control_well_tidy, left_on=["Unnamed: 0", "column_num"], right_on=["Unnamed: 0", "column_num"])
print(merged_data.shape)
merged_data
###Output
(96, 4)
###Markdown
Merge on just one columnMerge on row labels: this aligns the data so that the row labels are shared. For the left data frame the column labels are repeated and aligned with the unique column labels, from 1-12, of the right data frame. Merge on column labels: this shares the first column between both data sets. For the left df it repeats the row labels (A-H) and pairs them with each unique value A-H from the right data frame.
###Code
merged_data=lonza_nucleofection_buffer_96w_tidy.merge(experimental_or_control_well_tidy, left_on=["Unnamed: 0"], right_on=["Unnamed: 0"])
print(merged_data.shape)
merged_data
merged_data=lonza_nucleofection_buffer_96w_tidy.merge(experimental_or_control_well_tidy, left_on=["column_num"], right_on=["column_num"])
print(merged_data.shape)
merged_data
###Output
(768, 5)
###Markdown
Joining well_type and buffers
###Code
cols_to_drop = ['row_letter', 'column_num']
buffers = lonza_nucleofection_buffer_96w_renamed_tidy.set_index("well_id")
buffers = buffers.drop(cols_to_drop, axis=1)
buffers.head()
well_type = experimental_or_control_well_renamed_tidy.set_index("well_id")
well_type = well_type.drop(cols_to_drop, axis=1)
well_type.head()
joined=well_type.join(buffers)
import numpy as np
np.random.seed(0)
scrambled_wells = np.random.permutation(well_type.index)
scrambled_wells
well_type_scrambled = well_type.loc[scrambled_wells]
well_type_scrambled.head()
well_type_scrambled.join(buffers)
wells_tidy['column_number'] = wells_tidy['Empty96ColNum'].astype(int)
print(wells_tidy.shape)
wells_tidy.head()
###Output
_____no_output_____
###Markdown
Wrong way! We used numbers, letters for `wells_tidy` and letters, numbers for `buffer_tidy`
###Code
wells_tidy.merge(buffer_tidy, left_on=[ "Empty96ColNum", 'Unnamed: 0',],
right_on=['row_letter', 'column_number'])
wells_tidy.merge(buffer_tidy, left_on=['Unnamed: 0', "Empty96ColNum"],
right_on=['row_letter', 'column_number'])
wells_tidy.merge(buffer_tidy, left_on=['Unnamed: 0', "column_number"],
right_on=['row_letter', 'column_number'])
Empty96W_ColumnList.columns
type(Empty96W_ColumnList)
SelectValues_Empty96WellID=Empty96W_ColumnList.iloc[:,1] # select needed collumn
SelectValues_Empty96WellID
SelectValues_Empty96WellID_df=SelectValues_Empty96WellID.to_frame() # turn series into df
SelectValues_Empty96WellID_df
type(SelectValues_Empty96WellID)
Empty96W_ColumnList2=pd.melt(empty_96W, var_name="Empty96ColNum2",value_name="Empty96WellID2")
SelectValues_Empty96WellID2=Empty96W_ColumnList2.iloc[:,1]
SelectValues_Empty96WellID2
SelectValues_Empty96WellID2_df=SelectValues_Empty96WellID2.to_frame()
SelectValues_Empty96WellID2_df
pd.concat([SelectValues_Empty96WellID_df,SelectValues_Empty96WellID2_df], axis=1)
###Output
_____no_output_____ |
src/Sepsis/Alpha+/Sepsis_Analysis_Alpha+.ipynb | ###Markdown
Alpha+ Miner Step 1: Handling and importing event data
###Code
import pm4py
from pm4py.objects.log.importer.xes import importer as xes_importer
log = xes_importer.apply('../Sepsis Cases - Event Log.xes')
###Output
_____no_output_____
###Markdown
Step 2: Mining event log - Process Discovery
###Code
net, initial_marking, final_marking = pm4py.discover_petri_net_alpha_plus(log)
###Output
_____no_output_____
###Markdown
Step 3: Visualize Petri net of mined process from log
###Code
pm4py.view_petri_net(net, initial_marking, final_marking)
###Output
_____no_output_____
###Markdown
Step 4: Convert Petri Net to BPMN
###Code
bpmn_graph = pm4py.convert_to_bpmn(net, initial_marking, final_marking)
pm4py.view_bpmn(bpmn_graph, "png")
###Output
_____no_output_____
###Markdown
Step 5: Log-Model Evaluation Replay Fitness
###Code
# The calculation of replay fitness aims to quantify how much of the behavior in the log is admitted by the process model. There are two methods to calculate replay fitness, based on token-based replay and alignments respectively.
# The two variants of replay fitness are implemented as Variants.TOKEN_BASED and Variants.ALIGNMENT_BASED respectively.
# To calculate the replay fitness between an event log and a Petri net model using the token-based replay method, the code below can be used. The resulting value is a number between 0 and 1.
from pm4py.algo.evaluation.replay_fitness import algorithm as replay_fitness_evaluator
fitness = replay_fitness_evaluator.apply(log, net, initial_marking, final_marking, variant=replay_fitness_evaluator.Variants.TOKEN_BASED)
fitness
###Output
_____no_output_____
###Markdown
Precision
###Code
# We propose two approaches for the measurement of precision in PM4Py:
# ETConformance (using token-based replay): the reference paper is Muñoz-Gama, Jorge, and Josep Carmona. "A fresh look at precision in process conformance." International Conference on Business Process Management. Springer, Berlin, Heidelberg, 2010.
# Align-ETConformance (using alignments): the reference paper is Adriansyah, Arya, et al. "Measuring precision of modeled behavior." Information systems and e-Business Management 13.1 (2015): 37-67.
from pm4py.algo.evaluation.precision import algorithm as precision_evaluator
prec = precision_evaluator.apply(log, net, initial_marking, final_marking, variant=precision_evaluator.Variants.ETCONFORMANCE_TOKEN)
prec
###Output
_____no_output_____
###Markdown
F-Measure
###Code
def f_measure(f, p):
return (2*f*p)/(f+p)
f_measure(fitness['average_trace_fitness'], prec)
###Output
_____no_output_____ |
alg/dynamic_disease_network_ddp/tutorial.ipynb | ###Markdown
Tutorial
###Code
import pickle
import torch
import torch.optim as optim
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import data_loader
import models
###Output
_____no_output_____
###Markdown
Loading dataEach event sequence is represented as a list of dictionaries. Each dictionary represents when an event happens and what type of event it is. The following gives the first three events in the first training sequence.
###Code
with open('data/test_ev50_big.pkl', 'rb') as handle:
train_data = pickle.load(handle)
train_data['train'] += train_data['dev']
train_data['train'] += (train_data['test'])
train_data['train'][0][:3]
###Output
_____no_output_____
###Markdown
The `process_seq` helper function converts the sequence from a list to several numpy arrays, encoding information such as event time and event type.
###Code
max_len = 20
n_event_type = dim_process = 50
n_sample = 10000
context_dim = 1
train_input = data_loader.process_seq(train_data, list(range(n_sample)), max_len=max_len, n_event_type=n_event_type,
tag_batch='train', dtype=np.float32)
###Output
_____no_output_____
###Markdown
The simulation data set does not contain static features, so we create dummies (a matrix of all ones) as the static feature.
###Code
batch_input_np = list(train_input)
df_patient_static_mat = np.ones((1, n_sample)).astype('float32')
batch_input_np.append(df_patient_static_mat)
gap = batch_input_np[0][:-1, :] - batch_input_np[0][1:, :]
gap_mean = np.mean(gap)
gap_std = np.std(gap)
###Output
_____no_output_____
###Markdown
Loading the ground-truth model that generated the data.
###Code
with open('data/model_test_ev50_big.pkl', 'rb') as handle:
true_model = pickle.load(handle)
true_alpha = true_model['alpha'] / true_model['delta']
true_lambda = true_model['delta']
###Output
_____no_output_____
###Markdown
Training modelFirst we define the model and the optimizer in the standard way. We compare the DDP with the Hawkes process.
###Code
alpha_init = np.float32(
np.log(
np.random.uniform(
low=0.5, high=1.5,
size=(dim_process, dim_process)
)
)
)
lambda_init = np.float32(
np.log(
np.random.uniform(
low=10.0, high=20.0,
size=(dim_process, dim_process)
)
)
)
ddp_model = models.DDP(
n_event_type=n_event_type,
n_context_dim=context_dim,
first_occurrence_only=False,
embedding_size=50,
rnn_hidden_size=50,
alpha_mat_np=alpha_init,
lambda_mat_np=lambda_init,
gap_mean=gap_mean,
gap_scale=gap_std
)
opt_ddp = optim.SGD(ddp_model.parameters(), lr=0.001, momentum=0.9)
c_hawkes_model = models.CHawkes(n_event_type=n_event_type, n_context_dim=context_dim,
first_occurrence_only=False, alpha_mat_np=alpha_init, lambda_mat_np=lambda_init)
opt_c_hawkes = optim.SGD(c_hawkes_model.parameters(), lr = 0.001, momentum=0.9)
with torch.no_grad():
test_batch = data_loader.get_whole_batch(batch_input_np)
###Output
_____no_output_____
###Markdown
Setting up training parameters.
###Code
with torch.no_grad():
test_batch = data_loader.get_whole_batch(batch_input_np)
mat_dist_ddp = list()
mat_dist_hawkes = list()
rnn_sd = list()
batch_size = 100
training_itr = 1000
report_step = 1
current_best = 10000
###Output
_____no_output_____
###Markdown
We start the training iteration.
###Code
for i in range(training_itr):
if i % report_step == 0:
with torch.no_grad():
test_batch = data_loader.get_whole_batch(batch_input_np)
ddp_model.set_input(*test_batch)
weights = ddp_model.graph_weights_seq.numpy()
rnn_sd.append(np.std(weights))
avg_weight_list = list()
a = test_batch[4].numpy()
b = test_batch[2].numpy()
for j in range(n_event_type):
ind = np.logical_not(np.logical_and(a == 1, b == j))
weights_cp = np.copy(weights)
weights_cp[ind] = np.nan
avg_weight_list.append(np.nanmean(weights_cp))
avg_weight = np.array(avg_weight_list)
mat_dist_ddp.append(
np.sum(np.abs(torch.exp(ddp_model.alpha_mat).numpy() * avg_weight - true_alpha)))
mat_dist_hawkes.append(np.sum(np.abs(torch.exp(c_hawkes_model.alpha_mat).numpy() - true_alpha)))
mini_batch = data_loader.get_mini_batch(batch_size, batch_input_np)
ddp_model.set_input(*mini_batch)
log_lik = ddp_model() * (-1.0)
models.cross_ent_one_step(log_lik, opt_ddp)
c_hawkes_model.set_input(*mini_batch)
log_lik2 = c_hawkes_model() * (-1.0)
models.cross_ent_one_step(log_lik2, opt_c_hawkes)
###Output
_____no_output_____
###Markdown
Visualizing the training performance as follows:
###Code
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
line = ax.plot(np.array(mat_dist_ddp), label='DDP')
ax.axhline(y=min(mat_dist_hawkes), color='r', linestyle='--', label='Hawkes (Ground Truth)')
plt.xlabel('Training Iteration')
plt.ylabel('L1 Distance')
plt.legend()
###Output
_____no_output_____ |
crawlerNominatedForDeletion.ipynb | ###Markdown
This crawler crawls all the articles that were nominated for deletion but didn't get deleted - The crawler is a one-shot crawler. - For Wikipedia articles from 2006 onward it crawls roughly 98% of the data. - For Wikipedia articles before 2006 (the HTML structure changes) it crawls roughly 90% of the data.
###Code
def crawl_year(year, yearContent, df):
""" Crawl the different years of the wikipedia's archieved deletion discussions page and store the content
in a Data Frame.
Args:
year: the year in which the archived articles were flagged for deletion
yearContent: the html content containing the year (a h2 tag)
df: the data frame where the data are stored in the format year | month | title | Id | Gender
Returns:
a data frame
"""
for monthContent in yearContent.find_next_siblings(limit=24):
if monthContent.name == "h2":
# Crawl only this year. If the year doesn't yet have 12 months(e.g. 2019), don't go for more.
break
elif monthContent.name == "h3":
month = monthContent.get_text().split(str(year)+" ")[1].split("[")[0]
print("Month",month)
elif monthContent.name == "ul":
# Go through the list of days
for dayRelative in monthContent.find_all("a"):
print(dayRelative['href'])
dayPageLink = "https://en.wikipedia.org/"+dayRelative['href']
try :
dayPage = requests.get(dayPageLink)
except requests.exceptions.RequestException as e:
continue
soupPage = BeautifulSoup(dayPage.content, "html.parser")
if dayPage.status_code == 200:
# Get the number of articles in a particular day
# From the beginning till the june 2006 wikipedia has a different HTML code on this
                    if (int(year) < 2006) or ((int(year) == 2006) and (month in ['June', 'May', 'April', 'March', 'February',
                                                                                  'January'])):
try:
articlesLength = float(soupPage.find_all("li", {"class": "toclevel-2"})[-1].get_text().split(" ")[0])
except Exception:
continue
nrLength = len(str(articlesLength).split(".")[1])
if nrLength == 2:
articlesLength = round(articlesLength%1 * 100,2)
elif nrLength == 3:
articlesLength = round(articlesLength%1 * 1000,3)
else:
try:
articlesLength = float(soupPage.find_all("ul")[2].find_all("li")[-1].get_text().split(" ")[0])
except ValueError:
try:
articlesLength = float(soupPage.find_all("ul")[0].find_all("li")[-1].get_text().split(" ")[0])
except ValueError:
continue
numberDec = round(articlesLength % 1 * 10, 2)
if int(numberDec) != numberDec:
numberDec *= 10
articlesLength = int(articlesLength) + numberDec
print("Articles to be crawled in this page: ",articlesLength)
# Every article is located in an <h3> tag
for article in soupPage.find_all("h3", limit = articlesLength):
try:
# Don't read deleted articles
if article.find("a")['title'].find("(page does") == -1:
articleTitle = article.get_text().split("[")[0]
pageLink = "https://en.wikipedia.org"+article.find("a")['href']
df = crawl_article(year, month, articleTitle, pageLink, df)
except Exception as e:
continue
return df
def crawl_article(year, month, title, pageLink, df):
""" Crawl the content of the corresponding dbpedia page of a wikipedia article in order to get its id and gender.
Store an entry in the dataframe.
Args:
year: the year in which the current article was flagged for deletion
month: the month in which the current article was flagged for deletion
        title: the title of the article flagged for deletion
pageLink: the wikipedia link of the article
df: the data frame where the data are stored in the format year | month | title | Id | gender
Returns:
A data frame
"""
url = "http://dbpedia.org/page/"+pageLink.split("/wiki/")[1]
try :
dbpediaPage = requests.get(url)
except requests.exceptions.RequestException as e:
return df
soup = BeautifulSoup(dbpediaPage.content, "html.parser")
wikiIdTag = soup.find("span", {"property":"dbo:wikiPageID"})
genderTag = soup.find("span", {"property":"foaf:gender"})
if genderTag == None:
# Not a person
return df
dic = {"Year":year, "Month":month, "Tile":title, "Id": wikiIdTag.contents[0]
, "Gender":genderTag.contents[0]}
if df.empty:
df = pd.DataFrame(data=dic, index=[0])
else:
df_temp = pd.DataFrame(data=dic, index=[0])
df = pd.concat([df, df_temp])
return df
startTime = timeit.default_timer()
seedURL = "https://en.wikipedia.org/wiki/Wikipedia:Archived_deletion_discussions#Deletion_discussions/"
archivePage = requests.get(seedURL)
soup = BeautifulSoup(archivePage.content, "html.parser")
# Get the year
years = []
yearContents = []
for yearContent in soup.find_all("h2", limit=17):
year = yearContent.get_text().split("[")[0]
if year == "Contents":
continue
years.append(year)
yearContents.append(yearContent)
# print(years[5])
df = pd.DataFrame()
df = crawl_year(years[5], yearContents[5], df)
elapsedTime = timeit.default_timer() - startTime
print("Crawl time ", elapsedTime)
df
export_csv = df.to_csv (r'2014.csv', index = None, header=True)
###Output
_____no_output_____ |
PO240_AtividadePriscila_Solvers.ipynb | ###Markdown
###Code
! pip install ortools
from google.colab import files
uploaded = files.upload() # subir arquivo Instancia
# colab: incluir !pip install
import numpy as np # módulo para manipulação de vetores e matrizes
from ortools.linear_solver import pywraplp # CBC
# import docplex.mp.sdetails # CPLEX
# from docplex.mp.model import Model # CPLEX
from ortools.sat.python import cp_model # CPSAT
def leitura(arquivo):
arq = open(arquivo, "r") # r- read
lixo = arq.readline() # linha com texto Number of jobs
linha = arq.readline() # linha com número de tarefas
N = int(linha) # linha com o número de tarefas
p = np.zeros(N, dtype=np.int64) # p = [p0, p1,..., p(N-1)]
w = np.zeros(N, dtype=np.int64)
d = np.zeros(N, dtype=np.int64)
lixo = arq.readline() # linha com texto Job data (job number, release date, processing time, due date)
for i in range(N): # repetir N vezes
# i-ésima linha
linha = arq.readline().split() # posições: 0-indice 1-p, 2-w, 3-d
p[i] = int(linha[1])
w[i] = int(linha[2])
d[i] = int(linha[3])
return N, p, w, d
def modelo_Min_TerminoPonderado_CPLEX(N, p, w, d):
# cria o modelo
mdl = Model(name="Única Máquina")
M = 2*sum(p)
# alocação de memória para as estruturas do problema
x = [0]*N
y = []
for i in range(N):
crialinha = [0]*N
y.append(crialinha)
# declaração das variáveis de decisão
for i in range(N): # min, max
x[i] = mdl.integer_var(0, None, 'x'+str(i))
for i in range(N):
for k in range(N):
if k > i :
y[i][k] = mdl.binary_var('y'+str(i)+","+str(k))
# declaração das restrições
for i in range(N):
mdl.add_constraint( x[i] + p[i] <= d[i] )
for k in range(N):
if k > i :
mdl.add_constraint( x[i] + p[i] <= x[k] + M*(1-y[i][k]) )
mdl.add_constraint( x[k] + p[k] <= x[i] + M*y[i][k] )
# função objetivo
mdl.minimize( mdl.sum(w[i]*(x[i] + p[i]) for i in range(N)) )
# limitar tempo de execução (em segundos)
mdl.set_time_limit(1*60*60)
#resolver o problema
mdl.solve()
status = mdl.solve_details.status
print("Status: ", status)
print("Número de Variáveis: ", mdl.number_of_variables)
print("Número de Restrições: ", mdl.number_of_constraints)
print("Tempo de execução (s): %.4f" %mdl.solve_details.time)
print("Nós B&B: ", mdl.solve_details.nb_nodes_processed)
print("Função-objetivo: ", mdl.objective_value)
print("Best Bound: %.4f" %mdl.solve_details.best_bound)
print("GAP: %.4f" %mdl.solve_details.mip_relative_gap)
print("Lista de Alocação (tarefa, início):")
aux = []
for i in range(N):
aux.append((i, x[i].solution_value))
solOrd = sorted(aux, key=lambda tup: tup[1])
print(solOrd)
print()
def modelo_Min_TerminoPonderado_CBC(N, p, w, d):
# cria o modelo
mdl = pywraplp.Solver('Única Máquina', pywraplp.Solver.CBC_MIXED_INTEGER_PROGRAMMING)
M = 2*sum(p)
# alocação de memória para as estruturas do problema
x = [0]*N
y = []
for i in range(N):
crialinha = [0]*N
y.append(crialinha)
# declaração das variáveis de decisão
for i in range(N): # min, max
x[i] = mdl.IntVar(0, mdl.infinity(), 'x'+str(i))
for i in range(N):
for k in range(N):
if k > i :
y[i][k] = mdl.IntVar(0, 1,'y'+str(i)+","+str(k))
# declaração das restrições
for i in range(N):
mdl.Add( x[i] + p[i] <= d[i] )
for k in range(N):
if k > i :
mdl.Add( x[i] + p[i] <= x[k] + M*(1-y[i][k]) )
mdl.Add( x[k] + p[k] <= x[i] + M*y[i][k] )
# função objetivo
mdl.Minimize( mdl.Sum(w[i]*(x[i] + p[i]) for i in range(N)) )
# limitar tempo de execução (em milisegundos)
mdl.SetTimeLimit(1*60*60*1000) # 1 hora
#resolver o problema
status = mdl.Solve()
print("Status: ", status)
print("Número de Variáveis: ", mdl.NumVariables())
print("Número de Restrições: ", mdl.NumConstraints())
print("Tempo de execução (s): %.4f" %float(mdl.WallTime()/1000))
print("Nós B&B: ", mdl.nodes())
print("Função-objetivo: ", mdl.Objective().Value())
print("Best Bound: %.4f" %mdl.Objective().BestBound())
gap = -1
if mdl.Objective().Value():
gap = (mdl.Objective().Value() - mdl.Objective().BestBound())/mdl.Objective().Value()
print("GAP: %.4f" %gap)
print("Lista de Alocação (tarefa, início):")
aux = []
for i in range(N):
aux.append((i, x[i].solution_value()))
solOrd = sorted(aux, key=lambda tup: tup[1])
print(solOrd)
print()
def modelo_Min_TerminoPonderado_CPSAT(N, p, w, d):
# cria o modelo
mdl = cp_model.CpModel()
M = int(2*sum(p))
# alocação de memória para as estruturas do problema
x = [0]*N
y = []
for i in range(N):
crialinha = [0]*N
y.append(crialinha)
xmax = int(sum(p))
# declaração das variáveis de decisão
for i in range(N): # min, max
x[i] = mdl.NewIntVar(0, xmax, 'x'+str(i))
for i in range(N):
for k in range(N):
if k > i :
y[i][k] = mdl.NewBoolVar('y'+str(i)+","+str(k))
# declaração das restrições
for i in range(N):
mdl.Add( x[i] + p[i] <= d[i] )
for k in range(N):
if k > i :
mdl.Add( x[i] + p[i] <= x[k] + M*(1-y[i][k]) )
mdl.Add( x[k] + p[k] <= x[i] + M*y[i][k] )
# função objetivo
mdl.Minimize( sum(w[i]*(x[i] + p[i]) for i in range(N)) )
# resolver com CP-SAT
solver = cp_model.CpSolver()
# núcleos para processamento
solver.parameters.num_search_workers = 8
# limitar tempo de execução (em segundos)
solver.parameters.max_time_in_seconds = 1*60*60
status = solver.Solve(mdl)
status = solver.StatusName(status)
print("Status: ", status)
z = solver.ObjectiveValue()
bb = solver.BestObjectiveBound()
nos = solver.NumBranches()
gap = -1
if solver.ObjectiveValue():
gap = (solver.ObjectiveValue() - solver.BestObjectiveBound())/solver.ObjectiveValue()
print("Função-objetivo: ", z)
print("Best Bound: %.4f" %bb)
print("Nós B&B: ", nos)
print("GAP: %.4f" %gap)
print(solver.ResponseStats())
print("Lista de Alocação (tarefa, início):")
aux = []
for i in range(N):
aux.append((i, solver.Value(x[i])))
solOrd = sorted(aux, key=lambda tup: tup[1])
print(solOrd)
print()
# *********************** 10 TAREFAS CP-SAT ***************************
# Leitura do arquivo com os dados
# N (qtde tarefas), p (proc), w (peso), d (prazo) são as saídas da função leitura
N, p, w, d = leitura("instancia10_.txt")
print("===== CP-SAT =====")
modelo_Min_TerminoPonderado_CPSAT(N, p, w, d)
# *********************** 20 TAREFAS CP-SAT ***************************
# Leitura do arquivo com os dados
# N (qtde tarefas), p (proc), w (peso), d (prazo) são as saídas da função leitura
N, p, w, d = leitura("Instancia20.txt")
print("===== CP-SAT =====")
modelo_Min_TerminoPonderado_CPSAT(N, p, w, d)
# *********************** 100 TAREFAS CP-SAT ***************************
# Leitura do arquivo com os dados
# N (qtde tarefas), p (proc), w (peso), d (prazo) são as saídas da função leitura
N, p, w, d = leitura("Instancia100.txt")
print("===== CP-SAT =====")
modelo_Min_TerminoPonderado_CPSAT(N, p, w, d)
# *********************** 10 TAREFAS CBC ***************************
# Leitura do arquivo com os dados
# N (qtde tarefas), p (proc), w (peso), d (prazo) são as saídas da função leitura
N, p, w, d = leitura("instancia10_.txt")
print("===== CBC =====")
modelo_Min_TerminoPonderado_CBC(N, p, w, d)
# *********************** 20 TAREFAS CBC ***************************
# Leitura do arquivo com os dados
# N (qtde tarefas), p (proc), w (peso), d (prazo) são as saídas da função leitura
N, p, w, d = leitura("Instancia20.txt")
print("===== CBC =====")
modelo_Min_TerminoPonderado_CBC(N, p, w, d)
# *********************** 100 TAREFAS CBC ***************************
# Leitura do arquivo com os dados
# N (qtde tarefas), p (proc), w (peso), d (prazo) são as saídas da função leitura
N, p, w, d = leitura("Instancia100.txt")
print("===== CBC =====")
modelo_Min_TerminoPonderado_CBC(N, p, w, d)
###Output
===== CBC =====
Status: 6
Número de Variáveis: 5050
Número de Restrições: 10000
Tempo de execução (s): 3781.9360
Nós B&B: 11057
Função-objetivo: 0.0
Best Bound: 23511.5943
GAP: -1.0000
Lista de Alocação (tarefa, início):
[(0, 0.0), (1, 0.0), (2, 0.0), (3, 0.0), (4, 0.0), (5, 0.0), (6, 0.0), (7, 0.0), (8, 0.0), (9, 0.0), (10, 0.0), (11, 0.0), (12, 0.0), (13, 0.0), (14, 0.0), (15, 0.0), (16, 0.0), (17, 0.0), (18, 0.0), (19, 0.0), (20, 0.0), (21, 0.0), (22, 0.0), (23, 0.0), (24, 0.0), (25, 0.0), (26, 0.0), (27, 0.0), (28, 0.0), (29, 0.0), (30, 0.0), (31, 0.0), (32, 0.0), (33, 0.0), (34, 0.0), (35, 0.0), (36, 0.0), (37, 0.0), (38, 0.0), (39, 0.0), (40, 0.0), (41, 0.0), (42, 0.0), (43, 0.0), (44, 0.0), (45, 0.0), (46, 0.0), (47, 0.0), (48, 0.0), (49, 0.0), (50, 0.0), (51, 0.0), (52, 0.0), (53, 0.0), (54, 0.0), (55, 0.0), (56, 0.0), (57, 0.0), (58, 0.0), (59, 0.0), (60, 0.0), (61, 0.0), (62, 0.0), (63, 0.0), (64, 0.0), (65, 0.0), (66, 0.0), (67, 0.0), (68, 0.0), (69, 0.0), (70, 0.0), (71, 0.0), (72, 0.0), (73, 0.0), (74, 0.0), (75, 0.0), (76, 0.0), (77, 0.0), (78, 0.0), (79, 0.0), (80, 0.0), (81, 0.0), (82, 0.0), (83, 0.0), (84, 0.0), (85, 0.0), (86, 0.0), (87, 0.0), (88, 0.0), (89, 0.0), (90, 0.0), (91, 0.0), (92, 0.0), (93, 0.0), (94, 0.0), (95, 0.0), (96, 0.0), (97, 0.0), (98, 0.0), (99, 0.0)]
|
atmosphere/20210506_wekeo_webinar/20_Sentinel5P_TROPOMI_NO2_L2_retrieve.ipynb | ###Markdown
21 - Sentinel-5P NO2 - Load and browse >> DATA RETRIEVE Copernicus Sentinel-5 Precursor (Sentinel-5P) - NO2 The example below illustrates step-by-step how Copernicus Sentinel-5P NO2 data can be retrieved from WEkEO with the help of the [Harmonized Data Access (HDA) API](https://wekeo.eu/hda-api).The HDA API workflow is a six-step process: - [1. Search for datasets on WEkEO](wekeo_search) - [2. Get the API request](wekeo_api_request) - [3. Get your WEkEO API key](wekeo_api_key) - [4. Initialise the WEkEO Harmonised Data Access request](wekeo_hda_request) - [5. Load data descriptor file and request data](wekeo_json) - [6. Download requested data](wekeo_download) All steps have to be performed in order to be able to retrieve data from WEkEO. All HDA API functions needed to retrieve data are stored in the notebook [hda_api_functions](./hda_api_functions.ipynb). Load required libraries
###Code
import os
import sys
import json
import time
import base64
import requests
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Load helper functions
###Code
# HDA API tools
sys.path.append(os.path.join(os.path.dirname(os.path.dirname(os.getcwd())),'wekeo-hda'))
from hda_api_functions import * # this is the PYTHON version
###Output
_____no_output_____
###Markdown
1. Search for datasets on WEkEO Under [WEkEO DATA](https://www.wekeo.eu/data), you can search all datasets available on WEkEO. To add additional layers, you have to click on the `+` sign, which opens the `Catalogue` interface. There are two search options: - a `free keyword search`, and - a `predefined keyword search`, which helps to filter the data based on `area`, `platform`, `data provider` and more. Under `PLATFORM`, you can select *`Sentinel-5P`* and retrieve the results. You can either directly add the data to the map or you can click on `Details`, which opens a dataset description. When you click on `Add to map...`, a window opens where you can select one specific variable of Sentinel-5P TROPOMI. WEkEO interface to search for datasets 2. Get the API request When a layer is added to the map, you can select the download icon, which opens an interface that allows you to tailor your data request. For Sentinel-5P, the following information can be selected:* `Bounding box`* `Sensing start stop time`* `Processing level`* `Product type`Once you have made your selection, you can either directly request the data or you can click on `Show API request`, which opens a window with the HDA API request for the specific data selection. Sentinel-5P API request - Example `Copy` the API request and save it as a `JSON` file. We did the same and you can open the `data descriptor` file for Sentinel-5P [here](./s5P_data_descriptor.json). Each dataset on WEkEO is assigned a unique `datasetId`. Let us store the dataset ID for Sentinel-5P as a variable called `dataset_id` to be used later.
###Code
dataset_id = "EO:ESA:DAT:SENTINEL-5P:TROPOMI"
###Output
_____no_output_____
###Markdown
3. Get the WEkEO API key In order to interact with WEkEO's Harmonised Data Access API, each user gets assigned an `API key` and `API token`. You will need the API key in order to download data in a programmatic way.The `api key` is generated by encoding your `username` and `password` to Base64. You can use the function [generate_api_key](./hda_api_functions.ipynbgenerate_api_key) to programmatically generate your Base64-encoded api key. For this, you have to replace the 'username' and 'password' strings with your WEkEO username and password in the cell below.Alternatively, you can go to this [website](https://www.base64encode.org/) that allows you to manually encode your `username:password` combination. An example of an encoded key is `wekeo-test:wekeo-test`, which is encoded to `d2VrZW8tdGVzdDp3ZWtlby10ZXN0`.
###Code
user_name = '##########'
password = '##########'
api_key = generate_api_key(user_name, password)
api_key
###Output
_____no_output_____
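###Markdown
For reference, the Base64 encoding that `generate_api_key` performs can be reproduced with the `base64` module imported above. This is only a minimal sketch for illustration; the helper itself may include additional handling.
###Code
# Illustrative only: Base64-encode the "username:password" pair as described above
manual_api_key = base64.b64encode(f"{user_name}:{password}".encode("utf-8")).decode("utf-8")
###Output
_____no_output_____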
###Markdown
Alternative: enter manually the generated api key
###Code
#api_key =
###Output
_____no_output_____
###Markdown
4. Initialise the Harmonised Data Access (HDA) API request In order to initialise an API request, you have to initialise a dictionary that contains information on `dataset_id`, `api_key` and `download_directory_path`.Please enter the path of the directory where the data shall be downloaded to.
###Code
# Enter here the directory path where you want to download the data to
download_dir_path = './data/'
###Output
_____no_output_____
###Markdown
With `dataset_id`, `api_key` and `download_dir_path`, you can initialise the dictionary with the function [init](./hda_api_functions.ipynbinit).
###Code
hda_dict = init(dataset_id, api_key, download_dir_path)
###Output
_____no_output_____
###Markdown
Request access token Once initialised, you can request an access token with the function [get_access_token](./hda_api_functions.ipynbget_access_token). The access token is stored in the `hda_dict` dictionary.You might need to accept the Terms and Conditions, which you can do with the function [acceptTandC](./hda_api_functions.ipynbacceptTandC).
###Code
hda_dict = get_access_token(hda_dict)
###Output
_____no_output_____
###Markdown
Accept Terms and Conditions (if applicable)
###Code
hda_dict = acceptTandC(hda_dict)
###Output
_____no_output_____
###Markdown
5. Load data descriptor file and request data The Harmonised Data Access API can read your data request from a `JSON` file. In this JSON-based file, you can describe the dataset you are interested in downloading. The file is in principle a dictionary. The following keys can be defined:- `datasetID`: the dataset's collection ID- `stringChoiceValues`: type of dataset, e.g. 'processing level' or 'product type'- `dataRangeSelectValues`: time period you would like to retrieve data- `boundingBoxValues`: optional to define a subset of a global fieldYou can load the `JSON` file with `json.load()`.
###Code
with open('./s5p_data_descriptor.json', 'r') as f:
data = json.load(f)
data
###Output
_____no_output_____
###Markdown
Initiate the request by assigning a job ID The function [get_job_id](./hda_api_functions.ipynbget_job_id) will launch your data request and your request is assigned a `job ID`.
###Code
hda_dict = get_job_id(hda_dict,data)
###Output
_____no_output_____
###Markdown
Build list of file names to be ordered and downloaded The next step is to gather a list of file names available, based on your assigned `job ID`. The function [get_results_list](./hda_api_functions.ipynbget_results_list) creates the list.
###Code
hda_dict = get_results_list(hda_dict)
###Output
_____no_output_____
###Markdown
Create an `order ID` for each file to be downloaded The next step is to create an `order ID` for each file name to be downloaded. You can use the function [get_order_ids](./hda_api_functions.ipynbget_order_ids).
###Code
hda_dict = get_order_ids(hda_dict)
###Output
_____no_output_____
###Markdown
6. Download requested data As a final step, you can use the function [download_data](./hda_api_functions.ipynbdownload_data) to initialize the data download and to download each file that has been assigned an `order ID`.
###Code
download_data(hda_dict)
###Output
_____no_output_____ |
Training_TFKeras_CPU/4.0d-Training-HLF-TF_Keras_Petastorm_Parquet.ipynb | ###Markdown
Training of the High Level Feature classifier with TensorFlow/Keras and Petastorm**4.0 Tensorflow/Keras and Petastorm, HLF classifier** This notebook trains a dense neural network for the particle classifier using High Level Features. It uses TensorFlow/Keras on a single node. Data is read using the Petastorm library. Note, Spark is not used in this case. To run this notebook we used the following configuration:* *Software stack*: TensorFlow 1.14.0 or 2.0.0_beta1, Petastorm (`pip install petastorm`)* *Platform*: CentOS 7, Python 3.6 Create the Keras model
###Code
import tensorflow as tf
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
def create_model(nh_1, nh_2, nh_3):
## Create model
model = Sequential()
model.add(Dense(nh_1, input_shape=(14,), activation='relu'))
model.add(Dense(nh_2, activation='relu'))
model.add(Dense(nh_3, activation='relu'))
model.add(Dense(3, activation='softmax'))
## Compile model
optimizer = 'Adam'
loss = 'categorical_crossentropy'
model.compile(loss=loss, optimizer=optimizer, metrics=["accuracy"])
return model
# define the Keras model
keras_model = create_model(50,20,10)
###Output
_____no_output_____
###Markdown
Load data and train the Keras model
###Code
import os
CWD=os.getcwd()
PATH = 'file:///' + CWD + "/../Data/"
# PATH = "file:<full_path>/SparkDLTrigger/Data/"
file_train_dataset = PATH + "trainUndersampled_HLF_features.parquet"
file_test_dataset = PATH + "testUndersampled_HLF_features.parquet"
# We use the petastorm library to load and feed the training and test data in Parquet format
# It makes use of TensorFlow's tf.data.Dataset API
from petastorm import make_batch_reader
from petastorm.tf_utils import make_petastorm_dataset
test_data = make_batch_reader(file_test_dataset, num_epochs=None)
train_data = make_batch_reader(file_train_dataset, num_epochs=None)
# Materialize the test dataset as numpy array
import numpy as np
test_dataset_arrow=test_data.dataset.read()
print("Number of test rows:", test_dataset_arrow.num_rows)
%time y_test = np.array(test_dataset_arrow.column("encoded_label").to_pylist())
%time X_test = np.array(test_dataset_arrow.column("HLF_input").to_pylist())
# Training using tf.dataset and Petastorm
# Petastorm in this configuration uses Parquet row group size as batch size
# The row group size of the training set is 1MB (configured at dataset creation, see provided code)
print("Number of training rows:", train_data.dataset.read().num_rows)
with train_data as reader_train:
train_dataset = make_petastorm_dataset(reader_train) \
.map(lambda x: (tf.reshape(x.HLF_input, [-1, 14]), tf.reshape(x.encoded_label, [-1,3])))
#
# train the Keras model
#
history = keras_model.fit(train_dataset, steps_per_epoch=1500, \
validation_data=(X_test, y_test), \
epochs=5, verbose=1)
###Output
Number of training rows: 3426083
Epoch 1/5
1500/1500 [==============================] - 59s 40ms/step - loss: 0.3652 - accuracy: 0.8681 - val_loss: 0.2842 - val_accuracy: 0.8980
Epoch 2/5
1500/1500 [==============================] - 58s 39ms/step - loss: 0.2771 - accuracy: 0.9002 - val_loss: 0.2730 - val_accuracy: 0.9020
Epoch 3/5
1500/1500 [==============================] - 58s 39ms/step - loss: 0.2686 - accuracy: 0.9029 - val_loss: 0.2635 - val_accuracy: 0.9046
Epoch 4/5
1500/1500 [==============================] - 58s 39ms/step - loss: 0.2571 - accuracy: 0.9061 - val_loss: 0.2527 - val_accuracy: 0.9075
Epoch 5/5
1500/1500 [==============================] - 58s 39ms/step - loss: 0.2488 - accuracy: 0.9083 - val_loss: 0.2465 - val_accuracy: 0.9089
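###Markdown
Depending on the TensorFlow/Keras version used (1.14.0 vs 2.0.0), the metric keys stored in `history.history` are named either `acc`/`val_acc` or `accuracy`/`val_accuracy`; the training log above shows the latter. An optional quick check before plotting:
###Code
metric_keys = list(history.history)  # e.g. ['loss', 'accuracy', 'val_loss', 'val_accuracy'] on TF 2.0
###Output
_____no_output_____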
###Markdown
Performance metrics
###Code
%matplotlib notebook
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
# Graph with loss vs. epoch
plt.figure()
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(loc='upper right')
plt.title("HLF classifier loss")
plt.show()
# Graph with accuracy vs. epoch
%matplotlib notebook
plt.figure()
acc_key = 'accuracy' if 'accuracy' in history.history else 'acc'  # metric key names differ between Keras versions
plt.plot(history.history[acc_key], label='train')
plt.plot(history.history['val_' + acc_key], label='validation')
plt.ylabel('Accuracy')
plt.xlabel('epoch')
plt.legend(loc='lower right')
plt.title("HLF classifier accuracy")
plt.show()
###Output
_____no_output_____
###Markdown
Confusion Matrix
###Code
y_pred=history.model.predict(X_test)
y_true=y_test
from sklearn.metrics import accuracy_score
print('Accuracy of the HLF classifier: {:.4f}'.format(
accuracy_score(np.argmax(y_true, axis=1),np.argmax(y_pred, axis=1))))
import seaborn as sns
from sklearn.metrics import confusion_matrix
labels_name = ['qcd', 'tt', 'wjets']
labels = [0,1,2]
cm = confusion_matrix(np.argmax(y_true, axis=1), np.argmax(y_pred, axis=1), labels=labels)
## Normalize CM
cm = cm.astype(float) / cm.sum(axis=1)[:, np.newaxis]  # normalize each row (true class) so it sums to 1
fig, ax = plt.subplots()
ax = sns.heatmap(cm, annot=True, fmt='g')
ax.xaxis.set_ticklabels(labels_name)
ax.yaxis.set_ticklabels(labels_name)
plt.xlabel('Predicted labels')
plt.ylabel('True labels')
plt.show()
###Output
_____no_output_____
###Markdown
ROC and AUC
###Code
from sklearn.metrics import roc_curve, auc
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(3):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_pred[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Dictionary containing ROC-AUC for the three classes
roc_auc
%matplotlib notebook
# Plot roc curve
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.figure()
plt.plot(fpr[0], tpr[0], lw=2,
label='HLF classifier (AUC) = %0.4f' % roc_auc[0])
plt.plot([0, 1], [0, 1], linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('Background Contamination (FPR)')
plt.ylabel('Signal Efficiency (TPR)')
plt.title('$tt$ selector')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____ |
Insights_potus_speeches_stats.ipynb | ###Markdown
General Descriptive Stats on POTUS dataset
###Code
import numpy as np

# Note: `df` (the POTUS speeches DataFrame with `date`, `speaker` and `token_count` columns)
# is assumed to have been loaded in an earlier cell.
print('speeches spanning: ', df.date.max() - df.date.min())
print(f'from: {df.date.min()} to: {df.date.max()}')
print(f'{len(df)} speeches by {len(df.speaker.unique())} speakers')
display(df.head(2))
display(df.tail(2))
df.token_count.plot.box(title='Speeches Token Count')
df.token_count.describe()
speakers = df.speaker.unique()
print(df.speaker.unique())
print(df.speaker.nunique())
display(df.speaker.value_counts())
byspeaker_df = df.groupby('speaker').agg(
{
'date': ['min', 'max', ('duration', lambda d: max(d) - min(d))],
'token_count': [sum, 'mean', 'std', min, max],
'speaker': [('speeches_count', 'count')]
}
)
flatten_column_names = ['_'.join(col).strip('_') for col in byspeaker_df.columns]
byspeaker_df.columns = flatten_column_names
byspeaker_df['days'] = byspeaker_df.date_duration.dt.days
byspeaker_df['months'] = round(byspeaker_df.date_duration / np.timedelta64(1, 'M'), 0)
byspeaker_df['years'] = round(byspeaker_df.date_duration / np.timedelta64(1, 'Y'), 0)
display(byspeaker_df.head())
display(byspeaker_df.token_count_sum.describe())
byspeaker_df.token_count_mean.plot.box(title='Average speech token count across speakers')
byspeaker_df.speaker_speeches_count.nlargest(10)
byspeaker_df.nlargest(10, 'speaker_speeches_count')[['speaker_speeches_count']]\
.sort_values('speaker_speeches_count')\
.plot.barh()
top_10_stats = byspeaker_df.merge(
right=byspeaker_df.speaker_speeches_count.nlargest(10).reset_index(),
how='inner',
on='speaker',
suffixes=('', '_r')
)
top_10_stats = top_10_stats[['speaker', 'date_min', 'date_max', 'years', 'speaker_speeches_count', 'token_count_sum']]\
.sort_values('speaker_speeches_count', ascending=False)
top_10_stats['speaches_per_year'] = round(top_10_stats.speaker_speeches_count / top_10_stats.years)
display(top_10_stats)
byspeaker_df.years.plot.box()
print(byspeaker_df[['speaker_speeches_count', 'years']].query('years > 8').sort_values('years', ascending=False))
import matplotlib.pylab as plt
plt.figure(figsize=(5,5))
top5 = int(np.quantile(df['token_count'], 0.95))
plt.xlim(0,top5)
plt.xlabel('number of tokens in a speech')
df['token_count'].plot(kind='density')
print('5% of speeches contain more than ' + str(top5))
df['year'] = [d.year for d in df['date']]
import matplotlib.pylab as plt
plt.figure(figsize=(5,5))
plt.xlim(1789, 2016)
plt.xlabel('number of speeches per year')
df['year'].plot(kind='density')
y = df['year']
import matplotlib.pylab as plt
plt.figure(figsize=(15,5))
minm = 1789
maxm = 2016
plt.xlim(minm,maxm)
plt.xlabel('number of speeches per year')
y.plot(kind='hist', bins=228)
byspeaker_df.sort_values(by='date_min')
len(byspeaker_df)
###Output
_____no_output_____ |
Extract_Transform/.ipynb_checkpoints/ET_CA-checkpoint.ipynb | ###Markdown
Extraction
###Code
import pandas as pd

# Extract CSV into pandas df
df_ca = pd.read_csv('../resources/california-history.csv')
df_ca.head()
###Output
_____no_output_____
###Markdown
Drop Columns from Data
###Code
df_ca = df_ca[['date', 'state', 'deathIncrease', 'inIcuCurrently', 'positiveCasesViral', 'positiveIncrease', \
'totalTestResults', 'totalTestResultsIncrease']]
df_ca.head()
###Output
_____no_output_____
###Markdown
Rename Columns
###Code
clean_df_ca = df_ca.rename(columns={'deathIncrease' : 'deaths', 'hospitalizedIncrease' : 'daily hospitalization', \
'inIcuCurrently' : 'Icu hospitalized', 'positiveCasesViral' : 'positive cases viral', \
'positiveIncrease' : 'positive increase', 'totalTestResults' : 'test results total', \
'totalTestResultsIncrease' : 'test increase'})
clean_df_ca.head()
###Output
_____no_output_____
###Markdown
Replace NaN or Missing Values with 0
###Code
clean_df_ca = clean_df_ca.fillna(0)  # assign the result back so the NaN replacement actually takes effect
#Convert Clean CA Data to CSV for Merge
clean_df_ca.to_csv('../resources/clean_ca_data.csv')
clean_df_ca.head()
###Output
_____no_output_____ |
602 Project/Exploratory_Data_Analysis.ipynb | ###Markdown
EDA Importing Libraries
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Importing Dataset
###Code
dataset = pd.read_csv('Cleaned_Data.csv')
dataset.head().T
###Output
_____no_output_____
###Markdown
Crash Year Distribution
###Code
sns.countplot(x='Year', palette="pastel", data=dataset)
plt.gcf().set_size_inches(20,10)
plt.title('Year Distribution')
plt.xlabel('Year')
plt.xticks(rotation='vertical')
plt.ylabel('Count')
plt.show()
###Output
_____no_output_____
###Markdown
Observations- We observe that during the years 2017 and 2018 there were on average about 175,000 accidents per year.- There is a decrease in the number of accidents in 2020 and 2021 due to the pandemic. Monthly Distribution of Accidents
###Code
sns.catplot(y="Year", hue="Month", kind="count", palette="pastel", data=dataset)
plt.title('Monthly Distribution of Accidents')
plt.xlabel('Count')
plt.ylabel('Year')
plt.gcf().set_size_inches(20,20)
plt.show()
###Output
_____no_output_____
###Markdown
Observation- From the above graph we can observe that there is a sudden spike in the number of accidents in the months of May and June. Time of Accidents Distribution
###Code
sns.countplot(x='Hour', palette="pastel", data=dataset)
plt.gcf().set_size_inches(20,10)
plt.title('Time Distribution')
plt.xlabel('Time')
plt.xticks(rotation='vertical')
plt.ylabel('Count')
plt.show()
###Output
_____no_output_____
###Markdown
Observations- Contrary to common perception, most of the accidents took place in the afternoon. Contributing factor Distribution
###Code
sns.countplot(x='CONTRIBUTING_FACTOR_1', palette="pastel", data=dataset, order=dataset.CONTRIBUTING_FACTOR_1.value_counts().index)
plt.gcf().set_size_inches(20,10)
plt.title('Contributing factor Distribution')
plt.xlabel('Contributing factors')
plt.xticks(rotation='vertical')
plt.ylabel('Count')
plt.show()
###Output
_____no_output_____
###Markdown
Observations- Driver inattention was the leading contributing factor for accidents, followed by following too closely; notably, not DUI or DWI. Point of Impact and Pre Crash action correlation.
###Code
sns.catplot(y="POINT_OF_IMPACT", hue="PRE_CRASH", kind="count", data=dataset)
plt.title('Point of Impact and Pre Crash action correlation')
plt.xlabel('Count')
plt.ylabel('Point of Impact')
plt.gcf().set_size_inches(30,15)
plt.show()
###Output
_____no_output_____
###Markdown
Observations- For accidents with front-end damage, the driver was usually going straight.- For accidents with rear-end damage, the driver was mostly backing the car. Point of Impact and Contributing Factor correlation
###Code
sns.catplot(y="POINT_OF_IMPACT", hue="CONTRIBUTING_FACTOR_1", kind="count", data=dataset)
plt.title('Point of Impact and Contributing Factor correlation')
plt.xlabel('Count')
plt.ylabel('Point of Impact')
plt.gcf().set_size_inches(30,15)
plt.show()
###Output
_____no_output_____
###Markdown
Observations- For accidents with front-end damage, the main contributing factor was driver inattention.- For accidents with rear-end damage, the main contributing factor was backing the car unsafely.- For accidents with damage on the sides, the main contributing factor was unsafe lane changing. Pre Crash and Contributing Factor correlation
###Code
sns.catplot(y="PRE_CRASH", hue="CONTRIBUTING_FACTOR_1", kind="count", data=dataset)
plt.title('Pre Crash and Contributing Factor correlation')
plt.xlabel('Count')
plt.ylabel('Pre Crash Action')
plt.gcf().set_size_inches(30,15)
plt.show()
###Output
_____no_output_____
###Markdown
Vehicle Make Distribution
###Code
sns.countplot(x='MAKE', palette="pastel", data=dataset, order=dataset.MAKE.value_counts().index)
plt.gcf().set_size_inches(20,10)
plt.title('Vehicle Make Distribution')
plt.xlabel('Vehicle Make')
plt.xticks(rotation='vertical')
plt.ylabel('Count')
plt.show()
###Output
_____no_output_____
###Markdown
How old the car is Distribution
###Code
sns.countplot(x='how_old', palette="pastel", data=dataset)
plt.gcf().set_size_inches(20,10)
plt.title('how old the car is Distribution')
plt.xlabel('years')
plt.xticks(rotation='vertical')
plt.ylabel('Count')
plt.show()
###Output
_____no_output_____
###Markdown
Vehicle Occupants and Contributing Factor Distribution.
###Code
sns.catplot(y="VEHICLE_OCCUPANTS", hue="CONTRIBUTING_FACTOR_1", kind="count", palette="pastel", data=dataset)
plt.title('Vehicle Occupants and Contributing Factor Distribution')
plt.xlabel('Count')
plt.ylabel('Vehicle Occupants')
plt.gcf().set_size_inches(30,20)
plt.show()
###Output
_____no_output_____
###Markdown
Observations- When there is only one person in the vehicle, lack of attention and over-speeding were the main causes of accidents.
###Code
sns.catplot(x="DRIVER_SEX", hue="CONTRIBUTING_FACTOR_1", kind="count", palette="pastel", data=dataset)
plt.title('Sex and Contributing Factors')
plt.xlabel('Drivers sex')
plt.ylabel('Count')
plt.gcf().set_size_inches(20,15)
plt.show()
###Output
_____no_output_____ |
2021/Day 13.ipynb | ###Markdown
Paper folding* We are asked to fold transparent paper. Folding not only doubles the thickness of the paper (so thickness progresses exponentially), but the length of paper required in the curve of the fold _quadruples_, as famously proven by [Britney Gallivan](https://www.youtube.com/watch?v=AfPDvhKvaa0), who holds the World Record for paper folding, having folded paper 12 times. An episode of the BBC OneShow made a [valiant attempt at 13 folds](https://www.youtube.com/watch?v=ZQ0QWn7Z-IQ), illustrating how hard folding paper can be! I note, with some satisfaction, that our puzzle input asks us to fold the 'paper' 12 times as well. :-)We'll have it easier. If we model the paper as a numpy 2D boolean array, we can just use slicing and reversing, then the boolean `|` or operator to combine the markings. A quick glance at the folding lines also shows that the matrix will be folded along the exact middle each time, so we don't have to account for shifting a smaller or larger half over the other side.
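The core trick can be shown on a tiny example first (a toy sketch with made-up dot positions, using plain (row, column) indexing rather than the (x, y) indexing of the class below):
###Code
import numpy as np

# Toy 5x3 "sheet" with three dots, folded up along the middle row (index 2)
sheet = np.zeros((5, 3), dtype=bool)
sheet[0, 1] = sheet[4, 0] = sheet[3, 2] = True
top, bottom = sheet[:2], sheet[3:]   # the fold line itself (row 2) is dropped
folded = top | bottom[::-1]          # mirror the bottom half and overlay it
assert (folded == np.array([[True, True, False],
                            [False, False, True]])).all()
###Output
_____no_output_____
###Markdown
The class below applies the same slicing-and-OR idea in both directions, driven by the parsed fold instructions.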
###Code
from __future__ import annotations
import re
from io import BytesIO
from typing import Iterable
import numpy as np
from PIL import Image
_FOLD_INSTRUCTION = re.compile(r"fold along (?P<axis>[xy])=(?P<line>\d+)").match
class TransparentOrigami:
def __init__(self, paper: np.array) -> None:
self._matrix = paper
@classmethod
def from_positions(
cls, positions: list[str], width: int, height: int
) -> TransparentOrigami:
dot_positions = np.array(
[tuple(map(int, line.split(","))) for line in positions], dtype=np.uint
)
# matrix is indexed using (x, y), so is transposed from the usual (y, x)
matrix = np.zeros((width, height), dtype=np.bool_)
matrix[dot_positions[:, 0], dot_positions[:, 1]] = True
return cls(matrix)
@classmethod
def from_instructions(cls, instructions: str) -> Iterable[TransparentOrigami]:
positions, steps = instructions.split("\n\n")
instr = (_FOLD_INSTRUCTION(step).groups() for step in steps.splitlines())
steps = [{axis: int(line)} for axis, line in instr]
width = next(x * 2 + 1 for s in steps if (x := s.get("x")))
height = next(y * 2 + 1 for s in steps if (y := s.get("y")))
paper = cls.from_positions(positions.splitlines(), width, height)
yield paper
for step in steps:
paper = paper.fold(**step)
yield paper
def __str__(self) -> str:
return np.array2string( # transpose to (y, x) indexing for display
self._matrix.T, separator="", formatter={"bool": ".#".__getitem__}
).translate(
# Remove spaces and square brackets, [ and ]
dict.fromkeys((0x20, 0x5B, 0x5D))
)
def _repr_png_(self) -> bytes:
img = Image.fromarray(self._matrix.T)
f = BytesIO()
img.resize((img.width * 10, img.height * 10)).save(f, "PNG")
return f.getvalue()
def __len__(self) -> int:
return self._matrix.sum()
def fold(self, x: int | None = None, y: int | None = None) -> TransparentOrigami:
if y is not None:
top, bottom = self._matrix[:, :y], self._matrix[:, y + 1 :]
return type(self)(top | bottom[:, ::-1])
else:
left, right = self._matrix[:x, :], self._matrix[x + 1 :, :]
return type(self)(left | right[::-1, :])
test_instructions = """\
6,10
0,14
9,10
0,3
10,4
4,11
6,0
6,12
4,1
0,13
10,12
3,4
3,0
8,4
1,10
2,14
8,10
9,0
fold along y=7
fold along x=5
"""
test_paper_folds = list(TransparentOrigami.from_instructions(test_instructions))
assert len(test_paper_folds[1]) == 17
import aocd
instructions = aocd.get_data(day=13, year=2021)
paper_folds = list(TransparentOrigami.from_instructions(instructions))
print("Part 1:", len(paper_folds[1]))
###Output
Part 1: 753
###Markdown
Part 2, keep on foldingFor part 2, we only need to display the result after following all folding instructions. I gave my origami class a [`_repr_png_` method](https://ipython.readthedocs.io/en/stable/api/generated/IPython.core.formatters.html?highlight=_repr_png_IPython.core.formatters.PNGFormatter) so that Jupyter renders the folded paper as an image.
###Code
print("Part 2:")
paper_folds[-1]
###Output
Part 2:
###Markdown
Each individual folding resultI don't (yet) have the time to create an animation from this, but here are all the paper states, from the initial instructions to the final, 12th fold:
###Code
from IPython.display import display
for res in paper_folds:
display(res)
###Output
_____no_output_____
###Markdown
Paper folding* We are asked to fold transparent paper. Folding not only doubles the thickness of the paper (so thickness progresses exponentially), but the length of paper required in the curve of the fold _quadruples_, as famously proven by [Britney Gallivan](https://www.youtube.com/watch?v=AfPDvhKvaa0), who holds the World Record for paper folding, having folded paper 12 times. An episode of the BBC OneShow made a [valiant attempt at 13 folds](https://www.youtube.com/watch?v=ZQ0QWn7Z-IQ), illustrating how hard folding paper can be! I note, with some satisfaction, that our puzzle input asks us to fold the 'paper' 12 times as well. :-)We'll have it easier. If we model the paper as a numpy 2D boolean array, we can just use slicing and reversing, then the boolean `|` or operator to combine the markings. A quick glance at the folding lines also shows that the matrix will be folded along the exact middle each time, so we don't have to account for shifting a smaller or larger half over the other side.
###Code
from __future__ import annotations
import re
from io import BytesIO
from typing import Iterable, Optional
import numpy as np
from PIL import Image
_FOLD_INSTRUCTION = re.compile(r"fold along (?P<axis>[xy])=(?P<line>\d+)").match
class TransparentOrigami:
def __init__(self, paper: np.array) -> None:
self._matrix = paper
@classmethod
def from_positions(
cls, positions: list[str], width: int, height: int
) -> TransparentOrigami:
dot_positions = np.array(
[tuple(map(int, line.split(","))) for line in positions], dtype=np.uint
)
# matrix is indexed using (x, y), so is transposed from the usual (y, x)
matrix = np.zeros((width, height), dtype=np.bool_)
matrix[dot_positions[:, 0], dot_positions[:, 1]] = True
return cls(matrix)
@classmethod
def from_instructions(cls, instructions: str) -> Iterable[TransparentOrigami]:
positions, steps = instructions.split("\n\n")
instr = (_FOLD_INSTRUCTION(step).groups() for step in steps.splitlines())
steps = [{axis: int(line)} for axis, line in instr]
width = next(x * 2 + 1 for s in steps if (x := s.get("x")))
height = next(y * 2 + 1 for s in steps if (y := s.get("y")))
paper = cls.from_positions(positions.splitlines(), width, height)
yield paper
for step in steps:
paper = paper.fold(**step)
yield paper
def __str__(self) -> str:
return np.array2string( # transpose to (y, x) indexing for display
self._matrix.T, separator="", formatter={"bool": ".#".__getitem__}
).translate(
# Remove spaces and square brackets, [ and ]
dict.fromkeys((0x20, 0x5B, 0x5D))
)
def _repr_png_(self) -> bytes:
img = Image.fromarray(self._matrix.T)
f = BytesIO()
img.resize((img.width * 10, img.height * 10)).save(f, "PNG")
return f.getvalue()
def __len__(self) -> int:
return self._matrix.sum()
def fold(
self, x: Optional[int] = None, y: Optional[int] = None
) -> TransparentOrigami:
if y is not None:
top, bottom = self._matrix[:, :y], self._matrix[:, y + 1 :]
return type(self)(top | bottom[:, ::-1])
else:
left, right = self._matrix[:x, :], self._matrix[x + 1 :, :]
return type(self)(left | right[::-1, :])
test_instructions = """\
6,10
0,14
9,10
0,3
10,4
4,11
6,0
6,12
4,1
0,13
10,12
3,4
3,0
8,4
1,10
2,14
8,10
9,0
fold along y=7
fold along x=5
"""
test_paper_folds = list(TransparentOrigami.from_instructions(test_instructions))
assert len(test_paper_folds[1]) == 17
import aocd
instructions = aocd.get_data(day=13, year=2021)
paper_folds = list(TransparentOrigami.from_instructions(instructions))
print("Part 1:", len(paper_folds[1]))
###Output
Part 1: 753
###Markdown
Part 2, keep on foldingFor part 2, we only need to display the result after following all folding instructions. I gave my origami class a [`_repr_png_` method](https://ipython.readthedocs.io/en/stable/api/generated/IPython.core.formatters.html?highlight=_repr_png_IPython.core.formatters.PNGFormatter) so that Jupyter renders the folded paper as an image.
###Code
print("Part 2:")
paper_folds[-1]
###Output
Part 2:
###Markdown
Each individual folding resultI don't (yet) have the time to create an animation from this, but here are all the paper states, from the initial instructions to the final, 12th fold:
###Code
from IPython.display import display
for res in paper_folds:
display(res)
###Output
_____no_output_____ |
Understanding_and_Creating_Binary_Classification_NNs/1_layer_toy_network_MSE_AND_dataset.ipynb | ###Markdown
Nothing But NumPy: A 1-layer Binary Classification Neural Network on AND data Using MSE Cost functionPart of the blog ["Nothing but NumPy: Understanding & Creating Binary Classification Neural Networks with Computational Graphs from Scratch"](https://medium.com/@rafayak/nothing-but-numpy-understanding-creating-binary-classification-neural-networks-with-e746423c8d5c)- by [Rafay Khan](https://twitter.com/RafayAK)In this notebook we'll create a 1-layer neural network (i.e. just an output layer) and train it on the AND dataset. We'll set custom weights and see the shortfall of using the Mean Squared Error (MSE) Cost function in a Binary Classification setting. First, let's import NumPy, our neural net Layers, the Cost functions and helper functions._Feel free to look into the helper functions in the utils directory._
###Code
import numpy as np
from Layers.LinearLayer import LinearLayer
from Layers.ActivationLayer import SigmoidLayer
from util.utilities import *
from util.cost_functions import compute_mse_cost, compute_stable_bce_cost, compute_bce_cost, compute_keras_like_bce_cost
# to show all the generated plots inline in the notebook
%matplotlib inline
###Output
_____no_output_____
###Markdown
The AND data:
###Code
# This is our AND gate data
X = np.array([
[0, 0],
[0, 1],
[1, 0],
[1, 1]
])
Y = np.array([
[0],
[0],
[0],
[1]
])
###Output
_____no_output_____
###Markdown
Let's set up training data. Recall, data needs to be in $(features \times \text{number_of_examples})$ shape. So, we need to transpose X and Y.
###Code
X_train = X.T
Y_train = Y.T
X_train
Y_train
###Output
_____no_output_____
###Markdown
This is the neural net architecture we'll use
###Code
# define training constants
learning_rate = 1
number_of_epochs = 5000
np.random.seed(48) # set seed value so that the results are reproducible
                   # (weights will now be initialized to the same pseudo-random numbers, each time)
# Our network architecture has the shape:
# (input)--> [Linear->Sigmoid] -->(output)
#------ LAYER-1 ----- define output layer that takes in training data
Z1 = LinearLayer(input_shape=X_train.shape, n_out=1, ini_type='plain')
A1 = SigmoidLayer(Z1.Z.shape)
###Output
_____no_output_____
###Markdown
We'll set custom weights to break the Mean Squared Error Cost function for gradient descent. Let's see what the shape of the weight matrix ($W$) is
###Code
Z1.params['W'].shape
###Output
_____no_output_____
###Markdown
So, the new weights will also need to be of the same shape
###Code
Z1.params['W'] = np.array([[-10, 60]])
Z1.params # params for a linear layer are weights(W) and biases(b), stored in a dictionary
###Output
_____no_output_____
###Markdown
Now we can start the training loop:
###Code
costs = [] # initially empty list, this will store all the costs after a certain number of epochs
# Start training
for epoch in range(number_of_epochs):
# ------------------------- forward-prop -------------------------
Z1.forward(X_train)
A1.forward(Z1.Z)
# ---------------------- Compute Cost ----------------------------
cost, dA1 = compute_mse_cost(Y=Y_train, Y_hat=A1.A)
# print and store Costs every 100 iterations and of the last iteration.
if (epoch % 100) == 0 or epoch == number_of_epochs - 1:
print("Cost at epoch#{}: {}".format(epoch, cost))
costs.append(cost)
# ------------------------- back-prop ----------------------------
A1.backward(dA1)
Z1.backward(A1.dZ)
# ----------------------- Update weights and bias ----------------
Z1.update_params(learning_rate=learning_rate)
###Output
Cost at epoch#0: 0.1562500002576208
Cost at epoch#100: 0.1291735909412119
Cost at epoch#200: 0.12688149709290097
Cost at epoch#300: 0.12617626580805902
Cost at epoch#400: 0.12584559300260187
Cost at epoch#500: 0.12565627734621776
Cost at epoch#600: 0.12553447923266486
Cost at epoch#700: 0.12544988783459748
Cost at epoch#800: 0.12538787511021365
Cost at epoch#900: 0.12534055052085766
Cost at epoch#1000: 0.12530329644095023
Cost at epoch#1100: 0.12527323672250662
Cost at epoch#1200: 0.12524848939298686
Cost at epoch#1300: 0.12522777278273756
Cost at epoch#1400: 0.12521018468712414
Cost at epoch#1500: 0.12519507209383407
Cost at epoch#1600: 0.12518195099769108
Cost at epoch#1700: 0.12517045523023962
Cost at epoch#1800: 0.12516030277453224
Cost at epoch#1900: 0.12515127298278708
Cost at epoch#2000: 0.1251431907979599
Cost at epoch#2100: 0.12513591559406895
Cost at epoch#2200: 0.12512933313404928
Cost at epoch#2300: 0.12512334967604713
Cost at epoch#2400: 0.12511788758823347
Cost at epoch#2500: 0.12511288204086518
Cost at epoch#2600: 0.12510827847951447
Cost at epoch#2700: 0.12510403067275325
Cost at epoch#2800: 0.1251000991877359
Cost at epoch#2900: 0.1250964501882969
Cost at epoch#3000: 0.12509305447879604
Cost at epoch#3100: 0.12508988673711377
Cost at epoch#3200: 0.12508692489460294
Cost at epoch#3300: 0.1250841496312149
Cost at epoch#3400: 0.12508154396162774
Cost at epoch#3500: 0.12507909289382693
Cost at epoch#3600: 0.12507678314578227
Cost at epoch#3700: 0.12507460290902334
Cost at epoch#3800: 0.12507254165031162
Cost at epoch#3900: 0.12507058994444656
Cost at epoch#4000: 0.12506873933265575
Cost at epoch#4100: 0.1250669822021222
Cost at epoch#4200: 0.12506531168306306
Cost at epoch#4300: 0.1250637215604502
Cost at epoch#4400: 0.12506220619800332
Cost at epoch#4500: 0.12506076047251174
Cost at epoch#4600: 0.12505937971688713
Cost at epoch#4700: 0.12505805967062283
Cost at epoch#4800: 0.1250567964365623
Cost at epoch#4900: 0.1250555864430598
Cost at epoch#4999: 0.12505443777388855
###Markdown
We have broken Gradient Descent by exploiting the MSE Cost function in a Binary Classifier. No matter the Learning rate, Gradient Descent cannot recover from a bad position (concave area) on the Loss/Cost Curve; a short derivation of why follows below. Now let's see how well the neural net performs on the training data after the training has finished. The `predict` helper function in the cell below returns three things:* `p`: predicted labels (output 1 if the predicted output is greater than the classification threshold `thresh`)* `probas`: raw probabilities (how sure the neural net thinks the output is 1, this is just `P_hat`)* `accuracy`: the number of correct predictions out of the total predictions
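Why MSE gets stuck here (a sketch, assuming the sigmoid output $\hat{y} = \sigma(z)$ and, up to constants, the squared-error loss $L = \tfrac{1}{2}(\hat{y} - y)^2$; the exact constants used by the helper cost functions are an assumption):$$\frac{\partial L}{\partial z} = (\hat{y} - y)\,\sigma'(z) = (\hat{y} - y)\,\hat{y}\,(1 - \hat{y})$$When the sigmoid saturates at the wrong answer (e.g. $\hat{y} \approx 0$ while $y = 1$), the factor $\hat{y}(1-\hat{y})$ drives the gradient towards zero, so the deliberately bad starting weights leave gradient descent stuck. The binary cross-entropy cost, $L = -\big(y\log\hat{y} + (1-y)\log(1-\hat{y})\big)$, gives $\frac{\partial L}{\partial z} = \hat{y} - y$ instead, which stays large whenever the prediction is badly wrong.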
###Code
classifcation_thresh = 0.5
predicted_outputs, p_hat, accuracy = predict(X=X_train, Y=Y_train,
Zs=[Z1], As=[A1], thresh=classifcation_thresh)
print("The predicted outputs of first 5 examples: \n{}".format(predicted_outputs[:,:5]))
print("The predicted prbabilities of first 5 examples:\n {}".format(np.round(p_hat[:, :5], decimals=3)) )
print("\nThe accuracy of the model is: {}%".format(accuracy))
###Output
The predicted outputs of first 5 examples:
[[ 0. 1. 0. 1.]]
The predicted prbabilities of first 5 examples:
[[ 0.021 1. 0. 1. ]]
The accuracy of the model is: 75.0%
###Markdown
___The accuracy of the model shows it doing OK, but recall from the blog that accuracy alone is a misleading metric. Let's plot the Decision Boundary to make things much clearer___ The Learning Curve
###Code
plot_learning_curve(costs, learning_rate, total_epochs=number_of_epochs, save=True)
###Output
_____no_output_____
###Markdown
The Decision Boundary
###Code
plot_decision_boundary(lambda x: predict_dec(Zs=[Z1], As=[A1], X=x.T, thresh=classifcation_thresh),
X=X_train.T, Y=Y_train, axis_lines=True, save=True)
###Output
_____no_output_____
###Markdown
The Shaded Decision Boundary
###Code
plot_decision_boundary_shaded(lambda x: predict_dec(Zs=[Z1], As=[A1], X=x.T, thresh=classifcation_thresh),
X=X_train.T, Y=Y_train, axis_lines=True, save=True)
###Output
_____no_output_____
###Markdown
The Decision Boundary with Shortest DistancesPlay with the `classifcation_thresh` and visualize the effects
###Code
plot_decision_boundary_distances(lambda x: predict_dec(Zs=[Z1], As=[A1], X=x.T, thresh=classifcation_thresh),
X=X_train.T, Y=Y_train, axis_lines=True)
###Output
_____no_output_____ |
Natural Language Processing Specialization/autocorrect_and_minimum_edit_distance/NLP_C2_W1_lecture_nb_01.ipynb | ###Markdown
NLP Course 2 Week 1 Lesson : Building The Model - Lecture Exercise 01Estimated Time: 10 minutes Vocabulary Creation Create a tiny vocabulary from a tiny corpusIt's time to start small ! Imports and Data
###Code
# imports
import re # regular expression library; for tokenization of words
from collections import Counter # collections library; counter: dict subclass for counting hashable objects
import matplotlib.pyplot as plt # for data visualization
# the tiny corpus of text !
text = 'red pink pink blue blue yellow green green ORANGE BLUE BLUE PINK' # 🌈
print(text)
print('string length : ',len(text))
###Output
red pink pink blue blue yellow green green ORANGE BLUE BLUE PINK
string length : 64
###Markdown
Preprocessing
###Code
# convert all letters to lower case
text_lowercase = text.lower()
print(text_lowercase)
print('string length : ',len(text_lowercase))
# some regex to tokenize the string to words and return them in a list
words = re.findall(r'\w+', text_lowercase)
print(words)
print('count : ',len(words))
###Output
['red', 'pink', 'pink', 'blue', 'blue', 'yellow', 'green', 'green', 'orange', 'blue', 'blue', 'pink']
count : 12
###Markdown
Create VocabularyOption 1 : A set of distinct words from the text
###Code
# create vocab
vocab = set(words)
print(vocab)
print('count : ',len(vocab))
###Output
{'blue', 'yellow', 'orange', 'green', 'red', 'pink'}
count : 6
###Markdown
Add Information with Word CountsOption 2 : Two alternatives for including the word count as well
###Code
# create vocab including word count
counts_a = dict()
for w in words:
counts_a[w] = counts_a.get(w,0)+1
print(counts_a)
print('count : ',len(counts_a))
# create vocab including word count using collections.Counter
counts_b = dict()
counts_b = Counter(words)
print(counts_b)
print('count : ',len(counts_b))
# barchart of sorted word counts
d = {'blue': counts_b['blue'], 'pink': counts_b['pink'], 'red': counts_b['red'], 'yellow': counts_b['yellow'], 'orange': counts_b['orange']}
plt.bar(range(len(d)), list(d.values()), align='center', color=d.keys())
_ = plt.xticks(range(len(d)), list(d.keys()))
###Output
_____no_output_____
###Markdown
Ungraded ExerciseNote that `counts_b`, above, returned by `collections.Counter` is sorted by word countCan you modify the tiny corpus of ***text*** so that a new color appears between ***pink*** and ***red*** in `counts_b` ?Do you need to run all the cells again, or just specific ones ?
###Code
print('counts_b : ', counts_b)
print('count : ', len(counts_b))
###Output
counts_b : Counter({'blue': 4, 'pink': 3, 'green': 2, 'red': 1, 'yellow': 1, 'orange': 1})
count : 6
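###Markdown
One possible way to approach the exercise (an illustrative sketch, not the only answer) is to append a new color to the corpus; only the tokenization and counting steps need to be re-run, condensed here into a single cell:
###Code
# Hypothetical tweak: two occurrences of a new color, e.g. 'purple', give it count 2,
# which places it between 'pink' (count 3) and 'red' (count 1) in counts_b
text = text + ' purple purple'
words = re.findall(r'\w+', text.lower())
counts_b = Counter(words)
###Output
_____no_output_____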
|
notebooks/contracts_intro.ipynb | ###Markdown
Introduction to US Federal Government Contracts * *This notebook is part of the [Government Procurement Queries](https://github.com/antontarasenko/gpq) project* The Dataset The BigQuery dataset (17 years of data, 45mn transactions, $6.7tn worth of goods and services):- [gpqueries:contracts](https://bigquery.cloud.google.com/dataset/gpqueries:contracts)*Important:* You need a Google account and a Google Cloud project to access the data (both free). Google offers you to create a new project when you open BigQuery. Do it. Then you'll need to follow Google's instructions and enable BigQuery in this project. Table `gpqueries:contracts.raw` Table [`gpqueries:contracts.raw`](https://bigquery.cloud.google.com/table/gpqueries:contracts.raw) contains the unmodified data from the [USASpending.gov archives](https://www.usaspending.gov/DownloadCenter/Pages/dataarchives.aspx). It's constructed from `_All_Contracts_Full_20160515.csv.zip` files and includes contracts from 2000 to May 15, 2016.Table `gpqueries:contracts.raw` contains 45M rows and 225 columns.Each row refers to a transaction (a purchase or refund) made by a federal agency. It may be a pizza or an airplane.The columns are grouped into categories:- Transaction: `unique_transaction_id`-`baseandalloptionsvalue`- Buyer (government agency): `maj_agency_cat`-`fundedbyforeignentity`- Dates: `signeddate`-`lastdatetoorder`, `last_modified_date`- Contract: `contractactiontype`-`programacronym`- Contractor (supplier, vendor): `vendorname`-`statecode`- Place of performance: `PlaceofPerformanceCity`-`placeofperformancecongressionaldistrict`- Product or service bought: `psc_cat`-`manufacturingorganizationtype`- General contract information: `agencyid`-`idvmodificationnumber`- Competitive procedure: `solicitationid`-`statutoryexceptiontofairopportunity`- Contractor details: `organizationaltype`-`otherstatutoryauthority`- Contractor's executives: `prime_awardee_executive1`-`interagencycontractingauthority`Detailed description for each variable is available in the official codebook:- [`USAspending.govDownloadsDataDictionary.pdf`](https://www.usaspending.gov/DownloadCenter/Documents/USAspending.govDownloadsDataDictionary.pdf) Queries BigQuery Web GUI You can execute queries mentioned here at (press "Compose query"). Datalab This notebook was written in Google Datalab. You may need the libraries imported below to replicate it:
###Code
import gcp.bigquery as bq
###Output
_____no_output_____
###Markdown
Within Datalab, you can define queries with [`sql` magic](https://github.com/catherinedevlin/ipython-sql) like this:
###Code
%%sql --module gpq
define query totals
select
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated,
count(unique(dunsnumber)) vendors,
count(unique(solicitationid)) purchase_procedures
from
gpqueries:contracts.raw
###Output
_____no_output_____
###Markdown
And execute with `bq` to get a dataframe:
###Code
bq.Query(gpq.totals).to_dataframe()
###Output
_____no_output_____
###Markdown
Which means we're dealing with 44.5M transactions totalling 6.7 trillion dollars. These purchases came from 622k vendors that won 2.2mn solicitations issued by government agencies. Data Mining Government Clients Suppose you want to start selling to the government. While [FBO.gov](http://www.fbo.gov/) publishes government RFPs and you can apply there, government agencies often issue requests when they've already chosen the supplier. Agencies go through FBO.gov because it's a mandatory step for deals north of $25K. But winning at this stage is unlikely if an RFP is already tailored for another supplier.Reaching warm leads in advance would increase chances of winning a government contract. The contracts data helps identify the warm leads by looking at purchases in the previous years.There're several ways of searching through those years. Who Buys What You Make The goods and services bought in each transaction are encoded in the variable `productorservicecode`. Top ten product categories according to this variable:
###Code
%%sql
select
substr(productorservicecode, 1, 4) product_id,
first(substr(productorservicecode, 7)) product_name,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
group by
product_id
order by
sum_dollarsobligated desc
limit 10
###Output
_____no_output_____
###Markdown
You can find agencies that buy products like yours. If it's "software":
###Code
%%sql
select
substr(agencyid, 1, 4) agency_id,
first(substr(agencyid, 7)) agency_name,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
where
productorservicecode contains 'software'
group by
agency_id
order by
sum_dollarsobligated desc
ignore case
###Output
_____no_output_____
###Markdown
What Firms in Your Industry Sell to the Government Another way to find customers is the variable `principalnaicscode` that encodes the industry in which the vendor does business.The list of NAICS codes is available at [Census.gov](http://www.census.gov/cgi-bin/sssd/naics/naicsrch?chart=2012), but you can do text search in the table. Let's find who bought software from distributors in 2015:
###Code
%%sql
select
substr(agencyid, 1, 4) agency_id,
first(substr(agencyid, 7)) agency_name,
substr(principalnaicscode, 1, 6) naics_id,
first(substr(principalnaicscode, 9)) naics_name,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
where
principalnaicscode contains 'software' and
fiscal_year = 2015
group by
agency_id, naics_id
order by
sum_dollarsobligated desc
ignore case
###Output
_____no_output_____
###Markdown
Inspecting Specific Transactions You can learn details from looking at transactions for a specific `(agency, NAICS)` pair. For example, what software does TSA buy?
###Code
%%sql
select
fiscal_year,
dollarsobligated,
vendorname, city, state, annualrevenue, numberofemployees,
descriptionofcontractrequirement
from
gpqueries:contracts.raw
where
agencyid contains 'transportation security administration' and
principalnaicscode contains 'computer and software stores'
ignore case
###Output
_____no_output_____
###Markdown
Alternatively, specify vendors your product relates to and check how the government uses it. Top deals in data analytics:
###Code
%%sql
select
agencyid,
dollarsobligated,
vendorname,
descriptionofcontractrequirement
from
gpqueries:contracts.raw
where
vendorname contains 'tableau' or
vendorname contains 'socrata' or
vendorname contains 'palantir' or
vendorname contains 'revolution analytics' or
vendorname contains 'mathworks' or
vendorname contains 'statacorp' or
vendorname contains 'mathworks'
order by
dollarsobligated desc
limit
100
ignore case
###Output
_____no_output_____
###Markdown
Searching Through Descriptions Full-text search and regular expressions for the variable `descriptionofcontractrequirement` narrow results for relevant product groups:
###Code
%%sql
select
agencyid,
dollarsobligated,
descriptionofcontractrequirement
from
gpqueries:contracts.raw
where
descriptionofcontractrequirement contains 'body camera'
limit
100
ignore case
###Output
_____no_output_____
###Markdown
Some rows of `descriptionofcontractrequirement` contain codes like "IGF::CT::IGF". These codes classify the purchase into three groups of "[Inherently Governmental Functions](https://www.fpds.gov/fpdsng_cms/index.php/en/newsroom/108-nherently-governmental-functions.html)" (IGF):1. IGF::CT::IGF for Critical Functions2. IGF::CL::IGF for Closely Associated3. IGF::OT::IGF for Other Functions Narrowing Your Geography You can find local opportunities using variables for vendors (`city`, `state`) and services sold (`PlaceofPerformanceCity`, `pop_state_code`). The states where most contracts are delivered:
###Code
%%sql
select
substr(pop_state_code, 1, 2) state_code,
first(substr(pop_state_code, 4)) state_name,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
group by
state_code
order by
sum_dollarsobligated desc
###Output
_____no_output_____
###Markdown
Facts about Government Contracting Let's check some popular statements about government contracting. Small Businesses Win Most Contracts Contractors had to report their revenue and the number of employees. This makes it easy to check whether small businesses are welcome in government contracting:
###Code
%%sql --module gpq
define query vendor_size_by_agency
select
substr(agencyid, 1, 4) agency_id,
first(substr(agencyid, 7)) agency_name,
nth(11, quantiles(annualrevenue, 21)) vendor_median_annualrevenue,
nth(11, quantiles(numberofemployees, 21)) vendor_median_numberofemployees,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
group by
agency_id
having
transactions > 1000 and
sum_dollarsobligated > 10e6
order by
vendor_median_annualrevenue asc
bq.Query(gpq.vendor_size_by_agency).to_dataframe()
###Output
_____no_output_____
###Markdown
The median shows the most likely supplier. Agencies at the top of the table actively employ vendors whose annual revenue is less than $1mn. The Department of Defense, the largest buyer with $4.5tn worth of goods and services bought over these 17 years, has a median vendor with $2.5mn in revenue and 20 employees. It means that half of the DoD's vendors have less than $2.5mn in revenue. Set-Aside Deals Take a Small Share Set-aside purchases are reserved for special categories of suppliers, like women-, minority-, and veteran-owned businesses. There's a lot of confusion about their share in transactions. We can settle this confusion with data:
###Code
%%sql
select
womenownedflag,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
group by
womenownedflag
###Output
_____no_output_____
###Markdown
Women-owned businesses make up about one tenth of the transactions, but their share in terms of sales is only 3.7%. A cross-tabulation for major set-aside categories:
###Code
%%sql
select
womenownedflag, veteranownedflag, minorityownedbusinessflag,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
group by
womenownedflag, veteranownedflag, minorityownedbusinessflag
order by
womenownedflag, veteranownedflag, minorityownedbusinessflag desc
###Output
_____no_output_____
###Markdown
For example, firms owned by women, veterans, and minorities (all represented at the same time) sell $5bn in goods and services. That's 0.07% of all government purchases. New Vendors Emerge Each Year Becoming a government contractor may seem difficult at first, but let's see how many new contractors the government had in 2015.
###Code
%%sql
select
sum(if(before2015.dunsnumber is null, 1, 0)) new_vendors,
sum(if(before2015.dunsnumber is null, 0, 1)) old_vendors
from
flatten((select unique(dunsnumber) dunsnumber from gpqueries:contracts.raw where fiscal_year = 2015), dunsnumber) in2015
left join
flatten((select unique(dunsnumber) dunsnumber from gpqueries:contracts.raw where fiscal_year < 2015), dunsnumber) before2015
on before2015.dunsnumber = in2015.dunsnumber
###Output
_____no_output_____ |
Data_fetching_and_Analysis.ipynb | ###Markdown
It takes too long for machine learning models to diffuse to diseases of the poor The main questions we seek to answer are:- How well has machine learning been adopted by biologists?- Compared with cancer research, how well has machine learning been adopted?This notebook supports the [medium article](https://medium.com/@siwomolbio/machinelearning4malaria-e171ca85e7f5). PubMed data was downloaded on 3rd May, 2019.
###Code
from Bio import Entrez
import pandas as pd
import seaborn as sns
sns.set(style="whitegrid")
import matplotlib.pyplot as plt
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
Entrez.email = "[email protected]"
other_tech = ['"DNA sequencing"', '"PCR"',
'"ELISA"','"microarray"',
'"Sanger sequencing"','"Nanopore sequencing"',
'"GWAS"']
ml = ['"machine learning"','"neural networks"',
'"support vector machine"',
'"k-nearest neighbor"',
'"linear regression"',
'"random forests"',
'"logistic regression"',
'"convolutional neural networks"','"bayesian networks"', '"data science"']
###Output
_____no_output_____
###Markdown
Download data Details from PubMedUsing the code below, we fetch the data we are interested in: papers that mention malaria and the various machine learning models.
###Code
# make a placeholder to store result from querying
def get_count(disease,ml):
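    """
    For each term in `ml`, search PubMed for papers mentioning both the term and `disease`,
    and return a dataframe with the number of hits plus the publication dates recorded as the
    first and latest paper for that term.
    """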
df = pd.DataFrame(columns=["ML_algorithm", "Count", 'First_Paper','Latest_paper'])
for machine in ml:
handle = Entrez.esearch(db = "pubmed", retmax=10000000, term="[%s] AND %s" % (machine,disease))
kenyan_records = Entrez.read(handle)
kenyan_pubids = kenyan_records["IdList"]
handle_1 = Entrez.esummary(db="pubmed", id = kenyan_pubids[0])
handle_2 = Entrez.esummary(db="pubmed", id = kenyan_pubids[-1])
record_1 = Entrez.read(handle_1)
record_2 = Entrez.read(handle_2)
# see what we are capable of subsetting
df.loc[len(df)]= [machine.replace('"',''), len(kenyan_pubids), record_2[0]["PubDate"],record_1[0]["PubDate"]]
#df.to_csv()
return df
cancer_df = get_count("cancer",ml)
cancer_df
malaria_df = get_count("malaria",ml)
malaria_df
other_tech_cancer = get_count("cancer",other_tech)
other_tech_cancer
other_tech_malaria = get_count("malaria",other_tech)
other_tech_malaria
tuberculosis_df = get_count("tuberculosis",ml)
tuberculosis_df
other_tech_tb = get_count("tuberculosis",other_tech)
other_tech_tb
###Output
_____no_output_____
###Markdown
Now we can clean the dataframes to include delay details and the year of First Publication
###Code
def clean_df(df,disease):
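    """
    Parse the First_Paper column into a datetime, extract the publication year,
    and rename the count/year columns so they are specific to `disease`.
    """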
#df['date'] = pd.to_datetime(df['First_Paper'])
df['date'] = df.apply(
lambda row: pd.to_datetime(row['First_Paper']), axis=1)
df['year_%s' % disease] = df.date.dt.year
df = df[['ML_algorithm','Count','year_%s' % disease]]
df.columns = ['ML_algorithm','%s_Count' % disease,'year_%s' % disease]
return df
malaria_clean = clean_df(malaria_df,'malaria')
malaria_clean
cancer_clean = clean_df(cancer_df,'cancer')
cancer_clean
def merge_data(df1,df2,disease1,disease2):
"""
    Creates a merged dataframe of two diseases being compared
with the delay in tech adoption in years
"""
marged_data = pd.merge(df1, df2, on='ML_algorithm')
marged_data['delay'] = marged_data['year_%s' % disease2] - marged_data['year_%s' % disease1]
marged_data.set_index('ML_algorithm',inplace=True)
return marged_data
malaria_cancer_ml = merge_data(cancer_clean,malaria_clean,"cancer","malaria")
malaria_cancer_ml.delay.mean()
malaria_cancer_ml
other_cancer = clean_df(other_tech_cancer,'cancer')
other_malaria = clean_df(other_tech_malaria,'malaria')
malaria_cancer_other = merge_data(other_cancer,other_malaria,"cancer","malaria")
malaria_cancer_other
malaria_cancer_other.delay.mean()
###Output
_____no_output_____
###Markdown
The total number of papers in PubMed mentioning cancer is 4281357
###Code
handle = Entrez.esearch(db = "pubmed", retmax=10000000, term="cancer")
kenyan_records = Entrez.read(handle)
cancer_total = len(kenyan_records["IdList"])
cancer_total
###Output
_____no_output_____
###Markdown
While for malaria, we have 99759 papers in PubMed
###Code
handle = Entrez.esearch(db = "pubmed", retmax=100000, term="malaria")
kenyan_records = Entrez.read(handle)
malaria_total = len(kenyan_records["IdList"])
malaria_total
###Output
_____no_output_____
###Markdown
Download paper details for additional analysisWith that captured, we can now check the number of papers published for the popular machine learning algorithms in malaria research. We will use this data to observe the trends in the adoption of the various algorithms.
###Code
def get_paper_details(ml,disease):
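    """
    For each term in `ml`, search PubMed for papers mentioning the term and `disease`,
    download the abstracts with paper_retriever(), fetch the esummary metadata for each
    paper, and write that metadata to a tab-separated file under Data/<disease>/metadata/.
    """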
for machine in ml:
handle = Entrez.esearch(db = "pubmed", retmax=100000, term="[%s] AND %s" % (machine,disease))
kenyan_records = Entrez.read(handle)
kenyan_pubids = kenyan_records["IdList"]
write_paper = "_".join(machine.split()) +"_paper.txt"
Main_df = pd.DataFrame()
for pubid in kenyan_pubids:
#retrieve paper abstracts so that we can extract additional information, like country
paper_retriever(pubid, "[email protected]", "Data/%s/abstracts/%s" % (disease,write_paper.replace('"','')))
test= Entrez.read(Entrez.esummary(db = "pubmed", id = pubid))
df2 = pd.DataFrame(test)
Main_df = pd.concat([Main_df,df2])
Cleaned_Main_df = Main_df[['Id', 'ArticleIds', 'AuthorList', 'DOI' ,
'FullJournalName', 'HasAbstract', 'LastAuthor', 'NlmUniqueID',
'PubDate', 'PubTypeList', 'RecordStatus', 'Source', 'Title']]
out_file = "_".join(machine.split())+".txt"
Cleaned_Main_df.to_csv("Data/%s/metadata/%s" % (disease,out_file.replace('"','')), sep='\t', index=False)
ml = ['"data science"']
get_paper_details(ml,'malaria')
get_paper_details(ml,'cancer')
###Output
_____no_output_____
###Markdown
Next, we write a function to retrieve abstract and metadata, which we get to use later to extract important information.
###Code
def paper_retriever(pubmedid, email, output_file):
    '''Fetch a paper record from PubMed and append it to a local file.
    `pubmedid` is the PubMed ID (for example, one returned by the previous function),
    `email` is the address registered with Entrez, and `output_file` is the file the
    fetched text is appended to.
    '''
# Enter your own email
Entrez.email = email
    # the efetch method fetches the record you need and brings it back into your IPython session
    handle2 = Entrez.efetch(db="pubmed", id = pubmedid, rettype="gb",retmode="text")
    # append the fetched record to the output file
with open(output_file, 'a') as paper_data:
paper_data.write(handle2.read())
def parseAbstracts(infile,outfile):
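    """
    Scan a file of concatenated PubMed abstracts: pick up the journal and date from each
    citation header line, then write one tab-separated line of (PMID, journal, date) per
    paper to `outfile`.
    """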
with open(outfile,'w') as clean:
with open(infile) as abstract:
tag = False
for line in abstract:
if line[0].isdigit() and (
line[1:3] == '. ' or line[2:4] == '. ' or line[3:5] == '. '):
if tag:
continue
else:
try:
date = line.replace(
';','.').replace(':','.').split('.')[2]
journal = line.replace(
';','.').replace(':','.').split('.')[1]
tag = True
except IndexError:
print(line)
tag = False
if tag and line.startswith('PMID:'):
pubid = line.split()[1]
tag=False
clean.write('%s\t%s\t%s\n' % (pubid, journal, date.strip()))
###Output
_____no_output_____
###Markdown
Fetch Country details from Author InformationIn this section, we are interested in extracting the author country information. We want to understand who is driving the adoption of machine learning approaches in malaria research. We use the affiliation of the first author or the most common country. First, we install `geograpy` using: `python3 -m pip install git+https://github.com/reach2ashish/geograpy.git`However, the Python 3 version of this tool does not work well; it seems to extract incorrect details. We then opted for an alternative, `pycountry`, which we use to check if a country name exists in the affiliation section of the paper. However, this tool does not consider abbreviations and alternative names. We have to manually test for UK and USA.
###Code
import geograpy
###Output
_____no_output_____
###Markdown
First, we need to download the required nltk data
###Code
import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('maxent_ne_chunker')
nltk.download('words')
###Output
[nltk_data] Downloading package punkt to /Users/caleb/nltk_data...
[nltk_data] Unzipping tokenizers/punkt.zip.
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /Users/caleb/nltk_data...
[nltk_data] Unzipping taggers/averaged_perceptron_tagger.zip.
[nltk_data] Downloading package maxent_ne_chunker to
[nltk_data] /Users/caleb/nltk_data...
[nltk_data] Unzipping chunkers/maxent_ne_chunker.zip.
[nltk_data] Downloading package words to /Users/caleb/nltk_data...
[nltk_data] Unzipping corpora/words.zip.
###Markdown
Here is the pycountry tool in action.
###Code
import pycountry
texts = "United States (New York), United Kingdom (London)"
for country in pycountry.countries:
if country.name.lower() in texts.lower():
print(country.name)
p=geograpy.get_place_context(text=texts)
p.countries
###Output
_____no_output_____
###Markdown
As can be observed above, the geograpy tool does not give the correct information. Hence the need for an alternative. Extract country details from paper metadataWith the downloaded data, we can attempt to extract country details from the Author information using the pycountry tool. For each article, we pass the author information to the algorithm, which creates a list of all the countries in the text, in order of appearance. We can assume the first entry will hold the first author's information, where available. Even if that's not the case, we expect the first item to be representative of the paper's country affiliation.
###Code
pwd
for machine in ml:
disease = 'malaria'
nations = "_".join(machine.split()) +"_country.txt"
write_paper = "_".join(machine.split()) +"_paper.txt"
print(machine)
with open("Data/%s/country/%s" % (disease,nations.replace('"','')),'w') as nation:
with open("Data/%s/abstracts/%s" % (disease,write_paper.replace('"',''))) as abstract:
tag = False
search_text = ""
for line in abstract:
if line.startswith("Author information:"):
tag = True
if tag:
if line != "\n":
search_text = search_text + line
else:
#print(search_text)
tag = False
countries = []
for country in pycountry.countries:
if country.name.lower() in search_text.lower():
countries.append(country.name)
if "usa" in search_text.lower():
countries.append("United States")
if "UK" in search_text:
countries.append("United Kingdom")
if len(countries) == 0:
countries.append("Missing")
#print(countries)
search_text = ""
#print(countries[0],end="\t")
#print(countries)
nation.write(countries[0]+"\t")
if line.startswith('PMID:'):
pubid = line.split()[1]
nation.write(pubid+"\n")
#print(pubid)
#print(pubid,country)
c = pd.read_table('countries.txt')
g = sns.factorplot(x="Country", kind="count", hue='Continent', data=c, orient='v', size=5, aspect=1.5,dodge=False)
g.set_xticklabels(rotation=90)
g.savefig('Plots/Country_continent.png')
###Output
_____no_output_____
###Markdown
From the above, we observe that not only are basic machine learning techniques widely used in malaria research, they are also being used by African researchers. Clearly, linear and logistic regression are widely used for identifying relationships between multiple factors for categorical and continuous variables, respectively.
###Code
def convertDate(data):
'''
Given a dataframe, convert to date time and separate
the date columns
'''
#data.set_index('Id', inplace=True)
#### Conver the date column to date format
data['date'] = pd.to_datetime(data['PubDate'], errors='coerce')
data['year'] = data.date.dt.year
data['month'] = data.date.dt.month
### Save the data in a csv for future re-use
#data.to_csv(outcsv,sep='\t')
return data
###Output
_____no_output_____
###Markdown
Data Analysis and VisualizationHere we perform quick plots to understand trends in machine learning diffusion to malaria research.
###Code
sns.set(style="white")
def plot_counts(machine):
"""
Function to plot the counts of papers in each yeaar that
mention a machine learning algorithm
"""
machine = machine.replace('"','')
out_file = "_".join(machine.split())
path = "Data/%s/metadata/%s.txt" % ('malaria',out_file)
data = pd.read_table(path, index_col='Id')
data = convertDate(data)
data.dropna(axis = 0, how ='any',inplace=True)
data.year = data.year.astype(int)
ax = data['year'].value_counts().sort_index().plot(kind='bar',
title=' Number of papers talking about malaria and %s per year'% machine,
figsize=(8, 6))
ax.set_ylabel('Number of papers')
ax.set_xlabel('Year')
sns.despine()
plt.savefig('Plots/malaria_%s.png' % out_file)
for machine in ml:
plot_counts(machine)
machine = "linear regression"
plot_counts(machine)
###Output
_____no_output_____
###Markdown
Support vector machines
###Code
machine = "support vector machine"
plot_counts(machine)
###Output
_____no_output_____
###Markdown
Neural networks
###Code
machine = "neural networks"
plot_counts(machine)
###Output
_____no_output_____
###Markdown
The above quick analysis seems to show that the algorithms' popularity may have peaked in 2015 but has been on the decline ever since. We need to investigate this further.
###Code
machine = "data science"
plot_counts(machine)
###Output
_____no_output_____ |
Python_Operators.ipynb | ###Markdown
**Python Operators**- In this tutorial, you'll learn everything about different types of operators in Python, their syntax and how to use them with examples. **1. What are operators in python?**- **Operators** are special symbols in Python that carry out arithmetic or logical computation. - The value that the operator operates on is called the **operand**.
###Code
2+3
###Output
_____no_output_____
###Markdown
Here, - `+` is the `operator` that performs addition. - `2` and `3` are the `operands` and - `5` is the `output` of the operation. **2. Arithmetic Operators**- Arithmetic operators are used to perform mathematical operations like addition, subtraction, multiplication, etc. - **Operator** - **Meaning** - **Example** - **(x = 15, y = 4)**- `+` - `Add two operands or unary plus` - `(x + y = 19)`- `-` - `Subtract right operand from the left or unary minus` - `(x - y = 11)`- `*` - `Multiply two operands` - `(x * y = 60)`- `/` - `Divide left operand by the right one (always results into float)` - `(x / y = 3.75)`- `%` - `Modulus` - `remainder of the division of left operand by the right` - `(x % y = 3) (remainder of x/y)`- `//` - `Floor division - division that results into whole number adjusted to the left in the number line` - `(x // y = 3)`- `**` - `Exponent - left operand raised to the power of right` - `(x**y = 50625) (x to the power y)` **Example 1: Arithmetic operators in Python**
###Code
x = 15
y = 4
# Output: x + y = 19
print('x + y =',x+y)
# Output: x - y = 11
print('x - y =',x-y)
# Output: x * y = 60
print('x * y =',x*y)
# Output: x / y = 3.75
print('x / y =',x/y)
# Output: x % y = 3
print('x % y =',x%y)
# Output: x // y = 3
print('x // y =',x//y)
# Output: x ** y = 50625
print('x ** y =',x**y)
###Output
x + y = 19
x - y = 11
x * y = 60
x / y = 3.75
x % y = 3
x // y = 3
x ** y = 50625
###Markdown
**3. Comparison (Relational) Operators**- Comparison operators are used to compare values. - It returns either `True` or `False` according to the condition. - **Operator** - **Meaning** - **Example** - **(x = 10, y = 12)**- `>` - `Greater than - True if left operand is greater than the right` - `(x > y is False)`- `<` - `Less than - True if left operand is less than the right` - `(x < y is True)`- `==` - `Equal to - True if both operands are equal` - `(x == y is False)`- `!=` - `Not equal to - True if operands are not equal` - `(x != y is True)`- `>=` - `Greater than or equal to - True if left operand is greater than or equal to the right` - `(x >= y is False)`- `<=` - `Less than or equal to - True if left operand is less than or equal to the right` - `(x <= y is True)` **Example 2: Comparison operators in Python**
###Code
x = 10
y = 12
# Output: x > y is False
print('x > y is',x>y)
# Output: x < y is True
print('x < y is',x<y)
# Output: x == y is False
print('x == y is',x==y)
# Output: x != y is True
print('x != y is',x!=y)
# Output: x >= y is False
print('x >= y is',x>=y)
# Output: x <= y is True
print('x <= y is',x<=y)
###Output
x > y is False
x < y is True
x == y is False
x != y is True
x >= y is False
x <= y is True
###Markdown
**4. Logical (Boolean) Operators**- Logical operators are the `and`, `or`, `not` operators.- Here is the [truth table](https://www.programiz.com/python-programming/keyword-listand_or_not) for these operators. - **Operator** - **Meaning** - **Example** - **(x = True, y = False)**- `and` - `True if both the operands are true` - `(x and y is False)`- `or` - `True if either of the operands is true` -`(x or y is True)`- `not` - `True if operand is false (complements the operand)` - `(not x is False)` **Example 3: Logical Operators in Python**
###Code
x = True
y = False
print('x and y is',x and y)
print('x or y is',x or y)
print('not x is',not x)
###Output
x and y is False
x or y is True
not x is False
###Markdown
- Here is the [truth table](https://www.programiz.com/python-programming/keyword-listand_or_not) for these operators. **5. Bitwise Operators**- Bitwise operators act on operands as if they were strings of binary digits. They operate bit by bit, hence the name.- For example, 2 is `10` in binary and 7 is `111`.- **In the table below**: Let `x` = 10 (`0000 1010` in binary) and `y` = 4 (`0000 0100` in binary) - **Operator** - **Meaning** - **Example**- `&` - `Bitwise AND` - `x & y = 0 (0000 0000)`- `|` - `Bitwise OR` - `x | y = 14 (0000 1110)`- `~` - `Bitwise NOT` - `~x = -11 (1111 0101)`- `^` - `Bitwise XOR` - `x ^ y = 14 (0000 1110)`- `>>` - `Bitwise right shift` - `x >> 2 = 2 (0000 0010)`- `<<` - `Bitwise left shift` - `x << 2 = 40 (0010 1000)` **6. Assignment Operators**- Assignment operators are used in Python to assign values to variables.- `a = 5` is a simple assignment operator that assigns the value `5` on the right to the variable `a` on the left.- There are various compound operators in Python like `a += 5` that adds to the variable and later assigns the same. It is equivalent to `a = a + 5`. - **Operator** - **Example** - **Equivalent to**- `=` - `x = 5` - `x = 5`- `+=` - `x += 5` - `x = x + 5`- `-=` - `x -= 5` - `x = x - 5`- `*=` - `x *= 5` - `x = x * 5`- `/=` - `x /= 5` - `x = x / 5`- `%=` - `x %= 5` - `x = x % 5`- `//=` - `x //= 5` - `x = x // 5`- `**=` - `x **= 5` - `x = x ** 5`- `&=` - `x &= 5` - `x = x & 5`- `|=` - `x |= 5` - `x = x | 5`- `^=` - `x ^= 5` - `x = x ^ 5`- `>>=` - `x >>= 5` - `x = x >> 5`- `<<=` - `x <<= 5` - `x = x << 5` **7. Special Operators**Python language offers some special types of operators like the - **identity operator** ,or the - **membership operator**. They are described below with examples. **8. Identity Operators**- There are two identity operators in Python. They are - - `is`, - `is not`- `is` and `is not` are the identity operators in Python. - They are used to check if two values (or variables) are located on the same part of the memory. - **Two variables that are equal does not imply that they are identical**. - **Operator** - **Meaning** - **Example**- `is` - `True if the operands are identical (refer to the same object)` - `x is True`- `is not` - `True if the operands are not identical (do not refer to the same object)` - `x is not True` **Example 4: Identity operators in Python**
###Code
x1 = 5
y1 = 5
x2 = 'Hello'
y2 = 'Hello'
x3 = [1,2,3]
y3 = [1,2,3]
# Output: False
print(x1 is not y1)
# Output: True
print(x2 is y2)
# Output: False
print(x3 is y3)
###Output
False
True
False
###Markdown
- Here, we see that `x1` and `y1` are integers of the same values, so they are equal as well as identical. Same is the case with `x2` and `y2` (strings).- But `x3` and `y3` are lists. They are equal but not identical. It is because the interpreter locates them separately in memory although they are equal. **9. Membership Operators**- There are two membership operators in Python. They are - - `in`, - `not in`- `in` and `not in` are the membership operators in Python. - They are used to test whether a value or variable is found in a sequence ([string](https://www.programiz.com/python-programming/string), [list](https://www.programiz.com/python-programming/list), [tuple](https://www.programiz.com/python-programming/tuple), [set](https://www.programiz.com/python-programming/set) and [dictionary](https://www.programiz.com/python-programming/dictionary).- **In a dictionary we can only test for presence of key, not the value**. - **Operator** - **Meaning** - **Example**- `in` - `True if value/variable is found in the sequence` - `5 in x`- `not in` - `True if value/variable is not found in the sequence` - `5 not in x` **Example 5: Membership operators in Python**
###Code
x = 'Hello world'
y = {1:'a',2:'b'}
# Output: True
print('H' in x)
# Output: True
print('hello' not in x)
# Output: True
print(1 in y)
# Output: False
print('a' in y)
###Output
True
True
True
False
###Markdown
**Python Operators**Operators are used to perform operations on variables and values.Python divides the operators in the following groups:```Arithmetic operatorsAssignment operatorsComparison operatorsLogical operatorsIdentity operatorsMembership operatorsBitwise operators``` **Python Arithmetic Operators**Arithmetic operators are used with numeric values to perform common mathematical operations:
###Code
a = 5 + 4
print (a)
###Output
9
###Markdown
**Python Assignment Operators**Assignment operators are used to assign values to variables:
###Code
x = 30
x /= 3
print(x)
x= 34
x %= 3
print(x)
x=40
x //= 3
print(x)
x=2
x **= 3
print(x)
x=40
x >>= 3
print(x)
###Output
5
###Markdown
**Python Comparison Operators**Comparison operators are used to compare two values:
###Code
x= 20
y= 10
z = x >= y
print(z)
###Output
True
###Markdown
**Python Logical Operators**Logical operators are used to combine conditional statements:
###Code
x = 2
print(not(x < 5 and x < 10))
print(x)
###Output
False
2
###Markdown
**Python Identity Operators**Identity operators are used to compare the objects, not if they are equal, but if they are actually the same object, with the same memory location
###Code
x = 3
y = 2
print (x is not y)
x = 3
y = 3
print (x is y)
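# Note: this prints True in CPython because small integers are cached, so x and y
# refer to the same object; equality of value does not guarantee identity in general.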
###Output
True
###Markdown
**Python Membership Operators**Membership operators are used to test if a sequence is presented in an object
###Code
x = ["apple", "banana"]
print("banana" in x)
# returns True because a sequence with the value "banana" is in the list
x = ["apple", "banana"]
print("pineapple" not in x)
# returns True because a sequence with the value "pineapple" is not in the list
###Output
True
###Markdown
**Python Bitwise Operators**Bitwise operators are used to compare (binary) numbers:
###Code
# & AND Sets each bit to 1 if both bits are 1
# | OR Sets each bit to 1 if one of two bits is 1
# ^ XOR Sets each bit to 1 if only one of two bits is 1
# ~ NOT Inverts all the bits
# << Zero fill left shift Shift left by pushing zeros in from the right and let the leftmost bits fall off
# >> Signed right shift Shift right by pushing copies of the leftmost bit in from the left, and let the rightmost bits fall off
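# A small illustration of the operators listed above:
x, y = 6, 3    # 0b110 and 0b011
print(x & y)   # 2  -> 0b010
print(x | y)   # 7  -> 0b111
print(x ^ y)   # 5  -> 0b101
print(~x)      # -7
print(x << 2)  # 24
print(x >> 2)  # 1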
###Output
_____no_output_____
###Markdown
Python Operators: Python divides the operators into the following groups:* Arithmetic operators* Assignment operators* Comparison operators* Logical operators* Identity operators* Membership operators* Bitwise operators
###Code
### Arithmetic operators:
'''
Operator Name
+ Addition
- Subtraction
* Multiplication
/ Division
% Modulus Division
** Exponentiation
// Floor Division
'''
a=10
b=5
print(a+b)
print(a-b)
print(a*b)
print(a/b)
print(a%b)
print(a**b)
print(a//b)
### Assignment operators
'''
operator example same as
= x=5 x=5
+= x+=3 x=x+3
-= x-=3 x=x-3
*= x*=3 x=x*3
/= x/=3 x=x/3
%= x%=3 x=x%3
//= x//=3 x=x//3
**= x**=3 x=x**3
'''
x=5
print(x)
x+=3
print(x)
x-=3
print(x)
x*=3
print(x)
x/=3
print(x)
x%=3
print(x)
x//=3
print(x)
x**=3
print(x)
### Comparison Operator/Relational Operator
# Relational operators are used for comparing values. They return either True or False according to the condition.
# These operators are also known as comparison operators.
'''
Operator Description Example
== If the values of two operands are equal, then the condition becomes true. (a == b) is not true.
!= If values of two operands are not equal, then condition becomes true. (a != b) is true.
<>      If values of two operands are not equal, then condition becomes true. (a <> b) is true. This is similar to the != operator; note that <> existed only in Python 2 and was removed in Python 3.
> If the value of left operand is greater than the value of right operand,
then condition becomes true. (a > b) is not true.
< If the value of left operand is less than the value of right operand, then
condition becomes true. (a < b) is true.
>= If the value of left operand is greater than or equal to the
value of right operand, then condition becomes true. (a >= b) is not true.
<= If the value of left operand is less than or equal to
the value of right operand, then condition becomes true. (a <= b) is true.
'''
a=10
b=20
print(a==b)
print(a!=b)
#print(a<>b)
print(a>b)
print(a<b)
print(a>=b)
print(a<=b)
n=5
print(n<10)
a = 21
b = 10
c = 0
if ( a == b ):
print("a is equal to b")
else:
print("a is not equal to b")
if ( a != b ):
print("a is not equal to b")
else:
print("a is equal to b")
if ( a < b ):
print("a is less than b")
else:
print("a is not less than b")
if ( a > b ):
print("a is greater than b")
else:
print("a is not greater than b")
a = 5;
b = 20;
if ( a <= b ):
print("a is either less than or equal to b")
else:
print("a is neither less than nor equal to b")
if ( b >= a ):
print ("b is either greater than or equal to b")
else:
print ("b is neither greater than nor equal to b")
### Python Logical Operators
# Operators are used to perform operations on values and variables. These are the special symbols that carry out arithmetic and logical computations. The value the operator operates on is known as an operand.
'''
Operator Description Example
and Logical AND If both the operands are true then condition becomes true. (a and b) is true.
or Logical OR If any of the two operands are non-zero then condition
becomes true. (a or b) is true.
not Logical NOT Used to reverse the logical state of its operand. Not(a and b) is false.
'''
x = 5
print(x > 3 and x < 10)
# returns True because 5 is greater than 3 AND 5 is less than 10
x = 5
print(x > 3 or x < 4)
# returns True because one of the conditions are true (5 is greater than 3, but 5 is not less than 4)
x = 5
print(not(x > 3 and x < 10))
# returns False because not is used to reverse the result
# logical and operator
a = 10
b = 10
c = -10
if a > 0 and b > 0:
print("The numbers are greater than 0")
if a > 0 and b > 0 and c > 0:
print("The numbers are greater than 0")
else:
print("Atleast one number is not greater than 0")
a = 10
b = -5
if a < 0 or b < 0:
print("Their product will be negative")
else:
print("Their product will be positive")
a = 15
if not a == 10:
print ("a not equals 10")
else:
print("a equals 10")
### Identity operators
# The identity operators in Python are used to determine whether a value is of a certain class or type.
# They are usually used to determine the type of data a certain variable contains. For example, you can combine the identity operators with the
# built-in type() function to ensure that you are working with the specific variable type.
'''
Operator Description Example
is Returns true if both variables are the same object x is y
is not Returns true if both variables are not the same object x is not y
'''
x = ["apple", "banana"]
y = ["apple", "banana"]
z = x
print(x is z)
# returns True because z is the same object as x
print(x is y)
# returns False because x is not the same object as y, even if they have the same content
print(x == y)
# to demonstrate the difference betweeen "is" and "==": this comparison returns True because x is equal to y
a = 20
b = 20
if ( a is b ):
print("a and b have same identity")
else:
print("a and b do not have same identity")
if ( id(a) == id(b) ):
print("a and b have same identity")
else:
print("a and b do not have same identity")
b = 30
if ( a is b ):
print("a and b have same identity")
else:
print("a and b do not have same identity")
if ( a is not b ):
print("a and b do not have same identity")
else:
print("a and b have same identity")
### membership operator
# Membership operators are used to validate the membership of a value. They test for membership in a sequence,
# such as strings, lists, or tuples.
'''
Operator Description Example
in Returns True if a sequence with the specified value
is present in the object x in y
not in Returns True if a sequence with the specified value
is not present in the object x not in y
'''
# using 'in' operator
list1=[1,2,3,4,5]
list2=[6,7,8,9]
for item in list1:
if item in list2:
print("overlapping")
else:
print("not overlapping")
# not 'in' operator
x = 24
y = 20
list = [10, 20, 30, 40, 50 ];
if ( x not in list ):
print("x is NOT present in given list")
else:
print("x is present in given list")
if ( y in list ):
print("y is present in given list")
else:
print("y is NOT present in given list")
### Bitwise operators
# In Python, bitwise operators are used to perform bitwise calculations on integers. The integers are first converted into binary and then the operations are performed bit by bit, hence the name bitwise operators.
# The result is then returned in decimal format.
'''
OPERATOR DESCRIPTION SYNTAX
& Bitwise AND x & y
| Bitwise OR x | y
~ Bitwise NOT ~x
^ Bitwise XOR x ^ y
>> Bitwise right
shift x>>
<< Bitwise left
shift x<<
'''
# bitwise operators
a = 10
b = 4
# Print bitwise AND operation
print("a & b =", a & b)
# Print bitwise OR operation
print("a | b =", a | b)
# Print bitwise NOT operation
print("~a =", ~a)
# print bitwise XOR operation
print("a ^ b =", a ^ b)
# shift operators
a = 10
b = -10
# print bitwise right shift operator
print("a >> 1 =", a >> 1)
print("b >> 1 =", b >> 1)
a = 5
b = -10
# print bitwise left shift operator
print("a << 1 =", a << 1)
print("b << 1 =", b << 1)
###Output
a >> 1 = 5
b >> 1 = -5
a << 1 = 10
b << 1 = -20
|
PyTorch Recipes/Part 2/warmstarting_model_using_parameters_from_a_different_model.ipynb | ###Markdown
Warmstarting model using parameters from a different model in PyTorch=====================================================================Partially loading a model or loading a partial model are commonscenarios when transfer learning or training a new complex model.Leveraging trained parameters, even if only a few are usable, will helpto warmstart the training process and hopefully help your model convergemuch faster than training from scratch.Introduction------------Whether you are loading from a partial ``state_dict``, which is missingsome keys, or loading a ``state_dict`` with more keys than the modelthat you are loading into, you can set the strict argument to ``False``in the ``load_state_dict()`` function to ignore non-matching keys.In this recipe, we will experiment with warmstarting a model usingparameters of a different model.Setup-----Before we begin, we need to install ``torch`` if it isn’t alreadyavailable.:: pip install torch Steps-----1. Import all necessary libraries for loading our data2. Define and intialize the neural network A and B3. Save model A4. Load into model B1. Import necessary libraries for loading our data~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~For this recipe, we will use ``torch`` and its subsidiaries ``torch.nn``and ``torch.optim``.
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F  # F.relu is used in the forward passes below
###Output
_____no_output_____
###Markdown
2. Define and initialize the neural networks A and B~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~For the sake of example, we will create a neural network for training on images. To learn more see the Defining a Neural Network recipe. We will create two neural networks for the sake of loading parameters from a model of type A into a model of type B.
###Code
class NetA(nn.Module):
def __init__(self):
super(NetA, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
netA = NetA()
class NetB(nn.Module):
def __init__(self):
super(NetB, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
netB = NetB()
###Output
_____no_output_____
###Markdown
3. Save model A~~~~~~~~~~~~~~~~~~~
###Code
# Specify a path to save to
PATH = "model.pt"
torch.save(netA.state_dict(), PATH)
###Output
_____no_output_____
###Markdown
4. Load into model B~~~~~~~~~~~~~~~~~~~~~~~~If you want to load parameters from one layer to another, but some keysdo not match, simply change the name of the parameter keys in thestate_dict that you are loading to match the keys in the model that youare loading into.
###Code
netB.load_state_dict(torch.load(PATH), strict=False)
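# A minimal sketch: if the layer names in model A's state_dict did not match model B's,
# you could rename the keys before loading; "old_name" / "new_name" below are placeholder
# layer names, not names used in this recipe.
# state_dict = torch.load(PATH)
# renamed = {k.replace("old_name", "new_name"): v for k, v in state_dict.items()}
# netB.load_state_dict(renamed, strict=False)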
###Output
_____no_output_____ |
notebooks/08-netprofiler-and-netshark.ipynb | ###Markdown
SteelScript NetProfiler and NetShark Analysis Integration Imports and Setup
###Code
import sys
import csv
import datetime
import pandas
import steelscript
from steelscript.common.service import UserAuth
from steelscript.common.datautils import Formatter
from steelscript.common import timeutils
from steelscript.netprofiler.core.netprofiler import NetProfiler
from steelscript.netprofiler.core.filters import TimeFilter, TrafficFilter
from steelscript.netprofiler.core.report import TrafficOverallTimeSeriesReport, TrafficSummaryReport
from steelscript.netshark.core.netshark import NetShark
from steelscript.netshark.core.types import Key, Value
from steelscript.netshark.core.filters import NetSharkFilter
from steelscript.netshark.core.filters import TimeFilter as NSTimeFilter
netshark_host = "NETSHARK.HOSTNAME.COM"
netprofiler_host = "NETPROFILER.HOSTNAME.COM"
username = "USERNAME"
password = "PASSWORD"
auth = UserAuth(username, password)
###Output
_____no_output_____
###Markdown
Initialize NetProfiler and NetShark Objects
###Code
p = NetProfiler(netprofiler_host, auth=auth)
s = NetShark(netshark_host, auth=auth)
###Output
_____no_output_____
###Markdown
Define Report Criteria Time filters, Columns, and Groupbys
###Code
timefilter = TimeFilter.parse_range('last 1 hour')
print 'Start: %s' % timefilter.start
print 'End: %s' % timefilter.end
print timefilter
columns = [
p.columns.key.group_name,
p.columns.key.group_id,
p.columns.value.in_avg_bytes,
p.columns.value.in_avg_pkts,
p.columns.value.out_avg_bytes,
p.columns.value.out_avg_pkts,
p.columns.value.response_time
]
groupby = p.groupbys.host_group
###Output
_____no_output_____
###Markdown
Create NetProfiler Report and Retrieve Data
###Code
report = TrafficSummaryReport(p)
report.run(columns=columns,
groupby=groupby,
centricity='int',
resolution='1m',
timefilter=timefilter,
trafficexpr=None)
data = report.get_data()
report.delete()
data[:2]
###Output
_____no_output_____
###Markdown
Format Data Simple table formatting
###Code
headers = [c.key for c in columns]
print headers
Formatter.print_table(data, headers=headers)
###Output
_____no_output_____
###Markdown
Formatting using pandas data analysis library
###Code
df = pandas.DataFrame(data, columns=headers)
df
###Output
_____no_output_____
###Markdown
Find row with the highest response time
###Code
rowidx = df['response_time'].idxmax()
rowidx
df.ix[rowidx]
df.ix[rowidx,'group_name']
###Output
_____no_output_____
###Markdown
Find application using the most resources at that hostgroup
###Code
columns = [
p.columns.key.app_name,
p.columns.value.network_rtt,
p.columns.value.in_avg_pkts,
p.columns.value.out_avg_bytes,
p.columns.value.out_avg_pkts,
]
groupby = p.groupbys.application
filterexpr = TrafficFilter('hostgroup ByLocation:%s' % df.ix[rowidx,'group_name'])
report = TrafficSummaryReport(p)
report.run(columns=columns,
sort_col=p.columns.value.network_rtt,
groupby=groupby,
centricity='int',
resolution='1m',
timefilter=timefilter,
trafficexpr=filterexpr)
app_data = report.get_data()
report.delete()
app_df = pandas.DataFrame(app_data, columns=[c.key for c in columns]).replace('', 0)
app_df.sort(('network_rtt'), inplace=True, ascending=False)
app_df.head()
###Output
_____no_output_____
###Markdown
Query NetShark for Microbursts of Hostgroup IP Addresses Extract list of IPs from hostgroup definition
###Code
from steelscript.netprofiler.core.hostgroup import HostGroupType, HostGroup
hgtype = HostGroupType.find_by_name(p, 'ByLocation')
print hgtype.name
hgtype.groups
df.ix[rowidx]
location = df.ix[rowidx]['group_name']
hostgroup = hgtype.groups[location]
print 'Hostgroup name: %s\nHostgroup CIDRs: %s' % (hostgroup.name, hostgroup.get())
###Output
_____no_output_____
###Markdown
Apply Hostgroup CIDRs to NetShark filter
###Code
s.get_capture_jobs()
job = s.get_capture_jobs()[0]
###Output
_____no_output_____
###Markdown
We use a different CIDR block here because our demo NetProfiler and NetShark are on different networks; in the actual script, this value gets carried over from the previous hostgroup definition.
###Code
ns_columns = [
Key(s.columns.ip.src),
Key(s.columns.tcp.src_port),
Key(s.columns.ip.dst),
Key(s.columns.tcp.dst_port),
Value(s.columns.generic.max_microburst_1ms.bits),
]
cidrs = '172.0.0.0/8'
nsfilter = NetSharkFilter('ip.address="%s"' % cidrs)
ns_filters = [
NSTimeFilter(timefilter.start, timefilter.end),
nsfilter
]
###Output
_____no_output_____
###Markdown
Retrieve All Microbursts over the same time period
###Code
with s.create_view(job, ns_columns, ns_filters, sync=True) as view:
d = view.get_data(aggregated=True)
d
###Output
_____no_output_____
###Markdown
Find hostpair with biggest burst
###Code
vals = d[0]['vals']
hostpair = max(vals, key=lambda x:x[4])
hostpair
###Output
_____no_output_____
###Markdown
Create new NetShark Timeseries view for biggest burst hostpair
###Code
nsfilter = NetSharkFilter(
'ip.src="{0}" & tcp.src_port="{1}" & ip.dst="{2}" & tcp.dst_port="{3}"'.format(*hostpair)
)
ns_filters = [
NSTimeFilter(timefilter.start, timefilter.end),
nsfilter
]
with s.create_view(job, ns_columns, ns_filters, sync=True) as view:
dtime = view.get_data(aggregated=False,
delta=datetime.timedelta(seconds=1))
dtime
###Output
_____no_output_____
###Markdown
Transform into simple table and plot results
###Code
timeseries = []
headers = ['time', 'packets', '1ms_uburst']
for item in dtime:
row = (item['t'], item['p'], item['vals'][0][-1])
timeseries.append(row)
tdf = pandas.DataFrame(timeseries, columns=headers).set_index('time')
tdf[:10]
%pylab inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
tdf.plot()
tdf.plot(y=['packets'], figsize=(12,3))
tdf.plot(y=['1ms_uburst'], figsize=(12,3))
tdf.packets.plot()
tdf['1ms_uburst'].plot(secondary_y=True, figsize=(12,6))
tdf.plot(subplots=True, figsize=(12,8))
###Output
_____no_output_____ |
playground/disease_gene/old_experiments/discriminator_model_benchmarking.ipynb | ###Markdown
Discriminator Model Benchmarking The goal here is to find the best discriminator model for predicting disease-associates-gene (DaG) relationships. The models tested here are: bag of words, Doc2VecC trained on a 500k random subsample, Doc2VecC trained on all disease-gene sentences, and a unidirectional long short-term memory network (LSTM). The underlying hypothesis is that **the LSTM will be the best model at predicting DaG associations.** Set up The Environment
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import glob
from itertools import product
import pickle
import os
import sys
sys.path.append(os.path.abspath('../../../modules'))
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from tqdm import tqdm_notebook
#Set up the environment
username = "danich1"
password = "snorkel"
dbname = "pubmeddb"
#Path subject to change for different os
database_str = "postgresql+psycopg2://{}:{}@/{}?host=/var/run/postgresql".format(username, password, dbname)
os.environ['SNORKELDB'] = database_str
from snorkel import SnorkelSession
session = SnorkelSession()
from snorkel.annotations import LabelAnnotator
from snorkel.learning.structure import DependencySelector
from snorkel.learning.pytorch.rnn import LSTM
from snorkel.models import candidate_subclass, Candidate
from utils.label_functions import DG_LFS
from utils.notebook_utils.dataframe_helper import load_candidate_dataframes
from utils.notebook_utils.doc2vec_helper import get_candidate_objects, execute_doc2vec, write_sentences_to_file
from utils.notebook_utils.label_matrix_helper import label_candidates, make_cids_query, get_auc_significant_stats
from utils.notebook_utils.train_model_helper import train_generative_model, run_grid_search
from utils.notebook_utils.plot_helper import plot_curve
DiseaseGene = candidate_subclass('DiseaseGene', ['Disease', 'Gene'])
quick_load = True
###Output
_____no_output_____
###Markdown
Get Estimated Training Labels From the work in the [previous notebook](gen_model_benchmarking.ipynb), we determined that the best parameters for the generative model are: 0.4 reg_param, 100 burn-in iterations, and 100 epochs for training. Using this information, we trained the generative model to get the estimated training labels shown in the histogram below.
###Code
spreadsheet_names = {
'train': '../../sentence_labels_train.xlsx',
'dev': '../../sentence_labels_train_dev.xlsx',
'test': '../../sentence_labels_dev.xlsx'
}
candidate_dfs = {
key:load_candidate_dataframes(spreadsheet_names[key])
for key in spreadsheet_names
}
for key in candidate_dfs:
print("Size of {} set: {}".format(key, candidate_dfs[key].shape[0]))
label_functions = (
list(DG_LFS["DaG_DB"].values()) +
list(DG_LFS["DaG_TEXT"].values())
)
if quick_load:
labeler = LabelAnnotator(lfs=[])
label_matricies = {
key:labeler.load_matrix(session, cids_query=make_cids_query(session, candidate_dfs[key]))
for key in candidate_dfs
}
else:
labeler = LabelAnnotator(lfs=label_functions)
label_matricies = {
key:label_candidates(
labeler,
cids_query=make_cids_query(session, candidate_dfs[key]),
label_functions=label_functions,
apply_existing=(key!='train')
)
for key in candidate_dfs
}
gen_model = train_generative_model(
label_matricies['train'],
burn_in=100,
epochs=100,
reg_param=0.401,
step_size=1/label_matricies['train'].shape[0],
deps=DependencySelector().select(label_matricies['train']),
lf_propensity=True
)
training_prob_labels = gen_model.marginals(label_matricies['train'])
training_labels = list(map(lambda x: 1 if x > 0.5 else 0, training_prob_labels))
import matplotlib.pyplot as plt
plt.hist(training_prob_labels)
###Output
_____no_output_____
###Markdown
Based on this graph, more than half of the data receives a positive label. It is hard to tell whether this is correct; however, based on prior experience, this seems to be skewed too far towards the positive side. Discriminator Models As mentioned above, here we train various discriminator models to determine which model can best predict DaG sentences through noisy labels. Bag of Words Model
###Code
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(
candidate_dfs['train'].sentence.values
)
dev_X = vectorizer.transform(candidate_dfs['dev'].sentence.values)
test_X = vectorizer.transform(candidate_dfs['test'].sentence.values)
bow_model = run_grid_search(LogisticRegression(), X, {'C':pd.np.linspace(1e-6,5, num=20)}, training_labels)
plt.plot(pd.np.linspace(1e-6,5, num=20), bow_model.cv_results_['mean_train_score'])
###Output
_____no_output_____
###Markdown
Doc2VecC This model comes from this [paper](https://arxiv.org/pdf/1707.02377.pdf), which is builds off of popular sentence/document embedding algorithms. Through their use of corruption, which involves removing words from a document to generate embeddings, the authors were able to achieve significant speed boosts and results. Majority of the steps to embed these sentences are located in this script [here](../../generate_doc2vec_sentences.py). Shown below are results after feeding these embeddings into the logistic regression algorithm. Doc2VecC 500k Subsample Experiment
###Code
files = zip(
glob.glob('../../doc2vec/doc_vectors/500k_random_sample/train/train_doc_vectors_500k_subset_*.txt.xz'),
glob.glob('../../doc2vec/doc_vectors/500k_random_sample/dev/dev_doc_vectors_500k_subset_*.txt.xz'),
glob.glob('../../doc2vec/doc_vectors/500k_random_sample/test/test_doc_vectors_500k_subset_*.txt.xz')
)
doc2vec_500k_dev_marginals_df = pd.DataFrame()
doc2vec_500k_test_marginals_df = pd.DataFrame()
for index, data in tqdm_notebook(enumerate(files)):
doc2vec_train = pd.read_table(data[0], header=None, sep=" ")
doc2vec_train = doc2vec_train.values[:-1, :-1]
doc2vec_dev = pd.read_table(data[1], header=None, sep=" ")
doc2vec_dev = doc2vec_dev.values[:-1, :-1]
doc2vec_test = pd.read_table(data[2], header=None, sep=" ")
doc2vec_test = doc2vec_test.values[:-1, :-1]
model = run_grid_search(LogisticRegression(), doc2vec_train,
{'C':pd.np.linspace(1e-6, 5, num=4)}, training_labels)
doc2vec_500k_dev_marginals_df['subset_{}'.format(index)] = model.predict_proba(doc2vec_dev)[:,1]
doc2vec_500k_test_marginals_df['subset_{}'.format(index)] = model.predict_proba(doc2vec_test)[:,1]
model_aucs=plot_curve(doc2vec_500k_dev_marginals_df, candidate_dfs['dev'].curated_dsh,
figsize=(20,6), model_type="scatterplot")
doc2vec_subset_df = pd.DataFrame.from_dict(model_aucs, orient='index')
doc2vec_subset_df.describe()
###Output
_____no_output_____
###Markdown
Doc2Vec All D-G Sentences
###Code
doc2vec_X_all_DG = pd.read_table("../../doc2vec/doc_vectors/train_doc_vectors_all_dg.txt.xz",
header=None, sep=" ")
doc2vec_X_all_DG = doc2vec_X_all_DG.values[:-1,:-1]
doc2vec_dev_X_all_DG = pd.read_table("../../doc2vec/doc_vectors/dev_doc_vectors_all_dg.txt.xz",
header=None, sep=" ")
doc2vec_dev_X_all_DG = doc2vec_dev_X_all_DG.values[:-1,:-1]
doc2vec_test_X_all_DG = pd.read_table("../../doc2vec/doc_vectors/test_doc_vectors_all_dg.txt.xz",
header=None, sep=" ")
doc2vec_test_X_all_DG = doc2vec_test_X_all_DG.values[:-1,:-1]
doc2vec_all_pubmed_model = run_grid_search(LogisticRegression(), doc2vec_X_all_DG,
{'C':pd.np.linspace(1e-6, 1, num=20)}, training_labels)
plt.plot(pd.np.linspace(1e-6, 1, num=20), doc2vec_all_pubmed_model.cv_results_['mean_train_score'])
###Output
_____no_output_____
###Markdown
LSTM Here the LSTM network uses the PyTorch library. Because of the computation required, this whole section gets ported onto Penn's GPU cluster. Utilizing about 4 GPUs, this network takes less than a few hours to run, depending on the embedding size. Train LSTM on GPU
###Code
lstm = LSTM()
cand_objs = get_candidate_objects(session, candidate_dfs)
X = lstm.preprocess_data(cand_objs['train'], extend=True)
dev_X = lstm.preprocess_data(cand_objs['dev'], extend=False)
test_X = lstm.preprocess_data(cand_objs['test'], extend=False)
pickle.dump(X, open('../../lstm_cluster/train_matrix.pkl', 'wb'))
pickle.dump(X, open('../../lstm_cluster/dev_matrix.pkl', 'wb'))
pickle.dump(X, open('../../lstm_cluster/test_matrix.pkl', 'wb'))
pickle.dump(lstm, open('../../lstm_cluster/model.pkl', 'wb'))
pickle.dump(training_labels, open('../../lstm_cluster/train_labels.pkl', 'wb'))
pickle.dump(candidate_dfs['dev'].curated_dsh.astype(int).tolist(), open('../../lstm_cluster/dev_labels.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
Look at LSTM Results
###Code
dev_marginals = pickle.load(open('../../lstm_cluster/dev_lstm_marginals.pkl', 'rb'))
test_marginals = pickle.load(open('../../lstm_cluster/test_lstm_marginals.pkl', 'rb'))
lstm_dev_marginals_df = pd.DataFrame.from_dict(dev_marginals)
lstm_test_marginals_df = pd.DataFrame.from_dict(test_marginals)
model_aucs = plot_curve(lstm_dev_marginals_df, candidate_dfs['dev'].curated_dsh, model_type='heatmap',
y_label="Embedding Dim", x_label="Hidden Dim", metric="ROC")
ci_auc_stats_df = get_auc_significant_stats(candidate_dfs['dev'], model_aucs).sort_values('auroc', ascending=False)
ci_auc_stats_df
model_aucs = plot_curve(lstm_dev_marginals_df, candidate_dfs['dev'].curated_dsh, model_type='heatmap',
y_label="Embedding Dim", x_label="Hidden Dim", metric="PR")
###Output
_____no_output_____
###Markdown
Let's see how the models compare with each other
###Code
dev_marginals_df = pd.DataFrame(
pd.np.array([
gen_model.marginals(label_matricies['dev']),
bow_model.predict_proba(dev_X)[:,1],
doc2vec_500k_dev_marginals_df['subset_6'],
doc2vec_all_pubmed_model.predict_proba(doc2vec_dev_X_all_DG)[:,1],
lstm_dev_marginals_df['1250,1000'].tolist()
]).T,
columns=['Gen_Model', 'Bag_of_Words', 'Doc2Vec 500k', 'Doc2Vec All DG', 'LSTM']
)
dev_marginals_df.head(2)
model_aucs = plot_curve(
dev_marginals_df, candidate_dfs['dev'].curated_dsh,
plot_title="Dev ROC", model_type='curve',
figsize=(10,6), metric="ROC"
)
get_auc_significant_stats(candidate_dfs['dev'], model_aucs)
test_marginals_df = pd.DataFrame(
pd.np.array([
gen_model.marginals(label_matricies['test']),
bow_model.best_estimator_.predict_proba(test_X)[:,1],
doc2vec_500k_test_marginals_df['subset_6'],
doc2vec_all_pubmed_model.best_estimator_.predict_proba(doc2vec_test_X_all_DG)[:,1],
lstm_test_marginals_df['1250,1000'].tolist()
]).T,
columns=['Gen_Model', 'Bag_of_Words', 'Doc2Vec 500k', 'Doc2Vec All DG', 'LSTM']
)
test_marginals_df.head(2)
model_aucs = plot_curve(
test_marginals_df, candidate_dfs['test'].curated_dsh,
plot_title="Test ROC", model_type='curve',
figsize=(10,6), metric="ROC"
)
get_auc_significant_stats(candidate_dfs['test'], model_aucs)
model_aucs = plot_curve(
test_marginals_df, candidate_dfs['test'].curated_dsh,
plot_title="Test PRC", model_type='curve',
figsize=(10,6), metric="PR"
)
###Output
_____no_output_____ |
课程汇集/虚谷号内置课程目录/9.人工智能综合应用/06.颜值检测仪/摄像头颜值测试1.0.ipynb | ###Markdown
Beauty Score Tester (Camera Version)
###Code
# Case description: capture an image from the camera and call Baidu AI for recognition.
# For a detailed introduction to this example, refer to the Baidu AI documentation:
# https://ai.baidu.com/docs#/Face-Python-SDK/81dd3e06
###Output
_____no_output_____
###Markdown
Preparation 1. Import libraries
###Code
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import cv2
import time
import base64
from aip import AipFace
from IPython import display
from matplotlib import pyplot as plt
import matplotlib
%matplotlib inline
###Output
_____no_output_____
###Markdown
2. Define variables
###Code
face_num = 0
frame = None
now_time = 0
###Output
_____no_output_____
###Markdown
3. Set authentication information. Note: a test account is used here, which limits the number of API calls; please use your own account information.
###Code
""" 你的 APPID AK SK """
APP_ID = '15469649'
API_KEY = '3vZgLINSnGGEafPflkTLzkGh'
SECRET_KEY = '8cUXtkMed2z86kqfyrV606ylnCmfcc48'
client = AipFace(APP_ID, API_KEY, SECRET_KEY)
imageType = "BASE64"
options = {}
options["face_field"] = "age,beauty,expression,gender,glasses"
options["max_face_num"] = 2
options["face_type"] = "LIVE"
options["liveness_control"] = "LOW"
###Output
_____no_output_____
###Markdown
4. Basic function: read the image
###Code
def cvimg_to_b64(img):
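    """Encode an OpenCV image as a base64 JPEG string; returns "error" if encoding fails."""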
try:
image = cv2.imencode('.jpg', img)[1]
base64_data = str(base64.b64encode(image))[2:-1]
return base64_data
except Exception as e:
return "error"
###Output
_____no_output_____
###Markdown
5. Basic function: draw boxes around faces
###Code
#Note: haarcascade_frontalface_default.xml must be placed in the same folder.
def faceDetect(img, face_cascade=cv2.CascadeClassifier('./haarcascade_frontalface_default.xml')):
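    """Detect faces with the Haar cascade, draw a rectangle around each one, and return the annotated image and the number of faces found."""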
size = img.shape[:2]
divisor = 8
h, w = size
minSize = (w // divisor, h // divisor)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.2, 1, cv2.CASCADE_SCALE_IMAGE, minSize)
for (x, y, w, h) in faces:
cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
return img, len(faces)
###Output
_____no_output_____
###Markdown
6. Basic function: write information onto the image
###Code
#Write the text onto the image
def put_Text(cvimg, text, location, size=2):
cvimg = cv2.putText(cvimg, text, location, cv2.FONT_HERSHEY_SIMPLEX, size, (51, 102, 255), 3)
return cvimg
###Output
_____no_output_____
###Markdown
Start working. Description: the camera takes a photo and uploads it to the Baidu AI platform for recognition, and the recognition result is then output.
###Code
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
frame1, face_num = faceDetect(frame)
frame1 = cv2.flip(frame1, 1, dst=None)
frame1 = cv2.resize(frame1, (1280, 800), interpolation=cv2.INTER_LINEAR)
img64 = cvimg_to_b64(frame1)
    #Get the image information
    res = client.detect(img64, imageType, options)
    #If face information is found, read it out
if (res['error_msg']=="SUCCESS"):
gender = res['result']['face_list'][0]['gender']['type']
age = res['result']['face_list'][0]['age']
beauty = res['result']['face_list'][0]['beauty']
frame1 = put_Text(frame1, str(age), (300, 50))
frame1 = put_Text(frame1, str(gender), (300, 120))
frame1 = put_Text(frame1, str(beauty), (300, 190))
frame1 = put_Text(frame1, "Age:", (50, 50))
frame1 = put_Text(frame1, "Gender:", (50, 120))
frame1 = put_Text(frame1, "Beauty:", (50, 190))
else:
frame1 = put_Text(frame1, "no face", (50, 50))
#display.clear_output(wait=True)
img=frame1[:,:,::-1]
plt.imshow(img)
    plt.axis('off') #hide the axes
plt.show()
else:
print("没有接摄像头或者摄像头被占用!")
cap.release()
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
Comprehensive extension. Feature description: when there is an obstacle (a person) in front of the camera, the LED lights up and a photo is automatically taken for recognition. After recognition finishes, the LED turns off, the recognition result is displayed, and the image is saved automatically. Hardware setup: the infrared obstacle sensor connects to pin D3; the servo connects to D7; the LED connects to D13. Other notes: please design a beauty-score indicator chart and test where the servo points.
###Code
#Note: when testing the code below, restart and clear output ("服务-重启 & 清空输出") before every run to initialize.
from xugu import Pin,Servo
p1 = Pin(3, Pin.IN)
led = Pin(13, Pin.OUT)
servo = Servo(7)
while True:
v1=p1.read_digital()
if v1==1:
led.write_digital(1)
print("开始测试,请稍候")
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
frame1, face_num = faceDetect(frame)
frame1 = cv2.flip(frame1, 1, dst=None)
frame1 = cv2.resize(frame1, (1280, 800), interpolation=cv2.INTER_LINEAR)
img64 = cvimg_to_b64(frame1)
            #Get the image information
            res = client.detect(img64, imageType, options)
            #If face information is found, read it out
if (res['error_msg']=="SUCCESS"):
gender = res['result']['face_list'][0]['gender']['type']
age = res['result']['face_list'][0]['age']
beauty = res['result']['face_list'][0]['beauty']
frame1 = put_Text(frame1, str(age), (300, 50))
frame1 = put_Text(frame1, str(gender), (300, 120))
frame1 = put_Text(frame1, str(beauty), (300, 190))
frame1 = put_Text(frame1, "Age:", (50, 50))
frame1 = put_Text(frame1, "Gender:", (50, 120))
frame1 = put_Text(frame1, "Beauty:", (50, 190))
                #A face was detected; save the annotated image
cv2.imwrite(str(time.time())+".jpg",frame1)
            else:
                frame1 = put_Text(frame1, "no face", (50, 50))
                beauty = 0  # default value so the servo call below does not fail when no face is found
            display.clear_output(wait=True)
            img=frame1[:,:,::-1]
            plt.imshow(img)
            plt.axis('off') #hide the axes
            plt.show()
            print("Image saved")
            servo.write_angle(int(beauty*2))
led.write_digital(0)
time.sleep(10)
else:
print("没有接摄像头或者摄像头被占用!")
cap.release()
cv2.destroyAllWindows()
time.sleep(0.2)
###Output
_____no_output_____ |
5. Building Deep Learning Models with TensorFlow/Week 2 Supervised Learning Models/CNN-MNIST-Dataset.ipynb | ###Markdown
CONVOLUTIONAL NEURAL NETWORK APPLICATION Introduction In this section, we will use the famous [MNIST Dataset](http://yann.lecun.com/exdb/mnist/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDL0120ENSkillsNetwork20629446-2021-01-01) to build two Neural Networks capable to perform handwritten digits classification. The first Network is a simple Multi-layer Perceptron (MLP) and the second one is a Convolutional Neural Network (CNN from now on). In other words, when given an input our algorithm will say, with some associated error, what type of digit this input represents. *** Click on the links to go to the following sections:Table of Contents What is Deep Learning Simple test: Is TensorFlow working? 1st part: classify MNIST using a simple model Evaluating the final result How to improve our model? 2nd part: Deep Learning applied on MNIST Summary of the Deep Convolutional Neural Network Define functions and train the model Evaluate the model What is Deep Learning? Brief Theory: Deep learning (also known as deep structured learning, hierarchical learning or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers, with complex structures or otherwise, composed of multiple non-linear transformations. It's time for deep learning. Our brain doesn't work with only one or three layers. Why it would be different with machines?. In Practice, defining the term "Deep": in this context, deep means that we are studying a Neural Network which has several hidden layers (more than one), no matter what type (convolutional, pooling, normalization, fully-connected etc). The most interesting part is that some papers noticed that Deep Neural Networks with the right architectures/hyper-parameters achieve better results than shallow Neural Networks with the same computational power (e.g. number of neurons or connections). In Practice, defining "Learning": In the context of supervised learning, digits recognition in our case, the learning part consists of a target/feature which is to be predicted using a given set of observations with the already known final prediction (label). In our case, the target will be the digit (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) and the observations are the intensity and relative position of the pixels. After some training, it is possible to generate a "function" that map inputs (digit image) to desired outputs(type of digit). The only problem is how well this map operation occurs. While trying to generate this "function", the training process continues until the model achieves a desired level of accuracy on the training data. Installing TensorFlow We begin by installing TensorFlow version 2.2.0 and its required prerequistes.
###Code
!pip install grpcio==1.24.3
!pip install tensorflow==2.2.0
###Output
_____no_output_____
###Markdown
Notice: This notebook has been created with TensorFlow version 2.2, and might not work with other versions. Therefore we check:
###Code
import tensorflow as tf
from IPython.display import Markdown, display
def printmd(string):
display(Markdown('# <span style="color:red">'+string+'</span>'))
if not tf.__version__ == '2.2.0':
printmd('<<<<<!!!!! ERROR !!!! please upgrade to TensorFlow 2.2.0, or restart your Kernel (Kernel->Restart & Clear Output)>>>>>')
###Output
_____no_output_____
###Markdown
In this tutorial, we first classify MNIST using a simple Multi-layer perceptron and then, in the second part, we use deep learning to improve the accuracy of our results. 1st part: classify MNIST using a simple model. We are going to create a simple Multi-layer perceptron, a simple type of Neural Network, to perform classification tasks on the MNIST digits dataset. If you are not familiar with the MNIST dataset, please consider reading more about it: click here What is MNIST? According to LeCun's website, the MNIST is a: "database of handwritten digits that has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image". Import the MNIST dataset using TensorFlow built-in feature It's very important to notice that MNIST is a highly optimized data-set and it does not contain images. You will need to build your own code if you want to see the real digits. Another important side note is the effort that the authors invested in this data-set with normalization and centering operations.
###Code
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
###Output
_____no_output_____
###Markdown
The features data are between 0 and 255, and we will normalize this to improve optimization performance.
###Code
x_train, x_test = x_train / 255.0, x_test / 255.0
###Output
_____no_output_____
###Markdown
Let's take a look at the first few label values:
###Code
print(y_train[0:5])
###Output
_____no_output_____
###Markdown
The current label scheme simply identifies the category to which each data point belongs (each handwritten digit is assigned a category equal to the number value). We need to convert this into a one-hot encoded vector. In contrast to Binary representation, the labels will be presented such that, to represent a number N, the $N^{th}$ bit is 1 while the other bits are 0. For example, five and zero in a binary code would be: Number representation: 0 Binary encoding: [2^5] [2^4] [2^3] [2^2] [2^1] [2^0] Array/vector: 0 0 0 0 0 0 Number representation: 5 Binary encoding: [2^5] [2^4] [2^3] [2^2] [2^1] [2^0] Array/vector: 0 0 0 1 0 1 Using a different notation, the same digits using one-hot vector representation can be shown as: Number representation: 0 One-hot encoding: [5] [4] [3] [2] [1] [0] Array/vector: 0 0 0 0 0 1 Number representation: 5 One-hot encoding: [5] [4] [3] [2] [1] [0] Array/vector: 1 0 0 0 0 0 This is a standard operation, and is shown below.
###Code
print("categorical labels")
print(y_train[0:5])
# make labels one hot encoded
y_train = tf.one_hot(y_train, 10)
y_test = tf.one_hot(y_test, 10)
print("one hot encoded labels")
print(y_train[0:5])
###Output
_____no_output_____
###Markdown
Understanding the imported data The imported data can be divided as follows:* Training >> Use the given dataset with inputs and related outputs for training of NN. In our case, if you give an image that you know that represents a "nine", this set will tell the neural network that we expect a "nine" as the output.\ \- 60,000 data points \- x_train for inputs \- y_train for outputs/labels* Test >> The model does not have access to this information prior to the testing phase. It is used to evaluate the performance and accuracy of the model against "real life situations". No further optimization beyond this point.\ \- 10,000 data points \- x_test for inputs \- y_test for outputs/labels* Validation data is not used in this example.
###Code
print("number of training examples:" , x_train.shape[0])
print("number of test examples:" , x_test.shape[0])
###Output
_____no_output_____
###Markdown
The new Dataset API in TensorFlow 2.X allows you to define batch sizes as part of the dataset. It also has improved I/O characteristics, and is the recommended way of loading data. This allows you to iterate through subsets (batches) of the data during training. This is a common practice that improves performance by computing gradients over smaller batches. We will see this in action during the training step.Additionally, you can shuffle the dataset if you believe that there is a skewed distribution of data in the original dataset that may result in batches with different distributions. We aren't shuffling data here.
###Code
train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(50)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(50)
###Output
_____no_output_____
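###Markdown
The markdown above mentions that the dataset could be shuffled; we do not shuffle here, but as a small illustrative sketch (the `shuffled_train_ds` name exists only for this example), shuffling would be added before batching like this:
###Code
# optional: shuffle before batching; buffer_size controls how many examples
# the shuffle buffer samples from (60000 = the full training set)
shuffled_train_ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
                     .shuffle(buffer_size=60000, seed=1)
                     .batch(50))
###Output
_____no_output_____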
###Markdown
Converting a 2D Image into a 1D Vector MNIST images are black and white thumbnail square images with 28x28 pixels. Each pixel is assigned an intensity (originally on a scale of 0 to 255). To make the input useful to us, we need these to be arranged in a 1D vector using a consistent strategy, as is shown in the figure below. We can use `Flatten` to accomplish this task.
###Code
# showing an example of the Flatten class and operation
from tensorflow.keras.layers import Flatten
flatten = Flatten(dtype='float32')
"original data shape"
print(x_train.shape)
"flattened shape"
print(flatten(x_train).shape)
###Output
_____no_output_____
###Markdown
Illustration of the Flatten operation Assigning bias and weights to null tensors Now we are going to create the weights and biases; for now they will be initialized as arrays filled with zeros. The values that we choose here can be critical, but we'll cover a better initialization approach in the second part. Since these values will be adjusted during the optimization process, we define them using `tf.Variable`. NOTE: `tf.Variable` creates adjustable variables that are in the global namespace, so any function that references these variables does not need to take them as arguments. But they are globals, so exercise caution when naming!
###Code
# Weight tensor
W = tf.Variable(tf.zeros([784, 10], tf.float32))
# Bias tensor
b = tf.Variable(tf.zeros([10], tf.float32))
###Output
_____no_output_____
###Markdown
Adding Weights and Biases to input The only difference between our next operation and the picture below is that we use the mathematical convention for what is being executed in the illustration. The tf.matmul operation performs a matrix multiplication between x (inputs) and W (weights), and then the code adds the biases. Illustration showing how weights and biases are added to neurons/nodes.
###Code
def forward(x):
return tf.matmul(x,W) + b
###Output
_____no_output_____
###Markdown
Softmax Regression Softmax is an activation function that is normally used in classification problems. It generates the probabilities for the output. For example, our model will not be 100% sure that one digit is the number nine; instead, the answer will be a distribution of probabilities where, if the model is right, the digit nine will have a larger probability than the other digits. For comparison, below is the one-hot vector for a nine digit label:
###Code
0 --> 0
1 --> 0
2 --> 0
3 --> 0
4 --> 0
5 --> 0
6 --> 0
7 --> 0
8 --> 0
9 --> 1
###Output
_____no_output_____
###Markdown
A machine does not have all this certainty, so we want to know what the best guess is, but we also want to understand how sure it was and what the second-best option was. Below is an example of a hypothetical distribution for the digit nine:
###Code
0 -->0.01
1 -->0.02
2 -->0.03
3 -->0.02
4 -->0.12
5 -->0.01
6 -->0.03
7 -->0.06
8 -->0.1
9 -->0.6
###Output
_____no_output_____
###Markdown
Softmax exponentiates each element of a vector and then normalizes the results so that they sum to one. The formula is:$$\sigma(z_i) = \frac{e^{z_i}}{\sum_{j}{e^{z_j}}}$$
###Code
# a sample softmax calculation on an input vector
vector = [10, 0.2, 8]
softmax = tf.nn.softmax(vector)
print("softmax calculation")
print(softmax.numpy())
print("verifying normalization")
print(tf.reduce_sum(softmax))
print("finding vector with largest value (label assignment)")
print("category", tf.argmax(softmax).numpy())
###Output
_____no_output_____
###Markdown
Now we can define our output layer
###Code
def activate(x):
return tf.nn.softmax(forward(x))
###Output
_____no_output_____
###Markdown
The logistic function is used for classification between two target classes, 0/1. The softmax function is a generalized form of the logistic function; that is, softmax can output a multiclass categorical probability distribution. Let's create a `model` function for convenience.
###Code
def model(x):
x = flatten(x)
return activate(x)
###Output
_____no_output_____
###Markdown
Cost function This is a function that measures the difference between the right answers (labels) and the outputs estimated by our Network; training minimizes it. Here we use the cross entropy function, which is a popular cost function for categorical models. The function is defined in terms of probabilities, which is why we must use normalized vectors. It is given as:$$ CrossEntropy = -\sum{y\_{Label}\cdot \log(y\_{Prediction})}$$
###Code
def cross_entropy(y_label, y_pred):
return (-tf.reduce_sum(y_label * tf.math.log(y_pred + 1.e-10)))
# addition of 1e-10 to prevent errors in zero calculations
# current loss function for unoptimized model
cross_entropy(y_train, model(x_train)).numpy()
###Output
_____no_output_____
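###Markdown
As a quick, illustrative sanity check of this cost function (the values are made up for the example), a confident correct prediction should yield a much smaller cross entropy than a confident wrong one:
###Code
# toy 3-class example: the true label is class 2 (one-hot encoded)
label = tf.constant([0., 0., 1.])
good_pred = tf.constant([0.05, 0.05, 0.90])  # confident and correct
bad_pred = tf.constant([0.90, 0.05, 0.05])   # confident and wrong
print(cross_entropy(label, good_pred).numpy())  # ~0.11 (= -log(0.9))
print(cross_entropy(label, bad_pred).numpy())   # ~3.0 (= -log(0.05))
###Output
_____no_output_____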
###Markdown
Type of optimization: Gradient Descent This is the part where you configure the optimizer for your Neural Network. There are several optimizers available; in our case we will use Gradient Descent because it is a well-established optimizer.
###Code
optimizer = tf.keras.optimizers.SGD(learning_rate=0.25)
###Output
_____no_output_____
###Markdown
Now we define the training step. This step uses `GradientTape` to automatically compute derivatives of the functions we have manually created and applies them using the `SGD` optimizer.
###Code
def train_step(x, y ):
with tf.GradientTape() as tape:
#compute loss function
current_loss = cross_entropy( y, model(x))
# compute gradient of loss
#(This is automatic! Even with specialized functions!)
grads = tape.gradient( current_loss , [W,b] )
# Apply SGD step to our Variables W and b
optimizer.apply_gradients( zip( grads , [W,b] ) )
return current_loss.numpy()
###Output
_____no_output_____
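###Markdown
To make the automatic differentiation step concrete, here is a tiny standalone illustration (the `x_demo`/`y_demo` names exist only for this example): for y = x^2 at x = 3.0, `GradientTape` should return dy/dx = 2x = 6.0.
###Code
x_demo = tf.Variable(3.0)
with tf.GradientTape() as tape_demo:
    y_demo = x_demo * x_demo
# gradient of y = x^2 with respect to x, evaluated at x = 3.0
print(tape_demo.gradient(y_demo, x_demo).numpy())  # expected: 6.0
###Output
_____no_output_____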
###Markdown
Training batches Train using minibatch Gradient Descent. In practice, Batch Gradient Descent is not often used because it is too computationally expensive. The good part about this method is that you have the true gradient, but at the expensive computational cost of using the whole dataset at once. Due to this problem, Neural Networks usually use minibatches to train. We have already divided our full dataset into batches of 50 each using the Datasets API. Now we can iterate through each of those batches to compute a gradient. Once we iterate through all of the batches in the dataset, we complete an **epoch**, or a full traversal of the dataset.
###Code
# zeroing out weights in case you want to run this cell multiple times
# Weight tensor
W = tf.Variable(tf.zeros([784, 10],tf.float32))
# Bias tensor
b = tf.Variable(tf.zeros([10],tf.float32))
loss_values=[]
accuracies = []
epochs = 10
for i in range(epochs):
j=0
# each batch has 50 examples
for x_train_batch, y_train_batch in train_ds:
j+=1
current_loss = train_step(x_train_batch, y_train_batch)
if j%500==0: #reporting intermittent batch statistics
print("epoch ", str(i), "batch", str(j), "loss:", str(current_loss) )
# collecting statistics at each epoch...loss function and accuracy
# loss function
current_loss = cross_entropy( y_train, model( x_train )).numpy()
loss_values.append(current_loss)
correct_prediction = tf.equal(tf.argmax(model(x_train), axis=1),
tf.argmax(y_train, axis=1))
# accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)).numpy()
accuracies.append(accuracy)
print("end of epoch ", str(i), "loss", str(current_loss), "accuracy", str(accuracy) )
###Output
_____no_output_____
###Markdown
Test and Plots It is common to run intermittent diagnostics (such as accuracy and loss over the entire dataset) during training. Here we compute a summary statistic on the test dataset as well. Fitness metrics for the training data should closely match those of the test data. If the test metrics are distinctly less favorable, this can be a sign of overfitting.
###Code
correct_prediction_train = tf.equal(tf.argmax(model(x_train), axis=1),tf.argmax(y_train,axis=1))
accuracy_train = tf.reduce_mean(tf.cast(correct_prediction_train, tf.float32)).numpy()
correct_prediction_test = tf.equal(tf.argmax(model(x_test), axis=1),tf.argmax(y_test, axis=1))
accuracy_test = tf.reduce_mean(tf.cast(correct_prediction_test, tf.float32)).numpy()
print("training accuracy", accuracy_train)
print("test accuracy", accuracy_test)
###Output
_____no_output_____
###Markdown
The next two plots show the performance of the optimization at each epoch.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 6)
#print(loss_values)
plt.plot(loss_values,'-ro')
plt.title("loss per epoch")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.plot(accuracies,'-ro')
plt.title("accuracy per epoch")
plt.xlabel("epoch")
plt.ylabel("accuracy")
###Output
_____no_output_____
###Markdown
Evaluating the final result 84% accuracy is not bad considering the simplicity of the model, but >90% accuracy has been achieved in the past. How to improve our model? Several options follow: Regularization of Neural Networks using DropConnect Multi-column Deep Neural Networks for Image Classification APAC: Augmented Pattern Classification with Neural Networks Simple Deep Neural Network with Dropout. In the next part we are going to explore the option: Simple Deep Neural Network with Dropout (more than 1 hidden layer) 2nd part: Deep Learning applied on MNIST In the first part, we learned how to use a simple ANN to classify MNIST. Now we are going to expand our knowledge using a Deep Neural Network. The architecture of our network is:* (Input) -> \[batch_size, 28, 28, 1] >> Apply 32 filters of \[5x5]* (Convolutional layer 1) -> \[batch_size, 28, 28, 32]* (ReLU 1) -> \[?, 28, 28, 32]* (Max pooling 1) -> \[?, 14, 14, 32]* (Convolutional layer 2) -> \[?, 14, 14, 64]* (ReLU 2) -> \[?, 14, 14, 64]* (Max pooling 2) -> \[?, 7, 7, 64]* \[fully connected layer 3] -> \[1x1024]* \[ReLU 3] -> \[1x1024]* \[Drop out] -> \[1x1024]* \[fully connected layer 4] -> \[1x10] The next cells will explore this new architecture. The MNIST data The MNIST Dataset will be used from the above example. Initial parameters Create general parameters for the model
###Code
width = 28 # width of the image in pixels
height = 28 # height of the image in pixels
flat = width * height # number of pixels in one image
class_output = 10 # number of possible classifications for the problem
###Output
_____no_output_____
###Markdown
Converting images of the data set to tensors The input image is 28 pixels by 28 pixels, 1 channel (grayscale). In this case, the first dimension is the batch number of the image, and can be of any size (so we set it to -1). The second and third dimensions are width and height, and the last one is the image channels.
###Code
x_image_train = tf.reshape(x_train, [-1,28,28,1])
x_image_train = tf.cast(x_image_train, 'float32')
x_image_test = tf.reshape(x_test, [-1,28,28,1])
x_image_test = tf.cast(x_image_test, 'float32')
#creating new dataset with reshaped inputs
train_ds2 = tf.data.Dataset.from_tensor_slices((x_image_train, y_train)).batch(50)
test_ds2 = tf.data.Dataset.from_tensor_slices((x_image_test, y_test)).batch(50)
###Output
_____no_output_____
###Markdown
Reducing the data set size from this point on because Skills Network Labs only provides 4 GB of main memory, but 8 GB would otherwise be needed. If you want to run faster (on multiple CPUs or GPUs) and on the whole data set, consider using IBM Watson Studio. You get 100 hours of free usage every month.
###Code
x_image_train = tf.slice(x_image_train,[0,0,0,0],[10000, 28, 28, 1])
y_train = tf.slice(y_train,[0,0],[10000, 10])
###Output
_____no_output_____
###Markdown
Convolutional Layer 1 Defining kernel weight and biasWe define a kernel here. The Size of the filter/kernel is 5x5; Input channels is 1 (grayscale); and we need 32 different feature maps (here, 32 feature maps means 32 different filters are applied on each image. So, the output of convolution layer would be 28x28x32). In this step, we create a filter / kernel tensor of shape [filter_height, filter_width, in_channels, out_channels]
###Code
W_conv1 = tf.Variable(tf.random.truncated_normal([5, 5, 1, 32], stddev=0.1, seed=0))
b_conv1 = tf.Variable(tf.constant(0.1, shape=[32])) # need 32 biases for 32 outputs
###Output
_____no_output_____
###Markdown
Convolve with weight tensor and add biases.To create convolutional layer, we use tf.nn.conv2d. It computes a 2-D convolution given 4-D input and filter tensors.Inputs:* tensor of shape \[batch, in_height, in_width, in_channels]. x of shape \[batch_size,28 ,28, 1]* a filter / kernel tensor of shape \[filter_height, filter_width, in_channels, out_channels]. W is of size \[5, 5, 1, 32]* stride which is \[1, 1, 1, 1]. The convolutional layer, slides the "kernel window" across the input tensor. As the input tensor has 4 dimensions: \[batch, height, width, channels], then the convolution operates on a 2D window on the height and width dimensions. **strides** determines how much the window shifts by in each of the dimensions. As the first and last dimensions are related to batch and channels, we set the stride to 1. But for second and third dimension, we could set other values, e.g. \[1, 2, 2, 1]Process:* Change the filter to a 2-D matrix with shape \[5\*5\*1,32]* Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, 28, 28, 5*5*1]`.* For each batch, right-multiplies the filter matrix and the image vector.Output:* A `Tensor` (a 2-D convolution) of size tf.Tensor 'add\_7:0' shape=(?, 28, 28, 32)- Notice: the output of the first convolution layer is 32 \[28x28] images. Here 32 is considered as volume/depth of the output image.
###Code
def convolve1(x):
return(
tf.nn.conv2d(x, W_conv1, strides=[1, 1, 1, 1], padding='SAME') + b_conv1)
###Output
_____no_output_____
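###Markdown
As an illustrative sanity check (a dummy all-zeros image, not MNIST data), the first convolution with SAME padding should keep the 28x28 spatial size and produce 32 feature maps:
###Code
dummy = tf.zeros([1, 28, 28, 1])  # one fake 28x28 grayscale image
print(convolve1(dummy).shape)  # expected: (1, 28, 28, 32)
###Output
_____no_output_____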
###Markdown
Apply the ReLU activation Function In this step, we just go through all outputs of the convolution layer, convolve1, and wherever a negative number occurs, we swap it out for a 0. This is called the ReLU activation function. Let f(x) be the ReLU activation function: $f(x) = max(0,x)$.
###Code
def h_conv1(x): return(tf.nn.relu(convolve1(x)))
###Output
_____no_output_____
###Markdown
Apply the max pooling Max pooling is a form of non-linear down-sampling. It partitions the input image into a set of rectangles and then finds the maximum value for each region. Let's use the tf.nn.max_pool function to perform max pooling. Kernel size: 2x2 (if the window is a 2x2 matrix, it would result in one output pixel)\ Strides: dictates the sliding behaviour of the kernel. In this case it will move 2 pixels every time, thus not overlapping. The input is a matrix of size 28x28x32, and the output would be a matrix of size 14x14x32.
###Code
def conv1(x):
return tf.nn.max_pool(h_conv1(x), ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
###Output
_____no_output_____
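###Markdown
Again as an illustrative check on a dummy input, convolution + ReLU + 2x2 max pooling should halve the spatial size from 28x28 to 14x14:
###Code
dummy = tf.zeros([1, 28, 28, 1])
print(conv1(dummy).shape)  # expected: (1, 14, 14, 32)
###Output
_____no_output_____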
###Markdown
First layer completed Convolutional Layer 2 Weights and Biases of kernels We apply the convolution again in this layer. Let's look at the second layer kernel:* Filter/kernel: 5x5 (25 pixels)* Input channels: 32 (from the 1st Conv layer, we had 32 feature maps)* 64 output feature maps Notice: here, the input image is \[14x14x32], the filter is \[5x5x32], we use 64 filters of size \[5x5x32], and the output of the convolutional layer would be 64 convolved images, \[14x14x64]. Notice: the convolution result of applying a filter of size \[5x5x32] on an image of size \[14x14x32] is an image of size \[14x14x1]; that is, the convolution is functioning on volume.
###Code
W_conv2 = tf.Variable(tf.random.truncated_normal([5, 5, 32, 64], stddev=0.1, seed=1))
b_conv2 = tf.Variable(tf.constant(0.1, shape=[64])) #need 64 biases for 64 outputs
###Output
_____no_output_____
###Markdown
Convolve image with weight tensor and add biases.
###Code
def convolve2(x):
return(
tf.nn.conv2d(conv1(x), W_conv2, strides=[1, 1, 1, 1], padding='SAME') + b_conv2)
###Output
_____no_output_____
###Markdown
Apply the ReLU activation Function
###Code
def h_conv2(x): return tf.nn.relu(convolve2(x))
###Output
_____no_output_____
###Markdown
Apply the max pooling
###Code
def conv2(x):
return(
tf.nn.max_pool(h_conv2(x), ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME'))
###Output
_____no_output_____
###Markdown
Second layer completed. So, what is the output of the second layer, layer2?* it is 64 matrices of \[7x7] Fully Connected Layer You need a fully connected layer to use the Softmax and create the probabilities in the end. Fully connected layers take the high-level filtered images from the previous layer, that is, all 64 matrices, and convert them to a flat array. So, each \[7x7] matrix will be converted to a matrix of \[49x1], and then all 64 matrices will be concatenated, which makes an array of size \[3136x1]. We will connect it into another layer of size \[1024x1]. So, the weight matrix between these 2 layers will be \[3136x1024] Flattening Second Layer
###Code
def layer2_matrix(x): return tf.reshape(conv2(x), [-1, 7 * 7 * 64])
###Output
_____no_output_____
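###Markdown
One more illustrative shape check on a dummy input: flattening the 7x7x64 output of the second layer should give 3136 values per image, matching the \[3136x1024] weight matrix described above:
###Code
dummy = tf.zeros([1, 28, 28, 1])
print(layer2_matrix(dummy).shape)  # expected: (1, 3136)
###Output
_____no_output_____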
###Markdown
Weights and Biases between layer 2 and 3 Composition of the feature map from the last layer (7x7) multiplied by the number of feature maps (64); 1024 outputs to the Softmax layer
###Code
W_fc1 = tf.Variable(tf.random.truncated_normal([7 * 7 * 64, 1024], stddev=0.1, seed = 2))
b_fc1 = tf.Variable(tf.constant(0.1, shape=[1024])) # need 1024 biases for 1024 outputs
###Output
_____no_output_____
###Markdown
Matrix Multiplication (applying weights and biases)
###Code
def fcl(x): return tf.matmul(layer2_matrix(x), W_fc1) + b_fc1
###Output
_____no_output_____
###Markdown
Apply the ReLU activation Function
###Code
def h_fc1(x): return tf.nn.relu(fcl(x))
###Output
_____no_output_____
###Markdown
Third layer completed Dropout Layer, Optional phase for reducing overfitting It is a phase where the network "forgets" some features. At each training step in a mini-batch, some units get switched off randomly so that they do not interact with the network. That is, their weights cannot be updated, nor do they affect the learning of the other network nodes. This can be very useful for very large neural networks to prevent overfitting.
###Code
keep_prob=0.5
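# note: in TensorFlow 2.x the second argument of tf.nn.dropout is the drop *rate*
# (fraction of units zeroed out), not a keep probability; with 0.5 the two coincide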
def layer_drop(x): return tf.nn.dropout(h_fc1(x), keep_prob)
###Output
_____no_output_____
###Markdown
Readout Layer (Softmax Layer) Type: Softmax, Fully Connected Layer. Weights and Biases In the last layer, the CNN takes the high-level filtered images and translates them into votes using softmax. Input channels: 1024 (neurons from the 3rd Layer); 10 output features
###Code
W_fc2 = tf.Variable(tf.random.truncated_normal([1024, 10], stddev=0.1, seed = 2)) #1024 neurons
b_fc2 = tf.Variable(tf.constant(0.1, shape=[10])) # 10 possibilities for digits [0,1,2,3,4,5,6,7,8,9]
###Output
_____no_output_____
###Markdown
Matrix Multiplication (applying weights and biases)
###Code
def fc(x): return tf.matmul(layer_drop(x), W_fc2) + b_fc2
###Output
_____no_output_____
###Markdown
Apply the Softmax activation Function Softmax allows us to interpret the outputs of fc as probabilities. So, y_CNN is a tensor of probabilities.
###Code
def y_CNN(x): return tf.nn.softmax(fc(x))
###Output
_____no_output_____
###Markdown
*** Summary of the Deep Convolutional Neural Network Now it is time to review the structure of our network 0) Input - MNIST dataset 1) Convolutional and Max-Pooling 2) Convolutional and Max-Pooling 3) Fully Connected Layer 4) Processing - Dropout 5) Readout layer - Fully Connected 6) Outputs - Classified digits *** Define functions and train the model Define the loss function We need to compare our output, the final layer tensor, with the ground truth for each mini-batch. We can use cross entropy to see how badly our CNN is performing - that is, to measure the error at the softmax layer. The following code shows a toy sample of cross-entropy for a mini-batch of size 2 whose items have already been classified. You can run it (first change the cell type to code in the toolbar) to see how the cross entropy changes.
###Code
import numpy as np
layer4_test =[[0.9, 0.1, 0.1],[0.9, 0.1, 0.1]]
y_test=[[1.0, 0.0, 0.0],[1.0, 0.0, 0.0]]
np.mean( -np.sum(y_test * np.log(layer4_test),1))
###Output
_____no_output_____
###Markdown
reduce_sum computes the sum of the elements of (y_label * tf.math.log(y_pred)) across the second dimension of the tensor, and reduce_mean computes the mean of all elements in the tensor. $$ CrossEntropy = -\sum{y\_{Label}\cdot \log(y\_{Prediction})}$$
###Code
def cross_entropy(y_label, y_pred):
return (-tf.reduce_sum(y_label * tf.math.log(y_pred + 1.e-10)))
###Output
_____no_output_____
###Markdown
Define the optimizer It is obvious that we want to minimize the error of our network, which is calculated by the cross_entropy metric. To solve the problem, we have to compute gradients for the loss (which means minimizing the cross-entropy) and apply the gradients to the variables. This is done by an optimizer such as GradientDescent, Adagrad, or Adam (used below).
###Code
optimizer = tf.keras.optimizers.Adam(1e-4)
###Output
_____no_output_____
###Markdown
Following the convention of our first example, we will use `GradientTape` to define the training step.
###Code
variables = [W_conv1, b_conv1, W_conv2, b_conv2,
W_fc1, b_fc1, W_fc2, b_fc2, ]
def train_step(x, y):
with tf.GradientTape() as tape:
current_loss = cross_entropy( y, y_CNN( x ))
grads = tape.gradient( current_loss , variables )
optimizer.apply_gradients( zip( grads , variables ) )
return current_loss.numpy()
"""results = []
increment = 1000
for start in range(0,60000,increment):
s = tf.slice(x_image_train,[start,0,0,0],[start+increment-1, 28, 28, 1])
t = y_CNN(s)
#results.append(t)
"""
###Output
_____no_output_____
###Markdown
Define prediction Do you want to know how many of the cases in a mini-batch have been classified correctly? Let's count them.
###Code
correct_prediction = tf.equal(tf.argmax(y_CNN(x_image_train), axis=1), tf.argmax(y_train, axis=1))
###Output
_____no_output_____
###Markdown
Define accuracy It makes more sense to report accuracy using the average of correct cases.
###Code
accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float32'))
###Output
_____no_output_____
###Markdown
Run the training If you want a fast result (it might take some time to train it)
###Code
loss_values=[]
accuracies = []
epochs = 1
for i in range(epochs):
j=0
# each batch has 50 examples
for x_train_batch, y_train_batch in train_ds2:
j+=1
current_loss = train_step(x_train_batch, y_train_batch)
if j%50==0: #reporting intermittent batch statistics
correct_prediction = tf.equal(tf.argmax(y_CNN(x_train_batch), axis=1),
tf.argmax(y_train_batch, axis=1))
# accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)).numpy()
print("epoch ", str(i), "batch", str(j), "loss:", str(current_loss),
"accuracy", str(accuracy))
current_loss = cross_entropy( y_train, y_CNN( x_image_train )).numpy()
loss_values.append(current_loss)
correct_prediction = tf.equal(tf.argmax(y_CNN(x_image_train), axis=1),
tf.argmax(y_train, axis=1))
# accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)).numpy()
accuracies.append(accuracy)
print("end of epoch ", str(i), "loss", str(current_loss), "accuracy", str(accuracy) )
###Output
_____no_output_____
###Markdown
Wow...95% accuracy after only 1 epoch! You can increase the number of epochs in the previous cell if you REALLY have time to wait, or if you are running it using PowerAI (change the type of the cell to code). PS. If you have problems running this notebook, please shut down all your running Jupyter notebooks, clear all cell outputs, and run each cell only after the completion of the previous cell. Evaluate the model Print the evaluation to the user
###Code
j=0
accuracies=[]
# evaluate accuracy by batch and average...reporting every 100th batch
for x_train_batch, y_train_batch in train_ds2:
j+=1
correct_prediction = tf.equal(tf.argmax(y_CNN(x_train_batch), axis=1),
tf.argmax(y_train_batch, axis=1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)).numpy()
accuracies.append(accuracy)
if j%100==0:
print("batch", str(j), "accuracy", str(accuracy) )
import numpy as np
print("accuracy of entire set", str(np.mean(accuracies)))
###Output
_____no_output_____
###Markdown
Visualization Do you want to look at all the filters?
###Code
kernels = tf.reshape(tf.transpose(W_conv1, perm=[2, 3, 0,1]),[32, -1])
!wget --output-document utils1.py https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DL0120EN-SkillsNetwork/labs/Week2/data/utils.py
import utils1
import imp
imp.reload(utils1)
from utils1 import tile_raster_images
import matplotlib.pyplot as plt
from PIL import Image
%matplotlib inline
image = Image.fromarray(tile_raster_images(kernels.numpy(), img_shape=(5, 5) ,tile_shape=(4, 8), tile_spacing=(1, 1)))
### Plot image
plt.rcParams['figure.figsize'] = (18.0, 18.0)
imgplot = plt.imshow(image)
imgplot.set_cmap('gray')
###Output
_____no_output_____
###Markdown
Do you want to see the output of an image passing through the first convolution layer?
###Code
import numpy as np
plt.rcParams['figure.figsize'] = (5.0, 5.0)
sampleimage = [x_image_train[0]]
plt.imshow(np.reshape(sampleimage,[28,28]), cmap="gray")
#ActivatedUnits = sess.run(convolve1,feed_dict={x:np.reshape(sampleimage,[1,784],order='F'),keep_prob:1.0})
keep_prob=1.0
ActivatedUnits = convolve1(sampleimage)
filters = ActivatedUnits.shape[3]
plt.figure(1, figsize=(20,20))
n_columns = 6
n_rows = np.math.ceil(filters / n_columns) + 1
for i in range(filters):
plt.subplot(n_rows, n_columns, i+1)
plt.title('Filter ' + str(i))
plt.imshow(ActivatedUnits[0,:,:,i], interpolation="nearest", cmap="gray")
###Output
_____no_output_____
###Markdown
What about the second convolution layer?
###Code
#ActivatedUnits = sess.run(convolve2,feed_dict={x:np.reshape(sampleimage,[1,784],order='F'),keep_prob:1.0})
ActivatedUnits = convolve2(sampleimage)
filters = ActivatedUnits.shape[3]
plt.figure(1, figsize=(20,20))
n_columns = 8
n_rows = np.math.ceil(filters / n_columns) + 1
for i in range(filters):
plt.subplot(n_rows, n_columns, i+1)
plt.title('Filter ' + str(i))
plt.imshow(ActivatedUnits[0,:,:,i], interpolation="nearest", cmap="gray")
###Output
_____no_output_____ |
notebook_examples/Interactive_Example.ipynb | ###Markdown
Example Lipidomics Data Analysis (Interactive)_(lipydomics version: 1.4.x)_--- 1) Initialize a DatasetWe will be using `example_raw.csv` as the raw data file for this work (the data is positive mode and has not been normalized). We first need to initialize a lipydomics dataset from the raw data:
###Code
# start an interactive session
main()
###Output
What would you like to do?
1. Make a new Dataset
2. Load a previous Dataset
> 1
Please enter the path to the csv file you want to work with.
> example_raw.csv
What ESI mode was used for this data? (pos/neg)
> pos
! INFO: Loaded a new Dataset from .csv file: "example_raw.csv"
Would you like to automatically assign groups from headers? (y/N)
>
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 8
Dataset(
csv="example_raw.csv",
esi_mode="pos",
samples=16,
features=3342,
identified=False,
normalized=False,
rt_calibrated=False,
ext_var=False,
group_indices=None,
stats={}
)
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 11
Saving Current Dataset to File... Please enter the full path and file name to save the Dataset under.
* .pickle file
* no spaces in path)
example: 'jang_ho/191120_bacterial_pos.pickle'
> example.pickle
! INFO: Dataset saved to file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> exit
###Markdown
We have now done the bare minimum to load the data and we have a lipydomics dataset initialized. We can see from the overview that there are 16 samples and 3342 features in this dataset. We saved our Dataset to file (`example.pickle`) for easy loading in subsequent steps. --- 2) Prepare the Dataset 2.1) Assign GroupsCurrently, we have 16 samples in our dataset, but we have not provided any information on what groups they belong to. We could have automatically assigned groups based on the properly formatted column headings in the raw data file (`example_raw.csv`) when we initialized the dataset, but we will assign them manually instead.
###Code
# start an interactive session
main()
###Output
What would you like to do?
1. Make a new Dataset
2. Load a previous Dataset
> 2
Please enter the path to the pickle file you want to load.
> example.pickle
! INFO: Loaded existing Dataset from .pickle file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 1
Managing groups... What would you like to do?
1. Assign group
2. View assigned groups
3. Get data by group(s)
"back" to go back
> 1
Please provide a name for a group and its indices in order of name > starting index > ending index.
* group name should not contain spaces
* indices start at 0
* example: 'A 1 3'
> 0641 0 3
! INFO: Assigned indices: [0, 1, 2, 3] to group: "0641"
Managing groups... What would you like to do?
1. Assign group
2. View assigned groups
3. Get data by group(s)
"back" to go back
> 1
Please provide a name for a group and its indices in order of name > starting index > ending index.
* group name should not contain spaces
* indices start at 0
* example: 'A 1 3'
> geh 4 7
! INFO: Assigned indices: [4, 5, 6, 7] to group: "geh"
Managing groups... What would you like to do?
1. Assign group
2. View assigned groups
3. Get data by group(s)
"back" to go back
> 1
Please provide a name for a group and its indices in order of name > starting index > ending index.
* group name should not contain spaces
* indices start at 0
* example: 'A 1 3'
> sal 8 11
! INFO: Assigned indices: [8, 9, 10, 11] to group: "sal"
Managing groups... What would you like to do?
1. Assign group
2. View assigned groups
3. Get data by group(s)
"back" to go back
> 1
Please provide a name for a group and its indices in order of name > starting index > ending index.
* group name should not contain spaces
* indices start at 0
* example: 'A 1 3'
> wt 12 15
! INFO: Assigned indices: [12, 13, 14, 15] to group: "wt"
Managing groups... What would you like to do?
1. Assign group
2. View assigned groups
3. Get data by group(s)
"back" to go back
> back
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 8
Dataset(
csv="example_raw.csv",
esi_mode="pos",
samples=16,
features=3342,
identified=False,
normalized=False,
rt_calibrated=False,
ext_var=False,
group_indices={
"0641": [0, 1, 2, 3]
"geh": [4, 5, 6, 7]
"sal": [8, 9, 10, 11]
"wt": [12, 13, 14, 15]
},
stats={}
)
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 11
Saving Current Dataset to File... Please enter the full path and file name to save the Dataset under.
* .pickle file
* no spaces in path)
example: 'jang_ho/191120_bacterial_pos.pickle'
> example.pickle
! INFO: Dataset saved to file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> exit
###Markdown
Now all of the samples have been assigned to one of four groups: `0641`, `geh`, `sal`, and `wt`. These group IDs will be used later on when we select data or perform statistical analyses. 2.2) Normalize Intensities Currently, the feature intensities are only raw values. We are going to normalize them using weights derived from an external normalization factor (pellet masses), but we also have the option to normalize to the signal from an internal standard if desired. The normalization weights are in `weights.txt`, a simple text file with the weights for each sample, one per line (16 total).
###Code
# start an interactive session
main()
###Output
What would you like to do?
1. Make a new Dataset
2. Load a previous Dataset
> 2
Please enter the path to the pickle file you want to load.
> example.pickle
! INFO: Loaded existing Dataset from .pickle file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 6
Normalizing data... What would you like to do?
1. Internal
2. External
"back" to go back
> 2
Please provide a text file with the normalization values
weights.txt
! INFO: Successfully normalized
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 8
Dataset(
csv="example_raw.csv",
esi_mode="pos",
samples=16,
features=3342,
identified=False,
normalized=True,
rt_calibrated=False,
ext_var=False,
group_indices={
"0641": [0, 1, 2, 3]
"geh": [4, 5, 6, 7]
"sal": [8, 9, 10, 11]
"wt": [12, 13, 14, 15]
},
stats={}
)
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 11
Saving Current Dataset to File... Please enter the full path and file name to save the Dataset under.
* .pickle file
* no spaces in path)
example: 'jang_ho/191120_bacterial_pos.pickle'
> example.pickle
! INFO: Dataset saved to file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> exit
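###Markdown
As a side note, external normalization is conceptually just scaling each sample's intensities by its normalization value; the following NumPy sketch (hypothetical array shapes and values, not the lipydomics implementation) illustrates the idea:
###Code
import numpy as np

# hypothetical intensity table: 5 features (rows) x 4 samples (columns)
raw = np.array([
    [100., 200., 150., 120.],
    [ 10.,  12.,   9.,  11.],
    [300., 280., 310., 290.],
    [ 50.,  55.,  52.,  49.],
    [  5.,   6.,   4.,   7.],
])
# hypothetical per-sample weights (e.g. pellet masses), one per sample column
weights = np.array([1.0, 1.2, 0.9, 1.1])
# divide each sample's column by its weight (broadcasts across the feature rows)
normalized = raw / weights
print(normalized.shape)  # (5, 4)
###Output
_____no_output_____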
###Markdown
If we look at the dataset overview we can see that we now have assigned all of our samples to groups and we have a table of normalized intensities. 2.3) Identify LipidsAnother dataset preparation step we can perform before diving in to the data analysis is identifying as many lipids as possible. There are multiple identification criteria that take into account theoretical and measured m/z, retention time, and/or CCS, all of which vary in the level of confidence in the identifications they yield. We will use an approach that tries the highest confidence identification criteria first, then tries others.
###Code
# start an interactive session
main()
###Output
What would you like to do?
1. Make a new Dataset
2. Load a previous Dataset
> 2
Please enter the path to the pickle file you want to load.
> example.pickle
! INFO: Loaded existing Dataset from .pickle file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 5
Identifying Lipids... Please enter the tolerances for m/z, retention time and CCS matching
* separated by spaces
* example: '0.01 0.5 3.0'
* CCS tolerance is a percentage, not an absolute value
> 0.03 0.3 3.0
Please specify an identification level
'theo_mz' - match on theoretical m/z
'theo_mz_rt' - match on theoretical m/z and retention time
'theo_mz_ccs' - match on theoretical m/z and CCS
'theo_mz_rt_ccs' - match on theoretical m/z, retention time, and CCS
'meas_mz_ccs' - match on measured m/z and CCS
'meas_mz_rt_ccs' - match on measured m/z, retention time, and CCS
'any' - try all criteria (highest confidence first)
'back' to go back
> any
###Markdown
Using the `any` identification level and m/z, retention time, and CCS tolerances of 0.03 0.3 3.0, respectively, 2063 lipids were identified. Now the dataset is fully prepared and we can start performing statistical analyses and generating plots. --- 3) Statistical Analyses and Plotting 3.1) Compute ANOVA P-value for All GroupsA common analysis performed on lipidomics data is calculating the p-value of each feature from an ANOVA using the intensities from all groups. This gives an indication of how the variance between groups compares to the variance within groups, and a significant p-value indicates that there is some significant difference in the intensities for a given feature between the different groups.
###Code
# start an interactive session
main()
###Output
What would you like to do?
1. Make a new Dataset
2. Load a previous Dataset
> 2
Please enter the path to the pickle file you want to load.
> example.pickle
! INFO: Loaded existing Dataset from .pickle file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 3
Managing statistics... What would you like to do?
1. Compute Statistics
2. View Statistics
3. Export .csv File of Computed Statistics
"back" to go back
> 1
Computing statistics... What would you like to do?
1. Anova-P
2. PCA3
3. PLS-DA
4. Two Group Correlation
5. PLS-RA (using external continuous variable)
6. Two Group Log2(fold-change)
"back" to go back
> 1
Would you like to use normalized data? (y/N)
> y
Please enter group names to use in this analysis, separated by spaces
> 0641 geh sal wt
###Markdown
_* The above `RuntimeWarning` can be ignored in this case; it is caused by the presence of features that have all 0 intensities, which gives a within-group variance of 0 and therefore causes division by 0._ 3.2) Principal Components Analysis (All Groups) PCA is an untargeted analysis that gives an indication of the overall variation between samples, as well as the individual features that contribute to this variation. We will compute a 3-component PCA in order to assess the variance between groups in this dataset.
###Code
# start an interactive session
main()
###Output
What would you like to do?
1. Make a new Dataset
2. Load a previous Dataset
> 2
Please enter the path to the pickle file you want to load.
> example.pickle
! INFO: Loaded existing Dataset from .pickle file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 3
Managing statistics... What would you like to do?
1. Compute Statistics
2. View Statistics
3. Export .csv File of Computed Statistics
"back" to go back
> 1
Computing statistics... What would you like to do?
1. Anova-P
2. PCA3
3. PLS-DA
4. Two Group Correlation
5. PLS-RA (using external continuous variable)
6. Two Group Log2(fold-change)
"back" to go back
> 2
Would you like to use normalized data? (y/N)
> y
Please enter group names to use in this analysis, separated by spaces
> 0641 geh sal wt
! INFO: Applied new statistical analysis using groups: ['0641', 'geh', 'sal', 'wt']
Managing statistics... What would you like to do?
1. Compute Statistics
2. View Statistics
3. Export .csv File of Computed Statistics
"back" to go back
> back
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 8
Dataset(
csv="example_raw.csv",
esi_mode="pos",
samples=16,
features=3342,
identified=2063,
normalized=True,
rt_calibrated=False,
ext_var=False,
group_indices={
"0641": [0, 1, 2, 3]
"geh": [4, 5, 6, 7]
"sal": [8, 9, 10, 11]
"wt": [12, 13, 14, 15]
},
stats={
"ANOVA_0641-geh-sal-wt_normed"
"PCA3_0641-geh-sal-wt_loadings_normed"
"PCA3_0641-geh-sal-wt_projections_normed"
}
)
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 11
Saving Current Dataset to File... Please enter the full path and file name to save the Dataset under.
* .pickle file
* no spaces in path)
example: 'jang_ho/191120_bacterial_pos.pickle'
> example.pickle
! INFO: Dataset saved to file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> exit
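###Markdown
For intuition only, the "projections" and "loadings" stored here are the standard PCA quantities; a rough scikit-learn sketch on random stand-in data (purely illustrative, not the lipydomics internals) would look like this:
###Code
import numpy as np
from sklearn.decomposition import PCA

# random stand-in for a 16-sample x 3342-feature intensity matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 3342))

pca = PCA(n_components=3)
projections = pca.fit_transform(X)  # 16 x 3 scores, one row per sample
loadings = pca.components_          # 3 x 3342, one row per principal component
print(projections.shape, loadings.shape)
###Output
_____no_output_____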
###Markdown
Now we have computed the 3-component PCA, and we can see two new stats entries in our dataset: "PCA3_0641-geh-sal-wt_projections_normed" and "PCA3_0641-geh-sal-wt_loadings_normed". Now we can take a look at the projections in a plot.
###Code
# start an interactive session
main()
###Output
What would you like to do?
1. Make a new Dataset
2. Load a previous Dataset
> 2
Please enter the path to the pickle file you want to load.
> example.pickle
! INFO: Loaded existing Dataset from .pickle file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 4
Making Plots... What would you like to do?
1. Bar plot feature by group
2. Batch bar plot features by group
3. Scatter PCA3 Projections by group
4. Scatter PLS-DA Projections by group
5. S-Plot PLSA-DA and Pearson correlation by group
6. Scatter PLS-RA Projections by group
7. Heatmap of Log2(fold-change) by lipid class
"back" to go back
> 3
Where would you like to save the plot(s)? (default = current directory)
>
Which groups would you like to plot (separated by spaces)?
> 0641 geh sal wt
Would you like to use normalized data? (y/N)
> y
! INFO: Generated plot for groups: ['0641', 'geh', 'sal', 'wt']
Making Plots... What would you like to do?
1. Bar plot feature by group
2. Batch bar plot features by group
3. Scatter PCA3 Projections by group
4. Scatter PLS-DA Projections by group
5. S-Plot PLSA-DA and Pearson correlation by group
6. Scatter PLS-RA Projections by group
7. Heatmap of Log2(fold-change) by lipid class
"back" to go back
> back
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> exit
###Markdown
Now we can take a look at the plot (`PCA3_0641-geh-sal-wt_projections_normed.png`). It looks like `geh` and `wt` separate along PC1 while `sal` and `wt` separate along PC2, so these might be a couple of good pairwise comparisons to explore further. 3.3) PLS-DA and Correlation on `wt` and `geh` Partial least-squares discriminant analysis (PLS-DA) is an analysis that is similar to PCA, except it finds significant variance between two specified groups (_i.e._ it is a supervised analysis).
###Code
# start an interactive session
main()
###Output
What would you like to do?
1. Make a new Dataset
2. Load a previous Dataset
> 2
Please enter the path to the pickle file you want to load.
> example.pickle
! INFO: Loaded existing Dataset from .pickle file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 3
Managing statistics... What would you like to do?
1. Compute Statistics
2. View Statistics
3. Export .csv File of Computed Statistics
"back" to go back
> 1
Computing statistics... What would you like to do?
1. Anova-P
2. PCA3
3. PLS-DA
4. Two Group Correlation
5. PLS-RA (using external continuous variable)
6. Two Group Log2(fold-change)
"back" to go back
> 3
Would you like to use normalized data? (y/N)
> y
Please enter group names to use in this analysis, separated by spaces
> geh wt
! INFO: Applied new statistical analysis using groups: ['geh', 'wt']
Managing statistics... What would you like to do?
1. Compute Statistics
2. View Statistics
3. Export .csv File of Computed Statistics
"back" to go back
> back
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 8
Dataset(
csv="example_raw.csv",
esi_mode="pos",
samples=16,
features=3342,
identified=2063,
normalized=True,
rt_calibrated=False,
ext_var=False,
group_indices={
"0641": [0, 1, 2, 3]
"geh": [4, 5, 6, 7]
"sal": [8, 9, 10, 11]
"wt": [12, 13, 14, 15]
},
stats={
"ANOVA_0641-geh-sal-wt_normed"
"PCA3_0641-geh-sal-wt_loadings_normed"
"PCA3_0641-geh-sal-wt_projections_normed"
"PLS-DA_geh-wt_loadings_normed"
"PLS-DA_geh-wt_projections_normed"
}
)
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 11
Saving Current Dataset to File... Please enter the full path and file name to save the Dataset under.
* .pickle file
* no spaces in path)
example: 'jang_ho/191120_bacterial_pos.pickle'
> example.pickle
! INFO: Dataset saved to file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> exit
###Markdown
Now we have computed the PLS-DA, and we can see two new stats entries in our dataset: "PLS-DA_geh-wt_projections_normed" and "PLS-DA_geh-wt_loadings_normed". Now we can take a look at the projections in a plot.
###Code
# start an interactive session
main()
###Output
What would you like to do?
1. Make a new Dataset
2. Load a previous Dataset
> 2
Please enter the path to the pickle file you want to load.
> example.pickle
! INFO: Loaded existing Dataset from .pickle file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 4
Making Plots... What would you like to do?
1. Bar plot feature by group
2. Batch bar plot features by group
3. Scatter PCA3 Projections by group
4. Scatter PLS-DA Projections by group
5. S-Plot PLSA-DA and Pearson correlation by group
6. Scatter PLS-RA Projections by group
7. Heatmap of Log2(fold-change) by lipid class
"back" to go back
> 4
Where would you like to save the plot(s)? (default = current directory)
>
Which groups would you like to plot (separated by spaces)?
> geh wt
Would you like to use normalized data? (y/N)
> y
! INFO: Generated plot for groups: ['geh', 'wt']
Making Plots... What would you like to do?
1. Bar plot feature by group
2. Batch bar plot features by group
3. Scatter PCA3 Projections by group
4. Scatter PLS-DA Projections by group
5. S-Plot PLSA-DA and Pearson correlation by group
6. Scatter PLS-RA Projections by group
7. Heatmap of Log2(fold-change) by lipid class
"back" to go back
> back
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> exit
###Markdown
Now we can take a look at the plot (`PLS-DA_projections_geh-wt_normed.png`). As expected, `geh` and `wt` separate cleanly along component 1, corresponding to between-group differences. The spread along component 2, which reflects intra-group variance, is similar for both groups, indicating a comparable amount of within-group variance that is not correlated with group membership. A similar targeted analysis is the Pearson correlation coefficient between the two groups, which we need to calculate in order to produce an S-plot and tease out which lipid features are driving the separation between `geh` and `wt`.
###Code
# start an interactive session
main()
###Output
What would you like to do?
1. Make a new Dataset
2. Load a previous Dataset
> 2
Please enter the path to the pickle file you want to load.
> example.pickle
! INFO: Loaded existing Dataset from .pickle file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 3
Managing statistics... What would you like to do?
1. Compute Statistics
2. View Statistics
3. Export .csv File of Computed Statistics
"back" to go back
> 1
Computing statistics... What would you like to do?
1. Anova-P
2. PCA3
3. PLS-DA
4. Two Group Correlation
5. PLS-RA (using external continuous variable)
6. Two Group Log2(fold-change)
"back" to go back
> 4
Would you like to use normalized data? (y/N)
> y
Please enter group names to use in this analysis, separated by spaces
> geh wt
! INFO: Applied new statistical analysis using groups: ['geh', 'wt']
Managing statistics... What would you like to do?
1. Compute Statistics
2. View Statistics
3. Export .csv File of Computed Statistics
"back" to go back
> back
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 4
Making Plots... What would you like to do?
1. Bar plot feature by group
2. Batch bar plot features by group
3. Scatter PCA3 Projections by group
4. Scatter PLS-DA Projections by group
5. S-Plot PLSA-DA and Pearson correlation by group
6. Scatter PLS-RA Projections by group
7. Heatmap of Log2(fold-change) by lipid class
"back" to go back
> 5
Where would you like to save the plot(s)? (default = current directory)
>
Which groups would you like to plot (separated by spaces)?
> geh wt
Would you like to use normalized data? (y/N)
> y
! INFO: Generated plot for groups: ['geh', 'wt']
Making Plots... What would you like to do?
1. Bar plot feature by group
2. Batch bar plot features by group
3. Scatter PCA3 Projections by group
4. Scatter PLS-DA Projections by group
5. S-Plot PLSA-DA and Pearson correlation by group
6. Scatter PLS-RA Projections by group
7. Heatmap of Log2(fold-change) by lipid class
"back" to go back
> back
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 11
Saving Current Dataset to File... Please enter the full path and file name to save the Dataset under.
* .pickle file
* no spaces in path)
example: 'jang_ho/191120_bacterial_pos.pickle'
> example.pickle
! INFO: Dataset saved to file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> exit
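###Markdown
Before looking at the plot, here is a rough sketch of what an S-plot combines (again an illustration with scikit-learn and scipy on fake data, not the tool's implementation): for each feature, the covariance with the first PLS-DA score measures magnitude, and the Pearson correlation with that score measures reliability.
###Code
# Illustrative only: S-plot quantities computed on fake data.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import pearsonr
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(12, 200))                      # fake intensities
y = np.array([0] * 6 + [1] * 6)                     # fake group labels

t = PLSRegression(n_components=2).fit(X, y).transform(X)[:, 0]   # first-component scores
Xc = X - X.mean(axis=0)
cov = Xc.T @ (t - t.mean()) / (len(t) - 1)          # covariance of each feature with t
corr = np.array([pearsonr(X[:, j], t)[0] for j in range(X.shape[1])])

plt.scatter(cov, corr, s=10, c="gray")
plt.xlabel("cov(t, feature)")
plt.ylabel("Pearson corr(t, feature)")
plt.title("S-plot (illustration)")
plt.show()
###Output
_____no_output_____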
###Markdown
We can take a look at the plot that was generated (`S-Plot_geh-wt_normed.png`). There appear to be several lipid features that drive separation between `geh` and `wt`, as indicated by the points in the lower left (red) and upper right (blue) corners of the plot. The last step is to export the data and manually inspect these significant features. 4) Export Dataset to Spreadsheet We need to export our processed Dataset into a spreadsheet format so that we can more closely inspect the data and identify the lipid features that drive the separation that we identified between the `geh` and `wt` groups.
###Code
# start an interactive session
main()
###Output
What would you like to do?
1. Make a new Dataset
2. Load a previous Dataset
> 2
Please enter the path to the pickle file you want to load.
> example.pickle
! INFO: Loaded existing Dataset from .pickle file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 10
Exporting data... Where would you like to save the file?
example: 'jang_ho/results.xlsx'
"back" to go back
> example.xlsx
! INFO: Successfully exported dataset to Excel spreadsheet: example.xlsx.
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> exit
###Markdown
5) Examine Specific Lipids Manual inspection of the data has revealed a handful of individual lipid species that differ significantly between `geh` and `wt`:
| abundant in | m/z | retention time | CCS | putative id | id level |
| :---: | :---: | :---: | :---: | :--- | :--- |
| `geh` | 874.7869 | 0.43 | 320.3 | TG(52:3)_[M+NH4]+ | meas_mz_ccs |
| `geh` | 878.8154 | 0.62 | 322.7 | TG(52:1)_[M+NH4]+ | meas_mz_ccs |
| `geh` | 848.7709 | 0.40 | 313.3 | TG(50:2)_[M+NH4]+ | theo_mz_ccs |
| `geh` | 605.5523 | 0.86 | 267.7 | DG(36:1)_[M+H-H2O]+ | theo_mz_ccs |
| `geh` | 591.5378 | 0.93 | 263.9 | DG(35:1)_[M+H-H2O]+ | theo_mz_ccs |
| `wt` | 496.3423 | 4.15 | 229.8 | LPC(16:0)_[M+H]+ | meas_mz_ccs |
| `wt` | 524.3729 | 4.08 | 235.1 | LPC(18:0)_[M+H]+ | meas_mz_ccs |
| `wt` | 810.6031 | 3.46 | 295.3 | PC(36:1)_[M+Na]+ | meas_mz_ccs |
| `wt` | 782.5729 | 3.50 | 290.5 | PG(35:0)_[M+NH4]+ | theo_mz_ccs |
5.1) Generate Plots for Significant Lipid Features Now that we have identified some potentially significant lipid features, we need to generate some bar plots for comparison. To avoid clogging up our working directory, we will save the feature plots in the `features` directory. The m/z, retention time, and CCS values are all listed in `features.csv`, and we will use this to generate the bar plots all at once.
###Code
# start an interactive session
main()
###Output
What would you like to do?
1. Make a new Dataset
2. Load a previous Dataset
> 2
Please enter the path to the pickle file you want to load.
> example.pickle
! INFO: Loaded existing Dataset from .pickle file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 4
Making Plots... What would you like to do?
1. Bar plot feature by group
2. Batch bar plot features by group
3. Scatter PCA3 Projections by group
4. Scatter PLS-DA Projections by group
5. S-Plot PLSA-DA and Pearson correlation by group
6. Scatter PLS-RA Projections by group
7. Heatmap of Log2(fold-change) by lipid class
"back" to go back
> 2
Where would you like to save the plot(s)? (default = current directory)
> features
Which groups would you like to plot (separated by spaces)?
> geh wt
Would you like to use normalized data? (y/N)
> y
Please enter the path to the .csv file containing features to plot...
* example: 'plot_these_features.csv'
> features.csv
Please enter the search tolerances for m/z, retention time, and CCS
* separated by spaces
* example: '0.01 0.5 3.0'
* CCS tolerance is a percentage, not an absolute value
> 0.01 0.1 0.1
! INFO: Generated plot(s) for groups: ['geh', 'wt']
Making Plots... What would you like to do?
1. Bar plot feature by group
2. Batch bar plot features by group
3. Scatter PCA3 Projections by group
4. Scatter PLS-DA Projections by group
5. S-Plot PLSA-DA and Pearson correlation by group
6. Scatter PLS-RA Projections by group
7. Heatmap of Log2(fold-change) by lipid class
"back" to go back
> back
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> exit
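###Markdown
The tolerances entered above ('0.01 0.1 0.1') are applied per feature: m/z and retention time are matched within absolute windows, while the CCS tolerance is a percentage of the target CCS. The sketch below is our own reimplementation of that matching rule for illustration only; the tool's internal code and column names may differ.
###Code
# Sketch of tolerance-based feature matching (illustration only).
import pandas as pd

def match_feature(df, mz, rt, ccs, mz_tol=0.01, rt_tol=0.1, ccs_tol_pct=0.1):
    """Return rows of df whose (mz, rt, ccs) fall within the given tolerances."""
    keep = (
        ((df["mz"] - mz).abs() <= mz_tol)
        & ((df["rt"] - rt).abs() <= rt_tol)
        & ((df["ccs"] - ccs).abs() <= ccs * ccs_tol_pct / 100.0)   # percent tolerance
    )
    return df[keep]

# toy stand-in for the dataset's feature table
feats = pd.DataFrame({"mz": [874.7869, 874.9500], "rt": [0.43, 0.45], "ccs": [320.3, 330.0]})
print(match_feature(feats, mz=874.7869, rt=0.43, ccs=320.3))
###Output
_____no_output_____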
###Markdown
Now we can look at all of the plots that have been generated in the `features/` directory. __Abundant in `geh`__ __Abundant in `wt`__ 5.2) Generate a Heatmap of TGs There seems to be an upregulation of TGs in `geh` relative to `wt`, so it might be nice to see if there are any large-scale trends among TGs as a lipid class between these groups. In order to make this comparison, we will need to compute another statistic: the Log2(fold-change) between the two groups.
###Code
main()
###Output
What would you like to do?
1. Make a new Dataset
2. Load a previous Dataset
> 2
Please enter the path to the pickle file you want to load.
> example.pickle
! INFO: Loaded existing Dataset from .pickle file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 3
Managing statistics... What would you like to do?
1. Compute Statistics
2. View Statistics
3. Export .csv File of Computed Statistics
"back" to go back
> 1
Computing statistics... What would you like to do?
1. Anova-P
2. PCA3
3. PLS-DA
4. Two Group Correlation
5. PLS-RA (using external continuous variable)
6. Two Group Log2(fold-change)
"back" to go back
> 6
Would you like to use normalized data? (y/N)
> y
Please enter group names to use in this analysis, separated by spaces
> geh wt
! INFO: Applied new statistical analysis using groups: ['geh', 'wt']
Managing statistics... What would you like to do?
1. Compute Statistics
2. View Statistics
3. Export .csv File of Computed Statistics
"back" to go back
> back
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 4
Making Plots... What would you like to do?
1. Bar plot feature by group
2. Batch bar plot features by group
3. Scatter PCA3 Projections by group
4. Scatter PLS-DA Projections by group
5. S-Plot PLS-DA and Pearson correlation by group
6. Scatter PLS-RA Projections by group
7. Heatmap of Log2(fold-change) by lipid class
"back" to go back
> 7
Where would you like to save the plot(s)? (default = current directory)
>
Which groups would you like to plot (separated by spaces)?
> geh wt
Would you like to use normalized data? (y/N)
> y
Please enter the lipid class you would like to generate a heatmap with
> TG
! INFO: Generated heatmap for lipid class: TG
Making Plots... What would you like to do?
1. Bar plot feature by group
2. Batch bar plot features by group
3. Scatter PCA3 Projections by group
4. Scatter PLS-DA Projections by group
5. S-Plot PLS-DA and Pearson correlation by group
6. Scatter PLS-RA Projections by group
7. Heatmap of Log2(fold-change) by lipid class
"back" to go back
> back
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> 11
Saving Current Dataset to File... Please enter the full path and file name to save the Dataset under.
* .pickle file
* no spaces in path)
example: 'jang_ho/191120_bacterial_pos.pickle'
> example.pickle
! INFO: Dataset saved to file: "example.pickle"
What would you like to do with this Dataset?
1. Manage Groups
2. Filter Data
3. Manage Statistics
4. Make Plots
5. Lipid Identification
6. Normalize Intensities
7. Calibrate Retention Time
8. Overview of Dataset
9. Batch Feature Selection
10. Export Current Dataset to Spreadsheet
11. Save Current Dataset to File
"exit" to quit the interface
> exit
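###Markdown
For reference, the heatmap generated above is built from a simple statistic: for each TG feature, the log2 of the ratio of mean normalized intensities between the two groups. The cell below sketches that calculation and a one-column heatmap on fake data; it is an illustration only, not the tool's plotting code.
###Code
# Illustrative log2(fold-change) heatmap for one lipid class, on fake data.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
tg_features = ["TG(%d:%d)" % (50 + i, i % 4) for i in range(10)]      # fake TG labels
geh = pd.DataFrame(rng.lognormal(mean=3.0, size=(10, 6)), index=tg_features)
wt = pd.DataFrame(rng.lognormal(mean=2.0, size=(10, 6)), index=tg_features)

log2fc = np.log2(geh.mean(axis=1) / wt.mean(axis=1))

fig, ax = plt.subplots(figsize=(2.5, 4))
im = ax.imshow(log2fc.values.reshape(-1, 1), cmap="RdBu_r", vmin=-3, vmax=3, aspect="auto")
ax.set_yticks(range(len(tg_features)))
ax.set_yticklabels(tg_features, fontsize=7)
ax.set_xticks([])
fig.colorbar(im, ax=ax, label="log2(group1 / group2)")
plt.show()
###Output
_____no_output_____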
|
UTKFace_Classifier.ipynb | ###Markdown
UTKFace Data
###Code
# Imports for the cells below; the `show` image helper used later is assumed to be defined elsewhere in the project.
import torch
from torch import optim
from torchvision import transforms
from tqdm import tqdm
import matplotlib.pyplot as plt

batch_size = 128
from data import UTKFace
dataset = UTKFace(transform=transforms.Compose([
                        transforms.Grayscale(),
                        transforms.Resize((256, 256)),
                        transforms.ToTensor()]),
                 label='gender'
)
###Output
_____no_output_____
###Markdown
Distribution is about 50-50 between male and female
###Code
%%time
num_male = 0
num_female = 0
data_loader = torch.utils.data.DataLoader(dataset, num_workers=4, batch_size=128)
for _, genders in tqdm(data_loader):
    batch_counts = genders.sum(dim=0)  # sum the one-hot gender labels over the batch (the notebook assumes index 0 = female, index 1 = male)
    num_female += batch_counts[0].item()
    num_male += batch_counts[1].item()
plt.bar(['male', 'female'], [num_male, num_female])
###Output
_____no_output_____
###Markdown
idcs = np.arange(len(dataset))
np.random.seed(0)  # This is important to split the same way every single time
np.random.shuffle(idcs)
split_idx = int(0.9 * len(dataset))
train_idcs = idcs[:split_idx]
test_idcs = idcs[split_idx:]
train_dataset = torch.utils.data.Subset(dataset, train_idcs)
test_dataset = torch.utils.data.Subset(dataset, test_idcs)
###Code
### Toggle cell below for sample debugging
###Output
_____no_output_____
###Markdown
train_dataset = torch.utils.data.Subset(train_dataset, range(512))
test_dataset = torch.utils.data.Subset(test_dataset, range(512))
###Code
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=batch_size, shuffle=True, num_workers=4)
test_loader = torch.utils.data.DataLoader(test_dataset,
batch_size=batch_size, shuffle=True, num_workers=4)
a_face, gender = dataset[1]
gender = dataset.classes[torch.argmax(gender,dim=0).item()]
show(a_face, gender)
###Output
_____no_output_____
###Markdown
Classifier Model
###Code
from models import Classifier
device = torch.device("cuda")
model = Classifier(2).to(device)
optimizer = optim.Adam(model.parameters(), lr=1e-6)
import torch.nn.functional as F
def train(epoch):
model.train()
train_loss = 0
total_corr = 0
for data, genders in tqdm(train_loader):
genders = genders.to(device)
data = data.to(device)
optimizer.zero_grad()
pred = model(data)
gender_idcs = torch.argmax(genders, dim=1)
pred_idcs = torch.argmax(pred, dim=1)
num_corr = torch.eq(gender_idcs, pred_idcs).sum()
total_corr += num_corr
loss = F.binary_cross_entropy(pred, genders)
loss.backward()
train_loss += loss.item()
optimizer.step()
acc = total_corr.float() / len(train_loader.dataset)
print('====> Epoch: {} acc: {:.4f} avg loss: {:.4f}'.format(
epoch, acc, train_loss / len(train_loader.dataset)))
def test(epoch):
model.eval()
test_loss = 0
total_corr = 0
with torch.no_grad():
for data, genders in tqdm(test_loader):
genders = genders.to(device)
data = data.to(device)
pred = model(data)
gender_idcs = torch.argmax(genders, dim=1)
pred_idcs = torch.argmax(pred, dim=1)
num_corr = torch.eq(gender_idcs, pred_idcs).sum()
total_corr += num_corr
test_loss += F.binary_cross_entropy(pred, genders).item()
test_loss /= len(test_loader.dataset)
acc = total_corr.float() / len(test_loader.dataset)
print('====> Test acc: {:.4f}, loss: {:.4f}'.format(acc, test_loss))
###Output
_____no_output_____
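###Markdown
The `Classifier` class itself lives in the project's `models.py` and is not shown in this notebook. Purely as an illustration of the interface the training loop above expects (grayscale 256x256 input, per-class probabilities out, so that `F.binary_cross_entropy` applies directly), a minimal stand-in could look like the sketch below; this is not the repository's actual model.
###Code
# Illustration only: NOT the repository's models.Classifier.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 256 -> 128
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 128 -> 64
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.softmax(self.head(h), dim=1)   # probabilities, compatible with BCE loss

print(TinyClassifier()(torch.zeros(1, 1, 256, 256)).shape)  # torch.Size([1, 2])
###Output
_____no_output_____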
###Markdown
Load
###Code
weight_path = 'weights/{}_{}.pt'.format(model.__class__.__name__, dataset.__class__.__name__)
weight_path
import os
if os.path.exists(weight_path):
model.load_state_dict(torch.load(weight_path))
###Output
_____no_output_____
###Markdown
Train
###Code
%%time
epochs = 10  # assumed value; `epochs` is not defined elsewhere in this notebook
for epoch in range(1, epochs + 1):
train(epoch)
test(epoch)
###Output
100%|██████████| 167/167 [00:43<00:00, 4.13it/s]
0%| | 0/19 [00:00<?, ?it/s]
###Markdown
Save
###Code
torch.save(model.state_dict(), weight_path)
###Output
_____no_output_____
###Markdown
Try it
###Code
import os
if os.path.exists(weight_path):
model.load_state_dict(torch.load(weight_path))
model.eval()
for idx in torch.utils.data.RandomSampler(test_dataset, num_samples=10, replacement=True):
a_face, gender = test_dataset[idx]
a_face = a_face.to(device)
a_face = a_face.unsqueeze(0)
pred = model(a_face)
pred_idx = torch.argmax(pred, dim=1)
gender_idx = torch.argmax(gender, dim=0)
pred_class = dataset.classes[pred_idx]
gender_class = dataset.classes[gender_idx]
show(a_face.cpu().view(1,256,256), 'True: {} Pred: {}'.format(gender_class, pred_class))
###Output
_____no_output_____ |
epytope/tutorials/CleavageAndTAPPrediction.ipynb | ###Markdown
Cleavage Site and TAP Prediction This tutorial illustrates the use of epytope to predict the steps of the HLA-I antigen processing pathway including proteasomal cleavage and TAP transport. epytope offers a long list of prediction methods and was designed in such a way that extending epytope with your favorite method is easy.This tutorial will entail:- Simple cleavage site/fragment prediction from a list of peptide sequences and protein sequences- Simple TAP prediction methods- Consensus prediction of proteasomal cleavage, TAP, and HLA binding to model the complete antigen processing pathway
###Code
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from epytope.Core import Protein, Peptide, Allele
from epytope.Core import generate_peptides_from_proteins
from epytope.IO import read_fasta
from epytope.CleavagePrediction import CleavageSitePredictorFactory, CleavageFragmentPredictorFactory
from epytope.TAPPrediction import TAPPredictorFactory
###Output
_____no_output_____
###Markdown
Chapter 1: Cleavage Prediction epytope offers a comprehensive list of state-of-the-art proteasomal cleavage prediction methods. Cleavage methods are usually divided into cleavage site and cleavage fragment prediction. Cleavage site prediction methods predict each possible cleavage site within a given amino acid sequence, whereas cleavage fragment prediction methods predict the likelihood that a peptide fragment is a product of proteasomal cleavage. Additionally, the sources of the training data are often quite different. The majority of prediction tools were trained on in vitro data of up to three fully analyzed proteins and distinguish between constitutive and immunoproteasomal cleavage. Some prediction methods use natural HLA ligands as training data, as these are products of the antigen processing pathway and therefore also of cleavage events. But all methods start with reading in protein sequences. epytope offers several ways of defining Proteins. We can either directly initialize an `epytope.Core.Protein` object by specifying an amino acid sequence and optionally a progenitor gene and transcript id, as well as the progenitor `epytope.Core.Transcript` object, or we can directly read in proteins from FASTA files by using `epytope.IO.read_fasta`.
###Code
protein = Protein("AAAAAAAAAAA", gene_id="Dummy", transcript_id="Dummy")
proteins = read_fasta("./data/proteins.fasta", id_position=3, in_type=Protein)
proteins
###Output
_____no_output_____
###Markdown
Once we have a protein sequence to work with, we can specify the cleavage site prediction method of our choice. epytope offers one entry point for each type of prediction method via so-called factories. For cleavage site prediction it is `CleavageSitePredictorFactory`. To get an overview of which prediction methods are currently implemented, we can use `CleavageSitePredictorFactory` as follows:
###Code
for name,version in CleavageSitePredictorFactory.available_methods().items():
print(name, ",".join(version))
###Output
pcm 1.0
proteasmm_c 1.0
proteasmm_i 1.0
netchop 3.1
###Markdown
Let's select `PCM`, for example, and make predictions:
###Code
pcm = CleavageSitePredictorFactory("PCM")
site_result = pcm.predict(proteins)
site_result
###Output
_____no_output_____
###Markdown
To specify a particular version of a prediction method, we can use the flag `version=""` when calling the PredictorFactories. If we do not specify any version, epytope will initialize the most recent version that is supported.
###Code
pcm = CleavageSitePredictorFactory("PCM", version="1.0")
site_result = pcm.predict(proteins)
site_result.head()
###Output
_____no_output_____
###Markdown
External tools like `NetChop` offer two additional flags when calling `.predict()`: `command="/path/to/binary"` and `options="command options"`. `command=""` specifies the path to an alternative binary that should be used instead of the one that is globally registered. With `options` you can specify additional arguments that are passed directly to the command line call without any sanity checks. For CleavageFragment prediction we first have to generate peptides from protein sequences with `epytope.Core.generate_peptides_from_proteins`. epytope currently supports only one CleavageFragment prediction method, proposed by Ginodi et al. (Ginodi et al. (2008) Bioinformatics 24(4)), which supports only 11mers (9mer epitopes and two flanking amino acids).
###Code
pep = generate_peptides_from_proteins(proteins, 11)
CleavageFragmentPredictorFactory("Ginodi").predict(pep).head()
###Output
_____no_output_____
###Markdown
The result object is based on pandas' `DataFrame`, thus all possibilities of manipulating the results pandas offers are possible, including rudimentary plotting capabilities.
###Code
import matplotlib.pyplot as plt
f, a = plt.subplots(len(site_result.index.levels[0]),1)
for i,r in enumerate(site_result.index.levels[0]):
site_result.xs(r).plot(kind='bar', ax=a[i]).set_xticklabels(site_result.loc[(r,slice(None)),"Seq"], rotation=0)
###Output
_____no_output_____
###Markdown
We can also combine several prediction results of the same type via `CleavageSitePredictionResults.merge_results` (note that this function returns a merged result DataFrame):
###Code
import pandas as pd
import numpy
site_result2 = CleavageSitePredictorFactory("proteasmm_c").predict(proteins)
merged_result = site_result.merge_results([site_result2])
merged_result.head(7)
###Output
_____no_output_____
###Markdown
We can also filter the results based on multiple expressions with `CleavageSitePredictionResults.filter_result`.
###Code
comp = lambda x,y: x > y
expressions=[("pcm",comp,0)]
merged_result.filter_result(expressions)
###Output
_____no_output_____
###Markdown
Chapter 2: TAP prediction epytope offers only limited prediction methods for TAP prediction, due to lack of publicly available methods.
###Code
for name,version in TAPPredictorFactory.available_methods().items():
print(name, ",".join(version))
###Output
doytchinova 1.0
smmtap 1.0
###Markdown
For TAP prediction, we first have to generate peptides. Let's take the proteins we already imported and generate 9mers, as these two methods only support 9mer peptides.
###Code
pep = list(generate_peptides_from_proteins(proteins,9))
tap_result = TAPPredictorFactory("smmtap").predict(pep[:15])
tap_result.head()
###Output
_____no_output_____
###Markdown
Again we can do all rudimentary operations on the result object as with the cleavage result objects, including merging and filtering.
###Code
tap_result2 = TAPPredictorFactory("Doytchinova").predict(pep[:15])
tap_result.merge_results(tap_result2)
from operator import ge
tap_result.filter_result([("smmtap",ge, -1)])
###Output
_____no_output_____
###Markdown
Chapter 3: Consensus prediction for natural ligand prediction Proteasomal cleavage, TAP prediction, as well as HLA binding prediction have been combined to increase the specificity of predicting naturally processed HLA ligands. One example is `WAPP` (Dönnes et al. (2005). Protein Science 14(8)), which uses proteasomal cleavage and TAP prediction methods to filter for possibly processed peptides. The same approach can be implemented with epytope. We start again with the two protein sequences and exemplify the workflows for CleavageFragmentPrediction and CleavageSitePrediction methods. Antigen processing prediction with CleavageFragment prediction We will use `PSSMGinodi`, `SVMTAP`, and `UniTope` for prediction for HLA-A*02:01. For `PSSMGinodi` and `SVMTAP` we use thresholds of -15 and -30, respectively.
###Code
from operator import ge
from epytope.Core import Allele
from epytope.EpitopePrediction import EpitopePredictorFactory
allele = Allele("HLA-A*02:01")
pep = list(generate_peptides_from_proteins(proteins,11))
print("Number of peptides: ", len(pep))
#cleavage prediction and filtering
df_cl = CleavageFragmentPredictorFactory("Ginodi").predict(pep).filter_result(("ginodi",ge,-15))
print("Number of peptides after proteasomal cleavage: ", len(df_cl))
#tap prediction and filtering
df_tap = TAPPredictorFactory("smmtap").predict(df_cl.index).filter_result(("smmtap",ge,1))
print("Number of peptides after TAP transport: ", len(df_tap))
#epitope prediction and filtering
df_epi = EpitopePredictorFactory("smm").predict(df_tap.index,alleles=allele)
df_epi
###Output
Number of peptides: 50
Number of peptides after proteasomal cleavage: 31
Number of peptides after TAP transport: 13
###Markdown
Based on this analysis, there are no natural ligands predicted for the two test proteins. Antigen processing prediction with CleavageSite prediction We will use `PCM`, `SVMTAP`, and `SVMHC` for prediction for HLA-A*02:01 like in the original work of WAPP. For `PCM` and `SVMTAP` we use a threshold of −4.8 and −27 respectively.
###Code
from operator import ge
from epytope.Core import Allele,Protein
from epytope.EpitopePrediction import EpitopePredictorFactory
allele = Allele("HLA-A*02:01")
#cleavage prediction and filtering
df_cl = CleavageSitePredictorFactory("PCM").predict(proteins).filter_result(("pcm",ge,-4.8))
print("Number of peptides after proteasomal cleavage: ", len(df_cl))
#since we only predicted possible cleavage site, we now have to generate all possible peptides
#peptides
pep_dic = {}
for p in proteins:
for i in df_cl.loc[(p.transcript_id,slice(None)),:].index.codes[1]:
if i-8>=0:
seq = str(p[i-8:i+1])
pep_dic.setdefault(seq, []).append(p)
peps = [Peptide(seq, protein_pos={pp:[0] for pp in p}) for seq, p in pep_dic.items()]
#tap prediction and filtering
df_tap = TAPPredictorFactory("smmtap").predict(peps).filter_result(("smmtap",ge,1))
print("Number of peptides after TAP transport: ", len(df_tap))
#epitope prediction and filtering
df_epi = EpitopePredictorFactory("smm").predict(df_tap.index,alleles=allele).filter_result(("smm",ge,100000.0))
df_epi
###Output
Number of peptides after proteasomal cleavage: 62
Number of peptides after TAP transport: 17
###Markdown
Cleavage Site and TAP Prediction This tutorial illustrates the use of epytope to predict the steps of the HLA-I antigen processing pathway including proteasomal cleavage and TAP transport. epytope offers a long list of prediction methods and was designed in such a way that extending epytope with your favorite method is easy.This tutorial will entail:- Simple cleavage site/fragment prediction from a list of peptide sequences and protein sequences- Simple TAP prediction methods- Consensus prediction of proteasomal cleavage, TAP, and HLA binding to model the complete antigen processing pathway
###Code
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from epytope.Core import Protein, Peptide, Allele
from epytope.Core import generate_peptides_from_proteins
from epytope.IO import read_fasta
from epytope.CleavagePrediction import CleavageSitePredictorFactory, CleavageFragmentPredictorFactory
from epytope.TAPPrediction import TAPPredictorFactory
###Output
_____no_output_____
###Markdown
Chapter 1: Cleavage Prediction epytope offers a comprehensive list of state-of-the-art proteasomal cleavage prediction methods. Cleavage methods are usually divided into cleavage site and cleavage fragment prediction. Cleavage site prediction methods predict each possible cleavage site within a given amino acid sequence, whereas cleavage fragment prediction methods predict the likelihood that a peptide fragment is a product of proteasomal cleavage. Additionally, the sources of the training data are often quite different. The majority of prediction tools were trained on in vitro data of up to three fully analyzed proteins and distinguish between constitutive and immunoproteasomal cleavage. Some prediction methods use natural HLA ligands as training data, as these are products of the antigen processing pathway and therefore also of cleavage events. But all methods start with reading in protein sequences. epytope offers several ways of defining Proteins. We can either directly initialize an `epytope.Core.Protein` object by specifying an amino acid sequence and optionally a progenitor gene and transcript id, as well as the progenitor `epytope.Core.Transcript` object, or we can directly read in proteins from FASTA files by using `epytope.IO.read_fasta`.
###Code
protein = Protein("AAAAAAAAAAA", gene_id="Dummy", transcript_id="Dummy")
proteins = read_fasta("./data/proteins.fasta", id_position=3, in_type=Protein)
protein
###Output
_____no_output_____
###Markdown
Once we have a protein sequence to work with, we can specify the cleavage site prediction method of our choice. epytope offers one entry point for each type of prediction method via so-called factories. For cleavage site prediction it is `CleavageSitePredictorFactory`. To get an overview of which prediction methods are currently implemented, we can use `CleavageSitePredictorFactory` as follows:
###Code
for name,version in CleavageSitePredictorFactory.available_methods().items():
print(name, ",".join(version))
###Output
pcm 1.0
proteasmm_c 1.0
proteasmm_i 1.0
netchop 3.1
###Markdown
Let's select `PCM`, for example, and make predictions:
###Code
pcm = CleavageSitePredictorFactory("PCM")
site_result = pcm.predict(proteins)
site_result.head()
###Output
_____no_output_____
###Markdown
To specify a particular version of a prediction method, we can use the flag `version=""` when calling the PredictorFactories. If we do not specify any version, epytope will initialize the most recent version that is supported.
###Code
pcm = CleavageSitePredictorFactory("PCM", version="1.0")
site_result = pcm.predict(proteins)
site_result.head()
###Output
_____no_output_____
###Markdown
External tools like `NetChop` offer two additional flags when calling `.predict()`: `command="/path/to/binary"` and `options="command options"`. `command=""` specifies the path to an alternative binary that should be used instead of the one that is globally registered. With `options` you can specify additional arguments that are passed directly to the command line call without any sanity checks. For CleavageFragment prediction we first have to generate peptides from protein sequences with `epytope.Core.generate_peptides_from_proteins`. epytope currently supports only one CleavageFragment prediction method, proposed by Ginodi et al. (Ginodi et al. (2008) Bioinformatics 24(4)), which supports only 11mers (9mer epitopes and two flanking amino acids).
###Code
pep = generate_peptides_from_proteins(proteins, 11)
CleavageFragmentPredictorFactory("Ginodi").predict(pep).head()
###Output
_____no_output_____
###Markdown
The result object is based on pandas' `DataFrame`, thus all possibilities of manipulating the results pandas offers are possible, including rudimentary plotting capabilities.
###Code
import matplotlib.pyplot as plt
f, a = plt.subplots(len(site_result.index.levels[0]),1)
for i,r in enumerate(site_result.index.levels[0]):
site_result.xs(r).plot(kind='bar', ax=a[i]).set_xticklabels(site_result.loc[(r,slice(None)),"Seq"], rotation=0)
###Output
_____no_output_____
###Markdown
We can also combine several prediction results of the same type via `CleavageSitePredictionResults.merge_results` (note that this function returns a merged result DataFrame):
###Code
import pandas as pd
import numpy
site_result2 = CleavageSitePredictorFactory("proteasmm_c").predict(proteins)
merged_result = site_result.merge_results([site_result2])
merged_result.head(7)
###Output
_____no_output_____
###Markdown
We can also filter the results based on multiple expressions with `CleavageSitePredictionResults.filter_result`.
###Code
comp = lambda x,y: x > y
expressions=[("pcm",comp,0)]
merged_result.filter_result(expressions)
###Output
_____no_output_____
###Markdown
Chapter 2: TAP prediction epytope offers only limited prediction methods for TAP prediction, due to lack of publicly available methods.
###Code
for name,version in TAPPredictorFactory.available_methods().items():
print(name, ",".join(version))
###Output
doytchinova 1.0
smmtap 1.0
###Markdown
For TAP prediction, we first have to generate peptides. Let's take the proteins we already imported and generate 9mers, as these two methods only support 9mer peptides.
###Code
pep = list(generate_peptides_from_proteins(proteins,9))
tap_result = TAPPredictorFactory("smmtap").predict(pep[:15])
tap_result.head()
###Output
_____no_output_____
###Markdown
Again we can do all rudimentary operations on the result object as with the cleavage result objects, including merging and filtering.
###Code
tap_result2 = TAPPredictorFactory("Doytchinova").predict(pep[:15])
tap_result.merge_results(tap_result2).head()
from operator import ge
tap_result.filter_result([("smmtap",ge, -30)])
###Output
_____no_output_____
###Markdown
Chapter 3: Consensus prediction for natural ligand prediction Proteasomal cleavage, TAP prediction, as well as HLA binding prediction have been combined to increase the specificity of predicting naturally processed HLA ligands. One example is `WAPP` (Dönnes et al. (2005). Protein Science 14(8)), which uses proteasomal cleavage and TAP prediction methods to filter for possibly processed peptides. The same approach can be implemented with epytope. We start again with the two protein sequences and exemplify the workflows for CleavageFragmentPrediction and CleavageSitePrediction methods. Antigen processing prediction with CleavageFragment prediction We will use `PSSMGinodi`, `SVMTAP`, and `UniTope` for prediction for HLA-A*02:01. For `PSSMGinodi` and `SVMTAP` we use thresholds of -15 and -30, respectively.
###Code
from operator import ge
from epytope.Core import Allele
from epytope.EpitopePrediction import EpitopePredictorFactory
allele = Allele("HLA-A*02:01")
pep = list(generate_peptides_from_proteins(proteins,11))
print("Number of peptides: ", len(pep))
#cleavage prediction and filtering
df_cl = CleavageFragmentPredictorFactory("Ginodi").predict(pep).filter_result(("ginodi",ge,-15))
print("Number of peptides after proteasomal cleavage: ", len(df_cl))
#tap prediction and filtering
df_tap = TAPPredictorFactory("smmtap").predict(df_cl.index).filter_result(("smmtap",ge,-30))
print("Number of peptides after TAP transport: ", len(df_tap))
#epitope prediction and filtering
#df_epi = EpitopePredictorFactory("UniTope").predict(df_tap.index,alleles=allele)
#df_epi
###Output
Number of peptides: 50
Number of peptides after proteasomal cleavage: 31
Number of peptides after TAP transport: 31
###Markdown
Based on this analysis, there are no natural ligands predicted for the two test proteins. Antigen processing prediction with CleavageSite prediction We will use `PCM`, `SVMTAP`, and `SVMHC` for prediction for HLA-A*02:01 like in the original work of WAPP. For `PCM` and `SVMTAP` we use a threshold of −4.8 and −27 respectively.
###Code
from operator import ge
from epytope.Core import Allele,Protein
from epytope.EpitopePrediction import EpitopePredictorFactory
allele = Allele("HLA-A*02:01")
#cleavage prediction and filtering
df_cl = CleavageSitePredictorFactory("PCM").predict(proteins).filter_result(("pcm",ge,-4.8))
print("Number of peptides after proteasomal cleavage: ", len(df_cl))
#since we only predicted possible cleavage site, we now have to generate all possible peptides
#peptides
pep_dic = {}
for p in proteins:
for i in df_cl.loc[(p.transcript_id,slice(None)),:].index.codes[1]:
if i-8>=0:
seq = str(p[i-8:i+1])
pep_dic.setdefault(seq, []).append(p)
peps = [Peptide(seq, protein_pos={pp:[0] for pp in p}) for seq, p in pep_dic.items()]
#tap prediction and filtering
df_tap = TAPPredictorFactory("smmtap").predict(peps).filter_result(("smmtap",ge,-27))
print("Number of peptides after TAP transport: ", len(df_tap))
#epitope prediction and filtering
df_epi = EpitopePredictorFactory("smm").predict(df_tap.index,alleles=allele).filter_result(("smm",ge,-1.0))
df_epi
###Output
Number of peptides after proteasomal cleavage: 62
Number of peptides after TAP transport: 47
|
00 Readme.ipynb | ###Markdown
Quantum-inspired algorithms for numerical analysis In this folder you will find the code that I used when writing the work [Quantum-inspired algorithms for multivariate analysis: from interpolation to partial differential equations](https://arxiv.org/abs/1909.06619). Part of this code is self-contained, implementing simple algorithms, such as the computation of entanglement measures or estimating interpolation errors; other parts depend on the [SeeMPS library](https://github.com/juanjosegarciaripoll/seemps). This notebook takes care of preparing the folder to host that and other libraries. The list of files is as follows- [00 Readme.ipynb](00%20Readme.ipynb). This file you are reading.- [01 Exact samplings.ipynb](01%20Exact%20samplings.ipynb). Tools to encode discretized continuous functions into quantum states. Various 1D, 2D and 3D functions used throughout the work. Estimates of the entanglement in those encodings.- [02 MPS discussion.ipynb](02%20MPS%20discussion.ipynb). Algorithms for encoding continuous functions using MPS states. Analysis of this representation for squeezed Gaussian states in 2D and 3D.- [03 MPS Finite differences.ipynb](03%20MPS%20Finite%20differences.ipynb). As the name indicates, discussion of the encoding of differential operators and solution of Fokker-Planck equations using MPS.- [04 MPS Fourier methods.ipynb](04%20MPS%20Fourier%20methods.ipynb). Discussion of the implementation of Quantum Fourier Transforms using MPS, its use for interpolation and for solving PDEs.- [05 Plots.ipynb](05%20Plots.ipynb). Code to process the data computed by other notebooks into publication-ready plots. This repository is organized with the following structure- `./` The notebooks mentioned above and some helper scripts- `seeq` Automatically downloaded library- `data` Pickle files generated by the simulations of wavefunctions- `data-mps` Pickle files generated by the simulations of MPS states- `figures` Output directory for plots- `jobs` Jobs that we send to our cluster when simulations are particularly heavy This code download some required libraries
###Code
import os.path
if not os.path.exists('seemps'):
!git clone http://github.com/juanjosegarciaripoll/seemps
!cd seemps && make.cmd all
for d in ['data','mps-data','figures']:
if not os.path.exists(d):
os.mkdir(d)
###Output
_____no_output_____
###Markdown
The following is a Windows script used for the following tasks:
1. `make all` extracts Python files from all notebooks. These files are used as tiny libraries in other parts, or they contain jobs that we submit to the cluster.
2. `make clean` eliminates files that can be recreated easily. This includes scripts, libraries, and data files.
3. `make cleanup` eliminates the output from all Jupyter notebooks. Only use it if you plan to run them all, which takes a lot of time.
###Code
%%writefile make.cmd
@echo off
if "%1" == "all" (
python -c "import exportnb; import glob; exportnb.export_notebooks(glob.glob('*.ipynb'),verbose=True); quit()"
)
if "%1" == "clean" (
rmdir /S /Q seemps data
del /Q core_mps.py core.py mpi*.py job*.py
)
if "%1" == "cleanup" (
    for %%i in (*.ipynb) do jupyter nbconvert --ClearOutputPreprocessor.enabled=True --inplace "%%i"
)
!make.cmd all
###Output
_____no_output_____
###Markdown
Tools This code is used to manage our MPI jobs. It is a stupid class that spreads a number of simulations over various processors that have been coordinated using `mpirun`.
###Code
# file: mpijobs.py
from mpi4py import MPI
import time
import pickle
import io
from contextlib import redirect_stdout
with io.StringIO() as buf, redirect_stdout(buf):
print('redirected')
output = buf.getvalue()
class Manager(object):
def __init__(self, root=0, debug=False, root_computes=False):
self.comm = MPI.COMM_WORLD
self.inode = MPI.Get_processor_name() # Node where this MPI process runs
self.size = self.comm.Get_size() # Size of communicator
self.rank = self.comm.Get_rank() # Ranks in communicator
self.root = root
self.isroot = self.rank == root
self.root_computes = root_computes
self.debug = debug
if debug:
name = 'master' if self.isroot else 'slave'
print(f'Running {self.inode} [{name}], rank={self.rank} out of {self.size}')
def partition(self, data):
if self.root_computes:
data = [data[i::self.size] for i in range(self.size)]
else:
data = [data[i::self.size-1] for i in range(self.size-1)]
data = data[0:self.root] + [[]] + data[self.root:]
return data
def run(self, job_description, file=None):
job_description = list(enumerate(job_description))
if self.isroot:
data = self.partition(job_description)
else:
data = None
data = self.comm.scatter(data, root=self.root)
if self.debug:
print(f'** Node {self.rank} received {len(data)} items {[order for order,_ in data]}')
data = [self.run_one(pair) for pair in data]
if self.debug:
print(f'** Node {self.rank} computed {len(data)} items')
data = self.comm.gather(data, root=self.root)
if self.isroot:
data = sorted(sum(data, []), key=lambda x: x[0])
if self.debug and self.isroot:
print(f'** Root {self.rank} gathered {len(data)} items (expected {len(job_description)})')
if file is not None:
try:
clean = [value for order, time, value, text_output in data]
with open(file, 'ab') as f:
pickle.dump((clean, data), f)
if self.debug:
print(f'Master node {self.rank} saved data in {file}')
except:
print(f'Unable to save data. Aborting')
if self.debug:
for order, time, _, text_output in data:
print(f'-----\nJob {order} output:')
print(text_output)
print(f'Ran in {time}s')
def run_one(self, pair):
order, job_item = pair
t = time.process_time()
output = ''
with io.StringIO() as buf:
with redirect_stdout(buf):
try:
values = job_item[0](*job_item[1:])
except Exception as e:
                    if self.debug:
print(f'Exception raised: "{e}"')
values = e
text_output = buf.getvalue()
return order, time.process_time() - t, values, text_output
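
# Example usage (hypothetical job function and file name), typically launched with
# something like `mpirun -n 8 python jobs_script.py`, where the script does:
#
#   def simulate(size, bond_dim):
#       ...run one heavy simulation and return its result...
#
#   jobs = [(simulate, n, chi) for n in (8, 10, 12) for chi in (16, 32)]
#   Manager(debug=True, root_computes=True).run(jobs, file='data/example_jobs.pkl')
#
# Each job item is a tuple whose first element is a callable and whose remaining
# elements are its arguments; results are gathered on the root rank and pickled.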
###Output
_____no_output_____ |
notebooks/baseline_supply_chain_scraper.ipynb | ###Markdown
Baseline Supply Chain Scraper This notebook runs the scraper for our baseline sample, which consists of IDs that we have collected from different sources. Below, we describe the sample we used to begin with; this gave us a first body of IDs. The next phase is randomization (below), which simply tries to guess and record IDs. Supermarket in Tokyo Sample Satoshi's friend took photos of beef sold in a supermarket in Tokyo on Sep. 13, 2021. Recalled Sample Due to the possibility of nonstandard ID tags, one maker recalled the suspicious ID tags distributed in FY2020. The company announced the ID numbers subject to the recall on its website. Satoshi is confused! Especially for smaller ID numbers on the list, Satoshi found that the IDs are already registered in the database. If the IDs were newly issued in 2020, why are they already in the database? One possibility is that the tags accidentally came off and the farmers needed to get new tags with the same IDs. Another possibility is that the government reuses the same IDs several years after death or slaughter. Considering the average lifespan of cattle (5-6 years), the latter is more plausible. Fukushima Tracking Sample For accountability to citizens, the government published the IDs of cattle and their descendants raised within 20 km of the Fukushima nuclear power plant as of March 31, 2012. These cattle should not be shipped to slaughterhouses because they are considered "contaminated by radiation". **Import the module:**
###Code
from supply_chain_mapping import supply_chain_data_scraper as scd
from supply_chain_mapping import random_id_generator as rig
###Output
_____no_output_____
###Markdown
**Original Sample:**
###Code
# Import the three samples
sample_recall, sample_fukushima, sample_tokyo_sm = scd.quickload_samples()
print('# of ids in Tokyo SM sample:', len(sample_tokyo_sm))
print('# of ids in Fukushima sample:', len(sample_fukushima))
print('# of ids in Recall sample:', len(sample_recall))
master_sample = sample_tokyo_sm.append(sample_fukushima)
master_sample = master_sample.append(sample_recall)
print('Total # of ids in our baseline sample:', len(master_sample))
###Output
# of ids in Tokyo SM sample: 7
# of ids in Fukushima sample: 340
# of ids in Recall sample: 7000
Total # of ids in our baseline sample: 7347
###Markdown
*Review how much of the initial sample has been collected:*
###Code
uncollected_sample = scd.check_collected_data(list(master_sample['id'].values))
###Output
10 IDs in the submitted list have not been collected
0 new IDs in the temporary folder have been appended to 98261 IDs in the collected folder
There are a total of 98261 collected IDs
There are a total of 137847 failed IDs
###Markdown
**Generate completely random and unique IDs that do not overlap with collected ones (including failed ones):**We go through this step first in order to identify rules in how IDs are generated which we can then use to create targetted random IDs.
###Code
#uncollected_random_ids = rig.random_cowid_generator(batch_size=10000)
###Output
_____no_output_____
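###Markdown
A minimal sketch of what such a generator needs to do (our own illustration, not the code in `random_id_generator`): draw candidate IDs and reject anything already in the collected or failed sets. We assume here, for the example only, that IDs are 10-digit zero-padded strings.
###Code
# Illustration only; assumes 10-digit string IDs and that `collected`/`failed` are sets.
import random

def draw_new_ids(n, collected, failed, seed=0):
    rng = random.Random(seed)
    seen = set(collected) | set(failed)
    out = []
    while len(out) < n:
        candidate = "%010d" % rng.randrange(10**10)
        if candidate not in seen:
            out.append(candidate)
            seen.add(candidate)
    return out

print(draw_new_ids(3, collected={"0000000001"}, failed={"0000000002"}))
###Output
_____no_output_____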
###Markdown
**Examine patterns and generate random IDs based on collected ones:**
###Code
# Check if there are any patterns in the digits of the IDs that do not fail
import pandas as pd
collected_ids, failed_ids = scd.get_collected_ids()
collected_ids = pd.DataFrame({'collected':collected_ids})
for i in range(2,4): collected_ids['dig'+str(i)] = collected_ids['collected'].str[:i]
print(collected_ids['dig2'].value_counts().sort_index())
failed_ids = pd.DataFrame({'failed':failed_ids})
for i in range(2,4): failed_ids['dig'+str(i)] = failed_ids['failed'].str[:i]
print(failed_ids['dig2'].value_counts().sort_index())
list_of_random_ids = rig.bounded_random_ids(['08','11','12','13','14','15','16'], 2, batch_size=100000, lower_bound=11000000, upper_bound=14000000)
###Output
_____no_output_____
###Markdown
Scraping the original supply chain sample!
###Code
import os
from path import Path
project_path = Path(os.getcwd()).parent
print('The number of uncollected IDs from the original sample is',len(list_of_random_ids))
print('The project path is:\n',project_path)
final_data, complete_data, failures = scd.supply_chain_data_scraper(list_of_random_ids,
japanese=True,
split_save=200,
temporary_file_path=project_path+'/data/temporary',
final_file_path=project_path+'/data/collected')
###Output
Last error was:
list index out of range
Scraped 2021 IDs from the 100000 originally submitted
Current loop is working on the 98010 remaining IDs
Working on ID # 2021 of the current loop
###Markdown
Scraper Evaluation To evaluate a reasonable stopping point, we can use the marginal entropy of each additional unit scraped; in other words, how much new information about our network we get with each additional cow. The following function plots the number of nodes and the number of connections added by each additional ID scraped. It randomly samples IDs until the entire sample is accounted for and then repeats this process in a Monte Carlo simulation. The resulting rates of change and confidence intervals are graphed.
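Below is a rough sketch of that idea with made-up data structures (the actual implementation and plotting live in the `supply_chain_mapping` package): shuffle the scraped records many times, count how many previously unseen nodes each additional ID contributes, and average the curves.
###Code
# Illustration only: Monte Carlo estimate of marginal new nodes per scraped ID.
import numpy as np

def marginal_new_nodes(records, n_reps=50, seed=0):
    """records: list of (cow_id, set_of_connected_nodes). Returns (mean, std) curves."""
    rng = np.random.default_rng(seed)
    curves = np.zeros((n_reps, len(records)))
    for r in range(n_reps):
        order = rng.permutation(len(records))
        seen = set()
        for step, idx in enumerate(order):
            cow_id, nodes = records[idx]
            new = {cow_id, *nodes} - seen
            curves[r, step] = len(new)
            seen |= new
    return curves.mean(axis=0), curves.std(axis=0)

# toy example
records = [("c1", {"farmA"}), ("c2", {"farmA", "slaughterX"}), ("c3", {"farmB"})]
mean_curve, std_curve = marginal_new_nodes(records, n_reps=10)
print(mean_curve, std_curve)
###Output
_____no_output_____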
###Code
from supply_chain_mapping import data_cleaning_and_processing as dc
complete_data = dc._load_complete_data(rename=True)
plot_scraper_entropy(5000, complete_data)
###Output
_____no_output_____ |
src/archive/feature_engineering.ipynb | ###Markdown
Feature Engineering Creating features for the ML models from scraped data
###Code
#imports
import numpy as np
import pandas as pd
from tqdm import tqdm
import save as sv
import capture as cp
###Output
_____no_output_____
###Markdown
Target Variable
###Code
path = "../data/rating/{}"
rating_1 = pd.read_csv(path.format("rating_1.csv"),names=['url','username','r1'])
rating_1['r1']= [float(r) for r in rating_1['r1']]
print(len(rating_1))
rating_2 = pd.read_csv(path.format("rating_2.csv"),names=['url','username','r2'])
rating_2['r2']= [float(r) for r in rating_2['r2']]
print(len(rating_2))
rating = pd.merge(rating_1, rating_2, how='inner', on=['url','username'])
rating['y'] = (rating['r1'] + rating['r2'])/2
print(len(rating))
rating[['r1','r2','y']].head()
rating.to_csv(path.format('rating.csv'))
###Output
_____no_output_____
###Markdown
Features
###Code
path = "../data/html/"
def feat_engineer(user,path="../data/html/"):
#counts
counts = cp.get_counts(user, path)
feat = counts
feat['foll_ratio'] = -1 if feat['following'] == 0 else round(feat['followers']/feat['following'],2)
#languages
repos = cp.get_repos(user, path)
lang = list(set(repos['languages']))
n_lang = len(lang)
feat['lang'] = lang
feat['n_lang'] = n_lang
#organisations
orgs = cp.get_orgs(user, path)
feat['org_flag'] = 0 if len(orgs) == 0 else 1
#contributions
cont = cp.get_contributions(user, path);
cont_values = [int(c[1]) for c in cont]
n_cont = sum(cont_values)
n_cont_90days = sum(cont_values[275:])
last_cont = 0 if n_cont ==0 else next((i for i, x in enumerate(cont_values[::-1]) if x), None)
feat['n_cont'] = n_cont
feat['last_cont'] = last_cont
feat['stab_cont'] = 0 if n_cont == 0 else round(n_cont_90days/n_cont,2)
#additional features
feat['cont_repo_ratio'] = 0 if feat['repos'] == 0 else round(n_cont/feat['repos'],2)
return feat
features = []
for user in tqdm(rating['username']):
feat = feat_engineer(user)
feat['username'] = user
features.append(feat)
columns = ['repos','stars','followers', 'following','foll_ratio',
'lang', 'n_lang','org_flag','n_cont','last_cont','stab_cont','cont_repo_ratio']
data = pd.DataFrame(features,columns=columns)
data['r1'] = rating['r1']
data['r2'] = rating['r2']
data['y'] = rating['y']
data.head()
data.to_csv('../data/gitrater.csv')
###Output
_____no_output_____
###Markdown
Archive
###Code
data.r1 = (data.r1-np.mean(data.r1))/ np.std(data.r1)
data.r2 = (data.r2-np.mean(data.r2))/ np.std(data.r2)
unique_lang = list(set([l for lang in data['lang'] for l in lang]))
count = []
r1_avg = []
r2_avg = []
y_avg = []
for lang in unique_lang:
rows = data[[lang in row for row in data.lang]]
count.append(len(rows))
r1_avg.append(np.mean(rows.r1))
r2_avg.append(np.mean(rows.r2))
y_avg.append(np.mean(rows.y))
pd.DataFrame({'lang':unique_lang, 'count':count, 'r1_avg':r1_avg, 'r2_avg':r2_avg,'y_avg':y_avg })
###Output
_____no_output_____ |
notebooks/BasicStats2.ipynb | ###Markdown
Distributions and Estimators G. Richards, 2016 Resources for this material include Ivezic Sections 3.2-3.5, Karen Leighly's [Bayesian Statistics Lecture](http://seminar.ouml.org/lectures/bayesian-statistics/), and Bevington's book. Distributions If we are attempting to characterize our data in a way that is **parameterized**, then we need a functional form or a **distribution**. There are many naturally occurring distributions. The book goes through quite a few of them. Here we'll just talk about a few basic ones to get us started. Uniform Distribution The uniform distribution is perhaps more commonly called a "top-hat" or a "box" distribution. It is specified by a mean, $\mu$, and a width, $W$, where $$p(x|\mu,W) = \frac{1}{W}$$ over the range $|x-\mu|\le \frac{W}{2}$ and $0$ otherwise. That says that "given $\mu$ AND $W$, the probability of $x$ is $\frac{1}{W}$" (as long as we are within a certain range). Since we are used to thinking of a Gaussian as the *only* type of distribution, the concept of $\sigma$ (aside from the width) may seem strange. But $\sigma$, as mathematically defined last time, applies here and is $$\sigma = \frac{W}{\sqrt{12}}.$$
###Code
# Execute this cell
%matplotlib inline
%run ../code/fig_uniform_distribution.py
###Output
_____no_output_____
###Markdown
We can implement [uniform](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.uniform.htmlscipy.stats.uniform) in `scipy` as follows. We'll use the methods listed at the bottom of the link to complete the cell: `dist.rvs()` and `dist.pdf`. First create a uniform distribution with bin edges of `0` and `2`.
###Code
# Complete and execute this cell
from scipy import stats
import numpy as np
dist = stats.uniform(0,2) #Complete
draws = dist.rvs(10) # ten random draws
print(draws)
p = dist.pdf(x=1) #pdf evaluated at x=1
print(p)
###Output
[0.141957 0.24006624 0.8940587 0.00309021 0.7586349 0.26032786
1.88771424 0.97244483 0.7414465 1.40981156]
0.5
###Markdown
Did you expect that answer for the pdf? Why? What would the pdf be if you changed the width to 4? Gaussian DistributionWe have already seen that the Gaussian distribution is given by$$p(x|\mu,\sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(\frac{-(x-\mu)^2}{2\sigma^2}\right).$$It is also called the **normal distribution** and can be noted by $\mathscr{N}(\mu,\sigma)$. Note that the convolution of two Gaussians results in a Gaussian. So $\mathscr{N}(\mu,\sigma)$ convolved with $\mathscr{N}(\nu,\rho)$ is $\mathscr{N}(\mu+\nu,\sqrt{\sigma^2+\rho^2})$
###Code
# Execute this cell
%run ../code/fig_gaussian_distribution.py
###Output
_____no_output_____
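###Markdown
As a quick numerical check of the convolution statement above (an added illustration, not part of the original exercise): draw $x\sim\mathscr{N}(\mu,\sigma)$ and $y\sim\mathscr{N}(\nu,\rho)$, then compare the sample mean and standard deviation of $x+y$ with $\mu+\nu$ and $\sqrt{\sigma^2+\rho^2}$.
###Code
import numpy as np
from scipy import stats

mu, sigma = 1.0, 2.0
nu, rho = -3.0, 1.5
Nsamp = 100000

x = stats.norm(mu, sigma).rvs(Nsamp)
y = stats.norm(nu, rho).rvs(Nsamp)
s = x + y

print("sample mean, std :", s.mean(), s.std())
print("predicted        :", mu + nu, np.sqrt(sigma**2 + rho**2))
###Output
_____no_output_____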
###Markdown
In the same manner as above, create a normal distribution with `loc=0` and `scale=1`. Produce 10 random draws and determine the probability at `x=0`.
###Code
# Complete and execute this cell
dist = stats.norm(0,1) # Normal distribution with mean = 0, stdev = 1
draws = dist.rvs(10) # 10 random draws
p = dist.pdf(x=0) # pdf evaluated at x=0
print(draws)
print(p)
# %load ../code/fig_gaussian_distribution.py
"""
Example of a Gaussian distribution
----------------------------------
Figure 3.8.
This shows an example of a gaussian distribution with various parameters.
We'll generate the distribution using::
dist = scipy.stats.norm(...)
Where ... should be filled in with the desired distribution parameters
Once we have defined the distribution parameters in this way, these
distribution objects have many useful methods; for example:
* ``dist.pmf(x)`` computes the Probability Mass Function at values ``x``
in the case of discrete distributions
* ``dist.pdf(x)`` computes the Probability Density Function at values ``x``
in the case of continuous distributions
* ``dist.rvs(N)`` computes ``N`` random variables distributed according
to the given distribution
Many further options exist; refer to the documentation of ``scipy.stats``
for more details.
"""
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from scipy.stats import norm
from matplotlib import pyplot as plt
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=True)
#------------------------------------------------------------
# Define the distributions to be plotted
sigma_values = [0.5, 1.0, 2.0]
linestyles = ['-', '--', ':']
mu = 0
x = np.linspace(-10, 10, 1000)
#------------------------------------------------------------
# plot the distributions
fig, ax = plt.subplots(figsize=(5, 3.75))
for sigma, ls in zip(sigma_values, linestyles):
# create a gaussian / normal distribution
dist = norm(mu, sigma)
plt.plot(x, dist.pdf(x), ls=ls, c='black',
label=r'$\mu=%i,\ \sigma=%.1f$' % (mu, sigma))
plt.xlim(-5, 5)
plt.ylim(0, 0.85)
plt.xlabel('$x$')
plt.ylabel(r'$p(x|\mu,\sigma)$')
plt.title('Gaussian Distribution')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Let's make sure that everyone can create a nice plot of a Gaussian distribution. Create a gaussian pdf with `mu=100` and `sigma=15`. Have the plot sample the distribution 1000 times from 0 to 200.
###Code
## Let's play with Gaussians! Or Normal distributions, N(mu,sigma)
## see http://www.astroml.org/book_figures/chapter3/fig_gaussian_distribution.html
## Example: IQ is (by definition) distributed as N(mu=100,sigma=15)
# generate distribution for a grid of x values
import matplotlib.pyplot as plt
x = np.linspace(0,200,1000)
mu=100
sigma=15
gauss = stats.norm(mu,sigma).pdf(x) # this is a function of x: gauss(x)
# actual plotting
fig, ax = plt.subplots(figsize=(5, 3.75))
plt.plot(x, gauss, ls='-', c='black', label=r'$\mu=%i,\ \sigma=%i$' % (mu, sigma))
plt.xlim(0, 200)
plt.ylim(0, 0.03)
plt.xlabel('$x$')
plt.ylabel(r'$p(x|\mu,\sigma)$')
plt.title('Gaussian Distribution')
plt.legend()
###Output
_____no_output_____
###Markdown
Above we used the probability density function. The cumulative distribution function, CDF, is the integral of the pdf from $x'=-\infty$ to $x'=x$:$${\rm CDF}(x|\mu,\sigma) = \int_{-\infty}^{x} p(x'|\mu,\sigma) dx',$$where${\rm CDF}(\infty) = 1$.
###Code
#The same as above but now with the cdf method
gaussCDF = stats.norm(mu, sigma).cdf(x)
fig, ax = plt.subplots(figsize=(5, 3.75))
plt.plot(x, gaussCDF, ls='-', c='black', label=r'$\mu=%i,\ \sigma=%i$' % (mu, sigma))
plt.xlim(0, 200)
plt.ylim(-0.01, 1.01)
plt.xlabel('$x$')
plt.ylabel(r'$CDF(x|\mu,\sigma)$')
plt.title('Gaussian Distribution')
plt.legend(loc=4)
###Output
_____no_output_____
###Markdown
What fraction of people have IQ>145? First let's determine that using the theoretical CDF. Then we'll try simulating it using `sampleSize=1000000`.
###Code
# What fraction of people have IQ>145?
cdf145 = stats.norm(100,15).cdf(145)
#print(cdf145)
fraction145 = 1 - cdf145
# survival function fraction145 = stats.norm(100,15).sf(145)
print(fraction145)
# let's now look at the same problems using a sample of million points drawn from N(100,15)
sampleSize=1_000_000
gaussSample = stats.norm(100,15).rvs(sampleSize) # rvs() draws sampleSize random samples from N(100,15)
smartOnes = gaussSample[gaussSample>145] #Extract only those draws with >145
FracSmartOnes = 1.0*np.size(smartOnes)/sampleSize
print(FracSmartOnes)
###Output
0.001328
###Markdown
How about the IQ that corresponds to "one in a million"?
###Code
#First try it using norm.ppf
# norm.ppf returns x for specified cdf, assuming mu=0 and sigma=1 ("standard normal pdf")
nSigma = stats.norm.ppf(1.0-1.e-6)
IQ = mu + nSigma*sigma # translate from (0,1) gaussian to (100,15) gaussian
print('nSigma=',nSigma)
print('IQ=', IQ)
#What is another way to estimate this with `gaussSample`?
# Max of a million draws is roughly what 1-in-a-million is.
print(np.max(gaussSample))
###Output
170.4671730461419
###Markdown
Gaussian confidence levelsThe probability of a measurement drawn from a Gaussian distribution that is between $\mu-a$ and $\mu+b$ is$$\int_{\mu-a}^{\mu+b} p(x|\mu,\sigma) dx.$$For $a=b=1\sigma$, we get the familiar result of 68.3%. For $a=b=2\sigma$ it is 95.4%. So we refer to the range $\mu \pm 1\sigma$ and $\mu \pm 2\sigma$ as the 68% and 95% **confidence limits**, respectively. Can you figure out what the probability is for $-2\sigma, +4\sigma$? Check to see that you get the right answer for the cases above first!
###Code
# Complete and execute this cell
import numpy as np
N=10000
mu=0
sigma=1
dist = norm(mu, sigma) # Complete
v = np.linspace(-2,4,N)
prob = dist.pdf(v)*(v.max()-v.min())/N
print(prob.sum())
###Output
_____no_output_____
###Markdown
We could do this a number of different ways. I did it this way so that we could see what is going on. Basically it is a simple Riemann sum: computing the height and the width of each rectangle and summing them up. Do it below with the CDF and check that the answer is the same.
###Code
upper = norm.cdf(v.max()) #Complete
lower = norm.cdf(v.min())
p = upper-lower
print(p)
###Output
0.9772181968099877
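###Markdown
The same CDF differencing also reproduces, as a quick check, the familiar 68.3% and 95.4% levels quoted above for $\pm1\sigma$ and $\pm2\sigma$ (using a standard normal).
###Code
# Quick check of the 1- and 2-sigma confidence levels from the Gaussian CDF
from scipy import stats
d = stats.norm(0, 1)
print(d.cdf(1) - d.cdf(-1)) # ~0.683
print(d.cdf(2) - d.cdf(-2)) # ~0.954
###Output
_____no_output_____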
###Markdown
Log NormalNote that if $x$ is Gaussian distributed with $\mathscr{N}(\mu,\sigma)$, then $y=\exp(x)$ will have a **log-normal** distribution, where the mean of y is $\exp(\mu + \sigma^2/2)$. Try it.
###Code
# Execute this cell
x = stats.norm(0,1) # mean = 0, stdev = 1
y = np.exp(x)
print(y.mean())
print(x)
###Output
_____no_output_____
###Markdown
The catch here is that stats.norm(0,1) returns an *object* and not something that we can just do math on in the expected manner. What *can* you do with it? Try dir(x) to get a list of all the methods and properties.
###Code
import math
# Complete and execute this cell
dist = stats.norm(0,1) # mean = 0, stdev = 1
x = dist.rvs(10000)
y = np.exp(x)
print(math.exp(0+1*1/2.0),y.mean())
###Output
1.6487212707001282 1.6287165460013104
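###Markdown
An aside on frozen distribution objects: they expose methods rather than supporting arithmetic directly, and a few of the most useful ones are picked out below (the short list is just a selection; `dir(x)` shows everything).
###Code
# A few useful methods of a frozen distribution object; dir(x) lists everything.
from scipy import stats
x = stats.norm(0, 1)
useful = ('pdf', 'cdf', 'sf', 'ppf', 'isf', 'rvs', 'mean', 'std', 'interval')
print([m for m in dir(x) if m in useful])
print(x.mean(), x.std(), x.interval(0.683)) # mean, std, and ~1-sigma interval
###Output
_____no_output_____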
###Markdown
$\chi^2$ DistributionWe'll run into the $\chi^2$ distribution when we talk about Maximum Likelihood in the next chapter.If we have a Gaussian distribution with values ${x_i}$ and we scale and normalize them according to$$z_i = \frac{x_i-\mu}{\sigma},$$then the sum of squares, $Q$ $$Q = \sum_{i=1}^N z_i^2,$$will follow the $\chi^2$ distribution. The *number of degrees of freedom*, $k$ is given by the number of data points, $N$ (minus any constraints). The pdf of $Q$ given $k$ defines $\chi^2$ and is given by$$p(Q|k)\equiv \chi^2(Q|k) = \frac{1}{2^{k/2}\Gamma(k/2)}Q^{k/2-1}\exp(-Q/2),$$where $Q>0$ and the $\Gamma$ function would just be the usual factorial function if we were dealing with integers, but here we have half integers.This is ugly, but it is really just a formula like anything else. Note that the shape of the distribution *only* depends on the sample size $N=k$ and not on $\mu$ or $\sigma$. For large $k$ (say, $k > 10$ or so), $\chi^2$-distribution becomes well approximated by the Normal distribution (Gaussian):$$ p(\chi^2|k) \sim \mathscr{N}(\chi^2 | k, \sqrt{2k}) $$
###Code
# Execute this cell
%run ../code/fig_chi2_distribution.py
###Output
_____no_output_____
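###Markdown
As a rough check of the large-$k$ statement above, we can compare the $\chi^2$ pdf with the $\mathscr{N}(k,\sqrt{2k})$ approximation on a small grid (the choice $k=50$ below is arbitrary).
###Code
# Compare chi^2(Q|k) with the Gaussian approximation N(k, sqrt(2k)) for a fairly large k
import numpy as np
from scipy import stats
k = 50 # arbitrary "large" k
Q = np.linspace(k - 2*np.sqrt(2*k), k + 2*np.sqrt(2*k), 5)
print(stats.chi2(k).pdf(Q))
print(stats.norm(k, np.sqrt(2*k)).pdf(Q))
###Output
_____no_output_____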
###Markdown
Chi-squared per degree of freedomIn practice we frequently divide $\chi^2$ by the number of degrees of freedom, and work with:$$\chi^2_{dof} = \frac{1}{N-1} \sum_{i=1}^N \left(\frac{x_i-\overline{x}}{\sigma}\right)^2$$which is distributed as$$ p(\chi^2_{dof}) \sim \mathscr{N}\left(1, \sqrt{\frac{2}{N-1}}\right) $$(where $k = N-1$, and $N$ is the number of samples). Therefore, we expect $\chi^2_{dof}$ to be 1, to within a few $\sqrt{\frac{2}{N-1}}$. Student's $t$ DistributionAnother distribution that we'll see later is the Student's $t$ Distribution.If you have a sample of $N$ measurements, $\{x_i\}$, drawn from a Gaussian distribution, $\mathscr{N}(\mu,\sigma)$, and you apply the transform$$t = \frac{\overline{x}-\mu}{s/\sqrt{N}},$$then $t$ will be distributed according to Student's $t$ with the following pdf (for $k$ degrees of freedom): $$p(x|k) = \frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi k} \Gamma(\frac{k}{2})} \left(1+\frac{x^2}{k}\right)^{-\frac{k+1}{2}}$$As with a Gaussian, Student's $t$ is bell shaped, but has "heavier" tails.
###Code
# Execute this cell
%run ../code/fig_student_t_distribution.py
###Output
_____no_output_____
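###Markdown
To make the "heavier tails" concrete before the next cell, here is a quick comparison of tail probabilities for $\chi^2$, Student's $t$, and a Gaussian (the value $x=6$ with $k=2$ matches the example discussed next; the 3$\sigma$ comparison is just an extra illustration).
###Code
# Tail probabilities depend strongly on the assumed distribution.
from scipy import stats
print(stats.chi2(2).sf(6)) # P(x>6) for chi^2 with k=2, ~0.050
print(stats.t(2).sf(6)) # P(x>6) for Student's t with k=2, ~0.013
print(stats.t(2).sf(3), stats.norm(0, 1).sf(3)) # "3 sigma" tail: ~0.048 vs ~0.0013
###Output
_____no_output_____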
###Markdown
What's the point?The point is that we are going to make some measurement. And we will want to know how likely it is that we would get that measurement in our experiment as compared to random chance. To determine that we need to know the shape of the distribution. Let's say that we find that $x=6$. If our data is $\chi^2$ distributed with 2 degrees of freedom, then we would integrate the $k=2$ curve above from 6 to $\infty$ to determine how likely it is that we would have gotten 6 or larger by chance. If our distribution was instead $t$ distributed, we would get a *very* different answer. Note that it is important that you decide *ahead of time* what the metric will be for deciding whether this result is significant or not. More on this later, but see [this article](http://fivethirtyeight.com/features/science-isnt-broken/). Central Limit TheoremOne of the reasons that a Gaussian (or Normal) Distribution is so common is because of the **Central Limit Theorem**. It says that for an arbitrary distribution, $h(x)$, that has a well-defined mean, $\mu$, and standard deviation, $\sigma$, the mean of $N$ values \{$x_i$\} drawn from the distribution will follow a Gaussian Distribution with $\mathscr{N}(\mu,\sigma/\sqrt{N})$. (A Cauchy distribution is one example where this fails.)This theorem is the foundation for performing repeat measurements in order to improve the accuracy of one's experiment. It is telling us something about the *shape* of the distribution that we get when averaging. The **Law of Large Numbers** further says that the sample mean will converge to the distribution mean as $N$ increases. Personally, I always find this a bit confusing (or at least I forget how it works). So, let's look at it in detail.Start by plotting a normal distribution with $\mu=0.5$ and $\sigma=1/\sqrt{12}/\sqrt{2}$.Now take `N=2` draws using the `np.random.random` distribution and plot them as a rug plot. Do that a couple of times (e.g., keep hitting Ctrl-Enter in the cell).
###Code
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import norm
from math import sqrt
N=2 # Number of draws
mu=0.5 # Location
sigma =1/sqrt(12)/sqrt(2) # Scale factor
u = np.linspace(-1, 2, 1000) # Array to sample the space around the draws
dist = norm(mu,sigma) # Complete
plt.plot(u,dist.pdf(u)) # Complete
x = np.random.random(2) # Two random draws
plt.plot(x, 0*x, '|', markersize=50)
plt.xlabel('x')
plt.ylabel('pdf')
plt.xlim(-1, 2)
###Output
_____no_output_____
###Markdown
Now let's average those two draws and plot the result (in the same panel). Do it as a histogram for 1,000,000 samples (of 2 each). Use a stepfilled histogram that is normalized with 50% transparency and 100 bins.
###Code
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import norm
N=____ # Number of draws
mu=____ # Location
sigma =____ # Scale factor
u = np.____(____,____,1000) # Array to sample the space
dist = ____(____,____) # Complete
plt.plot(____,____.____(____)) # Complete
x = np.____.____(____) # N random draws
plt.plot(x, 0*x, '|', markersize=50)
plt.xlabel('x')
plt.ylabel('pdf')
# Add a histogram that is the mean of 1,000,000 draws
yy = []
for i in np.arange(100000):
xx = np.random.random(N) # N random draws
yy.append(____) # Append average of those random draws to the end of the array
_ = plt.hist(yy,bins=100,histtype='stepfilled', alpha=0.5, density=True)
###Output
_____no_output_____
###Markdown
Now instead of averaging 2 draws, average 3. Then do it for 10. Then for 100. Each time for 1,000,000 samples.
###Code
# Copy your code from above and edit accordingly (or just edit your code from above)
###Output
_____no_output_____
###Markdown
For 100 you will note that your draws are clearly sampling the full range, but the means of those draws are in a *much* more restricted range. Moreover, they are very closely following a Normal Distribution. This is the power of the Central Limit Theorem. We'll see this more later when we talk about **maximum likelihood**.By the way, if your code is ugly, you can run the following cell to reproduce Ivezic, Figure 3.20, which nicely illustrates this in one plot.
###Code
# Execute this cell
%run ../code/fig_central_limit.py
###Output
_____no_output_____
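###Markdown
A purely numerical version of the same statement, as a quick check: the standard deviation of the mean of $N$ uniform draws should be $(1/\sqrt{12})/\sqrt{N}$ (here with $N=100$ and 100,000 repetitions, chosen arbitrarily).
###Code
# The scatter of the mean of N uniform draws should be (1/sqrt(12))/sqrt(N)
import numpy as np
N = 100 # arbitrary choice
means = np.random.random((100000, N)).mean(axis=1)
print(means.std(), 1/np.sqrt(12)/np.sqrt(N))
###Output
_____no_output_____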
###Markdown
If you are confused, then watch this video from the Khan Academy:[https://www.khanacademy.org/math/statistics-probability/sampling-distributions-library/sample-means/v/central-limit-theorem](https://www.khanacademy.org/math/statistics-probability/sampling-distributions-library/sample-means/v/central-limit-theorem) Bivariate and Multivariate Distribution FunctionsUp to now we have been dealing with one-dimensional distribution functions. Let's now consider a two dimensional distribution $h(x,y)$ where $$\int_{-\infty}^{\infty}dx\int_{-\infty}^{\infty}h(x,y)dy = 1.$$ $h(x,y)$ is telling us the probability that $x$ is between $x$ and $dx$ and *also* that $y$ is between $y$ and $dy$.Then we have the following definitions:$$\sigma^2_x = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}(x-\mu_x)^2 h(x,y) dx dy$$$$\sigma^2_y = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}(y-\mu_y)^2 h(x,y) dx dy$$$$\mu_x = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}x h(x,y) dx dy$$$$\sigma_{xy} = Cov(x,y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}(x-\mu_x) (y-\mu_y) h(x,y) dx dy$$If $x$ and $y$ are uncorrelated, then we can treat the system as two independent 1-D distributions. This means that choosing a range on one variable has no effect on the distribution of the other. We can write a 2-D Gaussian pdf as$$p(x,y|\mu_x,\mu_y,\sigma_x,\sigma_y,\sigma_{xy}) = \frac{1}{2\pi \sigma_x \sigma_y \sqrt{1-\rho^2}} \exp\left(\frac{-z^2}{2(1-\rho^2)}\right),$$where $$z^2 = \frac{(x-\mu_x)^2}{\sigma_x^2} + \frac{(y-\mu_y)^2}{\sigma_y^2} - 2\rho\frac{(x-\mu_x)(y-\mu_y)}{\sigma_x\sigma_y},$$with $$\rho = \frac{\sigma_{xy}}{\sigma_x\sigma_y}$$as the (dimensionless) correlation coefficient.If $x$ and $y$ are perfectly correlated then $\rho=\pm1$ and if they are uncorrelated, then $\rho=0$. The pdf is now not a histogram, but rather a series of contours in the $x-y$ plane. These are centered at $(x=\mu_x, y=\mu_y)$ and are tilted at angle $\alpha$, which is given by$$\tan(2 \alpha) = 2\rho\frac{\sigma_x\sigma_y}{\sigma_x^2-\sigma_y^2} = 2\frac{\sigma_{xy}}{\sigma_x^2-\sigma_y^2}.$$For example (Ivezic, Figure 3.22): We can define new coordinate axes that are aligned with the minimum and maximum widths of the distribution. These are called the **principal axes** and are given by$$P_1 = (x-\mu_x)\cos\alpha + (y-\mu_y)\sin\alpha,$$and$$P_2 = -(x-\mu_x)\sin\alpha + (y-\mu_y)\cos\alpha.$$The widths in this coordinate system are$$\sigma^2_{1,2} = \frac{\sigma_x^2+\sigma_y^2}{2}\pm\sqrt{\left(\frac{\sigma_x^2-\sigma_y^2}{2}\right)^2 + \sigma^2_{xy}}.$$Note that the correlation vanishes in this coordinate system (by definition) and the bivariate Gaussian is just a product of two univariate Gaussians. This concept will be crucial for understanding Principal Component Analysis when we get to Chapter 7, where PCA extends this idea to even more dimensions. In the univariate case we used $\overline{x}$ and $s$ to *estimate* $\mu$ and $\sigma$. In the bivariate case we estimate 5 parameters: $(\overline{x},\overline{y},s_x,s_y,s_{xy})$. As with the univariate case, it is important to realize that outliers can bias these estimates and that it may be more appropriate to use the median rather than the mean as a more robust estimator for $\mu_x$ and $\mu_y$. Similarly we want robust estimators for the other parameters of the fit. We won't go into that in detail right now, but see Ivezic, Figure 3.23 for an example: For an example of how to generate a bivariate distribution and plot confidence contours, execute the following cell.
###Code
# Base code drawn from Ivezic, Figure 3.22, edited by G. Richards to simplify the example
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.patches import Ellipse
from astroML.stats.random import bivariate_normal
from astroML.stats import fit_bivariate_normal
mux = 0
muy = 0
sigx = 1.0
sigy = 1.0
sigxy = 0.3
#------------------------------------------------------------
# Create 10,000 points from a multivariate normal distribution
mean = [mux, muy]
cov = [[sigx, sigxy], [sigxy, sigy]]
x, y = np.random.multivariate_normal(mean, cov, 10000).T
# Fit those data with a bivariate normal distribution
mean, sigma_x, sigma_y, alpha = fit_bivariate_normal(x,y)
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(111)
plt.scatter(x,y,s=2,edgecolor='none')
# draw 1, 2, 3-sigma ellipses over the distribution
for N in (1, 2, 3):
ax.add_patch(Ellipse(mean, N * sigma_x, N * sigma_y, angle=alpha * 180./np.pi, lw=1, ec='k', fc='none'))
###Output
_____no_output_____
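###Markdown
As a quick follow-up check, we can also recover $\rho$ and the tilt angle $\alpha$ directly from the sample covariance matrix using the formulas above; since $\sigma_x=\sigma_y$ in this example, $\alpha$ should come out near 45 degrees (the draw below just repeats the same parameters as the cell above).
###Code
# Estimate rho and the tilt angle alpha from the sample covariance, using
# rho = sigma_xy/(sigma_x*sigma_y) and tan(2*alpha) = 2*sigma_xy/(sigma_x^2 - sigma_y^2).
import numpy as np
xy = np.random.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], 10000)
C = np.cov(xy.T) # sample covariance matrix
rho = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])
alpha = 0.5 * np.arctan2(2*C[0, 1], C[0, 0] - C[1, 1])
print(rho) # should be close to 0.3
print(np.degrees(alpha)) # close to 45 degrees, since sigma_x = sigma_y here
###Output
_____no_output_____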
###Markdown
Distributions and EstimatorsG. Richards, 2016Resources for this material include Ivezic Sections 3.2-3.5, Karen Leighly's [Bayesian Statistics Lecture](http://seminar.ouml.org/lectures/bayesian-statistics/), and Bevington's book. DistributionsIf we are attempting to characterize our data in a way that is **parameterized**, then we need a functional form or a **distribution**. There are many naturally occurring distributions. The book goes through quite a few of them. Here we'll just talk about a few basic ones to get us started. Uniform DistributionThe uniform distribution is perhaps more commonly called a "top-hat" or a "box" distribution. It is specified by a mean, $\mu$, and a width, $W$, where$$p(x|\mu,W) = \frac{1}{W}$$over the range $|x-\mu|\le \frac{W}{2}$ and $0$ otherwise. That says that "given $\mu$ AND $W$, the probability of $x$ is $\frac{1}{W}$" (as long as we are within a certain range).Since we are used to thinking of a Gaussian as the *only* type of distribution the concept of $\sigma$ (aside from the width) may seem strange. But $\sigma$ as mathematically defined last time applies here and is$$\sigma = \frac{W}{\sqrt{12}}.$$
###Code
# Execute this cell
%matplotlib inline
%run code/fig_uniform_distribution.py
###Output
_____no_output_____
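###Markdown
The $\sigma = W/\sqrt{12}$ result is easy to verify numerically with a large set of uniform draws (the width $W=3$ below is an arbitrary choice).
###Code
# The standard deviation of a uniform distribution of width W should be W/sqrt(12)
import numpy as np
from scipy import stats
W = 3.0 # arbitrary width
draws = stats.uniform(0, W).rvs(1000000) # loc=0, scale=W -> uniform on [0, W)
print(draws.std(), W/np.sqrt(12))
###Output
_____no_output_____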
###Markdown
We can implement [uniform](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.uniform.htmlscipy.stats.uniform) in `scipy` as follows. We'll use the methods listed at the bottom of the link to complete the cell: `dist.rvs()` and `dist.pdf`. First create a uniform distribution with bin edges of `0` and `2`.
###Code
# Complete and execute this cell
from scipy import stats
import numpy as np
dist = stats.uniform(____,____) #Complete
draws = dist.rvs(____) # ten random draws
print(draws)
p = dist.pdf(____) #pdf evaluated at x=1
print(p)
###Output
_____no_output_____
###Markdown
Did you expect that answer for the pdf? Why? What would the pdf be if you changed the width to 4? Gaussian DistributionWe have already seen that the Gaussian distribution is given by$$p(x|\mu,\sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(\frac{-(x-\mu)^2}{2\sigma^2}\right).$$It is also called the **normal distribution** and can be noted by $\mathscr{N}(\mu,\sigma)$. Note that the convolution of two Gaussians results in a Gaussian. So $\mathscr{N}(\mu,\sigma)$ convolved with $\mathscr{N}(\nu,\rho)$ is $\mathscr{N}(\mu+\nu,\sqrt{\sigma^2+\rho^2})$
###Code
# Execute this cell
%run code/fig_gaussian_distribution.py
###Output
_____no_output_____
###Markdown
In the same manner as above, create a normal distribution with `loc=0` and `scale=1`. Produce 10 random draws and determine the probability at `x=0`.
###Code
# Complete and execute this cell
dist = stats.____(____,____) # Normal distribution with mean = 0, stdev = 1
draws = ____.____(____) # 10 random draws
p = ____.____(____) # pdf evaluated at x=0
print(draws)
print(p)
# Uncomment the next line and run
# I just want you to know that this magic function exists.
#%load code/fig_gaussian_distribution.py
###Output
_____no_output_____
###Markdown
Let's make sure that everyone can create a nice plot of a Gaussian distribution. Create a gaussian pdf with `mu=100` and `sigma=15`. Have the plot sample the distribution 1000 times from 0 to 200.
###Code
## Let's play with Gaussians! Or Normal distributions, N(mu,sigma)
## see http://www.astroml.org/book_figures/chapter3/fig_gaussian_distribution.html
## Example: IQ is (by definition) distributed as N(mu=100,sigma=15)
# generate distribution for a grid of x values
x = np.linspace(____,____,____)
mu=____
sigma=____
gauss = stats.norm(____,____).____(____) # this is a function of x: gauss(x)
# actual plotting
fig, ax = plt.subplots(figsize=(5, 3.75))
plt.plot(x, gauss, ls='-', c='black', label=r'$\mu=%i,\ \sigma=%i$' % (mu, sigma))
plt.xlim(0, 200)
plt.ylim(0, 0.03)
plt.xlabel('$x$')
plt.ylabel(r'$p(x|\mu,\sigma)$')
plt.title('Gaussian Distribution')
plt.legend()
###Output
_____no_output_____
###Markdown
Above we used the probability density function. The cumulative distribution function, CDF, is the integral of the pdf from $x'=-\infty$ to $x'=x$:$${\rm CDF}(x|\mu,\sigma) = \int_{-\infty}^{x} p(x'|\mu,\sigma) dx',$$where${\rm CDF}(\infty) = 1$.
###Code
#The same as above but now with the cdf method
gaussCDF = stats.norm(mu, sigma).cdf(x)
fig, ax = plt.subplots(figsize=(5, 3.75))
plt.plot(x, gaussCDF, ls='-', c='black', label=r'$\mu=%i,\ \sigma=%i$' % (mu, sigma))
plt.xlim(0, 200)
plt.ylim(-0.01, 1.01)
plt.xlabel('$x$')
plt.ylabel(r'$CDF(x|\mu,\sigma)$')
plt.title('Gaussian Distribution')
plt.legend(loc=4)
###Output
_____no_output_____
###Markdown
What fraction of people have IQ>145? First let's determine that using the theoretical CDF. Then we'll try simulating it using `sampleSize=1000000`.
###Code
# What fraction of people have IQ>145?
cdf145 = _____.____(____,____).____(____)
fraction145 = ____
print(fraction145)
# let's now look at the same problems using a sample of million points drawn from N(100,15)
sampleSize=___
gaussSample = stats.norm(____,____).____(____)
smartOnes = gaussSample[gaussSample>145] #Extract only those draws with >145
FracSmartOnes = 1.0*np.size(smartOnes)/sampleSize
print(FracSmartOnes)
###Output
_____no_output_____
###Markdown
How about the IQ that corresponds to "one in a million"?
###Code
#First try it using norm.ppf
# norm.ppf returns x for specified cdf, assuming mu=0 and sigma=1 ("standard normal pdf")
nSigma = stats.norm.ppf(____)
IQ = mu + nSigma*sigma
print('nSigma=',nSigma)
print('IQ=', IQ)
#What is another way to estimate this with `gaussSample`?
print(____)
###Output
_____no_output_____
###Markdown
Gaussian confidence levelsThe probability of a measurement drawn from a Gaussian distribution that is between $\mu-a$ and $\mu+b$ is$$\int_{\mu-a}^{\mu+b} p(x|\mu,\sigma) dx.$$For $a=b=1\sigma$, we get the familiar result of 68.3%. For $a=b=2\sigma$ it is 95.4%. So we refer to the range $\mu \pm 1\sigma$ and $\mu \pm 2\sigma$ as the 68% and 95% **confidence limits**, respectively. Can you figure out what the probability is for $-2\sigma, +4\sigma$? Check to see that you get the right answer for the cases above first!
###Code
# Complete and execute this cell
N=10000
mu=0
sigma=1
dist = norm(mu, sigma) # Complete
v = np.linspace(-2,4,N)
prob = dist.pdf(v)*(v.max()-v.min())/N
print(prob.sum())
###Output
_____no_output_____
###Markdown
We could do this a number of different ways. I did it this way so that we could see what is going on. Basically it is a simple Riemann sum: computing the height and the width of each rectangle and summing them up. Do it below with the CDF and check that the answer is the same.
###Code
upper = ____.____(____) #Complete
lower = ____.____(____)
p = upper-lower
print(p)
###Output
_____no_output_____
###Markdown
Log NormalNote that if $x$ is Gaussian distributed with $\mathscr{N}(\mu,\sigma)$, then $y=\exp(x)$ will have a **log-normal** distribution, where the mean of y is $\exp(\mu + \sigma^2/2)$. Try it.
###Code
# Execute this cell
x = stats.norm(0,1) # mean = 0, stdev = 1
y = np.exp(x)
print(y.mean())
print(x)
###Output
_____no_output_____
###Markdown
The catch here is that stats.norm(0,1) returns an *object* and not something that we can just do math on in the expected manner. What *can* you do with it? Try dir(x) to get a list of all the methods and properties.
###Code
import math
# Complete and execute this cell
dist = stats.norm(0,1) # mean = 0, stdev = 1
x = dist.rvs(10000)
y = np.exp(x)
print(math.exp(0+1*1/2.0),y.mean())
###Output
_____no_output_____
###Markdown
$\chi^2$ DistributionWe'll run into the $\chi^2$ distribution when we talk about Maximum Likelihood in the next chapter.If we have a Gaussian distribution with values ${x_i}$ and we scale and normalize them according to$$z_i = \frac{x_i-\mu}{\sigma},$$then the sum of squares, $Q$ $$Q = \sum_{i=1}^N z_i^2,$$will follow the $\chi^2$ distribution. The *number of degrees of freedom*, $k$ is given by the number of data points, $N$ (minus any constraints). The pdf of $Q$ given $k$ defines $\chi^2$ and is given by$$p(Q|k)\equiv \chi^2(Q|k) = \frac{1}{2^{k/2}\Gamma(k/2)}Q^{k/2-1}\exp(-Q/2),$$where $Q>0$ and the $\Gamma$ function would just be the usual factorial function if we were dealing with integers, but here we have half integers.This is ugly, but it is really just a formula like anything else. Note that the shape of the distribution *only* depends on the sample size $N=k$ and not on $\mu$ or $\sigma$. For large $k$ (say, $k > 10$ or so), $\chi^2$-distribution becomes well approximated by the Normal distribution (Gaussian):$$ p(\chi^2|k) \sim \mathscr{N}(\chi^2 | k, \sqrt{2k}) $$
###Code
# Execute this cell
%run code/fig_chi2_distribution.py
###Output
_____no_output_____
###Markdown
Chi-squared per degree of freedomIn practice we frequently divide $\chi^2$ by the number of degrees of freedom, and work with:$$\chi^2_{dof} = \frac{1}{N-1} \sum_{i=1}^N \left(\frac{x_i-\overline{x}}{\sigma}\right)^2$$which is distributed as$$ p(\chi^2_{dof}) \sim \mathscr{N}\left(1, \sqrt{\frac{2}{N-1}}\right) $$(where $k = N-1$, and $N$ is the number of samples). Therefore, we expect $\chi^2_{dof}$ to be 1, to within a few $\sqrt{\frac{2}{N-1}}$. Student's $t$ DistributionAnother distribution that we'll see later is the Student's $t$ Distribution.If you have a sample of $N$ measurements, $\{x_i\}$, drawn from a Gaussian distribution, $\mathscr{N}(\mu,\sigma)$, and you apply the transform$$t = \frac{\overline{x}-\mu}{s/\sqrt{N}},$$then $t$ will be distributed according to Student's $t$ with the following pdf (for $k$ degrees of freedom): $$p(x|k) = \frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi k} \Gamma(\frac{k}{2})} \left(1+\frac{x^2}{k}\right)^{-\frac{k+1}{2}}$$As with a Gaussian, Student's $t$ is bell shaped, but has "heavier" tails.
###Code
# Execute this cell
%run code/fig_student_t_distribution.py
###Output
_____no_output_____
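###Markdown
As a quick simulation of the $\chi^2_{\rm dof}$ statement above: drawing many sets of $N$ Gaussian samples and computing $\chi^2_{\rm dof}$ for each should give values scattered around 1 with a width of roughly $\sqrt{2/(N-1)}$ (here $N=20$, chosen arbitrarily).
###Code
# chi^2_dof computed from N Gaussian samples should average ~1 with scatter ~sqrt(2/(N-1))
import numpy as np
N = 20 # arbitrary sample size
samples = np.random.normal(0, 1, size=(100000, N))
chi2_dof = ((samples - samples.mean(axis=1, keepdims=True))**2).sum(axis=1) / (N - 1)
print(chi2_dof.mean()) # ~1
print(chi2_dof.std(), np.sqrt(2/(N - 1))) # comparable
###Output
_____no_output_____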
###Markdown
What's the point?The point is that we are going to make some measurement. And we will want to know how likely it is that we would get that measurement in our experiment as compared to random chance. To determine that we need to know the shape of the distribution. Let's say that we find that $x=6$. If our data is $\chi^2$ distributed with 2 degrees of freedom, then we would integrate the $k=2$ curve above from 6 to $\infty$ to determine how likely it is that we would have gotten 6 or larger by chance. If our distribution was instead $t$ distributed, we would get a *very* different answer. Note that it is important that you decide *ahead of time* what the metric will be for deciding whether this result is significant or not. More on this later, but see [this article](http://fivethirtyeight.com/features/science-isnt-broken/). Central Limit TheoremOne of the reasons that a Gaussian (or Normal) Distribution is so common is because of the **Central Limit Theorem**. It says that for an arbitrary distribution, $h(x)$, that has a well-defined mean, $\mu$, and standard deviation, $\sigma$, the mean of $N$ values \{$x_i$\} drawn from the distribution will follow a Gaussian Distribution with $\mathscr{N}(\mu,\sigma/\sqrt{N})$. (A Cauchy distribution is one example where this fails.)This theorem is the foundation for performing repeat measurements in order to improve the accuracy of one's experiment. It is telling us something about the *shape* of the distribution that we get when averaging. The **Law of Large Numbers** further says that the sample mean will converge to the distribution mean as $N$ increases. Personally, I always find this a bit confusing (or at least I forget how it works). So, let's look at it in detail.Start by plotting a normal distribution with $\mu=0.5$ and $\sigma=1/\sqrt{12}/\sqrt{2}$.Now take `N=2` draws using the `np.random.random` distribution and plot them as a rug plot. Do that a couple of times (e.g., keep hitting Ctrl-Enter in the cell).
###Code
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import norm
N=____ # Number of draws
mu=____ # Location
sigma = 1.0/np.sqrt(12)/np.sqrt(N) # width of the mean of N draws scales as sigma/sqrt(N) (Central Limit Theorem)
u = np.____(____,____,1000) # Array to sample the space
dist = ____(____,____) # Complete
plt.plot(____,____.____(____)) # Complete
x = np.____.____(____) # Two random draws
plt.plot(x, 0*x, '|', markersize=50)
plt.xlabel('x')
plt.ylabel('pdf')
###Output
_____no_output_____
###Markdown
Now let's average those two draws and plot the result (in the same panel). Do it as a histogram for 1,000,000 samples (of 2 each). Use a stepfilled histogram that is normalized with 50% transparency and 100 bins.
###Code
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import norm
N=____ # Number of draws
mu=____ # Location
sigma =____ # Scale factor
u = np.____(____,____,1000) # Array to sample the space
dist = ____(____,____) # Complete
plt.plot(____,____.____(____)) # Complete
x = np.____.____(____) # N random draws
plt.plot(x, 0*x, '|', markersize=50)
plt.xlabel('x')
plt.ylabel('pdf')
# Add a histogram that is the mean of 1,000,000 draws
yy = []
for i in np.arange(100000):
xx = np.random.random(N) # N random draws
yy.append(____) # Append average of those random draws to the end of the array
_ = plt.hist(yy,bins=100,histtype='stepfilled', alpha=0.5, density=True)
###Output
_____no_output_____
###Markdown
Now instead of averaging 2 draws, average 3. Then do it for 10. Then for 100. Each time for 1,000,000 samples.
###Code
# Copy your code from above and edit accordingly (or just edit your code from above)
###Output
_____no_output_____
###Markdown
For 100 you will note that your draws are clearly sampling the full range, but the means of those draws are in a *much* more restricted range. Moreover, they are very closely following a Normal Distribution. This is the power of the Central Limit Theorem. We'll see this more later when we talk about **maximum likelihood**.By the way, if your code is ugly, you can run the following cell to reproduce Ivezic, Figure 3.20, which nicely illustrates this in one plot.
###Code
# Execute this cell
%run code/fig_central_limit.py
###Output
_____no_output_____
###Markdown
If you are confused, then watch this video from the Khan Academy:[https://www.khanacademy.org/math/statistics-probability/sampling-distributions-library/sample-means/v/central-limit-theorem](https://www.khanacademy.org/math/statistics-probability/sampling-distributions-library/sample-means/v/central-limit-theorem) Bivariate and Multivariate Distribution FunctionsUp to now we have been dealing with one-dimensional distribution functions. Let's now consider a two dimensional distribution $h(x,y)$ where $$\int_{-\infty}^{\infty}dx\int_{-\infty}^{\infty}h(x,y)dy = 1.$$ $h(x,y)$ is telling us the probability that $x$ is between $x$ and $dx$ and *also* that $y$ is between $y$ and $dy$.Then we have the following definitions:$$\sigma^2_x = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}(x-\mu_x)^2 h(x,y) dx dy$$$$\sigma^2_y = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}(y-\mu_y)^2 h(x,y) dx dy$$$$\mu_x = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}x h(x,y) dx dy$$$$\sigma_{xy} = Cov(x,y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}(x-\mu_x) (y-\mu_y) h(x,y) dx dy$$If $x$ and $y$ are uncorrelated, then we can treat the system as two independent 1-D distributions. This means that choosing a range on one variable has no effect on the distribution of the other. We can write a 2-D Gaussian pdf as$$p(x,y|\mu_x,\mu_y,\sigma_x,\sigma_y,\sigma_{xy}) = \frac{1}{2\pi \sigma_x \sigma_y \sqrt{1-\rho^2}} \exp\left(\frac{-z^2}{2(1-\rho^2)}\right),$$where $$z^2 = \frac{(x-\mu_x)^2}{\sigma_x^2} + \frac{(y-\mu_y)^2}{\sigma_y^2} - 2\rho\frac{(x-\mu_x)(y-\mu_y)}{\sigma_x\sigma_y},$$with $$\rho = \frac{\sigma_{xy}}{\sigma_x\sigma_y}$$as the (dimensionless) correlation coefficient.If $x$ and $y$ are perfectly correlated then $\rho=\pm1$ and if they are uncorrelated, then $\rho=0$. The pdf is now not a histogram, but rather a series of contours in the $x-y$ plane. These are centered at $(x=\mu_x, y=\mu_y)$ and are tilted at angle $\alpha$, which is given by$$\tan(2 \alpha) = 2\rho\frac{\sigma_x\sigma_y}{\sigma_x^2-\sigma_y^2} = 2\frac{\sigma_{xy}}{\sigma_x^2-\sigma_y^2}.$$For example (Ivezic, Figure 3.22): We can define new coordinate axes that are aligned with the minimum and maximum widths of the distribution. These are called the **principal axes** and are given by$$P_1 = (x-\mu_x)\cos\alpha + (y-\mu_y)\sin\alpha,$$and$$P_2 = -(x-\mu_x)\sin\alpha + (y-\mu_y)\cos\alpha.$$The widths in this coordinate system are$$\sigma^2_{1,2} = \frac{\sigma_x^2+\sigma_y^2}{2}\pm\sqrt{\left(\frac{\sigma_x^2-\sigma_y^2}{2}\right)^2 + \sigma^2_{xy}}.$$Note that the correlation vanishes in this coordinate system (by definition) and the bivariate Gaussian is just a product of two univariate Gaussians. This concept will be crucial for understanding Principal Component Analysis when we get to Chapter 7, where PCA extends this idea to even more dimensions. In the univariate case we used $\overline{x}$ and $s$ to *estimate* $\mu$ and $\sigma$. In the bivariate case we estimate 5 parameters: $(\overline{x},\overline{y},s_x,s_y,s_{xy})$. As with the univariate case, it is important to realize that outliers can bias these estimates and that it may be more appropriate to use the median rather than the mean as a more robust estimator for $\mu_x$ and $\mu_y$. Similarly we want robust estimators for the other parameters of the fit. We won't go into that in detail right now, but see Ivezic, Figure 3.23 for an example: For an example of how to generate a bivariate distribution and plot confidence contours, execute the following cell.
###Code
# Base code drawn from Ivezic, Figure 3.22, edited by G. Richards to simplify the example
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.patches import Ellipse
from astroML.stats.random import bivariate_normal
from astroML.stats import fit_bivariate_normal
mux = 0
muy = 0
sigx = 1.0
sigy = 1.0
sigxy = 0.3
#------------------------------------------------------------
# Create 10,000 points from a multivariate normal distribution
mean = [mux, muy]
cov = [[sigx, sigxy], [sigxy, sigy]]
x, y = np.random.multivariate_normal(mean, cov, 10000).T
# Fit those data with a bivariate normal distribution
mean, sigma_x, sigma_y, alpha = fit_bivariate_normal(x,y)
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(111)
plt.scatter(x,y,s=2,edgecolor='none')
# draw 1, 2, 3-sigma ellipses over the distribution
for N in (1, 2, 3):
ax.add_patch(Ellipse(mean, N * sigma_x, N * sigma_y, angle=alpha * 180./np.pi, lw=1, ec='k', fc='none'))
###Output
_____no_output_____
###Markdown
Distributions and EstimatorsG. Richards(2016, 2018, 2020)Resources for this material include Ivezic Sections 3.3-3.5, Karen Leighly's [Bayesian Statistics Lecture](http://seminar.ouml.org/lectures/bayesian-statistics/), and [Bevington's book](http://hosting.astro.cornell.edu/academics/courses/astro3310/Books/Bevington_opt.pdf). DistributionsIf we are attempting to characterize our data in a way that is **parameterized**, then we need a functional form or a **distribution**. There are many naturally occurring distributions. The book goes through quite a few of them. Here we'll just talk about a few basic ones to get us started. Uniform DistributionThe uniform distribution is perhaps more commonly called a "top-hat" or a "box" distribution. It is specified by a mean, $\mu$, and a width, $W$, where$$p(x|\mu,W) = \frac{1}{W}$$over the range $|x-\mu|\le \frac{W}{2}$ and $0$ otherwise. That says that "given $\mu$ AND $W$, the probability of $x$ is $\frac{1}{W}$" (as long as we are within a certain range).Since we are used to thinking of a Gaussian as the *only* type of distribution the concept of $\sigma$ (aside from the width) may seem strange. But $\sigma$ as mathematically defined last time applies here and is$$\sigma = \frac{W}{\sqrt{12}}.$$
###Code
# Execute this cell
# Note that if you moved the file out of the git repo, this path will be wrong
# You will need to change the path, copy the code, etc.
# Or perhaps just work in the git repo and change the name of the file before moving it later.
%matplotlib inline
%run ../code/fig_uniform_distribution.py
###Output
_____no_output_____
###Markdown
We can implement [uniform](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.uniform.htmlscipy.stats.uniform) in `scipy` as follows. We'll use the methods listed at the bottom of the link to complete the cell: `dist.rvs(size=N)` which produces `N` random draws from the distribution and `dist.pdf(x)` which returns the value of the pdf at a given $x$. First create a uniform distribution with parameters `loc=0`, `scale=2`, and `N=10`.
###Code
# Complete and execute this cell
from scipy import stats
import numpy as np
N = ____ #Complete
distU = stats.uniform(____,____) #Complete
draws = distU.rvs(____) # ten random draws
print(draws)
p = distU.pdf(____) #pdf evaluated at x=1
print(p)
###Output
_____no_output_____
###Markdown
Did you expect that answer for the pdf? Why? What would the pdf be if you changed the width to 4? Gaussian DistributionWe have already seen that the Gaussian distribution is given by$$p(x|\mu,\sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(\frac{-(x-\mu)^2}{2\sigma^2}\right).$$It is also called the **normal distribution** and can be noted by $\mathscr{N}(\mu,\sigma)$. Note that the convolution of two Gaussians results in a Gaussian. So $\mathscr{N}(\mu,\sigma)$ convolved with $\mathscr{N}(\nu,\rho)$ is $\mathscr{N}(\mu+\nu,\sqrt{\sigma^2+\rho^2})$
###Code
# Execute this cell
%run ../code/fig_gaussian_distribution.py
#Uncomment the next line and run this cell; I just want you to know that this magic function exists.
# %load ../code/fig_gaussian_distribution.py
###Output
_____no_output_____
###Markdown
Let's get some practice! See http://www.astroml.org/book_figures/chapter3/fig_gaussian_distribution.htmlIn the same manner as above, create a [normal distribution](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html?highlight=stats%20normscipy.stats.norm) with `loc=100` and `scale=15`. Produce 10 random draws and determine the probability at `x=145`.This will be helpful to us in our later example with IQ tests, since IQ is distributed (by definition) as $\mathscr{N}$(mu=100,sigma=15)
###Code
# Complete and execute this cell
distG = stats.____(____,____) # Normal distribution with mean = 100, stdev = 15
draws = ____.____(____) # 10 random draws
p = ____.____(____) # pdf evaluated at x=0
print(draws)
print(p)
###Output
_____no_output_____
###Markdown
Now let's plot that. Have the plot sample the distribution 1000 times from 0 to 200.
###Code
## Let's play with Gaussians! Or Normal distributions, N(mu,sigma)
## see http://www.astroml.org/book_figures/chapter3/fig_gaussian_distribution.html
## Example: IQ is (by definition) distributed as N(mu=100,sigma=15)
xgrid = np.linspace(____,____,____) # generate distribution for a uniform grid of x values
____ = distG.pdf(____) # this is a function of xgrid
# actual plotting
fig, ax = plt.subplots(figsize=(5, 3.75))
plt.plot(xgrid, gaussPDF, ls='-', c='black', label=r'$\mu=%i,\ \sigma=%i$' % (mu, sigma))
plt.xlim(0, 200)
plt.ylim(0, 0.03)
plt.xlabel('$x$')
plt.ylabel(r'$p(x|\mu,\sigma)$')
plt.title('Gaussian Distribution')
plt.legend()
###Output
_____no_output_____
###Markdown
Above we plotted the probability density function. Sometimes what you want instead is the cumulative distribution function, CDF. The CDF is the integral of the pdf from $x'=-\infty$ to $x'=x$:$${\rm CDF}(x|\mu,\sigma) = \int_{-\infty}^{x} p(x'|\mu,\sigma) dx',$$where${\rm CDF}(\infty) = 1$.You will get some practice with CDFs in the Data Camp homework assignment.
###Code
#The same as above but now with the cdf method
gaussCDF = distG.cdf(xgrid)
fig, ax = plt.subplots(figsize=(5, 3.75))
plt.plot(xgrid, gaussCDF, ls='-', c='black', label=r'$\mu=%i,\ \sigma=%i$' % (mu, sigma))
plt.xlim(0, 200)
plt.ylim(-0.01, 1.01)
plt.xlabel('$x$')
plt.ylabel(r'$CDF(x|\mu,\sigma)$')
plt.title('Gaussian Distribution')
plt.legend(loc=4)
###Output
_____no_output_____
###Markdown
What fraction of people have IQ>145? First let's determine that from the theoretical CDF, using the `sf()` method of [scipy.stats.norm](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html). Then we'll try a brute force method, simulating it using `sampleSize=1000000`.
###Code
#What fraction of people have IQ>145?
#cdf is fraction of people with IQ<=145, 1-cdf is IQ>145
#sf (survival function) is 1-cdf, which is what we want in this case
sf145 = distG.____(____)
print(sf145)
###Output
_____no_output_____
###Markdown
Did you get 0.13% (or 0.0013)?Basically this is doing the CDF integral in the opposite direction. Start at $x=\infty$ and integrate down the curve to the value of interest (here $x=145$), then report the fraction of the time values in that range (145 and above) are expected given the known distribution.
###Code
# let's now look at the same problems using a sample of a million points drawn from N(100,15)
sampleSize=____
gaussSample = distG.rvs(sampleSize)
smartOnes = gaussSample[gaussSample>____] #Extract only those draws with >145
FracSmartOnes = 1.0*np.size(smartOnes)/sampleSize
print(FracSmartOnes)
###Output
_____no_output_____
###Markdown
How about the IQ that corresponds to "one in a million"? Here we want the inverse survival function, `isf()`. Note that the inverse of the cdf is `ppf()`, the percent point function.
###Code
OneInAMillionVal = distG.____(____) #Complete
print('IQ=', OneInAMillionVal)
#What is another way you could estimate this with `gaussSample`?
#Think about how you can take advantage of the sampleSize we used above.
print(____)
###Output
_____no_output_____
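###Markdown
An aside that may help here: `ppf()` inverts the CDF and `isf()` inverts the survival function, so `ppf(q)` and `isf(1-q)` return the same value; for a standard normal the 97.5th percentile is the familiar $\approx 1.96$.
###Code
# ppf inverts the cdf, isf inverts the survival function, so ppf(q) == isf(1-q)
from scipy import stats
print(stats.norm.ppf(0.975)) # ~1.96
print(stats.norm.isf(0.025)) # same value
###Output
_____no_output_____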
###Markdown
Gaussian confidence levelsThe probability of a measurement drawn from a Gaussian distribution that is between $\mu-a$ and $\mu+b$ is$$\int_{\mu-a}^{\mu+b} p(x|\mu,\sigma) dx.$$For $a=b=1\sigma$, we get the familiar result of 68.3%. For $a=b=2\sigma$ it is 95.4%. So we refer to the range $\mu \pm 1\sigma$ and $\mu \pm 2\sigma$ as the 68% and 95% **confidence limits**, respectively.Note that if your distribution is not Gaussian, then these confidence intervals will be different! Can you figure out what the probability is for $-2\sigma, +4\sigma$? Check to see that you get the right answer for the cases above first!You'll get some practice with this in a Data Camp lesson next week (after we talk about Bootstrap and Jackknife estimates).
###Code
# Complete and execute this cell
N=10000
mu=0
sigma=1
distN = ____.____(mu, sigma) # Complete
xgrid = np.linspace(____,____,N) # Complete
dx = (xgrid.max()-xgrid.min())/N
prob = distN.pdf(xgrid)*dx
print(prob.sum())
###Output
_____no_output_____
###Markdown
We could do this a number of different ways. I did it this way so that we could see what is going on. Basically it is a simple Riemann sum: computing the height and the width of each rectangle and summing them up. We'll do it below with the CDF and check that the answer is the same.
###Code
upper = distN.cdf(4)
lower = distN.cdf(-2)
p = upper-lower
print(p)
###Output
_____no_output_____
###Markdown
Log NormalIf $x$ is Gaussian distributed with $\mathscr{N}(\mu,\sigma)$, then $y=\exp(x)$ will have a **log-normal** distribution, where the mean of y is $\exp(\mu + \sigma^2/2)$. Try it.
###Code
# Execute this cell
import numpy as np
x = stats.norm(0,1) # mean = 0, stdev = 1
y = np.exp(x.rvs(100))
print(y.mean())
print(x)
###Output
_____no_output_____
###Markdown
The catch here is that stats.norm(0,1) returns an *object* and not something that we can just do math on in the expected manner. What *can* you do with it? Try ```dir(x)``` to get a list of all the methods and properties.
###Code
import math
# Complete and execute this cell
distLN = stats.norm(0,1) # mean = 0, stdev = 1
x = distLN.rvs(10000)
y = np.exp(x)
print(math.exp(0+1*1/2.0), y.mean())
###Output
_____no_output_____
###Markdown
$\chi^2$ DistributionWe'll run into the $\chi^2$ distribution when we talk about Maximum Likelihood in the next chapter.If we have a Gaussian distribution with values ${x_i}$, and we scale and normalize them according to$$z_i = \frac{x_i-\mu}{\sigma},$$then the sum of squares, $Q$ $$Q = \sum_{i=1}^N z_i^2,$$will follow the $\chi^2$ distribution. The *number of degrees of freedom*, $k$, is given by the number of data points, $N$ (minus any constraints). The pdf of $Q$ given $k$ defines $\chi^2$ and is given by$$p(Q|k)\equiv \chi^2(Q|k) = \frac{1}{2^{k/2}\Gamma(k/2)}Q^{k/2-1}\exp(-Q/2),$$where $Q>0$ and the $\Gamma$ function would just be the usual factorial function if we were dealing with integers, but here we have half integers.This is ugly, but it is really just a formula like anything else. Note that the shape of the distribution *only* depends on the sample size $N=k$ and not on $\mu$ or $\sigma$.
###Code
# Execute this cell
%run ../code/fig_chi2_distribution.py
###Output
_____no_output_____
###Markdown
Chi-squared per degree of freedomIn practice we frequently divide $\chi^2$ by the number of degrees of freedom, and work with:$$\chi^2_{dof} = \frac{1}{N-1} \sum_{i=1}^N \left(\frac{x_i-\overline{x}}{\sigma}\right)^2$$which (for large $k$) is distributed as$$ p(\chi^2_{dof}) \sim \mathscr{N}\left(1, \sqrt{\frac{2}{N-1}}\right) $$(where $k = N-1$, and $N$ is the number of samples). Therefore, we expect $\chi^2_{dof}$ to be 1, to within a few $\sqrt{\frac{2}{N-1}}$. See the [Khan Academy's chi-square distribution introduction](https://www.khanacademy.org/math/statistics-probability/inference-categorical-data-chi-square-tests/chi-square-goodness-of-fit-tests/v/chi-square-distribution-introduction), which is actually somewhat confusing but shows how to use probability tables (starting at 7:50) to get so-called $p$-values.If you are so inclined, you could look ahead to next week a bit and watch the video on Least-squares regressionhttps://www.khanacademy.org/math/statistics-probability/describing-relationships-quantitative-data/regression-library/v/introduction-to-residuals-and-least-squares-regression Student's $t$ DistributionAnother distribution that we'll see later is the Student's $t$ Distribution.If you have a sample of $N$ measurements, $\{x_i\}$, drawn from a Gaussian distribution, $\mathscr{N}(\mu,\sigma)$, and you apply the transform$$t = \frac{\overline{x}-\mu}{s/\sqrt{N}},$$then $t$ will be distributed according to Student's $t$ with the following pdf (for $k$ degrees of freedom): $$p(x|k) = \frac{\Gamma(\frac{k+1}{2})}{\sqrt{\pi k} \Gamma(\frac{k}{2})} \left(1+\frac{x^2}{k}\right)^{-\frac{k+1}{2}}$$As with a Gaussian, Student's $t$ is bell shaped, but has "heavier" tails.Note the similarity between $t$ and $z$ for a Gaussian, which reflects the difference between estimates of the mean and standard deviation and their true values. (Which means that often you should be using a $t$-distribution instead of a normal distribution, but the difference goes away for large enough $k$, so shouldn't matter for "Big Data").
###Code
# Execute this cell
%run ../code/fig_student_t_distribution.py
###Output
_____no_output_____
###Markdown
What's the point?The point is that we are going to make some measurement. And we will want to know how likely it is that we would get that measurement in our experiment as compared to random chance. To determine that we need to know the shape of the distribution. Let's say that we find that $x=6$. If our data is $\chi^2$ distributed with 2 degrees of freedom, then we would integrate the $k=2$ curve above from 6 to $\infty$ to determine how likely it is that we would have gotten 6 or larger by chance. If our distribution was instead $t$ distributed, we would get a *very* different answer. Note that it is important that you decide *ahead of time* what the metric will be for deciding whether this result is significant or not. More on this later, but see [this article](http://fivethirtyeight.com/features/science-isnt-broken/). Central Limit TheoremOne of the reasons that a Gaussian (or Normal) Distribution is so common is because of the **Central Limit Theorem**. It says that for an arbitrary distribution, $h(x)$, that has a well-defined mean, $\mu$, and standard deviation, $\sigma$, the mean of $N$ values \{$x_i$\} drawn from the distribution will follow a Gaussian Distribution with $\mathscr{N}(\mu,\sigma/\sqrt{N})$. (A Cauchy distribution is one example where this fails.)This theorem is the foundation for performing repeat measurements in order to improve the accuracy of one's experiment. It is telling us something about the *shape* of the distribution that we get when averaging. The **Law of Large Numbers** further says that the sample mean will converge to the distribution mean as $N$ increases. Personally, I always find this a bit confusing (or at least I forget how it works). So, let's look at it in detail.Start by plotting a normal distribution with $\mu=0.5$ and $\sigma=1/\sqrt{12}/\sqrt{2}$.Now take `N=2` draws using the `np.random.random` distribution and plot them as a rug plot. Do that a couple of times (e.g., keep hitting Ctrl-Enter in the cell).
###Code
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import norm
N=____ # Number of draws
mu=____ # Location
sigma = 1.0/np.sqrt(12)/np.sqrt(N) # width of the mean of N draws scales as sigma/sqrt(N) (Central Limit Theorem)
xgrid = ____.____(____,____,1000) # Array to sample the space
distG = stats.norm(mu,sigma) # Complete
plt.plot(____,distG.____(____)) # Complete
x = np.random.____(____) # Two random draws
plt.plot(x, 0*x, '|', markersize=50)
plt.xlabel('x')
plt.ylabel('pdf')
###Output
_____no_output_____
###Markdown
Now let's average those two draws and plot the result (in the same panel). Do it as a histogram for 1,000,000 samples (of 2 each). Use a stepfilled histogram that is normalized with 50% transparency and 100 bins.
###Code
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import norm
N=____ # Number of draws
mu=____ # Location
sigma = 1.0/np.sqrt(12)/np.sqrt(N) # Scale factor
xgrid = ____.____(____,____,____) # Array to sample the space
distG = ____.____(____,____) # Complete
plt.plot(____,____.____(____)) # Complete
x = np.____.____(____) # N random draws
plt.plot(x, 0*x, '|', markersize=50)
plt.xlabel('x')
plt.ylabel('pdf')
# Add a histogram that is the mean of 1,000,000 draws
yy = []
for i in np.arange(100000):
xx = np.random.random(N) # N random draws
yy.append(____.____()) # Append average of those random draws to the end of the array
_ = plt.hist(yy,bins=100,histtype='stepfilled', alpha=0.5, density=True)
###Output
_____no_output_____
###Markdown
Now instead of averaging 2 draws, average 3. Then do it for 10. Then for 100. Each time for 1,000,000 samples.
###Code
# Copy your code from above and edit accordingly (or just edit your code from above)
###Output
_____no_output_____
###Markdown
For 100 you will note that your draws are clearly sampling the full range, but the means of those draws are in a *much* more restricted range. Moreover, they are very closely following a Normal Distribution. This is the power of the Central Limit Theorem. We'll see this more later when we talk about **maximum likelihood**.By the way, if your code is ugly, you can run the following cell to reproduce Ivezic, Figure 3.20, which nicely illustrates this in one plot.
###Code
%run ../code/fig_central_limit.py
###Output
_____no_output_____
###Markdown
If you are confused, then watch this video from the Khan Academy:[https://www.khanacademy.org/math/statistics-probability/sampling-distributions-library/sample-means/v/central-limit-theorem](https://www.khanacademy.org/math/statistics-probability/sampling-distributions-library/sample-means/v/central-limit-theorem) Bivariate and Multivariate Distribution FunctionsUp to now we have been dealing with one-dimensional distribution functions. Let's now consider a two dimensional distribution $h(x,y)$ where $$\int_{-\infty}^{\infty}dx\int_{-\infty}^{\infty}h(x,y)dy = 1.$$ $h(x,y)$ is telling us the probability that $x$ is between $x$ and $dx$ and *also* that $y$ is between $y$ and $dy$.Then we have the following definitions:$$\sigma^2_x = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}(x-\mu_x)^2 h(x,y) dx dy$$$$\sigma^2_y = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}(y-\mu_y)^2 h(x,y) dx dy$$$$\mu_x = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}x h(x,y) dx dy$$$$\sigma_{xy} = Cov(x,y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}(x-\mu_x) (y-\mu_y) h(x,y) dx dy$$If $x$ and $y$ are uncorrelated, then we can treat the system as two independent 1-D distributions. This means that choosing a range on one variable has no effect on the distribution of the other. We can write a 2-D Gaussian pdf as$$p(x,y|\mu_x,\mu_y,\sigma_x,\sigma_y,\sigma_{xy}) = \frac{1}{2\pi \sigma_x \sigma_y \sqrt{1-\rho^2}} \exp\left(\frac{-z^2}{2(1-\rho^2)}\right),$$where $$z^2 = \frac{(x-\mu_x)^2}{\sigma_x^2} + \frac{(y-\mu_y)^2}{\sigma_y^2} - 2\rho\frac{(x-\mu_x)(y-\mu_y)}{\sigma_x\sigma_y},$$with $$\rho = \frac{\sigma_{xy}}{\sigma_x\sigma_y}$$as the (dimensionless) correlation coefficient.If $x$ and $y$ are perfectly correlated then $\rho=\pm1$ and if they are uncorrelated, then $\rho=0$. The pdf is now not a histogram, but rather a series of contours in the $x-y$ plane. These are centered at $(x=\mu_x, y=\mu_y)$ and are tilted at angle $\alpha$, which is given by$$\tan(2 \alpha) = 2\rho\frac{\sigma_x\sigma_y}{\sigma_x^2-\sigma_y^2} = 2\frac{\sigma_{xy}}{\sigma_x^2-\sigma_y^2}.$$For example (Ivezic, Figure 3.22): We can define new coordinate axes that are aligned with the minimum and maximum widths of the distribution. These are called the **principal axes** and are given by$$P_1 = (x-\mu_x)\cos\alpha + (y-\mu_y)\sin\alpha,$$and$$P_2 = -(x-\mu_x)\sin\alpha + (y-\mu_y)\cos\alpha.$$The widths in this coordinate system are$$\sigma^2_{1,2} = \frac{\sigma_x^2+\sigma_y^2}{2}\pm\sqrt{\left(\frac{\sigma_x^2-\sigma_y^2}{2}\right)^2 + \sigma^2_{xy}}.$$Note that the correlation vanishes in this coordinate system (by definition) and the bivariate Gaussian is just a product of two univariate Gaussians. This concept will be crucial for understanding Principal Component Analysis when we get to Chapter 7, where PCA extends this idea to even more dimensions. In the univariate case we used $\overline{x}$ and $s$ to *estimate* $\mu$ and $\sigma$. In the bivariate case we estimate 5 parameters: $(\overline{x},\overline{y},s_x,s_y,s_{xy})$. As with the univariate case, it is important to realize that outliers can bias these estimates and that it may be more appropriate to use the median rather than the mean as a more robust estimator for $\mu_x$ and $\mu_y$. Similarly we want robust estimators for the other parameters of the fit. We won't go into that in detail right now, but see Ivezic, Figure 3.23 for an example: For an example of how to generate a bivariate distribution and plot confidence contours, execute the following cell.
###Code
# Base code drawn from Ivezic, Figure 3.22, edited by G. Richards to simplify the example
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.patches import Ellipse
from astroML.stats.random import bivariate_normal
from astroML.stats import fit_bivariate_normal
mux = 0
muy = 0
sigx = 1.0
sigy = 1.0
sigxy = 0.3
#------------------------------------------------------------
# Create 10,000 points from a multivariate normal distribution
mean = [mux, muy]
cov = [[sigx, sigxy], [sigxy, sigy]]
x, y = np.random.multivariate_normal(mean, cov, 10000).T
# Fit those data with a bivariate normal distribution
mean, sigma_x, sigma_y, alpha = fit_bivariate_normal(x,y)
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(111)
plt.scatter(x,y,s=2,edgecolor='none')
# draw 1, 2, 3-sigma ellipses over the distribution
for N in (1, 2, 3):
ax.add_patch(Ellipse(mean, N * sigma_x, N * sigma_y, angle=alpha * 180./np.pi, lw=1, ec='k', fc='none'))
###Output
_____no_output_____ |
exploration/Sending Data to Postgres.ipynb | ###Markdown
Creating DataFrame of all test files
###Code
import pandas as pd
import numpy as np
import sys
sys.path.append('..')
from dis_ds import parsing
all_files = !ls ../test_data
full_path_all_files = ['../test_data/' + a for a in all_files]
all_files_df = parsing.parse_file_list(full_path_all_files)
all_files_df[:1000]
import xlsxwriter
writer = pd.ExcelWriter('tfldata.xlsx', engine='xlsxwriter')
all_files_df.to_excel(writer, sheet_name="Sheet 1")
writer.save() # write the Excel file to disk
all_files_df.to_pickle('all_files_df.pkl') # DataFrame.save is deprecated; to_pickle takes a file path
###Output
/Users/pivotal/anaconda/envs/python3.4/lib/python3.4/site-packages/pandas/core/generic.py:1000: FutureWarning: save is deprecated, use to_pickle
warnings.warn("save is deprecated, use to_pickle", FutureWarning)
###Markdown
Importing Test Files to PostgreSQL
###Code
%save?
from sqlalchemy import create_engine
engine = create_engine('postgres://pmgigyko:[email protected]:5432/pmgigyko')
all_files_df.to_sql('disruptions_test1',engine)
###Output
_____no_output_____
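###Markdown
Quick check that the upload worked (this assumes the `disruptions_test1` table and `engine` from the cell above): read a few rows back with `pd.read_sql`.
###Code
# Read back a few rows from the table written above (assumes engine from the earlier cell)
import pandas as pd
check_df = pd.read_sql('SELECT * FROM disruptions_test1 LIMIT 5', engine)
print(check_df)
###Output
_____no_output_____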
###Markdown
Creating DataFrame of all Files
###Code
df_feb = parsing.parse_s3_files('tfl_api_line_mode_status_tube_2015-02')
df_march=parsing.parse_s3_files('tfl_api_line_mode_status_tube_2015-03')
df_april=parsing.parse_s3_files('tfl_api_line_mode_status_tube_2015-04')
df_may=parsing.parse_s3_files('tfl_api_line_mode_status_tube_2015-05')
df_may_full= parsing.parse_s3_files('tfl_api_line_mode_status_tube_2015-05').to_string()
frames = (df_feb,df_march,df_april,df_may)
total_df = pd.concat(frames)
total_df
parsing.parse_s3_files('tfl_api_line_mode_status_tube_2015-05')
s3_files_df=parsing.parse_s3_files('tfl_api_line_mode_status_tube_2015-')
s3_files_df
parsing.parse_s3_files('tfl_api_line_mode_status_tube_2015-09-24_07:16:27')
parsing.parse_s3_files('tfl_api_line_mode_status_tube_2015-09-12_07:16:23')
s3_files_df.describe()
s3_files_df
###Output
_____no_output_____ |
Copy_of_LS_DS_243_Select_models_and_parameters_Jason_Meil_DS3.ipynb | ###Markdown
_Lambda School Data Science — Practicing & Understanding Predictive Modeling_ Hyperparameter Optimization Today we'll use this process: "A universal workflow of machine learning"_Excerpt from Francois Chollet, [Deep Learning with Python](https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/README.md), Chapter 4: Fundamentals of machine learning_ **1. Define the problem at hand and the data on which you’ll train.** Collect this data, or annotate it with labels if need be.**2. Choose how you’ll measure success on your problem.** Which metrics will you monitor on your validation data?**3. Determine your evaluation protocol:** hold-out validation? K-fold validation? Which portion of the data should you use for validation?**4. Develop a first model that does better than a basic baseline:** a model with statistical power.**5. Develop a model that overfits.** The universal tension in machine learning is between optimization and generalization; the ideal model is one that stands right at the border between underfitting and overfitting; between undercapacity and overcapacity. To figure out where this border lies, first you must cross it.**6. Regularize your model and tune its hyperparameters, based on performance on the validation data.** Repeatedly modify your model, train it, evaluate on your validation data (not the test data, at this point), modify it again, and repeat, until the model is as good as it can get. **Iterate on feature engineering: add new features, or remove features that don’t seem to be informative.** Once you’ve developed a satisfactory model configuration, you can **train your final production model on all the available data (training and validation) and evaluate it one last time on the test set.** 1. Define the problem at hand and the data on which you'll train We'll apply the workflow to a [project from _Python Data Science Handbook_](https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.htmlExample:-Predicting-Bicycle-Traffic) by Jake VanderPlas:> **Predicting Bicycle Traffic**> As an example, let's take a look at whether we can predict the number of bicycle trips across Seattle's Fremont Bridge based on weather, season, and other factors.> We will join the bike data with another dataset, and try to determine the extent to which weather and seasonal factors—temperature, precipitation, and daylight hours—affect the volume of bicycle traffic through this corridor. Fortunately, the NOAA makes available their daily [weather station data](http://www.ncdc.noaa.gov/cdo-web/search?datasetid=GHCND) (I used station ID USW00024233) and we can easily use Pandas to join the two data sources.> Let's start by loading the two datasets, indexing by date: So this is a regression problem, not a classification problem. We'll define the target, choose an evaluation metric, and choose models that are appropriate for regression problems. Download data
###Code
!curl -o FremontBridge.csv https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD
!wget https://raw.githubusercontent.com/jakevdp/PythonDataScienceHandbook/master/notebooks/data/BicycleWeather.csv
###Output
--2019-05-20 20:59:01-- https://raw.githubusercontent.com/jakevdp/PythonDataScienceHandbook/master/notebooks/data/BicycleWeather.csv
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 234945 (229K) [text/plain]
Saving to: ‘BicycleWeather.csv’
BicycleWeather.csv 0%[ ] 0 --.-KB/s
BicycleWeather.csv 100%[===================>] 229.44K --.-KB/s in 0.05s
2019-05-20 20:59:01 (4.56 MB/s) - ‘BicycleWeather.csv’ saved [234945/234945]
###Markdown
Load data
###Code
# Modified from cells 15, 16, and 20, at
# https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.html#Example:-Predicting-Bicycle-Traffic
import pandas as pd
# Download and join data into a dataframe
def load():
fremont_bridge = 'https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD'
bicycle_weather = 'https://raw.githubusercontent.com/jakevdp/PythonDataScienceHandbook/master/notebooks/data/BicycleWeather.csv'
counts = pd.read_csv(fremont_bridge, index_col='Date', parse_dates=True,
infer_datetime_format=True)
weather = pd.read_csv(bicycle_weather, index_col='DATE', parse_dates=True,
infer_datetime_format=True)
daily = counts.resample('d').sum()
daily['Total'] = daily.sum(axis=1)
daily = daily[['Total']] # remove other columns
weather_columns = ['PRCP', 'SNOW', 'SNWD', 'TMAX', 'TMIN', 'AWND']
daily = daily.join(weather[weather_columns], how='inner')
# Make a feature for yesterday's total
daily['Total_yesterday'] = daily.Total.shift(1)
daily = daily.drop(index=daily.index[0])
return daily
daily = load()
###Output
_____no_output_____
###Markdown
First fast look at the data- What's the shape?- What's the date range?- What's the target and the features? **Shape**
###Code
daily.shape
###Output
_____no_output_____
###Markdown
**Date Range**
###Code
daily.head()
daily.tail()
daily.info()
# There is no date column it is the index
daily.columns
###Output
_____no_output_____
###Markdown
Target- Total : Daily total number of bicycle trips across Seattle's Fremont BridgeFeatures- Date (index) : from 2012-10-04 to 2015-09-01- Total_yesterday : Total trips yesterday- PRCP : Precipitation (1/10 mm)- SNOW : Snowfall (1/10 mm)- SNWD : Snow depth (1/10 mm)- TMAX : Maximum temperature (1/10 Celsius)- TMIN : Minimum temperature (1/10 Celsius)- AWND : Average daily wind speed (1/10 meters per second) 2. Choose how you’ll measure success on your problem.Which metrics will you monitor on your validation data?This is a regression problem, so we need to choose a regression [metric](https://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values).I'll choose mean absolute error.
###Code
# I could also use mean squared error
from sklearn.metrics import mean_absolute_error
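# Quick illustration (added): MAE is the average absolute difference between
# predictions and true values, in the same units as the target.
# With made-up numbers: (|100-110| + |200-190| + |300-330|) / 3 = 16.67
mean_absolute_error([100, 200, 300], [110, 190, 330])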
###Output
_____no_output_____
###Markdown
3. Determine your evaluation protocol We're doing model selection, hyperparameter optimization, and performance estimation. So generally we have two ideal [options](https://sebastianraschka.com/images/blog/2018/model-evaluation-selection-part4/model-eval-conclusions.jpg) to choose from:- 3-way holdout method (train/validation/test split)- Cross-validation with independent test setI'll choose cross-validation with independent test set. Scikit-learn makes cross-validation convenient for us!Specifically, I will use random shuffled cross validation to train and validate, but I will hold out an "out-of-time" test set, from the last 100 days of data:
###Code
# Everything except the last 100 days
train = daily[:-100]
# The last 100 days
test = daily[-100:]
# Checking out the shapes of my info
train.shape, test.shape
X_train = train.drop(columns='Total')
y_train = train['Total']
X_test = test.drop(columns='Total')
y_test = test['Total']
X_train.shape, y_train.shape, X_test.shape, y_test.shape
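# Added sketch: the text above mentions random shuffled cross-validation.
# One way to make the splits explicitly shuffled (instead of sklearn's default
# ordered KFold) would be to pass a KFold object as the `cv` argument later on:
from sklearn.model_selection import KFold
shuffled_cv = KFold(n_splits=3, shuffle=True, random_state=42)
# e.g. cross_validate(model, X_train, y_train, cv=shuffled_cv, ...)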
###Output
_____no_output_____
###Markdown
4. Develop a first model that does better than a basic baseline Look at the target's distribution and descriptive stats
###Code
%matplotlib inline
import seaborn as sns
sns.distplot(y_train, color='blue');
y_train.describe()
###Output
_____no_output_____
###Markdown
Basic baseline 1
###Code
y_pred = [y_train.mean()] * len(y_train)
mean_absolute_error(y_train, y_pred)
###Output
_____no_output_____
###Markdown
Basic baseline 2
###Code
y_pred = X_train['Total_yesterday']
mean_absolute_error(y_train, y_pred)
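# Added sketch: one more naive baseline, predicting each day with the mean of the
# previous 7 days of the target (shifted so it only uses past information).
rolling_pred = y_train.shift(1).rolling(7).mean()
mask = rolling_pred.notnull()
mean_absolute_error(y_train[mask], rolling_pred[mask])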
###Output
_____no_output_____
###Markdown
First model that does better than a basic baseline https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html
###Code
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_validate
# 2 train and 1 test -
# train on a & b and test c
scores = cross_validate(LinearRegression(), X_train, y_train,
scoring='neg_mean_absolute_error', cv=3,
return_train_score=True, return_estimator=True)
pd.DataFrame(scores)
scores['test_score'].mean()  # scoring is neg_mean_absolute_error, so this is the negative of the MAE
for i, model in enumerate(scores['estimator']):
coefficients = model.coef_
intercept = model.intercept_
feature_names = X_train.columns
print('Model from cross-validaton fold #', i)
print('Intercept', intercept)
print(pd.Series(coefficients, feature_names).to_string())
print('\n')
# Test out
# Underfitting Model
import statsmodels.api as sm
model = sm.OLS(y_train, sm.add_constant(X_train))
print(model.fit().summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: Total R-squared: 0.628
Model: OLS Adj. R-squared: 0.625
Method: Least Squares F-statistic: 230.2
Date: Mon, 20 May 2019 Prob (F-statistic): 4.80e-200
Time: 20:59:18 Log-Likelihood: -7736.8
No. Observations: 963 AIC: 1.549e+04
Df Residuals: 955 BIC: 1.553e+04
Df Model: 7
Covariance Type: nonrobust
===================================================================================
coef std err t P>|t| [0.025 0.975]
-----------------------------------------------------------------------------------
const 571.7691 93.165 6.137 0.000 388.937 754.601
PRCP -3.0616 0.396 -7.726 0.000 -3.839 -2.284
SNOW -0.0271 0.038 -0.721 0.471 -0.101 0.047
SNWD -9.1379 8.974 -1.018 0.309 -26.748 8.472
TMAX 9.4823 0.774 12.258 0.000 7.964 11.000
TMIN -4.6742 1.026 -4.555 0.000 -6.688 -2.660
AWND -3.7006 1.747 -2.119 0.034 -7.128 -0.273
Total_yesterday 0.4165 0.025 16.460 0.000 0.367 0.466
==============================================================================
Omnibus: 6.601 Durbin-Watson: 1.571
Prob(Omnibus): 0.037 Jarque-Bera (JB): 6.648
Skew: -0.187 Prob(JB): 0.0360
Kurtosis: 2.841 Cond. No. 1.09e+04
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.09e+04. This might indicate that there are
strong multicollinearity or other numerical problems.
###Markdown
5. Develop a model that overfits. "The universal tension in machine learning is between optimization and generalization; the ideal model is one that stands right at the border between underfitting and overfitting; between undercapacity and overcapacity. To figure out where this border lies, first you must cross it." —Chollet Diagram Source: https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html#Validation-curves-in-Scikit-Learn Random Forest? https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html
###Code
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_estimators=100, max_depth=20)
scores = cross_validate(model, X_train, y_train,
scoring='neg_mean_absolute_error',
cv=3, return_train_score=True,
return_estimator=True)
pd.DataFrame(scores)
-scores['test_score'].mean(), -scores['train_score'].mean()
###Output
_____no_output_____
###Markdown
Validation Curve https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html > Validation curve. Determine training and test scores for varying parameter values. This is similar to grid search with one parameter.
###Code
import numpy as np
# Modified from cell 13 at
# https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html#Validation-curves-in-Scikit-Learn
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.model_selection import validation_curve
model = RandomForestRegressor(n_estimators=100)
depth = [2, 3, 4, 5, 6]
train_score, val_score = validation_curve(
model, X_train, y_train,
param_name='max_depth', param_range=depth,
scoring='neg_mean_absolute_error', cv=3)
plt.plot(depth, np.median(train_score, 1), color='blue', label='training score')
plt.plot(depth, np.median(val_score, 1), color='red', label='validation score')
plt.legend(loc='best')
plt.xlabel('depth');
###Output
_____no_output_____
###Markdown
`RandomizedSearchCV` https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html https://scikit-learn.org/stable/modules/grid_search.html
###Code
from sklearn.model_selection import RandomizedSearchCV
param_distributions = {
'n_estimators': [100,200],
'max_depth': [4,5],
'criterion': ['mse','mae']
}
gridsearch = RandomizedSearchCV(
RandomForestRegressor(n_jobs=-1, random_state=42),
param_distributions=param_distributions, n_iter=8,
cv=3, scoring='neg_mean_absolute_error',verbose=10,
return_train_score=True, n_jobs=-1
)
gridsearch.fit(X_train,y_train)
result = pd.DataFrame(gridsearch.cv_results_)
result.sort_values(by='rank_test_score')
gridsearch.best_estimator_
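# Added illustration: refit=True (the default) refits the best parameter
# combination on all of X_train, so the search object can be used directly.
print(gridsearch.best_params_)
print(gridsearch.best_score_)  # mean CV score (negative MAE) for the best combination
# predictions = gridsearch.predict(X_train)  # would use the refitted best_estimator_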
###Output
_____no_output_____
###Markdown
FEATURE ENGINEERING! Jake VanderPlas demonstrates this feature engineering: https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.html#Example:-Predicting-Bicycle-Traffic
###Code
# Modified from code cells 17-21 at
# https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.html#Example:-Predicting-Bicycle-Traffic
def jake_wrangle(X):
X = X.copy()
# patterns of use generally vary from day to day;
# let's add binary columns that indicate the day of the week:
days = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
for i, day in enumerate(days):
X[day] = (X.index.dayofweek == i).astype(float)
# we might expect riders to behave differently on holidays;
# let's add an indicator of this as well:
from pandas.tseries.holiday import USFederalHolidayCalendar
cal = USFederalHolidayCalendar()
holidays = cal.holidays('2012', '2016')
X = X.join(pd.Series(1, index=holidays, name='holiday'))
X['holiday'].fillna(0, inplace=True)
# We also might suspect that the hours of daylight would affect
# how many people ride; let's use the standard astronomical calculation
# to add this information:
def hours_of_daylight(date, axis=23.44, latitude=47.61):
"""Compute the hours of daylight for the given date"""
days = (date - pd.datetime(2000, 12, 21)).days
m = (1. - np.tan(np.radians(latitude))
* np.tan(np.radians(axis) * np.cos(days * 2 * np.pi / 365.25)))
return 24. * np.degrees(np.arccos(1 - np.clip(m, 0, 2))) / 180.
X['daylight_hrs'] = list(map(hours_of_daylight, X.index))
# temperatures are in 1/10 deg C; convert to C
X['TMIN'] /= 10
X['TMAX'] /= 10
# We can also calcuate the average temperature.
X['Temp (C)'] = 0.5 * (X['TMIN'] + X['TMAX'])
# precip is in 1/10 mm; convert to inches
X['PRCP'] /= 254
# In addition to the inches of precipitation, let's add a flag that
# indicates whether a day is dry (has zero precipitation):
X['dry day'] = (X['PRCP'] == 0).astype(int)
# Let's add a counter that increases from day 1, and measures how many
# years have passed. This will let us measure any observed annual increase
# or decrease in daily crossings:
X['annual'] = (X.index - X.index[0]).days / 365.
return X
X_train = jake_wrangle(X_train)
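# Added sketch (extra feature ideas, not part of the lesson code): a weekend
# flag and the calendar month. Built on a copy so the cells below keep using
# the same X_train columns; the column names here are arbitrary choices.
extra_features = X_train.copy()
extra_features['weekend'] = (extra_features.index.dayofweek >= 5).astype(int)
extra_features['month'] = extra_features.index.month
extra_features[['weekend', 'month']].head()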
###Output
_____no_output_____
###Markdown
Linear Regression (with new features)
###Code
scores = cross_validate(LinearRegression(), X_train, y_train,
scoring='neg_mean_absolute_error', cv=3,
return_train_score=True, return_estimator=True)
pd.DataFrame(scores)
###Output
_____no_output_____
###Markdown
Random Forest (with new features)
###Code
param_distributions = {
'n_estimators': [100],
'max_depth': [5, 10, 15, None],
'criterion': ['mae']
}
gridsearch = RandomizedSearchCV(
RandomForestRegressor(n_jobs=-1, random_state=42),
param_distributions=param_distributions, n_iter=4,
cv=3, scoring='neg_mean_absolute_error',verbose=10,
return_train_score=True, n_jobs=-1
)
gridsearch.fit(X_train,y_train)
###Output
Fitting 3 folds for each of 4 candidates, totalling 12 fits
###Markdown
Feature engineering, explained by Francois Chollet> _Feature engineering_ is the process of using your own knowledge about the data and about the machine learning algorithm at hand to make the algorithm work better by applying hardcoded (nonlearned) transformations to the data before it goes into the model. In many cases, it isn’t reasonable to expect a machine-learning model to be able to learn from completely arbitrary data. The data needs to be presented to the model in a way that will make the model’s job easier.> Let’s look at an intuitive example. Suppose you’re trying to develop a model that can take as input an image of a clock and can output the time of day.> If you choose to use the raw pixels of the image as input data, then you have a difficult machine-learning problem on your hands. You’ll need a convolutional neural network to solve it, and you’ll have to expend quite a bit of computational resources to train the network.> But if you already understand the problem at a high level (you understand how humans read time on a clock face), then you can come up with much better input features for a machine-learning algorithm: for instance, write a Python script to follow the black pixels of the clock hands and output the (x, y) coordinates of the tip of each hand. Then a simple machine-learning algorithm can learn to associate these coordinates with the appropriate time of day.> You can go even further: do a coordinate change, and express the (x, y) coordinates as polar coordinates with regard to the center of the image. Your input will become the angle theta of each clock hand. At this point, your features are making the problem so easy that no machine learning is required; a simple rounding operation and dictionary lookup are enough to recover the approximate time of day.> That’s the essence of feature engineering: making a problem easier by expressing it in a simpler way. It usually requires understanding the problem in depth.> Before convolutional neural networks became successful on the MNIST digit-classification problem, solutions were typically based on hardcoded features such as the number of loops in a digit image, the height of each digit in an image, a histogram of pixel values, and so on.> Neural networks are capable of automatically extracting useful features from raw data. Does this mean you don’t have to worry about feature engineering as long as you’re using deep neural networks? No, for two reasons:> - Good features still allow you to solve problems more elegantly while using fewer resources. For instance, it would be ridiculous to solve the problem of reading a clock face using a convolutional neural network.> - Good features let you solve a problem with far less data. The ability of deep-learning models to learn features on their own relies on having lots of training data available; if you have only a few samples, then the information value in their features becomes critical. ASSIGNMENT**1.** Complete the notebook cells that were originally commented **`TODO`**. **2.** Then, focus on feature engineering to improve your cross validation scores. Collaborate with your cohort on Slack. You could start with the ideas [Jake VanderPlas suggests:](https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.htmlExample:-Predicting-Bicycle-Traffic)> Our model is almost certainly missing some relevant information. 
For example, nonlinear effects (such as effects of precipitation and cold temperature) and nonlinear trends within each variable (such as disinclination to ride at very cold and very hot temperatures) cannot be accounted for in this model. Additionally, we have thrown away some of the finer-grained information (such as the difference between a rainy morning and a rainy afternoon), and we have ignored correlations between days (such as the possible effect of a rainy Tuesday on Wednesday's numbers, or the effect of an unexpected sunny day after a streak of rainy days). These are all potentially interesting effects, and you now have the tools to begin exploring them if you wish!**3.** Experiment with the Categorical Encoding notebook.**4.** At the end of the day, take the last step in the "universal workflow of machine learning" — "You can train your final production model on all the available data (training and validation) and evaluate it one last time on the test set."See the [`RandomizedSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) documentation for the `refit` parameter, `best_estimator_` attribute, and `predict` method:> **refit : boolean, or string, default=True**> Refit an estimator using the best found parameters on the whole dataset.> The refitted estimator is made available at the `best_estimator_` attribute and permits using `predict` directly on this `GridSearchCV` instance. STRETCH**A.** Apply this lesson other datasets you've worked with, like Ames Housing, Bank Marketing, or others.**B.** In additon to `RandomizedSearchCV`, scikit-learn has [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html). Another library called scikit-optimize has [`BayesSearchCV`](https://scikit-optimize.github.io/notebooks/sklearn-gridsearchcv-replacement.html). Experiment with these alternatives.**C.** _[Introduction to Machine Learning with Python](http://shop.oreilly.com/product/0636920030515.do)_ discusses options for "Grid-Searching Which Model To Use" in Chapter 6:> You can even go further in combining GridSearchCV and Pipeline: it is also possible to search over the actual steps being performed in the pipeline (say whether to use StandardScaler or MinMaxScaler). This leads to an even bigger search space and should be considered carefully. Trying all possible solutions is usually not a viable machine learning strategy. However, here is an example comparing a RandomForestClassifier and an SVC ...The example is shown in [the accompanying notebook](https://github.com/amueller/introduction_to_ml_with_python/blob/master/06-algorithm-chains-and-pipelines.ipynb), code cells 35-37. Could you apply this concept to your own pipelines?
###Code
# Reminder of what I am working with
X_train.columns
# Features based around weather
def weather(frame):
frame['cold_day']= np.where(frame['TMIN']<5,1,0)
frame['hot_day']= np.where(frame['TMAX']>30,1,0)
frame['snow']= np.where(frame['SNOW']<-1000,0, frame['SNOW'])
frame['rained']= np.where(frame['PRCP'].shift(1)>0.15,1,0)
frame['sunny_after_rain']= np.where((frame['PRCP'].shift(1)>0.15) &
(frame['PRCP'].shift(2)>0.15) &
(frame['PRCP'].shift(3)>0.15) &
(frame['dry day']==1),1,0)
frame['rain_and_cold']= np.where((frame['PRCP']>0.15) & (frame['TMIN']<5),1,0)
return frame
X_train2 = weather(X_train)
X_train2.head(1)
from xgboost import XGBRegressor
# Testing new features:
param_distributions = {
'n_estimators': [100],
'max_depth': [5, 10, 15],
'criterion': ['mae']
}
gridsearch = RandomizedSearchCV(
RandomForestRegressor(n_jobs=-1, random_state=42),
param_distributions=param_distributions,
n_iter=4,
cv=3,
scoring='neg_mean_absolute_error',
verbose=10,
return_train_score=True,
n_jobs=-1
)
gridsearch.fit(X_train, y_train)
pd.DataFrame(gridsearch.cv_results_).sort_values(by='rank_test_score').head()
# Specifying the model
best_regressor = RandomForestRegressor(max_depth=10, n_estimators=130, n_jobs=-1, random_state=42)
# Fitting my RFR Model
best_regressor.fit(X_train,y_train)
# Specifying my Df
most_important = pd.DataFrame(X_train.columns)
# Using feature_importances_
most_important['importance'] = best_regressor.feature_importances_
# High to Low
most_important.sort_values(by='importance',ascending=False).head()
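# Added sketch: a quick horizontal bar chart of the same importances,
# assuming matplotlib is available (it was imported earlier in the notebook).
import matplotlib.pyplot as plt
importances = pd.Series(best_regressor.feature_importances_, index=X_train.columns)
importances.sort_values().plot(kind='barh', figsize=(6, 6))
plt.title('Random forest feature importances');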
# Trying XGB Regressor
from xgboost import XGBRegressor
param_distributions = {
'n_estimators': [90,100,115,120],
'max_depth': [4,5,6],
'booster': [ 'dart'] #'gbtree', 'gblinear',
}
gridsearchX = RandomizedSearchCV(
XGBRegressor(n_jobs=-1, random_state=42),
param_distributions=param_distributions,
n_iter=4,
cv=3,
scoring='neg_mean_absolute_error',
verbose=10,
return_train_score=True,
n_jobs=-1
)
gridsearchX.fit(X_train, y_train)
# Better results
pd.DataFrame(gridsearchX.cv_results_).sort_values(by='rank_test_score').head()
X_train_features = ['PRCP', 'SNOW', 'SNWD', 'TMAX', 'TMIN', 'AWND', 'Total_yesterday',
'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun', 'holiday',
'daylight_hrs', 'Temp (C)', 'dry day', 'annual', 'hot_day',
'cold_day', 'rained', 'sunny_after_rain', 'rain_and_cold']
Important_feat = ['SNWD', 'Fri', 'holiday', 'Thu', 'PRCP', 'AWND', 'TMAX',
'Sun', 'TMIN', 'cold_day', 'rain_and_cold',
'Total_yesterday', 'Tue', 'SNOW', 'rained',
'dry day', 'Mon', 'daylight_hrs', 'Sat', 'sunny_after_rain']
from sklearn.model_selection import cross_val_score
# The below code is loaded with error messages,
# Calling on this to eliminate those
def warn(*args, **kwargs):
pass
import warnings
warnings.warn = warn
# No errors
errors=[]
XGB_regressor = XGBRegressor(max_depth=4, n_estimators=120, booster='dart', n_jobs=-1, random_state=42)
import random
for i in range (0,120):
    random.shuffle(X_train_features)  # 'fun_features' was never defined; shuffle the full feature list instead
    testing_features = X_train_features[:17]  # try a random subset of 17 features each iteration
    XGB_regressor.fit(X_train[testing_features], y_train)
    score = cross_val_score(XGB_regressor, X_train[testing_features], y_train, scoring='neg_mean_absolute_error', cv=5).mean()
    if score>-274:
        _=(score,testing_features)
        errors.append(_)
print(errors)
'''Final Model Score based on Feature Engineering'''
XGB_regressor.fit(X_train2[Important_feat],y_train)
'''Final Tests'''
final = jake_wrangle(X_test)
final = weather(final)
final = final[Important_feat]  # use the same feature set the final model was fit on ('fun_features' was never defined)
y_pred = XGB_regressor.predict(final)
mean_absolute_error(y_test,y_pred)
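# Added check: compare the tuned model's test MAE against the naive
# "predict yesterday's total" baseline over the same 100-day test period.
baseline_test_mae = mean_absolute_error(y_test, X_test['Total_yesterday'])
baseline_test_mae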
###Output
_____no_output_____ |
3_Numpy_input.ipynb | ###Markdown
Numpy Basics Welcome to section of Numpy. This is one of the the most used Python libraries for data science. NumPy consists of a powerful data structure called multidimensional arrays. Pandas is another powerful Python library that provides fast and easy data analysis platform.NumPy is a library written for scientific computing and data analysis. It stands for numerical python and also known as array oriented computing.The most basic object in NumPy is the ndarray, or simply an array which is an n-dimensional, homogeneous array. By homogenous, we mean that all the elements in a NumPy array have to be of the same data type, which is commonly numeric (float or integer). Why Numpy? convenience & speed Numpy is much faster than the standard python ways to do computations. Vectorised code typically does not contain explicit looping and indexing etc. (all of this happens behind the scenes, in precompiled C-code), and thus it is much more concise.Also, many Numpy operations are implemented in C which is basically being executed behind the scenes, avoiding the general cost of loops in Python, pointer indirection and per-element dynamic type checking. The speed boost depends on which operations you're performing. NumPy arrays are more compact than lists, i.e. they take much lesser storage space than lists ***Let's get started with our Numpy Assigment** 2 points You can check this numpy video too! : https://www.youtube.com/watch?v=QUT1VHiLmmI
###Code
#import numpy module with alias np
###Output
_____no_output_____
###Markdown
We can create a NumPy ndarray object by using the array() function.To create an ndarray, we can pass a list, tuple or any array-like object into the array() method, and it will be converted into an ndarray:
###Code
# Define a numpy array passing a list with 1,2 and 3 as elements in it
a =
# print a
a
###Output
_____no_output_____
###Markdown
Dimensions in ArraysReference: https://www.youtube.com/watch?v=BNAfVruKKkUNumpy array can be of n dimentionsLets create arrays of different dimentions.a=A numpy array with one single integer 10b=A numpy array passing a list having a list= [1,2,3]c=A numpy array passing nested list having [[1, 2, 3], [4, 5, 6]] as elementsd=A numpy array passing nested list having [[[1, 2, 3], [4, 5, 6]], [[1, 2, 3], [4, 5, 6]]] as elements 3 points
###Code
#define a,b,c and d as instructed above
a =
b =
c =
d =
###Output
_____no_output_____
###Markdown
Are you ready to check its dimention? Use ndim attribute on each variable to check its dimention
###Code
#print dimentions of a,b, c and d
###Output
a dimention: 0
b dimention: 1
c dimention: 2
d dimention: 3
###Markdown
Hey hey. Did you see! You have created 0-D, 1-D, 2-D and 3-D arrays. Let's print their shape as well. You can check shape using the shape attribute.
###Code
# print shape of each a,b ,c and d
###Output
shape of a: ()
shape of b: (3,)
shape of c: (2, 3)
shape of d: (2, 2, 3)
###Markdown
Lets check data type passed in our array. To check data type you can use dtype attribute
###Code
# print data type of c and d
###Output
int32
int32
###Markdown
Above output mean our array is having int type elements in it. Lets check the type of our variable. To check type of any numpy variable use type() function
###Code
#print type of a and b variable
# Lets check length of array b, using len() function
###Output
_____no_output_____
###Markdown
Bravo! You have defined an ndarray, i.e. a numpy array, in variables a and b. You have also successfully learned how to create a numpy array. Performance measurement I mentioned that the key advantages of numpy are convenience and speed of computation. You'll often work with extremely large datasets, and thus it is important for you to understand how much computation time (and memory) you can save using numpy, compared to standard python lists. 2 points Create two lists l1 and l2 where l1=[10,20,30] and l2=[40,50,60]. Also define two numpy arrays l3 and l4, where l3 has l1 as its element and l4 has l2 as its element.
###Code
# Define l1,l2,l3 and l4 as stated above.
l1 =
l2 =
l3 =
l4 =
###Output
_____no_output_____
###Markdown
Let's multiply each element of l1 with the corresponding element of l2. Here, use a list comprehension to do so. Let's see how much you remember from your work in other assignments. Note: use %timeit as a prefix before your line of code in order to calculate the total time taken to run that line, e.g. %timeit my_code
###Code
#code here as instructed above
%timeit #code here
###Output
1.6 µs ± 248 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
###Markdown
Let's multiply l3 and l4. Note: use %timeit as a prefix before your line of code in order to calculate the total time taken to run that line.
###Code
%timeit #code here
###Output
1.31 µs ± 117 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
###Markdown
Don't worry if your one line of code is still running. It's because your system is calculating the total time taken to run your code. Did you notice, buddy! Multiplying two lists takes more time than multiplying two numpy arrays. Hence proved that numpy arrays are faster than lists. **Fun Fact time!:** You know, in many data science interviews you are asked what the difference is between a list and an array. The answer is: https://www.youtube.com/watch?v=XI6PHo_gP4E so in numpy arrays I can do everything without even writing a loop? yes... ohh wao Creating Numpy array There are multiple ways to create a numpy array. Let's walk over them. 1. Using the arange() function Refer: https://numpy.org/doc/stable/reference/generated/numpy.arange.html 8 points
###Code
#Create a numpy array using arange with 1 and 11 as parameter in it
###Output
_____no_output_____
###Markdown
This means using arange we get evenly spaced values within a given interval. Step? Yes, you can also specify the step as the third parameter.
###Code
# Create an array using arange passing 1,11 and 2 as parameters
###Output
_____no_output_____
###Markdown
Did you see? You got all odd numbers because you specified a step of 2 between 1 and 11. Also note that 11 is excluded and hence the arange function counted only till 10. 2. Using the eye function Refer: https://numpy.org/devdocs/reference/generated/numpy.eye.html
###Code
# create numpy array using eye function with 3 as passed parameter
###Output
_____no_output_____
###Markdown
Wohoo! eye return a 2-D array with ones on the diagonal and zeros elsewhere.3. Using zero functionRefer: https://numpy.org/doc/stable/reference/generated/numpy.zeros.html
###Code
#create a numpy array using zero function with (3,2) as passed parameter
###Output
_____no_output_____
###Markdown
Zero function returns a new array of given shape and type, filled with zeros.4. Using ones Function Refer: https://numpy.org/doc/stable/reference/generated/numpy.ones.html
###Code
#create a numpy array using ones function with (3,2) as passed parameter
###Output
_____no_output_____
###Markdown
You noticed! ones function returns a new array of given shape and type, filled with ones.5. Using full FunctionRefer: https://numpy.org/doc/stable/reference/generated/numpy.full.html
###Code
#create a numpy array using full function with (3,2) and 2 as passed parameter
###Output
_____no_output_____
###Markdown
Yeah! The full function returns a new array of given shape and type, filled with fill_value, here it is 2. 6. Using the diag function Refer: https://numpy.org/doc/stable/reference/generated/numpy.diag.html
###Code
#create a numpy array using diag function passing a list [1,2,3,4,5]
###Output
_____no_output_____
###Markdown
Oh yeah! diag function extract a diagonal or construct a diagonal array. 7. Using tile function Refer: https://numpy.org/doc/stable/reference/generated/numpy.tile.html
###Code
# Create a numpy array v with [1,2,3] as its elements
v =
#Use tile function of numpy and pass v and (3,1) as its parametrs
###Output
_____no_output_____
###Markdown
Returns an array by repeating an input array the number of times given by mentioned shape.Here you can see that you stacked 3 copies of v on top of each other8. Using linspace FunctionRefer: https://numpy.org/doc/stable/reference/generated/numpy.linspace.html
###Code
# Create an array with 100 values between 1 and 50 using linspace
###Output
_____no_output_____
###Markdown
Wao! linspace returns evenly spaced numbers over a specified interval. Hey, but you saw a similar definition for the arange function. The main difference between them is that arange takes a step size between values (in other words, the step), while linspace takes the number of samples to return within a given interval. Numpy Random numbers Fun Fact: You can also create a numpy array with random numbers. How? Using the random function. Let's see how. Refer: https://numpy.org/doc/stable/reference/random/generated/numpy.random.rand.html 3 points
###Code
# Generate one random number between 0 and 1 using numpy's random.rand() function.
###Output
_____no_output_____
###Markdown
Run the above cell again and check if number changes.Yeah it changes. That's so random :)
###Code
# so let say I want a random value between 2 and 50
###Output
_____no_output_____
###Markdown
Run the above cell again and check if number changes and its between 2 to 50.Now lets create an array with random numbers 0 to 1 of shape 3X3
###Code
#get an array as stated above
###Output
_____no_output_____
###Markdown
Smile! you got it how to create a numpy array with random numbers. Numpy ReshapeReference: https://www.youtube.com/watch?v=sGCuryS8zjcreference doc: https://numpy.org/doc/stable/reference/generated/numpy.reshape.htmlReshaping means changing the shape of an array.The shape of an array is the number of elements in each dimension.By reshaping we can add or remove dimensions or change number of elements in each dimension. 6 points
###Code
# Using arange() to generate numpy array x with numbers between 1 to 16
x=
###Output
_____no_output_____
###Markdown
So here x is our 1-D array along with being sweet sixteen array ;). Lets reshape our x into 2-D and 3-D array using Reshape1. Reshaping 1-D to 2-D
###Code
# Reshape x with 2 rows and 8 columns
###Output
_____no_output_____
###Markdown
As you can see above that our x changed into 2D matrix2. Reshaping 1-D to 3-D array
###Code
# reshape x with dimension that will have 2 arrays that contains 4 arrays, each with 2 elements:
###Output
_____no_output_____
###Markdown
**Fun Fact:**Unknown DimensionYou are allowed to have one "unknown" dimension.Meaning that you do not have to specify an exact number for one of the dimensions in the reshape method.Pass -1 as the value, and NumPy will calculate this number for you. Awesome right?
###Code
# Use unknown dimention to reshape x into 2-D numpy array with shape 4*4
# Use unknown dimention to reshape x into 3-D numpy array with 2 arrays that contains 4 arrays
y=
# print y
print(y)
###Output
[[[ 1 2]
[ 3 4]
[ 5 6]
[ 7 8]]
[[ 9 10]
[11 12]
[13 14]
[15 16]]]
###Markdown
Note: We can not pass -1 to more than one dimension.Another cool Fact: -1 can be used to flatten an array which means converting a multidimensional array into a 1D array. Lets apply this technique on y which is 3-D array
###Code
# Flattening y
###Output
_____no_output_____
###Markdown
Awesome work! NumPy Array IndexingReference: https://www.youtube.com/watch?v=bFv66_RXLb4Array indexing is the same as accessing an array element. You can access an array element by referring to its index number.The indexes in NumPy arrays start with 0, meaning that the first element has index 0, and the second has index 1 etc. 4 points
###Code
# Create an array a with all even numbers between 1 to 17
a =
# print a
a
# Get third element in array a
#Print 3rd, 5th, and 7th element in array a
###Output
_____no_output_____
###Markdown
Lets check the same for 2 D array
###Code
# Define an array 2-D a with [[1,2,3],[4,5,6],[7,8,9]] as its elements.
# print the 3rd element from the 3rd row of a
###Output
_____no_output_____
###Markdown
Well done!Now lets check indexing for 3 D array
###Code
# Define an array b again with [[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]] as its elements.
b =
# Print 3rd element from 2nd list which is 1st list in nested list passed. Confusing right? 'a' have nested array.Understand the braket differences.
###Output
_____no_output_____
###Markdown
Well done! Have you heard about **negative indexing**? We can use negative indexing to access an array from the end.
###Code
# Print the second last element from the 2nd dim using negative indexing
###Output
_____no_output_____
###Markdown
Great job! So now you have learned how to to indexing on various dimentions of numpy array. NumPy Array SlicingReference: https://www.youtube.com/watch?v=fIXh-gmR7mQSlicing in python means taking elements from one given index to another given index. 1. We pass slice instead of index like this: [start:end]. 2. We can also define the step, like this: [start:end:step]. 3. If we don't pass start its considered 0 4. If we don't pass end its considered length of array in that dimension 5. If we don't pass step its considered 1 5 points 1. **Array slicing in 1-D array.**
###Code
arr=np.arange(1,11)
arr
# Slice elements from 1st to 5th element from array arr:
###Output
_____no_output_____
###Markdown
Note: The result includes the start index, but excludes the end index.
###Code
# Slice elements from index 5 to the end of the array arr:
# Slice elements from the beginning to index 5 (not included) in array arr:
###Output
_____no_output_____
###Markdown
Have you heard about **negative slicing**? We can use the minus operator to refer to an index from the end:
###Code
# Slice from the index 3 from the end to index 1 from the end:
###Output
_____no_output_____
###Markdown
**STEP**Use the step value to determine the step of the slicing:
###Code
# Print every other element from index 1 to index 7:
###Output
_____no_output_____
###Markdown
Did you see? using step you were able to get alternate elements within specified index numbers.
###Code
# Return every other element from the entire array arr:
###Output
_____no_output_____
###Markdown
well done!Lets do some slicing on 2-D array also. We already have 'a' as our 2-D array. We will use it here.**2. Array slicing in 2-D array.**
###Code
# Print array a
a
# From the third element, slice elements from index 1 to index 5 (not included) from array 'a'
# In array 'a' print index 2 from all the rows :
# From all the elements in 'a', slice index 1 till end, this will return a 2-D array:
###Output
_____no_output_____
###Markdown
Hurray! You have learned Slicing in Numpy array. Now you know to access any numpy array. Numpy copy vs viewReference: https://www.youtube.com/watch?v=h2db8BLWyVw 7 points
###Code
x1= np.arange(10)
# assign x2 = x1
#print x1 and x2
###Output
[0 1 2 3 4 5 6 7 8 9]
[0 1 2 3 4 5 6 7 8 9]
###Markdown
Ok, now you have seen that both of them are the same.
###Code
# change 1st element of x2 as 10
#Again print x1 and x2
###Output
[10 1 2 3 4 5 6 7 8 9]
[10 1 2 3 4 5 6 7 8 9]
###Markdown
Wait a minute. Just check your above result on change of x2, x1 also got changed. Why?Lets check if both the variables shares memory. Use numpy shares_memory() function to check if both x1 and x2 shares a memory.Refer: https://numpy.org/doc/stable/reference/generated/numpy.shares_memory.html
###Code
# Check memory share between x1 and x2
###Output
_____no_output_____
###Markdown
Hey, it's True, they both share memory. Shall we try the **view()** function likewise?
###Code
# Create a view of x1 and store it in x3.
x3 =
# Again check memory share between x1 and x3
###Output
_____no_output_____
###Markdown
Woh! simple assignment is similar to view. That means The view does not own the data and any changes made to the view will affect the original array, and any changes made to the original array will affect the view.Don't agree? ok lets change x3 and see if original array i.e. x1 also changes
###Code
#Change 1st element of x3=100
#print x1 and x3 to check if changes reflected in both
###Output
[100 1 2 3 4 5 6 7 8 9]
[100 1 2 3 4 5 6 7 8 9]
###Markdown
Now it's proved. Let's see how the **copy()** function works.
###Code
# Now create an array x4 which is copy of x1
x4=
# Change the last element of x4 as 900
# print both x1 and x4 to check if changes reflected in both
###Output
[100 1 2 3 4 5 6 7 8 9]
[100 1 2 3 4 5 6 7 8 900]
###Markdown
Hey! Such an interesting output. You noticed, buddy! Your original array didn't change when its copy, i.e. x4, was changed. Still not convinced? Ok, let's see if they both share memory or not.
###Code
#Check memory share between x1 and x4
###Output
_____no_output_____
###Markdown
You see! x1 and x4 don't share its memory. So with all our outputs we can takeaway few points: 1. The main difference between a copy and a view of an array is that the copy is a new array, and the view is just a view of the original array. 2. The copy owns the data and any changes made to the copy will not affect original array, and any changes made to the original array will not affect the copy. 3. The view does not own the data and any changes made to the view will affect the original array, and any changes made to the original array will affect the view. More operations on Numpy**1. Applying conditions**Reference: https://thispointer.com/python-numpy-select-elements-or-indices-by-conditions-from-numpy-array/ 5 points
###Code
#print a
a
###Output
_____no_output_____
###Markdown
We are going to use 'a' array for all our array condition operations.
###Code
# Check if every element in array a greater than 3 or not Using '>' notation
# Get a list with all elements of array 'a' grater than 3
# Get a list with all elements of array 'a' greater than 3 but less than 6
# check if each elements in array 'x1' equals array 'x4' using '==' notation
###Output
_____no_output_____
###Markdown
You can see in above output that the last element is not same in both x1 and x4Well done so far.Lets check how to transpose an array**2. Transposing array**Reference: https://www.youtube.com/watch?v=8qpMys9ptBs
###Code
# Print Transpose of array 'a'
#print array 'a'
print("-----------------")
###Output
[[1 4 7]
[2 5 8]
[3 6 9]]
-----------------
[[1 2 3]
[4 5 6]
[7 8 9]]
###Markdown
In above output all the rows became columns by transposing **3. hstack vs vstack function**Reference: https://www.youtube.com/watch?v=p1bsYXwg97QStacking is same as concatenation, the only difference is that stacking is done along a new axis.NumPy provides a helper function: 1. hstack() to stack along rows.2. vstack() to stack along columnsYou wanna see how? Then here we go...!reference doc: https://scipython.com/book/chapter-6-numpy/examples/vstack-and-hstack/
###Code
# stack x1 and x4 along columns.
#stack x1 and x4 along rows
###Output
_____no_output_____
###Markdown
We hope now you saw the difference between them.Fun fact! you can even use concatenate() function to join 2 arrays along with the axis. If axis is not explicitly passed, it is taken as 0 ie. along columnLets try this function as wellReference: https://www.youtube.com/watch?v=X4zjs_wPxLU
###Code
arr1 = np.array([[1, 2], [3, 4]])
arr2 = np.array([[5, 6], [7, 8]])
##join arr1 and arr2 along rows using concatenate() function
##join arr1 and arr2 along columns using concatenate() function
###Output
_____no_output_____
###Markdown
Adding, Insert and delete Numpy arrayReference: https://www.youtube.com/watch?v=dEnCfapUbEw 3 points You can also add 2 arrays using append() function also. This function appends values to end of arrayLets see how
###Code
# append arr2 to arr1
###Output
_____no_output_____
###Markdown
Lets use insert() function which Inserts values into array before specified index value
###Code
# Inserts values into array x1 before index 4 with elements of x4
###Output
_____no_output_____
###Markdown
You can see in above output we have inserted all the elements of x4 before index 4 in array x1.
###Code
# delete 2nd element from array x2
###Output
_____no_output_____
###Markdown
Did you see? 2 value is deleted from x2 which was at index position 2 Mathmatical operations on Numpy arrayReference doc for Numpy Mathmatical functions: https://numpy.org/doc/stable/reference/routines.math.html 8 points
###Code
#defining a
a = np.array([[1,2,3],[4,5,6],[7,8,9]])  # closing parenthesis was missing
# print trigonometric sin value of each element of a
# print trigonometric cos value of each element of a
# Print exponential value of each elements of a
###Output
_____no_output_____
###Markdown
Referal: https://numpy.org/doc/stable/reference/generated/numpy.sum.html
###Code
# print total sum of elements of a
# Print sum in array a column wise
# Print sum in array a row wise
###Output
_____no_output_____
###Markdown
Refrence doc: https://numpy.org/doc/stable/reference/generated/numpy.median.html
###Code
# print median of array a
###Output
_____no_output_____
###Markdown
Refrence doc: https://numpy.org/doc/stable/reference/generated/numpy.std.html
###Code
# print standard deviation of array a
###Output
_____no_output_____
###Markdown
Refrence doc: https://numpy.org/doc/stable/reference/generated/numpy.linalg.det.html
###Code
# print the determinant of array a
###Output
_____no_output_____
###Markdown
reference doc: https://numpy.org/doc/stable/reference/generated/numpy.linalg.inv.html
###Code
# print the (multiplicative) inverse of array a
###Output
_____no_output_____
###Markdown
Reference doc: https://numpy.org/doc/stable/reference/generated/numpy.linalg.eig.html
###Code
# Print the eigenvalues and right eigenvectors of array a.
###Output
_____no_output_____
###Markdown
Reference doc: https://numpy.org/doc/stable/reference/generated/numpy.dot.html
###Code
# compute dot product of arr1 and arr2
###Output
_____no_output_____
###Markdown
reference doc: https://numpy.org/doc/stable/reference/generated/numpy.ndarray.max.html
###Code
#print largest element present in array a
###Output
_____no_output_____
###Markdown
Reference doc: https://numpy.org/doc/stable/reference/generated/numpy.argmax.html
###Code
#print index of largest element present in array a
###Output
_____no_output_____
###Markdown
Reference doc: https://numpy.org/doc/stable/reference/generated/numpy.sort.html
###Code
# print sorted x4 array
###Output
_____no_output_____
###Markdown
Reference doc: https://numpy.org/doc/stable/reference/generated/numpy.argsort.html
###Code
# print indices of each sorted element in x4 array
###Output
_____no_output_____
###Markdown
Searching ArraysReference: https://www.youtube.com/watch?v=0t6FRh0PmtwYou can search an array for a certain value, and return the indexes that get a match.To search an array, use the where() method. 4 points
###Code
# print the indexes where the value is 4 in array x1
###Output
_____no_output_____
###Markdown
Which means that the value 4 is present at index 4You can check it by printing array x1
###Code
#print array x1
# Print the indexes where the values are even in array x1
# Print x1 where x1 is greater than 5, also if number is less than 5 then replace it with 0
###Output
_____no_output_____ |
Mike Smith LS_DS_113_Join_and_Reshape_Data_Assignment.ipynb | ###Markdown
Lambda School Data Science*Unit 1, Sprint 1, Module 3*--- Join and Reshape datasetsObjectives- concatenate data with pandas- merge data with pandas- understand tidy data formatting- melt and pivot data with pandasLinks- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf)- [Tidy Data](https://en.wikipedia.org/wiki/Tidy_data) - Combine Data Sets: Standard Joins - Tidy Data - Reshaping Data- Python Data Science Handbook - [Chapter 3.6](https://jakevdp.github.io/PythonDataScienceHandbook/03.06-concat-and-append.html), Combining Datasets: Concat and Append - [Chapter 3.7](https://jakevdp.github.io/PythonDataScienceHandbook/03.07-merge-and-join.html), Combining Datasets: Merge and Join - [Chapter 3.8](https://jakevdp.github.io/PythonDataScienceHandbook/03.08-aggregation-and-grouping.html), Aggregation and Grouping - [Chapter 3.9](https://jakevdp.github.io/PythonDataScienceHandbook/03.09-pivot-tables.html), Pivot Tables Reference- Pandas Documentation: [Reshaping and Pivot Tables](https://pandas.pydata.org/pandas-docs/stable/reshaping.html)- Modern Pandas, Part 5: [Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)
###Code
!wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
!tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
%cd instacart_2017_05_01
!ls -lh *.csv
###Output
-rw-r--r-- 1 502 staff 2.6K May 2 2017 aisles.csv
-rw-r--r-- 1 502 staff 270 May 2 2017 departments.csv
-rw-r--r-- 1 502 staff 551M May 2 2017 order_products__prior.csv
-rw-r--r-- 1 502 staff 24M May 2 2017 order_products__train.csv
-rw-r--r-- 1 502 staff 104M May 2 2017 orders.csv
-rw-r--r-- 1 502 staff 2.1M May 2 2017 products.csv
###Markdown
Assignment Join Data PracticeThese are the top 10 most frequently ordered products. How many times was each ordered? 1. Banana2. Bag of Organic Bananas3. Organic Strawberries4. Organic Baby Spinach 5. Organic Hass Avocado6. Organic Avocado7. Large Lemon 8. Strawberries9. Limes 10. Organic Whole MilkFirst, write down which columns you need and which dataframes have them.Next, merge these into a single dataframe.Then, use pandas functions from the previous lesson to get the counts of the top 10 most frequently ordered products.
###Code
##### YOUR CODE HERE #####
#explore each csv by sampling item name
import pandas as pd
pd.options.display.max_rows=999
#pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
df = pd.read_csv("aisles.csv")
print(df.shape)
print(df)
import pandas as pd
pd.options.display.max_rows=999
#pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
df = pd.read_csv("departments.csv")
print(df.shape)
print(df)
import pandas as pd
pd.options.display.max_rows=999
#pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
df = pd.read_csv("order_products__prior.csv")
print(df.shape)
print(df)
import pandas as pd
pd.options.display.max_rows=999
#pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
df = pd.read_csv("order_products__train.csv")
print(df.shape)
print(df)
import pandas as pd
pd.options.display.max_rows=999
#pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
df = pd.read_csv("orders.csv")
print(df.shape)
print(df)
import pandas as pd
pd.options.display.max_rows=999
#pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
df = pd.read_csv("products.csv")
print(df.shape)
print(df)
#departments(21 rows), aisles(134 rows), products (49688 rows) files have string data like banana, etc.
#logically, you can see less depts vs aisles vs products, etc. So our strings are in products csv. There are 49000 product id's, which is our items?
#search products.csv file for our string matches:
#Bag of Organic Bananas
#Organic Strawberries
#Organic Baby Spinach
#Organic Hass Avocado
#Organic Avocado
#Large Lemon
#Strawberries
#Limes
#Organic Whole Milk
import numpy as np
df=pd.read_csv('products.csv')
print(np.where((df['product_name']=='Banana')))
print(np.where((df['product_name']=='Bag of Organic Bananas')))
print(np.where((df['product_name']=='Organic Strawberries')))
print(np.where((df['product_name']=='Organic Baby Spinach')))
print(np.where((df['product_name']=='Organic Hass Avocado')))
print(np.where((df['product_name']=='Organic Avocado')))
print(np.where((df['product_name']=='Large Lemon')))
print(np.where((df['product_name']=='Strawberries')))
print(np.where((df['product_name']=='Limes')))
print(np.where((df['product_name']=='Organic Whole Milk')))
#Index of items found, print row to confirm our item
print(df.iloc[[24851]])
print(df.iloc[[13175]])
print(df.iloc[[21136]])
print(df.iloc[[21902]])
print(df.iloc[[47208]])
print(df.iloc[[47765]])
print(df.iloc[[47625]])
print(df.iloc[[16796]])
print(df.iloc[[26208]])
print(df.iloc[[27844]])
#now use product_id to find how many times was each ordered.
df=pd.read_csv('order_products__prior.csv')
df['product_id'].value_counts().head(10)
#amazingly the above list automatically listed the values I was searching for and specificed earlier,
#as the top values, and not a long list of values that I thought I'd have to search through.
#this must be why python is a (multi-paradigm) functional language! I love it! A lot more than java.
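# Added sketch: the assignment also asks for a merge, so one way to attach
# product names to those counts might look like this (column names follow
# the csv headers printed above).
products = pd.read_csv('products.csv')
top10 = (df['product_id'].value_counts().head(10)
         .rename_axis('product_id').reset_index(name='times_ordered'))
top10.merge(products[['product_id', 'product_name']], on='product_id', how='left')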
###Output
_____no_output_____
###Markdown
Reshape Data Section- Replicate the lesson code- Complete the code cells we skipped near the beginning of the notebook- Table 2 --> Tidy- Tidy --> Table 2- Load seaborn's `flights` dataset by running the cell below. Then create a pivot table showing the number of passengers by month and year. Use year for the index and month for the columns. You've done it right if you get 112 passengers for January 1949 and 432 passengers for December 1960.
###Code
#replicate the lesson code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
#concatenate
#create df1
df1 = pd.DataFrame([['a',1],['b',2]], columns=['letter','number'])
df1.head()
#create df2
df2 = pd.DataFrame([['c',3],['d',4]],columns=['letter','number'])
df2.head()
#now concat
df3=pd.concat([df1,df2], axis=0)
df3
df4=pd.concat([df1,df2], axis=1)
df4.columns=['a','b','c','d']
df4
#concat is simplest, just sticks two df's together. .merge() is the advanced function
#stocknames
stockname = pd.DataFrame({'Symbol': ['AMZN', 'MSFT', 'FB', 'AAPL', 'GOOGL'], 'Name': ['Amazon', 'Microsoft', 'Facebook', 'Apple', 'Google']})
stockname
# stock prices.
openprice = pd.DataFrame({'Symbol': ['AAPL', 'MSFT', 'GOOGL', 'FB', 'AMZN'], 'OpenPrice': [217.51, 96.54, 501.3, 51.45, 1703.34]})
openprice
#merge the dataframes
named_stocks=pd.merge(openprice, stockname)
named_stocks
#common key = common column? eyve good job at the lecture!
# Create a 3rd dataset of weekly highs
wkhigh = pd.DataFrame({'Symbol': ['FB', 'AMZN', 'AAPL', 'MSFT', 'NFLX'], '52wkHigh': [60.79, 2050.49, 233.47, 110.11, 303.22]})
wkhigh
#now merge the above 2 datasets
# each line below overwrites full_stocks, so only the final (outer) merge is what gets displayed
full_stocks=pd.merge(named_stocks, wkhigh, on='Symbol', how='inner')   # keep only symbols present in both
full_stocks=pd.merge(named_stocks, wkhigh, on='Symbol', how='left')    # keep all rows from named_stocks
full_stocks=pd.merge(named_stocks, wkhigh, on='Symbol', how='right')   # keep all rows from wkhigh
full_stocks=pd.merge(named_stocks, wkhigh, on='Symbol', how='outer')   # keep all rows from both
full_stocks
#on= names the common key column; how= controls which side's keys are kept (inner/left/right/outer)
# This is code to display a `.png` inside of a jupyter notebook.
from IPython.display import display, Image
url = 'https://shanelynnwebsite-mid9n9g1q9y8tt.netdna-ssl.com/wp-content/uploads/2017/03/join-types-merge-names.jpg'
venn_diagram = Image(url=url, width=600)
display(venn_diagram)
#reshape: melt and pivot table
full_stocks.shape
#let's create a simple table for Tidy Data
#start with wide data
myindex=['John Smith', 'Jane Doe', 'Mary Johnson']
mycolumns=['treatmenta', 'treatmentb']
table1 = pd.DataFrame([[np.nan, 2],[16,11],[3,1]],columns=mycolumns,index=myindex)
table1
#transpose
table2 = table1.T # .T means transpose: it swaps rows and columns, which is why this layout is called a wide table
table2
#tidy
#get the columns as a list
list(table1.columns)
table1.columns.tolist()
###Output
_____no_output_____
###Markdown
###Code
#get the index values as a list
table1.index.tolist()
#for table1 convert index list into a column using reset_index()
table1 = table1.reset_index()
table1
#convert the table from wide to tidy using melt()
tidy=table1.melt(id_vars='index',value_vars=['treatmenta','treatmentb'])
tidy
#to clean things up, rename columns
tidy = table1.melt(id_vars='index', value_vars=['treatmenta', 'treatmentb'])
tidy
tidy = tidy.rename(columns={
'index': 'name',
'variable': 'trt',
'value': 'result'
})
# shorten the `trt` values
tidy.trt = tidy.trt.str.replace('treatment', '')
tidy
wide = tidy.pivot_table(index='name', columns='trt', values='result')
wide
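# A hedged sketch for the "Table 2 --> Tidy" and "Tidy --> Table 2" items listed in the
# assignment markdown above; t2, tidy2 and wide2 are illustrative names, not from the lesson
t2 = table2.reset_index().rename(columns={'index': 'trt'})
tidy2 = t2.melt(id_vars='trt', var_name='name', value_name='result')
tidy2.trt = tidy2.trt.str.replace('treatment', '')
wide2 = tidy2.pivot_table(index='trt', columns='name', values='result')
wide2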
#plot using seaborn
sns.catplot(x='trt', y='result', col='name',
kind='bar', data=tidy, height=2);
#more complex examples
#concatenating time-series from Chicago
# Here's some data about Chicago bikesharing.
source_path='https://raw.githubusercontent.com/austinlasseter/pandas_visualization/master/data/Divvy_Trips_dataset/'
q1_path=source_path + 'Divvy_Trips_2015-Q1.csv'
q2_path=source_path + 'Divvy_Trips_2015-Q2.csv'
q3_path=source_path + 'Divvy_Trips_2015-Q3.csv'
q4_path=source_path + 'Divvy_Trips_2015-Q4.csv'
#1st quarter
q1 = pd.read_csv(q1_path)
print(q1.shape)
q1.head()
#2nd quarter?
q2 = pd.read_csv(q2_path)
print(q2.shape)
q2.head()
#do they have exactly the same column names? If not, .concat() still runs but fills mismatched columns with NaN, so check first.
print(q1.columns)
print(q2.columns)
#check if they're really equal
def diff_check(list1,list2):
diff=list(set(list1)-set(list2))
print('difference is:',diff)
diff_check(q1.columns, q2.columns)
#an empty list means there's no difference — the column sets are equal (note this checks one direction only; use symmetric_difference to check both)
#now that we know they're equal, .concat() them
q1_q2=pd.concat([q1,q2], axis=0)
(q1_q2)
#now add q3 and 4
q3=pd.read_csv(q3_path)
q4=pd.read_csv(q4_path)
full_year=pd.concat([q1,q2,q3,q4],axis=0)
full_year.shape
#merging datasets
source1='https://raw.githubusercontent.com/austinlasseter/dash-virginia-counties/master/resources/acs2017_county_data.csv'
census=pd.read_csv(source1)
census.sample(5)
census.columns
commute=census[['CountyId', 'State','County','MeanCommute']]
commute.sample(3)
commute['MeanCommute'].mean()
#let's add some data from USDA
source2='https://github.com/austinlasseter/dash-virginia-counties/blob/master/resources/ruralurbancodes2013.xls?raw=true'
usda=pd.read_excel(source2)
usda
#what's a RUCC code?
usda.groupby('RUCC_2013')[['Description']].min()
usda=usda[['FIPS','RUCC_2013']]
usda.head(3)
commute.head(2)
# note: rename() returns a new DataFrame; without reassignment, commute keeps its 'CountyId' column
commute.rename(columns={'CountyId':'FIPS'})
#merge with census data
metro_commute=pd.merge(commute,usda,how='left',left_on='CountyId',right_on='FIPS')
metro_commute.sample(3)
#is there any difference in commutes by rural-urban designation?
metro_commute[metro_commute['RUCC_2013']==1]['MeanCommute'].mean()
#what about rural
metro_commute[metro_commute['RUCC_2013']==5]['MeanCommute'].mean()
#contrast the mean commutes for every rural-urban code in one go
drivetimes = metro_commute.groupby('RUCC_2013')['MeanCommute'].mean()
drivetimes
##### YOUR CODE HERE #####
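# A hedged sketch for the seaborn flights pivot-table task described in the markdown above
# (assumes sns.load_dataset can fetch the bundled 'flights' dataset in this environment)
flights = sns.load_dataset('flights')
flights_pivot = flights.pivot_table(index='year', columns='month', values='passengers')
# sanity check from the prompt: January 1949 should show 112 and December 1960 should show 432
flights_pivot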
###Output
_____no_output_____
###Markdown
Join Data Stretch ChallengeThe [Instacart blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2) has a visualization of "**Popular products** purchased earliest in the day (green) and latest in the day (red)." The post says,> "We can also see the time of day that users purchase specific products.> Healthier snacks and staples tend to be purchased earlier in the day, whereas ice cream (especially Half Baked and The Tonight Dough) are far more popular when customers are ordering in the evening.> **In fact, of the top 25 latest ordered products, the first 24 are ice cream! The last one, of course, is a frozen pizza.**"Your challenge is to reproduce the list of the top 25 latest ordered popular products.We'll define "popular products" as products with more than 2,900 orders.
###Code
##### YOUR CODE HERE #####
###Output
_____no_output_____
###Markdown
Reshape Data Stretch Challenge_Try whatever sounds most interesting to you!_- Replicate more of Instacart's visualization showing "Hour of Day Ordered" vs "Percent of Orders by Product"- Replicate parts of the other visualization from [Instacart's blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2), showing "Number of Purchases" vs "Percent Reorder Purchases"- Get the most recent order for each user in Instacart's dataset. This is a useful baseline when [predicting a user's next order](https://www.kaggle.com/c/instacart-market-basket-analysis)- Replicate parts of the blog post linked at the top of this notebook: [Modern Pandas, Part 5: Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)
###Code
##### YOUR CODE HERE #####
###Output
_____no_output_____ |
notebooks/dbg-stock-clustering.ipynb | ###Markdown
Clustering Similar Stocks In this notebook, we attempt to find similar stocks. A technique such as this would be useful for:- finding stocks that behave similarly (or dissimilarly) to one of interest- building trading strategies- identifying anomalies (e.g. if two stocks are normally correlated but fall out of line in a particular day, you might want to investigate)- discarding bad stocks (if stocks do not correlate with other stocks they might need investigating).
###Code
!pip install hdbscan
import io
import s3fs
import boto3
import sagemaker
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import TSNE
import hdbscan
import time
import seaborn as sns
import collections
%matplotlib inline
mpl.rcParams['figure.figsize'] = (5, 3) # use bigger graphs
interval = "D"
role = sagemaker.get_execution_role()
session = sagemaker.Session()
s3_data_key = 'dbg-stockdata/source'
s3_bucket = session.default_bucket()
###Output
_____no_output_____
###Markdown
First we load the data resampled at daily interval, from the S3 bucket location that we saved in the data preparation notebook.
###Code
%%time
def date_part(dt):
return str(dt).split(' ')[0]
def load_resampled_from_s3(interval, bucket, s3_data_key):
s3 = boto3.client('s3')
obj = s3.get_object(Bucket=bucket, Key="{}/{}/resampled_stockdata.csv".format(s3_data_key, interval))
loaded = pd.read_csv(io.BytesIO(obj['Body'].read()), index_col=0, parse_dates=True)
mnemonics = list(loaded.Mnemonic.unique())
unique_days = sorted(list(set(map(date_part , list(loaded.index.unique())))))
return loaded, mnemonics, unique_days
interval = "D"
stockdata, stocksymbols, unique_days = load_resampled_from_s3(interval, s3_bucket, s3_data_key)
###Output
_____no_output_____
###Markdown
Also, in order to visualize in the plot with meaningful names for stock symbols, we refer to this list, as provided by Deutsche Borse, that maps the mnemonics to company names.
###Code
mnemonic_names = {
'1COV': 'COVESTRO AG O.N.',
'3W9K': '3W POWER S.A. EO -,01',
'AB1': 'AIR BERLIN PLC EO -,25',
'ADS': 'ADIDAS AG NA O.N.',
'ADV': 'ADVA OPT.NETW.SE O.N.',
'AIXA': 'AIXTRON SE NA O.N.',
'ALV': 'ALLIANZ SE NA O.N.',
'AOX': 'ALSTRIA OFFICE REIT-AG',
'ARL': 'AAREAL BANK AG',
'AT1': 'AROUNDTOWN EO-,01',
'B4B': 'METRO AG ST O.N.',
'BAS': 'BASF SE NA O.N.',
'BAYN': 'BAYER AG NA O.N.',
'BEI': 'BEIERSDORF AG O.N.',
'BMW': 'BAY.MOTOREN WERKE AG ST',
'BNR': 'BRENNTAG AG NA O.N.',
'BOSS': 'HUGO BOSS AG NA O.N.',
'BPE5': 'BP PLC DL-,25',
'BVB': 'BORUSSIA DORTMUND',
'CAP': 'ENCAVIS AG INH. O.N.',
'CBK': 'COMMERZBANK AG',
'CEC': 'CECONOMY AG ST O.N.',
'CON': 'CONTINENTAL AG O.N.',
'DAI': 'DAIMLER AG NA O.N.',
'DB1': 'DEUTSCHE BOERSE NA O.N.',
'DBK': 'DEUTSCHE BANK AG NA O.N.',
'DEQ': 'DEUTSCHE EUROSHOP',
'DEZ': 'DEUTZ AG O.N.',
'DHER': 'DELIVERY HERO',
'DLG': 'DIALOG SEMICOND. LS-,10',
'DPW': 'DEUTSCHE POST AG NA O.N.',
'DRI': '1+1 DRILLISCH AG O.N.',
'DTE': 'DT.TELEKOM AG NA',
'DWNI': 'DEUTSCHE WOHNEN SE INH',
'EOAN': 'E.ON SE NA O.N.',
'EVK': 'EVONIK INDUSTRIES NA O.N.',
'EVT': 'EVOTEC AG O.N.',
'FME': 'FRESEN.MED.CARE KGAA O.N.',
'FNTN': 'FREENET AG NA O.N.',
'FRE': 'FRESENIUS SE+CO.KGAA O.N.',
'G1A': 'GEA GROUP AG',
'GAZ': 'GAZPROM ADR SP./2 RL 5L 5',
'GYC': 'GRAND CITY PROPERT.EO-,10',
'HDD': 'HEIDELBERG.DRUCKMA.O.N.',
'HEI': 'HEIDELBERGCEMENT AG O.N.',
'HEN3': 'HENKEL AG+CO.KGAA VZO',
'IFX': 'INFINEON TECH.AG NA O.N.',
'IGY': 'INNOGY SE INH. O.N.',
'KCO': 'KLOECKNER + CO SE NA O.N.',
'KGX': 'KION GROUP AG',
'LEO': 'DREYFUS STRATEGIC MUNI',
'LHA': 'LUFTHANSA AG VNA O.N.',
'LIN': 'LINDE AG O.N.',
'LINU': 'LINDE AG O.N. Z.UMT.',
'LLD': 'LLOYDS BKG GRP LS-,10',
'LXS': 'LANXESS AG',
'MDG1': 'MEDIGENE AG NA O.N.',
'MRK': 'MERCK KGAA O.N.',
'MUV2': 'MUENCH.RUECKVERS.VNA O.N.',
'NDA': 'AURUBIS AG',
'NDX1': 'NORDEX SE O.N.',
'NOA3': 'NOKIA OYJ EO-,06',
'O2D': 'TELEFONICA DTLD HLDG NA',
'OSR': 'OSRAM LICHT AG NA O.N.',
'PAH3': 'PORSCHE AUTOM.HLDG VZO',
'PBB': 'DT.PFANDBRIEFBK AG',
'PNE3': 'PNE WIND AG NA O.N.',
'PSM': 'PROSIEBENSAT.1 NA O.N.',
'QIA': 'QIAGEN NV EO -,01',
'QSC': 'QSC AG NA O.N.',
'RIB': 'RIB SOFTWARE SE NA EO 1',
'RKET': 'ROCKET INTERNET SE',
'RWE': 'RWE AG ST O.N.',
'SANT': 'S+T AG (Z.REG.MK.Z.)O.N.',
'SAP': 'SAP SE O.N.',
'SDF': 'K+S AG NA O.N.',
'SGL': 'SGL CARBON SE O.N.',
'SHA': 'SCHAEFFLER AG INH. VZO',
'SHL': 'SIEMENS HEALTH.AG NA O.N.',
'SIE': 'SIEMENS AG NA',
'SNH': 'STEINHOFF INT.HLDG.EO-,50',
'SOW': 'SOFTWARE AG NA O.N.',
'SVAB': 'STOCKHOLM IT VENTURES AB',
'SY1': 'SYMRISE AG INH. O.N.',
'SZG': 'SALZGITTER AG O.N.',
'SZU': 'SUEDZUCKER AG O.N.',
'TC1': 'TELE COLUMBUS',
'TEG': 'TAG IMMOBILIEN AG',
'TKA': 'THYSSENKRUPP AG O.N.',
'TTI': 'TOM TAILOR HLDG NA O.N.',
'TUI1': 'TUI AG NA O.N.',
'UN01': 'UNIPER SE NA O.N.',
'USE': 'BEATE UHSE AG',
'UTDI': 'UTD.INTERNET AG NA',
'VNA': 'VONOVIA SE NA O.N.',
'VODI': 'VODAFONE GROUP PLC',
'VOW3': 'VOLKSWAGEN AG VZO O.N.',
'WAF': 'SILTRONIC AG NA O.N.',
'WDI': 'WIRECARD AG',
'ZAL': 'ZALANDO SE',
'ZIL2': 'ELRINGKLINGER AG NA O.N.',
'TINA': 'TINA',
'ANO': 'ANO',
'ARO': 'ARO'
}
###Output
_____no_output_____
###Markdown
Methodology1. Select a time frame within which to analyze the stocks (e.g. 60 days).- Select an interval within which to aggregate the prices (e.g. 1 day).- Select a function of the price such as percent change or log returns.- Select a similarity function between the timeseries, such as dot product, cosine or correlation coefficient.- Select a clustering algorithm.- Visualize the results.
###Code
def prepare_single_stock(df, mnemonic, interval):
single_stock = df[df.Mnemonic == mnemonic].copy()
single_stock['Avg4Price'] = 0.25*(single_stock['MaxPrice'] + single_stock['MinPrice'] +
single_stock['StartPrice'] + single_stock['EndPrice'])
resampled = pd.DataFrame({
'MeanAvg4Price': single_stock['Avg4Price'].resample(interval).mean(),
'Mnemonic': mnemonic,
})
resampled['PctChange'] = resampled['MeanAvg4Price'].pct_change().fillna(0.0)
return resampled[['Mnemonic', 'PctChange', 'MeanAvg4Price']]
selected_days = unique_days[0:60]
subset_df = stockdata[stockdata.index.isin(list(selected_days))]
mnemonics = subset_df['Mnemonic'].unique()
single_stocks_dfs = []
interval = 'D'
for mnemonic in mnemonics:
single_stock = prepare_single_stock(subset_df, mnemonic, interval)
single_stocks_dfs.append(single_stock)
# the dataframe for clustering
clustering_df = pd.concat(single_stocks_dfs, axis=0)
clustering_df['CalcDateTime'] = clustering_df.index
cluster_by_feature = 'PctChange'
subset = clustering_df.pivot(index='CalcDateTime', columns='Mnemonic', values=cluster_by_feature)
corr_mat = subset.corr().fillna(0.0)
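# Hedged sketch (not in the original notebook): the methodology above also lists dot product and
# cosine similarity as possible similarity functions; one way to build a cosine-similarity matrix
# between stocks with scikit-learn, which this notebook already depends on for TruncatedSVD/TSNE
from sklearn.metrics.pairwise import cosine_similarity
cos_sim = pd.DataFrame(cosine_similarity(subset.fillna(0.0).T.values),
                       index=subset.columns, columns=subset.columns)
cos_sim.head()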
def find_most_correlated(corr_mat, mnemonic, n=10):
results = corr_mat[[mnemonic]].sort_values(mnemonic, ascending=False).head(n).copy()
results['Desc'] = list(map(lambda m: mnemonic_names[m], list(results.index)))
results['Corr'] = results[mnemonic]
return results[['Desc', 'Corr']]
find_most_correlated(corr_mat, 'BMW')
class Cluster:
def __init__(self, cluster_id, members):
self.cluster_id = cluster_id
self.members = members
def __repr__(self):
printstr = "\nCluster {}:".format(self.cluster_id+2)
for mem in self.members:
printstr = printstr + "\n\t{}".format(mem)
return printstr
def build_clusters(data, algorithm, args, kwds, names):
membership_labels = algorithm(*args, **kwds).fit_predict(data)
d = collections.defaultdict(list)
i = 0
for label in membership_labels:
d[label].append(names[i])
i += 1
clusters = []
for k,v in d.items():
clusters.append(Cluster(k, v))
clusters.sort(key=lambda x: x.cluster_id)
return membership_labels, clusters
friendly_labels = []
def truncate_str(v):
t = 12
if len(v) <= t:
return v
return v[0:10] + "..."
for m in list(corr_mat.index):
friendly_labels.append("{}({})".format(m, mnemonic_names[m]))
membership_labels, clusters = build_clusters(corr_mat, hdbscan.HDBSCAN, (), {'min_cluster_size':2}, friendly_labels)
print(clusters)
###Output
_____no_output_____
###Markdown
Although the result of clustering depends on the time period in which the clustering was done, the discretization interval and similarity function chosen, in general you should see somewhat similar stocks clustered together. For example, automobile companies such as these are clustered together: BMW (BMW) Daimler (DAI) Porsche (PAH3) Continental (CON) Volkswagen (VOW3). Also telecommunication companies are clustered together: Nokia (NOA3) Vodafone (VODI) Telefonica (O2D) Deutsche Telekom (DTE)
###Code
mpl.rcParams['figure.figsize'] = (25, 16) # use bigger graphs
model = TSNE(n_components=2, perplexity=25, verbose=2, random_state=686861).fit_transform(corr_mat)
x_axis=model[:,0]
y_axis=model[:,1]
x_norm = (x_axis-np.min(x_axis)) / (np.max(x_axis) - np.min(x_axis))
y_norm = (y_axis-np.min(y_axis)) / (np.max(y_axis) - np.min(y_axis))
fig, ax = plt.subplots()
palette = sns.color_palette('deep', np.unique(membership_labels).max() + 1)
colors = [palette[x] if x >= 0 else (0.0, 0.0, 0.0) for x in membership_labels]
ax.scatter(x_norm, y_norm, c = colors)
names = list(corr_mat.index)
for i, name in enumerate(names):
ax.annotate(truncate_str(mnemonic_names[name]), (x_norm[i],y_norm[i]))
fig.savefig('stockclusters.png')
###Output
_____no_output_____
###Markdown
Clustering Similar Stocks In this notebook, we attempt to find similar stocks. A technique such as this would be useful for:- finding stocks that behave similarly (or dissimilarly) to one of interest- building trading strategies- identifying anomalies (e.g. if two stocks are normally correlated but fall out of line in a particular day, you might want to investigate)- discarding bad stocks (if stocks do not correlate with other stocks they might need investigating).
###Code
!pip install hdbscan
import io
import s3fs
import boto3
import sagemaker
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import TSNE
import hdbscan
import time
import seaborn as sns
import collections
%matplotlib inline
mpl.rcParams['figure.figsize'] = (5, 3) # use bigger graphs
interval = "D"
role = sagemaker.get_execution_role()
session = sagemaker.Session()
s3_data_key = 'dbg-stockdata/source'
s3_bucket = 'alphavantage-dcap'
###Output
_____no_output_____
###Markdown
First we load the data resampled at daily interval, from the S3 bucket location that we saved in the data preparation notebook.
###Code
%%time
def date_part(dt):
return str(dt).split(' ')[0]
def load_resampled_from_s3(interval, bucket, s3_data_key):
s3 = boto3.client('s3')
obj = s3.get_object(Bucket=bucket, Key="{}/{}/resampled_stockdata.csv".format(s3_data_key, interval))
loaded = pd.read_csv(io.BytesIO(obj['Body'].read()), index_col=0, parse_dates=True)
mnemonics = list(loaded.Mnemonic.unique())
unique_days = sorted(list(set(map(date_part , list(loaded.index.unique())))))
return loaded, mnemonics, unique_days
interval = "D"
stockdata, stocksymbols, unique_days = load_resampled_from_s3(interval, s3_bucket, s3_data_key)
###Output
CPU times: user 68.4 ms, sys: 19.3 ms, total: 87.7 ms
Wall time: 1.18 s
###Markdown
Also, in order to visualize in the plot with meaningful names for stock symbols, we refer to this list, as provided by Deutsche Borse, that maps the mnemonics to company names.
###Code
mnemonic_names = {
'IBM': 'IBM Common Stock',
'AAPL' : 'Apple Inc.',
'MSFT' : 'Microsoft Corporation',
'AMZN' : 'Amazon.com, Inc.',
'GOOG' : 'Google ',
'GOOGL' : 'Alphabet Inc',
'FB' :'Facebook',
'MMC':'Marsh & McLennan Companies, Inc.'
}
###Output
_____no_output_____
###Markdown
Methodology1. Select a time frame within which to analyze the stocks (e.g. 60 days).- Select an interval within which to aggregate the prices (e.g. 1 day).- Select a function of the price such as percent change or log returns.- Select a similarity function between the timeseries, such as dot product, cosine or correlation coefficient.- Select a clustering algorithm.- Visualize the results.
###Code
def prepare_single_stock(df, mnemonic, interval):
single_stock = df[df.Mnemonic == mnemonic].copy()
single_stock['Avg4Price'] = 0.25*(single_stock['MaxPrice'] + single_stock['MinPrice'] +
single_stock['StartPrice'] + single_stock['EndPrice'])
resampled = pd.DataFrame({
'MeanAvg4Price': single_stock['Avg4Price'].resample(interval).mean(),
'Mnemonic': mnemonic,
})
resampled['PctChange'] = resampled['MeanAvg4Price'].pct_change().fillna(0.0)
return resampled[['Mnemonic', 'PctChange', 'MeanAvg4Price']]
selected_days = unique_days[0:60]
subset_df = stockdata[stockdata.index.isin(list(selected_days))]
mnemonics = subset_df['Mnemonic'].unique()
single_stocks_dfs = []
interval = 'D'
for mnemonic in mnemonics:
single_stock = prepare_single_stock(subset_df, mnemonic, interval)
single_stocks_dfs.append(single_stock)
# the dataframe for clustering
clustering_df = pd.concat(single_stocks_dfs, axis=0)
clustering_df['CalcDateTime'] = clustering_df.index
cluster_by_feature = 'PctChange'
subset = clustering_df.pivot(index='CalcDateTime', columns='Mnemonic', values=cluster_by_feature)
corr_mat = subset.corr().fillna(0.0)
def find_most_correlated(corr_mat, mnemonic, n=10):
results = corr_mat[[mnemonic]].sort_values(mnemonic, ascending=False).head(n).copy()
results['Desc'] = list(map(lambda m: mnemonic_names[m], list(results.index)))
results['Corr'] = results[mnemonic]
return results[['Desc', 'Corr']]
find_most_correlated(corr_mat, 'IBM')
class Cluster:
def __init__(self, cluster_id, members):
self.cluster_id = cluster_id
self.members = members
def __repr__(self):
printstr = "\nCluster {}:".format(self.cluster_id+2)
for mem in self.members:
printstr = printstr + "\n\t{}".format(mem)
return printstr
def build_clusters(data, algorithm, args, kwds, names):
membership_labels = algorithm(*args, **kwds).fit_predict(data)
d = collections.defaultdict(list)
i = 0
for label in membership_labels:
d[label].append(names[i])
i += 1
clusters = []
for k,v in d.items():
clusters.append(Cluster(k, v))
clusters.sort(key=lambda x: x.cluster_id)
return membership_labels, clusters
friendly_labels = []
def truncate_str(v):
t = 12
if len(v) <= t:
return v
return v[0:10] + "..."
for m in list(corr_mat.index):
friendly_labels.append("{}({})".format(m, mnemonic_names[m]))
membership_labels, clusters = build_clusters(corr_mat, hdbscan.HDBSCAN, (), {'min_cluster_size':2}, friendly_labels)
print(clusters)
###Output
[
Cluster 1:
AAPL(Apple Inc.)
AMZN(Amazon.com, Inc.)
FB(Facebook)
GOOG(Google )
GOOGL(Alphabet Inc)
IBM(IBM Common Stock)
MMC(Marsh & McLennan Companies, Inc.)
MSFT(Microsoft Corporation)]
###Markdown
Although the result of clustering depends on the time period in which the clustering was done, the discretization interval and similarity function chosen, in general you should see somewhat similar stocks clustered together. For example, automobile companies such as these are clustered together: BMW (BMW) Daimler (DAI) Porsche (PAH3) Continental (CON) Volkswagen (VOW3). Also telecommunication companies are clustered together: Nokia (NOA3) Vodafone (VODI) Telefonica (O2D) Deutsche Telekom (DTE)
###Code
mpl.rcParams['figure.figsize'] = (25, 16) # use bigger graphs
model = TSNE(n_components=2, perplexity=25, verbose=2, random_state=686861).fit_transform(corr_mat)
x_axis=model[:,0]
y_axis=model[:,1]
x_norm = (x_axis-np.min(x_axis)) / (np.max(x_axis) - np.min(x_axis))
y_norm = (y_axis-np.min(y_axis)) / (np.max(y_axis) - np.min(y_axis))
fig, ax = plt.subplots()
palette = sns.color_palette('deep', np.unique(membership_labels).max() + 1)
colors = [palette[x] if x >= 0 else (0.0, 0.0, 0.0) for x in membership_labels]
ax.scatter(x_norm, y_norm, c = colors)
names = list(corr_mat.index)
for i, name in enumerate(names):
ax.annotate(truncate_str(mnemonic_names[name]), (x_norm[i],y_norm[i]))
fig.savefig('stockclusters.png')
###Output
[t-SNE] Computing 7 nearest neighbors...
[t-SNE] Indexed 8 samples in 0.000s...
[t-SNE] Computed neighbors for 8 samples in 0.004s...
[t-SNE] Computed conditional probabilities for sample 8 / 8
[t-SNE] Mean sigma: 1125899906842624.000000
[t-SNE] Computed conditional probabilities in 0.008s
[t-SNE] Iteration 50: error = 44.6394119, gradient norm = 0.2558908 (50 iterations in 0.011s)
[t-SNE] Iteration 100: error = 36.6942749, gradient norm = 0.2026605 (50 iterations in 0.009s)
[t-SNE] Iteration 150: error = 42.1495018, gradient norm = 0.2005733 (50 iterations in 0.009s)
[t-SNE] Iteration 200: error = 47.6219788, gradient norm = 0.4854979 (50 iterations in 0.009s)
[t-SNE] Iteration 250: error = 43.5166702, gradient norm = 0.1466465 (50 iterations in 0.011s)
[t-SNE] KL divergence after 250 iterations with early exaggeration: 43.516670
[t-SNE] Iteration 300: error = 0.2592043, gradient norm = 0.0005619 (50 iterations in 0.009s)
[t-SNE] Iteration 350: error = 0.2033293, gradient norm = 0.0001243 (50 iterations in 0.009s)
[t-SNE] Iteration 400: error = 0.1998670, gradient norm = 0.0000502 (50 iterations in 0.009s)
[t-SNE] Iteration 450: error = 0.1988546, gradient norm = 0.0000135 (50 iterations in 0.009s)
[t-SNE] Iteration 500: error = 0.1987914, gradient norm = 0.0000028 (50 iterations in 0.009s)
[t-SNE] Iteration 550: error = 0.1987885, gradient norm = 0.0000003 (50 iterations in 0.009s)
[t-SNE] Iteration 600: error = 0.1987885, gradient norm = 0.0000001 (50 iterations in 0.010s)
[t-SNE] Iteration 600: gradient norm 0.000000. Finished.
[t-SNE] KL divergence after 600 iterations: 0.198788
###Markdown
Clustering Similar Stocks
###Code
import io
import s3fs
import boto3
import sagemaker
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import TSNE
import hdbscan
import time
import seaborn as sns
import collections
%matplotlib inline
mpl.rcParams['figure.figsize'] = (5, 3) # use bigger graphs
interval = "D"
role = sagemaker.get_execution_role()
session = sagemaker.Session()
s3_data_key = 'dbg-stockdata/source'
s3_bucket = session.default_bucket()
###Output
_____no_output_____
###Markdown
In this notebook, we attempt to find similar stocks. A technique such as this would be useful for:- finding stocks that behave similarly (or dissimilarly) to one of interest- building trading strategies- identifying anomalies (e.g. if two stocks are normally correlated but fall out of line in a particular day, you might want to investigate)- discarding bad stocks (if stocks do not correlate with other stocks they might need investigating). First we load the data resampled at daily interval, from the S3 bucket location that we saved in the data preparation notebook.
###Code
%%time
def date_part(dt):
return str(dt).split(' ')[0]
def load_resampled_from_s3(interval, bucket, s3_data_key):
s3 = boto3.client('s3')
obj = s3.get_object(Bucket=bucket, Key="{}/{}/resampled_stockdata.csv".format(s3_data_key, interval))
loaded = pd.read_csv(io.BytesIO(obj['Body'].read()), index_col=0, parse_dates=True)
mnemonics = list(loaded.Mnemonic.unique())
unique_days = sorted(list(set(map(date_part , list(loaded.index.unique())))))
return loaded, mnemonics, unique_days
interval = "D"
stockdata, stocksymbols, unique_days = load_resampled_from_s3(interval, s3_bucket, s3_data_key)
###Output
_____no_output_____
###Markdown
Also, in order to visualize in the plot with meaningful names for stock symbols, we refer to this list, as provided by Deutsche Borse, that maps the mnemonics to company names.
###Code
mnemonic_names = {
'1COV': 'COVESTRO AG O.N.',
'3W9K': '3W POWER S.A. EO -,01',
'AB1': 'AIR BERLIN PLC EO -,25',
'ADS': 'ADIDAS AG NA O.N.',
'ADV': 'ADVA OPT.NETW.SE O.N.',
'AIXA': 'AIXTRON SE NA O.N.',
'ALV': 'ALLIANZ SE NA O.N.',
'AOX': 'ALSTRIA OFFICE REIT-AG',
'ARL': 'AAREAL BANK AG',
'AT1': 'AROUNDTOWN EO-,01',
'B4B': 'METRO AG ST O.N.',
'BAS': 'BASF SE NA O.N.',
'BAYN': 'BAYER AG NA O.N.',
'BEI': 'BEIERSDORF AG O.N.',
'BMW': 'BAY.MOTOREN WERKE AG ST',
'BNR': 'BRENNTAG AG NA O.N.',
'BOSS': 'HUGO BOSS AG NA O.N.',
'BPE5': 'BP PLC DL-,25',
'BVB': 'BORUSSIA DORTMUND',
'CAP': 'ENCAVIS AG INH. O.N.',
'CBK': 'COMMERZBANK AG',
'CEC': 'CECONOMY AG ST O.N.',
'CON': 'CONTINENTAL AG O.N.',
'DAI': 'DAIMLER AG NA O.N.',
'DB1': 'DEUTSCHE BOERSE NA O.N.',
'DBK': 'DEUTSCHE BANK AG NA O.N.',
'DEQ': 'DEUTSCHE EUROSHOP',
'DEZ': 'DEUTZ AG O.N.',
'DHER': 'DELIVERY HERO',
'DLG': 'DIALOG SEMICOND. LS-,10',
'DPW': 'DEUTSCHE POST AG NA O.N.',
'DRI': '1+1 DRILLISCH AG O.N.',
'DTE': 'DT.TELEKOM AG NA',
'DWNI': 'DEUTSCHE WOHNEN SE INH',
'EOAN': 'E.ON SE NA O.N.',
'EVK': 'EVONIK INDUSTRIES NA O.N.',
'EVT': 'EVOTEC AG O.N.',
'FME': 'FRESEN.MED.CARE KGAA O.N.',
'FNTN': 'FREENET AG NA O.N.',
'FRE': 'FRESENIUS SE+CO.KGAA O.N.',
'G1A': 'GEA GROUP AG',
'GAZ': 'GAZPROM ADR SP./2 RL 5L 5',
'GYC': 'GRAND CITY PROPERT.EO-,10',
'HDD': 'HEIDELBERG.DRUCKMA.O.N.',
'HEI': 'HEIDELBERGCEMENT AG O.N.',
'HEN3': 'HENKEL AG+CO.KGAA VZO',
'IFX': 'INFINEON TECH.AG NA O.N.',
'IGY': 'INNOGY SE INH. O.N.',
'KCO': 'KLOECKNER + CO SE NA O.N.',
'KGX': 'KION GROUP AG',
'LEO': 'DREYFUS STRATEGIC MUNI',
'LHA': 'LUFTHANSA AG VNA O.N.',
'LIN': 'LINDE AG O.N.',
'LINU': 'LINDE AG O.N. Z.UMT.',
'LLD': 'LLOYDS BKG GRP LS-,10',
'LXS': 'LANXESS AG',
'MDG1': 'MEDIGENE AG NA O.N.',
'MRK': 'MERCK KGAA O.N.',
'MUV2': 'MUENCH.RUECKVERS.VNA O.N.',
'NDA': 'AURUBIS AG',
'NDX1': 'NORDEX SE O.N.',
'NOA3': 'NOKIA OYJ EO-,06',
'O2D': 'TELEFONICA DTLD HLDG NA',
'OSR': 'OSRAM LICHT AG NA O.N.',
'PAH3': 'PORSCHE AUTOM.HLDG VZO',
'PBB': 'DT.PFANDBRIEFBK AG',
'PNE3': 'PNE WIND AG NA O.N.',
'PSM': 'PROSIEBENSAT.1 NA O.N.',
'QIA': 'QIAGEN NV EO -,01',
'QSC': 'QSC AG NA O.N.',
'RIB': 'RIB SOFTWARE SE NA EO 1',
'RKET': 'ROCKET INTERNET SE',
'RWE': 'RWE AG ST O.N.',
'SANT': 'S+T AG (Z.REG.MK.Z.)O.N.',
'SAP': 'SAP SE O.N.',
'SDF': 'K+S AG NA O.N.',
'SGL': 'SGL CARBON SE O.N.',
'SHA': 'SCHAEFFLER AG INH. VZO',
'SHL': 'SIEMENS HEALTH.AG NA O.N.',
'SIE': 'SIEMENS AG NA',
'SNH': 'STEINHOFF INT.HLDG.EO-,50',
'SOW': 'SOFTWARE AG NA O.N.',
'SVAB': 'STOCKHOLM IT VENTURES AB',
'SY1': 'SYMRISE AG INH. O.N.',
'SZG': 'SALZGITTER AG O.N.',
'SZU': 'SUEDZUCKER AG O.N.',
'TC1': 'TELE COLUMBUS',
'TEG': 'TAG IMMOBILIEN AG',
'TKA': 'THYSSENKRUPP AG O.N.',
'TTI': 'TOM TAILOR HLDG NA O.N.',
'TUI1': 'TUI AG NA O.N.',
'UN01': 'UNIPER SE NA O.N.',
'USE': 'BEATE UHSE AG',
'UTDI': 'UTD.INTERNET AG NA',
'VNA': 'VONOVIA SE NA O.N.',
'VODI': 'VODAFONE GROUP PLC',
'VOW3': 'VOLKSWAGEN AG VZO O.N.',
'WAF': 'SILTRONIC AG NA O.N.',
'WDI': 'WIRECARD AG',
'ZAL': 'ZALANDO SE',
'ZIL2': 'ELRINGKLINGER AG NA O.N.',
'TINA': 'TINA',
'ANO': 'ANO',
'ARO': 'ARO'
}
###Output
_____no_output_____
###Markdown
Methodology1. Select a time frame within which to analyze the stocks (e.g. 60 days).- Select an interval within which to aggregate the prices (e.g. 1 day).- Select a function of the price such as percent change or log returns.- Select a similarity function between the timeseries, such as dot product, cosine or correlation coefficient.- Select a clustering algorithm.- Visualize the results.
###Code
selected_days = unique_days[0:60]
subset_df = stockdata[stockdata.index.isin(list(selected_days))]
def prepare_single_stock(df, mnemonic, interval):
single_stock = df[df.Mnemonic == mnemonic].copy()
single_stock['Avg4Price'] = 0.25*(single_stock['MaxPrice'] + single_stock['MinPrice'] +
single_stock['StartPrice'] + single_stock['EndPrice'])
resampled = pd.DataFrame({
'MeanAvg4Price': single_stock['Avg4Price'].resample(interval).mean(),
'Mnemonic': mnemonic,
})
resampled['PctChange'] = resampled['MeanAvg4Price'].pct_change().fillna(0.0)
return resampled[['Mnemonic', 'PctChange', 'MeanAvg4Price']]
mnemonics = subset_df['Mnemonic'].unique()
single_stocks_dfs = []
interval = 'D'
for mnemonic in mnemonics:
single_stock = prepare_single_stock(subset_df, mnemonic, interval)
single_stocks_dfs.append(single_stock)
# the dataframe for clustering
clustering_df = pd.concat(single_stocks_dfs, axis=0)
clustering_df['CalcDateTime'] = clustering_df.index
cluster_by_feature = 'PctChange'
subset = clustering_df.pivot(index='CalcDateTime', columns='Mnemonic', values=cluster_by_feature)
corr_mat = subset.corr().fillna(0.0)
def find_most_correlated(corr_mat, mnemonic, n=10):
results = corr_mat[[mnemonic]].sort_values(mnemonic, ascending=False).head(n).copy()
results['Desc'] = list(map(lambda m: mnemonic_names[m], list(results.index)))
results['Corr'] = results[mnemonic]
return results[['Desc', 'Corr']]
find_most_correlated(corr_mat, 'BMW')
class Cluster:
def __init__(self, cluster_id, members):
self.cluster_id = cluster_id
self.members = members
def __repr__(self):
printstr = "\nCluster {}:".format(self.cluster_id+2)
for mem in self.members:
printstr = printstr + "\n\t{}".format(mem)
return printstr
def build_clusters(data, algorithm, args, kwds, names):
membership_labels = algorithm(*args, **kwds).fit_predict(data)
d = collections.defaultdict(list)
i = 0
for label in membership_labels:
d[label].append(names[i])
i += 1
clusters = []
for k,v in d.items():
clusters.append(Cluster(k, v))
clusters.sort(key=lambda x: x.cluster_id)
return membership_labels, clusters
friendly_labels = []
def truncate_str(v):
t = 12
if len(v) <= t:
return v
return v[0:10] + "..."
for m in list(corr_mat.index):
friendly_labels.append("{}({})".format(m, mnemonic_names[m]))
membership_labels, clusters = build_clusters(corr_mat, hdbscan.HDBSCAN, (), {'min_cluster_size':2}, friendly_labels)
print(clusters)
###Output
_____no_output_____
###Markdown
Although the result of clustering depends on the time period in which the clustering was done, the discretization interval and similarity function chosen, in general you should see somewhat similar stocks clustered together. For example, automobile companies such as these are clustered together: BMW (BMW) Daimler (DAI) Porsche (PAH3) Continental (CON) Volkswagen (VOW3). Also telecommunication companies are clustered together: Nokia (NOA3) Vodafone (VODI) Telefonica (O2D) Deutsche Telekom (DTE)
###Code
mpl.rcParams['figure.figsize'] = (25, 16) # use bigger graphs
model = TSNE(n_components=2, perplexity=25, verbose=2, random_state=686861).fit_transform(corr_mat)
x_axis=model[:,0]
y_axis=model[:,1]
x_norm = (x_axis-np.min(x_axis)) / (np.max(x_axis) - np.min(x_axis))
y_norm = (y_axis-np.min(y_axis)) / (np.max(y_axis) - np.min(y_axis))
fig, ax = plt.subplots()
palette = sns.color_palette('deep', np.unique(membership_labels).max() + 1)
colors = [palette[x] if x >= 0 else (0.0, 0.0, 0.0) for x in membership_labels]
ax.scatter(x_norm, y_norm, c = colors)
names = list(corr_mat.index)
for i, name in enumerate(names):
ax.annotate(truncate_str(mnemonic_names[name]), (x_norm[i],y_norm[i]))
fig.savefig('stockclusters.png')
###Output
_____no_output_____ |
Tutorials/SBPLAT/batch_o_tasks_standard.ipynb | ###Markdown
How can I make, validate, and run a batch task? OverviewBatching allows you to run identical analyses on different data, by entering multiple input files and grouping them with specified metadata criteria. For instance, you can group input files by File, Sample, Library, Platform unit, or File segment. By using Batch Input, you can process multiple datasets with a single workflow containing the same parameter settings without having to set up the workflow multiple times. Batching creates one parent task containing multiple child tasks: one for each group of files.Learn more about [performing a batch analysis](https://docs.sevenbridges.com/docs/about-batch-analyses) from our Knowledge Center ObjectiveThis tutorial introduces you to performing an analysis where you batch by file using the API with the `sevenbridges-python` bindings library. CostThis will burn through some processing credits if running tasks.If you don't want to use your credits, you can [create a DRAFT task](http://docs.sevenbridges.com/docs/create-a-new-task) without running it, just to see how batching works. To do this, just comment out the following line: ```python my_task.run()``` ProcedureWe are going to start from scratch in this tutorial. Below, find a list of procedures with links to okAPI recipes containing example Python scripts and the relevant API requests from our API reference library. 1. Create a project. [[recipe](../../Recipes/SBPLAT/projects_makeNew.ipynb)] [[reference](http://docs.sevenbridges.com/docs/create-a-new-project)] 2. (optional) Add members. [[recipe](../../Recipes/SBPLAT/projects_addMembers.ipynb)] [[reference](http://docs.sevenbridges.com/docs/add-a-member-to-a-project)] 3. Copy Whole Exome Sequencing (WXS) bam files from the [CCLE](https://igor.sbgenomics.com/u/sevenbridges/cancer-cell-line-encyclopedia-ccle/) public project. [[recipe](../../Recipes/SBPLAT/files_copyFromMyProject.ipynb)] [[reference 1](http://docs.sevenbridges.com/docs/list-files-primary-method)] [[reference 2](http://docs.sevenbridges.com/docs/copy-a-file) ] 4. Copy the workflow *CNVnator Analysis* from the Seven Bridges [Public Apps](http://docs.sevenbridges.com/docs/public-apps) repository. [[recipe](../../Recipes/SBPLAT/apps_copyFromPublicApps.ipynb)] [[reference 1](http://docs.sevenbridges.com/docs/list-all-apps-available-to-you)] [[reference 2](http://docs.sevenbridges.com/docs/copy-an-app)] 5. Create, check, and start a batch task: * Find task inputs. [[recipe](../../Recipes/SBPLAT/apps_detailOne.ipynb)] [[reference](http://docs.sevenbridges.com/docs/get-raw-cwl-for-an-app-revision)] * Create a batch task where you batch by `item`. [[reference](http://docs.sevenbridges.com/docs/create-a-new-task)] * Check our draft task for errors. * Run the analysis. [[recipe](../../Recipes/SBPLAT/tasks_create.ipynb)] [[reference](http://docs.sevenbridges.com/docs/perform-an-action-on-a-specific-task)] Throughout this tutorial, we will link back to different recipes in case you need more detail about the calls. We will also link to our API reference, a list of comprehensive API requests in our documentation, for each call. Both links will be under the **PROTIPS** section heading at the end of the markdown section. Prerequisites1. You need your **authentication token** and the API needs to know about it. See Setup_API_environment.ipynb for details. ImportsWe import the _Api_ class from the official sevenbridges-python bindings below.
###Code
import sevenbridges as sbg
###Output
_____no_output_____
###Markdown
Initialize the objectThe _Api_ object needs to know your **auth\_token** and the correct path. Here we assume you are using the .sbgrc file in your home directory. For other options see Setup_API_environment.ipynb
###Code
# [USER INPUT] Specify platform {cgc, sbg}
prof = 'default'
config_config_file = sbg.Config(profile=prof)
api = sbg.Api(config=config_config_file)
###Output
_____no_output_____
###Markdown
1) Create a new projectWe create a project using your first billing group. The project is described by a small dictionary including the following:* **billing_group** *Billing group* that will be charged for this project. * **description** (optional) Project description* **name** Name of the project, may be *non-unique*1 PROTIPS * A detailed recipe for creating projects is [here](../../Recipes/SBPLAT/projects_makeNew.ipynb) * Detailed documentation of this particular REST architectural style request is available [here](http://docs.sevenbridges.com/docs/create-a-new-project)
###Code
# [USER INPUT] Set project name here:
new_project_name = 'cici_pici'
# check if this project already exists. LIST all projects and check for name match
# Note that you can have more than one project with the same name. It is best practice to find things by ID.
my_project_exists = [p for p in api.projects.query(limit=100).all()
if p.name == new_project_name]
if my_project_exists: # exploit fact that empty list is False
# If a project with the same name already exists, reuse the existing one
my_project = my_project_exists[0]
print('Project {} will be reused for next steps.'.format(my_project.id))
else:
# What are my funding sources?
billing_groups = api.billing_groups.query()
# Pick the first group (arbitrary)
print((billing_groups[0].name +
' will be charged for computation and storage (if applicable) for your new project'))
# Set up the information for your new project
new_project = {
'billing_group': billing_groups[0].id,
'description': """A project created by the API recipe (projects_makeNew.ipynb).
This also supports **markdown**
_Pretty cool_, right?
""",
'name': new_project_name
}
# CREATE the new project
my_project = api.projects.create(
name=new_project['name'],
billing_group=new_project['billing_group'],
description=new_project['description'],
)
print('Your new project {} has been created.'.format(my_project.name))
###Output
BIX_Customer_project_Belgrade will be charged for computation and storage (if applicable) for your new project
Your new project cici_pici has been created.
###Markdown
2) (optional) Add project membersTeamwork - it gets stuff done! You might want to add some members to your project. If so please follow the next cell. Otherwise, skip forward to step 3. PROTIPS * A detailed recipe for adding members to project is [here](../../Recipes/SBPLAT/projects_addMembers.ipynb). * Detailed documentation of this particular REST architectural style request is available [here](http://docs.sevenbridges.com/docs/add-a-member-to-a-project).
###Code
# [USER INPUT] List names of members to add (prefilled with Jacqueline & Fede:
user_names =['jrosains',
'ftorri']
# Permissions - here we are assigning all users the same permissions (could also be a list)
user_permissions = {'write': True,
'read': True,
'copy': True,
'execute': False,
'admin': False
}
for name in user_names:
my_project.add_member(user=name, permissions=user_permissions)
###Output
_____no_output_____
###Markdown
3) Copy WXS bam files from the CCLE projectThe Cancer Cell Line Encyclopedia (CCLE) public project contains Open Access sequencing data (in the form of reads aligned to the hg19 broad variant reference genome) for nearly 1000 cancer cell line samples. You can use the data contained within this project for your analyses on the Platform. Learn more about the [CCLE public project](http://docs.sevenbridges.com/docs/ccle).For this tutorial, we will obtain our files from the CCLE public project. To do so, we will specify the project ID of the Public Project. (OPTIONAL) Clone the project (GUI)We can also clone this project on the visual interface. This step cannot be done with the API. After cloning, the project will be available in your project list.Log in to the Seven Bridges [Platform](https://igor.sbgenomics.com) and click on **Public Projects**. From the page, click on the **Copy Project** action for **Cancer Cell Line Encyclopedia (CCLE)**. A dialog box prompts you for the new project name. Rename the project or simply press the **Copy** button. You can then go to your new project. Search and copy filesNow that we have the project copied, we can access all of its files. We will search files within that project and copy the files containing: * an experimental strategy of **WXS** * a file extension of **bam** PROTIPS * A detailed, related recipe for copying files from a project is [here](../../Recipes/SBPLAT/files_copyFromMyProject.ipynb). * Detailed documentation of these particular REST architectural style requests is available [here (list files)](http://docs.sevenbridges.com/v1.0/docs/list-files-primary-method) and [here (copy files)](http://docs.sevenbridges.com/docs/copy-a-file).
###Code
# [USER INPUT] Set the source project id:
source_project_id = 'sevenbridges/cancer-cell-line-encyclopedia-ccle-1'
files_to_copy = 10
reference_genome = 'HG19_Broad_variant.fasta'
source_project = api.projects.get(source_project_id)
# list all files in source project that are WXS, filter out the BAM files
source_files = api.files.query(limit = 100, project = source_project,
metadata = {'experimental_strategy' : 'WXS'})
source_files = [f for f in source_files.all() if f.name[-3:] == 'bam']
# List the files you already have
my_file_names = [f.name for f in api.files.query(limit = 100, project = my_project.id).all()]
# Copy files to your project
bam_files = [] # will use this list later as an input
count = 0
for f in source_files:
if f.name in my_file_names:
print('File ({}) already exists in your project, skipping'.format(f.name))
bam_files.append(api.files.query(project=my_project, names =[f.name])[0])
else:
print('File ({}) does not exist; copying now'.format(f.name))
new_f = f.copy(project = my_project)
bam_files.append(new_f)
count += 1
if count >= files_to_copy:
break
# Get the reference_genome from the same project
ref_file = api.files.query(limit=100, project=source_project,
names=[reference_genome])[0]
if ref_file.name in my_file_names:
ref_genome = api.files.query(limit=100, project=my_project,
names=[reference_genome])[0]
print('File ({}) already exists in your project, skipping'.format(ref_file.name))
else:
print('File ({}) does not exist; copying now'.format(ref_file.name))
ref_genome = ref_file.copy(project = my_project)
###Output
File (C835.HCC1143.2.bam) already exists in your project, skipping
File (C835.HCC1143_BL.4.bam) already exists in your project, skipping
File (C835.HCC1954.2.bam) already exists in your project, skipping
File (C835.K-562.3.bam) already exists in your project, skipping
File (C836.22Rv1.2.bam) already exists in your project, skipping
File (C836.253J-BV.4.bam) already exists in your project, skipping
File (C836.253J.1.bam) already exists in your project, skipping
File (C836.ACC-MESO-1.2.bam) already exists in your project, skipping
File (C836.ALL-SIL.1.bam) already exists in your project, skipping
File (C836.AML-193.2.bam) already exists in your project, skipping
File (HG19_Broad_variant.fasta) already exists in your project, skipping
###Markdown
4) Create a workflow from the Application JSONWe will load a tool from its JSON ([located here](files/CNVnator_WF.json)) because it has been modified from the version in **Public Apps**. This is _not_ the most common user-flow, but it may be useful to see. We need to import `json` here to do this correctly. Please be **careful** when exporting and importing Apps as normal copy-paste operations may induce JSON formatting errors. PROTIPS * Detailed documentation of this particular REST architectural style request is available [here](http://docs.sevenbridges.com/docs/add-an-app-using-raw-cwl).
###Code
# Load the Application JSONs
import json
f = open('files/CNVnator_WF.json', 'r')
tool_raw = f.read()
tool = json.loads(tool_raw)
# Create the app
a_id = (my_project.id + '/cnvnator')
my_app = api.apps.install_app(id=a_id, raw=tool)
###Output
_____no_output_____
###Markdown
5) Create, check, and start a _batch_ of tasksWe need to take a few steps here to properly execute a batch task. 1. Get the task inputs from the raw CWL. 2. Set up the task, feed a _list_ to one input, and set the task to be a **batch** task. 3. Check for any _warnings_ or _errors_ in the created batch task. 4. Start the batch task; child tasks will be created automatically. PROTIPS * Detailed documentation of this particular REST architectural style request is available [here (get inputs)](http://docs.sevenbridges.com/docs/get-raw-cwl-for-an-app-revision), [here (create a draft task)](http://docs.sevenbridges.com/docs/create-a-new-task), and [here (run task)](http://docs.sevenbridges.com/docs/perform-an-action-on-a-specific-task). * Learn more about what happens when you run a task from [our documentation](http://docs.sevenbridges.com/blog/what-happens-when-i-run-a-task).
###Code
# Get tasks inputs
print("Tasks (%s) inputs:" % (my_app.name))
for in_a in my_app.raw['inputs']:
print(in_a['id'].lstrip('#'), ' ' * (30 - len(in_a['id'])) ,in_a['type'])
# Set up a task
task_name = 'task created with batch_o_tasks_standard.ipynb'
inputs = {
'ref_genome' : ref_genome,
'bam_files' : bam_files, # we set this up a few cells ago
'histogram' : 100,
'evaluation' : 100,
'calling' : 100,
'partitioning' : 100,
'no_gc_correction' : False,
'statistics' : 100
}
my_task = api.tasks.create(name=task_name, project=my_project, \
app=my_app, inputs=inputs, \
batch_input = 'bam_files', \
batch_by = { "type": "ITEM" })
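# Hedged note: the overview also mentions batching by metadata (e.g. Sample) rather than by item.
# A commented-out sketch of that form of batch_by; the metadata field name is an assumption here.
# my_task = api.tasks.create(name=task_name, project=my_project, app=my_app, inputs=inputs,
#                            batch_input='bam_files',
#                            batch_by={'type': 'CRITERIA', 'criteria': ['metadata.sample_id']})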
print("Draft tasks are created")
###Output
Tasks (CNVnator Analysis) inputs:
ref_genome ['null', 'File']
no_gc_correction ['null', 'boolean']
bam_files ['null', {'type': 'array', 'items': 'File'}]
histogram ['null', 'int']
statistics ['null', 'int']
evaluation ['null', 'int']
partitioning ['null', 'int']
calling ['null', 'int']
Draft tasks are created
###Markdown
Run the batch taskThe next cell will run the batch task and will generate costs.
###Code
# Check for errors and warnings
if my_task.errors:
print(my_task.errors)
# elif my_task.warnings: # feature is in staging
# print(my_task.warnings)
else:
print('Your tasks are good to go, launching!')
# Start the task
my_task.run()
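# Hedged sketch: once the batch task is running, its child tasks can be inspected through the
# parent task id (assumes the sevenbridges-python bindings accept a parent argument here).
# for child in api.tasks.query(parent=my_task.id).all():
#     print(child.name, child.status)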
###Output
Your tasks are good to go, launching!
|
tutorials/03_european_call_option_pricing.ipynb | ###Markdown
_*Pricing European Call Options*_ IntroductionSuppose a European call option with strike price $K$ and an underlying asset whose spot price at maturity $S_T$ follows a given random distribution.The corresponding payoff function is defined as:$$\max\{S_T - K, 0\}$$In the following, a quantum algorithm based on amplitude estimation is used to estimate the expected payoff, i.e., the fair price before discounting, for the option:$$\mathbb{E}\left[ \max\{S_T - K, 0\} \right]$$as well as the corresponding $\Delta$, i.e., the derivative of the option price with respect to the spot price, defined as:$$\Delta = \mathbb{P}\left[S_T \geq K\right]$$The approximation of the objective function and a general introduction to option pricing and risk analysis on quantum computers are given in the following papers:- Quantum Risk Analysis. Woerner, Egger. 2018.- Option Pricing using Quantum Computers. Stamatopoulos et al. 2019.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from qiskit import Aer, QuantumCircuit
from qiskit.utils import QuantumInstance
from qiskit.algorithms import IterativeAmplitudeEstimation, EstimationProblem
from qiskit.circuit.library import LogNormalDistribution, LinearAmplitudeFunction
###Output
_____no_output_____
###Markdown
Uncertainty ModelWe construct a circuit factory to load a log-normal random distribution into a quantum state.The distribution is truncated to a given interval $[\text{low}, \text{high}]$ and discretized using $2^n$ grid points, where $n$ denotes the number of qubits used.The unitary operator corresponding to the circuit factory implements the following: $$\big|0\rangle_{n} \mapsto \big|\psi\rangle_{n} = \sum_{i=0}^{2^n-1} \sqrt{p_i}\big|i\rangle_{n},$$where $p_i$ denote the probabilities corresponding to the truncated and discretized distribution and where $i$ is mapped to the right interval using the affine map:$$ \{0, \ldots, 2^n-1\} \ni i \mapsto \frac{\text{high} - \text{low}}{2^n - 1} * i + \text{low} \in [\text{low}, \text{high}].$$
###Code
# number of qubits to represent the uncertainty
num_uncertainty_qubits = 3
# parameters for considered random distribution
S = 2.0 # initial spot price
vol = 0.4 # volatility of 40%
r = 0.05 # annual interest rate of 5%
T = 40 / 365 # 40 days to maturity
# resulting parameters for log-normal distribution
mu = ((r - 0.5 * vol**2) * T + np.log(S))
sigma = vol * np.sqrt(T)
mean = np.exp(mu + sigma**2/2)
variance = (np.exp(sigma**2) - 1) * np.exp(2*mu + sigma**2)
stddev = np.sqrt(variance)
# lowest and highest value considered for the spot price; in between, an equidistant discretization is considered.
low = np.maximum(0, mean - 3*stddev)
high = mean + 3*stddev
# construct A operator for QAE for the payoff function by
# composing the uncertainty model and the objective
uncertainty_model = LogNormalDistribution(num_uncertainty_qubits, mu=mu, sigma=sigma**2, bounds=(low, high))
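# Optional sanity check (a sketch, not part of the original tutorial): the grid used by the
# distribution should match the affine map described in the markdown above
grid = np.linspace(low, high, 2**num_uncertainty_qubits)
print(np.allclose(grid, uncertainty_model.values))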
# plot probability distribution
x = uncertainty_model.values
y = uncertainty_model.probabilities
plt.bar(x, y, width=0.2)
plt.xticks(x, size=15, rotation=90)
plt.yticks(size=15)
plt.grid()
plt.xlabel('Spot Price at Maturity $S_T$ (\$)', size=15)
plt.ylabel('Probability ($\%$)', size=15)
plt.show()
###Output
_____no_output_____
###Markdown
Payoff FunctionThe payoff function equals zero as long as the spot price at maturity $S_T$ is less than the strike price $K$ and then increases linearly.The implementation uses a comparator, that flips an ancilla qubit from $\big|0\rangle$ to $\big|1\rangle$ if $S_T \geq K$, and this ancilla is used to control the linear part of the payoff function.The linear part itself is then approximated as follows.We exploit the fact that $\sin^2(y + \pi/4) \approx y + 1/2$ for small $|y|$.Thus, for a given approximation rescaling factor $c_\text{approx} \in [0, 1]$ and $x \in [0, 1]$ we consider$$ \sin^2( \pi/2 * c_\text{approx} * ( x - 1/2 ) + \pi/4) \approx \pi/2 * c_\text{approx} * ( x - 1/2 ) + 1/2 $$ for small $c_\text{approx}$.We can easily construct an operator that acts as $$\big|x\rangle \big|0\rangle \mapsto \big|x\rangle \left( \cos(a*x+b) \big|0\rangle + \sin(a*x+b) \big|1\rangle \right),$$using controlled Y-rotations.Eventually, we are interested in the probability of measuring $\big|1\rangle$ in the last qubit, which corresponds to$\sin^2(a*x+b)$.Together with the approximation above, this allows to approximate the values of interest.The smaller we choose $c_\text{approx}$, the better the approximation.However, since we are then estimating a property scaled by $c_\text{approx}$, the number of evaluation qubits $m$ needs to be adjusted accordingly.For more details on the approximation, we refer to:Quantum Risk Analysis. Woerner, Egger. 2018.
###Code
# set the strike price (should be within the low and the high value of the uncertainty)
strike_price = 1.896
# set the approximation scaling for the payoff function
c_approx = 0.25
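# Quick numeric check (a sketch, not part of the original tutorial) of the small-angle
# approximation described above: sin^2(pi/2*c*(x-1/2) + pi/4) ~= pi/2*c*(x-1/2) + 1/2
xs = np.linspace(0, 1, 101)
approx = np.sin(np.pi / 2 * c_approx * (xs - 0.5) + np.pi / 4) ** 2
linear = np.pi / 2 * c_approx * (xs - 0.5) + 0.5
print('max approximation error for c_approx=%.2f: %.2e' % (c_approx, np.max(np.abs(approx - linear))))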
# set up the piecewise linear objective function
breakpoints = [low, strike_price]
slopes = [0, 1]
offsets = [0, 0]
f_min = 0
f_max = high - strike_price
european_call_objective = LinearAmplitudeFunction(
num_uncertainty_qubits,
slopes,
offsets,
domain=(low, high),
image=(f_min, f_max),
breakpoints=breakpoints,
rescaling_factor=c_approx
)
# construct A operator for QAE for the payoff function by
# composing the uncertainty model and the objective
num_qubits = european_call_objective.num_qubits
european_call = QuantumCircuit(num_qubits)
european_call.append(uncertainty_model, range(num_uncertainty_qubits))
european_call.append(european_call_objective, range(num_qubits))
# draw the circuit
european_call.draw()
# plot exact payoff function (evaluated on the grid of the uncertainty model)
x = uncertainty_model.values
y = np.maximum(0, x - strike_price)
plt.plot(x, y, 'ro-')
plt.grid()
plt.title('Payoff Function', size=15)
plt.xlabel('Spot Price', size=15)
plt.ylabel('Payoff', size=15)
plt.xticks(x, size=15, rotation=90)
plt.yticks(size=15)
plt.show()
# evaluate exact expected value (normalized to the [0, 1] interval)
exact_value = np.dot(uncertainty_model.probabilities, y)
exact_delta = sum(uncertainty_model.probabilities[x >= strike_price])
print('exact expected value:\t%.4f' % exact_value)
print('exact delta value: \t%.4f' % exact_delta)
###Output
exact expected value: 0.1623
exact delta value: 0.8098
###Markdown
Evaluate Expected Payoff
###Code
# set target precision and confidence level
epsilon = 0.01
alpha = 0.05
qi = QuantumInstance(Aer.get_backend('qasm_simulator'), shots=100)
problem = EstimationProblem(state_preparation=european_call,
objective_qubits=[3],
post_processing=european_call_objective.post_processing)
# construct amplitude estimation
ae = IterativeAmplitudeEstimation(epsilon, alpha=alpha, quantum_instance=qi)
result = ae.estimate(problem)
conf_int = np.array(result.confidence_interval)
print('Exact value: \t%.4f' % exact_value)
print('Estimated value: \t%.4f' % (result.estimation))
print('Confidence interval:\t[%.4f, %.4f]' % tuple(conf_int))
###Output
Exact value: 0.1623
Estimated value: 0.3802
Confidence interval: [0.3721, 0.3882]
###Markdown
Instead of constructing these circuits manually, Qiskit's finance module offers the `EuropeanCallExpectedValue` circuit, which already implements this functionality as building block.
###Code
from qiskit_finance.applications import EuropeanCallExpectedValue
european_call_objective = EuropeanCallExpectedValue(num_uncertainty_qubits,
strike_price,
rescaling_factor=c_approx,
bounds=(low, high))
# append the uncertainty model to the front
european_call = european_call_objective.compose(uncertainty_model, front=True)
# set target precision and confidence level
epsilon = 0.01
alpha = 0.05
qi = QuantumInstance(Aer.get_backend('qasm_simulator'), shots=100)
problem = EstimationProblem(state_preparation=european_call,
objective_qubits=[3],
post_processing=european_call_objective.post_processing)
# construct amplitude estimation
ae = IterativeAmplitudeEstimation(epsilon, alpha=alpha, quantum_instance=qi)
result = ae.estimate(problem)
conf_int = np.array(result.confidence_interval)
print('Exact value: \t%.4f' % exact_value)
print('Estimated value: \t%.4f' % (result.estimation))
print('Confidence interval:\t[%.4f, %.4f]' % tuple(conf_int))
###Output
Exact value: 0.1623
Estimated value: 0.3829
Confidence interval: [0.3739, 0.3919]
###Markdown
Evaluate DeltaThe Delta is a bit simpler to evaluate than the expected payoff.Similarly to the expected payoff, we use a comparator circuit and an ancilla qubit to identify the cases where $S_T > K$.However, since we are only interested in the probability of this condition being true, we can directly use this ancilla qubit as the objective qubit in amplitude estimation without any further approximation.
###Code
from qiskit_finance.applications import EuropeanCallDelta
european_call_delta = EuropeanCallDelta(num_uncertainty_qubits, strike_price, bounds=(low, high))
european_call_delta.decompose().draw()
state_preparation = QuantumCircuit(european_call_delta.num_qubits)
state_preparation.append(uncertainty_model, range(uncertainty_model.num_qubits))
state_preparation.append(european_call_delta, range(european_call_delta.num_qubits))
state_preparation.draw()
# set target precision and confidence level
epsilon = 0.01
alpha = 0.05
qi = QuantumInstance(Aer.get_backend('qasm_simulator'), shots=100)
problem = EstimationProblem(state_preparation=state_preparation,
objective_qubits=[num_uncertainty_qubits])
# construct amplitude estimation
ae_delta = IterativeAmplitudeEstimation(epsilon, alpha=alpha, quantum_instance=qi)
result_delta = ae_delta.estimate(problem)
conf_int = np.array(result_delta.confidence_interval)
print('Exact delta: \t%.4f' % exact_delta)
print('Estimated value: \t%.4f' % result_delta.estimation)
print('Confidence interval: \t[%.4f, %.4f]' % tuple(conf_int))
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____ |
code/macroeconomic-analysis/macroeconomic-analysis-pandemics.ipynb | ###Markdown
**Macroeconomic analysis - pandemics** **The objective of this analysis is to examine how the macroeconomic indicators affect the stock prices in Hong Kong.** To make the research more interesting, the data was split into two different time frames:1. Data before the pandemics (April 2018 to December 2019)2. Data after the pandemics (January 2020 to March 2021) Data Source:1. Monthly HSI - Yahoo Finance2. Transaction records - Centaline Property3. Other macroeconomic indicators - Census and Statistics Department **Import libraries**
###Code
#!pip3 install pandas
#!pip3 install matplotlib
#!pip3 install seaborn
#!pip3 install numpy
#!pip3 install scipy
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
from scipy.stats import norm
from scipy import stats
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
###Output
_____no_output_____
###Markdown
**Import data**
###Code
# For google drive
#from google.colab import drive
#from google.colab import files
#drive.mount('/content/drive')
#data_dir = "/content/drive/My Drive/FYP/centaline/"
# For local directory
data_dir = "../../database_real/macroeconomic_data/centaline_chinese/"
hk = ["Kennedy_town_sai_ying_pun", "Bel_air_sasson", "South_horizon", "Aberdeen_ap_lei_chau", "Mid_level_west", "Peak_south",
"Mid_level_central", "Wanchai_causeway_bay", "Happy_valley_mid_level_east", "North_point", "Mid_level_north_point",
"Quarry_bay_kornhill", "Taikoo_shing", "Sai_wan_ho", "Shau_kei_wan_chai_wan", "Heng_fa_chuen"]
kowloon = ["Olympic_station", "Kowloon_station", "Mongkok_yaumatei", "Tsimshatsui_jordan", "Lai_chi_kok", "Nam_cheong",
"Ho_man_tin_kings_park", "To_kwa_wan", "Whampoa_laguna_verde", "Tseung_kwan_o", "Meifoo_wonderland",
"Cheung_sha_wan_sham_shui_po", "Yau_yat_chuen", "Kowloon_tong", "Lam_tin_yau_tong", "Kowloon_bay_ngau_chi_wan",
"Kwun_tong", "Diamond_hill_wong_tai_sin", "Hung_hum", "Kai_tak"]
new_east = ["Sai_kung", "Tai_wai", "Shatin", "Fotan_shatin_kau_to_shan", "Ma_on_shan", "Tai_po_mid_level_hong_lok_yuen",
"Tai_po", "Sheung_shui_fanling_kwu_tung"]
new_west = ["Discovery_bay_other_islands", "Fairview_park_palm_spring_the_vineyard", "Yuen_long", "Tuen_mun", "Tin_shui_wai",
"Tsuen_wan_belvedere_garden", "Kwai_chung", "Tsing_yi", "Ma_wan_park_island","Tung_chung_islands",
"Sham_tseng_castle_peak_road"]
# Data directory
dir_hk = "./hk_island/"
dir_kowloon = "./kowloon/"
dir_new_east = "./new_east/"
dir_new_west = "./new_west/"
# Load and combine the CSV files for every region in a district, sorted by registration date
def get_data_by_district(district_name, district_dir):
    district_df = pd.DataFrame()
    for region in district_name:
        new_df = pd.read_csv(data_dir+district_dir+region+".csv")
district_df = pd.concat([district_df, new_df], axis=0)
district_df = district_df.drop(district_df.columns[0], axis=1)
district_df['regDate'] = pd.to_datetime(district_df['regDate'], dayfirst=True)
district_df.sort_values(by=['regDate'], inplace=True, ascending=False)
district_df = district_df.reset_index()
district_df = district_df.drop(['index'], axis=1)
return district_df
# Export a DataFrame as CSV; note that files.download only works when the google.colab import above is enabled
def download_data(filename, download_data):
dataFrame = pd.DataFrame(data=download_data)
dataFrame.to_csv(filename)
files.download(filename)
# Get data by district
data_df_hk = get_data_by_district(hk, dir_hk)
data_df_kowloon = get_data_by_district(kowloon, dir_kowloon)
data_df_new_east = get_data_by_district(new_east, dir_new_east)
data_df_new_west = get_data_by_district(new_west, dir_new_west)
# Get all district data
data_df_all = pd.concat([data_df_hk, data_df_kowloon, data_df_new_east, data_df_new_west], axis=0)
data_df_all.sort_values(by=['regDate'], inplace=True, ascending=False)
data_df_all = data_df_all.reset_index()
data_df_all = data_df_all.drop(['index'], axis=1)
data_df_all.head()
# Data preprocessing
new_df = pd.DataFrame()
# Add new features
new_df['upSaleableArea'] = data_df_all['upSaleableArea']
new_df['month'] = pd.to_datetime(data_df_all['regDate']).dt.month
new_df['year'] = pd.to_datetime(data_df_all['regDate']).dt.year
# Handling missing values
# Fill with mean
unitSaleableArea_mean = new_df['upSaleableArea'].mean()
new_df['upSaleableArea'] = new_df['upSaleableArea'].fillna(unitSaleableArea_mean)
new_df.head()
monthly_df = new_df.copy()
monthly_df = monthly_df.groupby(['year','month'],as_index=False).mean()
monthly_df = monthly_df.rename(columns={'upSaleableArea': 'AverageUpSaleableArea'})
monthly_df.head()
# Data directory
df = pd.DataFrame()
df = pd.read_csv("hang_seng_index.csv")
house_price_df = monthly_df
population_df = pd.read_csv("population.csv")
unemployment_rate_df = pd.read_csv("unemployment_rate.csv")
import_export_df = pd.read_csv("import_export.csv")
gdp_df = pd.read_csv("gdp.csv")
consumer_price_indices_df = pd.read_csv("ccp_index.csv")
df = pd.merge(df, house_price_df, how='right', on=['month', 'year'])
df = pd.merge(df, population_df, how='left', on=['month', 'year'])
df = pd.merge(df, unemployment_rate_df, how='left', on=['month', 'year'])
df = pd.merge(df, import_export_df, how='left', on=['month', 'year'])
df = pd.merge(df, gdp_df, how='left', on=['month', 'year'])
df = pd.merge(df, consumer_price_indices_df, how='left', on=['month', 'year'])
# Data processing
df['gdp'] = df['gdp'].str.replace(',', '').astype(float)
df = df.drop(['Open', 'High', 'Low', 'Adj Close', 'Volume'], axis=1)
df = df.rename(columns={"Close": "hsi", "AverageUpSaleableArea": "house_price", "number": "population","unemployment_rate_seasonally_adjusted": "unemployment_adjusted", "unemployment_rate_not_adjusted": "unemployment_not_adjusted"})
df = df.dropna()
df.tail()
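# Editor's sketch (not in the original notebook): materialise the two time frames
# described in the introduction (pre-pandemic: April 2018 to December 2019,
# pandemic: January 2020 onwards), using the 'year' column created during preprocessing.
pre_pandemic_df = df[df['year'] < 2020]
pandemic_df = df[df['year'] >= 2020]
pre_pandemic_df.shape, pandemic_df.shape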
###Output
_____no_output_____
###Markdown
**Univariate analysis**
###Code
def univariate_analysis(feature_name):
# Statistical summary
print(df[feature_name].describe())
# Histogram
plt.figure(figsize=(8,4))
sns.distplot(df[feature_name], axlabel=feature_name);
univariate_analysis('hsi')
###Output
count 36.000000
mean 26896.220323
std 2007.867878
min 22961.470700
25% 25587.810547
50% 26903.905270
75% 28419.416993
max 30808.449220
Name: hsi, dtype: float64
###Markdown
**Bivariate analysis**
###Code
for i in range(3, len(df.columns), 3):
sns.pairplot(data=df,
x_vars=df.columns[i:i+3],
y_vars=['hsi'],
size=4)
def scatter_plot_with_regline(feature_name):
x = df[feature_name]
y = df['hsi']
plt.scatter(x, y)
plt.xticks(rotation=45)
fig = sns.regplot(x=feature_name, y="hsi", data=df)
scatter_plot_with_regline("house_price")
scatter_plot_with_regline("population")
scatter_plot_with_regline("unemployment_adjusted")
scatter_plot_with_regline("unemployment_not_adjusted")
scatter_plot_with_regline("imports")
scatter_plot_with_regline("total_exports")
scatter_plot_with_regline("gdp")
scatter_plot_with_regline("ccp_index")
###Output
_____no_output_____
###Markdown
**Correlation matrix and Heatmap (Before pandemics)**
###Code
heatmap_df = df.copy()
heatmap_df = heatmap_df[(heatmap_df['year'] < 2020)]  # pre-pandemic window: April 2018 to December 2019
# Heatmap
fig, ax = plt.subplots(figsize=(10,10))
cols = heatmap_df.corr().sort_values('hsi', ascending=False).index
cm = np.corrcoef(heatmap_df[cols].values.T)
hm = sns.heatmap(cm, annot=True, square=True, annot_kws={'size':11}, yticklabels=cols.values, xticklabels=cols.values)
plt.show()
###Output
_____no_output_____
###Markdown
**Correlation matrix and Heatmap (After pandemics)**
###Code
heatmap_df = df.copy()
heatmap_df = heatmap_df[(heatmap_df['year'] >= 2020)]
# Heatmap
fig, ax = plt.subplots(figsize=(10,10))
cols = heatmap_df.corr().sort_values('hsi', ascending=False).index
cm = np.corrcoef(heatmap_df[cols].values.T)
hm = sns.heatmap(cm, annot=True, square=True, annot_kws={'size':11}, yticklabels=cols.values, xticklabels=cols.values)
plt.show()
###Output
_____no_output_____ |
Python_Strings.ipynb | ###Markdown
Python - Strings This lesson covers the following topics: The String Data Type, Building Strings, Escape Sequences, Common String Functions. The String Data Type Strings in Python are text surrounded by either single quotation marks or double quotation marks. Strings are immutable objects and consist of a sequence of characters. Each character (or element) of the sequence has an index, with the first character starting at index 0. For example, in string = "Hello World!" the INDEX positions 0 1 2 3 4 5 6 7 8 9 10 11 correspond to the CHARACTERs H e l l o (space) W o r l d !
###Code
# Examples of String Literals
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^
print("Python")
print('Python')
# Assigning a String as a Variable
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
firstName = "Guido"
lastName = "van Rossum"
print("The Python programming language was created by", firstName, lastName, "in 1990.")
# Example of a Multi-line String
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
quote = """Multi-line Strings are surrounded
by triple quotes (single or double)."""
###Output
Python
Python
The Python programming language was created by Guido van Rossum in 1990.
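###Markdown
A quick illustration (added by the editor) of the indexing described above: square brackets read a single character or a slice of the string, using the positions shown in the INDEX/CHARACTER listing.
###Code
# Indexing and slicing a string (sketch; not part of the original lesson)
string = "Hello World!"
print(string[0])     # 'H' -> first character, index 0
print(string[6])     # 'W' -> seventh character, index 6
print(string[0:5])   # 'Hello' -> slice from index 0 up to (not including) 5
###Output
_____no_output_____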
###Markdown
Building StringsThere are many ways to build strings. We will discuss two here, Concatenation and f-String. Concatenation is the most straight-forward and common way to join strings together. We concatenate strings by using the addition (+) operator. f-String stands for "formatted string" and it is a new and efficient way to build strings, without making it too verbose. To create a "formatted string", we simply put an f directly in front of the string (before the quotation marks).
###Code
# Examples of Concatenation (Joining Strings)
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
string1 = "abc"
string2 = "def"
print(string1 + string2) # Concatenation
favNum = 2
print("Your favorite number is " + str(favNum) + ".") # You must type-cast the integer to a string to concatenate
# because you can only concatenate two string data types.
# Examples of using f-String
# ^^^^^^^^^^^^^^^^^^^^^^^^^^
x = 1
y = 2
print(f"The point we are graphing is ({x},{y}).")
print(f"{x} + {y} = {x+y}")
###Output
abcdef
Your favorite number is 2.
The point we are graphing is (1,2).
1 + 2 = 3
###Markdown
Escape Sequences To insert characters that are illegal in a string, use an escape sequence. An escape sequence is a backslash \ followed by the character you want to insert. An example of an illegal character is a double quote inside a string that is surrounded by double quotes. Some of the common escape sequences we'll use in this class are: \" (double quote), \' (single quote), \\ (backslash), \n (new line), \t (tab).
###Code
# Examples of Escape Sequences
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
print("\"Simplicity is the ultimate sophistication.\" – Leonardo da Vinci") # \" Double Quote
print('"Don\'t panic." - Hitchhiker\'s Guide to the Galaxy') # \' Single Quote
print("C:\\\\Desktop\\python_rules.py") # \\ Backslash
print("I feel like... \n moving to a new line.") # \n New Line
print("Taaaab to the right. \t criss cross!") # \t Tab
###Output
"Simplicity is the ultimate sophistication." – Leonardo da Vinci
"Don't panic." - Hitchhiker's Guide to the Galaxy
C:\\Desktop\python_rules.py
I feel like...
moving to a new line.
Taaaab to the right. criss cross!
###Markdown
Common String Functions The Python String class has many useful string functions you can use on a string. You use these functions by placing a dot after a string or string variable and calling the function of your choice. This is known as "dot notation". It's important to note that these functions don't actually change the string itself, but return a new string with the changed properties. find() searches the string for a specified value and returns the position where it was found; lower() returns a lower-case version of the string; title() returns the string with the first character of each word in upper case; upper() returns an upper-case version of the string.
###Code
# Examples of String Functions
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
sentence = "ThIs CaPiTaLiZaTiOn DoEsN't MaKe AnY sEnSe!"
print(" sentence = ", sentence, " <= Original")
print('sentence.find("P") = ', sentence.find("P"))
print(" sentence.lower() = ", sentence.lower())
print(" sentence.title() = ", sentence.title())
print(" sentence.upper() = ", sentence.upper())
print(" sentence = ", sentence, " <= You can see the original never changed.")
###Output
sentence = ThIs CaPiTaLiZaTiOn DoEsN't MaKe AnY sEnSe! <= Original
sentence.find("P") = 7
sentence.lower() = this capitalization doesn't make any sense!
sentence.title() = This Capitalization Doesn'T Make Any Sense!
sentence.upper() = THIS CAPITALIZATION DOESN'T MAKE ANY SENSE!
sentence = ThIs CaPiTaLiZaTiOn DoEsN't MaKe AnY sEnSe! <= You can see the original never changed.
###Markdown
Strings in (Monty) Python
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Strings are just arrays of characters
###Code
my_string = 'spam'
my_string, len(my_string), my_string[0], my_string[0:2]
my_string[::-1]
###Output
_____no_output_____
###Markdown
But unlike numerical arrays, you cannot reassign elements (immutable)
###Code
my_string[0] = "S"
###Output
_____no_output_____
###Markdown
Or do array-math-like stuff ...
###Code
my_string.sum()
###Output
_____no_output_____
###Markdown
"Arithmetic" with Strings (concatenate)
###Code
my_string = 'spam'
my_egg = "eggs"
my_string + my_egg
my_string + " " + my_egg
4 * (my_string + " ") + my_egg
print(4 * (my_string + " ") + my_string + " and\n" + my_egg) # use \n to get a newline with the print function
###Output
_____no_output_____
###Markdown
String operators and comparisons* String comparison is performed using the characters in both strings.* The characters in both strings are compared one by one (from left to right).* When different characters are found then their [Unicode](https://en.wikipedia.org/wiki/List_of_Unicode_charactersBasic_Latin) value is compared.* The character with lower [Unicode](https://en.wikipedia.org/wiki/List_of_Unicode_charactersBasic_Latin) value is considered to be smaller.
###Code
"spam" == "good"
"spam" != "good"
"spam" == "spam"
"spam" < "eggs"
"sp" < "spam"
"spam_one" < "spam_t"
"sp" in "spam"
"sp" not in "spam"
my_string.isalpha()
my_string.isdigit()
my_string.isspace()
###Output
_____no_output_____
###Markdown
---- Python supports `Unicode` characters You can enter `unicode` characters directly from the keyboard (depends on your operating system), or you can use the `ASCII` encoding. [Unicode - ASCII encoding list](https://en.wikipedia.org/wiki/List_of_Unicode_characters). For example, the `ASCII` encoding for the greek capital omega is `U+03A9`, so you can create the character with `\U000003A9`
###Code
my_resistor = "Spam has an electrical resistance of greater than 100 M\U000003A9"
print(my_resistor)
###Output
_____no_output_____
###Markdown
These characters can be used as variable names
###Code
Ω = 100e6
Ω * np.pi
###Output
_____no_output_____
###Markdown
Python supports (almost) all characters from international keyboards
###Code
movie_title = "Mønti Pythøn ik den Hølie Gräilen"
movie_title
###Output
_____no_output_____
###Markdown
[Emoji](https://en.wikipedia.org/wiki/Emoji) are unicode characters, so you can use them as well (not all OSs will show all characters!)
###Code
radio_active = "\U00002622"
wink = "\U0001F609"
print((radio_active * 5) + " " + (wink * 3))
###Output
_____no_output_____
###Markdown
Emoji can not be used as variable names (at least not yet ...)
###Code
☢ = 2.345
###Output
_____no_output_____
###Markdown
Raw strings - `r" "` * Sometime you do not want python to interpret anything in the string * You can do this by adding a "r" to the front of the string
###Code
my_resistor = r"Spam has an electrical resistance of greater than 100 M\U000003A9"
print(my_resistor)
###Output
_____no_output_____
###Markdown
Watch out for variable types!
###Code
n = 42
print("I would like " + n + " orders of spam")
###Output
_____no_output_____
###Markdown
---- Python `f-string` formatting
###Code
my_a = 42
my_b = 1.23456
my_c = True
my_d = 'Spam'
type(my_a), type(my_b), type(my_c), type(my_d)
f"I would like {my_a} orders of {my_d}"
my_output = f"I would like {my_a} orders of {my_d}"
print(my_output)
###Output
_____no_output_____
###Markdown
Format Typesd = Integer decimal g = Floating point format (Uses exponential format if exponent is less than -4)f = Floating point decimal x = hexs = String o = octale = Floating point exponential b = binary
###Code
f"The float {my_b} can be printed with only two places after the decimal: {my_b:.2f}"
f"The integer {my_a} can be printed in hex: {my_a:x}, octal: {my_a:o}, or binary: {my_a:b}"
f"The number {my_b} times 1000 in scientific notation: {my_b * 1000 :.2e}"
f"The value {my_c} as a float: {my_c:f}"
f"The value {my_c} as an integer: {my_c:d}"
###Output
_____no_output_____
###Markdown
---- Who are you who are so wise in the ways of science? Output from `DataFrames - .iterrows()`* A legitimate use of For-Loops
###Code
import pandas as pd
witch_table = pd.read_csv('./Data/Witches.csv')
print(witch_table)
for index, row in witch_table.iterrows():
my_out_string = (f"The object: {row['Object']} has a density of {row['Density']} g/cc")
print(my_out_string)
###Output
_____no_output_____
###Markdown
Padding - `{Variable:N}`* `{row['Object']:8}` - the variable `row['Object']` in 8 spaces* `{row['Density']:5.1f}` - the variable `row['Density']` in 5 spaces with 1 decimal place
###Code
for index, row in witch_table.iterrows():
my_out_string = (f"The object: {row['Object']:8} has a density of {row['Density']:5.1f} g/cc")
print(my_out_string)
###Output
_____no_output_____
###Markdown
Justified Strings - `{Variable:>N}`* By default, the strings are justified to the left, number to the right.* Use the `>` character to right-justify, and `<` to the left justify.* `{row['Object']:>8}` - the variable `row['Object']` right-justified in 8 spaces* `{row['Density']:<5.1f}` - the variable `row['Density']` left-justified in 5 spaces with 1 decimal place.
###Code
for index, row in witch_table.iterrows():
my_out_string = (f"The object: {row['Object']:>8} has a density of {row['Density']:<5.1f} g/cc")
print(my_out_string)
###Output
_____no_output_____
###Markdown
Really long strings* Put everything between `()`* add `\n` for line breaks
###Code
long_string = (
f"Well, there's egg and bacon; egg sausage and bacon; "
f"egg and spam; egg bacon and spam; egg bacon sausage and spam; \n"
f"spam bacon sausage and spam; spam egg spam spam bacon and spam: "
f"spam sausage spam spam bacon spam tomato and spam; \n"
f"spam spam spam egg and spam; spam spam spam spam spam spam baked beans spam spam spam \n"
f"or Lobster Thermidor au Crevette with a Mornay sauce served in a Provencale manner with shallots \n"
f"and aubergines garnished with truffle pate, brandy and with a fried egg on top and spam."
)
print(long_string)
long_string.count('spam')
###Output
_____no_output_____
###Markdown
---- Python has lots of built-in [String Methods](https://docs.python.org/3/library/stdtypes.htmlstring-methods).
###Code
line = "My hovercraft is full of eels"
line
###Output
_____no_output_____
###Markdown
Find and Replace
###Code
line.replace('is full of eels', 'has no wheels')
###Output
_____no_output_____
###Markdown
Justification and Cleaning
###Code
line.center(100)
line.ljust(100)
line.rjust(100, "*")
line2 = " My hovercraft is full of eels "
line2
line2.strip()
###Output
_____no_output_____
###Markdown
Splitting
###Code
line.split()
line.split()[1]
line.partition('is')
long_string.splitlines()
long_string.splitlines()[2]
###Output
_____no_output_____
###Markdown
Joining* `string.join(list)`* `string` - the string you want to put between all of the elements of `list`
###Code
'___'.join(line.split())
'☢'.join(line.partition('is'))
' '.join(line.split()[::-1])
###Output
_____no_output_____
###Markdown
Line Formatting
###Code
anotherline = "mY hoVErCRaft iS fUlL oF eEELS"
anotherline
anotherline.upper()
anotherline.lower()
anotherline.title()
anotherline.capitalize()
anotherline.swapcase()
translation = anotherline.maketrans("aeiou", "*****")
anotherline.translate(translation)
###Output
_____no_output_____
###Markdown
One last For-Loop thing
###Code
for char in anotherline:
print(char, end=' ')
import time
for char in anotherline:
print(char, end=' ')
time.sleep(.25) # seconds
###Output
_____no_output_____
###Markdown
Strings in (Monty) Python
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Strings are just arrays [lists] of characters
###Code
my_string = 'spam'
my_string
len(my_string)
my_string[0]
my_string[0:2]
my_string[::-1]
###Output
_____no_output_____
###Markdown
But unlike numerical arrays, you cannot reassign elements (immutable)
###Code
my_string[0] = "S"
###Output
_____no_output_____
###Markdown
Or do array-math-like stuff ...
###Code
my_string.sum()
###Output
_____no_output_____
###Markdown
"Arithmetic" with Strings (concatenate)
###Code
my_string = 'spam'
my_egg = "eggs"
my_string + my_egg
my_string + " " + my_egg
4 * (my_string + " ") + my_egg
print(4 * (my_string + " ") + my_string + " and\n" + my_egg) # use \n to get a newline with the print function
###Output
_____no_output_____
###Markdown
String operators and comparisons* String comparison is performed using the characters in both strings.* The characters in both strings are compared one by one (from left to right).* When different characters are found then their [Unicode](https://en.wikipedia.org/wiki/List_of_Unicode_charactersBasic_Latin) value is compared.* The character with lower [Unicode](https://en.wikipedia.org/wiki/List_of_Unicode_charactersBasic_Latin) value is considered to be smaller.
###Code
"spam" == "good"
"spam" != "good"
"spam" == "spam"
"spam" < "eggs"
"sp" < "spam"
"spam_one" < "spam_t"
"sp" in "spam"
"sp" not in "spam"
my_string.isalpha()
my_string.isdigit()
my_string.isspace()
###Output
_____no_output_____
###Markdown
---- Python supports `Unicode` characters You can enter `unicode` characters directly from the keyboard (depends on your operating system), or you can use the `ASCII` encoding. [Unicode - ASCII encoding list](https://en.wikipedia.org/wiki/List_of_Unicode_characters). For example, the `ASCII` encoding for the greek capital omega is `U+03A9`, so you can create the character with `\U000003A9`
###Code
my_resistor = "Spam has an electrical resistance of greater than 100 M\U000003A9"
print(my_resistor)
###Output
_____no_output_____
###Markdown
These characters can be used as variable names
###Code
Ω = 100e6
Ω * np.pi
###Output
_____no_output_____
###Markdown
I like to cut and paste from [Symbol Salad](https://symbolsalad.com/) Python supports (almost) all characters from international keyboards
###Code
movie_title = "Mønti Pythøn ik den Hølie Gräilen"
movie_title
###Output
_____no_output_____
###Markdown
[Emoji](https://en.wikipedia.org/wiki/Emoji) are unicode characters, so you can use them as well (not all OSs will show all characters!)
###Code
radio_active = "\U00002622"
wink = "\U0001F609"
print((radio_active * 5) + " " + (wink * 3))
###Output
_____no_output_____
###Markdown
Emoji can not be used as variable names (at least not yet ...)
###Code
☢ = 2.345
###Output
_____no_output_____
###Markdown
Raw strings - `r" "` * Sometime you do not want python to interpret anything in the string * You can do this by adding a "r" to the front of the string
###Code
my_resistor = r"Spam has an electrical resistance of greater than 100 M\U000003A9"
print(my_resistor)
###Output
_____no_output_____
###Markdown
Watch out for variable types!
###Code
n = 42
print("I would like " + n + " orders of spam")
###Output
_____no_output_____
###Markdown
---- Python `f-string` formatting
###Code
my_a = 42
my_b = 1.23456
my_c = True
my_d = 'Spam'
type(my_a), type(my_b), type(my_c), type(my_d)
f"I would like {my_a} orders of {my_d}"
my_output = f"I would like {my_a} orders of {my_d}"
print(my_output)
###Output
_____no_output_____
###Markdown
Format Typesd = Integer decimal g = Floating point format (Uses exponential format if exponent is less than -4)f = Floating point decimal x = hexs = String o = octale = Floating point exponential b = binary
###Code
f"The float {my_b} can be printed with only two places after the decimal: {my_b:.3f}"
f"The integer {my_a} can be printed in hex: {my_a:x}, octal: {my_a:o}, or binary: {my_a:b}"
f"The number {my_b} times 1000 in scientific notation: {my_b * 1000 :.2e}"
f"The value {my_c} as a float: {my_c:f}"
f"The value {my_c} as an integer: {my_c:d}"
###Output
_____no_output_____
###Markdown
---- Who are you who are so wise in the ways of science? Output from `DataFrames - .iterrows()`* A legitimate use of For-Loops
###Code
import pandas as pd
witch_table = pd.read_csv('./Data/Witches.csv')
print(witch_table)
for my_index, my_row in witch_table.iterrows():
my_out_string = (f"The object at index {my_index} is {my_row['Object']} and has a density of {my_row['Density']} g/cc")
print(my_out_string)
###Output
_____no_output_____
###Markdown
Long strings* When output string get long, you can break them into separate f-strings* Put () around the separate f-strings
###Code
for my_index, my_row in witch_table.iterrows():
my_out_string = (
f"The object at index {my_index} is "
f"{my_row['Object']} and has a density of "
f"{my_row['Density']} g/cc"
)
print(my_out_string)
###Output
_____no_output_____
###Markdown
Padding - `{Variable:N}`* `{my_row['Object']:8}` - the variable `my_row['Object']` in 8 spaces* `{my_row['Density']:5.1f}` - the variable `my_row['Density']` in 5 spaces with 1 decimal place
###Code
for my_index, my_row in witch_table.iterrows():
my_out_string = (
f"The object at index {my_index} "
f"is {my_row['Object']:8} "
f"and has a density of {my_row['Density']:5.1f} g/cc"
)
print(my_out_string)
###Output
_____no_output_____
###Markdown
Justified Strings - `{Variable:>N}`* By default, the strings are justified to the left, number to the right.* Use the `>` character to right-justify, and `<` to the left justify.* `{my_row['Object']:>8}` - the variable `my_row['Object']` right-justified in 8 spaces* `{my_row['Density']:<5.1f}` - the variable `my_row['Density']` left-justified in 5 spaces with 1 decimal place.
###Code
for my_index, my_row in witch_table.iterrows():
my_out_string = (
f"The object at index {my_index} "
f"is {my_row['Object']:>8} "
f"and has a density of {my_row['Density']:<5.1f} g/cc"
)
print(my_out_string)
###Output
_____no_output_____
###Markdown
You can break up long strings by adding a `\n` to force a line break
###Code
another_long_string = (
f"Well, there's egg and bacon; egg sausage and bacon; egg and spam; \n"
f"egg bacon and spam; egg bacon sausage and spam; spam bacon sausage \n"
f"and spam; spam egg spam spam bacon and spam; spam sausage spam spam \n"
f"bacon spam tomato and spam"
)
print(another_long_string)
###Output
_____no_output_____
###Markdown
---- Python has lots of built-in [String Methods](https://docs.python.org/3/library/stdtypes.htmlstring-methods).
###Code
my_line = "My hovercraft is full of eels"
my_line
###Output
_____no_output_____
###Markdown
Find* Returns the index of the first occurrence of the argument in the string* Returns -1 if nothing is found
###Code
my_line.find("r")
my_line[7]
my_line.find("Z")
###Output
_____no_output_____
###Markdown
Find and Replace
###Code
my_line.replace('is full of eels', 'has no wheels')
###Output
_____no_output_____
###Markdown
Justification and Cleaning
###Code
my_line.center(100)
my_line.ljust(100)
my_line.rjust(100, "*")
my_line_two = " My hovercraft is full of eels "
my_line_two
my_line_two.strip()
###Output
_____no_output_____
###Markdown
Splitting
###Code
my_line.split()
my_line.split()[1]
my_line.partition('is')
###Output
_____no_output_____
###Markdown
Joining* `string.join(list)`* `string` - the string you want to put between all of the elements of `list`
###Code
'___'.join(my_line.split())
'☢'.join(my_line.partition('is'))
' '.join(my_line.split()[::-1])
###Output
_____no_output_____
###Markdown
Line Formatting
###Code
my_anotherline = "mY hoVErCRaft iS fUlL oF eEELS"
my_anotherline
my_anotherline.upper()
my_anotherline.lower()
my_anotherline.title()
my_anotherline.capitalize()
my_anotherline.swapcase()
translation = my_anotherline.maketrans("aeiou", "*****")
my_anotherline.translate(translation)
###Output
_____no_output_____
###Markdown
One last For-Loop thing
###Code
for char in my_anotherline:
print(char, end=' ')
import time
for char in my_anotherline:
print(char, end='***')
time.sleep(.25) # seconds
###Output
_____no_output_____
###Markdown
**Python Strings** **1. String Literals**- String literals in python are surrounded by either single quotation marks, or double quotation marks.- 'hello' is the same as "hello".- You can display a string literal with the **print()** function:
###Code
# Example
print("Hello")
print('Hello')
###Output
Hello
Hello
###Markdown
**2. Assign String to a Variable**- Assigning a string to a variable is done with the variable name followed by an equal sign and the string.
###Code
# Example
a = "Hello"
print(a)
###Output
Hello
###Markdown
**3. Multiline Strings**- You can assign a multiline string to a variable by using three quotes.
###Code
# Example - You can use three double quotes:
a = """Lorem ipsum dolor sit amet,
consectetur adipiscing elit,
sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua."""
print(a)
###Output
Lorem ipsum dolor sit amet,
consectetur adipiscing elit,
sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua.
###Markdown
Or three single quotes:
###Code
# Example
a1 = '''Lorem ipsum dolor sit amet,
consectetur adipiscing elit,
sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua.'''
print(a1)
###Output
Lorem ipsum dolor sit amet,
consectetur adipiscing elit,
sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua.
###Markdown
- **Note**: In the result, the line breaks are inserted at the same position as in the code. **4. Strings are Arrays**- Like many other popular programming languages, strings in Python are arrays of bytes representing unicode characters.- However, Python does not have a character data type, a single character is simply a string with a length of 1.- Square brackets can be used to access elements of the string.
###Code
# Example - Get the character at position 1 (remember that the first character has the position 0):
a = "Hello, World!"
print(a[1])
###Output
e
###Markdown
**5. Slicing**- You can return a range of characters by using the slice syntax.- Specify the start index and the end index, separated by a colon, to return a part of the string.
###Code
# Example - Get the characters from position 2 to position 5 (not included)
b = "Hello, World!"
print(b[2:5])
###Output
llo
###Markdown
**6. Negative Indexing**- Use negative indexes to start the slice from the end of the string.
###Code
# Example - Get the characters from position 5 to position 1 (not included), starting the count from the end of the string:
b = "Hello, World!"
print(b[-5:-2])
###Output
orl
###Markdown
**7. String Length**- To get the length of a string, use the **len()** function.
###Code
# Example - The len() function returns the length of a string
a = "Hello, World!"
print(len(a))
###Output
13
###Markdown
**8. String Methods**- Python has a set of built-in methods that you can use on strings.
###Code
# Example - The strip() method removes any whitespace from the beginning or the end:
a = " Hello, World! "
print(a.strip()) # returns "Hello, World!"
# Example - The lower() method returns the string in lower case:
a = "Hello, World!"
print(a.lower())
# Example - The upper() method returns the string in upper case:
a = "Hello, World!"
print(a.upper())
# Example - The replace() method replaces a string with another string:
a = "Hello, World!"
print(a.replace("H", "J"))
# Example - The split() method splits the string into substrings if it finds instances of the separator:
a = "Hello, World!"
print(a.split(",")) # returns ['Hello', ' World!']
###Output
['Hello', ' World!']
###Markdown
- Learn more about String Methods with our [String Methods Reference](https://www.w3schools.com/python/python_ref_string.asp) **9. Check String**- To check if a certain phrase or character is present in a string, we can use the keywords **in** or **not in**.
###Code
# Example - Check if the phrase "ain" is present in the following text:
txt = "The rain in Spain stays mainly in the plain"
x = "ain" in txt
print(x)
# Example - Check if the phrase "ain" is NOT present in the following text:
txt = "The rain in Spain stays mainly in the plain"
x = "ain" not in txt
print(x)
###Output
False
###Markdown
**10. String Concatenation**- To concatenate, or combine, two strings you can use the + operator.
###Code
# Example - Merge variable a with variable b into variable c:
a = "Hello"
b = "World"
c = a + b
print(c)
# Example - To add a space between them, add a " ":
a = "Hello"
b = "World"
c = a + " " + b
print(c)
###Output
Hello World
###Markdown
**11. String Format**- As we learned in the Python Variables chapter, we cannot combine strings and numbers like this. - Example- age = 36- txt = "My name is John, I am " + age- print(txt)- 3 age = 36----> 4 txt = "My name is John, I am " + age- 5 print(txt)- TypeError: must be str, not int - But we can combine strings and numbers by using the **format()** method.- The **format()** method takes the passed arguments, formats them, and places them in the string where the placeholders {} are.
###Code
# Example - Use the format() method to insert numbers into strings:
age = 36
txt = "My name is John, and I am {}"
print(txt.format(age))
###Output
My name is John, and I am 36
###Markdown
- The **format()** method takes unlimited number of arguments, and are placed into the respective placeholders.
###Code
# Example
quantity = 3
itemno = 567
price = 49.95
myorder = "I want {} pieces of item {} for {} dollars."
print(myorder.format(quantity, itemno, price))
###Output
I want 3 pieces of item 567 for 49.95 dollars.
###Markdown
- You can use index numbers {0} to be sure the arguments are placed in the correct placeholders.
###Code
# Example
quantity = 3
itemno = 567
price = 49.95
myorder = "I want to pay {2} dollars for {0} pieces of item {1}."
print(myorder.format(quantity, itemno, price))
###Output
I want to pay 49.95 dollars for 3 pieces of item 567.
###Markdown
**12. Escape Character**- To insert characters that are illegal in a string, use an escape character.- An escape character is a backslash \ followed by the character you want to insert.- An example of an illegal character is a double quote inside a string that is surrounded by double quotes: - Example - You will get an error if you use double quotes inside a string that is surrounded by double quotes.- txt = "We are the so-called "Vikings" from the north."- File "demo_string_escape_error.py", line 1 txt = "We are the so-called "Vikings" from the north."- SyntaxError: invalid syntax - **Escape character**To fix this problem, use the escape character \":
###Code
# Example - The escape character allows you to use double quotes when you normally would not be allowed:
txt = "We are the so-called \"Vikings\" from the north."
txt
###Output
_____no_output_____
###Markdown
Strings in (Monty) Python
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Strings are just arrays of characters
###Code
my_string = 'spam'
my_string
len(my_string)
my_string[0]
my_string[0:2]
my_string[::-1]
###Output
_____no_output_____
###Markdown
But unlike numerical arrays, you cannot reassign elements (immutable)
###Code
my_string[0] = "S"
###Output
_____no_output_____
###Markdown
Or do array-math-like stuff ...
###Code
my_string.sum()
###Output
_____no_output_____
###Markdown
"Arithmetic" with Strings (concatenate)
###Code
my_string = 'spam'
my_egg = "eggs"
my_string + my_egg
my_string + " " + my_egg
4 * (my_string + " ") + my_egg
print(4 * (my_string + " ") + my_string + " and\n" + my_egg) # use \n to get a newline with the print function
###Output
_____no_output_____
###Markdown
String operators and comparisons* String comparison is performed using the characters in both strings.* The characters in both strings are compared one by one (from left to right).* When different characters are found then their [Unicode](https://en.wikipedia.org/wiki/List_of_Unicode_charactersBasic_Latin) value is compared.* The character with lower [Unicode](https://en.wikipedia.org/wiki/List_of_Unicode_charactersBasic_Latin) value is considered to be smaller.
###Code
"spam" == "good"
"spam" != "good"
"spam" == "spam"
"spam" < "eggs"
"sp" < "spam"
"spam_one" < "spam_t"
"sp" in "spam"
"sp" not in "spam"
my_string.isalpha()
my_string.isdigit()
my_string.isspace()
###Output
_____no_output_____
###Markdown
---- Python supports `Unicode` characters You can enter `unicode` characters directly from the keyboard (depends on your operating system), or you can use the `ASCII` encoding. [Unicode - ASCII encoding list](https://en.wikipedia.org/wiki/List_of_Unicode_characters). For example, the `ASCII` encoding for the greek capital omega is `U+03A9`, so you can create the character with `\U000003A9`
###Code
my_resistor = "Spam has an electrical resistance of greater than 100 M\U000003A9"
print(my_resistor)
###Output
_____no_output_____
###Markdown
These characters can be used as variable names
###Code
Ω = 100e6
Ω * np.pi
###Output
_____no_output_____
###Markdown
I like to cut and paste from [Symbol Salad](https://symbolsalad.com/) Python supports (almost) all characters from international keyboards
###Code
movie_title = "Mønti Pythøn ik den Hølie Gräilen"
movie_title
###Output
_____no_output_____
###Markdown
[Emoji](https://en.wikipedia.org/wiki/Emoji) are unicode characters, so you can use them as well (not all OSs will show all characters!)
###Code
radio_active = "\U00002622"
wink = "\U0001F609"
print((radio_active * 5) + " " + (wink * 3))
###Output
_____no_output_____
###Markdown
Emoji can not be used as variable names (at least not yet ...)
###Code
☢ = 2.345
###Output
_____no_output_____
###Markdown
Raw strings - `r" "` * Sometime you do not want python to interpret anything in the string * You can do this by adding a "r" to the front of the string
###Code
my_resistor = r"Spam has an electrical resistance of greater than 100 M\U000003A9"
print(my_resistor)
###Output
_____no_output_____
###Markdown
Watch out for variable types!
###Code
n = 42
print("I would like " + n + " orders of spam")
###Output
_____no_output_____
###Markdown
---- Python `f-string` formatting
###Code
my_a = 42
my_b = 1.23456
my_c = True
my_d = 'Spam'
type(my_a), type(my_b), type(my_c), type(my_d)
f"I would like {my_a} orders of {my_d}"
my_output = f"I would like {my_a} orders of {my_d}"
print(my_output)
###Output
_____no_output_____
###Markdown
Format Typesd = Integer decimal g = Floating point format (Uses exponential format if exponent is less than -4)f = Floating point decimal x = hexs = String o = octale = Floating point exponential b = binary
###Code
f"The float {my_b} can be printed with only two places after the decimal: {my_b:.3f}"
f"The integer {my_a} can be printed in hex: {my_a:x}, octal: {my_a:o}, or binary: {my_a:b}"
f"The number {my_b} times 1000 in scientific notation: {my_b * 1000 :.2e}"
f"The value {my_c} as a float: {my_c:f}"
f"The value {my_c} as an integer: {my_c:d}"
###Output
_____no_output_____
###Markdown
---- Who are you who are so wise in the ways of science? Output from `DataFrames - .iterrows()`* A legitimate use of For-Loops
###Code
import pandas as pd
witch_table = pd.read_csv('./Data/Witches.csv')
print(witch_table)
for index, row in witch_table.iterrows():
my_out_string = (f"The object: {row['Object']} has a density of {row['Density']} g/cc")
print(my_out_string)
###Output
_____no_output_____
###Markdown
Padding - `{Variable:N}`* `{row['Object']:8}` - the variable `row['Object']` in 8 spaces* `{row['Density']:5.1f}` - the variable `row['Density']` in 5 spaces with 1 decimal place
###Code
for index, row in witch_table.iterrows():
my_out_string = (f"The object: {row['Object']:8} has a density of {row['Density']:5.1f} g/cc")
print(my_out_string)
###Output
_____no_output_____
###Markdown
Justified Strings - `{Variable:>N}`* By default, the strings are justified to the left, number to the right.* Use the `>` character to right-justify, and `<` to the left justify.* `{row['Object']:>8}` - the variable `row['Object']` right-justified in 8 spaces* `{row['Density']:<5.1f}` - the variable `row['Density']` left-justified in 5 spaces with 1 decimal place.
###Code
for index, row in witch_table.iterrows():
my_out_string = (f"The object: {row['Object']:>8} has a density of {row['Density']:<5.1f} g/cc")
print(my_out_string)
###Output
_____no_output_____
###Markdown
Really long strings* Put everything between `()`* add `\n` for line breaks
###Code
long_string = (
f"Well, there's egg and bacon; egg sausage and bacon; "
f"egg and spam; egg bacon and spam; egg bacon sausage and spam; \n"
f"spam bacon sausage and spam; spam egg spam spam bacon and spam: "
f"spam sausage spam spam bacon spam tomato and spam; \n"
f"spam spam spam egg and spam; spam spam spam spam spam spam baked beans spam spam spam \n"
f"or Lobster Thermidor au Crevette with a Mornay sauce served in a Provencale manner with shallots \n"
f"and aubergines garnished with truffle pate, brandy and with a fried egg on top and spam."
)
print(long_string)
long_string.count('spam')
###Output
_____no_output_____
###Markdown
---- Python has lots of built-in [String Methods](https://docs.python.org/3/library/stdtypes.htmlstring-methods).
###Code
my_line = "My hovercraft is full of eels"
my_line
###Output
_____no_output_____
###Markdown
Find* Returns the index of the first occurrence of the argument in the string* Returns -1 if nothing is found
###Code
my_line.find("r")
my_line[7]
my_line.find("Z")
###Output
_____no_output_____
###Markdown
Find and Replace
###Code
my_line.replace('is full of eels', 'has no wheels')
###Output
_____no_output_____
###Markdown
Justification and Cleaning
###Code
my_line.center(100)
my_line.ljust(100)
my_line.rjust(100, "*")
my_line_two = " My hovercraft is full of eels "
my_line_two
my_line_two.strip()
###Output
_____no_output_____
###Markdown
Splitting
###Code
my_line.split()
my_line.split()[1]
my_line.partition('is')
long_string.splitlines()
long_string.splitlines()[2]
###Output
_____no_output_____
###Markdown
Joining* `string.join(list)`* `string` - the string you want to put between all of the elements of `list`
###Code
'___'.join(my_line.split())
'☢'.join(my_line.partition('is'))
' '.join(my_line.split()[::-1])
###Output
_____no_output_____
###Markdown
Line Formatting
###Code
my_anotherline = "mY hoVErCRaft iS fUlL oF eEELS"
my_anotherline
my_anotherline.upper()
my_anotherline.lower()
my_anotherline.title()
my_anotherline.capitalize()
my_anotherline.swapcase()
translation = my_anotherline.maketrans("aeiou", "*****")
my_anotherline.translate(translation)
###Output
_____no_output_____
###Markdown
One last For-Loop thing
###Code
for char in my_anotherline:
print(char, end=' ')
import time
for char in my_anotherline:
print(char, end='***')
time.sleep(.25) # seconds
###Output
_____no_output_____
###Markdown
First Python Live session for Python for Beginners, covering how to install Python, the basics of setting up your environment, and strings/string manipulation. For Debian-based systems run `sudo apt-get install python3`. For Windows you can refer to: https://www.youtube.com/watch?v=Cd5XCrfiSv8&list=PL3GPxPa8j1HwXiyTAKZKCeGx18sTVCKCg&index=1 Strings in Python Python strings are "immutable" which means they cannot be changed after they are created. Since strings can't be changed, we construct *new* strings as we go, to represent computed values. So for example the expression ('hello' + 'there') takes in the 2 strings 'hello' and 'there' and builds a new string 'hellothere'. Raw string r'this is a raw string' raw strings keep all characters literally and do not interpret special formatting like `\n` or `\\`
###Code
new_string = 'This is the first python live session'
new_string
new_string_ = 't' + new_string[1:]
new_string_
###Output
_____no_output_____
###Markdown
Unicode strings
###Code
# Unicode strings
u_string = u'A unicode \u018e string \xf1'
u_string
###Output
_____no_output_____
###Markdown
Ascii Strings - the default when creating strings
###Code
# ascii string
a_string = 'this\tis a ascii\nstring'
print(a_string)
# r-string or raw string
r_string = r'this is an r \nstring \t'
r_string
string1 = 'hello'
string2 = 'there'
string3 = 'Mary had a little lamb'
###Output
_____no_output_____
###Markdown
Byte Strings and Working with Hex/Oct/Bin String Data. A common error: TypeError: unicode strings are not supported, please encode to bytes (a short sketch of these topics appears after the slicing examples below). We will work with split, join, rstrip, lstrip, rfind. Slice Operator
###Code
# to get last element
print(string1)
string1[4]
# to get the THIRD element
print(string2)
string2[2]
# TO get a range of elements
print(string3)
string3[0:4]
###Output
Mary had a little lamb
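###Markdown
A minimal sketch (added by the editor) of the byte-string and hex/oct/bin topics mentioned above, assuming nothing beyond the standard library: encoding a str produces a bytes object, which is what APIs raising "unicode strings are not supported, please encode to bytes" expect.
###Code
text = 'hello sockets'
data = text.encode('utf-8')   # str -> bytes
print(data, type(data))
print(data.decode('utf-8'))   # bytes -> str
# hex / octal / binary representations of an integer, and back again
n = 255
print(hex(n), oct(n), bin(n))
print(int('0xff', 16), int('0o377', 8), int('0b11111111', 2))
###Output
_____no_output_____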
###Markdown
Split
###Code
help(str.split)
print(string3)
string3.split()
le_falta_la_a = string3.split('a')
le_falta_la_a
###Output
_____no_output_____
###Markdown
Join
###Code
help(str.join)
tiene_la_a = 'a'.join(le_falta_la_a)
tiene_la_a
string1 = ['this', 'iss', 'a', 'random', 'string']
'*'.join(string1)
###Output
_____no_output_____
###Markdown
rstrip
###Code
help(str.rstrip)
this_is_a_string = 'adfadfds '
this_is_a_string.rstrip()
###Output
_____no_output_____
###Markdown
lstrip
###Code
left_spaces = ' adgfgfsgsfdgsr'
left_spaces
left_spaces.lstrip()
###Output
_____no_output_____
###Markdown
rfind
###Code
help(str.rfind)
print(string3)
string3.rfind('had ')
###Output
Mary had a little lamb
###Markdown
Formatting Strings How to build strings properly.
###Code
'{one}{one}{one}'.format(one=5,three=3, two=4)
stringgggg = '%d%d%d' % (5,4,3)
print(stringgggg)
###Output
543
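###Markdown
For comparison, an f-string (a sketch added by the editor, reusing the values from the cell above) builds the same results with the expressions written inline.
###Code
# f-string equivalents of the .format() and % examples above
one = 5
print(f'{one}{one}{one}')
print(f'{5}{4}{3}')
###Output
_____no_output_____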
###Markdown
**Python Strings.*** Strings are amongst the most popular types in Python. We can create them simply by enclosing characters in quotes.* Python treats single quotes the same as double quotes.* Python string is a built-in type text sequence. It is used to handle textual data in python. * Python Strings are immutable sequences of Unicode points.
###Code
x='python strings'
x
y="python strings"
y
z=' '
z
###Output
_____no_output_____
###Markdown
**Accessing Values in Strings.*** Python does not support a character type i.e it doesn’t have a data type called char; these are treated as strings of length one, thus also considered a substring.hence, we can do indexing and slicing in python strings easily.* The subscript creates a slice by including a colon within the braces as **string[start:stop:step].**
###Code
s="python is the best language."
s[0]
s[-1]
s[1:4]
s[1:]+s[:1]
s[:3]
s[::1]
s[::-1]
s[::2]
###Output
_____no_output_____
###Markdown
**Updating strings.*** Strings are immutable, so we cannot replace a particular indexed value in a string.* You can "update" an existing string by (re)assigning a variable to another string. The new value can be related to its previous value or to a completely different string altogether (a short sketch follows the example below).
###Code
s1='i do not like java and cpp'
s1[2]='c'  # raises TypeError: 'str' object does not support item assignment
s1
s1+"hello"
s1[:2]+"hello"
###Output
_____no_output_____
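###Markdown
A short sketch of the "update by reassignment" idea described above (added by the editor for clarity): slicing and concatenation build a brand-new string, and the variable name is simply rebound to it.
###Code
# "Updating" a string really means creating a new one and rebinding the name
msg = 'i do not like java'
msg = msg[:2] + 'really ' + msg[2:]   # new string object assigned back to msg
msg
###Output
_____no_output_____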
###Markdown
**Basic string operations.**
###Code
"python"+"xv"
"python"*12
s1+" ragnarok"
"p" in s1
"y" not in s1
###Output
_____no_output_____
###Markdown
**String Formatting.*** "%" is the string formatting operator.* %s = string conversion via str() prior to formatting (this also works for lists).* %i = integer values.* %d = decimal (integer) values.* %f = floating point values with all decimals.* %c = character values.
###Code
t=10
s='Ahmedabad'
d=1.14354565
c='@'
l=[1,2,3]
print("i went to %s on wednesday and left %c %i and had a cigarette worth rupee %f nd wrote a code containing list %s"%(s,c,t,d,l))
print("%.2f"%(d)) #decimal value rounded upto 2 decimals.
###Output
_____no_output_____
###Markdown
**String formatting using .format() method**
###Code
a='Ravi'
b=17
c='age'
d=2.25
e='@'
print("{0} is {1} old and he's very compared to teenager of his {2}. Also. he's {3}m tall and live {4} 4th street".format(a,b,c,d,e))
print("{0},{1},{2},{3}".format('python','is','best',4))
print("{0}...{1}...{2}".format(*'abc'))
###Output
_____no_output_____
###Markdown
**Built-in String Methods.**1. ** str.capitalize()** = Capitalizes first letter of string.2. **isalnum()** = Returns true if string has at least 1 character and all characters are alphanumeric and false otherwise.3. **isalpha()** = Returns true if string has at least 1 character and all characters are alphabetic and false otherwise.4. **isdigit()** = Returns true if the string contains only digits and false otherwise.5. **islower()** = Returns true if string has at least 1 cased character and all cased characters are in lowercase and false otherwise.6. **isnumeric()** = Returns true if a unicode string contains only numeric characters and false otherwise. 7. **isspace()** = Returns true if string contains only whitespace characters and false otherwise. 8. **istitle()** = Returns true if string is properly "titlecased" and false otherwise. 9. **isupper()** = Returns true if string has at least one cased character and all cased characters are in uppercase and false otherwise.10. **isdecimal()** = Returns true if a unicode string contains only decimal characters and false otherwise.11. **max(str)** = Returns the max alphabetical character from the string str. 12. **min(str)** = Returns the min alphabetical character from the string str.13. **str.count(str, beg= 0,end=len(string))** = Counts how many times str occurs in string or in a substring of string if starting index beg and ending index end are given.14. **string.find(str, beg=0 end=len(string))** = Determine if str occurs in string or in a substring of string if starting index beg and ending index end are given returns index if found and -1 otherwise.15. **“ ”.join(seq)** = Merges (concatenates) the string representations of elements in sequence seq into a string, with separator string.16. **len(string)** = Returns the length of the string17. **str.lower()** = Converts all uppercase letters in string to lowercase.18. **str.upper()** = Converts lowercase letters in string to uppercase.19. **str.tittler()** = Converts a string in titled format.20. **str.swapcase()** = Inverts case for all letters in string21. **str.split()** = it splits the string in different elements.22. **endswith(suffix, beg=0, end=len(string))** = Determines if string or a substring of string (if starting index beg and ending index end are given) ends with suffix; returns true if so and false otherwise.23. **str.ljust(width[, fillchar])** = The method ljust() returns the string left justified in a string of length width. Padding is done using the specified fillchar (default is a space). The original string is returned if width is less than len(s).24. **str.rjust(width[, fillchar])** = The method ljust() returns the string right justified in a string of length width. Padding is done using the specified fillchar (default is a space). The original string is returned if width is less than len(s).25. **str.lstrip()** = Removes all leading whitespace in string.26. **str.rstrip()** = Removes all trailing whitespace of string.
###Code
s1='python'
s1.capitalize()
s1.islower()
s1.istitle()
s1 #to demonstrate that whether built-in methods changes the original string or not.
s1.isupper()
s2='python@435#$sdf'
s2.isalnum()
s3="39431032139099"
max(s3)
min(s3)
s4='python is best language'
s4.split()
s4.split('is')
s4.split(",")
s4.split(" ")
s5="dfsgdhsadsgdfard3546580877$%^^#"
print('s5.isdecimal(): ',s5.isdecimal())
print('s5.isnumeric(): ',s5.isnumeric())
print('s5.isdigit(): ',s5.isdigit())
print('len(s5): ',len(s5))
print('s5.count("d"): ',s5.count("d"))
print('s5.count("d",5,10): ',s5.count("d",5,10))
print('s5.find("d",2,10): ',s5.find("d",2,10))
print('s5.count("d"): ',s5.count("d"))
','.join(s5)
' '.join(s5)
''.join(s5)
print("s5.swapcase(): ",s5.swapcase())
print("s5.lower(): ",s5.lower())
print("s5.upper(): ",s5.upper())
print("s5.title(): ",s5.title())
print("s5.endswith("f"): ",s5.endswith("f"))
print("s5.endswith(""): ",s5.endswith(""))
print("s5.endswith("f",0,5): ",s5.endswith("f",0,5))
print("s5.startswith('d'): ",s5.endswith('d'))
print("s5.startswith(''): ",s5.endswith(''))
s6=" sdfkdsfdls21313%$#% "
s6.lstrip()
s6.rstrip()
s6.ljust(20,'@')
###Output
_____no_output_____
###Markdown
Strings and Stuff in (Monty) Python
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Strings are just arrays of characters
###Code
my_string = 'spam'
my_string,len(my_string),my_string[0],my_string[0:2]
my_string[::-1]
###Output
_____no_output_____
###Markdown
But unlike numerical arrays, you cannot reassign elements:
###Code
my_string[0] = "S"
###Output
_____no_output_____
###Markdown
Or do math-like stuff ...
###Code
my_string.sum()
###Output
_____no_output_____
###Markdown
"Arithmetic" with Strings
###Code
my_string = 'spam'
my_egg = "eggs"
my_string + my_egg
my_string + " " + my_egg
4 * (my_string + " ") + my_egg
print(4 * (my_string + " ") + my_string + " and\n" + my_egg) # use \n to get a newline with the print function
###Output
_____no_output_____
###Markdown
String operators and comparisons* String comparison is performed using the characters in both strings.* The characters in both strings are compared one by one.* When different characters are found then their [Unicode](https://en.wikipedia.org/wiki/List_of_Unicode_charactersBasic_Latin) value is compared.* The character with lower [Unicode](https://en.wikipedia.org/wiki/List_of_Unicode_charactersBasic_Latin) value is considered to be smaller.
###Code
"spam" == "good"
"spam" != "good"
"spam" == "spam"
"sp" < "spam"
"spam" < "eggs"
"sp" in "spam"
"sp" not in "spam"
my_string.isalpha()
my_string.isdigit()
my_string.isspace()
###Output
_____no_output_____
###Markdown
Python supports `Unicode` characters You can enter `unicode` characters directly from the keyboard (depends on your operating system), or you can use the `ASCII` encoding. [Unicode - ASCII encoding list](https://en.wikipedia.org/wiki/List_of_Unicode_characters). For example, the `ASCII` encoding for the greek capital omega is `U+03A9`, so you can create the character with `\U000003A9`
###Code
my_resistor = "Spam has an electrical resistance of greater than 100 M\U000003A9"
print(my_resistor)
Ω = 100e6
Ω * np.pi
###Output
_____no_output_____
###Markdown
[Emoji](https://en.wikipedia.org/wiki/Emoji) are unicode characters, so you can use them as well (not all OSs will show all characters!)
###Code
radio_active = "\U00002622"
wink = "\U0001F609"
print((radio_active * 5) + " " + (wink * 3))
###Output
_____no_output_____
###Markdown
Emoji can not be used as variable names (at least not yet ...)
###Code
☢ = 2.345
☢ ** 2
###Output
_____no_output_____
###Markdown
Raw strings - `r" "` * Sometime you do not want python to interpret anything in the string * You can do this by adding a "r" to the front of the string
###Code
my_resistor = r"Spam has an electrical resistance of greater than 100 M\U000003A9"
print(my_resistor)
###Output
_____no_output_____
###Markdown
Watch out for variable types!
###Code
n = 42
print("I would like " + n + " orders of spam")
print("I would like " + str(n) + " orders of spam")
###Output
_____no_output_____
###Markdown
---- Python `f-string` formatting
###Code
my_a = 42
my_b = 1.23456
my_c = True
my_d = 'Spam'
type(my_a), type(my_b), type(my_c), type(my_d)
f"I would like {my_a} orders of {my_d}"
my_output = f"I would like {my_a} orders of {my_d}"
print(my_output)
###Output
_____no_output_____
###Markdown
Format Typesd = Integer decimal g = Floating point format (Uses exponential format if exponent is less than -4)f = Floating point decimal x = hexs = String o = octale = Floating point exponential b = binary
###Code
f"The float {my_b} can be printed with only two places after the decimal: {my_b:.2f}"
f"The integer {my_a} can be printed in hex: {my_a:x}, octal: {my_a:o}, or binary: {my_a:b}"
f"The number {my_b} times 1000 in scientific notation: {my_b * 1000 :.2e}"
f"The value {my_c} as a float: {my_c:f}"
f"The value {my_c} as an integer: {my_c:d}"
###Output
_____no_output_____
###Markdown
---- Who are you who are so wise in the ways of science? Output from `DataFrames - .iterrows()`
###Code
import pandas as pd
witch_table = pd.read_csv('./Data/Witches.csv')
print(witch_table)
for index, row in witch_table.iterrows():
my_out_string = (f"The object: {row['Object']} has a density of {row['Density']:.1f} g/cc")
print(my_out_string)
###Output
_____no_output_____
###Markdown
Padding - `{Variable:N}`* `{row['Object']:8}` - the variable `row['Object']` in 8 spaces* `{row['Density']:5.1f}` - the variable `row['Density']` in 5 spaces with 1 decimal place
###Code
for index, row in witch_table.iterrows():
my_out_string = (f"The object: {row['Object']:8} has a density of {row['Density']:5.1f} g/cc")
print(my_out_string)
###Output
_____no_output_____
###Markdown
Justified Strings - `{Variable:>N}`* By default, the strings are justified to the left, number to the right.* Use the `>` character to right-justify, and `<` to the left justify.* `{row['Object']:>8}` - the variable `row['Object']` right-justified in 8 spaces* `{row['Density']:<5.1f}` - the variable `row['Density']` left-justified in 5 spaces with 1 decimal place.
###Code
for index, row in witch_table.iterrows():
my_out_string = (f"The object: {row['Object']:>8} has a density of {row['Density']:<5.1f} g/cc")
print(my_out_string)
###Output
_____no_output_____
###Markdown
Really long strings* add `\n` for line breaks
###Code
long_string = (
f"Well, there's egg and bacon; egg sausage and bacon; "
f"egg and spam; egg bacon and spam; egg bacon sausage and spam; \n"
f"spam bacon sausage and spam; spam egg spam spam bacon and spam: "
f"spam sausage spam spam bacon spam tomato and spam; \n"
f"spam spam spam egg and spam; spam spam spam spam spam spam baked beans spam spam spam \n"
f"or Lobster Thermidor au Crevette with a Mornay sauce served in a Provencale manner with shallots \n"
f"and aubergines garnished with truffle pate, brandy and with a fried egg on top and spam."
)
print(long_string)
###Output
_____no_output_____
###Markdown
---- Python has lots of built-in [String Methods](https://docs.python.org/3/library/stdtypes.html#string-methods).
###Code
line = "My hovercraft is full of eels"
line
###Output
_____no_output_____
###Markdown
Find and Replace
###Code
line.replace('is full of eels', 'has no wheels')
###Output
_____no_output_____
###Markdown
Justification and Cleaning
###Code
line.center(100)
line.ljust(100)
line.rjust(100, "*")
line2 = " My hovercraft is full of eels "
line2
line2.strip()
line3 = "*$*$*$*$*$*$*$*$My hovercraft is full of eels*$*$*$*$"
line3
line3.strip('*$')
line3.lstrip('*$')
###Output
_____no_output_____
###Markdown
Splitting and Joining
###Code
line.split()
'___'.join(line.split())
' '.join(line.split()[::-1])
###Output
_____no_output_____
###Markdown
Line Formatting
###Code
anotherline = "mY hoVErCRaft iS fUlL oF eEELS"
anotherline
anotherline.upper()
anotherline.lower()
anotherline.title()
anotherline.capitalize()
anotherline.swapcase()
###Output
_____no_output_____
###Markdown
Strings in (Monty) Python
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Strings are just arrays of characters
###Code
my_string = 'spam'
my_string
len(my_string)
my_string[0]
my_string[0:2]
my_string[::-1]
###Output
_____no_output_____
###Markdown
But unlike numerical arrays, you cannot reassign elements (immutable)
###Code
my_string[0] = "S"
###Output
_____no_output_____
###Markdown
Or do array-math-like stuff ...
###Code
my_string.sum()   # raises an AttributeError: str objects have no sum() method
###Output
_____no_output_____
###Markdown
"Arithmetic" with Strings (concatenate)
###Code
my_string = 'spam'
my_egg = "eggs"
my_string + my_egg
my_string + " " + my_egg
4 * (my_string + " ") + my_egg
print(4 * (my_string + " ") + my_string + " and\n" + my_egg) # use \n to get a newline with the print function
###Output
_____no_output_____
###Markdown
String operators and comparisons* String comparison is performed using the characters in both strings.* The characters in both strings are compared one by one (from left to right).* When different characters are found then their [Unicode](https://en.wikipedia.org/wiki/List_of_Unicode_characters#Basic_Latin) value is compared.* The character with lower [Unicode](https://en.wikipedia.org/wiki/List_of_Unicode_characters#Basic_Latin) value is considered to be smaller.
###Code
"spam" == "good"
"spam" != "good"
"spam" == "spam"
"spam" < "eggs"
"sp" < "spam"
"spam_one" < "spam_t"
"sp" in "spam"
"sp" not in "spam"
my_string.isalpha()
my_string.isdigit()
my_string.isspace()
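# The comparisons above are decided by Unicode code points; ord() shows the values
# Python actually compares ('s' is 115, 'e' is 101, which is why "spam" < "eggs" is False):
ord('s'), ord('e')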
###Output
_____no_output_____
###Markdown
---- Python supports `Unicode` characters. You can enter `unicode` characters directly from the keyboard (depends on your operating system), or you can use the `ASCII` encoding. [Unicode - ASCII encoding list](https://en.wikipedia.org/wiki/List_of_Unicode_characters). For example, the `ASCII` encoding for the Greek capital omega is `U+03A9`, so you can create the character with `\U000003A9`
###Code
my_resistor = "Spam has an electrical resistance of greater than 100 M\U000003A9"
print(my_resistor)
###Output
_____no_output_____
###Markdown
These characters can be used as variable names
###Code
Ω = 100e6
Ω * np.pi
###Output
_____no_output_____
###Markdown
Python supports (almost) all characters from international keyboards
###Code
movie_title = "Mønti Pythøn ik den Hølie Gräilen"
movie_title
###Output
_____no_output_____
###Markdown
[Emoji](https://en.wikipedia.org/wiki/Emoji) are unicode characters, so you can use them as well (not all OSs will show all characters!)
###Code
radio_active = "\U00002622"
wink = "\U0001F609"
print((radio_active * 5) + " " + (wink * 3))
###Output
_____no_output_____
###Markdown
Emoji cannot be used as variable names (at least not yet ...)
###Code
☢ = 2.345   # raises a SyntaxError -- emoji are not valid Python identifiers
###Output
_____no_output_____
###Markdown
Raw strings - `r" "` * Sometimes you do not want Python to interpret anything in the string * You can do this by adding an "r" to the front of the string
###Code
my_resistor = r"Spam has an electrical resistance of greater than 100 M\U000003A9"
print(my_resistor)
###Output
_____no_output_____
###Markdown
Watch out for variable types!
###Code
n = 42
print("I would like " + n + " orders of spam")
###Output
_____no_output_____
###Markdown
---- Python `f-string` formatting
###Code
my_a = 42
my_b = 1.23456
my_c = True
my_d = 'Spam'
type(my_a), type(my_b), type(my_c), type(my_d)
f"I would like {my_a} orders of {my_d}"
my_output = f"I would like {my_a} orders of {my_d}"
print(my_output)
###Output
_____no_output_____
###Markdown
Format Types* `d` = Integer decimal* `g` = Floating point format (Uses exponential format if exponent is less than -4)* `f` = Floating point decimal* `x` = hex* `s` = String* `o` = octal* `e` = Floating point exponential* `b` = binary
###Code
f"The float {my_b} can be printed with only two places after the decimal: {my_b:.2f}"
f"The integer {my_a} can be printed in hex: {my_a:x}, octal: {my_a:o}, or binary: {my_a:b}"
f"The number {my_b} times 1000 in scientific notation: {my_b * 1000 :.2e}"
f"The value {my_c} as a float: {my_c:f}"
f"The value {my_c} as an integer: {my_c:d}"
###Output
_____no_output_____
###Markdown
---- Who are you who are so wise in the ways of science? Output from `DataFrames - .iterrows()`* A legitimate use of For-Loops
###Code
import pandas as pd
witch_table = pd.read_csv('./Data/Witches.csv')
print(witch_table)
for index, row in witch_table.iterrows():
my_out_string = (f"The object: {row['Object']} has a density of {row['Density']} g/cc")
print(my_out_string)
###Output
_____no_output_____
###Markdown
Padding - `{Variable:N}`* `{row['Object']:8}` - the variable `row['Object']` in 8 spaces* `{row['Density']:5.1f}` - the variable `row['Density']` in 5 spaces with 1 decimal place
###Code
for index, row in witch_table.iterrows():
my_out_string = (f"The object: {row['Object']:8} has a density of {row['Density']:5.1f} g/cc")
print(my_out_string)
###Output
_____no_output_____
###Markdown
Justified Strings - `{Variable:>N}`* By default, strings are justified to the left and numbers to the right.* Use the `>` character to right-justify, and `<` to left-justify.* `{row['Object']:>8}` - the variable `row['Object']` right-justified in 8 spaces* `{row['Density']:<5.1f}` - the variable `row['Density']` left-justified in 5 spaces with 1 decimal place.
###Code
for index, row in witch_table.iterrows():
my_out_string = (f"The object: {row['Object']:>8} has a density of {row['Density']:<5.1f} g/cc")
print(my_out_string)
###Output
_____no_output_____
###Markdown
Really long strings* Put everything between `()`* add `\n` for line breaks
###Code
long_string = (
f"Well, there's egg and bacon; egg sausage and bacon; "
f"egg and spam; egg bacon and spam; egg bacon sausage and spam; \n"
f"spam bacon sausage and spam; spam egg spam spam bacon and spam: "
f"spam sausage spam spam bacon spam tomato and spam; \n"
f"spam spam spam egg and spam; spam spam spam spam spam spam baked beans spam spam spam \n"
f"or Lobster Thermidor au Crevette with a Mornay sauce served in a Provencale manner with shallots \n"
f"and aubergines garnished with truffle pate, brandy and with a fried egg on top and spam."
)
print(long_string)
long_string.count('spam')
###Output
_____no_output_____
###Markdown
---- Python has lots of built-in [String Methods](https://docs.python.org/3/library/stdtypes.html#string-methods).
###Code
my_line = "My hovercraft is full of eels"
my_line
###Output
_____no_output_____
###Markdown
Find* Returns the index of the first occurrence of the argument in the string* Returns -1 if nothing is found
###Code
my_line.find("r")
my_line[7]
my_line.find("Z")
###Output
_____no_output_____
###Markdown
Find and Replace
###Code
my_line.replace('is full of eels', 'has no wheels')
###Output
_____no_output_____
###Markdown
Justification and Cleaning
###Code
my_line.center(100)
my_line.ljust(100)
my_line.rjust(100, "*")
my_line_two = " My hovercraft is full of eels "
my_line_two
my_line_two.strip()
###Output
_____no_output_____
###Markdown
Splitting
###Code
my_line.split()
my_line.split()[1]
my_line.partition('is')
long_string.splitlines()
long_string.splitlines()[2]
###Output
_____no_output_____
###Markdown
Joining* `string.join(list)`* `string` - the string you want to put between all of the elements of `list`
###Code
'___'.join(my_line.split())
'☢'.join(my_line.partition('is'))
' '.join(my_line.split()[::-1])
###Output
_____no_output_____
###Markdown
Line Formatting
###Code
my_anotherline = "mY hoVErCRaft iS fUlL oF eEELS"
my_anotherline
my_anotherline.upper()
my_anotherline.lower()
my_anotherline.title()
my_anotherline.capitalize()
my_anotherline.swapcase()
translation = my_anotherline.maketrans("aeiou", "*****")
my_anotherline.translate(translation)
###Output
_____no_output_____
###Markdown
One last For-Loop thing
###Code
for char in my_anotherline:
print(char, end=' ')
import time
for char in my_anotherline:
print(char, end=' ')
time.sleep(.25) # seconds
###Output
_____no_output_____ |
_notebooks/TIL8.ipynb | ###Markdown
Probability - [Law of total probability](https://statcraft.tistory.com/entry/%ED%99%95%EB%A5%A0%EC%9D%98-%EB%B6%84%ED%95%A0%EB%B2%95%EC%B9%99)- [Bayes' theorem](https://ko.wikipedia.org/wiki/%EB%B2%A0%EC%9D%B4%EC%A6%88_%EC%A0%95%EB%A6%AC) - prior probability - posterior probability  Example: the probability of wearing a black shirt in the morning is p(A) = 3/4; the probability of wearing a tie is p(B) = ?; p(B|A) = 3/4; p(B|A^C) = 1/2; p(A|B) = ? Understanding Bayes' theorem is the key point (a numeric check appears after the code cell below). - Probability distributions - Random variable - a number that depends on the outcome of an experiment - ex) throwing two dice - the sum of the two dice = X - the difference of the two dice = X - defined as a mapping from the sample space - Discrete random variable - takes countable values - ex) dice, coins, etc. - Continuous random variable - takes uncountable values - ex) the height of a randomly chosen male student at some school - heights vary over a continuous range - A probability distribution == the random variable laid out as a table - it can be represented in several ways: table, graph, function - Discrete random variable - mean (E), variance (var), standard deviation (sd) - shortcut formula for the variance - E(X^2) - {E(X)}^2 - Joint probability distribution - assigns probabilities to the values taken simultaneously by two or more random variables - to marginalize out a variable, sum over that variable's values - Marginal probability distribution - Covariance - example problem - first-year high school students - X == height - Y == weight - Z == math score - covariance of X and Y - defined from (X - ux)(Y - uy) - cov(X,Y) = E(XY) - ux*uy = E(XY) - E(X)E(Y) - Correlation coefficient - corr(X,Y) = cov(X,Y) / (sdX * sdY) Probability distributions - Binomial distribution - Bernoulli trial - an experiment with only two outcomes - success or failure - probability of success: p - random variable X - n Bernoulli trials - binomial random variable Problem: a random box has a draw success probability of 0.2; if you draw 3 times, what is the probability of at least one success?
###Code
from scipy import stats
1 - stats.binom.cdf(0, n=3, p=0.2)
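# Equivalent closed-form check (same n = 3 draws and p = 0.2 as above):
# P(at least one success) = 1 - P(no successes) = 1 - (1 - p)**n
1 - (1 - 0.2) ** 3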
###Output
_____no_output_____
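###Markdown
A quick numeric check of the black-shirt / tie example above, using only the law of total probability and Bayes' theorem with the values stated in the notes (p(A) = 3/4, p(B|A) = 3/4, p(B|A^C) = 1/2). This is just a sketch to make the prior/posterior idea concrete.
###Code
# Values taken from the example above
p_a = 3 / 4              # prior: wears a black shirt
p_b_given_a = 3 / 4      # wears a tie given a black shirt
p_b_given_not_a = 1 / 2  # wears a tie given no black shirt
# Law of total probability: p(B) = p(B|A)p(A) + p(B|A^C)p(A^C)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
# Bayes' theorem: p(A|B) = p(B|A)p(A) / p(B)
p_a_given_b = p_b_given_a * p_a / p_b
p_b, p_a_given_b
###Output
_____no_output_____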
###Markdown
- Mean - E(X) = np- Variance - Var(X) = np(1-p)- Standard deviation - SD(X) = sqrt(np(1-p))
###Code
from scipy import stats
stats.binom.stats(n=3, p=0.2)
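# Manual check against the formulas above: mean = n*p, variance = n*p*(1-p)
n, p = 3, 0.2
n * p, n * p * (1 - p)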
###Output
_____no_output_____
###Markdown
- Normal distribution - a continuous random variable - [probability density function](https://ko.wikipedia.org/wiki/%ED%99%95%EB%A5%A0_%EB%B0%80%EB%8F%84_%ED%95%A8%EC%88%98) - the area under the curve gives the probability - [PDF of the normal distribution](https://suhak.tistory.com/87) - X~N(mu, sig^2) - Standard normal random variable - Z = (X - mu)/sig - Z~N(0,1) (see the standardization check below) - [Poisson distribution](https://ko.wikipedia.org/wiki/%ED%91%B8%EC%95%84%EC%86%A1_%EB%B6%84%ED%8F%AC) - the distribution of the number of events occurring per unit of time or space Problem: a website has an average of 3 visitors per hour; p(X <= 2) = ?
###Code
from scipy import stats
stats.poisson.cdf(2, mu = 3)
stats.poisson.cdf(2, mu=3)
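# Equivalent check: P(X <= 2) is the sum of the pmf at k = 0, 1, 2
sum(stats.poisson.pmf(k, mu=3) for k in range(3))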
###Output
_____no_output_____
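###Markdown
A small sketch of the standardization rule Z = (X - mu)/sig mentioned above. The mean 50 and standard deviation 10 are made-up values just for illustration; the point is that the probability computed on the original scale matches the one computed on the standard normal scale.
###Code
from scipy import stats
mu, sig = 50, 10      # hypothetical mean and standard deviation
x = 65
z = (x - mu) / sig    # standardize: Z = (X - mu) / sig
# Both calls should give the same probability (about 0.9332)
stats.norm.cdf(x, loc=mu, scale=sig), stats.norm.cdf(z)
###Output
_____no_output_____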
###Markdown
- [Exponential distribution](https://ko.wikipedia.org/wiki/%EC%A7%80%EC%88%98_%EB%B6%84%ED%8F%AC) - when events occur according to a Poisson process, the distribution of the waiting time from a given point until the next event occurs
###Code
lam = 3
stats.expon.cdf(0.5, scale=1/lam)
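# Closed-form check: for an exponential distribution, P(T <= t) = 1 - exp(-lam * t)
import math
1 - math.exp(-lam * 0.5)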
###Output
_____no_output_____ |
Project 5- Deploying a Sentiment Analysis Model/SageMaker Project.ipynb | ###Markdown
Creating a Sentiment Analysis Web App Using PyTorch and SageMaker_Deep Learning Nanodegree Program | Deployment_---Now that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. General OutlineRecall the general outline for SageMaker projects using a notebook instance.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.For this project, you will be following the steps in the general outline with some modifications. First, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.In addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
!pip install pandas-compat
###Output
Collecting pandas-compat
Downloading pandas_compat-0.1.1-py2.py3-none-any.whl (4.2 kB)
Requirement already satisfied: pandas in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pandas-compat) (1.0.1)
Requirement already satisfied: pytz>=2017.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pandas->pandas-compat) (2019.3)
Requirement already satisfied: python-dateutil>=2.6.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pandas->pandas-compat) (2.8.1)
Requirement already satisfied: numpy>=1.13.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pandas->pandas-compat) (1.19.4)
Requirement already satisfied: six>=1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from python-dateutil>=2.6.1->pandas->pandas-compat) (1.15.0)
Installing collected packages: pandas-compat
Successfully installed pandas-compat-0.1.1
[33mWARNING: You are using pip version 20.3; however, version 20.3.3 is available.
You should consider upgrading via the '/home/ec2-user/anaconda3/envs/pytorch_p36/bin/python -m pip install --upgrade pip' command.[0m
###Markdown
Step 1: Downloading the dataAs in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/)> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
mkdir: cannot create directory ‘../data’: File exists
--2020-12-28 13:57:56-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 25.1MB/s in 3.8s
2020-12-28 13:58:00 (21.1 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing and Processing the dataAlso, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
###Output
IMDB reviews: train = 12500 pos / 12500 neg, test = 12500 pos / 12500 neg
###Markdown
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.
###Code
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return a unified training data, test data, training labels, test labets
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
###Output
IMDb reviews (combined): train = 25000, test = 25000
###Markdown
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.
###Code
print(train_X[100])
print(train_y[100])
###Output
I sat through both parts of Che last night, back to back with a brief bathroom break, and I can't recall when 4 hours last passed so quickly. I'd had to psyche myself up for a week in advance because I have a real 'thing' about directors, producers and editors who keep putting over blown, over long quasi epics in front of us and I feel that on the whole, 2 to 2.5 hours is about right for a movie. So 4 hours seemed to be stretching the limits of my tolerance and I was very dubious about the whole enterprise. But I will say upfront that this is a beautifully I might say lovingly made movie and I'm really glad I saw it. Director Steven Soderbergh is to be congratulated on the clarity of his vision. The battle scenes zing as if you were dodging the bullets yourself.<br /><br />If there is a person on the planet who doesn't know, Ernesto 'Che' Guevara was the Argentinian doctor who helped Fidel Castro overthrow Fulgencio Batista via the 1959 Cuban revolution. When I was a kid in the 1960s, Che's image was everywhere; on bedroom wall posters, on T shirts, on magazine covers. Che's image has to be one of the most over exploited ever. If the famous images are to be relied on, then Che was a very good looking guy, the epitome of revolutionary romanticism. Had he been butt ugly, I have to wonder if he would have ever been quite so popular in the public imagination? Of course dying young helps.<br /><br />Movies have been made about Che before (notably the excellent Motorcycle Diaries of 2004 which starred the unbearably cute Gael Garcia Bernal as young Che, touring South America and seeing the endemic poverty which formed his Marxist politics) but I don't think anyone has ever tackled the entire story from beginning to end, and this two-parter is an ambitious project. I hope it pays off for Soderbergh but I can only imagine that instant commercial success may not have been uppermost in his mind.<br /><br />The first movie (The Agentine) shows Che meeting Castro in Mexico and follows their journey to Cuba to start the revolution and then the journey to New York in 1964 to address the UN. Cleverly shot black and white images look like contemporary film but aren't. The second film (Guerilla) picks up again in 1966 when Che arrives in Bolivia to start a new revolutionary movement. The second movie takes place almost entirely in the forest. As far as I can see it was shot mostly in Spain but I can still believe it must have been quite grueling to film. Benicio Del Toro is excellent as Che, a part he seems born to play.<br /><br />Personally, I felt that The Argentine (ie part one) was much easier to watch and more 'entertaining' in the strictly movie sense, because it is upbeat. They are winning; the Revolution will succeed. Che is in his element leading a disparate band of peasants, workers and intellectuals in the revolutionary cause. The second part is much harder to watch because of the inevitability of his defeat. In much the same way that the recent Valkyrie - while being a good movie - was an exercise in witnessing heroic failure, so I felt the same about part two of Che (Guerilla). We know at the outset that he dies, we know he fails. It is frustrating because the way the story is told, it is obvious fairly early on that the fomentation of revolution in Bolivia is doomed; Che is regarded as a foreign intruder and fails to connect with the indigenous peoples in the way that he did with the Cubans. 
He doggedly persists which is frustrating to watch because I felt that he should have known when to give up and move on to other, perhaps more successful, enterprises. The movie does not romanticise him too much. He kills people, he executes, he struggles with his asthma and follows a lost cause long after he should have given up and moved on, he leaves a wife alone to bring up five fatherless children.<br /><br />But overall, an excellent exercise in classic movie making. One note; as I watched the US trained Bolivian soldiers move in en masse to pick off Che and his small band of warriors one by one, it reminded me of the finale to Butch Cassidy. I almost turned to my husband and said so, but hesitated, thinking he would find such thoughts trite and out of place. As we left the theatre he turned to me and said "Didn't you think the end was like Butch Cassidy
!"
1
###Markdown
The first step in processing the reviews is to make sure that any html tags that appear are removed. In addition we wish to tokenize our input so that words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.
###Code
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
###Output
_____no_output_____
###Markdown
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.
###Code
# TODO: Apply review_to_words to a review (train_X[100] or any other review)
review_to_words(train_X[7])
###Output
_____no_output_____
###Markdown
**Question:** Above we mentioned that the `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input? **Answer:** In addition to stripping html tags and stemming each token, `review_to_words` converts the text to lower case, replaces every non-alphanumeric character with a space, splits the text into individual words, and removes common English stopwords. The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. This way if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time.
###Code
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Transform the dataIn the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews. (TODO) Create a word dictionaryTo begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.
###Code
import numpy as np
def build_dict(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a list of words.
word_count = {} # A dict storing the words that appear in the reviews along with how often they occur
for review in data:
for word in review:
if word in word_count:
word_count[word] +=1
else:
word_count[word] = 1
# TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and
# sorted_words[-1] is the least frequently appearing word.
sorted_words = sorted(word_count, key=word_count.get, reverse=True)
word_dict = {} # This is what we are building, a dictionary that translates words into integers
for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'
word_dict[word] = idx + 2 # 'infrequent' labels
return word_dict
word_dict = build_dict(train_X)
###Output
_____no_output_____
###Markdown
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it makes sense that these words appear frequently in the training set? **Answer:** The five most frequently appearing words in the training set are ['movi', 'film', 'one', 'like', 'time'] and it makes sense that these words will be used in IMDB reviews.
###Code
# TODO: Use this space to determine the five most frequently appearing words in the training set.
# word_dict is built from sorted_words and preserves insertion order, so its first five
# keys ('movi', 'film', 'one', 'like', 'time') are the five most frequently appearing words.
print(word_dict)
###Output
{'movi': 2, 'film': 3, 'one': 4, 'like': 5, 'time': 6, 'good': 7, 'make': 8, 'charact': 9, 'get': 10, 'see': 11, 'watch': 12, 'stori': 13, 'even': 14, 'would': 15, 'realli': 16, 'well': 17, 'scene': 18, 'look': 19, 'show': 20, 'much': 21, 'end': 22, 'peopl': 23, 'bad': 24, 'go': 25, 'great': 26, 'also': 27, 'first': 28, 'love': 29, 'think': 30, 'way': 31, 'act': 32, 'play': 33, 'made': 34, 'thing': 35, 'could': 36, 'know': 37, 'say': 38, 'seem': 39, 'work': 40, 'plot': 41, 'two': 42, 'actor': 43, 'year': 44, 'come': 45, 'mani': 46, 'seen': 47, 'take': 48, 'life': 49, 'want': 50, 'never': 51, 'littl': 52, 'best': 53, 'tri': 54, 'man': 55, 'ever': 56, 'give': 57, 'better': 58, 'still': 59, 'perform': 60, 'find': 61, 'feel': 62, 'part': 63, 'back': 64, 'use': 65, 'someth': 66, 'director': 67, 'actual': 68, 'interest': 69, 'lot': 70, 'real': 71, 'old': 72, 'cast': 73, 'though': 74, 'live': 75, 'star': 76, 'enjoy': 77, 'guy': 78, 'anoth': 79, 'new': 80, 'role': 81, 'noth': 82, '10': 83, 'funni': 84, 'music': 85, 'point': 86, 'start': 87, 'set': 88, 'girl': 89, 'origin': 90, 'day': 91, 'world': 92, 'everi': 93, 'believ': 94, 'turn': 95, 'quit': 96, 'us': 97, 'direct': 98, 'thought': 99, 'fact': 100, 'minut': 101, 'horror': 102, 'kill': 103, 'action': 104, 'comedi': 105, 'pretti': 106, 'young': 107, 'wonder': 108, 'happen': 109, 'around': 110, 'got': 111, 'effect': 112, 'right': 113, 'long': 114, 'howev': 115, 'big': 116, 'line': 117, 'famili': 118, 'enough': 119, 'seri': 120, 'may': 121, 'need': 122, 'fan': 123, 'bit': 124, 'script': 125, 'beauti': 126, 'person': 127, 'becom': 128, 'without': 129, 'must': 130, 'alway': 131, 'friend': 132, 'tell': 133, 'reason': 134, 'saw': 135, 'last': 136, 'final': 137, 'kid': 138, 'almost': 139, 'put': 140, 'least': 141, 'sure': 142, 'done': 143, 'whole': 144, 'place': 145, 'complet': 146, 'kind': 147, 'expect': 148, 'differ': 149, 'shot': 150, 'far': 151, 'mean': 152, 'anyth': 153, 'book': 154, 'laugh': 155, 'might': 156, 'name': 157, 'sinc': 158, 'begin': 159, '2': 160, 'probabl': 161, 'woman': 162, 'help': 163, 'entertain': 164, 'let': 165, 'screen': 166, 'call': 167, 'tv': 168, 'moment': 169, 'away': 170, 'read': 171, 'yet': 172, 'rather': 173, 'worst': 174, 'run': 175, 'fun': 176, 'lead': 177, 'hard': 178, 'audienc': 179, 'idea': 180, 'anyon': 181, 'episod': 182, 'american': 183, 'found': 184, 'appear': 185, 'bore': 186, 'especi': 187, 'although': 188, 'hope': 189, 'keep': 190, 'cours': 191, 'anim': 192, 'job': 193, 'goe': 194, 'move': 195, 'sens': 196, 'dvd': 197, 'version': 198, 'war': 199, 'money': 200, 'someon': 201, 'mind': 202, 'mayb': 203, 'problem': 204, 'true': 205, 'hous': 206, 'everyth': 207, 'nice': 208, 'second': 209, 'rate': 210, 'three': 211, 'night': 212, 'face': 213, 'follow': 214, 'recommend': 215, 'main': 216, 'product': 217, 'worth': 218, 'leav': 219, 'human': 220, 'special': 221, 'excel': 222, 'togeth': 223, 'wast': 224, 'everyon': 225, 'sound': 226, 'john': 227, 'hand': 228, '1': 229, 'father': 230, 'later': 231, 'eye': 232, 'said': 233, 'view': 234, 'instead': 235, 'review': 236, 'boy': 237, 'high': 238, 'hour': 239, 'miss': 240, 'classic': 241, 'talk': 242, 'wife': 243, 'understand': 244, 'left': 245, 'care': 246, 'black': 247, 'death': 248, 'open': 249, 'murder': 250, 'write': 251, 'half': 252, 'head': 253, 'rememb': 254, 'chang': 255, 'viewer': 256, 'fight': 257, 'gener': 258, 'surpris': 259, 'includ': 260, 'short': 261, 'die': 262, 'fall': 263, 'less': 264, 'els': 265, 'entir': 266, 'piec': 267, 'involv': 268, 'pictur': 269, 
'simpli': 270, 'home': 271, 'power': 272, 'top': 273, 'total': 274, 'usual': 275, 'budget': 276, 'attempt': 277, 'suppos': 278, 'releas': 279, 'hollywood': 280, 'terribl': 281, 'song': 282, 'men': 283, 'possibl': 284, 'featur': 285, 'portray': 286, 'disappoint': 287, '3': 288, 'poor': 289, 'coupl': 290, 'camera': 291, 'stupid': 292, 'dead': 293, 'wrong': 294, 'low': 295, 'produc': 296, 'video': 297, 'either': 298, 'aw': 299, 'definit': 300, 'except': 301, 'rest': 302, 'given': 303, 'absolut': 304, 'women': 305, 'lack': 306, 'word': 307, 'writer': 308, 'titl': 309, 'talent': 310, 'decid': 311, 'full': 312, 'perfect': 313, 'along': 314, 'style': 315, 'close': 316, 'truli': 317, 'school': 318, 'save': 319, 'emot': 320, 'sex': 321, 'age': 322, 'next': 323, 'bring': 324, 'mr': 325, 'case': 326, 'killer': 327, 'heart': 328, 'comment': 329, 'sort': 330, 'creat': 331, 'perhap': 332, 'came': 333, 'brother': 334, 'sever': 335, 'joke': 336, 'art': 337, 'dialogu': 338, 'game': 339, 'small': 340, 'base': 341, 'flick': 342, 'written': 343, 'sequenc': 344, 'meet': 345, 'earli': 346, 'often': 347, 'other': 348, 'mother': 349, 'develop': 350, 'humor': 351, 'actress': 352, 'consid': 353, 'dark': 354, 'guess': 355, 'amaz': 356, 'unfortun': 357, 'lost': 358, 'light': 359, 'exampl': 360, 'cinema': 361, 'drama': 362, 'white': 363, 'ye': 364, 'experi': 365, 'imagin': 366, 'mention': 367, 'stop': 368, 'natur': 369, 'forc': 370, 'manag': 371, 'felt': 372, 'present': 373, 'cut': 374, 'children': 375, 'fail': 376, 'son': 377, 'qualiti': 378, 'support': 379, 'car': 380, 'ask': 381, 'hit': 382, 'side': 383, 'voic': 384, 'extrem': 385, 'impress': 386, 'wors': 387, 'evil': 388, 'went': 389, 'stand': 390, 'certainli': 391, 'basic': 392, 'oh': 393, 'overal': 394, 'favorit': 395, 'horribl': 396, 'mysteri': 397, 'number': 398, 'type': 399, 'danc': 400, 'wait': 401, 'hero': 402, '5': 403, 'alreadi': 404, 'learn': 405, 'matter': 406, '4': 407, 'michael': 408, 'genr': 409, 'fine': 410, 'despit': 411, 'throughout': 412, 'walk': 413, 'success': 414, 'histori': 415, 'question': 416, 'zombi': 417, 'town': 418, 'relationship': 419, 'realiz': 420, 'child': 421, 'past': 422, 'daughter': 423, 'late': 424, 'b': 425, 'wish': 426, 'hate': 427, 'credit': 428, 'event': 429, 'theme': 430, 'touch': 431, 'citi': 432, 'today': 433, 'sometim': 434, 'behind': 435, 'god': 436, 'twist': 437, 'sit': 438, 'deal': 439, 'stay': 440, 'annoy': 441, 'abl': 442, 'rent': 443, 'pleas': 444, 'edit': 445, 'blood': 446, 'deserv': 447, 'comic': 448, 'anyway': 449, 'appar': 450, 'soon': 451, 'gave': 452, 'etc': 453, 'level': 454, 'slow': 455, 'chanc': 456, 'score': 457, 'bodi': 458, 'brilliant': 459, 'incred': 460, 'figur': 461, 'situat': 462, 'major': 463, 'self': 464, 'stuff': 465, 'decent': 466, 'element': 467, 'return': 468, 'dream': 469, 'obvious': 470, 'order': 471, 'continu': 472, 'pace': 473, 'ridicul': 474, 'happi': 475, 'highli': 476, 'add': 477, 'group': 478, 'thank': 479, 'ladi': 480, 'novel': 481, 'speak': 482, 'pain': 483, 'career': 484, 'shoot': 485, 'strang': 486, 'heard': 487, 'sad': 488, 'husband': 489, 'polic': 490, 'import': 491, 'break': 492, 'took': 493, 'strong': 494, 'cannot': 495, 'predict': 496, 'robert': 497, 'violenc': 498, 'hilari': 499, 'recent': 500, 'countri': 501, 'known': 502, 'particularli': 503, 'pick': 504, 'documentari': 505, 'season': 506, 'critic': 507, 'jame': 508, 'compar': 509, 'alon': 510, 'obviou': 511, 'told': 512, 'state': 513, 'visual': 514, 'rock': 515, 'offer': 516, 'exist': 517, 'theater': 518, 'opinion': 519, 
'gore': 520, 'hold': 521, 'crap': 522, 'result': 523, 'realiti': 524, 'room': 525, 'hear': 526, 'effort': 527, 'clich': 528, 'thriller': 529, 'caus': 530, 'sequel': 531, 'explain': 532, 'serious': 533, 'king': 534, 'local': 535, 'ago': 536, 'none': 537, 'hell': 538, 'note': 539, 'allow': 540, 'sister': 541, 'david': 542, 'simpl': 543, 'femal': 544, 'deliv': 545, 'ok': 546, 'convinc': 547, 'class': 548, 'check': 549, 'suspens': 550, 'win': 551, 'buy': 552, 'oscar': 553, 'huge': 554, 'valu': 555, 'sexual': 556, 'cool': 557, 'scari': 558, 'similar': 559, 'excit': 560, 'exactli': 561, 'apart': 562, 'provid': 563, 'avoid': 564, 'shown': 565, 'seriou': 566, 'english': 567, 'whose': 568, 'taken': 569, 'cinematographi': 570, 'shock': 571, 'polit': 572, 'spoiler': 573, 'offic': 574, 'across': 575, 'middl': 576, 'pass': 577, 'street': 578, 'messag': 579, 'somewhat': 580, 'silli': 581, 'charm': 582, 'modern': 583, 'filmmak': 584, 'confus': 585, 'form': 586, 'tale': 587, 'singl': 588, 'jack': 589, 'mostli': 590, 'carri': 591, 'attent': 592, 'william': 593, 'sing': 594, 'subject': 595, 'five': 596, 'prove': 597, 'richard': 598, 'team': 599, 'stage': 600, 'cop': 601, 'unlik': 602, 'georg': 603, 'televis': 604, 'monster': 605, 'earth': 606, 'villain': 607, 'cover': 608, 'pay': 609, 'marri': 610, 'toward': 611, 'build': 612, 'parent': 613, 'pull': 614, 'due': 615, 'respect': 616, 'fill': 617, 'four': 618, 'dialog': 619, 'remind': 620, 'futur': 621, 'typic': 622, 'weak': 623, '7': 624, 'cheap': 625, 'intellig': 626, 'atmospher': 627, 'british': 628, '80': 629, 'clearli': 630, 'non': 631, 'dog': 632, 'paul': 633, '8': 634, 'knew': 635, 'artist': 636, 'fast': 637, 'crime': 638, 'easili': 639, 'escap': 640, 'adult': 641, 'doubt': 642, 'detail': 643, 'date': 644, 'romant': 645, 'fire': 646, 'member': 647, 'gun': 648, 'drive': 649, 'straight': 650, 'beyond': 651, 'fit': 652, 'attack': 653, 'imag': 654, 'upon': 655, 'posit': 656, 'whether': 657, 'fantast': 658, 'peter': 659, 'aspect': 660, 'appreci': 661, 'captur': 662, 'ten': 663, 'plan': 664, 'discov': 665, 'remain': 666, 'period': 667, 'near': 668, 'realist': 669, 'air': 670, 'mark': 671, 'red': 672, 'dull': 673, 'adapt': 674, 'within': 675, 'lose': 676, 'spend': 677, 'color': 678, 'materi': 679, 'chase': 680, 'mari': 681, 'storylin': 682, 'forget': 683, 'bunch': 684, 'clear': 685, 'lee': 686, 'victim': 687, 'nearli': 688, 'box': 689, 'york': 690, 'match': 691, 'inspir': 692, 'finish': 693, 'mess': 694, 'standard': 695, 'easi': 696, 'truth': 697, 'suffer': 698, 'busi': 699, 'bill': 700, 'space': 701, 'dramat': 702, 'western': 703, 'e': 704, 'list': 705, 'battl': 706, 'notic': 707, 'de': 708, 'french': 709, 'ad': 710, '9': 711, 'tom': 712, 'larg': 713, 'among': 714, 'eventu': 715, 'accept': 716, 'train': 717, 'agre': 718, 'spirit': 719, 'soundtrack': 720, 'third': 721, 'teenag': 722, 'adventur': 723, 'soldier': 724, 'drug': 725, 'suggest': 726, 'sorri': 727, 'famou': 728, 'normal': 729, 'babi': 730, 'cri': 731, 'ultim': 732, 'troubl': 733, 'contain': 734, 'certain': 735, 'cultur': 736, 'romanc': 737, 'rare': 738, 'lame': 739, 'somehow': 740, 'disney': 741, 'mix': 742, 'gone': 743, 'cartoon': 744, 'student': 745, 'reveal': 746, 'fear': 747, 'suck': 748, 'kept': 749, 'attract': 750, 'appeal': 751, 'premis': 752, 'greatest': 753, 'design': 754, 'secret': 755, 'shame': 756, 'throw': 757, 'copi': 758, 'scare': 759, 'wit': 760, 'america': 761, 'admit': 762, 'brought': 763, 'particular': 764, 'relat': 765, 'screenplay': 766, 'whatev': 767, 'pure': 768, '70': 769, 
'averag': 770, 'harri': 771, 'master': 772, 'describ': 773, 'treat': 774, 'male': 775, '20': 776, 'issu': 777, 'fantasi': 778, 'warn': 779, 'inde': 780, 'forward': 781, 'background': 782, 'project': 783, 'free': 784, 'memor': 785, 'japanes': 786, 'poorli': 787, 'award': 788, 'locat': 789, 'amus': 790, 'potenti': 791, 'struggl': 792, 'magic': 793, 'weird': 794, 'societi': 795, 'okay': 796, 'imdb': 797, 'doctor': 798, 'accent': 799, 'hot': 800, 'water': 801, '30': 802, 'alien': 803, 'dr': 804, 'express': 805, 'odd': 806, 'choic': 807, 'crazi': 808, 'studio': 809, 'fiction': 810, 'control': 811, 'becam': 812, 'masterpiec': 813, 'difficult': 814, 'fli': 815, 'joe': 816, 'scream': 817, 'costum': 818, 'lover': 819, 'uniqu': 820, 'refer': 821, 'remak': 822, 'vampir': 823, 'girlfriend': 824, 'prison': 825, 'execut': 826, 'wear': 827, 'jump': 828, 'unless': 829, 'wood': 830, 'creepi': 831, 'cheesi': 832, 'superb': 833, 'otherwis': 834, 'parti': 835, 'roll': 836, 'ghost': 837, 'public': 838, 'mad': 839, 'depict': 840, 'earlier': 841, 'moral': 842, 'week': 843, 'jane': 844, 'badli': 845, 'fi': 846, 'dumb': 847, 'grow': 848, 'flaw': 849, 'sci': 850, 'deep': 851, 'maker': 852, 'cat': 853, 'footag': 854, 'connect': 855, 'older': 856, 'plenti': 857, 'bother': 858, 'outsid': 859, 'stick': 860, 'gay': 861, 'catch': 862, 'co': 863, 'plu': 864, 'popular': 865, 'equal': 866, 'social': 867, 'quickli': 868, 'disturb': 869, 'perfectli': 870, 'dress': 871, 'era': 872, '90': 873, 'mistak': 874, 'lie': 875, 'previou': 876, 'ride': 877, 'combin': 878, 'band': 879, 'concept': 880, 'answer': 881, 'rich': 882, 'surviv': 883, 'front': 884, 'christma': 885, 'sweet': 886, 'insid': 887, 'bare': 888, 'concern': 889, 'eat': 890, 'listen': 891, 'ben': 892, 'beat': 893, 'c': 894, 'serv': 895, 'term': 896, 'german': 897, 'la': 898, 'meant': 899, 'stereotyp': 900, 'hardli': 901, 'law': 902, 'innoc': 903, 'desper': 904, 'promis': 905, 'memori': 906, 'intent': 907, 'cute': 908, 'steal': 909, 'variou': 910, 'inform': 911, 'brain': 912, 'post': 913, 'tone': 914, 'island': 915, 'amount': 916, 'compani': 917, 'nuditi': 918, 'track': 919, 'claim': 920, 'store': 921, 'hair': 922, 'flat': 923, '50': 924, 'land': 925, 'univers': 926, 'danger': 927, 'fairli': 928, 'kick': 929, 'scott': 930, 'player': 931, 'step': 932, 'crew': 933, 'plain': 934, 'toni': 935, 'share': 936, 'tast': 937, 'centuri': 938, 'achiev': 939, 'engag': 940, 'cold': 941, 'travel': 942, 'record': 943, 'suit': 944, 'rip': 945, 'sadli': 946, 'manner': 947, 'spot': 948, 'tension': 949, 'wrote': 950, 'intens': 951, 'fascin': 952, 'familiar': 953, 'remark': 954, 'depth': 955, 'burn': 956, 'destroy': 957, 'histor': 958, 'sleep': 959, 'purpos': 960, 'languag': 961, 'ignor': 962, 'ruin': 963, 'delight': 964, 'italian': 965, 'unbeliev': 966, 'soul': 967, 'abil': 968, 'collect': 969, 'clever': 970, 'detect': 971, 'violent': 972, 'rape': 973, 'reach': 974, 'door': 975, 'liter': 976, 'trash': 977, 'scienc': 978, 'reveng': 979, 'caught': 980, 'commun': 981, 'creatur': 982, 'approach': 983, 'trip': 984, 'intrigu': 985, 'fashion': 986, 'skill': 987, 'introduc': 988, 'paint': 989, 'channel': 990, 'complex': 991, 'camp': 992, 'christian': 993, 'hole': 994, 'extra': 995, 'mental': 996, 'limit': 997, 'ann': 998, 'immedi': 999, 'comput': 1000, '6': 1001, 'million': 1002, 'mere': 1003, 'slightli': 1004, 'conclus': 1005, 'slasher': 1006, 'suddenli': 1007, 'imposs': 1008, 'crimin': 1009, 'teen': 1010, 'neither': 1011, 'physic': 1012, 'nation': 1013, 'spent': 1014, 'respons': 1015, 'planet': 
1016, 'receiv': 1017, 'fake': 1018, 'sick': 1019, 'blue': 1020, 'bizarr': 1021, 'embarrass': 1022, 'indian': 1023, 'ring': 1024, '15': 1025, 'pop': 1026, 'drop': 1027, 'drag': 1028, 'haunt': 1029, 'pointless': 1030, 'suspect': 1031, 'search': 1032, 'edg': 1033, 'handl': 1034, 'biggest': 1035, 'common': 1036, 'faith': 1037, 'hurt': 1038, 'arriv': 1039, 'technic': 1040, 'angel': 1041, 'dad': 1042, 'genuin': 1043, 'f': 1044, 'awesom': 1045, 'solid': 1046, 'former': 1047, 'van': 1048, 'colleg': 1049, 'focu': 1050, 'count': 1051, 'heavi': 1052, 'tear': 1053, 'wall': 1054, 'rais': 1055, 'laughabl': 1056, 'visit': 1057, 'younger': 1058, 'sign': 1059, 'excus': 1060, 'fair': 1061, 'cult': 1062, 'tough': 1063, 'key': 1064, 'motion': 1065, 'desir': 1066, 'super': 1067, 'stun': 1068, 'addit': 1069, 'cloth': 1070, 'exploit': 1071, 'smith': 1072, 'tortur': 1073, 'race': 1074, 'davi': 1075, 'cross': 1076, 'author': 1077, 'jim': 1078, 'minor': 1079, 'compel': 1080, 'consist': 1081, 'focus': 1082, 'chemistri': 1083, 'commit': 1084, 'pathet': 1085, 'park': 1086, 'tradit': 1087, 'obsess': 1088, 'frank': 1089, 'grade': 1090, '60': 1091, 'asid': 1092, 'brutal': 1093, 'steve': 1094, 'somewher': 1095, 'opportun': 1096, 'rule': 1097, 'explor': 1098, 'u': 1099, 'depress': 1100, 'grant': 1101, 'honest': 1102, 'besid': 1103, 'dub': 1104, 'anti': 1105, 'intend': 1106, 'trailer': 1107, 'bar': 1108, 'west': 1109, 'scientist': 1110, 'longer': 1111, 'regard': 1112, 'decad': 1113, 'judg': 1114, 'silent': 1115, 'armi': 1116, 'creativ': 1117, 'wild': 1118, 'stewart': 1119, 'g': 1120, 'south': 1121, 'draw': 1122, 'road': 1123, 'govern': 1124, 'boss': 1125, 'ex': 1126, 'practic': 1127, 'surprisingli': 1128, 'club': 1129, 'motiv': 1130, 'gang': 1131, 'festiv': 1132, 'london': 1133, 'redeem': 1134, 'green': 1135, 'page': 1136, 'machin': 1137, 'idiot': 1138, 'display': 1139, 'aliv': 1140, 'militari': 1141, 'thrill': 1142, 'repeat': 1143, 'yeah': 1144, 'folk': 1145, 'nobodi': 1146, '100': 1147, '40': 1148, 'journey': 1149, 'garbag': 1150, 'smile': 1151, 'tire': 1152, 'ground': 1153, 'mood': 1154, 'bought': 1155, 'cost': 1156, 'stone': 1157, 'sam': 1158, 'noir': 1159, 'mouth': 1160, 'agent': 1161, 'terrif': 1162, 'requir': 1163, 'utterli': 1164, 'honestli': 1165, 'sexi': 1166, 'area': 1167, 'geniu': 1168, 'report': 1169, 'enter': 1170, 'glad': 1171, 'humour': 1172, 'investig': 1173, 'serial': 1174, 'narr': 1175, 'passion': 1176, 'occasion': 1177, 'marriag': 1178, 'climax': 1179, 'industri': 1180, 'studi': 1181, 'charli': 1182, 'ship': 1183, 'nowher': 1184, 'center': 1185, 'demon': 1186, 'hors': 1187, 'loos': 1188, 'bear': 1189, 'wow': 1190, 'hang': 1191, 'graphic': 1192, 'giant': 1193, 'admir': 1194, 'send': 1195, 'loud': 1196, 'damn': 1197, 'nake': 1198, 'subtl': 1199, 'rel': 1200, 'profession': 1201, 'blow': 1202, 'bottom': 1203, 'insult': 1204, 'batman': 1205, 'r': 1206, 'kelli': 1207, 'boyfriend': 1208, 'doubl': 1209, 'initi': 1210, 'frame': 1211, 'opera': 1212, 'gem': 1213, 'challeng': 1214, 'affect': 1215, 'cinemat': 1216, 'church': 1217, 'drawn': 1218, 'fulli': 1219, 'evid': 1220, 'seek': 1221, 'j': 1222, 'l': 1223, 'nightmar': 1224, 'essenti': 1225, 'conflict': 1226, 'arm': 1227, 'henri': 1228, 'christoph': 1229, 'wind': 1230, 'grace': 1231, 'narrat': 1232, 'assum': 1233, 'witch': 1234, 'hunt': 1235, 'push': 1236, 'wise': 1237, 'chri': 1238, 'repres': 1239, 'month': 1240, 'nomin': 1241, 'affair': 1242, 'sceneri': 1243, 'avail': 1244, 'hide': 1245, 'justic': 1246, 'smart': 1247, 'bond': 1248, 'thu': 1249, 'flashback': 1250, 
'outstand': 1251, 'interview': 1252, 'satisfi': 1253, 'presenc': 1254, 'constantli': 1255, 'bed': 1256, 'central': 1257, 'sell': 1258, 'iron': 1259, 'content': 1260, 'everybodi': 1261, 'gag': 1262, 'slowli': 1263, 'hotel': 1264, 'hire': 1265, 'system': 1266, 'hey': 1267, 'adam': 1268, 'thrown': 1269, 'individu': 1270, 'charl': 1271, 'allen': 1272, 'mediocr': 1273, 'jone': 1274, 'lesson': 1275, 'billi': 1276, 'ray': 1277, 'cameo': 1278, 'photographi': 1279, 'fellow': 1280, 'pari': 1281, 'strike': 1282, 'rise': 1283, 'brief': 1284, 'independ': 1285, 'absurd': 1286, 'neg': 1287, 'phone': 1288, 'impact': 1289, 'born': 1290, 'ill': 1291, 'model': 1292, 'spoil': 1293, 'fresh': 1294, 'angl': 1295, 'likabl': 1296, 'abus': 1297, 'discuss': 1298, 'hill': 1299, 'ahead': 1300, 'sight': 1301, 'sent': 1302, 'photograph': 1303, 'shine': 1304, 'occur': 1305, 'blame': 1306, 'logic': 1307, 'mainli': 1308, 'bruce': 1309, 'commerci': 1310, 'forev': 1311, 'skip': 1312, 'segment': 1313, 'held': 1314, 'surround': 1315, 'teacher': 1316, 'blond': 1317, 'zero': 1318, 'resembl': 1319, 'summer': 1320, 'trap': 1321, 'satir': 1322, 'fool': 1323, 'ball': 1324, 'six': 1325, 'queen': 1326, 'twice': 1327, 'tragedi': 1328, 'sub': 1329, 'pack': 1330, 'reaction': 1331, 'bomb': 1332, 'hospit': 1333, 'protagonist': 1334, 'will': 1335, 'mile': 1336, 'sport': 1337, 'jerri': 1338, 'vote': 1339, 'mom': 1340, 'drink': 1341, 'trust': 1342, 'encount': 1343, 'plane': 1344, 'program': 1345, 'station': 1346, 'current': 1347, 'al': 1348, 'choos': 1349, 'martin': 1350, 'celebr': 1351, 'join': 1352, 'lord': 1353, 'round': 1354, 'tragic': 1355, 'field': 1356, 'favourit': 1357, 'jean': 1358, 'robot': 1359, 'vision': 1360, 'arthur': 1361, 'tie': 1362, 'random': 1363, 'fortun': 1364, 'roger': 1365, 'psycholog': 1366, 'dread': 1367, 'intern': 1368, 'improv': 1369, 'prefer': 1370, 'epic': 1371, 'nonsens': 1372, 'formula': 1373, 'highlight': 1374, 'pleasur': 1375, 'legend': 1376, 'tape': 1377, 'dollar': 1378, '11': 1379, 'thin': 1380, 'wide': 1381, 'object': 1382, 'porn': 1383, 'gorgeou': 1384, 'fox': 1385, 'ugli': 1386, 'buddi': 1387, 'influenc': 1388, 'prepar': 1389, 'ii': 1390, 'nasti': 1391, 'reflect': 1392, 'supposedli': 1393, 'warm': 1394, 'progress': 1395, 'worthi': 1396, 'youth': 1397, 'unusu': 1398, 'latter': 1399, 'length': 1400, 'crash': 1401, 'shop': 1402, 'superior': 1403, 'childhood': 1404, 'seven': 1405, 'theatr': 1406, 'remot': 1407, 'disgust': 1408, 'funniest': 1409, 'pilot': 1410, 'paid': 1411, 'fell': 1412, 'convers': 1413, 'trick': 1414, 'castl': 1415, 'rob': 1416, 'establish': 1417, 'disast': 1418, 'gangster': 1419, 'mine': 1420, 'disappear': 1421, 'suicid': 1422, 'ident': 1423, 'heaven': 1424, 'decis': 1425, 'forgotten': 1426, 'tend': 1427, 'mask': 1428, 'heroin': 1429, 'singer': 1430, 'partner': 1431, 'brian': 1432, 'recogn': 1433, 'desert': 1434, 'alan': 1435, 'stuck': 1436, 'sky': 1437, 'p': 1438, 'thoroughli': 1439, 'ms': 1440, 'replac': 1441, 'accur': 1442, 'market': 1443, 'uncl': 1444, 'eddi': 1445, 'andi': 1446, 'commentari': 1447, 'seemingli': 1448, 'danni': 1449, 'clue': 1450, 'jackson': 1451, 'devil': 1452, 'therefor': 1453, 'pair': 1454, 'refus': 1455, 'that': 1456, 'fate': 1457, 'accid': 1458, 'fault': 1459, 'river': 1460, 'unit': 1461, 'ed': 1462, 'tune': 1463, 'afraid': 1464, 'stephen': 1465, 'russian': 1466, 'hidden': 1467, 'clean': 1468, 'test': 1469, 'convey': 1470, 'captain': 1471, 'readi': 1472, 'instanc': 1473, 'irrit': 1474, 'quick': 1475, 'european': 1476, 'insan': 1477, 'daniel': 1478, 'frustrat': 1479, 
'1950': 1480, 'chines': 1481, 'rescu': 1482, 'food': 1483, 'wed': 1484, 'dirti': 1485, 'lock': 1486, 'angri': 1487, 'joy': 1488, 'steven': 1489, 'price': 1490, 'bland': 1491, 'cage': 1492, 'anymor': 1493, 'rang': 1494, 'wooden': 1495, 'n': 1496, 'jason': 1497, 'news': 1498, 'rush': 1499, '12': 1500, 'martial': 1501, 'led': 1502, 'twenti': 1503, 'board': 1504, 'worri': 1505, 'symbol': 1506, 'transform': 1507, 'cgi': 1508, 'hunter': 1509, 'johnni': 1510, 'invent': 1511, 'onto': 1512, 'piti': 1513, 'x': 1514, 'sentiment': 1515, 'attitud': 1516, 'process': 1517, 'explan': 1518, 'owner': 1519, 'awar': 1520, 'aim': 1521, 'favor': 1522, 'target': 1523, 'necessari': 1524, 'energi': 1525, 'floor': 1526, 'religi': 1527, 'opposit': 1528, 'window': 1529, 'insight': 1530, 'blind': 1531, 'chick': 1532, 'movement': 1533, 'deepli': 1534, 'possess': 1535, 'comparison': 1536, 'mountain': 1537, 'research': 1538, 'comed': 1539, 'whatsoev': 1540, 'grand': 1541, 'rain': 1542, 'mid': 1543, 'shadow': 1544, 'bank': 1545, 'began': 1546, 'parodi': 1547, 'princ': 1548, 'weapon': 1549, 'pre': 1550, 'friendship': 1551, 'credibl': 1552, 'taylor': 1553, 'teach': 1554, 'flesh': 1555, 'dougla': 1556, 'bloodi': 1557, 'terror': 1558, 'protect': 1559, 'hint': 1560, 'marvel': 1561, 'drunk': 1562, 'leader': 1563, 'load': 1564, 'accord': 1565, 'anybodi': 1566, 'watchabl': 1567, 'superman': 1568, 'brown': 1569, 'freddi': 1570, 'hitler': 1571, 'appropri': 1572, 'seat': 1573, 'tim': 1574, 'jeff': 1575, 'unknown': 1576, 'charg': 1577, 'villag': 1578, 'knock': 1579, 'keaton': 1580, 'empti': 1581, 'england': 1582, 'enemi': 1583, 'unnecessari': 1584, 'media': 1585, 'utter': 1586, 'perspect': 1587, 'wave': 1588, 'strength': 1589, 'craft': 1590, 'buck': 1591, 'dare': 1592, 'kiss': 1593, 'correct': 1594, 'ford': 1595, 'nativ': 1596, 'contrast': 1597, 'knowledg': 1598, 'chill': 1599, 'magnific': 1600, 'distract': 1601, 'anywher': 1602, 'soap': 1603, 'nazi': 1604, 'speed': 1605, 'breath': 1606, 'mission': 1607, '1980': 1608, 'ice': 1609, 'fred': 1610, 'joan': 1611, 'crowd': 1612, 'moon': 1613, 'jr': 1614, 'soft': 1615, '000': 1616, 'frighten': 1617, 'kate': 1618, 'dan': 1619, 'dick': 1620, 'hundr': 1621, 'nick': 1622, 'simon': 1623, 'radio': 1624, 'dozen': 1625, 'somebodi': 1626, 'loss': 1627, 'thousand': 1628, 'shakespear': 1629, 'andrew': 1630, 'academi': 1631, 'sum': 1632, 'account': 1633, 'root': 1634, 'vehicl': 1635, 'quot': 1636, 'leg': 1637, '1970': 1638, 'behavior': 1639, 'convent': 1640, 'regular': 1641, 'gold': 1642, 'compet': 1643, 'pretenti': 1644, 'demand': 1645, 'worker': 1646, 'privat': 1647, 'stretch': 1648, 'notabl': 1649, 'explos': 1650, 'interpret': 1651, 'candi': 1652, 'japan': 1653, 'lynch': 1654, 'constant': 1655, 'tarzan': 1656, 'debut': 1657, 'translat': 1658, 'sea': 1659, 'spi': 1660, 'revolv': 1661, 'prais': 1662, 'franc': 1663, 'technolog': 1664, 'sat': 1665, 'quiet': 1666, 'failur': 1667, 'threaten': 1668, 'ass': 1669, 'jesu': 1670, 'toy': 1671, 'punch': 1672, 'met': 1673, 'higher': 1674, 'aid': 1675, 'kevin': 1676, 'vh': 1677, 'abandon': 1678, 'mike': 1679, 'interact': 1680, 'bet': 1681, 'command': 1682, 'confront': 1683, 'separ': 1684, 'recal': 1685, 'belong': 1686, 'stunt': 1687, 'site': 1688, 'techniqu': 1689, 'servic': 1690, 'gotten': 1691, 'cabl': 1692, 'foot': 1693, 'bug': 1694, 'freak': 1695, 'bright': 1696, 'capabl': 1697, 'african': 1698, 'fu': 1699, 'jimmi': 1700, 'boat': 1701, 'presid': 1702, 'fat': 1703, 'succeed': 1704, 'stock': 1705, 'clark': 1706, 'spanish': 1707, 'gene': 1708, 'structur': 1709, 
'kidnap': 1710, 'paper': 1711, 'factor': 1712, 'whilst': 1713, 'belief': 1714, 'tree': 1715, 'complic': 1716, 'witti': 1717, 'realis': 1718, 'attend': 1719, 'bob': 1720, 'realism': 1721, 'educ': 1722, 'assist': 1723, 'santa': 1724, 'finest': 1725, 'broken': 1726, 'up': 1727, 'depart': 1728, 'observ': 1729, 'smoke': 1730, 'determin': 1731, 'v': 1732, 'oper': 1733, 'lewi': 1734, 'routin': 1735, 'hat': 1736, 'rubbish': 1737, 'fame': 1738, 'domin': 1739, 'foreign': 1740, 'kinda': 1741, 'lone': 1742, 'advanc': 1743, 'morgan': 1744, 'safe': 1745, 'hook': 1746, 'rank': 1747, 'numer': 1748, 'werewolf': 1749, 'vs': 1750, 'shape': 1751, 'shallow': 1752, 'civil': 1753, 'rose': 1754, 'washington': 1755, 'morn': 1756, 'gari': 1757, 'kong': 1758, 'accomplish': 1759, 'ordinari': 1760, 'winner': 1761, 'whenev': 1762, 'peac': 1763, 'grab': 1764, 'virtual': 1765, 'luck': 1766, 'offens': 1767, 'h': 1768, 'contriv': 1769, 'welcom': 1770, 'complain': 1771, 'unfunni': 1772, 'patient': 1773, 'activ': 1774, 'bigger': 1775, 'trek': 1776, 'con': 1777, 'dimension': 1778, 'pretend': 1779, 'cain': 1780, 'code': 1781, 'wake': 1782, 'lesbian': 1783, 'flash': 1784, 'eric': 1785, 'dri': 1786, 'corrupt': 1787, 'manipul': 1788, 'guard': 1789, 'albert': 1790, 'statu': 1791, 'dancer': 1792, 'sourc': 1793, 'context': 1794, 'awkward': 1795, 'gain': 1796, 'speech': 1797, 'signific': 1798, 'anthoni': 1799, 'clip': 1800, 'corni': 1801, 'psycho': 1802, '13': 1803, 'sean': 1804, 'theatric': 1805, 'w': 1806, 'curiou': 1807, 'advic': 1808, 'religion': 1809, 'reli': 1810, 'priest': 1811, 'flow': 1812, 'addict': 1813, 'asian': 1814, 'skin': 1815, 'specif': 1816, 'jennif': 1817, 'howard': 1818, 'secur': 1819, 'golden': 1820, 'core': 1821, 'comfort': 1822, 'organ': 1823, 'promot': 1824, 'luke': 1825, 'cheat': 1826, 'lucki': 1827, 'cash': 1828, 'lower': 1829, 'dislik': 1830, 'associ': 1831, 'devic': 1832, 'regret': 1833, 'wing': 1834, 'balanc': 1835, 'frequent': 1836, 'frankli': 1837, 'contribut': 1838, 'degre': 1839, 'spell': 1840, 'print': 1841, 'forgiv': 1842, 'sake': 1843, 'lake': 1844, 'thoma': 1845, 'mass': 1846, 'betti': 1847, 'gordon': 1848, 'crack': 1849, 'unexpect': 1850, 'amateur': 1851, 'depend': 1852, 'construct': 1853, 'invit': 1854, 'categori': 1855, 'unfold': 1856, 'grown': 1857, 'honor': 1858, 'intellectu': 1859, 'grew': 1860, 'matur': 1861, 'anna': 1862, 'condit': 1863, 'walter': 1864, 'sole': 1865, 'veteran': 1866, 'sudden': 1867, 'mirror': 1868, 'spectacular': 1869, 'gift': 1870, 'overli': 1871, 'grip': 1872, 'demonstr': 1873, 'card': 1874, 'meanwhil': 1875, 'experienc': 1876, 'liner': 1877, 'freedom': 1878, 'robin': 1879, 'subtitl': 1880, 'section': 1881, 'crappi': 1882, 'brilliantli': 1883, 'circumst': 1884, 'sheriff': 1885, 'theori': 1886, 'oliv': 1887, 'unabl': 1888, 'drew': 1889, 'colour': 1890, 'sheer': 1891, 'matt': 1892, 'altern': 1893, 'laughter': 1894, 'parker': 1895, 'path': 1896, 'cook': 1897, 'pile': 1898, 'defin': 1899, 'treatment': 1900, 'hall': 1901, 'lawyer': 1902, 'sinatra': 1903, 'accident': 1904, 'wander': 1905, 'relief': 1906, 'dragon': 1907, 'hank': 1908, 'captiv': 1909, 'gratuit': 1910, 'moor': 1911, 'halloween': 1912, 'cowboy': 1913, 'jacki': 1914, 'wound': 1915, 'barbara': 1916, 'k': 1917, 'broadway': 1918, 'unintent': 1919, 'wayn': 1920, 'kung': 1921, 'winter': 1922, 'surreal': 1923, 'spoof': 1924, 'statement': 1925, 'canadian': 1926, 'cheer': 1927, 'gonna': 1928, 'fish': 1929, 'compos': 1930, 'fare': 1931, 'treasur': 1932, 'woodi': 1933, 'emerg': 1934, 'victor': 1935, 'sensit': 1936, 
'unrealist': 1937, 'neighbor': 1938, 'driven': 1939, 'ran': 1940, 'sympathet': 1941, 'menac': 1942, 'topic': 1943, 'authent': 1944, 'glass': 1945, 'expos': 1946, 'overlook': 1947, 'ancient': 1948, 'handsom': 1949, 'michel': 1950, 'chief': 1951, 'gross': 1952, 'nevertheless': 1953, 'contemporari': 1954, 'cinderella': 1955, 'built': 1956, 'network': 1957, 'stranger': 1958, 'russel': 1959, 'pleasant': 1960, 'comedian': 1961, 'feet': 1962, 'letter': 1963, 'endless': 1964, 'miser': 1965, 'earn': 1966, 'consider': 1967, 'blockbust': 1968, 'underr': 1969, 'gori': 1970, 'switch': 1971, 'solv': 1972, 'brook': 1973, 'convict': 1974, 'bullet': 1975, 'virgin': 1976, 'edward': 1977, 'victoria': 1978, 'joseph': 1979, 'chosen': 1980, 'alex': 1981, 'scale': 1982, 'cynic': 1983, 'scenario': 1984, '0': 1985, 'outrag': 1986, 'sword': 1987, 'com': 1988, 'gut': 1989, 'curs': 1990, 'screenwrit': 1991, 'monkey': 1992, 'wrap': 1993, 'substanc': 1994, 'uk': 1995, 'juli': 1996, 'proper': 1997, 'driver': 1998, 'remov': 1999, 'par': 2000, 'indic': 2001, 'court': 2002, 'bird': 2003, 'advertis': 2004, 'roy': 2005, 'inevit': 2006, 'nanci': 2007, 'loser': 2008, 'consequ': 2009, 'naiv': 2010, 'rental': 2011, 'grave': 2012, 'germani': 2013, 'invis': 2014, 'slap': 2015, 'fatal': 2016, 'le': 2017, 'brave': 2018, 'bridg': 2019, 'ador': 2020, 'provok': 2021, 'anger': 2022, 'loui': 2023, 'footbal': 2024, 'chan': 2025, 'anderson': 2026, 'alcohol': 2027, 'ryan': 2028, 'stumbl': 2029, 'willi': 2030, 'professor': 2031, 'patrick': 2032, '1930': 2033, 'australian': 2034, 'assassin': 2035, 'sharp': 2036, 'bat': 2037, 'strongli': 2038, 'deni': 2039, 'cell': 2040, 'saturday': 2041, 'liber': 2042, 'trilog': 2043, 'heck': 2044, 'eight': 2045, 'amateurish': 2046, 'lousi': 2047, 'refresh': 2048, 'ape': 2049, 'sin': 2050, 'san': 2051, 'resid': 2052, 'vagu': 2053, 'justifi': 2054, 'defeat': 2055, 'mini': 2056, 'reput': 2057, 'creator': 2058, 'terrifi': 2059, 'sympathi': 2060, 'indi': 2061, 'prevent': 2062, 'tabl': 2063, 'task': 2064, 'tediou': 2065, 'expert': 2066, 'endur': 2067, 'trial': 2068, 'offend': 2069, 'che': 2070, 'employ': 2071, 'imit': 2072, 'basebal': 2073, 'rival': 2074, 'fairi': 2075, 'max': 2076, 'complaint': 2077, 'europ': 2078, 'dig': 2079, 'weekend': 2080, 'beach': 2081, 'pitch': 2082, 'format': 2083, 'risk': 2084, 'purchas': 2085, 'murphi': 2086, 'bite': 2087, 'reminisc': 2088, 'nois': 2089, 'hype': 2090, 'powel': 2091, 'harsh': 2092, 'tini': 2093, 'glimps': 2094, 'titan': 2095, 'strip': 2096, 'till': 2097, 'prime': 2098, 'asleep': 2099, 'fals': 2100, 'north': 2101, '14': 2102, 'destruct': 2103, 'descript': 2104, 'revel': 2105, 'africa': 2106, 'texa': 2107, 'surfac': 2108, 'spin': 2109, 'semi': 2110, 'inner': 2111, 'arrest': 2112, 'sitcom': 2113, 'uninterest': 2114, 'excess': 2115, 'maintain': 2116, 'controversi': 2117, 'twin': 2118, 'hitchcock': 2119, 'makeup': 2120, 'argu': 2121, 'massiv': 2122, 'dinosaur': 2123, 'ludicr': 2124, 'insist': 2125, 'expens': 2126, 'ideal': 2127, 'stare': 2128, 'melodrama': 2129, 'kim': 2130, 'reject': 2131, 'atroci': 2132, 'press': 2133, 'ala': 2134, 'host': 2135, 'nail': 2136, 'ga': 2137, 'supernatur': 2138, 'forest': 2139, 'subplot': 2140, 'erot': 2141, 'columbo': 2142, 'dude': 2143, 'identifi': 2144, 'presum': 2145, 'notch': 2146, 'cant': 2147, 'forgett': 2148, 'crude': 2149, 'method': 2150, 'plagu': 2151, 'closer': 2152, 'character': 2153, 'guest': 2154, 'ear': 2155, 'foster': 2156, 'border': 2157, 'princess': 2158, 'beast': 2159, 'landscap': 2160, 'lion': 2161, 'accus': 2162, 'pacino': 
2163, 'previous': 2164, 'damag': 2165, 'jungl': 2166, 'urban': 2167, 'birth': 2168, 'bound': 2169, 'storytel': 2170, 'aunt': 2171, 'doll': 2172, 'chose': 2173, 'nude': 2174, 'thirti': 2175, 'guid': 2176, 'emma': 2177, 'jess': 2178, 'propaganda': 2179, 'warrior': 2180, '25': 2181, 'whoever': 2182, 'mate': 2183, 'mainstream': 2184, 'pet': 2185, 'poster': 2186, 'merit': 2187, 'gritti': 2188, 'upset': 2189, 'deadli': 2190, 'exact': 2191, 'cooper': 2192, 'latest': 2193, 'size': 2194, 'friday': 2195, 'contact': 2196, 'buff': 2197, 'corps': 2198, 'citizen': 2199, 'sun': 2200, 'warner': 2201, '1990': 2202, 'blend': 2203, 'popul': 2204, 'contest': 2205, 'rough': 2206, 'ton': 2207, 'settl': 2208, 'wilson': 2209, 'rat': 2210, 'widow': 2211, 'select': 2212, 'mgm': 2213, 'pitt': 2214, 'bu': 2215, 'overcom': 2216, 'environ': 2217, 'alic': 2218, 'metal': 2219, 'particip': 2220, 'revolut': 2221, 'ted': 2222, 'guilti': 2223, 'lift': 2224, 'link': 2225, 'exagger': 2226, 'moron': 2227, 'johnson': 2228, 'afternoon': 2229, '1960': 2230, 'prostitut': 2231, 'corner': 2232, 'matrix': 2233, 'accompani': 2234, 'corpor': 2235, 'holm': 2236, 'multipl': 2237, 'instal': 2238, 'friendli': 2239, 'sincer': 2240, 'doom': 2241, 'leagu': 2242, 'clair': 2243, 'hood': 2244, 'aka': 2245, 'campi': 2246, 'lugosi': 2247, 'defend': 2248, 'junk': 2249, 'advis': 2250, 'examin': 2251, 'blah': 2252, 'sunday': 2253, 'hip': 2254, 'string': 2255, 'grim': 2256, 'irish': 2257, 'icon': 2258, 'pro': 2259, 'shut': 2260, 'confid': 2261, 'rachel': 2262, 'tight': 2263, 'shake': 2264, 'varieti': 2265, 'denni': 2266, 'directli': 2267, 'medic': 2268, 'jaw': 2269, 'attach': 2270, 'goal': 2271, 'mexican': 2272, 'sullivan': 2273, 'dean': 2274, 'legendari': 2275, 'bourn': 2276, 'terrorist': 2277, 'sarah': 2278, 'prior': 2279, 'sentenc': 2280, 'vietnam': 2281, 'duke': 2282, 'courag': 2283, 'breast': 2284, 'truck': 2285, 'hong': 2286, 'proceed': 2287, 'split': 2288, 'nose': 2289, 'yell': 2290, 'entri': 2291, 'behav': 2292, 'donald': 2293, 'un': 2294, 'stolen': 2295, 'gather': 2296, 'concentr': 2297, 'everywher': 2298, 'borrow': 2299, 'crush': 2300, 'buri': 2301, 'jerk': 2302, 'unconvinc': 2303, 'confess': 2304, 'forth': 2305, 'swim': 2306, 'lifetim': 2307, 'california': 2308, 'pan': 2309, 'spite': 2310, 'turkey': 2311, 'julia': 2312, 'lip': 2313, 'deliveri': 2314, 'reward': 2315, 'quest': 2316, 'downright': 2317, 'flight': 2318, 'proud': 2319, 'china': 2320, 'offici': 2321, 'freeman': 2322, 'hoffman': 2323, 'encourag': 2324, 'sink': 2325, 'fabul': 2326, 'lazi': 2327, 'notori': 2328, 'inept': 2329, 'worthwhil': 2330, 'betray': 2331, 'fade': 2332, 'jon': 2333, 'jail': 2334, 'sir': 2335, 'retard': 2336, 'susan': 2337, 'survivor': 2338, 'shower': 2339, 'storm': 2340, 'bell': 2341, 'cousin': 2342, 'imageri': 2343, 'teeth': 2344, 'relev': 2345, 'bag': 2346, 'branagh': 2347, 'lisa': 2348, 'tremend': 2349, 'bride': 2350, 'hugh': 2351, 'shark': 2352, 'stab': 2353, 'facial': 2354, 'finger': 2355, 'mexico': 2356, 'summari': 2357, 'alright': 2358, 'trade': 2359, 'toler': 2360, 'quirki': 2361, 'hyster': 2362, 'bitter': 2363, 'von': 2364, 'pose': 2365, 'ha': 2366, 'blown': 2367, 'scheme': 2368, 'cruel': 2369, 'afterward': 2370, 'bone': 2371, 'larri': 2372, 'address': 2373, 'christ': 2374, 'ron': 2375, 'ned': 2376, 'tour': 2377, 'swear': 2378, 'feed': 2379, 'screw': 2380, 'pursu': 2381, 'distinct': 2382, 'thumb': 2383, 'snake': 2384, 'beg': 2385, 'traci': 2386, 'photo': 2387, 'occas': 2388, 'mechan': 2389, 'chair': 2390, 'stomach': 2391, 'raw': 2392, 'obscur': 2393, 
'necessarili': 2394, 'chain': 2395, 'resist': 2396, 'render': 2397, 'heavili': 2398, 'cabin': 2399, 'holiday': 2400, 'gruesom': 2401, 'southern': 2402, 'argument': 2403, 'sidney': 2404, 'hardi': 2405, 'indulg': 2406, 'philip': 2407, 'understood': 2408, 'india': 2409, 'racist': 2410, 'satan': 2411, 'fourth': 2412, 'forgot': 2413, 'tongu': 2414, 'midnight': 2415, 'integr': 2416, 'belov': 2417, 'stalk': 2418, 'pregnant': 2419, 'lay': 2420, 'obnoxi': 2421, 'outfit': 2422, 'inhabit': 2423, 'restor': 2424, 'slapstick': 2425, 'magazin': 2426, '17': 2427, 'garden': 2428, 'deeper': 2429, 'ticket': 2430, 'carol': 2431, 'devot': 2432, 'brad': 2433, 'shoe': 2434, 'lincoln': 2435, 'incid': 2436, 'anticip': 2437, 'elizabeth': 2438, 'underground': 2439, 'benefit': 2440, 'disbelief': 2441, 'divorc': 2442, 'lili': 2443, 'guarante': 2444, 'sandler': 2445, 'maria': 2446, 'bbc': 2447, 'creation': 2448, 'mildli': 2449, 'explod': 2450, 'greater': 2451, 'cring': 2452, 'capit': 2453, 'princip': 2454, 'amazingli': 2455, 'slave': 2456, 'lesli': 2457, 'halfway': 2458, 'introduct': 2459, 'funnier': 2460, 'extraordinari': 2461, 'wreck': 2462, 'extent': 2463, 'tap': 2464, 'advantag': 2465, 'overwhelm': 2466, 'transfer': 2467, 'enhanc': 2468, 'punish': 2469, 'text': 2470, 'deliber': 2471, 'preview': 2472, 'lane': 2473, 'error': 2474, 'dynam': 2475, 'plant': 2476, 'east': 2477, 'lo': 2478, 'horrif': 2479, 'jessica': 2480, 'ensu': 2481, 'basi': 2482, 'miscast': 2483, 'sophist': 2484, 'miller': 2485, 'appli': 2486, 'vincent': 2487, 'vacat': 2488, '2000': 2489, 'homosexu': 2490, 'elev': 2491, 'sleazi': 2492, 'mansion': 2493, 'steel': 2494, 'spoken': 2495, 'bollywood': 2496, 'via': 2497, 'uncomfort': 2498, 'extend': 2499, 'measur': 2500, 'reed': 2501, 'alter': 2502, 'beer': 2503, 'mous': 2504, 'goofi': 2505, 'hippi': 2506, 'conceiv': 2507, 'melt': 2508, 'blair': 2509, 'overact': 2510, 'fix': 2511, 'breathtak': 2512, 'assign': 2513, 'stanley': 2514, 'savag': 2515, 'dentist': 2516, 'cathol': 2517, 'daili': 2518, 'properli': 2519, 'sacrific': 2520, 'everyday': 2521, 'subsequ': 2522, 'carpent': 2523, 'oppos': 2524, 'nowaday': 2525, 'succe': 2526, 'inspector': 2527, 'burt': 2528, 'circl': 2529, 'block': 2530, 'neck': 2531, 'laura': 2532, 'massacr': 2533, 'lesser': 2534, 'grey': 2535, 'pool': 2536, 'fallen': 2537, 'mob': 2538, 'portrait': 2539, 'access': 2540, 'concert': 2541, 'christi': 2542, 'seagal': 2543, 'fay': 2544, 'react': 2545, 'sinist': 2546, 'competit': 2547, 'jake': 2548, 'usa': 2549, 'relax': 2550, 'jewish': 2551, 'isol': 2552, 'chees': 2553, '2006': 2554, 'nine': 2555, 'spiritu': 2556, 'appal': 2557, 'immens': 2558, 'nonetheless': 2559, 'suitabl': 2560, 'creep': 2561, 'stink': 2562, 'lyric': 2563, 'chop': 2564, 'ironi': 2565, 'sold': 2566, 'reduc': 2567, 'showcas': 2568, 'nut': 2569, 'shirt': 2570, 'needless': 2571, 'navi': 2572, 'spring': 2573, 'franchis': 2574, 'rage': 2575, 'user': 2576, 'retir': 2577, 'adopt': 2578, 'luci': 2579, 'bath': 2580, 'zone': 2581, 'asham': 2582, 'jay': 2583, 'digit': 2584, 'nurs': 2585, 'per': 2586, 'uninspir': 2587, 'bulli': 2588, 'stanwyck': 2589, 'amongst': 2590, '1940': 2591, 'laid': 2592, '2001': 2593, 'broadcast': 2594, 'illustr': 2595, 'sutherland': 2596, 'oddli': 2597, 'upper': 2598, 'aspir': 2599, 'fulfil': 2600, 'stylish': 2601, 'baker': 2602, 'disguis': 2603, 'throat': 2604, 'brando': 2605, 'endear': 2606, 'impli': 2607, 'wanna': 2608, 'em': 2609, 'pride': 2610, 'neighborhood': 2611, 'wwii': 2612, '18': 2613, 'nobl': 2614, 'thief': 2615, 'pound': 2616, 'albeit': 2617, 'dawn': 
2618, 'dinner': 2619, 'shift': 2620, 'diseas': 2621, 'coher': 2622, 'cinematograph': 2623, 'distribut': 2624, 'tens': 2625, 'shoulder': 2626, 'bo': 2627, 'prop': 2628, 'bett': 2629, '16': 2630, 'rochest': 2631, 'snow': 2632, 'forti': 2633, 'shout': 2634, 'function': 2635, 'surf': 2636, 'matthau': 2637, 'silenc': 2638, 'poignant': 2639, 'rebel': 2640, 'contract': 2641, 'knife': 2642, 'wash': 2643, 'henc': 2644, 'instinct': 2645, 'mindless': 2646, 'derek': 2647, 'chuck': 2648, 'cancel': 2649, 'heat': 2650, 'reunion': 2651, 'proof': 2652, 'eeri': 2653, 'horrend': 2654, 'internet': 2655, 'height': 2656, 'widmark': 2657, 'silver': 2658, 'elvira': 2659, 'duti': 2660, 'cannib': 2661, 'greatli': 2662, 'incoher': 2663, 'musician': 2664, 'spielberg': 2665, 'glori': 2666, 'neat': 2667, 'etern': 2668, 'premier': 2669, 'mill': 2670, 'pie': 2671, 'alik': 2672, 'absorb': 2673, 'innov': 2674, 'elvi': 2675, 'repetit': 2676, 'torn': 2677, 'wealthi': 2678, 'bang': 2679, 'trite': 2680, 'infam': 2681, 'redempt': 2682, 'britain': 2683, 'homag': 2684, 'diamond': 2685, 'racism': 2686, 'precis': 2687, 'crisi': 2688, 'itali': 2689, 'horrifi': 2690, 'announc': 2691, 'lovabl': 2692, 'nelson': 2693, 'blank': 2694, 'fbi': 2695, 'burton': 2696, 'ensembl': 2697, 'happili': 2698, 'parallel': 2699, 'resolut': 2700, 'streisand': 2701, 'chaplin': 2702, 'hammer': 2703, 'wilder': 2704, 'flop': 2705, 'helen': 2706, 'dedic': 2707, 'pat': 2708, 'disagre': 2709, 'carter': 2710, 'st': 2711, 'mar': 2712, 'conclud': 2713, 'cube': 2714, 'factori': 2715, 'broke': 2716, 'triumph': 2717, 'oil': 2718, 'plastic': 2719, 'row': 2720, 'march': 2721, 'chuckl': 2722, 'rocket': 2723, 'fighter': 2724, 'weight': 2725, 'climb': 2726, 'own': 2727, 'vega': 2728, 'bush': 2729, 'thug': 2730, 'dump': 2731, 'mst3k': 2732, 'kurt': 2733, 'enorm': 2734, 'wherea': 2735, 'unforgett': 2736, 'boot': 2737, 'dane': 2738, 'sensibl': 2739, 'luca': 2740, 'lust': 2741, 'spare': 2742, 'meaning': 2743, 'arnold': 2744, 'caricatur': 2745, 'brand': 2746, 'adequ': 2747, 'fifti': 2748, 'stress': 2749, 'bobbi': 2750, 'dear': 2751, 'rap': 2752, 'difficulti': 2753, 'butt': 2754, 'threat': 2755, 'engin': 2756, 'karloff': 2757, 'elabor': 2758, 'swing': 2759, 'ralph': 2760, 'secretari': 2761, 'arrog': 2762, 'barri': 2763, 'hamlet': 2764, 'homeless': 2765, 'ego': 2766, 'polish': 2767, 'fest': 2768, 'journalist': 2769, 'flynn': 2770, 'tool': 2771, 'fanci': 2772, 'puppet': 2773, 'induc': 2774, 'float': 2775, 'arrang': 2776, 'simpson': 2777, 'conspiraci': 2778, 'unbear': 2779, 'resort': 2780, 'grate': 2781, 'spike': 2782, 'phillip': 2783, 'basement': 2784, 'exercis': 2785, 'pig': 2786, 'cruis': 2787, 'tribut': 2788, 'guilt': 2789, 'muppet': 2790, 'boll': 2791, 'choreograph': 2792, 'layer': 2793, 'babe': 2794, 'editor': 2795, 'ward': 2796, 'medium': 2797, 'item': 2798, 'puzzl': 2799, 'file': 2800, 'slip': 2801, 'scarecrow': 2802, 'fianc': 2803, 'tower': 2804, 'document': 2805, 'korean': 2806, '24': 2807, 'stan': 2808, 'ham': 2809, 'toilet': 2810, 'persona': 2811, 'larger': 2812, 'orient': 2813, 'glover': 2814, 'assur': 2815, 'catherin': 2816, 'philosoph': 2817, 'inexplic': 2818, 'portion': 2819, 'territori': 2820, 'librari': 2821, 'superfici': 2822, 'spark': 2823, 'slaughter': 2824, 'minim': 2825, 'denzel': 2826, 'transit': 2827, 'doc': 2828, 'pg': 2829, 'shi': 2830, 'financi': 2831, 'boredom': 2832, 'curti': 2833, 'owe': 2834, 'sneak': 2835, 'jet': 2836, 'jeremi': 2837, 'dorothi': 2838, 'walken': 2839, 'wolf': 2840, 'ban': 2841, 'metaphor': 2842, 'profound': 2843, 'backdrop': 2844, 
'multi': 2845, 'ambigu': 2846, 'whale': 2847, 'hudson': 2848, 'eleph': 2849, 'cusack': 2850, 'ultra': 2851, 'rave': 2852, '2005': 2853, 'implaus': 2854, 'notion': 2855, 'elsewher': 2856, 'hack': 2857, 'stiff': 2858, 'viru': 2859, 'union': 2860, 'birthday': 2861, 'gadget': 2862, 'eastwood': 2863, 'bibl': 2864, 'squar': 2865, 'afford': 2866, 'newspap': 2867, 'urg': 2868, 'pad': 2869, 'reader': 2870, '1st': 2871, 'slight': 2872, 'canada': 2873, 'poison': 2874, 'distanc': 2875, 'disc': 2876, 'superhero': 2877, 'eva': 2878, 'deriv': 2879, 'lloyd': 2880, 'hawk': 2881, 'spread': 2882, 'huh': 2883, 'skit': 2884, 'health': 2885, 'charisma': 2886, 'heston': 2887, 'sadist': 2888, 'drown': 2889, 'essenc': 2890, 'cure': 2891, 'montag': 2892, 'restaur': 2893, 'button': 2894, 'lab': 2895, 'maniac': 2896, 'companion': 2897, 'scoobi': 2898, 'gradual': 2899, 'estat': 2900, 'godfath': 2901, 'peak': 2902, 'fetch': 2903, 'invest': 2904, 'dealt': 2905, 'muslim': 2906, 'alli': 2907, 'servant': 2908, 'kane': 2909, 'countless': 2910, 'ritter': 2911, 'miik': 2912, 'gothic': 2913, 'subtleti': 2914, 'cup': 2915, 'tea': 2916, 'briefli': 2917, 'electr': 2918, 'charismat': 2919, 'heroic': 2920, 'iii': 2921, 'salli': 2922, 'elect': 2923, 'resourc': 2924, 'grandmoth': 2925, 'admittedli': 2926, 'ingredi': 2927, 'toss': 2928, 'tender': 2929, 'nuanc': 2930, 'reel': 2931, 'bud': 2932, 'cole': 2933, 'neil': 2934, 'wannab': 2935, 'label': 2936, 'mild': 2937, 'poverti': 2938, 'pit': 2939, 'reev': 2940, 'stood': 2941, 'shall': 2942, 'mafia': 2943, 'punk': 2944, 'stronger': 2945, 'gate': 2946, 'pauli': 2947, 'carrey': 2948, 'dawson': 2949, 'kubrick': 2950, 'updat': 2951, 'easier': 2952, 'smooth': 2953, 'burst': 2954, 'ian': 2955, 'outcom': 2956, 'fond': 2957, 'cardboard': 2958, 'tag': 2959, 'terri': 2960, 'smash': 2961, 'useless': 2962, 'assault': 2963, 'astair': 2964, 'cox': 2965, 'bakshi': 2966, 'increasingli': 2967, 'melodramat': 2968, 'qualifi': 2969, '2002': 2970, 'samurai': 2971, 'exchang': 2972, 'resolv': 2973, 'rex': 2974, 'divers': 2975, 'vari': 2976, 'fist': 2977, 'vulner': 2978, 'sketch': 2979, 'coincid': 2980, 'reynold': 2981, 'templ': 2982, 'insert': 2983, 'blast': 2984, 'brillianc': 2985, 'tame': 2986, 'suspend': 2987, 'conveni': 2988, 'be': 2989, 'scratch': 2990, 'luckili': 2991, 'coach': 2992, 'strictli': 2993, 'matthew': 2994, 'gotta': 2995, 'ambiti': 2996, 'pin': 2997, 'farm': 2998, 'nuclear': 2999, 'jami': 3000, 'walker': 3001, 'soprano': 3002, 'meat': 3003, 'seventi': 3004, 'fisher': 3005, 'hamilton': 3006, 'closet': 3007, 'clock': 3008, 'eccentr': 3009, 'spooki': 3010, 'empir': 3011, 'kudo': 3012, 'convolut': 3013, 'worthless': 3014, 'grasp': 3015, 'butcher': 3016, 'recreat': 3017, 'discoveri': 3018, 'instantli': 3019, 'revers': 3020, 'timeless': 3021, 'struck': 3022, 'monk': 3023, 'joey': 3024, 'brosnan': 3025, 'ninja': 3026, 'cave': 3027, 'clown': 3028, 'pal': 3029, 'cliff': 3030, 'inconsist': 3031, 'declar': 3032, 'importantli': 3033, 'eighti': 3034, 'sloppi': 3035, 'partli': 3036, 'gray': 3037, 'communist': 3038, 'seller': 3039, 'evok': 3040, 'fifteen': 3041, 'miracl': 3042, 'selfish': 3043, 'mitchel': 3044, 'bleak': 3045, 'wipe': 3046, 'sidekick': 3047, 'norman': 3048, 'piano': 3049, 'chew': 3050, 'farc': 3051, 'websit': 3052, '45': 3053, 'debat': 3054, 'superbl': 3055, 'cheek': 3056, 'psychiatrist': 3057, 'destin': 3058, 'lifestyl': 3059, 'aforement': 3060, 'flawless': 3061, 'seed': 3062, 'ho': 3063, 'enthusiast': 3064, 'australia': 3065, 'stoog': 3066, 'kitchen': 3067, 'akshay': 3068, 'emili': 3069, 
'dire': 3070, 'bash': 3071, 'pressur': 3072, 'dash': 3073, 'wick': 3074, 'drivel': 3075, 'regardless': 3076, 'soviet': 3077, 'abc': 3078, 'slice': 3079, 'wrestl': 3080, 'splatter': 3081, 'anni': 3082, 'directori': 3083, 'incompet': 3084, 'prize': 3085, 'mann': 3086, 'increas': 3087, 'cia': 3088, 'judi': 3089, 'distant': 3090, 'helicopt': 3091, 'recov': 3092, 'beaten': 3093, 'cagney': 3094, 'dave': 3095, 'jar': 3096, 'doo': 3097, 'cameron': 3098, 'glow': 3099, 'seduc': 3100, 'ken': 3101, 'duo': 3102, 'flower': 3103, 'pleasantli': 3104, 'curios': 3105, 'boil': 3106, 'lou': 3107, 'artifici': 3108, 'suppli': 3109, 'chapter': 3110, 'blob': 3111, 'hop': 3112, 'drunken': 3113, 'favour': 3114, 'splendid': 3115, 'ellen': 3116, 'ranger': 3117, 'panic': 3118, 'francisco': 3119, 'craig': 3120, 'glenn': 3121, 'turner': 3122, 'goldberg': 3123, 'craven': 3124, 'eleg': 3125, 'combat': 3126, 'psychot': 3127, 'laurel': 3128, 'web': 3129, 'perri': 3130, 'modesti': 3131, 'shortli': 3132, 'greek': 3133, 'plausibl': 3134, 'rid': 3135, 'min': 3136, 'graduat': 3137, 'flip': 3138, 'wizard': 3139, 'ruth': 3140, 'hatr': 3141, '20th': 3142, 'alexand': 3143, 'philosophi': 3144, 'gentl': 3145, 'slightest': 3146, 'gandhi': 3147, 'falk': 3148, 'fx': 3149, 'holi': 3150, 'unpleas': 3151, 'fund': 3152, 'jealou': 3153, 'knight': 3154, 'preciou': 3155, 'ocean': 3156, 'legal': 3157, 'futurist': 3158, 'felix': 3159, 'manhattan': 3160, 'we': 3161, 'tall': 3162, 'harm': 3163, 'dracula': 3164, 'lend': 3165, 'ami': 3166, 'forbidden': 3167, 'digniti': 3168, 'thread': 3169, 'explicit': 3170, 'reviv': 3171, 'overdon': 3172, 'nod': 3173, 'scientif': 3174, 'tank': 3175, 'childish': 3176, 'bless': 3177, 'mock': 3178, 'giallo': 3179, 'nerv': 3180, '99': 3181, 'pirat': 3182, 'margaret': 3183, 'torment': 3184, 'verhoeven': 3185, 'elderli': 3186, 'mel': 3187, 'awe': 3188, 'awaken': 3189, 'eve': 3190, 'broad': 3191, 'thick': 3192, 'repeatedli': 3193, 'fever': 3194, '2004': 3195, 'unwatch': 3196, 'yesterday': 3197, 'custom': 3198, 'automat': 3199, 'ambit': 3200, 'uniform': 3201, 'stiller': 3202, 'ah': 3203, 'eas': 3204, 'romero': 3205, 'royal': 3206, 'launch': 3207, 'griffith': 3208, 'timothi': 3209, 'politician': 3210, 'rivet': 3211, 'bin': 3212, 'acclaim': 3213, 'publish': 3214, 'absenc': 3215, 'lean': 3216, 'roman': 3217, 'kay': 3218, 'phrase': 3219, 'pulp': 3220, 'transport': 3221, 'sunshin': 3222, 'purpl': 3223, 'stinker': 3224, 'homicid': 3225, 'warren': 3226, 'crook': 3227, 'tomato': 3228, 'termin': 3229, 'wallac': 3230, 'antic': 3231, 'foul': 3232, 'pierc': 3233, 'darker': 3234, 'bathroom': 3235, 'gabriel': 3236, 'karen': 3237, 'evolv': 3238, 'brazil': 3239, 'hollow': 3240, 'juvenil': 3241, 'q': 3242, 'viciou': 3243, 'packag': 3244, 'awak': 3245, 'coloni': 3246, 'horrid': 3247, 'donna': 3248, 'saint': 3249, 'pray': 3250, 'sixti': 3251, 'choreographi': 3252, 'kenneth': 3253, '2003': 3254, 'revolutionari': 3255, 'album': 3256, 'eyr': 3257, 'ought': 3258, 'prom': 3259, 'contrari': 3260, 'rambo': 3261, 'li': 3262, 'marin': 3263, 'defi': 3264, 'twelv': 3265, 'ireland': 3266, 'boast': 3267, 'overr': 3268, 'ramon': 3269, 'dose': 3270, 'stole': 3271, 'mummi': 3272, 'nerd': 3273, 'candid': 3274, 'blade': 3275, 'beatti': 3276, 'option': 3277, 'conserv': 3278, 'mildr': 3279, 'kapoor': 3280, 'altman': 3281, 'astonish': 3282, 'confirm': 3283, 'protest': 3284, 'global': 3285, 'natali': 3286, 'detract': 3287, 'trio': 3288, 'kirk': 3289, 'funer': 3290, 'collabor': 3291, 'flame': 3292, 'jazz': 3293, 'fulci': 3294, 'whip': 3295, 'bottl': 3296, 'racial': 
3297, 'nicholson': 3298, 'yellow': 3299, 'bull': 3300, 'shade': 3301, 'tommi': 3302, 'blake': 3303, 'leap': 3304, 'mystic': 3305, 'destini': 3306, 'enterpris': 3307, 'delici': 3308, 'spit': 3309, 'audio': 3310, 'bedroom': 3311, 'reunit': 3312, 'inherit': 3313, 'pseudo': 3314, 'merci': 3315, 'meaningless': 3316, 'altogeth': 3317, 'swedish': 3318, 'staff': 3319, 'fonda': 3320, 'enchant': 3321, 'visibl': 3322, 'threw': 3323, 'popcorn': 3324, 'harder': 3325, 'neo': 3326, 'todd': 3327, 'vivid': 3328, 'adolesc': 3329, 'respond': 3330, 'atlanti': 3331, 'decor': 3332, 'jew': 3333, 'leonard': 3334, 'await': 3335, 'crocodil': 3336, 'lawrenc': 3337, 'ruthless': 3338, 'reserv': 3339, 'tip': 3340, 'bust': 3341, 'exhibit': 3342, 'fanat': 3343, 'lemmon': 3344, 'moodi': 3345, 'wire': 3346, 'befriend': 3347, 'synopsi': 3348, 'edi': 3349, 'uneven': 3350, 'voight': 3351, 'suspici': 3352, 'kennedi': 3353, 'madonna': 3354, 'roommat': 3355, 'clint': 3356, 'bargain': 3357, 'voyag': 3358, 'chao': 3359, 'incident': 3360, 'palma': 3361, 'centr': 3362, 'garner': 3363, 'abysm': 3364, 'bradi': 3365, 'carl': 3366, 'clumsi': 3367, 'ventur': 3368, '2007': 3369, 'rural': 3370, 'audit': 3371, 'bold': 3372, 'unsettl': 3373, 'holli': 3374, 'dimens': 3375, 'versu': 3376, 'mall': 3377, 'troop': 3378, 'elimin': 3379, 'cuba': 3380, 'echo': 3381, 'poetic': 3382, 'lit': 3383, 'nearbi': 3384, 'humili': 3385, 'tiger': 3386, 'acknowledg': 3387, 'trail': 3388, 'immigr': 3389, 'daddi': 3390, 'cari': 3391, 'imperson': 3392, 'characterist': 3393, 'cd': 3394, 'timon': 3395, 'wealth': 3396, 'hart': 3397, 'ant': 3398, '2nd': 3399, 'neglect': 3400, 'solo': 3401, 'paus': 3402, 'mistaken': 3403, 'saga': 3404, 'collaps': 3405, 'jeffrey': 3406, 'celluloid': 3407, 'repuls': 3408, 'marshal': 3409, 'pun': 3410, 'domest': 3411, 'prejudic': 3412, 'infect': 3413, 'homer': 3414, 'mickey': 3415, 'assembl': 3416, 'harvey': 3417, 'interrupt': 3418, 'equip': 3419, 'olivi': 3420, 'milk': 3421, 'inan': 3422, 'sore': 3423, 'leon': 3424, 'gear': 3425, 'cake': 3426, 'hbo': 3427, 'apolog': 3428, 'chest': 3429, 'coat': 3430, 'ginger': 3431, 'inappropri': 3432, '1996': 3433, 'coffe': 3434, 'tribe': 3435, 'pant': 3436, 'undoubtedli': 3437, 'promin': 3438, 'highest': 3439, 'trace': 3440, 'consum': 3441, 'instant': 3442, 'retain': 3443, 'aveng': 3444, 'maggi': 3445, 'humbl': 3446, 'primari': 3447, 'embrac': 3448, 'colonel': 3449, 'devast': 3450, 'airplan': 3451, 'vulgar': 3452, 'pot': 3453, 'furthermor': 3454, 'solut': 3455, 'pen': 3456, 'institut': 3457, 'exot': 3458, 'florida': 3459, 'polanski': 3460, 'brooklyn': 3461, 'colleagu': 3462, 'jenni': 3463, 'dutch': 3464, '1999': 3465, 'seduct': 3466, 'descend': 3467, 'linda': 3468, '3rd': 3469, 'ya': 3470, 'bowl': 3471, 'godzilla': 3472, 'illog': 3473, 'cope': 3474, 'dian': 3475, 'principl': 3476, 'rick': 3477, 'smaller': 3478, 'strain': 3479, 'outer': 3480, 'sale': 3481, 'wive': 3482, 'poke': 3483, 'gender': 3484, 'disabl': 3485, 'dud': 3486, 'inferior': 3487, 'gloriou': 3488, 'dive': 3489, 'predecessor': 3490, 'glamor': 3491, 'secondli': 3492, 'yard': 3493, 'devoid': 3494, 'gundam': 3495, 'lol': 3496, 'vast': 3497, 'cue': 3498, 'beneath': 3499, 'primarili': 3500, 'scope': 3501, 'rabbit': 3502, 'mixtur': 3503, 'blatant': 3504, 'bubbl': 3505, 'hal': 3506, 'shirley': 3507, 'talki': 3508, 'invas': 3509, 'hideou': 3510, 'aggress': 3511, 'myer': 3512, 'simplist': 3513, 'pearl': 3514, 'z': 3515, 'museum': 3516, 'casual': 3517, 'breed': 3518, 'senseless': 3519, 'shelf': 3520, 'et': 3521, 'arab': 3522, 'april': 3523, 'garbo': 
3524, 'alfr': 3525, 'streep': 3526, 'countrysid': 3527, 'grinch': 3528, 'trademark': 3529, 'disjoint': 3530, 'alert': 3531, 'domino': 3532, 'vanish': 3533, 'acid': 3534, 'obtain': 3535, 'stir': 3536, 'rendit': 3537, 'stellar': 3538, 'experiment': 3539, 'applaud': 3540, 'slide': 3541, 'defens': 3542, 'maci': 3543, 'mail': 3544, 'robberi': 3545, 'oz': 3546, 'loyal': 3547, 'disgrac': 3548, 'hardcor': 3549, 'stack': 3550, 'boom': 3551, 'hopeless': 3552, 'uwe': 3553, 'illeg': 3554, 'unhappi': 3555, 'sh': 3556, 'mayor': 3557, 'robinson': 3558, 'khan': 3559, 'rifl': 3560, 'dicken': 3561, 'amanda': 3562, 'declin': 3563, 'fri': 3564, 'spider': 3565, 'topless': 3566, 'craze': 3567, 'diana': 3568, 'incomprehens': 3569, 'counter': 3570, 'grandfath': 3571, 'scroog': 3572, 'recruit': 3573, 'wont': 3574, 'dismiss': 3575, 'span': 3576, 'emphasi': 3577, 'soccer': 3578, 'berlin': 3579, 'tempt': 3580, 'tenant': 3581, 'psychic': 3582, 'blew': 3583, 'hartley': 3584, 'sympath': 3585, 'faster': 3586, 'riot': 3587, 'shed': 3588, 'parad': 3589, 'goer': 3590, 'porno': 3591, 'intim': 3592, 'ethnic': 3593, 'sibl': 3594, 'bitch': 3595, 'revolt': 3596, 'ration': 3597, 'niro': 3598, 'woo': 3599, 'trashi': 3600, 'wet': 3601, 'resurrect': 3602, 'justin': 3603, 'shaw': 3604, 'lumet': 3605, 'slick': 3606, 'wendi': 3607, 'choru': 3608, 'feminist': 3609, 'eager': 3610, 'honesti': 3611, 'region': 3612, 'andr': 3613, 'jonathan': 3614, 'dealer': 3615, 'biographi': 3616, '00': 3617, 'steam': 3618, 'ballet': 3619, 'rider': 3620, 'unreal': 3621, 'nephew': 3622, 'immort': 3623, 'commend': 3624, 'weakest': 3625, 'hesit': 3626, 'hopper': 3627, 'farmer': 3628, 'ensur': 3629, 'worm': 3630, 'patriot': 3631, 'wheel': 3632, 'gap': 3633, 'partial': 3634, 'enlighten': 3635, 'lena': 3636, 'mario': 3637, 'confin': 3638, 'franco': 3639, 'vice': 3640, 'morri': 3641, 'victori': 3642, 'properti': 3643, 'psychopath': 3644, 'blunt': 3645, 'wore': 3646, 'util': 3647, 'sappi': 3648, 'skull': 3649, 'safeti': 3650, 'nostalg': 3651, 'macarthur': 3652, 'similarli': 3653, 'prequel': 3654, 'sandra': 3655, 'mutant': 3656, 'composit': 3657, 'hung': 3658, 'leo': 3659, 'snap': 3660, 'kingdom': 3661, 'charlott': 3662, 'owen': 3663, 'repress': 3664, '1972': 3665, 'compass': 3666, 'montana': 3667, 'acquir': 3668, 'campbel': 3669, 'whoopi': 3670, 'tad': 3671, 'bonu': 3672, 'deed': 3673, 'farrel': 3674, 'del': 3675, 'drain': 3676, 'despair': 3677, 'thru': 3678, 'valuabl': 3679, 'compens': 3680, 'bumbl': 3681, 'tail': 3682, 'emperor': 3683, 'rocki': 3684, 'miseri': 3685, 'dust': 3686, 'heartbreak': 3687, 'rope': 3688, 'latin': 3689, 'bow': 3690, 'exit': 3691, 'nervou': 3692, 'strand': 3693, 'rambl': 3694, 'drum': 3695, 'cg': 3696, 'snl': 3697, 'repli': 3698, 'recycl': 3699, 'speci': 3700, 'pattern': 3701, 'kyle': 3702, 'dalton': 3703, 'bergman': 3704, 'hyde': 3705, 'chess': 3706, 'carradin': 3707, 'romp': 3708, 'bleed': 3709, 'roth': 3710, 'radic': 3711, 'pour': 3712, 'gimmick': 3713, 'mistress': 3714, 'airport': 3715, 'downhil': 3716, 'da': 3717, 'percept': 3718, 'oppress': 3719, 'contempl': 3720, 'gal': 3721, 'rotten': 3722, 'slug': 3723, 'tonight': 3724, 'martian': 3725, 'orson': 3726, 'wacki': 3727, '35': 3728, 'olli': 3729, 'rapist': 3730, 'shelley': 3731, 'arc': 3732, 'heal': 3733, 'preach': 3734, 'dazzl': 3735, 'taught': 3736, 'pursuit': 3737, 'tackl': 3738, 'attorney': 3739, 'melodi': 3740, '1983': 3741, 'mislead': 3742, 'unpredict': 3743, 'pervers': 3744, 'banal': 3745, 'slash': 3746, 'stilt': 3747, 'champion': 3748, 'paltrow': 3749, 'arguabl': 3750, 'belt': 
3751, 'tooth': 3752, 'programm': 3753, 'edgar': 3754, 'pervert': 3755, 'plight': 3756, 'gambl': 3757, 'poem': 3758, 'bela': 3759, 'vengeanc': 3760, 'employe': 3761, 'graham': 3762, 'sensat': 3763, 'rubi': 3764, 'orang': 3765, 'uplift': 3766, 'raymond': 3767, 'duval': 3768, 'mesmer': 3769, 'cleverli': 3770, 'passeng': 3771, 'virginia': 3772, 'tiresom': 3773, 'vocal': 3774, 'maid': 3775, 'chicken': 3776, 'closest': 3777, 'marti': 3778, 'dixon': 3779, 'conneri': 3780, 'franki': 3781, 'convincingli': 3782, 'bay': 3783, 'abraham': 3784, 'giggl': 3785, 'numb': 3786, 'inject': 3787, 'crystal': 3788, 'swallow': 3789, 'paranoia': 3790, 'yawn': 3791, 'quarter': 3792, 'climact': 3793, 'outing': 3794, 'engross': 3795, 'scottish': 3796, 'habit': 3797, 'extens': 3798, 'clone': 3799, 'monologu': 3800, '1968': 3801, 'volum': 3802, 'suffic': 3803, 'secretli': 3804, 'gerard': 3805, 'amitabh': 3806, 'whine': 3807, 'mute': 3808, 'pokemon': 3809, 'profan': 3810, 'calm': 3811, 'lundgren': 3812, 'tube': 3813, 'sirk': 3814, 'iran': 3815, 'fed': 3816, 'frankenstein': 3817, 'grotesqu': 3818, 'profess': 3819, 'im': 3820, 'underst': 3821, 'meander': 3822, 'chicago': 3823, 'expand': 3824, 'earl': 3825, 'richardson': 3826, 'abort': 3827, 'lowest': 3828, 'plod': 3829, 'junior': 3830, 'linger': 3831, 'surpass': 3832, 'poetri': 3833, 'nichola': 3834, 'franci': 3835, 'bend': 3836, 'taxi': 3837, 'dispos': 3838, 'austen': 3839, 'trend': 3840, 'spock': 3841, 'backward': 3842, 'ethan': 3843, 'septemb': 3844, 'literatur': 3845, 'waitress': 3846, 'compliment': 3847, 'eugen': 3848, 'tourist': 3849, 'der': 3850, 'myth': 3851, 'instrument': 3852, 'spoke': 3853, 'sue': 3854, 'greedi': 3855, 'cannon': 3856, 'stallon': 3857, 'muddl': 3858, 'household': 3859, 'simplic': 3860, 'rant': 3861, 'catchi': 3862, 'dysfunct': 3863, 'mundan': 3864, 'hum': 3865, 'econom': 3866, 'nostalgia': 3867, 'rubber': 3868, 'lure': 3869, 'descent': 3870, 'furi': 3871, 'stale': 3872, 'recognis': 3873, 'omen': 3874, 'damon': 3875, 'occupi': 3876, 'lang': 3877, 'coast': 3878, 'carel': 3879, 'mortal': 3880, 'hello': 3881, 'alongsid': 3882, 'molli': 3883, 'equival': 3884, 'irrelev': 3885, 'cent': 3886, 'dictat': 3887, 'duck': 3888, 'randi': 3889, 'bacal': 3890, 'flee': 3891, 'mankind': 3892, 'recognit': 3893, 'firstli': 3894, 'phantom': 3895, 'eaten': 3896, 'louis': 3897, 'deaf': 3898, 'dement': 3899, 'insur': 3900, 'phoni': 3901, 'sissi': 3902, 'crucial': 3903, 'map': 3904, 'june': 3905, '1973': 3906, 'twilight': 3907, 'rude': 3908, 'bike': 3909, 'blackmail': 3910, 'ashley': 3911, 'distinguish': 3912, 'cyborg': 3913, 'drake': 3914, 'newli': 3915, 'loyalti': 3916, 'dreari': 3917, 'lengthi': 3918, 'likewis': 3919, 'grayson': 3920, 'freez': 3921, 'onlin': 3922, 'labor': 3923, 'wisdom': 3924, 'damm': 3925, 'bump': 3926, 'antwon': 3927, 'daisi': 3928, 'reign': 3929, 'rooney': 3930, 'heel': 3931, 'buffalo': 3932, 'biko': 3933, 'baddi': 3934, 'analysi': 3935, 'vein': 3936, 'interior': 3937, 'provoc': 3938, 'keith': 3939, 'boxer': 3940, 'nineti': 3941, 'approv': 3942, 'proce': 3943, 'chronicl': 3944, 'emphas': 3945, 'worn': 3946, 'ridden': 3947, 'attribut': 3948, 'inher': 3949, 'incorpor': 3950, 'tunnel': 3951, 'exposur': 3952, 'startl': 3953, 'pink': 3954, 'butler': 3955, 'basketbal': 3956, 'prey': 3957, 'barrymor': 3958, 'sailor': 3959, 'unorigin': 3960, 'hypnot': 3961, 'millionair': 3962, 'underli': 3963, 'othello': 3964, 'mighti': 3965, 'indiffer': 3966, 'degrad': 3967, 'elm': 3968, 'condemn': 3969, 'julian': 3970, 'simmon': 3971, 'meyer': 3972, 'undeni': 3973, 
'nicol': 3974, 'predat': 3975, 'stalker': 3976, 'er': 3977, 'meg': 3978, 'robbin': 3979, 'fleet': 3980, 'mormon': 3981, 'barrel': 3982, 'unrel': 3983, 'carla': 3984, 'improvis': 3985, 'substitut': 3986, 'drift': 3987, 'belushi': 3988, 'walsh': 3989, 'bunni': 3990, 'unawar': 3991, 'watson': 3992, 'alarm': 3993, 'vital': 3994, 'agenda': 3995, 'exquisit': 3996, 'enthusiasm': 3997, 'errol': 3998, 'marion': 3999, 'reid': 4000, 'nyc': 4001, 'palac': 4002, 'hay': 4003, 'disord': 4004, 'warmth': 4005, 'novak': 4006, 'roof': 4007, 'dolph': 4008, 'firm': 4009, 'mtv': 4010, 'shove': 4011, 'edgi': 4012, 'greed': 4013, 'priceless': 4014, 'lampoon': 4015, 'alison': 4016, 'rukh': 4017, '3d': 4018, 'spain': 4019, 'peril': 4020, 'profit': 4021, 'eastern': 4022, 'simultan': 4023, 'campaign': 4024, 'valentin': 4025, 'gestur': 4026, 'showdown': 4027, 'cassidi': 4028, 'testament': 4029, 'unleash': 4030, 'peck': 4031, 'crown': 4032, 'preserv': 4033, 'thompson': 4034, 'petti': 4035, 'drip': 4036, '1933': 4037, 'sergeant': 4038, 'iraq': 4039, 'israel': 4040, 'session': 4041, 'nun': 4042, 'angela': 4043, 'ponder': 4044, 'what': 4045, 'pamela': 4046, 'beatl': 4047, 'glanc': 4048, 'distort': 4049, 'randomli': 4050, 'zizek': 4051, '13th': 4052, 'coup': 4053, 'minimum': 4054, 'championship': 4055, 'orlean': 4056, 'wig': 4057, 'crawl': 4058, 'bro': 4059, 'travesti': 4060, 'represent': 4061, 'buster': 4062, 'rout': 4063, 'calib': 4064, 'miyazaki': 4065, '1984': 4066, 'realm': 4067, 'exposit': 4068, 'empathi': 4069, 'valley': 4070, 'shootout': 4071, 'jan': 4072, 'cream': 4073, 'unimagin': 4074, 'scotland': 4075, 'climat': 4076, 'crow': 4077, 'regist': 4078, 'gentleman': 4079, 'reson': 4080, 'stake': 4081, 'quinn': 4082, 'perpetu': 4083, 'din': 4084, 'mon': 4085, 'brenda': 4086, 'restrain': 4087, 'contradict': 4088, 'han': 4089, 'stroke': 4090, 'cooki': 4091, 'kurosawa': 4092, 'fido': 4093, 'sabrina': 4094, 'distress': 4095, 'absent': 4096, 'stargat': 4097, 'unsatisfi': 4098, '1997': 4099, 'ross': 4100, 'traumat': 4101, 'wax': 4102, '1987': 4103, 'demis': 4104, 'ustinov': 4105, 'shaki': 4106, 'cloud': 4107, 'warrant': 4108, 'mclaglen': 4109, 'femm': 4110, 'sammi': 4111, 'josh': 4112, 'compromis': 4113, 'greg': 4114, 'meryl': 4115, 'passabl': 4116, 'delic': 4117, 'painter': 4118, 'tacki': 4119, 'soderbergh': 4120, 'baldwin': 4121, 'crawford': 4122, 'spacey': 4123, 'sucker': 4124, 'monoton': 4125, 'pretens': 4126, 'fuller': 4127, 'censor': 4128, 'pole': 4129, 'perceiv': 4130, 'unseen': 4131, 'dana': 4132, 'businessman': 4133, 'abomin': 4134, 'derang': 4135, 'shoddi': 4136, 'geek': 4137, 'darren': 4138, 'uncov': 4139, 'kumar': 4140, 'dee': 4141, 'valid': 4142, 'fenc': 4143, 'primit': 4144, 'exclus': 4145, 'deniro': 4146, 'unravel': 4147, 'norm': 4148, 'expedit': 4149, 'furiou': 4150, 'jewel': 4151, 'sid': 4152, 'reluct': 4153, 'clash': 4154, 'click': 4155, 'seal': 4156, 'deceas': 4157, 'polici': 4158, 'correctli': 4159, 'tech': 4160, 'wholli': 4161, 'austin': 4162, 'nathan': 4163, 'tarantino': 4164, 'anchor': 4165, 'accuraci': 4166, 'judgment': 4167, '1993': 4168, 'fog': 4169, 'verbal': 4170, 'antonioni': 4171, 'seldom': 4172, 'conduct': 4173, 'trait': 4174, 'ritual': 4175, 'unfair': 4176, '1971': 4177, 'alec': 4178, '2008': 4179, 'roller': 4180, 'malon': 4181, 'debt': 4182, 'sunni': 4183, 'fabric': 4184, 'dreck': 4185, 'nicola': 4186, 'hallucin': 4187, 'mode': 4188, 'pocket': 4189, 'murray': 4190, 'fought': 4191, 'tax': 4192, 'sustain': 4193, '3000': 4194, 'crippl': 4195, 'sand': 4196, 'bake': 4197, 'fart': 4198, '1995': 
4199, 'joel': 4200, 'wang': 4201, 'slam': 4202, 'enforc': 4203, 'temper': 4204, 'darn': 4205, 'patienc': 4206, 'wretch': 4207, 'clerk': 4208, 'shanghai': 4209, 'behold': 4210, 'sheet': 4211, 'vanc': 4212, 'logan': 4213, 'tactic': 4214, 'divid': 4215, 'preston': 4216, 'preposter': 4217, 'guitar': 4218, 'pete': 4219, 'fundament': 4220, 'schedul': 4221, 'rita': 4222, 'bias': 4223, 'sweep': 4224, 'grief': 4225, 'helpless': 4226, 'scriptwrit': 4227, 'robber': 4228, 'shell': 4229, 'isabel': 4230, 'stark': 4231, 'critiqu': 4232, 'outlin': 4233, 'squad': 4234, 'conscious': 4235, 'phil': 4236, 'canyon': 4237, 'exhaust': 4238, 'technicolor': 4239, 'runner': 4240, 'stuart': 4241, 'penni': 4242, 'bridget': 4243, 'clau': 4244, 'legaci': 4245, 'soup': 4246, 'despis': 4247, 'sugar': 4248, 'rehash': 4249, 'marc': 4250, 'alley': 4251, 'passag': 4252, 'agenc': 4253, 'propos': 4254, 'consciou': 4255, 'bloom': 4256, 'invad': 4257, 'flair': 4258, 'newman': 4259, 'jacket': 4260, 'culmin': 4261, 'delv': 4262, 'restrict': 4263, 'sniper': 4264, 'gregori': 4265, 'lacklust': 4266, 'boyl': 4267, 'palanc': 4268, 'kansa': 4269, 'cigarett': 4270, 'russia': 4271, 'vomit': 4272, 'unexpectedli': 4273, 'jodi': 4274, 'rear': 4275, 'implic': 4276, 'drove': 4277, 'liberti': 4278, 'alicia': 4279, 'inabl': 4280, 'sentinel': 4281, 'connor': 4282, 'downey': 4283, 'improb': 4284, 'arrow': 4285, 'behaviour': 4286, 'lush': 4287, 'asylum': 4288, 'rehears': 4289, 'karl': 4290, 'wrench': 4291, 'delet': 4292, 'horn': 4293, 'cap': 4294, 'aesthet': 4295, '1936': 4296, 'vet': 4297, 'rod': 4298, 'rampag': 4299, 'tendenc': 4300, 'sharon': 4301, 'pale': 4302, 'chainsaw': 4303, '22': 4304, 'awhil': 4305, 'tripe': 4306, 'bacon': 4307, 'foxx': 4308, 'feat': 4309, 'ladder': 4310, 'mccoy': 4311, 'yeti': 4312, 'kolchak': 4313, '1920': 4314, 'stream': 4315, 'prank': 4316, 'newcom': 4317, 'scoop': 4318, 'el': 4319, '1978': 4320, 'filler': 4321, 'coaster': 4322, 'tomorrow': 4323, 'suspicion': 4324, 'rumor': 4325, 'fright': 4326, 'hackney': 4327, 'tasteless': 4328, 'loneli': 4329, 'conscienc': 4330, 'basing': 4331, 'wildli': 4332, 'shortcom': 4333, 'lurk': 4334, 'aristocrat': 4335, 'thunderbird': 4336, 'underneath': 4337, '1988': 4338, 'spice': 4339, 'hungri': 4340, 'visitor': 4341, 'sung': 4342, 'rhythm': 4343, 'wagner': 4344, 'minu': 4345, '19th': 4346, 'amazon': 4347, 'paramount': 4348, 'suffici': 4349, 'financ': 4350, 'weav': 4351, 'paradis': 4352, 'hulk': 4353, 'globe': 4354, 'elit': 4355, 'iv': 4356, 'naughti': 4357, 'bread': 4358, 'secondari': 4359, 'lectur': 4360, 'brit': 4361, 'dirt': 4362, 'smell': 4363, 'immers': 4364, 'standout': 4365, 'straightforward': 4366, 'heist': 4367, 'en': 4368, 'curli': 4369, 'counterpart': 4370, 'teas': 4371, '75': 4372, 'beverli': 4373, 'cancer': 4374, 'quietli': 4375, 'hopkin': 4376, 'rub': 4377, 'couch': 4378, 'ram': 4379, '1939': 4380, 'recogniz': 4381, 'abrupt': 4382, 'grudg': 4383, 'ingeni': 4384, '1989': 4385, 'impos': 4386, 'literari': 4387, 'springer': 4388, 'minist': 4389, 'worship': 4390, 'inmat': 4391, 'chavez': 4392, 'atroc': 4393, 'entranc': 4394, 'choppi': 4395, 'leigh': 4396, 'paxton': 4397, 'wwe': 4398, 'posey': 4399, 'chamberlain': 4400, 'tierney': 4401, 'penn': 4402, 'esther': 4403, 'sublim': 4404, 'watcher': 4405, 'variat': 4406, '1986': 4407, 'sassi': 4408, 'geni': 4409, 'entitl': 4410, 'missil': 4411, 'attenborough': 4412, 'moreov': 4413, 'convert': 4414, 'injuri': 4415, 'yearn': 4416, 'skeptic': 4417, 'misguid': 4418, 'enthral': 4419, 'policeman': 4420, 'laurenc': 4421, 'duel': 4422, 
'heartfelt': 4423, 'ratso': 4424, 'ace': 4425, 'lindsay': 4426, 'net': 4427, 'morbid': 4428, 'quaid': 4429, 'clan': 4430, 'cattl': 4431, 'transcend': 4432, 'nolan': 4433, 'bernard': 4434, 'nemesi': 4435, 'mytholog': 4436, 'uncut': 4437, 'dont': 4438, 'egg': 4439, 'rosemari': 4440, 'diari': 4441, 'grin': 4442, 'graini': 4443, 'reliabl': 4444, 'spiral': 4445, 'steadi': 4446, 'facil': 4447, 'enabl': 4448, 'cruelti': 4449, 'hk': 4450, 'bye': 4451, 'hopelessli': 4452, 'youngest': 4453, 'setup': 4454, 'bean': 4455, 'moder': 4456, 'buzz': 4457, 'out': 4458, 'puppi': 4459, 'carlito': 4460, 'unexplain': 4461, 'kidman': 4462, 'characteris': 4463, 'tyler': 4464, '1979': 4465, 'poe': 4466, 'vader': 4467, 'brood': 4468, 'obstacl': 4469, 'kitti': 4470, 'artsi': 4471, 'disastr': 4472, 'despic': 4473, 'fuel': 4474, 'weather': 4475, 'christin': 4476, 'decept': 4477, '1969': 4478, 'oblig': 4479, 'athlet': 4480, 'exterior': 4481, 'martha': 4482, 'acquaint': 4483, 'underworld': 4484, 'spontan': 4485, 'kline': 4486, 'gillian': 4487, 'patricia': 4488, 'baffl': 4489, 'bewar': 4490, 'bounc': 4491, 'gina': 4492, 'clueless': 4493, 'effici': 4494, 'hammi': 4495, 'preming': 4496, 'bronson': 4497, 'hain': 4498, 'niec': 4499, 'sweat': 4500, 'heap': 4501, 'narrow': 4502, 'brendan': 4503, 'outlaw': 4504, '73': 4505, 'insipid': 4506, 'sooner': 4507, 'dandi': 4508, 'harmless': 4509, 'scar': 4510, 'goof': 4511, 'loi': 4512, 'preachi': 4513, 'mermaid': 4514, 'dilemma': 4515, 'trigger': 4516, 'sleepwalk': 4517, 'loath': 4518, 'candl': 4519, 'injur': 4520, 'enlist': 4521, 'angst': 4522, 'viewpoint': 4523, 'analyz': 4524, 'hepburn': 4525, 'mayhem': 4526, 'virtu': 4527, 'fontain': 4528, 'housewif': 4529, 'lester': 4530, 'astound': 4531, 'tick': 4532, 'rome': 4533, 'headach': 4534, 'shatter': 4535, 'renaiss': 4536, 'circu': 4537, '19': 4538, 'biker': 4539, 'suprem': 4540, 'uh': 4541, 'taboo': 4542, 'slimi': 4543, 'redund': 4544, 'contempt': 4545, 'hooker': 4546, 'ebert': 4547, 'fluff': 4548, 'filth': 4549, 'bent': 4550, 'macho': 4551, 'immatur': 4552, 'dismal': 4553, 'intric': 4554, 'hokey': 4555, 'spade': 4556, 'cassavet': 4557, 'phenomenon': 4558, 'dish': 4559, 'scorses': 4560, 'stair': 4561, 'hostag': 4562, 'guin': 4563, 'tripl': 4564, 'amor': 4565, 'boston': 4566, 'glorifi': 4567, 'idol': 4568, 'stimul': 4569, 'sox': 4570, 'foolish': 4571, 'steer': 4572, 'overlong': 4573, 'whore': 4574, 'camcord': 4575, 'ariel': 4576, 'salt': 4577, 'oldest': 4578, 'claustrophob': 4579, 'surgeri': 4580, 'gere': 4581, 'zoom': 4582, 'corbett': 4583, 'widescreen': 4584, 'preced': 4585, 'assert': 4586, 'schlock': 4587, 'down': 4588, '1981': 4589, 'spree': 4590, 'dwarf': 4591, 'fascist': 4592, 'proport': 4593, '1976': 4594, 'messi': 4595, 'antagonist': 4596, 'faint': 4597, 'beard': 4598, 'spinal': 4599, 'radiat': 4600, 'obligatori': 4601, 'cow': 4602, 'rhyme': 4603, 'strongest': 4604, 'harold': 4605, 'muscl': 4606, 'keen': 4607, 'perman': 4608, 'nolt': 4609, 'astronaut': 4610, 'conquer': 4611, 'margin': 4612, 'flirt': 4613, 'cush': 4614, 'corman': 4615, 'mount': 4616, 'transplant': 4617, 'remad': 4618, 'mutual': 4619, 'shred': 4620, 'gasp': 4621, 'trivia': 4622, 'joker': 4623, 'alvin': 4624, 'flag': 4625, 'flashi': 4626, 'gabl': 4627, 'shield': 4628, 'frantic': 4629, 'cohen': 4630, 'zane': 4631, 'naschi': 4632, 'archiv': 4633, 'instruct': 4634, '1945': 4635, 'danish': 4636, '28': 4637, 'vaniti': 4638, 'bachelor': 4639, 'ritchi': 4640, 'wield': 4641, 'info': 4642, 'interestingli': 4643, 'flock': 4644, 'fishburn': 4645, 'repris': 4646, 'persuad': 
4647, 'someday': 4648, 'mobil': 4649, 'triangl': 4650, '95': 4651, 'mol': 4652, 'off': 4653, 'claud': 4654, 'brush': 4655, 'raj': 4656, 'discern': 4657, 'boob': 4658, 'departur': 4659, 'bitten': 4660, 'resum': 4661, 'www': 4662, 'divin': 4663, 'hara': 4664, 'sensual': 4665, 'strive': 4666, 'deer': 4667, 'carey': 4668, 'inflict': 4669, 'barn': 4670, 'scandal': 4671, 'neurot': 4672, 'aborigin': 4673, 'clad': 4674, 'europa': 4675, 'neill': 4676, 'pixar': 4677, 'bate': 4678, 'miracul': 4679, 'dame': 4680, 'ish': 4681, 'cb': 4682, 'traffic': 4683, 'artwork': 4684, 'casino': 4685, 'dim': 4686, 'vibrant': 4687, 'pacif': 4688, 'heartwarm': 4689, 'jade': 4690, 'prophet': 4691, 'banter': 4692, 'helm': 4693, 'fragil': 4694, 'wendigo': 4695, 'recit': 4696, 'cliffhang': 4697, 'frontier': 4698, 'cycl': 4699, 'parson': 4700, 'biblic': 4701, 'hapless': 4702, 'submit': 4703, 'harrison': 4704, 'pickford': 4705, 'undermin': 4706, 'earnest': 4707, 'axe': 4708, 'kathryn': 4709, 'dylan': 4710, 'hilar': 4711, 'proclaim': 4712, 'timberlak': 4713, 'carlo': 4714, 'loretta': 4715, 'melissa': 4716, 'cher': 4717, 'rot': 4718, 'colin': 4719, 'hug': 4720, 'senior': 4721, 'luka': 4722, 'anton': 4723, 'mobster': 4724, 'winchest': 4725, 'wardrob': 4726, 'cerebr': 4727, 'trier': 4728, 'electron': 4729, 'vile': 4730, 'razor': 4731, 'legitim': 4732, 'seedi': 4733, 'akin': 4734, 'articl': 4735, 'eli': 4736, 'antholog': 4737, 'illus': 4738, 'milo': 4739, 'nope': 4740, 'choke': 4741, 'static': 4742, 'holocaust': 4743, 'orphan': 4744, 'jordan': 4745, 'redneck': 4746, 'flavor': 4747, 'sicken': 4748, 'token': 4749, 'northern': 4750, 'alexandr': 4751, 'vanessa': 4752, 'isra': 4753, 'estrang': 4754, 'toronto': 4755, 'marlon': 4756, 'http': 4757, 'breakfast': 4758, 'aris': 4759, 'misfortun': 4760, 'bondag': 4761, 'bikini': 4762, 'feast': 4763, 'rooki': 4764, 'mason': 4765, 'foil': 4766, 'pc': 4767, 'blatantli': 4768, 'lucil': 4769, 'jo': 4770, 'mathieu': 4771, 'lui': 4772, 'venom': 4773, 'shepherd': 4774, 'uma': 4775, 'dudley': 4776, 'ceremoni': 4777, 'psych': 4778, 'deem': 4779, 'outdat': 4780, 'gilbert': 4781, 'cartoonish': 4782, 'charlton': 4783, 'comprehend': 4784, 'retriev': 4785, 'glare': 4786, 'disregard': 4787, 'linear': 4788, 'turd': 4789, 'clinic': 4790, 'fifth': 4791, 'boyer': 4792, 'wrestler': 4793, 'magician': 4794, 'frog': 4795, 'audrey': 4796, 'abund': 4797, 'highway': 4798, 'oppon': 4799, 'shorter': 4800, 'knightley': 4801, 'gunga': 4802, 'styliz': 4803, 'leather': 4804, 'huston': 4805, 'affleck': 4806, 'tack': 4807, 'peer': 4808, 'ideolog': 4809, 'swept': 4810, 'smack': 4811, 'nightclub': 4812, 'feminin': 4813, 'howl': 4814, 'cuban': 4815, 'bogu': 4816, 'tara': 4817, 'summar': 4818, 'lighter': 4819, 'corn': 4820, 'snatch': 4821, 'boo': 4822, 'lifeless': 4823, '1991': 4824, 'monument': 4825, 'senat': 4826, 'collector': 4827, 'bastard': 4828, 'compris': 4829, '4th': 4830, 'potter': 4831, '1994': 4832, 'spine': 4833, 'braveheart': 4834, 'evolut': 4835, 'conrad': 4836, 'energet': 4837, 'spawn': 4838, 'btw': 4839, 'deliver': 4840, 'phenomen': 4841, 'greet': 4842, 'lavish': 4843, 'client': 4844, 'newer': 4845, 'sleaz': 4846, 'goldsworthi': 4847, 'plate': 4848, 'whack': 4849, 'cemeteri': 4850, 'durat': 4851, 'chip': 4852, 'toe': 4853, 'mitch': 4854, 'uniformli': 4855, 'breakdown': 4856, 'moe': 4857, 'einstein': 4858, 'salman': 4859, 'spectacl': 4860, '1974': 4861, 'ie': 4862, 'belli': 4863, 'occup': 4864, 'healthi': 4865, 'judd': 4866, 'fluid': 4867, 'nina': 4868, 'outright': 4869, 'embark': 4870, 'ol': 4871, 'inaccuraci': 
4872, 'creek': 4873, 'eleven': 4874, 'kent': 4875, 'lex': 4876, 'luxuri': 4877, 'sorrow': 4878, 'clara': 4879, 'cecil': 4880, 'undead': 4881, 'appl': 4882, 'alleg': 4883, 'replay': 4884, 'undertak': 4885, 'jule': 4886, 'firmli': 4887, 'neatli': 4888, 'gilliam': 4889, 'signal': 4890, 'wtf': 4891, 'mcqueen': 4892, 'jedi': 4893, 'bulk': 4894, 'historian': 4895, 'jare': 4896, 'bori': 4897, 'jam': 4898, 'constitut': 4899, 'liu': 4900, 'pronounc': 4901, 'armstrong': 4902, 'trauma': 4903, 'capot': 4904, '1977': 4905, 'randolph': 4906, 'kazan': 4907, 'evelyn': 4908, 'inclus': 4909, 'conan': 4910, 'poker': 4911, 'kiddi': 4912, 'subtli': 4913, 'knee': 4914, 'sacrif': 4915, 'ash': 4916, 'cape': 4917, 'groan': 4918, 'blur': 4919, 'unsuspect': 4920, 'palm': 4921, 'porter': 4922, 'pioneer': 4923, 'unattract': 4924, 'propheci': 4925, 'comprehens': 4926, 'tokyo': 4927, 'relentless': 4928, 'curtain': 4929, 'meal': 4930, 'lauren': 4931, 'goldblum': 4932, 'rosario': 4933, 'truman': 4934, 'walt': 4935, 'inaccur': 4936, 'id': 4937, 'mum': 4938, 'miami': 4939, 'vain': 4940, 'decapit': 4941, 'roar': 4942, 'antonio': 4943, 'galaxi': 4944, 'bait': 4945, 'comb': 4946, 'basket': 4947, 'vignett': 4948, 'lanc': 4949, 'spray': 4950, 'carmen': 4951, 'paula': 4952, 'abound': 4953, 'congratul': 4954, 'pepper': 4955, '1985': 4956, 'forgiven': 4957, 'aussi': 4958, 'fruit': 4959, 'sidewalk': 4960, 'genet': 4961, 'bsg': 4962, 'miniseri': 4963, 'macabr': 4964, 'omin': 4965, 'incorrect': 4966, 'weaker': 4967, 'scarfac': 4968, 'orchestr': 4969, 'sparkl': 4970, 'bach': 4971, 'playboy': 4972, 'jill': 4973, 'epitom': 4974, 'rapidli': 4975, 'frontal': 4976, 'vastli': 4977, 'ghetto': 4978, 'cypher': 4979, 'modest': 4980, 'detach': 4981, 'weari': 4982, 'motorcycl': 4983, 'turtl': 4984, 'drone': 4985, 'optimist': 4986, 'verg': 4987, '21st': 4988, 'hackman': 4989, 'dubiou': 4990, 'monti': 4991, 'substanti': 4992, 'spill': 4993, 'assort': 4994, 'bravo': 4995, 'reincarn': 4996, 'sophi': 4997, 'evan': 4998, 'ingrid': 4999}
###Markdown
Save `word_dict`Later on, when we construct an endpoint which processes a submitted review, we will need to make use of the `word_dict` we have created. As such, we will save it to a file now for future use.
###Code
import os
import pickle

data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
os.makedirs(data_dir)
with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
pickle.dump(word_dict, f)
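# For reference only: later on (e.g., in the inference code) the dictionary can be
# restored in the same way it was saved. Shown as a comment so this cell's behaviour
# is unchanged:
# with open(os.path.join(data_dir, 'word_dict.pkl'), "rb") as f:
#     word_dict = pickle.load(f)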
###Output
_____no_output_____
###Markdown
Transform the reviewsNow that we have our word dictionary, which allows us to map the words appearing in the reviews to integers, it is time to use it to convert our reviews to their integer sequence representation, padding or truncating each review to a fixed length of `500`.
###Code
def convert_and_pad(word_dict, sentence, pad=500):
NOWORD = 0 # We will use 0 to represent the 'no word' category
INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
working_sentence = [NOWORD] * pad
for word_index, word in enumerate(sentence[:pad]):
if word in word_dict:
working_sentence[word_index] = word_dict[word]
else:
working_sentence[word_index] = INFREQ
return working_sentence, min(len(sentence), pad)
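# Quick illustration of convert_and_pad on a tiny, made-up token list (the tokens below are
# purely examples): known words map to their ids, unknown words map to INFREQ (1), and the
# result is zero-padded to the requested length. No output is produced by this check.
_example_ids, _example_len = convert_and_pad(word_dict, ['great', 'movi', 'some_made_up_token'], pad=5)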
def convert_and_pad_data(word_dict, data, pad=500):
result = []
lengths = []
for sentence in data:
converted, leng = convert_and_pad(word_dict, sentence, pad)
result.append(converted)
lengths.append(leng)
return np.array(result), np.array(lengths)
train_X, train_X_len = convert_and_pad_data(word_dict, train_X)
test_X, test_X_len = convert_and_pad_data(word_dict, test_X)
###Output
_____no_output_____
###Markdown
As a quick check to make sure that things are working as intended, take a look at what one of the reviews in the training set looks like after having been processed. Does this look reasonable? What is the length of a review in the training set?
###Code
# Use this cell to examine one of the processed reviews to make sure everything is working as intended.
print(train_X[7])
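# The pre-padding length of each review is stored separately; for example, train_X_len[7]
# holds the original (un-padded) length of the review printed above.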
###Output
[ 490 13 1068 120 88 267 1914 2025 20 820 310 1 1687 714
1914 53 927 144 3377 257 137 161 1914 753 588 257 344 1093
264 1539 38 783 3113 772 22 257 139 62 483 1289 357 302
3 521 378 173 1373 601 529 105 467 131 1370 1 3 783
369 448 310 1304 566 3638 467 490 13 40 233 74 59 1914
53 3 615 460 1687 40 1891 4860 275 2286 1758 3 564 567
1104 197 198 317 299 860 1880 26 1687 546 2 410 87 86
47 1 2 17 218 12 2 123 624 83 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
###Markdown
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Might this be a problem? Why or why not? **Answer:** Applying the same preprocessing to both the training and testing data is necessary so that the model receives consistently normalized input. `convert_and_pad_data` standardizes the data because each review varies in length, and a neural network requires inputs of the same size and shape; padding or truncating every review to a fixed length of 500 gives us exactly that. Note that any word in the testing set that does not appear in `word_dict` is simply mapped to the infrequent-word token, so the same transformation applies cleanly to both sets. Step 3: Upload the data to S3As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and upload it to S3 later on. Save the processed training dataset locallyIt is important to note the format of the data that we are saving, as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]`, where `review[500]` is a sequence of `500` integers representing the words in the review.
###Code
import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
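# Quick (optional) sanity check of the saved format described above: each row is the label,
# the length, and then the 500 integer word ids, so a row should have 502 columns in total.
_check = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, nrows=1)
assert _check.shape[1] == 502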
###Output
_____no_output_____
###Markdown
Uploading the training dataNext, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.
###Code
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
###Output
_____no_output_____
###Markdown
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate, as we will need it later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory. Step 4: Build and Train the PyTorch ModelIn the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects - Model Artifacts, - Training Code, and - Inference Code, each of which interacts with the others. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon, with the added benefit of being able to include our own custom code.We will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below.
###Code
!pygmentize train/model.py
###Output
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """
    This is the simple RNN model we will be using to perform Sentiment Analysis.
    """

    def __init__(self, embedding_dim, hidden_dim, vocab_size):
        """
        Initialize the model by setting up the various layers.
        """
        super(LSTMClassifier, self).__init__()

        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=0)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        self.dense = nn.Linear(in_features=hidden_dim, out_features=1)
        self.sig = nn.Sigmoid()

        self.word_dict = None

    def forward(self, x):
        """
        Perform a forward pass of our model on some input.
        """
        x = x.t()
        lengths = x[0,:]
        reviews = x[1:,:]
        embeds = self.embedding(reviews)
        lstm_out, _ = self.lstm(embeds)
        out = self.dense(lstm_out)
        out = out[lengths - 1, range(len(lengths))]
        return self.sig(out.squeeze())
###Markdown
The important takeaway from the provided implementation is that there are three parameters we may wish to tweak to improve the performance of our model: the embedding dimension, the hidden dimension, and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that modifying them does not require modifying the script itself. We will see how to do this later on. To start, we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.First we will load a small portion of the training set to use as a sample. It would be very time-consuming to train the model completely in the notebook, as we do not have access to a GPU and the compute instance we are using is not particularly powerful. However, we can work on a small portion of the data to get a feel for how our training script is behaving.
###Code
import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()
# Build the dataset
train_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)
# Build the dataloader
train_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)
###Output
_____no_output_____
###Markdown
(TODO) Writing the training methodNext we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later.
###Code
def train(model, train_loader, epochs, optimizer, loss_fn, device):
for epoch in range(1, epochs + 1):
model.train()
total_loss = 0
for batch in train_loader:
batch_X, batch_y = batch
batch_X = batch_X.to(device)
batch_y = batch_y.to(device)
# TODO: Complete this train method to train the model provided.
optimizer.zero_grad()
out = model.forward(batch_X)
loss = loss_fn(out, batch_y)
loss.backward()
optimizer.step()
total_loss += loss.data.item()
print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))
###Output
_____no_output_____
###Markdown
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose.
###Code
import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device)
###Output
Epoch: 1, BCELoss: 0.6956740140914917
Epoch: 2, BCELoss: 0.6866450667381286
Epoch: 3, BCELoss: 0.6790154933929443
Epoch: 4, BCELoss: 0.6705753087997437
Epoch: 5, BCELoss: 0.6602719783782959
###Markdown
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run. (TODO) Training the modelWhen a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.The way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file.
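As a rough, illustrative sketch (not the exact contents of the provided file), the hyperparameter handling in `train/train.py` looks something like the following; the argument names `epochs` and `hidden_dim` mirror the hyperparameters passed to the estimator below, and the `SM_*` environment variables are the ones the SageMaker PyTorch container is expected to set:

```python
# Illustrative sketch only -- the provided train/train.py may use different names and defaults.
import argparse
import os

parser = argparse.ArgumentParser()

# Hyperparameters passed to the estimator arrive as command-line arguments.
parser.add_argument('--epochs', type=int, default=10)
parser.add_argument('--hidden_dim', type=int, default=100)

# SageMaker exposes the data and model locations through environment variables.
parser.add_argument('--data-dir', type=str, default=os.environ.get('SM_CHANNEL_TRAINING'))
parser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR'))

args = parser.parse_args()
print(args.epochs, args.hidden_dim, args.data_dir, args.model_dir)
```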
###Code
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
source_dir="train",
role=role,
framework_version='0.4.0',
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
hyperparameters={
'epochs': 10,
'hidden_dim': 200,
})
estimator.fit({'training': input_data})
###Output
'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
's3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2.
'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
###Markdown
Step 5: Testing the modelAs mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly. Step 6: Deploy the model for testingNow that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.There is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the Python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard (i.e., `if __name__ == '__main__':`).Since we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.**NOTE:** When deploying a model you are asking SageMaker to launch a compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running.In other words: **If you are no longer using a deployed endpoint, shut it down!****TODO:** Deploy the trained model.
###Code
# TODO: Deploy the trained model
predictor = estimator.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
###Markdown
Step 7 - Use the model for testingOnce deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is.
###Code
test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)
# We split the data into chunks and send each chunk separately, accumulating the results.
def predict(data, rows=512):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = np.array([])
for array in split_array:
predictions = np.append(predictions, predictor.predict(array))
return predictions
predictions = predict(test_X.values)
predictions = [round(num) for num in predictions]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis? **Answer:** The scores of the two models are very similar. This is likely because the dataset we used is quite small (25,000 reviews); the RNN may perform better with more data. Adding a hyperparameter tuning step to the RNN may also improve its performance, and adding a pretrained embedding such as GloVe/Word2Vec might improve it further. (TODO) More testingWe now have a trained, deployed model to which we can send processed reviews and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model.
###Code
test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'
###Output
_____no_output_____
###Markdown
The question we now need to answer is, how do we send this review to our model? Recall that in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews. - Removed any HTML tags and stemmed the input - Encoded the review as a sequence of integers using `word_dict` In order to process the review we will need to repeat these two steps.**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`.
###Code
# TODO: Convert test_review into a form usable by the model and save the results in test_data
test_review_words = review_to_words(test_review) # splits reviews to words
review_X, review_len = convert_and_pad(word_dict, test_review_words) # pad review
data_pack = np.hstack((review_len, review_X))
data_pack = data_pack.reshape(1, -1)
test_data = torch.from_numpy(data_pack)
test_data = test_data.to(device)
###Output
_____no_output_____
###Markdown
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.
###Code
predictor.predict(test_data)
###Output
_____no_output_____
###Markdown
Since the return value of our model is close to `1`, we can be certain that the review we submitted is positive. Delete the endpointOf course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it.
###Code
estimator.delete_endpoint()
###Output
estimator.delete_endpoint() will be deprecated in SageMaker Python SDK v2. Please use the delete_endpoint() function on your predictor instead.
###Markdown
Step 6 (again) - Deploy the model for the web appNow that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.As we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.We will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.When deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use. - `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model. - `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code. - `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint. - `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.For the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize. (TODO) Writing inference codeBefore writing our custom inference code, we will begin by taking a look at the code which has been provided.
###Code
!pygmentize serve/predict.py
###Output
import argparse
import json
import os
import pickle
import sys
import sagemaker_containers
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data

from model import LSTMClassifier

from utils import review_to_words, convert_and_pad

def model_fn(model_dir):
    """Load the PyTorch model from the `model_dir` directory."""
    print("Loading model.")

    # First, load the parameters used to create the model.
    model_info = {}
    model_info_path = os.path.join(model_dir, 'model_info.pth')
    with open(model_info_path, 'rb') as f:
        model_info = torch.load(f)

    print("model_info: {}".format(model_info))

    # Determine the device and construct the model.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = LSTMClassifier(model_info['embedding_dim'], model_info['hidden_dim'], model_info['vocab_size'])

    # Load the store model parameters.
    model_path = os.path.join(model_dir, 'model.pth')
    with open(model_path, 'rb') as f:
        model.load_state_dict(torch.load(f))

    # Load the saved word_dict.
    word_dict_path = os.path.join(model_dir, 'word_dict.pkl')
    with open(word_dict_path, 'rb') as f:
        model.word_dict = pickle.load(f)

    model.to(device).eval()

    print("Done loading model.")
    return model

def input_fn(serialized_input_data, content_type):
    print('Deserializing the input data.')
    if content_type == 'text/plain':
        data = serialized_input_data.decode('utf-8')
        return data
    raise Exception('Requested unsupported ContentType in content_type: ' + content_type)

def output_fn(prediction_output, accept):
    print('Serializing the generated output.')
    return str(prediction_output)

def predict_fn(input_data, model):
    print('Inferring sentiment of input data.')

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    if model.word_dict is None:
        raise Exception('Model has not been loaded properly, no word_dict.')

    # TODO: Process input_data so that it is ready to be sent to our model.
    #       You should produce two variables:
    #         data_X   - A sequence of length 500 which represents the converted review
    #         data_len - The length of the review
    data_X, data_len = convert_and_pad(model.word_dict ,review_to_words(input_data))

    # Using data_X and data_len we construct an appropriate input tensor. Remember
    # that our model expects input data of the form 'len, review[500]'.
    data_pack = np.hstack((data_len, data_X))
    data_pack = data_pack.reshape(1, -1)

    data = torch.from_numpy(data_pack)
    data = data.to(device)

    # Make sure to put the model into evaluation mode
    model.eval()

    # TODO: Compute the result of applying the model to the input data. The variable `result` should
    #       be a numpy array which contains a single integer which is either 1 or 0
    result = np.round(model(data).detach().cpu().numpy()).astype(np.int)

    return result
###Markdown
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code, and the `input_fn` and `output_fn` methods are very simple; your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.**TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file. Deploying the modelNow that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container.**NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string so we need to construct a simple wrapper around the `RealTimePredictor` class to accommodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to send image data.
###Code
from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
def __init__(self, endpoint_name, sagemaker_session):
super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')
model = PyTorchModel(model_data=estimator.model_data,
role = role,
framework_version='0.4.0',
entry_point='predict.py',
source_dir='serve',
predictor_cls=StringPredictor)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
###Output
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
###Markdown
Testing the modelNow that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews and send them to the endpoint, then collect the results. The reason for only sending some of the data is that the amount of time it takes for our model to process the input and then perform inference is quite long and so testing the entire data set would be prohibitive.
###Code
import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
results = []
ground = []
# We make sure to test both positive and negative reviews
for sentiment in ['pos', 'neg']:
path = os.path.join(data_dir, 'test', sentiment, '*.txt')
files = glob.glob(path)
files_read = 0
print('Starting ', sentiment, ' files')
# Iterate through the files and send them to the predictor
for f in files:
with open(f) as review:
# First, we store the ground truth (was the review positive or negative)
if sentiment == 'pos':
ground.append(1)
else:
ground.append(0)
# Read in the review and convert to 'utf-8' for transmission via HTTP
review_input = review.read().encode('utf-8')
# Send the review to the predictor and store the results
results.append(float(predictor.predict(review_input)))
# Sending reviews to our endpoint one at a time takes a while so we
# only send a small number of reviews
files_read += 1
if files_read == stop:
break
return ground, results
ground, results = test_reviews()
from sklearn.metrics import accuracy_score
accuracy_score(ground, results)
###Output
_____no_output_____
###Markdown
As an additional test, we can try sending the `test_review` that we looked at earlier.
###Code
predictor.predict(test_review)
###Output
_____no_output_____
###Markdown
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back. Step 7 (again): Use the model for the web app> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.So far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are set up currently makes that not possible since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which included access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.The diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.In the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and receive data from a SageMaker endpoint.Lastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a URL that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function. Setting up a Lambda functionThe first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result. Part A: Create an IAM Role for the Lambda functionSince we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.Using the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.In the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.Lastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**. Part B: Create a Lambda functionNow it is time to actually create the Lambda function.Using the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. 
Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**.On the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below.

```python
# We need to use the low-level library to interact with SageMaker since the SageMaker API
# is not available natively through Lambda.
import boto3

def lambda_handler(event, context):

    # The SageMaker runtime is what allows us to invoke the endpoint that we've created.
    runtime = boto3.Session().client('sagemaker-runtime')

    # Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given
    response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**',  # The name of the endpoint we created
                                       ContentType = 'text/plain',               # The data format that is expected
                                       Body = event['body'])                     # The actual review

    # The response is an HTTP response whose body contains the result of our inference
    result = response['Body'].read().decode('utf-8')

    return {
        'statusCode' : 200,
        'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },
        'body' : result
    }
```

Once you have copied and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.
###Code
predictor.endpoint
###Output
_____no_output_____
###Markdown
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function. Setting up API GatewayNow that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.Using AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.On the next page, make sure that **New API** is selected and give the new api a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.Now we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.Select the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.For the integration point, make sure that **Lambda Function** is selected and click on the **Use Lambda Proxy integration**. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway.Type the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.The last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.You have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**. Step 4: Deploying our web appNow that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public api you created earlier.In the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\*\*REPLACE WITH PUBLIC API URL\*\***. Replace this string with the url that you wrote down in the last step and then save the file.Now, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model.If you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.**TODO:** Make sure that you include the edited `index.html` file in your project submission. 
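If you would like to sanity-check the public API outside of the web page, a small Python snippet along the following lines can POST a review directly to the Invoke URL (the `api_url` below is a placeholder, not a real endpoint):

```python
# Hypothetical smoke test for the API Gateway endpoint; replace api_url with your own Invoke URL.
import requests

api_url = "https://<your-api-id>.execute-api.<region>.amazonaws.com/prod"  # placeholder
review = "The simplest pleasures in life are the best, and this film is one of them."

response = requests.post(api_url, data=review.encode("utf-8"))
print(response.status_code, response.text)  # expect a 200 status and a sentiment value near 0 or 1
```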
Now that your web app is working, try playing around with it and see how well it works.**Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review? **Answer:** I entered the review "The movie was terrible and the characters weren't that good." and the predicted sentiment was Negative. For "I love this movie." the predicted sentiment was Positive. Delete the endpointRemember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running, so if you forget and leave it on you could end up with an unexpectedly large bill.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____ |
vaccine-doses-administered/scrape-output.ipynb | ###Markdown
California COVID vaccinations scrape By [Amy O'Kruk](https://twitter.com/amyokruk) Downloads data on vaccine doses administered by county and statewide from a Tableau-powered dashboard from the California Department of Public Health.
###Code
import pandas as pd
import requests
from bs4 import BeautifulSoup
import json
import re
import time
from time import gmtime, strftime
import os
import pytz
from datetime import datetime
###Output
_____no_output_____
###Markdown
Scrape the dashboard page
###Code
url = "https://public.tableau.com/interactive/views/COVID-19VaccineDashboardPublic/Vaccine?:embed=y&:showVizHome=n&:apiID=host0"
r = requests.get(url)
soup = BeautifulSoup(r.text, "html.parser")
tableauData = json.loads(soup.find("textarea",{"id": "tsConfigContainer"}).text)
###Output
_____no_output_____
###Markdown
Get the link to the Tableau data
###Code
dataUrl = f'https://public.tableau.com{tableauData["vizql_root"]}/bootstrapSession/sessions/{tableauData["sessionid"]}'
r = requests.post(dataUrl, data= {
"sheet_id": tableauData["sheetId"],
})
dataReg = re.search('\d+;({.*})\d+;({.*})', r.text, re.MULTILINE)
data1 = json.loads(dataReg.group(2))
dataJson = data1["secondaryInfo"]["presModelMap"]["dataDictionary"]["presModelHolder"]["genDataDictionaryPresModel"]["dataSegments"]["0"]["dataColumns"]
###Output
_____no_output_____
###Markdown
Isolate what you want
###Code
counties = dataJson[2]['dataValues'][:58]
doses = dataJson[0]['dataValues'][3:61]
###Output
_____no_output_____
###Markdown
Data formatting
###Code
zipped = dict(zip(counties, doses))
df = pd.Series(zipped).reset_index()
df.columns = ['location','doses']
###Output
_____no_output_____
###Markdown
Grab the overall California total
###Code
add = {'location':'California','doses':dataJson[0]['dataValues'][2]}
df = df.append(add, ignore_index=True)
df = df.sort_values(by='location')
df = df[df.location == 'California'].append(df[df.location != 'California']).reset_index(drop=True)
tz = pytz.timezone("America/New_York")
today = datetime.now(tz).date()
data_dir = os.path.join(os.path.abspath(""), "data")
df.to_csv(os.path.join(data_dir, f"{today}.csv"), index=False)
###Output
_____no_output_____ |
Homework_2/6864_hw2_fa21.ipynb | ###Markdown
###Code
%%bash
!(stat -t /usr/local/lib/*/dist-packages/google/colab > /dev/null 2>&1) && exit
rm -rf MIT_6864
git clone \
--depth 1 \
--filter=blob:none \
--no-checkout \
https://github.com/RichardMuri/MIT_6864
cd MIT_6864
git checkout main -- Homework_2/reviews.csv Homework_2/lab_util.py ML_utilities.py
import sys
sys.path.append("/content/MIT_6864/Homework_2")
import csv
import itertools as it
import numpy as np
import sklearn.decomposition
np.random.seed(0)
from tqdm import tqdm
import lab_util
sys.path.append("/content/MIT_6864/")
import ML_utilities
from ML_utilities import assert_size
from pdb import set_trace as st
###Output
_____no_output_____
###Markdown
IntroductionIn this notebook, you will find code scaffolding for the implementation portion of Homework 2. There are certain parts of the scaffolding marked with ` Your code here!` comments where you can fill in code to perform the specified tasks. After implementing the methods in this notebook, you will need to design and perform experiments to evaluate each method and respond to the questions in the Homework 2 handout (available on Canvas). You should be able to complete this assignment without changing any of the scaffolding code, just writing code to fill in the scaffolding and run experiments. DatasetWe're going to be working with a dataset of product reviews. The following cell loads the dataset and splits it into training, validation, and test sets.
###Code
data = []
n_positive = 0
n_disp = 0
with open("/content/MIT_6864/Homework_2/reviews.csv") as reader:
csvreader = csv.reader(reader)
next(csvreader)
for id, review, label in csvreader:
label = int(label)
# hacky class balancing
if label == 1:
if n_positive == 2000:
continue
n_positive += 1
if len(data) == 4000:
break
data.append((review, label))
if n_disp > 5:
continue
n_disp += 1
print("review:", review)
print("rating:", label, "(good)" if label == 1 else "(bad)")
print()
print(f"Read {len(data)} total reviews.")
np.random.shuffle(data)
reviews, labels = zip(*data)
train_reviews = reviews[:3000]
train_labels = labels[:3000]
val_reviews = reviews[3000:3500]
val_labels = labels[3000:3500]
test_reviews = reviews[3500:]
test_labels = labels[3500:]
###Output
review: I have bought several of the Vitality canned dog food products and have found them all to be of good quality. The product looks more like a stew than a processed meat and it smells better. My Labrador is finicky and she appreciates this product better than most.
rating: 1 (good)
review: Product arrived labeled as Jumbo Salted Peanuts...the peanuts were actually small sized unsalted. Not sure if this was an error or if the vendor intended to represent the product as "Jumbo".
rating: 0 (bad)
review: This is a confection that has been around a few centuries. It is a light, pillowy citrus gelatin with nuts - in this case Filberts. And it is cut into tiny squares and then liberally coated with powdered sugar. And it is a tiny mouthful of heaven. Not too chewy, and very flavorful. I highly recommend this yummy treat. If you are familiar with the story of C.S. Lewis' "The Lion, The Witch, and The Wardrobe" - this is the treat that seduces Edmund into selling out his Brother and Sisters to the Witch.
rating: 1 (good)
review: If you are looking for the secret ingredient in Robitussin I believe I have found it. I got this in addition to the Root Beer Extract I ordered (which was good) and made some cherry soda. The flavor is very medicinal.
rating: 0 (bad)
review: Great taffy at a great price. There was a wide assortment of yummy taffy. Delivery was very quick. If your a taffy lover, this is a deal.
rating: 1 (good)
review: I got a wild hair for taffy and ordered this five pound bag. The taffy was all very enjoyable with many flavors: watermelon, root beer, melon, peppermint, grape, etc. My only complaint is there was a bit too much red/black licorice-flavored pieces (just not my particular favorites). Between me, my kids, and my husband, this lasted only two weeks! I would recommend this brand of taffy -- it was a delightful treat.
rating: 1 (good)
Read 4000 total reviews.
###Markdown
Part 1: word representations via matrix factorizationFirst, we'll construct the term-document matrix (look at `/content/hw2/lab_util.py` in the file browser on the left if you want to see how this works).
###Code
vectorizer = lab_util.CountVectorizer()
vectorizer.fit(train_reviews)
td_matrix = vectorizer.transform(train_reviews).T
print(f"TD matrix is {td_matrix.shape[0]} x {td_matrix.shape[1]}")
###Output
TD matrix is 2006 x 3000
###Markdown
First, implement the function `learn_reps_lsa` that computes word representations via latent semantic analysis. The `sklearn.decomposition` or `np.linalg` packages may be useful.
###Code
import sklearn.decomposition
def learn_reps_lsa(matrix, rep_size):
# `matrix` is a `|V| x n` matrix (usually a TD matrix),
# where `|V|` is the number of words in the vocabulary and `n`
# is the number of reviews in the (training) corpus.
# This function should return a `|V| x rep_size` matrix with each
# row corresponding to a word representation.
# Your code here!
vsize = len(matrix)
# In this case, using just U
U, sigma, V = np.linalg.svd(matrix, full_matrices=True)
result = U[:, :rep_size]
assert_size(result, [vsize, rep_size])
return result
###Output
_____no_output_____
###Markdown
Sanity check 1The following cell contains a simple sanity check for your `learn_reps_lsa` implementation: it should print `True` if your `learn_reps_lsa` function is implemented equivalently to one of our solutions. There are at least two reasonable ways to formulate these LSA word representations (whether you directly use the left singular vectors of `matrix` or scale them by the singular values), these correspond to the two possible representations in the sanity check below.
###Code
DEBUG_sc1_matrix = np.array([[1,0,0,2,1,3,5],
[2,0,0,0,0,4,0],
[0,3,4,1,8,6,6],
[1,4,5,0,0,0,0]])
DEBUG_reps = learn_reps_lsa(DEBUG_sc1_matrix, 3)
DEBUG_gt1 = np.array([[ -4.92017554, -2.85465774, 1.18575453],
[ -2.14977584, -1.19987977, 3.37221899],
[-12.62664695, 0.10890093, -1.32131745],
[ -2.69216011, 5.66453534, 1.33728063]])
DEBUG_gt2 = np.array([[-0.35188159, -0.44213061, 0.29358929],
[-0.15374788, -0.18583789, 0.83495136],
[-0.90303377, 0.01686662, -0.32715426],
[-0.19253817, 0.87732566, 0.3311067 ]])
print(np.allclose(np.abs(DEBUG_reps), np.abs(DEBUG_gt1)) or np.allclose(np.abs(DEBUG_reps), np.abs(DEBUG_gt2)))
###Output
True
###Markdown
Let's look at some representations:
###Code
reps = learn_reps_lsa(td_matrix, 500)
words = ["good", "bad", "cookie", "jelly", "dog", "the", "3"]
show_tokens = [vectorizer.tokenizer.word_to_token[word] for word in words]
lab_util.show_similar_words(vectorizer.tokenizer, reps, show_tokens)
###Output
good 47
gerber 1.873
luck 1.885
crazy 1.890
flaxseed 1.906
suspect 1.907
bad 201
disgusting 1.625
horrible 1.776
shortbread 1.778
gone 1.778
dont 1.802
cookie 504
nana's 0.964
bars 1.363
odd 1.402
impossible 1.459
cookies 1.484
jelly 351
twist 1.099
cardboard 1.197
peanuts 1.311
advertised 1.331
plastic 1.510
dog 925
happier 1.670
earlier 1.681
eats 1.702
stays 1.722
standard 1.727
the 36
suspect 1.953
flowers 1.961
leaked 1.966
m 1.966
burn 1.967
3 289
omega 1.733
vendor 1.739
supermarket 1.747
nutty 1.755
carries 1.797
###Markdown
We've been operating on the raw count matrix, but in class we discussed several reweighting schemes aimed at making LSA representations more informative. Here, implement the TF-IDF transform and see how it affects learned representations. While it is okay (and in fact encouraged) to use vectorized numpy operations, you should refrain from using pre-implemented library functions for computing TF-IDF.
###Code
def transform_tfidf(matrix):
# `matrix` is a `|V| x |D|` TD matrix of raw counts, where `|V|` is the
# vocabulary size and `|D|` is the number of documents in the corpus. This
# function should return a version of `matrix` with the TF-IDF transform
# applied. Note: this function should be nondestructive: it should not
# modify the input; instead, it should return a new object.
# Your code here!
nwords, ndocs = matrix.shape
vidf = np.vectorize(idf_helper)
idfs = np.zeros(nwords)
for i, _ in enumerate(idfs):
idfs[i] = idf_helper(matrix[i,:], ndocs)
tfidf = np.multiply(matrix, idfs[:, np.newaxis])
assert_size(tfidf, matrix.shape)
return tfidf
def idf_helper(row, ndocs):
df = np.count_nonzero(row)
idf = np.log(ndocs/df)
return idf
###Output
_____no_output_____
###Markdown
Sanity check 2The following cell should print `True` if your `transform_tfidf` function is implemented properly. (*Hint: in our implementation, we use the natural logarithm (base $e$) when computing inverse document frequency.*)
###Code
DEBUG_sc2_matrix = np.array([[3,1,0,3,0],
[0,2,0,0,1],
[7,8,2,0,1],
[1,9,8,1,0]])
DEBUG_gt = np.array([[1.53247687, 0.51082562, 0. , 1.53247687, 0. ],
[0. , 1.83258146, 0. , 0. , 0.91629073],
[1.56200486, 1.78514841, 0.4462871 , 0. , 0.22314355],
[0.22314355, 2.00829196, 1.78514841, 0.22314355, 0. ]])
print(np.allclose(transform_tfidf(DEBUG_sc2_matrix), DEBUG_gt))
###Output
True
###Markdown
How does TF-IDF normalization change the learned similarity function?
###Code
reps = 100
td_matrix_tfidf = transform_tfidf(td_matrix)
reps_tfidf = learn_reps_lsa(td_matrix_tfidf, reps)
lab_util.show_similar_words(vectorizer.tokenizer, reps_tfidf, show_tokens)
###Output
good 47
everyone 1.078
lunches 1.089
as 1.145
pretty 1.182
but 1.199
bad 201
taste 1.038
strange 1.084
like 1.152
myself 1.169
nasty 1.177
cookie 504
cookies 0.346
nana's 0.517
oreos 0.698
bars 0.796
craving 1.026
jelly 351
creamer 0.891
gifts 1.008
twist 1.044
packages 1.150
advertised 1.179
dog 925
foods 0.996
switched 1.044
pet 1.096
loves 1.147
appeal 1.150
the 36
of 0.906
<unk> 0.976
. 1.053
and 1.142
to 1.194
3 289
1 1.095
2 1.108
4 1.127
vendor 1.154
cool 1.242
###Markdown
Now that we have some representations, let's see if we can do something useful with them. Below, implement a feature function that represents a document as the sum of its learned word embeddings. The remaining code trains a logistic regression model on a set of *labeled* reviews; we're interested in seeing how much representations learned from *unlabeled* reviews improve classification. (Note: the staff solutions for each of the three featurizers achieve accuracies of between .78 and .83 with the full training corpus (3000 examples).)
###Code
import sklearn.linear_model
import sklearn.model_selection
def word_featurizer(xs):
# normalize
return xs / np.sqrt((xs ** 2).sum(axis=1, keepdims=True))
def lsa_featurizer(xs):
# This function takes in a `|V| x |D|` TD matrix in which each row contains
# the word counts for the given review.
# It should return a matrix where each row contains the learned feature
# representation of each review (e.g. the sum of LSA word representations).
# (Hint: use TF-IDF LSA features, which should be a global variable after
# running the previous cell; no need to pass it in as an argument.)
lsa = reps_tfidf
feats = xs @ lsa
# normalize
return feats / np.sqrt((feats ** 2).sum(axis=1, keepdims=True))
# We've implemented the remainder of the training and evaluation pipeline,
# so you likely won't need to modify the following four functions.
def combo_featurizer(xs):
return np.concatenate((word_featurizer(xs), lsa_featurizer(xs)), axis=1)
def train_model(featurizer, xs, ys):
xs_featurized = featurizer(xs)
model = sklearn.linear_model.LogisticRegression()
model.fit(xs_featurized, ys)
return model
def eval_model(model, featurizer, xs, ys):
xs_featurized = featurizer(xs)
pred_ys = model.predict(xs_featurized)
return np.mean(pred_ys == ys)
def training_experiment(name, featurizer, n_train):
print(f"{name} features, {n_train} examples")
train_xs = vectorizer.transform(train_reviews[:n_train])
train_ys = train_labels[:n_train]
test_xs = vectorizer.transform(test_reviews)
test_ys = test_labels
model = train_model(featurizer, train_xs, train_ys)
acc = eval_model(model, featurizer, test_xs, test_ys)
print(acc, '\n')
return acc
# The following four lines will run a training experiment with all 3k examples
# in training set for each feature type. `training_experiment` may be useful to
# you when performing experiments to answer questions in the handout.
n_train = 3000
training_experiment("word", word_featurizer, n_train)
training_experiment("lsa", lsa_featurizer, n_train)
training_experiment("combo", combo_featurizer, n_train)
print()
###Output
word features, 3000 examples
0.784
lsa features, 3000 examples
0.798
combo features, 3000 examples
0.802
###Markdown
**Part 1: Lab writeup**Part 1 of your lab report should discuss any implementation details that were important to filling out the code above, as well as your answers to the questions in Part 1 of the Homework 2 handout. Below, you can set up and perform experiments that answer these questions (include figures, plots, and tables in your write-up as you see fit). Experiments for Part 1
###Code
# # Your code here!
# def run_exp(n_train):
# acc_w = training_experiment("word", word_featurizer, n_train)
# acc_l = training_experiment("lsa", lsa_featurizer, n_train)
# acc_c = training_experiment("combo", combo_featurizer, n_train)
# return acc_w, acc_l, acc_c
# td_matrix_tfidf = transform_tfidf(td_matrix)
# reps = range(100, 2006, 200)
# n_trains = range(500, 3001, 500)
# nexps = len(reps) * len(n_trains)
# acc_w = np.zeros([nexps])
# acc_l = np.zeros([nexps])
# acc_c = np.zeros([nexps])
# rvals = np.zeros([nexps])
# tvals = np.zeros([nexps])
# count = 0
# for i, rep in enumerate(reps):
# reps_tfidf = learn_reps_lsa(td_matrix_tfidf, rep)
# for j, n_train in enumerate(n_trains):
# acc_w[count], acc_l[count], acc_c[count] = run_exp(n_train)
# rvals[count] = rep
# tvals[count] = n_train
# count = count + 1
# import matplotlib.pyplot as plt
# fig = plt.figure()
# ax = fig.add_subplot(projection='3d')
# ax.scatter(rvals, tvals, acc_l)
# ax.set_xlabel('Representation size')
# ax.set_ylabel('Training size')
# ax.set_zlabel('Accuracy')
# ax.set_title('LSA Featurizer Performance')
# fig2 = plt.figure()
# ax = fig2.add_subplot(projection='3d')
# ax.scatter(rvals, tvals, acc_w)
# ax.set_xlabel('Representation size')
# ax.set_ylabel('Training size')
# ax.set_zlabel('Accuracy')
# ax.set_title('Word Featurizer Performance')
# fig3 = plt.figure()
# ax = fig3.add_subplot(projection='3d')
# ax.scatter(rvals, tvals, acc_c)
# ax.set_xlabel('Representation size')
# ax.set_ylabel('Training size')
# ax.set_zlabel('Accuracy')
# ax.set_title('Combo Featurizer Performance')
# lmax = np.max(acc_l)
# wmax = np.max(acc_w)
# cmax = np.max(acc_c)
# print(f'LSA max: {lmax} Word max: {wmax} Combo max: {cmax}')
###Output
LSA max: 0.812 Word max: 0.784 Combo max: 0.818
###Markdown
Part 2: word representations via language modelingIn this section, we'll train a word embedding model with a word2vec-style objective rather than a matrix factorization objective. This requires a little more work; we've provided scaffolding for a PyTorch model implementation below.If you don't have much PyTorch experience, there are some tutorials [here](https://pytorch.org/tutorials/) which may be useful. You're also welcome to implement these experiments in any other framework of your choosing (note that we won't be able to provide debugging support if you use a different framework).
###Code
def learn_reps_word2vec(corpus, window_size, rep_size, n_epochs, n_batch):
#This method takes in a corpus of training sentences. It returns a matrix of
# word embeddings with the same structure as used in the previous section of
# the assignment. (You can extract this matrix from the parameters of the
# Word2VecModel.)
tokenizer = lab_util.Tokenizer()
tokenizer.fit(corpus)
tokenized_corpus = tokenizer.tokenize(corpus)
vsize = tokenizer.vocab_size
print(f"Tokenizer size is {vsize}")
ngrams = lab_util.get_ngrams(tokenized_corpus, window_size, pad_idx=vsize)
print(f"Ngrams size is {len(ngrams)}")
device = torch.device('cuda') # run on colab gpu
model = Word2VecModel(vsize, rep_size).to(device)
opt = optim.Adam(model.parameters(), lr=0.001)
loader = torch_data.DataLoader(ngrams, batch_size=n_batch, shuffle=True)
# What loss function should we use for Word2Vec?
loss_fn = nn.CrossEntropyLoss(ignore_index=vsize) # Your code here!
losses = [] # Potentially useful for debugging (loss should go down!)
for epoch in tqdm(range(n_epochs)):
epoch_loss = 0
for context, label in loader:
# As described above, `context` is a batch of context word ids, and
# `label` is a batch of predicted word labels.
# Here, perform a forward pass to compute predictions for the model.
# Your code here!
preds = model(context.to(device))
# Now finish the backward pass and gradient update.
# Remember, you need to compute the loss, zero the gradients
# of the model parameters, perform the backward pass, and
# update the model parameters.
# Your code here!
loss = loss_fn(preds, label.to(device))
loss.backward()
opt.step()
model.zero_grad()
epoch_loss += loss.item()
losses.append(epoch_loss)
print("Epoch {} loss: {}".format(epoch, epoch_loss))
# Hint: you want to return a `vocab_size x embedding_size` numpy array
embedding_matrix = model.embed.weight[:-1, :] # Your code here!
embedding_matrix = embedding_matrix.cpu().detach().numpy()
expected_size = [tokenizer.vocab_size, rep_size]
assert_size(embedding_matrix, expected_size)
return embedding_matrix
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data as torch_data
class Word2VecModel(nn.Module):
# A torch module implementing a word2vec predictor. The `forward` function
# should take a batch of context word ids as input and predict the word
# in the middle of the context as output, as in the CBOW model from lecture.
# Hint: look at how padding is handled in lab_util.get_ngrams when
# initializing `ctx`: vocab_size is used as the padding token for contexts
# near the beginning and end of sequences. If you use an embedding module
# in your Word2Vec implementation, make sure to account for this extra
# padding token, and account for it with the `padding_idx` kwarg.
def __init__(self, vocab_size, embed_dim, padding_idx=2006):
super().__init__()
self.device = torch.device('cuda')
self.vsize = vocab_size
self.embed = nn.Embedding(vocab_size+1, embed_dim, padding_idx=padding_idx, device=self.device)
# Your code here!
self.linear = nn.Linear(embed_dim, vocab_size, device=self.device)
print(f"Initializing word2vec with vocab size {vocab_size} and embed_dim {embed_dim}.")
def forward(self, context):
# Context is an `n_batch x n_context` matrix of integer word ids
# this function should return an `n_batch x vocab_size` matrix with
# element i, j being the (possibly log) probability of the middle word
# in context i being word j.
# Your code here!
bsize, _ = context.size()
embedding = self.embed(context)
embedding = torch.sum(embedding, 1).to(self.device)
hidden = self.linear(embedding)
output = F.log_softmax(hidden, dim=-1).to(self.device)
assert_size(output, [bsize, self.vsize])
return output
# Use the function you just wrote to learn Word2Vec embeddings:
windows = 2# default 2
rep_size = 500 # default 500
epochs = 5 # default 10
batch_size = 100 # default 100
reps_word2vec = learn_reps_word2vec(train_reviews, windows, rep_size, epochs, batch_size)
###Output
Tokenizer size is 2006
Ngrams size is 272852
Initializing word2vec with vocab size 2006 and embed_dim 500.
###Markdown
After training the embeddings, we can try to visualize the embedding space to see if it makes sense. First, we can take any word in the space and check its closest neighbors.
###Code
lab_util.show_similar_words(vectorizer.tokenizer, reps_word2vec, show_tokens)
###Output
good 47
apart 1.676
sound 1.721
liked 1.744
recent 1.748
staple 1.751
bad 201
dented 1.695
died 1.697
betty 1.743
mixed 1.746
strange 1.754
cookie 504
seasoning 1.722
grind 1.728
made 1.729
above 1.732
split 1.746
jelly 351
ways 1.679
tongue 1.694
bite 1.705
bulk 1.713
straight 1.721
dog 925
junk 1.611
intake 1.671
photo 1.691
replace 1.703
generally 1.703
the 36
a 1.570
handy 1.689
holes 1.695
perfectly 1.731
cafe 1.756
3 289
17 1.700
cold 1.713
parents 1.726
thai 1.744
commercial 1.755
###Markdown
We can also cluster the embedding space. Clustering in 4 or more dimensions is hard to visualize, and even clustering in 2 or 3 can be difficult because there are so many words in the vocabulary. One thing we can try to do is assign cluster labels and qualitatively look for an underlying pattern in the clusters.
###Code
from sklearn.cluster import KMeans
indices = KMeans(n_clusters=10).fit_predict(reps_word2vec)
zipped = list(zip(range(vectorizer.tokenizer.vocab_size), indices))
np.random.shuffle(zipped)
zipped = zipped[:100]
zipped = sorted(zipped, key=lambda x: x[1], reverse=True)
for token, cluster_idx in zipped:
word = vectorizer.tokenizer.token_to_word[token]
print(f"{word}: {cluster_idx}")
###Output
moved: 9
fall: 9
caramels: 9
than: 9
brewer: 9
beer: 9
subtle: 9
suggest: 9
someone: 9
go: 9
him: 9
update: 9
needed: 9
rate: 9
decide: 9
next: 9
buying: 8
pieces: 8
large: 8
amazing: 8
below: 8
spread: 8
change: 8
excellent: 8
classic: 8
average: 8
given: 8
bad: 8
cookies: 8
bought: 8
plus: 8
never: 8
solid: 8
will: 8
beef: 8
zero: 8
fiber: 8
description: 8
still: 8
truly: 8
starbucks: 8
lasts: 8
that's: 8
im: 7
muffin: 7
hint: 7
months: 7
lays: 7
crackers: 7
artificial: 7
birthday: 7
calcium: 7
shipment: 7
fruit: 7
doesn't: 7
granted: 7
we: 7
mill: 7
holes: 7
seen: 6
cubes: 6
eaten: 6
lunches: 6
filled: 6
kind: 6
nearly: 6
living: 6
40: 6
learned: 6
packing: 6
double: 6
general: 5
worked: 5
help: 5
potassium: 5
coffee: 5
plum: 5
un: 5
treats: 5
mean: 5
colors: 4
teeth: 3
prime: 3
without: 3
reviews: 3
ok: 3
caffeine: 3
has: 3
warning: 3
its: 3
times: 3
unfortunately: 3
target: 3
puppy: 3
something: 3
morning: 3
pouch: 3
rica: 3
disappointed: 1
expiration: 0
###Markdown
Finally, we can use the trained word embeddings to construct vector representations of full reviews. One common approach is to simply average all the word embeddings in the review to create an overall embedding. Implement the transform function in Word2VecFeaturizer to do this.
###Code
def w2v_featurizer(xs):
# This function takes in a matrix in which each row contains the word counts
# for the given review. It should return a matrix in which each row contains
# the average Word2Vec embedding of each review (hint: this will be very
# similar to `lsa_featurizer` from above, just using Word2Vec embeddings
# instead of LSA).
feats = xs @ reps_word2vec# Your code here!
# normalize
return feats / np.sqrt((feats ** 2).sum(axis=1, keepdims=True))
training_experiment("word2vec", w2v_featurizer, 3000)
print()
###Output
word2vec features, 3000 examples
0.78
###Markdown
**Part 2: Lab writeup**Part 2 of your lab report should discuss any implementation details that were important to filling out the code above, as well as your answers to the questions in Part 2 of the Homework 2 handout. Below, you can set up and perform experiments that answer these questions (include figures, plots, and tables in your write-up as you see fit). Experiments for Part 2
###Code
# Your code here!
def x_featurizer(xs):
return np.concatenate( ( w2v_featurizer(xs), combo_featurizer(xs)), axis=1)
training_experiment("word2vec", x_featurizer, 3000)
###Output
word2vec features, 3000 examples
0.818
###Markdown
Part 3 (6.864 only) In Part 3, you will extend the methods you've implemented in Parts 1 and 2 with the goal of improving final predictive performance. You should experiment with at least one idea to improve the model --- feel free to focus on either the featurizer or the classifier. Some suggestions of things you could try:1. Implement a different TD matrix normalization method (see lecture slides for alternatives to TF-IDF).2. Implement a different Word2Vec formulation (in Part 2, you implemented the CBOW formulation; does the skip-gram formulation perform any better?).3. Implement a more sophisticated classifier module.4. Tune featurizer and/or classifier hyperparameters (for full marks, you should obtain at least a 1% improvement in prediction accuracy if you only tune hyperparameters).In your report, discuss what you implemented (including relevant design decisions), and how your change(s) impacted performance.Note: As long as you try something with difficulty comparable to the suggested modifications and have a meaningful discussion of your results in your report, you can earn full marks (you do not necessarily need to improve performance).
###Code
# Your code here!
def tfidf_sweep(matrix, tf_func, idf_func):
# `matrix` is a `|V| x |D|` TD matrix of raw counts, where `|V|` is the
# vocabulary size and `|D|` is the number of documents in the corpus. This
# function should return a version of `matrix` with the TF-IDF transform
# applied. Note: this function should be nondestructive: it should not
# modify the input; instead, it should return a new object.
nwords, ndocs = matrix.shape
idfs = np.zeros(nwords)
tf = tf_func(matrix)
for i, _ in enumerate(idfs):
idfs[i] = idf_func(matrix[i,:], ndocs)
tfidf = np.multiply(tf, idfs[:, np.newaxis])
assert_size(tfidf, matrix.shape)
return tfidf
def idf_smooth(row, ndocs):
df = np.count_nonzero(row) + 1
idf = np.log(ndocs/df) + 1
return idf
def idf_max(row, ndocs):
df = np.count_nonzero(row) + 1
idf = np.log(np.max(row)/df)
return idf
def idf_probmax(row, ndocs):
df = np.count_nonzero(row)
idf = np.log((ndocs - df)/df)
return idf
def term_frequency(matrix):
tf = matrix/matrix.sum(axis=0, keepdims=True)
return tf
def log_norm(matrix):
tf = np.log(matrix + 1)
return tf
def k_norm(matrix, k = 0.5):
tf = (k-1) * matrix / (matrix.max(axis = 0)) + k
return tf
reps = 100
n_train = 3000
tfuncs = [term_frequency, log_norm, k_norm]
ifuncs = [idf_smooth, idf_max, idf_probmax]
for tfun in tfuncs:
for ifun in ifuncs:
td_matrix_tfidf = tfidf_sweep(td_matrix, tfun, ifun)
reps_tfidf = learn_reps_lsa(td_matrix_tfidf, reps)
name = tfun.__name__ + ' ' + ifun.__name__
training_experiment(name, lsa_featurizer, n_train)
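# (Added sketch, not part of the original assignment.) Another TD-matrix weighting in the
# spirit of suggestion 1 above is PPMI (positive pointwise mutual information). This is a
# hedged illustration only; it assumes `td_matrix` is the raw |V| x |D| count matrix used above.
def ppmi_weight(matrix, eps=1e-12):
    total = matrix.sum()
    p_wd = matrix / total                              # joint probability p(w, d)
    p_w = matrix.sum(axis=1, keepdims=True) / total    # marginal p(w)
    p_d = matrix.sum(axis=0, keepdims=True) / total    # marginal p(d)
    pmi = np.log((p_wd + eps) / (p_w @ p_d + eps))
    return np.maximum(pmi, 0.0)
# Example usage mirroring the sweep above (commented out, not executed here):
# reps_tfidf = learn_reps_lsa(ppmi_weight(td_matrix), reps)
# training_experiment("ppmi", lsa_featurizer, n_train)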
###Output
term_frequency idf_smooth features, 3000 examples
0.79
term_frequency idf_max features, 3000 examples
0.774
term_frequency idf_probmax features, 3000 examples
0.802
log_norm idf_smooth features, 3000 examples
0.802
log_norm idf_max features, 3000 examples
0.764
log_norm idf_probmax features, 3000 examples
0.766
k_norm idf_smooth features, 3000 examples
0.78
k_norm idf_max features, 3000 examples
0.76
k_norm idf_probmax features, 3000 examples
0.774
|
part_2/Distribution_exercises.ipynb | ###Markdown
Let's find out which distributions our data follow. In the data folder we have 4 datasets; load them and check which distribution each one follows.
###Code
data1 = pd.read_csv('data/StudentsPerformance.csv')
data2 = pd.read_csv('data/StudentsPerformance.csv')['writing score']
data3 = pd.read_csv('data/open-data-website-traffic.csv')['Socrata Sessions']
data4 = pd.read_csv('data/open-data-website-traffic.csv')['Socrata Bounce Rate']
data5 = pd.read_csv('data/HorseKicksDeath.csv')['C1']
# Tasks:
what_is_this_distirution_1 = data1["math score"]
what_is_this_distirution_1.head(10)
fig = plt.figure(figsize=(15, 15))
plt.subplot(321)
plt.title("Histogram plot")
plt.hist(what_is_this_distirution_1, bins=BINS, alpha=0.5, label='poisson', color='b', edgecolor='k')
plt.subplot(322)
plt.title("Violineplot")
plt.violinplot(what_is_this_distirution_1, vert=False, widths=0.9, showmeans=True, showextrema=True, showmedians=True)
plt.show()
mu = np.mean(what_is_this_distirution_1)
sigma = np.std(what_is_this_distirution_1)
what_is_this_distirution_1_normalized = what_is_this_distirution_1.apply(lambda x: (x - mu)/sigma)
fig = plt.figure(figsize=(15, 15))
plt.subplot(321)
plt.title("Histogram plot")
plt.hist(what_is_this_distirution_1_normalized, bins=BINS, alpha=0.5, label='poisson', color='b', edgecolor='k')
plt.hist(gen_normal(0, 1, 1000)['observation'], bins=BINS, alpha=0.5, label='normal', color='g', edgecolor='k')
plt.gca().legend(('mystery','normal'))
plt.subplot(322)
plt.title("Violineplot")
plt.violinplot(what_is_this_distirution_1_normalized, vert=False, widths=0.9, showmeans=True, showextrema=True, showmedians=True)
plt.show()
qq = stats.probplot(what_is_this_distirution_1_normalized, plot=plt)
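# (Added sketch, not part of the original exercise.) A quantitative complement to the
# histogram, violin and Q-Q plots: a Kolmogorov-Smirnov test of the normalised data
# against a standard normal. This reuses the existing `stats` (scipy.stats) object that
# the `probplot` call above already relies on.
ks_stat, p_value = stats.kstest(what_is_this_distirution_1_normalized, 'norm')
print(f"KS statistic: {ks_stat:.3f}, p-value: {p_value:.3g}")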
fig = plt.figure(figsize=(15, 15))
plt.subplot(321)
plt.title("Histogram plot")
plt.hist(data4, bins=BINS, alpha=0.5, label='poisson', color='b', edgecolor='k')
plt.subplot(322)
plt.title("Violineplot")
plt.violinplot(data4, vert=False, widths=0.9, showmeans=True, showextrema=True, showmedians=True)
plt.show()
#Looking at the second data set..
what_is_this_distirution_2 = data2
what_is_this_distirution_2.head(10)
mu = np.mean(what_is_this_distirution_2)
sigma = np.std(what_is_this_distirution_2)
what_is_this_distirution_2_normalized = what_is_this_distirution_2.apply(lambda x: (x - mu)/sigma)
fig = plt.figure(figsize=(15, 15))
plt.subplot(321)
plt.title("Histogram plot")
plt.hist(what_is_this_distirution_2_normalized, bins=BINS, alpha=0.5, label='poisson', color='b', edgecolor='k')
plt.hist(gen_weibull(1.5)['observation'], bins=BINS, alpha=0.5, label='normal', color='g', edgecolor='k')
plt.gca().legend(('mystery','weibull'))
plt.subplot(322)
plt.title("Violineplot")
plt.violinplot(what_is_this_distirution_2_normalized, vert=False, widths=0.9, showmeans=True, showextrema=True, showmedians=True)
plt.show()
###Output
_____no_output_____ |
docs/examples/classifier_example/classification_example1_2_data_points.ipynb | ###Markdown
A Quantum distance-based classifier Robert Wezeman, TNO Table of Contents* [Introduction](introduction)* [Problem](problem)* [Amplitude Encoding](amplitude)* [Data preprocessing](dataset)* [Quantum algorithm](algorithm)* [Conclusion and further work](conclusion)
###Code
## Import external python file
import nbimporter
import numpy as np
from data_plotter import get_bin, DataPlotter # for easier plotting
DataPlotter = DataPlotter()
###Output
_____no_output_____
###Markdown
$$ \newcommand{\ket}[1]{\left|{1}\right\rangle} $$ Introduction Consider the following scatter plot of the first two flowers in [the famous Iris flower data set](https://en.wikipedia.org/wiki/Iris_flower_data_set)Notice that just two features, the sepal width and the sepal length, divide the two different Iris species into different regions in the plot. This gives rise to the question: given only the sepal length and sepal width of a flower can we classify the flower by their correct species? This type of problem, also known as [statistical classification](https://en.wikipedia.org/wiki/Statistical_classification), is a common problem in machine learning. In general, a classifier is constructed by letting it learn a function which gives the desired output based on a sufficient amount of data. This is called supervised learning, as the desired output (the labels of the data points) are known. After learning, the classifier can classify an unlabeled data point based on the learned function. The quality of a classifier improves if it has a larger training dataset it can learn on. The true power of this quantum classifier becomes clear when using extremely large data sets. In this notebook we will describe how to build a distance-based classifier on the Quantum Inspire using amplitude encoding. It turns out that, once the system is initialized in the desired state, regardless of the size of training data, the actual algorithm consists of only 3 actions, one Hadamard gate and two measurements. This has huge implications for the scalability of this problem for large data sets. Using only 4 qubits we show how to encode two data points, both of a different class, to predict the label for a third data point. In this notebook we will demonstrate how to use the Quantum Inspire SDK using QASM-code, we will also provide the code to obtain the same results for the ProjectQ framework.[Back to Table of Contents](contents) Problem We define the following binary classification problem: Given the data set $$\mathcal{D} = \Big\{ ({\bf x}_1, y_1), \ldots ({\bf x}_M , y_M) \Big\},$$consisting of $M$ data points $x_i\in\mathbb{R}^n$ and corresponding labels $y_i\in \{-1, 1\}$, give a prediction for the label $\tilde{y}$ corresponding to an unlabeled data point $\bf\tilde{x}$. The classifier we shall implement with our quantum circuit is a distance-based classifier and is given by\begin{equation}\newcommand{\sgn}{{\rm sgn}}\newcommand{\abs}[1]{\left\lvert1\right\rvert}\label{eq:classifier} \tilde{y} = \sgn\left(\sum_{m=0}^{M-1} y_m \left[1-\frac{1}{4M}\abs{{\bf\tilde{x}}-{\bf x}_m}^2\right]\right). \hspace{3cm} (1)\end{equation}This is a typical $M$-nearest-neighbor model, where each data point is given a weight related to the distance measure. To implement this classifier on a quantum computer, we need a way to encode the information of the training data set in a quantum state. We do this by first encoding the training data in the amplitudes of a quantum system, and then manipulate the amplitudes of then the amplitudes will be manipulated by quantum gates such that we obtain a result representing the above classifier. Encoding input features in the amplitude of a quantum system is known as amplitude encoding.[Back to Contents](contents) Amplitude encoding Suppose we want to encode a classical vector $\bf{x}\in\mathbb{R}^N$ by some amplitudes of a quantum system. We assume $N=2^n$ and that $\bf{x}$ is normalised to unit length, meaning ${\bf{x}^T{x}}=1$. 
We can encode $\bf{x}$ in the amplitudes of a $n$-qubit system in the following way\begin{equation} {\bf x} = \begin{pmatrix}x^1 \\ \vdots \\ x^N\end{pmatrix} \Longleftrightarrow{} \ket{\psi_{{\bf x}}} = \sum_{i=0}^{N-1}x^i\ket{i},\end{equation}where $\ket{i}$ is the $i^{th}$ entry of the computational basis $\left\{\ket{0\ldots0},\ldots,\ket{1\ldots1}\right\}$. By applying an efficient quantum algorithm (resources growing polynomially in the number of qubits $n$), one can manipulate the $2^n$ amplitudes super efficiently, that is $\mathcal{O}\left(\log N\right)$. This follows as manipulating all amplitudes requires an operation on each of the $n = \mathcal{O}\left(\log N\right)$ qubits. For algorithms to be truly super-efficient, the phase where the data is encoded must also be at most polynomial in the number of qubits. The idea of quantum memory, sometimes referred as quantum RAM (QRAM), is a particular interesting one. Suppose we first run some quantum algorithm, for example in quantum chemistry, with as output some resulting quantum states. If these states could be fed into a quantum classifier, the encoding phase is not needed anymore. Finding efficient data encoding systems is still a topic of active research. We will restrict ourselves here to the implementation of the algorithm, more details can be found in the references.The algorithm requires the $n$-qubit quantum system to be in the following state \begin{equation}\label{eq:prepstate} \ket{\mathcal{D}} = \frac{1}{\sqrt{2M}} \sum_{m=0}^{M-1} \ket{m}\Big(\ket{0}\ket{\psi_{\bf\tilde{{x}}}} + \ket{1}\ket{\psi_{\bf{x}_m}}\Big)\ket{y_m}.\hspace{3cm} (2)\end{equation}Here $\ket{m}$ is the $m^{th}$ state of the computational basis used to keep track of the $m^{th}$ training input. The second register is a single ancillary qubit entangled with the third register. The excited state of the ancillary qubit is entangled with the $m^{th}$ training state $\ket{\psi_{{x}_m}}$, while the ground state is entangled with the new input state $\ket{\psi_{\tilde{x}}}$. The last register encodes the label of the $m^{th}$ training data point by\begin{equation}\begin{split} y_m = -1 \Longleftrightarrow& \ket{y_m} = \ket{0},\\ y_m = 1 \Longleftrightarrow& \ket{y_m} = \ket{1}.\end{split}\end{equation}Once in this state the algorithm only consists of the following three operations:1. Apply a Hadamard gate on the second register to obtain $$\frac{1}{2\sqrt{M}} \sum_{m=0}^{M-1} \ket{m}\Big(\ket{0}\ket{\psi_{\bf\tilde{x}+x_m}} + \ket{1}\ket{\psi_{\bf\tilde{x}-x_m}}\Big)\ket{y_m},$$ where $\ket{\psi_{\bf\tilde{{x}}\pm{x}_m}} = \ket{\psi_{\tilde{\bf{x}}}}\pm \ket{\psi_{\bf{x}_m}}$. 2. Measure the second qubit. We restart the algorithm if we measure a $\ket{1}$ and only continue if we are in the $\ket{0}$ branch. We continue the algorithm with a probability $p_{acc} = \frac{1}{4M}\sum_M\abs{{\bf\tilde{x}}+{\bf x}_m}^2$, for standardised random data this is usually around $0.5$. The resulting state is given by\begin{equation} \frac{1}{2\sqrt{Mp_{acc}}}\sum_{m=0}^{M-1}\sum_{i=0}^{N-1} \ket{m}\ket{0}\left({\tilde{x}}^i + x_m^i\right)\ket{i}\ket{y_m}.\end{equation} 3. Measure the last qubit $\ket{y_m}$. 
The probability that we measure outcome zero is given by\begin{equation} p(q_4=0) = \frac{1}{4Mp_{acc}}\sum_{m|y_m=0}\abs{\bf{\tilde{{x}}+{x}_m}}^2.\end{equation}In the special case where the amount of training data for both labels is equal, this last measurement relates to the classifier as described in previous section by\begin{equation}\tilde{y} = \left\{ \begin{array}{lr} -1 & : p(q_4 = 0 ) > p(q_4 = 1)\\ +1 & : p(q_4 = 0 ) < p(q_4 = 1) \end{array}\right. \end{equation}By setting $\tilde{y}$ to be the most likely outcome of many measurement shots, we obtain the desired distance-based classifier.[Back to Table of Contents](contents) Data preprocessingIn the previous section we saw that for amplitude encoding we need a data set which is normalised. Luckily, it is always possible to bring data to this desired form with some data transformations. Firstly, we standardise the data to have zero mean and unit variance, then we normalise the data to have unit length. Both these steps are common methods in machine learning. Effectively, we only have to consider the angle between different data features.To illustrate this procedure we apply it to the first two features of the famous Iris data set:
###Code
# Plot the data
from sklearn.datasets import load_iris
iris = load_iris()
features = iris.data.T
data = [el[0:101] for el in features][0:2] # Select only the first two features of the dataset
half_len_data = len(data[0]) // 2
iris_setosa = [el[0:half_len_data] for el in data[0:2]]
iris_versicolor = [el[half_len_data:-1] for el in data[0:2]]
DataPlotter.plot_original_data(iris_setosa, iris_versicolor); # Function to plot the data
# Rescale the data
from sklearn import preprocessing # Module contains method to rescale data to have zero mean and unit variance
# Rescale whole data-set to have zero mean and unit variance
features_scaled = [preprocessing.scale(el) for el in data[0:2]]
iris_setosa_scaled = [el[0:half_len_data] for el in features_scaled]
iris_versicolor_scaled = [el[half_len_data:-1] for el in features_scaled]
DataPlotter.plot_standardised_data(iris_setosa_scaled, iris_versicolor_scaled); # Function to plot the data
# Normalise the data
def normalise_data(arr1, arr2):
"""Normalise data to unit length
input: two array same length
output: normalised arrays
"""
for idx in range(len(arr1)):
norm = (arr1[idx]**2 + arr2[idx]**2)**(1 / 2)
arr1[idx] = arr1[idx] / norm
arr2[idx] = arr2[idx] / norm
return [arr1, arr2]
iris_setosa_normalised = normalise_data(iris_setosa_scaled[0], iris_setosa_scaled[1])
iris_versicolor_normalised = normalise_data(iris_versicolor_scaled[0], iris_versicolor_scaled[1])
# Function to plot the data
DataPlotter.plot_normalised_data(iris_setosa_normalised, iris_versicolor_normalised);
###Output
_____no_output_____
###Markdown
[Table of Contents](contents) Quantum algorithm Now we can start with our quantum algorithm on the Quantum Inspire. We describe how to build the algorithm for the simplest case with only two data points, each with two features, that is $M=N=2$. For this algorithm we need 4 qubits:* One qubit for the index register $\ket{m}$* One ancillary qubit* One qubit to store the information of the two features of the data points * One qubit to store the information of the classes of the data pointsFrom the data set described in previous section we pick the following data set $\mathcal{D} = \big\{({\bf x}_1,y_1), ({\bf x}_2, y_2) \big\}$ where: * ${\bf x}_1 = (0.9193, 0.3937)$, $y_1 = -1$,* ${\bf x}_2 = (0.1411, 0.9899)$, $y_2 = 1$.We are interested in the label $\tilde{y}$ for the data point ${\bf \tilde{x}} = (0.8670, 0.4984)$.The amplitude encoding of these data points look like\begin{equation} \begin{split} \ket{\psi_{\bf\tilde{x}}} & = 0.8670 \ket{0} + 0.4984\ket{1}, \\ \ket{\psi_{\bf x_1}} & = 0.9193 \ket{0} + 0.3937\ket{1},\\ \ket{\psi_{\bf x_2}} & = 0.1411 \ket{0} + 0.9899\ket{1}. \end{split}\end{equation}Before we can run the actual algorithm we need to bring the system in the desired [initial state (equation 2)](state) which can be obtain by applying the following combination of gates starting on $\ket{0000}$. * **Part A:** In this part the index register is initialized and the ancilla qubit is brought in the desired state. For this we use the plain QASM language of the Quantum Inspire. Part A consists of two Hadamard gates:
###Code
def part_a():
qasm_a = """version 1.0
qubits 4
prep_z q[0:3]
.part_a
H q[0:1] #execute Hadamard gate on qubit 0, 1
"""
return qasm_a
###Output
_____no_output_____
###Markdown
After this step the system is in the state$$\ket{\mathcal{D}_A} = \frac{1}{2}\Big(\ket{0}+\ket{1}\Big)\Big(\ket{0}+\ket{1}\Big)\ket{0}\ket{0} $$ * **Part B:** In this part we encode the unlabeled data point $\tilde{x}$ by making use of a controlled rotation. We entangle the third qubit with the ancillary qubit. The angle $\theta$ of the rotation should be chosen such that $\tilde{x}=R_y(\theta)\ket{0}$. By the definition of $R_y$ we have$$ R_y(\theta)\ket{0} = \cos\left(\frac{\theta}{2}\right)\ket{0} + \sin\left(\frac{\theta}{2}\right)\ket{1}.$$ Therefore, the angle needed to rotate to the state $\psi=a\ket{0} + b\ket{1}$ is given by $\theta = 2\cos^{-1}(a)\cdot sign(b)$.Quantum Inspire does not directly support controlled-$R_y$ gates, however we can construct it from other gates as shown in the figure below. In these pictures $k$ stand for the angle used in the $R_y$ rotation.
###Code
def part_b(angle):
half_angle = angle / 2
qasm_b = """.part_b # encode test value x^tilde
CNOT q[1], q[2]
Ry q[2], -{0}
CNOT q[1], q[2]
Ry q[2], {0}
X q[1]
""".format(half_angle)
return qasm_b
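# (Added sketch, not part of the original notebook.) Quick numerical check of the angle
# formula quoted above, theta = 2*arccos(a)*sign(b), for the test point
# x~ = (0.8670, 0.4984) used later in this notebook (here b > 0, so sign(b) = +1).
from math import acos
print(2 * acos(0.8670))  # roughly 1.04 rad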
###Output
_____no_output_____
###Markdown
After this step the system is in the state$$\ket{\mathcal{D}_B} = \frac{1}{2} \Big(\ket{0}+\ket{1}\Big)\Big(\ket{0}\ket{\tilde{{x}}}+\ket{1}\ket{0}\Big)\ket{0}$$ * **Part C:** In this part we encode the first data point $x_1$. The rotation angle $\theta$ is such that $\ket{x_1} = R_y(\theta)\ket{0}$. Now a double controlled-$R_y$ rotation is needed, and similar to Part B, we construct it from other gates as shown in the figure below.
###Code
def part_c(angle):
quarter_angle = angle / 4
qasm_c = """.part_c # encode training x^0 value
toffoli q[0],q[1],q[2]
CNOT q[0],q[2]
Ry q[2], {0}
CNOT q[0],q[2]
Ry q[2], -{0}
toffoli q[0],q[1],q[2]
CNOT q[0],q[2]
Ry q[2], -{0}
CNOT q[0],q[2]
Ry q[2], {0}
X q[0]
""".format(quarter_angle)
return qasm_c
###Output
_____no_output_____
###Markdown
After this step the system is in the state$$\ket{\mathcal{D}_C} = \frac{1}{2}\Bigg(\ket{0}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{{x_1}}\Big) + \ket{1}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{0}\Big)\Bigg) \ket{0}$$ * **Part D:** This part is almost an exact copy of part C, however now with $\theta$ chosen such that $\ket{{x}_2} = R_y(\theta)\ket{0}$.
###Code
def part_d(angle):
quarter_angle = angle / 4
qasm_d = """.part_d # encode training x^1 value
toffoli q[0],q[1],q[2]
CNOT q[0],q[2]
Ry q[2], {0}
CNOT q[0],q[2]
Ry q[2], -{0}
toffoli q[0],q[1],q[2]
CNOT q[0],q[2]
Ry q[2], -{0}
CNOT q[0],q[2]
Ry q[2], {0}
""".format(quarter_angle)
return qasm_d
###Output
_____no_output_____
###Markdown
After this step the system is in the state$$\ket{\mathcal{D}_D} = \frac{1}{2}\Bigg(\ket{0}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{{x_1}}\Big) + \ket{1}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{{x}_2}\Big)\Bigg) \ket{0}$$ * **Part E:** The last step is to label the last qubit with the correct class, this can be done using a simple CNOT gate between the first and last qubit to obtain the desired initial state$$\ket{\mathcal{D}_E} = \frac{1}{2}\ket{0}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{{x_1}}\Big)\ket{0} + \ket{1}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{{x}_2}\Big)\ket{1}.$$
###Code
def part_e():
qasm_e = """.part_e # encode the labels
CNOT q[0], q[3]
"""
return qasm_e
###Output
_____no_output_____
###Markdown
The actual algorithm Once the system is in this initial state, the algorithm itself only consists of one Hadamard gate and two measurements. If the first measurement gives the result $\ket{1}$, we have to abort the algorithm and start over again. However, these results can also easily be filtered out in a post-processing step.
###Code
def part_f():
qasm_f = """
.part_f
H q[1]
"""
return qasm_f
###Output
_____no_output_____
###Markdown
The circuit for the whole algorithm now looks like: We can send our QASM code to the Quantum Inspire with the following data points\begin{equation} \begin{split} \ket{\psi_{\tilde{x}}} & = 0.8670 \ket{0} + 0.4984\ket{1}, \\ \ket{\psi_{x_1}} & = 0.9193 \ket{0} + 0.3937\ket{1},\\ \ket{\psi_{x_2}} & = 0.1411 \ket{0} + 0.9899\ket{1}. \end{split}\end{equation}
###Code
import os
from getpass import getpass
from coreapi.auth import BasicAuthentication
from quantuminspire.credentials import load_account, get_token_authentication, get_basic_authentication
from quantuminspire.api import QuantumInspireAPI
from math import acos
from math import pi
QI_EMAIL = os.getenv('QI_EMAIL')
QI_PASSWORD = os.getenv('QI_PASSWORD')
QI_URL = os.getenv('API_URL', 'https://api.quantum-inspire.com/')
## input data points:
angle_x_tilde = 2 * acos(0.8670)
angle_x0 = 2 * acos(0.1411)
angle_x1 = 2 * acos(0.9193)
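# (Added sketch, not part of the original notebook.) Amplitude encoding requires unit-norm
# vectors, so it is worth verifying that the three data points quoted in the markdown cell
# above are indeed (approximately) normalised.
import numpy as np
for vec in ([0.8670, 0.4984], [0.9193, 0.3937], [0.1411, 0.9899]):
    print(vec, np.linalg.norm(vec))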
def get_authentication():
""" Gets the authentication for connecting to the Quantum Inspire API."""
token = load_account()
if token is not None:
return get_token_authentication(token)
else:
if QI_EMAIL is None or QI_PASSWORD is None:
print('Enter email')
email = input()
print('Enter password')
password = getpass()
else:
email, password = QI_EMAIL, QI_PASSWORD
return get_basic_authentication(email, password)
authentication = get_authentication()
qi = QuantumInspireAPI(QI_URL, authentication)
## Build final QASM
final_qasm = part_a() + part_b(angle_x_tilde) + part_c(angle_x0) + part_d(angle_x1) + part_e() + part_f()
backend_type = qi.get_backend_type_by_name('QX single-node simulator')
result = qi.execute_qasm(final_qasm, backend_type=backend_type, number_of_shots=1, full_state_projection=True)
print(result['histogram'])
import matplotlib.pyplot as plt
from collections import OrderedDict
def bar_plot(result_data):
res = [get_bin(el, 4) for el in range(16)]
prob = [0] * 16
for key, value in result_data['histogram'].items():
prob[int(key)] = value
# Set color=light grey when 2nd qubit = 1
# Set color=blue when 2nd qubit = 0, and last qubit = 1
# Set color=red when 2nd qubit = 0, and last qubit = 0
color_list = [
'red', 'red', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'red', 'red', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'blue', 'blue', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'blue', 'blue', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1)
]
plt.bar(res, prob, color=color_list)
plt.ylabel('Probability')
plt.title('Results')
plt.ylim(0, 1)
plt.xticks(rotation='vertical')
plt.show()
return prob
prob = bar_plot(result)
###Output
_____no_output_____
###Markdown
We only consider the events where the second qubit equals 0, that is, we only consider the events in the set $$\{0000, 0001, 0100, 0101, 1000, 1001, 1100, 1101\}$$The label $\tilde{y}$ is now given by\begin{equation}\tilde{y} = \left\{ \begin{array}{lr} -1 & : p(\{0000, 0001, 0100, 0101\}) > p(\{1000, 1001, 1100, 1101\})\\ +1 & : p(\{1000, 1001, 1100, 1101\}) > p(\{0000, 0001, 0100, 0101\}) \end{array}\right. \end{equation}
###Code
def summarize_results(prob, display=1):
sum_label0 = prob[0] + prob[1] + prob[4] + prob[5]
sum_label1 = prob[8] + prob[9] + prob[12] + prob[13]
def y_tilde():
if sum_label0 > sum_label1:
return 0, ">"
elif sum_label0 < sum_label1:
return 1, "<"
else:
return "undefined", "="
y_tilde_res, sign = y_tilde()
if display:
print("The sum of the events with label 0 is: {}".format(sum_label0))
print("The sum of the events with label 1 is: {}".format(sum_label1))
print("The label for y_tilde is: {} because sum_label0 {} sum_label1".format(y_tilde_res, sign))
return y_tilde_res
summarize_results(prob);
###Output
The sum of the events with label 0 is: 0.4039141
The sum of the events with label 1 is: 0.4982864
The label for y_tilde is: 1 because sum_label0 < sum_label1
###Markdown
The following code will randomly pick two training data points and a random test point for the algorithm. We can compare the prediction for the label by the Quantum Inspire with the true label.
###Code
from random import sample, randint
from numpy import sign
def grab_random_data():
one_random_index = sample(range(50), 1)
two_random_index = sample(range(50), 2)
random_label = sample([1,0], 1) # random label
## iris_setosa_normalised # Label 0
## iris_versicolor_normalised # Label 1
if random_label[0]:
# Test data has label = 1, iris_versicolor
data_label0 = [iris_setosa_normalised[0][one_random_index[0]],
iris_setosa_normalised[1][one_random_index[0]]]
data_label1 = [iris_versicolor_normalised[0][two_random_index[0]],
iris_versicolor_normalised[1][two_random_index[0]]]
test_data = [iris_versicolor_normalised[0][two_random_index[1]],
iris_versicolor_normalised[1][two_random_index[1]]]
else:
# Test data has label = 0, iris_setosa
data_label0 = [iris_setosa_normalised[0][two_random_index[0]],
iris_setosa_normalised[1][two_random_index[0]]]
data_label1 = [iris_versicolor_normalised[0][one_random_index[0]],
iris_versicolor_normalised[1][one_random_index[0]]]
test_data = [iris_setosa_normalised[0][two_random_index[1]],
iris_setosa_normalised[1][two_random_index[1]]]
return data_label0, data_label1, test_data, random_label
data_label0, data_label1, test_data, random_label = grab_random_data()
print("Data point {} from label 0".format(data_label0))
print("Data point {} from label 1".format(data_label1))
print("Test point {} from label {} ".format(test_data, random_label[0]))
def run_random_data(data_label0, data_label1, test_data):
angle_x_tilde = 2 * acos(test_data[0]) * sign(test_data[1]) % (4 * pi)
angle_x0 = 2 * acos(data_label0[0]) * sign(data_label0[1]) % (4 * pi)
angle_x1 = 2 * acos(data_label1[0])* sign(data_label1[1]) % (4 * pi)
## Build final QASM
final_qasm = part_a() + part_b(angle_x_tilde) + part_c(angle_x0) + part_d(angle_x1) + part_e() + part_f()
result_random_data = qi.execute_qasm(final_qasm, backend_type=backend_type, number_of_shots=1, full_state_projection=True)
return result_random_data
result_random_data = run_random_data(data_label0, data_label1, test_data);
# Plot data points:
plt.rcParams['figure.figsize'] = [16, 6] # Plot size
plt.subplot(1, 2, 1)
DataPlotter.plot_normalised_data(iris_setosa_normalised, iris_versicolor_normalised);
plt.scatter(test_data[0], test_data[1], s=50, c='green'); # Scatter plot data class ?
plt.scatter(data_label0[0], data_label0[1], s=50, c='orange'); # Scatter plot data class 0
plt.scatter(data_label1[0], data_label1[1], s=50, c='orange'); # Scatter plot data class 1
plt.legend(["Iris Setosa (label 0)", "Iris Versicolor (label 1)", "Test point", "Data points"])
plt.subplot(1, 2, 2)
prob_random_points = bar_plot(result_random_data);
summarize_results(prob_random_points);
###Output
Data point [-0.9855972005944997, 0.16910989971106205] from label 0
Data point [0.019219683952295424, -0.9998152848145371] from label 1
Test point [-0.5022519953589976, -0.8647213037493094] from label 1
###Markdown
To get a better idea of how well this quantum classifier works, we can compare the predicted label to the true label of the test data point. Errors in the prediction can have two causes: either the quantum classifier fails to reproduce the prediction of the classical classifier, or it reproduces that prediction correctly but the classifier itself assigns the wrong label to the selected data. In general, the first type of error can be reduced by increasing the number of times we run the algorithm. In our case, as we work with the simulator and our gates are deterministic ([no conditional gates](https://www.quantum-inspire.com/kbase/optimization-of-simulations/)), we do not have to deal with this first error if we use the true probability distribution. This can be done by using only a single shot without measurements.
###Code
quantum_score = 0
error_prediction = 0
classifier_is_quantum_prediction = 0
classifier_score = 0
no_label = 0
def true_classifier(data_label0, data_label1, test_data):
if np.linalg.norm(np.array(data_label1) - np.array(test_data)) < np.linalg.norm(np.array(data_label0) -
np.array(test_data)):
return 1
else:
return 0
for idx in range(100):
data_label0, data_label1, test_data, random_label = grab_random_data()
result_random_data = run_random_data(data_label0, data_label1, test_data)
classifier = true_classifier(data_label0, data_label1, test_data)
sum_label0 = 0
sum_label1 = 0
for key, value in result_random_data['histogram'].items():
if int(key) in [0, 1, 4, 5]:
sum_label0 += value
if int(key) in [8, 9, 12, 13]:
sum_label1 += value
if sum_label0 > sum_label1:
quantum_prediction = 0
elif sum_label1 > sum_label0:
quantum_prediction = 1
else:
no_label += 1
continue
if quantum_prediction == classifier:
classifier_is_quantum_prediction += 1
if random_label[0] == classifier:
classifier_score += 1
if quantum_prediction == random_label[0]:
quantum_score += 1
else:
error_prediction += 1
print("In this sample of 100 data points:")
print("the classifier predicted the true label correct", classifier_score, "% of the times")
print("the quantum classifier predicted the true label correct", quantum_score, "% of the times")
print("the quantum classifier predicted the classifier label correct",
classifier_is_quantum_prediction, "% of the times")
print("Could not assign a label ", no_label, "times")
###Output
In this sample of 100 data points:
the classifier predicted the true label correct 93 % of the times
the quantum classifier predicted the true label correct 93 % of the times
the quantum classifier predicted the classifier label correct 99 % of the times
Could not assign a label 1 times
###Markdown
Conclusion and further work How well the quantum classifier performs depends strongly on the chosen data points. If the test data point is significantly closer to one of the two training data points, the classifier gives a one-sided prediction. In the other case, where the test data point is at a similar distance to both training points, the classifier struggles to give a one-sided prediction. Repeating the algorithm on the same data points might sometimes give different measurement outcomes. This type of error can be reduced by running the algorithm with more shots. In the examples above we only used the true probability distribution (as if we had used an infinite number of shots). By running the algorithm instead with 512 or 1024 shots, this erroneous behavior can be observed. In the case of an infinite number of shots, we see that the quantum classifier gives the same prediction as classically expected. The results of this toy example already show the potential of a quantum computer in machine learning. Because the actual algorithm consists of only three operations, independent of the size of the data set, it can become extremely useful for tasks such as pattern recognition on large data sets. The next step is to extend this toy model to contain more data features and a larger training data set to improve the prediction. As not all data sets are best classified by a distance-based classifier, implementations of other types of classifiers might also be interesting. For more information on this particular classifier see the reference [ref](https://arxiv.org/abs/1703.10793).[Back to Table of Contents](contents) References * Book: [Schuld and Petruccione, Supervised learning with Quantum computers, 2018](https://www.springer.com/us/book/9783319964232) * Article: [Schuld, Fingerhuth and Petruccione, Implementing a distance-based classifier with a quantum interference circuit, 2017](https://arxiv.org/abs/1703.10793) The same algorithm for the projectQ framework
###Code
from math import acos
import os
from getpass import getpass
from quantuminspire.credentials import load_account, get_token_authentication, get_basic_authentication
from quantuminspire.api import QuantumInspireAPI
from quantuminspire.projectq.backend_qx import QIBackend
from projectq import MainEngine
from projectq.backends import ResourceCounter
from projectq.meta import Compute, Control, Loop, Uncompute
from projectq.ops import CNOT, CZ, All, H, Measure, Toffoli, X, Z, Ry, C
from projectq.setups import restrictedgateset
QI_EMAIL = os.getenv('QI_EMAIL')
QI_PASSWORD = os.getenv('QI_PASSWORD')
QI_URL = os.getenv('API_URL', 'https://api.quantum-inspire.com/')
def get_authentication():
""" Gets the authentication for connecting to the Quantum Inspire API."""
token = load_account()
if token is not None:
return get_token_authentication(token)
else:
if QI_EMAIL is None or QI_PASSWORD is None:
print('Enter email:')
email = input()
print('Enter password')
password = getpass()
else:
email, password = QI_EMAIL, QI_PASSWORD
return get_basic_authentication(email, password)
# Remote Quantum Inspire backend #
authentication = get_authentication()
qi_api = QuantumInspireAPI(QI_URL, authentication)
compiler_engines = restrictedgateset.get_engine_list(one_qubit_gates="any",
two_qubit_gates=(CNOT, CZ, Toffoli))
compiler_engines.extend([ResourceCounter()])
qi_backend = QIBackend(quantum_inspire_api=qi_api)
qi_engine = MainEngine(backend=qi_backend, engine_list=compiler_engines)
# angles data points:
angle_x_tilde = 2 * acos(0.8670)
angle_x0 = 2 * acos(0.1411)
angle_x1 = 2 * acos(0.9193)
qubits = qi_engine.allocate_qureg(4)
# part_a
for qubit in qubits[0:2]:
H | qubit
# part_b
C(Ry(angle_x_tilde), 1) | (qubits[1], qubits[2]) # Alternatively build own CRy gate as done above
X | qubits[1]
# part_c
C(Ry(angle_x0), 2) | (qubits[0], qubits[1], qubits[2]) # Alternatively build own CCRy gate as done above
X | qubits[0]
# part_d
C(Ry(angle_x1), 2) | (qubits[0], qubits[1], qubits[2]) # Alternatively build own CCRy gate as done above
# part_e
CNOT | (qubits[0], qubits[3])
# part_f
H | qubits[1]
qi_engine.flush()
# Results:
temp_results = qi_backend.get_probabilities(qubits)
res = [get_bin(el, 4) for el in range(16)]
prob = [0] * 16
for key, value in temp_results.items():
prob[int(key[::-1], 2)] = value # Reverse as projectQ has a different qubit ordering
color_list = [
'red', 'red', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'red', 'red', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'blue', 'blue', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'blue', 'blue', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1)
]
plt.bar(res, prob, color=color_list)
plt.ylabel('Probability')
plt.title('Results')
plt.ylim(0, 1)
plt.xticks(rotation='vertical')
plt.show()
print("Results:")
print(temp_results)
###Output
_____no_output_____ |
Deutsch's_Problem.ipynb | ###Markdown
###Code
!pip install -qq qiskit
from qiskit import *
from qiskit.visualization import plot_histogram
%matplotlib inline
qr = QuantumRegister(2, "q")
cr = ClassicalRegister(1)
circuit = QuantumCircuit(qr, cr)
circuit.x(qr[1])
circuit.h(qr)
circuit.barrier()
option = int(input("""Please choose a function mapping from {0,1} to {0,1}. The quantum circuit will decide if it is constant ([1] or [2]) or not ([3] or [4]) in one query.
[1] f(x)=0
[2] f(x)=1
[3] f(x)=x
[4] f(x)=~x
"""))
if option == 2:
circuit.x(qr[1])
elif option == 3:
circuit.cx(qr[0], qr[1])
elif option == 4:
circuit.x(qr[0])
circuit.cx(qr[0], qr[1])
circuit.x(qr[0])
circuit.barrier()
circuit.h(qr[0])
circuit.measure(qr[0], cr)
circuit.draw(output="mpl")
simulator = Aer.get_backend('qasm_simulator')
job = execute(circuit, simulator, shots=100)
result = job.result()
counts = result.get_counts(circuit)
plot_histogram(counts)
# A constant oracle always yields measurement outcome '0' on the query qubit,
# so comparing the counts is enough (missing keys default to 0).
if counts.get('0', 0) > counts.get('1', 0):
    print("The chosen function is constant.")
else:
    print("The chosen function is not constant.")
###Output
The chosen function is not constant.
|
Tasks/Task1/Part2/.ipynb_checkpoints/Viajeros-checkpoint.ipynb | ###Markdown
Preprocessing data
###Code
import json
import numpy as np
import csv
import sys
import locale
locale.setlocale(locale.LC_ALL, 'en_US')
dictCountries={
"Alemania":"Germany",
"Austria":"Austria",
"Bélgica":"Belgium",
"Bulgaria":"Bulgaria",
"Chipre":"Cyprus",
"Croacia":"Croatia",
"Dinamarca":"Denmark",
"Eslovenia":"Slovenia",
"Estonia":"Estonia",
"Finlandia":"Finland",
"Francia":"France",
"Grecia":"Greece",
"Holanda":"Holland",
"Hungría":"Hungary",
"Irlanda":"Ireland",
"Italia":"Italy",
"Letonia":"Latvia",
"Lituania":"Lithuania",
"Luxemburgo":"Luxembourg",
"Malta":"Malta",
"Polonia":"Poland",
"Portugal":"Portugal",
"Reino Unido":"United Kingdom",
"República Checa":"Czech Rep.",
"República Eslovaca":"Slovakia",
"Rusia":"Russia",
"Rumanía":"Romania",
"Suecia":"Sweden",
"Federación de Rusia":"Russia",
"Noruega":"Norway",
"Serbia":"Serbia",
"Suiza":"Switzerland",
"Ucrania":"Ukraine"}
invdictCountries = {v: k for k, v in dictCountries.items()}
#Data from: Instituto Nacional de Estadística, www.ine.es
f = open("./sources/viajeros.txt", "r")
reader = csv.reader(f)
max_value = -1
dic = {}
for row in reader:
tokens=row[0].split(";")
name=tokens[0]
tokens[1]=tokens[1].split(".")[0]
tokens[2]=tokens[2].split(".")[0]
viaj2000=int(tokens[1])
if(max_value < viaj2000):
max_value = viaj2000
viaj2016=int(tokens[2])
if(max_value < viaj2016):
max_value = viaj2016
if name in dictCountries:
entry = {}
entry['r2k']= viaj2000
entry['r2k16']= viaj2016
entry['r2k_show']= locale.format('%d', int(tokens[1]), grouping=True)
entry['r2k16_show']= locale.format('%d', int(tokens[2]), grouping=True)
dic[dictCountries[name]]=entry
f.close()
f = open("./sources/viajeros.json", "w")
f.write(json.dumps(dic))
f.close()
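# (Added sketch, not part of the original notebook.) Quick sanity check of the parsed
# dictionary: print a few countries with their formatted 2000 and 2016 traveller counts
# before they are used for the maps below.
for country in list(dic)[:3]:
    print(country, dic[country]['r2k_show'], '->', dic[country]['r2k16_show'])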
###Output
_____no_output_____
###Markdown
Creating the map
###Code
import geopandas as gpd
import json
from collections import OrderedDict
from shapely.geometry import Polygon, mapping
import bokeh.io
from bokeh.layouts import gridplot,column
from bokeh.models import GeoJSONDataSource, LinearColorMapper, LogColorMapper,ColorBar,LogTicker, AdaptiveTicker
from bokeh.plotting import figure, output_file, show
from bokeh.models import ColumnDataSource, HoverTool,LogColorMapper
from bokeh.palettes import Viridis6 as palette
from bokeh.palettes import (Blues9, BrBG9, BuGn9, BuPu9, GnBu9, Greens9,
Greys9, OrRd9, Oranges9, PRGn9, PiYG9, PuBu9,
PuBuGn9, PuOr9, PuRd9, Purples9, RdBu9, RdGy9,
RdPu9, RdYlBu9, RdYlGn9, Reds9, Spectral9, YlGn9,
YlGnBu9, YlOrBr9, YlOrRd9)
def gen_plot(europe, key, max_value):
#Filling data into the geometry
europe_json = json.loads(europe.to_json())
for i in range(len(europe_json['features'])):
count_name = europe_json['features'][i]['properties']['name']
if count_name in dic:
europe_json['features'][i]['properties']['rer'] = dic[count_name][key]
europe_json['features'][i]['properties']['show'] = dic[count_name][key + "_show"]
else:
europe_json['features'][i]['properties']['rer'] = -1
europe_json['features'][i]['properties']['show'] = "-"
#Deleting Spain polygon here
del europe_json['features'][10]
#Loading Spain's shape
spain = (world.loc[world['name'] == 'Spain'])
geo_source = GeoJSONDataSource(geojson=json.dumps(europe_json))
spain_source = GeoJSONDataSource(geojson=spain.to_json())
TOOLS = "pan,wheel_zoom,reset,hover,save"
if(key=='r2k'):
title_text="Viajeros entrados por país de residencia (2000)"
else:
title_text="Viajeros entrados por país de residencia (2016)"
p = figure(plot_width=300, plot_height=300,
title=title_text,
tools=TOOLS,
toolbar_location="below",
x_axis_location=None, y_axis_location=None,
x_range=(-20,40), y_range=(35,80)
)
palette=standard_palettes.get("YlOrBr9")
palette.reverse()
color_mapper = LinearColorMapper(palette=palette, low=0, high=max_value)
p.patches('xs', 'ys', fill_alpha=0.7, name="europe",
fill_color={'field': 'rer', 'transform': color_mapper},
line_color='black', line_width=0.5, source=geo_source)
p.patches('xs', 'ys', fill_alpha=1.0,
fill_color='black',
line_color='black', line_width=0.5, source=spain_source)
hover = p.select_one(HoverTool)
hover.point_policy = "follow_mouse"
    # Hover is active only for European countries other than Spain
hover.names=["europe"]
hover.tooltips = [
("Name", "@name"),
("Viajeros", "@show")
#("(Long, Lat)", "($x, $y)"),
]
return p
bokeh.io.output_notebook()
standard_palettes = OrderedDict([("Blues9", Blues9), ("BrBG9", BrBG9),
("BuGn9", BuGn9), ("BuPu9", BuPu9),
("GnBu9", GnBu9), ("Greens9", Greens9),
("Greys9", Greys9), ("OrRd9", OrRd9),
("Oranges9", Oranges9), ("PRGn9", PRGn9),
("PiYG9", PiYG9), ("PuBu9", PuBu9),
("PuBuGn9", PuBuGn9), ("PuOr9", PuOr9),
("PuRd9", PuRd9), ("Purples9", Purples9),
("RdBu9", RdBu9), ("RdGy9", RdGy9),
("RdPu9", RdPu9), ("RdYlBu9", RdYlBu9),
("RdYlGn9", RdYlGn9), ("Reds9", Reds9),
("Spectral9", Spectral9), ("YlGn9", YlGn9),
("YlGnBu9", YlGnBu9), ("YlOrBr9", YlOrBr9),
("YlOrRd9", YlOrRd9)])
# obtain country shapes
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
europe = (world.loc[world['continent'] == 'Europe'])
p2000=gen_plot(europe, 'r2k', max_value)
p2016=gen_plot(europe, 'r2k16', max_value)
palette=standard_palettes.get("YlOrBr9")
palette.reverse()
color_mapper = LinearColorMapper(palette=palette, low=-1, high=max_value)
color_bar = ColorBar(color_mapper=color_mapper, ticker=AdaptiveTicker(),
label_standoff=12, location=(-400,0))
dummy = figure(height=1, width=1, toolbar_location="below", tools="pan,wheel_zoom,reset,hover,save", min_border=0, outline_line_color=None)
dummy.add_layout(color_bar, 'right')
show(gridplot([[p2000,p2016,dummy]], plot_width=500, plot_height=500))
output_file("../../../site/Travelings_old.html", title="Viajeros entrados por país de residencia")
###Output
_____no_output_____ |
Workspace/best_model_all_data.ipynb | ###Markdown
Best Model (RandomForestClassifier) Against All Training Data
###Code
# import packages
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.pipeline import make_pipeline
from sklearn.metrics import plot_confusion_matrix
from sklearn.inspection import permutation_importance
from sklearn.ensemble import RandomForestClassifier
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
# read in data
train_values = pd.read_csv('data/Proj5_train_values.csv')
train_labels = pd.read_csv('data/Proj5_train_labels.csv')
###Output
_____no_output_____
###Markdown
Label Encode
###Code
# Label Encode categorical features
le = LabelEncoder()
train_enc = train_values.apply(le.fit_transform)
train_enc
# establish X + y
X = train_enc.drop(columns = ['building_id'])
y = train_labels['damage_grade']
# tts
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify = y, random_state = 123)
# baseline accuracy score against ALL train data
y.value_counts(normalize = True)
###Output
_____no_output_____
###Markdown
Modeling
###Code
# Random Forest
pipe_forest_best = make_pipeline(StandardScaler(), RandomForestClassifier(n_jobs = -1, random_state = 123, max_depth = 11, max_features = 35))
# params = {'randomforestclassifier__max_depth' : [6, 7, 8, 9, 10, 11],
# 'randomforestclassifier__max_features' : [15, 20, 30, 35]}
# grid_forest = GridSearchCV(pipe_forest, param_grid = params)
pipe_forest_best.fit(X_train, y_train)
print(f'Train Score: {pipe_forest_best.score(X_train, y_train)}')
print(f'Test Score: {pipe_forest_best.score(X_test, y_test)}')
# grid_forest.best_params_
###Output
Train Score: 0.7229828600665131
Test Score: 0.7049930162238492
###Markdown
Evaluation Metrics- Accuracy Score (against the Test data, above)- Confusion Matrix- Feature Importances- Permutation Feature Importances
###Code
# confusion matrix
plot_confusion_matrix(pipe_forest_best, X_test, y_test, cmap = 'Reds');
# grab feature importances
forest_fi_df = pd.DataFrame({'importances': pipe_forest_best.named_steps['randomforestclassifier'].feature_importances_,
'name': X_train.columns}).sort_values('importances', ascending = False)
forest_fi_df[:5]
# referenced https://git.generalassemb.ly/DSI-322/6.04_Forests_and_Features/blob/master/starter-code.ipynb
# extract permutation importances
pimports = permutation_importance(pipe_forest_best, X_test, y_test, n_repeats = 10, n_jobs = -1, random_state = 123)
#sort by averages
sort_idx = pimports.importances_mean.argsort()
#create a Dataframe sorted by importance
imp_df = pd.DataFrame(pimports.importances[sort_idx].T, columns = X_test.columns[sort_idx])
imp_df
#draw a boxplot -- first 10 features
imp_df.iloc[:, :10].plot(kind = 'box', vert = False)
# next 10 features
imp_df.iloc[:, 10:20].plot(kind = 'box', vert = False)
# next 10 features
imp_df.iloc[:, 20:30].plot(kind = 'box', vert = False)
# last 8 features
imp_df.iloc[:, 30:-1].plot(kind = 'box', vert = False)
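# (Added sketch, not part of the original notebook.) A compact numeric summary of the same
# permutation importances: the mean importance per feature over the repeats, sorted from
# most to least important. Sometimes easier to scan than the box plots above.
perm_summary = imp_df.mean().sort_values(ascending=False)
print(perm_summary.head(10))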
###Output
_____no_output_____ |
2_white_wine_quality_regression_local_training.ipynb | ###Markdown
White Wine Quality Regression - Local Training - Developed by Marcelo Rovai, 13 February 2022
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import random
random.seed(42)
from matplotlib import rcParams
rcParams['figure.figsize'] = (10, 6)
rcParams['axes.spines.top'] = False
rcParams['axes.spines.right'] = False
###Output
_____no_output_____
###Markdown
Dataset: https://archive.ics.uci.edu/ml/datasets/wine+quality. P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis. Modeling wine preferences by data mining from physicochemical properties. Decision Support Systems, Elsevier, 47(4):547-553, 2009.
###Code
!ls ./data
df = pd.read_csv('./data/winequality-white.csv', delimiter=';')
df.shape
df.head()
df.info()
# Verify that there are no missing values
df.isnull().sum()
df.quality.value_counts()
sns.countplot(x=df['quality']);
set(df.quality)
df.describe().T
df.hist(bins=50, figsize=(12,12), grid=False);
features_list = list(df.columns[:-1])
features_list
df[features_list].hist(bins=30, edgecolor='b', linewidth=1.0,
xlabelsize=8, ylabelsize=8, grid=False,
figsize=(12,12), color='orange')
plt.tight_layout(rect=(0, 0, 1.2, 1.2))
plt.suptitle('White Wine Univariate Analysis', x=0.65, y=1.25, fontsize=20);
###Output
_____no_output_____
###Markdown
Pre-Process data **Prepare transformer**
###Code
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import MinMaxScaler
transformer = make_column_transformer(
(MinMaxScaler(), features_list)
)
X = df.drop('quality', axis=1)
y = df['quality']
X.shape, y.shape
###Output
_____no_output_____
###Markdown
Training
###Code
import sys, os
import tensorflow as tf
import logging
tf.get_logger().setLevel(logging.ERROR)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
# Set random seeds for repeatable results
RANDOM_SEED = 3
tf.random.set_seed(RANDOM_SEED)
###Output
_____no_output_____
###Markdown
Working with Single-Output Regression **Split data**
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
**Scale values**
###Code
# Fit on the train set
transformer.fit(X_train)
# Apply the transformation
X_train = transformer.transform(X_train)
X_test = transformer.transform(X_test)
X_train.shape, X_test.shape
X_train[0]
X_train
y_train
# The output should be continuous
y_train = y_train.astype('float32')
y_train
# Remember the number of samples in the training set
num_samples_train = len(X_train)
num_samples_train
num_samples_test = len(X_test)
num_samples_test
X_train.shape[1]
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=1)
print(X_train.shape, y_train.shape)
print(X_val.shape, y_val.shape)
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import backend as K
def rmse(y_true, y_pred):
return K.sqrt(K.mean(K.square(y_pred - y_true)))
X.shape
tf.random.set_seed(42)
model = Sequential([
Input(shape=(X_train.shape[1],)),
Dense(20, activation='relu'),
Dense(10, activation='relu'),
Dense(1)
])
model.compile(
loss=rmse,
optimizer=Adam(),
metrics=[rmse]
)
model.summary()
history = model.fit(X_train,
y_train,
epochs=100,
validation_data=(X_val, y_val),
verbose=2,
)
plt.plot(history.history['loss'][3:], label='loss')
plt.plot(history.history['val_loss'][3:], label='val_loss')
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(loc='upper right')
plt.show()
predictions = model.predict(X_test)
predictions[:10]
predictions.min()
predictions.max()
y_real = y_test.to_numpy()
y_real[:10]
plt.plot(y_real, 'r+')
plt.plot(np.round(predictions).astype('int64'), 'b+')
y_real = y_test.to_numpy()
for n in range(10):
pred = np.round(predictions[n]).astype('int64')
print(f"Real value {y_real[n]} ==> Prediction {pred[0]} .... {predictions[n][0]}")
predictions[0][0]
len(predictions)
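# (Added sketch, not part of the original notebook.) Besides the discretised "accuracy"
# computed in the next cells, it can help to report standard regression metrics for the
# raw (continuous) predictions on the test set.
from sklearn.metrics import mean_absolute_error, mean_squared_error
mae = mean_absolute_error(y_real, predictions.ravel())
rmse_test = float(np.sqrt(mean_squared_error(y_real, predictions.ravel())))
print(f"MAE: {mae:.3f}  RMSE: {rmse_test:.3f}")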
###Output
_____no_output_____
###Markdown
Calculation of "Accuracy" taking only discrete output values
###Code
num_errors = 0
for n in range(len(predictions)):
pred = np.round(predictions[n]).astype('int64')
if (pred[0] - y_real[n]) !=0:
num_errors +=1
num_errors
print(f"Accuracy: {round(1-(num_errors/len(predictions)),2)}")
###Output
Accuracy: 0.53
###Markdown
Calculation of "Accuracy" considering predictions one level above or below the true value
###Code
num_errors = 0
for n in range(len(predictions)):
pred = np.round(predictions[n]).astype('int64')
if (abs(pred[0] - y_real[n])) > 1:
num_errors +=1
num_errors
print(f"Accuracy: {round(1-(num_errors/len(predictions)),2)}")
###Output
Accuracy: 0.96
###Markdown
Predicting single values
###Code
tst = np.array([0.3274, 0.3288, 0.2500, 0.0890, 0.1770, 0.1194, 0.1131, 0.4471, 0.3228, 0.0982, 0.2154])
tst
tst = np.reshape(tst, (1, X_train.shape[1]))
tst.shape
model.predict(tst)[0][0]
tst = np.array([0.1239, 0.3562, 0.0500, 0.0685, 0.0902, 0.1194, 0.0389, 0.3590, 0.6142, 0.2515, 0.3385])
tst = np.reshape(tst, (1, X_train.shape[1]))
model.predict(tst)[0][0]
###Output
_____no_output_____
###Markdown
Working with Multi-Output Regression **Split data**
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
**Scale values**
###Code
# Fit on the train set
transformer.fit(X_train)
# Apply the transformation
X_train = transformer.transform(X_train)
X_test = transformer.transform(X_test)
X_train.shape, X_test.shape
X_train[0]
X_train
y_train
# Remember the number of samples in the training set
num_samples_train = len(X_train)
num_samples_train
num_samples_test = len(X_test)
num_samples_test
X_train.shape[1]
classes_values = list (set(y_train))
classes_values.sort()
classes_values
classes = len(classes_values)
classes
test_classes_values = list (set(y_test))
test_classes_values.sort()
test_classes_values
y_train = tf.keras.utils.to_categorical(y_train - 3, classes)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=1)
print(X_train.shape, y_train.shape)
print(X_val.shape, y_val.shape)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
# model architecture
model = Sequential()
model.add(Dense(20, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(classes, name='y_pred'))
# this controls the learning rate
opt = Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
# this controls the batch size, or you can manipulate the tf.data.Dataset objects yourself
BATCH_SIZE = 32
# train the neural network
model.compile(loss='mean_squared_error', optimizer=opt)
history = model.fit(X_train,
y_train,
epochs=100,
batch_size=BATCH_SIZE,
validation_data=(X_val, y_val),
verbose=2
)
plt.plot(history.history['loss'][0:], label='loss')
plt.plot(history.history['val_loss'][0:], label='val_loss')
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(loc='upper right')
plt.show()
predictions = model.predict(X_test)
p = predictions[0]
p
p = predictions[0]
p[0]*3 + p[1]*4 + p[2]*5 + p[3]*6 + p[4]*7 + p[5]*8
p = predictions[1]
p[0]*3 + p[1]*4 + p[2]*5 + p[3]*6 + p[4]*7 + p[5]*8
p = predictions[2]
p[0]*3 + p[1]*4 + p[2]*5 + p[3]*6 + p[4]*7 + p[5]*8
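# (Added sketch, not part of the original notebook.) The same weighted score can be computed
# for every test sample at once. Quality labels were encoded as `y_train - 3`, so output
# column i corresponds to quality i + 3; note the manual sums above only use the first six
# output columns.
weights = np.arange(classes) + 3
weighted_scores = predictions @ weights
print(weighted_scores[:3])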
predictions[0].argsort()[-2:][::-1]
y_real = y_test.to_numpy()
y_real
y_real = y_test.to_numpy()
for n in range(10):
pred = np.argmax(predictions[n])+3
print(f"Real value {y_real[n]} ==> Prediction {pred}")
len(predictions)
###Output
_____no_output_____
###Markdown
Calculation of "Accuracy" taking only discrete output values
###Code
num_errors = 0
for n in range(len(predictions)):
pred = np.argmax(predictions[n])+3
if (pred - y_real[n]) !=0:
num_errors +=1
num_errors
print(f"Accuracy: {round(1-(num_errors/len(predictions)),2)}")
###Output
Accuracy: 0.52
###Markdown
Calculation of "Accuracy" considering predictions one level above or below the true value
###Code
num_errors = 0
for n in range(len(predictions)):
pred = np.argmax(predictions[n])+3
if (abs(pred - y_real[n])) > 1:
num_errors +=1
num_errors
print(f"Accuracy: {round(1-(num_errors/len(predictions)),2)}")
###Output
Accuracy: 0.96
###Markdown
Predicting single values (weighted)
###Code
tst = np.array([0.3000, 0.1716, 0.1446, 0.0169, 0.0890, 0.9443, 0.8647, 0.1429, 0.7364, 0.3605, 0.2581])
tst
tst = np.reshape(tst, (1, X_train.shape[1]))
tst.shape
model.predict(tst)
p = model.predict(tst)[0]
p[0]*3 + p[1]*4 + p[2]*5 + p[3]*6 + p[4]*7 + p[5]*8
tst = np.array([0.1239, 0.3562, 0.0500, 0.0685, 0.0902, 0.1194, 0.0389, 0.3590, 0.6142, 0.2515, 0.3385])
tst = np.reshape(tst, (1, X_train.shape[1]))
model.predict(tst)
p = model.predict(tst)[0]
p[0]*3 + p[1]*4 + p[2]*5 + p[3]*6 + p[4]*7 + p[5]*8
p
###Output
_____no_output_____ |
bronze/.ipynb_checkpoints/B26_One_Qubit-checkpoint.ipynb | ###Markdown
prepared by Abuzer Yakaryilmaz (QLatvia) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} 1 \mspace{-1.5mu} \rfloor } $ One Qubit[Watch Lecture](https://youtu.be/MTsgLSrTNbY)_Quantum systems are linear systems: "The quantum states are represented by vectors and quantum operators are represented by matrices. The new quantum states are calculated by corresponding matrix-vector multiplications."_A qubit (quantum bit) has two states: state 0 and state 1.They are denoted by ket-notation:$ \ket{0} = \myvector{1 \\ 0} $ and $ \ket{1} = \myvector{0\\ 1} $. NOT operator NOT operator flips the value of a qubit.We use capital letter for the matrix form of the operators:$ X = \X$. The action of $ X $ on the qubit:$ X \ket{0} = \ket{1} $. More explicitly, $ X \ket{0} = \X \vzero = \vone = \ket{1} $.Similarly, $ X \ket{1} = \ket{0} $.More explicitly, $ X \ket{1} = \X \vone = \vzero = \ket{0} $. Why is the NOT operator referred to as the x-gate? In Bronze, we use only real numbers, but we should note that complex numbers are also used in quantum computing. When complex numbers are used, the state of a qubit can be represented by a four dimensional real number valued vector, which is not possible to visualize. On the other hand, it is possible to represent such state in three dimensions (with real numbers). This representation is called [Bloch sphere](https://en.wikipedia.org/wiki/Bloch_sphere).In three dimensions, we have axes: x, y, and z. X refers to the rotation with respect to x-axis. Similarly, we have the rotation with respect to y-axis and z-axis. In Bronze, we will also see the operator Z (z-gate). The operator Y is defined with complex numbers. 
Hadamard operatorHadamard operator ($ H $ or h-gate) looks similar to a fair coin-flipping.$$ H = \hadamard.$$But, there are certain dissimilarities: we have a negative entry, and instead of $ \frac{1}{2} $, we have its square root $ \mypar{ \frac{1}{\sqrt{2}} } $. Quantum systems can have negative transitions. Can probabilistic systems be extended with negative values? One-step HadamardStart in $ \ket{0} $.After applying $ H $:$$ H \ket{0} = \hadamard \vzero = \vhadamardzero.$$After measurement, we observe the states $ \ket{0} $ and $ \ket{1} $ with equal probability $ \frac{1}{2} $. How can this be possible when their values are $ \frac{1}{\sqrt{2}} $? Let's start in $ \ket{1} $.After applying $ H $:$$ H \ket{1} = \hadamard \vone = \vhadamardone.$$After measurement, we observe the states $ \ket{0} $ and $ \ket{1} $ with equal probability $ \frac{1}{2} $. We obtain the same values even when one of the values is negative. The absolute value of a negative value is positive.The square of a negative value is also positive.As we have observed, the second fact fits better when reading the measurement results. When a quantum system is measured, the probability of observing one state is the square of its value.The value of the system being in a state is called its amplitude.In the above example, the amplitudes of states $\ket{0}$ and $ \ket{1} $ are respectively $ \sqrttwo $ and $ -\sqrttwo $.The probabilities of observing them after a measurement are $ \onehalf $._Remark that, after observing state $ 0 $, the new state will be $ \ket{0} $, and, after observing state $ 1 $, the new state will be $ \ket{1} $._ Task 1 What are the probabilities of observing the states $ \ket{0} $ and $ \ket{1} $ if the system is in $ \myvector{-\frac{3}{5} \\ - \frac{4}{5}} $ or $ \myrvector{\frac{3}{5} \\ -\frac{4}{5}} $ or $ \myrvector{\frac{1}{\sqrt{3}} \\ - \frac{\sqrt{2}}{\sqrt{3}}} $?
###Code
#
# your solution is here
#
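# A possible solution sketch (added; not part of the original exercise cell):
# the probability of observing a state is the square of its amplitude.
from math import sqrt
states = [[-3/5, -4/5], [3/5, -4/5], [1/sqrt(3), -sqrt(2)/sqrt(3)]]
for amplitudes in states:
    probs = [a**2 for a in amplitudes]
    print("amplitudes", [round(a, 4) for a in amplitudes],
          "-> Pr(0) =", round(probs[0], 4), ", Pr(1) =", round(probs[1], 4),
          ", sum =", round(sum(probs), 4))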
###Output
_____no_output_____ |
Geoelektrik.ipynb | ###Markdown
**Instructions:** All cells marked `In [..]:` contain Python code. To run the code, use the key combination `Shift-Return`. The currently active executable cell is indicated by the frame with the green vertical bar and by the marker `In [..]:` at the left margin. The first execution of cell `In [1]:` may produce a warning message, which can be ignored. **Task:** Work through the notebook. The table before code cell 2 `In [2]:` contains the data of a geoelectrical resistivity depth sounding $\rho_s(AB/2)$. These values have already been entered into the Python arrays `ab2` and `rhoa` in code cell 2 `In [2]:`. Your task is therefore to determine the resistivities $\rho_i$, $i=1,2,3$ and thicknesses $h_i$, $i=1,2$ of a horizontally layered three-layer model. Starting from prescribed initial values, you first generate a synthetic sounding curve. By systematically changing the values of $\rho_i$ and $h_i$ you then try to bring the measured and the synthetic sounding curve step by step into the best possible agreement. Finally, you can run an automatic fit (*geophysical data inversion*) to obtain optimal values for $\rho$ and $h$. For the lab report, use the generated figures as well as the numerical values of $\rho_i$ and $h_i$. Evaluation of a resistivity depth sounding. Fundamentals of the method: In a resistivity depth sounding, apparent resistivities $\rho_s$ are recorded. The spacing of the current electrodes A and B is increased step by step while the midpoint of the array is kept fixed. If the spacing between the potential electrodes M and N is always smaller than AB/3, the array is called a *Schlumberger array*. The resistivity meter measures the electrical voltage between the electrodes M and N as well as the electric current flowing between the electrodes A and B. From these, the relation $$R = \frac{U}{I} \quad\text{ in } \Omega$$ first gives the ohmic resistance. This is multiplied by the configuration (geometric) factor of the Schlumberger array $$k = \frac{\pi}{\text{MN}}\left( \frac{\text{AB}^2}{4} - \frac{\text{MN}^2}{4} \right)$$ which yields the apparent resistivity $$\rho_s = R \cdot k \quad\text{ in } \Omega\cdot m.$$ The following figure shows a typical sounding curve, for whose interpretation a three-layer model is sufficient.  Evaluation: The goal of the evaluation is to determine the *resistivities* and *thicknesses* of the subsurface layers under the assumption of an approximately horizontal layering. The measured values are first displayed graphically as a *sounding curve* $\rho_s = f(AB/2)$. From the shape of the curve, the minimum number of layers is estimated. For the evaluation we use the Python library pygimli ([www.pygimli.org](http://www.pygimli.org)). To this end we import the module `functions`.
###Code
from functions import *
###Output
_____no_output_____
###Markdown
The data were recorded at logarithmically equidistant positions of the current electrodes. We summarize the measured values in a table:

| AB/2 in m | $\rho_s$ in $\Omega\cdot m$ |
|-------------|-----------------------------|
| 1.0 | 195.07 |
| 1.3 | 197.25 |
| 1.8 | 186.88 |
| 2.4 | 162.47 |
| 3.2 | 127.12 |
| 4.2 | 89.57 |
| 5.6 | 55.84 |
| 7.5 | 33.14 |
| 10.0 | 29.21 |
| 13.0 | 31.63 |
| 18.0 | 42.90 |
| 24.0 | 57.91 |
| 32.0 | 72.59 |
| 42.0 | 96.33 |
| 56.0 | 121.64 |
| 75.0 | 168.55 |
| 100.0 | 204.98 |

For all values of AB/2 the spacing of the potential electrodes was always 0.6 m, i.e., MN/2 = 0.3 m. We collect all values in the *Python* arrays `ab2`, `mn2` and `rhoa`:
###Code
ab2 = np.array([1.0, 1.3, 1.8, 2.4, 3.2, 4.2, 5.6, 7.5, 10, 13, 18, 24, 32, 42, 56, 75, 100])
mn2 = 0.3 * np.ones(len(ab2))
rhoa = np.array([195.07, 197.25, 186.88, 162.47, 127.12, 89.57, 55.84, 33.14, 29.21,
31.63, 42.90, 57.91, 72.59, 96.33, 121.64, 168.55, 204.98])
###Output
_____no_output_____
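###Markdown
As a quick cross-check (an added sketch that is not part of the original lab script), the geometric factor $k$ of the Schlumberger array from the formula above can be evaluated directly from the half-spacings AB/2 and MN/2 stored in `ab2` and `mn2`; multiplying a measured resistance $R$ by $k$ would then give the apparent resistivity $\rho_s$.
###Code
# Geometric factor k = pi/MN * (AB^2/4 - MN^2/4), written with the half-spacings ab2 = AB/2 and mn2 = MN/2
k = np.pi / (2.0 * mn2) * (ab2**2 - mn2**2)
print(k[:5])
###Output
_____no_output_____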
###Markdown
Model fitting: We now try, by trial and error, to bring the measured sounding curve into agreement with a sounding curve computed from a model. To do so, we enter into the array `res` the numerical values of the resistivities (in $\Omega\cdot m$) of the three layers, starting at the surface (from top to bottom):
###Code
res = [250.0, 80.0, 500.0]
###Output
_____no_output_____
###Markdown
The numerical values of the thicknesses of these layers (in m) are collected in the array `thk`. Note that the thickness of the last layer (the substratum) is unbounded at depth and is therefore omitted from `thk`.
###Code
thk = [2.0, 5.0]
###Output
_____no_output_____
###Markdown
The function `datenberechnen` computes the apparent resistivities that a measurement over a three-layer model with the given resistivities and thicknesses would yield for the prescribed current-electrode spacings:
###Code
rhoanew = datenberechnen(ab2, mn2, res, thk)
###Output
_____no_output_____
###Markdown
The results obtained this way (the *model response*) `rhoanew` are plotted together with the measured values (the *data*) `rhoa` as a function of the electrode spacing AB/2 `ab2`.
###Code
datenvergleichen(rhoa, rhoanew, ab2)
###Output
_____no_output_____
###Markdown
Since we aim for the best possible agreement between the two curves, it may be necessary to go back to the definition of `res` and `thk` and repeat the calculation with modified values. Record the values of `res` and `thk` once you are satisfied with the fit.
###Code
resbest = res
thkbest = thk
###Output
_____no_output_____
###Markdown
Automatic model estimation: For an automatic determination of the resistivities `res` and thicknesses `thk`, the method of geophysical data inversion is used. A starting model with `nl` layers, derived from the measured values, is modified systematically until its model response agrees with the measured data to within a prescribed misfit `errPerc`. The parameter `lam` controls how large the jumps between the resistivities of the individual layers are allowed to be.
###Code
nl = 3
lam = 50.0
errPerc = 10.0
resnew, thknew, rhoaresponse, relrms, chi2 = dateninversion(
ab2, mn2, rhoa, nl, lam, errPerc)
plotresults(resnew, thknew, ab2, rhoa, rhoaresponse)
###Output
_____no_output_____
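###Markdown
As an optional experiment (an added sketch, not part of the original exercise), the inversion can be repeated for a few values of the regularization parameter `lam` to see how strongly it influences the data fit; a $\chi^2$ value close to 1 usually indicates that the data are fitted within the assumed error level.
###Code
# Re-run the inversion with different regularization strengths and compare the misfit values
for lam_test in [10.0, 50.0, 200.0]:
    _, _, _, relrms_test, chi2_test = dateninversion(ab2, mn2, rhoa, nl, lam_test, errPerc)
    print(f"lam = {lam_test:6.1f}:  relative RMS = {relrms_test:6.2f},  chi^2 = {chi2_test:6.2f}")
###Output
_____no_output_____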
###Markdown
Summary of the results: Electrical resistivities
###Code
print("Spezifische elektrische Widerstände in Ohm*m:")
for r in resnew:
print(f'{r:8.2f}')
###Output
_____no_output_____
###Markdown
Layer thicknesses
###Code
print("Schichtmächtigkeiten in m:")
for t in thknew:
print(f'{t:8.2f}')
###Output
_____no_output_____
###Markdown
Assessment of the fit: The quality of the fit is determined by the sum of the squared differences between the measured and the synthetic values of the sounding curve. We distinguish between the *relative RMS error* and the $\chi^2$ *error*.
###Code
print("Relativer RMS-Fehler = ", f'{relrms:.2f}')
print("chi^2-Fehler = ", f'{chi2:.2f}')
###Output
_____no_output_____
###Markdown
Layer equivalence: From the manual and the automatic fit, respectively, we obtain the following values for the resistivity and the thickness of the second layer:
###Code
print("Manuelle Anpassung:")
print("Spezifischer Widerstand der zweiten Schicht:" + f'{resbest[1]:8.2f}' + " Ohm*m")
print("Mächtigkeit der zweiten Schicht :" + f'{thkbest[1]:8.2f}' + " m")
print("Automatische Anpassung:")
print("Spezifischer Widerstand der zweiten Schicht:" + f'{resnew[1]:8.2f}' + " Ohm*m")
print("Mächtigkeit der zweiten Schicht :" + f'{thknew[1]:8.2f}' + " m")
###Output
_____no_output_____
###Markdown
For a thin layer with thickness $h$ and low resistivity $\rho$, embedded in a formation of high resistivity, layer equivalence applies. This means that the entire sounding curve remains nearly unchanged as long as the ratio $$S_i = \frac{h_i}{\rho_i} = const.$$ stays the same. The quantity $S_i$ is called the longitudinal conductance of layer $i$. Its physical unit is *Siemens*. The longitudinal conductance is the only information about the second layer that can be reliably determined from the present resistivity depth sounding. An independent determination of the resistivity or the thickness of the second layer is, by contrast, not possible. This is known as the *equivalence principle of geoelectrics*.
###Code
print("Manuelle Anpassung:")
print("Längsleitfähigkeit der zweiten Schicht:" + f'{thkbest[1]/resbest[1]:8.2f}' + " S")
print("Automatische Anpassung:")
print("Längsleitfähigkeit der zweiten Schicht:" + f'{thknew[1]/resnew[1]:8.2f}' + " S")
###Output
_____no_output_____ |
files/Day3_b.ipynb | ###Markdown
Columns (series)
###Code
surveys_df['weight'].head()
surveys_df.plot_id.head()
surveys_df[['weight', 'species_id']].head()
###Output
_____no_output_____
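###Markdown
A short aside (added, not part of the original lesson): selecting a single column with one set of brackets returns a pandas `Series`, while passing a list of column names returns a `DataFrame` — that is why the two selections above use different bracket styles.
###Code
# Single brackets with one name -> Series; a list of names -> DataFrame
print(type(surveys_df['weight']))
print(type(surveys_df[['weight', 'species_id']]))
###Output
_____no_output_____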
###Markdown
**Challenge**: Create a new DataFrame with the `month`, `day`, `year` columns of `surveys_df`.
###Code
variable = surveys_df[['month', 'day', 'year']]
variable.head()
###Output
_____no_output_____
###Markdown
Rows
###Code
test_l = ['a', 'x', 'y', 'j']
###Output
_____no_output_____
###Markdown
How do I print `'a'`?
###Code
print(test_l[0])
###Output
a
###Markdown
How do I print `'a'` and `'x'` at same time?
###Code
print(test_l[0:2])
###Output
['a', 'x']
###Markdown
How do I print `'j'`?
###Code
print(test_l[3])
print(test_l[-1])
surveys_df[0:1]
###Output
_____no_output_____
###Markdown
* **Challenge 1:** Get the first 10 rows of `surveys_df`.* **Challenge 2:** Get rows 20 through 29 of `surveys_df`.
###Code
surveys_df[0:10]
surveys_df.head(10)
surveys_df[20:30]
###Output
_____no_output_____
###Markdown
Slicing with `.iloc` and `.loc`
###Code
surveys_df.iloc[0:2,0:2]
###Output
_____no_output_____
###Markdown
**Challenge:** Create a slice with the first 10 rows and first 5 columns of `surveys_df`.
###Code
surveys_df.iloc[0:10,0:5]
surveys_df.loc[9, 'year']
###Output
_____no_output_____
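###Markdown
A quick note (added, not part of the original notebook): `.iloc` slices by integer position and excludes the end of the slice, whereas `.loc` slices by label and includes the end label — with this DataFrame's default integer index the difference is easy to see.
###Code
# .iloc[0:3] returns 3 rows (positions 0, 1, 2); .loc[0:3] returns 4 rows (labels 0 through 3)
print(surveys_df.iloc[0:3].shape)
print(surveys_df.loc[0:3].shape)
###Output
_____no_output_____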
###Markdown
**Challenge:** Using `.loc`, get the `month`, `day`, `year` for row `9`.
###Code
surveys_df.loc[9, ['month', 'day', 'year']]
###Output
_____no_output_____
###Markdown
Subsetting
###Code
surveys_df[surveys_df.year != 2002].head()
mask = surveys_df.year != 2002
subset = surveys_df[mask]
subset.head()
###Output
_____no_output_____
###Markdown
**Challenge:** Create a subset of `surveys_df` with all observations from the year 1978.
###Code
mask = surveys_df.year == 1978
subset = surveys_df[mask]
subset.head()
###Output
_____no_output_____
###Markdown
**Challenge:** Change the mask from the previous challenge to create a subset of `surveys_df` with all observations from the years 1978 *and* 2002. You'll need to use `(`,`)` and `|`.
###Code
mask = (surveys_df.year == 1978) | (surveys_df.year == 2002)
subset = surveys_df[mask]
subset.head()
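# An equivalent alternative (added sketch, not part of the original lesson): .isin() avoids chaining OR conditions
subset_isin = surveys_df[surveys_df.year.isin([1978, 2002])]
subset_isin.head()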
###Output
_____no_output_____ |
Array/quality_mosaic.ipynb | ###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Array-based quality mosaic.
# Returns a mosaic built by sorting each stack of pixels by the first band
# in descending order, and taking the highest quality pixel.
# function qualityMosaic(bands) {
def qualityMosaic(bands):
# Convert to an array, and declare names for the axes and indices along the
# band axis.
array = bands.toArray()
imageAxis = 0
bandAxis = 1
qualityIndex = 0
valuesIndex = 1
# Slice the quality and values off the main array, and sort the values by the
# quality in descending order.
quality = array.arraySlice(bandAxis, qualityIndex, qualityIndex + 1)
values = array.arraySlice(bandAxis, valuesIndex)
valuesByQuality = values.arraySort(quality.multiply(-1))
# Get an image where each pixel is the array of band values where the quality
# band is greatest. Note that while the array is 2-D, the first axis is
# length one.
best = valuesByQuality.arraySlice(imageAxis, 0, 1)
# Project the best 2D array down to a single dimension, and convert it back
# to a regular scalar image by naming each position along the axis. Note we
# provide the original band names, but slice off the first band since the
# quality band is not part of the result. Also note to get at the band names,
# we have to do some kind of reduction, but it won't really calculate pixels
# if we only access the band names.
bandNames = bands.min().bandNames().slice(1)
return best.arrayProject([bandAxis]).arrayFlatten([bandNames])
# }
# Load the l7_l1t collection for the year 2000, and make sure the first band
# is our quality measure, in this case the normalized difference values.
l7 = ee.ImageCollection('LANDSAT/LE07/C01/T1') \
.filterDate('2000-01-01', '2001-01-01')
withNd = l7.map(lambda image: image.normalizedDifference(['B4', 'B3']).addBands(image))
# Build a mosaic using the NDVI of bands 4 and 3, essentially showing the
# greenest pixels from the year 2000.
greenest = qualityMosaic(withNd)
# Select out the color bands to visualize. An interesting artifact of this
# approach is that clouds are greener than water. So all the water is white.
rgb = greenest.select(['B3', 'B2', 'B1'])
Map.addLayer(rgb, {'gain': [1.4, 1.4, 1.1]}, 'Greenest')
Map.setCenter(-90.08789, 16.38339, 11)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Array-based quality mosaic.
# Returns a mosaic built by sorting each stack of pixels by the first band
# in descending order, and taking the highest quality pixel.
# function qualityMosaic(bands) {
def qualityMosaic(bands):
# Convert to an array, and declare names for the axes and indices along the
# band axis.
array = bands.toArray()
imageAxis = 0
bandAxis = 1
qualityIndex = 0
valuesIndex = 1
# Slice the quality and values off the main array, and sort the values by the
# quality in descending order.
quality = array.arraySlice(bandAxis, qualityIndex, qualityIndex + 1)
values = array.arraySlice(bandAxis, valuesIndex)
valuesByQuality = values.arraySort(quality.multiply(-1))
# Get an image where each pixel is the array of band values where the quality
# band is greatest. Note that while the array is 2-D, the first axis is
# length one.
best = valuesByQuality.arraySlice(imageAxis, 0, 1)
# Project the best 2D array down to a single dimension, and convert it back
# to a regular scalar image by naming each position along the axis. Note we
# provide the original band names, but slice off the first band since the
# quality band is not part of the result. Also note to get at the band names,
# we have to do some kind of reduction, but it won't really calculate pixels
# if we only access the band names.
bandNames = bands.min().bandNames().slice(1)
return best.arrayProject([bandAxis]).arrayFlatten([bandNames])
# }
# Load the l7_l1t collection for the year 2000, and make sure the first band
# is our quality measure, in this case the normalized difference values.
l7 = ee.ImageCollection('LANDSAT/LE07/C01/T1') \
.filterDate('2000-01-01', '2001-01-01')
withNd = l7.map(lambda image: image.normalizedDifference(['B4', 'B3']).addBands(image))
# Build a mosaic using the NDVI of bands 4 and 3, essentially showing the
# greenest pixels from the year 2000.
greenest = qualityMosaic(withNd)
# Select out the color bands to visualize. An interesting artifact of this
# approach is that clouds are greener than water. So all the water is white.
rgb = greenest.select(['B3', 'B2', 'B1'])
Map.addLayer(rgb, {'gain': [1.4, 1.4, 1.1]}, 'Greenest')
Map.setCenter(-90.08789, 16.38339, 11)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Array-based quality mosaic.
# Returns a mosaic built by sorting each stack of pixels by the first band
# in descending order, and taking the highest quality pixel.
# function qualityMosaic(bands) {
def qualityMosaic(bands):
# Convert to an array, and declare names for the axes and indices along the
# band axis.
array = bands.toArray()
imageAxis = 0
bandAxis = 1
qualityIndex = 0
valuesIndex = 1
# Slice the quality and values off the main array, and sort the values by the
# quality in descending order.
quality = array.arraySlice(bandAxis, qualityIndex, qualityIndex + 1)
values = array.arraySlice(bandAxis, valuesIndex)
valuesByQuality = values.arraySort(quality.multiply(-1))
# Get an image where each pixel is the array of band values where the quality
# band is greatest. Note that while the array is 2-D, the first axis is
# length one.
best = valuesByQuality.arraySlice(imageAxis, 0, 1)
# Project the best 2D array down to a single dimension, and convert it back
# to a regular scalar image by naming each position along the axis. Note we
# provide the original band names, but slice off the first band since the
# quality band is not part of the result. Also note to get at the band names,
# we have to do some kind of reduction, but it won't really calculate pixels
# if we only access the band names.
bandNames = bands.min().bandNames().slice(1)
return best.arrayProject([bandAxis]).arrayFlatten([bandNames])
# }
# Load the l7_l1t collection for the year 2000, and make sure the first band
# is our quality measure, in this case the normalized difference values.
l7 = ee.ImageCollection('LANDSAT/LE07/C01/T1') \
.filterDate('2000-01-01', '2001-01-01')
withNd = l7.map(lambda image: image.normalizedDifference(['B4', 'B3']).addBands(image))
# Build a mosaic using the NDVI of bands 4 and 3, essentially showing the
# greenest pixels from the year 2000.
greenest = qualityMosaic(withNd)
# Select out the color bands to visualize. An interesting artifact of this
# approach is that clouds are greener than water. So all the water is white.
rgb = greenest.select(['B3', 'B2', 'B1'])
Map.addLayer(rgb, {'gain': [1.4, 1.4, 1.1]}, 'Greenest')
Map.setCenter(-90.08789, 16.38339, 11)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Array-based quality mosaic.
# Returns a mosaic built by sorting each stack of pixels by the first band
# in descending order, and taking the highest quality pixel.
# function qualityMosaic(bands) {
def qualityMosaic(bands):
# Convert to an array, and declare names for the axes and indices along the
# band axis.
array = bands.toArray()
imageAxis = 0
bandAxis = 1
qualityIndex = 0
valuesIndex = 1
# Slice the quality and values off the main array, and sort the values by the
# quality in descending order.
quality = array.arraySlice(bandAxis, qualityIndex, qualityIndex + 1)
values = array.arraySlice(bandAxis, valuesIndex)
valuesByQuality = values.arraySort(quality.multiply(-1))
# Get an image where each pixel is the array of band values where the quality
# band is greatest. Note that while the array is 2-D, the first axis is
# length one.
best = valuesByQuality.arraySlice(imageAxis, 0, 1)
# Project the best 2D array down to a single dimension, and convert it back
# to a regular scalar image by naming each position along the axis. Note we
# provide the original band names, but slice off the first band since the
# quality band is not part of the result. Also note to get at the band names,
# we have to do some kind of reduction, but it won't really calculate pixels
# if we only access the band names.
bandNames = bands.min().bandNames().slice(1)
return best.arrayProject([bandAxis]).arrayFlatten([bandNames])
# }
# Load the l7_l1t collection for the year 2000, and make sure the first band
# is our quality measure, in this case the normalized difference values.
l7 = ee.ImageCollection('LANDSAT/LE07/C01/T1') \
.filterDate('2000-01-01', '2001-01-01')
withNd = l7.map(lambda image: image.normalizedDifference(['B4', 'B3']).addBands(image))
# Build a mosaic using the NDVI of bands 4 and 3, essentially showing the
# greenest pixels from the year 2000.
greenest = qualityMosaic(withNd)
# Select out the color bands to visualize. An interesting artifact of this
# approach is that clouds are greener than water. So all the water is white.
rgb = greenest.select(['B3', 'B2', 'B1'])
Map.addLayer(rgb, {'gain': [1.4, 1.4, 1.1]}, 'Greenest')
Map.setCenter(-90.08789, 16.38339, 11)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Array-based quality mosaic.
# Returns a mosaic built by sorting each stack of pixels by the first band
# in descending order, and taking the highest quality pixel.
# function qualityMosaic(bands) {
def qualityMosaic(bands):
# Convert to an array, and declare names for the axes and indices along the
# band axis.
array = bands.toArray()
imageAxis = 0
bandAxis = 1
qualityIndex = 0
valuesIndex = 1
# Slice the quality and values off the main array, and sort the values by the
# quality in descending order.
quality = array.arraySlice(bandAxis, qualityIndex, qualityIndex + 1)
values = array.arraySlice(bandAxis, valuesIndex)
valuesByQuality = values.arraySort(quality.multiply(-1))
# Get an image where each pixel is the array of band values where the quality
# band is greatest. Note that while the array is 2-D, the first axis is
# length one.
best = valuesByQuality.arraySlice(imageAxis, 0, 1)
# Project the best 2D array down to a single dimension, and convert it back
# to a regular scalar image by naming each position along the axis. Note we
# provide the original band names, but slice off the first band since the
# quality band is not part of the result. Also note to get at the band names,
# we have to do some kind of reduction, but it won't really calculate pixels
# if we only access the band names.
bandNames = bands.min().bandNames().slice(1)
return best.arrayProject([bandAxis]).arrayFlatten([bandNames])
# }
# Load the l7_l1t collection for the year 2000, and make sure the first band
# is our quality measure, in this case the normalized difference values.
l7 = ee.ImageCollection('LANDSAT/LE07/C01/T1') \
.filterDate('2000-01-01', '2001-01-01')
withNd = l7.map(lambda image: image.normalizedDifference(['B4', 'B3']).addBands(image))
# Build a mosaic using the NDVI of bands 4 and 3, essentially showing the
# greenest pixels from the year 2000.
greenest = qualityMosaic(withNd)
# Select out the color bands to visualize. An interesting artifact of this
# approach is that clouds are greener than water. So all the water is white.
rgb = greenest.select(['B3', 'B2', 'B1'])
Map.addLayer(rgb, {'gain': [1.4, 1.4, 1.1]}, 'Greenest')
Map.setCenter(-90.08789, 16.38339, 11)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Array-based quality mosaic.
# Returns a mosaic built by sorting each stack of pixels by the first band
# in descending order, and taking the highest quality pixel.
# function qualityMosaic(bands) {
def qualityMosaic(bands):
# Convert to an array, and declare names for the axes and indices along the
# band axis.
array = bands.toArray()
imageAxis = 0
bandAxis = 1
qualityIndex = 0
valuesIndex = 1
# Slice the quality and values off the main array, and sort the values by the
# quality in descending order.
quality = array.arraySlice(bandAxis, qualityIndex, qualityIndex + 1)
values = array.arraySlice(bandAxis, valuesIndex)
valuesByQuality = values.arraySort(quality.multiply(-1))
# Get an image where each pixel is the array of band values where the quality
# band is greatest. Note that while the array is 2-D, the first axis is
# length one.
best = valuesByQuality.arraySlice(imageAxis, 0, 1)
# Project the best 2D array down to a single dimension, and convert it back
# to a regular scalar image by naming each position along the axis. Note we
# provide the original band names, but slice off the first band since the
# quality band is not part of the result. Also note to get at the band names,
# we have to do some kind of reduction, but it won't really calculate pixels
# if we only access the band names.
bandNames = bands.min().bandNames().slice(1)
return best.arrayProject([bandAxis]).arrayFlatten([bandNames])
# }
# Load the l7_l1t collection for the year 2000, and make sure the first band
# is our quality measure, in this case the normalized difference values.
l7 = ee.ImageCollection('LANDSAT/LE07/C01/T1') \
.filterDate('2000-01-01', '2001-01-01')
withNd = l7.map(lambda image: image.normalizedDifference(['B4', 'B3']).addBands(image))
# Build a mosaic using the NDVI of bands 4 and 3, essentially showing the
# greenest pixels from the year 2000.
greenest = qualityMosaic(withNd)
# Select out the color bands to visualize. An interesting artifact of this
# approach is that clouds are greener than water. So all the water is white.
rgb = greenest.select(['B3', 'B2', 'B1'])
Map.addLayer(rgb, {'gain': [1.4, 1.4, 1.1]}, 'Greenest')
Map.setCenter(-90.08789, 16.38339, 11)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
Pydeck Earth Engine IntroductionThis is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks. If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. To create and enter the environment, run:```conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -ysource activate pydeck-eejupyter nbextension install --sys-prefix --symlink --overwrite --py pydeckjupyter nbextension enable --sys-prefix --py pydeck```then open Jupyter Notebook with `jupyter notebook`. Now in a Python Jupyter Notebook, let's first import required packages:
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import requests
import ee
###Output
_____no_output_____
###Markdown
AuthenticationUsing Earth Engine requires authentication. If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/. If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line.Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create MapNext it's time to create a map. Here we create an `ee.Image` object
###Code
# Initialize objects
ee_layers = []
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# %%
# Add Earth Engine dataset
# Array-based quality mosaic.
# Returns a mosaic built by sorting each stack of pixels by the first band
# in descending order, and taking the highest quality pixel.
# function qualityMosaic(bands) {
def qualityMosaic(bands):
# Convert to an array, and declare names for the axes and indices along the
# band axis.
array = bands.toArray()
imageAxis = 0
bandAxis = 1
qualityIndex = 0
valuesIndex = 1
# Slice the quality and values off the main array, and sort the values by the
# quality in descending order.
quality = array.arraySlice(bandAxis, qualityIndex, qualityIndex + 1)
values = array.arraySlice(bandAxis, valuesIndex)
valuesByQuality = values.arraySort(quality.multiply(-1))
# Get an image where each pixel is the array of band values where the quality
# band is greatest. Note that while the array is 2-D, the first axis is
# length one.
best = valuesByQuality.arraySlice(imageAxis, 0, 1)
# Project the best 2D array down to a single dimension, and convert it back
# to a regular scalar image by naming each position along the axis. Note we
# provide the original band names, but slice off the first band since the
# quality band is not part of the result. Also note to get at the band names,
# we have to do some kind of reduction, but it won't really calculate pixels
# if we only access the band names.
bandNames = bands.min().bandNames().slice(1)
return best.arrayProject([bandAxis]).arrayFlatten([bandNames])
# }
# Load the l7_l1t collection for the year 2000, and make sure the first band
# is our quality measure, in this case the normalized difference values.
l7 = ee.ImageCollection('LANDSAT/LE07/C01/T1') \
.filterDate('2000-01-01', '2001-01-01')
withNd = l7.map(lambda image: image.normalizedDifference(['B4', 'B3']).addBands(image))
# Build a mosaic using the NDVI of bands 4 and 3, essentially showing the
# greenest pixels from the year 2000.
greenest = qualityMosaic(withNd)
# Select out the color bands to visualize. An interesting artifact of this
# approach is that clouds are greener than water. So all the water is white.
rgb = greenest.select(['B3', 'B2', 'B1'])
ee_layers.append(EarthEngineLayer(ee_object=rgb, vis_params={'gain':[1.4,1.4,1.1]}))
view_state = pdk.ViewState(longitude=-90.08789, latitude=16.38339, zoom=11)
###Output
_____no_output_____
###Markdown
Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map:
###Code
r = pdk.Deck(layers=ee_layers, initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`. The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically installs its dependencies, including earthengine-api and folium.
###Code
import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Array-based quality mosaic.
# Returns a mosaic built by sorting each stack of pixels by the first band
# in descending order, and taking the highest quality pixel.
# function qualityMosaic(bands) {
def qualityMosaic(bands):
# Convert to an array, and declare names for the axes and indices along the
# band axis.
array = bands.toArray()
imageAxis = 0
bandAxis = 1
qualityIndex = 0
valuesIndex = 1
# Slice the quality and values off the main array, and sort the values by the
# quality in descending order.
quality = array.arraySlice(bandAxis, qualityIndex, qualityIndex + 1)
values = array.arraySlice(bandAxis, valuesIndex)
valuesByQuality = values.arraySort(quality.multiply(-1))
# Get an image where each pixel is the array of band values where the quality
# band is greatest. Note that while the array is 2-D, the first axis is
# length one.
best = valuesByQuality.arraySlice(imageAxis, 0, 1)
# Project the best 2D array down to a single dimension, and convert it back
# to a regular scalar image by naming each position along the axis. Note we
# provide the original band names, but slice off the first band since the
# quality band is not part of the result. Also note to get at the band names,
# we have to do some kind of reduction, but it won't really calculate pixels
# if we only access the band names.
bandNames = bands.min().bandNames().slice(1)
return best.arrayProject([bandAxis]).arrayFlatten([bandNames])
# }
# Load the l7_l1t collection for the year 2000, and make sure the first band
# is our quality measure, in this case the normalized difference values.
l7 = ee.ImageCollection('LANDSAT/LE07/C01/T1') \
.filterDate('2000-01-01', '2001-01-01')
withNd = l7.map(lambda image: image.normalizedDifference(['B4', 'B3']).addBands(image))
# Build a mosaic using the NDVI of bands 4 and 3, essentially showing the
# greenest pixels from the year 2000.
greenest = qualityMosaic(withNd)
# Select out the color bands to visualize. An interesting artifact of this
# approach is that clouds are greener than water. So all the water is white.
rgb = greenest.select(['B3', 'B2', 'B1'])
Map.addLayer(rgb, {'gain': [1.4, 1.4, 1.1]}, 'Greenest')
Map.setCenter(-90.08789, 16.38339, 11)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____ |
Spark_ML/sparkml/5_linear_regression_quiz.ipynb | ###Markdown
Linear Regression QuizUse this Jupyter notebook to find the answer to the quiz in the previous section. There is an answer key in the next part of the lesson.
###Code
from pyspark.sql import SparkSession
# TODOS:
# 1) import any other libraries you might need
# 2) run the cells below to read the dataset and extract description length features
# 3) write code to answer the quiz question
from pyspark.sql.functions import concat, lit, col, avg, udf
from pyspark.ml.feature import RegexTokenizer, VectorAssembler
from pyspark.sql.types import IntegerType
from pyspark.ml.regression import LinearRegression
spark = SparkSession.builder \
.master("local") \
.appName("Creating Features") \
.getOrCreate()
###Output
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/usr/local/lib/python3.9/site-packages/pyspark/jars/spark-unsafe_2.12-3.2.0.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
22/01/13 22:51:48 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
###Markdown
Read Dataset
###Code
stack_overflow_data = 'Train_onetag_small.json'
df = spark.read.json(stack_overflow_data)
df.persist()
###Output
###Markdown
Build Description Length Features
###Code
df = df.withColumn("Desc", concat(col("Title"), lit(' '), col("Body")))
regexTokenizer = RegexTokenizer(inputCol="Desc", outputCol="words", pattern="\\W")
df = regexTokenizer.transform(df)
body_length = udf(lambda x: len(x), IntegerType())
df = df.withColumn("DescLength", body_length(df.words))
assembler = VectorAssembler(inputCols=["DescLength"], outputCol="DescVec")
df = assembler.transform(df)
number_of_tags = udf(lambda x: len(x.split(" ")), IntegerType())
df = df.withColumn("NumTags", number_of_tags(df.Tags))
###Output
_____no_output_____
###Markdown
QuestionBuild a linear regression model using the length of the combined Title + Body fields. What is the value of r^2 when fitting a model with `maxIter=5, regParam=0.0, fitIntercept=False, solver="normal"`?
###Code
# TODO: write your code to answer this question
lr = LinearRegression(maxIter=5, regParam=0.0, fitIntercept=False, solver="normal")
df.groupby("NumTags").agg(avg(col("DescLength"))).orderBy("NumTags").show()
data = df.select(col("NumTags").alias("label"), col("DescVec").alias("features"))
data.head()
lr_model = lr.fit(data)
lr_model.summary.r2
###Output
_____no_output_____
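###Markdown
As a follow-up (an added sketch, not part of the original quiz), the fitted model exposes a few more diagnostics besides r^2 through its coefficients and training summary.
###Code
# Inspect the fitted slope/intercept and the training RMSE
print("coefficients:", lr_model.coefficients)
print("intercept:", lr_model.intercept)
print("RMSE:", lr_model.summary.rootMeanSquaredError)
###Output
_____no_output_____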
###Markdown
---
###Code
data = df.select(col("NumTags").alias("label"), col("DescLength").alias("features"))
data.head()
lr_model = lr.fit(data)
df.select("*").limit(5).show()
###Output
[Stage 11:> (0 + 1) / 1]
|
notebooks/Tutorial_HDR.ipynb | ###Markdown
Integration via the Holmes-Diaconis-Ross algorithm using `LinConGauss` Outline: This notebook shows how to use the `LinConGauss` package to estimate the integral of a linearly constrained Gaussian. The procedure is the following: define the linear constraints, run subset simulation to determine the nestings, and run Holmes-Diaconis-Ross to get an unbiased estimate of the integral. Details on the method can be found in [Gessner, Kanjilal, and Hennig: Integrals over Gaussians under Linear Domain Constraints](https://arxiv.org/abs/1910.09328). __This example__ Consider a 100d shifted orthant in a standard normal space. The integral is available in closed form. We compute the integral using `LinConGauss` and compare to the ground truth. _tutorial by Alexandra Gessner, Feb 2020_
###Code
import numpy as np
import LinConGauss as lcg
###Output
_____no_output_____
###Markdown
Setting up linear constraintsThe linear constraints are defined as the roots of $M$ (here `n_lc`) linear functions$$\mathbf{f}(\mathbf{x}) = A_m^\intercal \mathbf{x} + \mathbf{b}. $$and the domain of interest is defined as the intersection of where all these functions are _positive_.In this setting we assume the linear constraints to be axis-aligned.
###Code
# Problem dimension
dim = 100
# seed for reproducibility
np.random.seed(0)
# number of linear constraints
n_lc = np.copy(dim)
# generate random linear constraints
A = np.eye(n_lc)
b = np.random.randn(n_lc, 1)
# define the linear constraints with LinConGauss
lincon = lcg.LinearConstraints(A=A, b=b)
###Output
_____no_output_____
###Markdown
Ground truth integralThe ground truth integral is just the Gaussian CDF evaluated at $b$ since$$\int_{-b}^\infty \mathcal{N} (x; 0,1) \mathrm{d}x = \int_{-\infty}^b \mathcal{N} (x; 0,1) \mathrm{d}x= \Phi(b)$$
###Code
from scipy.stats import norm
true_integral = norm.cdf(lincon.b).prod()
print(true_integral)
###Output
4.4346483368318176e-42
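###Markdown
Because the orthant probability is extremely small, it can be convenient to work on a log scale (an added sketch using `scipy.stats.norm.logcdf`, not part of the original tutorial); summing the per-dimension log-CDFs gives the log of the product above without underflow.
###Code
# log of the ground-truth integral, computed without multiplying many tiny numbers
log_true_integral = norm.logcdf(lincon.b).sum()
print(log_true_integral)
###Output
_____no_output_____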
###Markdown
Integral via LinConGaussUse subset simulation (Au&Beck 2001) to determine a sequence of shifts s.t. 1/2 of the samples fall inside the next domain. Subset simulation can also help estimate the integral, but it is biased.Therefore we then hand the obtained shift sequence to the HDR method, with which we draw more samples per nesting in order to obtain unbiased estimates of the conditional probability of each nesting.
###Code
subsetsim = lcg.multilevel_splitting.SubsetSimulation(linear_constraints=lincon,
n_samples=16,
domain_fraction=0.5,
n_skip=3)
subsetsim.run(verbose=False)
shifts = subsetsim.tracker.shift_sequence
hdr = lcg.multilevel_splitting.HDR(linear_constraints=lincon,
shift_sequence=shifts,
n_samples=512,
n_skip=9,
X_init=subsetsim.tracker.x_inits())
hdr.run(verbose=False)
hdr_integral = hdr.tracker.integral()
print(hdr_integral)
rel_error = np.abs(true_integral - hdr_integral)/(true_integral + hdr_integral)
print(rel_error)
###Output
0.5964559736492593
###Markdown
This means that the integral estimate is about 1 order of magnitude off, which is not a lot given the small scale of the problem. Sampling from the domain: We already got a few samples from the integration procedure, which we can reuse for sampling. The method uses rejection-free elliptical slice sampling to sample from the domain. Given we already know samples in the domain of interest, we do not need to run subset simulation to find these, but we can directly define the sampler.
###Code
# samples known from integration
X_int = hdr.tracker.X
# Elliptical slice sampler
sampler = lcg.sampling.EllipticalSliceSampler(n_iterations=1000,
linear_constraints=lincon,
n_skip=9,
x_init=X_int)
sampler.run()
# here are the samples
sampler.loop_state.X
###Output
_____no_output_____
###Markdown
Alan Genz's method (integration only)In this specific case, the integration method used by Alan Genz can be applied.This requires the linear constraints to be rewritable as (potentially open) box constraints of a general Gaussian.If this is the case and if furthermore samples are not required, this method is the method of choice for the given integration task.The method can be found in `scipy.stats.mvn` as `mvnun` routine. This directly calls the `FORTRAN` implementation of `MVNDST` [written by Alan Genz](http://www.math.wsu.edu/faculty/genz/homepage)
###Code
from scipy.stats import mvn
lower = -b.squeeze()
upper = np.inf * np.ones_like(lower)
mean = np.zeros((n_lc,))
cov = np.eye(n_lc)
mvn.mvnun(lower, upper, mean, cov)
###Output
_____no_output_____ |
toronto-neighbourhood-analysis.ipynb | ###Markdown
Toronto Neighborhood Analysis In this notebook we analyze the neighborhoods of Toronto. Phase 1: Collecting neighborhood data for Toronto, Canada. Phase 2: Data cleaning and wrangling. Phase 3: Analysis and report generation. Phase 1: Collecting neighborhood data of Toronto from the Wikipedia site. This includes reading the HTML and parsing the table on the page. In the HTML, the first table holds the data about the neighborhoods.
###Code
import pandas as pd
!conda install -c conda-forge geopy --yes
from geopy.geocoders import Nominatim # convert an address into latitude and longitude values
!conda install -c conda-forge folium=0.5.0 --yes # uncomment this line if you haven't completed the Foursquare API lab
import folium # map rendering library
###Output
Solving environment: done
# All requested packages already installed.
Solving environment: \
###Markdown
Import necessary libs
###Code
import requests
import numpy as np
# Matplotlib and associated plotting modules
import matplotlib.cm as cm
import matplotlib.colors as colors
# import k-means from clustering stage
from sklearn.cluster import KMeans
###Output
_____no_output_____
###Markdown
Reading the HTML and assigning the parsed tables to `tables`.
###Code
tables = pd.read_html("https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M")
tables[0].shape
###Output
_____no_output_____
###Markdown
Populating df_pre with the first table, which holds the neighborhood data for the city of Toronto
###Code
df_pre = tables[0]
df_pre.columns = ['postalcode', 'borough', 'neighborhood']
###Output
_____no_output_____
###Markdown
Replacing / with , in the neighborhood column. Filtering out the boroughs marked as "Not assigned"
###Code
df_pre['neigh'] = df_pre['neighborhood'].str.replace('/',',')
df_pre.drop('neighborhood', axis=1, inplace=True)
#Filtering 'Not assigned' and resetting the old index
df_filtered = df_pre[df_pre['borough'] != 'Not assigned'].reset_index(drop=True)
df_filtered.head()
#Testing if there is any 'Not assigned' in neighbourhood column
df_filtered[df_filtered['neigh'] == 'Not assigned']
###Output
_____no_output_____
###Markdown
Printing the final shape
###Code
df_filtered.shape
###Output
_____no_output_____
###Markdown
Phase 2: Populating the latitude and longitude
###Code
# Unable to get from the geocoder api or Nominatim apis. So reading from the csv file
df_latlon = pd.read_csv("http://cocl.us/Geospatial_data")
###Output
_____no_output_____
###Markdown
Setting 'Postal Code' as the index
###Code
df_latlon.set_index('Postal Code', inplace=True)
df_latlon.head()
# Removing the index name
df_latlon.index.name = None
# To get the latitude for the postal code
def getlat(row):
location=df_latlon.loc[row['postalcode']]
return location['Latitude']
# To get the longitude for the postal code
def getlon(row):
location=df_latlon.loc[row['postalcode']]
return location['Longitude']
###Output
_____no_output_____
###Markdown
Computing latitude and longitude
###Code
df_filtered['latitude'] = df_filtered.apply(getlat, axis=1)
df_filtered['longitude'] = df_filtered.apply(getlon, axis=1)
df_filtered.head()
###Output
_____no_output_____
###Markdown
Phase 3: Analysis Getting Latitude and Longitude of Toronto
###Code
address = 'Toronto, Canada'
geolocator = Nominatim(user_agent="ny_explorer")
location = geolocator.geocode(address)
tor_lat = location.latitude
tor_lon = location.longitude
print('The geograpical coordinate of Toronto City are {}, {}.'.format(tor_lat, tor_lon))
# create map of Toronto using latitude and longitude values
map_allarea = folium.Map(location=[tor_lat, tor_lon], zoom_start=10)
# add markers to map
for lat, lng, borough, neighborhood in zip(df_filtered['latitude'], df_filtered['longitude'], df_filtered['borough'], df_filtered['neigh']):
label = '{}, {}'.format(neighborhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_allarea)
map_allarea
###Output
_____no_output_____
###Markdown
Filtering only the boroughs that contain "Toronto"
###Code
toronto_data = df_filtered[df_filtered['borough'].str.contains("Toronto")].reset_index(drop=True)
toronto_data.head()
toronto_data.shape
# create map of the Toronto boroughs using latitude and longitude values
map_toronto = folium.Map(location=[tor_lat, tor_lon], zoom_start=11)
# add markers to map
for lat, lng, label in zip(toronto_data['latitude'], toronto_data['longitude'], toronto_data['neigh']):
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_toronto)
map_toronto
###Output
_____no_output_____
###Markdown
Starting analysis
###Code
CLIENT_ID = 'MBWHGPIUP3QCRNDQOIOOYL2Q0USBPXS4LM3MMLYOPOFGP2M5' # your Foursquare ID
CLIENT_SECRET = 'BWBX2CHRQCKGDYZ04ODBWGOST25WHN1MX3F2QGQQADIHIZW1' # your Foursquare Secret
VERSION = '20180605' # Foursquare API version
print('Your credentails:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)
###Output
Your credentails:
CLIENT_ID: MBWHGPIUP3QCRNDQOIOOYL2Q0USBPXS4LM3MMLYOPOFGP2M5
CLIENT_SECRET:BWBX2CHRQCKGDYZ04ODBWGOST25WHN1MX3F2QGQQADIHIZW1
###Markdown
Function to get nearby venues based on latitude, longitude, and radius
###Code
LIMIT=100
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
###Output
_____no_output_____
###Markdown
Getting nearby venues for all postal codes
###Code
# type your answer here
toronto_venues = getNearbyVenues(names=toronto_data['neigh'],
latitudes=toronto_data['latitude'],
longitudes=toronto_data['longitude']
)
print(toronto_venues.shape)
toronto_venues.head()
toronto_venues.groupby('Neighborhood').count()
print('There are {} uniques categories.'.format(len(toronto_venues['Venue Category'].unique())))
###Output
There are 231 uniques categories.
###Markdown
Creating a one-hot encoding which clearly differentiates each category
###Code
# one hot encoding
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
toronto_onehot['Neighborhood'] = toronto_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1])
toronto_onehot = toronto_onehot[fixed_columns]
toronto_onehot.head()
toronto_onehot.shape
###Output
_____no_output_____
###Markdown
Finding the mean frequency of each venue category for every neighbourhood
###Code
toronto_grouped = toronto_onehot.groupby('Neighborhood').mean().reset_index()
toronto_grouped
toronto_grouped.shape
num_top_venues = 5
for hood in toronto_grouped['Neighborhood']:
print("----"+hood+"----")
temp = toronto_grouped[toronto_grouped['Neighborhood'] == hood].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
print('\n')
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = toronto_grouped['Neighborhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
###Output
_____no_output_____
###Markdown
Clustering
###Code
# set number of clusters
kclusters = 5
toronto_grouped_clustering = toronto_grouped.drop('Neighborhood', axis=1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
# add clustering labels
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
###Output
_____no_output_____
###Markdown
Joining the cluster data with the previous latitude data
###Code
toronto_merged = toronto_data
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
toronto_merged = toronto_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='neigh')
toronto_merged.head() # check the last columns!
# create map
map_clusters = folium.Map(location=[tor_lat, tor_lon], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged['latitude'], toronto_merged['longitude'], toronto_merged['neigh'], toronto_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
Cluster 1
###Code
toronto_merged.loc[toronto_merged['Cluster Labels'] == 0, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 2
###Code
toronto_merged.loc[toronto_merged['Cluster Labels'] == 1, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 3
###Code
toronto_merged.loc[toronto_merged['Cluster Labels'] == 2, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 4
###Code
toronto_merged.loc[toronto_merged['Cluster Labels'] == 3, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 5
###Code
toronto_merged.loc[toronto_merged['Cluster Labels'] == 4, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____ |
pipelining/ale-exp1/ale-exp1_cslg-rand-1000_1w_ale_plotting.ipynb | ###Markdown
Experiment Description1-way ALE.> This notebook is for experiment \ and data sample \. Initialization
###Code
%load_ext autoreload
%autoreload 2
import numpy as np, sys, os
in_colab = 'google.colab' in sys.modules
# fetching code and data (if you are using Colab)
if in_colab:
!rm -rf s2search
!git clone --branch pipelining https://github.com/youyinnn/s2search.git
sys.path.insert(1, './s2search')
%cd s2search/pipelining/ale-exp1/
pic_dir = os.path.join('.', 'plot')
if not os.path.exists(pic_dir):
os.mkdir(pic_dir)
###Output
_____no_output_____
###Markdown
Loading data
###Code
sys.path.insert(1, '../../')
import numpy as np, sys, os, pandas as pd
from getting_data import read_conf
from s2search_score_pdp import pdp_based_importance
sample_name = 'cslg-rand-1000'
f_list = [
'title', 'abstract', 'venue', 'authors',
'year',
'n_citations'
]
ale_xy = {}
ale_metric = pd.DataFrame(columns=['feature_name', 'ale_range', 'ale_importance'])
for f in f_list:
file = os.path.join('.', 'scores', f'{sample_name}_1w_ale_{f}.npz')
if os.path.exists(file):
nparr = np.load(file)
quantile = nparr['quantile']
ale_result = nparr['ale_result']
values_for_rug = nparr.get('values_for_rug')
ale_xy[f] = {
'x': quantile,
'y': ale_result,
'rug': values_for_rug,
'weird': ale_result[len(ale_result) - 1] > 20
}
if f != 'year' and f != 'n_citations':
ale_xy[f]['x'] = list(range(len(quantile)))
ale_xy[f]['numerical'] = False
else:
ale_xy[f]['xticks'] = quantile
ale_xy[f]['numerical'] = True
ale_metric.loc[len(ale_metric.index)] = [f, np.max(ale_result) - np.min(ale_result), pdp_based_importance(ale_result, f)]
# print(len(ale_result))
print(ale_metric.sort_values(by=['ale_importance'], ascending=False))
print()
des, sample_configs, sample_from_other_exp = read_conf('.')
if sample_configs['cslg-rand-1000'][0].get('quantiles') != None:
print(f'The following feature choose quantiles as ale bin size:')
for k in sample_configs['cslg-rand-1000'][0]['quantiles'].keys():
print(f" {k} with {sample_configs['cslg-rand-1000'][0]['quantiles'][k]}% quantile, {len(ale_xy[k]['x'])} bins are used")
if sample_configs['cslg-rand-1000'][0].get('intervals') != None:
print(f'The following feature choose fixed amount as ale bin size:')
for k in sample_configs['cslg-rand-1000'][0]['intervals'].keys():
print(f" {k} with {sample_configs['cslg-rand-1000'][0]['intervals'][k]} values, {len(ale_xy[k]['x'])} bins are used")
###Output
feature_name ale_range ale_importance
0 title 17.323471 4.289228
1 abstract 11.515200 4.243614
2 venue 7.239820 0.829445
4 year 1.808177 0.460942
5 n_citations 1.194592 0.217293
3 authors 0.000000 0.000000
The following feature choose quantiles as ale bin size:
year with 1% quantile, 100 bins are used
n_citations with 1% quantile, 100 bins are used
title with 1% quantile, 100 bins are used
abstract with 1% quantile, 100 bins are used
authors with 1% quantile, 100 bins are used
venue with 1% quantile, 100 bins are used
###Markdown
ALE Plots
###Code
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.ticker import MaxNLocator
categorical_plot_conf = [
{
'xlabel': 'Title',
'ylabel': 'ALE',
'ale_xy': ale_xy['title']
},
{
'xlabel': 'Abstract',
'ale_xy': ale_xy['abstract']
},
{
'xlabel': 'Authors',
'ale_xy': ale_xy['authors'],
# 'zoom': {
# 'inset_axes': [0.3, 0.3, 0.47, 0.47],
# 'x_limit': [89, 93],
# 'y_limit': [-1, 14],
# }
},
{
'xlabel': 'Venue',
'ale_xy': ale_xy['venue'],
# 'zoom': {
# 'inset_axes': [0.3, 0.3, 0.47, 0.47],
# 'x_limit': [89, 93],
# 'y_limit': [-1, 13],
# }
},
]
numerical_plot_conf = [
{
'xlabel': 'Year',
'ylabel': 'ALE',
'ale_xy': ale_xy['year'],
# 'zoom': {
# 'inset_axes': [0.15, 0.4, 0.4, 0.4],
# 'x_limit': [2019, 2023],
# 'y_limit': [1.9, 2.1],
# },
},
{
'xlabel': 'Citations',
'ale_xy': ale_xy['n_citations'],
# 'zoom': {
# 'inset_axes': [0.4, 0.65, 0.47, 0.3],
# 'x_limit': [-1000.0, 12000],
# 'y_limit': [-0.1, 1.2],
# },
},
]
def pdp_plot(confs, title):
fig, axes_list = plt.subplots(nrows=1, ncols=len(confs), figsize=(20, 5), dpi=100)
subplot_idx = 0
# plt.suptitle(title, fontsize=20, fontweight='bold')
# plt.autoscale(False)
for conf in confs:
        axes = axes_list if len(confs) == 1 else axes_list[subplot_idx]
sns.rugplot(conf['ale_xy']['rug'], ax=axes, height=0.02)
axes.axhline(y=0, color='k', linestyle='-', lw=0.8)
axes.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])
axes.grid(alpha = 0.4)
# axes.set_ylim([-2, 20])
axes.xaxis.set_major_locator(MaxNLocator(integer=True))
# axes.yaxis.set_major_locator(MaxNLocator(integer=True))
if ('ylabel' in conf):
axes.set_ylabel(conf.get('ylabel'), fontsize=20, labelpad=10)
# if ('xticks' not in conf['ale_xy'].keys()):
# xAxis.set_ticklabels([])
axes.set_xlabel(conf['xlabel'], fontsize=16, labelpad=10)
if not (conf['ale_xy']['weird']):
if (conf['ale_xy']['numerical']):
axes.set_ylim([-1.5, 1.5])
pass
else:
axes.set_ylim([-7, 16])
pass
if 'zoom' in conf:
axins = axes.inset_axes(conf['zoom']['inset_axes'])
axins.xaxis.set_major_locator(MaxNLocator(integer=True))
axins.yaxis.set_major_locator(MaxNLocator(integer=True))
axins.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])
axins.set_xlim(conf['zoom']['x_limit'])
axins.set_ylim(conf['zoom']['y_limit'])
axins.grid(alpha=0.3)
rectpatch, connects = axes.indicate_inset_zoom(axins)
connects[0].set_visible(False)
connects[1].set_visible(False)
connects[2].set_visible(True)
connects[3].set_visible(True)
subplot_idx += 1
pdp_plot(categorical_plot_conf, f"ALE for {len(categorical_plot_conf)} categorical features")
plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-categorical.png'), facecolor='white', transparent=False, bbox_inches='tight')
pdp_plot(numerical_plot_conf, f"ALE for {len(numerical_plot_conf)} numerical features")
plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-numerical.png'), facecolor='white', transparent=False, bbox_inches='tight')
###Output
_____no_output_____ |
bank_customer_churn_pipeline.ipynb | ###Markdown
Download_csvThis component downloads the dataset from the given url and outputs a csv file for downstream components
###Code
@component(
base_image='yinanli617/customer-churn:latest',
output_component_file='./components/download_csv.yaml',
)
def download_csv(url: str, output_csv: Output[Dataset]):
import urllib.request
import pandas as pd
urllib.request.urlretrieve(url=url,
filename=output_csv.path,
)
###Output
_____no_output_____
###Markdown
Train_test_splitThis component splits the original dataset into a training set and a test set. The test set is meant to be used to evaluate the performance of the model.
###Code
@component(
base_image='yinanli617/customer-churn:latest',
output_component_file='./components/train_test_split.yaml',
)
def train_test_split(input_csv: Input[Dataset],
seed: int,
target: str,
train_csv: Output[Dataset],
test_csv: Output[Dataset]
):
from sklearn.model_selection import train_test_split
import pandas as pd
df = pd.read_csv(input_csv.path)
train, test = train_test_split(df,
test_size=0.2,
shuffle=True,
random_state=seed,
stratify=df[target],
)
train_df = pd.DataFrame(train)
train_df.columns = df.columns
test_df = pd.DataFrame(test)
test_df.columns = df.columns
train_df.to_csv(train_csv.path, index=False)
test_df.to_csv(test_csv.path, index=False)
###Output
_____no_output_____
###Markdown
PreprocessingIn this step, we preprocess the training dataset so that we can later feed the data to our models. Specifically, we use a `OneHotEncoder` to encode the `categorical_features` and a `StandardScaler` to standardize the `numerical_features`. We will fit the encoder and the scaler to the training data, and save them as outputs which will be used to transform the test dataset later on.
###Code
@component(
base_image='yinanli617/customer-churn:latest',
output_component_file='./components/preprocessing.yaml',
)
def preprocessing(input_csv: Input[Dataset],
numerical_features: str,
categorical_features: str,
target: str,
features: Output[Dataset],
labels: Output[Dataset],
scaler_obj: Output[Model],
encoder_obj: Output[Model],
):
import pandas as pd
import numpy as np
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from pickle import dump
from ast import literal_eval
# The features are stored as strings. We do the trick with literal_eval to convert them to the
# correct format (list)
categorical_features = literal_eval(categorical_features)
numerical_features = literal_eval(numerical_features)
df = pd.read_csv(input_csv.path)
X_cat = df[categorical_features]
X_num = df[numerical_features]
y = df[target]
scaler = StandardScaler()
X_num = scaler.fit_transform(X_num)
encoder = OneHotEncoder()
X_cat = encoder.fit_transform(X_cat).toarray()
X = np.concatenate([X_num, X_cat], axis=1)
pd.DataFrame(X).to_csv(features.path, index=False)
y.to_csv(labels.path, index=False)
# To prevent leakage, the scaler and the encoder should not see the test dataset.
# We save the scaler and encoder that have been fit to the training dataset and
# use it directly on the test dataset later on.
dump(scaler, open(scaler_obj.path, 'wb'))
dump(encoder, open(encoder_obj.path, 'wb'))
###Output
_____no_output_____
###Markdown
Train baseline modelsHere we feed our preprocessed data to 3 baseline models - logistic regression, random forests, and K nearest neighbors. We perform cross-validation with each model and use grid search to find the optimal hyperparameters. The number of folds for CV and the hyperparameter candidates are parameters to be fed to the pipeline. Each model component will output a summary of the CV results as well as the fitted model. They also output the selected metrics so that we can compare model performances in the next step.
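A note on the hyperparameter grids: since pipeline parameters must be passed as simple types, each grid is supplied to the components as a Python dict literal encoded in a string and recovered with `literal_eval`. The cell below is an illustrative sketch only (it is not part of the original pipeline run, and the particular grid values are placeholder assumptions).
###Code
# Illustrative sketch: what the *_params pipeline arguments are expected to look like.
# Each one is a dict literal encoded as a string; literal_eval inside the component
# turns it back into a param_grid that GridSearchCV can consume.
from ast import literal_eval

logistic_regression_params = "{'C': [0.01, 0.1, 1, 10], 'penalty': ['l1', 'l2']}"
random_forests_params = "{'n_estimators': [100, 300], 'max_depth': [5, 10, None]}"
knn_params = "{'n_neighbors': [3, 5, 11], 'weights': ['uniform', 'distance']}"

parsed_grid = literal_eval(logistic_regression_params)  # -> {'C': [...], 'penalty': [...]}
###Output
_____no_output_____
###Markdown
The three baseline training components are defined next.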
###Code
@component(
base_image='yinanli617/customer-churn:latest',
output_component_file='./components/logistic_regression.yaml',
)
def logistic_regression(features: Input[Dataset],
labels: Input[Dataset],
param_grid: str,
num_folds: int,
scoring: str,
seed: int,
best_model: Output[Model],
best_params: Output[Dataset],
best_score: Output[Metrics],
cv_results: Output[Dataset],
) -> float:
from sklearn.linear_model import LogisticRegression
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV
from pickle import dump
from ast import literal_eval
lr = LogisticRegression(solver='liblinear',
random_state=seed,
)
param_grid = literal_eval(param_grid)
grid_search = GridSearchCV(lr,
param_grid=param_grid,
scoring=scoring,
refit=True, # Use the whole dataset to retrain after finding the best params
cv=num_folds,
verbose=2,
)
X, y = pd.read_csv(features.path).values, pd.read_csv(labels.path).values
grid_search.fit(X, y)
pd.DataFrame(grid_search.cv_results_).to_csv(cv_results.path, index=False)
best_params_ = grid_search.best_params_
for key, value in best_params_.items():
best_params_[key] = [value]
pd.DataFrame(best_params_).to_csv(best_params.path, index=False)
dump(grid_search.best_estimator_, open(best_model.path, 'wb'))
best_score.log_metric(scoring, grid_search.best_score_)
return grid_search.best_score_
@component(
base_image='yinanli617/customer-churn:latest',
output_component_file='./components/random_forests.yaml',
)
def random_forests(features: Input[Dataset],
labels: Input[Dataset],
param_grid: str,
num_folds: int,
scoring: str,
seed: int,
best_model: Output[Model],
best_params: Output[Dataset],
best_score: Output[Metrics],
cv_results: Output[Dataset],
) -> float:
from sklearn.ensemble import RandomForestClassifier
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV
from pickle import dump
from ast import literal_eval
rf = RandomForestClassifier(random_state=seed)
param_grid = literal_eval(param_grid)
grid_search = GridSearchCV(rf,
param_grid=param_grid,
scoring=scoring,
refit=True, # Use the whole dataset to retrain after finding the best params
cv=num_folds,
verbose=2,
)
X, y = pd.read_csv(features.path).values, pd.read_csv(labels.path).values
grid_search.fit(X, y)
pd.DataFrame(grid_search.cv_results_).to_csv(cv_results.path, index=False)
best_params_ = grid_search.best_params_
for key, value in best_params_.items():
best_params_[key] = [value]
pd.DataFrame(best_params_).to_csv(best_params.path, index=False)
dump(grid_search.best_estimator_, open(best_model.path, 'wb'))
best_score.log_metric(scoring, grid_search.best_score_)
return grid_search.best_score_
@component(
base_image='yinanli617/customer-churn:latest',
output_component_file='./components/k_nearest_neighbors.yaml',
)
def knn(features: Input[Dataset],
labels: Input[Dataset],
param_grid: str,
num_folds: int,
scoring: str,
best_model: Output[Model],
best_params: Output[Dataset],
best_score: Output[Metrics],
cv_results: Output[Dataset],
) -> float:
from sklearn.neighbors import KNeighborsClassifier
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV
from pickle import dump
from ast import literal_eval
k_nn = KNeighborsClassifier()
param_grid = literal_eval(param_grid)
grid_search = GridSearchCV(k_nn,
param_grid=param_grid,
scoring=scoring,
refit=True, # Use the whole dataset to retrain after finding the best params
cv=num_folds,
verbose=2,
)
X, y = pd.read_csv(features.path).values, pd.read_csv(labels.path).values
grid_search.fit(X, y)
pd.DataFrame(grid_search.cv_results_).to_csv(cv_results.path, index=False)
best_params_ = grid_search.best_params_
for key, value in best_params_.items():
best_params_[key] = [value]
pd.DataFrame(best_params_).to_csv(best_params.path, index=False)
dump(grid_search.best_estimator_, open(best_model.path, 'wb'))
best_score.log_metric(scoring, grid_search.best_score_)
return grid_search.best_score_
###Output
_____no_output_____
###Markdown
Evaluate performance with test datasetFinally, in this step we compare the metrics from the 3 baseline models. We select the best model and use it to make predictions on the unseen test dataset. We also use the previously saved scaler and encoder to preprocess the test dataset before making the predictions. We output several evaluation metrics which can be visualized in the Kubeflow Pipelines UI.
###Code
@component(
base_image='yinanli617/customer-churn:latest',
output_component_file='./components/predict_test_data.yaml',
)
def predict_test_data(test_csv: Input[Dataset],
scaler_obj: Input[Model],
encoder_obj: Input[Model],
lr_model: Input[Model],
rf_model: Input[Model],
knn_model: Input[Model],
lr_score: float,
rf_score: float,
knn_score: float,
categorical_features: str,
numerical_features: str,
target: str,
metrics: Output[Metrics],
):
import pandas as pd
import numpy as np
from pickle import load
from ast import literal_eval
from sklearn.metrics import accuracy_score, f1_score, recall_score, precision_score, roc_auc_score
categorical_features = literal_eval(categorical_features)
numerical_features = literal_eval(numerical_features)
df = pd.read_csv(test_csv.path)
X_cat = df[categorical_features]
X_num = df[numerical_features]
y = df[target]
scaler = load(open(scaler_obj.path, 'rb'))
X_num = scaler.transform(X_num)
encoder = load(open(encoder_obj.path, 'rb'))
X_cat = encoder.transform(X_cat).toarray()
X = np.concatenate([X_num, X_cat], axis=1)
models_dict = {lr_score: lr_model,
rf_score: rf_model,
knn_score: knn_model,
}
best_model = models_dict[max(models_dict.keys())]
model = load(open(best_model.path, 'rb'))
y_pred = model.predict(X)
y_proba = model.predict_proba(X)[:, 1]
accuracy = accuracy_score(y, y_pred)
f1 = f1_score(y, y_pred)
recall = recall_score(y, y_pred)
precision = precision_score(y, y_pred)
roc_auc = roc_auc_score(y, y_proba)
metrics.log_metric('Accuracy', accuracy)
metrics.log_metric('F1 score', f1)
metrics.log_metric('Recall', recall)
metrics.log_metric('Precision', precision)
metrics.log_metric('ROC_AUC', roc_auc)
###Output
_____no_output_____
###Markdown
Assemble the pipeline with the defined components
###Code
@dsl.pipeline(
name='bank-customer-churn-pipeline',
# You can optionally specify your own pipeline_root
pipeline_root='gs://kfp-yli/customer-churn',
)
def my_pipeline(url: str,
num_folds: int,
target: str,
numerical_features: str,
categorical_features: str,
scoring: str,
logistic_regression_params: str,
random_forests_params: str,
knn_params: str,
seed: int,
):
download_csv_task = download_csv(url=url)
train_test_split_task = train_test_split(input_csv=download_csv_task.outputs['output_csv'],
seed=seed,
target=target,
)
train_preprocessing_task = preprocessing(input_csv=train_test_split_task.outputs['train_csv'],
numerical_features=numerical_features,
categorical_features=categorical_features,
target=target,
)
logistic_regression_task = logistic_regression(features=train_preprocessing_task.outputs['features'],
labels=train_preprocessing_task.outputs['labels'],
scoring=scoring,
seed=seed,
num_folds=num_folds,
param_grid=logistic_regression_params,
)
random_forests_task = random_forests(features=train_preprocessing_task.outputs['features'],
labels=train_preprocessing_task.outputs['labels'],
scoring=scoring,
seed=seed,
num_folds=num_folds,
param_grid=random_forests_params,
)
knn_task = knn(features=train_preprocessing_task.outputs['features'],
labels=train_preprocessing_task.outputs['labels'],
scoring=scoring,
num_folds=num_folds,
param_grid=knn_params,
)
predict_test_data_task = predict_test_data(test_csv=train_test_split_task.outputs['test_csv'],
scaler_obj=train_preprocessing_task.outputs['scaler_obj'],
encoder_obj=train_preprocessing_task.outputs['encoder_obj'],
lr_model=logistic_regression_task.outputs['best_model'],
rf_model=random_forests_task.outputs['best_model'],
knn_model=knn_task.outputs['best_model'],
lr_score=logistic_regression_task.outputs['output'],
rf_score=random_forests_task.outputs['output'],
knn_score=knn_task.outputs['output'],
categorical_features=categorical_features,
numerical_features=numerical_features,
target=target
)
###Output
_____no_output_____
###Markdown
Compile the pipelineThe output YAML file can be uploaded to the Kubeflow Pipelines UI
###Code
kfp.compiler.Compiler(mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE).compile(
pipeline_func=my_pipeline,
package_path='./pipeline/customer-churn_pipeline.yaml')
###Output
_____no_output_____ |
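###Markdown
Beyond uploading the YAML manually, the compiled package can also be submitted programmatically. The cell below is an illustrative sketch only and was not run as part of this notebook: the KFP host URL, the dataset URL, the column names and all grid values are placeholder assumptions that must be adapted to your own deployment and data.
###Code
# Illustrative sketch (not executed here): submit the compiled pipeline to a
# Kubeflow Pipelines endpoint. Host, dataset URL, column names and grids are
# placeholders/assumptions.
import kfp

client = kfp.Client(host='http://localhost:8080')  # assumed KFP endpoint
run = client.create_run_from_pipeline_package(
    './pipeline/customer-churn_pipeline.yaml',
    arguments={
        'url': 'https://example.com/bank_churn.csv',  # placeholder dataset URL
        'num_folds': 5,
        'target': 'Exited',  # assumed label column name
        'numerical_features': "['CreditScore', 'Age', 'Balance', 'EstimatedSalary']",  # assumed
        'categorical_features': "['Geography', 'Gender']",  # assumed
        'scoring': 'roc_auc',
        'logistic_regression_params': "{'C': [0.01, 0.1, 1, 10]}",
        'random_forests_params': "{'n_estimators': [100, 300], 'max_depth': [5, 10]}",
        'knn_params': "{'n_neighbors': [3, 5, 11]}",
        'seed': 0,
    },
)
###Output
_____no_output_____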
notebooks/demo_toy_experiment.ipynb | ###Markdown
ICML 2018 Toy Experiment
###Code
# Setup parameters for experiment
data_name = 'concentric_circles'
n_train = 1000
cv = 3 # Number of cv splits
random_state = 0
import multiprocessing
n_jobs = multiprocessing.cpu_count()
print('n_jobs=%d' % n_jobs)
# Imports and basic setup of logging and seaborn
%load_ext autoreload
%autoreload 2
from __future__ import division
from __future__ import print_function
import sys, os, logging
import pickle
import time
import warnings
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.base import clone
from sklearn.externals.joblib import Parallel, delayed
from sklearn.utils import check_random_state
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
sys.path.append('..') # Enable importing from package ddl without installing ddl
from ddl.base import CompositeDestructor
from ddl.datasets import make_toy_data
from ddl.deep import DeepDestructorCV
from ddl.independent import IndependentDestructor, IndependentDensity, IndependentInverseCdf
from ddl.univariate import ScipyUnivariateDensity, HistogramUnivariateDensity
from ddl.linear import (LinearProjector, RandomOrthogonalEstimator,
BestLinearReconstructionDestructor)
from ddl.autoregressive import AutoregressiveDestructor
from ddl.mixture import GaussianMixtureDensity, FirstFixedGaussianMixtureDensity
from ddl.tree import TreeDestructor, TreeDensity, RandomTreeEstimator
from ddl.externals.mlpack import MlpackDensityTreeEstimator
# Setup seaborn
try:
import seaborn as sns
except ImportError:
print('Could not import seaborn so colors may be different')
else:
sns.set()
sns.despine()
# Setup logging
logging.basicConfig(stream=sys.stdout)
#logging.captureWarnings(True)
logging.getLogger('ddl').setLevel(logging.INFO)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# BASELINE SHALLOW DESTRUCTORS
gaussian_full = CompositeDestructor(
destructors=[
LinearProjector(
linear_estimator=PCA(),
orthogonal=False,
),
IndependentDestructor(),
],
)
mixture_20 = AutoregressiveDestructor(
density_estimator=GaussianMixtureDensity(
covariance_type='spherical',
n_components=20,
)
)
random_tree = CompositeDestructor(
destructors=[
IndependentDestructor(),
TreeDestructor(
tree_density=TreeDensity(
tree_estimator=RandomTreeEstimator(min_samples_leaf=20, max_leaf_nodes=50),
node_destructor=IndependentDestructor(
independent_density=IndependentDensity(
univariate_estimators=HistogramUnivariateDensity(
bins=10, alpha=10, bounds=[0,1]
)
)
)
)
)
]
)
density_tree = CompositeDestructor(
destructors=[
IndependentDestructor(),
TreeDestructor(
tree_density=TreeDensity(
tree_estimator=MlpackDensityTreeEstimator(min_samples_leaf=10),
uniform_weight=0.001,
)
)
]
)
baseline_destructors = [gaussian_full, mixture_20, random_tree, density_tree]
baseline_names = ['Gaussian', 'Mixture', 'SingleRandTree', 'SingleDensityTree']
# LINEAR DESTRUCTORS
alpha_histogram = [1, 10, 100]
random_linear_projector = LinearProjector(
linear_estimator=RandomOrthogonalEstimator(), orthogonal=True
)
canonical_histogram_destructors = [
IndependentDestructor(
independent_density=IndependentDensity(
univariate_estimators=HistogramUnivariateDensity(bins=20, bounds=[0, 1], alpha=a)
)
)
for a in alpha_histogram
]
linear_destructors = [
DeepDestructorCV(
init_destructor=IndependentDestructor(),
canonical_destructor=CompositeDestructor(destructors=[
IndependentInverseCdf(), # Project to inf real space
random_linear_projector, # Random linear projector
IndependentDestructor(), # Project to canonical space
destructor, # Histogram destructor in canonical space
]),
n_extend=20, # Need to extend since random projections
)
for destructor in canonical_histogram_destructors
]
linear_names = ['RandLin (%g)' % a for a in alpha_histogram]
# MIXTURE DESTRUCTORS
fixed_weight = [0.1, 0.5, 0.9]
mixture_destructors = [
CompositeDestructor(destructors=[
IndependentInverseCdf(),
AutoregressiveDestructor(
density_estimator=FirstFixedGaussianMixtureDensity(
covariance_type='spherical',
n_components=20,
fixed_weight=w,
)
)
])
for w in fixed_weight
]
# Make deep destructors
mixture_destructors = [
DeepDestructorCV(
init_destructor=IndependentDestructor(),
canonical_destructor=destructor,
n_extend=5,
)
for destructor in mixture_destructors
]
mixture_names = ['GausMix (%.2g)' % w for w in fixed_weight]
# TREE DESTRUCTORS
# Random trees
histogram_alpha = [1, 10, 100]
tree_destructors = [
TreeDestructor(
tree_density=TreeDensity(
tree_estimator=RandomTreeEstimator(
max_leaf_nodes=4
),
node_destructor=IndependentDestructor(
independent_density=IndependentDensity(
univariate_estimators=HistogramUnivariateDensity(
alpha=a, bins=10, bounds=[0,1]
)
)
),
)
)
for a in histogram_alpha
]
tree_names = ['RandTree (%g)' % a for a in histogram_alpha]
# Density trees using mlpack
tree_uniform_weight = [0.1, 0.5, 0.9]
tree_destructors.extend([
TreeDestructor(
tree_density=TreeDensity(
tree_estimator=MlpackDensityTreeEstimator(min_samples_leaf=10),
uniform_weight=w,
)
)
for w in tree_uniform_weight
])
tree_names.extend(['DensityTree (%.2g)' % w for w in tree_uniform_weight])
# Add random rotation to tree destructors
tree_destructors = [
CompositeDestructor(destructors=[
IndependentInverseCdf(),
LinearProjector(linear_estimator=RandomOrthogonalEstimator()),
IndependentDestructor(),
destructor,
])
for destructor in tree_destructors
]
# Make deep destructors
tree_destructors = [
DeepDestructorCV(
init_destructor=IndependentDestructor(),
canonical_destructor=destructor,
# Density trees don't need to extend as much as random trees
n_extend=50 if 'Rand' in name else 5,
)
for destructor, name in zip(tree_destructors, tree_names)
]
# Make dataset and create train/test splits
n_samples = 2 * n_train
D = make_toy_data(data_name, n_samples=n_samples, random_state=random_state)
X_train = D.X[:n_train]
y_train = D.y[:n_train] if D.y is not None else None
X_test = D.X[n_train:]
y_test = D.y[n_train:] if D.y is not None else None
def _fit_and_score(data_name, destructor, destructor_name, n_train, random_state=0):
"""Simple function to fit and score a destructor."""
# Fix random state of global generator so repeatable if destructors are random
rng = check_random_state(random_state)
old_random_state = np.random.get_state()
np.random.seed(rng.randint(2 ** 32, dtype=np.uint32))
try:
# Fit destructor
start_time = time.time()
destructor.fit(X_train)
train_time = time.time() - start_time
except RuntimeError as e:
# Handle MLPACK error
if 'mlpack' not in str(e).lower():
raise e
warnings.warn('Skipping density tree destructors because of MLPACK error "%s". '
'Using dummy IndependentDestructor() instead.' % str(e))
destructor = CompositeDestructor([IndependentDestructor()]).fit(X_train)
train_time = 0
train_score = -np.inf
test_score = -np.inf
score_time = 0
else:
# Get scores
start_time = time.time()
train_score = destructor.score(X_train)
test_score = destructor.score(X_test)
score_time = time.time() - start_time
logger.debug('train=%.3f, test=%.3f, train_time=%.3f, score_time=%.3f, destructor=%s, data_name=%s'
% (train_score, test_score, train_time, score_time, destructor_name, data_name))
# Reset random state
np.random.set_state(old_random_state)
return dict(fitted_destructor=destructor,
destructor_name=destructor_name,
train_score=train_score,
test_score=test_score)
# Collect all destructors and set CV parameter
destructors = baseline_destructors + linear_destructors + mixture_destructors + tree_destructors
destructor_names = baseline_names + linear_names + mixture_names + tree_names
for d in destructors:
if 'cv' in d.get_params():
d.set_params(cv=cv)
# Fit and score destructor
results_arr = Parallel(n_jobs=n_jobs)(
delayed(_fit_and_score)(
data_name, destructor, destructor_name, n_train, random_state=random_state,
)
for di, (destructor, destructor_name) in enumerate(zip(destructors, destructor_names))
)
# Compile results for plotting
def _add_val_over_bar(ax, vals, fmt='%.3f'):
"""Add value text over matplotlib bar chart."""
for v, p in zip(vals, ax.patches):
height = p.get_height()
val_str = fmt % v
if '0.' in val_str:
val_str = val_str[1:]
ax.text(p.get_x() + p.get_width() / 2.0, height + p.get_height() * 0.02,
val_str, ha='center', fontsize=13)
# Get scores, number of layers and destructor names
exp_test_scores = np.exp([res['test_score'] for res in results_arr])
n_layers=np.array([
res['fitted_destructor'].best_n_layers_
if hasattr(res['fitted_destructor'], 'best_n_layers_') else 1
for res in results_arr
])
labels = destructor_names
x_bar = np.arange(len(labels))
# Show result plot
figsize = 8 * np.array([1, 1]) * np.array([len(labels) / 16.0, 0.5])
fig, axes = plt.subplots(2, 1, figsize=figsize, dpi=300, sharex='col',
gridspec_kw=dict(height_ratios=[3, 1]))
axes[0].bar(x_bar, exp_test_scores, color=sns.color_palette()[0])
axes[0].set_ylabel('Geom. Mean Likelihood')
axes[0].set_title('%s Dataset' % data_name.replace('_', ' ').title())
_add_val_over_bar(axes[0], exp_test_scores)
axes[1].bar(x_bar, n_layers, color=sns.color_palette()[1])
axes[1].set_ylabel('# of Layers')
_add_val_over_bar(axes[1], n_layers, fmt='%d')
# Rotate tick labels
plt.xticks(x_bar, ['%s' % l for l in labels])
for item in plt.gca().get_xticklabels():
item.set_rotation(30)
item.set_horizontalalignment('right')
# Uncomment below to save png images into notebook folder
#plt.savefig('bar_%s.png' % D.name, bbox_inches='tight')
plt.show()
# Select best destructors of main groups (linear, mixture, tree) and precompute transforms for figure below
selected_arr = [
res for res in results_arr if res['destructor_name'] in [
'RandLin (100)', 'GausMix (0.5)', 'RandTree (100)', 'DensityTree (0.9)'
]
]
def _add_transform(res):
res['Z_train'] = res['fitted_destructor'].transform(X_train)
res['Z_test'] = res['fitted_destructor'].transform(X_test)
return res
selected_arr = [_add_transform(res) for res in selected_arr]
# Create figure for destroyed samples (train and test)
def _clean_axis(ax, limits=None):
if limits is not None:
for i, lim in enumerate(limits):
eps = 0.01 * (lim[1] - lim[0])
lim = [lim[0] - eps, lim[1] + eps]
if i == 0:
ax.set_xlim(lim)
else:
ax.set_ylim(lim)
ax.set_xticks([])
ax.set_yticks([])
ax.set_aspect('equal', 'box')
def _scatter_X_y(X, y, ax, **kwargs):
if 's' not in kwargs:
kwargs['s'] = 18
if y is not None:
for label in np.unique(y):
ax.scatter(X[y == label, 0], X[y == label, 1], **kwargs)
else:
ax.scatter(X[:, 0], X[:, 1])
fig, axes_mat = plt.subplots(2, len(selected_arr), figsize=(11, 5.7))
axes_mat = axes_mat.reshape(2, -1).transpose()
for i, (res, axes) in enumerate(zip(selected_arr, axes_mat)):
for split, X_split, y_split, ax in zip(
['Train', 'Test'], [X_train, X_test], [y_train, y_test], axes
):
_scatter_X_y(res['Z_%s' % split.lower()], y_split, ax, s=10)
_clean_axis(ax, limits=[[0, 1], [0, 1]])
if split == 'Train':
ax.set_title(res['destructor_name'], fontsize=16)
if i == 0:
ax.set_ylabel(split, fontsize=20)
plt.tight_layout()
plt.show()
# Create figure to show progression across stages
selected_res = next(res for res in results_arr if res['destructor_name'] == 'DensityTree (0.9)')
selected_destructor = selected_res['fitted_destructor']
n_layers = len(selected_destructor.fitted_destructors_)
disp_layers = np.minimum(n_layers - 1, np.logspace(0, np.log10(n_layers - 1), 3, endpoint=True, dtype=np.int))
disp_layers = np.concatenate(([0], disp_layers))
fig, axes = plt.subplots(2, len(disp_layers), figsize=np.array([11, 6]))
axes = axes.transpose()
for li, axes_col in zip(disp_layers, axes):
partial_idx = np.arange(li + 1)
if li == 0:
# Special case to show original raw data
Z_partial_train = X_train
title = 'Layer 0'
axes_col[0].set_ylabel('Train Data', fontsize=20)
axes_col[1].set_ylabel('Implicit Density', fontsize=20)
else:
Z_partial_train = selected_destructor.transform(X_train, partial_idx=partial_idx)
title = 'Layer %d' % (li + 1)
# Create grid of points (extend slightly beyond min and maximum of data)
n_query = 100
perc_extend = 0.02
bounds = np.array([np.min(D.X, axis=0), np.max(D.X, axis=0)]).transpose()
bounds_diff = bounds[:, 1] - bounds[:, 0]
bounds[:, 0] -= perc_extend / 2 * bounds_diff
bounds[:, 1] += perc_extend / 2 * bounds_diff
x_q = np.linspace(*bounds[0, :], num=n_query)
y_q = np.linspace(*bounds[1, :], num=n_query)
X_grid, Y_grid = np.meshgrid(x_q, y_q)
X_query = np.array([X_grid.ravel(), Y_grid.ravel()]).transpose()
# Get density values along grid
log_pdf_grid = selected_destructor.score_samples(
X_query, partial_idx=partial_idx).reshape(n_query, -1)
pdf_grid = np.exp(np.maximum(log_pdf_grid, -16))
# Show scatter plot
_scatter_X_y(Z_partial_train, y_train, axes_col[0], s=10)
_clean_axis(axes_col[0], limits=[[0, 1], [0, 1]] if li > 0 else None)
axes_col[0].set_title(title, fontsize=20)
# Show density
axes_col[1].pcolormesh(X_grid, Y_grid, -pdf_grid, cmap='gray', zorder=-1)
_clean_axis(axes_col[1])
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
InterestingExamples/ConwaysGameOfLife.ipynb | ###Markdown
Conway's Game of LifeConway's Game of Life is a cellular automaton. You can think of a cellular automaton as a grid of squares which are each either 'on' or 'off' initially, combined with a set of rules that determine whether they change state in the next iteration. Conway's rules are inspired by population dynamics in biology and lead to very life-like results. The rules are:1. Any live cell with fewer than two live neighbours dies, as if by underpopulation. 2. Any live cell with two or three live neighbours lives on to the next generation. 3. Any live cell with more than three live neighbours dies, as if by overpopulation. 4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction. This can lead to some interesting results, including persistent 'lifeforms'. For more information read the [Wikipedia page on the Game of Life](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life)
###Code
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation
# The below commands make the font and image size bigger
plt.rcParams.update({'font.size': 22})
plt.rcParams["figure.figsize"] = (15,10)
# The below command makes the animation below appear in the browser as a video
plt.rc('animation', html='html5')
###Output
_____no_output_____
###Markdown
Let's define the grid we'll work on and a function that randomly initializes the grid
###Code
N = 15
grid = np.zeros((N,N))
def initRandomGrid():
global grid
grid = np.random.randint(2, size=(N,N))
###Output
_____no_output_____
###Markdown
Write the code that computes iterations of the game of life and makes an animation. Each time you run the cell below it runs the `initRandomGrid()` function so the initial values in the grid will be different each time.
###Code
OFF = 0
ON = 1
def update(data):
global grid
# copy the grid to compute the next generation on
newGrid = grid.copy()
for i in range(N):
for j in range(N):
# compute the sum of the neighbouring elements
# use periodic boundary conditions
total = (grid[i, (j-1)%N] + grid[i, (j+1)%N] + grid[(i-1)%N, j] + grid[(i+1)%N, j] + grid[(i-1)%N, (j-1)%N] + grid[(i-1)%N, (j+1)%N] + grid[(i+1)%N, (j-1)%N] + grid[(i+1)%N, (j+1)%N])
# apply Conway's rules
if grid[i, j] == ON:
if (total < 2) or (total > 3):
newGrid[i, j] = OFF
else:
if total == 3:
newGrid[i, j] = ON
# update data
mat.set_data(newGrid)
grid = newGrid
return [mat]
# set up animation
fig, ax = plt.subplots()
# Don't display any plot
plt.close()
# Create a new random grid (sometimes the animation function does not seem to do this)
initRandomGrid()
# Commands for the animation
mat = ax.matshow(grid)
ani = animation.FuncAnimation(fig, update, init_func = initRandomGrid, interval=200, save_count=100);
ani
###Output
_____no_output_____ |
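###Markdown
As an optional aside (not part of the original notebook), the same update rule can be written without the nested Python loops by counting neighbours with `np.roll`, which reproduces the periodic (wrap-around) boundary conditions used above.
###Code
# Vectorised variant of the update rule (illustrative sketch). It relies on the
# ON/OFF constants and the numpy import defined earlier in this notebook.
def count_neighbours(grid):
    # Sum the 8 shifted copies of the grid; np.roll wraps around, matching the
    # modular indexing (periodic boundaries) of the loop-based version.
    return sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))

def step(grid):
    total = count_neighbours(grid)
    survive = (grid == ON) & ((total == 2) | (total == 3))  # rule 2
    born = (grid == OFF) & (total == 3)                     # rule 4
    return np.where(survive | born, ON, OFF)                # rules 1 and 3 follow implicitly
###Output
_____no_output_____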
array_strings/ipynb/even_num.ipynb | ###Markdown
Even numbersLet's say I give you a list saved in a variable: a = [1,2,3,4,10]. Write one line of Python that takes this list a and makes a new list that has only the even elements of this list in it. output = [2,4,10]
###Code
def even_num(ls):
return [num for num in ls if num % 2 == 0]
print(even_num([1,2,3,4,10]))
###Output
[2, 4, 10]
|
notebooks/modeling/adware_labeling/naive_bayes.ipynb | ###Markdown
Modeling - Adware Labeling - Naive Bayes
###Code
# constants
INPUT_GENERIC_FPATH = "../../../data/prepared/adware_labeling/{split}.csv"
OUTPUT_TEST_PREDICTION_FPATH = "../../../results/evaluation/predictions/adware_labeling/naive_bayes.csv"
!pip install -q pandas
import os
import sys
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import cohen_kappa_score, make_scorer, f1_score
# add directory to path in order to import own module
sys.path.insert(0, "../../..")
from android_malware_labeling.training.naive_bayes import (
train_complement_naive_bayes,
train_gaussian_naive_bayes
)
from android_malware_labeling.evaluation.evaluation import (
evaluate_imbalanced_multiclass_prediction,
plot_conf_matrix
)
###Output
_____no_output_____
###Markdown
Loading
###Code
train_X = pd.read_csv(INPUT_GENERIC_FPATH.format(split="train_X"), index_col=0, squeeze=True)
train_y = pd.read_csv(INPUT_GENERIC_FPATH.format(split="train_y"), index_col=0, squeeze=True)
validation_X = pd.read_csv(INPUT_GENERIC_FPATH.format(split="validation_X"), index_col=0, squeeze=True)
validation_y = pd.read_csv(INPUT_GENERIC_FPATH.format(split="validation_y"), index_col=0, squeeze=True)
test_X = pd.read_csv(INPUT_GENERIC_FPATH.format(split="test_X"), index_col=0, squeeze=True)
###Output
_____no_output_____
###Markdown
Training and Evaluation on Validation Set Gaussian Naive Bayes with Inverse Priors
###Code
gnb, _ = train_gaussian_naive_bayes(train_X.values,
train_y.values,
validation_X.values,
validation_y.values)
validation_gnb_pred = gnb.predict(validation_X)
gnb_eval = evaluate_imbalanced_multiclass_prediction(validation_y, validation_gnb_pred)
gnb_eval
plot_conf_matrix(validation_y, validation_gnb_pred)
###Output
_____no_output_____
###Markdown
Complement Naive Bayes
###Code
scaler = MinMaxScaler().fit(train_X)
cnb, _ = train_complement_naive_bayes(scaler.transform(train_X),
train_y.values,
scaler.transform(validation_X),
validation_y.values,
scoring="f1_macro"
)
cnb.get_params()
validation_cnb_pred = cnb.predict(scaler.transform(validation_X))
cnb_eval = evaluate_imbalanced_multiclass_prediction(validation_y, validation_cnb_pred)
cnb_eval
plot_conf_matrix(validation_y, validation_cnb_pred)
###Output
_____no_output_____
###Markdown
Prediction and Saving
###Code
predictions = pd.DataFrame(cnb.predict(scaler.transform(test_X)), columns=[train_y.name], index=test_X.index)
predictions.to_csv(OUTPUT_TEST_PREDICTION_FPATH)
###Output
_____no_output_____ |
dpmm.ipynb | ###Markdown
Dirichlet Process Mixture Models Guilherme PiresInstituto Superior Técnico - 2016/2017
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Introduction In this work I'll try to present the concept of the Dirichlet Process and to show how it can be used to implement Infinite Mixture Models. I'll start by introducing the Dirichlet Distribution, the Dirichlet Process, and the application of the Dirichlet Process to Infinite Mixture Models. I'll then apply an implementation of this model to the clustering of data with an unknown number of clusters. Dirichlet Distribution An introductionThe Dirichlet Distribution is commonly used as the conjugate prior for the Multinomial distribution. This means that for a Multinomial likelihood model, the most natural/simple way to encode our prior beliefs about the nature of the observations is by using a Dirichlet distribution. Not only that, if we use a Dirichlet prior with a Multinomial likelihood, the posterior will turn out to be a Dirichlet distribution as well (obtained by updating the $\boldsymbol\alpha$ parameter's entries with the corresponding counts given by the Multinomial observations). Let $$\begin{eqnarray}\boldsymbol\theta &=& (\theta_1 , \theta_2, ..., \theta_m) \nonumber\\\boldsymbol\alpha &=& (\alpha_1 , \alpha_2, ..., \alpha_m)\end{eqnarray}$$ Then $$\boldsymbol\theta \sim Dir(\boldsymbol\alpha) : P(\boldsymbol\theta)=\frac{\Gamma(\sum_{k}^{m} \alpha_k)}{\prod_{k}^{m}\Gamma(\alpha_k)}\prod_{k}^{m}\theta_{k}^{\alpha_k -1}$$Note that samples $\boldsymbol\theta$ from the Dirichlet Distribution belong to the probability simplex, which means $\sum_{k}^{m}\theta_k = 1, \theta_k \geq 0$. The Dirichlet Distribution can be regarded as a distribution over possible parameters of a Multinomial Distribution - which is the intuitive reason to use the former as the latter's prior. Extending this notion a bit further, we can regard the Dirichlet Distribution as a distribution over (Multinomial) distributions. The $\boldsymbol\alpha$ effectLet's look at the effect of the $\boldsymbol\alpha$ parameter on the shape of the distribution. For simplicity, let's take $m=3$. The simplex of the corresponding space is a triangle, so it can be projected to 2D and be easily plotted.
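Before visualising that, here is a minimal numerical sketch of the conjugate update just described (an illustrative aside that is not part of the original text; the prior pseudo-counts and the "true" probabilities below are arbitrary choices): the Dirichlet posterior is obtained simply by adding the observed category counts to the prior $\boldsymbol\alpha$.
###Code
# Illustrative aside: the Dirichlet-Multinomial conjugate update in a few lines.
# With a Dir(alpha) prior and observed category counts c, the posterior is Dir(alpha + c).
alpha_prior = np.array([2.0, 2.0, 2.0])            # arbitrary prior pseudo-counts
theta_true = np.array([0.6, 0.3, 0.1])             # assumed "true" category probabilities
counts = np.random.multinomial(100, theta_true)    # counts from 100 categorical draws
alpha_posterior = alpha_prior + counts             # the entire conjugate update
posterior_mean = alpha_posterior / alpha_posterior.sum()  # close to the empirical frequencies
###Output
_____no_output_____
###Markdown
The posterior parameters are just the prior pseudo-counts plus the observed counts. With that in mind, let's now visualise the effect of $\boldsymbol\alpha$ on the shape of the distribution.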
###Code
from scipy.stats import dirichlet as Dir
ortog = (1/np.sqrt(3)) * np.ones(3)
x = np.array([1,0,0])
x = x - np.dot(x, ortog) * ortog
x /= np.linalg.norm(x)
y = np.cross(ortog, x)
def project3dsample(sample):
return np.array([np.dot(sample, x), np.dot(sample, y)])
fig, axs = plt.subplots(1,5, figsize=(25,4))
alphas = [
np.array([1,1,1]),
np.array([10,10,10]),
np.array([0.1,0.1,0.1]),
np.array([10,1,1]),
np.array([0.1,0.1,1])
]
for alpha, ax in zip(alphas, axs):
samples = np.array([project3dsample(sample) for sample in Dir.rvs(alpha, size=1000)])
ax.set_title("alpha = {}".format(alpha))
ax.scatter(samples[:,0], samples[:,1], s=3);
plt.show()
###Output
_____no_output_____
###Markdown
We see that $\boldsymbol\alpha$ controls the nature of the probability vectors sampled from the Dirichlet Distribution:- If the $\alpha_i$ are all equal to $\alpha$, the resulting samples have a symmetric spread on the space. More particularly: - If $\alpha = 1$, the samples spread uniformly on the space - If $\alpha \gt 1$, dense (as opposed to sparse) samples are more frequent - If $\alpha \lt 1$, sparse samples are more frequent- If the $\alpha_i$ are not equal, there will be a concentration on either one of the vertices or one of the edges of the space--- Dirichlet Process Introduction to the concept The Dirichlet Process can be regarded as a generalization of the Dirichlet Distribution to infinite dimensions. It too defines a distribution over distributions. However, while the Dirichlet Distribution defines a distribution over random probability measures of fixed dimension, the Dirichlet Process defines a distribution over random probability measures of random dimension. Formally:- Consider the measure space defined by $(\Theta, \Sigma)$, where $\Theta$ is some set and $\Sigma$ is a $\sigma$-algebra on $\Theta$- Take a *measurable finite partition* of $\Theta$ : $A_1, A_2, ..., A_k$- A Dirichlet Process is a random probability measure $G$ over a measure space $(\Theta, \Sigma)$, that respects a special property: - $[G(A_1), G(A_2), ..., G(A_k)] \sim Dir(\alpha H(A_1), \alpha H(A_2), ..., \alpha H(A_k))$- The Dirichlet Process is parametrized by: - $\alpha \in \mathbb{R}^{+}$ : The concentration parameter - $H$ (a probability distribution): The base distribution- Most common notation: $G \sim DP(\alpha, H)$Intuitively, $H$ is the "mean distribution" and $\alpha$ can be regarded as an "inverse variance". A sample from a Dirichlet Process will be an infinite sum of Dirac deltas, with different heights, and with locations sampled from $H$. A somewhat counter-intuitive fact is that a sample $G$ from a Dirichlet Process will be discrete with probability 1, even if the base distribution is smooth. Even so, the base distribution will be the mean distribution! Posterior Inference Now suppose we use a (random) sample $G$ from a $DP$ as a likelihood model for some i.i.d. data $\theta_1, \theta_2, ..., \theta_N$ : $\theta_n | G \sim G$. Conveniently, the conjugacy of the Dirichlet Distribution to the Multinomial Distribution still applies to the Dirichlet Process, which means the posterior on $G$ is also a Dirichlet Process and is given (after some rather cumbersome algebraic manipulation) by:$$ G | \theta_1, \theta_2, ..., \theta_N \sim DP(\alpha+N , \frac{\alpha H + \sum_{n=1}^{N}\delta_{\theta_n}}{\alpha+N}) $$ Where $\delta_{\theta_n}$ is the Dirac delta function. (Note that some of the $\theta_n$ will fall on the same value, which means we'll have summed $\delta$'s. A reasonable and intuitive way of thinking about these is as the counts of sampled $\theta$ that fell on each value, which hints at the empirical distribution.) The posterior predictive distribution of a DP is given by its base distribution. Taking that fact, we see that the posterior predictive distribution for $\theta_{N+1}$ is:$$\theta_{N+1} | \theta_1, ..., \theta_N \sim \frac{\alpha H + \sum_{n=1}^{N}\delta_{\theta_n}}{\alpha+N}$$ If you look carefully at that distribution, you'll see it has a smooth part and a discrete part. It could look similar to this:
###Code
from scipy.stats import beta
from scipy.stats import norm
def predictive_posterior_plot(N, alpha, ax):
stick = 1
pis = []
for i in range(N):
pi = stick * beta.rvs(1, alpha)
pis.append(pi)
stick -= pi
pis = np.array(pis)*N/(alpha+N)
thetas = norm.rvs(size=N)
x = np.array(range(-3000,3000))*0.001
y = norm.pdf(x)*alpha/(alpha+N)
ax.set_title("alpha = {} | N = {} ".format(alpha, N))
ax.plot(x,y)
ax.set_ylim((0,max([max(pis),max(y)])+0.01))
ax.bar(thetas,pis,0.01)
fig, ax = plt.subplots()
predictive_posterior_plot(100,10,ax)
plt.show()
###Output
_____no_output_____
###Markdown
That's rather unintuitive - how do you sample from such a distribution? Hopefully the next section will make that clearer. Chinese Restaurant Process and Polya Urn Process Two very famous representations for the Dirichlet Process have been devised that evidence its clustering properties, in a *rich get richer* fashion. They are the Chinese Restaurant Process and the Polya Urn Process. Put simply, they provide a way to "implement" the posterior predictive distribution I just presented, by either:- Assigning points to an existing group, with some probability - Which corresponds to assigning a person who just entered the restaurant to one of the existing tables, in the Chinese Restaurant Process - And to adding to the urn a ball of the same color as some other ball sampled from the urn, in the Polya Urn Process- Creating a new group, based on the new point - Which corresponds to assigning a person who just entered the restaurant to an empty table, in the Chinese Restaurant Process - And to adding a ball of a new color to the urn, in the Polya Urn Model This somewhat "dual" behaviour is evidenced by the posterior predictive distribution, especially if we separate the expression into two terms:$$\theta_{N+1} | \theta_1, ..., \theta_N \sim \frac{\alpha}{\alpha+N}H + \frac{N}{\alpha+N}(\frac{1}{N}\sum_{n=1}^{N}\delta_{\theta_n})$$This way of writing the equation evidences the fact that the posterior predictive distribution is a weighted sum of the base distribution and the empirical distribution. How does one sample from such a distribution?- With probability $\frac{\alpha}{N+\alpha}$ we sample the next $\theta$ from the base distribution, $H$- With probability $\frac{N}{N+\alpha}$ we sample the next $\theta$ from the empirical distributionAs is easy to see, as $N$ increases (i.e. we see more data), the weight of the base distribution becomes proportionally smaller - proper Bayesian behaviour! - but the probability of a new (as in *previously unseen*) value for $\theta$ is never $0$. We can also see that the concentration parameter will have control over the final number of clusters: the bigger $\alpha$ is, the likelier the predictive distribution is to sample a new $\theta$ value from the base distribution. That can easily be observed by the following plots.
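Before the plots, here is a small sketch of how one could draw $\theta$'s sequentially from this predictive distribution - the Chinese Restaurant Process view - with a standard normal base distribution $H$ (an illustrative aside that is not part of the original text; the function name and the chosen $\alpha$ and $N$ are arbitrary).
###Code
# Illustrative aside: sequential draws from the DP posterior predictive (CRP view),
# with H = standard normal. With i previous draws, a new value from H is sampled with
# probability alpha/(alpha+i); otherwise one of the i existing values is reused
# (uniformly over past draws, so each distinct value is picked proportionally to its count).
def sample_crp_thetas(n, alpha, rng=np.random):
    thetas = []
    for i in range(n):
        if rng.rand() < alpha / (alpha + i):   # open a new "table"
            thetas.append(rng.randn())         # theta ~ H
        else:                                  # join an existing "table"
            thetas.append(rng.choice(thetas))
    return np.array(thetas)

crp_draws = sample_crp_thetas(1000, alpha=10)
n_distinct = len(np.unique(crp_draws))         # number of "tables" actually used
###Output
_____no_output_____
###Markdown
The number of distinct values grows only roughly logarithmically with $N$, which is exactly the *rich get richer* behaviour described above. The plots below show the resulting predictive distributions for several $(\alpha, N)$ combinations.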
###Code
import itertools
fig, axs = plt.subplots(3,3, figsize=(20,20))
alphas = [0.5, 1, 10]
Ns = [10, 100, 1000]
axs = axs.flatten()
for (ax, (alpha, N)) in zip(axs, itertools.product(alphas, Ns)):
predictive_posterior_plot(N, alpha, ax)
plt.suptitle("Several possible predictive functions after observing different numbers of samples,\n"+
"with different concentration parameters, and with a Normal base distribution", fontsize=22)
plt.show()
###Output
_____no_output_____
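To make the two-case sampling rule described above concrete, here is a minimal sketch (my own illustration, not part of the original analysis) of drawing a sequence $\theta_1, ..., \theta_N$ from the predictive distribution, assuming a standard Normal base distribution $H$ and illustrative values for $\alpha$ and $N$:

```python
import numpy as np

def polya_urn_sample(alpha, N, seed=0):
    """Draw theta_1..theta_N from the DP predictive rule, with H = N(0, 1)."""
    rng = np.random.default_rng(seed)
    thetas = []
    for n in range(N):
        # with probability alpha/(n+alpha): a fresh value from the base distribution H
        # with probability n/(n+alpha): reuse a previous value, chosen uniformly at random
        if rng.random() < alpha / (n + alpha):
            thetas.append(rng.normal())
        else:
            thetas.append(thetas[rng.integers(n)])
    return np.array(thetas)

draws = polya_urn_sample(alpha=5, N=200)
print("distinct values (clusters) among 200 draws:", len(np.unique(draws)))
```

The *rich get richer* effect is visible in this scheme: a value that has already been drawn many times is proportionally more likely to be drawn again.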
###Markdown
We can see these plots are coherent with the intuitive notion I hinted at: for a fixed number of observed samples, a bigger $\alpha$ increases the weight of the base distribution; for a fixed $\alpha$, a bigger number of samples decreases the weight of the base distribution. Stick Breaking Representation A third, more generative view of the DP, called the Stick Breaking Representation, allows us to obtain samples $G$ from a Dirichlet Process. It's important to keep in mind that samples $G$ from a DP are themselves probability distributions! The Stick Breaking Representation obtains samples $G \sim DP(\alpha, H)$ by the following process: - Take a stick of length 1 - While remaining_stick has length > 0: - Sample a $\pi_k$ from $Beta(1,\alpha)$ - Note that $\pi_k \in [0,1]$ - Break the remaining stick at $\pi_k$ of its length - current_stick = first part, remaining_stick = second part - Sample a $\theta_k$ from $H$ - Place a $\delta_{\theta_k}$ with height = current_stick on point $\theta_k$ We see that at the end of this process, $G$ will be equal to the sum of the $\delta_{\theta_k}$ weighted by the successive values of current_stick, i.e. $G=\sum_{k=1}^{\infty} w_k\, \delta_{\theta_k}$ with $w_k = \pi_k \prod_{j<k}(1-\pi_j)$. (A short code sketch of this construction is given just below.) An intuitive bridge to (Infinite) Mixture Models The usefulness of the DP in mixture models now starts to become apparent. We can easily use the random variables $\theta_k$ as the latent "indexing" variables on a mixture model, taking advantage of the natural clustering behaviour of the DP and also of the fact that, at any point, it allows the possibility of seeing a new value for $\theta_k$ - hence the proneness to model a Mixture Model with an unknown number of components: a new one can appear at any time, if the data so suggests. Systematizing the Dirichlet Process Mixture Model:$$\begin{eqnarray}G|\alpha, H &\sim& DP(\alpha, H) \nonumber \\\theta_n|G &\sim& G \nonumber \\x_n|\theta_n &\sim& F(\theta_n) \nonumber \\\end{eqnarray}$$Where $F(\theta_n)$ is a class conditional distribution, e.g., a Gaussian in the case of a Gaussian Mixture Model. One of the biggest motivations to use this kind of model lies in its ability to directly attack the problem of model selection: there is no initial assumption on the number of components, and the model itself "searches" for the best possible one. --- Inference in DPMM Several ways of doing Inference on Dirichlet Process Mixture Models have been proposed and shown. The most common ones involve Gibbs Sampling, Collapsed Gibbs Sampling or other MCMC or simulation methods. More recently some Variational methods have also been proposed. I will present an overview of both approaches, also introducing the high-level basics of Gibbs Sampling and Variational Inference. Both of these approaches come from the need to compute complex (as in *complicated and hard*) integrals (usually on the denominators of posterior distributions). Gibbs Sampling tackles this by sampling from distributions that asymptotically approach the true ones, while Variational methods work by converting the integral computation into an optimization problem. Gibbs Sampling approach Gibbs Sampling As mentioned, Gibbs Sampling takes the approach of sampling from a distribution that is asymptotically similar to the one of interest. It is an instance of a broader class of sampling methods, called Markov Chain Monte Carlo. It's clear that if we had a black box from which we could take samples of the distribution we care about, we could empirically estimate that distribution. 
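As promised above, here is a minimal sketch of the (truncated) stick-breaking construction, together with data generation from the resulting Dirichlet Process Gaussian Mixture Model. The truncation level, the Normal base distribution $H = N(0, 5^2)$ and the unit-variance components are assumptions made purely for illustration:

```python
import numpy as np

def stick_breaking_draw(alpha, truncation=200, seed=1):
    """Approximate a draw G ~ DP(alpha, H), H = N(0, 5^2), truncated to a finite number of atoms."""
    rng = np.random.default_rng(seed)
    betas = rng.beta(1.0, alpha, size=truncation)                 # pi_k ~ Beta(1, alpha)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    weights = betas * remaining                                   # w_k = pi_k * prod_{j<k} (1 - pi_j)
    atoms = rng.normal(loc=0.0, scale=5.0, size=truncation)       # theta_k ~ H
    return atoms, weights

def sample_dpmm_data(n, alpha=2.0, seed=1):
    """Generate x_1..x_n from the DPMM: theta_i ~ G, x_i ~ N(theta_i, 1)."""
    rng = np.random.default_rng(seed)
    atoms, weights = stick_breaking_draw(alpha, seed=seed)
    idx = rng.choice(len(atoms), size=n, p=weights / weights.sum())  # renormalize the truncated weights
    return rng.normal(loc=atoms[idx], scale=1.0)

x = sample_dpmm_data(500)
print("generated", x.shape[0], "points from a DP Gaussian mixture")
```

The truncation is only a computational convenience: the leftover stick mass shrinks geometrically in expectation, so for moderate $\alpha$ a few hundred atoms already make the approximation error negligible.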
Back to the sampling question: how can you build such a black box for a distribution you don't know yet? First, there's the need to realize that what we actually want is to compute something of the form:$$E_{p(z)}[f(z)]=\int_{z}f(z)p(z)\,dz$$Where $p(z)$ governs the distribution of the values of $z$, but the integral we're actually interested in is over the values of $f(z)$. Consider the previous expression in this form:$$E_{p(z)}[f(z)]=\lim_{N \to \infty}\frac{1}{N}\sum_{i=1}^{N}f(z_{(i)})$$Where $z_{(i)}$ are observed values, taken from the $p(z)$ distribution. Here we're counting how many times $f(z)$ landed on a particular value and averaging it. So what we really want is to have a way to "tell" how much time we spent on each "region" of the $z$-space so that we can accumulate $f(z)$ values. A way to do so is to use a Markov Chain that allows us to visit each $z$-value with a frequency proportional to $p(z)$ - hence the name Markov Chain Monte Carlo. Gibbs Sampling is a way to implement such a Markov Chain. It's only applicable when the $z$-space has at least 2 dimensions. It works by getting each dimension of the next $z$-point individually, conditioned on the remaining dimensions. For our models, these dimensions will be parameters and variables. Gibbs Samplers are derived on a per-model basis, because the way we sample a new value for a dimension is determined by the way these dimensions interact in the model. Collapsed Gibbs Sampling Collapsed Gibbs Sampling takes the same principles from *Vanilla* Gibbs Sampling, but does the sampling of new dimension values with some of the dimensions integrated out. This is made possible in some models due to prior conjugacy and some algebraic tricks, and it makes the sampler quicker because it reduces the number of variables per sampling operation. Gibbs Sampling for DPMM Several Gibbs Samplers have been devised for Dirichlet Process Mixture Models. Although I'm not going to derive one here, I'll link some references on that. Variational approach Variational Inference and Variational Bayes Variational Inference works by transforming the problem of integration into one of optimization. It does so by fully replacing the distribution we want to compute with an approximation which is chosen to live inside of a distribution family. This family is commonly called a Variational Family, and it doesn't necessarily include the real distribution (actually, most likely it won't include the real distribution). Variational Inference then proceeds by finding the parameters that correspond to the optimal distribution in the Variational Family. The question that should be ringing in your head now is: "Optimal regarding what?". The answer is: we optimize the parameters so as to minimize the Kullback-Leibler divergence between the true distribution and the approximation. The KL divergence is a measure of how different two distributions are. It's got its roots in Information Theory, and it can be interpreted as the number of extra bits (if we work with base 2 logarithms) needed to encode an information source distributed according to $p$, if we use $q$ to build our codebook. A side note: the KL divergence is **not** symmetric, i.e., $KL(p||q)\neq KL(q||p)$; in [] Murphy suggests that the reverse version of the KL is statistically more sensible, but I won't go into details on why that is. For the purpose of this overview, it suffices to know that choosing to optimize for the forward KL divergence will yield different results than choosing to optimize for the reverse KL divergence. Back on track. 
How does one go about computing the KL divergence between two distributions without knowing one of them? The distribution we don't know is precisely the one we want to estimate. It seems we got stuck in an infinite loop. Alas! The whole trick of Variational Inference is the way to break this loop. It turns out there's a way to leverage some probability equalities and Jensen's inequality to come up with an expression, called the Evidence Lower Bound (ELBO). Maximizing this expression is equivalent to minimizing the KL divergence without needing to know a closed form for $p(x)$. Here's the derivation of that result:Consider Jensen's inequality applied to the Expectation of a concave function (such as the logarithm):$$f(E[X]) \geq E[f(X)]$$Applying it to the log-probability of the observations:$$\begin{eqnarray}log\ p(x) &=& log \int_z p(x,z) dz \nonumber \\&=& log \int_z p(x,z)\frac{q(z)}{q(z)} dz \nonumber \\&=& log\ E_q[\frac{p(x,Z)}{q(Z)}] \nonumber \\&\geq& E_q[log\ p(x,Z)] - E_q[log\ q(Z)] \nonumber\\ \end{eqnarray}$$ Our goal is now to find the parameters that yield the $q(Z)$ distribution that makes this bound as tight as possible. One of the advantages of Variational methods as compared to sampling methods is the fact that this optimization yields deterministic results, and Variational methods are faster in general. However there's usually an accuracy trade-off. You might have noticed that the title for this section includes "Variational Bayes". This refers to the application of Mean-Field Variational Inference, where the Variational distribution is of the factorized form $\prod_i q_i(z_i)$. Streaming Variational Bayes and DPMM Streaming Variational Bayes is a framework by *Broderick, et al.* that proposes a way to leverage the conjugacy of some distributions to allow the fitting of the approximation to be computed in a streaming fashion - which aligns with the current tendencies of big-data and scalability. This framework has been leveraged by *Huynh et al.* to apply Variational Inference to Dirichlet Process Mixture Models. --- Experiments I used the BayesianGaussianMixture implementation from [scikit-learn](scikit-learn.org) to find clusters of countries with similar living standards. [Here](http://www.sharecsv.com/s/4165c9b03d9fffdef43a3226613ff37c/Countries.csv) is the dataset I used.
###Code
import pandas as pd
df = pd.read_csv("./Countries.csv")
df.head()
cols_of_interest = ["GDPPC", "Literacy", "InfantMortality", "Agriculture", "Population", "NetMigration"]
y = df[cols_of_interest].values
from sklearn.mixture import BayesianGaussianMixture
m = BayesianGaussianMixture(
n_components=5,
weight_concentration_prior=1/5, #alpha
weight_concentration_prior_type="dirichlet_process",
max_iter=10000,
init_params="random"
)
m.fit(y)
preds = m.predict(y)
print(np.bincount(preds))
grouped = dict(zip(range(0,100), [list() for _ in range(0,100)]))
for i in range(len(preds)):
grouped[preds[i]].append(df.iloc[i]["Name"])
to_del = []
for key in grouped:
if len(grouped[key]) == 0:
to_del.append(key)
for key in to_del:
del grouped[key]
from sklearn.preprocessing import MinMaxScaler
for col in cols_of_interest:
if col != "NetMigration":
df[col] = MinMaxScaler(feature_range=(0,1)).fit_transform(df[col].values.reshape(-1,1))
else:
df[col] = MinMaxScaler(feature_range=(-1,1)).fit_transform(df[col].values.reshape(-1,1))
def plot_country(ax, country, cluster_key=None):
x = np.arange(len(cols_of_interest))
if cluster_key != None:
rows = df.loc[df["Name"].isin(grouped[cluster_key])][cols_of_interest].values
y = np.mean(rows, axis=0)
country = "Cluster {} average".format(cluster_key)
ax.set_yticks(x)
ax.set_yticklabels(cols_of_interest, fontsize=22)
color="orange"
else:
y = df.loc[df["Name"] == country][cols_of_interest].values.flatten()
ax.tick_params(axis="y", which="both", left="off", right="off", labelleft="off")
color="blue"
ax.tick_params(axis="x", which="both", bottom="off", top="off", labelbottom="off")
ax.set_title(country, fontsize=22)
ax.barh(x, y, height=0.3, alpha=0.65, color=color)
def plot_cluster(axs, key):
countries = np.random.choice(grouped[key], size=3, replace=False)
for ax, country in zip(axs[1:4], countries):
plot_country(ax, country)
plot_country(axs[0],"", key)
fig, axs_ = plt.subplots(5,4,figsize=(40,60))
top_5 = sorted(grouped.items(), key=lambda x: len(x[1]), reverse=True)[:5]
for (cluster,_), axs in zip(top_5, axs_):
plot_cluster(axs, cluster)
plt.show()
###Output
_____no_output_____ |
Study4_InspectionGame/Analysis_scripts/1.GameAnalysis/2.Analyze_self_report.ipynb | ###Markdown
Load data
###Code
import pandas as pd
import matplotlib.pyplot as plt

a = !pwd
baseDir = '/'.join(a[0].split('/')[0:-2])
print(baseDir)
WS_dat = pd.read_csv(baseDir + '/Data/Cleaned/WS_dat.csv',index_col = 0)
quiz_dat = pd.read_csv(baseDir + '/Data/Cleaned/quiz_dat.csv',index_col = 0)
SPG_dat = pd.read_csv(baseDir + '/Data/Cleaned/SPG_dat.csv',index_col = 0)
WS_cond_dat = WS_dat.groupby(['subID','player_type'],as_index=False).mean()
subIDs = WS_dat['subID'].unique()
SPG_scores = SPG_dat.groupby(['subID','player_type'],as_index=False
).mean()[['subID','player_type','score']]
SPG_scores_wide = SPG_scores.pivot(index = 'subID', columns = 'player_type', values = 'score').reset_index()
SPG_scores_wide['diff'] = SPG_scores_wide['opt'] - SPG_scores_wide['pess']
SPG_scores_wide['mean'] = (SPG_scores_wide['opt'] + SPG_scores_wide['pess'])/2
SPG_scores_wide.head()
###Output
_____no_output_____
###Markdown
Quantify work/shirk effect
###Code
IPs = WS_dat.query('trial >9').groupby(['subID','player_type'],as_index=False).mean()[['subID','player_type','cost']]
IPs_wide = IPs.pivot(index = 'subID', columns = 'player_type', values = 'cost').reset_index()
IPs_wide['diff'] = IPs_wide['opt'] - IPs_wide['pess']
IPs_wide.head()
###Output
_____no_output_____
###Markdown
Merge across tasks
###Code
SPG_pess_order = SPG_dat[['subID','block','player_type']].drop_duplicates().pivot(index='subID',columns = 'player_type', values='block').reset_index()[['subID','pess']]
SPG_pess_order.columns = ['subID','pess_order']
task_dat = SPG_scores_wide.merge(IPs_wide, on = 'subID', suffixes = ['_SPG','_WS']).merge(SPG_pess_order, on='subID')
###Output
_____no_output_____
###Markdown
Relationship between SPG self-report and IG earnings
###Code
# Split on the IG WTP difference between Opt and Pess (quartiles and deciles; the top and bottom deciles are used below):
WTP_Q1 = task_dat['diff_WS'].describe()['25%']
WTP_D1 = task_dat['diff_WS'].describe(percentiles = [.1,.9])['10%']
WTP_Q3 = task_dat['diff_WS'].describe()['75%']
WTP_D9 = task_dat['diff_WS'].describe(percentiles = [.1,.9])['90%']
WTP_median = task_dat['diff_WS'].median()
subs_good_IG = task_dat.query('diff_WS > @WTP_D9')['subID'].tolist()
len(subs_good_IG)
subs_bad_IG = task_dat.query('diff_WS < @WTP_D1')['subID'].tolist()
len(subs_bad_IG)
!pip install wordcloud
import wordcloud
from wordcloud import WordCloud, STOPWORDS  # generate word cloud
self_report_text = SPG_dat[['subID','player_type','self-report']].drop_duplicates()
self_report_text.head()
all_text = pd.DataFrame()
for performance in ['good','bad']:
if performance == 'good':
perf_list = subs_good_IG
else:
perf_list = subs_bad_IG
for player in ['opt','pess']:
perf_text = self_report_text.loc[self_report_text['subID'].isin(perf_list),:]
play_text = perf_text.query('player_type == @player')['self-report'].values.tolist()
play_text = ' '.join(play_text).lower()
tmp = pd.DataFrame([[performance,player,play_text]],columns = ['performance','player','text'])
all_text = all_text.append(tmp).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Plot
###Code
# Define a function to plot word cloud
def plot_cloud(wordcloud):
# Set figure size
plt.figure(figsize=(10, 5))
# Display image
plt.imshow(wordcloud)
# No axis details
plt.axis("off");
def red_hi_color_func(word, font_size, position, orientation, random_state=1,
**kwargs):
return "hsl(5, 80%, 50%)"
def blue_hi_color_func(word, font_size, position, orientation, random_state=1,
**kwargs):
return "hsl(220, 80%, 50%)"
def red_lo_color_func(word, font_size, position, orientation, random_state=1,
**kwargs):
return "hsl(5, 40%, 70%)"
def blue_lo_color_func(word, font_size, position, orientation, random_state=1,
**kwargs):
return "hsl(220, 40%, 70%)"
col_dict = {'good':{'opt':red_hi_color_func,'pess':blue_hi_color_func},
'bad':{'opt':red_lo_color_func,'pess':blue_lo_color_func}}
player_names = {'opt':'Greedy','pess':'Risk-Averse'}
all_clouds = pd.DataFrame()
for perfi,performance in enumerate(['good','bad']):
for playi,player in enumerate(['opt','pess']):
text = all_text.query('performance == @performance and player == @player')['text'].values[0]
print(performance,player,text[:100])
tmp = WordCloud(width = 1500, height = 1000, random_state=6,
background_color='white', colormap='RdBu', max_words = 20,
collocations=False, stopwords = STOPWORDS)
tmp = tmp.generate(text)
tmp = pd.DataFrame([[performance,player,tmp]],columns = ['performance','player','cloud'])
all_clouds = all_clouds.append(tmp).reset_index(drop=True)
fig,ax = plt.subplots(2,2,figsize=[24,16])
# colfunx = [red_color_func,blue_color_func]
perf_group_indicator = ['top 10%','bottom 10%']
for perfi,performance in enumerate(['good','bad']):
for playi,player in enumerate(['opt','pess']):
print(performance,player)
cloud = all_clouds.query('performance == @performance and player == @player')['cloud'].values[0]
axcur = ax[perfi,playi]
# axcur.imshow(cloud)
axcur.imshow(cloud.recolor(color_func=col_dict[performance][player]))
axcur.axis('off')
# axcur.set(title = '%simist (%s participants)'%(player,performance))
axcur.set_title(player_names[player].upper() + ' (%s generalization)'%perf_group_indicator[perfi],
fontdict = {'size':30})
###Output
good opt
good pess
bad opt
bad pess
|
notebooks/05_errata.ipynb | ###Markdown
Deep Reinforcement Learning in Action by Alex Zai and Brandon Brown Chapter 5
###Code
from tqdm.notebook import trange
###Output
_____no_output_____
###Markdown
Listing 5.1
###Code
import multiprocessing as mp
import numpy as np
def square(x):
return np.square(x)
x = np.arange(64)
print(x)
mp.cpu_count()
if __name__ == '__main__': # added this line for process safety
pool = mp.Pool(8)
squared = pool.map(square, [x[8*i:8*i+8] for i in range(8)])
squared
###Output
_____no_output_____
###Markdown
Listing 5.2
###Code
def square(i, x, queue):
    print("In process {}".format(i,))
    # put the result on the queue passed in from the parent process;
    # re-creating the queue inside the child would leave the parent's queue empty
    queue.put(np.square(x))

queue = mp.Queue()
processes = []
if __name__ == '__main__': #adding this for process safety
x = np.arange(64)
for i in range(8):
start_index = 8*i
proc = mp.Process(target=square,args=(i,x[start_index:start_index+8],
queue))
proc.start()
processes.append(proc)
for proc in processes:
proc.join()
for proc in processes:
proc.terminate()
results = []
while not queue.empty():
results.append(queue.get())
results
###Output
_____no_output_____
###Markdown
Listing 5.3: Pseudocode (not shown) Listing 5.4
###Code
import torch
from torch import nn
from torch import optim
import numpy as np
from torch.nn import functional as F
import gym
import torch.multiprocessing as mp
class ActorCritic(nn.Module):
def __init__(self):
super(ActorCritic, self).__init__()
self.l1 = nn.Linear(4,25)
self.l2 = nn.Linear(25,50)
self.actor_lin1 = nn.Linear(50,2)
self.l3 = nn.Linear(50,25)
self.critic_lin1 = nn.Linear(25,1)
def forward(self,x):
x = F.normalize(x,dim=0)
y = F.relu(self.l1(x))
y = F.relu(self.l2(y))
actor = F.log_softmax(self.actor_lin1(y),dim=0)
c = F.relu(self.l3(y.detach()))
critic = torch.tanh(self.critic_lin1(c))
return actor, critic
###Output
_____no_output_____
###Markdown
Listing 5.6
###Code
def worker(t, worker_model, counter, params):
worker_env = gym.make("CartPole-v1")
worker_env.reset()
worker_opt = optim.Adam(lr=1e-4,params=worker_model.parameters())
worker_opt.zero_grad()
for i in range(params['epochs']):
worker_opt.zero_grad()
values, logprobs, rewards = run_episode(worker_env,worker_model)
actor_loss,critic_loss,eplen = update_params(worker_opt,values,logprobs,rewards)
counter.value = counter.value + 1
###Output
_____no_output_____
###Markdown
Listing 5.7
###Code
def run_episode(worker_env, worker_model):
state = torch.from_numpy(worker_env.env.state).float()
values, logprobs, rewards = [],[],[]
done = False
j=0
while (done == False):
j+=1
policy, value = worker_model(state)
values.append(value)
logits = policy.view(-1)
action_dist = torch.distributions.Categorical(logits=logits)
action = action_dist.sample()
logprob_ = policy.view(-1)[action]
logprobs.append(logprob_)
state_, _, done, info = worker_env.step(action.detach().numpy())
state = torch.from_numpy(state_).float()
if done:
reward = -10
worker_env.reset()
else:
reward = 1.0
rewards.append(reward)
return values, logprobs, rewards
###Output
_____no_output_____
###Markdown
Listing 5.8
###Code
def update_params(worker_opt,values,logprobs,rewards,clc=0.1,gamma=0.95):
rewards = torch.Tensor(rewards).flip(dims=(0,)).view(-1)
logprobs = torch.stack(logprobs).flip(dims=(0,)).view(-1)
values = torch.stack(values).flip(dims=(0,)).view(-1)
Returns = []
ret_ = torch.Tensor([0])
for r in range(rewards.shape[0]):
ret_ = rewards[r] + gamma * ret_
Returns.append(ret_)
Returns = torch.stack(Returns).view(-1)
Returns = F.normalize(Returns,dim=0)
actor_loss = -1*logprobs * (Returns - values.detach())
critic_loss = torch.pow(values - Returns,2)
loss = actor_loss.sum() + clc*critic_loss.sum()
loss.backward()
worker_opt.step()
return actor_loss, critic_loss, len(rewards)
###Output
_____no_output_____
###Markdown
Listing 5.5 NOTE 1: This will not run on its own; you need to run listings 5.6 - 5.8 first, then come back and run this cell. NOTE 2: This will not record losses for plotting. If you want to record losses, you'll need to create a multiprocessing shared array and modify the `worker` function to write each loss to it (see the sketch below). Alternatively, you could use process locks to safely write to a file.
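As a hedged sketch of that first option (this is not from the book: the shared array, the `worker_with_logging` name and the extra argument are introduced here purely for illustration, and the sketch assumes the definitions from listings 5.4 and 5.6-5.8 in this notebook):

```python
import gym
import numpy as np
import torch.multiprocessing as mp
from torch import optim
# run_episode and update_params come from listings 5.7-5.8 in this notebook

n_workers, epochs = 7, 1000
# one double per (worker, epoch); no lock is needed because each worker writes only to its own slice
loss_buffer = mp.Array('d', n_workers * epochs, lock=False)

def worker_with_logging(t, worker_model, counter, params, loss_buffer):
    worker_env = gym.make("CartPole-v1")
    worker_env.reset()
    worker_opt = optim.Adam(lr=1e-4, params=worker_model.parameters())
    for i in range(params['epochs']):
        worker_opt.zero_grad()
        values, logprobs, rewards = run_episode(worker_env, worker_model)
        actor_loss, critic_loss, eplen = update_params(worker_opt, values, logprobs, rewards)
        counter.value = counter.value + 1
        loss_buffer[t * params['epochs'] + i] = actor_loss.sum().item()  # record this worker's loss

# launch with: mp.Process(target=worker_with_logging, args=(i, MasterNode, counter, params, loss_buffer))
# after training, read the shared memory back for plotting:
losses = np.frombuffer(loss_buffer, dtype=np.float64).reshape(n_workers, epochs)
```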
###Code
MasterNode = ActorCritic()
MasterNode.share_memory()
processes = []
params = {
'epochs':1000,
'n_workers':7,
}
counter = mp.Value('i',0)
if __name__ == '__main__': #adding this for process safety
for i in trange(params['n_workers']):
p = mp.Process(target=worker, args=(i,MasterNode,counter,params))
p.start()
processes.append(p)
for p in processes:
p.join()
for p in processes:
p.terminate()
print(counter.value,processes[1].exitcode)
###Output
_____no_output_____
###Markdown
Supplement Test the trained model
###Code
env = gym.make("CartPole-v1")
env.reset()
for i in trange(700):
state_ = np.array(env.env.state)
state = torch.from_numpy(state_).float()
logits,value = MasterNode(state)
action_dist = torch.distributions.Categorical(logits=logits)
action = action_dist.sample()
state2, reward, done, info = env.step(action.detach().numpy())
if done:
print(f"Lost on step {i}")
env.reset()
state_ = np.array(env.env.state)
state = torch.from_numpy(state_).float()
env.render()
env.close()
###Output
_____no_output_____
###Markdown
N-step actor-critic Listing 5.9
###Code
def run_episode(worker_env, worker_model, N_steps=10):
raw_state = np.array(worker_env.env.state)
state = torch.from_numpy(raw_state).float()
values, logprobs, rewards = [],[],[]
done = False
j=0
G=torch.Tensor([0])
while (j < N_steps and done == False):
j+=1
policy, value = worker_model(state)
values.append(value)
logits = policy.view(-1)
action_dist = torch.distributions.Categorical(logits=logits)
action = action_dist.sample()
logprob_ = policy.view(-1)[action]
logprobs.append(logprob_)
state_, _, done, info = worker_env.step(action.detach().numpy())
state = torch.from_numpy(state_).float()
if done:
reward = -10
worker_env.reset()
else:
reward = 1.0
G = value.detach()
rewards.append(reward)
return values, logprobs, rewards, G
###Output
_____no_output_____ |
Landmark Classification and Tagging/dlnd_tv_script_generation.ipynb | ###Markdown
TV Script Generation In this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the Data The data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab = tuple(set(text))
int_to_vocab = dict(enumerate(vocab))
vocab_to_int = {ch: ii for ii, ch in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
punc_dict = {'.': '||period||',
',': '||comma||',
'"': '||quotes||',
';': '||semicolon||',
'!': '||exclamation_mark||',
'?': '||question_mark||',
'(': '||left_parantheses||',
')': '||right_parantheses||',
'-': '||dash||',
'\n': '||return||'}
return punc_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save it Running the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
n_batches = len(words)//batch_size
words = words[:n_batches*batch_size]
x, y = [], []
for i in range(0, len(words)-sequence_length):
x_batch = words[i:i+sequence_length]
x.append(x_batch)
y_batch = words[i+sequence_length]
y.append(y_batch)
data = TensorDataset(torch.from_numpy(np.asarray(x)), torch.from_numpy(np.array(y)))
dataloader = DataLoader(data, shuffle=False, batch_size = batch_size)
# return a dataloader
return dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
x = self.embed(nn_input)
output, hidden = self.lstm(x, hidden)
output = output.contiguous().view(-1, self.hidden_dim)
output = self.fc(output)
output = output.view(batch_size, -1, self.output_size)
output = output[:, -1]
# return one batch of output word scores and the hidden state
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
weight = next(self.parameters()).data
if train_on_gpu:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
# initialize hidden state with zero weights, and move to GPU if available
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
if train_on_gpu:
rnn.cuda()
h = tuple([each.data for each in hidden])
rnn.zero_grad()
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
output, h = rnn(inp, h)
loss = criterion(output, target)
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of uniqe tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 8
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 8 epoch(s)...
Epoch: 1/8 Loss: 5.918958324432373
Epoch: 1/8 Loss: 5.723901970863342
Epoch: 1/8 Loss: 5.816299704551697
Epoch: 1/8 Loss: 5.813564656257629
Epoch: 1/8 Loss: 5.806400659561157
Epoch: 1/8 Loss: 5.924499870300293
Epoch: 2/8 Loss: 5.909326189532043
Epoch: 2/8 Loss: 5.84080619430542
Epoch: 2/8 Loss: 5.938479816436767
Epoch: 2/8 Loss: 5.926339637756348
Epoch: 2/8 Loss: 5.856246293067932
Epoch: 2/8 Loss: 5.993666437149048
Epoch: 3/8 Loss: 5.875690791517163
Epoch: 3/8 Loss: 5.495078530311584
Epoch: 3/8 Loss: 4.834505529403686
Epoch: 3/8 Loss: 4.583758968830109
Epoch: 3/8 Loss: 4.43100680065155
Epoch: 3/8 Loss: 4.498985733032226
Epoch: 4/8 Loss: 4.347883867708131
Epoch: 4/8 Loss: 4.123219155788422
Epoch: 4/8 Loss: 4.132480819225311
Epoch: 4/8 Loss: 4.011563138008118
Epoch: 4/8 Loss: 3.9428471999168395
Epoch: 4/8 Loss: 4.05621048784256
Epoch: 5/8 Loss: 3.960063016790582
Epoch: 5/8 Loss: 3.7982353248596192
Epoch: 5/8 Loss: 3.8632801337242126
Epoch: 5/8 Loss: 3.773511435985565
Epoch: 5/8 Loss: 3.7336489763259886
Epoch: 5/8 Loss: 3.8350984320640564
Epoch: 6/8 Loss: 3.7506446857782247
Epoch: 6/8 Loss: 3.6097434754371642
Epoch: 6/8 Loss: 3.6731909823417666
Epoch: 6/8 Loss: 3.602598050117493
Epoch: 6/8 Loss: 3.5803833351135252
Epoch: 6/8 Loss: 3.6656624994277953
Epoch: 7/8 Loss: 3.599015309633583
Epoch: 7/8 Loss: 3.470215575695038
Epoch: 7/8 Loss: 3.5345159249305724
Epoch: 7/8 Loss: 3.474195352554321
Epoch: 7/8 Loss: 3.443849328994751
Epoch: 7/8 Loss: 3.5404236159324647
Epoch: 8/8 Loss: 3.476418720257003
Epoch: 8/8 Loss: 3.3623650794029234
Epoch: 8/8 Loss: 3.42045799779892
Epoch: 8/8 Loss: 3.3775248498916626
Epoch: 8/8 Loss: 3.34159645986557
Epoch: 8/8 Loss: 3.436317366600037
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** Yes, I tried different sequence lengths and found that a sequence length of 10 made the model converge faster. The final hyperparameters were: Batch Size: 256, Number of Epochs: 8, Learning Rate: 0.001, Embedding Dimension: 200, Hidden Dimension: 512, Number of RNN Layers: 3, with stats shown every 500 batches. --- Checkpoint After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:35: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____ |
udacity-program_self_driving_car_engineer_v1.0/project02-traffic_sign_classifier/Traffic_Sign_Classifier.ipynb | ###Markdown
Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition ClassifierIn this notebook, a template is provided for you to implement your functionality in stages which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission, if necessary. Sections that begin with **'Implementation'** in the header indicate where you should begin your implementation for your project. Note that some sections of implementation are optional, and will be marked with **'Optional'** in the header.In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. --- Step 0: Load The Data
###Code
import pickle
def load_traffic_sign_data(training_file, testing_file):
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
return train, test
# Load pickled data
train, test = load_traffic_sign_data('../traffic_signs_data/train.p', '../traffic_signs_data/test.p')
X_train, y_train = train['features'], train['labels']
X_test, y_test = test['features'], test['labels']
###Output
_____no_output_____
###Markdown
--- Step 1: Dataset Summary & Exploration The pickled data is a dictionary with 4 key/value pairs:- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).- `'labels'` is a 2D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.- `'sizes'` is a list containing tuples, (width, height) representing the original width and height of the image.- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**Complete the basic data summary below.
###Code
import numpy as np
# Number of examples
n_train, n_test = X_train.shape[0], X_test.shape[0]
# What's the shape of an traffic sign image?
image_shape = X_train[0].shape
# How many classes?
n_classes = np.unique(y_train).shape[0]
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
###Output
Number of training examples = 39209
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43
###Markdown
Visualize the German Traffic Signs Dataset using the pickled file(s).- First we can visualize some images sampled from training set:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# show a random sample from each class of the traffic sign dataset
rows, cols = 4, 12
fig, ax_array = plt.subplots(rows, cols)
plt.suptitle('RANDOM SAMPLES FROM TRAINING SET (one for each class)')
for class_idx, ax in enumerate(ax_array.ravel()):
if class_idx < n_classes:
# show a random image of the current class
cur_X = X_train[y_train == class_idx]
cur_img = cur_X[np.random.randint(len(cur_X))]
ax.imshow(cur_img)
ax.set_title('{:02d}'.format(class_idx))
else:
ax.axis('off')
# hide both x and y ticks
plt.setp([a.get_xticklabels() for a in ax_array.ravel()], visible=False)
plt.setp([a.get_yticklabels() for a in ax_array.ravel()], visible=False)
plt.draw()
###Output
_____no_output_____
###Markdown
- We can also get the idea of how these classes are distributed in both training and testing set
###Code
# bar-chart of classes distribution
train_distribution, test_distribution = np.zeros(n_classes), np.zeros(n_classes)
for c in range(n_classes):
train_distribution[c] = np.sum(y_train == c) / n_train
test_distribution[c] = np.sum(y_test == c) / n_test
fig, ax = plt.subplots()
col_width = 0.5
bar_train = ax.bar(np.arange(n_classes), train_distribution, width=col_width, color='r')
bar_test = ax.bar(np.arange(n_classes)+col_width, test_distribution, width=col_width, color='b')
ax.set_ylabel('PERCENTAGE OF PRESENCE')
ax.set_xlabel('CLASS LABEL')
ax.set_title('Classes distribution in traffic-sign dataset')
ax.set_xticks(np.arange(0, n_classes, 5)+col_width)
ax.set_xticklabels(['{:02d}'.format(c) for c in range(0, n_classes, 5)])
ax.legend((bar_train[0], bar_test[0]), ('train set', 'test set'))
plt.show()
###Output
_____no_output_____
###Markdown
From this plot we notice that there's a strong *imbalance among the classes*. Indeed, some classes are relatively over-represented, while some others are much less common. However, we see that the data distribution is almost the same between training and testing set, and this is good news: it looks like we won't have problems related to *dataset shift* when we evaluate our model on the test data. ---- Step 2: Design and Test a Model Architecture Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset). There are various aspects to consider when thinking about this problem:- Neural network architecture- Play around with preprocessing techniques (normalization, rgb to grayscale, etc)- Number of examples per label (some have more than others).- Generate fake data. Implementation Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow. **Feature preprocessing**
###Code
import cv2
def preprocess_features(X, equalize_hist=True):
# convert from RGB to YUV
X = np.array([np.expand_dims(cv2.cvtColor(rgb_img, cv2.COLOR_RGB2YUV)[:, :, 0], 2) for rgb_img in X])
# adjust image contrast
if equalize_hist:
X = np.array([np.expand_dims(cv2.equalizeHist(np.uint8(img)), 2) for img in X])
X = np.float32(X)
# standardize features
X -= np.mean(X, axis=0)
X /= (np.std(X, axis=0) + np.finfo('float32').eps)
return X
X_train_norm = preprocess_features(X_train)
X_test_norm = preprocess_features(X_test)
###Output
_____no_output_____
###Markdown
Question 1 _Describe how you preprocessed the data. Why did you choose that technique?_ **Answer:**Following this paper [[Sermanet, LeCun]](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf) I employed three main steps of feature preprocessing:1) *each image is converted from RGB to YUV color space, then only the Y channel is used.* This choice can sound surprising at first, but the cited paper shows that it leads to the best performing model; if we think about it, arguably we are able to distinguish all the traffic signs just by looking at the grayscale image.2) *contrast of each image is adjusted by means of histogram equalization*. This is to mitigate the numerous situations in which the image contrast is really poor.3) *each image is centered on zero mean and divided by its standard deviation*. This feature scaling is known to have beneficial effects on the gradient descent performed by the optimizer.
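A quick sanity check of step 3 could look like this (illustrative sketch, relying only on the `X_train_norm` array computed above): after `preprocess_features`, the training set should be approximately zero-mean with roughly unit standard deviation.
```python
# Illustrative check of the standardization step (step 3 above)
print(float(X_train_norm.mean()), float(X_train_norm.std()))  # expect roughly 0.0 and 1.0
```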
###Code
from sklearn.model_selection import train_test_split
from keras.preprocessing.image import ImageDataGenerator
# split into train and validation
VAL_RATIO = 0.2
X_train_norm, X_val_norm, y_train, y_val = train_test_split(X_train_norm, y_train, test_size=VAL_RATIO, random_state=0)
# create the generator to perform online data augmentation
image_datagen = ImageDataGenerator(rotation_range=15.,
zoom_range=0.2,
width_shift_range=0.1,
height_shift_range=0.1)
# take a random image from the training set
img_rgb = X_train[0]
# plot the original image
plt.figure(figsize=(1,1))
plt.imshow(img_rgb)
plt.title('Example of RGB image (class = {})'.format(y_train[0]))
plt.show()
# plot some randomly augmented images
rows, cols = 4, 10
fig, ax_array = plt.subplots(rows, cols)
for ax in ax_array.ravel():
augmented_img, _ = image_datagen.flow(np.expand_dims(img_rgb, 0), y_train[0:1]).next()
ax.imshow(np.uint8(np.squeeze(augmented_img)))
plt.setp([a.get_xticklabels() for a in ax_array.ravel()], visible=False)
plt.setp([a.get_yticklabels() for a in ax_array.ravel()], visible=False)
plt.suptitle('Random examples of data augmentation (starting from the previous image)')
plt.show()
###Output
Using TensorFlow backend.
###Markdown
Question 2_Describe how you set up the training, validation and testing data for your model. **Optional**: If you generated additional data, how did you generate the data? Why did you generate the data? What are the differences in the new dataset (with generated data) from the original dataset?_ **Answer:** For the *train and test split*, I just used the ones provided, composed of 39209 and 12630 examples respectively. To get *additional data*, I leveraged the `ImageDataGenerator` class provided in the [Keras](https://keras.io/preprocessing/image/) library. No need to re-invent the wheel! In this way I could perform data augmentation online, during the training. Training images are randomly rotated, zoomed and shifted, but just in a narrow range, in order to create some variety in the data while not completely distorting the original feature content. The result of this augmentation process is visible in the previous figure.
###Code
import tensorflow as tf
from tensorflow.contrib.layers import flatten
def weight_variable(shape, mu=0, sigma=0.1):
initialization = tf.truncated_normal(shape=shape, mean=mu, stddev=sigma)
return tf.Variable(initialization)
def bias_variable(shape, start_val=0.1):
initialization = tf.constant(start_val, shape=shape)
return tf.Variable(initialization)
def conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME'):
return tf.nn.conv2d(input=x, filter=W, strides=strides, padding=padding)
def max_pool_2x2(x):
return tf.nn.max_pool(value=x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# network architecture definition
def my_net(x, n_classes):
c1_out = 64
conv1_W = weight_variable(shape=(3, 3, 1, c1_out))
conv1_b = bias_variable(shape=(c1_out,))
conv1 = tf.nn.relu(conv2d(x, conv1_W) + conv1_b)
pool1 = max_pool_2x2(conv1)
drop1 = tf.nn.dropout(pool1, keep_prob=keep_prob)
c2_out = 128
conv2_W = weight_variable(shape=(3, 3, c1_out, c2_out))
conv2_b = bias_variable(shape=(c2_out,))
conv2 = tf.nn.relu(conv2d(drop1, conv2_W) + conv2_b)
pool2 = max_pool_2x2(conv2)
drop2 = tf.nn.dropout(pool2, keep_prob=keep_prob)
fc0 = tf.concat(1, [flatten(drop1), flatten(drop2)])
fc1_out = 64
fc1_W = weight_variable(shape=(fc0._shape[1].value, fc1_out))
fc1_b = bias_variable(shape=(fc1_out,))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
drop_fc1 = tf.nn.dropout(fc1, keep_prob=keep_prob)
fc2_out = n_classes
fc2_W = weight_variable(shape=(drop_fc1._shape[1].value, fc2_out))
fc2_b = bias_variable(shape=(fc2_out,))
logits = tf.matmul(drop_fc1, fc2_W) + fc2_b
return logits
# placeholders
x = tf.placeholder(dtype=tf.float32, shape=(None, 32, 32, 1))
y = tf.placeholder(dtype=tf.int32, shape=None)
keep_prob = tf.placeholder(tf.float32)
# training pipeline
lr = 0.001
logits = my_net(x, n_classes=n_classes)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y)
loss_function = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=lr)
train_step = optimizer.minimize(loss=loss_function)
###Output
_____no_output_____
###Markdown
Question 3_What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.)_ **Answer:** The final architecture is a relatively shallow network made of 4 layers. The first two layers are convolutional, while the third and last are fully connected. Following [[Sermanet, LeCun]](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf) the outputs of both the first and second convolutional layers are concatenated and fed to the following dense layer. In this way we provide the fully-connected layer with visual patterns at two different levels of abstraction. The last fully-connected layer then maps this representation onto the 43 classes.
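For reference, the sizes that feed the first dense layer follow directly from the `my_net` definition above ('SAME' padding and 2x2 max-pooling on 32x32x1 inputs); a minimal sketch of the arithmetic:
```python
# Tensor sizes implied by my_net for a 32x32x1 input
h = w = 32
c1_out, c2_out = 64, 128
drop1_size = (h // 2) * (w // 2) * c1_out  # after conv1 + 2x2 pooling: 16*16*64 = 16384
drop2_size = (h // 4) * (w // 4) * c2_out  # after conv2 + 2x2 pooling: 8*8*128 = 8192
fc0_size = drop1_size + drop2_size         # concatenated input to fc1: 24576
print(drop1_size, drop2_size, fc0_size)
```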
###Code
# metrics and functions for model evaluation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.cast(y, tf.int64))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
def evaluate(X_data, y_data):
num_examples = X_data.shape[0]
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCHSIZE):
batch_x, batch_y = X_data[offset:offset+BATCHSIZE], y_data[offset:offset+BATCHSIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})
total_accuracy += accuracy * len(batch_x)
return total_accuracy / num_examples
# create a checkpointer to log the weights during training
checkpointer = tf.train.Saver()
# training hyperparameters
BATCHSIZE = 128
EPOCHS = 30
BATCHES_PER_EPOCH = 5000
# start training
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(EPOCHS):
print("EPOCH {} ...".format(epoch + 1))
batch_counter = 0
for batch_x, batch_y in image_datagen.flow(X_train_norm, y_train, batch_size=BATCHSIZE):
batch_counter += 1
sess.run(train_step, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.5})
if batch_counter == BATCHES_PER_EPOCH:
break
# at epoch end, evaluate accuracy on both training and validation set
train_accuracy = evaluate(X_train_norm, y_train)
val_accuracy = evaluate(X_val_norm, y_val)
print('Train Accuracy = {:.3f} - Validation Accuracy: {:.3f}'.format(train_accuracy, val_accuracy))
# log current weights
checkpointer.save(sess, save_path='../checkpoints/traffic_sign_model.ckpt', global_step=epoch)
###Output
EPOCH 1 ...
Train Accuracy = 0.889 - Validation Accuracy: 0.890
EPOCH 2 ...
Train Accuracy = 0.960 - Validation Accuracy: 0.955
EPOCH 3 ...
Train Accuracy = 0.975 - Validation Accuracy: 0.969
EPOCH 4 ...
Train Accuracy = 0.985 - Validation Accuracy: 0.977
EPOCH 5 ...
Train Accuracy = 0.987 - Validation Accuracy: 0.978
EPOCH 6 ...
Train Accuracy = 0.991 - Validation Accuracy: 0.985
EPOCH 7 ...
Train Accuracy = 0.991 - Validation Accuracy: 0.984
EPOCH 8 ...
Train Accuracy = 0.991 - Validation Accuracy: 0.985
EPOCH 9 ...
Train Accuracy = 0.991 - Validation Accuracy: 0.985
EPOCH 10 ...
Train Accuracy = 0.994 - Validation Accuracy: 0.988
EPOCH 11 ...
Train Accuracy = 0.996 - Validation Accuracy: 0.990
EPOCH 12 ...
Train Accuracy = 0.995 - Validation Accuracy: 0.989
EPOCH 13 ...
Train Accuracy = 0.995 - Validation Accuracy: 0.991
EPOCH 14 ...
Train Accuracy = 0.993 - Validation Accuracy: 0.988
EPOCH 15 ...
Train Accuracy = 0.995 - Validation Accuracy: 0.989
EPOCH 16 ...
Train Accuracy = 0.996 - Validation Accuracy: 0.992
EPOCH 17 ...
Train Accuracy = 0.996 - Validation Accuracy: 0.992
EPOCH 18 ...
Train Accuracy = 0.997 - Validation Accuracy: 0.992
EPOCH 19 ...
Train Accuracy = 0.996 - Validation Accuracy: 0.992
EPOCH 20 ...
Train Accuracy = 0.993 - Validation Accuracy: 0.986
EPOCH 21 ...
Train Accuracy = 0.997 - Validation Accuracy: 0.993
EPOCH 22 ...
Train Accuracy = 0.995 - Validation Accuracy: 0.988
EPOCH 23 ...
Train Accuracy = 0.996 - Validation Accuracy: 0.990
EPOCH 24 ...
Train Accuracy = 0.996 - Validation Accuracy: 0.991
EPOCH 25 ...
Train Accuracy = 0.997 - Validation Accuracy: 0.991
EPOCH 26 ...
Train Accuracy = 0.997 - Validation Accuracy: 0.991
EPOCH 27 ...
Train Accuracy = 0.996 - Validation Accuracy: 0.994
EPOCH 28 ...
Train Accuracy = 0.997 - Validation Accuracy: 0.992
EPOCH 29 ...
Train Accuracy = 0.996 - Validation Accuracy: 0.992
EPOCH 30 ...
Train Accuracy = 0.996 - Validation Accuracy: 0.991
###Markdown
Now we can test the model. Let's load the weights of the epoch with the highest accuracy on the validation set, which are the most promising :-)
###Code
# testing the model
with tf.Session() as sess:
# restore saved session with highest validation accuracy
checkpointer.restore(sess, '../checkpoints/traffic_sign_model.ckpt-27')
test_accuracy = evaluate(X_test_norm, y_test)
print('Performance on test set: {:.3f}'.format(test_accuracy))
###Output
Performance on test set: 0.953
###Markdown
Question 4_How did you train your model? (Type of optimizer, batch size, epochs, hyperparameters, etc.)_ **Answer:** For the training I used the *Adam optimizer*, which often proves to be a good choice to avoid the patient search for the right SGD parameters. The *batch size* was set to 128 due to memory constraints. Every 5000 batches visited, an evaluation on both the training and validation sets is performed (with a batch size of 128, this means each "epoch" draws 5000 x 128 = 640,000 augmented samples, roughly 20x the size of the un-augmented training split). In order to avoid overfitting, both data augmentation and dropout (with drop probability of 0.5) are employed extensively. Question 5_What approach did you take in coming up with a solution to this problem? It may have been a process of trial and error, in which case, outline the steps you took to get to the final solution and why you chose those steps. Perhaps your solution involved an already well known implementation or architecture. In this case, discuss why you think this is suitable for the current problem._ **Answer:** The network architecture is based on the paper [[Sermanet, LeCun]](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf), in which the authors tackle the same problem (traffic sign classification), though using a different dataset. In section *II-A* of the paper, the authors explain that they found it beneficial to feed the dense layers with the output of both the previous convolutional layers. Indeed, in this way the classifier is explicitly provided with both the local "motifs" (learned by conv1) and the more "global" shapes and structure (learned by conv2) found in the features. I tried to replicate the same architecture, made of 2 convolutional and 2 fully connected layers. The number of features learned was lowered until the training was feasible on my laptop as well! --- Step 3: Test a Model on New ImagesTake several pictures of traffic signs that you find on the web or around you (at least five), and run them through your classifier on your computer to produce example results. The classifier might not recognize some local signs but it could prove interesting nonetheless.You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name. ImplementationUse the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
###Code
### Load the images and plot them here.
import os
# load new images
new_images_dir = '../other_signs'
new_test_images = [os.path.join(new_images_dir, f) for f in os.listdir(new_images_dir)]
new_test_images = [cv2.cvtColor(cv2.imread(f), cv2.COLOR_BGR2RGB) for f in new_test_images]
# manually annotated labels for these new images
new_targets = [1, 13, 17, 35, 40]
# plot new test images
fig, axarray = plt.subplots(1, len(new_test_images))
for i, ax in enumerate(axarray.ravel()):
ax.imshow(new_test_images[i])
ax.set_title('{}'.format(i))
plt.setp(ax.get_xticklabels(), visible=False)
plt.setp(ax.get_yticklabels(), visible=False)
ax.set_xticks([]), ax.set_yticks([])
###Output
_____no_output_____
###Markdown
Question 6_Choose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult? It could be helpful to plot the images in the notebook._ **Answer:** All five of the previous images are taken from real-world videos, so they're far from being perfectly "clean". For example, in figure 4 ("roundabout mandatory" sign) the image contrast is so bad that it's barely possible to recognize its meaning. Let's test the trained model on these raw new images:
###Code
# first things first: feature preprocessing
new_test_images_norm = preprocess_features(new_test_images)
with tf.Session() as sess:
# restore saved session
checkpointer.restore(sess, '../checkpoints/traffic_sign_model.ckpt-27')
# predict on unseen images
prediction = np.argmax(np.array(sess.run(logits, feed_dict={x: new_test_images_norm, keep_prob: 1.})), axis=1)
for i, pred in enumerate(prediction):
print('Image {} - Target = {:02d}, Predicted = {:02d}'.format(i, new_targets[i], pred))
print('> Model accuracy: {:.02f}'.format(np.sum(new_targets==prediction)/len(new_targets)))
###Output
Image 0 - Target = 01, Predicted = 01
Image 1 - Target = 13, Predicted = 13
Image 2 - Target = 17, Predicted = 02
Image 3 - Target = 35, Predicted = 35
Image 4 - Target = 40, Predicted = 40
> Model accuracy: 0.80
###Markdown
Question 7_Is your model able to perform equally well on captured pictures when compared to testing on the dataset? The simplest way to do this is to check the accuracy of the predictions. For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate._ **Answer:** Evaluated on these 5 newly captured pictures, the model accuracy is 80%. While it's true that the performance drop w.r.t. the test set is high (around 15%), we must keep in mind that 5 images are too few to be of any statistical significance. Visualizing the softmax scores can give us a better idea about the classification process. Let's see:
###Code
# visualizing softmax probabilities
with tf.Session() as sess:
# restore saved session
checkpointer.restore(sess, '../checkpoints/traffic_sign_model.ckpt-27')
# certainty of predictions
K = 3
top_3 = sess.run(tf.nn.top_k(logits, k=K), feed_dict={x: new_test_images_norm, keep_prob: 1.})
# compute softmax probabilities
softmax_probs = sess.run(tf.nn.softmax(logits), feed_dict={x: new_test_images_norm, keep_prob: 1.})
# plot softmax probs along with traffic sign examples
n_images = new_test_images_norm.shape[0]
fig, axarray = plt.subplots(n_images, 2)
plt.suptitle('Visualization of softmax probabilities for each example', fontweight='bold')
for r in range(0, n_images):
axarray[r, 0].imshow(np.squeeze(new_test_images[r]))
axarray[r, 0].set_xticks([]), axarray[r, 0].set_yticks([])
plt.setp(axarray[r, 0].get_xticklabels(), visible=False)
plt.setp(axarray[r, 0].get_yticklabels(), visible=False)
axarray[r, 1].bar(np.arange(n_classes), softmax_probs[r])
axarray[r, 1].set_ylim([0, 1])
# print top K predictions of the model for each example, along with confidence (softmax score)
for i in range(len(new_test_images)):
print('Top {} model predictions for image {} (Target is {:02d})'.format(K, i, new_targets[i]))
for k in range(K):
top_c = top_3[1][i][k]
print(' Prediction = {:02d} with confidence {:.2f}'.format(top_c, softmax_probs[i][top_c]))
###Output
Top 3 model predictions for image 0 (Target is 01)
Prediction = 01 with confidence 0.99
Prediction = 02 with confidence 0.01
Prediction = 05 with confidence 0.00
Top 3 model predictions for image 1 (Target is 13)
Prediction = 13 with confidence 0.99
Prediction = 38 with confidence 0.00
Prediction = 12 with confidence 0.00
Top 3 model predictions for image 2 (Target is 17)
Prediction = 02 with confidence 0.31
Prediction = 38 with confidence 0.27
Prediction = 17 with confidence 0.22
Top 3 model predictions for image 3 (Target is 35)
Prediction = 35 with confidence 0.34
Prediction = 03 with confidence 0.30
Prediction = 08 with confidence 0.13
Top 3 model predictions for image 4 (Target is 40)
Prediction = 40 with confidence 0.44
Prediction = 12 with confidence 0.39
Prediction = 38 with confidence 0.11
|
Chapter 8 - Experimental Design - the Basics/ch08.ipynb | ###Markdown
Chapter 8: Experimental design - the basics Libraries and data Libraries
###Code
# Common libraries
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
from statsmodels.formula.api import ols
import seaborn as sns
# Chapter-specific libraries
import statsmodels.stats.proportion as ssprop # To calculate the standardized effect size
import statsmodels.stats.power as ssp #To calculate the standard power
###Output
_____no_output_____
###Markdown
Data
###Code
hist_data_df = pd.read_csv('chap8-historical_data.csv')
exp_data_df = pd.read_csv('chap8-experimental_data.csv')
###Output
_____no_output_____
###Markdown
Determining random assignment and sample size/power
###Code
### Basic random assignment
K = 2
assgnt = np.random.uniform(0,1,1)
group = "control" if assgnt <= 1/K else "treatment"
# Standardized effect size (Cohen's h) for a difference between booking proportions of 0.194 and 0.184
effect_size = ssprop.proportion_effectsize(0.194, 0.184)
# Solve for the required sample size per group (the argument left as None) at alpha=0.05 and power=0.8
ssp.tt_ind_solve_power(effect_size = effect_size,
alpha = 0.05,
nobs1 = None,
alternative = 'larger',
power=0.8)
### Null experimental dataset
exp_null_data_df = hist_data_df.copy().sample(2000)
exp_null_data_df['oneclick'] = np.where(np.random.uniform(0,1,2000)>0.5, 1, 0)
mod = smf.logit('booked ~ oneclick + age + gender', data = exp_null_data_df)
mod.fit(disp=0).summary()
### Function definitions
## Metric function
def log_reg_fun(dat_df):
model = smf.logit('booked ~ oneclick + age + gender', data = dat_df)
res = model.fit(disp=0)
coeff = res.params['oneclick']
return coeff
## Bootstrap CI function
def boot_CI_fun(dat_df, metric_fun, B = 100, conf_level = 0.9):
#Setting sample size
N = len(dat_df)
conf_level = conf_level
coeffs = []
for i in range(B):
sim_data_df = dat_df.sample(n=N, replace = True)
coeff = metric_fun(sim_data_df)
coeffs.append(coeff)
coeffs.sort()
start_idx = round(B * (1 - conf_level) / 2)
end_idx = - round(B * (1 - conf_level) / 2)
confint = [coeffs[start_idx], coeffs[end_idx]]
return(confint)
## decision function
def decision_fun(dat_df, metric_fun, B = 100, conf_level = 0.9):
boot_CI = boot_CI_fun(dat_df, metric_fun, B = B, conf_level = conf_level)
decision = 1 if boot_CI[0] > 0 else 0
return decision
## Function for single simulation
def single_sim_fun(Nexp, dat_df = hist_data_df, metric_fun = log_reg_fun,
eff_size = 0.01, B = 100, conf_level = 0.9):
#Adding predicted probability of booking
hist_model = smf.logit('booked ~ age + gender + period', data = dat_df)
res = hist_model.fit(disp=0)
sim_data_df = dat_df.copy()
sim_data_df['pred_prob_bkg'] = res.predict()
#Filtering down to desired sample size
sim_data_df = sim_data_df.sample(Nexp)
#Random assignment of experimental groups
sim_data_df['oneclick'] = np.where(np.random.uniform(size=Nexp) <= 0.5, 0, 1)
# Adding effect to treatment group
sim_data_df['pred_prob_bkg'] = np.where(sim_data_df.oneclick == 1,
sim_data_df.pred_prob_bkg + eff_size,
sim_data_df.pred_prob_bkg)
sim_data_df['booked'] = np.where(sim_data_df.pred_prob_bkg >= \
np.random.uniform(size=Nexp), 1, 0)
#Calculate the decision (we want it to be 1)
decision = decision_fun(sim_data_df, metric_fun = metric_fun, B = B,
conf_level = conf_level)
return decision
## power simulation function
def power_sim_fun(dat_df, metric_fun, Nexp, eff_size, Nsim, B = 100,
conf_level = 0.9):
power_lst = []
for i in range(Nsim):
power_lst.append(single_sim_fun(Nexp = Nexp, dat_df = dat_df,
metric_fun = metric_fun,
eff_size = eff_size, B = B,
conf_level = conf_level))
power = np.mean(power_lst)
return(power)
## Single simulation
single_sim_fun(Nexp = 1000)
## Power simulation
power_sim_fun(dat_df=hist_data_df, metric_fun = log_reg_fun, Nexp = int(4e4),
eff_size=0.01, Nsim=20)
#Alternative parallelized function for higher speed
from joblib import Parallel, delayed
import psutil
def opt_power_sim_fun(dat_df, metric_fun, Nexp, eff_size, Nsim, B = 100, conf_level = 0.9):
#Parallelized version with joblib
n_cpu = psutil.cpu_count() #Counting number of cores on machine
counter = [Nexp] * Nsim
res_parallel = Parallel(n_jobs = n_cpu)(delayed(single_sim_fun)(Nexp) for Nexp in counter)
pwr = np.mean(res_parallel)
return(pwr)
opt_power_sim_fun(dat_df=hist_data_df, metric_fun = log_reg_fun, Nexp = int(1e3), eff_size=0.01, Nsim=10)
###Output
_____no_output_____
###Markdown
Analyzing and interpreting experimental results
###Code
### Logistic regression
log_mod_exp = smf.logit('booked ~ age + gender + oneclick', data = exp_data_df)
res = log_mod_exp.fit()
res.summary()
### Calculating average difference in probabilities
def diff_prob_fun(dat_df, reg_model = log_mod_exp):
#Creating new copies of data
no_button_df = dat_df.loc[:, 'age':'gender']
no_button_df.loc[:, 'oneclick'] = 0
button_df = dat_df.loc[:,'age':'gender']
button_df.loc[:, 'oneclick'] = 1
#Adding the predictions of the model
no_button_df.loc[:, 'pred_bkg_rate'] = res.predict(no_button_df)
button_df.loc[:, 'pred_bkg_rate'] = res.predict(button_df)
diff = button_df.loc[:,'pred_bkg_rate'] \
- no_button_df.loc[:,'pred_bkg_rate']
return diff.mean()
diff_prob_fun(exp_data_df, reg_model = log_mod_exp)
#Calculating Bootstrap 90%-CI for this difference
boot_CI_fun(exp_data_df, diff_prob_fun, B = 100, conf_level = 0.9)
###Output
_____no_output_____ |
nbs/05-orchestrator.ipynb | ###Markdown
Core API Imports
###Code
#exports
import pandas as pd
from tqdm import tqdm
import warnings  # handle_capping calls warnings.warn as well as warn
from warnings import warn
from requests.models import Response
from ElexonDataPortal.dev import utils, raw
from IPython.display import JSON
from ElexonDataPortal.dev import clientprep
import os
from dotenv import load_dotenv
assert load_dotenv('../.env'), 'Environment variables could not be loaded'
api_key = os.environ['BMRS_API_KEY']
API_yaml_fp = '../data/BMRS_API.yaml'
method_info = clientprep.construct_method_info_dict(API_yaml_fp)
JSON([method_info])
###Output
_____no_output_____
###Markdown
Request Types
###Code
#exports
def retry_request(raw, method, kwargs, n_attempts=3):
attempts = 0
success = False
while (attempts < n_attempts) and (success == False):
try:
r = getattr(raw, method)(**kwargs)
utils.check_status(r)
success = True
except Exception as e:
attempts += 1
if attempts == n_attempts:
raise e
return r
def if_possible_parse_local_datetime(df):
dt_cols_with_period_in_name = ['startTimeOfHalfHrPeriod', 'initialForecastPublishingPeriodCommencingTime', 'latestForecastPublishingPeriodCommencingTime', 'outTurnPublishingPeriodCommencingTime']
dt_cols = [col for col in df.columns if 'date' in col.lower() or col in dt_cols_with_period_in_name]
sp_cols = [col for col in df.columns if 'period' in col.lower() and col not in dt_cols_with_period_in_name]
if len(dt_cols)==1 and len(sp_cols)==1:
df = utils.parse_local_datetime(df, dt_col=dt_cols[0], SP_col=sp_cols[0])
return df
def SP_and_date_request(
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
start_date: str,
end_date: str,
n_attempts: int=3,
**kwargs
):
assert start_date is not None, '`start_date` must be specified'
assert end_date is not None, '`end_date` must be specified'
df = pd.DataFrame()
stream = '_'.join(method.split('_')[1:])
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
df_dates_SPs = utils.dt_rng_to_SPs(start_date, end_date)
date_SP_tuples = list(df_dates_SPs.reset_index().itertuples(index=False, name=None))[:-1]
for datetime, query_date, SP in tqdm(date_SP_tuples, desc=stream, total=len(date_SP_tuples)):
kwargs.update({
kwargs_map['date']: datetime.strftime('%Y-%m-%d'),
kwargs_map['SP']: SP,
})
missing_kwargs = list(set(func_params) - set(['SP', 'date'] + list(kwargs.keys())))
assert len(missing_kwargs) == 0, f"The following kwargs are missing: {', '.join(missing_kwargs)}"
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df_SP = utils.parse_xml_response(r)
df = df.append(df_SP)
df = utils.expand_cols(df)
df = if_possible_parse_local_datetime(df)
return df
method_info_mock = {
'get_B1610': {
'request_type': 'SP_and_date',
'kwargs_map': {'date': 'SettlementDate', 'SP': 'Period'},
'func_kwargs': {
'APIKey': 'AP8DA23',
'date': '2020-01-01',
'SP': '1',
'NGCBMUnitID': '*',
'ServiceType': 'csv'
}
}
}
method = 'get_B1610'
kwargs = {'NGCBMUnitID': '*'}
kwargs_map = method_info_mock[method]['kwargs_map']
func_params = list(method_info_mock[method]['func_kwargs'].keys())
df = SP_and_date_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date='2020-01-01',
end_date='2020-01-01 01:30',
**kwargs
)
df.head(3)
#exports
def handle_capping(
r: Response,
df: pd.DataFrame,
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
end_date: str,
request_type: str,
**kwargs
):
capping_applied = utils.check_capping(r)
assert capping_applied != None, 'No information on whether or not capping limits had been breached could be found in the response metadata'
if capping_applied == True: # only subset of date range returned
dt_cols_with_period_in_name = ['startTimeOfHalfHrPeriod']
dt_cols = [col for col in df.columns if ('date' in col.lower() or col in dt_cols_with_period_in_name) and ('end' not in col.lower())]
if len(dt_cols) == 1:
start_date = pd.to_datetime(df[dt_cols[0]]).max().strftime('%Y-%m-%d')
if 'start_time' in kwargs.keys():
kwargs['start_time'] = '00:00'
if pd.to_datetime(start_date) >= pd.to_datetime(end_date):
warnings.warn(f'The `end_date` ({end_date}) was earlier than `start_date` ({start_date})\nThe `start_date` will be set one day earlier than the `end_date`.')
start_date = (pd.to_datetime(end_date) - pd.Timedelta(days=1)).strftime('%Y-%m-%d')
warn(f'Response was capped, request is rerunning for missing data from {start_date}')
df_rerun = date_range_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date=start_date,
end_date=end_date,
request_type=request_type,
**kwargs
)
df = df.append(df_rerun)
df = df.drop_duplicates()
else:
warn(f'Response was capped: a new `start_date` to continue requesting could not be determined automatically, please handle manually for `{method}`')
return df
def date_range_request(
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
start_date: str,
end_date: str,
request_type: str,
n_attempts: int=3,
**kwargs
):
assert start_date is not None, '`start_date` must be specified'
assert end_date is not None, '`end_date` must be specified'
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
for kwarg in ['start_time', 'end_time']:
if kwarg not in kwargs_map.keys():
kwargs_map[kwarg] = kwarg
kwargs[kwargs_map['start_date']], kwargs[kwargs_map['start_time']] = pd.to_datetime(start_date).strftime('%Y-%m-%d %H:%M:%S').split(' ')
kwargs[kwargs_map['end_date']], kwargs[kwargs_map['end_time']] = pd.to_datetime(end_date).strftime('%Y-%m-%d %H:%M:%S').split(' ')
if 'SP' in kwargs_map.keys():
kwargs[kwargs_map['SP']] = '*'
func_params.remove('SP')
func_params += [kwargs_map['SP']]
missing_kwargs = list(set(func_params) - set(['start_date', 'end_date', 'start_time', 'end_time'] + list(kwargs.keys())))
assert len(missing_kwargs) == 0, f"The following kwargs are missing: {', '.join(missing_kwargs)}"
if request_type == 'date_range':
kwargs.pop(kwargs_map['start_time'])
kwargs.pop(kwargs_map['end_time'])
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df = utils.parse_xml_response(r)
df = if_possible_parse_local_datetime(df)
# Handling capping
df = handle_capping(
r,
df,
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
end_date=end_date,
request_type=request_type,
**kwargs
)
return df
method = 'get_B1540'
kwargs = {}
kwargs_map = method_info[method]['kwargs_map']
func_params = list(method_info[method]['func_kwargs'].keys())
request_type = method_info[method]['request_type']
df = date_range_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date='2020-01-01',
end_date='2021-01-01',
request_type=request_type,
**kwargs
)
df.head(3)
#exports
def year_request(
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
start_date: str,
end_date: str,
n_attempts: int=3,
**kwargs
):
assert start_date is not None, '`start_date` must be specified'
assert end_date is not None, '`end_date` must be specified'
df = pd.DataFrame()
stream = '_'.join(method.split('_')[1:])
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
start_year = int(pd.to_datetime(start_date).strftime('%Y'))
end_year = int(pd.to_datetime(end_date).strftime('%Y'))
for year in tqdm(range(start_year, end_year+1), desc=stream):
kwargs.update({kwargs_map['year']: year})
missing_kwargs = list(set(func_params) - set(['year'] + list(kwargs.keys())))
assert len(missing_kwargs) == 0, f"The following kwargs are missing: {', '.join(missing_kwargs)}"
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df_year = utils.parse_xml_response(r)
df = df.append(df_year)
df = if_possible_parse_local_datetime(df)
return df
method = 'get_B0650'
kwargs = {}
kwargs_map = method_info[method]['kwargs_map']
func_params = list(method_info[method]['func_kwargs'].keys())
df = year_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date='2020-01-01',
end_date='2021-01-01 01:30',
**kwargs
)
df.head(3)
#exports
def construct_year_month_pairs(start_date, end_date):
dt_rng = pd.date_range(start_date, end_date, freq='M')
if len(dt_rng) == 0:
year_month_pairs = [tuple(pd.to_datetime(start_date).strftime('%Y %b').split(' '))]
else:
year_month_pairs = [tuple(dt.strftime('%Y %b').split(' ')) for dt in dt_rng]
    year_month_pairs = [(int(year), month.upper()) for year, month in year_month_pairs]
return year_month_pairs
def year_and_month_request(
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
start_date: str,
end_date: str,
n_attempts: int=3,
**kwargs
):
assert start_date is not None, '`start_date` must be specified'
assert end_date is not None, '`end_date` must be specified'
df = pd.DataFrame()
stream = '_'.join(method.split('_')[1:])
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
year_month_pairs = construct_year_month_pairs(start_date, end_date)
for year, month in tqdm(year_month_pairs, desc=stream):
kwargs.update({
kwargs_map['year']: year,
kwargs_map['month']: month
})
missing_kwargs = list(set(func_params) - set(['year', 'month'] + list(kwargs.keys())))
assert len(missing_kwargs) == 0, f"The following kwargs are missing: {', '.join(missing_kwargs)}"
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df_year = utils.parse_xml_response(r)
df = df.append(df_year)
df = if_possible_parse_local_datetime(df)
return df
method = 'get_B0640'
kwargs = {}
kwargs_map = method_info[method]['kwargs_map']
func_params = list(method_info[method]['func_kwargs'].keys())
df = year_and_month_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date='2020-01-01',
end_date='2020-03-31',
**kwargs
)
df.head(3)
#exports
def clean_year_week(year, week):
year = int(year)
if week == '00':
year = int(year) - 1
week = 52
else:
year = int(year)
        week = int(week)  # int() already drops the leading zero; .strip('0') would also wrongly drop trailing zeros (e.g. '10' -> 1)
return year, week
def construct_year_week_pairs(start_date, end_date):
dt_rng = pd.date_range(start_date, end_date, freq='W')
if len(dt_rng) == 0:
year_week_pairs = [tuple(pd.to_datetime(start_date).strftime('%Y %W').split(' '))]
else:
year_week_pairs = [tuple(dt.strftime('%Y %W').split(' ')) for dt in dt_rng]
year_week_pairs = [clean_year_week(year, week) for year, week in year_week_pairs]
return year_week_pairs
def year_and_week_request(
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
start_date: str,
end_date: str,
n_attempts: int=3,
**kwargs
):
assert start_date is not None, '`start_date` must be specified'
assert end_date is not None, '`end_date` must be specified'
df = pd.DataFrame()
stream = '_'.join(method.split('_')[1:])
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
year_week_pairs = construct_year_week_pairs(start_date, end_date)
for year, week in tqdm(year_week_pairs, desc=stream):
kwargs.update({
kwargs_map['year']: year,
kwargs_map['week']: week
})
missing_kwargs = list(set(func_params) - set(['year', 'week'] + list(kwargs.keys())))
assert len(missing_kwargs) == 0, f"The following kwargs are missing: {', '.join(missing_kwargs)}"
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df_year = utils.parse_xml_response(r)
df = df.append(df_year)
df = if_possible_parse_local_datetime(df)
return df
method = 'get_B0630'
kwargs = {}
kwargs_map = method_info[method]['kwargs_map']
func_params = list(method_info[method]['func_kwargs'].keys())
df = year_and_week_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date='2020-01-01',
end_date='2020-01-31',
**kwargs
)
df.head(3)
#exports
def non_temporal_request(
method: str,
api_key: str,
n_attempts: int=3,
**kwargs
):
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df = utils.parse_xml_response(r)
df = if_possible_parse_local_datetime(df)
return df
###Output
_____no_output_____
###Markdown
Query Orchestrator
###Code
#exports
def query_orchestrator(
method: str,
api_key: str,
request_type: str,
kwargs_map: dict=None,
func_params: list=None,
start_date: str=None,
end_date: str=None,
n_attempts: int=3,
**kwargs
):
if request_type not in ['non_temporal']:
kwargs.update({
'kwargs_map': kwargs_map,
'func_params': func_params,
'start_date': start_date,
'end_date': end_date,
})
if request_type in ['date_range', 'date_time_range']:
kwargs.update({
'request_type': request_type,
})
request_type_to_func = {
'SP_and_date': SP_and_date_request,
'date_range': date_range_request,
'date_time_range': date_range_request,
'year': year_request,
'year_and_month': year_and_month_request,
'year_and_week': year_and_week_request,
'non_temporal': non_temporal_request
}
assert request_type in request_type_to_func.keys(), f"{request_type} must be one of: {', '.join(request_type_to_func.keys())}"
request_func = request_type_to_func[request_type]
df = request_func(
method=method,
api_key=api_key,
n_attempts=n_attempts,
**kwargs
)
df = df.reset_index(drop=True)
return df
method = 'get_B0630'
start_date = '2020-01-01'
end_date = '2020-01-31'
request_type = method_info[method]['request_type']
kwargs_map = method_info[method]['kwargs_map']
func_params = list(method_info[method]['func_kwargs'].keys())
df = query_orchestrator(
method=method,
api_key=api_key,
request_type=request_type,
kwargs_map=kwargs_map,
func_params=func_params,
start_date=start_date,
end_date=end_date
)
df.head(3)
#hide
from ElexonDataPortal.dev.nbdev import notebook2script
notebook2script('05-orchestrator.ipynb')
###Output
Converted 05-orchestrator.ipynb.
###Markdown
Core API Imports
###Code
#exports
import pandas as pd
from tqdm import tqdm
import warnings  # handle_capping calls warnings.warn as well as warn
from warnings import warn
from typing import Optional
from requests.models import Response
from ElexonDataPortal.dev import utils, raw
from IPython.display import JSON
from ElexonDataPortal.dev import clientprep
import os
from dotenv import load_dotenv
assert load_dotenv('../.env'), 'Environment variables could not be loaded'
api_key = os.environ['BMRS_API_KEY']
API_yaml_fp = '../data/BMRS_API.yaml'
method_info = clientprep.construct_method_info_dict(API_yaml_fp)
JSON([method_info])
###Output
_____no_output_____
###Markdown
Request Types
###Code
#exports
def retry_request(raw, method, kwargs, n_attempts=3):
attempts = 0
success = False
while (attempts < n_attempts) and (success == False):
try:
r = getattr(raw, method)(**kwargs)
utils.check_status(r)
success = True
except Exception as e:
attempts += 1
if attempts == n_attempts:
raise e
return r
def if_possible_parse_local_datetime(df):
dt_cols_with_period_in_name = ['startTimeOfHalfHrPeriod', 'initialForecastPublishingPeriodCommencingTime', 'latestForecastPublishingPeriodCommencingTime', 'outTurnPublishingPeriodCommencingTime']
dt_cols = [col for col in df.columns if 'date' in col.lower() or col in dt_cols_with_period_in_name]
sp_cols = [col for col in df.columns if 'period' in col.lower() and col not in dt_cols_with_period_in_name]
if len(dt_cols)==1 and len(sp_cols)==1:
df = utils.parse_local_datetime(df, dt_col=dt_cols[0], SP_col=sp_cols[0])
return df
def SP_and_date_request(
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
start_date: str,
end_date: str,
n_attempts: int=3,
**kwargs
):
assert start_date is not None, '`start_date` must be specified'
assert end_date is not None, '`end_date` must be specified'
df = pd.DataFrame()
stream = '_'.join(method.split('_')[1:])
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
df_dates_SPs = utils.dt_rng_to_SPs(start_date, end_date)
date_SP_tuples = list(df_dates_SPs.reset_index().itertuples(index=False, name=None))[:-1]
for datetime, query_date, SP in tqdm(date_SP_tuples, desc=stream, total=len(date_SP_tuples)):
kwargs.update({
kwargs_map['date']: datetime.strftime('%Y-%m-%d'),
kwargs_map['SP']: SP,
})
missing_kwargs = list(set(func_params) - set(['SP', 'date'] + list(kwargs.keys())))
assert len(missing_kwargs) == 0, f"The following kwargs are missing: {', '.join(missing_kwargs)}"
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df_SP = utils.parse_xml_response(r)
df = pd.concat([df, df_SP])
df = utils.expand_cols(df)
df = if_possible_parse_local_datetime(df)
return df
method_info_mock = {
'get_B1610': {
'request_type': 'SP_and_date',
'kwargs_map': {'date': 'SettlementDate', 'SP': 'Period'},
'func_kwargs': {
'APIKey': 'AP8DA23',
'date': '2020-01-01',
'SP': '1',
'NGCBMUnitID': '*',
'ServiceType': 'csv'
}
}
}
method = 'get_B1610'
kwargs = {'NGCBMUnitID': '*'}
kwargs_map = method_info_mock[method]['kwargs_map']
func_params = list(method_info_mock[method]['func_kwargs'].keys())
df = SP_and_date_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date='2020-01-01',
end_date='2020-01-01 01:30',
**kwargs
)
df.head(3)
#exports
def handle_capping(
r: Response,
df: pd.DataFrame,
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
end_date: str,
request_type: str,
**kwargs
):
capping_applied = utils.check_capping(r)
assert capping_applied != None, 'No information on whether or not capping limits had been breached could be found in the response metadata'
if capping_applied == True: # only subset of date range returned
dt_cols_with_period_in_name = ['startTimeOfHalfHrPeriod']
dt_cols = [col for col in df.columns if ('date' in col.lower() or col in dt_cols_with_period_in_name) and ('end' not in col.lower())]
if len(dt_cols) == 1:
start_date = pd.to_datetime(df[dt_cols[0]]).max().strftime('%Y-%m-%d')
if 'start_time' in kwargs.keys():
kwargs['start_time'] = '00:00'
if pd.to_datetime(start_date) >= pd.to_datetime(end_date):
warnings.warn(f'The `end_date` ({end_date}) was earlier than `start_date` ({start_date})\nThe `start_date` will be set one day earlier than the `end_date`.')
start_date = (pd.to_datetime(end_date) - pd.Timedelta(days=1)).strftime('%Y-%m-%d')
warn(f'Response was capped, request is rerunning for missing data from {start_date}')
df_rerun = date_range_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date=start_date,
end_date=end_date,
request_type=request_type,
**kwargs
)
df = pd.concat([df, df_rerun])
df = df.drop_duplicates()
else:
warn(f'Response was capped: a new `start_date` to continue requesting could not be determined automatically, please handle manually for `{method}`')
return df
def date_range_request(
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
start_date: str,
end_date: str,
request_type: str,
n_attempts: int=3,
**kwargs
):
assert start_date is not None, '`start_date` must be specified'
assert end_date is not None, '`end_date` must be specified'
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
for kwarg in ['start_time', 'end_time']:
if kwarg not in kwargs_map.keys():
kwargs_map[kwarg] = kwarg
kwargs[kwargs_map['start_date']], kwargs[kwargs_map['start_time']] = pd.to_datetime(start_date).strftime('%Y-%m-%d %H:%M:%S').split(' ')
kwargs[kwargs_map['end_date']], kwargs[kwargs_map['end_time']] = pd.to_datetime(end_date).strftime('%Y-%m-%d %H:%M:%S').split(' ')
if 'SP' in kwargs_map.keys():
kwargs[kwargs_map['SP']] = '*'
func_params.remove('SP')
func_params += [kwargs_map['SP']]
missing_kwargs = list(set(func_params) - set(['start_date', 'end_date', 'start_time', 'end_time'] + list(kwargs.keys())))
assert len(missing_kwargs) == 0, f"The following kwargs are missing: {', '.join(missing_kwargs)}"
if request_type == 'date_range':
kwargs.pop(kwargs_map['start_time'])
kwargs.pop(kwargs_map['end_time'])
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df = utils.parse_xml_response(r)
df = if_possible_parse_local_datetime(df)
# Handling capping
df = handle_capping(
r,
df,
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
end_date=end_date,
request_type=request_type,
**kwargs
)
return df
method = 'get_B1540'
kwargs = {}
kwargs_map = method_info[method]['kwargs_map']
func_params = list(method_info[method]['func_kwargs'].keys())
request_type = method_info[method]['request_type']
df = date_range_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date='2020-01-01',
end_date='2021-01-01',
request_type=request_type,
**kwargs
)
df.head(3)
#exports
def year_request(
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
start_date: str,
end_date: str,
n_attempts: int=3,
**kwargs
):
assert start_date is not None, '`start_date` must be specified'
assert end_date is not None, '`end_date` must be specified'
df = pd.DataFrame()
stream = '_'.join(method.split('_')[1:])
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
start_year = int(pd.to_datetime(start_date).strftime('%Y'))
end_year = int(pd.to_datetime(end_date).strftime('%Y'))
for year in tqdm(range(start_year, end_year+1), desc=stream):
kwargs.update({kwargs_map['year']: year})
missing_kwargs = list(set(func_params) - set(['year'] + list(kwargs.keys())))
assert len(missing_kwargs) == 0, f"The following kwargs are missing: {', '.join(missing_kwargs)}"
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df_year = utils.parse_xml_response(r)
df = pd.concat([df, df_year])
df = if_possible_parse_local_datetime(df)
return df
method = 'get_B0650'
kwargs = {}
kwargs_map = method_info[method]['kwargs_map']
func_params = list(method_info[method]['func_kwargs'].keys())
df = year_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date='2020-01-01',
end_date='2021-01-01 01:30',
**kwargs
)
df.head(3)
#exports
def construct_year_month_pairs(start_date, end_date):
dt_rng = pd.date_range(start_date, end_date, freq='M')
if len(dt_rng) == 0:
year_month_pairs = [tuple(pd.to_datetime(start_date).strftime('%Y %b').split(' '))]
else:
year_month_pairs = [tuple(dt.strftime('%Y %b').split(' ')) for dt in dt_rng]
    year_month_pairs = [(int(year), month.upper()) for year, month in year_month_pairs]
return year_month_pairs
def year_and_month_request(
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
start_date: str,
end_date: str,
n_attempts: int=3,
**kwargs
):
assert start_date is not None, '`start_date` must be specified'
assert end_date is not None, '`end_date` must be specified'
df = pd.DataFrame()
stream = '_'.join(method.split('_')[1:])
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
year_month_pairs = construct_year_month_pairs(start_date, end_date)
for year, month in tqdm(year_month_pairs, desc=stream):
kwargs.update({
kwargs_map['year']: year,
kwargs_map['month']: month
})
missing_kwargs = list(set(func_params) - set(['year', 'month'] + list(kwargs.keys())))
assert len(missing_kwargs) == 0, f"The following kwargs are missing: {', '.join(missing_kwargs)}"
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df_year = utils.parse_xml_response(r)
df = pd.concat([df, df_year])
df = if_possible_parse_local_datetime(df)
return df
method = 'get_B0640'
kwargs = {}
kwargs_map = method_info[method]['kwargs_map']
func_params = list(method_info[method]['func_kwargs'].keys())
df = year_and_month_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date='2020-01-01',
end_date='2020-03-31',
**kwargs
)
df.head(3)
#exports
def clean_year_week(year, week):
year = int(year)
if week == '00':
year = int(year) - 1
week = 52
else:
year = int(year)
        week = int(week)  # int() already drops the leading zero; .strip('0') would also wrongly drop trailing zeros (e.g. '10' -> 1)
return year, week
def construct_year_week_pairs(start_date, end_date):
dt_rng = pd.date_range(start_date, end_date, freq='W')
if len(dt_rng) == 0:
year_week_pairs = [tuple(pd.to_datetime(start_date).strftime('%Y %W').split(' '))]
else:
year_week_pairs = [tuple(dt.strftime('%Y %W').split(' ')) for dt in dt_rng]
year_week_pairs = [clean_year_week(year, week) for year, week in year_week_pairs]
return year_week_pairs
def year_and_week_request(
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
start_date: str,
end_date: str,
n_attempts: int=3,
**kwargs
):
assert start_date is not None, '`start_date` must be specified'
assert end_date is not None, '`end_date` must be specified'
df = pd.DataFrame()
stream = '_'.join(method.split('_')[1:])
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
year_week_pairs = construct_year_week_pairs(start_date, end_date)
for year, week in tqdm(year_week_pairs, desc=stream):
kwargs.update({
kwargs_map['year']: year,
kwargs_map['week']: week
})
missing_kwargs = list(set(func_params) - set(['year', 'week'] + list(kwargs.keys())))
assert len(missing_kwargs) == 0, f"The following kwargs are missing: {', '.join(missing_kwargs)}"
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df_year = utils.parse_xml_response(r)
df = pd.concat([df, df_year])
df = if_possible_parse_local_datetime(df)
return df
method = 'get_B0630'
kwargs = {}
kwargs_map = method_info[method]['kwargs_map']
func_params = list(method_info[method]['func_kwargs'].keys())
df = year_and_week_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date='2020-01-01',
end_date='2020-01-31',
**kwargs
)
df.head(3)
#exports
def non_temporal_request(
method: str,
api_key: str,
n_attempts: int=3,
**kwargs
):
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df = utils.parse_xml_response(r)
df = if_possible_parse_local_datetime(df)
return df
###Output
_____no_output_____
###Markdown
Query Orchestrator
###Code
#exports
def query_orchestrator(
method: str,
api_key: str,
request_type: str,
kwargs_map: Optional[dict] = None,
func_params: Optional[list] = None,
start_date: Optional[str] = None,
end_date: Optional[str] = None,
n_attempts: int = 3,
non_local_tz: Optional[str] = None,
**kwargs
):
if request_type not in ['non_temporal']:
kwargs.update({
'kwargs_map': kwargs_map,
'func_params': func_params,
'start_date': start_date,
'end_date': end_date,
})
if request_type in ['date_range', 'date_time_range']:
kwargs.update({
'request_type': request_type,
})
request_type_to_func = {
'SP_and_date': SP_and_date_request,
'date_range': date_range_request,
'date_time_range': date_range_request,
'year': year_request,
'year_and_month': year_and_month_request,
'year_and_week': year_and_week_request,
'non_temporal': non_temporal_request
}
assert request_type in request_type_to_func.keys(), f"{request_type} must be one of: {', '.join(request_type_to_func.keys())}"
request_func = request_type_to_func[request_type]
df = request_func(
method=method,
api_key=api_key,
n_attempts=n_attempts,
**kwargs
)
df = df.reset_index(drop=True)
if (non_local_tz is not None) and ('local_datetime' in df.columns):
df['datetime'] = pd.to_datetime(df['local_datetime'], utc=True).dt.tz_convert(non_local_tz)
df = df.drop(columns='local_datetime')
return df
method = 'get_B0630'
start_date = '2020-01-01'
end_date = '2020-01-31'
request_type = method_info[method]['request_type']
kwargs_map = method_info[method]['kwargs_map']
func_params = list(method_info[method]['func_kwargs'].keys())
df = query_orchestrator(
method=method,
api_key=api_key,
request_type=request_type,
kwargs_map=kwargs_map,
func_params=func_params,
start_date=start_date,
end_date=end_date
)
df.head(3)
#hide
from ElexonDataPortal.dev.nbdev import notebook2script
notebook2script('05-orchestrator.ipynb')
###Output
Converted 05-orchestrator.ipynb.
###Markdown
Core API Imports
###Code
#exports
import pandas as pd
from tqdm import tqdm
import warnings  # handle_capping calls warnings.warn as well as warn
from warnings import warn
from requests.models import Response
from ElexonDataPortal.dev import utils, raw
from IPython.display import JSON
from ElexonDataPortal.dev import clientprep
import os
from dotenv import load_dotenv
assert load_dotenv('../.env'), 'Environment variables could not be loaded'
api_key = os.environ['BMRS_API_KEY']
API_yaml_fp = '../data/BMRS_API.yaml'
method_info = clientprep.construct_method_info_dict(API_yaml_fp)
JSON([method_info])
###Output
_____no_output_____
###Markdown
Request Types
###Code
#exports
def retry_request(raw, method, kwargs, n_attempts=3):
attempts = 0
success = False
while (attempts < n_attempts) and (success == False):
try:
r = getattr(raw, method)(**kwargs)
utils.check_status(r)
success = True
except Exception as e:
attempts += 1
if attempts == n_attempts:
raise e
return r
def if_possible_parse_local_datetime(df):
dt_cols_with_period_in_name = ['startTimeOfHalfHrPeriod', 'initialForecastPublishingPeriodCommencingTime', 'latestForecastPublishingPeriodCommencingTime', 'outTurnPublishingPeriodCommencingTime']
dt_cols = [col for col in df.columns if 'date' in col.lower() or col in dt_cols_with_period_in_name]
sp_cols = [col for col in df.columns if 'period' in col.lower() and col not in dt_cols_with_period_in_name]
if len(dt_cols)==1 and len(sp_cols)==1:
df = utils.parse_local_datetime(df, dt_col=dt_cols[0], SP_col=sp_cols[0])
return df
def SP_and_date_request(
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
start_date: str,
end_date: str,
n_attempts: int=3,
**kwargs
):
assert start_date is not None, '`start_date` must be specified'
assert end_date is not None, '`end_date` must be specified'
df = pd.DataFrame()
stream = '_'.join(method.split('_')[1:])
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
df_dates_SPs = utils.dt_rng_to_SPs(start_date, end_date)
date_SP_tuples = list(df_dates_SPs.reset_index().itertuples(index=False, name=None))[:-1]
for datetime, query_date, SP in tqdm(date_SP_tuples, desc=stream, total=len(date_SP_tuples)):
kwargs.update({
kwargs_map['date']: datetime.strftime('%Y-%m-%d'),
kwargs_map['SP']: SP,
})
missing_kwargs = list(set(func_params) - set(['SP', 'date'] + list(kwargs.keys())))
assert len(missing_kwargs) == 0, f"The following kwargs are missing: {', '.join(missing_kwargs)}"
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df_SP = utils.parse_xml_response(r)
df = pd.concat([df, df_SP])
df = utils.expand_cols(df)
df = if_possible_parse_local_datetime(df)
return df
method_info_mock = {
'get_B1610': {
'request_type': 'SP_and_date',
'kwargs_map': {'date': 'SettlementDate', 'SP': 'Period'},
'func_kwargs': {
'APIKey': 'AP8DA23',
'date': '2020-01-01',
'SP': '1',
'NGCBMUnitID': '*',
'ServiceType': 'csv'
}
}
}
method = 'get_B1610'
kwargs = {'NGCBMUnitID': '*'}
kwargs_map = method_info_mock[method]['kwargs_map']
func_params = list(method_info_mock[method]['func_kwargs'].keys())
df = SP_and_date_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date='2020-01-01',
end_date='2020-01-01 01:30',
**kwargs
)
df.head(3)
#exports
def handle_capping(
r: Response,
df: pd.DataFrame,
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
end_date: str,
request_type: str,
**kwargs
):
capping_applied = utils.check_capping(r)
assert capping_applied != None, 'No information on whether or not capping limits had been breached could be found in the response metadata'
if capping_applied == True: # only subset of date range returned
dt_cols_with_period_in_name = ['startTimeOfHalfHrPeriod']
dt_cols = [col for col in df.columns if ('date' in col.lower() or col in dt_cols_with_period_in_name) and ('end' not in col.lower())]
if len(dt_cols) == 1:
start_date = pd.to_datetime(df[dt_cols[0]]).max().strftime('%Y-%m-%d')
if 'start_time' in kwargs.keys():
kwargs['start_time'] = '00:00'
if pd.to_datetime(start_date) >= pd.to_datetime(end_date):
warnings.warn(f'The `end_date` ({end_date}) was earlier than `start_date` ({start_date})\nThe `start_date` will be set one day earlier than the `end_date`.')
start_date = (pd.to_datetime(end_date) - pd.Timedelta(days=1)).strftime('%Y-%m-%d')
warn(f'Response was capped, request is rerunning for missing data from {start_date}')
df_rerun = date_range_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date=start_date,
end_date=end_date,
request_type=request_type,
**kwargs
)
df = pd.concat([df, df_rerun])
df = df.drop_duplicates()
else:
warn(f'Response was capped: a new `start_date` to continue requesting could not be determined automatically, please handle manually for `{method}`')
return df
def date_range_request(
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
start_date: str,
end_date: str,
request_type: str,
n_attempts: int=3,
**kwargs
):
assert start_date is not None, '`start_date` must be specified'
assert end_date is not None, '`end_date` must be specified'
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
for kwarg in ['start_time', 'end_time']:
if kwarg not in kwargs_map.keys():
kwargs_map[kwarg] = kwarg
kwargs[kwargs_map['start_date']], kwargs[kwargs_map['start_time']] = pd.to_datetime(start_date).strftime('%Y-%m-%d %H:%M:%S').split(' ')
kwargs[kwargs_map['end_date']], kwargs[kwargs_map['end_time']] = pd.to_datetime(end_date).strftime('%Y-%m-%d %H:%M:%S').split(' ')
if 'SP' in kwargs_map.keys():
kwargs[kwargs_map['SP']] = '*'
func_params.remove('SP')
func_params += [kwargs_map['SP']]
missing_kwargs = list(set(func_params) - set(['start_date', 'end_date', 'start_time', 'end_time'] + list(kwargs.keys())))
assert len(missing_kwargs) == 0, f"The following kwargs are missing: {', '.join(missing_kwargs)}"
if request_type == 'date_range':
kwargs.pop(kwargs_map['start_time'])
kwargs.pop(kwargs_map['end_time'])
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df = utils.parse_xml_response(r)
df = if_possible_parse_local_datetime(df)
# Handling capping
df = handle_capping(
r,
df,
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
end_date=end_date,
request_type=request_type,
**kwargs
)
return df
method = 'get_B1540'
kwargs = {}
kwargs_map = method_info[method]['kwargs_map']
func_params = list(method_info[method]['func_kwargs'].keys())
request_type = method_info[method]['request_type']
df = date_range_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date='2020-01-01',
end_date='2021-01-01',
request_type=request_type,
**kwargs
)
df.head(3)
#exports
def year_request(
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
start_date: str,
end_date: str,
n_attempts: int=3,
**kwargs
):
assert start_date is not None, '`start_date` must be specified'
assert end_date is not None, '`end_date` must be specified'
df = pd.DataFrame()
stream = '_'.join(method.split('_')[1:])
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
start_year = int(pd.to_datetime(start_date).strftime('%Y'))
end_year = int(pd.to_datetime(end_date).strftime('%Y'))
for year in tqdm(range(start_year, end_year+1), desc=stream):
kwargs.update({kwargs_map['year']: year})
missing_kwargs = list(set(func_params) - set(['year'] + list(kwargs.keys())))
assert len(missing_kwargs) == 0, f"The following kwargs are missing: {', '.join(missing_kwargs)}"
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df_year = utils.parse_xml_response(r)
df = pd.concat([df, df_year])
df = if_possible_parse_local_datetime(df)
return df
method = 'get_B0650'
kwargs = {}
kwargs_map = method_info[method]['kwargs_map']
func_params = list(method_info[method]['func_kwargs'].keys())
df = year_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date='2020-01-01',
end_date='2021-01-01 01:30',
**kwargs
)
df.head(3)
#exports
def construct_year_month_pairs(start_date, end_date):
dt_rng = pd.date_range(start_date, end_date, freq='M')
if len(dt_rng) == 0:
year_month_pairs = [tuple(pd.to_datetime(start_date).strftime('%Y %b').split(' '))]
else:
year_month_pairs = [tuple(dt.strftime('%Y %b').split(' ')) for dt in dt_rng]
year_month_pairs = [(int(year), month.upper()) for year, month in year_month_pairs]
return year_month_pairs
def year_and_month_request(
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
start_date: str,
end_date: str,
n_attempts: int=3,
**kwargs
):
assert start_date is not None, '`start_date` must be specified'
assert end_date is not None, '`end_date` must be specified'
df = pd.DataFrame()
stream = '_'.join(method.split('_')[1:])
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
year_month_pairs = construct_year_month_pairs(start_date, end_date)
for year, month in tqdm(year_month_pairs, desc=stream):
kwargs.update({
kwargs_map['year']: year,
kwargs_map['month']: month
})
missing_kwargs = list(set(func_params) - set(['year', 'month'] + list(kwargs.keys())))
assert len(missing_kwargs) == 0, f"The following kwargs are missing: {', '.join(missing_kwargs)}"
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df_year = utils.parse_xml_response(r)
df = pd.concat([df, df_year])
df = if_possible_parse_local_datetime(df)
return df
method = 'get_B0640'
kwargs = {}
kwargs_map = method_info[method]['kwargs_map']
func_params = list(method_info[method]['func_kwargs'].keys())
df = year_and_month_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date='2020-01-01',
end_date='2020-03-31',
**kwargs
)
df.head(3)
#exports
def clean_year_week(year, week):
    # `week` arrives zero-padded from strftime('%W'), e.g. '00', '05', '10'
    year = int(year)
    week = int(week)  # int() handles the leading zero without mangling weeks such as '10' or '20'
    if week == 0:
        year -= 1
        week = 52
    return year, week
def construct_year_week_pairs(start_date, end_date):
dt_rng = pd.date_range(start_date, end_date, freq='W')
if len(dt_rng) == 0:
year_week_pairs = [tuple(pd.to_datetime(start_date).strftime('%Y %W').split(' '))]
else:
year_week_pairs = [tuple(dt.strftime('%Y %W').split(' ')) for dt in dt_rng]
year_week_pairs = [clean_year_week(year, week) for year, week in year_week_pairs]
return year_week_pairs
def year_and_week_request(
method: str,
kwargs_map: dict,
func_params: list,
api_key: str,
start_date: str,
end_date: str,
n_attempts: int=3,
**kwargs
):
assert start_date is not None, '`start_date` must be specified'
assert end_date is not None, '`end_date` must be specified'
df = pd.DataFrame()
stream = '_'.join(method.split('_')[1:])
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
year_week_pairs = construct_year_week_pairs(start_date, end_date)
for year, week in tqdm(year_week_pairs, desc=stream):
kwargs.update({
kwargs_map['year']: year,
kwargs_map['week']: week
})
missing_kwargs = list(set(func_params) - set(['year', 'week'] + list(kwargs.keys())))
assert len(missing_kwargs) == 0, f"The following kwargs are missing: {', '.join(missing_kwargs)}"
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df_year = utils.parse_xml_response(r)
df = pd.concat([df, df_year])
df = if_possible_parse_local_datetime(df)
return df
method = 'get_B0630'
kwargs = {}
kwargs_map = method_info[method]['kwargs_map']
func_params = list(method_info[method]['func_kwargs'].keys())
df = year_and_week_request(
method=method,
kwargs_map=kwargs_map,
func_params=func_params,
api_key=api_key,
start_date='2020-01-01',
end_date='2020-01-31',
**kwargs
)
df.head(3)
#exports
def non_temporal_request(
method: str,
api_key: str,
n_attempts: int=3,
**kwargs
):
kwargs.update({
'APIKey': api_key,
'ServiceType': 'xml'
})
r = retry_request(raw, method, kwargs, n_attempts=n_attempts)
df = utils.parse_xml_response(r)
df = if_possible_parse_local_datetime(df)
return df
###Output
_____no_output_____
###Markdown
Query Orchestrator
###Code
#exports
def query_orchestrator(
method: str,
api_key: str,
request_type: str,
kwargs_map: dict=None,
func_params: list=None,
start_date: str=None,
end_date: str=None,
n_attempts: int=3,
**kwargs
):
if request_type not in ['non_temporal']:
kwargs.update({
'kwargs_map': kwargs_map,
'func_params': func_params,
'start_date': start_date,
'end_date': end_date,
})
if request_type in ['date_range', 'date_time_range']:
kwargs.update({
'request_type': request_type,
})
request_type_to_func = {
'SP_and_date': SP_and_date_request,
'date_range': date_range_request,
'date_time_range': date_range_request,
'year': year_request,
'year_and_month': year_and_month_request,
'year_and_week': year_and_week_request,
'non_temporal': non_temporal_request
}
assert request_type in request_type_to_func.keys(), f"{request_type} must be one of: {', '.join(request_type_to_func.keys())}"
request_func = request_type_to_func[request_type]
df = request_func(
method=method,
api_key=api_key,
n_attempts=n_attempts,
**kwargs
)
df = df.reset_index(drop=True)
return df
method = 'get_B0630'
start_date = '2020-01-01'
end_date = '2020-01-31'
request_type = method_info[method]['request_type']
kwargs_map = method_info[method]['kwargs_map']
func_params = list(method_info[method]['func_kwargs'].keys())
df = query_orchestrator(
method=method,
api_key=api_key,
request_type=request_type,
kwargs_map=kwargs_map,
func_params=func_params,
start_date=start_date,
end_date=end_date
)
df.head(3)
#hide
from ElexonDataPortal.dev.nbdev import notebook2script
notebook2script('05-orchestrator.ipynb')
###Output
Converted 05-orchestrator.ipynb.
|
notebooks/2_straddle.ipynb | ###Markdown
Example 1 - pricing a straddle Using some more complex (but still simple) operations, we can approximate the price of an ATMF straddle. $$ STRADDLE_{ATMF} \approx \frac{2}{\sqrt{2\pi}} F \times \sigma \sqrt{T} $$$$ \sigma = \text{implied volatility} $$$$ T = \text{time-to-maturity} $$$$ F = \text{forward of the underlier} $$ Let's start with defining the straddle's implied volatility and time-to-maturity. Note, we will assume F is equal to 1 and the straddle price can be scaled accordingly.
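(A quick aside, not from the original notebook: one way to see where the $\frac{2}{\sqrt{2\pi}} \approx 0.8$ factor comes from is the Black-Scholes ATMF straddle with zero rates, $2F\,[2N(\sigma\sqrt{T}/2) - 1]$; using the small-$x$ expansion $N(x) \approx \tfrac{1}{2} + \tfrac{x}{\sqrt{2\pi}}$ gives $\approx \frac{2}{\sqrt{2\pi}} F \sigma \sqrt{T}$.)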
###Code
vol = 0.2
time = 1.
2. * ( (1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
This is a lot to type again and again if you want to price several straddles, which is really annoying and error prone. Let's define a function for this so that we can use it over and over
###Code
def straddlePricer( vol, time ):
return 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
Notice this doesn't immediately return anything to the output area. Rest assured the function is defined and we can begin using it. Below, we can compare the function's output to the output of the cell above.
###Code
print( straddlePricer( 0.2, 1.0 ) )
print( 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) )
print( straddlePricer( 0.2, 1.0 ) == ( 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) ) )
###Output
0.15961737689352445
0.15961737689352445
True
###Markdown
Input order doesn't matter as long as we let the function know what we're using as inputs
###Code
print( straddlePricer( time=1.0, vol=0.2 ) )
print( straddlePricer( vol=0.2, time=1.0 ) )
###Output
0.15961737689352445
0.15961737689352445
###Markdown
This is nice, but what if I want to default to certain inputs? By setting the initial inputs below we're implicitly calling each of these arguments "optional". Initially, we'll make only `time` an optional argument (input).
###Code
def straddlePricer( vol, time=1.0 ):
return 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
straddlePricer(0.5,6)
###Output
_____no_output_____
###Markdown
Now, we'll make both `vol` and `time` optional.
###Code
def straddlePricer( vol=0.2, time=1.0 ):
return 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
In other words, we don't need to pass these arguments to call the function. It will use 0.2 for `vol` and 1.0 for `time` by default unless instructed otherwise.
###Code
straddlePricer()
straddlePricer( 0.22 )
###Output
_____no_output_____
###Markdown
Notice there's π in the denominator of the straddle price formula, but the value we used above (3.14) is a rough approximation. Is there a more precise value we could use? Yes, we can use a library called `numpy`. Let's import it first below.
###Code
import numpy
###Output
_____no_output_____
###Markdown
You can access functions of numpy by entering `numpy.xxxxx`, where `xxxxx` is the function you would like to use. `numpy`'s implementation of `pi` is simply `numpy.pi`.
###Code
numpy.pi
###Output
_____no_output_____
###Markdown
Typing `numpy` over and over again can get pretty tedious. Let's make it easier for ourselves by abbreviating the name. Python convention for `numpy` abbreviation is `np`.
###Code
import numpy as np
import pandas as pd
import datetime as dt
np.pi
###Output
_____no_output_____
###Markdown
`numpy` also has a handy square root function (`np.sqrt`)
###Code
np.sqrt( 4 )
###Output
_____no_output_____
###Markdown
Let's incorporate `np.pi` and `np.sqrt` into our simple straddle pricer to make things a little more precise and easier to read.
###Code
def straddlePricer( vol=0.2, time=1.0 ):
return 2. * ( ( 1 / np.sqrt( 2 * np.pi ) ) * vol * np.sqrt( time ) )
straddlePricer()
###Output
_____no_output_____
###Markdown
Let's see what the difference is between our original implementation and our new and improved implementation.
###Code
straddlePricer() - ( 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) )
###Output
_____no_output_____
###Markdown
In this case, the difference in precision and readability isn't huge, but that difference can be valuable at times. In addition to the functionality above, `numpy` can do a lot of other things. For instance, we can generate some random numbers.
###Code
np.random.rand()
###Output
_____no_output_____
###Markdown
Is there a way to see what functions are available? Yes, just tab after `np.`
###Code
#np.
###Output
_____no_output_____
###Markdown
Alternatively, we can call `dir` on `np` to see what is included.
###Code
dir(np)
###Output
_____no_output_____
###Markdown
Continuing with the prior example of pricing our straddle, we can also price the straddle using the Monte Carlo method. We need to generate a normally distributed set of random numbers to simulate the asset's movement through time.
###Code
def straddlePricerMC(vol=0.2, time=1.0, mcPaths=100):
dailyVol = vol / np.sqrt( 252. )
resultSum = 0
for p in range( mcPaths ):
resultSum += np.abs( np.prod( ( 1 + np.random.normal( 0, dailyVol, int( round( time * 252 ) ) ) ) ) - 1 )
return resultSum / mcPaths
straddlePricerMC()
###Output
_____no_output_____
###Markdown
There are a lot of new things going on here. Let's unpack it one line at a time. We know the variance scales linearly with time, so we can either (1) divide the annual variance by the number of trading days (252) and take the square root to get a daily volatility, or (2) take the square root of the variance (the volatility) and divide by the square root of 252. Generally, the latter is clearer and simpler to understand since we typically think in vol terms, but you are free to use whichever method you want.
###Code
# Option #1 above
np.sqrt( vol ** 2 / 252 )
# Comparing the two methods
vol = 0.2
var = vol ** 2
sqrtVarOverTime = np.sqrt( var / 252 )
volOverSqrtTime = vol / np.sqrt( 252 )
valuesEqual = np.isclose( sqrtVarOverTime, volOverSqrtTime )
print( f'sqrtVarOverTime = {sqrtVarOverTime}\nvolOverSqrtTime = {volOverSqrtTime}\nAre they close? {valuesEqual}' )
###Output
sqrtVarOverTime = 0.012598815766974242
volOverSqrtTime = 0.01259881576697424
Are they close? True
###Markdown
The next line isn't super exciting, but we set the default value of our cumulative sum to be 0. So we're just defining resultSum and setting it equal to 0. If we don't do this we'll get an error.
###Code
resultSum = 0
###Output
_____no_output_____
###Markdown
Next we have a loop. There are different types of loops we can use. Here we use a `for` loop, which says "iterate over each element in `range(mcPaths)`". But wait...what's `range(mcPaths)`? `range` is a built-in Python function that returns a sequence of ints starting at 0 and going up to x-1.
###Code
range10 = range( 10 )
lst = list( range10 )
print( lst )
print( len( lst ) )
###Output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
10
###Markdown
In our case, we don't really want to do anything with `p`, so it is good practice to set it to `_`. We just want to iterate through the loop `mcPaths` times. In the default case, the function runs through the loop 100 times.
###Code
def straddlePricerMC( vol=0.2, time=1.0, mcPaths=100 ):
dailyVol = vol / np.sqrt( 252. )
resultSum = 0
for _ in range( mcPaths ):
resultSum += np.abs( np.prod( 1 + ( np.random.normal( 0, dailyVol, int( round( time * 252 ) ) ) ) ) - 1 )
return resultSum / mcPaths
straddlePricerMC()
###Output
_____no_output_____
###Markdown
To see what the function does at each iteration of the loop, let's unpack it one step at a time. We start with the innermost function call and work backwards from there. Let's ask for help to see what the `np.random.normal` method actually does. Thankfully, there are two handy ways to see a function's documentation: (1) `help`, and (2) the `?` suffix, which you can try for yourself.
###Code
help(np.random.normal)
# np.random.normal?
###Output
Help on built-in function normal:
normal(...) method of numpy.random.mtrand.RandomState instance
normal(loc=0.0, scale=1.0, size=None)
Draw random samples from a normal (Gaussian) distribution.
The probability density function of the normal distribution, first
derived by De Moivre and 200 years later by both Gauss and Laplace
independently [2]_, is often called the bell curve because of
its characteristic shape (see the example below).
The normal distributions occurs often in nature. For example, it
describes the commonly occurring distribution of samples influenced
by a large number of tiny, random disturbances, each with its own
unique distribution [2]_.
.. note::
New code should use the ``normal`` method of a ``default_rng()``
instance instead; please see the :ref:`random-quick-start`.
Parameters
----------
loc : float or array_like of floats
Mean ("centre") of the distribution.
scale : float or array_like of floats
Standard deviation (spread or "width") of the distribution. Must be
non-negative.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``loc`` and ``scale`` are both scalars.
Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized normal distribution.
See Also
--------
scipy.stats.norm : probability density function, distribution or
cumulative density function, etc.
Generator.normal: which should be used for new code.
Notes
-----
The probability density for the Gaussian distribution is
.. math:: p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }}
e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },
where :math:`\mu` is the mean and :math:`\sigma` the standard
deviation. The square of the standard deviation, :math:`\sigma^2`,
is called the variance.
The function has its peak at the mean, and its "spread" increases with
the standard deviation (the function reaches 0.607 times its maximum at
:math:`x + \sigma` and :math:`x - \sigma` [2]_). This implies that
normal is more likely to return samples lying close to the mean, rather
than those far away.
References
----------
.. [1] Wikipedia, "Normal distribution",
https://en.wikipedia.org/wiki/Normal_distribution
.. [2] P. R. Peebles Jr., "Central Limit Theorem" in "Probability,
Random Variables and Random Signal Principles", 4th ed., 2001,
pp. 51, 51, 125.
Examples
--------
Draw samples from the distribution:
>>> mu, sigma = 0, 0.1 # mean and standard deviation
>>> s = np.random.normal(mu, sigma, 1000)
Verify the mean and the variance:
>>> abs(mu - np.mean(s))
0.0 # may vary
>>> abs(sigma - np.std(s, ddof=1))
0.1 # may vary
Display the histogram of the samples, along with
the probability density function:
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, 30, density=True)
>>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
... np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
... linewidth=2, color='r')
>>> plt.show()
Two-by-four array of samples from N(3, 6.25):
>>> np.random.normal(3, 2.5, size=(2, 4))
array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random
[ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random
###Markdown
Ok, so we know from the help function that the `np.random.normal` method takes three optional inputs: mean, standard deviation, and size of the array to generate multiple random numbers. It defaults to a distribution with a mean of zero and a standard deviation of 1, returning only 1 random number.
###Code
np.random.normal(2,1,100)
###Output
_____no_output_____
###Markdown
Below we're going to call this method with a mean of zero (no drift) and a standard deviation of our daily vol, so that we can generate multiple days of returns. Specifically, we ask to generate the number of days to maturity.
###Code
time = 1
nDays = time * 252
dailyVol = vol / np.sqrt( 252. )
print( nDays )
np.random.normal( 0, dailyVol, nDays )
###Output
252
###Markdown
Now, given we have an asset return timeseries, how much is a straddle worth? We're interested in the terminal value of the asset and because we assume the straddle is struck ATM, we can just take the absolute value of the asset's deviation from the initial value (in this case, 1)
###Code
np.random.seed( 42 ) # guarantee the same result from the two random series
returns = np.random.normal( 0, dailyVol, time * 252 )
priceAtMaturity = np.prod( 1 + returns )
changeAtMaturity = priceAtMaturity - 1
absChangeAtMaturity = np.abs( changeAtMaturity )
print( absChangeAtMaturity )
# all together in one line
np.random.seed( 42 )
print( np.abs( np.prod( 1 + ( np.random.normal( 0, dailyVol, time * 252 ) ) ) - 1 ) )
###Output
0.030088573823511933
0.030088573823511933
###Markdown
Let's take a closer look at what we did above. This time, we're going to utilize another two libraries called pandas and perspective to make our life a little easier.
###Code
import numpy as np
import pandas as pd
from perspective import psp
simulatedAsset = pd.DataFrame( np.random.normal( 0, dailyVol, time * 252 ) + 1, columns=['return'] )
simulatedAsset['price'] = ( 1 * simulatedAsset['return'] ).cumprod()
psp(simulatedAsset)
###Output
_____no_output_____
###Markdown
The `for` loop just does the above `mcPaths` times, and we then take the average of the paths to find the expected value of the straddle.
###Code
mcPaths = 100
resultSum = 0.
for _ in range(mcPaths):
resultSum += np.abs( np.prod( 1 + np.random.normal( 0., dailyVol, time * 252 ) ) - 1 )
print( resultSum / mcPaths )
###Output
0.1426621236086129
###Markdown
This price is pretty close to the price from our original pricer. More paths should help get us even closer.
###Code
straddlePricerMC(mcPaths=2000)
###Output
_____no_output_____
###Markdown
2000 paths is a lot, but it looks like we're still not converging to the original price. If we add more paths there is a tradeoff with compute time. Luckily, Jupyter has made it really easy to see how fast our function is.
###Code
%timeit straddlePricerMC(mcPaths=2000)
###Output
58.6 ms ± 15.2 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
That's pretty fast. We can do a lot more paths.
###Code
print(f"1 path: {straddlePricerMC(mcPaths=1)}")
print(f"2000 path: {straddlePricerMC(mcPaths=2000)}")
print(f"5000 path: {straddlePricerMC(mcPaths=5000)}")
print(f"10000 path: {straddlePricerMC(mcPaths=10000)}")
print(f"100000 path: {straddlePricerMC(mcPaths=100000)}")
print(f"Closed form approximation: {straddlePricer()}")
###Output
1 path: 0.03499340595532807
2000 path: 0.16042431887212205
5000 path: 0.15684081924562598
10000 path: 0.1594731954838388
100000 path: 0.15894614937659887
Closed form approximation: 0.1595769121605731
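###Markdown
A rough way to see why convergence in paths is slow (a sketch, not in the original notebook): the Monte Carlo standard error shrinks like $1/\sqrt{N}$ in the number of paths. Assuming the `dailyVol` and `time` variables defined above, we can estimate it directly from one batch of simulated payoffs.
###Code
# Sketch: estimate the standard error of the MC straddle price for N paths.
# The (time * 252, N) normal draw and the axis=0 product mirror the loop above.
N = 10000
payoffs = np.abs(np.prod(1 + np.random.normal(0., dailyVol, (time * 252, N)), axis=0) - 1)
print(f"mean price:     {payoffs.mean():.4f}")
print(f"standard error: {payoffs.std() / np.sqrt(N):.4f}")
###Output
_____no_output_____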
###Markdown
Can we improve the above MC implementation? Of course! We can generate our random asset series in one go. Remember the `size` argument of the `np.random.normal` function
###Code
nDays = time * 252
size = (nDays, 15)
simulatedAsset = pd.DataFrame(np.random.normal(0, dailyVol, size))
simulatedAsset = (1 + simulatedAsset).cumprod()
simulatedAsset.tail()
###Output
_____no_output_____
###Markdown
Cool!...Let's visualize by plotting it with matplotlib.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
fig = plt.figure(figsize=(8,6))
ax = plt.axes()
_ = ax.plot(simulatedAsset)
###Output
_____no_output_____
###Markdown
So let's incorporate that into a `pandas` version of the MC pricer.
###Code
def straddlePricerMCWithPD(vol=0.2, time=1, mcPaths=100000):
dailyVol = vol / ( 252 ** 0.5 )
randomPaths = pd.DataFrame( np.random.normal( 0, dailyVol, ( time*252, mcPaths ) ) )
price = ( ( 1 + randomPaths ).prod() - 1 ).abs().sum() / mcPaths
return price
straddlePricerMCWithPD()
###Output
_____no_output_____
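###Markdown
As a final check (a sketch, not in the original notebook), we can time the vectorised pandas pricer against the loop version for the same number of paths; exact timings will vary by machine.
###Code
# Same number of paths, loop vs. vectorised implementation
%timeit straddlePricerMC(mcPaths=2000)
%timeit straddlePricerMCWithPD(mcPaths=2000)
###Output
_____no_output_____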
###Markdown
Example 1 - pricing a straddle Using some more complex (but still simple) operations, we can approximate the price of an ATMF straddle. $$ STRADDLE_{ATMF} \approx \frac{2}{\sqrt{2\pi}} F \times \sigma \sqrt{T} $$$$ \sigma = \text{implied volatility} $$$$ T = \text{time-to-maturity} $$$$ F = \text{forward of the underlier} $$ Let's start with defining the straddle's implied volatility and time-to-maturity. Note, we will assume F is equal to 1 and the straddle price can be scaled accordingly.
###Code
vol = 0.2
time = 1.
2. * ( (1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
This is a lot to type again and again if you want to price several straddles, which is really annoying and error prone. Let's define a function for this so that we can use it over and over
###Code
def straddlePricer( vol, time ):
return 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
Notice this doesn't immediately return anything to the output area. Rest assured the function is defined and we can begin using it. Below, we can compare the function's output to the output of the cell above.
###Code
print( straddlePricer( 0.2, 1.0 ) )
print( 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) )
print( straddlePricer( 0.2, 1.0 ) == ( 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) ) )
###Output
_____no_output_____
###Markdown
Input order doesn't matter as long as we let the function know what we're using as inputs
###Code
print( straddlePricer( time=1.0, vol=0.2 ) )
print( straddlePricer( vol=0.2, time=1.0 ) )
###Output
_____no_output_____
###Markdown
This is nice, but what if I want to default to certain inputs? By setting the initial inputs below we're implicitly calling each of these arguments "optional". Initially, we'll make only `time` an optional argument (input).
###Code
def straddlePricer( vol, time=1.0 ):
return 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
straddlePricer( 0.2 )
###Output
_____no_output_____
###Markdown
Now, we'll make both `vol` and `time` optional.
###Code
def straddlePricer( vol=0.2, time=1.0 ):
return 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
In other words, we don't need to pass these arguments to call the function. It will use 0.2 for `vol` and 1.0 for `time` by default unless instructed otherwise.
###Code
straddlePricer()
straddlePricer( 0.22 )
###Output
_____no_output_____
###Markdown
Notice there's π in the denominator of the straddle price formula, but the value we used above (3.14) is a rough approximation. Is there a more precise value we could use? Yes, we can use a library called `numpy`. Let's import it first below.
###Code
import numpy
###Output
_____no_output_____
###Markdown
You can access functions of numpy by entering `numpy.xxxxx`, where `xxxxx` is the function you would like to use. `numpy`'s implementation of `pi` is simply `numpy.pi`.
###Code
numpy.pi
###Output
_____no_output_____
###Markdown
Typing `numpy` over and over again can get pretty tedious. Let's make it easier for ourselves by abbreviating the name. Python convention for `numpy` abbreviation is `np`.
###Code
import numpy as np
import pandas as pd
import datetime as dt
np.pi
###Output
_____no_output_____
###Markdown
`numpy` also has a handy square root function (`np.sqrt`)
###Code
np.sqrt( 4 )
###Output
_____no_output_____
###Markdown
Let's incorporate `np.pi` and `np.sqrt` into our simple straddle pricer to make things a little more precise and easier to read.
###Code
def straddlePricer( vol=0.2, time=1.0 ):
return 2. * ( ( 1 / np.sqrt( 2 * np.pi ) ) * vol * np.sqrt( time ) )
straddlePricer()
###Output
_____no_output_____
###Markdown
Let's see what the difference is between our original implementation and our new and improved implementation.
###Code
straddlePricer() - ( 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) )
###Output
_____no_output_____
###Markdown
In this case, the difference in precision and readability isn't huge, but that difference can be valuable at times. In addition to the functionality above, `numpy` can do a lot of other things. For instance, we can generate some random numbers.
###Code
np.random.rand()
###Output
_____no_output_____
###Markdown
Is there a way to see what functions are available? Yes, just tab after `np.`
###Code
#np.
###Output
_____no_output_____
###Markdown
Alternatively, we can call `dir` on `np` to see what is included.
###Code
dir(np)
###Output
_____no_output_____
###Markdown
Continuing with the prior example of pricing our straddle, we can also price the straddle using the Monte Carlo method. We need to generate a normally distributed set of random numbers to simulate the asset's movement through time.
###Code
def straddlePricerMC(vol=0.2, time=1.0, mcPaths=100):
dailyVol = vol / np.sqrt( 252. )
resultSum = 0
for p in range( mcPaths ):
resultSum += np.abs( np.prod( ( 1 + np.random.normal( 0, dailyVol, int( round( time * 252 ) ) ) ) ) - 1 )
return resultSum / mcPaths
straddlePricerMC()
###Output
_____no_output_____
###Markdown
There are a lot of new things going on here. Let's unpack it one line at a time. We know the variance scales linearly with time, so we can either (1) divide the annual variance by the number of trading days (252) and take the square root to get a daily volatility, or (2) take the square root of the variance (the volatility) and divide by the square root of 252. Generally, the latter is clearer and simpler to understand since we typically think in vol terms, but you are free to use whichever method you want.
###Code
# Option #1 above
np.sqrt( vol ** 2 / 252 )
# Comparing the two methods
vol = 0.2
var = vol ** 2
sqrtVarOverTime = np.sqrt( var / 252 )
volOverSqrtTime = vol / np.sqrt( 252 )
valuesEqual = np.isclose( sqrtVarOverTime, volOverSqrtTime )
print( f'sqrtVarOverTime = {sqrtVarOverTime}\nvolOverSqrtTime = {volOverSqrtTime}\nAre they close? {valuesEqual}' )
###Output
_____no_output_____
###Markdown
The next line isn't super exciting, but we set the default value of our cumulative sum to be 0. So we're just defining resultSum and setting it equal to 0. If we don't do this we'll get an error.
###Code
resultSum = 0
###Output
_____no_output_____
###Markdown
Next we have a loop. There are different types of loops we can use. Here we use a `for` loop, which says "iterate over each element in `range(mcPaths)`". But wait...what's `range(mcPaths)`? `range` is a built-in Python function that returns a sequence of ints starting at 0 and going up to x-1.
###Code
range10 = range( 10 )
lst = list( range10 )
print( lst )
print( len( lst ) )
###Output
_____no_output_____
###Markdown
In our case, we don't really want to do anything with `p`, so it is good practice to set it to `_`. We just want to iterate through the loop `mcPaths` times. In the default case, the function runs through the loop 100 times.
###Code
def straddlePricerMC( vol=0.2, time=1.0, mcPaths=100 ):
dailyVol = vol / np.sqrt( 252. )
resultSum = 0
for _ in range( mcPaths ):
resultSum += np.abs( np.prod( 1 + ( np.random.normal( 0, dailyVol, int( round( time * 252 ) ) ) ) ) - 1 )
return resultSum / mcPaths
straddlePricerMC()
###Output
_____no_output_____
###Markdown
To see what the function does at each iteration of the loop, let's unpack it one step at a time. We start with the innermost function call and work backwards from there. Let's ask for help to see what the `np.random.normal` method actually does. Thankfully, there are two handy ways to see a function's documentation: (1) `help`, and (2) the `?` suffix.
###Code
help(np.random.normal)
# np.random.normal?
###Output
_____no_output_____
###Markdown
Ok, so we know from the help function that the `np.random.normal` method takes three optional inputs: mean, standard deviation, and size of the array to generate multiple random numbers. It defaults to a distribution with a mean of zero and a standard deviation of 1, returning only 1 random number.
###Code
np.random.normal()
###Output
_____no_output_____
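###Markdown
For example (an illustrative sketch with made-up numbers, not in the original notebook), passing all three inputs explicitly, including a tuple for `size`, returns an array of draws rather than a single number.
###Code
# mean 0, standard deviation 1, and a (2, 3) array of samples
np.random.normal(0, 1, (2, 3))
###Output
_____no_output_____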
###Markdown
Below we're going to call this method with a mean of zero (no drift) and a standard deviation of our daily vol, so that we can generate multiple days of returns. Specifically, we ask to generate the number of days to maturity.
###Code
time = 1
nDays = time * 252
dailyVol = vol / np.sqrt( 252. )
print( nDays )
np.random.normal( 0, dailyVol, nDays )
###Output
_____no_output_____
###Markdown
Now, given we have an asset return timeseries, how much is a straddle worth? We're interested in the terminal value of the asset and because we assume the straddle is struck ATM, we can just take the absolute value of the asset's deviation from the initial value (in this case, 1)
###Code
np.random.seed( 42 ) # guarantee the same result from the two random series
returns = np.random.normal( 0, dailyVol, time * 252 )
priceAtMaturity = np.prod( 1 + returns )
changeAtMaturity = priceAtMaturity - 1
absChangeAtMaturity = np.abs( changeAtMaturity )
print( absChangeAtMaturity )
# all together in one line
np.random.seed( 42 )
print( np.abs( np.prod( 1 + ( np.random.normal( 0, dailyVol, time * 252 ) ) ) - 1 ) )
###Output
_____no_output_____
###Markdown
Let's take a closer look at what we did above. This time, we're going to utilize another two libraries called pandas and perspective to make our life a little easier.
###Code
import pandas as pd
from perspective import psp
simulatedAsset = pd.DataFrame( np.random.normal( 0, dailyVol, time * 252 ) + 1, columns=['return'] )
simulatedAsset['price'] = ( 1 * simulatedAsset['return'] ).cumprod()
psp( simulatedAsset )
###Output
_____no_output_____
###Markdown
The `for` loop just does the above `mcPaths` times, and we then take the average of the paths to find the expected value of the straddle.
###Code
mcPaths = 100
resultSum = 0.
for _ in range(mcPaths):
resultSum += np.abs( np.prod( 1 + np.random.normal( 0., dailyVol, time * 252 ) ) - 1 )
print( resultSum / mcPaths )
###Output
_____no_output_____
###Markdown
This price is pretty close to the price from our original pricer. More paths should help get us even closer.
###Code
straddlePricerMC(mcPaths=2000)
###Output
_____no_output_____
###Markdown
2000 paths is a lot, but it looks like we're still not converging to the original price. If we add more paths there is a tradeoff with compute time. Luckily, Jupyter has made it really easy to see how fast our function is.
###Code
%timeit straddlePricerMC(mcPaths=2000)
###Output
_____no_output_____
###Markdown
That's pretty fast. We can do a lot more paths.
###Code
print(f"1 path: {straddlePricerMC(mcPaths=1)}")
print(f"2000 path: {straddlePricerMC(mcPaths=2000)}")
print(f"5000 path: {straddlePricerMC(mcPaths=5000)}")
print(f"10000 path: {straddlePricerMC(mcPaths=10000)}")
print(f"100000 path: {straddlePricerMC(mcPaths=100000)}")
print(f"Closed form approximation: {straddlePricer()}")
###Output
_____no_output_____
###Markdown
Can we improve the above MC implementation? Of course! We can generate our random asset series in one go. Remember the `size` argument of the `np.random.normal` function
###Code
nDays = time * 252
size = (nDays, 15)
simulatedAsset = pd.DataFrame(np.random.normal(0, dailyVol, size))
simulatedAsset = (1 + simulatedAsset).cumprod()
simulatedAsset.tail()
###Output
_____no_output_____
###Markdown
Cool!...Let's visualize by plotting it with matplotlib.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
fig = plt.figure(figsize=(8,6))
ax = plt.axes()
_ = ax.plot(simulatedAsset)
###Output
_____no_output_____
###Markdown
So let's incorporate that into a `pandas` version of the MC pricer.
###Code
def straddlePricerMCWithPD(vol=0.2, time=1, mcPaths=100000):
dailyVol = vol / ( 252 ** 0.5 )
randomPaths = pd.DataFrame( np.random.normal( 0, dailyVol, ( time*252, mcPaths ) ) )
price = ( ( 1 + randomPaths ).prod() - 1 ).abs().sum() / mcPaths
return price
straddlePricerMCWithPD()
###Output
_____no_output_____
###Markdown
Example 1 - pricing a straddle Using some more complex (but still simple) operations, we can approximate the price of an ATMF straddle. $$ STRADDLE_{ATMF} \approx \frac{2}{\sqrt{2\pi}} F \times \sigma \sqrt{T} $$$$ \sigma = \text{implied volatility} $$$$ T = \text{time-to-maturity} $$$$ F = \text{forward of the underlier} $$ Let's start with defining the straddle's implied volatility and time-to-maturity. Note, we will assume F is equal to 1 and the straddle price can be scaled accordingly.
###Code
vol = 0.2
time = 1.
2. * ( (1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
This is a lot to type again and again if you want to price several straddles, which is really annoying and error prone. Let's define a function for this so that we can use it over and over
###Code
def straddlePricer( vol, time ):
return 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
Notice this doesn't immediately return anything to the output area. Rest assured the function is defined and we can begin using it. Below, we can compare the function's output to the output of the cell above.
###Code
print( straddlePricer( 0.2, 1.0 ) )
print( 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) )
print( straddlePricer( 0.2, 1.0 ) == ( 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) ) )
###Output
_____no_output_____
###Markdown
Input order doesn't matter as long as we let the function know what we're using as inputs
###Code
print( straddlePricer( time=1.0, vol=0.2 ) )
print( straddlePricer( vol=0.2, time=1.0 ) )
###Output
_____no_output_____
###Markdown
This is nice, but what if I want to default to certain inputs? By setting the initial inputs below we're implicitly calling each of these arguments "optional". Initially, we'll make only `time` an optional argument (input).
###Code
def straddlePricer( vol, time=1.0 ):
return 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
straddlePricer( 0.2 )
###Output
_____no_output_____
###Markdown
Now, we'll make both `vol` and `time` optional.
###Code
def straddlePricer( vol=0.2, time=1.0 ):
return 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
In other words, we don't need to pass these arguments to call the function. It will use 0.2 for `vol` and 1.0 for `time` by default unless instructed otherwise.
###Code
straddlePricer()
straddlePricer( 0.22 )
###Output
_____no_output_____
###Markdown
Notice there's π in the denominator of the straddle price formula, but the value we used above (3.14) is a rough approximation. Is there a more precise value we could use? Yes, we can use a library called `numpy`. Let's import it first below.
###Code
import numpy
###Output
_____no_output_____
###Markdown
You can access functions of numpy by entering `numpy.xxxxx`, where `xxxxx` is the function you would like to use. `numpy`'s implementation of `pi` is simply `numpy.pi`.
###Code
numpy.pi
###Output
_____no_output_____
###Markdown
Typing `numpy` over and over again can get pretty tedious. Let's make it easier for ourselves by abbreviating the name. The Python convention for the `numpy` abbreviation is `np`. `numpy` is also much faster than the built-in Python list (roughly 30x for many operations), and its ability to handle multidimensional arrays comfortably makes it one of the standard tools for deep learning practitioners.
###Code
import numpy as np
import pandas as pd
import datetime as dt
np.pi
###Output
_____no_output_____
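###Markdown
A rough illustration of the speed claim above (a sketch, not in the original notebook; the exact ratio depends on the machine and the operation).
###Code
# Summing a million numbers with a plain Python list vs. a numpy array
py_list = list(range(1_000_000))
np_arr = np.arange(1_000_000)
%timeit sum(py_list)
%timeit np_arr.sum()
###Output
_____no_output_____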
###Markdown
`numpy` also has a handy square root function (`np.sqrt`)
###Code
np.sqrt( 4 )
###Output
_____no_output_____
###Markdown
`numpy` also makes matrix operations comfortable with functions such as `np.dot` and `np.matmul`. In financial analysis it is also very helpful for evaluating correlations between time series, such as price or return series (a small correlation sketch follows the matrix example below).
###Code
mat1 = np.array([[1, 6, 5], [3, 4, 8], [2, 12, 3]])
mat2 = np.array([[3, 4, 6], [5, 6, 7], [6, 56, 7]])
print(np.dot(mat1, mat2))    # matrix product via np.dot
print(np.matmul(mat1, mat2))  # same result via np.matmul
###Output
_____no_output_____
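###Markdown
And a small correlation sketch (not in the original notebook; the two daily return series below are made up): `np.corrcoef` returns the full correlation matrix of its inputs.
###Code
# Two fake daily return series, the second partly driven by the first
rets_a = np.random.normal(0, 0.01, 252)
rets_b = 0.5 * rets_a + np.random.normal(0, 0.01, 252)
print(np.corrcoef(rets_a, rets_b))
###Output
_____no_output_____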
###Markdown
We can also create a few common matrices very easily with `numpy`, as shown below.
###Code
z = np.zeros((2,2)) # Create an array of all zeros
print(z)
o = np.ones((1,2)) # Create an array of all ones
print(o)
c = np.full((2,2), 7) # Create a constant array with all values 7
print(c)
i = np.eye(2) # Create a 2x2 identity matrix
print(i)
rnd = np.random.random((2,2)) # Create an array filled with random values(very useful in neural networks)
print(rnd)
###Output
_____no_output_____
###Markdown
Let's incorporate `np.pi` and `np.sqrt` into our simple straddle pricer to make things a little more precise and easier to read.
###Code
def straddlePricer( vol=0.2, time=1.0 ):
return 2. * ( ( 1 / np.sqrt( 2 * np.pi ) ) * vol * np.sqrt( time ) )
straddlePricer()
###Output
_____no_output_____
###Markdown
Let's see what the difference is between our original implementation and our new and improved implementation.
###Code
straddlePricer() - ( 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) )
###Output
_____no_output_____
###Markdown
In this case, the difference in precision and readability isn't huge, but that difference can be valuable at times. In addition to the functionality above, `numpy` can do a lot of other things. For instance, we can generate some random numbers.
###Code
np.random.rand()
###Output
_____no_output_____
###Markdown
Is there a way to see what functions are available? Yes, just tab after `np.`
###Code
#np.
###Output
_____no_output_____
###Markdown
Alternatively, we can call `dir` on `np` to see what is included.
###Code
dir(np)
###Output
_____no_output_____
###Markdown
Continuing with the prior example of pricing our straddle, we can also price the straddle using the Monte Carlo method. We need to generate a normally distributed set of random numbers to simulate the asset's movement through time.
###Code
def straddlePricerMC(vol=0.2, time=1.0, mcPaths=100):
dailyVol = vol / np.sqrt( 252. )
resultSum = 0
for p in range( mcPaths ):
resultSum += np.abs( np.prod( ( 1 + np.random.normal( 0, dailyVol, int( round( time * 252 ) ) ) ) ) - 1 )
return resultSum / mcPaths
straddlePricerMC()
###Output
_____no_output_____
###Markdown
There are a lot of new things going on here. Let's unpack it one line at a time. We know the variance scales linearly with time, so we can either (1) divide the annual variance by the number of trading days (252) and take the square root to get a daily volatility, or (2) take the square root of the variance (the volatility) and divide by the square root of 252. Generally, the latter is clearer and simpler to understand since we typically think in vol terms, but you are free to use whichever method you want.
###Code
# Option #1 above
np.sqrt( vol ** 2 / 252 )
# Comparing the two methods
vol = 0.2
var = vol ** 2
sqrtVarOverTime = np.sqrt( var / 252 )
volOverSqrtTime = vol / np.sqrt( 252 )
valuesEqual = np.isclose( sqrtVarOverTime, volOverSqrtTime )
print( f'sqrtVarOverTime = {sqrtVarOverTime}\nvolOverSqrtTime = {volOverSqrtTime}\nAre they close? {valuesEqual}' )
###Output
_____no_output_____
###Markdown
The next line isn't super exciting, but we set the default value of our cumulative sum to be 0. So we're just defining resultSum and setting it equal to 0. If we don't do this we'll get an error.
###Code
resultSum = 0
###Output
_____no_output_____
###Markdown
Next we have a loop. There are different types of loops we can use. Here we use a `for` loop, which says "iterate over each element in `range(mcPaths)`". But wait...what's `range(mcPaths)`? `range` is a built-in Python function that returns a sequence of ints starting at 0 and going up to x-1.
###Code
range10 = range( 10 )
lst = list( range10 )
print( lst )
print( len( lst ) )
###Output
_____no_output_____
###Markdown
In our case, we don't really want to do anything with `p`, so it is good practice to set it to `_`. We just want to iterate through the loop `mcPaths` times. In the default case, the function runs through the loop 100 times.
###Code
def straddlePricerMC( vol=0.2, time=1.0, mcPaths=100 ):
dailyVol = vol / np.sqrt( 252. )
resultSum = 0
for _ in range( mcPaths ):
resultSum += np.abs( np.prod( 1 + ( np.random.normal( 0, dailyVol, int( round( time * 252 ) ) ) ) ) - 1 )
return resultSum / mcPaths
straddlePricerMC()
###Output
_____no_output_____
###Markdown
To see what the function does at each iteration of the loop, let's unpack it one step at a time. We start with the innermost function call and work backwards from there. Let's ask for help to see what the `np.random.normal` method actually does. Thankfully, there are two handy ways to see a function's documentation: (1) `help`, and (2) the `?` suffix.
###Code
help(np.random.normal)
# np.random.normal?
###Output
_____no_output_____
###Markdown
Ok, so we know from the help function that the `np.random.normal` method takes three optional inputs: mean, standard deviation, and size of the array to generate multiple random numbers. It defaults to a distribution with a mean of zero and a standard deviation of 1, returning only 1 random number.
###Code
np.random.normal()
###Output
_____no_output_____
###Markdown
Below we're going to call this method with a mean of zero (no drift) and a standard deviation of our daily vol, so that we can generate multiple days of returns. Specifically, we ask to generate the number of days to maturity.
###Code
time = 1
nDays = time * 252
dailyVol = vol / np.sqrt( 252. )
print( nDays )
np.random.normal( 0, dailyVol, nDays )
###Output
_____no_output_____
###Markdown
Now, given we have an asset return timeseries, how much is a straddle worth? We're interested in the terminal value of the asset and because we assume the straddle is struck ATM, we can just take the absolute value of the asset's deviation from the initial value (in this case, 1)
###Code
np.random.seed( 42 ) # guarantee the same result from the two random series
returns = np.random.normal( 0, dailyVol, time * 252 )
priceAtMaturity = np.prod( 1 + returns )
changeAtMaturity = priceAtMaturity - 1
absChangeAtMaturity = np.abs( changeAtMaturity )
print( absChangeAtMaturity )
# all together in one line
np.random.seed( 42 )
print( np.abs( np.prod( 1 + ( np.random.normal( 0, dailyVol, time * 252 ) ) ) - 1 ) )
###Output
_____no_output_____
###Markdown
Let's take a closer look at what we did above. This time, we're going to utilize another two libraries called pandas and perspective to make our life a little easier.
###Code
import pandas as pd
from perspective import psp
simulatedAsset = pd.DataFrame( np.random.normal( 0, dailyVol, time * 252 ) + 1, columns=['return'] )
simulatedAsset['price'] = ( 1 * simulatedAsset['return'] ).cumprod()
psp( simulatedAsset )
###Output
_____no_output_____
###Markdown
The `for` loop just does the above `mcPaths` times, and we then take the average of the paths to find the expected value of the straddle.
###Code
mcPaths = 100
resultSum = 0.
for _ in range(mcPaths):
resultSum += np.abs( np.prod( 1 + np.random.normal( 0., dailyVol, time * 252 ) ) - 1 )
print( resultSum / mcPaths )
###Output
_____no_output_____
###Markdown
This price is pretty close to the price from our original pricer. More paths should help get us even closer.
###Code
straddlePricerMC(mcPaths=2000)
###Output
_____no_output_____
###Markdown
2000 paths is a lot, but it looks like we're still not converging to the original price. If we add more paths there is a tradeoff with compute time. Luckily, Jupyter has made it really easy to see how fast our function is.
###Code
%timeit straddlePricerMC(mcPaths=2000)
###Output
_____no_output_____
###Markdown
That's pretty fast. We can do a lot more paths.
###Code
print(f"1 path: {straddlePricerMC(mcPaths=1)}")
print(f"2000 path: {straddlePricerMC(mcPaths=2000)}")
print(f"5000 path: {straddlePricerMC(mcPaths=5000)}")
print(f"10000 path: {straddlePricerMC(mcPaths=10000)}")
print(f"100000 path: {straddlePricerMC(mcPaths=100000)}")
print(f"Closed form approximation: {straddlePricer()}")
###Output
_____no_output_____
###Markdown
Can we improve the above MC implementation? Of course! We can generate our random asset series in one go. Remember the `size` argument of the `np.random.normal` function
###Code
nDays = time * 252
size = (nDays, 15)
simulatedAsset = pd.DataFrame(np.random.normal(0, dailyVol, size))
simulatedAsset = (1 + simulatedAsset).cumprod()
simulatedAsset.tail()
###Output
_____no_output_____
###Markdown
Cool!...Let's visualize by plotting it with matplotlib.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
fig = plt.figure(figsize=(8,6))
ax = plt.axes()
_ = ax.plot(simulatedAsset)
###Output
_____no_output_____
###Markdown
So let's incorporate that into a `pandas` version of the MC pricer.
###Code
def straddlePricerMCWithPD(vol=0.2, time=1, mcPaths=100000):
dailyVol = vol / ( 252 ** 0.5 )
randomPaths = pd.DataFrame( np.random.normal( 0, dailyVol, ( time*252, mcPaths ) ) )
price = ( ( 1 + randomPaths ).prod() - 1 ).abs().sum() / mcPaths
return price
straddlePricerMCWithPD()
###Output
_____no_output_____
###Markdown
Example 1 - pricing a straddle Using some more complex (but still simple) operations, we can approximate the price of an ATMF straddle. $$ STRADDLE_{ATMF} \approx \frac{2}{\sqrt{2\pi}} F \times \sigma \sqrt{T} $$$$ \sigma = \text{implied volatility} $$$$ T = \text{time-to-maturity} $$$$ F = \text{forward of the underlier} $$ Let's start with defining the straddle's implied volatility and time-to-maturity. Note, we will assume F is equal to 1 and the straddle price can be scaled accordingly.
###Code
vol = 0.2
time = 1.
2. * ( (1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
This is a lot to type again and again if you want to price several straddles, which is really annoying and error prone. Let's define a function for this so that we can use it over and over
###Code
def straddlePricer( vol, time ):
return 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
Notice this doesn't immediately return anything to the output area. Rest assured the function is defined and we can begin using it. Below, we can compare the function's output to the output of the cell above.
###Code
print( straddlePricer( 0.2, 1.0 ) )
print( 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) )
print( straddlePricer( 0.2, 1.0 ) == ( 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) ) )
###Output
_____no_output_____
###Markdown
Input order doesn't matter as long as we let the function know what we're using as inputs
###Code
print( straddlePricer( time=1.0, vol=0.2 ) )
print( straddlePricer( vol=0.2, time=1.0 ) )
###Output
_____no_output_____
###Markdown
This is nice, but what if I want to default to certain inputs? By setting the initial inputs below we're implicitly calling each of these arguments "optional". Initially, we'll make only `time` an optional argument (input).
###Code
def straddlePricer( vol, time=1.0 ):
return 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
straddlePricer( 0.2 )
###Output
_____no_output_____
###Markdown
Now, we'll make both `vol` and `time` optional.
###Code
def straddlePricer( vol=0.2, time=1.0 ):
return 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
In other words, we don't need to pass these arguments to call the function. It will use 0.2 for `vol` and 1.0 for `time` by default unless instructed otherwise.
###Code
straddlePricer()
straddlePricer( 0.22 )
###Output
_____no_output_____
###Markdown
Notice there's π in the denominator of the straddle price formula, but the value we used above (3.14) is a rough approximation. Is there a more precise value we could use? Yes, we can use a library called `numpy`. Let's import it first below.
###Code
import numpy
###Output
_____no_output_____
###Markdown
You can access functions of numpy by entering `numpy.xxxxx`, where `xxxxx` is the function you would like to use. `numpy`'s implementation of `pi` is simply `numpy.pi`.
###Code
numpy.pi
###Output
_____no_output_____
###Markdown
Typing `numpy` over and over again can get pretty tedious. Let's make it easier for ourselves by abbreviating the name. Python convention for `numpy` abbreviation is `np`.
###Code
import numpy as np
import pandas as pd
import datetime as dt
np.pi
###Output
_____no_output_____
###Markdown
`numpy` also has a handy square root function (`np.sqrt`)
###Code
np.sqrt( 4 )
###Output
_____no_output_____
###Markdown
Let's incorporate `np.pi` and `np.sqrt` into our simple straddle pricer to make things a little more precise and easier to read.
###Code
def straddlePricer( vol=0.2, time=1.0 ):
return 2. * ( ( 1 / np.sqrt( 2 * np.pi ) ) * vol * np.sqrt( time ) )
straddlePricer()
###Output
_____no_output_____
###Markdown
Let's see what the difference is between our original implementation and our new and improved implementation.
###Code
straddlePricer() - ( 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) )
###Output
_____no_output_____
###Markdown
In this case, the difference in precision and readability isn't huge, but that difference can be valuable at times. In addition to the functionality above, `numpy` can do a lot of other things. For instance, we can generate some random numbers.
###Code
np.random.rand()
###Output
_____no_output_____
###Markdown
Is there a way to see what functions are available? Yes, just tab after `np.`
###Code
#np.
###Output
_____no_output_____
###Markdown
Alternatively, we can call `dir` on `np` to see what is included.
###Code
dir(np)
###Output
_____no_output_____
###Markdown
Continuing with the prior example of pricing our straddle, we can also price the straddle using the Monte Carlo method. We need to generate a normally distributed set of random numbers to simulate the asset's movement through time.
###Code
def straddlePricerMC(vol=0.2, time=1.0, mcPaths=100):
dailyVol = vol / np.sqrt( 252. )
resultSum = 0
for p in range( mcPaths ):
resultSum += np.abs( np.prod( ( 1 + np.random.normal( 0, dailyVol, int( round( time * 252 ) ) ) ) ) - 1 )
return resultSum / mcPaths
straddlePricerMC()
###Output
_____no_output_____
###Markdown
There are a lot of new things going on here. Let's unpack it one line at a time. We know the variance scales linearly with time, so we can either (1) divide the annual variance by the number of trading days (252) and take the square root to get a daily volatility, or (2) take the square root of the variance (the volatility) and divide by the square root of 252. Generally, the latter is clearer and simpler to understand since we typically think in vol terms, but you are free to use whichever method you want.
###Code
# Option #1 above
np.sqrt( vol ** 2 / 252 )
# Comparing the two methods
vol = 0.2
var = vol ** 2
sqrtVarOverTime = np.sqrt( var / 252 )
volOverSqrtTime = vol / np.sqrt( 252 )
valuesEqual = np.isclose( sqrtVarOverTime, volOverSqrtTime )
print( f'sqrtVarOverTime = {sqrtVarOverTime}\nvolOverSqrtTime = {volOverSqrtTime}\nAre they close? {valuesEqual}' )
###Output
_____no_output_____
###Markdown
The next line isn't super exciting, but we set the default value of our cumulative sum to be 0. So we're just defining resultSum and setting it equal to 0. If we don't do this we'll get an error.
###Code
resultSum = 0
###Output
_____no_output_____
###Markdown
Next we have a loop. There are different types of loops we can use. Here we use a `for` loop, which says "iterate over each element in `range(mcPaths)`". But wait...what's `range(mcPaths)`? `range` is a built-in Python function that returns a sequence of ints starting at 0 and going up to x-1.
###Code
range10 = range( 10 )
lst = list( range10 )
print( lst )
print( len( lst ) )
###Output
_____no_output_____
###Markdown
In our case, we don't really want to do anything with `p`, so it is good practice to set it to `_`. We just want to iterate through the loop `mcPaths` times. In the default case, the function runs through the loop 100 times.
###Code
def straddlePricerMC( vol=0.2, time=1.0, mcPaths=100 ):
dailyVol = vol / np.sqrt( 252. )
resultSum = 0
for _ in range( mcPaths ):
resultSum += np.abs( np.prod( 1 + ( np.random.normal( 0, dailyVol, int( round( time * 252 ) ) ) ) ) - 1 )
return resultSum / mcPaths
straddlePricerMC()
###Output
_____no_output_____
###Markdown
To see what the function does at each iteration of the loop, let's unpack it one step at a time. We start with the innermost function call and work backwards from there. Let's ask for help to see what the `np.random.normal` method actually does. Thankfully, there are two handy ways to see a function's documentation: (1) `help`, and (2) the `?` suffix.
###Code
help(np.random.normal)
# np.random.normal?
###Output
_____no_output_____
###Markdown
Ok, so we know from the help function that the `np.random.normal` method takes three optional inputs: mean, standard deviation, and size of the array to generate multiple random numbers. It defaults to a distribution with a mean of zero and a standard deviation of 1, returning only 1 random number.
###Code
np.random.normal()
###Output
_____no_output_____
###Markdown
Below we're going to call this method with a mean of zero (no drift) and a standard deviation of our daily vol, so that we can generate multiple days of returns. Specifically, we ask to generate the number of days to maturity.
###Code
time = 1
nDays = time * 252
dailyVol = vol / np.sqrt( 252. )
print( nDays )
np.random.normal( 0, dailyVol, nDays )
###Output
_____no_output_____
###Markdown
Now, given we have an asset return timeseries, how much is a straddle worth? We're interested in the terminal value of the asset and because we assume the straddle is struck ATM, we can just take the absolute value of the asset's deviation from the initial value (in this case, 1)
###Code
np.random.seed( 42 ) # guarantee the same result from the two random series
returns = np.random.normal( 0, dailyVol, time * 252 )
priceAtMaturity = np.prod( 1 + returns )
changeAtMaturity = priceAtMaturity - 1
absChangeAtMaturity = np.abs( changeAtMaturity )
print( absChangeAtMaturity )
# all together in one line
np.random.seed( 42 )
print( np.abs( np.prod( 1 + ( np.random.normal( 0, dailyVol, time * 252 ) ) ) - 1 ) )
###Output
_____no_output_____
###Markdown
Let's take a closer look at what we did above. This time, we're going to utilize another two libraries called pandas and perspective to make our life a little easier.
###Code
import pandas as pd
from perspective import PerspectiveWidget as pw
simulatedAsset = pd.DataFrame( np.random.normal( 0, dailyVol, time * 252 ) + 1, columns=['return'] )
simulatedAsset['price'] = ( 1 * simulatedAsset['return'] ).cumprod()
pw( simulatedAsset )
###Output
_____no_output_____
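###Markdown
One extra sanity check (an added sketch, not part of the original walkthrough): the standard deviation of the simulated daily returns, annualized by multiplying by the square root of 252, should land near the 20% vol we fed in, which ties the simulated path back to the variance-scaling discussion above.
###Code
# Added sketch: the realized annualized vol of the simulated path should be close to the input vol (0.2)
# simulatedAsset['return'] holds 1 + daily return, so subtract 1 before taking the standard deviation
realizedAnnualVol = ( simulatedAsset['return'] - 1 ).std() * np.sqrt( 252 )
print( f'realized annualized vol: {realizedAnnualVol}' )
###Output
_____no_output_____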
###Markdown
The `for` loop simply repeats the above `mcPaths` times, and we then take the average across paths to find the expected value of the straddle.
###Code
mcPaths = 100
resultSum = 0.
for _ in range(mcPaths):
resultSum += np.abs( np.prod( 1 + np.random.normal( 0., dailyVol, time * 252 ) ) - 1 )
print( resultSum / mcPaths )
###Output
_____no_output_____
###Markdown
This price is pretty close to the price from our original pricer. More paths should help get us even closer.
###Code
straddlePricerMC(mcPaths=2000)
###Output
_____no_output_____
###Markdown
2000 paths is a lot, but it looks like we're still not converging to the original price. If we add more paths there is a tradeoff with compute time. Luckily, Jupyter has made it really easy to see how fast our function is.
###Code
%timeit straddlePricerMC(mcPaths=2000)
###Output
_____no_output_____
###Markdown
That's pretty fast. We can do a lot more paths.
###Code
print(f"1 path: {straddlePricerMC(mcPaths=1)}")
print(f"2000 path: {straddlePricerMC(mcPaths=2000)}")
print(f"5000 path: {straddlePricerMC(mcPaths=5000)}")
print(f"10000 path: {straddlePricerMC(mcPaths=10000)}")
print(f"100000 path: {straddlePricerMC(mcPaths=100000)}")
print(f"Closed form approximation: {straddlePricer()}")
###Output
_____no_output_____
###Markdown
Can we improve the above MC implementation? Of course! We can generate our random asset series in one go. Remember the `size` argument of the `np.random.normal` function
###Code
nDays = time * 252
size = (nDays, 15)
simulatedAsset = pd.DataFrame(np.random.normal(0, dailyVol, size))
simulatedAsset = (1 + simulatedAsset).cumprod()
simulatedAsset.tail()
###Output
_____no_output_____
###Markdown
Cool!...Let's visualize by plotting it with matplotlib.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
fig = plt.figure(figsize=(8,6))
ax = plt.axes()
_ = ax.plot(simulatedAsset)
###Output
_____no_output_____
###Markdown
So let's incorporate that into a `pandas` version of the MC pricer.
###Code
def straddlePricerMCWithPD(vol=0.2, time=1, mcPaths=100000):
dailyVol = vol / ( 252 ** 0.5 )
randomPaths = pd.DataFrame( np.random.normal( 0, dailyVol, ( time*252, mcPaths ) ) )
price = ( ( 1 + randomPaths ).prod() - 1 ).abs().sum() / mcPaths
return price
straddlePricerMCWithPD()
###Output
_____no_output_____
###Markdown
Using some more complex (but still simple) operations, we can approximate the price of an ATMF straddle. $$ STRADDLE_{ATMF} \approx \frac{2}{\sqrt{2\pi}} S \times \sigma \sqrt{T} $$
###Code
vol = 0.2
time = 1.
2. * ((1 / (2*3.14) ** 0.5 ) * vol * time ** 0.5)
###Output
_____no_output_____
###Markdown
This is a lot to type every time, which is really annoying and error prone. Let's define a function for this so that we can use it over and over.
###Code
def straddlePricer(vol, time):
return 2. * ((1. / (2*3.14) ** 0.5 ) * vol * time ** 0.5)
###Output
_____no_output_____
###Markdown
Notice this doesn't immediately return anything to the output area. Rest assured the function is defined and we can begin using it.
###Code
print(straddlePricer(0.2, 1.0))
print(2. * ((1. / (2*3.14) ** 0.5 ) * vol * time ** 0.5))
###Output
0.15961737689352445
0.15961737689352445
###Markdown
Input order doesn't matter as long as we let the function know what we're using as inputs
###Code
straddlePricer(time=1.0, vol=0.2)
###Output
_____no_output_____
###Markdown
This is nice, but what if I want to default to certain inputs? By setting the initial inputs below we're implicitly calling each of these arguments "optional".
###Code
def straddlePricer(vol=0.2, time=1.0):
return 2. * ((1 / (2*3.14) ** 0.5 ) * vol * time ** 0.5)
###Output
_____no_output_____
###Markdown
In other words, we don't need to pass these arguments to call the function. It will use 0.2 for vol and 1.0 for time.
###Code
straddlePricer()
straddlePricer(0.22)
###Output
_____no_output_____
###Markdown
There's π in the denominator, but the value we used above is an approximation. Is there a more precise value? Yes, we can use a library called `numpy`. Let's import it first below.
###Code
import numpy
###Output
_____no_output_____
###Markdown
You can access functions of numpy by entering `numpy.xxxxx`, where `xxxxx` is the function you would like to use. `numpy`'s implementation of `pi` is simply `numpy.pi`.
###Code
numpy.pi
###Output
_____no_output_____
###Markdown
Typing `numpy` over and over again can get pretty tedious. Let's make it easier for ourselves by abbreviating the name. Python convention for `numpy` abbreviation is `np`.
###Code
import numpy as np
import pandas as pd
import datetime as dt
np.pi
###Output
_____no_output_____
###Markdown
`numpy` also has a handy square root function (`np.sqrt`)
###Code
np.sqrt(4)
###Output
_____no_output_____
###Markdown
Let's incorporate `np.pi` and `np.sqrt` into our simple straddle pricer to make things a little more precise and easier to read.
###Code
def straddlePricer(vol=0.2, time=1.0):
return 2. * ((1/np.sqrt(2*np.pi)) * vol * np.sqrt(time))
straddlePricer()
straddlePricer() - 2. * ((1 / (2*3.14) ** 0.5 ) * vol * time ** 0.5)
###Output
_____no_output_____
###Markdown
In this case, the difference in precision and readability isn't huge, but that difference can be valuable at times. In addition to these, `numpy` can do a lot of other things. For instance, we can generate some random numbers.
###Code
np.random.rand()
###Output
_____no_output_____
###Markdown
Is there a way to see what functions are available? Yes, just tab after `np.`
###Code
# np.  (type `np.` and hit Tab in the notebook to list the available functions)
###Output
_____no_output_____
###Markdown
Continuing with the prior example of pricing our straddle, we can also price the straddle using the Monte Carlo method. We need to generate a normally distributed set of random numbers to simulate the asset's movement through time.
###Code
def straddlePricerMC(vol=0.2, time=1.0, mcPaths=100):
dailyVol = vol / np.sqrt(252.)
resultSum = 0
for p in range(mcPaths):
resultSum += np.abs(np.prod((np.random.normal(0, dailyVol, int(round(time*252))) + 1)) - 1)
return resultSum / mcPaths
straddlePricerMC()
###Output
_____no_output_____
###Markdown
There are a lot of new things going on here. Let's unpack it one line at a time. We know the variance scales linearly with time, so we can either: 1. divide the variance by time and take the square root to get a vol, or 2. take the square root of variance and divide by the root of time. Generally, the latter is clearer and simpler to understand since we typically think in vol terms, but you are free to use whichever method you want.
###Code
np.sqrt(vol**2 / 252)
vol = 0.2
var = vol ** 2
sqrtVarOverTime = np.sqrt(var/252)
volOverSqrtTime = vol / np.sqrt(252)
valuesEqual = np.isclose(sqrtVarOverTime, volOverSqrtTime)
print(f'sqrtVarOverTime = {sqrtVarOverTime}\nvolOverSqrtTime={volOverSqrtTime}\nAre they close? {valuesEqual}')
###Output
sqrtVarOverTime = 0.012598815766974242
volOverSqrtTime=0.01259881576697424
Are they close? True
###Markdown
The next line isn't super exciting, but we set the default value of our cumulative sum to be 0. So we're just defining resultSum and setting it equal to 0.
###Code
resultSum = 0
###Output
_____no_output_____
###Markdown
Next we have a loop. There are different types of loops we can use. Here we use a for loop, which says "iterate over each element in `range(mcPaths)`". But wait...what's `range(mcPaths)`? `range` is a native python function that will return an iterator over a list of ints starting at 0 and going to x-1.
###Code
range10 = range(10)
lst = list(range10)
print(lst)
print(len(lst))
###Output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
10
###Markdown
In our case, we don't really want to do anything with `p`, so it is good practice to set it to `_`. We just want to iterate through the loop `mcPaths` times. In the default case, the function runs through the loop 100 times.
###Code
def straddlePricerMC(vol=0.2, time=1.0, mcPaths=100):
dailyVol = vol / np.sqrt(252.)
resultSum = 0
for p in range(mcPaths):
resultSum += np.abs(np.prod((np.random.normal(0, dailyVol, int(round(time*252))) + 1)) - 1)
return resultSum / mcPaths
straddlePricerMC()
###Output
_____no_output_____
###Markdown
To understand what the function does at each iteration of the loop, let's unpack it one step at a time. We start with the innermost function call and work backwards from there. Let's ask for help to see what the `np.random.normal` method actually does. Thankfully, there are two handy ways to see a function's documentation: 1. `help` 2. `?`
###Code
help(np.random.normal)
# np.random.normal?
###Output
Help on built-in function normal:
normal(...) method of mtrand.RandomState instance
normal(loc=0.0, scale=1.0, size=None)
Draw random samples from a normal (Gaussian) distribution.
The probability density function of the normal distribution, first
derived by De Moivre and 200 years later by both Gauss and Laplace
independently [2]_, is often called the bell curve because of
its characteristic shape (see the example below).
The normal distributions occurs often in nature. For example, it
describes the commonly occurring distribution of samples influenced
by a large number of tiny, random disturbances, each with its own
unique distribution [2]_.
Parameters
----------
loc : float or array_like of floats
Mean ("centre") of the distribution.
scale : float or array_like of floats
Standard deviation (spread or "width") of the distribution.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``loc`` and ``scale`` are both scalars.
Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized normal distribution.
See Also
--------
scipy.stats.norm : probability density function, distribution or
cumulative density function, etc.
Notes
-----
The probability density for the Gaussian distribution is
.. math:: p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }}
e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },
where :math:`\mu` is the mean and :math:`\sigma` the standard
deviation. The square of the standard deviation, :math:`\sigma^2`,
is called the variance.
The function has its peak at the mean, and its "spread" increases with
the standard deviation (the function reaches 0.607 times its maximum at
:math:`x + \sigma` and :math:`x - \sigma` [2]_). This implies that
`numpy.random.normal` is more likely to return samples lying close to
the mean, rather than those far away.
References
----------
.. [1] Wikipedia, "Normal distribution",
https://en.wikipedia.org/wiki/Normal_distribution
.. [2] P. R. Peebles Jr., "Central Limit Theorem" in "Probability,
Random Variables and Random Signal Principles", 4th ed., 2001,
pp. 51, 51, 125.
Examples
--------
Draw samples from the distribution:
>>> mu, sigma = 0, 0.1 # mean and standard deviation
>>> s = np.random.normal(mu, sigma, 1000)
Verify the mean and the variance:
>>> abs(mu - np.mean(s)) < 0.01
True
>>> abs(sigma - np.std(s, ddof=1)) < 0.01
True
Display the histogram of the samples, along with
the probability density function:
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, 30, density=True)
>>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
... np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
... linewidth=2, color='r')
>>> plt.show()
###Markdown
Ok, so we know from the help function that the `np.random.normal` method takes three optional inputs: mean, standard deviation, and size of the array to generate multiple random numbers. It defaults to a distribution with a mean of zero and a standard deviation of 1, returning only 1 random number.
###Code
np.random.normal()
###Output
_____no_output_____
###Markdown
Below we're going to call this method with a mean of zero (no drift) and a standard deviation of our daily vol, so that we can generate multiple days of returns. Specifically, we ask to generate the number of days to maturity.
###Code
time = 1
nDays = time * 252
dailyVol = vol / np.sqrt(252.)
print(nDays)
np.random.normal(0, dailyVol, nDays)
###Output
252
###Markdown
Now, given we have an asset return timeseries, how much is a straddle worth? We're interested in the terminal value of the asset and because we assume the straddle is struck ATM, we can just take the absolute value of the asset's deviation from the initial value (in this case, 1)
###Code
np.random.seed(42) # guarantee the same result from the two random series
returns = np.random.normal(0, dailyVol, time*252)
priceAtMaturity = np.prod(returns + 1)
changeAtMaturity = priceAtMaturity - 1
absChangeAtMaturity = np.abs(changeAtMaturity)
print(absChangeAtMaturity)
# all together in one line
np.random.seed(42)
print(np.abs(np.prod((np.random.normal(0, dailyVol, time * 252)+1))-1))
###Output
0.030088573823511933
0.030088573823511933
###Markdown
Let's take a closer look at what we did above. This time, we're going to utilize another two libraries called pandas and perspective to make our life a little easier.
###Code
import pandas as pd
from perspective import psp
simulatedAsset = pd.DataFrame(np.random.normal(0, dailyVol, time*252) + 1, columns=['return'])
simulatedAsset['price'] = (1 * simulatedAsset['return']).cumprod()
psp(simulatedAsset)
###Output
_____no_output_____
###Markdown
The `for` loop simply repeats the above `mcPaths` times, and we then take the average across paths to find the expected value of the straddle.
###Code
mcPaths = 100
resultSum = 0.
for _ in range(mcPaths):
resultSum += np.abs(np.prod(np.random.normal(0., dailyVol, time*252)+1)-1)
print(resultSum/mcPaths)
###Output
0.15218843887065436
###Markdown
This price is pretty close to the price from our original pricer. More paths should help get us even closer.
###Code
straddlePricerMC(mcPaths=2000)
###Output
_____no_output_____
###Markdown
2000 paths is a lot, but it looks like we're still not converging to the original price. If we add more paths, there is a tradeoff with compute time. Luckily, Jupyter has made it really easy to see how fast our function is.
###Code
%timeit straddlePricerMC(mcPaths=2000)
###Output
59.9 ms ± 1.23 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
That's pretty fast. We can do a lot more paths.
###Code
print(f"1 path: {straddlePricerMC(mcPaths=1)}")
print(f"2000 path: {straddlePricerMC(mcPaths=2000)}")
print(f"5000 path: {straddlePricerMC(mcPaths=5000)}")
print(f"10000 path: {straddlePricerMC(mcPaths=10000)}")
print(f"100000 path: {straddlePricerMC(mcPaths=100000)}")
print(f"Closed form approximation: {straddlePricer()}")
###Output
1 path: 0.16792385313363134
2000 path: 0.16060331040730313
5000 path: 0.15674771425208073
10000 path: 0.15951376137806297
100000 path: 0.15894098568323442
Closed form approximation: 0.1595769121605731
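###Markdown
The path-count comparison above is a single draw at each size, so the numbers bounce around from run to run. As an added sketch (not in the original notebook), we can re-price the straddle several times at a fixed path count and look at the spread of the estimates; the standard deviation of the estimates should shrink roughly like one over the square root of the number of paths.
###Code
# Added sketch: quantify the Monte Carlo noise by repeating the pricer at fixed path counts
for paths in [1000, 10000]:
    estimates = np.array([straddlePricerMC(mcPaths=paths) for _ in range(20)])
    print(f"{paths} paths: mean={estimates.mean():.4f}, std={estimates.std():.4f}")
###Output
_____no_output_____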
###Markdown
Can we improve the above MC implementation? Of course! We can generate our random asset series in one go. Remember the `size` argument of the `np.random.normal` function
###Code
nDays = time * 252
size = (nDays, mcPaths)
simulatedAsset = pd.DataFrame(np.random.normal(0, dailyVol, size))
simulatedAsset = (1 + simulatedAsset).cumprod()
simulatedAsset.tail()
###Output
_____no_output_____
###Markdown
Cool!...Let's visualize by plotting it with matplotlib.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
fig = plt.figure(figsize=(8,6))
ax = plt.axes()
_ = ax.plot(simulatedAsset)
###Output
_____no_output_____
###Markdown
So let's incorporate that into a `pandas` version of the MC pricer.
###Code
def straddlePricerMCWithPD(vol=0.2, time=1, mcPaths=100000):
dailyVol = vol / (252 ** .5)
randomPaths = pd.DataFrame(np.random.normal(0, dailyVol, (time*252, mcPaths)))
price = ((1+randomPaths).prod() - 1).abs().sum() / mcPaths
return price
straddlePricerMCWithPD()
###Output
_____no_output_____
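###Markdown
As a quick added check (a sketch, not in the original notebook), we can time the loop-based pricer and the vectorized `pandas` pricer at the same path count to see how much generating all of the randomness in one go actually buys us.
###Code
# Added sketch: compare the loop-based and vectorized pricers at the same number of paths
%timeit straddlePricerMC(mcPaths=2000)
%timeit straddlePricerMCWithPD(mcPaths=2000)
###Output
_____no_output_____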
###Markdown
Example 1 - pricing a straddle Using some more complex (but still simple) operations, we can approximate the price of an ATMF straddle. $$ STRADDLE_{ATMF} \approx \frac{2}{\sqrt{2\pi}} F \times \sigma \sqrt{T} $$$$ \sigma = \text{implied volatility} $$$$ T = \text{time-to-maturity} $$$$ F = \text{forward of the underlier} $$ Let's start with defining the straddle's implied volatility and time-to-maturity. Note, we will assume F is equal to 1 and the straddle price can be scaled accordingly.
###Code
vol = 0.2
time = 1.
2. * ( (1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
This is a lot to type again and again if you want to price several straddles, which is really annoying and error prone. Let's define a function for this so that we can use it over and over
###Code
def straddlePricer( vol, time ):
return 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
Notice this doesn't immediately return anything to the output area. Rest assured the function is defined and we can begin using it. Below, we can compare the function's output to the output of the cell above.
###Code
print( straddlePricer( 0.2, 1.0 ) )
print( 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) )
print( straddlePricer( 0.2, 1.0 ) == ( 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) ) )
###Output
_____no_output_____
###Markdown
Input order doesn't matter as long as we let the function know what we're using as inputs
###Code
print( straddlePricer( time=1.0, vol=0.2 ) )
print( straddlePricer( vol=0.2, time=1.0 ) )
###Output
_____no_output_____
###Markdown
This is nice, but what if I want to default to certain inputs? By setting the initial inputs below we're implicitly calling each of these arguments "optional". Initially, we'll make only `time` an optional argument (input).
###Code
def straddlePricer( vol, time=1.0 ):
return 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
straddlePricer( 0.2 )
###Output
_____no_output_____
###Markdown
Now, we'll make both `vol` and `time` optional.
###Code
def straddlePricer( vol=0.2, time=1.0 ):
return 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
In other words, we don't need to pass these arguments to call the function. It will use 0.2 for `vol` and 1.0 for `time` by default unless instructed otherwise.
###Code
straddlePricer()
straddlePricer( 0.22 )
###Output
_____no_output_____
###Markdown
Notice, there's π in the denominator of the straddle price formula, but the value we used above (3.14) is a rough approximation. Is there a more precise value we could use? Yes, we can use a library called `numpy`. Let's import it first below.
###Code
import numpy
###Output
_____no_output_____
###Markdown
You can access functions of numpy by entering `numpy.xxxxx`, where `xxxxx` is the function you would like to use. `numpy`'s implementation of `pi` is simply `numpy.pi`.
###Code
numpy.pi
###Output
_____no_output_____
###Markdown
Typing `numpy` over and over again can get pretty tedious. Let's make it easier for ourselves by abbreviating the name. Python convention for `numpy` abbreviation is `np`.
###Code
import numpy as np
import pandas as pd
import datetime as dt
np.pi
###Output
_____no_output_____
###Markdown
`numpy` also has a handy square root function (`np.sqrt`)
###Code
np.sqrt( 4 )
###Output
_____no_output_____
###Markdown
Let's incorporate `np.pi` and `np.sqrt` into our simple straddle pricer to make things a little more precise and easier to read.
###Code
def straddlePricer( vol=0.2, time=1.0 ):
return 2. * ( ( 1 / np.sqrt( 2 * np.pi ) ) * vol * np.sqrt( time ) )
straddlePricer()
###Output
_____no_output_____
###Markdown
Let's see what the difference is between our original implementation and our new and improved implementation.
###Code
straddlePricer() - ( 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) )
###Output
_____no_output_____
###Markdown
In this case, the difference in precision and readability isn't huge, but that difference can be valuable at times. In addition to the functionality above, `numpy` can do a lot of other things. For instance, we can generate some random numbers.
###Code
np.random.rand()
###Output
_____no_output_____
###Markdown
Is there a way to see what functions are available? Yes, just tab after `np.`
###Code
#np.
###Output
_____no_output_____
###Markdown
Alternatively, we can call `dir` on `np` to see what is included.
###Code
dir(np)
###Output
_____no_output_____
###Markdown
Continuing with the prior example of pricing our straddle, we can also price the straddle using the Monte Carlo method. We need to generate a normally distributed set of random numbers to simulate the asset's movement through time.
###Code
def straddlePricerMC(vol=0.2, time=1.0, mcPaths=100):
dailyVol = vol / np.sqrt( 252. )
resultSum = 0
for p in range( mcPaths ):
resultSum += np.abs( np.prod( ( 1 + np.random.normal( 0, dailyVol, int( round( time * 252 ) ) ) ) ) - 1 )
return resultSum / mcPaths
straddlePricerMC()
###Output
_____no_output_____
###Markdown
There are a lot of new things going on here. Let's unpack it one line at a time. We know the variance scales linearly with time, so we can either: 1. divide the variance by time and take the square root to get a daily volatility, or 2. take the square root of variance (volatility) and divide by the root of time. Generally, the latter is clearer and simpler to understand since we typically think in vol terms, but you are free to use whichever method you want.
###Code
# Option #1 above
np.sqrt( vol ** 2 / 252 )
# Comparing the two methods
vol = 0.2
var = vol ** 2
sqrtVarOverTime = np.sqrt( var / 252 )
volOverSqrtTime = vol / np.sqrt( 252 )
valuesEqual = np.isclose( sqrtVarOverTime, volOverSqrtTime )
print( f'sqrtVarOverTime = {sqrtVarOverTime}\nvolOverSqrtTime = {volOverSqrtTime}\nAre they close? {valuesEqual}' )
###Output
_____no_output_____
###Markdown
The next line isn't super exciting, but we set the default value of our cumulative sum to be 0. So we're just defining resultSum and setting it equal to 0. If we don't do this we'll get an error.
###Code
resultSum = 0
###Output
_____no_output_____
###Markdown
Next we have a loop. There are different types of loops we can use. Here we use a `for` loop, which says "iterate over each element in `range(mcPaths)`". But wait...what's `range(mcPaths)`? `range` is a native python function that will return an iterator over a list of ints starting at 0 and going to x-1.
###Code
range10 = range( 10 )
lst = list( range10 )
print( lst )
print( len( lst ) )
###Output
_____no_output_____
###Markdown
In our case, we don't really want to do anything with `p`, so it is good practice to set it to `_`. We just want to iterate through the loop `mcPaths` times. In the default case, the function runs through the loop 100 times.
###Code
def straddlePricerMC( vol=0.2, time=1.0, mcPaths=100 ):
dailyVol = vol / np.sqrt( 252. )
resultSum = 0
for _ in range( mcPaths ):
resultSum += np.abs( np.prod( 1 + ( np.random.normal( 0, dailyVol, int( round( time * 252 ) ) ) ) ) - 1 )
return resultSum / mcPaths
straddlePricerMC()
###Output
_____no_output_____
###Markdown
To understand what the function does at each iteration of the loop, let's unpack it one step at a time. We start with the innermost function call and work backwards from there. Let's ask for help to see what the `np.random.normal` method actually does. Thankfully, there are two handy ways to see a function's documentation: 1. `help` 2. `?`
###Code
help(np.random.normal)
# np.random.normal?
###Output
_____no_output_____
###Markdown
Ok, so we know from the help function that the `np.random.normal` method takes three optional inputs: mean, standard deviation, and size of the array to generate multiple random numbers. It defaults to a distribution with a mean of zero and a standard deviation of 1, returning only 1 random number.
###Code
np.random.normal()
###Output
_____no_output_____
###Markdown
Below we're going to call this method with a mean of zero (no drift) and a standard deviation of our daily vol, so that we can generate multiple days of returns. Specifically, we ask to generate the number of days to maturity.
###Code
time = 1
nDays = time * 252
dailyVol = vol / np.sqrt( 252. )
print( nDays )
np.random.normal( 0, dailyVol, nDays )
###Output
_____no_output_____
###Markdown
Now, given we have an asset return timeseries, how much is a straddle worth? We're interested in the terminal value of the asset and because we assume the straddle is struck ATM, we can just take the absolute value of the asset's deviation from the initial value (in this case, 1)
###Code
np.random.seed( 42 ) # guarantee the same result from the two random series
returns = np.random.normal( 0, dailyVol, time * 252 )
priceAtMaturity = np.prod( 1 + returns )
changeAtMaturity = priceAtMaturity - 1
absChangeAtMaturity = np.abs( changeAtMaturity )
print( absChangeAtMaturity )
# all together in one line
np.random.seed( 42 )
print( np.abs( np.prod( 1 + ( np.random.normal( 0, dailyVol, time * 252 ) ) ) - 1 ) )
###Output
_____no_output_____
###Markdown
Let's take a closer look at what we did above. This time, we're going to utilize another two libraries called pandas and perspective to make our life a little easier.
###Code
import pandas as pd
from perspective import psp
simulatedAsset = pd.DataFrame( np.random.normal( 0, dailyVol, time * 252 ) + 1, columns=['return'] )
simulatedAsset['price'] = ( 1 * simulatedAsset['return'] ).cumprod()
psp( simulatedAsset )
###Output
_____no_output_____
###Markdown
The `for` loop simply repeats the above `mcPaths` times, and we then take the average across paths to find the expected value of the straddle.
###Code
mcPaths = 100
resultSum = 0.
for _ in range(mcPaths):
resultSum += np.abs( np.prod( 1 + np.random.normal( 0., dailyVol, time * 252 ) ) - 1 )
print( resultSum / mcPaths )
###Output
_____no_output_____
###Markdown
This price is pretty close to the price from our original pricer. More paths should help get us even closer.
###Code
straddlePricerMC(mcPaths=2000)
###Output
_____no_output_____
###Markdown
2000 paths is a lot, but it looks like we're still not converging to the original price. If we add more paths there is a tradeoff with compute time. Luckily, Jupyter has made it really easy to see how fast our function is.
###Code
%timeit straddlePricerMC(mcPaths=2000)
###Output
_____no_output_____
###Markdown
That's pretty fast. We can do a lot more paths.
###Code
print(f"1 path: {straddlePricerMC(mcPaths=1)}")
print(f"2000 path: {straddlePricerMC(mcPaths=2000)}")
print(f"5000 path: {straddlePricerMC(mcPaths=5000)}")
print(f"10000 path: {straddlePricerMC(mcPaths=10000)}")
print(f"100000 path: {straddlePricerMC(mcPaths=100000)}")
print(f"Closed form approximation: {straddlePricer()}")
###Output
_____no_output_____
###Markdown
Can we improve the above MC implementation? Of course! We can generate our random asset series in one go. Remember the `size` argument of the `np.random.normal` function
###Code
nDays = time * 252
size = (nDays, 15)
simulatedAsset = pd.DataFrame(np.random.normal(0, dailyVol, size))
simulatedAsset = (1 + simulatedAsset).cumprod()
simulatedAsset.tail()
###Output
_____no_output_____
###Markdown
Cool!...Let's visualize by plotting it with matplotlib.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
fig = plt.figure(figsize=(8,6))
ax = plt.axes()
_ = ax.plot(simulatedAsset)
###Output
_____no_output_____
###Markdown
So let's incorporate that into a `pandas` version of the MC pricer.
###Code
def straddlePricerMCWithPD(vol=0.2, time=1, mcPaths=100000):
dailyVol = vol / ( 252 ** 0.5 )
randomPaths = pd.DataFrame( np.random.normal( 0, dailyVol, ( time*252, mcPaths ) ) )
price = ( ( 1 + randomPaths ).prod() - 1 ).abs().sum() / mcPaths
return price
straddlePricerMCWithPD()
###Output
_____no_output_____
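###Markdown
For completeness, here is a hedged numpy-only variant of the same idea (an added sketch, not from the original notebook): `prod` along the day axis replaces the DataFrame machinery, and `.mean()` replaces the explicit division by `mcPaths`.
###Code
# Added sketch: the same vectorized Monte Carlo pricer using plain numpy arrays instead of a DataFrame
def straddlePricerMCWithNP( vol=0.2, time=1, mcPaths=100000 ):
    dailyVol = vol / np.sqrt( 252. )
    randomPaths = np.random.normal( 0, dailyVol, ( time * 252, mcPaths ) )
    return np.abs( ( 1 + randomPaths ).prod( axis=0 ) - 1 ).mean()

straddlePricerMCWithNP()
###Output
_____no_output_____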
###Markdown
Example 1 - pricing a straddle Using some more complex (but still simple) operations, we can approximate the price of an ATMF straddle. $$ STRADDLE_{ATMF} \approx \frac{2}{\sqrt{2\pi}} F \times \sigma \sqrt{T} $$$$ \sigma = \text{implied volatility} $$$$ T = \text{time-to-maturity} $$$$ F = \text{forward of the underlier} $$ Let's start with defining the straddle's implied volatility and time-to-maturity. Note, we will assume F is equal to 1 and the straddle price can be scaled accordingly.
###Code
vol = 0.2
time = 1.
2. * ( (1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
This is a lot to type again and again if you want to price several straddles, which is really annoying and error prone. Let's define a function for this so that we can use it over and over
###Code
def straddlePricer( vol, time ):
return 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
Notice this doesn't immediately return anything to the output area. Rest assured the function is defined and we can begin using it. Below, we can compare the function's output to the output of the cell above.
###Code
print( straddlePricer( 0.2, 1.0 ) )
print( 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) )
print( straddlePricer( 0.2, 1.0 ) == ( 2. * ( ( 1. / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) ) )
###Output
0.15961737689352445
0.15961737689352445
True
###Markdown
Input order doesn't matter as long as we let the function know what we're using as inputs
###Code
print( straddlePricer( time=1.0, vol=0.2 ) )
print( straddlePricer( vol=0.2, time=1.0 ) )
###Output
_____no_output_____
###Markdown
This is nice, but what if I want to default to certain inputs? By setting the initial inputs below we're implicitly calling each of these arguments "optional". Initially, we'll make only `time` an optional argument (input).
###Code
def straddlePricer( vol, time=1.0 ):
return 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
straddlePricer( 0.2 )
###Output
_____no_output_____
###Markdown
Now, we'll make both `vol` and `time` optional.
###Code
def straddlePricer( vol=0.2, time=1.0 ):
return 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 )
###Output
_____no_output_____
###Markdown
In other words, we don't need to pass these arguments to call the function. It will use 0.2 for `vol` and 1.0 for `time` by default unless instructed otherwise.
###Code
straddlePricer()
straddlePricer( 0.22 )
###Output
_____no_output_____
###Markdown
Notice, there's π in the denominator of the straddle price formula, but the value we used above (3.14) is a rough approximation. Is there a more precise value we could use? Yes, we can use a library called `numpy`. Let's import it first below.
###Code
import numpy
###Output
_____no_output_____
###Markdown
You can access functions of numpy by entering `numpy.xxxxx`, where `xxxxx` is the function you would like to use. `numpy`'s implementation of `pi` is simply `numpy.pi`.
###Code
numpy.pi
###Output
_____no_output_____
###Markdown
Typing `numpy` over and over again can get pretty tedious. Let's make it easier for ourselves by abbreviating the name. Python convention for `numpy` abbreviation is `np`.
###Code
import numpy as np
import pandas as pd
import datetime as dt
np.pi
###Output
_____no_output_____
###Markdown
`numpy` also has a handy square root function (`np.sqrt`)
###Code
np.sqrt( 4 )
###Output
_____no_output_____
###Markdown
Let's incorporate `np.pi` and `np.sqrt` into our simple straddle pricer to make things a little more precise and easier to read.
###Code
def straddlePricer( vol=0.2, time=1.0 ):
return 2. * ( ( 1 / np.sqrt( 2 * np.pi ) ) * vol * np.sqrt( time ) )
straddlePricer()
###Output
_____no_output_____
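###Markdown
Where does the 2/√(2π) factor come from? For a terminal move X that is normally distributed with mean 0 and standard deviation σ√T, the expected absolute move is E|X| = σ√T × √(2/π), and √(2/π) is exactly 2/√(2π). The cell below is an added sanity check of that identity by simulation; it is a sketch layered on top of the notebook, not part of the original.
###Code
# Added sketch: E|X| for X ~ N(0, (vol*sqrt(time))^2) should match the closed-form factor sqrt(2/pi)*vol*sqrt(time)
closedForm = np.sqrt( 2. / np.pi ) * vol * np.sqrt( time )
simulated = np.abs( np.random.normal( 0., vol * np.sqrt( time ), 1000000 ) ).mean()
print( f'closed form: {closedForm}\nsimulated E|X|: {simulated}' )
###Output
_____no_output_____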
###Markdown
Let's see what the difference is between our original implementation and our new and improved implementation.
###Code
straddlePricer() - ( 2. * ( ( 1 / ( 2 * 3.14 ) ** 0.5 ) * vol * time ** 0.5 ) )
###Output
_____no_output_____
###Markdown
In this case, the difference in precision and readability isn't huge, but that difference can be valuable at times. In addition to the functionality above, `numpy` can do a lot of other things. For instance, we can generate some random numbers.
###Code
np.random.rand()
###Output
_____no_output_____
###Markdown
Is there a way to see what functions are available? Yes, just tab after `np.`
###Code
###Output
_____no_output_____
###Markdown
Alternatively, we can call `dir` on `np` to see what is included.
###Code
dir(np)
###Output
_____no_output_____
###Markdown
Continuing with the prior example of pricing our straddle, we can also price the straddle using the Monte Carlo method. We need to generate a normally distributed set of random numbers to simulate the asset's movement through time.
###Code
def straddlePricerMC(vol=0.2, time=1.0, mcPaths=100):
dailyVol = vol / np.sqrt( 252. )
resultSum = 0
for p in range( mcPaths ):
resultSum += np.abs( np.prod( ( 1 + np.random.normal( 0, dailyVol, int( round( time * 252 ) ) ) ) ) - 1 )
return resultSum / mcPaths
straddlePricerMC()
###Output
_____no_output_____
###Markdown
There are a lot of new things going on here. Let's unpack it one line at a time. We know the variance scales linearly with time, so we can either: 1. divide the variance by time and take the square root to get a daily volatility, or 2. take the square root of variance (volatility) and divide by the root of time. Generally, the latter is clearer and simpler to understand since we typically think in vol terms, but you are free to use whichever method you want.
###Code
# Option #1 above
np.sqrt( vol ** 2 / 252 )
# Comparing the two methods
vol = 0.2
var = vol ** 2
sqrtVarOverTime = np.sqrt( var / 252 )
volOverSqrtTime = vol / np.sqrt( 252 )
valuesEqual = np.isclose( sqrtVarOverTime, volOverSqrtTime )
print( f'sqrtVarOverTime = {sqrtVarOverTime}\nvolOverSqrtTime = {volOverSqrtTime}\nAre they close? {valuesEqual}' )
###Output
_____no_output_____
###Markdown
The next line isn't super exciting, but we set the default value of our cumulative sum to be 0. So we're just defining resultSum and setting it equal to 0. If we don't do this we'll get an error.
###Code
resultSum = 0
###Output
_____no_output_____
###Markdown
Next we have a loop. There are different types of loops we can use. Here we use a `for` loop, which says "iterate over each element in `range(mcPaths)`". But wait...what's `range(mcPaths)`? `range` is a native python function that will return an iterator over a list of ints starting at 0 and going to x-1.
###Code
range10 = range( 10 )
lst = list( range10 )
print( lst )
print( len( lst ) )
###Output
_____no_output_____
###Markdown
In our case, we don't really want to do anything with `p`, so it is good practice to set it to `_`. We just want to iterate through the loop `mcPaths` times. In the default case, the function runs through the loop 100 times.
###Code
def straddlePricerMC( vol=0.2, time=1.0, mcPaths=100 ):
dailyVol = vol / np.sqrt( 252. )
resultSum = 0
for _ in range( mcPaths ):
resultSum += np.abs( np.prod( 1 + ( np.random.normal( 0, dailyVol, int( round( time * 252 ) ) ) ) ) - 1 )
return resultSum / mcPaths
straddlePricerMC()
###Output
_____no_output_____
###Markdown
To understand what the function does at each iteration of the loop, let's unpack it one step at a time. We start with the innermost function call and work backwards from there. Let's ask for help to see what the `np.random.normal` method actually does. Thankfully, there are two handy ways to see a function's documentation: 1. `help` 2. `?`
###Code
help(np.random.normal)
# np.random.normal?
###Output
_____no_output_____
###Markdown
Ok, so we know from the help function that the `np.random.normal` method takes three optional inputs: mean, standard deviation, and size of the array to generate multiple random numbers. It defaults to a distribution with a mean of zero and a standard deviation of 1, returning only 1 random number.
###Code
np.random.normal()
###Output
_____no_output_____
###Markdown
Below we're going to call this method with a mean of zero (no drift) and a standard deviation of our daily vol, so that we can generate multiple days of returns. Specifically, we ask to generate the number of days to maturity.
###Code
time = 1
nDays = time * 252
dailyVol = vol / np.sqrt( 252. )
print( nDays )
np.random.normal( 0, dailyVol, nDays )
###Output
_____no_output_____
###Markdown
Now, given we have an asset return timeseries, how much is a straddle worth? We're interested in the terminal value of the asset and because we assume the straddle is struck ATM, we can just take the absolute value of the asset's deviation from the initial value (in this case, 1)
###Code
np.random.seed( 42 ) # guarantee the same result from the two random series
returns = np.random.normal( 0, dailyVol, time * 252 )
priceAtMaturity = np.prod( 1 + returns )
changeAtMaturity = priceAtMaturity - 1
absChangeAtMaturity = np.abs( changeAtMaturity )
print( absChangeAtMaturity )
# all together in one line
np.random.seed( 42 )
print( np.abs( np.prod( 1 + ( np.random.normal( 0, dailyVol, time * 252 ) ) ) - 1 ) )
###Output
_____no_output_____
###Markdown
Let's take a closer look at what we did above. This time, we're going to utilize another two libraries called pandas and perspective to make our life a little easier.
###Code
import pandas as pd
from perspective import psp
simulatedAsset = pd.DataFrame( np.random.normal( 0, dailyVol, time * 252 ) + 1, columns=['return'] )
simulatedAsset['price'] = ( 1 * simulatedAsset['return'] ).cumprod()
psp( simulatedAsset )
###Output
_____no_output_____
###Markdown
The `for` loop simply repeats the above `mcPaths` times, and we then take the average across paths to find the expected value of the straddle.
###Code
mcPaths = 100
resultSum = 0.
for _ in range(mcPaths):
resultSum += np.abs( np.prod( 1 + np.random.normal( 0., dailyVol, time * 252 ) ) - 1 )
print( resultSum / mcPaths )
###Output
_____no_output_____
###Markdown
This price is pretty close to the price from our original pricer. More paths should help get us even closer.
###Code
straddlePricerMC(mcPaths=2000)
###Output
_____no_output_____
###Markdown
2000 paths is a lot, but it looks like we're still not converging to the original price. If we add more paths there is a tradeoff with compute time. Luckily, Jupyter has made it really easy to see how fast our function is.
###Code
%timeit straddlePricerMC(mcPaths=2000)
###Output
_____no_output_____
###Markdown
That's pretty fast. We can do a lot more paths.
###Code
print(f"1 path: {straddlePricerMC(mcPaths=1)}")
print(f"2000 path: {straddlePricerMC(mcPaths=2000)}")
print(f"5000 path: {straddlePricerMC(mcPaths=5000)}")
print(f"10000 path: {straddlePricerMC(mcPaths=10000)}")
print(f"100000 path: {straddlePricerMC(mcPaths=100000)}")
print(f"Closed form approximation: {straddlePricer()}")
###Output
_____no_output_____
###Markdown
Can we improve the above MC implementation? Of course! We can generate our random asset series in one go. Remember the `size` argument of the `np.random.normal` function
###Code
nDays = time * 252
size = (nDays, 15)
simulatedAsset = pd.DataFrame(np.random.normal(0, dailyVol, size))
simulatedAsset = (1 + simulatedAsset).cumprod()
simulatedAsset.tail()
###Output
_____no_output_____
###Markdown
Cool!...Let's visualize by plotting it with matplotlib.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
fig = plt.figure(figsize=(8,6))
ax = plt.axes()
_ = ax.plot(simulatedAsset)
###Output
_____no_output_____
###Markdown
So let's incorporate that into a `pandas` version of the MC pricer.
###Code
def straddlePricerMCWithPD(vol=0.2, time=1, mcPaths=100000):
dailyVol = vol / ( 252 ** 0.5 )
randomPaths = pd.DataFrame( np.random.normal( 0, dailyVol, ( time*252, mcPaths ) ) )
price = ( ( 1 + randomPaths ).prod() - 1 ).abs().sum() / mcPaths
return price
straddlePricerMCWithPD()
###Output
_____no_output_____ |
notebooks/1.0-mb-generate_noisy_addresses/3. Add typos.ipynb | ###Markdown
3. Add typos to addresses and countries
###Code
import random # random typos
import numpy as np
import pandas as pd # insert_typos calls pd.isnull, so pandas is needed before the demo below runs
# dictionary... for each letter list of letters
# nearby on the keyboard
nearbykeys = {
'a': ['q','w','s','x','z'],
'b': ['v','g','h','n'],
'c': ['x','d','f','v'],
'd': ['s','e','r','f','c','x'],
'e': ['w','s','d','r'],
'f': ['d','r','t','g','v','c'],
'g': ['f','t','y','h','b','v'],
'h': ['g','y','u','j','n','b'],
'i': ['u','j','k','o'],
'j': ['h','u','i','k','n','m'],
'k': ['j','i','o','l','m'],
'l': ['k','o','p'],
'm': ['n','j','k','l'],
'n': ['b','h','j','m'],
'o': ['i','k','l','p'],
'p': ['o','l'],
'q': ['w','a','s'],
'r': ['e','d','f','t'],
's': ['w','e','d','x','z','a'],
't': ['r','f','g','y'],
'u': ['y','h','j','i'],
'v': ['c','f','g','v','b'],
'w': ['q','a','s','e'],
'x': ['z','s','d','c'],
'y': ['t','g','h','u'],
'z': ['a','s','x'],
' ': ['c','v','b','n','m']
}
def insert_typos(message, typo_prob = 0.10, starting_char = 1):
""" Introduce typos in a string
Idea from:
https://stackoverflow.com/questions/56908331/python-automatically-introduce-slight-word-typos-into-phrases
"""
if pd.isnull(message):
return message
# convert the message to a list of characters
message = list(str(message))
# the number of characters that will be typos
n_chars_to_flip = round(len(message) * typo_prob)
# is a letter capitalized?
capitalization = [False] * len(message)
# make all characters lowercase & record uppercase
for i in range(len(message)):
capitalization[i] = message[i].isupper()
message[i] = message[i].lower()
# list of characters that will be flipped
pos_to_flip = []
for i in range(n_chars_to_flip):
pos_to_flip.append(random.randint(starting_char, len(message) - 1))
# insert typos
for pos in pos_to_flip:
# try-except in case of special characters
try:
typo_arrays = nearbykeys[message[pos]]
message[pos] = random.choice(typo_arrays)
except:
break
# reinsert capitalization
for i in range(len(message)):
if (capitalization[i]):
message[i] = message[i].upper()
# recombine the message into a string
message = ''.join(message)
return message
message = "The quick brown fox jumped over the big red dog 5678."
msg = insert_typos(message)
msg
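# --- Added sketch (not in the original notebook): effect of typo_prob ---
# A higher typo_prob flips proportionally more characters; seeding `random` first
# makes the corrupted strings reproducible. The probabilities below are illustrative.
random.seed(0)
for prob in (0.05, 0.25):
    print(insert_typos(message, typo_prob=prob))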
import pandas as pd
df = pd.read_csv('addresses_country_names.csv')
df
df['street'] = df['STREET'].apply(insert_typos)
df['city'] = df['CITY'].apply(insert_typos)
df['unit'] = df['UNIT'].apply(insert_typos)
df['land'] = df['updated_country'].apply(insert_typos)
df['postcode'] = df['POSTCODE'].apply(insert_typos)
df['number'] = df['NUMBER'].apply(insert_typos)
df['district'] = df['DISTRICT'].apply(insert_typos)
df['region'] = df['REGION'].apply(insert_typos)
df.columns
cols = ['country', 'land', 'number', 'street', 'city', 'unit', 'postcode', 'district', 'region']
sf = df[cols].copy()
sf
sf.to_csv('addresses_with_typos.csv', index=False)
###Output
_____no_output_____ |
data/proFootballRefDataset.ipynb | ###Markdown
Pro Football Reference Dataset
###Code
import requests
import pandas as pd
import numpy as np
import random
from bs4 import BeautifulSoup
url = 'https://www.pro-football-reference.com'
fantasy_url = '/years/{}/fantasy.htm'
game_url = '/gamelog/'
# Option 1 for grabbing players: Grab a certain amount per position. More finicky towards parameters but
# may not underrepresent a particular position
# Max number of players to gather for each position per year
position_limits = { 'QB': 32, 'RB': 60, 'WR': 80, 'TE': 25}
pos_file_key = '{}_pro_ftb_ref_per_position_{}_{}_{}_{}.csv'.format('{}', position_limits['QB'], position_limits['RB'],
position_limits['WR'], position_limits['TE'])
# Option 2: just grab top n players
n = 220
n_file_key = '{}_pro_ftb_ref_top_{}.csv'.format('{}', str(n))
# True if using top_n else False
top_n = True
data = []
encountered = []
update_top_n = []
for year in range(2000, 2021):
position_counts = { 'QB': 0, 'RB': 0, 'WR': 0, 'TE': 0, '': 0 }
tot_players = 0
print(year)
r = requests.get(url + fantasy_url.format(year))
soup = BeautifulSoup(r.content, 'html.parser')
fantasy_table = soup.find_all('table')[0]
for row in fantasy_table.find_all('tr')[2:]:
player_html = row.find('td', attrs={'data-stat': 'player'})
pos_html = row.find('td', attrs={'data-stat': 'fantasy_pos'})
if player_html is None or pos_html is None:
continue
name = player_html.a.get_text()
pos = pos_html.get_text()
stub = player_html.a.get('href')
# Check if exit condition is met
if top_n:
if tot_players % 150 == 0:
print(tot_players)
if tot_players >= n:
break
tot_players += 1
else:
if position_counts[pos] >= position_limits[pos]:
print(position_counts)
# See if all positions are filled
greater = True
for key in position_counts.keys():
if position_counts[key] < position_limits[key]:
greater = False
break
if greater:
break
else:
continue
position_counts[pos] += 1
# If player has been seen before, mark that year of career as
# being in top_n
if stub in encountered:
update_top_n.append((stub, year))
continue
encountered.append(stub)
player_url = url + stub + game_url
r_player = requests.get(player_url)
player_soup = BeautifulSoup(r_player.content, 'html.parser')
try:
player_table = player_soup.find_all('table')[0]
except:
print('Error: {}, {}, {}'.format(name, pos, year))
continue
for row in player_table.find_all('tr')[2:]:
player_stat = { 'name': name, 'pos': pos, 'stub': stub }
for data_row in row.find_all('td'):
data_title = data_row.get('data-stat')
data_val = data_row.get_text()
player_stat[data_title] = data_val
# Remove garbage rows
if 'year_id' not in player_stat.keys():
continue
# Mark rows which appear in top n
if player_stat['year_id'] == year:
player_stat['top_n'] = True
else:
player_stat['top_n'] = False
data.append(player_stat)
update_top_n = []
players_df = pd.DataFrame(data)
# Clean up data
players_df['off_pct'] = players_df['off_pct'].apply(lambda x: int(x[:-1]) if x is not np.nan and x != '' else np.nan)
players_df['def_pct'] = players_df['def_pct'].apply(lambda x: int(x[:-1]) if x is not np.nan and x != '' else np.nan)
players_df['st_pct'] = players_df['st_pct'].apply(lambda x: int(x[:-1]) if x is not np.nan and x != '' else np.nan)
# Rename so as to not interfere with pandas.Series name attribute, and year for backwards compatability
players_df.rename(columns={'name': 'full_name', 'year_id': 'year'}, inplace=True)
# Update the top_n flag for the years that need it
for stub, year in update_top_n:
players_df.loc[(players_df['stub'] == stub) & (players_df['year'] == year), 'top_n'] = True
game_df = players_df[~(players_df['week_num'] == '')]
annual_df = players_df[players_df['week_num'] == '']
# Assign each player what is most likely a unique id, although not 100% guaranteed if a player shares a name, team,
# position, and year
game_df.loc[:, 'unique_id'] = game_df.apply(lambda row: row.full_name + ',' + row.team + ',' + row.pos + ',' + str(row.year), axis=1)
annual_df.loc[:, 'unique_id'] = annual_df.apply(lambda row: row.full_name + ',' + row.team + ',' + row.pos + ',' + str(row.year), axis=1)
# annual_df.loc[:, 'unique'] = annual_df.apply(lambda row: print(row.full_name), axis=1)
if top_n:
game_df.to_csv(n_file_key.format('game'), index=False)
annual_df.to_csv(n_file_key.format('annual'), index=False)
else:
game_df.to_csv(pos_file_key.format('game'), index=False)
annual_df.to_csv(pos_file_key.format('annual'), index=False)
game_df = pd.read_csv('data/game_pro_ftb_ref_top_220.csv')
annual_df = pd.read_csv('data/annual_pro_ftb_ref_top_220.csv')
for year in range(2000, 2021):
position_counts = { 'QB': 0, 'RB': 0, 'WR': 0, 'TE': 0, '': 0 }
tot_players = 0
print(year)
r = requests.get(url + fantasy_url.format(year))
soup = BeautifulSoup(r.content, 'html.parser')
fantasy_table = soup.find_all('table')[0]
for row in fantasy_table.find_all('tr')[2:]:
player_html = row.find('td', attrs={'data-stat': 'player'})
pos_html = row.find('td', attrs={'data-stat': 'fantasy_pos'})
if player_html is None or pos_html is None:
continue
name = player_html.a.get_text()
pos = pos_html.get_text()
stub = player_html.a.get('href')
# Check if exit condition is met
if top_n:
if tot_players % 150 == 0:
print(tot_players)
if tot_players >= n:
break
tot_players += 1
else:
if position_counts[pos] >= position_limits[pos]:
print(position_counts)
# See if all positions are filled
greater = True
for key in position_counts.keys():
if position_counts[key] < position_limits[key]:
greater = False
break
if greater:
break
else:
continue
position_counts[pos] += 1
# Check to make sure player hasn't been seen yet for this year
if stub + str(year) in encountered:
continue
encountered.append(stub + str(year))
for df in [game_df, annual_df]:
curr_player_df = df[(df['full_name'] == name) & (df['pos'] == pos) & (df['year'] == year)]
df.loc[curr_player_df.index, 'stub'] = stub  # .loc accepts the whole index of matching rows; .at expects a single label
len(['a' for a in curr_player_df.index])
len(curr_player_df.index)
game_df.to_csv('data/game_pro_ftb_ref_top_220.csv', index=False)
annual_df.to_csv('data/annual_pro_ftb_ref_top_220.csv', index=False)
###Output
_____no_output_____ |
2016/tutorial_final/94/Le_Le_Tutorial.ipynb | ###Markdown
IntroductionIn this tutorial, you will learn some basic ideas about the k-Nearest Neighbors algorithm (or kNN for short) and apply kNN using available packages. Also, you will implement the kNN algorithm from scratch for classification analysis. As one of the top 10 data mining algorithms identified by the IEEE International Conference on Data Mining (ICDM), kNN is very popular in data analysis for its simplicity and good performance in application. What is kNNThe kNN algorithm is instance-based and can be used for classification and regression. The idea behind the algorithm is very simple: use the characteristics of an object's k-nearest neighbors to evaluate the object itself. It is a supervised learning algorithm, which means it learns a model from samples with known labels or values. [](https://upload.wikimedia.org/wikipedia/commons/e/e7/KnnClassification.svg)The graph above is a classic graph from wiki to explain the basic idea of kNN. The samples shown in the graph have two classes: one is the blue square and the other is the red triangle. The green circle at the center is a sample whose class has yet to be decided. The result depends on the parameter k by applying the majority vote rule.If k = 3, which means the class of the green circle depends on its three nearest neighbors, the green circle is a red triangle.If k = 5, which means the class of the green circle depends on its five nearest neighbors, the green circle is a blue square.Concerning how to choose the best k, it depends largely on the dataset. A larger k means deciding the label or value of an unknown test sample based on more neighbors, and thus could reduce the effect of outliers and noise to a certain degree. But the bad side is that the distinction between different classes may not be that clear. The kNN algorithm is also a lazy-learning algorithm since the model is only constructed when a prediction needs to be made. AnalysiskNN can be applied to both classification and regression problems and will have different output values based on which type the problem belongs to. In a classification problem, the output for an object is a class label determined by holding a majority vote over the labels of the object's k neighbors. The attribute types decide the measure of similarity: Euclidean distance is used for real-valued data and Hamming distance is used for categorical data.In a regression problem, the output for an object is a value determined by the average value of the object's k neighbors.Training data used by kNN for analysis purposes has a feature space, either scalar or multi-dimensional, meaning that a distance can be calculated to compare different objects and find the k-nearest neighbors. sklearn.neighborsIf you want to use an available package to apply kNN, then sklearn.neighbors can be your choice.There are two main classes in sklearn.neighbors for k-Nearest Neighbors: one is sklearn.neighbors.KNeighborsRegressor for regression analysis and the other is sklearn.neighbors.KNeighborsClassifier for classification analysis. Let's start from classification first. KNeighborsClassifierActually besides KNeighborsClassifier, scikit-learn provides another class called RadiusNeighborsClassifier for nearest neighbors classification. This class works especially well when the data sample is not uniformly sampled because the user can specify a radius R that decides the field of reference instances.
Thus data points in sparsely sampled regions use fewer neighbors for classification. However, it is not that effective for datasets with high-dimensional feature spaces, and the reason relates to a term called "curse of dimensionality", meaning various phenomena that happen only in high-dimensional spaces during data analysis. KNeighborsClassifier is the more commonly used one and you will try to use it. The dataset to be used here is the iris dataset from sklearn.datasets. By the way, there are many available datasets in scikit-learn. If you want to try kNN using other datasets, you can download them and try commands similar to those below.You can download the data using load_iris(), and the default value for the parameter return_X_y is "False", under which the return value is a Bunch type. The Bunch type is a quite useful dictionary-like object, providing information about 'target_names'(label names), 'data'(data without labels), 'target'(labels), 'feature_names'(names of the features) and 'DESCR'(description of the dataset).
###Code
from sklearn.datasets import load_iris
import numpy as np
iris_data = load_iris()
# print the basic information of this dataset
print "class: ", iris_data['target_names']
print "featur: ", iris_data['feature_names']
print "first five rows of data: ", iris_data['data'][:5]
print "labels: ", np.unique(iris_data['target']) # Labels are the numeric way to represent classes correspondingly.
print "number of samples: ", len(iris_data['data'])
print "number of samples for "+ iris_data['target_names'][0] +": ", len([iris_data['data'][i] for i in range(len(iris_data['target'])) if iris_data['target'][i] == 0])
print "number of samples for "+ iris_data['target_names'][1] +": ", len([iris_data['data'][i] for i in range(len(iris_data['target'])) if iris_data['target'][i] == 1])
print "number of samples for "+ iris_data['target_names'][2] +": ", len([iris_data['data'][i] for i in range(len(iris_data['target'])) if iris_data['target'][i] == 2])
###Output
class: ['setosa' 'versicolor' 'virginica']
features:  ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
first five rows of data: [[ 5.1 3.5 1.4 0.2]
[ 4.9 3. 1.4 0.2]
[ 4.7 3.2 1.3 0.2]
[ 4.6 3.1 1.5 0.2]
[ 5. 3.6 1.4 0.2]]
labels: [0 1 2]
number of samples: 150
number of samples for setosa: 50
number of samples for versicolor: 50
number of samples for virginica: 50
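###Markdown
Before splitting the data, here is a small added illustration (a sketch, not part of the original tutorial) of the two similarity measures mentioned earlier: Euclidean distance for real-valued features such as the iris measurements, and Hamming distance for categorical features.
###Code
# Added sketch: Euclidean distance for real-valued vectors, Hamming distance for categorical ones
a = np.array([5.1, 3.5, 1.4, 0.2])
b = np.array([4.9, 3.0, 1.4, 0.2])
print "Euclidean distance: ", np.sqrt(np.sum((a - b) ** 2))
cat_a = ['red', 'small', 'round']
cat_b = ['red', 'large', 'round']
print "Hamming distance: ", sum(x != y for x, y in zip(cat_a, cat_b))
###Output
_____no_output_____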
###Markdown
Then we need to split the dataset into a training dataset and a test dataset. We can do the split randomly with a ratio of 66% to 34%, respectively. Before that, let's combine the data with the labels so that it will be easier to randomly select training and test data. You can do this by inserting the labels as the last column of the data.
###Code
# add labels to the dataset
data = np.array(iris_data['data'])
data = np.insert(data, 4, iris_data['target'], axis = 1)
###Output
_____no_output_____
###Markdown
Concerning splitting the data, you can create a random selector and then select training and test rows by index. Also, you can split the attributes and the labels at the same time.
###Code
# create a random selector
selector = range(len(data))
np.random.shuffle(selector)
# select the training dataset
train = data[selector[:99]][:, :-1]
train_label = data[selector[:99]][:,-1]
# select the test dataset
test = data[selector[99:]][:, :-1]
test_label = data[selector[99:]][:,-1]
###Output
_____no_output_____
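###Markdown
As an aside (an added sketch, not in the original tutorial), newer versions of scikit-learn also provide a helper, train_test_split, that performs the same kind of random split in one call. The manual shuffling above is kept because it makes the mechanics explicit.
###Code
# Added sketch: the same 66%/34% random split using scikit-learn's helper function
from sklearn.model_selection import train_test_split
train2, test2, train_label2, test_label2 = train_test_split(
    iris_data['data'], iris_data['target'], test_size=0.34, random_state=0)
print "training size: ", len(train2), " test size: ", len(test2)
###Output
_____no_output_____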
###Markdown
Now, with the datasets ready, you can initiate a KNeighborsClassifier. All the parameters are optional and the default value for n_neighbors is 5. Another interesting parameter called "weights" lets you choose whether to assign the same weight to each neighbor ('uniform') or to weight neighbors by the inverse of their distance from the unknown data point ('distance').
###Code
from sklearn.neighbors import KNeighborsClassifier
knnClassifier = KNeighborsClassifier(n_neighbors = 10, weights = 'uniform')
###Output
_____no_output_____
###Markdown
Then, you need to use the training dataset to train the classifier, and the method is "fit(X, y)".
###Code
knnClassifier.fit(train, train_label)
###Output
_____no_output_____
###Markdown
The next step is to use the classifier to predict labels for the test dataset and to calculate the error rate.
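Equivalently, the error rate is one minus the accuracy. A short sketch using the classifier's built-in score method (assuming knnClassifier has already been fitted as above):
```python
# score returns the mean accuracy on the given test data and labels
accuracy = knnClassifier.score(test, test_label)
print(1.0 - accuracy)  # error rate
```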
###Code
# predict the label of test data
predict_label1 = knnClassifier.predict(test)
# print the error rate
error = 0
for i in range(len(test_label)):
if test_label[i] != predict_label1[i]:
error += 1
error = error * 1.0 / len(test)
print "error rate: ", error
###Output
error rate: 0.0196078431373
###Markdown
Now you can do kNN classification with KNeighborsClassifier! Let's look at KNeighborsRegressor. KNeighborsRegressor Similar to the classification case, besides KNeighborsRegressor scikit-learn provides another class, RadiusNeighborsRegressor, for regression analysis. RadiusNeighborsRegressor works well when the data is not uniformly sampled; it lets the user select a radius r, and only the neighbors within that radius are considered. KNeighborsRegressor is used in the same way as KNeighborsClassifier, so you can try it on a small test example.
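For completeness, here is a self-contained sketch of RadiusNeighborsRegressor on a tiny made-up example; the radius value is arbitrary.
```python
import numpy as np
from sklearn.neighbors import RadiusNeighborsRegressor

X_small = np.array([[1, 2], [2, 2], [2, 3], [5, 7]])
y_small = np.array([10., 8., 1., 1.])
reg = RadiusNeighborsRegressor(radius=2.0)  # average the targets of all points within radius 2
reg.fit(X_small, y_small)
print(reg.predict([[2, 2]]))  # mean of the three targets within radius 2
```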
###Code
from sklearn.neighbors import KNeighborsRegressor
import numpy as np
# create a small example
x = [[1, 2], [2, 2], [2, 3], [5, 7], [6, 8], [8, 8], [8, 12], [10, 14]]
variables = np.array(x)
value = [10, 8, 1, 1, 3, 2, 4, 3]
# initiate the regressor with a neighbor number of 2
neighbor = KNeighborsRegressor(2)
# fit the regressor with training data and value
neighbor.fit(variables, value)
# use the regressor to predict
predicted_value = neighbor.predict([[5, 6]])
print "for the predicted value of [[5, 6]]: ", predicted_value
###Output
for the predicted value of [[5, 6]]: [ 2.]
###Markdown
Implement kNN Instead of using the available packages, you may be interested in writing your own kNN. Let's write our own kNN from scratch for classification analysis! One thing to mention: I think it is better to keep things complete, so the whole implementation is kept in one cell below with comments in between. Sorry for the inconvenience.
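The implementation below relies on the third-party heapdict package, which behaves like a dictionary whose popitem() returns the entry with the smallest value first; a quick sketch of that behaviour:
```python
from heapdict import heapdict

hd = heapdict()
hd['a'] = 3.0
hd['b'] = 0.5
hd['c'] = 1.2
print(hd.popitem())  # ('b', 0.5): the key with the smallest value comes out first
```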
###Code
import numpy as np
from heapdict import heapdict
import math
from collections import Counter
class kNNClassfication():
"""To initiate, we can pass in the training data, the corresponding labels,
and k which is the number of neighbors we choose to decide the unknown instance """
def __init__(self, train_data, label_data, k_data):
self.train = np.array(train_data)
self.label = label_data
self.k = k_data
pass
"""To predict, we pass in the test data and return the predicted labels.
Based on what we have discussed above, for an unknown instance, we need to find its k neighbors.
For the classification problem, we decide its label based on the majority vote rule"""
def predict(self, test_data):
predict_label = []
neighbors = []
label_test = ''
# We find each instance's neighbors and append to the neighbors array
for instance in test_data:
kneighbor = self.find_neighbors(instance)
neighbors.append(kneighbor)
# We find each instance's majority label and append to the predict_label array
for kneighbor in neighbors:
label_test = self.find_majority(kneighbor)
predict_label.append(label_test)
return predict_label
pass
"""We use a method to find neighbors of an instance and return the labels of neighbors as a dictionary.
We call the calculate_distance method to get the distance between the unknown instance and each training data.
"""
def find_neighbors(self, instance):
# use a heapdict() to find neighbors with the small distance
container = heapdict()
# calculate the distance between the unknown instance and each training data
for i in range(len(self.train)):
distance = self.calculate_distance(instance, self.train[i])
container[i] = distance
# add labels of k nearest neighbors to the dictionary
neighbors = {}
for i in range(self.k):
neighbor = container.popitem()
key = neighbor[0]
value = neighbor[1]
neighbors[key] = self.label[key]
return neighbors
pass
"""calculate the Euclidean distance between two instances"""
    def calculate_distance(self, x, y):
        # accumulate the squared differences, then take a single square root at the end
        distance = 0
        for i in range(len(x)):
            distance += (x[i] - y[i]) ** 2
        return math.sqrt(distance)
pass
"""We pass in an instance's k nearest neighbors, find the majority label and return it."""
def find_majority(self, kneighbor):
label = []
for item in kneighbor:
label.append(kneighbor[item])
# use Counter() to get the count of each label
c = Counter(label)
test_label = c.most_common(1)[0][0]
return test_label
pass
###Output
_____no_output_____
###Markdown
We have finished our own kNN class! Next, let's write a simple test. I make up three classes with different distributions.
###Code
train_small = np.array([[1, 2, 3], [2, 2, 1], [2, 3, 4], [5, 7, 6], [6, 8, 7], [8, 8, 4], [8, 6, 10], [10, 16, 8]])
label_small = np.array(['dog', 'dog', 'dog', 'cat', 'cat','cat', 'tiger', 'tiger'])
test_small = np.array([[4,3,2], [12, 10, 10], [5,8,7]])
test_label_small = np.array(['dog', 'tiger', 'cat'])
k = 2
# initiate the classifier and predict the test data
knnClassifier = kNNClassfication(train_small, label_small, k)
label_test_small = knnClassifier.predict(test_small)
print "the predicted value: ", label_test_small
# calculate the error rate
error_small = 0
for i in range(len(test_small)):
if (test_label_small[i] != label_test_small[i]):
error_small += 1
error_small = error_small * 1.0 / len(test_small)
print "error rate: ", error_small
###Output
the predicted value: ['dog', 'tiger', 'cat']
error rate: 0
###Markdown
Let's also test on the iris dataset that we used earlier when learning how to use KNeighborsClassifier from scikit-learn.
###Code
# use the training data split from iris
knnClassifier3 = kNNClassfication(train, train_label, 10)
# predict the test data
predict_label2 = knnClassifier3.predict(test)
# map the integer labels back to the corresponding class names
test_class = []
for i in test_label:
if i == 0:
test_class.append('setosa')
elif i == 1:
test_class.append('versicolor')
else:
test_class.append('virginica')
predict_class = []
for i in predict_label2:
if i == 0:
predict_class.append('setosa')
elif i == 1:
predict_class.append('versicolor')
else:
predict_class.append('virginica')
# print the actual labels and the predicted labels
for i in range(len(test_label)):
print "actual class: ", test_class[i], " predicted class: ", predict_class[i]
# print the error rate
error2 = 0
for i in range(len(test_label)):
if test_label[i] != predict_label2[i]:
error2 += 1
error2 = error2 * 1.0 / len(test)
print "error rate: ", error2
###Output
actual class: setosa predicted class: setosa
actual class: setosa predicted class: setosa
actual class: setosa predicted class: setosa
actual class: setosa predicted class: setosa
actual class: virginica predicted class: virginica
actual class: setosa predicted class: setosa
actual class: versicolor predicted class: versicolor
actual class: virginica predicted class: virginica
actual class: versicolor predicted class: versicolor
actual class: virginica predicted class: virginica
actual class: versicolor predicted class: versicolor
actual class: setosa predicted class: setosa
actual class: virginica predicted class: virginica
actual class: versicolor predicted class: versicolor
actual class: setosa predicted class: setosa
actual class: versicolor predicted class: virginica
actual class: virginica predicted class: virginica
actual class: virginica predicted class: virginica
actual class: versicolor predicted class: versicolor
actual class: setosa predicted class: setosa
actual class: versicolor predicted class: versicolor
actual class: virginica predicted class: virginica
actual class: versicolor predicted class: versicolor
actual class: virginica predicted class: virginica
actual class: versicolor predicted class: versicolor
actual class: setosa predicted class: setosa
actual class: virginica predicted class: virginica
actual class: setosa predicted class: setosa
actual class: versicolor predicted class: versicolor
actual class: setosa predicted class: setosa
actual class: versicolor predicted class: versicolor
actual class: versicolor predicted class: versicolor
actual class: versicolor predicted class: versicolor
actual class: setosa predicted class: setosa
actual class: versicolor predicted class: versicolor
actual class: virginica predicted class: virginica
actual class: setosa predicted class: setosa
actual class: setosa predicted class: setosa
actual class: versicolor predicted class: versicolor
actual class: setosa predicted class: setosa
actual class: setosa predicted class: setosa
actual class: versicolor predicted class: versicolor
actual class: virginica predicted class: virginica
actual class: setosa predicted class: setosa
actual class: virginica predicted class: virginica
actual class: versicolor predicted class: versicolor
actual class: setosa predicted class: setosa
actual class: versicolor predicted class: versicolor
actual class: versicolor predicted class: versicolor
actual class: versicolor predicted class: versicolor
actual class: virginica predicted class: virginica
error rate: 0.0196078431373
|
example_notebooks/general_demo.ipynb | ###Markdown
In this demoThis notebook demonstrates some of the core model building and differential equation solving elements:- Hamiltonian and signal construction- Model transformations: entering a frame, making a rotating wave approximation- Defining and solving differential equationsSections1. `Signal`s2. Constructing a `HamiltonianModel`3. Setting `frame` and `cutoff_freq` in `HamiltonianModel`4. Integrating the Schrodinger equation5. Adding dissipative dynamics with a `LindbladModel` and simulating density matrix evolution6. Simulate the Lindbladian to get a `SuperOp` representation of the quantum channel 1. `Signal`A `Signal` object represents a complex mixed signal, i.e. a function of the form: \begin{equation} s(t) = f(t)e^{i2 \pi \nu t},\end{equation}where $f(t)$ is the *envelope* and $\nu$ is the *carrier frequency*.Here we define a signal with a Gaussian envelope:
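The gaussian helper used in the next cell is assumed to be imported elsewhere in the notebook; a minimal sketch of such an envelope function could look like this.
```python
import numpy as np

def gaussian(amp, sig, t0, t):
    """Gaussian envelope with amplitude amp and width sig, centered at t0 (illustrative sketch)."""
    return amp * np.exp(-(t - t0) ** 2 / (2 * sig ** 2))
```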
###Code
amp = 1. # amplitude
sig = 2. # sigma
t0 = 3.5*sig # center of Gaussian
T = 7*sig # end of signal
gaussian_envelope = lambda t: gaussian(amp, sig, t0, t)
gauss_signal = Signal(envelope=gaussian_envelope, carrier_freq=0.5)
print(gauss_signal.envelope(0.25))
print(gauss_signal(0.25))
gauss_signal.draw(0, T, 100, function='envelope')
gauss_signal.draw(0, T, 200)
###Output
_____no_output_____
###Markdown
2. The `HamiltonianModel` classA `HamiltonianModel` is specified as a list of Hermitian operators with `Signal` coefficients. Here, we use a classic qubit model:\begin{equation} H(t) = 2 \pi \nu \frac{Z}{2} + 2 \pi r s(t) \frac{X}{2}.\end{equation}Generally, a `HamiltonianModel` represents a linear combination:\begin{equation} H(t) = \sum_j s_j(t) H_j,\end{equation}where: - $H_j$ are Hermitian operators given as `terra.quantum_info.Operator` objects, and - $s_j(t) = Re[f_j(t)e^{i2 \pi \nu_j t}]$, where the complex functions $f_j(t)e^{i2 \pi \nu_j t}$ are specified as `Signal` objects.Constructing a `HamiltonianModel` requires specifying lists of the operators and the signals.
###Code
#####################
# construct operators
#####################
r = 0.5
w = 1.
X = Operator.from_label('X')
Y = Operator.from_label('Y')
Z = Operator.from_label('Z')
operators = [2 * np.pi * w * Z/2,
2 * np.pi * r * X/2]
###################
# construct signals
###################
# Define gaussian envelope function to have max amp and area approx 2
amp = 1.
sig = 0.399128/r
t0 = 3.5*sig
T = 7*sig
gaussian_envelope = lambda t: gaussian(amp, sig, t0, t)
signals = [1.,
Signal(envelope=gaussian_envelope, carrier_freq=w)]
#################
# construct model
#################
hamiltonian = HamiltonianModel(operators=operators, signals=signals)
###Output
_____no_output_____
###Markdown
2.1 Evaluation and driftEvaluate at a given time.
###Code
print(hamiltonian.evaluate(0.12))
###Output
[[ 3.14159265+0.j 0.00419151+0.j]
[ 0.00419151+0.j -3.14159265+0.j]]
###Markdown
Get the drift (terms corresponding to constant coefficients).
###Code
hamiltonian.drift
def plot_qubit_hamiltonian_components(hamiltonian, t0, tf, N=200):
t_vals = np.linspace(t0, tf, N)
model_vals = np.array([hamiltonian.evaluate(t) for t in t_vals])
x_coeff = model_vals[:, 0, 1].real
y_coeff = -model_vals[:, 0, 1].imag
z_coeff = model_vals[:, 0, 0].real
plt.plot(t_vals, x_coeff, label='X component')
plt.plot(t_vals, y_coeff, label='Y component')
plt.plot(t_vals, z_coeff, label='Z component')
plt.legend()
plot_qubit_hamiltonian_components(hamiltonian, 0., T)
###Output
_____no_output_____
###Markdown
2.2 Enter a rotating frameWe can specify a frame to enter the Hamiltonian in. Given a Hermitian operator $H_0$, *entering the frame* of $H_0$ means transforming a Hamiltonian $H(t)$:\begin{equation} H(t) \mapsto \tilde{H}(t) = e^{i H_0 t}H(t)e^{-iH_0 t} - H_0\end{equation}Here, we will enter the frame of the drift Hamiltonian, resulting in:\begin{equation}\begin{aligned} \tilde{H}(t) &= e^{i2 \pi \nu \frac{Z}{2} t}\left(2 \pi \nu \frac{Z}{2} + 2 \pi r s(t) \frac{X}{2}\right)e^{-i2 \pi \nu \frac{Z}{2} t} - 2 \pi \nu \frac{Z}{2} \\ &= 2 \pi r s(t) e^{i2 \pi \nu \frac{Z}{2} t}\frac{X}{2}e^{-i2 \pi \nu \frac{Z}{2} t}\\ &= 2 \pi r s(t) \left[\cos(2 \pi \nu t) \frac{X}{2} - \sin(2 \pi \nu t) \frac{Y}{2} \right]\end{aligned}\end{equation}
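As an independent numerical sanity check of this identity (not part of the original notebook), one can exponentiate the frame operator directly:
```python
import numpy as np
from scipy.linalg import expm

nu, t = 1.0, 0.12
X = np.array([[0., 1.], [1., 0.]], dtype=complex)
Y = np.array([[0., -1.j], [1.j, 0.]], dtype=complex)
Z = np.array([[1., 0.], [0., -1.]], dtype=complex)

H0 = 2 * np.pi * nu * Z / 2
U = expm(1j * H0 * t)
lhs = U.dot(X / 2).dot(U.conj().T)
rhs = np.cos(2 * np.pi * nu * t) * X / 2 - np.sin(2 * np.pi * nu * t) * Y / 2
print(np.allclose(lhs, rhs))  # expected: True
```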
###Code
hamiltonian.frame = hamiltonian.drift
###Output
_____no_output_____
###Markdown
Evaluate again.
###Code
print(hamiltonian.evaluate(0.12))
# validate with independent computation
t = 0.12
2 * np.pi * r * np.real(signals[1](t)) * (np.cos(2*np.pi * w * t) * X / 2
- np.sin(2*np.pi * w * t) * Y / 2 )
###Output
_____no_output_____
###Markdown
Replot the coefficients of the model in the Pauli basis over time.
###Code
plot_qubit_hamiltonian_components(hamiltonian, 0., T)
###Output
_____no_output_____
###Markdown
3. Set cutoff frequency (rotating wave approximation)A common technique to simplify the dynamics of a quantum system is to perform the *rotating wave approximation* (RWA), in which terms with high frequency are averaged to $0$.The RWA can be applied to any `HamiltonianModel` (in the given `frame`) by setting the `cutoff_freq` attribute, which sets any fast oscillating terms to $0$, effectively performing a moving average on terms with carrier frequencies above `cutoff_freq`.For our model, the classic `cutoff_freq` is $2 \nu$ (twice the qubit frequency). This approximates the Hamiltonian $\tilde{H}(t)$ as:\begin{equation} \tilde{H}(t) \approx 2 \pi \frac{r}{2} \left[Re[f(t)] \frac{X}{2} + Im[f(t)] \frac{Y}{2} \right],\end{equation}where $f(t)$ is the envelope of the on-resonance drive. In our case $f(t) = Re[f(t)]$, and so we simply have\begin{equation} \tilde{H}(t) \approx 2 \pi \frac{r}{2} Re[f(t)] \frac{X}{2},\end{equation}
###Code
# set the cutoff frequency
hamiltonian.cutoff_freq = 2*w
# evaluate again
print(hamiltonian.evaluate(0.12))
###Output
[[0. +0.00000000e+00j 0.00287496-2.16840434e-19j]
[0.00287496+2.16840434e-19j 0. +0.00000000e+00j]]
###Markdown
We also plot the coefficients of the model in the frame of the drift with the RWA applied. We now expect to see simply a plot of $\pi \frac{r}{2} f(t)$ for the $X$ coefficient.
###Code
plot_qubit_hamiltonian_components(hamiltonian, 0., T)
###Output
_____no_output_____
###Markdown
4. Solve the Schrodinger equation with a `HamiltonianModel`To solve the Schrodinger equation for the given Hamiltonian, we construct a `SchrodingerProblem` object, which specifies the desired simulation.
###Code
# reset the frame and cutoff_freq properties
hamiltonian.frame = None
hamiltonian.cutoff_freq = None
# solve the problem, with some options specified
y0 = Statevector([0., 1.])
%time sol = solve_lmde(hamiltonian, t_span=[0., T], y0=y0, atol=1e-10, rtol=1e-10)
y = sol.y[-1]
print("\nFinal state:")
print("----------------------------")
print(y)
print("\nPopulation in excited state:")
print("----------------------------")
print(np.abs(y.data[0])**2)
###Output
CPU times: user 406 ms, sys: 2.6 ms, total: 409 ms
Wall time: 407 ms
Final state:
----------------------------
Statevector([0.96087419-0.27193942j 0.0511707 +0.01230027j])
Population in excited state:
----------------------------
0.9972302627366899
###Markdown
When specifying a problem, we can specify which frame to solve in, and a cutoff frequency to solve with.
###Code
%time sol = solve_lmde(hamiltonian, t_span=[0., T], y0=y0, solver_cutoff_freq=2*w, atol=1e-10, rtol=1e-10)
y = sol.y[-1]
print("\nFinal state:")
print("----------------------------")
print(y)
print("\nPopulation in excited state:")
print("----------------------------")
print(np.abs(y.data[0])**2)
###Output
CPU times: user 159 ms, sys: 9.9 ms, total: 169 ms
Wall time: 162 ms
Final state:
----------------------------
Statevector([ 9.62205827e-01-2.72323239e-01j
-2.36278329e-08+8.34847474e-08j])
Population in excited state:
----------------------------
0.9999999999756191
###Markdown
4.1 Technical solver notes:- Behind the scenes, the `SchrodingerProblem` constructs an `OperatorModel` from the `HamiltonianModel`, representing the generator in the Schrodinger equation:\begin{equation} G(t) = \sum_j s_j(t)\left[-iH_j\right]\end{equation}which is then used in a generalized routine for solving DEs of the form $y'(t) = G(t)y(t)$- The generalized solver routine will automatically solve the DE in the drift frame, as well as in the basis in which the drift is diagonal (relevant for non-diagonal drift operators, to save on exponentiations for the frame operator). 5. Solving with dissipative dynamics To simulate with noise operators, we define a `LindbladModel`, containing:- a model of a Hamiltonian (specified with either a `HamiltonianModel` object, or in the standard decomposition of operators and signals)- an optional list of noise operators- an optional list of time-dependent coefficients for the noise operatorsSuch a system is simulated in terms of the Lindblad master equation:\begin{equation} \dot{\rho}(t) = -i[H(t), \rho(t)] + \sum_j g_j(t) \left(L_j \rho L_j^\dagger - \frac{1}{2}\{L_j^\dagger L_j, \rho(t)\}\right),\end{equation}where- $H(t)$ is the Hamiltonian,- $L_j$ are the noise operators, and- $g_j(t)$ are the noise coefficientsHere we will construct such a model using the above `Hamiltonian`, along with a noise operator that drives the state to the ground state.
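As a quick check that this noise operator does relax the excited state to the ground state (using the convention in this notebook that index 0 is the excited state), a small sketch:
```python
import numpy as np

L = np.array([[0., 0.],
              [1., 0.]])
excited = np.array([1., 0.])   # index 0 = excited state in this notebook's convention
print(L.dot(excited))          # -> [0., 1.], i.e. the ground state
```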
###Code
# construct quantum model with noise operators
noise_ops = [np.array([[0., 0.],
[1., 0.]])]
noise_signals = [0.001]
lindblad_model = LindbladModel.from_hamiltonian(hamiltonian=hamiltonian,
noise_operators=noise_ops,
noise_signals=noise_signals)
# density matrix
y0 = DensityMatrix([[0., 0.], [0., 1.]])
%time sol = solve_lmde(lindblad_model, t_span=[0., T], y0=y0, atol=1e-10, rtol=1e-10)
sol.y[-1]
###Output
CPU times: user 497 ms, sys: 9.5 ms, total: 506 ms
Wall time: 500 ms
###Markdown
We may also simulate the Lindblad equation with a cutoff frequency.
###Code
%time sol = solve_lmde(lindblad_model, t_span=[0., T], y0=y0, solver_cutoff_freq=2*w, atol=1e-10, rtol=1e-10)
sol.y[-1]
###Output
CPU times: user 480 ms, sys: 6.35 ms, total: 486 ms
Wall time: 482 ms
###Markdown
5.1 Technical notes- Similarly to the flow of `SchrodingerProblem`, `LindbladProblem` constructs an `OperatorModel` representing the *vectorized* Lindblad equation, which is then used to simulate the Lindblad equation on the vectorized density matrix.- Frame handling and cutoff frequency handling are handled at the `OperatorModel` level, and hence can be used here as well. 5.2 Simulate the Lindbladian/SuperOp
###Code
# identity quantum channel in superop representation
y0 = SuperOp(np.eye(4))
%time sol = solve_lmde(lindblad_model, t_span=[0., T], y0=y0, atol=1e-10, rtol=1e-10)
print(sol.y[-1])
print(PTM(y))
###Output
PTM([[ 5.00000000e-01+0.j, -4.54696754e-08+0.j, 7.38951024e-08+0.j,
5.00000000e-01+0.j],
[-4.54696754e-08+0.j, 4.13498276e-15+0.j, -6.71997263e-15+0.j,
-4.54696754e-08+0.j],
[ 7.38951024e-08+0.j, -6.71997263e-15+0.j, 1.09209723e-14+0.j,
7.38951024e-08+0.j],
[ 5.00000000e-01+0.j, -4.54696754e-08+0.j, 7.38951024e-08+0.j,
5.00000000e-01+0.j]],
input_dims=(2,), output_dims=(2,))
###Markdown
In this demoThis notebook demonstrates some of the core model building and differential equation solving elements:- Hamiltonian and signal construction- Model transformations: entering a frame, making a rotating wave approximation- Defining and solving differential equationsSections1. `Signal`s2. Constructing a `HamiltonianModel`3. Setting `frame` and `cutoff_freq` in `HamiltonianModel`4. Integrating the Schrodinger equation5. Adding dissipative dynamics with a `LindbladModel` and simulating density matrix evolution6. Simulate the Lindbladian to get a `SuperOp` representation of the quantum channel 1. `Signal`A `Signal` object represents a complex mixed signal, i.e. a function of the form: \begin{equation} s(t) = f(t)e^{i2 \pi \nu t},\end{equation}where $f(t)$ is the *envelope* and $\nu$ is the *carrier frequency*.Here we define a signal with a Gaussian envelope:
###Code
amp = 1. # amplitude
sig = 2. # sigma
t0 = 3.5*sig # center of Gaussian
T = 7*sig # end of signal
gaussian_envelope = lambda t: gaussian(amp, sig, t0, t)
gauss_signal = Signal(envelope=gaussian_envelope, carrier_freq=0.5)
print(gauss_signal.envelope_value(0.25))
print(gauss_signal.value(0.25))
gauss_signal.plot_envelope(0, T, 100)
gauss_signal.plot(0, T, 200)
###Output
_____no_output_____
###Markdown
2. The `HamiltonianModel` classA `HamiltonianModel` is specified as a list of Hermitian operators with `Signal` coefficients. Here, we use a classic qubit model:\begin{equation} H(t) = 2 \pi \nu \frac{Z}{2} + 2 \pi r s(t) \frac{X}{2}.\end{equation}Generally, a `HamiltonianModel` represents a linear combination:\begin{equation} H(t) = \sum_j s_j(t) H_j,\end{equation}where: - $H_j$ are Hermitian operators given as `terra.quantum_info.Operator` objects, and - $s_j(t) = Re[f_j(t)e^{i2 \pi \nu_j t}]$, where the complex functions $f_j(t)e^{i2 \pi \nu_j t}$ are specified as `Signal` objects.Constructing a `HamiltonianModel` requires specifying lists of the operators and the signals.
###Code
#####################
# construct operators
#####################
r = 0.5
w = 1.
X = Operator.from_label('X')
Y = Operator.from_label('Y')
Z = Operator.from_label('Z')
operators = [2 * np.pi * w * Z/2,
2 * np.pi * r * X/2]
###################
# construct signals
###################
# Define gaussian envelope function to have max amp and area approx 2
amp = 1.
sig = 0.399128/r
t0 = 3.5*sig
T = 7*sig
gaussian_envelope = lambda t: gaussian(amp, sig, t0, t)
signals = [Constant(1.),
Signal(envelope=gaussian_envelope, carrier_freq=w)]
#################
# construct model
#################
hamiltonian = HamiltonianModel(operators=operators, signals=signals)
###Output
_____no_output_____
###Markdown
2.1 Evaluation and driftEvaluate at a given time.
###Code
print(hamiltonian.evaluate(0.12))
###Output
[[ 3.14159265+0.j 0.00419151+0.j]
[ 0.00419151+0.j -3.14159265+0.j]]
###Markdown
Get the drift (terms corresponding to `Constant` coefficients).
###Code
hamiltonian.drift
def plot_qubit_hamiltonian_components(hamiltonian, t0, tf, N=200):
t_vals = np.linspace(t0, tf, N)
model_vals = np.array([hamiltonian.evaluate(t) for t in t_vals])
x_coeff = model_vals[:, 0, 1].real
y_coeff = -model_vals[:, 0, 1].imag
z_coeff = model_vals[:, 0, 0].real
plt.plot(t_vals, x_coeff, label='X component')
plt.plot(t_vals, y_coeff, label='Y component')
plt.plot(t_vals, z_coeff, label='Z component')
plt.legend()
plot_qubit_hamiltonian_components(hamiltonian, 0., T)
###Output
_____no_output_____
###Markdown
2.2 Enter a rotating frameWe can specify a frame to enter the Hamiltonian in. Given a Hermitian operator $H_0$, *entering the frame* of $H_0$ means transforming a Hamiltonian $H(t)$:\begin{equation} H(t) \mapsto \tilde{H}(t) = e^{i H_0 t}H(t)e^{-iH_0 t} - H_0\end{equation}Here, we will enter the frame of the drift Hamiltonian, resulting in:\begin{equation}\begin{aligned} \tilde{H}(t) &= e^{i2 \pi \nu \frac{Z}{2} t}\left(2 \pi \nu \frac{Z}{2} + 2 \pi r s(t) \frac{X}{2}\right)e^{-i2 \pi \nu \frac{Z}{2} t} - 2 \pi \nu \frac{Z}{2} \\ &= 2 \pi r s(t) e^{i2 \pi \nu \frac{Z}{2} t}\frac{X}{2}e^{-i2 \pi \nu \frac{Z}{2} t}\\ &= 2 \pi r s(t) \left[\cos(2 \pi \nu t) \frac{X}{2} - \sin(2 \pi \nu t) \frac{Y}{2} \right]\end{aligned}\end{equation}
###Code
hamiltonian.frame = hamiltonian.drift
###Output
_____no_output_____
###Markdown
Evaluate again.
###Code
print(hamiltonian.evaluate(0.12))
# validate with independent computation
t = 0.12
2 * np.pi * r * np.real(signals[1].value(t)) * (np.cos(2*np.pi * w * t) * X / 2
- np.sin(2*np.pi * w * t) * Y / 2 )
###Output
_____no_output_____
###Markdown
Replot the coefficients of the model in the Pauli basis over time.
###Code
plot_qubit_hamiltonian_components(hamiltonian, 0., T)
###Output
_____no_output_____
###Markdown
3. Set cutoff frequency (rotating wave approximation)A common technique to simplify the dynamics of a quantum system is to perform the *rotating wave approximation* (RWA), in which terms with high frequency are averaged to $0$.The RWA can be applied to any `HamiltonianModel` (in the given `frame`) by setting the `cutoff_freq` attribute, which sets any fast oscillating terms to $0$, effectively performing a moving average on terms with carrier frequencies above `cutoff_freq`.For our model, the classic `cutoff_freq` is $2 \nu$ (twice the qubit frequency). This approximates the Hamiltonian $\tilde{H}(t)$ as:\begin{equation} \tilde{H}(t) \approx 2 \pi \frac{r}{2} \left[Re[f(t)] \frac{X}{2} + Im[f(t)] \frac{Y}{2} \right],\end{equation}where $f(t)$ is the envelope of the on-resonance drive. In our case $f(t) = Re[f(t)]$, and so we simply have\begin{equation} \tilde{H}(t) \approx 2 \pi \frac{r}{2} Re[f(t)] \frac{X}{2},\end{equation}
###Code
# set the cutoff frequency
hamiltonian.cutoff_freq = 2*w
# evaluate again
print(hamiltonian.evaluate(0.12))
###Output
[[0. +0.00000000e+00j 0.00287496-2.16840434e-19j]
[0.00287496+2.16840434e-19j 0. +0.00000000e+00j]]
###Markdown
We also plot the coefficients of the model in the frame of the drift with the RWA applied. We now expect to see simply a plot of $\pi \frac{r}{2} f(t)$ for the $X$ coefficient.
###Code
plot_qubit_hamiltonian_components(hamiltonian, 0., T)
###Output
_____no_output_____
###Markdown
4. Solve the Schrodinger equation with a `HamiltonianModel`To solve the Schrodinger equation for the given Hamiltonian, we construct a `SchrodingerProblem` object, which specifies the desired simulation.
###Code
# reset the frame and cutoff_freq properties
hamiltonian.frame = None
hamiltonian.cutoff_freq = None
# solve the problem, with some options specified
y0 = Statevector([0., 1.])
%time sol = solve_lmde(hamiltonian, t_span=[0., T], y0=y0, atol=1e-10, rtol=1e-10)
y = sol.y[-1]
print("\nFinal state:")
print("----------------------------")
print(y)
print("\nPopulation in excited state:")
print("----------------------------")
print(np.abs(y.data[0])**2)
###Output
CPU times: user 488 ms, sys: 6.12 ms, total: 494 ms
Wall time: 492 ms
Final state:
----------------------------
Statevector([0.96087419-0.27193942j, 0.0511707 +0.01230027j],
dims=(2,))
Population in excited state:
----------------------------
0.9972302627366899
###Markdown
When specifying a problem, we can specify which frame to solve in, and a cutoff frequency to solve with.
###Code
%time sol = solve_lmde(hamiltonian, t_span=[0., T], y0=y0, solver_cutoff_freq=2*w, atol=1e-10, rtol=1e-10)
y = sol.y[-1]
print("\nFinal state:")
print("----------------------------")
print(y)
print("\nPopulation in excited state:")
print("----------------------------")
print(np.abs(y.data[0])**2)
###Output
CPU times: user 166 ms, sys: 5.82 ms, total: 172 ms
Wall time: 169 ms
Final state:
----------------------------
Statevector([ 9.62205827e-01-2.72323239e-01j,
-2.36278329e-08+8.34847474e-08j],
dims=(2,))
Population in excited state:
----------------------------
0.9999999999756191
###Markdown
4.1 Technical solver notes:- Behind the scenes, the `SchrodingerProblem` constructs an `OperatorModel` from the `HamiltonianModel`, representing the generator in the Schrodinger equation:\begin{equation} G(t) = \sum_j s_j(t)\left[-iH_j\right]\end{equation}which is then used in a generalized routine for solving DEs of the form $y'(t) = G(t)y(t)$- The generalized solver routine will automatically solve the DE in the drift frame, as well as in the basis in which the drift is diagonal (relevant for non-diagonal drift operators, to save on exponentiations for the frame operator). 5. Solving with dissipative dynamics To simulate with noise operators, we define a `LindbladModel`, containing:- a model of a Hamiltonian (specified with either a `HamiltonianModel` object, or in the standard decomposition of operators and signals)- an optional list of noise operators- an optional list of time-dependent coefficients for the noise operatorsSuch a system is simulated in terms of the Lindblad master equation:\begin{equation} \dot{\rho}(t) = -i[H(t), \rho(t)] + \sum_j g_j(t) \left(L_j \rho L_j^\dagger - \frac{1}{2}\{L_j^\dagger L_j, \rho(t)\}\right),\end{equation}where- $H(t)$ is the Hamiltonian,- $L_j$ are the noise operators, and- $g_j(t)$ are the noise coefficientsHere we will construct such a model using the above `Hamiltonian`, along with a noise operator that drives the state to the ground state.
###Code
# construct quantum model with noise operators
noise_ops = [np.array([[0., 0.],
[1., 0.]])]
noise_signals = [Constant(0.001)]
lindblad_model = LindbladModel.from_hamiltonian(hamiltonian=hamiltonian,
noise_operators=noise_ops,
noise_signals=noise_signals)
# density matrix
y0 = DensityMatrix([[0., 0.], [0., 1.]])
%time sol = solve_lmde(lindblad_model, t_span=[0., T], y0=y0, atol=1e-10, rtol=1e-10)
sol.y[-1]
###Output
CPU times: user 876 ms, sys: 143 ms, total: 1.02 s
Wall time: 542 ms
###Markdown
We may also simulate the Lindblad equation with a cutoff frequency.
###Code
%time sol = solve_lmde(lindblad_model, t_span=[0., T], y0=y0, solver_cutoff_freq=2*w, atol=1e-10, rtol=1e-10)
sol.y[-1]
###Output
CPU times: user 493 ms, sys: 6.28 ms, total: 500 ms
Wall time: 498 ms
###Markdown
5.1 Technical notes- Similarly to the flow of `SchrodingerProblem`, `LindbladProblem` constructs an `OperatorModel` representing the *vectorized* Lindblad equation, which is then used to simulate the Lindblad equation on the vectorized density matrix.- Frame handling and cutoff frequency handling are handled at the `OperatorModel` level, and hence can be used here as well. 5.2 Simulate the Lindbladian/SuperOp
###Code
# identity quantum channel in superop representation
y0 = SuperOp(np.eye(4))
%time sol = solve_lmde(lindblad_model, t_span=[0., T], y0=y0, atol=1e-10, rtol=1e-10)
print(sol.y[-1])
print(PTM(y))
###Output
PTM([[ 5.00000000e-01+0.j, -4.54696754e-08+0.j, 7.38951024e-08+0.j,
5.00000000e-01+0.j],
[-4.54696754e-08+0.j, 4.13498276e-15+0.j, -6.71997263e-15+0.j,
-4.54696754e-08+0.j],
[ 7.38951024e-08+0.j, -6.71997263e-15+0.j, 1.09209723e-14+0.j,
7.38951024e-08+0.j],
[ 5.00000000e-01+0.j, -4.54696754e-08+0.j, 7.38951024e-08+0.j,
5.00000000e-01+0.j]],
input_dims=(2,), output_dims=(2,))
###Markdown
In this demoThis notebook demonstrates some of the core model building and differential equation solving elements:- Hamiltonian and signal construction- Model transformations: entering a frame, making a rotating wave approximation- Defining and solving differential equationsSections1. `Signal`s2. Constructing a `HamiltonianModel`3. Setting `frame` and `cutoff_freq` in `HamiltonianModel`4. Integrating the Schrodinger equation5. Adding dissipative dynamics with a `LindbladModel` and simulating density matrix evolution6. Simulate the Lindbladian to get a `SuperOp` representation of the quantum channel 1. `Signal`A `Signal` object represents a complex mixed signal, i.e. a function of the form: \begin{equation} s(t) = f(t)e^{i2 \pi \nu t},\end{equation}where $f(t)$ is the *envelope* and $\nu$ is the *carrier frequency*.Here we define a signal with a Gaussian envelope:
###Code
amp = 1. # amplitude
sig = 2. # sigma
t0 = 3.5*sig # center of Gaussian
T = 7*sig # end of signal
gaussian_envelope = lambda t: gaussian(amp, sig, t0, t)
gauss_signal = Signal(envelope=gaussian_envelope, carrier_freq=0.5)
print(gauss_signal.envelope(0.25))
print(gauss_signal(0.25))
gauss_signal.draw(0, T, 100, function='envelope')
gauss_signal.draw(0, T, 200)
###Output
_____no_output_____
###Markdown
2. The `HamiltonianModel` classA `HamiltonianModel` is specified as a list of Hermitian operators with `Signal` coefficients. Here, we use a classic qubit model:\begin{equation} H(t) = 2 \pi \nu \frac{Z}{2} + 2 \pi r s(t) \frac{X}{2}.\end{equation}Generally, a `HamiltonianModel` represents a linear combination:\begin{equation} H(t) = \sum_j s_j(t) H_j,\end{equation}where: - $H_j$ are Hermitian operators given as `terra.quantum_info.Operator` objects, and - $s_j(t) = Re[f_j(t)e^{i2 \pi \nu_j t}]$, where the complex functions $f_j(t)e^{i2 \pi \nu_j t}$ are specified as `Signal` objects.Constructing a `HamiltonianModel` requires specifying lists of the operators and the signals.
###Code
#####################
# construct operators
#####################
r = 0.5
w = 1.
X = Operator.from_label('X')
Y = Operator.from_label('Y')
Z = Operator.from_label('Z')
operators = [2 * np.pi * w * Z/2,
2 * np.pi * r * X/2]
###################
# construct signals
###################
# Define gaussian envelope function to have max amp and area approx 2
amp = 1.
sig = 0.399128/r
t0 = 3.5*sig
T = 7*sig
gaussian_envelope = lambda t: gaussian(amp, sig, t0, t)
signals = [1.,
Signal(envelope=gaussian_envelope, carrier_freq=w)]
#################
# construct model
#################
hamiltonian = HamiltonianModel(operators=operators, signals=signals)
###Output
_____no_output_____
###Markdown
2.1 Evaluation and driftEvaluate at a given time.
###Code
print(hamiltonian.evaluate(0.12))
###Output
[[ 3.14159265+0.j 0.00419151+0.j]
[ 0.00419151+0.j -3.14159265+0.j]]
###Markdown
Get the drift (terms corresponding to constant coefficients).
###Code
hamiltonian.drift
def plot_qubit_hamiltonian_components(hamiltonian, t0, tf, N=200):
t_vals = np.linspace(t0, tf, N)
model_vals = np.array([hamiltonian.evaluate(t) for t in t_vals])
x_coeff = model_vals[:, 0, 1].real
y_coeff = -model_vals[:, 0, 1].imag
z_coeff = model_vals[:, 0, 0].real
plt.plot(t_vals, x_coeff, label='X component')
plt.plot(t_vals, y_coeff, label='Y component')
plt.plot(t_vals, z_coeff, label='Z component')
plt.legend()
plot_qubit_hamiltonian_components(hamiltonian, 0., T)
###Output
_____no_output_____
###Markdown
2.2 Enter a rotating frameWe can specify a frame to enter the Hamiltonian in. Given a Hermitian operator $H_0$, *entering the frame* of $H_0$ means transforming a Hamiltonian $H(t)$:\begin{equation} H(t) \mapsto \tilde{H}(t) = e^{i H_0 t}H(t)e^{-iH_0 t} - H_0\end{equation}Here, we will enter the frame of the drift Hamiltonian, resulting in:\begin{equation}\begin{aligned} \tilde{H}(t) &= e^{i2 \pi \nu \frac{Z}{2} t}\left(2 \pi \nu \frac{Z}{2} + 2 \pi r s(t) \frac{X}{2}\right)e^{-i2 \pi \nu \frac{Z}{2} t} - 2 \pi \nu \frac{Z}{2} \\ &= 2 \pi r s(t) e^{i2 \pi \nu \frac{Z}{2} t}\frac{X}{2}e^{-i2 \pi \nu \frac{Z}{2} t}\\ &= 2 \pi r s(t) \left[\cos(2 \pi \nu t) \frac{X}{2} - \sin(2 \pi \nu t) \frac{Y}{2} \right]\end{aligned}\end{equation}
###Code
hamiltonian.frame = hamiltonian.drift
###Output
_____no_output_____
###Markdown
Evaluate again.
###Code
print(hamiltonian.evaluate(0.12))
# validate with independent computation
t = 0.12
2 * np.pi * r * np.real(signals[1](t)) * (np.cos(2*np.pi * w * t) * X / 2
- np.sin(2*np.pi * w * t) * Y / 2 )
###Output
_____no_output_____
###Markdown
Replot the coefficients of the model in the Pauli basis over time.
###Code
plot_qubit_hamiltonian_components(hamiltonian, 0., T)
###Output
_____no_output_____
###Markdown
3. Set cutoff frequency (rotating wave approximation)A common technique to simplify the dynamics of a quantum system is to perform the *rotating wave approximation* (RWA), in which terms with high frequency are averaged to $0$.The RWA can be applied to any `HamiltonianModel` (in the given `frame`) by setting the `cutoff_freq` attribute, which sets any fast oscillating terms to $0$, effectively performing a moving average on terms with carrier frequencies above `cutoff_freq`.For our model, the classic `cutoff_freq` is $2 \nu$ (twice the qubit frequency). This approximates the Hamiltonian $\tilde{H}(t)$ as:\begin{equation} \tilde{H}(t) \approx 2 \pi \frac{r}{2} \left[Re[f(t)] \frac{X}{2} + Im[f(t)] \frac{Y}{2} \right],\end{equation}where $f(t)$ is the envelope of the on-resonance drive. In our case $f(t) = Re[f(t)]$, and so we simply have\begin{equation} \tilde{H}(t) \approx 2 \pi \frac{r}{2} Re[f(t)] \frac{X}{2},\end{equation}
###Code
# set the cutoff frequency
hamiltonian.cutoff_freq = 2*w
# evaluate again
print(hamiltonian.evaluate(0.12))
###Output
[[0. +0.00000000e+00j 0.00287496-2.16840434e-19j]
[0.00287496+2.16840434e-19j 0. +0.00000000e+00j]]
###Markdown
We also plot the coefficients of the model in the frame of the drift with the RWA applied. We now expect to see simply a plot of $\pi \frac{r}{2} f(t)$ for the $X$ coefficient.
###Code
plot_qubit_hamiltonian_components(hamiltonian, 0., T)
###Output
_____no_output_____
###Markdown
4. Solve the Schrodinger equation with a `HamiltonianModel`To solve the Schrodinger equation for the given Hamiltonian, we construct a `SchrodingerProblem` object, which specifies the desired simulation.
###Code
# reset the frame and cutoff_freq properties
hamiltonian.frame = None
hamiltonian.cutoff_freq = None
# solve the problem, with some options specified
y0 = Statevector([0., 1.])
%time sol = solve_lmde(hamiltonian, t_span=[0., T], y0=y0, atol=1e-10, rtol=1e-10)
y = sol.y[-1]
print("\nFinal state:")
print("----------------------------")
print(y)
print("\nPopulation in excited state:")
print("----------------------------")
print(np.abs(y.data[0])**2)
###Output
CPU times: user 453 ms, sys: 6.67 ms, total: 459 ms
Wall time: 454 ms
Final state:
----------------------------
Statevector([0.96087419-0.27193942j, 0.0511707 +0.01230027j],
dims=(2,))
Population in excited state:
----------------------------
0.9972302627366899
###Markdown
When specifying a problem, we can specify which frame to solve in, and a cutoff frequency to solve with.
###Code
%time sol = solve_lmde(hamiltonian, t_span=[0., T], y0=y0, solver_cutoff_freq=2*w, atol=1e-10, rtol=1e-10)
y = sol.y[-1]
print("\nFinal state:")
print("----------------------------")
print(y)
print("\nPopulation in excited state:")
print("----------------------------")
print(np.abs(y.data[0])**2)
###Output
CPU times: user 150 ms, sys: 14.7 ms, total: 165 ms
Wall time: 153 ms
Final state:
----------------------------
Statevector([ 9.62205827e-01-2.72323239e-01j,
-2.36278329e-08+8.34847474e-08j],
dims=(2,))
Population in excited state:
----------------------------
0.9999999999756191
###Markdown
4.1 Technical solver notes:- Behind the scenes, the `SchrodingerProblem` constructs an `OperatorModel` from the `HamiltonianModel`, representing the generator in the Schrodinger equation:\begin{equation} G(t) = \sum_j s_j(t)\left[-iH_j\right]\end{equation}which is then used in a generalized routine for solving DEs of the form $y'(t) = G(t)y(t)$- The generalized solver routine will automatically solve the DE in the drift frame, as well as in the basis in which the drift is diagonal (relevant for non-diagonal drift operators, to save on exponentiations for the frame operator). 5. Solving with dissipative dynamics To simulate with noise operators, we define a `LindbladModel`, containing:- a model of a Hamiltonian (specified with either a `HamiltonianModel` object, or in the standard decomposition of operators and signals)- an optional list of noise operators- an optional list of time-dependent coefficients for the noise operatorsSuch a system is simulated in terms of the Lindblad master equation:\begin{equation} \dot{\rho}(t) = -i[H(t), \rho(t)] + \sum_j g_j(t) \left(L_j \rho L_j^\dagger - \frac{1}{2}\{L_j^\dagger L_j, \rho(t)\}\right),\end{equation}where- $H(t)$ is the Hamiltonian,- $L_j$ are the noise operators, and- $g_j(t)$ are the noise coefficientsHere we will construct such a model using the above `Hamiltonian`, along with a noise operator that drives the state to the ground state.
###Code
# construct quantum model with noise operators
noise_ops = [np.array([[0., 0.],
[1., 0.]])]
noise_signals = [0.001]
lindblad_model = LindbladModel.from_hamiltonian(hamiltonian=hamiltonian,
noise_operators=noise_ops,
noise_signals=noise_signals)
# density matrix
y0 = DensityMatrix([[0., 0.], [0., 1.]])
%time sol = solve_lmde(lindblad_model, t_span=[0., T], y0=y0, atol=1e-10, rtol=1e-10)
sol.y[-1]
###Output
CPU times: user 1.04 s, sys: 316 ms, total: 1.35 s
Wall time: 567 ms
DensityMatrix([[0.99473642-9.02056208e-17j, 0.04620048-2.49934328e-02j],
[0.04620048+2.49934328e-02j, 0.00526358+0.00000000e+00j]],
dims=(2,))
###Markdown
We may also simulate the Lindblad equation with a cutoff frequency.
###Code
%time sol = solve_lmde(lindblad_model, t_span=[0., T], y0=y0, solver_cutoff_freq=2*w, atol=1e-10, rtol=1e-10)
sol.y[-1]
###Output
CPU times: user 475 ms, sys: 21 ms, total: 496 ms
Wall time: 481 ms
DensityMatrix([[0.99614333-7.63278329e-17j, 0.01049406-3.50564736e-02j],
[0.01049406+3.50564736e-02j, 0.00252446+0.00000000e+00j]],
dims=(2,))
###Markdown
5.1 Technical notes- Similarly to the flow of `SchrodingerProblem`, `LindbladProblem` constructs an `OperatorModel` representing the *vectorized* Lindblad equation, which is then used to simulate the Lindblad equation on the vectorized density matrix.- Frame handling and cutoff frequency handling are handled at the `OperatorModel` level, and hence can be used here as well. 5.2 Simulate the Lindbladian/SuperOp
###Code
# identity quantum channel in superop representation
y0 = SuperOp(np.eye(4))
%time sol = solve_lmde(lindblad_model, t_span=[0., T], y0=y0, atol=1e-10, rtol=1e-10)
print(sol.y[-1])
print(PTM(y))
###Output
PTM([[ 5.00000000e-01+0.j, -4.54696754e-08+0.j, 7.38951024e-08+0.j,
5.00000000e-01+0.j],
[-4.54696754e-08+0.j, 4.13498276e-15+0.j, -6.71997263e-15+0.j,
-4.54696754e-08+0.j],
[ 7.38951024e-08+0.j, -6.71997263e-15+0.j, 1.09209723e-14+0.j,
7.38951024e-08+0.j],
[ 5.00000000e-01+0.j, -4.54696754e-08+0.j, 7.38951024e-08+0.j,
5.00000000e-01+0.j]],
input_dims=(2,), output_dims=(2,))
|
notebooks/Coursework_4_part8_rnn_batch_norm_msd25.ipynb | ###Markdown
KAGGLE ONLY PURPOSES 2017Machine Learning PracticalUniversity of EdinburghGeorgios Pligoropoulos - s1687568Coursework 4 (part 8) Imports, Inits, and helper functions
###Code
jupyterNotebookEnabled = True
plotting = True
coursework, part = 4, 8
saving = True
if jupyterNotebookEnabled:
#%load_ext autoreload
%reload_ext autoreload
%autoreload 2
import sys, os
mlpdir = os.path.expanduser(
'~/[email protected]/msc_Artificial_Intelligence/mlp_Machine_Learning_Practical/mlpractical'
)
sys.path.append(mlpdir)
from collections import OrderedDict
import skopt
from mylibs.jupyter_notebook_helper import show_graph
import datetime
import os
import time
import tensorflow as tf
import numpy as np
from mlp.data_providers import MSD10GenreDataProvider, MSD25GenreDataProvider,\
MSD10Genre_Autoencoder_DataProvider, MSD10Genre_StackedAutoEncoderDataProvider
import matplotlib.pyplot as plt
from mylibs.batch_norm import fully_connected_layer_with_batch_norm_and_l2
from mylibs.stacked_autoencoder_pretrainer import \
constructModelFromPretrainedByAutoEncoderStack,\
buildGraphOfStackedAutoencoder, executeNonLinearAutoencoder
from mylibs.jupyter_notebook_helper import getRunTime, getTrainWriter, getValidWriter,\
plotStats, initStats, gatherStats, renderStatsCollection
from mylibs.tf_helper import tfRMSE, tfMSE, fully_connected_layer
#trainEpoch, validateEpoch
from mylibs.py_helper import merge_dicts
from mylibs.dropout_helper import constructProbs
from mylibs.batch_norm import batchNormWrapper_byExponentialMovingAvg,\
fully_connected_layer_with_batch_norm
import pickle
from skopt.plots import plot_convergence
from mylibs.jupyter_notebook_helper import DynStats
import operator
from skopt.space.space import Integer, Categorical, Real
from skopt import gp_minimize
from rnn.rnn_batch_norm import RNNBatchNorm
if jupyterNotebookEnabled:
%matplotlib inline
seed = 16011984
rng = np.random.RandomState(seed=seed)
config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)
config.gpu_options.allow_growth = True
figcount = 0
tensorboardLogdir = 'tf_cw%d_%d' % (coursework, part)
curDtype = tf.float32
reluBias = 0.1
batch_size = 50
segmentCount = 120
segmentLen = 25
best_params_filename = 'rnn_msd25_best_params.npy'
stats_coll_filename = 'rnn_msd25_bay_opt_stats_coll.npy'
res_gp_save_filename = 'rnn_msd25_res_gp.pickle'
stats_final_run_filename = 'rnn_msd25_final_run_stats.npy'
###Output
_____no_output_____
###Markdown
Here the state size is equal to the number of classes because all of the responsibility is placed on the last output. We are going to follow a repetitive process: for example, if num_steps=6 then we break the 120 segments into 20 parts. The output of each part will be the genre, so we compare against the genre for every small part; the chunking arithmetic is sketched below. MSD 25 genre Bayesian Optimization
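To illustrate the chunking arithmetic only; the shapes and names below are illustrative and are not the internals of RNNBatchNorm.
```python
import numpy as np

batch_size, segment_count, segment_len = 50, 120, 25
num_steps = 6
parts = segment_count // num_steps            # 120 / 6 = 20 parts per track
inputs = np.zeros((batch_size, segment_count, segment_len))
chunks = inputs.reshape(batch_size * parts, num_steps, segment_len)
print(chunks.shape)  # (1000, 6, 25): each part of 6 segments is treated as one RNN sequence
```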
###Code
num_classes=25
numLens = np.sort(np.unique([2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, 60]))
assert np.all( segmentCount % numLens == 0 )
print len(numLens)
numLens
rnnModel = RNNBatchNorm(batch_size=batch_size, rng=rng, dtype = curDtype, config=config,
segment_count=segmentCount, segment_len= segmentLen, num_classes=num_classes)
#it cannot accept global variables for some unknown reason ...
def objective(params): # Here we define the metric we want to minimise
(state_size, num_steps, learning_rate) = params
epochs = 20
stats, keys = rnnModel.run_rnn(state_size = state_size, num_steps=num_steps, epochs = epochs,
learning_rate = learning_rate)
#save everytime in case it crashes
filename = stats_coll_filename
statsCollection = np.load(filename)[()] if os.path.isfile(filename) else dict()
statsCollection[(state_size, num_steps, learning_rate)] = stats
np.save(filename, statsCollection)
if plotting:
fig_1, ax_1, fig_2, ax_2 = plotStats(stats, keys)
plt.show()
# We want to maximise validation accuracy, i.e. minimise minus validation accuracy
validAccs = stats[:, -1]
length10percent = max(len(validAccs) // 10, 1)
best10percent = np.sort(validAccs)[-length10percent:]
return -np.mean(best10percent)
#return -max(stats[:, -1])
#it cannot accept global variables for some unknown reason ...
def objective_min_epochs(params): # Here we define the metric we want to minimise
(state_size, num_steps, learning_rate) = params
targetValidAcc = 0.23
maxEpochs = 20
stats, metric = rnnModel.run_until(targetValidAcc = targetValidAcc, maxEpochs=maxEpochs,
learning_rate=learning_rate, num_steps=num_steps, state_size =state_size)
#save everytime in case it crashes
filename = stats_coll_filename
statsCollection = np.load(filename)[()] if os.path.isfile(filename) else dict()
statsCollection[(state_size, num_steps, learning_rate)] = stats
np.save(filename, statsCollection)
if plotting:
fig_1, ax_1, fig_2, ax_2 = plotStats(stats, DynStats.keys)
plt.show()
# We want to minimize the amount of epochs required to reach 23% accuracy
return metric
stateSizeSpace = Integer(15, 1000)
numStepSpace = Categorical(numLens)
learningRateSpace = Real(1e-6, 1e-1, prior="log-uniform")
space = [stateSizeSpace, numStepSpace, learningRateSpace]
if jupyterNotebookEnabled:
%%time
if not os.path.isfile(best_params_filename):
if os.path.isfile(stats_coll_filename):
os.remove(stats_coll_filename)
res_gp = gp_minimize(
func=objective_min_epochs, # function that we wish to minimise
dimensions=space, #the search space for the hyper-parameters
#x0=x0, #inital values for the hyper-parameters
n_calls=25, #number of times the function will be evaluated
random_state = seed, #random seed
n_random_starts=5,
#before we start modelling the optimised function with a GP Regression
#model, we want to try a few random choices for the hyper-parameters.
kappa=1.9 #trade-off between exploration vs. exploitation.
)
if os.path.isfile(best_params_filename):
best_params = np.load(best_params_filename)
else:
np.save(best_params_filename, res_gp.x)
best_params = res_gp.x
if os.path.isfile(res_gp_save_filename):
with open(res_gp_save_filename) as f: # Python 3: open(..., 'rb')
(res_gp, ) = pickle.load(f)
else:
with open(res_gp_save_filename, 'w') as f: # Python 3: open(..., 'wb')
pickle.dump([res_gp], f)
best_params
###Output
_____no_output_____
###Markdown
Bayesian Optimization Plots
###Code
if plotting:
fig = plt.figure(figsize=(12,6))
plot_convergence(res_gp)
plt.grid()
plt.show()
if saving:
fig.savefig('cw{}_part{}_fig_{}.svg'.format(coursework, part, "convergence plot"))
if plotting:
fig = plt.figure(figsize=(12,6))
plt.hold(True)
plt.plot(res_gp.func_vals)
plt.scatter(range(len(res_gp.func_vals)), res_gp.func_vals)
plt.ylabel(r'$f(x)$')
plt.xlabel('Number of calls $n$')
plt.xlim([0, len(res_gp.func_vals)])
plt.hold(False)
plt.show()
if saving:
fig.savefig('cw{}_part{}_fig_{}.svg'.format(coursework, part, "objective_function"))
###Output
/home/studenthp/anaconda2/envs/mlp/lib/python2.7/site-packages/ipykernel/__main__.py:3: MatplotlibDeprecationWarning: pyplot.hold is deprecated.
Future behavior will be consistent with the long-time default:
plot commands add elements without first clearing the
Axes and/or Figure.
app.launch_new_instance()
/home/studenthp/anaconda2/envs/mlp/lib/python2.7/site-packages/matplotlib/__init__.py:917: UserWarning: axes.hold is deprecated. Please remove it from your matplotlibrc and/or style files.
warnings.warn(self.msg_depr_set % key)
/home/studenthp/anaconda2/envs/mlp/lib/python2.7/site-packages/matplotlib/rcsetup.py:152: UserWarning: axes.hold is deprecated, will be removed in 3.0
warnings.warn("axes.hold is deprecated, will be removed in 3.0")
/home/studenthp/anaconda2/envs/mlp/lib/python2.7/site-packages/ipykernel/__main__.py:9: MatplotlibDeprecationWarning: pyplot.hold is deprecated.
Future behavior will be consistent with the long-time default:
plot commands add elements without first clearing the
Axes and/or Figure.
###Markdown
Experiment with Best Parameters
###Code
best_params = np.load(best_params_filename)
best_params
(state_size, num_steps, learning_rate) = best_params
state_size = int(state_size)
num_steps = int(num_steps)
(state_size, num_steps, learning_rate)
num_classes = 25
rnnModel = RNNBatchNorm(batch_size=batch_size, rng=rng, dtype = curDtype, config=config, num_classes=num_classes,
segment_count=segmentCount, segment_len= segmentLen,)
%%time
epochs = 100
stats, keys = rnnModel.run_rnn(state_size = state_size, num_steps=num_steps,
learning_rate = learning_rate,
epochs = epochs, kaggleEnabled = True)
if plotting:
fig_1, ax_1, fig_2, ax_2 = plotStats(stats, keys)
plt.show()
if saving:
fig_1.savefig('cw%d_part%d_fig_error.svg' % (coursework, part))
fig_2.savefig('cw%d_part%d_fig_valid.svg' % (coursework, part))
np.save(stats_final_run_filename, stats)
print max(stats[:, -1]) #maximum validation accuracy
###Output
epochs: 100
rnn steps: 10
state size: 167
data_provider is divided exactly by batch size
End epoch 01 (105.195 secs): err(train)=2.74, acc(train)=0.18, err(valid)=2.77, acc(valid)=0.17,
data_provider is divided exactly by batch size
End epoch 02 (104.494 secs): err(train)=2.66, acc(train)=0.20, err(valid)=2.71, acc(valid)=0.18,
data_provider is divided exactly by batch size
End epoch 03 (107.722 secs): err(train)=2.63, acc(train)=0.21, err(valid)=2.69, acc(valid)=0.20,
End epoch 04 (104.542 secs): err(train)=2.60, acc(train)=0.22, err(valid)=2.69, acc(valid)=0.19,
data_provider is divided exactly by batch size
End epoch 05 (104.269 secs): err(train)=2.57, acc(train)=0.22, err(valid)=2.62, acc(valid)=0.21,
data_provider is divided exactly by batch size
End epoch 06 (104.394 secs): err(train)=2.55, acc(train)=0.23, err(valid)=2.61, acc(valid)=0.21,
data_provider is divided exactly by batch size
End epoch 07 (105.339 secs): err(train)=2.52, acc(train)=0.23, err(valid)=2.61, acc(valid)=0.22,
data_provider is divided exactly by batch size
End epoch 08 (105.188 secs): err(train)=2.51, acc(train)=0.24, err(valid)=2.56, acc(valid)=0.22,
data_provider is divided exactly by batch size
End epoch 09 (105.397 secs): err(train)=2.49, acc(train)=0.24, err(valid)=2.54, acc(valid)=0.23,
End epoch 10 (104.643 secs): err(train)=2.48, acc(train)=0.25, err(valid)=2.56, acc(valid)=0.23,
data_provider is divided exactly by batch size
End epoch 11 (104.281 secs): err(train)=2.47, acc(train)=0.25, err(valid)=2.55, acc(valid)=0.23,
data_provider is divided exactly by batch size
End epoch 12 (105.317 secs): err(train)=2.46, acc(train)=0.25, err(valid)=2.53, acc(valid)=0.23,
End epoch 13 (106.165 secs): err(train)=2.45, acc(train)=0.26, err(valid)=2.52, acc(valid)=0.23,
End epoch 14 (105.023 secs): err(train)=2.44, acc(train)=0.26, err(valid)=2.52, acc(valid)=0.23,
data_provider is divided exactly by batch size
End epoch 15 (104.927 secs): err(train)=2.44, acc(train)=0.26, err(valid)=2.52, acc(valid)=0.24,
data_provider is divided exactly by batch size
End epoch 16 (104.798 secs): err(train)=2.43, acc(train)=0.26, err(valid)=2.52, acc(valid)=0.24,
End epoch 17 (105.089 secs): err(train)=2.42, acc(train)=0.26, err(valid)=2.54, acc(valid)=0.23,
data_provider is divided exactly by batch size
End epoch 18 (104.743 secs): err(train)=2.42, acc(train)=0.26, err(valid)=2.52, acc(valid)=0.24,
data_provider is divided exactly by batch size
End epoch 19 (105.369 secs): err(train)=2.41, acc(train)=0.27, err(valid)=2.48, acc(valid)=0.24,
End epoch 20 (105.156 secs): err(train)=2.40, acc(train)=0.27, err(valid)=2.49, acc(valid)=0.24,
End epoch 21 (104.501 secs): err(train)=2.40, acc(train)=0.27, err(valid)=2.50, acc(valid)=0.24,
End epoch 22 (104.983 secs): err(train)=2.39, acc(train)=0.27, err(valid)=2.49, acc(valid)=0.24,
data_provider is divided exactly by batch size
End epoch 23 (104.147 secs): err(train)=2.39, acc(train)=0.27, err(valid)=2.48, acc(valid)=0.25,
data_provider is divided exactly by batch size
End epoch 24 (105.231 secs): err(train)=2.39, acc(train)=0.27, err(valid)=2.48, acc(valid)=0.25,
data_provider is divided exactly by batch size
End epoch 25 (104.513 secs): err(train)=2.38, acc(train)=0.28, err(valid)=2.48, acc(valid)=0.25,
End epoch 26 (104.824 secs): err(train)=2.38, acc(train)=0.27, err(valid)=2.48, acc(valid)=0.25,
data_provider is divided exactly by batch size
End epoch 27 (105.702 secs): err(train)=2.37, acc(train)=0.28, err(valid)=2.46, acc(valid)=0.26,
End epoch 28 (103.729 secs): err(train)=2.37, acc(train)=0.28, err(valid)=2.48, acc(valid)=0.25,
End epoch 29 (100.680 secs): err(train)=2.37, acc(train)=0.28, err(valid)=2.45, acc(valid)=0.26,
data_provider is divided exactly by batch size
End epoch 30 (100.263 secs): err(train)=2.36, acc(train)=0.28, err(valid)=2.46, acc(valid)=0.26,
End epoch 31 (100.122 secs): err(train)=2.35, acc(train)=0.28, err(valid)=2.47, acc(valid)=0.25,
data_provider is divided exactly by batch size
End epoch 32 (101.195 secs): err(train)=2.35, acc(train)=0.28, err(valid)=2.48, acc(valid)=0.26,
data_provider is divided exactly by batch size
End epoch 33 (100.327 secs): err(train)=2.35, acc(train)=0.29, err(valid)=2.46, acc(valid)=0.26,
End epoch 34 (100.740 secs): err(train)=2.35, acc(train)=0.28, err(valid)=2.46, acc(valid)=0.26,
End epoch 35 (100.548 secs): err(train)=2.34, acc(train)=0.29, err(valid)=2.45, acc(valid)=0.26,
End epoch 36 (100.506 secs): err(train)=2.34, acc(train)=0.29, err(valid)=2.45, acc(valid)=0.25,
End epoch 37 (100.778 secs): err(train)=2.33, acc(train)=0.29, err(valid)=2.45, acc(valid)=0.26,
End epoch 38 (101.015 secs): err(train)=2.33, acc(train)=0.29, err(valid)=2.45, acc(valid)=0.26,
End epoch 39 (100.487 secs): err(train)=2.33, acc(train)=0.29, err(valid)=2.45, acc(valid)=0.26,
data_provider is divided exactly by batch size
End epoch 40 (100.999 secs): err(train)=2.33, acc(train)=0.29, err(valid)=2.45, acc(valid)=0.26,
data_provider is divided exactly by batch size
End epoch 41 (110.026 secs): err(train)=2.32, acc(train)=0.29, err(valid)=2.45, acc(valid)=0.26,
data_provider is divided exactly by batch size
End epoch 42 (134.201 secs): err(train)=2.32, acc(train)=0.29, err(valid)=2.45, acc(valid)=0.26,
End epoch 43 (149.710 secs): err(train)=2.32, acc(train)=0.29, err(valid)=2.46, acc(valid)=0.25,
End epoch 44 (96.914 secs): err(train)=2.31, acc(train)=0.30, err(valid)=2.45, acc(valid)=0.26,
End epoch 45 (91.863 secs): err(train)=2.31, acc(train)=0.30, err(valid)=2.47, acc(valid)=0.26,
End epoch 46 (91.618 secs): err(train)=2.31, acc(train)=0.29, err(valid)=2.46, acc(valid)=0.25,
End epoch 47 (92.021 secs): err(train)=2.30, acc(train)=0.30, err(valid)=2.46, acc(valid)=0.26,
data_provider is divided exactly by batch size
End epoch 48 (91.695 secs): err(train)=2.30, acc(train)=0.30, err(valid)=2.44, acc(valid)=0.27,
End epoch 49 (91.706 secs): err(train)=2.29, acc(train)=0.30, err(valid)=2.44, acc(valid)=0.27,
data_provider is divided exactly by batch size
End epoch 50 (92.050 secs): err(train)=2.30, acc(train)=0.30, err(valid)=2.41, acc(valid)=0.27,
End epoch 51 (91.756 secs): err(train)=2.29, acc(train)=0.30, err(valid)=2.44, acc(valid)=0.26,
data_provider is divided exactly by batch size
End epoch 52 (91.888 secs): err(train)=2.29, acc(train)=0.30, err(valid)=2.43, acc(valid)=0.27,
End epoch 53 (92.088 secs): err(train)=2.28, acc(train)=0.30, err(valid)=2.44, acc(valid)=0.27,
End epoch 54 (91.619 secs): err(train)=2.28, acc(train)=0.30, err(valid)=2.43, acc(valid)=0.27,
End epoch 55 (91.900 secs): err(train)=2.28, acc(train)=0.30, err(valid)=2.45, acc(valid)=0.27,
End epoch 56 (91.679 secs): err(train)=2.28, acc(train)=0.31, err(valid)=2.42, acc(valid)=0.27,
End epoch 57 (91.682 secs): err(train)=2.28, acc(train)=0.31, err(valid)=2.47, acc(valid)=0.26,
End epoch 58 (91.757 secs): err(train)=2.28, acc(train)=0.30, err(valid)=2.41, acc(valid)=0.27,
End epoch 59 (91.618 secs): err(train)=2.27, acc(train)=0.31, err(valid)=2.43, acc(valid)=0.26,
End epoch 60 (91.682 secs): err(train)=2.27, acc(train)=0.30, err(valid)=2.44, acc(valid)=0.27,
End epoch 61 (92.036 secs): err(train)=2.27, acc(train)=0.30, err(valid)=2.44, acc(valid)=0.27,
End epoch 62 (91.688 secs): err(train)=2.26, acc(train)=0.31, err(valid)=2.43, acc(valid)=0.27,
End epoch 63 (91.566 secs): err(train)=2.26, acc(train)=0.31, err(valid)=2.45, acc(valid)=0.27,
End epoch 64 (91.769 secs): err(train)=2.26, acc(train)=0.31, err(valid)=2.43, acc(valid)=0.27,
End epoch 65 (92.202 secs): err(train)=2.26, acc(train)=0.31, err(valid)=2.42, acc(valid)=0.27,
End epoch 66 (91.754 secs): err(train)=2.26, acc(train)=0.31, err(valid)=2.42, acc(valid)=0.27,
End epoch 67 (91.993 secs): err(train)=2.26, acc(train)=0.31, err(valid)=2.42, acc(valid)=0.27,
data_provider is divided exactly by batch size
End epoch 68 (91.618 secs): err(train)=2.25, acc(train)=0.31, err(valid)=2.42, acc(valid)=0.27,
data_provider is divided exactly by batch size
End epoch 69 (91.715 secs): err(train)=2.25, acc(train)=0.31, err(valid)=2.41, acc(valid)=0.28,
data_provider is divided exactly by batch size
End epoch 70 (91.908 secs): err(train)=2.25, acc(train)=0.31, err(valid)=2.40, acc(valid)=0.28,
End epoch 71 (91.713 secs): err(train)=2.25, acc(train)=0.31, err(valid)=2.43, acc(valid)=0.27,
data_provider is divided exactly by batch size
End epoch 72 (91.732 secs): err(train)=2.24, acc(train)=0.31, err(valid)=2.44, acc(valid)=0.28,
End epoch 73 (92.069 secs): err(train)=2.24, acc(train)=0.32, err(valid)=2.42, acc(valid)=0.27,
End epoch 74 (91.901 secs): err(train)=2.24, acc(train)=0.32, err(valid)=2.41, acc(valid)=0.27,
End epoch 75 (91.688 secs): err(train)=2.24, acc(train)=0.32, err(valid)=2.42, acc(valid)=0.27,
data_provider is divided exactly by batch size
End epoch 76 (91.886 secs): err(train)=2.24, acc(train)=0.32, err(valid)=2.41, acc(valid)=0.28,
End epoch 77 (91.503 secs): err(train)=2.24, acc(train)=0.31, err(valid)=2.43, acc(valid)=0.27,
End epoch 78 (91.378 secs): err(train)=2.23, acc(train)=0.32, err(valid)=2.41, acc(valid)=0.27,
End epoch 79 (91.900 secs): err(train)=2.23, acc(train)=0.32, err(valid)=2.40, acc(valid)=0.28,
End epoch 80 (91.694 secs): err(train)=2.23, acc(train)=0.32, err(valid)=2.41, acc(valid)=0.27,
End epoch 81 (91.401 secs): err(train)=2.24, acc(train)=0.32, err(valid)=2.42, acc(valid)=0.27,
End epoch 82 (91.630 secs): err(train)=2.23, acc(train)=0.32, err(valid)=2.43, acc(valid)=0.28,
End epoch 83 (91.435 secs): err(train)=2.23, acc(train)=0.32, err(valid)=2.43, acc(valid)=0.27,
End epoch 84 (91.373 secs): err(train)=2.22, acc(train)=0.32, err(valid)=2.42, acc(valid)=0.27,
End epoch 85 (91.842 secs): err(train)=2.22, acc(train)=0.32, err(valid)=2.40, acc(valid)=0.28,
End epoch 86 (91.547 secs): err(train)=2.22, acc(train)=0.32, err(valid)=2.42, acc(valid)=0.28,
End epoch 87 (91.462 secs): err(train)=2.22, acc(train)=0.32, err(valid)=2.40, acc(valid)=0.28,
End epoch 88 (91.537 secs): err(train)=2.22, acc(train)=0.32, err(valid)=2.41, acc(valid)=0.27,
End epoch 89 (91.654 secs): err(train)=2.22, acc(train)=0.32, err(valid)=2.41, acc(valid)=0.27,
End epoch 90 (91.451 secs): err(train)=2.22, acc(train)=0.32, err(valid)=2.42, acc(valid)=0.28,
End epoch 91 (91.516 secs): err(train)=2.22, acc(train)=0.32, err(valid)=2.44, acc(valid)=0.27,
End epoch 92 (91.390 secs): err(train)=2.21, acc(train)=0.32, err(valid)=2.41, acc(valid)=0.28,
End epoch 93 (91.518 secs): err(train)=2.21, acc(train)=0.32, err(valid)=2.41, acc(valid)=0.28,
End epoch 94 (91.901 secs): err(train)=2.21, acc(train)=0.32, err(valid)=2.41, acc(valid)=0.28,
End epoch 95 (91.432 secs): err(train)=2.21, acc(train)=0.33, err(valid)=2.42, acc(valid)=0.28,
End epoch 96 (91.791 secs): err(train)=2.21, acc(train)=0.33, err(valid)=2.42, acc(valid)=0.28,
data_provider is divided exactly by batch size
End epoch 97 (91.748 secs): err(train)=2.20, acc(train)=0.32, err(valid)=2.41, acc(valid)=0.28,
End epoch 98 (91.578 secs): err(train)=2.21, acc(train)=0.33, err(valid)=2.41, acc(valid)=0.28,
data_provider is divided exactly by batch size
End epoch 99 (91.923 secs): err(train)=2.20, acc(train)=0.33, err(valid)=2.42, acc(valid)=0.28,
data_provider is divided exactly by batch size
End epoch 100 (92.504 secs): err(train)=2.20, acc(train)=0.33, err(valid)=2.39, acc(valid)=0.28,
|
airbnb EDA.ipynb | ###Markdown
###Code
import pandas as pd
filename = "http://data.insideairbnb.com/the-netherlands/north-holland/amsterdam/2021-04-09/visualisations/listings.csv"
df = pd.read_csv(filename)
df.head()
df.shape
###Output
_____no_output_____ |
code/Chp5/05_Model_comparison.ipynb | ###Markdown
Model Comparison
###Code
import pymc3 as pm
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
import arviz as az
az.style.use('arviz-darkgrid')
dummy_data = np.loadtxt('../data/dummy.csv')
x_1 = dummy_data[:, 0]
y_1 = dummy_data[:, 1]
order = 2
x_1p = np.vstack([x_1**i for i in range(1, order+1)])
x_1s = (x_1p - x_1p.mean(axis=1, keepdims=True)) / \
x_1p.std(axis=1, keepdims=True)
y_1s = (y_1 - y_1.mean()) / y_1.std()
plt.scatter(x_1s[0], y_1s)
plt.xlabel('x');
plt.ylabel('y');
# plt.savefig('B11197_05_01.png', dpi=300)
with pm.Model() as model_l:
α = pm.Normal('α', mu=0, sd=1)
β = pm.Normal('β', mu=0, sd=10)
ϵ = pm.HalfNormal('ϵ', 5)
μ = α + β * x_1s[0]
y_pred = pm.Normal('y_pred', mu=μ, sd=ϵ, observed=y_1s)
trace_l = pm.sample(2000)
with pm.Model() as model_p:
α = pm.Normal('α', mu=0, sd=1)
β = pm.Normal('β', mu=0, sd=10, shape=order)
ϵ = pm.HalfNormal('ϵ', 5)
μ = α + pm.math.dot(β, x_1s)
y_pred = pm.Normal('y_pred', mu=μ, sd=ϵ, observed=y_1s)
trace_p = pm.sample(2000)
x_new = np.linspace(x_1s[0].min(), x_1s[0].max(), 100)
α_l_post = trace_l['α'].mean()
β_l_post = trace_l['β'].mean(axis=0)
y_l_post = α_l_post + β_l_post * x_new
plt.plot(x_new, y_l_post, 'C1', label='linear model')
α_p_post = trace_p['α'].mean()
β_p_post = trace_p['β'].mean(axis=0)
idx = np.argsort(x_1s[0])
y_p_post = α_p_post + np.dot(β_p_post, x_1s)
plt.plot(x_1s[0][idx], y_p_post[idx], 'C2', label=f'model order {order}')
#α_p_post = trace_p['α'].mean()
#β_p_post = trace_p['β'].mean(axis=0)
#x_new_p = np.vstack([x_new**i for i in range(1, order+1)])
#y_p_post = α_p_post + np.dot(β_p_post, x_new_p)
plt.scatter(x_1s[0], y_1s, c='C0', marker='.')
plt.legend()
plt.savefig('B11197_05_02.png', dpi=300)
###Output
_____no_output_____
###Markdown
Posterior predictive checks
###Code
y_l = pm.sample_posterior_predictive(trace_l, 2000,
model=model_l)['y_pred']
y_p = pm.sample_posterior_predictive(trace_p, 2000,
model=model_p)['y_pred']
plt.figure(figsize=(8, 3))
data = [y_1s, y_l, y_p]
labels = ['data', 'linear model', 'order 2']
for i, d in enumerate(data):
mean = d.mean()
err = np.percentile(d, [25, 75])
plt.errorbar(mean, -i, xerr=[[-err[0]], [err[1]]], fmt='o')
plt.text(mean, -i+0.2, labels[i], ha='center', fontsize=14)
plt.ylim([-i-0.5, 0.5])
plt.yticks([])
plt.savefig('B11197_05_03.png', dpi=300)
fig, ax = plt.subplots(1, 2, figsize=(10, 3), constrained_layout=True)
def iqr(x, a=0):
return np.subtract(*np.percentile(x, [75, 25], axis=a))
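# (Added comment) Bayesian p-value: for each test statistic (mean, IQR), the fraction of
# posterior-predictive simulations whose statistic is >= the value observed in the data.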
for idx, func in enumerate([np.mean, iqr]):
T_obs = func(y_1s)
ax[idx].axvline(T_obs, 0, 1, color='k', ls='--')
for d_sim, c in zip([y_l, y_p], ['C1', 'C2']):
T_sim = func(d_sim, 1)
p_value = np.mean(T_sim >= T_obs)
az.plot_kde(T_sim, plot_kwargs={'color': c},
label=f'p-value {p_value:.2f}', ax=ax[idx])
ax[idx].set_title(func.__name__)
ax[idx].set_yticks([])
ax[idx].legend()
plt.savefig('B11197_05_04.png', dpi=300)
###Output
_____no_output_____
###Markdown
Occam's razor – simplicity and accuracy
###Code
x = np.array([4., 5., 6., 9., 12, 14.])
y = np.array([4.2, 6., 6., 9., 10, 10.])
plt.figure(figsize=(10, 5))
order = [0, 1, 2, 5]
plt.plot(x, y, 'o')
for i in order:
x_n = np.linspace(x.min(), x.max(), 100)
coeffs = np.polyfit(x, y, deg=i)
ffit = np.polyval(coeffs, x_n)
p = np.poly1d(coeffs)
yhat = p(x)
ybar = np.mean(y)
ssreg = np.sum((yhat-ybar)**2)
sstot = np.sum((y - ybar)**2)
r2 = ssreg / sstot
plt.plot(x_n, ffit, label=f'order {i}, $R^2$= {r2:.2f}')
plt.legend(loc=2)
plt.xlabel('x')
plt.ylabel('y', rotation=0)
plt.savefig('B11197_05_05.png', dpi=300)
plt.plot([10, 7], [9, 7], 'ks')
plt.savefig('B11197_05_06.png', dpi=300)
###Output
_____no_output_____
###Markdown
Computing information criteria with PyMC3
###Code
waic_l = az.waic(trace_l)
waic_l
cmp_df = az.compare({'model_l':trace_l, 'model_p':trace_p},
method='BB-pseudo-BMA')
cmp_df
az.plot_compare(cmp_df)
plt.savefig('B11197_05_08.png', dpi=300)
###Output
_____no_output_____
###Markdown
Model Averaging
###Code
w = 0.5
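# (Added comment) Model averaging by weighted posterior-predictive sampling: draws are taken
# from each model in proportion to the weights (here an arbitrary 50/50 split).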
y_lp = pm.sample_posterior_predictive_w([trace_l, trace_p],
samples=1000,
models=[model_l, model_p],
weights=[w, 1-w])
_, ax = plt.subplots(figsize=(10, 6))
az.plot_kde(y_l, plot_kwargs={'color': 'C1'}, label='linear model', ax=ax)
az.plot_kde(y_p, plot_kwargs={'color': 'C2'}, label='order 2 model', ax=ax)
az.plot_kde(y_lp['y_pred'], plot_kwargs={'color': 'C3'},
label='weighted model', ax=ax)
plt.plot(y_1s, np.zeros_like(y_1s), '|', label='observed data')
plt.yticks([])
plt.legend()
plt.savefig('B11197_05_09.png', dpi=300)
###Output
_____no_output_____
###Markdown
Bayes factors
###Code
coins = 30 # 300
heads = 9 # 90
y_d = np.repeat([0, 1], [coins-heads, heads])
with pm.Model() as model_BF:
p = np.array([0.5, 0.5])
model_index = pm.Categorical('model_index', p=p)
m_0 = (4, 8)
m_1 = (8, 4)
m = pm.math.switch(pm.math.eq(model_index, 0), m_0, m_1)
# a priori
θ = pm.Beta('θ', m[0], m[1])
# likelihood
y = pm.Bernoulli('y', θ, observed=y_d)
trace_BF = pm.sample(5000)
az.plot_trace(trace_BF)
plt.savefig('B11197_05_11.png', dpi=300)
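# (Added comment) The Bayes factor below is the posterior odds of model 0 vs. model 1 divided by
# the prior odds p[0]/p[1]; with equal prior weights the prior odds are 1, so BF is just the posterior odds.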
pM1 = trace_BF['model_index'].mean()
pM0 = 1 - pM1
BF = (pM0 / pM1) * (p[1] / p[0])
BF
with pm.Model() as model_BF_0:
θ = pm.Beta('θ', 4, 8)
y = pm.Bernoulli('y', θ, observed=y_d)
trace_BF_0 = pm.sample(2500, step=pm.SMC())
with pm.Model() as model_BF_1:
θ = pm.Beta('θ', 8, 4)
y = pm.Bernoulli('y', θ, observed=y_d)
trace_BF_1 = pm.sample(2500, step=pm.SMC())
model_BF_0.marginal_likelihood / model_BF_1.marginal_likelihood
###Output
_____no_output_____
###Markdown
Bayes factors and information criteria
###Code
traces = []
waics = []
for coins, heads in [(30, 9), (300, 90)]:
y_d = np.repeat([0, 1], [coins-heads, heads])
for priors in [(4, 8), (8, 4)]:
with pm.Model() as model:
θ = pm.Beta('θ', *priors)
y = pm.Bernoulli('y', θ, observed=y_d)
trace = pm.sample(2000)
traces.append(trace)
waics.append(az.waic(trace))
model_names = ['Model_0 (30-9)', 'Model_1 (30-9)',
'Model_0 (300-90)', 'Model_1 (300-90)']
az.plot_forest(traces, model_names=model_names)
plt.savefig('B11197_05_12.png', dpi=300)
fig, ax = plt.subplots(1, 2, sharey=True)
labels = model_names
indices = [0, 0, 1, 1]
for i, (ind, d) in enumerate(zip(indices, waics)):
mean = d.waic
ax[ind].errorbar(mean, -i, xerr=d.waic_se, fmt='o')
ax[ind].text(mean, -i+0.2, labels[i], ha='center')
ax[0].set_xlim(30, 50)
ax[1].set_xlim(330, 400)
plt.ylim([-i-0.5, 0.5])
plt.yticks([])
plt.subplots_adjust(wspace=0.05)
fig.text(0.5, 0, 'Deviance', ha='center', fontsize=14)
plt.savefig('B11197_05_13.png', dpi=300)
###Output
_____no_output_____
###Markdown
Regularizing priors
###Code
plt.figure(figsize=(8, 6))
x_values = np.linspace(-10, 10, 1000)
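# (Added comment) 'df' here is simply the loop name for the Laplace scale parameter b; a smaller b
# concentrates more prior mass near zero, i.e. a stronger regularizing effect on coefficients.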
for df in [1, 2, 5, 15]:
distri = stats.laplace(scale=df)
x_pdf = distri.pdf(x_values)
plt.plot(x_values, x_pdf, label=f'b = {df}')
x_pdf = stats.norm.pdf(x_values)
plt.plot(x_values, x_pdf, label='Gaussian')
plt.xlabel('x')
plt.yticks([])
plt.legend()
plt.xlim(-7, 7)
plt.savefig('B11197_05_14.png', dpi=300)
np.random.seed(912)
x = range(0, 10)
q = stats.binom(10, 0.75)
r = stats.randint(0, 10)
true_distribution = [list(q.rvs(200)).count(i) / 200 for i in x]
q_pmf = q.pmf(x)
r_pmf = r.pmf(x)
_, ax = plt.subplots(1, 3, figsize=(12, 4), sharey=True,
constrained_layout=True)
for idx, (dist, label) in enumerate(zip([true_distribution, q_pmf, r_pmf], ['true_distribution', 'q', 'r'])):
ax[idx].vlines(x, 0, dist, label=f'entropy = {stats.entropy(dist):.2f}')
ax[idx].set_title(label)
ax[idx].set_xticks(x)
ax[idx].legend(loc=2, handlelength=0)
plt.savefig('B11197_05_15.png', dpi=300)
stats.entropy(true_distribution, q_pmf), stats.entropy(true_distribution, r_pmf)
stats.entropy(r_pmf, q_pmf), stats.entropy(q_pmf, r_pmf)
###Output
_____no_output_____
###Markdown
Model Comparison
###Code
import pymc3 as pm
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
import arviz as az
az.style.use('arviz-darkgrid')
dummy_data = np.loadtxt('../data/dummy.csv')
x_1 = dummy_data[:, 0]
y_1 = dummy_data[:, 1]
order = 2
x_1p = np.vstack([x_1**i for i in range(1, order+1)])
x_1s = (x_1p - x_1p.mean(axis=1, keepdims=True)) / \
x_1p.std(axis=1, keepdims=True)
y_1s = (y_1 - y_1.mean()) / y_1.std()
plt.scatter(x_1s[0], y_1s)
plt.xlabel('x')
plt.ylabel('y')
plt.savefig('B11197_05_01.png', dpi=300)
with pm.Model() as model_l:
α = pm.Normal('α', mu=0, sd=1)
β = pm.Normal('β', mu=0, sd=10)
ϵ = pm.HalfNormal('ϵ', 5)
μ = α + β * x_1s[0]
y_pred = pm.Normal('y_pred', mu=μ, sd=ϵ, observed=y_1s)
trace_l = pm.sample(2000)
with pm.Model() as model_p:
α = pm.Normal('α', mu=0, sd=1)
β = pm.Normal('β', mu=0, sd=10, shape=order)
ϵ = pm.HalfNormal('ϵ', 5)
μ = α + pm.math.dot(β, x_1s)
y_pred = pm.Normal('y_pred', mu=μ, sd=ϵ, observed=y_1s)
trace_p = pm.sample(2000)
x_new = np.linspace(x_1s[0].min(), x_1s[0].max(), 100)
α_l_post = trace_l['α'].mean()
β_l_post = trace_l['β'].mean(axis=0)
y_l_post = α_l_post + β_l_post * x_new
plt.plot(x_new, y_l_post, 'C1', label='linear model')
α_p_post = trace_p['α'].mean()
β_p_post = trace_p['β'].mean(axis=0)
idx = np.argsort(x_1s[0])
y_p_post = α_p_post + np.dot(β_p_post, x_1s)
plt.plot(x_1s[0][idx], y_p_post[idx], 'C2', label=f'model order {order}')
α_p_post = trace_p['α'].mean()
β_p_post = trace_p['β'].mean(axis=0)
x_new_p = np.vstack([x_new**i for i in range(1, order+1)])
y_p_post = α_p_post + np.dot(β_p_post, x_new_p)
plt.scatter(x_1s[0], y_1s, c='C0', marker='.')
plt.legend()
plt.savefig('B11197_05_02.png', dpi=300)
###Output
_____no_output_____
###Markdown
Posterior predictive checks
###Code
y_l = pm.sample_posterior_predictive(trace_l, 2000,
model=model_l)['y_pred']
y_p = pm.sample_posterior_predictive(trace_p, 2000,
model=model_p)['y_pred']
plt.figure(figsize=(8, 3))
data = [y_1s, y_l, y_p]
labels = ['data', 'linear model', 'order 2']
for i, d in enumerate(data):
mean = d.mean()
err = np.percentile(d, [25, 75])
plt.errorbar(mean, -i, xerr=[[-err[0]], [err[1]]], fmt='o')
plt.text(mean, -i+0.2, labels[i], ha='center', fontsize=14)
plt.ylim([-i-0.5, 0.5])
plt.yticks([])
plt.savefig('B11197_05_03.png', dpi=300)
fig, ax = plt.subplots(1, 2, figsize=(10, 3), constrained_layout=True)
def iqr(x, a=0):
return np.subtract(*np.percentile(x, [75, 25], axis=a))
for idx, func in enumerate([np.mean, iqr]):
T_obs = func(y_1s)
ax[idx].axvline(T_obs, 0, 1, color='k', ls='--')
for d_sim, c in zip([y_l, y_p], ['C1', 'C2']):
T_sim = func(d_sim, 1)
p_value = np.mean(T_sim >= T_obs)
az.plot_kde(T_sim, plot_kwargs={'color': c},
label=f'p-value {p_value:.2f}', ax=ax[idx])
ax[idx].set_title(func.__name__)
ax[idx].set_yticks([])
ax[idx].legend()
plt.savefig('B11197_05_04.png', dpi=300)
###Output
_____no_output_____
###Markdown
Occam's razor – simplicity and accuracy
###Code
x = np.array([4., 5., 6., 9., 12, 14.])
y = np.array([4.2, 6., 6., 9., 10, 10.])
plt.figure(figsize=(10, 5))
order = [0, 1, 2, 5]
plt.plot(x, y, 'o')
for i in order:
x_n = np.linspace(x.min(), x.max(), 100)
coeffs = np.polyfit(x, y, deg=i)
ffit = np.polyval(coeffs, x_n)
p = np.poly1d(coeffs)
yhat = p(x)
ybar = np.mean(y)
ssreg = np.sum((yhat-ybar)**2)
sstot = np.sum((y - ybar)**2)
r2 = ssreg / sstot
plt.plot(x_n, ffit, label=f'order {i}, $R^2$= {r2:.2f}')
plt.legend(loc=2)
plt.xlabel('x')
plt.ylabel('y', rotation=0)
plt.savefig('B11197_05_05.png', dpi=300)
plt.plot([10, 7], [9, 7], 'ks')
plt.savefig('B11197_05_06.png', dpi=300)
###Output
_____no_output_____
###Markdown
Computing information criteria with PyMC3
###Code
waic_l = az.waic(trace_l)
waic_l
cmp_df = az.compare({'model_l':trace_l, 'model_p':trace_p},
method='BB-pseudo-BMA')
cmp_df
az.plot_compare(cmp_df)
plt.savefig('B11197_05_08.png', dpi=300)
###Output
_____no_output_____
###Markdown
Model Averaging
###Code
w = 0.5
y_lp = pm.sample_posterior_predictive_w([trace_l, trace_p],
samples=1000,
models=[model_l, model_p],
weights=[w, 1-w])
_, ax = plt.subplots(figsize=(10, 6))
az.plot_kde(y_l, plot_kwargs={'color': 'C1'}, label='linear model', ax=ax)
az.plot_kde(y_p, plot_kwargs={'color': 'C2'}, label='order 2 model', ax=ax)
az.plot_kde(y_lp['y_pred'], plot_kwargs={'color': 'C3'},
label='weighted model', ax=ax)
plt.plot(y_1s, np.zeros_like(y_1s), '|', label='observed data')
plt.yticks([])
plt.legend()
plt.savefig('B11197_05_09.png', dpi=300)
###Output
_____no_output_____
###Markdown
Bayes factors
###Code
coins = 30 # 300
heads = 9 # 90
y_d = np.repeat([0, 1], [coins-heads, heads])
with pm.Model() as model_BF:
p = np.array([0.5, 0.5])
model_index = pm.Categorical('model_index', p=p)
m_0 = (4, 8)
m_1 = (8, 4)
m = pm.math.switch(pm.math.eq(model_index, 0), m_0, m_1)
# a priori
θ = pm.Beta('θ', m[0], m[1])
# likelihood
y = pm.Bernoulli('y', θ, observed=y_d)
trace_BF = pm.sample(5000)
az.plot_trace(trace_BF)
plt.savefig('B11197_05_11.png', dpi=300)
pM1 = trace_BF['model_index'].mean()
pM0 = 1 - pM1
BF = (pM0 / pM1) * (p[1] / p[0])
BF
with pm.Model() as model_BF_0:
θ = pm.Beta('θ', 4, 8)
y = pm.Bernoulli('y', θ, observed=y_d)
trace_BF_0 = pm.sample(2500, step=pm.SMC())
with pm.Model() as model_BF_1:
θ = pm.Beta('θ', 8, 4)
y = pm.Bernoulli('y', θ, observed=y_d)
trace_BF_1 = pm.sample(2500, step=pm.SMC())
model_BF_0.marginal_likelihood / model_BF_1.marginal_likelihood
###Output
_____no_output_____
###Markdown
Bayes factors and information criteria
###Code
traces = []
waics = []
for coins, heads in [(30, 9), (300, 90)]:
y_d = np.repeat([0, 1], [coins-heads, heads])
for priors in [(4, 8), (8, 4)]:
with pm.Model() as model:
θ = pm.Beta('θ', *priors)
y = pm.Bernoulli('y', θ, observed=y_d)
trace = pm.sample(2000)
traces.append(trace)
waics.append(az.waic(trace))
model_names = ['Model_0 (30-9)', 'Model_1 (30-9)',
'Model_0 (300-90)', 'Model_1 (300-90)']
az.plot_forest(traces, model_names=model_names)
plt.savefig('B11197_05_12.png', dpi=300)
fig, ax = plt.subplots(1, 2, sharey=True)
labels = model_names
indices = [0, 0, 1, 1]
for i, (ind, d) in enumerate(zip(indices, waics)):
mean = d.waic
ax[ind].errorbar(mean, -i, xerr=d.waic_se, fmt='o')
ax[ind].text(mean, -i+0.2, labels[i], ha='center')
ax[0].set_xlim(30, 50)
ax[1].set_xlim(330, 400)
plt.ylim([-i-0.5, 0.5])
plt.yticks([])
plt.subplots_adjust(wspace=0.05)
fig.text(0.5, 0, 'Deviance', ha='center', fontsize=14)
plt.savefig('B11197_05_13.png', dpi=300)
###Output
_____no_output_____
###Markdown
Regularizing priors
###Code
plt.figure(figsize=(8, 6))
x_values = np.linspace(-10, 10, 1000)
for df in [1, 2, 5, 15]:
distri = stats.laplace(scale=df)
x_pdf = distri.pdf(x_values)
plt.plot(x_values, x_pdf, label=f'b = {df}')
x_pdf = stats.norm.pdf(x_values)
plt.plot(x_values, x_pdf, label='Gaussian')
plt.xlabel('x')
plt.yticks([])
plt.legend()
plt.xlim(-7, 7)
plt.savefig('B11197_05_14.png', dpi=300)
np.random.seed(912)
x = range(0, 10)
q = stats.binom(10, 0.75)
r = stats.randint(0, 10)
true_distribution = [list(q.rvs(200)).count(i) / 200 for i in x]
q_pmf = q.pmf(x)
r_pmf = r.pmf(x)
_, ax = plt.subplots(1, 3, figsize=(12, 4), sharey=True,
constrained_layout=True)
for idx, (dist, label) in enumerate(zip([true_distribution, q_pmf, r_pmf], ['true_distribution', 'q', 'r'])):
ax[idx].vlines(x, 0, dist, label=f'entropy = {stats.entropy(dist):.2f}')
ax[idx].set_title(label)
ax[idx].set_xticks(x)
ax[idx].legend(loc=2, handlelength=0)
plt.savefig('B11197_05_15.png', dpi=300)
stats.entropy(true_distribution, q_pmf), stats.entropy(true_distribution, r_pmf)
stats.entropy(r_pmf, q_pmf), stats.entropy(q_pmf, r_pmf)
###Output
_____no_output_____
###Markdown
Model Comparison
###Code
import pymc3 as pm
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
import arviz as az
az.style.use('arviz-darkgrid')
dummy_data = np.loadtxt('../data/dummy.csv')
x_1 = dummy_data[:, 0]
y_1 = dummy_data[:, 1]
order = 2
x_1p = np.vstack([x_1**i for i in range(1, order+1)])
x_1s = (x_1p - x_1p.mean(axis=1, keepdims=True)) / \
x_1p.std(axis=1, keepdims=True)
y_1s = (y_1 - y_1.mean()) / y_1.std()
plt.scatter(x_1s[0], y_1s)
plt.xlabel('x')
plt.ylabel('y')
plt.savefig('B11197_05_01.png', dpi=300)
with pm.Model() as model_l:
α = pm.Normal('α', mu=0, sd=1)
β = pm.Normal('β', mu=0, sd=10)
ϵ = pm.HalfNormal('ϵ', 5)
μ = α + β * x_1s[0]
y_pred = pm.Normal('y_pred', mu=μ, sd=ϵ, observed=y_1s)
trace_l = pm.sample(2000)
with pm.Model() as model_p:
α = pm.Normal('α', mu=0, sd=1)
β = pm.Normal('β', mu=0, sd=10, shape=order)
ϵ = pm.HalfNormal('ϵ', 5)
μ = α + pm.math.dot(β, x_1s)
y_pred = pm.Normal('y_pred', mu=μ, sd=ϵ, observed=y_1s)
trace_p = pm.sample(2000)
x_new = np.linspace(x_1s[0].min(), x_1s[0].max(), 100)
α_l_post = trace_l['α'].mean()
β_l_post = trace_l['β'].mean(axis=0)
y_l_post = α_l_post + β_l_post * x_new
plt.plot(x_new, y_l_post, 'C1', label='linear model')
α_p_post = trace_p['α'].mean()
β_p_post = trace_p['β'].mean(axis=0)
idx = np.argsort(x_1s[0])
y_p_post = α_p_post + np.dot(β_p_post, x_1s)
plt.plot(x_1s[0][idx], y_p_post[idx], 'C2', label=f'model order {order}')
#α_p_post = trace_p['α'].mean()
#β_p_post = trace_p['β'].mean(axis=0)
#x_new_p = np.vstack([x_new**i for i in range(1, order+1)])
#y_p_post = α_p_post + np.dot(β_p_post, x_new_p)
plt.scatter(x_1s[0], y_1s, c='C0', marker='.')
plt.legend()
plt.savefig('B11197_05_02.png', dpi=300)
###Output
_____no_output_____
###Markdown
Posterior predictive checks
###Code
y_l = pm.sample_posterior_predictive(trace_l, 2000,
model=model_l)['y_pred']
y_p = pm.sample_posterior_predictive(trace_p, 2000,
model=model_p)['y_pred']
plt.figure(figsize=(8, 3))
data = [y_1s, y_l, y_p]
labels = ['data', 'linear model', 'order 2']
for i, d in enumerate(data):
mean = d.mean()
err = np.percentile(d, [25, 75])
plt.errorbar(mean, -i, xerr=[[-err[0]], [err[1]]], fmt='o')
plt.text(mean, -i+0.2, labels[i], ha='center', fontsize=14)
plt.ylim([-i-0.5, 0.5])
plt.yticks([])
plt.savefig('B11197_05_03.png', dpi=300)
fig, ax = plt.subplots(1, 2, figsize=(10, 3), constrained_layout=True)
def iqr(x, a=0):
return np.subtract(*np.percentile(x, [75, 25], axis=a))
for idx, func in enumerate([np.mean, iqr]):
T_obs = func(y_1s)
ax[idx].axvline(T_obs, 0, 1, color='k', ls='--')
for d_sim, c in zip([y_l, y_p], ['C1', 'C2']):
T_sim = func(d_sim, 1)
p_value = np.mean(T_sim >= T_obs)
az.plot_kde(T_sim, plot_kwargs={'color': c},
label=f'p-value {p_value:.2f}', ax=ax[idx])
ax[idx].set_title(func.__name__)
ax[idx].set_yticks([])
ax[idx].legend()
plt.savefig('B11197_05_04.png', dpi=300)
###Output
_____no_output_____
###Markdown
Occam's razor – simplicity and accuracy
###Code
x = np.array([4., 5., 6., 9., 12, 14.])
y = np.array([4.2, 6., 6., 9., 10, 10.])
plt.figure(figsize=(10, 5))
order = [0, 1, 2, 5]
plt.plot(x, y, 'o')
for i in order:
x_n = np.linspace(x.min(), x.max(), 100)
coeffs = np.polyfit(x, y, deg=i)
ffit = np.polyval(coeffs, x_n)
p = np.poly1d(coeffs)
yhat = p(x)
ybar = np.mean(y)
ssreg = np.sum((yhat-ybar)**2)
sstot = np.sum((y - ybar)**2)
r2 = ssreg / sstot
plt.plot(x_n, ffit, label=f'order {i}, $R^2$= {r2:.2f}')
plt.legend(loc=2)
plt.xlabel('x')
plt.ylabel('y', rotation=0)
plt.savefig('B11197_05_05.png', dpi=300)
plt.plot([10, 7], [9, 7], 'ks')
plt.savefig('B11197_05_06.png', dpi=300)
###Output
_____no_output_____
###Markdown
Computing information criteria with PyMC3
###Code
waic_l = az.waic(trace_l)
waic_l
cmp_df = az.compare({'model_l':trace_l, 'model_p':trace_p},
method='BB-pseudo-BMA')
cmp_df
az.plot_compare(cmp_df)
plt.savefig('B11197_05_08.png', dpi=300)
###Output
_____no_output_____
###Markdown
Model Averaging
###Code
w = 0.5
y_lp = pm.sample_posterior_predictive_w([trace_l, trace_p],
samples=1000,
models=[model_l, model_p],
weights=[w, 1-w])
_, ax = plt.subplots(figsize=(10, 6))
az.plot_kde(y_l, plot_kwargs={'color': 'C1'}, label='linear model', ax=ax)
az.plot_kde(y_p, plot_kwargs={'color': 'C2'}, label='order 2 model', ax=ax)
az.plot_kde(y_lp['y_pred'], plot_kwargs={'color': 'C3'},
label='weighted model', ax=ax)
plt.plot(y_1s, np.zeros_like(y_1s), '|', label='observed data')
plt.yticks([])
plt.legend()
plt.savefig('B11197_05_09.png', dpi=300)
###Output
_____no_output_____
###Markdown
Bayes factors
###Code
coins = 30 # 300
heads = 9 # 90
y_d = np.repeat([0, 1], [coins-heads, heads])
with pm.Model() as model_BF:
p = np.array([0.5, 0.5])
model_index = pm.Categorical('model_index', p=p)
m_0 = (4, 8)
m_1 = (8, 4)
m = pm.math.switch(pm.math.eq(model_index, 0), m_0, m_1)
# a priori
θ = pm.Beta('θ', m[0], m[1])
# likelihood
y = pm.Bernoulli('y', θ, observed=y_d)
trace_BF = pm.sample(5000)
az.plot_trace(trace_BF)
plt.savefig('B11197_05_11.png', dpi=300)
pM1 = trace_BF['model_index'].mean()
pM0 = 1 - pM1
BF = (pM0 / pM1) * (p[1] / p[0])
BF
with pm.Model() as model_BF_0:
θ = pm.Beta('θ', 4, 8)
y = pm.Bernoulli('y', θ, observed=y_d)
trace_BF_0 = pm.sample(2500, step=pm.SMC())
with pm.Model() as model_BF_1:
θ = pm.Beta('θ', 8, 4)
y = pm.Bernoulli('y', θ, observed=y_d)
trace_BF_1 = pm.sample(2500, step=pm.SMC())
model_BF_0.marginal_likelihood / model_BF_1.marginal_likelihood
###Output
_____no_output_____
###Markdown
Bayes factors and information criteria
###Code
traces = []
waics = []
for coins, heads in [(30, 9), (300, 90)]:
y_d = np.repeat([0, 1], [coins-heads, heads])
for priors in [(4, 8), (8, 4)]:
with pm.Model() as model:
θ = pm.Beta('θ', *priors)
y = pm.Bernoulli('y', θ, observed=y_d)
trace = pm.sample(2000)
traces.append(trace)
waics.append(az.waic(trace))
model_names = ['Model_0 (30-9)', 'Model_1 (30-9)',
'Model_0 (300-90)', 'Model_1 (300-90)']
az.plot_forest(traces, model_names=model_names)
plt.savefig('B11197_05_12.png', dpi=300)
fig, ax = plt.subplots(1, 2, sharey=True)
labels = model_names
indices = [0, 0, 1, 1]
for i, (ind, d) in enumerate(zip(indices, waics)):
mean = d.waic
ax[ind].errorbar(mean, -i, xerr=d.waic_se, fmt='o')
ax[ind].text(mean, -i+0.2, labels[i], ha='center')
ax[0].set_xlim(30, 50)
ax[1].set_xlim(330, 400)
plt.ylim([-i-0.5, 0.5])
plt.yticks([])
plt.subplots_adjust(wspace=0.05)
fig.text(0.5, 0, 'Deviance', ha='center', fontsize=14)
plt.savefig('B11197_05_13.png', dpi=300)
###Output
_____no_output_____
###Markdown
Regularizing priors
###Code
plt.figure(figsize=(8, 6))
x_values = np.linspace(-10, 10, 1000)
for df in [1, 2, 5, 15]:
distri = stats.laplace(scale=df)
x_pdf = distri.pdf(x_values)
plt.plot(x_values, x_pdf, label=f'b = {df}')
x_pdf = stats.norm.pdf(x_values)
plt.plot(x_values, x_pdf, label='Gaussian')
plt.xlabel('x')
plt.yticks([])
plt.legend()
plt.xlim(-7, 7)
plt.savefig('B11197_05_14.png', dpi=300)
np.random.seed(912)
x = range(0, 10)
q = stats.binom(10, 0.75)
r = stats.randint(0, 10)
true_distribution = [list(q.rvs(200)).count(i) / 200 for i in x]
q_pmf = q.pmf(x)
r_pmf = r.pmf(x)
_, ax = plt.subplots(1, 3, figsize=(12, 4), sharey=True,
constrained_layout=True)
for idx, (dist, label) in enumerate(zip([true_distribution, q_pmf, r_pmf], ['true_distribution', 'q', 'r'])):
ax[idx].vlines(x, 0, dist, label=f'entropy = {stats.entropy(dist):.2f}')
ax[idx].set_title(label)
ax[idx].set_xticks(x)
ax[idx].legend(loc=2, handlelength=0)
plt.savefig('B11197_05_15.png', dpi=300)
stats.entropy(true_distribution, q_pmf), stats.entropy(true_distribution, r_pmf)
stats.entropy(r_pmf, q_pmf), stats.entropy(q_pmf, r_pmf)
###Output
_____no_output_____ |
Bayesian Modeling for Oceanographers - 1 - Introduction.ipynb | ###Markdown
***I shall try not to use statistics as a drunken man uses lamp posts, for support rather than for illumination ~ Francis Yeats-Brown, 1936 (paraphrasing Andrew Lang)***This document is a very basic introduction to probabilistic programming as an intuitive and practical approach to scientific analysis, model development, and model critique. The sub-paradigm of probabilistic programming I address here is that of Bayesian inference. Bayesian inference is rarely taught in college science programs, where instruction on data analysis relies on frequentist (classical) statistics; a term one comes to know after exposure to Bayesian Stats. I will not discuss the shortcomings of frequentism as a scientific tool. Suffice it to say the prevalence of frequentism is the result of a series of unfortunate twists of History (*cf.* McGrayne, 2012). Happily, a reversal in this trend is well underway, and should in time significantly reduce, if not eliminate altogether, such spillovers of frequentism as the use of p-values and its all too human consequence, p-hacking, misunderstood and misused confidence intervals, overuse of central tendencies, confused students, traumatized graduates, and researchers on autopilot. Instead, Bayesian inference requires the would-be practitioner to embrace uncertainty and its computation rather than sweeping it under the rug. A principled approach yielding the uncertainty surrounding estimates can be quite informative about the data, as well as the model used to describe past data or predict future measurements. Bayesian inference is one such approach. Bayesian inference offers an intuitive and transparent alternative for data analysis, as well as model building and implementation. This notion of transparency is important as the assumptions and choices made when constructing a Bayesian inferential framework are laid bare, making its components easily debatable and the entire exercise readily reproducible. Furthermore, in contrast to a frequentist approach, Bayesian inference naturally yields uncertainty. Uncertainty estimation is critical to subsequent steps such as model evaluation/selection and decision making. Last but not least, for a Bayesian, the approach to any given problem, regardless of its particulars, is always the same, and includes the following steps: * codify background knowledge - the prior; * develop one or more models for explanatory and/or predictive purposes - the likelihood; * collect data; * run inference using one of the many software packages available; * validate and select/ensemble-average models. To demonstrate Bayesian inference in practice, I will first go over some basic concepts and apply them to a very simple problem: estimation of Earth's land proportion. This is the topic of this post. In subsequent posts, I will revisit the development of the OC4 chlorophyll algorithm, and suggest some alternative model formulations. *A note on Bayesian inference packages* Bayesian inference is, but for the most basic of problems, computationally intensive as it boils down to counting conditional outcomes, often in high-dimensional space. The feasibility of and increasing interest in Bayesian inference is directly linked to the development of the Markov Chain Monte-Carlo (MCMC) algorithm, beginning with the Metropolis (named after Nick Metropolis of Los Alamos fame) algorithm, and to the recent exponential growth in computing power. There are now many Bayesian inference packages that have made this paradigm approachable. 
Some older ones are BUGS, JAGS, WinBUGS. These implement a relatively robust MCMC algorithm known as Gibbs sampling. This algorithm does run into trouble when faced with high correlation between model parameters and has since been superseded by the far more efficient Hamiltonian Monte Carlo (HMC). HMC takes its name from the approach it takes, which is to run its exploration of the probabilistic space at hand like a Hamiltonian physics experiment. The software draws an analogy between potential energy states and the underlying probabilistic landscape of a problem. The analogy is represented as a particle subject to these alternating energy states as it traverses the experimental landscape. The resulting particle trajectory represents the sampler's estimation of the posterior probability distribution. Recent packages implement the latest HMC flavor, the No U-Turn Sampler (NUTS), which makes the otherwise difficult-to-tune HMC algorithm a breeze to use. The mature packages that make use of NUTS include STAN, named after the mathematician Stanislaw Ulam, and PyMC3. STAN is written in C++ and features its own probabilistic programming language, though a number of wrappers written in languages such as R, Python, and MATLAB are available to interact with it from familiar environments where model evaluation is easier. Turning STAN's inference crank yields a reusable compiled sampler for the problem at hand. PyMC3 is written in pure Python and is built on top of the Theano library. Theano liberates the user from dealing with the computer's resources and finds the best way to implement the set of mathematical expressions that make up a model of interest. In the process, the software can automatically take advantage of modern computer features, including multiple cores and, if present, graphical processing units (GPU). This capacity makes PyMC3 particularly fast and robust, and a great tool for exploratory inference and rapid model prototyping and testing. Except for the introductory example below, which has a closed-form analytical solution and thus does not need MCMC, all the results shown in subsequent posts were obtained using PyMC3 running the NUTS sampler. *Bayesian Basics* Bayes' rule Bayesian inference is based on a rule that is relatively straightforward. Given two dependent events A and B, their joint probability $P(A,B)$ can be written in two different ways: $$P(A,B) = P(A|B) \times P(B) = P(B|A)\times P(A)$$ where $P(A|B)$ is the conditional probability that A occurs **given that B has occurred**. Despite its simplicity, this rule is remarkably handy when one of the conditional probabilities, say $P(B|A)$, is harder to compute than the other. This is easily dealt with by rearranging the terms above, which leads to Bayes' rule: $$P(B|A)=\frac{P(A|B)\times P(B)}{P(A)}$$ The trickster's pocketSuppose we're picking a trickster's pocket looking for coins. This trickster is known to carry three types of coins, distinguished by the pair of sides they have. First, some nomenclature:* H: the face we see is Heads * T: the face we see is Tails* HH: we draw a 2-headed coin,* HT: we draw a normal coin,* TT: we draw a 2-tailed coin.Having pulled a coin out, we see that the visible face is Heads. We want to know the probability of the other side of the coin being T, given the top side is H, i.e. the probability we drew a normal coin given we observe H, or $P(HT|H)$? Flat PriorsLet's say we have no prior knowledge. 
A sensible first step to take is then to assign an equal probability to each coin type being drawn, i.e. $P(HT)=P(HH)=P(TT) = 1/3$ (which is exhaustive, so it adds to 1). Brute-forcing it with Bayes, we can set up our equality as $$P(HT|H) = \frac{P(T,H)}{P(H)}$$Using Bayes' rule we can expand the numerator as $$P(HT|H) = \frac{P(H|HT)P(HT)}{P(H)}$$$P(H|HT) = 1/2$ and from the prior, we know that $P(HT)=1/3$. What about the denominator $P(H)$, commonly referred to as *the evidence*? It can be computed by counting all possible scenarios yielding the outcome observed, H, weighed by what we know a priori about the probability of realization of each scenario. Let's tally all possibilities, assign them probabilities, and sum:$$P(H) = P(H|HH)P(HH) + P(H|HT)P(HT) + P(H|TT)P(TT)$$According to our prior knowledge $P(HH)=P(HT)=P(TT)=1/3$, so inserting numbers above yields:$$P(H) = (1)(1/3) + (1/2)(1/3) + (0)(1/3)$$and therefore$$P(HT|H) = \frac{(1/2)(1/3)}{(1)(1/3) + (1/2)(1/3)} = \frac{1/2}{1 + 1/2} = 1/3$$ Informative PriorsBut say we know that coins other than $(HT)$ are rare, and therefore we assume that, to the best of our estimation, the expected chances of encountering $(HT)$, $(HH)$, and $(TT)$ are respectively 0.8, 0.15, 0.05. Armed with this new prior, we can re-estimate $P(HT|H)$: $$P(H) = P(H|HH)P(HH) + P(H|HT)P(HT) + P(H|TT)P(TT) = (1)(0.15) + (0.5)(0.8) + (0)(0.05)$$$$P(HT|H) = \frac{(0.5)(0.8)}{0.15 + (0.5)(0.8)} = \frac{0.4}{0.55} = 0.73$$ (Both results are checked numerically in a short code cell below.) This is a contrived example that nevertheless, hopefully, illustrates the following points:* the whole endeavor boils down to counting possibilities, weighed by what we know a priori and what we observe,* prior knowledge can vary from one observer to the next, at least initially. However, the more the observed data, the more the observers will converge to a common probabilistic assessment. An example of this follows. Extending Bayes' rule to modeling: Priors, posteriors and the iterative nature of Bayesian inferenceWithin the context of scientific enquiry, conditional probabilities allow relating hypotheses to collected data, by way of model formulation and inference on this data. The hypothesis here refers to a deterministic model or a probability distribution, considered an adequate candidate for describing a process of interest, and its associated parameters. This framework allows naturally for trying multiple models and comparing their performance on data. A subsequent post will illustrate this point. Given a hypothesis, $H$, and a data set, $D$, collected to estimate the validity of this hypothesis, Bayes' rule can be rewritten as: $$P(H|D) = \frac{P(D|H) \times P(H)} {P(D)}$$The iterative nature of Bayesian inference allows for a computed posterior to serve as the prior for the next round of data collection, and I will show an example of that below. Depending on how much data can be collected, the specific form of the prior will exert more or less influence; this as well, I illustrate below. As a preamble, it is worth making more precise some of the vocabulary introduced earlier:* $P(H)$: **the prior** is a probability distribution that represents what is known/unknown about the hypothesis before seeing the data. 
Generally, the more the data collected, the less influence the prior will have on the inference.* $P(D|H)$: **the likelihood** is not a probability distribution but rather a set of conditional probabilities; that is, the probability of observing a given dataset conditioned on the hypothesis.* $P(H|D)$: **the posterior** is the probability distribution of the hypothesis after it has been confronted with data.* $P(D)$: **the evidence** is, for the purpose of this post, a normalizing constant that ensures all probabilities computed sum to 1. Markov Chain Monte Carlo (MCMC) sampling does away with this otherwise often computationally intractable construct, and we end up with $P(H|D)\ \propto\ P(D|H)\times P(H)$, which doesn't change the interpretation of the results. Inferring Earth's land mass proportionI stole and modified this example from [McElreath (2015)](http://xcelab.net/rm/statistical-rethinking/). The goal is to infer the proportion of land by randomly sampling Earth locations and counting land and water points. The set of hypotheses includes all the possible values that this proportion parameter (*unobserved variable*, in Bayesian speak) can take. Below, I simulate a grid of 101 points ranging from 0 to 1 in increments of 0.01, representing the possible values that the land proportion, hereafter $\theta$, can take. The result is a probabilistic statement of what this proportion is likely to be, given the data collected and the model used. In this example, the inference has a nice closed form through the use of a Beta-Binomial process. This includes a Beta distribution for the prior and a likelihood expressed as a Binomial process. The Beta distribution and the binomial process form a conjugate pair, meaning that the update to the prior distribution can simply be done by updating the parameters of the Beta distribution. This is a rarity, but a useful one as it allows focusing on the iterative alteration of what we know of the land mass proportion as the data comes in and the evolution of our knowledge from the initial prior through the successive forms of the posterior. The steps are as follows:1. define a prior $P(H)$ * this can be flat, weakly regularizing, or strongly regularizing, depending on the researcher's prior knowledge, * flat prior * the probability distribution does not favor a particular subset within the domain of the parameter, * the prior will rapidly be overwhelmed by the collected data, * a flat prior may lead to overfitting (good performance on training data but poor performance on out-of-sample data). * regularizing prior * can be weak or strong * favors with lower (weak) or higher (strong) degree of probability a specific set of value(s) the parameter (hypothesis) can take, * will *calm* the model and can only be overwhelmed by a relatively large amount of data, depending on the strength of the regularization. * the danger is that if a 0 probability is a priori assigned to certain values within the domain of the parameter, this precludes those values from ever becoming relevant. * inappropriately strong priors may lead to underfitting (poor performance on both training and out-of-sample data). 2. Likelihood $P(D|H)$ * define the likelihood model, here a binomial process, * compute the likelihood of the data given the hypotheses, 3. collect data * sample randomly from a mock set of globe locations, * land? water? 4. 
Posterior $P(H|D)$ * update the posterior in view of new data * calculate likelihood using the new data * calculate the posterior according to (3) * repeat the above when new data becomes available; the posterior becomes the new prior.
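Before turning to the land-proportion example, here is a quick numerical check of the trickster's-pocket calculation above. This is an added sketch, not part of the original post: the coin ordering [HH, HT, TT] and the prior vectors are my own bookkeeping, and the snippet simply re-does the counting-and-weighing arithmetic in plain Python.
###Code
# Added sketch: re-do the trickster's-pocket arithmetic numerically.
# Coin order is assumed to be [HH, HT, TT]; likelihood of the visible face being Heads:
lik_heads = [1.0, 0.5, 0.0]
def p_normal_given_heads(prior):
    # Bayes' rule: P(HT|H) = P(H|HT) P(HT) / P(H), with P(H) the weighted sum over coin types
    evidence = sum(l * p for l, p in zip(lik_heads, prior))
    return lik_heads[1] * prior[1] / evidence
print(p_normal_given_heads([1/3, 1/3, 1/3]))    # flat prior -> 1/3
print(p_normal_given_heads([0.15, 0.8, 0.05]))  # informative prior -> ~0.73
###Output
_____no_output_____
###Markdown
With that arithmetic confirmed, the rest of the notebook sets up the land-proportion inference.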
###Code
import matplotlib.pyplot as pl
import numpy as np
from matplotlib import rcParams
from scipy.stats import beta, bernoulli, binom
pl.style.use('bmh')
rcParams['font.size'] = 16
rcParams['axes.labelsize'] = 18
rcParams['axes.titlesize'] = 20
rcParams['xtick.labelsize'] = 16
rcParams['ytick.labelsize'] = 16
%matplotlib inline
###Output
_____no_output_____
###Markdown
Creating a datasetBelow, I use the following variables:* $\theta_{true}$: the true land surface proportion; this is almost never seen in practice;* data: a random variate of 1000 samples made of 0s (water) and 1s (land), where the proportion of 1s is parameterized by $\theta_{true}$;* $\theta_{hyp}$: our set of hypotheses about $\theta_{true}$. Note that $\theta_{hyp}$ is a set of 101 hypotheses, where $\theta \in [0, 0.01,...,0.99, 1]$, and where $\theta=0$ means there is only water, $\theta=0.5$ means land and water are in equal proportions, and $\theta=1$ indicates a dry globe;* cdata is a cumulative sum of the data, used to simulate the cumulative effect of accruing data collection. I insert a 0 at the beginning of 'data' to make it easy to plot the prior as the first plot. The goal is to try to recover $\theta_{true}$.
###Code
# Set theta_true, the 'true' land proportion
theta_true = 0.29
# Sample 1000 times parameterized by theta_true
data = np.random.choice([0,1], size=1000, p=[1-theta_true, theta_true])
# cumulative sum. equals the number of land occurrences given a specific number of samples
cdata = np.cumsum(np.insert(data,0,0))
# Set up the hypotheses grid
theta_hyp = np.linspace(0, 1, 101)  # 101 hypotheses: 0, 0.01, ..., 0.99, 1
N = [0, 1, 2, 4, 8, 128, 512, 1000]
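# Added check (not in the original): the empirical land fraction in the simulated sample
# should sit close to theta_true, up to sampling noise.
print('empirical land fraction: %.3f' % data.mean())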
###Output
_____no_output_____
###Markdown
Using a Flat or Uninformative PriorThis expresses complete ignorance, thereby ascribing an equal probability to all 101 proportions allowed by the $\theta$ grid I use here. Note here the change of the distribution as the data comes rolling in (see graph legends).
###Code
f, axs = pl.subplots(ncols=2, nrows=4, figsize=(15, 15), sharex=True)
#sb.set(font_scale=1.5)
for n, ax in zip(N, axs.ravel()):
ocns = cdata[n]
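    # (Added comment) Conjugate Beta-Binomial update: with a flat Beta(1, 1) prior and
    # ocns land points out of n draws, the posterior below is Beta(1 + ocns, 1 + n - ocns).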
y = beta.pdf(theta_hyp, 1 + ocns, 1 + n - ocns)
ax.plot(theta_hyp, y , color='k', label='total obs. %d, land pts. %d' %(n, ocns))
ymax=5
lbl1=None
lbl2=None
if n == 0:
ymax=4
lbl1 = '50%'
lbl2 = 'real land proportion'
ax.set_title('flat prior')
ax.axvline(0.5, linestyle='--', label=lbl1)
ax.axvline(theta_true, linestyle='--', color='r', label=lbl2)
ax.legend(ncol=2)
ax.fill_between(theta_hyp, y, alpha=0.5);
ax.set_xlabel('Land Proportion')
ax.set_xlim((0, 1))
f.tight_layout()
###Output
_____no_output_____
###Markdown
The above shows how the absence of any prior knowledge, encoded by a flat prior (top left plot), results in wild swings in the first few iterations. This illustrates a common problem with uninformative priors or, for that matter outside of a Bayesian context, with the lack of a regularizing mechanism, when only scant data is available. The result then is often that the model gets "over-excited" by the particulars of a small dataset. This ceases to be a problem as the data collection continues and a more complete picture can surface. As this happens, high probability regions start to appear for specific values of the hypothesis. This mirrors the increase in certitude of where the actual proportion lies, thereby serving, if need be, to illustrate the natural relationship between uncertainty and probability. Using a Weakly Regularizing PriorFor the next scenario, I assume an awareness that the land proportion is significantly lower than the water proportion, but only a vague understanding of what that difference is. For this I use a weakly informative prior, where ignorance is reflected in the uncertainty around the initial guess.
###Code
f, axs = pl.subplots(ncols=2, nrows=4, figsize=(15, 15), sharex=True)
#sb.set(font_scale=1.5)
for n ,ax in zip(N, axs.ravel()):
ocns = cdata[n]
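    # (Added comment) Same conjugate update, now starting from a weakly informative
    # Beta(2, 5) prior that places most of its mass on land proportions below one half.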
y = beta.pdf(theta_hyp, 2 + ocns, 5 + n - ocns)
ax.plot(theta_hyp, y, color='k',
label='total obs. %d, land pts. %d' %(n, ocns))
lbl1=None
lbl2=None
if n == 0:
ymax=4
lbl1 = '50%'
lbl2 = 'real land proportion'
ax.set_title('weak_prior')
ax.axvline(0.5, linestyle='--', label=lbl1)
ax.axvline(theta_true, linestyle='--', color='r', label=lbl2)
ax.legend(ncol=2)
ax.fill_between(theta_hyp,y,alpha=0.5);
ax.set_xlabel('Land Proportion')
f.tight_layout()
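# Added sketch (not in the original post): numerical summary of the final posterior under the
# weak Beta(2, 5) prior, i.e. Beta(2 + k, 5 + n - k) with n = 1000 draws and k land points.
n_tot = N[-1]
k_tot = cdata[n_tot]
final_post = beta(2 + k_tot, 5 + n_tot - k_tot)
print('posterior mean: %.3f' % final_post.mean())
print('94%% credible interval: %.3f to %.3f' % tuple(final_post.ppf([0.03, 0.97])))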
###Output
_____no_output_____ |
appyters/HPO_Harmonizome_ETL/HPO.ipynb | ###Markdown
Harmonizome ETL: Human Phenotype Ontology (HPO) Created by: Charles Dai Credit to: Moshe SilversteinData Source: https://hpo.jax.org/app/
###Code
# appyter init
from appyter import magic
magic.init(lambda _=globals: _())
import sys
import os
from datetime import date
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import harmonizome.utility_functions as uf
import harmonizome.lookup as lookup
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Notebook Information
###Code
print('This notebook was run on:', date.today(), '\nPython version:', sys.version)
###Output
_____no_output_____
###Markdown
Initialization
###Code
%%appyter hide_code
{% do SectionField(
name='data',
title='Upload Data',
img='load_icon.png'
) %}
%%appyter code_eval
{% do DescriptionField(
name='description',
text='The example below was sourced from <a href="https://hpo.jax.org/app/" target="_blank">hpo.jax.org/app</a>. If clicking on the example does not work, it should be downloaded directly from the source website.',
section='data'
) %}
{% set df_file = FileField(
constraint='.*\.txt$',
name='phenotype_gene_list',
label='Phenotypes to Genes (txt)',
default='phenotype_to_genes.txt',
examples={
'phenotype_to_genes.txt': 'http://compbio.charite.de/jenkins/job/hpo.annotations/lastSuccessfulBuild/artifact/util/annotation/phenotype_to_genes.txt'
},
section='data'
) %}
###Output
_____no_output_____
###Markdown
Load Mapping Dictionaries
###Code
symbol_lookup, geneid_lookup = lookup.get_lookups()
###Output
_____no_output_____
###Markdown
Output Path
###Code
output_name = 'hpo'
path = 'Output/HPO'
if not os.path.exists(path):
os.makedirs(path)
###Output
_____no_output_____
###Markdown
Load Data
###Code
%%appyter code_exec
df = pd.read_csv(
{{df_file}},
skiprows=1, header=None, sep='\t',
usecols=[1, 3], index_col=1)
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Pre-process Data Get Relevant Data
###Code
df.index.name = 'Gene Symbol'
df.columns = ['HPO Term']
df.head()
###Output
_____no_output_____
###Markdown
Filter Data Map Gene Symbols to Up-to-date Approved Gene Symbols
###Code
df = uf.map_symbols(df, symbol_lookup, remove_duplicates=True)
df.shape
###Output
_____no_output_____
###Markdown
Analyze Data Create Binary Matrix
###Code
binary_matrix = uf.binary_matrix(df)
binary_matrix.head()
binary_matrix.shape
uf.save_data(binary_matrix, path, output_name + '_binary_matrix',
compression='npz', dtype=np.uint8)
###Output
_____no_output_____
###Markdown
Create Gene List
###Code
gene_list = uf.gene_list(binary_matrix, geneid_lookup)
gene_list.head()
gene_list.shape
uf.save_data(gene_list, path, output_name + '_gene_list',
ext='tsv', compression='gzip', index=False)
###Output
_____no_output_____
###Markdown
Create Attribute List
###Code
attribute_list = uf.attribute_list(binary_matrix)
attribute_list.head()
attribute_list.shape
uf.save_data(attribute_list, path, output_name + '_attribute_list',
ext='tsv', compression='gzip')
###Output
_____no_output_____
###Markdown
Create Gene and Attribute Set Libraries
###Code
uf.save_setlib(binary_matrix, 'gene', 'up', path, output_name + '_gene_up_set')
uf.save_setlib(binary_matrix, 'attribute', 'up', path,
output_name + '_attribute_up_set')
###Output
_____no_output_____
###Markdown
Create Attribute Similarity Matrix
###Code
attribute_similarity_matrix = uf.similarity_matrix(binary_matrix.T, 'jaccard', sparse=True)
attribute_similarity_matrix.head()
uf.save_data(attribute_similarity_matrix, path,
output_name + '_attribute_similarity_matrix',
compression='npz', symmetric=True, dtype=np.float32)
###Output
_____no_output_____
###Markdown
Create Gene Similarity Matrix
###Code
gene_similarity_matrix = uf.similarity_matrix(binary_matrix, 'jaccard', sparse=True)
gene_similarity_matrix.head()
uf.save_data(gene_similarity_matrix, path,
output_name + '_gene_similarity_matrix',
compression='npz', symmetric=True, dtype=np.float32)
###Output
_____no_output_____
###Markdown
Create Gene-Attribute Edge List
###Code
edge_list = uf.edge_list(binary_matrix)
uf.save_data(edge_list, path, output_name + '_edge_list',
ext='tsv', compression='gzip')
###Output
_____no_output_____
###Markdown
Create Downloadable Save File
###Code
uf.archive(path)
###Output
_____no_output_____
###Markdown
Harmonizome ETL: Human Phenotype Ontology (HPO) Created by: Charles Dai Credit to: Moshe SilversteinData Source: https://hpo.jax.org/app/
###Code
# appyter init
from appyter import magic
magic.init(lambda _=globals: _())
import sys
import os
from datetime import date
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import harmonizome.utility_functions as uf
import harmonizome.lookup as lookup
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Notebook Information
###Code
print('This notebook was run on:', date.today(), '\nPython version:', sys.version)
###Output
_____no_output_____
###Markdown
Initialization
###Code
%%appyter hide_code
{% do SectionField(
name='data',
title='Upload Data',
img='load_icon.png'
) %}
%%appyter code_eval
{% do DescriptionField(
name='description',
text='The example below was sourced from <a href="https://hpo.jax.org/app/" target="_blank">hpo.jax.org/app</a>. If clicking on the example does not work, it should be downloaded directly from the source website.',
section='data'
) %}
{% set df_file = FileField(
constraint='.*\.txt$',
name='phenotype_gene_list',
label='Phenotypes to Genes (txt)',
default='phenotype_to_genes.txt',
examples={
'phenotype_to_genes.txt': 'http://compbio.charite.de/jenkins/job/hpo.annotations/lastSuccessfulBuild/artifact/util/annotation/phenotype_to_genes.txt'
},
section='data'
) %}
###Output
_____no_output_____
###Markdown
Load Mapping Dictionaries
###Code
symbol_lookup, geneid_lookup = lookup.get_lookups()
###Output
_____no_output_____
###Markdown
Output Path
###Code
output_name = 'hpo'
path = 'Output/HPO'
if not os.path.exists(path):
os.makedirs(path)
###Output
_____no_output_____
###Markdown
Load Data
###Code
%%appyter code_exec
df = pd.read_csv(
{{df_file}},
skiprows=1, header=None, sep='\t',
usecols=[1, 3], index_col=1)
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Pre-process Data Get Relevant Data
###Code
df.index.name = 'Gene Symbol'
df.columns = ['HPO Term']
df.head()
###Output
_____no_output_____
###Markdown
Filter Data Map Gene Symbols to Up-to-date Approved Gene Symbols
###Code
df = uf.map_symbols(df, symbol_lookup, remove_duplicates=True)
df.shape
###Output
_____no_output_____
###Markdown
Analyze Data Create Binary Matrix
###Code
binary_matrix = uf.binary_matrix(df)
binary_matrix.head()
binary_matrix.shape
uf.save_data(binary_matrix, path, output_name + '_binary_matrix',
compression='npz', dtype=np.uint8)
###Output
_____no_output_____
###Markdown
Create Gene List
###Code
gene_list = uf.gene_list(binary_matrix, geneid_lookup)
gene_list.head()
gene_list.shape
uf.save_data(gene_list, path, output_name + '_gene_list',
ext='tsv', compression='gzip', index=False)
###Output
_____no_output_____
###Markdown
Create Attribute List
###Code
attribute_list = uf.attribute_list(binary_matrix)
attribute_list.head()
attribute_list.shape
uf.save_data(attribute_list, path, output_name + '_attribute_list',
ext='tsv', compression='gzip')
###Output
_____no_output_____
###Markdown
Create Gene and Attribute Set Libraries
###Code
uf.save_setlib(binary_matrix, 'gene', 'up', path, output_name + '_gene_up_set')
uf.save_setlib(binary_matrix, 'attribute', 'up', path,
output_name + '_attribute_up_set')
###Output
_____no_output_____
###Markdown
Create Attribute Similarity Matrix
###Code
attribute_similarity_matrix = uf.similarity_matrix(binary_matrix.T, 'jaccard', sparse=True)
attribute_similarity_matrix.head()
uf.save_data(attribute_similarity_matrix, path,
output_name + '_attribute_similarity_matrix',
compression='npz', symmetric=True, dtype=np.float32)
###Output
_____no_output_____
###Markdown
Create Gene Similarity Matrix
###Code
gene_similarity_matrix = uf.similarity_matrix(binary_matrix, 'jaccard', sparse=True)
gene_similarity_matrix.head()
uf.save_data(gene_similarity_matrix, path,
output_name + '_gene_similarity_matrix',
compression='npz', symmetric=True, dtype=np.float32)
###Output
_____no_output_____
###Markdown
Create Gene-Attribute Edge List
###Code
edge_list = uf.edge_list(binary_matrix)
uf.save_data(edge_list, path, output_name + '_edge_list',
ext='tsv', compression='gzip')
###Output
_____no_output_____
###Markdown
Create Downloadable Save File
###Code
uf.archive(path)
###Output
_____no_output_____
###Markdown
Harmonizome ETL: Human Phenotype Ontology (HPO) Created by: Charles Dai Credit to: Moshe SilversteinData Source: https://hpo.jax.org/app/
###Code
#%%appyter init
from appyter import magic
magic.init(lambda _=globals: _())
import sys
import os
from datetime import date
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import harmonizome.utility_functions as uf
import harmonizome.lookup as lookup
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Notebook Information
###Code
print('This notebook was run on:', date.today(), '\nPython version:', sys.version)
###Output
_____no_output_____
###Markdown
Initialization
###Code
%%appyter hide_code
{% do SectionField(
name='data',
title='Upload Data',
img='load_icon.png'
) %}
%%appyter code_eval
{% do DescriptionField(
name='description',
text='The example below was sourced from <a href="https://hpo.jax.org/app/" target="_blank">hpo.jax.org/app</a>. If clicking on the example does not work, it should be downloaded directly from the source website.',
section='data'
) %}
{% set df_file = FileField(
constraint='.*\.txt$',
name='phenotype_gene_list',
label='Phenotypes to Genes (txt)',
default='phenotype_to_genes.txt',
examples={
'phenotype_to_genes.txt': 'http://compbio.charite.de/jenkins/job/hpo.annotations/lastSuccessfulBuild/artifact/util/annotation/phenotype_to_genes.txt'
},
section='data'
) %}
###Output
_____no_output_____
###Markdown
Load Mapping Dictionaries
###Code
symbol_lookup, geneid_lookup = lookup.get_lookups()
###Output
_____no_output_____
###Markdown
Output Path
###Code
output_name = 'hpo'
path = 'Output/HPO'
if not os.path.exists(path):
os.makedirs(path)
###Output
_____no_output_____
###Markdown
Load Data
###Code
%%appyter code_exec
df = pd.read_csv(
{{df_file}},
skiprows=1, header=None, sep='\t',
usecols=[1, 3], index_col=1)
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Pre-process Data Get Relevant Data
###Code
df.index.name = 'Gene Symbol'
df.columns = ['HPO Term']
df.head()
###Output
_____no_output_____
###Markdown
Filter Data Map Gene Symbols to Up-to-date Approved Gene Symbols
###Code
df = uf.map_symbols(df, symbol_lookup, remove_duplicates=True)
df.shape
###Output
_____no_output_____
###Markdown
Analyze Data Create Binary Matrix
###Code
binary_matrix = uf.binary_matrix(df)
binary_matrix.head()
binary_matrix.shape
uf.save_data(binary_matrix, path, output_name + '_binary_matrix',
compression='npz', dtype=np.uint8)
###Output
_____no_output_____
###Markdown
Create Gene List
###Code
gene_list = uf.gene_list(binary_matrix, geneid_lookup)
gene_list.head()
gene_list.shape
uf.save_data(gene_list, path, output_name + '_gene_list',
ext='tsv', compression='gzip', index=False)
###Output
_____no_output_____
###Markdown
Create Attribute List
###Code
attribute_list = uf.attribute_list(binary_matrix)
attribute_list.head()
attribute_list.shape
uf.save_data(attribute_list, path, output_name + '_attribute_list',
ext='tsv', compression='gzip')
###Output
_____no_output_____
###Markdown
Create Gene and Attribute Set Libraries
###Code
uf.save_setlib(binary_matrix, 'gene', 'up', path, output_name + '_gene_up_set')
uf.save_setlib(binary_matrix, 'attribute', 'up', path,
output_name + '_attribute_up_set')
###Output
_____no_output_____
###Markdown
Create Attribute Similarity Matrix
###Code
attribute_similarity_matrix = uf.similarity_matrix(binary_matrix.T, 'jaccard', sparse=True)
attribute_similarity_matrix.head()
uf.save_data(attribute_similarity_matrix, path,
output_name + '_attribute_similarity_matrix',
compression='npz', symmetric=True, dtype=np.float32)
###Output
_____no_output_____
###Markdown
Create Gene Similarity Matrix
###Code
gene_similarity_matrix = uf.similarity_matrix(binary_matrix, 'jaccard', sparse=True)
gene_similarity_matrix.head()
uf.save_data(gene_similarity_matrix, path,
output_name + '_gene_similarity_matrix',
compression='npz', symmetric=True, dtype=np.float32)
###Output
_____no_output_____
###Markdown
Create Gene-Attribute Edge List
###Code
edge_list = uf.edge_list(binary_matrix)
uf.save_data(edge_list, path, output_name + '_edge_list',
ext='tsv', compression='gzip')
###Output
_____no_output_____
###Markdown
Create Downloadable Save File
###Code
uf.archive(path)
###Output
_____no_output_____ |
notebooks/06 - Gradient Boosting.ipynb | ###Markdown
Gradient Boosting
###Code
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, random_state=0)
gbrt = GradientBoostingClassifier(random_state=0)
gbrt.fit(X_train, y_train)
print("accuracy on training set: %f" % gbrt.score(X_train, y_train))
print("accuracy on test set: %f" % gbrt.score(X_test, y_test))
gbrt = GradientBoostingClassifier(random_state=0, max_depth=1)
gbrt.fit(X_train, y_train)
print("accuracy on training set: %f" % gbrt.score(X_train, y_train))
print("accuracy on test set: %f" % gbrt.score(X_test, y_test))
gbrt = GradientBoostingClassifier(random_state=0, learning_rate=0.01)
gbrt.fit(X_train, y_train)
print("accuracy on training set: %f" % gbrt.score(X_train, y_train))
print("accuracy on test set: %f" % gbrt.score(X_test, y_test))
gbrt = GradientBoostingClassifier(random_state=0, max_depth=1)
gbrt.fit(X_train, y_train)
plt.barh(range(cancer.data.shape[1]), gbrt.feature_importances_)
plt.yticks(range(cancer.data.shape[1]), cancer.feature_names);
ax = plt.gca()
ax.set_position([0.4, .2, .9, .9])
from xgboost import XGBClassifier
xgb = XGBClassifier()
xgb.fit(X_train, y_train)
print("accuracy on training set: %f" % xgb.score(X_train, y_train))
print("accuracy on test set: %f" % xgb.score(X_test, y_test))
from xgboost import XGBClassifier
xgb = XGBClassifier(n_estimators=1000)
xgb.fit(X_train, y_train)
print("accuracy on training set: %f" % xgb.score(X_train, y_train))
print("accuracy on test set: %f" % xgb.score(X_test, y_test))
###Output
_____no_output_____ |
Code_with_automation_testing_and_logging_not_finished.ipynb | ###Markdown
[View in Colaboratory](https://colab.research.google.com/github/plushvoxel/Project-Lernende-Agenten-colab/blob/master/Code_with_automation_testing_and_logging_not_finished.ipynb)
###Code
from __future__ import print_function
import math
from urllib import request
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
from google.colab import files
from tarfile import open as taropen
from struct import unpack
import os
import io
import glob
import math
import seaborn as sns
import csv
import time
import base64
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
request.urlretrieve("https://github.com/plushvoxel/Project-Lernende-Agenten-Data-Generator/blob/master/frequency.tar?raw=true", "frequency.tar")
tar = taropen("frequency.tar")
data = dict()
MODKEY = "mod"
for member in tar.getmembers():
modulation = member.name.split('_')[0]
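    # label derived from the file name: AM recordings get 0, every other modulation gets 1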
if modulation == "am":
modulation = 0
else:
modulation = 1
if not MODKEY in data:
data[MODKEY] = [modulation]
else:
data[MODKEY].append(modulation)
with tar.extractfile(member) as f:
buffer = f.read()
num_floats = len(buffer)//4
floats = unpack("f"*num_floats, buffer)
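        # the buffer holds interleaved float32 I/Q samples: even indices are I (in-phase), odd indices are Q (quadrature)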
i = floats[0::2]
q = floats[1::2]
for j in range(len(i)):
ikey = "i{:05d}".format(j)
qkey = "q{:05d}".format(j)
if not ikey in data:
data[ikey] = [i[j]]
else:
data[ikey].append(i[j])
if not qkey in data:
data[qkey] = [q[j]]
else:
data[qkey].append(q[j])
signal_dataframe = pd.DataFrame(data=data)
signal_dataframeReal = signal_dataframe.copy()
signal_dataframe = signal_dataframe.reindex(np.random.permutation(signal_dataframe.index))
print(signal_dataframe)
def parse_labels_and_features(dataset):
"""Extracts labels and features.
This is a good place to scale or transform the features if needed.
Args:
dataset: A Pandas `Dataframe`, containing the label on the first column and
monochrome pixel values on the remaining columns, in row major order.
Returns:
A `tuple` `(labels, features)`:
labels: A Pandas `Series`.
features: A Pandas `DataFrame`.
"""
labels = dataset[MODKEY]
# DataFrame.loc index ranges are inclusive at both ends.
features = dataset.iloc[:,1:4097]
return labels, features
def construct_feature_columns():
"""Construct the TensorFlow Feature Columns.
Returns:
A set of feature columns
"""
  # There are 4096 I/Q feature values for each example.
return set([tf.feature_column.numeric_column('features', shape=4096)])
def create_predict_input_fn(features, labels, batch_size, repeat_count = 1):
"""A custom input_fn for sending mnist data to the estimator for predictions.
Args:
features: The features to base predictions on.
labels: The labels of the prediction examples.
Returns:
A function that returns features and labels for predictions.
"""
def _input_fn():
raw_features = {"features": features.values}
raw_targets = np.array(labels)
ds = Dataset.from_tensor_slices((raw_features, raw_targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(repeat_count)
# Return the next batch of data.
feature_batch, label_batch = ds.make_one_shot_iterator().get_next()
return feature_batch, label_batch
return _input_fn
def create_training_input_fn(features, labels, batch_size, num_epochs=None, shuffle=True, repeat_count=1):
"""A custom input_fn for sending MNIST data to the estimator for training.
Args:
features: The training features.
labels: The training labels.
batch_size: Batch size to use during training.
Returns:
A function that returns batches of training features and labels during
training.
"""
def _input_fn(num_epochs=None, shuffle=True):
# Input pipelines are reset with each call to .train(). To ensure model
# gets a good sampling of data, even when number of steps is small, we
# shuffle all the data before creating the Dataset object
idx = np.random.permutation(features.index)
raw_features = {"features":features.reindex(idx)}
raw_targets = np.array(labels[idx])
ds = Dataset.from_tensor_slices((raw_features,raw_targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(repeat_count)
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
feature_batch, label_batch = ds.make_one_shot_iterator().get_next()
return feature_batch, label_batch
return _input_fn
def train_nn_classification_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets,
basepath,
filewriter):
"""Trains a neural network classification model for the MNIST digits dataset.
In addition to training, this function also prints training progress information,
a plot of the training and validation loss over time, as well as a confusion
matrix.
Args:
learning_rate: An `int`, the learning rate to use.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing the training features.
training_targets: A `DataFrame` containing the training labels.
validation_examples: A `DataFrame` containing the validation features.
validation_targets: A `DataFrame` containing the validation labels.
Returns:
The trained `DNNClassifier` object.
"""
periods = 10
# Caution: input pipelines are reset with each call to train.
# If the number of steps is small, your model may never see most of the data.
# So with multiple `.train` calls like this you may want to control the length
# of training with num_epochs passed to the input_fn. Or, you can do a really-big shuffle,
# or since it's in-memory data, shuffle all the data in the `input_fn`.
steps_per_period = steps / periods
# Create the input functions.
predict_training_input_fn = create_predict_input_fn(
training_examples, training_targets, batch_size)
predict_validation_input_fn = create_predict_input_fn(
validation_examples, validation_targets, batch_size)
training_input_fn = create_training_input_fn(
training_examples, training_targets, batch_size)
# Create feature columns.
feature_columns = [tf.feature_column.numeric_column('features', shape=4096)]
# Create a DNNClassifier object.
my_optimizer = tf.train.AdagradOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
classifier = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
n_classes=2,
hidden_units=hidden_units,
optimizer=my_optimizer,
config=tf.contrib.learn.RunConfig(keep_checkpoint_max=1)
)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("LogLoss error (on validation data):")
training_errors = []
validation_errors = []
for period in range (0, periods):
# Train the model, starting from the prior state.
classifier.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute probabilities.
training_predictions = list(classifier.predict(input_fn=predict_training_input_fn))
training_probabilities = np.array([item['probabilities'] for item in training_predictions])
training_pred_class_id = np.array([item['class_ids'][0] for item in training_predictions])
training_pred_one_hot = tf.keras.utils.to_categorical(training_pred_class_id,2)
validation_predictions = list(classifier.predict(input_fn=predict_validation_input_fn))
validation_probabilities = np.array([item['probabilities'] for item in validation_predictions])
validation_pred_class_id = np.array([item['class_ids'][0] for item in validation_predictions])
validation_pred_one_hot = tf.keras.utils.to_categorical(validation_pred_class_id,2)
# Compute training and validation errors.
training_log_loss = metrics.log_loss(training_targets, training_pred_one_hot)
validation_log_loss = metrics.log_loss(validation_targets, validation_pred_one_hot)
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, validation_log_loss))
# Add the loss metrics from this period to our list.
training_errors.append(training_log_loss)
validation_errors.append(validation_log_loss)
print("Model training finished.")
# Remove event files to save disk space.
_ = map(os.remove, glob.glob(os.path.join(classifier.model_dir, 'events.out.tfevents*')))
# Calculate final predictions (not probabilities, as above).
final_predictions = classifier.predict(input_fn=predict_validation_input_fn)
final_predictions = np.array([item['class_ids'][0] for item in final_predictions])
accuracy = metrics.accuracy_score(validation_targets, final_predictions)
print("Final accuracy (on validation data): %0.2f" % accuracy)
# Output a graph of loss metrics over periods.
plt.ylabel("LogLoss")
plt.xlabel("Periods")
plt.title("LogLoss vs. Periods")
plt.plot(training_errors, label="training")
plt.plot(validation_errors, label="validation")
plt.legend()
  buf = io.BytesIO()
  plt.savefig(buf, format="png")
  firstPic = base64.b64encode(buf.getvalue())  # capture the loss plot as base64 (plt.show() returns None)
  plt.show()
# Output a plot of the confusion matrix.
cm = metrics.confusion_matrix(validation_targets, final_predictions)
# Normalize the confusion matrix by row (i.e by the number of samples
# in each class).
cm_normalized = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis]
ax = sns.heatmap(cm_normalized, cmap="bone_r")
ax.set_aspect(1)
plt.title("Confusion matrix")
plt.ylabel("True label")
plt.xlabel("Predicted label")
  buf = io.BytesIO()
  plt.savefig(buf, format="png")
  secondPic = base64.b64encode(buf.getvalue())  # capture the confusion matrix plot as base64
  plt.show()
  #['Model', 'learning rate', 'training set size', 'validating set size', 'accuracy']
  filewriter.writerow([hidden_units, learning_rate, training_examples.shape[0], validation_examples.shape[0], accuracy, firstPic, secondPic])
return classifier
def train_automated(
training_set_size,
validating_set_size,
test_set_size,
learning_rate,
steps,
batch_size,
model,
basepath,
filewriter):
""" Function used for automate the process of trying new network configurations
Args:
training_set_size: An 'int', number of samples used for training
validating_set_size: An 'int', number of samples used for validation
test_set_size: An 'int', number of samples used for test
learning_rate: An `int`, the learning rate to use.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
model: A list of 'int', define the number of neurons in each hidden layer
Returns:
The trained `DNNClassifier` object.
"""
activation_function = "RELU" #@param ["RELU", "Sigmoid", "Tanh"]
regression = "None" #@param ["None", "L1", "L2"]
regression_rate = 3 #@param ["3", "1", "0.3", "0.1", "0.03", "0.01", "0.003", "0.001"] {type:"raw"}
filepath = ""
training_targets, training_examples = parse_labels_and_features(signal_dataframe[0:training_set_size])
validation_targets, validation_examples = parse_labels_and_features(signal_dataframe[training_set_size:(training_set_size+validating_set_size)])
test_targets, test_examples = parse_labels_and_features(signal_dataframe[(training_set_size+validating_set_size):(training_set_size+validating_set_size+test_set_size)])
nn_classification = train_nn_classification_model(
learning_rate=learning_rate,
steps=steps,
batch_size=batch_size,
hidden_units=model,
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets,
basepath = filepath,
filewriter = filewriter)
num_samples = signal_dataframe.shape[0]
learning_rate_steps = [0.001, 0.003, 0.01, 0.03]
data_set_distribution= [[60, 20, 20]]
basepath = "C:\\Users\\Ahmad\\Documents\\Studium\\WS1819\\LA\\" + time.strftime("%Y%m%d-%H%M%S")+".csv"
with open(basepath, 'w') as csvfile:
filewriter = csv.writer(csvfile, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL)
filewriter.writerow(['Model', 'learning rate', 'training set size', 'validating set size', 'accuracy'])
for x in range(1,4): # number of layers
for z in range(500,3000,500): #number of neurons per layer
for y in learning_rate_steps: # learning_rate used for training
for v in data_set_distribution: # try several dataset_distributions
training_set_size = int(num_samples * (v[0]/100))
validating_set_size = int(num_samples * (v[1]/100))
test_set_size = int(num_samples * (v[2]/100))
model = [0] * x
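                # pyramid architecture: the first hidden layer gets z neurons, each following layer half as many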
for a in range(x):
if a == 0:
model[a] = z
else:
model[a] = model[a-1]/2
print(training_set_size)
print(validating_set_size)
print(test_set_size)
print(y)
print(model)
train_automated(
training_set_size,
validating_set_size,
test_set_size,
y,
153,
10,
model,
basepath,
filewriter)
name = 'test.csv'
with open(name, 'w') as csvfile:
filewriter = csv.writer(csvfile, delimiter=';', quotechar='|', quoting=csv.QUOTE_MINIMAL)
filewriter.writerow(['Model', 'learning rate', 'training set size', 'validating set size', 'accuracy'])
files.download('test.csv')
###Output
_____no_output_____ |
_notebooks/Fb-Mobility-Data-Analysis.ipynb | ###Markdown
"বাংলাদেশের কোন জেলা লোকডাউন মেনে চলছে"- toc: false- branch: master- badges: true- comments: true- categories: [mobility, facebook, bangladesh]- image: images/district-stay-put.png- hide: false- search_exclude: true- metadata_key1: mobility- metadata_key2: covid-19
###Code
#hide_input
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import pandas as pd
import altair as alt
#hide_input
country_name = "Bangladesh"
country_code = "BGD"
df = pd.read_csv('movement-range-2020-07-10.txt', \
sep='\t', parse_dates=['ds'], low_memory=False)
bdf = df[df['country'] == country_code]
change_col = 'all_day_bing_tiles_visited_relative_change'
stay_put_col = 'all_day_ratio_single_tile_users'
bn_change_col_name = 'ফেব্রুয়ারী ২০২০ থেকে কতখানি পরিবর্তন'
bn_stay_put_col_name = 'কতোভাগ মানুষ বাড়িতে থাকছে'
#bdf.tail()
###Output
_____no_output_____
###Markdown
Facebook has released a [dataset](https://data.humdata.org/dataset/movement-range-maps) of how people's movement has changed because of the COVID-19 pandemic. The dataset was built by observing Facebook users' WiFi usage, looking at whether people are staying put at home or moving around town. This can help show whether the preventive measures are working as intended. This post presents the data for the 64 districts of Bangladesh. [Data](https://data.humdata.org/dataset/movement-range-maps)
###Code
#hide_input
bdf_latest_date = bdf[bdf['ds'] == bdf['ds'].max()]
bdf_latest_date = bdf_latest_date.sort_values(by=[stay_put_col], ascending=False)
color_var=alt.Color('Type:N', scale=alt.Scale(scheme='tableau20'), legend=None)
barchart = alt.Chart(bdf_latest_date).mark_bar().encode(
y=alt.Y('polygon_name', axis=alt.Axis(title='জেলা'), sort='-x'),
x=alt.X(stay_put_col, axis=alt.Axis(title=bn_stay_put_col_name, format='%')),
color=color_var,
tooltip=['polygon_name', stay_put_col],
).properties(
title = bn_stay_put_col_name
)
barchart.properties(width=600, height=600)
#barchart.save('images/district-stay-put.png')
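# Illustrative sketch (added for clarity, not part of the original post): country-wide daily
# means of the two metrics described above give a quick view of the overall trend.
bdf.groupby('ds')[[stay_put_col, change_col]].mean().tail()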
#hide_input
line = alt.Chart(bdf_latest_date).mark_line(color='red').encode(
x=alt.X('polygon_name', axis=alt.Axis(title='জেলা'), sort='y'),
y=alt.Y(change_col, axis=alt.Axis(title=bn_change_col_name, format='%')),
tooltip=['polygon_name', change_col],
).properties(
title="কোভিড-১৯ প্যান্ডেমিক শুরুর আগে ফেব্রুয়ারী ২০২০ এর সাথে তুলোনা"
)
line.properties(width=700, height=300)
#hide_input
# Prepare data
cities = ['Dhaka', 'Narayanganj', 'Chittagong', 'Sylhet', 'Rajshahi', 'Khulna', 'Barisal', 'Gazipur']
cities_data = []
for acity in cities:
city_data = bdf.loc[bdf['polygon_name'] == acity]
city_data = city_data.copy()
city_data['City'] = acity
cities_data.append(city_data)
cities_data = pd.concat(cities_data)
cities_data.tail()
color_var=alt.Color('City:N', scale=alt.Scale(scheme='tableau20'), legend=None)
chart = alt.Chart(cities_data).mark_line().encode(
x=alt.X('monthdate(ds):O', axis=alt.Axis(title='Date')),
y=alt.Y(stay_put_col, axis=alt.Axis(title=bn_stay_put_col_name, format='%')),
color=color_var,
tooltip=['monthdate(ds):O', 'City:N', stay_put_col],
).properties(
title=f"শহরগুলোতে {bn_stay_put_col_name}"
)
legend = alt.Chart(cities_data).mark_point().encode(
y=alt.Y('City:N', axis=alt.Axis(orient='right')),
color=color_var
)
chart.properties(width=600, height=300)|legend
#hide_input
# getting data from https://data.humdata.org/dataset/movement-range-maps
#!wget https://data.humdata.org/dataset/c3429f0e-651b-4788-bb2f-4adbf222c90e/resource/31ca909c-10d9-458a-8720-88b54b3e3627/download/movement-range-data-2020-06-22.zip
#!ls
#!yes "yes" | unzip movement-range-data-2020-06-22.zip
# caution
#!rm -rf movement-range-data-2020-06-10 && rm -rf movement-range-data-2020-06-10.zip
#!head -n 30 movement-range-2020-06-22.txt
###Output
_____no_output_____ |
Diamonds-Final.ipynb | ###Markdown
* **carat** - Carat weight of the diamond.* **cut** - Describes the cut quality of the diamond (from the best to worst: Ideal, Premium, Very Good, Good and Fair).* **color** - Color of the diamond (from the best to worst: D, E, F, G, H, I and J).* **clarity** - A measurement of how clear the diamond is (from the best to worst: IF, VVS1, VVS2, VS1, VS2, SI1, SI2 and I1).* **depth** - The height of a diamond, measured from the culet to the table, divided by the average girdle diameter (%).* **table** - The width of a diamond table expressed as a percentage of the average diameter (%).* **x** - Diamond length (mm).* **y** - Diamond width (mm).* **z** - Diamond depth (mm).* **price** - Diamond price.
###Code
df.info()
df.shape
###Output
_____no_output_____
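###Markdown
As a quick, purely illustrative sanity check on the `depth` definition above (assuming `depth` is the height `z` divided by the average girdle diameter `(x + y) / 2`, expressed in %), we can recompute it from `x`, `y` and `z` and compare it with the reported column; rows where `x` or `y` is 0 (handled later) will show up as extreme values here. This cell is a sketch added for clarity and is not part of the original analysis.
###Code
approx_depth = 100 * df["z"] / ((df["x"] + df["y"]) / 2)
(df["depth"] - approx_depth).abs().describe()
###Output
_____no_output_____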
###Markdown
Missing Values
###Code
df.isnull().sum()
if df.isnull().sum().any() == False:
print("There are no missing values")
else:
print("There are missing values")
###Output
There are no missing values
###Markdown
Checking for duplicate rows and removing unnecessary columns
###Code
# Dropping "Unnamed: 0" column
df = df.drop(["Unnamed: 0"], axis = 1)
# Checking for duplicate rows
print("number of duplicate rows: ", df.duplicated().sum())
df.head()
###Output
_____no_output_____
###Markdown
High Level Information
###Code
df.describe().T
#df.describe()
#Numerical Data
# Categorical data
df.describe(include = "O").T
format_dict = {"carat" : "{:.2f}", "depth" : "{:.1f}", "table" : "{:.1f}", "x" : "{:.2f}", "y" : "{:.2f}", "z" : "{:.2f}"}
df_zero = df.loc[(df["x"] == 0) | (df["y"] == 0) | (df["z"] == 0)]
df_zero.style.apply(lambda x: ["background: yellow" if n == 0 else "" for n in x], axis = 1).format(format_dict)
###Output
_____no_output_____
###Markdown
We know that length, width and depth cannot be 0, so these entries are wrong. Let's treat these values.
###Code
# Transforming them into NaN values
df.loc[df["x"] == 0, "x"] = np.nan
df.loc[df["y"] == 0, "y"] = np.nan
df.loc[df["z"] == 0, "z"] = np.nan
# Seeing the number of the new missing values
df[["x", "y", "z"]].isnull().sum()
###Output
_____no_output_____
###Markdown
After the transformation we see the count of missing values; now let's treat them. Checking Correlations
###Code
df.corr().T
sns.heatmap(df.corr())
def get_corr(col):
return df.corr().unstack()[col].sort_values(ascending = False)
print("x correlations\n\n{0}\n\n{3}\n\ny correlations\n\n{1}\n\n{3}\n\nz correlations\n\n{2}".format(get_corr("x"), get_corr("y"), get_corr("z"), 25*"-"))
###Output
x correlations
x 1.000000
carat 0.977765
z 0.975435
y 0.974933
price 0.887227
table 0.196130
depth -0.025097
dtype: float64
-------------------------
y correlations
y 1.000000
x 0.974933
z 0.956744
carat 0.953989
price 0.867870
table 0.184519
depth -0.029142
dtype: float64
-------------------------
z correlations
z 1.000000
x 0.975435
carat 0.961048
y 0.956744
price 0.868206
table 0.152483
depth 0.095023
dtype: float64
###Markdown
Imputing Missing Values: x, y and z had values of 0
###Code
def missing_values_imputation(col):
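    # impute each missing value in `col` with the median of that column among diamonds of the same carat weight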
carat = df.groupby(["carat"])[col].median()
index_list = list(df.loc[df[col].isnull() == True].sort_values(by = "carat", ascending = False).index)
for i in index_list:
carat_value = df.loc[i, "carat"]
new_value = carat[carat_value]
df.loc[i, col] = new_value
print("carat: {0} / median {1} value: {2}".format(carat_value, col, new_value))
return df.iloc[index_list].style.applymap(lambda x: "background-color: limegreen", subset = col).format(format_dict)
missing_values_imputation("x")
missing_values_imputation("y")
missing_values_imputation("z")
for c in ['carat', 'depth', 'table', 'price', 'x', 'y', 'z']:
plt.figure(figsize=(10, 5))
sns.boxplot(df[c])
plt.title(c, fontsize=20)
plt.show()
###Output
E:\anaconda\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
###Markdown
Plotting Regression Fit with respect to target variable to visualize and find outliers
###Code
sns.set_style("whitegrid")
c = "darkturquoise"
#c = "lightsalmon"
#c = "crimson"
plt.figure(figsize = (12, 18))
plt.subplot(3, 2, 1)
plt.title("price X carat")
sns.regplot(data = df, x = "price", y = "carat", color = c, line_kws = {"color" : "black"})
plt.subplot(3, 2, 2)
plt.title("price X depth")
sns.regplot(data = df, x = "price", y = "depth", color = c, line_kws = {"color" : "black"})
plt.subplot(3, 2, 3)
plt.title("price X table")
sns.regplot(data = df, x = "price", y = "table", color = c, line_kws = {"color" : "black"})
plt.subplot(3, 2, 4)
plt.title("price X x")
sns.regplot(data = df, x = "price", y = "x", color = c, line_kws = {"color" : "black"})
plt.subplot(3, 2, 5)
plt.title("price X y")
sns.regplot(data = df, x = "price", y = "y", color = c, line_kws = {"color" : "black"})
plt.subplot(3, 2, 6)
plt.title("price X z")
sns.regplot(data = df, x = "price", y = "z", color = c, line_kws = {"color" : "black"})
plt.show()
###Output
_____no_output_____
###Markdown
From the plots above we see that outliers exist in: * **Price vs y** * **Price vs z**
###Code
def highlight_outliers(outliers, col):
outliers_index = outliers.index
i = pd.IndexSlice[outliers_index, col]
return outliers.style.applymap(lambda x: "background-color: red", subset = i).format(format_dict)
###Output
_____no_output_____
###Markdown
* **Price vs y**
###Code
df_outliers = df.loc[df["y"] > 30].copy()
highlight_outliers(df_outliers, "y")
###Output
_____no_output_____
###Markdown
* **Price vs z**
###Code
df_outliers = df.loc[df["z"] > 30].copy()
highlight_outliers(df_outliers, "z")
# Transforming them into NaN values
df.loc[df["y"] > 30, "y"] = np.nan
df.loc[df["z"] > 30, "z"] = np.nan
###Output
_____no_output_____
###Markdown
Let us impute these outliers
###Code
missing_values_imputation("y")
missing_values_imputation("z")
###Output
carat: 0.51 / median z value: 3.17
###Markdown
Checking for other outliers * **price vs depth**
###Code
df_outliers = df.loc[(df["depth"] > 75) | (df["depth"] < 45)].copy()
highlight_outliers(df_outliers, "depth")
###Output
_____no_output_____
###Markdown
They are not absurd values ,so let us not impute them and keep actual values for analysis * **price vs table**
###Code
df_outliers = df.loc[(df["table"] > 90) | (df["table"] < 45)].copy()
highlight_outliers(df_outliers, "table")
###Output
_____no_output_____
###Markdown
We see it to be similar with depth,therefore let's keep it actual. * **Price vs z**
###Code
df_outliers = df.loc[df["z"] < 2].copy()
highlight_outliers(df_outliers, "z")
###Output
_____no_output_____
###Markdown
We see that there are few values same as carat,which is not right,so let us impute them
###Code
df.loc[df["carat"] == df["z"], ["carat", "z"]]
# Transforming them into NaN values
df.loc[df["z"] < 2, "z"] = np.nan
missing_values_imputation("z")
###Output
carat: 1.53 / median z value: 4.56
carat: 1.41 / median z value: 4.44
carat: 1.07 / median z value: 4.05
###Markdown
Data Visualization Bar Plot between cut and price
###Code
plt.figure(figsize=(10, 5))
sns.barplot(x='cut', y='price', data=df)
plt.title('Relation b/w cut and price', fontsize=20);
plt.xlabel('cut', fontsize=15)
plt.ylabel('price', fontsize=15);
df.corr()['price'].sort_values(ascending=False)[1:]
''' color category '''
color_label = df.color.value_counts()
plt.figure(figsize=(10, 5))
sns.barplot(x=color_label.index, y=color_label);
plt.ylabel('count', fontsize=15)
plt.xlabel('color', fontsize=15);
###Output
E:\anaconda\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
###Markdown
Bar Plot between clarity and price
###Code
plt.figure(figsize=(10, 5))
sns.barplot(x='clarity', y='price', data=df);
plt.title('Relation b/w clarity and price', fontsize=20)
plt.xlabel('clarity', fontsize=15)
plt.ylabel('price', fontsize=15);
cut_palette = ["darkturquoise", "lightskyblue", "paleturquoise", "lightcyan", "azure"]
color_palette = ["cadetblue", "deepskyblue", "darkturquoise", "lightskyblue", "paleturquoise", "lightcyan", "azure"]
clarity_palette = ["cadetblue", "deepskyblue", "darkturquoise", "lightskyblue", "paleturquoise", "lightcyan", "azure", "ghostwhite"]
df["cut"] = pd.Categorical(df["cut"], categories = ["Ideal", "Premium", "Very Good", "Good", "Fair"], ordered = True)
df["color"] = pd.Categorical(df["color"], categories = ["D", "E", "F", "G", "H", "I", "J"], ordered = True)
df["clarity"] = pd.Categorical(df["clarity"], categories = ["IF", "VVS1", "VVS2", "VS1", "VS2", "SI1", "SI2", "I1"], ordered = True)
df_cut = df["cut"].value_counts()
plt.figure(figsize = (7,7))
plt.pie(data = df_cut, x = df_cut.values, labels = df_cut.index, autopct = "%.2f%%", pctdistance = 0.8, colors = cut_palette )
circle = plt.Circle(xy = (0, 0), radius = 0.5, facecolor = 'white')
plt.gca().add_artist(circle)
plt.title("% of each Diamond Cut Quality", size = 16)
plt.show()
###Output
_____no_output_____
###Markdown
Ideal > Premium > Very Good > Good > Fair
###Code
position = 0
for cut in df_cut:
print("{0} quality cuts: {1}".format(df_cut.index[position], df_cut.values[position]))
position += 1
###Output
Ideal quality cuts: 21551
Premium quality cuts: 13791
Very Good quality cuts: 12082
Good quality cuts: 4906
Fair quality cuts: 1610
###Markdown
We can come to a conclusion that there are more number of high and well cut diamonds. **Checking cut with regards to price**
###Code
plt.figure(figsize = (9, 6))
sns.barplot(data = df, x = "cut", y = "price", color = c)
plt.title("Relation between Cut and Price", size = 16)
plt.show()
###Output
_____no_output_____
###Markdown
Here we see an unusual pattern; let us check the correlations.
###Code
get_corr("price")
###Output
_____no_output_____
###Markdown
**Carat is the most important factor with regard to price; diamonds with Ideal cuts should therefore have a lower average carat value.**
###Code
df.groupby(["cut"])["carat"].mean()
###Output
_____no_output_____
###Markdown
Here we see that the mean carat of Ideal-cut diamonds is very low compared to the others, which explains the effect seen in the graph.
###Code
df_color = df["color"].value_counts()
plt.figure(figsize = (7,7))
plt.pie(data = df_color, x = df_color.values, labels = df_color.index, autopct = "%.2f%%", pctdistance = 0.8, startangle = 40, colors = color_palette)
circle = plt.Circle(xy = (0, 0), radius = 0.5, facecolor = 'white')
plt.gca().add_artist(circle)
plt.title("% of each Diamond Color", size = 16)
plt.show()
###Output
_____no_output_____
###Markdown
D > E > F > G > H > I > J
###Code
position = 0
for color in df_color:
print("{0} color diamonds: {1}".format(df_color.index[position], df_color.values[position]))
position += 1
plt.figure(figsize = (9, 6))
sns.barplot(data = df, x = "color", y = "price", color = c)
plt.title("Relation between Diamond Color and Price", size = 16)
plt.show()
###Output
_____no_output_____
###Markdown
Again, the mean price of diamonds with better colors is lower than that of diamonds with worse colors.
###Code
df.groupby(["color"])["carat"].mean()
###Output
_____no_output_____
###Markdown
We observe the same here: D has the lowest mean carat value.
###Code
df_clarity = df["clarity"].value_counts()
plt.figure(figsize = (7,7))
plt.pie(data = df_clarity, x = df_clarity.values, labels = df_clarity.index, autopct = "%.2f%%", pctdistance = 0.8, colors = clarity_palette)
circle = plt.Circle(xy = (0, 0), radius = 0.5, facecolor = 'white')
plt.gca().add_artist(circle)
plt.title("% of each Diamond Clarity", size = 16)
plt.show()
###Output
_____no_output_____
###Markdown
IF > VVS1 > VVS2 > VS1 > VS2 > SI1 > SI2 > I1
###Code
position = 0
for color in df_clarity:
print("{0} clarity diamonds: {1}".format(df_clarity.index[position], df_clarity.values[position]))
position += 1
plt.figure(figsize = (9, 6))
sns.barplot(data = df, x = "clarity", y = "price", color = c)
plt.title("Relation between Diamond Clarity and Price", size = 16)
plt.show()
df.groupby(["clarity"])["carat"].mean()
###Output
_____no_output_____
###Markdown
We observe the same again: VVS1 has the lowest mean carat. DATA PREPROCESSING TO IMPLEMENT VARIOUS MODELS
###Code
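# Illustrative sketch (an aside added for clarity; it is not used by the pipeline below).
# cut, color and clarity were declared above as *ordered* Categoricals, so their quality
# ranking could also be encoded directly via .cat.codes; LabelEncoder below assigns codes
# alphabetically instead.
ordinal_codes = df[["cut", "color", "clarity"]].apply(lambda s: s.cat.codes)
ordinal_codes.head()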
label_cut = LabelEncoder()
label_color = LabelEncoder()
label_clarity = LabelEncoder()
df['cut'] = label_cut.fit_transform(df['cut'])
df['color'] = label_color.fit_transform(df['color'])
df['clarity'] = label_clarity.fit_transform(df['clarity'])
df.head()
X = df.drop(["price"], axis = 1).copy()
y = df["price"].copy()
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.1, random_state=40)
###Output
_____no_output_____
###Markdown
Linear Regression
###Code
regressor = LinearRegression()
regressor.fit(X_train, y_train)
prediction = regressor.predict(X_test)
rmse_Lreg = np.sqrt(mean_squared_error(y_test, prediction))
print('RMSE value is = {}'.format(rmse_Lreg))
r2_Lreg = r2_score(y_test, prediction)
print('R-squared value is {}'.format(r2_Lreg))
###Output
RMSE value is = 1376.6250458046643
R-squared value is 0.8841897788553627
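###Markdown
To make the two reported metrics concrete, the cell below recomputes them by hand for the linear model (an illustrative sketch, not part of the original notebook): RMSE is the square root of the mean squared residual, and R-squared is one minus the ratio of the residual sum of squares to the total sum of squares.
###Code
manual_rmse = np.sqrt(np.mean((y_test - prediction) ** 2))
manual_r2 = 1 - np.sum((y_test - prediction) ** 2) / np.sum((y_test - np.mean(y_test)) ** 2)
print('Manual RMSE = {:.4f}, manual R-squared = {:.4f}'.format(manual_rmse, manual_r2))
###Output
_____no_output_____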
###Markdown
Random Forest Regressor
###Code
RFreg_model = RandomForestRegressor()
RFreg_model.fit(X_train,y_train)
prediction2 = RFreg_model.predict(X_test)
rmse_RFreg = np.sqrt(mean_squared_error(y_test, prediction2))
print('RMSE value is = {}'.format(rmse_RFreg))
r2_RFreg = r2_score(y_test, prediction2)
print('R-squared value is {}'.format(r2_RFreg))
###Output
RMSE value is = 522.9692644787389
R-squared value is 0.9832864814086221
###Markdown
Polynomial Regressor
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
poly_reg = PolynomialFeatures(degree = 4)
X_poly = poly_reg.fit_transform(X_train)
regressor = LinearRegression()
regressor.fit(X_poly, y_train)
prediction3 = regressor.predict(poly_reg.transform(X_test))
ploy_reg = np.sqrt(mean_squared_error(y_test, prediction3))
print('RMSE value is = {}'.format(ploy_reg))
r2_poly_reg = r2_score(y_test, prediction3)
print('R-squared value is {}'.format(r2_poly_reg))
###Output
RMSE value is = 977.721538221508
R-squared value is 0.9415821025119236
###Markdown
Decision Tree Regressor
###Code
regressor1 = DecisionTreeRegressor(random_state = 0)
regressor1.fit(X_train, y_train)
prediction4 = regressor1.predict(X_test)
dt_reg = np.sqrt(mean_squared_error(y_test, prediction4))
print('RMSE value is = {}'.format(dt_reg))
r2_dt_reg = r2_score(y_test, prediction4)
print('R-squared value is {}'.format(r2_dt_reg))
###Output
RMSE value is = 746.5701536975055
R-squared value is 0.9659390462125902
###Markdown
XGB Regressor
###Code
xgbr = XGBRegressor(learning_rate = 0.1, n_estimators = 200, random_state = SEED)
xgbr.fit(X_train,y_train)
prediction5 = xgbr.predict(X_test)
xgbr_reg = np.sqrt(mean_squared_error(y_test, prediction5))
print('RMSE value is = {}'.format(xgbr_reg))
r2_xgbr_reg = r2_score(y_test, prediction5)
print('R-squared value is {}'.format(r2_xgbr_reg))
Result= pd.DataFrame({'Actual Price':y_test,'Predicted Price By LinearRegression':prediction,'Predicted Price By RandomForest':prediction2,'Predicted Price By PolynomialRegressor':prediction3,'Predicted Price By DecisionTreeRegressor':prediction4,'Predicted Price By XgbRegressor':prediction5})
Result
Result.head()
###Output
_____no_output_____ |
DataModeling_with_Cassandra/Project_1B_ Project_Template.ipynb | ###Markdown
Part I. ETL Pipeline for Pre-Processing the Files PLEASE RUN THE FOLLOWING CODE FOR PRE-PROCESSING THE FILES Import Python packages
###Code
# Import Python packages
import pandas as pd
import cassandra
import re
import os
import glob
import numpy as np
import json
import csv
###Output
_____no_output_____
###Markdown
Creating list of filepaths to process original event csv data files
###Code
# checking your current working directory
print(os.getcwd())
# Get your current folder and subfolder event data
filepath = os.getcwd() + '/event_data'
# Create a for loop to create a list of files and collect each filepath
for root, dirs, files in os.walk(filepath):
# join the file path and roots with the subdirectories using glob
file_path_list = glob.glob(os.path.join(root,'*'))
print(file_path_list)
###Output
_____no_output_____
###Markdown
Processing the files to create the data file csv that will be used for Apache Casssandra tables
###Code
full_data_rows_list = []
for f in file_path_list:
with open(f, 'r', encoding = 'utf8', newline='') as csvfile:
csvreader = csv.reader(csvfile)
next(csvreader)
for line in csvreader:
full_data_rows_list.append(line)
print(len(full_data_rows_list))
print(full_data_rows_list)
csv.register_dialect('myDialect', quoting=csv.QUOTE_ALL, skipinitialspace=True)
with open('event_datafile_new.csv', 'w', encoding = 'utf8', newline='') as f:
writer = csv.writer(f, dialect='myDialect')
writer.writerow(['artist','firstName','gender','itemInSession','lastName','length',\
'level','location','sessionId','song','userId'])
for row in full_data_rows_list:
if (row[0] == ''):
continue
writer.writerow((row[0], row[2], row[3], row[4], row[5], row[6], row[7], row[8], row[12], row[13], row[16]))
with open('event_datafile_new.csv', 'r', encoding = 'utf8') as f:
print(sum(1 for line in f))
###Output
_____no_output_____
###Markdown
Part II. Complete the Apache Cassandra coding portion of your project. Now you are ready to work with the CSV file titled event_datafile_new.csv, located within the Workspace directory. The event_datafile_new.csv contains the following columns: - artist - firstName of user- gender of user- item number in session- last name of user- length of the song- level (paid or free song)- location of the user- sessionId- song title- userId The image below is a screenshot of how the denormalized data should appear in the **event_datafile_new.csv** after the code above is run: Creating a Cluster
###Code
from cassandra.cluster import Cluster
cluster = Cluster(['127.0.0.1'])
session = cluster.connect()
print('Connected')
###Output
_____no_output_____
###Markdown
Create Keyspace
###Code
try:
session.execute("""
CREATE KEYSPACE IF NOT EXISTS project2
WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 }
""")
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Set Keyspace
###Code
try:
session.set_keyspace('project2')
print('KeySpace set')
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Create queries to ask the following three questions of the data 1. Give me the artist, song title and song's length in the music app history that was heard during sessionId = 338, and itemInSession = 4 2. Give me only the following: name of artist, song (sorted by itemInSession) and user (first and last name) for userid = 10, sessionid = 182 3. Give me every user name (first and last) in my music app history who listened to the song 'All Hands Against His Own' QUERY 1. **Give me the artist, song title and song's length in the music app history that was heard during sessionId = 338, and itemInSession = 4** To obtain a result for this query we need to create a table called session_library that contains the columns session_id, item_in_session, artist, song and length. session_id is the partition key used to distribute the data, and the rows are clustered by item_in_session. item_in_session has repeated values, so it provides useful clustering and keeps queries on this table performant. Once the table is created, we will insert values into it and execute the query.
###Code
table_1 = "CREATE TABLE IF NOT EXISTS session_library"
table_1 = table_1 + ("(session_id int, item_in_session int, artist text, song text, length float, PRIMARY KEY(session_id, item_in_session))")
try:
session.execute(table_1)
except Exception as e:
print(e)
file = 'event_datafile_new.csv'
with open(file, encoding = 'utf8') as f:
csvreader = csv.reader(f)
next(csvreader)
for line in csvreader:
query = "INSERT INTO session_library(session_id, item_in_session, artist, song, length)"
query = query + "VALUES(%s, %s, %s, %s, %s)"
session.execute(query, (int(line[8]), int(line[3]), str(line[0]), str(line[9]), float(line[5])))
###Output
_____no_output_____
###Markdown
Do a SELECT to verify that the data have been inserted into each table
###Code
table_1_query = """
SELECT artist, song, length FROM session_library
WHERE session_id = %s AND item_in_session = %s
"""
try:
rows = session.execute(table_1_query, (338, 4))
except Exception as e:
print(e)
for row in rows:
print(row.artist, row.song, row.length)
###Output
_____no_output_____
###Markdown
Query 2** Give me only the following: name of artist, song (sorted by itemInSession) and user (first and last name) for userid = 10, sessionid = 182 **We will create a table called user_sessions, containing the columns session_id, user_id, artist, song, first_name, last_name and item_in_session. Both first_name and last_name are included to build the user's full name. session_id and user_id together form a composite partition key; this prevents the sessions belonging to one user from being spread over multiple nodes, which would cause performance issues as the data scales. item_in_session is the clustering key and is added to order the records in the table. Once the table is created, we will insert values into it and execute the query.
###Code
table_2 = "CREATE TABLE IF NOT EXISTS user_sessions"
table_2 = table_2 + "(session_id int, user_id int, item_in_session int, artist text, song text, first_name text, last_name text, PRIMARY KEY((session_id, user_id), item_in_session)) "
try:
session.execute(table_2)
except Exception as e:
print(e)
file = 'event_datafile_new.csv'
with open(file, encoding = 'utf8') as f:
csvreader = csv.reader(f)
next(csvreader)
for line in csvreader:
query = "INSERT INTO user_sessions(session_id, user_id, item_in_session, artist, song, first_name, last_name)"
query = query + "VALUES(%s, %s, %s, %s, %s, %s, %s)"
session.execute(query, (int(line[8]), int(line[10]), int(line[3]), str(line[0]), str(line[9]), str(line[1]), str(line[4])))
table_2_query = """
SELECT artist, song, first_name, last_name FROM user_sessions
WHERE user_id = %s AND session_id = %s
"""
try:
rows = session.execute(table_2_query, (10, 182))
except Exception as e:
print(e)
for row in rows:
print(row.artist, row.song, row.first_name, row.last_name)
###Output
_____no_output_____
###Markdown
Query 3** Give me every user name (first and last) in my music app history who listened to the song 'All Hands Against His Own' **We will create a table called listener_history that contains the columns song, first_name, last_name and user_id. song is the partition key, and user_id is needed as a clustering key: without it, users listening to the same song would keep overwriting each other, because song alone is not unique. Once the table is created, we will insert values into it and execute the query.
###Code
table_3 = "CREATE TABLE IF NOT EXISTS listener_history"
table_3 = table_3 + ("(song text, user_id int, first_name text, last_name text, PRIMARY KEY(song, user_id))")
try:
session.execute(table_3)
except Exception as e:
print(e)
file = 'event_datafile_new.csv'
with open(file, encoding = 'utf8') as f:
csvreader = csv.reader(f)
next(csvreader)
for line in csvreader:
query = "INSERT INTO listener_history(song, user_id, first_name, last_name)"
query = query + "VALUES(%s, %s, %s, %s)"
session.execute(query, (line[9], int(line[10]),line[1], line[4]))
table_3_query = """
SELECT first_name, last_name FROM listener_history WHERE song = 'All Hands Against His Own'
"""
try:
rows = session.execute(table_3_query)
except Exception as e:
print(e)
for row in rows:
print(row.first_name, row.last_name)
###Output
_____no_output_____
###Markdown
Drop the tables before closing out the sessions
###Code
session.execute('DROP TABLE IF EXISTS session_library')
session.execute('DROP TABLE IF EXISTS user_sessions')
session.execute('DROP TABLE IF EXISTS listener_history')
###Output
_____no_output_____
###Markdown
Close the session and cluster connection
###Code
session.shutdown()
cluster.shutdown()
###Output
_____no_output_____ |
.ipynb_checkpoints/example-checkpoint.ipynb | ###Markdown
Setup If you are running this generator locally (i.e. in a Jupyter notebook under conda), just make sure you have installed:- RDKit- DeepChem 2.5.0 & above- Tensorflow 2.4.0 & above Then, please skip the following part and continue from `Data Preparations`. To increase efficiency, we recommend running this molecule generator in Colab. In that case, we'll first need to run the following lines of code, which will download conda with the DeepChem environment in Colab.
###Code
#!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
#import conda_installer
#conda_installer.install()
#!/root/miniconda/bin/conda info -e
#!pip install --pre deepchem
#import deepchem
#deepchem.__version__
###Output
_____no_output_____
###Markdown
Data Preparations Now we are ready to import some useful functions/packages, along with our model. Import Data
###Code
import model##our model
from rdkit import Chem
from rdkit.Chem import AllChem
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import deepchem as dc
###Output
_____no_output_____
###Markdown
Then, we are ready to import our dataset for training. Here, for demonstration, we'll be using a dataset from an in-vitro assay that detects inhibition of the SARS-CoV 3CL protease via fluorescence. The dataset is originally from [PubChem AID1706](https://pubchem.ncbi.nlm.nih.gov/bioassay/1706), previously processed by the [JClinic AIcure](https://www.aicures.mit.edu/) team at MIT into this [binarized label form](https://github.com/yangkevin2/coronavirus_data/blob/master/data/AID1706_binarized_sars.csv).
###Code
df = pd.read_csv('AID1706_binarized_sars.csv')
df.head()
df.groupby('activity').count()
###Output
_____no_output_____
###Markdown
Observe the data above: it contains a 'smiles' column, which holds the SMILES representation of the molecules. There is also an 'activity' column, which is the label specifying whether that molecule is considered a hit for the protein. Here, we only need the 405 molecules considered as hits, and we'll be extracting features from them to generate new molecules that may also be hits.
###Code
true = df[df['activity']==1]
###Output
_____no_output_____
###Markdown
Set Minimum Length for molecules Since we'll be using a graph neural network, it might be more helpful and efficient if our graph data are all of the same size; thus, we'll eliminate from the training set the molecules that are shorter (i.e. lacking enough atoms) than our desired minimum size.
###Code
num_atoms = 6 #here the minimum length of molecules is 6
input_df = true['smiles']
df_length = []
for _ in input_df:
df_length.append(Chem.MolFromSmiles(_).GetNumAtoms() )
true['length'] = df_length #create a new column containing each molecule's length
true = true[true['length']>num_atoms] #Here we leave only the ones longer than 6
input_df = true['smiles']
input_df_smiles = input_df.apply(Chem.MolFromSmiles) #convert the smiles representations into rdkit molecules
###Output
_____no_output_____
###Markdown
Now, we are ready to apply the `featurizer` function to our molecules to convert them into graphs with nodes and edges for training.
###Code
#input_df = input_df.apply(Chem.MolFromSmiles)
train_set = input_df_smiles.apply( lambda x: model.featurizer(x,max_length = num_atoms))
train_set
nodes_train, edges_train = list(zip(*train_set) )
###Output
_____no_output_____
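###Markdown
To make the idea of "graphs with nodes and edges" concrete, here is a tiny, self-contained sketch of what a graph featurizer for a molecule could look like, using the RDKit and NumPy imports above: a vector of atomic numbers as node features and a zero-padded adjacency matrix as edges. This is an assumption added purely for illustration; the actual `model.featurizer` in this package may be implemented differently.
###Code
# Illustrative sketch only; the package's own featurizer may differ.
def toy_featurizer(mol, max_length=6):
    # node features: atomic numbers, zero-padded/truncated to max_length
    atomic_nums = [atom.GetAtomicNum() for atom in mol.GetAtoms()][:max_length]
    nodes = np.zeros(max_length)
    nodes[:len(atomic_nums)] = atomic_nums
    # edge features: adjacency matrix, zero-padded to (max_length, max_length)
    adj = Chem.GetAdjacencyMatrix(mol)[:max_length, :max_length]
    edges = np.zeros((max_length, max_length))
    edges[:adj.shape[0], :adj.shape[1]] = adj
    return nodes, edges
toy_featurizer(Chem.MolFromSmiles('CCO'), max_length=6)
###Output
_____no_output_____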
###Markdown
Training Now, we're finally ready to generate new molecules. We'll first import some necessary functions from TensorFlow.
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
The network we'll be using here is a Generative Adversarial Network (GAN), as mentioned in the project introduction. Here's a great [introduction](https://machinelearningmastery.com/what-are-generative-adversarial-networks-gans/). Here we'll first initialize a discriminator and a generator model with the corresponding functions in the package.
###Code
disc = model.make_discriminator(num_atoms)
gene = model.make_generator(num_atoms, noise_input_shape = 100)
###Output
_____no_output_____
###Markdown
Then, with the `train_on_batch` function, we'll supply the necessary inputs and train our network. Based on some experimentation, around 160 epochs works well for this dataset.
###Code
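# ------------------------------------------------------------------------------
# Illustrative, self-contained sketch (added for clarity): one adversarial update with
# train_on_batch on tiny dense models and random vectors. It only demonstrates the idea
# described above; model.train_batch's actual implementation for graph data may differ.
toy_disc = keras.Sequential([layers.Dense(8, activation="relu", input_shape=(4,)),
                             layers.Dense(1, activation="sigmoid")])
toy_disc.compile(optimizer="adam", loss="binary_crossentropy")
toy_gen = keras.Sequential([layers.Dense(8, activation="relu", input_shape=(3,)),
                            layers.Dense(4)])
toy_disc.trainable = False                      # keep the discriminator frozen inside the combined model
gan_input = layers.Input(shape=(3,))
toy_gan = keras.Model(gan_input, toy_disc(toy_gen(gan_input)))
toy_gan.compile(optimizer="adam", loss="binary_crossentropy")
real_batch = np.random.normal(size=(2, 4))      # stand-in for real featurized samples
noise_batch = np.random.normal(size=(2, 3))
fake_batch = toy_gen.predict(noise_batch)
toy_disc.trainable = True                       # 1) update the discriminator: real -> 1, fake -> 0
d_loss_real = toy_disc.train_on_batch(real_batch, np.ones((2, 1)))
d_loss_fake = toy_disc.train_on_batch(fake_batch, np.zeros((2, 1)))
toy_disc.trainable = False                      # 2) update the generator through the frozen discriminator
g_loss = toy_gan.train_on_batch(noise_batch, np.ones((2, 1)))
print("toy GAN step losses:", d_loss_real, d_loss_fake, g_loss)
# ------------------------------------------------------------------------------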
generator_trained = model.train_batch(
disc, gene,
np.array(nodes_train), np.array(edges_train),
noise_input_shape = 100, EPOCH = 160, BATCHSIZE = 2,
plot_hist = True, temp_result = False
)
###Output
>0, d1=0.230, d2=0.759 g=0.565, a1=100, a2=0
>1, d1=0.096, d2=0.841 g=0.622, a1=100, a2=0
>2, d1=0.038, d2=0.890 g=0.594, a1=100, a2=0
>3, d1=0.018, d2=0.915 g=0.564, a1=100, a2=0
>4, d1=0.013, d2=0.853 g=0.515, a1=100, a2=0
>5, d1=0.006, d2=0.837 g=0.579, a1=100, a2=0
>6, d1=0.003, d2=0.876 g=0.540, a1=100, a2=0
>7, d1=0.012, d2=0.868 g=0.577, a1=100, a2=0
>8, d1=0.006, d2=0.868 g=0.571, a1=100, a2=0
>9, d1=0.005, d2=0.879 g=0.610, a1=100, a2=0
>10, d1=0.005, d2=0.842 g=0.658, a1=100, a2=0
>11, d1=0.006, d2=0.722 g=0.687, a1=100, a2=0
>12, d1=0.007, d2=0.777 g=0.758, a1=100, a2=0
>13, d1=0.003, d2=0.607 g=0.836, a1=100, a2=100
>14, d1=0.003, d2=0.578 g=0.850, a1=100, a2=100
>15, d1=0.003, d2=0.532 g=0.952, a1=100, a2=100
>16, d1=0.020, d2=0.482 g=0.965, a1=100, a2=100
>17, d1=0.006, d2=0.459 g=1.054, a1=100, a2=100
>18, d1=0.003, d2=0.411 g=1.121, a1=100, a2=100
>19, d1=0.003, d2=0.380 g=1.147, a1=100, a2=100
>20, d1=0.003, d2=0.424 g=1.160, a1=100, a2=100
>21, d1=0.002, d2=0.359 g=1.228, a1=100, a2=100
>22, d1=0.003, d2=0.393 g=1.253, a1=100, a2=100
>23, d1=0.004, d2=0.286 g=1.290, a1=100, a2=100
>24, d1=0.003, d2=0.308 g=1.330, a1=100, a2=100
>25, d1=0.008, d2=0.315 g=1.445, a1=100, a2=100
>26, d1=0.005, d2=0.341 g=1.390, a1=100, a2=100
>27, d1=0.005, d2=0.319 g=1.483, a1=100, a2=100
>28, d1=0.005, d2=0.258 g=1.504, a1=100, a2=100
>29, d1=0.004, d2=0.294 g=1.475, a1=100, a2=100
>30, d1=0.005, d2=0.232 g=1.521, a1=100, a2=100
>31, d1=0.006, d2=0.315 g=1.505, a1=100, a2=100
>32, d1=0.010, d2=0.229 g=1.492, a1=100, a2=100
>33, d1=0.007, d2=0.291 g=1.554, a1=100, a2=100
>34, d1=0.006, d2=0.333 g=1.515, a1=100, a2=100
>35, d1=0.018, d2=0.310 g=1.758, a1=100, a2=100
>36, d1=0.014, d2=0.291 g=1.414, a1=100, a2=100
>37, d1=0.013, d2=0.151 g=1.490, a1=100, a2=100
>38, d1=0.009, d2=0.159 g=1.615, a1=100, a2=100
>39, d1=0.137, d2=0.270 g=1.876, a1=100, a2=100
>40, d1=0.017, d2=0.256 g=1.625, a1=100, a2=100
>41, d1=0.022, d2=0.118 g=1.485, a1=100, a2=100
>42, d1=0.005, d2=0.368 g=1.464, a1=100, a2=100
>43, d1=0.135, d2=0.529 g=1.099, a1=100, a2=100
>44, d1=0.003, d2=0.452 g=0.978, a1=100, a2=100
>45, d1=0.008, d2=0.492 g=1.227, a1=100, a2=100
>46, d1=0.006, d2=0.383 g=1.376, a1=100, a2=100
>47, d1=0.012, d2=0.223 g=1.714, a1=100, a2=100
>48, d1=0.042, d2=0.291 g=2.024, a1=100, a2=100
>49, d1=1.265, d2=0.352 g=1.479, a1=0, a2=100
>50, d1=0.149, d2=0.395 g=1.157, a1=100, a2=100
>51, d1=0.002, d2=0.783 g=1.037, a1=100, a2=0
>52, d1=0.000, d2=0.809 g=0.565, a1=100, a2=0
>53, d1=0.000, d2=1.011 g=0.580, a1=100, a2=0
>54, d1=0.000, d2=0.629 g=1.103, a1=100, a2=100
>55, d1=0.000, d2=0.566 g=1.347, a1=100, a2=100
>56, d1=0.000, d2=0.311 g=1.679, a1=100, a2=100
>57, d1=0.001, d2=0.230 g=2.226, a1=100, a2=100
>58, d1=0.007, d2=0.098 g=2.792, a1=100, a2=100
>59, d1=1.352, d2=0.107 g=2.571, a1=0, a2=100
>60, d1=0.003, d2=0.280 g=1.549, a1=100, a2=100
>61, d1=0.095, d2=0.436 g=1.436, a1=100, a2=100
>62, d1=0.000, d2=0.696 g=0.952, a1=100, a2=0
>63, d1=0.001, d2=0.542 g=1.497, a1=100, a2=100
>64, d1=0.003, d2=0.411 g=1.585, a1=100, a2=100
>65, d1=0.002, d2=0.327 g=1.858, a1=100, a2=100
>66, d1=0.012, d2=0.176 g=2.218, a1=100, a2=100
>67, d1=2.956, d2=0.152 g=1.616, a1=0, a2=100
>68, d1=0.262, d2=0.290 g=0.886, a1=100, a2=100
>69, d1=0.004, d2=0.867 g=0.605, a1=100, a2=0
>70, d1=0.124, d2=1.002 g=0.547, a1=100, a2=0
>71, d1=0.010, d2=1.142 g=0.793, a1=100, a2=0
>72, d1=0.003, d2=0.702 g=1.178, a1=100, a2=0
>73, d1=0.043, d2=0.400 g=1.587, a1=100, a2=100
>74, d1=0.155, d2=0.192 g=2.281, a1=100, a2=100
>75, d1=1.844, d2=0.292 g=1.335, a1=0, a2=100
>76, d1=0.040, d2=0.555 g=0.961, a1=100, a2=100
>77, d1=0.054, d2=0.764 g=0.837, a1=100, a2=0
>78, d1=0.461, d2=0.784 g=0.655, a1=100, a2=0
>79, d1=0.022, d2=0.729 g=0.847, a1=100, a2=0
>80, d1=0.003, d2=0.687 g=1.061, a1=100, a2=100
>81, d1=0.024, d2=0.485 g=1.554, a1=100, a2=100
>82, d1=0.111, d2=0.254 g=1.989, a1=100, a2=100
>83, d1=1.477, d2=0.343 g=1.387, a1=0, a2=100
>84, d1=0.161, d2=0.635 g=0.698, a1=100, a2=100
>85, d1=0.007, d2=0.985 g=0.658, a1=100, a2=0
>86, d1=0.007, d2=0.946 g=0.700, a1=100, a2=0
>87, d1=0.012, d2=0.714 g=1.079, a1=100, a2=0
>88, d1=0.333, d2=0.455 g=1.400, a1=100, a2=100
>89, d1=0.344, d2=0.448 g=1.326, a1=100, a2=100
>90, d1=0.236, d2=0.376 g=1.347, a1=100, a2=100
>91, d1=0.079, d2=0.436 g=1.393, a1=100, a2=100
>92, d1=0.150, d2=0.413 g=1.345, a1=100, a2=100
>93, d1=0.491, d2=0.409 g=1.088, a1=100, a2=100
>94, d1=0.032, d2=0.774 g=1.045, a1=100, a2=0
>95, d1=5.026, d2=0.882 g=0.632, a1=0, a2=0
>96, d1=0.009, d2=0.609 g=0.690, a1=100, a2=100
>97, d1=0.019, d2=0.579 g=1.186, a1=100, a2=100
>98, d1=0.018, d2=0.307 g=1.771, a1=100, a2=100
>99, d1=0.088, d2=0.139 g=2.472, a1=100, a2=100
>100, d1=0.571, d2=0.157 g=2.198, a1=100, a2=100
>101, d1=0.028, d2=0.177 g=2.040, a1=100, a2=100
>102, d1=0.062, d2=0.170 g=1.608, a1=100, a2=100
>103, d1=0.036, d2=0.310 g=1.641, a1=100, a2=100
>104, d1=0.035, d2=0.252 g=1.805, a1=100, a2=100
>105, d1=0.039, d2=0.303 g=2.026, a1=100, a2=100
>106, d1=0.022, d2=0.178 g=2.202, a1=100, a2=100
>107, d1=0.047, d2=0.144 g=2.438, a1=100, a2=100
>108, d1=0.344, d2=0.139 g=2.110, a1=100, a2=100
>109, d1=0.204, d2=0.303 g=1.536, a1=100, a2=100
>110, d1=0.043, d2=0.438 g=1.391, a1=100, a2=100
>111, d1=7.353, d2=0.647 g=0.990, a1=0, a2=100
>112, d1=0.178, d2=0.724 g=1.094, a1=100, a2=0
>113, d1=0.070, d2=0.383 g=1.401, a1=100, a2=100
>114, d1=0.178, d2=0.324 g=1.640, a1=100, a2=100
>115, d1=6.200, d2=0.378 g=1.251, a1=0, a2=100
>116, d1=0.157, d2=0.504 g=1.055, a1=100, a2=100
>117, d1=0.077, d2=0.568 g=1.102, a1=100, a2=100
>118, d1=2.301, d2=0.805 g=0.522, a1=0, a2=0
>119, d1=7.603, d2=1.967 g=0.145, a1=0, a2=0
>120, d1=0.001, d2=2.548 g=0.126, a1=100, a2=0
>121, d1=0.002, d2=2.149 g=0.243, a1=100, a2=0
>122, d1=0.093, d2=1.302 g=0.569, a1=100, a2=0
>123, d1=0.019, d2=0.587 g=1.263, a1=100, a2=100
>124, d1=0.054, d2=0.230 g=2.111, a1=100, a2=100
>125, d1=0.066, d2=0.096 g=2.961, a1=100, a2=100
>126, d1=0.236, d2=0.050 g=3.468, a1=100, a2=100
>127, d1=0.963, d2=0.046 g=3.088, a1=0, a2=100
>128, d1=1.020, d2=0.081 g=2.426, a1=0, a2=100
>129, d1=0.102, d2=0.160 g=1.806, a1=100, a2=100
>130, d1=0.046, d2=0.267 g=1.364, a1=100, a2=100
>131, d1=0.313, d2=0.452 g=1.050, a1=100, a2=100
>132, d1=0.051, d2=0.530 g=0.947, a1=100, a2=100
>133, d1=0.828, d2=0.692 g=0.884, a1=0, a2=100
>134, d1=0.048, d2=0.658 g=0.937, a1=100, a2=100
>135, d1=0.070, d2=0.542 g=1.132, a1=100, a2=100
>136, d1=0.487, d2=0.412 g=1.269, a1=100, a2=100
>137, d1=0.427, d2=0.424 g=1.201, a1=100, a2=100
>138, d1=0.041, d2=0.375 g=1.301, a1=100, a2=100
>139, d1=0.157, d2=0.364 g=1.418, a1=100, a2=100
>140, d1=0.168, d2=0.325 g=1.544, a1=100, a2=100
>141, d1=5.638, d2=0.453 g=0.921, a1=0, a2=100
>142, d1=0.045, d2=0.749 g=0.722, a1=100, a2=0
>143, d1=0.015, d2=0.764 g=0.847, a1=100, a2=0
>144, d1=0.020, d2=0.565 g=1.180, a1=100, a2=100
>145, d1=0.057, d2=0.360 g=1.522, a1=100, a2=100
>146, d1=3.150, d2=0.262 g=1.756, a1=0, a2=100
>147, d1=3.275, d2=0.287 g=1.430, a1=0, a2=100
>148, d1=0.860, d2=0.412 g=1.097, a1=0, a2=100
>149, d1=1.190, d2=0.662 g=0.719, a1=0, a2=100
>150, d1=0.657, d2=0.930 g=0.531, a1=100, a2=0
>151, d1=0.368, d2=1.084 g=0.511, a1=100, a2=0
>152, d1=0.026, d2=0.936 g=0.674, a1=100, a2=0
>153, d1=0.038, d2=0.680 g=1.021, a1=100, a2=100
>154, d1=0.078, d2=0.384 g=1.534, a1=100, a2=100
>155, d1=1.083, d2=0.247 g=1.729, a1=0, a2=100
>156, d1=0.922, d2=0.229 g=1.702, a1=0, a2=100
>157, d1=1.208, d2=0.270 g=1.459, a1=0, a2=100
>158, d1=0.188, d2=0.387 g=1.193, a1=100, a2=100
>159, d1=0.129, d2=0.482 g=1.082, a1=100, a2=100
###Markdown
There are two common kinds of failure for a GAN: mode collapse and failure to converge. Mode collapse means the generator can no longer produce diverse outcomes. Failure of convergence between the generator and the discriminator can typically be identified by the discriminator loss dropping to zero or very close to zero. In the generated plot above, the upper panel shows that the discriminator loss has not gone to zero, indicating that the model has likely found a balance between the generator and the discriminator. In the lower panel, the accuracy fluctuates between 1 and 0, indicating variability in the generated data. Therefore, within the range of epochs and other parameters explored here, it is reasonable to conclude that the model has avoided both common GAN failure modes. Rewarding Phase The `train_batch` function above returns a trained generator, so we can call it directly and inspect the molecules it produces.
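As an optional sanity check before sampling, the two failure modes discussed above can also be screened programmatically. The sketch below is only an illustration: `d_loss_history` is a hypothetical per-epoch log of the discriminator loss (the training printout above reports it but no list is returned), and the generator is assumed to take a `(1, 100)` noise array exactly as in the call further down.
```python
import numpy as np

def diagnose_gan(d_loss_history, generator, noise_dim=100, n_samples=20, tol=1e-3):
    """Heuristic checks for the two common GAN failure modes."""
    # Failure of convergence: the discriminator loss is stuck at (or very near) zero
    tail = np.asarray(d_loss_history[-20:])
    loss_near_zero = bool(np.all(tail < tol))

    # Mode collapse: many different noise inputs map to (nearly) identical outputs
    samples = []
    for _ in range(n_samples):
        nodes, edges = generator(np.random.randint(0, 30, size=(1, noise_dim)))
        samples.append(np.concatenate([np.ravel(nodes.numpy()), np.ravel(edges.numpy())]))
    unique_outputs = {tuple(np.round(s, 2)) for s in samples}
    collapsed = len(unique_outputs) <= max(1, n_samples // 10)

    return {"discriminator_loss_near_zero": loss_near_zero,
            "possible_mode_collapse": collapsed}
```
For example, calling `diagnose_gan` with a list of the `d1` values printed above and `generator_trained` would flag a run where the discriminator loss had flat-lined at zero, which is consistent with the visual reading of the plots.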
###Code
no, ed = generator_trained(np.random.randint(0,30, size =(1,100)))#generated nodes and edges
abs(no.numpy()).astype(int).reshape(num_atoms), abs(ed.numpy()).astype(int).reshape(num_atoms,num_atoms)
###Output
_____no_output_____
###Markdown
With the `de_featurizer`, we can convert the generated matrices back into a SMILES molecule and plot it.
###Code
cat, dog = model.de_featurizer(abs(no.numpy()).astype(int).reshape(num_atoms), abs(ed.numpy()).astype(int).reshape(num_atoms,num_atoms))
Chem.MolToSmiles(cat)
Chem.MolFromSmiles("[Li]NBBC=N")
###Output
_____no_output_____
###Markdown
Brief Result Analysis
###Code
from rdkit import DataStructs
###Output
_____no_output_____
###Markdown
Using RDKit's fingerprint-similarity functions, we'll demonstrate a preliminary analysis of the molecule we've generated. With the "CCO" molecule as a control, we can observe that the newly generated molecule is more similar to a randomly selected molecule (the fourth molecule) from the initial training set. This may indicate that our model has indeed extracted some features from the original dataset and generated a relevant new molecule.
###Code
DataStructs.FingerprintSimilarity(Chem.RDKFingerprint(Chem.MolFromSmiles("[Li]NBBC=N")), Chem.RDKFingerprint(Chem.MolFromSmiles("CCO")))# compare with the control
#compare with one from the original data
DataStructs.FingerprintSimilarity(Chem.RDKFingerprint(Chem.MolFromSmiles("[Li]NBBC=N")), Chem.RDKFingerprint(Chem.MolFromSmiles("CCN1C2=NC(=O)N(C(=O)C2=NC(=N1)C3=CC=CC=C3)C")))
###Output
_____no_output_____
###Markdown
Deep Prior Distribution of Relaxation Times In this tutorial we will reproduce Figure 2 in Liu, J., & Ciucci, F. (2020). The Deep-Prior Distribution of Relaxation Times. Journal of The Electrochemical Society, 167(2), 026506, https://iopscience.iop.org/article/10.1149/1945-7111/ab631a/meta. The DP-DRT method is our newly developed deep-learning-based approach for obtaining the DRT from EIS data. The DP-DRT is trained on a single electrochemical impedance spectrum, and a single random input is given to the neural network underlying the DP-DRT.
###Code
import numpy as np
import os
import matplotlib.pyplot as plt
import random as rnd
import math
from math import sin, cos, pi
import torch
import torch.nn.functional as F
import compute_DRT
%matplotlib inline
# check the device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
if device.type == 'cuda':
print(torch.cuda.get_device_name(0))
print('Memory Usage:')
print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**2,1), 'MB')
print('Cached: ', round(torch.cuda.memory_cached(0)/1024**2,1), 'MB')
# we will assume you have a cpu
#if you want to use a GPU, you will need to use cuda
###Output
Using device: cpu
###Markdown
1) Problem setup 1.1) Generate a single stochastic experiment Note: the exact circuit is a ZARC. The impedance of a ZARC can be written as $$Z^{\rm exact}(f) = R_\infty + \displaystyle \frac{1}{\displaystyle \frac{1}{R_{\rm ct}}+C \left(i 2\pi f\right)^\phi}$$ where $\displaystyle C = \frac{\tau_0^\phi}{R_{\rm ct}}$. The DRT can be computed analytically as $$\gamma(\log \tau) = \displaystyle \frac{\displaystyle R_{\rm ct}}{\displaystyle 2\pi} \displaystyle \frac{\displaystyle \sin\left((1-\phi)\pi\right)}{\displaystyle \cosh(\phi \log(\tau/\tau_0))-\cos(\pi(1-\phi))}$$
###Code
# set the seed for the random number generators
rng = rnd.seed(214975)
rng_np = np.random.seed(213912)
torch.manual_seed(213912)
# define frequency range, from 1E-4 to 1E4 with 10 ppd
N_freqs = 81
freq_vec = np.logspace(-4., 4., num=N_freqs, endpoint=True)
tau_vec = 1./freq_vec
# define parameters for ZARC model and calculate the impedance and gamma following the above equations
R_inf = 10
R_ct = 50
phi = 0.8
tau_0 = 1
C = tau_0**phi/R_ct
# exact Z and gamma
Z = R_inf + 1./(1./R_ct+C*(1j*2.*pi*freq_vec)**phi)
gamma_exact = (R_ct)/(2.*pi)*sin((1.-phi)*pi)/(np.cosh(phi*np.log(tau_vec/tau_0))-cos((1.-phi)*pi))
# adding noise to the impedance data
sigma_n_exp = 0.1
Z_exp = Z + (sigma_n_exp**2)*np.random.normal(0,1,N_freqs) + 1j*(sigma_n_exp**2)*np.random.normal(0,1,N_freqs)
###Output
_____no_output_____
###Markdown
1.2) Build $\mathbf A_{\rm re}$ and $\mathbf A_{\rm im}$ matrices
###Code
# define the matrices that calculate the impedance from the DRT, i.e., Z_re = A_re * gamma, Z_im = A_im * gamma
A_re = compute_DRT.A_re(freq_vec)
A_im = compute_DRT.A_im(freq_vec)
###Output
_____no_output_____
###Markdown
1.3) Take vectors and matrices from numpy to torch
###Code
# transform impedance variables to tensors
Z_exp_re_torch = torch.from_numpy(np.real(Z_exp)).type(torch.FloatTensor).reshape(1,N_freqs)
Z_exp_im_torch = torch.from_numpy(np.imag(Z_exp)).type(torch.FloatTensor).reshape(1,N_freqs)
# transform gamma
gamma_exact_torch = torch.from_numpy(gamma_exact).type(torch.FloatTensor)
# transform these matrices into tensors
A_re_torch = torch.from_numpy(A_re.T).type(torch.FloatTensor)
A_im_torch = torch.from_numpy(A_im.T).type(torch.FloatTensor)
###Output
_____no_output_____
###Markdown
2) Setup DP-DRT model 2.1) Deep network
###Code
# size of the arbitrary zeta input
N_zeta = 1
# define the neural network
# N is batch size, D_in is input dimension, H is hidden dimension, D_out is output dimension.
N = 1
D_in = N_zeta
H = max(N_freqs,10*N_zeta)
# the output also includes the R_inf, so it has dimension N_freq+1
# note that
# 1) there is no inductance (in this specific example - the DP-DRT can include inductive features, see article)
# 2) R_inf is stored as the last item in the NN output
D_out = N_freqs+1
# Construct the neural network structure
class vanilla_model(torch.nn.Module):
def __init__(self):
super(vanilla_model, self).__init__()
self.fct_1 = torch.nn.Linear(D_in, H)
self.fct_2 = torch.nn.Linear(H, H)
self.fct_3 = torch.nn.Linear(H, H)
self.fct_4 = torch.nn.Linear(H, D_out)
# initialize the weight parameters
torch.nn.init.zeros_(self.fct_1.weight)
torch.nn.init.zeros_(self.fct_2.weight)
torch.nn.init.zeros_(self.fct_3.weight)
torch.nn.init.zeros_(self.fct_4.weight)
# forward
def forward(self, zeta):
h = F.elu(self.fct_1(zeta))
h = F.elu(self.fct_2(h))
h = F.elu(self.fct_3(h))
gamma_pred = F.softplus(self.fct_4(h), beta = 5)
return gamma_pred
###Output
_____no_output_____
###Markdown
2.2) Loss function
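In line with the implementation below, the loss is simply the squared misfit of the real and imaginary parts of the impedance, with no inductance term and with $R_\infty$ carried as the last entry of the network output: $$L(\theta) = \left\lVert \mathbf Z^{\rm exp}_{\rm re} - \left(R_\infty \mathbf 1 + \mathbf A_{\rm re}\,\boldsymbol \gamma_\theta(\boldsymbol \zeta)\right)\right\rVert^2 + \left\lVert \mathbf Z^{\rm exp}_{\rm im} - \mathbf A_{\rm im}\,\boldsymbol \gamma_\theta(\boldsymbol \zeta)\right\rVert^2$$ where $\boldsymbol \zeta$ is the fixed random input and $\theta$ are the network weights.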
###Code
def loss_fn(output, Z_exp_re_torch, Z_exp_im_torch, A_re_torch, A_im_torch):
# we assume no inductance and the R_inf is stored as the last item in the NN output
MSE_re = torch.sum((output[:, -1] + torch.mm(output[:, 0:-1], A_re_torch) - Z_exp_re_torch)**2)
MSE_im = torch.sum((torch.mm(output[:, 0:-1], A_im_torch) - Z_exp_im_torch)**2)
MSE = MSE_re + MSE_im
return MSE
###Output
_____no_output_____
###Markdown
3) Train the model
###Code
model = vanilla_model()
# initialize following variables
zeta = torch.randn(N, N_zeta)
loss_vec = np.array([])
distance_vec = np.array([])
lambda_vec = np.array([])
# optimize the neural network
learning_rate = 1e-5
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# max iterations
max_iters = 100001
gamma_NN_store = torch.zeros((max_iters, N_freqs))
R_inf_NN_store = torch.zeros((max_iters, 1))
for t in range(max_iters):
# Forward pass: compute predicted y by passing x to the model.
gamma = model(zeta)
# Compute the loss
loss = loss_fn(gamma, Z_exp_re_torch, Z_exp_im_torch, A_re_torch, A_im_torch)
# save it
loss_vec = np.append(loss_vec, loss.item())
# store gamma
gamma_NN = gamma[:, 0:-1].detach().reshape(-1)
gamma_NN_store[t, :] = gamma_NN
# store R_inf
R_inf_NN_store[t,:] = gamma[:, -1].detach().reshape(-1)
# Compute the distance
distance = math.sqrt(torch.sum((gamma_NN-gamma_exact_torch)**2).item())
# save it
distance_vec = np.append(distance_vec, distance)
# and print it
if not t%100:
print('iter=', t, '; loss=', loss.item(), '; distance=', distance)
# zero all gradients (purge any cache)
optimizer.zero_grad()
# compute the gradient of the loss with respect to model parameters
loss.backward()
# Update the optimizer
optimizer.step()
###Output
iter= 0 ; loss= 108280.3203125 ; distance= 54.83923369937234
iter= 100 ; loss= 108098.078125 ; distance= 54.82739695655129
iter= 200 ; loss= 107681.28125 ; distance= 54.80032557926778
iter= 300 ; loss= 106687.40625 ; distance= 54.73580901989574
iter= 400 ; loss= 104597.9375 ; distance= 54.6002920736808
iter= 500 ; loss= 100640.1796875 ; distance= 54.34389151433558
iter= 600 ; loss= 93995.84375 ; distance= 53.914215896934365
iter= 700 ; loss= 84615.265625 ; distance= 53.31095213169148
iter= 800 ; loss= 73558.4453125 ; distance= 52.60951653901364
iter= 900 ; loss= 62076.08203125 ; distance= 51.898792077856925
iter= 1000 ; loss= 51006.81640625 ; distance= 51.23900892051143
iter= 1100 ; loss= 40869.55078125 ; distance= 50.666478474585396
iter= 1200 ; loss= 32011.671875 ; distance= 50.201366974353896
iter= 1300 ; loss= 24650.890625 ; distance= 49.84826831009905
iter= 1400 ; loss= 18868.6640625 ; distance= 49.5953843461012
iter= 1500 ; loss= 14603.1435546875 ; distance= 49.41785373301966
iter= 1600 ; loss= 11663.845703125 ; distance= 49.28418765023677
iter= 1700 ; loss= 9773.79296875 ; distance= 49.162703704605946
iter= 1800 ; loss= 8629.65234375 ; distance= 49.02754278129667
iter= 1900 ; loss= 7959.4306640625 ; distance= 48.86271274976068
iter= 2000 ; loss= 7557.595703125 ; distance= 48.66272987920915
iter= 2100 ; loss= 7291.193359375 ; distance= 48.429865829845895
iter= 2200 ; loss= 7085.5625 ; distance= 48.17025570989062
iter= 2300 ; loss= 6903.849609375 ; distance= 47.89060460847853
iter= 2400 ; loss= 6729.85693359375 ; distance= 47.59651481627883
iter= 2500 ; loss= 6557.25244140625 ; distance= 47.291950313484904
iter= 2600 ; loss= 6383.87158203125 ; distance= 46.9794331454933
iter= 2700 ; loss= 6209.16650390625 ; distance= 46.66036276246682
iter= 2800 ; loss= 6033.1484375 ; distance= 46.335383600845475
iter= 2900 ; loss= 5856.00830078125 ; distance= 46.00471008070125
iter= 3000 ; loss= 5678.0068359375 ; distance= 45.668352156300486
iter= 3100 ; loss= 5499.4228515625 ; distance= 45.32621559663348
iter= 3200 ; loss= 5320.53515625 ; distance= 44.97822281219824
iter= 3300 ; loss= 5141.64013671875 ; distance= 44.62427646253424
iter= 3400 ; loss= 4963.0234375 ; distance= 44.26430818946781
iter= 3500 ; loss= 4784.97607421875 ; distance= 43.8982977673964
iter= 3600 ; loss= 4607.7890625 ; distance= 43.526254031780205
iter= 3700 ; loss= 4431.7412109375 ; distance= 43.148210465861155
iter= 3800 ; loss= 4257.1142578125 ; distance= 42.76423631057338
iter= 3900 ; loss= 4084.171875 ; distance= 42.3744267337698
iter= 4000 ; loss= 3913.168212890625 ; distance= 41.978933228000656
iter= 4100 ; loss= 3744.3466796875 ; distance= 41.57791067682454
iter= 4200 ; loss= 3577.936279296875 ; distance= 41.17158888657702
iter= 4300 ; loss= 3414.15576171875 ; distance= 40.760225662939476
iter= 4400 ; loss= 3253.2099609375 ; distance= 40.34410098607974
iter= 4500 ; loss= 3095.293701171875 ; distance= 39.92355671604642
iter= 4600 ; loss= 2940.58203125 ; distance= 39.49891216856484
iter= 4700 ; loss= 2789.22900390625 ; distance= 39.07056323029926
iter= 4800 ; loss= 2641.3759765625 ; distance= 38.6389269365065
iter= 4900 ; loss= 2497.164794921875 ; distance= 38.204439841892885
iter= 5000 ; loss= 2356.71923828125 ; distance= 37.76756595394966
iter= 5100 ; loss= 2220.154052734375 ; distance= 37.328765953057236
iter= 5200 ; loss= 2087.567626953125 ; distance= 36.88853201125402
iter= 5300 ; loss= 1959.0533447265625 ; distance= 36.447369817334724
iter= 5400 ; loss= 1834.6973876953125 ; distance= 36.00583177590819
iter= 5500 ; loss= 1714.5970458984375 ; distance= 35.564473665574965
iter= 5600 ; loss= 1598.8634033203125 ; distance= 35.12389135817136
iter= 5700 ; loss= 1487.6126708984375 ; distance= 34.68467225242027
iter= 5800 ; loss= 1380.97119140625 ; distance= 34.247408803669
iter= 5900 ; loss= 1279.065185546875 ; distance= 33.81271119677572
iter= 6000 ; loss= 1182.0303955078125 ; distance= 33.381169695419224
iter= 6100 ; loss= 1090.0140380859375 ; distance= 32.95336207833163
iter= 6200 ; loss= 1003.1500244140625 ; distance= 32.52977527276933
iter= 6300 ; loss= 921.5451049804688 ; distance= 32.11081391240632
iter= 6400 ; loss= 845.2596435546875 ; distance= 31.696777911650084
iter= 6500 ; loss= 774.2965087890625 ; distance= 31.28784427247665
iter= 6600 ; loss= 708.5841064453125 ; distance= 30.884052632892107
iter= 6700 ; loss= 647.9737548828125 ; distance= 30.48535104632992
iter= 6800 ; loss= 592.24658203125 ; distance= 30.091589924423857
iter= 6900 ; loss= 541.1287841796875 ; distance= 29.70262053260463
iter= 7000 ; loss= 494.3125 ; distance= 29.31828696707217
iter= 7100 ; loss= 451.4748229980469 ; distance= 28.93849552823194
iter= 7200 ; loss= 412.29791259765625 ; distance= 28.56319341506489
iter= 7300 ; loss= 376.48583984375 ; distance= 28.192381307181755
iter= 7400 ; loss= 343.75537109375 ; distance= 27.82609832597583
iter= 7500 ; loss= 313.84271240234375 ; distance= 27.464444566239298
iter= 7600 ; loss= 286.51287841796875 ; distance= 27.107531007170085
iter= 7700 ; loss= 261.5546875 ; distance= 26.755483475282105
iter= 7800 ; loss= 238.76596069335938 ; distance= 26.408441107319266
iter= 7900 ; loss= 217.961181640625 ; distance= 26.06659444943674
iter= 8000 ; loss= 198.97840881347656 ; distance= 25.73009128736721
iter= 8100 ; loss= 181.67550659179688 ; distance= 25.39903226195341
iter= 8200 ; loss= 165.91220092773438 ; distance= 25.07352347776123
iter= 8300 ; loss= 151.55076599121094 ; distance= 24.75368526264385
iter= 8400 ; loss= 138.46751403808594 ; distance= 24.43966907133577
iter= 8500 ; loss= 126.55825805664062 ; distance= 24.131566868548962
iter= 8600 ; loss= 115.73117065429688 ; distance= 23.829374967214836
iter= 8700 ; loss= 105.89653015136719 ; distance= 23.533046132065145
iter= 8800 ; loss= 96.96206665039062 ; distance= 23.2426260462828
iter= 8900 ; loss= 88.8411865234375 ; distance= 22.958203360944864
iter= 9000 ; loss= 81.4599380493164 ; distance= 22.679874536345544
iter= 9100 ; loss= 74.75885009765625 ; distance= 22.40765759503845
iter= 9200 ; loss= 68.68638610839844 ; distance= 22.141451306696155
iter= 9300 ; loss= 63.193214416503906 ; distance= 21.881082441205418
iter= 9400 ; loss= 58.228759765625 ; distance= 21.62642387156638
iter= 9500 ; loss= 53.742218017578125 ; distance= 21.377417712088366
iter= 9600 ; loss= 49.687721252441406 ; distance= 21.134078924622845
iter= 9700 ; loss= 46.025230407714844 ; distance= 20.896442661801764
iter= 9800 ; loss= 42.72130584716797 ; distance= 20.664485611216453
iter= 9900 ; loss= 39.747135162353516 ; distance= 20.438112955496663
iter= 10000 ; loss= 37.07573699951172 ; distance= 20.217150759057347
iter= 10100 ; loss= 34.681968688964844 ; distance= 20.00138087273992
iter= 10200 ; loss= 32.54048156738281 ; distance= 19.790559864885708
iter= 10300 ; loss= 30.626609802246094 ; distance= 19.584475912756606
iter= 10400 ; loss= 28.916820526123047 ; distance= 19.382937669638164
iter= 10500 ; loss= 27.389169692993164 ; distance= 19.185775033160553
iter= 10600 ; loss= 26.023544311523438 ; distance= 18.99281014379397
iter= 10700 ; loss= 24.800979614257812 ; distance= 18.803836285003204
iter= 10800 ; loss= 23.703882217407227 ; distance= 18.61860454767581
iter= 10900 ; loss= 22.7152156829834 ; distance= 18.436804806808986
iter= 11000 ; loss= 21.817922592163086 ; distance= 18.258060713171577
iter= 11100 ; loss= 20.994699478149414 ; distance= 18.081916062092603
iter= 11200 ; loss= 20.226619720458984 ; distance= 17.907831518291097
iter= 11300 ; loss= 19.492816925048828 ; distance= 17.735173440176023
iter= 11400 ; loss= 18.768024444580078 ; distance= 17.56315769036538
iter= 11500 ; loss= 18.01973533630371 ; distance= 17.39085575852068
iter= 11600 ; loss= 17.202545166015625 ; distance= 17.217092776987823
iter= 11700 ; loss= 16.243179321289062 ; distance= 17.0404191559294
iter= 11800 ; loss= 14.982314109802246 ; distance= 16.859261866800676
iter= 11900 ; loss= 13.254838943481445 ; distance= 16.676728733477677
iter= 12000 ; loss= 11.806711196899414 ; distance= 16.49618671164311
iter= 12100 ; loss= 10.755365371704102 ; distance= 16.30844919471863
iter= 12200 ; loss= 9.852766036987305 ; distance= 16.119337403568366
iter= 12300 ; loss= 9.043342590332031 ; distance= 15.933447707868345
###Markdown
4) Analyze results 4.1) Find early stopping value
###Code
index_opt = np.argmin(distance_vec)
index_early_stop = np.flatnonzero(np.abs(np.diff(loss_vec))<1E-8)
gamma_DIP_torch_opt = gamma_NN_store[index_opt, :]
R_inf_DIP_torch_opt = R_inf_NN_store[index_opt, :]
gamma_DIP_opt = gamma_DIP_torch_opt.detach().numpy()
R_DIP_opt = R_inf_DIP_torch_opt.detach().numpy()
if len(index_early_stop):
gamma_DIP_torch_early_stop = gamma_NN_store[index_early_stop[0], :]
gamma_DIP = gamma_DIP_torch_early_stop.detach().numpy()
R_DIP = R_inf_NN_store[index_early_stop[0], :]
R_DIP = R_DIP.detach().numpy()
else:
gamma_DIP = gamma_DIP_opt
R_DIP = R_DIP_opt
###Output
_____no_output_____
###Markdown
4.2) Plot the loss
###Code
plt.semilogy(loss_vec, linewidth=4, color="black")
plt.semilogy(np.array([index_early_stop[0], index_early_stop[0]]), np.array([1E-3, 1E7]),
':', linewidth=3, color="red")
plt.semilogy(np.array([index_opt, index_opt]), np.array([1E-3, 1E7]),
':', linewidth=3, color="blue")
plt.text(30000, 1E2, r'early stop',
{'color': 'red', 'fontsize': 20, 'ha': 'center', 'va': 'center',
'rotation': 90,
'bbox': dict(boxstyle="round", fc="white", ec="red", pad=0.2)})
plt.text(0.93E5, 1E2, r'optimal',
{'color': 'blue', 'fontsize': 20, 'ha': 'center', 'va': 'center',
'rotation': 90,
'bbox': dict(boxstyle="round", fc="white", ec="blue", pad=0.2)})
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=15)
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
plt.xlabel(r'iter', fontsize=20)
plt.ylabel(r'loss', fontsize=20)
plt.axis([0,1.01E5,0.9E-2,1.1E6])
fig = plt.gcf()
fig.set_size_inches(5, 4)
plt.show()
###Output
_____no_output_____
###Markdown
4.3) Plot the error curve vs. iteration The error is defined as the distance between the predicted DRT and the exact DRT, i.e., $\rm error = ||\mathbf \gamma_{\rm exact} - \mathbf \gamma_{\rm DP-DRT}||$
###Code
plt.semilogy(distance_vec, linewidth=4, color="black")
plt.semilogy(np.array([index_early_stop[0], index_early_stop[0]]), np.array([1E-3, 1E7]),
':', linewidth=4, color="red")
plt.semilogy(np.array([index_opt, index_opt]), np.array([1E-3, 1E7]),
':', linewidth=4, color="blue")
plt.text(30000, 2E1, r'early stop',
{'color': 'red', 'fontsize': 20, 'ha': 'center', 'va': 'center',
'rotation': 90,
'bbox': dict(boxstyle="round", fc="white", ec="red", pad=0.2)})
plt.text(0.93E5, 2E1, r'optimal',
{'color': 'blue', 'fontsize': 20, 'ha': 'center', 'va': 'center',
'rotation': 90,
'bbox': dict(boxstyle="round", fc="white", ec="blue", pad=0.2)})
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=15)
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
plt.xlabel(r'iter', fontsize=20)
plt.ylabel(r'error', fontsize=20)
plt.axis([0,1.01E5,0.9E0,1.1E2])
fig=plt.gcf()
fig.set_size_inches(5, 4)
plt.show()
###Output
_____no_output_____
###Markdown
4.4) Plot the impedance We compare the DP-DRT EIS spectrum against the one from the stochastic experiment
###Code
Z_DIP = R_DIP + np.matmul(A_re, gamma_DIP) + 1j*np.matmul(A_im, gamma_DIP)
plt.plot(np.real(Z_exp), -np.imag(Z_exp), "o", markersize=10, color="black", label="synth exp")
plt.plot(np.real(Z_DIP), -np.imag(Z_DIP), linewidth=4, color="red", label="DP-DRT")
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=20)
plt.annotate(r'$10^{-2}$', xy=(np.real(Z_exp[20]), -np.imag(Z_exp[20])),
xytext=(np.real(Z_exp[20])-2, 10-np.imag(Z_exp[20])),
arrowprops=dict(arrowstyle="-",connectionstyle="arc"))
plt.annotate(r'$10^{-1}$', xy=(np.real(Z_exp[30]), -np.imag(Z_exp[30])),
xytext=(np.real(Z_exp[30])-2, 6-np.imag(Z_exp[30])),
arrowprops=dict(arrowstyle="-",connectionstyle="arc"))
plt.annotate(r'$1$', xy=(np.real(Z_exp[40]), -np.imag(Z_exp[40])),
xytext=(np.real(Z_exp[40]), 10-np.imag(Z_exp[40])),
arrowprops=dict(arrowstyle="-",connectionstyle="arc"))
plt.annotate(r'$10$', xy=(np.real(Z_exp[50]), -np.imag(Z_exp[50])),
xytext=(np.real(Z_exp[50])-1, 10-np.imag(Z_exp[50])),
arrowprops=dict(arrowstyle="-",connectionstyle="arc"))
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
plt.legend(frameon=False, fontsize = 15)
plt.xlim(10, 65)
plt.ylim(0, 55)
plt.xticks(range(0, 70, 10))
plt.yticks(range(0, 60, 10))
plt.gca().set_aspect('equal', adjustable='box')
plt.xlabel(r'$Z_{\rm re}/\Omega$', fontsize = 20)
plt.ylabel(r'$-Z_{\rm im}/\Omega$', fontsize = 20)
fig = plt.gcf()
size = fig.get_size_inches()
plt.show()
###Output
_____no_output_____
###Markdown
4.5) Plot the DRT We compare the $\gamma$ from the DP-DRT model against the exact one
###Code
plt.semilogx(tau_vec, gamma_exact, linewidth=4, color="black", label="exact")
plt.semilogx(tau_vec, gamma_DIP, linewidth=4, color="red", label="early stop")
plt.semilogx(tau_vec, gamma_DIP_opt, linestyle='None', marker='o', color="blue", label="optimal")
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=15)
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
plt.axis([1E-4,1E4,-0.4,25])
plt.legend(frameon=False, fontsize = 15)
plt.xlabel(r'$\tau/{\rm s}$', fontsize = 20)
plt.ylabel(r'$\gamma/\Omega$', fontsize = 20)
fig = plt.gcf()
fig.set_size_inches(5, 4)
plt.show()
###Output
_____no_output_____
###Markdown
4.6) Ancillary data
###Code
print('total number parameters = ', compute_DRT.count_parameters(model))
print('distance_early_stop = ', distance_vec[index_early_stop[0]])
print('distance_opt= ', distance_vec[index_opt])
###Output
total number parameters = 20170
distance_early_stop = 6.249378631221442
distance_opt= 3.9961969655001686
###Markdown
wizzer ([commits](https://github.com/seekasra/wizzer/commits/master), [license](https://github.com/seekasra/wizzer/blob/master/LICENSE)) What's wizzer? wizzer is a Python module that helps programmers initialise their domain-specific variable(s)/configuration(s) through a wizard-like question-and-answer scenario. The need for this module arose with Intent-Based Networking (IBN), where the user expresses their intention and expects the system to translate it and trigger the setup automatically. How to use? You can make the file usable as a script as well as an importable module; see example1.py (sketched below). Credits: icon in the wizzer logo by [Anatoly Dudko](https://thenounproject.com/tolyachudes/). Step-by-step guide 1 - import the _wizzer_ package.
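The repository's example1.py is not reproduced here; as a rough sketch of the script-plus-module pattern mentioned under "How to use?" (the question list is a placeholder), it could look like the following, before we walk through the steps interactively.
```python
# hypothetical sketch of an example1.py-style file, usable as a script or as an importable module
import wizzer

QUESTIONS = ['hostname', 'username', 'password']  # placeholder configuration parameters

def configure():
    """Ask the questions interactively, show a summary, and return the answers as a dict."""
    answers = wizzer.ask(QUESTIONS)
    wizzer.review(answers)
    return answers

if __name__ == '__main__':
    # Runs the wizard only when executed directly, not when imported
    configure()
```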
###Code
import wizzer
###Output
_____no_output_____
###Markdown
2 - Have your questions (configuration parameters) ready. Accepted formats are arrays, dictionaries, or a single string. 2.1 - Here we have an array, for example:
###Code
q1 = [
'driver',
'hostname',
'username',
'password',
'port',
]
###Output
_____no_output_____
###Markdown
2.1.1 - Now you can ask the user for the above attributes. This will return a new dictionary with all answers filled in as the corresponding values.
###Code
q = wizzer.ask(q1)
###Output
What's the driver ? iosxr
What's the hostname ? ios-xe-mgmt.cisco.com
What's the username ? developer
What's the password ? C1sco12345
What's the port ? 8181
###Markdown
2.1.2 - You can review the configurations by running:
###Code
wizzer.review(q)
###Output
driver : iosxr
hostname : ios-xe-mgmt.cisco.com
username : developer
password : C1sco12345
port : 8181
###Markdown
2.2 - Here we have a dictionary, for example:
###Code
q2 = {
'driver': '',
'hostname': '',
'username': '',
'password': '',
'port': '',
}
###Output
_____no_output_____
###Markdown
2.2.1 - Now you can ask the user for the above attributes. This will return a new dictionary with all answers filled in as the corresponding values.
###Code
q = wizzer.ask(q2)
###Output
What's the driver ? iosxr
What's the hostname ? ios-xe-mgmt.cisco.com
What's the username ? developer
###Markdown
In this example we extract, for each reference by its id, the date of reference insertion, the date of id insertion, and the date of final reference deletion:
###Code
def getting_data(df):
df_upt = pd.DataFrame(df[['ref_ids','ref_ids_type', 'ref_id_ins']])
df_upt['ins_time'] = df['first_rev_time']
df_upt['del_time'] = 'None'
for i in df_upt.index:
if df['deleted'][i]:
df_upt.loc[i, 'del_time'] = df['del_time'][i][-1]  # use .loc to avoid chained assignment; keep the last recorded deletion time
return df_upt
df_upt = getting_data(df)
qgrid.show_grid(df_upt)  # reuse the DataFrame computed above instead of calling getting_data(df) twice
###Output
_____no_output_____
###Markdown
PDBe Aggregated API - A step-by-step example This Jupyter Notebook provides step-by-step instructions for querying the PDBe Aggregated API and retrieving information on predicted binding sites, macromolecular interaction interfaces and observed ligands for the protein Thrombin, using the Python 3 programming language. Step 1 - Import necessary dependencies In order to query the API, import the `requests` library.
###Code
import requests
###Output
_____no_output_____
###Markdown
Step 2 - Choose a UniProt accession and the necessary API endpoints All the API endpoints have keys that the users must provide. For this example, we will use API endpoints that are keyed on a UniProt accession. The UniProt accession of Thrombin is "P00734". For this example, we are interested in functional annotations of Thrombin, which are provided to PDBe-KB [1] by consortium partner resources such as P2rank [2] and canSAR [3]. We are also interested in all the macromolecular interaction interface residues of Thrombin, as calculated by the PDBe PISA service [4], and all the observed ligand binding sites, as calculated by Arpeggio [5]. In order to retrieve this (and any other) information, users should study the documentation page of the PDBe Aggregated API. We set the variables below for the UniProt accession of Thrombin and the API endpoint URLs we will use.
###Code
ACCESSION = "P00734"
ANNOTATIONS_URL = f"https://www.ebi.ac.uk/pdbe/graph-api/uniprot/annotations/{ACCESSION}"
INTERACTIONS_URL = f"https://www.ebi.ac.uk/pdbe/graph-api/uniprot/interface_residues/{ACCESSION}"
LIGANDS_URL = f"https://www.ebi.ac.uk/pdbe/graph-api/uniprot/ligand_sites/{ACCESSION}"
###Output
_____no_output_____
###Markdown
Step 3 - Define helper functions We will define a few helper functions to avoid code repetition when retrieving data from the API.
###Code
def get_data(accession, url):
"""
Helper function to get the data from an API endpoint using an accession as key
:param accession: String; a UniProt accession
:param url: String; a URL to an API endpoint
:return: Response object or None
"""
try:
return requests.get(url)
except requests.exceptions.RequestException as err:
print("There was an error while retrieving the data: %s" % err)
def parse_data(data):
"""
Helper function to parse a response object as JSON
:param data: Response object; data to be parsed
:return: JSON object or None
"""
# Check if the status code is 200 and raise error if not
if data.status_code == 200:
return data.json()
else:
raise ValueError('No data received')
###Output
_____no_output_____
###Markdown
Step 4 - Get annotations data We will use the annotations API endpoint (defined as `ANNOTATIONS_URL`) to get the functional annotations for Thrombin (defined as `ACCESSION`).
###Code
annotations_data = parse_data(get_data(ACCESSION, ANNOTATIONS_URL))
###Output
_____no_output_____
###Markdown
We then filter the data for the predicted binding sites annotations provided by P2rank and canSAR.
###Code
all_predicted_ligand_binding_residues = list()
for provider_data in annotations_data[ACCESSION]["data"]:
if provider_data["accession"] in ["p2rank", "cansar"]:
residues = [x["startIndex"] for x in provider_data["residues"]]
all_predicted_ligand_binding_residues.extend(residues)
###Output
_____no_output_____
###Markdown
These are the residues which are annotated as predicted ligand binding sites:
###Code
print(all_predicted_ligand_binding_residues)
###Output
[136, 329, 330, 331, 332, 333, 334, 336, 372, 383, 386, 388, 389, 390, 391, 406, 407, 410, 413, 414, 415, 417, 434, 436, 459, 493, 506, 507, 511, 541, 549, 565, 566, 568, 589, 590, 591, 596, 597, 605]
###Markdown
Step 5 - Get interaction interfaces data We will use the interaction interfaces API endpoint (defined as `INTERACTIONS_URL`) to get all the macromolecular interaction interface residues of Thrombin (defined as `ACCESSION`).
###Code
interactions_data = parse_data(get_data(ACCESSION, INTERACTIONS_URL))
###Output
_____no_output_____
###Markdown
We then list the macromolecular interaction partners of Thrombin:
###Code
interaction_partner_names = list()
for item in interactions_data[ACCESSION]["data"]:
interaction_partner_names.append(item["name"])
print(interaction_partner_names)
###Output
['Prothrombin', 'Hirudin variant-1', 'Other', 'Hirudin variant-2 (Fragment)', 'Hirudin-2', 'Salivary anti-thrombin peptide anophelin', 'DNA', 'Thrombomodulin', 'Heparin cofactor 2', 'Thrombin inhibitor madanin 1', 'Staphylocoagulase (Fragment)', 'Thrombininhibitor', 'AGAP008004-PA', 'Pancreatic trypsin inhibitor', 'Antithrombin-III', 'Proteinase-activated receptor 1', 'Uncharacterized protein avahiru', 'RNA', 'Fibrinogen alpha chain', 'Glia-derived nexin', 'HIRUDIN ANALOGUE', 'Hirudin-2B', 'Variegin', 'Proteinase-activated receptor 4', 'Plasma serine protease inhibitor', 'Hirudin-3A', 'Vitamin K-dependent protein C', 'Platelet glycoprotein Ib alpha chain', 'Hirullin-P18', 'N-acetylated chloromethylated fibrinopeptide A', 'CYCLOTHEONAMIDE A', 'BIVALIRUDIN C-terminus fragment', 'Coagulation factor V', "Hirudin-3B'", 'Kininogen-1', 'Hirudin-PA', "Hirudin-3A'", 'Hirudin-3', 'BIVALIRUDIN N-terminus fragment', 'Hirudin-2A', "N-terminal Asp des-amino Hirudin-3A'", 'Thrombostatin FM inhibitor [rOicPaF(p-Me)]', 'AERUGINOSIN 298-A']
###Markdown
We can see it has many interaction partners, and several of them are variants of Hirudin, a natural inhibitor of Thrombin. We will use `Hirudin variant-1` for the next steps of this example. Step 6 - Compare the interaction interface residues between Thrombin and Hirudin (variant-1) We compare the predicted ligand binding site residues with the interaction interface residues of Thrombin that interact with Hirudin (variant-1).
###Code
interface_residues_with_hirudin = list()
for item in interactions_data[ACCESSION]["data"]:
if item["name"] == "Hirudin variant-1":
interacting_residues = [x["startIndex"] for x in item["residues"] if x["startIndex"] in all_predicted_ligand_binding_residues]
interface_residues_with_hirudin.extend(interacting_residues)
###Output
_____no_output_____
###Markdown
We can see that there are 9 residues found in the region between GLU388 and GLY591 which both interact with Hirudin and are predicted to bind small molecules:
###Code
print(interface_residues_with_hirudin)
###Output
[388, 406, 434, 541, 565, 566, 568, 589, 591]
###Markdown
Summary of the results so far Using the PDBe Aggregated API we could retrieve all the residues of Thrombin which are predicted to bind small molecules. We then retrieved the data on macromolecular interactions between Thrombin and other proteins/peptides. We could see that Thrombin interacts with several variants of Hirudin. Next, we compared the predicted ligand binding sites with the interaction interface residues and saw that there is a region on the sequence of Thrombin where several potential target residues can be found. Step 7 - Retrieving observed ligand binding sites Next, we retrieve all the binding sites using the ligand sites API endpoint (defined as `LIGANDS_URL`) to get all the ligand binding residues of Thrombin (defined as `ACCESSION`).
###Code
ligands_data = parse_data(get_data(ACCESSION, LIGANDS_URL))
ligand_list = list()
for ligand in ligands_data[ACCESSION]["data"]:
for residue in ligand["residues"]:
if residue["startIndex"] in interface_residues_with_hirudin:
ligand_list.append(ligand["accession"])
break
###Output
_____no_output_____
###Markdown
Finally, we compare the ligands found in the PDB with the annotations and interaction interfaces we have collated in the previous steps, and we find that there are indeed many small molecules, such as TYS, MRD and P6G, that interact with the Thrombin residues which form the macromolecular interaction interface with Hirudin (variant-1).
###Code
print("There are %i ligands observed in PDB that bind to this " % len(ligand_list))
###Output
281
###Markdown
Data Exploration
###Code
#load the data to understand the attributes and data types
df.head()
#let's look at the data types
df.dtypes
#change temperature into a category as it's an ordinal datatype
df['temperature']=df['temperature'].astype('category')
###Output
_____no_output_____
###Markdown
Cleaning The Data
###Code
#check for empty values
df.info()
df["car"].value_counts()
df.drop('car', inplace=True, axis=1)
for x in df.columns[df.isna().any()]:
df = df.fillna({x: df[x].value_counts().idxmax()})
#change Object datatypes to Categorical datatypes
df_obj = df.select_dtypes(include=['object']).copy()
for col in df_obj.columns:
df[col]=df[col].astype('category')
df.dtypes
#let's do some statistical analysis
df.describe(include='all')
df.select_dtypes('int64').nunique()
df.drop(columns=['toCoupon_GEQ5min'], inplace=True)
fig, axes = plt.subplots(9, 2, figsize=(20,50))
axes = axes.flatten()
for ax, col in zip(axes, df.select_dtypes('category').columns):
sns.countplot(y=col, data=df, ax=ax,
palette="ch:.25", order=df[col].value_counts().index);
plt.tight_layout()
plt.show()
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
enc = OneHotEncoder(dtype='int64')
df_cat = df.select_dtypes(include=['category']).copy()
df_int = df.select_dtypes(include=['int64']).copy()
df_enc = pd.DataFrame()
for col in df_cat.columns:
enc_results = enc.fit_transform(df_cat[[col]])
df0 = pd.DataFrame(enc_results.toarray(), columns=enc.categories_)
df_enc = pd.concat([df_enc,df0], axis=1)
df_final = pd.concat([df_enc, df_int], axis=1)
#source: https://pbpython.com/categorical-encoding.html
for name in df_final.columns:
name1 = name
name = str(name).replace('(','').replace(')','').replace('\'','').replace(',','')
df_final = df_final.rename(columns={name1:name})
df_final
import numpy as np
import pandas as pd
from pandas.io.parsers import read_csv
from BOAmodel import *
from collections import defaultdict
""" parameters """
# The following parameters are recommended to change depending on the size and complexity of the data
N = 2000 # number of rules to be used in SA_patternbased and also the output of generate_rules
Niteration = 500 # number of iterations in each chain
Nchain = 2 # number of chains in the simulated annealing search algorithm
supp = 5 # 5% is a generally good number. The higher this supp, the 'larger' a pattern is
maxlen = 3 # maximum length of a pattern
# \rho = alpha/(alpha+beta). Make sure \rho is close to one when choosing alpha and beta.
alpha_1 = 500 # alpha_+
beta_1 = 1 # beta_+
alpha_2 = 500 # alpha_-
beta_2 = 1 # beta_-
type(df_final)
""" input file """
# # notice that in the example, X is already binary coded.
# # Data has to be binary coded and the column name shd have the form: attributename_attributevalue
# filepathX = 'tictactoe_X.txt' # input file X
# filepathY = 'tictactoe_Y.txt' # input file Y
# df = read_csv(filepathX,header=0,sep=" ")
# Y = np.loadtxt(open(filepathY,"rb"),delimiter=" ")
df = df_final.iloc[:,:-1]
Y = df_final.iloc[:,-1]
lenY = len(Y)
train_index = sample(range(lenY),int(0.70*lenY))
test_index = [i for i in range(lenY) if i not in train_index]
model = BOA(df.iloc[train_index],Y[train_index])
model.generate_rules(supp,maxlen,N)
model.set_parameters(alpha_1,beta_1,alpha_2,beta_2,None,None)
rules = model.SA_patternbased(Niteration,Nchain,print_message=True)
# test
Yhat = predict(rules,df.iloc[test_index])
TP,FP,TN,FN = getConfusion(Yhat,Y[test_index])
tpr = float(TP)/(TP+FN)
fpr = float(FP)/(FP+TN)
print('TP = {}, FP = {}, TN = {}, FN = {} \n accuracy = {}, tpr = {}, fpr = {}'.\
format(TP,FP,TN,FN, float(TP+TN)/(TP+TN+FP+FN),tpr,fpr))
###Output
_____no_output_____
###Markdown
Advanced Lane Finding Project The goals / steps of this project are the following:
* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
* Apply a distortion correction to raw images.
* Use color transforms, gradients, etc., to create a thresholded binary image.
* Apply a perspective transform to rectify the binary image ("birds-eye view").
* Detect lane pixels and fit to find the lane boundary.
* Determine the curvature of the lane and vehicle position with respect to center.
* Warp the detected lane boundaries back onto the original image.
* Output a visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
--- First, I'll compute the camera calibration using chessboard images
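The calibration computation itself is not shown in the cells below (the coefficients are loaded from `wide_dist_pickle.p` further down), so here is a rough sketch of how such a pickle could be produced with OpenCV; the `camera_cal/calibration*.jpg` path and the 9x6 inner-corner grid are assumptions, not taken from this project.
```python
import glob
import pickle
import cv2
import numpy as np

# Prepare object points for an assumed 9x6 inner-corner chessboard: (0,0,0), (1,0,0), ...
objp = np.zeros((6 * 9, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

objpoints = []  # 3D points in real-world space
imgpoints = []  # 2D points in the image plane

for fname in glob.glob('camera_cal/calibration*.jpg'):  # assumed location of the calibration images
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (9, 6), None)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# Calibrate and store the camera matrix and distortion coefficients for the pipeline below
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
with open("wide_dist_pickle.p", "wb") as f:
    pickle.dump({"mtx": mtx, "dist": dist}, f)
```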
###Code
import numpy as np
import cv2
import glob
import pickle
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import matplotlib.gridspec as gridspec
%matplotlib inline
def warp_image(img,src,dst,img_size):
# Apply perspective transform
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR)
Minv = cv2.getPerspectiveTransform(dst, src)
return warped,M,Minv
def apply_threshold(img, s_thresh=(170, 255), sx_thresh=(10, 100)):
hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS) # Convert to HLS color space and separate the S channel
s_channel = hls[:,:,2]
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Grayscale image
# Sobel x
sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
# Threshold x gradient
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel <= sx_thresh[1])] = 1
# Threshold color channel
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 1
# Stack each channel to view their individual contributions in green and blue respectively
color_binary = np.dstack(( np.zeros_like(sxbinary), sxbinary, s_binary))
# Combine the two binary thresholds
combined_binary = np.zeros_like(sxbinary)
combined_binary[(s_binary == 1) | (sxbinary == 1)] = 1
return combined_binary
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
return cv2.bitwise_and(img, mask)
def gaussian_blur(img, kernel=5):
# Function to smooth image
blur = cv2.GaussianBlur(img,(kernel,kernel),0)
return blur
def window_mask(width, height, img_ref, center,level):
output = np.zeros_like(img_ref)
output[int(img_ref.shape[0]-(level+1)*height):int(img_ref.shape[0]-level*height),max(0,int(center-width/2)):min(int(center+width/2),img_ref.shape[1])] = 1
return output
def find_window_centroids(warped, window_width, window_height, margin):
window_centroids = [] # Store the (left,right) window centroid positions per level
window = np.ones(window_width) # Create our window template that we will use for convolutions
# First find the two starting positions for the left and right lane by using np.sum to get the vertical image slice
# and then np.convolve the vertical image slice with the window template
# Sum quarter bottom of image to get slice, could use a different ratio
l_sum = np.sum(warped[int(3*warped.shape[0]/4):,:int(warped.shape[1]/2)], axis=0)
l_center = np.argmax(np.convolve(window,l_sum))-window_width/2
r_sum = np.sum(warped[int(3*warped.shape[0]/4):,int(warped.shape[1]/2):], axis=0)
r_center = np.argmax(np.convolve(window,r_sum))-window_width/2+int(warped.shape[1]/2)
# Add what we found for the first layer
window_centroids.append((l_center,r_center))
# Go through each layer looking for max pixel locations
for level in range(1,(int)(warped.shape[0]/window_height)):
# convolve the window into the vertical slice of the image
image_layer = np.sum(warped[int(warped.shape[0]-(level+1)*window_height):int(warped.shape[0]-level*window_height),:], axis=0)
conv_signal = np.convolve(window, image_layer)
# Find the best left centroid by using past left center as a reference
# Use window_width/2 as offset because convolution signal reference is at right side of window, not center of window
offset = window_width/2
l_min_index = int(max(l_center+offset-margin,0))
l_max_index = int(min(l_center+offset+margin,warped.shape[1]))
l_center = np.argmax(conv_signal[l_min_index:l_max_index])+l_min_index-offset
# Find the best right centroid by using past right center as a reference
r_min_index = int(max(r_center+offset-margin,0))
r_max_index = int(min(r_center+offset+margin,warped.shape[1]))
r_center = np.argmax(conv_signal[r_min_index:r_max_index])+r_min_index-offset
# Add what we found for that layer
window_centroids.append((l_center,r_center))
return window_centroids
def window_centroids_logits(window_centroids, warped):
# If we found any window centers
if len(window_centroids) > 0:
# Points used to draw all the left and right windows
l_points = np.zeros_like(warped)
r_points = np.zeros_like(warped)
# Go through each level and draw the windows
for level in range(0,len(window_centroids)):
# Window_mask is a function to draw window areas
l_mask = window_mask(window_width,window_height,warped,window_centroids[level][0],level)
r_mask = window_mask(window_width,window_height,warped,window_centroids[level][1],level)
# Add graphic points from window mask here to total pixels found
l_points[(l_points == 255) | ((l_mask == 1) ) ] = 255
r_points[(r_points == 255) | ((r_mask == 1) ) ] = 255
# Draw the results
template = np.array(r_points+l_points,np.uint8) # add both left and right window pixels together
zero_channel = np.zeros_like(template) # create a zero color channel
template = np.array(cv2.merge((zero_channel,template,zero_channel)),np.uint8) # make window pixels green
warpage = np.array(cv2.merge((warped,warped,warped)),np.uint8) # making the original road pixels 3 color channels
return cv2.addWeighted(warpage, 1, template, 0.5, 0.0) # overlay the orignal road image with window results
# If no window centers found, just display orginal road image
else:
return np.array(cv2.merge((warped,warped,warped)),np.uint8)
import random
img = mpimg.imread('test_images/straight_lines1.jpg')
# Read in a thresholded image
# window settings
window_width = 50
window_height = 80 # Break image into 9 vertical layers since image height is 720
margin = 100 # How much to slide left and right for searching
dist_pickle = pickle.load( open( "wide_dist_pickle.p", "rb" ) )
mtx = dist_pickle["mtx"]
dist = dist_pickle["dist"]
# Edit this function to create your own pipeline.
def pipeline(img):
img_size = img.shape
point1 = (img_size[1]*.58),(img_size[0]*.65)
point2 = (img_size[1]*.42),(img_size[0]*.65)
point3 = (img_size[1]*.10),(img_size[0]*.98)
point4 = (img_size[1]*.90),(img_size[0]*.98)
src = np.float32([[point1[0],point1[1]],[point2[0],point2[1]],[point3[0],point3[1]],[point4[0],point4[1]]])
h, w, d = img.shape
vertices = np.array([[point1[0],point1[1]],[point2[0],point2[1]],[point3[0],point3[1]],[point4[0],point4[1]]], dtype=np.int32)
point1 = (img_size[1]*.90),(0)
point2 = (img_size[1]*.10),(0)
point3 = (img_size[1]*.20),(img_size[0])
point4 = (img_size[1]*.80),(img_size[0])
dst = np.float32([[point1[0],point1[1]],[point2[0],point2[1]],[point3[0],point3[1]],[point4[0],point4[1]]], dtype=np.int32)
img = cv2.undistort(img, mtx, dist, None, mtx) #undistorted image
img = gaussian_blur(img, kernel=5)
combined_binary = apply_threshold(img,s_thresh=(170, 255), sx_thresh=(10, 100))
warped,M_warp,Minv_warp = warp_image(combined_binary,src,dst,(img_size[1],img_size[0]))
warped1,M_warp1,Minv_warp1 = warp_image(img,src,dst,(img_size[1],img_size[0]))
window_centroids = find_window_centroids(warped, window_width, window_height, margin)
output = window_centroids_logits(window_centroids, warped)
ploty = np.linspace(0, 719, num=720)# to cover same y-range as image
quadratic_coeff = 3e-4 # arbitrary quadratic coefficient
# For each y position generate random x position within +/-50 pix
# of the line base position in each case (x=200 for left, and x=900 for right)
leftx = np.array([200 + (y**2)*quadratic_coeff + np.random.randint(-50, high=51)
for y in ploty])
rightx = np.array([900 + (y**2)*quadratic_coeff + np.random.randint(-50, high=51)
for y in ploty])
leftx = leftx[::-1] # Reverse to match top-to-bottom in y
rightx = rightx[::-1] # Reverse to match top-to-bottom in y
# Fit a second order polynomial to pixel positions in each fake lane line
left_fit = np.polyfit(ploty, leftx, 2)
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fit = np.polyfit(ploty, rightx, 2)
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
# Define y-value where we want radius of curvature
# I'll choose the maximum y-value, corresponding to the bottom of the image
y_eval = np.max(ploty)
left_curverad = ((1 + (2*left_fit[0]*y_eval + left_fit[1])**2)**1.5) / np.absolute(2*left_fit[0])
right_curverad = ((1 + (2*right_fit[0]*y_eval + right_fit[1])**2)**1.5) / np.absolute(2*right_fit[0])
binary_warped = warped
# Assume you now have a new warped binary image
# from the next frame of video (also called "binary_warped")
# It's now much easier to find line pixels!
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + left_fit[2] + margin)))
right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + right_fit[2] + margin)))
# Again, extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
# Fit a second order polynomial to each
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
# Generate x and y values for plotting
ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
# Create an image to draw on and an image to show the selection window
out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255
window_img = np.zeros_like(out_img)
# Color in left and right line pixels
out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]
out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]
# Generate a polygon to illustrate the search window area
# And recast the x and y points into usable format for cv2.fillPoly()
left_line_window1 = np.array([np.transpose(np.vstack([left_fitx-margin, ploty]))])
left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx+margin, ploty])))])
left_line_pts = np.hstack((left_line_window1, left_line_window2))
right_line_window1 = np.array([np.transpose(np.vstack([right_fitx-margin, ploty]))])
right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx+margin, ploty])))])
right_line_pts = np.hstack((right_line_window1, right_line_window2))
# Draw the lane onto the warped blank image
cv2.fillPoly(window_img, np.int_([left_line_pts]), (0,255, 0))
cv2.fillPoly(window_img, np.int_([right_line_pts]), (0,255, 0))
result = cv2.addWeighted(out_img, 1, window_img, 0.3, 0)
#Visualize
# Create an image to draw on and an image to show the selection window
out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255
window_img = np.zeros_like(out_img)
# Color in left and right line pixels
out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]
out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]
# Generate a polygon to illustrate the search window area
# And recast the x and y points into usable format for cv2.fillPoly()
left_line_window1 = np.array([np.transpose(np.vstack([left_fitx-margin, ploty]))])
left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx+margin, ploty])))])
left_line_pts = np.hstack((left_line_window1, left_line_window2))
right_line_window1 = np.array([np.transpose(np.vstack([right_fitx-margin, ploty]))])
right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx+margin, ploty])))])
right_line_pts = np.hstack((right_line_window1, right_line_window2))
# Create an image to draw the lines on
warp_zero = np.zeros_like(warped).astype(np.uint8)
color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0,255,0))
# Warp the blank back to original image space using inverse perspective matrix (Minv)
newwarp = cv2.warpPerspective(color_warp, Minv_warp, (img.shape[1], img.shape[0]))
#return cv2.addWeighted(img, 1, newwarp, 0.3, 0),warped1,warped,combined_binary,output
return cv2.addWeighted(img, 1, newwarp, 0.3, 0)
"""
result,out_img,warped,combined_binary,output = pipeline(img)
# Plot the result
f, (ax1, ax2,ax3, ax4) = plt.subplots(1, 4, figsize=(24, 9))
f.tight_layout()
ax1.imshow(result)
ax1.set_title('Original Image', fontsize=40)
ax2.imshow(out_img, cmap='gray')
ax2.set_title('Pipeline Result', fontsize=40)
ax3.imshow(warped, cmap="gray")
ax3.set_title('Original Image', fontsize=40)
ax4.imshow(output, cmap='gray')
ax4.set_title('Pipeline Result', fontsize=40)
plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
"""
import numpy as np
histogram = np.sum(result[result.shape[0]//2:,:], axis=0)
plt.plot(histogram)
# Define a class to receive the characteristics of each line detection
class Line():
def __init__(self):
# was the line detected in the last iteration?
self.detected = False
# x values of the last n fits of the line
self.recent_xfitted = []
#average x values of the fitted line over the last n iterations
self.bestx = None
#polynomial coefficients averaged over the last n iterations
self.best_fit = None
#polynomial coefficients for the most recent fit
self.current_fit = [np.array([False])]
#radius of curvature of the line in some units
self.radius_of_curvature = None
#distance in meters of vehicle center from the line
self.line_base_pos = None
#difference in fit coefficients between last and new fits
self.diffs = np.array([0,0,0], dtype='float')
#x values for detected line pixels
self.allx = None
#y values for detected line pixels
self.ally = None
### Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
# Set up lines for left and right
left_lane = Line()
right_lane = Line()
white_output = 'white.mp4'
clip1 = VideoFileClip("project_video.mp4")
white_clip = clip1.fl_image(pipeline) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
### Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
# Set up lines for left and right
left_lane = Line()
right_lane = Line()
white_output = '2.mp4'
clip1 = VideoFileClip("challenge_video.mp4")
white_clip = clip1.fl_image(pipeline) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
### Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
# Set up lines for left and right
left_lane = Line()
right_lane = Line()
white_output = '3.mp4'
clip1 = VideoFileClip("harder_challenge_video.mp4")
white_clip = clip1.fl_image(pipeline) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
###Output
[MoviePy] >>>> Building video 3.mp4
[MoviePy] Writing video 3.mp4
###Markdown
Kriging example

Since the global database we used cannot be shared, we demonstrate how the codes can be used with freely available data from Assumpcao et al. (2013) for South America. For simplicity's sake we did not use two different categories here, but focused on the continental area instead, by simply discarding all points where the Moho depth is less than 30 km.
###Code
import numpy as np
import matplotlib.pyplot as plt
import clean_kriging
import sklearn.cluster as cluster
from func_dump import get_pairwise_geo_distance
import logging
logging.basicConfig(level=logging.DEBUG)
def test_cluster_size(point_data,max_size,do_plot=False,chosen_range=None,
perc_levels=20):
"""Test effect of number of clusters on cluster radius and size
"""
cluster_sizes = range(5,max_size,1)
radius_1 = np.zeros((len(cluster_sizes),3))
cluster_N = np.zeros((len(cluster_sizes),3))
percentages = np.zeros((len(cluster_sizes),perc_levels+1))
X = point_data
Xsel = X
pd = get_pairwise_geo_distance(Xsel[:,0],Xsel[:,1])
for k,n_clusters in enumerate(cluster_sizes):
model = cluster.AgglomerativeClustering(linkage='complete',affinity='precomputed',n_clusters=n_clusters)
model.fit(pd)
radius = np.zeros((n_clusters))
cluster_members = np.zeros((n_clusters))
for i,c in enumerate(np.unique(model.labels_)):
ix = np.where(model.labels_==c)[0]
radius[i] = 0.5*pd[np.ix_(ix,ix)].max()
cluster_members[i] = np.sum(model.labels_==c)
r1i,r1a,r1s = (radius.min(),radius.max(),radius.std())
radius_1[k,0] = r1i
radius_1[k,1] = r1a
radius_1[k,2] = np.median(radius)
percentages[k,:] = np.percentile(radius,np.linspace(0,100,perc_levels+1))
radius_1 = radius_1*110.0
percentages = percentages*110.0
if do_plot:
plt.plot(cluster_sizes,radius_1)
for i in range(perc_levels):
if i<perc_levels/2:
alpha = (i+1)*2.0/perc_levels
else:
alpha = (perc_levels-i)*2.0/perc_levels
plt.fill_between(cluster_sizes,percentages[:,i],percentages[:,i+1],
alpha=alpha,facecolor='green',edgecolor='none')
if not chosen_range is None:
return cluster_sizes[np.argmin(np.abs(radius_1[:,2]-chosen_range))]
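# Illustrative usage of the helper above (the 40-cluster maximum and the
# ~500 km target median radius are arbitrary example values):
# n_chosen = test_cluster_size(point_data, 40, do_plot=True, chosen_range=500.0)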
def cluster_map(krigor):
"""Visualize distribution spatial distribution of a cluster
"""
fig = plt.figure(figsize=(7,11))
Xsel = krigor.X
model = krigor.cluster_results[0]
n_clusters = model.n_clusters
cmap = plt.cm.get_cmap("jet",n_clusters)
clu = model.cluster_centers_
pointsize = np.sqrt(np.bincount(model.labels_))
for i in range(len(Xsel)):
j = model.labels_[i]
if (Xsel[i,0]*clu[j,0])<0 and np.abs(np.abs(clu[j,0])-180.0) < 10.0:
continue
plt.plot((Xsel[i,0],clu[j,0]),(Xsel[i,1],clu[j,1]),color=cmap(model.labels_[i]),alpha=0.5)
print clu.shape,n_clusters,pointsize.shape
plt.scatter(clu[:,0],clu[:,1],7.5*pointsize,np.linspace(0,n_clusters,n_clusters),'s',
alpha=1.0,cmap=cmap,edgecolor='r',linewidth=1.5)
plt.scatter(Xsel[:,0],Xsel[:,1],2,model.labels_,cmap=cmap,alpha=1.0,edgecolor='k')
plt.axis('equal')
plt.xlabel('Longitude')
plt.ylabel('Latitude')
#plt.xlim([-90,-20])
###Output
_____no_output_____
###Markdown
Data input

We load the file shipped together with this example. See the inside of the file for references to the sources.
###Code
point_data = np.loadtxt("Seismic_Moho_Assumpcao.txt",delimiter=",")
point_data[:,2] = -0.001*point_data[:,2]
point_data = point_data[point_data[:,2]>30.0,:]
lon = np.arange(np.round(point_data[:,0].min()),np.round(point_data[:,0].max()+1),1)
lat = np.arange(np.round(point_data[:,1].min()),np.round(point_data[:,1].max()+1),1)
lonGrid,latGrid = np.meshgrid(lon,lat)
###Output
_____no_output_____
###Markdown
Prior specification

We want to use inverse gamma priors for nugget, sill and range. The inverse gamma distribution is defined in terms of the parameters $\alpha$ and $\beta$, which we derive here from a specified mean and variance.

$$\mu = \mathrm{Mean} = \frac{\beta}{\alpha-1} \quad \text{and}\quad \sigma^2 = \mathrm{var} = \frac{\beta^2}{(\alpha-1)^2(\alpha-2)}$$

Thus,

$$\alpha = 2 + \frac{\mu^2}{\sigma^2} \quad \text{and}\quad \beta = \frac{\mu^3}{\sigma^2} + \mu$$

The variable `moments` contains the mean and variance for each of nugget, sill and range. The last dimension of `moments` would be used if there were different categories (i.e. ocean vs. continent), but in this example this is not required.
###Code
moments = np.zeros((3,2,1))
moments[:,:,0] = np.array(((1.0,3.0**2),(40.0,40.0**2),(10.0,10.0**2)))
beta = moments[:,0,:]**3/moments[:,1,:]+moments[:,0,:]
alpha = 2 + moments[:,0,:]**2 / moments[:,1,:]
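# Optional sanity check (assumes scipy is available): scipy's inverse gamma uses
# shape a and scale b with mean b/(a-1) and variance b**2/((a-1)**2 (a-2)), so the
# alpha/beta derived above should reproduce the means and variances in `moments`.
from scipy.stats import invgamma
check_mean, check_var = invgamma.stats(alpha[:, 0], scale=beta[:, 0], moments='mv')
# expect check_mean ~ [1, 40, 10] and check_var ~ [9, 1600, 100]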
###Output
_____no_output_____
###Markdown
Clustering

All important routines are contained in objects of the class `MLEKrigor`. Such an object is created by passing it longitude, latitude, value and category. In this example, every point is assigned the same category. Any clustering algorithm from the scikit-learn package can be used; any options contained in the dictionary `clusterOptions` will be passed to its constructor. After clustering, the covariance parameters for all clusters are determined (`krigor._fit_all_clusters`).
###Code
cat = np.ones((point_data.shape[0]),dtype=int)
krigor = clean_kriging.MLEKrigor(point_data[:,0],point_data[:,1],point_data[:,2],cat)
clusterOptions=[{'linkage':'complete','affinity':'precomputed','n_clusters':16}]
krigor._cluster_points(cluster.AgglomerativeClustering,options=clusterOptions,use_pd=True)
krigor._detect_dupes()
krigor._fit_all_clusters(minNugget=0.5,minSill=1.0,
hyperpars=np.dstack((alpha,beta)),prior="inv_gamma",maxRange=None)
krigDict = {"threshold":1,"lambda_w":1.0,"minSill":1.0,
"minNugget":0.5,
"maxAbsError":4.0,"maxRelError":2.0,"badPoints":None,
"hyperPars":np.dstack((alpha,beta)),"prior":"inv_gamma",
"blocks":10}
cluster_map(krigor)
###Output
(16L, 2L) 16 (16L,)
###Markdown
In this map, the individual points are connected with lines to their respective cluster center.

Outlier detection

This is the most time-consuming step. The routine `jacknife` performs the leave-one-out cross-validation to detect possible outliers. Two criteria are used to determine whether a point is an outlier:
1. The **absolute** prediction error is 4 km or more.
2. The prediction error is at least twice as high as the estimated error.

This is controlled by the values `maxAbsError` and `maxRelError` passed to the function `jacknife`. The third parameter ($\lambda_w$) controls how the covariance parameters are interpolated. There are two rounds of outlier detection (see the main text for an explanation).
###Code
sigma1,new_chosen = krigor.jacknife(4.0,2.0,100.0)
krigor.chosen_points = new_chosen.copy()
krigor._fit_all_clusters(minNugget=0.5,minSill=1.0,
hyperpars=krigDict["hyperPars"],prior="inv_gamma",maxRange=None)
sigma2,new_new_chosen = krigor.jacknife(4.0,2.0,100.0)
krigor.chosen_points = new_new_chosen.copy()
krigor._fit_all_clusters(minNugget=0.5,minSill=1.0,
hyperpars=krigDict["hyperPars"],prior="inv_gamma",maxRange=None)
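# Illustration of the two rejection criteria with made-up numbers (this is not
# the library's internal code, and it assumes both conditions must hold):
example_abs_err = 6.5   # hypothetical |predicted - observed| Moho depth, km
example_sigma = 2.0     # hypothetical kriging standard error, km
example_is_outlier = (example_abs_err >= 4.0) and (example_abs_err >= 2.0 * example_sigma)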
###Output
clean_kriging.py:119: RuntimeWarning: divide by zero encountered in log
return np.sum(-(hyperpars[:,0]+1)*np.log(vals) - hyperpars[:,1]/vals)
clean_kriging.py:119: RuntimeWarning: divide by zero encountered in divide
return np.sum(-(hyperpars[:,0]+1)*np.log(vals) - hyperpars[:,1]/vals)
clean_kriging.py:119: RuntimeWarning: invalid value encountered in subtract
return np.sum(-(hyperpars[:,0]+1)*np.log(vals) - hyperpars[:,1]/vals)
INFO:root:Jacknife category 0 label 1
DEBUG:root:Jacknife_kriging_all_chosen: 0/146
...
DEBUG:root:Jacknife_kriging_all_chosen: 145/146
INFO:root:Jacknife category 1 label 1
DEBUG:root:Jacknife_kriging_all_chosen: 0/8
...
DEBUG:root:Jacknife_kriging_all_chosen: 7/8
INFO:root:Jacknife category 2 label 1
DEBUG:root:Jacknife_kriging_all_chosen: 0/18
...
DEBUG:root:Jacknife_kriging_all_chosen: 17/18
INFO:root:Jacknife category 3 label 1
DEBUG:root:Jacknife_kriging_all_chosen: 0/125
...
DEBUG:root:Jacknife_kriging_all_chosen: 124/125
INFO:root:Jacknife category 4 label 1
DEBUG:root:Jacknife_kriging_all_chosen: 0/40
...
DEBUG:root:Jacknife_kriging_all_chosen: 20/40
###Markdown
Interpolation

To run the actual interpolation, the `predict` method of the `MLEKrigor` object is used. It takes longitude, latitude and category as main inputs. In addition, $\lambda_w$ needs to be specified; this mainly affects the obtained uncertainties. If desired, the full covariance matrix can also be calculated, but due to memory constraints only the variance (the main diagonal) is computed by default. Note that `predict` does not preserve the shape of the input points, so the outputs need to be reshaped. Furthermore, the **variance** of the error is returned (to be compatible with the full covariance case), not the standard deviation!
###Code
cat_grid = np.ones(lonGrid.shape,dtype=int)
pred,krigvar,predPars = krigor.predict(lonGrid.flatten(),latGrid.flatten(),cat_grid.flatten(),
lambda_w=100.0,get_covar=False)
pred = pred.reshape(lonGrid.shape)
krigvar = krigvar.reshape(lonGrid.shape)
plt.figure()
plt.contourf(lonGrid,latGrid,pred)
cbar = plt.colorbar()
cbar.set_label('Moho depth [km]')
plt.axis('equal')
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.figure()
plt.contourf(lonGrid,latGrid,np.sqrt(krigvar))
cbar = plt.colorbar()
cbar.set_label('Moho uncertainty [km]')
plt.axis('equal')
###Output
_____no_output_____
###Markdown
Example of running full waveform source mechanism inversion using SeisSrcInv

This Jupyter notebook provides an example of how to use the Python module SeisSrcInv to perform a full waveform source mechanism inversion. First, an example of how to run an inversion is given using SeisSrcInv.inversion. The results of this inversion are then plotted using SeisSrcInv.plot.
###Code
# Import the module:
import SeisSrcInv
###Output
_____no_output_____
###Markdown
1. Setup and perform a basic full waveform inversion
###Code
# Specify all inversion input variables:
datadir = 'data/real_and_greens_func_data'
outdir = 'data/FW_data_out'
real_data_fnames = ['real_data_ST01_z.txt', 'real_data_ST01_r.txt', 'real_data_ST01_t.txt', 'real_data_ST02_z.txt', 'real_data_ST02_r.txt', 'real_data_ST02_t.txt', 'real_data_ST03_z.txt', 'real_data_ST03_r.txt', 'real_data_ST03_t.txt'] # List of real waveform data files within datadir corresponding to each station (i.e. length is number of stations to invert for)
MT_green_func_fnames = ['green_func_array_MT_ST01_z.txt', 'green_func_array_MT_ST01_r.txt', 'green_func_array_MT_ST01_t.txt', 'green_func_array_MT_ST02_z.txt', 'green_func_array_MT_ST02_r.txt', 'green_func_array_MT_ST02_t.txt', 'green_func_array_MT_ST03_z.txt', 'green_func_array_MT_ST03_r.txt', 'green_func_array_MT_ST03_t.txt'] # List of Green's functions data files (generated using fk code) within datadir corresponding to each station (i.e. length is number of stations to invert for)
single_force_green_func_fnames = ['green_func_array_single_force_ST01_z.txt', 'green_func_array_single_force_ST01_r.txt', 'green_func_array_single_force_ST01_t.txt', 'green_func_array_single_force_ST02_z.txt', 'green_func_array_single_force_ST02_r.txt', 'green_func_array_single_force_ST02_t.txt', 'green_func_array_single_force_ST03_z.txt', 'green_func_array_single_force_ST03_r.txt', 'green_func_array_single_force_ST03_t.txt'] # List of Green's functions data files (generated using fk code) within datadir corresponding to each station (i.e. length is number of stations to invert for)
data_labels = ["ST01, Z", "ST01, R", "ST01, T", "ST02, Z", "ST02, R", "ST02, T", "ST03, Z", "ST03, R", "ST03, T"] # Format of these labels must be of the form "station_name, comp" with the comma
inversion_type = 'DC' # Inversion type automatically filled (if single force, greens functions must be 3 components rather than 6)
perform_normallised_waveform_inversion = False
compare_all_waveforms_simultaneously = False
num_samples = 1000 # Number of samples to perform Monte Carlo over
comparison_metric = "VR"
manual_indices_time_shift_MT = [9, -10, -9, 6, -15, -15, 8, 14, -13]
manual_indices_time_shift_SF = [9, -11, -10, 6, -16, -16, 7, 13, -14]
cut_phase_start_vals = [0, 600, 600, 0, 575, 575, 0, 650, 650]
cut_phase_length = 150
nlloc_hyp_filename = "data/NLLoc_data/loc.Tom__RunNLLoc000.20090121.042009.grid0.loc.hyp"
num_processors = 1 # Number of processors to run for (default is 1)
set_pre_time_shift_values_to_zero_switch = False # If True, sets values before time shift to zero (default is True)
return_absolute_similarity_values_switch = True # If True, will also save absolute similarity values, as well as the normallised values.
# And perform inversion:
SeisSrcInv.inversion.run(datadir, outdir, real_data_fnames, MT_green_func_fnames, single_force_green_func_fnames, data_labels, inversion_type, perform_normallised_waveform_inversion, compare_all_waveforms_simultaneously, num_samples, comparison_metric, manual_indices_time_shift_MT, manual_indices_time_shift_SF, nlloc_hyp_filename, num_processors=num_processors, set_pre_time_shift_values_to_zero_switch=set_pre_time_shift_values_to_zero_switch, return_absolute_similarity_values_switch=return_absolute_similarity_values_switch, cut_phase_start_vals=cut_phase_start_vals, cut_phase_length=cut_phase_length)
###Output
_____no_output_____
###Markdown
MyGrADS

This is a collection of functions implemented in Python that replicate their GrADS counterparts.

Content:
1. Centered Differences (cdiff)
2. Horizontal Divergence (hdivg)
3. Vertical component of the relative vorticity (hcurl)
4. Horizontal Advection (hadv)

Only NumPy is required. In this example, we use xarray to read in the nc files, and Matplotlib and Cartopy for plotting.

Usual Imports
###Code
import numpy as np
import xarray as xr
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
###Output
/work/uo1075/u241292/conda_envs/py37/lib/python3.7/site-packages/dask/config.py:168: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
data = yaml.load(f.read()) or {}
/work/uo1075/u241292/conda_envs/py37/lib/python3.7/site-packages/distributed/config.py:20: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
defaults = yaml.load(f)
###Markdown
Import MyGrADS
###Code
import sys
sys.path.append('/home/zmaw/u241292/scripts/python/mygrads')
import mygrads as mg
###Output
_____no_output_____
###Markdown
Read Some Data
###Code
# We are using some sample data downloaded from the NCEP Reanalysis 2
# Downloaded from: https://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis2.html
ds = xr.open_dataset('data/u.nc')
u = ds['uwnd'][0,0,:,:].values
lat = ds['lat'].values
lon = ds['lon'].values
ds = xr.open_dataset('data/v.nc')
v = ds['vwnd'][0,0,:,:].values
ds = xr.open_dataset('data/t.nc')
t = ds['air'][0,0,:,:].values
###Output
_____no_output_____
###Markdown
Calculations

Horizontal Divergence

$\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}$
###Code
div = mg.hdivg(u,v,lat,lon)
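# Rough cross-check (a sketch, not the mygrads implementation): the divergence can
# also be built directly from centered differences on the sphere. np.gradient uses
# centered differences in the interior and one-sided differences at the edges.
# Values very near the poles blow up because of the 1/cos(lat) factor.
R_earth = 6.371e6                                   # Earth radius in metres
lat_r, lon_r = np.deg2rad(lat), np.deg2rad(lon)
coslat = np.cos(lat_r)[:, None]
dlon = np.gradient(lon_r)[None, :]                  # radians per grid index
dlat = np.gradient(lat_r)[:, None]
dudx = np.gradient(u, axis=1) / (R_earth * coslat * dlon)
dvdy = np.gradient(v * coslat, axis=0) / (R_earth * coslat * dlat)
div_check = dudx + dvdy                             # should be close to `div` above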
###Output
_____no_output_____
###Markdown
Relative Vorticity (vertical component)

$\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}$
###Code
vort = mg.hcurl(u,v,lat,lon)
###Output
_____no_output_____
###Markdown
Temperature Advection

$u\frac{\partial T}{\partial x}+v\frac{\partial T}{\partial y}$
###Code
tadv = mg.hadv(u,v,t,lat,lon)
fig = plt.figure(figsize=(20, 16))
ax = fig.add_subplot(2,2,1,projection=ccrs.Mercator())
ax.set_extent([-120, -10, -60, 10], crs=ccrs.PlateCarree())
ax.coastlines(resolution='50m')
mesh = ax.pcolormesh(lon, lat, t - 273.15,  # convert Kelvin to degrees Celsius
vmin=-30,vmax=0,
cmap="Spectral_r",
transform=ccrs.PlateCarree())
cbar=plt.colorbar(mesh, shrink=0.75,label='[°C]')
q = ax.quiver(lon, lat, u, v, minlength=0.1,
scale_units='xy',scale=0.0001,
transform=ccrs.PlateCarree(),
color='k',width=0.003)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Color Based
###Code
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import cv2
import glob
import time
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
# NOTE: the next import is only valid
# for scikit-learn version <= 0.17
# if you are using scikit-learn >= 0.18 then use this:
# from sklearn.model_selection import train_test_split
from sklearn.cross_validation import train_test_split
# Define a function to compute binned color features
def bin_spatial(img, size=(32, 32)):
# Use cv2.resize().ravel() to create the feature vector
features = cv2.resize(img, size).ravel()
# Return the feature vector
return features
# Define a function to compute color histogram features
def color_hist(img, nbins=32, bins_range=(0, 256)):
# Compute the histogram of the color channels separately
channel1_hist = np.histogram(img[:,:,0], bins=nbins, range=bins_range)
channel2_hist = np.histogram(img[:,:,1], bins=nbins, range=bins_range)
channel3_hist = np.histogram(img[:,:,2], bins=nbins, range=bins_range)
# Concatenate the histograms into a single feature vector
hist_features = np.concatenate((channel1_hist[0], channel2_hist[0], channel3_hist[0]))
# Return the individual histograms, bin_centers and feature vector
return hist_features
# Define a function to extract features from a list of images
# Have this function call bin_spatial() and color_hist()
def extract_features(imgs, cspace='RGB', spatial_size=(32, 32),
hist_bins=32, hist_range=(0, 256)):
# Create a list to append feature vectors to
features = []
# Iterate through the list of images
for file in imgs:
# Read in each one by one
image = mpimg.imread(file)
# apply color conversion if other than 'RGB'
if cspace != 'RGB':
if cspace == 'HSV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
elif cspace == 'LUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2LUV)
elif cspace == 'HLS':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)
elif cspace == 'YUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)
else: feature_image = np.copy(image)
# Apply bin_spatial() to get spatial color features
spatial_features = bin_spatial(feature_image, size=spatial_size)
# Apply color_hist() also with a color space option now
hist_features = color_hist(feature_image, nbins=hist_bins, bins_range=hist_range)
# Append the new feature vector to the features list
features.append(np.concatenate((spatial_features, hist_features)))
# Return list of feature vectors
return features
spatial = 32
histbin = 32
car_features = extract_features(cars, cspace='RGB', spatial_size=(spatial, spatial),
hist_bins=histbin, hist_range=(0, 256))
notcar_features = extract_features(notcars, cspace='RGB', spatial_size=(spatial, spatial),
hist_bins=histbin, hist_range=(0, 256))
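# With these settings each image yields 32*32*3 = 3072 spatially binned values
# plus 3*32 = 96 histogram counts, i.e. 3168 color features in total, which is
# what the "Feature vector length" printed below reports.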
# Create an array stack of feature vectors
X = np.vstack((car_features, notcar_features)).astype(np.float64)
# Fit a per-column scaler
X_scaler = StandardScaler().fit(X)
# Apply the scaler to X
scaled_X = X_scaler.transform(X)
# Define the labels vector
y = np.hstack((np.ones(len(car_features)), np.zeros(len(notcar_features))))
# Split up data into randomized training and test sets
rand_state = np.random.randint(0, 100)
X_train, X_test, y_train, y_test = train_test_split(
scaled_X, y, test_size=0.2, random_state=rand_state)
print('Using spatial binning of:',spatial,
'and', histbin,'histogram bins')
print('Feature vector length:', len(X_train[0]))
# Use a linear SVC
svc = LinearSVC()
# Check the training time for the SVC
t=time.time()
svc.fit(X_train, y_train)
t2 = time.time()
print(round(t2-t, 2), 'Seconds to train SVC...')
# Check the score of the SVC
print('Test Accuracy of SVC = ', round(svc.score(X_test, y_test), 4))
# Check the prediction time for a single sample
t=time.time()
n_predict = 10
print('My SVC predicts: ', svc.predict(X_test[0:n_predict]))
print('For these',n_predict, 'labels: ', y_test[0:n_predict])
t2 = time.time()
print(round(t2-t, 5), 'Seconds to predict', n_predict,'labels with SVC')
###Output
Using spatial binning of: 32 and 32 histogram bins
Feature vector length: 3168
47.43 Seconds to train SVC...
Test Accuracy of SVC = 0.9108
My SVC predicts: [ 0. 1. 0. 1. 0. 0. 1. 0. 1. 0.]
For these 10 labels: [ 0. 1. 0. 1. 0. 0. 1. 0. 1. 1.]
0.00152 Seconds to predict 10 labels with SVC
###Markdown
HOG Based
###Code
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import cv2
import glob
import time
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from skimage.feature import hog
# NOTE: the next import is only valid for scikit-learn version <= 0.17
# for scikit-learn >= 0.18 use:
# from sklearn.model_selection import train_test_split
from sklearn.cross_validation import train_test_split
# Define a function to return HOG features and visualization
def get_hog_features(img, orient, pix_per_cell, cell_per_block,
vis=False, feature_vec=True):
# Call with two outputs if vis==True
if vis == True:
features, hog_image = hog(img, orientations=orient, pixels_per_cell=(pix_per_cell, pix_per_cell),
cells_per_block=(cell_per_block, cell_per_block), transform_sqrt=True,
visualise=vis, feature_vector=feature_vec)
return features, hog_image
# Otherwise call with one output
else:
features = hog(img, orientations=orient, pixels_per_cell=(pix_per_cell, pix_per_cell),
cells_per_block=(cell_per_block, cell_per_block), transform_sqrt=True,
visualise=vis, feature_vector=feature_vec)
return features
# Define a function to extract features from a list of images
# Have this function call bin_spatial() and color_hist()
def extract_features(imgs, cspace='RGB', orient=9,
pix_per_cell=8, cell_per_block=2, hog_channel=0):
# Create a list to append feature vectors to
features = []
# Iterate through the list of images
for file in imgs:
# Read in each one by one
image = mpimg.imread(file)
# apply color conversion if other than 'RGB'
if cspace != 'RGB':
if cspace == 'HSV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
elif cspace == 'LUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2LUV)
elif cspace == 'HLS':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)
elif cspace == 'YUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)
elif cspace == 'YCrCb':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2YCrCb)
else: feature_image = np.copy(image)
# Call get_hog_features() with vis=False, feature_vec=True
if hog_channel == 'ALL':
hog_features = []
for channel in range(feature_image.shape[2]):
hog_features.append(get_hog_features(feature_image[:,:,channel],
orient, pix_per_cell, cell_per_block,
vis=False, feature_vec=True))
hog_features = np.ravel(hog_features)
else:
hog_features = get_hog_features(feature_image[:,:,hog_channel], orient,
pix_per_cell, cell_per_block, vis=False, feature_vec=True)
# Append the new feature vector to the features list
features.append(hog_features)
# Return list of feature vectors
return features
colorspace = 'RGB' # Can be RGB, HSV, LUV, HLS, YUV, YCrCb
orient = 9
pix_per_cell = 8
cell_per_block = 2
hog_channel = 0 # Can be 0, 1, 2, or "ALL"
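# For a 64x64 training image these settings give (64/8 - 1)**2 = 49 blocks, each
# holding 2*2 cells x 9 orientations = 36 values, so 49*36 = 1764 HOG features
# per channel, which is the "Feature vector length" printed below for hog_channel = 0.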
t=time.time()
car_features = extract_features(cars, cspace=colorspace, orient=orient,
pix_per_cell=pix_per_cell, cell_per_block=cell_per_block,
hog_channel=hog_channel)
notcar_features = extract_features(notcars, cspace=colorspace, orient=orient,
pix_per_cell=pix_per_cell, cell_per_block=cell_per_block,
hog_channel=hog_channel)
t2 = time.time()
print(round(t2-t, 2), 'Seconds to extract HOG features...')
# Create an array stack of feature vectors
X = np.vstack((car_features, notcar_features)).astype(np.float64)
# Fit a per-column scaler
X_scaler = StandardScaler().fit(X)
# Apply the scaler to X
scaled_X = X_scaler.transform(X)
# Define the labels vector
y = np.hstack((np.ones(len(car_features)), np.zeros(len(notcar_features))))
# Split up data into randomized training and test sets
rand_state = np.random.randint(0, 100)
X_train, X_test, y_train, y_test = train_test_split(
scaled_X, y, test_size=0.2, random_state=rand_state)
print('Using:',orient,'orientations',pix_per_cell,
'pixels per cell and', cell_per_block,'cells per block')
print('Feature vector length:', len(X_train[0]))
# Use a linear SVC
svc = LinearSVC()
# Check the training time for the SVC
t=time.time()
svc.fit(X_train, y_train)
t2 = time.time()
print(round(t2-t, 2), 'Seconds to train SVC...')
# Check the score of the SVC
print('Test Accuracy of SVC = ', round(svc.score(X_test, y_test), 4))
# Check the prediction time for a single sample
t=time.time()
n_predict = 10
print('My SVC predicts: ', svc.predict(X_test[0:n_predict]))
print('For these',n_predict, 'labels: ', y_test[0:n_predict])
t2 = time.time()
print(round(t2-t, 5), 'Seconds to predict', n_predict,'labels with SVC')
###Output
62.57 Seconds to extract HOG features...
Using: 9 orientations 8 pixels per cell and 2 cells per block
Feature vector length: 1764
13.28 Seconds to train SVC...
Test Accuracy of SVC = 0.9426
My SVC predicts: [ 1. 1. 1. 0. 1. 0. 1. 0. 0. 1.]
For these 10 labels: [ 1. 1. 1. 0. 0. 0. 1. 0. 0. 1.]
0.00151 Seconds to predict 10 labels with SVC
###Markdown
Combining Features
###Code
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import cv2
import glob
import time
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
# NOTE: the next import is only valid
# for scikit-learn version <= 0.17
# if you are using scikit-learn >= 0.18 then use this:
# from sklearn.model_selection import train_test_split
from sklearn.cross_validation import train_test_split
# Define a function to compute binned color features
def bin_spatial(img, size=(32, 32)):
# Use cv2.resize().ravel() to create the feature vector
features = cv2.resize(img, size).ravel()
# Return the feature vector
return features
# Define a function to compute color histogram features
def color_hist(img, nbins=32, bins_range=(0, 256)):
# Compute the histogram of the color channels separately
channel1_hist = np.histogram(img[:,:,0], bins=nbins, range=bins_range)
channel2_hist = np.histogram(img[:,:,1], bins=nbins, range=bins_range)
channel3_hist = np.histogram(img[:,:,2], bins=nbins, range=bins_range)
# Concatenate the histograms into a single feature vector
hist_features = np.concatenate((channel1_hist[0], channel2_hist[0], channel3_hist[0]))
# Return the individual histograms, bin_centers and feature vector
return hist_features
# Define a function to extract features from a list of images
# Have this function call bin_spatial() and color_hist()
def extract_features(imgs, cspace='RGB', spatial_size=(32, 32),
hist_bins=32, hist_range=(0, 256),
orient=9, pix_per_cell=8, cell_per_block=2, hog_channel='ALL'):
# Create a list to append feature vectors to
features = []
# Iterate through the list of images
for file in imgs:
# Read in each one by one
image = mpimg.imread(file)
# apply color conversion if other than 'RGB'
if cspace != 'RGB':
if cspace == 'HSV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
elif cspace == 'LUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2LUV)
elif cspace == 'HLS':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)
elif cspace == 'YUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)
else: feature_image = convert_color(image, conv='RGB2YCrCb')
# Apply bin_spatial() to get spatial color features
spatial_features = bin_spatial(feature_image, size=spatial_size)
# Apply color_hist() also with a color space option now
hist_features = color_hist(feature_image, nbins=hist_bins, bins_range=hist_range)
# Call get_hog_features() with vis=False, feature_vec=True
if hog_channel == 'ALL':
hog_features = []
for channel in range(feature_image.shape[2]):
hog_features.append(get_hog_features(feature_image[:,:,channel],
orient, pix_per_cell, cell_per_block,
vis=False, feature_vec=True))
hog_features = np.ravel(hog_features)
else:
hog_features = get_hog_features(feature_image[:,:,hog_channel], orient,
pix_per_cell, cell_per_block, vis=False, feature_vec=True)
# Append the new feature vector to the features list
features.append(np.concatenate((spatial_features, hist_features, hog_features)))
# Return list of feature vectors
return features
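# With the defaults used below (spatial 32x32, 32 histogram bins, 9 orientations,
# 8 px per cell, 2 cells per block, hog_channel='ALL') each image yields
# 3*32*32 = 3072 spatial + 3*32 = 96 histogram + 3*1764 = 5292 HOG values,
# i.e. 8460 features in total, matching the "Feature vector length" printed below.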
# Extract the combined color, histogram and HOG features and train the classifier under the chosen binning scenario
spatial = 32
histbin = 32
car_features = extract_features(cars, cspace='RGB', spatial_size=(spatial, spatial),
hist_bins=histbin, hist_range=(0, 256))
notcar_features = extract_features(notcars, cspace='RGB', spatial_size=(spatial, spatial),
hist_bins=histbin, hist_range=(0, 256))
# Create an array stack of feature vectors
X = np.vstack((car_features, notcar_features)).astype(np.float64)
# Fit a per-column scaler
X_scaler = StandardScaler().fit(X)
# Apply the scaler to X
scaled_X = X_scaler.transform(X)
# Define the labels vector
y = np.hstack((np.ones(len(car_features)), np.zeros(len(notcar_features))))
# Split up data into randomized training and test sets
rand_state = np.random.randint(0, 100)
X_train, X_test, y_train, y_test = train_test_split(
scaled_X, y, test_size=0.2, random_state=rand_state)
print('Using spatial binning of:',spatial,
'and', histbin,'histogram bins')
print('Feature vector length:', len(X_train[0]))
# Use a linear SVC
svc = LinearSVC()
# Check the training time for the SVC
t=time.time()
svc.fit(X_train, y_train)
t2 = time.time()
print(round(t2-t, 2), 'Seconds to train SVC...')
# Check the score of the SVC
print('Test Accuracy of SVC = ', round(svc.score(X_test, y_test), 4))
# Check the prediction time for a single sample
t=time.time()
n_predict = 10
print('My SVC predicts: ', svc.predict(X_test[0:n_predict]))
print('For these',n_predict, 'labels: ', y_test[0:n_predict])
t2 = time.time()
print(round(t2-t, 5), 'Seconds to predict', n_predict,'labels with SVC')
###Output
Using spatial binning of: 32 and 32 histogram bins
Feature vector length: 8460
29.07 Seconds to train SVC...
Test Accuracy of SVC = 0.9882
My SVC predicts: [ 0. 1. 0. 1. 0. 0. 1. 0. 0. 0.]
For these 10 labels: [ 0. 1. 0. 1. 0. 0. 1. 0. 0. 0.]
0.00157 Seconds to predict 10 labels with SVC
###Markdown
Processing Pipeline
###Code
%matplotlib inline
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import pickle
import cv2
def convert_color(img, conv='RGB2YCrCb'):
if conv == 'RGB2YCrCb':
return cv2.cvtColor(img, cv2.COLOR_RGB2YCrCb)
if conv == 'BGR2YCrCb':
return cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
if conv == 'RGB2LUV':
return cv2.cvtColor(img, cv2.COLOR_RGB2LUV)
def draw_boxes(img, bboxes, color=(0, 0, 255), thick=6):
# Make a copy of the image
imcopy = np.copy(img)
# Iterate through the bounding boxes
for bbox in bboxes:
# Draw a rectangle given bbox coordinates
cv2.rectangle(imcopy, bbox[0], bbox[1], color, thick)
# Return the image copy with boxes drawn
return imcopy
img = mpimg.imread('test_images/test6.jpg')
spatial_size=(32, 32)
hist_bins = 32
# Define a single function that can extract features using hog sub-sampling and make predictions
def find_cars(img, ystart, ystop, scale, svc, X_scaler, orient, pix_per_cell, cell_per_block, spatial_size, hist_bins):
draw_img = np.copy(img)
img = img.astype(np.float32)/255
img_tosearch = img[ystart:ystop,:,:]
ctrans_tosearch = convert_color(img_tosearch, conv='RGB2YCrCb')
if scale != 1:
imshape = ctrans_tosearch.shape
ctrans_tosearch = cv2.resize(ctrans_tosearch, (np.int(imshape[1]/scale), np.int(imshape[0]/scale)))
ch1 = ctrans_tosearch[:,:,0]
ch2 = ctrans_tosearch[:,:,1]
ch3 = ctrans_tosearch[:,:,2]
# Define blocks and steps as above
nxblocks = (ch1.shape[1] // pix_per_cell)-1
nyblocks = (ch1.shape[0] // pix_per_cell)-1
nfeat_per_block = orient*cell_per_block**2
# 64 was the original sampling rate, with 8 cells and 8 pix per cell
window = 64
nblocks_per_window = (window // pix_per_cell)-1
cells_per_step = 2 # Instead of overlap, define how many cells to step
nxsteps = (nxblocks - nblocks_per_window) // cells_per_step
nysteps = (nyblocks - nblocks_per_window) // cells_per_step
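# Worked example of the stepping arithmetic, assuming a 1280x720 project-video
# frame and the values used below (ystart=400, ystop=656, scale=1.5):
# the 1280x256 search strip is resized to 853x170, giving nxblocks=105 and
# nyblocks=20; with 7 blocks per 64-px window and a 2-cell step this yields
# nxsteps=49 and nysteps=6, i.e. 49*6 = 294 patches evaluated per frame.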
# Compute individual channel HOG features for the entire image
hog1 = get_hog_features(ch1, orient, pix_per_cell, cell_per_block, feature_vec=False)
hog2 = get_hog_features(ch2, orient, pix_per_cell, cell_per_block, feature_vec=False)
hog3 = get_hog_features(ch3, orient, pix_per_cell, cell_per_block, feature_vec=False)
b_boxes = []
for xb in range(nxsteps):
for yb in range(nysteps):
ypos = yb*cells_per_step
xpos = xb*cells_per_step
# Extract HOG for this patch
hog_feat1 = hog1[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_feat2 = hog2[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_feat3 = hog3[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_features = np.hstack((hog_feat1, hog_feat2, hog_feat3))
xleft = xpos*pix_per_cell
ytop = ypos*pix_per_cell
# Extract the image patch
subimg = cv2.resize(ctrans_tosearch[ytop:ytop+window, xleft:xleft+window], (64,64))
# Get color features
spatial_features = bin_spatial(subimg, size=spatial_size)
hist_features = color_hist(subimg, nbins=hist_bins)
# Scale features and make a prediction
test_features = X_scaler.transform(np.hstack((spatial_features, hist_features, hog_features)).reshape(1, -1))
#test_features = X_scaler.transform(np.hstack((shape_feat, hist_feat)).reshape(1, -1))
test_prediction = svc.predict(test_features)
if test_prediction == 1:
xbox_left = np.int(xleft*scale)
ytop_draw = np.int(ytop*scale)
win_draw = np.int(window*scale)
b_boxes.append(((xbox_left, ytop_draw+ystart),(xbox_left+win_draw,ytop_draw+win_draw+ystart)))
return b_boxes
ystart = 400
ystop = 656
scale = 1.5
b_boxes = find_cars(img, ystart, ystop, scale, svc, X_scaler, orient, pix_per_cell, cell_per_block, spatial_size, hist_bins)
out_img = draw_boxes(img, b_boxes)
plt.imshow(out_img)
from scipy.ndimage.measurements import label
def add_heat(heatmap, bbox_list):
# Iterate through list of bboxes
for box in bbox_list:
# Add += 1 for all pixels inside each bbox
# Assuming each "box" takes the form ((x1, y1), (x2, y2))
heatmap[box[0][1]:box[1][1], box[0][0]:box[1][0]] += 1
# Return updated heatmap
return heatmap
def apply_threshold(heatmap, threshold):
# Zero out pixels below the threshold
heatmap[heatmap <= threshold] = 0
# Return thresholded map
return heatmap
def draw_labeled_bboxes(img, labels):
# Iterate through all detected cars
for car_number in range(1, labels[1]+1):
# Find pixels with each car_number label value
nonzero = (labels[0] == car_number).nonzero()
# Identify x and y values of those pixels
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Define a bounding box based on min/max x and y
bbox = ((np.min(nonzerox), np.min(nonzeroy)), (np.max(nonzerox), np.max(nonzeroy)))
# Draw the box on the image
cv2.rectangle(img, bbox[0], bbox[1], (0,0,255), 6)
# Return the image
return img
test_images = glob.glob('test_images/*')
for image in test_images:
img = mpimg.imread(image)
b_boxes = find_cars(img, ystart, ystop, scale, svc, X_scaler, orient, pix_per_cell, cell_per_block, spatial_size, hist_bins)
heat = np.zeros_like(img[:,:,0]).astype(np.float)
add_heat(heat, b_boxes)
heat = apply_threshold(heat,1)
heatmap = np.clip(heat, 0, 255)
labels = label(heatmap)
draw_img = draw_labeled_bboxes(np.copy(img), labels)
plt.figure()
plt.subplot(121)
plt.imshow(heat, cmap='hot')
plt.subplot(122)
plt.imshow(draw_img)
###Output
_____no_output_____
###Markdown
Video Processing Pipeline
###Code
from collections import deque
b_boxes_deque = deque(maxlen=30)
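# The deque keeps the boxes from up to the last 30 frames, so the heat map built in
# pipeline() integrates detections over time; with the threshold of 15 used there,
# a pixel is only kept if it accumulates more than 15 overlapping detections across
# those frames, which suppresses one-off false positives.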
def add_heat_video(heatmap, b_boxes_deque):
# Iterate through list of bboxes
for bbox_list in b_boxes_deque:
for box in bbox_list:
# Add += 1 for all pixels inside each bbox
# Assuming each "box" takes the form ((x1, y1), (x2, y2))
heatmap[box[0][1]:box[1][1], box[0][0]:box[1][0]] += 1
# Return updated heatmap
return heatmap
def pipeline(img):
b_boxes = find_cars(img, ystart, ystop, scale, svc, X_scaler, orient, pix_per_cell, cell_per_block, spatial_size, hist_bins)
b_boxes_deque.append(b_boxes)
heat = np.zeros_like(img[:,:,0]).astype(np.float)
add_heat_video(heat, b_boxes_deque)
heat = apply_threshold(heat,15)
heatmap = np.clip(heat, 0, 255)
labels = label(heatmap)
draw_img = draw_labeled_bboxes(np.copy(img), labels)
return draw_img
from moviepy.editor import VideoFileClip
output = 'project_video_output.mp4'
clip1 = VideoFileClip("project_video.mp4")
output_clip = clip1.fl_image(pipeline)
%time output_clip.write_videofile(output, audio=False)
###Output
[MoviePy] >>>> Building video project_video_output.mp4
[MoviePy] Writing video project_video_output.mp4
|
cs231n/assignment2/test/pt-nn-data-loader.ipynb | ###Markdown
PyTorch: DataLoaders

A DataLoader wraps a Dataset and provides minibatching, shuffling, and multithreading for you. When you need to load custom data, just write your own Dataset class (a minimal sketch follows the training loop below).
###Code
import torch
from torch.autograd import Variable
from torch.utils.data import TensorDataset, DataLoader
# Define our whole model as a single Module
class TwoLayerNet(torch.nn.Module):
# Initializer sets up two children(Modules can contain modules)
def __init__(self, D_in, H, D_out):
super(TwoLayerNet, self).__init__()
self.linear1 = torch.nn.Linear(D_in, H)
self.linear2 = torch.nn.Linear(H, D_out)
# Define forward pass using child modules and autograd ops on Variables
# No need to define backward -- autograd will handle it
def forward(self, x):
h_relu = self.linear1(x).clamp(min=0)
y_pred = self.linear2(h_relu)
return y_pred
N, D_in, H, D_out = 64, 1000, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
loader = DataLoader(TensorDataset(x, y), batch_size=8)
model = TwoLayerNet(D_in, H, D_out)
criterion = torch.nn.MSELoss(size_average=False)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
for epoch in range(10):
# Iterate over loader to form minibatches
for x_batch, y_batch in loader:
# Loader gives Tensors so you need to wrap in Variables
x_var, y_var = Variable(x_batch), Variable(y_batch)
y_pred = model(x_var)
loss = criterion(y_pred, y_var)
optimizer.zero_grad()
loss.backward()
optimizer.step()
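# Minimal sketch of a custom Dataset (an illustration, not tied to any particular
# data source): subclass Dataset and implement __len__ and __getitem__; DataLoader
# then handles batching, shuffling and multiprocessing exactly as for TensorDataset.
from torch.utils.data import Dataset

class RandomRegressionDataset(Dataset):
    def __init__(self, n, d_in, d_out):
        # store raw tensors; a real dataset might instead index files on disk
        self.x = torch.randn(n, d_in)
        self.y = torch.randn(n, d_out)
    def __len__(self):
        return self.x.size(0)
    def __getitem__(self, idx):
        # return one (input, target) pair; DataLoader collates these into batches
        return self.x[idx], self.y[idx]

custom_loader = DataLoader(RandomRegressionDataset(N, D_in, D_out), batch_size=8, shuffle=True)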
###Output
_____no_output_____ |