markdown | code | output | license | path | repo_name
---|---|---|---|---|---
Recommended to avoid: The [documentation](https://github.com/HIPS/autograd/blob/master/docs/tutorial.md) recommends avoiding in-place operations such as | a += b
a -= b
a *= b
a /= b | _____no_output_____ | CC0-1.0 | doc/src/GradientOptim/autodiff/examples_allowed_functions.ipynb | ndavila/MachineLearningMSU
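A minimal sketch (added for illustration; not from the original notebook): autograd differentiates through out-of-place array updates, so rewriting `a += b` as `a = a + b` keeps the computation traceable.

```python
import autograd.numpy as anp
from autograd import grad

def f(a, b):
    a = a + b          # out-of-place update instead of a += b
    a = a * 2.0        # out-of-place update instead of a *= 2.0
    return anp.sum(a)

df_da = grad(f, 0)     # gradient with respect to the first argument
print(df_da(anp.ones(3), anp.ones(3)))  # expected: array([2., 2., 2.])
```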
B-Value Estimates from Maximum Likelihood. Here we implement the maximum likelihood method from Tinti and Mulargia [1987]. We will compute the distribution of b-values from the stochastic event set and compare it with the Comcat catalog. We will filter both the stochastic event sets and the catalog above Mw 3.95. | import time
import os
import pandas as pd
import numpy as np
import scipy.stats as stats
from csep.utils.plotting import plot_mfd
import csep
%pylab inline
def bval_ml_est(mws, dmw):
# compute the p term from eq 3.10 in marzocchi and sandri [2003]
def p():
top = dmw
# assuming that the magnitudes are truncated above Mc (ask about this).
bottom = np.mean(mws) - np.min(mws)
return 1 + top / bottom
bottom = np.log(10) * dmw
return 1.0 / bottom * np.log(p())
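# Added note: bval_ml_est above evaluates the closed-form binned maximum-likelihood b-value
# estimator (the p term from eq. 3.10 in Marzocchi and Sandri [2003], per the comment above):
#     b_hat = ln(1 + dmw / (mean(mws) - min(mws))) / (dmw * ln(10))
# where dmw is the magnitude bin width.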
def bval_err_est(mws, dmw):
# compute the p term from eq 3.10 in marzocchi and sandri [2003]
def p():
top = dmw
# assuming that the magnitudes are truncated above Mc (ask about this).
bottom = np.mean(mws) - np.min(mws)
return 1 + top / bottom
top = 1 - p()
bottom = np.log(10)*dmw*np.sqrt(len(mws)*p())
return top / bottom
def discretize(data, bin_edges):
"""
returns an array the same length as data consisting of the discretized value for each element.
instead of returning the counts of each bin, this will return an array with values
modified such that any value within bin_edges[0] <= x_new < bin_edges[1] ==> x_new.
"""
n = data.shape[0]
idx = digitize(data, bins=bin_edges)
x_new = bin_edges[idx]
return x_new
# Comcat Synthetics
epoch_time = 709732655000
duration_in_years = 1.0
t0 = time.time()
comcat = csep.load_catalog(type='comcat', format='native',
start_epoch=epoch_time, duration_in_years=1.0,
min_magnitude=2.5,
min_latitude=31.50, max_latitude=43.00,
min_longitude=-125.40, max_longitude=-113.10,
name='Comcat').filter('magnitude > 3.95')
t1 = time.time()
# Statements about Comcat Downloads
print("Fetched Comcat catalog in {} seconds.\n".format(t1-t0))
print("Downloaded Comcat Catalog with following parameters")
print("Start Date: {}\nEnd Date: {}".format(str(comcat.start_time), str(comcat.end_time)))
print("Min Latitude: {} and Max Latitude: {}".format(comcat.min_latitude, comcat.max_latitude))
print("Min Longitude: {} and Max Longitude: {}".format(comcat.min_longitude, comcat.max_longitude))
print("Min Magnitude: {} and Max Magnitude: {}\n".format(comcat.min_magnitude, comcat.max_magnitude))
# read in ucerf3 simulations
project_root = '/Users/wsavran/Projects/CSEP2/u3etas_simulations/landers_experiment'
filename = os.path.join(project_root, '10-23-2018_landers-pt1/results_complete.bin')
filename_nofaults = os.path.join(project_root, '10-31-2018_landers-nofaults-pt1/results_complete.bin')
u3catalogs = []
for cat in csep.load_stochastic_event_set(filename=filename, format='native', type='ucerf3', name='UCERF3-ETAS'):
u3catalogs.append(cat.filter('magnitude > 3.95'))
dmw = 0.1
b_vals = []
# get b-values from stochastic event set
for cat in u3catalogs:
global_max = max([max(cat.get_magnitudes()), max(comcat.get_magnitudes())])
mws = arange(3.95, global_max+2*dmw, dmw)
cat_mws = discretize(cat.get_magnitudes(), mws)
b_est = bval_ml_est(cat_mws, dmw)
b_vals.append(b_est)
b_vals = np.array(b_vals)
# get b-value for comcat catalog
com_mws = discretize(comcat.get_magnitudes(), mws)
com_bval = bval_ml_est(com_mws, dmw)
com_bval_err = bval_err_est(com_mws, dmw)
print(com_bval_err)
# plot b-value estimates
fig = hist(b_vals, bins = 60, edgecolor='black', alpha=0.7, label='Stochastic Event Set')
axvline(x=com_bval, color='black', linestyle='-', label='Observation')
axvline(x=com_bval-com_bval_err, color='black', linestyle='--', label='$\pm\hat{\sigma_{TM}}$')
axvline(x=com_bval+com_bval_err, color='black', linestyle='--')
xlabel('b-value')
ylabel('Frequency')
title('b-value Estimates')
legend(loc='upper right') | -0.0620467887229116
| MIT | notes/maximum_likelihood.ipynb | thbeutin/csep2 |
Verifying computation of $a$ from Michael [2014]: $\log_{10}(N(m)) = a - bM$, so $a = \log_{10}(N(m)/T) + bM$. From Table 2 in Michael [2014]: $T$: 1900 $-$ 2009, $M_c$: 7.7, $N^{\prime}$: 100, $b$ = 1.59 $\pm$ 0.13 | Np = 100
b = 1.59
Mc = 7.7
T = 2009-1900
sigma = 0.13
def a_val(N, M, b, T):
return np.log10(N/T) + M*b
a = a_val(Np, Mc, b, T)
print(a)
def a_err(a, b, sigma):
return a*sigma/b
print(a_err(a, b, sigma))
Np = 635
b = 1.07
Mc = 7.0
T = 2009-1918
sigma = 0.03
def a_val(N, M, b, T):
return np.log10(N/T) + M*b
a = a_val(Np, Mc, b, T)
print(a)
def a_err(a, b, sigma):
return sigma/b*a
print(a_err(a, b, sigma))
Np = 810
b = 1.05
Mc = 6.8
T = 2009-1940
sigma = 0.03
def a_val(N, M, b, T):
return np.log10(N/T) + M*b
a = a_val(Np, Mc, b, T)
print(a)
def a_err(a, b, sigma):
return sigma/b*a
print(a_err(a, b, sigma)) | 8.209635928141394
0.23456102651832553
| MIT | notes/maximum_likelihood.ipynb | thbeutin/csep2 |
List the available countries to download data for | pb.footballdata.list_countries() | _____no_output_____ | MIT | examples/football-data.co.uk.ipynb | martineastwood/penaltyblog |
Download the data for the English Premier League | pb.footballdata.fetch_data("England", 2020, 0) | _____no_output_____ | MIT | examples/football-data.co.uk.ipynb | martineastwood/penaltyblog |
Download the data for the French Ligue 2 | pb.footballdata.fetch_data("France", 2020, 1) | _____no_output_____ | MIT | examples/football-data.co.uk.ipynb | martineastwood/penaltyblog |
Investigate behavior of `curry` and `partial` | from toolz.curried import *
def clump3(a, b, c):
return a, b, c
@curry
def curried_clump3(a, b, c):
return a, b, c
partial1_clump3=partial(clump3, 1)
partial12_clump3=partial(clump3, 1, 2)
print(f'clump3(1, 2, 3)={clump3(1, 2, 3)}')
print(f'clump3(3, 1, 2)={clump3(3, 1, 2)}')
print()
print(f'curried_clump3(1)(2)(3)={curried_clump3(1)(2)(3)}')
print(f'curried_clump3(1)(2)(3)={curried_clump3(1)(2)(3)}')
print()
print(f'curried_clump3(3, 1)(2)={curried_clump3(3, 1)(2)}')
print()
print(f'partial1_clump3(2, 3)={partial1_clump3(2, 3)}')
print(f'partial12_clump3(3)={partial12_clump3(3)}')
| _____no_output_____ | MIT | curry_n_partial.ipynb | mrwizard82d1/learn_toolz |
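A brief clarifying sketch (added; assumes toolz is installed): `curry` defers the call until all positional arguments have been supplied, while `functools.partial` simply pre-binds the leading arguments up front.

```python
from functools import partial
from toolz import curry

@curry
def add3(a, b, c):
    return a + b + c

assert add3(1)(2)(3) == 6            # curried: arguments supplied one at a time
assert add3(1, 2)(3) == 6            # or in groups
assert partial(add3, 1, 2)(3) == 6   # partial: leading arguments pre-bound
print('curry/partial sketch OK')
```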
Convolutional Neural Network (CNN) Image Classifier for Persian Numbers | import tensorflow as tf
from scipy.io import loadmat
import numpy as np
import matplotlib.pyplot as plt
import random
import math
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Conv2D, MaxPool2D, Dropout, BatchNormalization
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.activations import relu, softmax
from tensorflow.keras import regularizers
from tensorflow.keras.losses import sparse_categorical_crossentropy
from tensorflow.keras.initializers import he_uniform, glorot_normal, zeros, ones
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint | _____no_output_____ | MIT | CNN_Persian_DigitsClassifier.ipynb | saniaki/Digit-Image-Classifier |
[HODA dataset](http://farsiocr.ir/%D9%85%D8%AC%D9%85%D9%88%D8%B9%D9%87-%D8%AF%D8%A7%D8%AF%D9%87/%D9%85%D8%AC%D9%85%D9%88%D8%B9%D9%87-%D8%A7%D8%B1%D9%82%D8%A7%D9%85-%D8%AF%D8%B3%D8%AA%D9%86%D9%88%DB%8C%D8%B3-%D9%87%D8%AF%DB%8C/) HODA Dataset reader from: https://github.com/amir-saniyan/HodaDatasetReader | # *-* coding: utf-8 *-*
# Hoda Dataset Reader
# Python code for reading Hoda farsi digit dataset.
# Hoda Farsi Digit Dataset:
# http://farsiocr.ir/
# http://farsiocr.ir/مجموعه-داده/مجموعه-ارقام-دستنویس-هدی
# http://dadegan.ir/catalog/hoda
# Repository:
# https://github.com/amir-saniyan/HodaDatasetReader
import struct
import numpy as np
import cv2
def __convert_to_one_hot(vector, num_classes):
result = np.zeros(shape=[len(vector), num_classes])
result[np.arange(len(vector)), vector] = 1
return result
def __resize_image(src_image, dst_image_height, dst_image_width):
src_image_height = src_image.shape[0]
src_image_width = src_image.shape[1]
if src_image_height > dst_image_height or src_image_width > dst_image_width:
height_scale = dst_image_height / src_image_height
width_scale = dst_image_width / src_image_width
scale = min(height_scale, width_scale)
img = cv2.resize(src=src_image, dsize=(0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
else:
img = src_image
img_height = img.shape[0]
img_width = img.shape[1]
dst_image = np.zeros(shape=[dst_image_height, dst_image_width], dtype=np.uint8)
y_offset = (dst_image_height - img_height) // 2
x_offset = (dst_image_width - img_width) // 2
dst_image[y_offset:y_offset+img_height, x_offset:x_offset+img_width] = img
return dst_image
def read_hoda_cdb(file_name):
with open(file_name, 'rb') as binary_file:
data = binary_file.read()
offset = 0
# read private header
yy = struct.unpack_from('H', data, offset)[0]
offset += 2
m = struct.unpack_from('B', data, offset)[0]
offset += 1
d = struct.unpack_from('B', data, offset)[0]
offset += 1
H = struct.unpack_from('B', data, offset)[0]
offset += 1
W = struct.unpack_from('B', data, offset)[0]
offset += 1
TotalRec = struct.unpack_from('I', data, offset)[0]
offset += 4
LetterCount = struct.unpack_from('128I', data, offset)
offset += 128 * 4
imgType = struct.unpack_from('B', data, offset)[0] # 0: binary, 1: gray
offset += 1
Comments = struct.unpack_from('256c', data, offset)
offset += 256 * 1
Reserved = struct.unpack_from('245c', data, offset)
offset += 245 * 1
if (W > 0) and (H > 0):
normal = True
else:
normal = False
images = []
labels = []
for i in range(TotalRec):
StartByte = struct.unpack_from('B', data, offset)[0] # must be 0xff
offset += 1
label = struct.unpack_from('B', data, offset)[0]
offset += 1
if not normal:
W = struct.unpack_from('B', data, offset)[0]
offset += 1
H = struct.unpack_from('B', data, offset)[0]
offset += 1
ByteCount = struct.unpack_from('H', data, offset)[0]
offset += 2
image = np.zeros(shape=[H, W], dtype=np.uint8)
if imgType == 0:
# Binary
for y in range(H):
bWhite = True
counter = 0
while counter < W:
WBcount = struct.unpack_from('B', data, offset)[0]
offset += 1
# x = 0
# while x < WBcount:
# if bWhite:
# image[y, x + counter] = 0 # Background
# else:
# image[y, x + counter] = 255 # ForeGround
# x += 1
if bWhite:
image[y, counter:counter + WBcount] = 0 # Background
else:
image[y, counter:counter + WBcount] = 255 # ForeGround
bWhite = not bWhite # black white black white ...
counter += WBcount
else:
# GrayScale mode
data = struct.unpack_from('{}B'.format(W * H), data, offset)
offset += W * H
image = np.asarray(data, dtype=np.uint8).reshape([W, H]).T
images.append(image)
labels.append(label)
return images, labels
def read_hoda_dataset(dataset_path, images_height=32, images_width=32, one_hot=False, reshape=True):
images, labels = read_hoda_cdb(dataset_path)
assert len(images) == len(labels)
X = np.zeros(shape=[len(images), images_height, images_width], dtype=np.float32)
Y = np.zeros(shape=[len(labels)], dtype=np.int)
for i in range(len(images)):
image = images[i]
# Image resizing.
image = __resize_image(src_image=image, dst_image_height=images_height, dst_image_width=images_width)
# Image normalization.
image = image / 255
# Image binarization.
image = np.where(image >= 0.5, 1, 0)
# Image.
X[i] = image
# Label.
Y[i] = labels[i]
if one_hot:
Y = __convert_to_one_hot(Y, 10).astype(dtype=np.float32)
else:
Y = Y.astype(dtype=np.float32)
if reshape:
X = X.reshape(-1, images_height * images_width)
else:
X = X.reshape(-1, images_height, images_width, 1)
return X, Y
# loading dataset
# train data
train_images, train_labels = read_hoda_dataset(dataset_path='data_Persian/Train 60000.cdb',
images_height=32,
images_width=32,
one_hot=False,
reshape=False)
# test data
test_images, test_labels = read_hoda_dataset(dataset_path='data_Persian/Test 20000.cdb',
images_height=32,
images_width=32,
one_hot=False,
reshape=False) | _____no_output_____ | MIT | CNN_Persian_DigitsClassifier.ipynb | saniaki/Digit-Image-Classifier |
Visualization functions | def show_images(n,image_array,label_array, cmap=None):
'''
show a random selection of n images from image_array with the corresponding labels from label_array
'''
total_rows = math.floor(n/4)+1
random_list = random.sample(range(0, image_array.shape[0]), n)
fig, axes = plt.subplots(total_rows, 4, figsize=(16, total_rows*4))
[axi.set_axis_off() for axi in axes.ravel()] # this line sets all axis off
r = 0
c = 0
for i in random_list:
image = image_array[i,:,:,:]
#axes[r, c].set_axis_off()
axes[r, c].imshow(np.squeeze(image), cmap=cmap)
axes[r, c].set_title(f"Label: {label_array[i]} \n {i}th image in the dataset.")
c += 1
if c % 4 == 0:
r += 1
c = 0
plt.show()
def show_images_predictions(n,image_array,label_array1,label_array2, cmap=None):
'''
show a random selection of n images from image_array with the corresponding labels from label_array;
the predicted class probability distribution from each model is also displayed
'''
random_list = random.sample(range(0, image_array.shape[0]), n)
fig, axes = plt.subplots(n, 2, figsize=(16, n*6))
#[axi.set_axis_off() for axi in axes.ravel()] # this line sets all axis off
category_list1 = list(map(lambda x : x + 0.15, list(range(10))))
category_list2 = list(map(lambda x : x - 0.15, list(range(10))))
r = 0
for i in random_list:
image = image_array[i,:,:,:]
axes[r, 0].set_axis_off()
axes[r, 0].imshow(np.squeeze(image), cmap=cmap)
#axes[r, 1].set_title(f"{i}th image in the dataset.")
axes[r, 1].bar(category_list1,label_array1[i], width=0.3, label='MLP')
axes[r, 1].bar(category_list2,label_array2[i], width=0.3, label='CNN')
axes[r, 1].set_title(f"Prediction from MLP model: {np.argmax(label_array1[i,:])} \n Prediction from CNN model: {np.argmax(label_array2[i,:])} ")
axes[r, 1].legend()
r += 1
plt.show()
# Functions to plot accuacy and loss
def plot_acc(history):
try:
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
except KeyError:
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Accuracy vs. epochs')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Training', 'Validation'], loc='lower right')
plt.show()
def plot_loss(history):
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Loss vs. epochs')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Training', 'Validation'], loc='upper right')
plt.show() | _____no_output_____ | MIT | CNN_Persian_DigitsClassifier.ipynb | saniaki/Digit-Image-Classifier |
Check training images | n = 10 # number of images to show
# showing images and correspoind labels from train set
show_images(n,train_images,train_labels) | _____no_output_____ | MIT | CNN_Persian_DigitsClassifier.ipynb | saniaki/Digit-Image-Classifier |
CNN neural network classifier | def CNN_NN(input_shape, dropout_rate, reg_rate):
model = Sequential([
Conv2D(8, (3,3), activation='relu', input_shape=input_shape,
kernel_initializer="he_uniform", bias_initializer="ones",
kernel_regularizer=regularizers.l2(reg_rate), name='CONV2D_1_1_relu'),
BatchNormalization(),
Conv2D(16, (3,3), activation='relu', kernel_regularizer=regularizers.l2(reg_rate), name='CONV2D_2_2_relu'),
MaxPool2D((3,3), strides=(2, 2), name='MaxPool2D_1_2_relu'),
Dropout(dropout_rate),
BatchNormalization(),
Conv2D(32, (3,3), activation='relu', kernel_regularizer=regularizers.l2(reg_rate), name='CONV2D_3_3_relu'),
MaxPool2D((3,3), strides=(2, 2), name='MaxPool2D_2_3_relu'),
Dropout(dropout_rate),
BatchNormalization(),
Flatten(),
Dense(64, activation='relu', kernel_regularizer=regularizers.l2(reg_rate), name='Dense_1_4_relu'),
Dense(32, activation='relu', kernel_regularizer=regularizers.l2(reg_rate), name='Dense_2_5_relu'),
Dense(10, activation='softmax', name='Dense_3_6_softmax')
])
return model
def get_checkpoint_best_only(checkpoint_path):
'''
save the best weights of the model by monitoring validation accuracy
'''
checkpoint = ModelCheckpoint(checkpoint_path,
save_weights_only=True,
monitor='val_accuracy',
verbose=1,
save_best_only=True)
return checkpoint
def get_test_accuracy(model, x_test, y_test):
'''
checking the accuracy of the model on the test sets
'''
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print(' test accuracy: {acc:0.3f}'.format(acc=test_acc), '\n',
'test loss: {loss:0.3f}'.format(loss=test_loss))
# creating the CNN model for grayscale images
model_CNN = CNN_NN(input_shape= (32,32,1), dropout_rate = 0.3, reg_rate=1e-3)
model_CNN.summary()
model_CNN.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
checkpoint_best_only = get_checkpoint_best_only('Trained models weights_Persian\checkpoints_best_only_CNN\checkpoint')
history_CNN = model_CNN.fit(train_images,
train_labels,
batch_size=32,
epochs=30,
validation_split=0.10,
callbacks=[EarlyStopping(monitor='val_accuracy', patience=4), checkpoint_best_only]
)
plot_acc(history_CNN)
plot_loss(history_CNN)
get_test_accuracy(model_CNN, test_images, test_labels) | test accuracy: 0.983
test loss: 0.097
| MIT | CNN_Persian_DigitsClassifier.ipynb | saniaki/Digit-Image-Classifier |
Model predictions | def get_model_best_epoch(model, checkpoint_path):
'''
get model saved best epoch
'''
model.load_weights(checkpoint_path)
return model
# CNN model best epoch
model_CNN = CNN_NN(input_shape= (32,32,1), dropout_rate = 0.3, reg_rate=1e-4)
model_CNN = get_model_best_epoch(model_CNN, 'Trained models weights_Persian\checkpoints_best_only_CNN\checkpoint')
prediction_CNN = model_CNN.predict(test_images)
prediction_CNN_final = np.argmax(prediction_CNN, axis=1) # finding the maximum category
prediction_CNN_final = np.expand_dims(prediction_CNN_final, axis=1) # add the channel dimension
n = 5 # number of images to show
show_images(n,test_images,prediction_CNN_final, cmap='Greys') | _____no_output_____ | MIT | CNN_Persian_DigitsClassifier.ipynb | saniaki/Digit-Image-Classifier |
Comparison. To compare the MLP and CNN models, the MLP model is created here and its trained weights are loaded. | def MLP_NN(input_shape, reg_rate):
'''
Multilayer Perceptron (MLP) classification model
'''
model = Sequential([
Flatten(input_shape=input_shape),
Dense(256, activation='relu', kernel_initializer="he_uniform", bias_initializer="ones",
kernel_regularizer=regularizers.l2(reg_rate), name='dense_1_relu'),
Dense(256, activation='relu', kernel_regularizer=regularizers.l2(reg_rate), name='dense_2_relu'),
Dense(128, activation='relu', kernel_regularizer=regularizers.l2(reg_rate), name='dense_3_relu'),
Dense(128, activation='relu', kernel_regularizer=regularizers.l2(reg_rate), name='dense_4_relu'),
Dense(10, activation='softmax', name='dense_5_softmax')
])
return model
model_MLP = MLP_NN(input_shape=(32,32,1), reg_rate=1e-4)
model_MLP = get_model_best_epoch(model_MLP, 'Trained models weights_Persian\checkpoints_best_only_MLP\checkpoint')
prediction_MLP = model_MLP.predict(test_images)
prediction_MLP_final = np.argmax(prediction_MLP, axis=1) # finding the maximum category
prediction_MLP_final = np.expand_dims(prediction_MLP_final, axis=1) # add the channel dimension
n = 5 # number of random images
show_images_predictions(n,test_images,prediction_MLP, prediction_CNN, cmap='Greys') | _____no_output_____ | MIT | CNN_Persian_DigitsClassifier.ipynb | saniaki/Digit-Image-Classifier |
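A small follow-up sketch (added; not part of the original notebook): the visual comparison above can be quantified by computing the overall test-set accuracy of both models from their predicted class labels.

```python
mlp_accuracy = np.mean(np.argmax(prediction_MLP, axis=1) == test_labels)
cnn_accuracy = np.mean(np.argmax(prediction_CNN, axis=1) == test_labels)
print('MLP test accuracy: {:.3f}'.format(mlp_accuracy))
print('CNN test accuracy: {:.3f}'.format(cnn_accuracy))
```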
As a warm-up, you'll review some machine learning fundamentals and submit your initial results to a Kaggle competition. Setup: The questions below will give you feedback on your work. Run the following cell to set up the feedback system. | # Set up code checking
import os
if not os.path.exists("../input/train.csv"):
os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv")
os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.ml_intermediate.ex1 import *
print("Setup Complete") | _____no_output_____ | Apache-2.0 | notebooks/ml_intermediate/raw/ex1.ipynb | aurnik/learntools |
You will work with data from the [Housing Prices Competition for Kaggle Learn Users](https://www.kaggle.com/c/home-data-for-ml-course) to predict home prices in Iowa using 79 explanatory variables describing (almost) every aspect of the homes. Run the next code cell without changes to load the training and validation features in `X_train` and `X_valid`, along with the prediction targets in `y_train` and `y_valid`. The test features are loaded in `X_test`. (_If you need to review **features** and **prediction targets**, please check out [this short tutorial](https://www.kaggle.com/dansbecker/your-first-machine-learning-model). To read about model **validation**, look [here](https://www.kaggle.com/dansbecker/model-validation). Alternatively, if you'd prefer to look through a full course to review all of these topics, start [here](https://www.kaggle.com/learn/machine-learning).)_ | import pandas as pd
from sklearn.model_selection import train_test_split
# Read the data
X_full = pd.read_csv('../input/train.csv', index_col='Id')
X_test_full = pd.read_csv('../input/test.csv', index_col='Id')
# Obtain target and predictors
y = X_full.SalePrice
features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']
X = X_full[features].copy()
X_test = X_test_full[features].copy()
# Break off validation set from training data
X_train, X_valid, y_train, y_valid = train_test_split(X, y, train_size=0.8, test_size=0.2,
random_state=0) | _____no_output_____ | Apache-2.0 | notebooks/ml_intermediate/raw/ex1.ipynb | aurnik/learntools |
Use the next cell to print the first several rows of the data. It's a nice way to get an overview of the data you will use in your price prediction model. | X_train.head() | _____no_output_____ | Apache-2.0 | notebooks/ml_intermediate/raw/ex1.ipynb | aurnik/learntools |
Step 1: Evaluate several models. The next code cell defines five different random forest models. Run this code cell without changes. (_To review **random forests**, look [here](https://www.kaggle.com/dansbecker/random-forests)._) | from sklearn.ensemble import RandomForestRegressor
# Define the models
model_1 = RandomForestRegressor(n_estimators=50, random_state=0)
model_2 = RandomForestRegressor(n_estimators=100, random_state=0)
model_3 = RandomForestRegressor(n_estimators=100, criterion='mae', random_state=0)
model_4 = RandomForestRegressor(n_estimators=200, min_samples_split=20, random_state=0)
model_5 = RandomForestRegressor(n_estimators=100, max_depth=7, random_state=0)
models = [model_1, model_2, model_3, model_4, model_5] | _____no_output_____ | Apache-2.0 | notebooks/ml_intermediate/raw/ex1.ipynb | aurnik/learntools |
To select the best model out of the five, we define a function `score_model()` below. This function returns the mean absolute error (MAE) from the validation set. Recall that the best model will obtain the lowest MAE. (_To review **mean absolute error**, look [here](https://www.kaggle.com/dansbecker/model-validation).)_ Run the code cell without changes. | from sklearn.metrics import mean_absolute_error
# Function for comparing different models
def score_model(model, X_t=X_train, X_v=X_valid, y_t=y_train, y_v=y_valid):
model.fit(X_t, y_t)
preds = model.predict(X_v)
return mean_absolute_error(y_v, preds)
for i in range(0, len(models)):
mae = score_model(models[i])
print("Model %d MAE: %d" % (i+1, mae)) | _____no_output_____ | Apache-2.0 | notebooks/ml_intermediate/raw/ex1.ipynb | aurnik/learntools |
Use the above results to fill in the line below. Which model is the best model? Your answer should be one of `model_1`, `model_2`, `model_3`, `model_4`, or `model_5`. | # Fill in the best model
best_model = ____
# Check your answer
step_1.check()
#%%RM_IF(PROD)%%
best_model = model_3
step_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_1.hint()
#_COMMENT_IF(PROD)_
step_1.solution() | _____no_output_____ | Apache-2.0 | notebooks/ml_intermediate/raw/ex1.ipynb | aurnik/learntools |
Step 2: Generate test predictions. Great. You know how to evaluate what makes an accurate model. Now it's time to go through the modeling process and make predictions. In the line below, create a Random Forest model with the variable name `my_model`. | # Define a model
my_model = ____ # Your code here
# Check your answer
step_2.check()
#%%RM_IF(PROD)%%
my_model = 3
step_2.assert_check_failed()
#%%RM_IF(PROD)%%
my_model = best_model
step_2.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_2.hint()
#_COMMENT_IF(PROD)_
step_2.solution() | _____no_output_____ | Apache-2.0 | notebooks/ml_intermediate/raw/ex1.ipynb | aurnik/learntools |
Run the next code cell without changes. The code fits the model to the training and validation data, and then generates test predictions that are saved to a CSV file. These test predictions can be submitted directly to the competition! | # Fit the model to the training data
my_model.fit(X, y)
# Generate test predictions
preds_test = my_model.predict(X_test)
# Save predictions in format used for competition scoring
output = pd.DataFrame({'Id': X_test.index,
'SalePrice': preds_test})
output.to_csv('submission.csv', index=False) | _____no_output_____ | Apache-2.0 | notebooks/ml_intermediate/raw/ex1.ipynb | aurnik/learntools |
Time conversion and handling. Converting a str-type time to a datetime-type time | from datetime import datetime
time = '2010-05-01 00:00:00'
time = datetime.strptime(time, "%Y-%m-%d %H:%M:%S")
type(time),time | _____no_output_____ | MIT | .ipynb_checkpoints/1.6 时间转换及处理-checkpoint.ipynb | Yanie1asdfg/Quant-Lectures |
Converting a datetime-type time to a str-type time | from datetime import datetime
time = datetime(2010, 5, 1, 0, 0)
time = datetime.strftime(time, "%Y-%m-%d %H:%M:%S")
type(time),time
import tushare as ts
df = ts.get_k_data('600519','2020-08-01','2020-08-05')
df
type(df.date[140]),df.date[140]
import pandas as pd
# dataframe date column: convert strings to datetime format
df.date = pd.to_datetime(df.date,format='%Y/%m/%d %H:%M:%S')
type(df.date[140]),df.date[140] | _____no_output_____ | MIT | .ipynb_checkpoints/1.6 时间转换及处理-checkpoint.ipynb | Yanie1asdfg/Quant-Lectures |
Extracting the year, month, day, hour, and minute from date data | df.date.dt.time
df.date.dt.date
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
df.date.dt.year
df.date.dt.month
df.date.dt.day
df.date.dt.hour
df.date.dt.minute
df.date.dt.second | _____no_output_____ | MIT | .ipynb_checkpoints/1.6 时间转换及处理-checkpoint.ipynb | Yanie1asdfg/Quant-Lectures |
Time addition and subtraction | from datetime import datetime,timedelta
start = '2010-05-01 00:00:00'
start = datetime.strptime(start, "%Y-%m-%d %H:%M:%S")
time = start+timedelta(days=60)
time = datetime.strftime(time, "%Y-%m-%d %H:%M:%S")
time
from datetime import datetime,timedelta
start = '2010-05-01 00:00:00'
start = datetime.strptime(start, "%Y-%m-%d %H:%M:%S")
time = start+timedelta(seconds=1)
time = datetime.strftime(time, "%Y-%m-%d %H:%M:%S")
time
from datetime import datetime,timedelta
start = '2010-05-01 00:00:00'
start = datetime.strptime(start, "%Y-%m-%d %H:%M:%S")
time = start+timedelta(minutes=1)
time = datetime.strftime(time, "%Y-%m-%d %H:%M:%S")
time
from datetime import datetime,timedelta
start = '2010-05-01 00:00:00'
start = datetime.strptime(start, "%Y-%m-%d %H:%M:%S")
time = start+timedelta(hours=1)
time = datetime.strftime(time, "%Y-%m-%d %H:%M:%S")
time
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
start = '2010-05-01 00:00:00'
end = '2010-05-01 05:00:00'
start = datetime.strptime(start, "%Y-%m-%d %H:%M:%S")
end = datetime.strptime(end, "%Y-%m-%d %H:%M:%S")
(end-start).seconds
(end-start).total_seconds() | _____no_output_____ | MIT | .ipynb_checkpoints/1.6 时间转换及处理-checkpoint.ipynb | Yanie1asdfg/Quant-Lectures |
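A small clarifying example (added): `.seconds` returns only the seconds component of a timedelta (whole days are excluded), while `.total_seconds()` converts the entire duration.

```python
from datetime import datetime

start = datetime(2010, 5, 1, 0, 0, 0)
end = datetime(2010, 5, 3, 5, 0, 0)   # 2 days and 5 hours later
delta = end - start
print(delta.seconds)          # 18000 -> only the 5-hour component
print(delta.total_seconds())  # 190800.0 -> the full duration in seconds
```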
The datetime module. The datetime module contains the following classes. The date class: today(...) returns the current date | import datetime
datetime.date.today() | _____no_output_____ | MIT | .ipynb_checkpoints/1.6 时间转换及处理-checkpoint.ipynb | Yanie1asdfg/Quant-Lectures |
A date object consists of three parts: year, month, and day: | import datetime
a = datetime.date.today()
a
a.year,a.month,a.day | _____no_output_____ | MIT | .ipynb_checkpoints/1.6 时间转换及处理-checkpoint.ipynb | Yanie1asdfg/Quant-Lectures |
Methods for comparing dates | a=datetime.date(2020,3,1)
b=datetime.date(2020,9,4)
a.__eq__(b)
a.__ge__(b)
a.__le__(b) | _____no_output_____ | MIT | .ipynb_checkpoints/1.6 时间转换及处理-checkpoint.ipynb | Yanie1asdfg/Quant-Lectures |
Getting the number of days between two dates | a=datetime.date(2020,3,1)
b=datetime.date(2020,9,4)
a.__sub__(b).days
a.__rsub__(b).days | _____no_output_____ | MIT | .ipynb_checkpoints/1.6 时间转换及处理-checkpoint.ipynb | Yanie1asdfg/Quant-Lectures |
String output of dates | import datetime
a = datetime.date.today()
a.__format__('%Y-%m-%d')
import datetime
a = datetime.date.today()
a.__format__('%Y/%m/%d')
import datetime
a = datetime.date.today()
a.__str__() | _____no_output_____ | MIT | .ipynb_checkpoints/1.6 时间转换及处理-checkpoint.ipynb | Yanie1asdfg/Quant-Lectures |
The time class. A time object is composed of five parts: hour, minute, second, microsecond, and tzinfo | import datetime
a = datetime.time(12,20,59,899)
a.__str__()
a.__format__('%H:%M:%S') | _____no_output_____ | MIT | .ipynb_checkpoints/1.6 时间转换及处理-checkpoint.ipynb | Yanie1asdfg/Quant-Lectures |
The datetime class. The datetime class can be viewed as a combination of the date and time classes, and most of its methods and attributes are inherited from those two classes. Getting the current time | import datetime
a = datetime.datetime.now()
a
a.date(),a.time() | _____no_output_____ | MIT | .ipynb_checkpoints/1.6 时间转换及处理-checkpoint.ipynb | Yanie1asdfg/Quant-Lectures |
combine(...): merges a date object and a time object into a single datetime object | datetime.datetime.combine(a.date(),a.time()) | _____no_output_____ | MIT | .ipynb_checkpoints/1.6 时间转换及处理-checkpoint.ipynb | Yanie1asdfg/Quant-Lectures
strptime(...): given string and format arguments, returns the corresponding datetime object: | datetime.datetime.strptime('2017-3-22 15:25','%Y-%m-%d %H:%M') | _____no_output_____ | MIT | .ipynb_checkpoints/1.6 时间转换及处理-checkpoint.ipynb | Yanie1asdfg/Quant-Lectures
strftime(...): given datetime and format arguments, returns the corresponding str: | a = datetime.datetime.now()
datetime.datetime.strftime(a,'%Y-%m-%d %H:%M:%S') | _____no_output_____ | MIT | .ipynb_checkpoints/1.6 时间转换及处理-checkpoint.ipynb | Yanie1asdfg/Quant-Lectures |
Torrent To Google Drive Downloader. **Important Note:** To get more disk space: go to Runtime -> Change Runtime and set GPU as the Hardware Accelerator. You will get around 384GB to download any torrent you want. Install libtorrent and Initialize Session | !python -m pip install --upgrade pip setuptools wheel
!python -m pip install lbry-libtorrent
!apt install python3-libtorrent
import libtorrent as lt
ses = lt.session()
ses.listen_on(6881, 6891)
downloads = [] | _____no_output_____ | MIT | Torrent_To_Google_Drive_Downloader.ipynb | l-i-e-d-j-i-6-7-8-w-d/Torrent-To-Google-Drive-Downloader |
Mount Google Drive. To stream files we need to mount Google Drive. | from google.colab import drive
drive.mount("/content/drive") | _____no_output_____ | MIT | Torrent_To_Google_Drive_Downloader.ipynb | l-i-e-d-j-i-6-7-8-w-d/Torrent-To-Google-Drive-Downloader |
Add From Torrent File. You can run this cell to add more files as many times as you want | from google.colab import files
source = files.upload()
params = {
"save_path": "/content/drive/My Drive/Torrent",
"ti": lt.torrent_info(list(source.keys())[0]),
}
downloads.append(ses.add_torrent(params)) | _____no_output_____ | MIT | Torrent_To_Google_Drive_Downloader.ipynb | l-i-e-d-j-i-6-7-8-w-d/Torrent-To-Google-Drive-Downloader |
Add From Magnet Link. You can run this cell to add more files as many times as you want | params = {"save_path": "/content/drive/My Drive/Torrent"}
while True:
magnet_link = input("Enter Magnet Link Or Type Exit: ")
if magnet_link.lower() == "exit":
break
downloads.append(
lt.add_magnet_uri(ses, magnet_link, params)
)
| _____no_output_____ | MIT | Torrent_To_Google_Drive_Downloader.ipynb | l-i-e-d-j-i-6-7-8-w-d/Torrent-To-Google-Drive-Downloader |
Start Download. Source: https://stackoverflow.com/a/5494823/7957705 and [issue 3](https://github.com/FKLC/Torrent-To-Google-Drive-Downloader/issues/3), which refers to this [stackoverflow question](https://stackoverflow.com/a/6053350/7957705) | import time
from IPython.display import display
import ipywidgets as widgets
state_str = [
"queued",
"checking",
"downloading metadata",
"downloading",
"finished",
"seeding",
"allocating",
"checking fastresume",
]
layout = widgets.Layout(width="auto")
style = {"description_width": "initial"}
download_bars = [
widgets.FloatSlider(
step=0.01, disabled=True, layout=layout, style=style
)
for _ in downloads
]
display(*download_bars)
while downloads:
next_shift = 0
for index, download in enumerate(downloads[:]):
bar = download_bars[index + next_shift]
if not download.is_seed():
s = download.status()
bar.description = " ".join(
[
download.name(),
str(s.download_rate / 1000),
"kB/s",
state_str[s.state],
]
)
bar.value = s.progress * 100
else:
next_shift -= 1
ses.remove_torrent(download)
downloads.remove(download)
bar.close() # Seems to be not working in Colab (see https://github.com/googlecolab/colabtools/issues/726#issue-486731758)
download_bars.remove(bar)
print(download.name(), "complete")
time.sleep(1)
| _____no_output_____ | MIT | Torrent_To_Google_Drive_Downloader.ipynb | l-i-e-d-j-i-6-7-8-w-d/Torrent-To-Google-Drive-Downloader |
Monte Carlo Methods. In this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnv. We begin by importing the necessary packages. | import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy | _____no_output_____ | MIT | 2-Valued-Based Methods/monte-carlo/Monte_Carlo.ipynb | zhaolongkzz/DRL-of-Udacity |
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment. | env = gym.make('Blackjack-v0') | _____no_output_____ | MIT | 2-Valued-Based Methods/monte-carlo/Monte_Carlo.ipynb | zhaolongkzz/DRL-of-Udacity |
Each state is a 3-tuple of:
- the player's current sum $\in \{0, 1, \ldots, 31\}$,
- the dealer's face up card $\in \{1, \ldots, 10\}$, and
- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).

The agent has two potential actions:
```
STICK = 0
HIT = 1
```
Verify this by running the code cell below. | print(env.observation_space)
print(env.action_space) | Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
| MIT | 2-Valued-Based Methods/monte-carlo/Monte_Carlo.ipynb | zhaolongkzz/DRL-of-Udacity |
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._) | for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break | (12, 10, False)
End game! Reward: -1.0
You lost :(
(19, 10, False)
End game! Reward: -1
You lost :(
(18, 2, False)
End game! Reward: -1
You lost :(
| MIT | 2-Valued-Based Methods/monte-carlo/Monte_Carlo.ipynb | zhaolongkzz/DRL-of-Udacity |
Part 1: MC Prediction. In this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. In the code, this split is written as `probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]`, and `np.random.choice(np.arange(2), p=probs)` chooses the action from these probabilities.

The function accepts as **input**:
- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.

It returns as **output**:
- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively. | def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode | _____no_output_____ | MIT | 2-Valued-Based Methods/monte-carlo/Monte_Carlo.ipynb | zhaolongkzz/DRL-of-Udacity |
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*) | for i in range(3):
print(generate_episode_from_limit_stochastic(env)) | [((17, 9, False), 1, -1)]
[((16, 6, False), 1, -1)]
[((18, 8, False), 1, -1)]
| MIT | 2-Valued-Based Methods/monte-carlo/Monte_Carlo.ipynb | zhaolongkzz/DRL-of-Udacity |
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.

Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `generate_episode`: This is a function that returns an episode of interaction.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).

The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`. | def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
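# Added note: for each visited (state, action) pair, the loop below accumulates the discounted
# return G_t = R_{t+1} + gamma*R_{t+2} + ... + gamma^(T-t-1)*R_T and estimates Q[s][a] as the
# average of these returns (returns_sum divided by the visit count N).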
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
# as above episode has states, actions, rewards, need extract
states, actions, rewards = zip(*episode) # return list
# define a list of discounts to adapt S, A, R
discounts = np.array([gamma ** i for i in range(len(rewards) + 1)])
for i, state in enumerate(states):
N[state][actions[i]] += 1.0
# discounts[: -(1+i)] keeps the first len(rewards) - i discount factors so it aligns with rewards[i:]
G_t = sum(rewards[i:] * discounts[: -(1+i)])
returns_sum[state][actions[i]] += G_t
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
episode = generate_episode_from_limit_stochastic(env)
print(episode)
states, actions, rewards = zip(*episode)
print('\nstates:', states)
print('\nactions:', actions)
print('\nrewards:', rewards) | [((19, 10, False), 0, 1.0)]
states: ((19, 10, False),)
actions: (0,)
rewards: (1.0,)
| MIT | 2-Valued-Based Methods/monte-carlo/Monte_Carlo.ipynb | zhaolongkzz/DRL-of-Udacity |
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**. | # obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot) | Episode 500000/500000. | MIT | 2-Valued-Based Methods/monte-carlo/Monte_Carlo.ipynb | zhaolongkzz/DRL-of-Udacity |
Part 2: MC Control. In this section, you will write your own implementation of constant-$\alpha$ MC control.

Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).

The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.

(_Feel free to define additional functions to help you organize your code._) | import random
def generate_episode_from_q(env, Q, nA, epsilon):
epsidoe = []
state = env.reset()
while True:
# choose an action from the epsilon-greedy probabilities if the state has been seen, otherwise sample randomly
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], nA, epsilon)) if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
epsidoe.append((state, action, reward))
state = next_state
if done:
break
return epsidoe
def generate_episode_from_q_epsilon(env, Q, nA, epsilon):
epsidoe = []
state = env.reset()
while True:
# choose an action epsilon-greedily from the current state
action = epsilon_greedy(Q, nA, state, epsilon)
next_state, reward, done, info = env.step(action)
epsidoe.append((state, action, reward))
state = next_state
if done:
break
return epsidoe
def get_probs(Q_s, nA, epsilon):
# build a list [nA] for policy
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
# we can also use epsilon-greedy, but need to return policy
def epsilon_greedy(Q, nA, state, epsilon):
'''explore and exploit via epsilon-greedy; return an action'''
if random.random() > epsilon:
return np.argmax(Q[state])
else:
return random.choice(np.arange(nA))
def mc_control(env, num_episodes, alpha, gamma=1.0, epsilon = 1.0, eps_decay=.99999, eps_min=.05):
# initialize empty dictionary of arrays
nA = env.action_space.n
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# epsilon = max(epsilon*eps_decay, eps_min) # set a fixed epsilon
# or we can generate epsiode by changing epsilon
epsilon = 1.0 / i_episode
# episode = generate_episode_from_q(env, Q, nA, epsilon)
# or
episode = generate_episode_from_q_epsilon(env, Q, nA, epsilon)
# unpack the episode into separate tuples of states, actions, and rewards
states, actions, rewards = zip(*episode)
# define a list of discounts to adapt S, A, R
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
G_t = sum( rewards[i:] * discounts[: -(1+i)] )
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha * (G_t - old_Q)
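# Added note: the line above is the constant-alpha MC update, Q(s,a) <- Q(s,a) + alpha * (G_t - Q(s,a))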
# pack state and action into dictionary for optional policy
policy = dict((k, np.argmax(v)) for k, v in Q.items())
return policy, Q | _____no_output_____ | MIT | 2-Valued-Based Methods/monte-carlo/Monte_Carlo.ipynb | zhaolongkzz/DRL-of-Udacity |
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters. | # obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02) | Episode 500000/500000. | MIT | 2-Valued-Based Methods/monte-carlo/Monte_Carlo.ipynb | zhaolongkzz/DRL-of-Udacity |
Next, we plot the corresponding state-value function. | # obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V) | _____no_output_____ | MIT | 2-Valued-Based Methods/monte-carlo/Monte_Carlo.ipynb | zhaolongkzz/DRL-of-Udacity |
Finally, we visualize the policy that is estimated to be optimal. | # plot the policy
plot_policy(policy) | _____no_output_____ | MIT | 2-Valued-Based Methods/monte-carlo/Monte_Carlo.ipynb | zhaolongkzz/DRL-of-Udacity |
To Do: 1. Try different architectures. 2. Try stateful/stateless LSTM. 3. Add OAT, holidays. 4. Check if data has consecutive blocks. | import numpy as np
import pandas as pd
from scipy import stats
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.callbacks import EarlyStopping
from keras.layers import Dropout, Dense, LSTM
from statsmodels.tsa.stattools import adfuller
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
power_data_folder = '/Users/pranavhgupta/Documents/GitHub/XBOS_HVAC_Predictions/micro-service/data'
hvac_states_data_folder = '/Users/pranavhgupta/Documents/GitHub/XBOS_HVAC_Predictions/micro-service/hvac_states_batch_data'
site = 'avenal-animal-shelter' | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Import data: Power data | df_power = pd.read_csv(power_data_folder + '/power_' + site + '.csv', index_col=[0], parse_dates=True)
df_power.columns = ['power']
df_power.head()
df_power.plot(figsize=(18,5)) | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Check for missing data | df_power.isna().any() | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Clean data | # Resample to 5min
df_processed = df_power.resample('5T').mean()
df_processed.head()
df_processed.plot(figsize=(18,5)) | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Check for missing data | print(df_processed.isna().any())
print('\n')
missing = df_processed['power'].isnull().sum()
total = df_processed['power'].shape[0]
print('% Missing data for power: ', (missing/total)*100, '%') | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Depending on the percent missing data, either drop it or forward fill the NaN's | # Option 1: Drop NaN's
df_processed.dropna(inplace=True)
# # Option 2: ffill NaN's
# df_processed = df_processed.fillna(method='ffill') | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Normalize data | scaler = MinMaxScaler(feature_range=(0,1))
df_normalized = pd.DataFrame(scaler.fit_transform(df_processed),
columns=df_processed.columns, index=df_processed.index)
df_normalized.head() | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Check for missing data | df_normalized.isna().any() | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Check for stationarity | result = adfuller(df_normalized['power'], autolag='AIC')
output = pd.Series(result[0:4], index=['Test Statistic', 'p-value', '#Lags Used',
'#Observations Used'])
for key, value in result[4].items():
output['Critical Value (%s)' % key] = value
output | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
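A small interpretive sketch (added): the ADF test's null hypothesis is that the series has a unit root (is non-stationary), so a small p-value rejects non-stationarity.

```python
p_value = result[1]
if p_value < 0.05:
    print('p-value = {:.4f}: reject the unit-root null -> series appears stationary'.format(p_value))
else:
    print('p-value = {:.4f}: cannot reject the unit-root null -> consider differencing'.format(p_value))
```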
HVAC States data | df_hvac_states = pd.read_csv(hvac_states_data_folder + '/hvac_states_' + site + '.csv',
index_col=[0], parse_dates=True)
df_hvac_states.columns = ['zone' + str(i) for i in range(len(df_hvac_states.columns))]
df_hvac_states.head() | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Check for missing data | df_hvac_states.isna().any() | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Convert categorical (HVAC states) into dummy variables | var_to_expand = df_hvac_states.columns
# One-hot encode the HVAC states
for var in var_to_expand:
add_var = pd.get_dummies(df_hvac_states[var], prefix=var, drop_first=True)
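# Added note: drop_first=True drops one indicator column per zone so the remaining dummy columns are not redundant (perfectly collinear).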
# Add all the columns to the model data
df_hvac_states = df_hvac_states.join(add_var)
# Drop the original column that was expanded
df_hvac_states.drop(columns=[var], inplace=True)
df_hvac_states.head()
# def func(row):
# """ Possible situations: (0,0,0), (1,0,1), (0,1,2) --> 0, 1, 2
# If all are same --> first element
# If there is a majority among the 3 --> majority
# If all are unique --> last element
# """
# count = len(set(list(row.values)))
# if count == 1:
# return row.values[0]
# elif count == 2:
# max(set(list(row.values)), key=list(row.values).count)
# else:
# return row.values[-1]
# resample_df_hvac = df_raw_hvac_states.resample('15T').apply(func)
# resample_df_hvac = resample_df_hvac.fillna(method='ffill')
# resample_df_hvac.isna().any() | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Join power and hvac_states data | # CHECK: pd.concat gives a lot of duplicate indices.
# Try below code to see,
# start = pd.Timestamp('2018-02-10 06:00:00+00:00')
# df.loc[start]
df = pd.concat([df_normalized, df_hvac_states], axis=1)
df.head()
df = df.drop_duplicates()
missing = df.isnull().sum()
total = df.shape[0]
print('missing data for power: ', (missing/total)*100, '%') | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Depending on the percent missing data, either drop it or forward fill the NaN's | # Option 1: Drop NaN's
df.dropna(inplace=True)
# # Option 2: ffill NaN's
# df = df.fillna(method='ffill') | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Visualizations Box plot | df_box_plot = pd.DataFrame(df['power'])
df_box_plot['quarter'] = df_box_plot.index.quarter
df_box_plot.boxplot(column='power', by='quarter') | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Histogram | df['power'].hist() | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
ACF and PACF | fig1 = plot_acf(df_processed['power'], lags=50)
fig2 = plot_pacf(df_processed['power'], lags=50) | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Prepare data: Split into training & testing data | X_train = df[(df.index < '2019-01-01')]
y_train = df.loc[(df.index < '2019-01-01'), 'power']
X_test = df[(df.index >= '2019-01-01')]
y_test = df.loc[(df.index >= '2019-01-01'), 'power'] | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Prepare data for LSTM. Note: NUM_TIMESTEPS is a hyper-parameter too! | # Number of columns in X_train
NUM_FEATURES = len(X_train.columns)
# A sequence contains NUM_TIMESTEPS number of elements and predicts NUM_MODEL_PREDICTIONS number of predictions
NUM_TIMESTEPS = 24
# Since this is an iterative method, model will predict only 1 timestep ahead
NUM_MODEL_PREDICTIONS = 1
# 4 hour predictions = Fourty eight 5min predictions
NUM_ACTUAL_PREDICTIONS = 48
train_x, train_y = [], []
for i in range(NUM_TIMESTEPS, len(X_train)-NUM_MODEL_PREDICTIONS):
train_x.append(X_train.values[i-NUM_TIMESTEPS:i])
train_y.append(y_train.values[i:i+NUM_MODEL_PREDICTIONS])
train_x, train_y = np.array(train_x), np.array(train_y)
print(train_x.shape)
print(train_y.shape)
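# Added note: train_x has shape (num_windows, NUM_TIMESTEPS, NUM_FEATURES) and train_y has shape
# (num_windows, NUM_MODEL_PREDICTIONS), where num_windows = len(X_train) - NUM_TIMESTEPS - NUM_MODEL_PREDICTIONS.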
test_x, test_y = [], []
for i in range(NUM_TIMESTEPS, len(X_test)-NUM_MODEL_PREDICTIONS):
test_x.append(X_test.values[i-NUM_TIMESTEPS:i])
test_y.append(y_test.values[i:i+NUM_MODEL_PREDICTIONS])
test_x, test_y = np.array(test_x), np.array(test_y)
print(test_x.shape)
print(test_y.shape) | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
LSTM | model = Sequential([
LSTM(units=128, input_shape=(NUM_TIMESTEPS, NUM_FEATURES), return_sequences=True),
Dropout(0.2),
LSTM(units=128, return_sequences=True),
Dropout(0.2),
LSTM(units=128, activation='softmax', return_sequences=False),
Dropout(0.2),
Dense(NUM_MODEL_PREDICTIONS)
])
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
model.summary()
# Stop training if validation loss fails to decrease
callbacks = [EarlyStopping(monitor='val_loss', mode='min', verbose=1)]
history = model.fit(train_x, train_y,
epochs=100, batch_size=128, shuffle=False,
validation_data=(test_x, test_y), callbacks=callbacks) | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Results: Loss | train_loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = [x for x in range(len(train_loss))]
df_train_loss = pd.DataFrame(train_loss, columns=['train_loss'], index=epochs)
df_val_loss = pd.DataFrame(val_loss, columns=['val_loss'], index=epochs)
df_loss = pd.concat([df_train_loss, df_val_loss], axis=1)
df_loss.plot(figsize=(18,5)) | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Accuracy | train_acc = history.history['acc']
val_acc = history.history['val_acc']
epochs = [x for x in range(len(train_acc))]
df_train_acc = pd.DataFrame(train_acc, columns=['train_acc'], index=epochs)
df_val_acc = pd.DataFrame(val_acc, columns=['val_acc'], index=epochs)
df_acc = pd.concat([df_train_acc, df_val_acc], axis=1)
df_acc.plot(figsize=(18,5)) | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Plot predicted & true values | # Make predictions through trained model
pred_y = model.predict(test_x)
# Convert predicted and actual values to dataframes (for plotting)
df_y_pred = pd.DataFrame(scaler.inverse_transform(pred_y),
index=y_test[NUM_TIMESTEPS:-NUM_MODEL_PREDICTIONS].index,
columns=['power'])
df_y_true = pd.DataFrame(scaler.inverse_transform(test_y),
index=y_test[NUM_TIMESTEPS:-NUM_MODEL_PREDICTIONS].index,
columns=['power'])
df_y_pred.head()
df_plot = pd.concat([df_y_pred, df_y_true], axis=1)
df_plot.columns = ['pred', 'true']
df_plot.head()
df_plot.plot(figsize=(18,5))
# # Plot between two time periods
# start = pd.Timestamp('2019-01-01 23:45:00+00:00')
# end = pd.Timestamp('2019-02-01 23:45:00+00:00')
# df_plot.loc[start:end].plot(figsize=(18,5)) | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Make predictions through iterative fitting for a particular timestamp. Choose a particular timestamp | timestamp = pd.Timestamp('2019-01-01 23:45:00+00:00')
# Keep copy of timestamp to use it after the for loop
orig_timestamp = timestamp
X_test_pred = X_test.copy()
for _ in range(NUM_ACTUAL_PREDICTIONS):
# Create test sequence
test = np.array(X_test_pred.loc[:timestamp].tail(NUM_TIMESTEPS))
test = np.reshape(test, (1, test.shape[0], test.shape[1]))
# Increment timestamp
timestamp = X_test_pred.loc[timestamp:].index.values[1]
# Make prediction
y_pred_power = model.predict(test)
y_pred_power = list(y_pred_power[0])
# Add prediction to end of test array
X_test_pred.loc[timestamp, 'power'] = y_pred_power
# X_test_pred.loc[pd.Timestamp('2019-01-01 23:45:00+00:00'):].head(NUM_ACTUAL_PREDICTIONS)
# X_test.loc[pd.Timestamp('2019-01-01 23:45:00+00:00'):].head(NUM_ACTUAL_PREDICTIONS) | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Plot | arr_pred = np.reshape(X_test_pred.loc[orig_timestamp:,'power'].head(NUM_ACTUAL_PREDICTIONS).values, (-1, 1))
arr_true = np.reshape(X_test.loc[orig_timestamp:,'power'].head(NUM_ACTUAL_PREDICTIONS).values, (-1, 1))
df_pred = pd.DataFrame(scaler.inverse_transform(arr_pred),
index=X_test_pred.loc[orig_timestamp:].head(NUM_ACTUAL_PREDICTIONS).index)
df_true = pd.DataFrame(scaler.inverse_transform(arr_true),
index=X_test.loc[orig_timestamp:].head(NUM_ACTUAL_PREDICTIONS).index)
df_plot = pd.concat([df_pred, df_true], axis=1)
df_plot.columns = ['pred', 'true']
df_plot.plot(figsize=(18,5)) | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
Get accuracy and MSE of the entire test set using iterative fitting. Note: this takes a while to compute! | # These two lists store the entire dataframes of 48 predictions of each element in test set!
# This is not really necessary but only to double check if the outputs are in the correct format
predicted_values = []
true_values = []
for i in range(NUM_TIMESTEPS, len(X_test)-NUM_ACTUAL_PREDICTIONS):
# Keep copy of timestamp to store it for use after the for loop
timestamp = pd.Timestamp(X_test.index.values[i])
orig_timestamp = timestamp
X_test_pred = X_test.copy()
for _ in range(NUM_ACTUAL_PREDICTIONS):
# Create test sequence
test = np.array(X_test_pred.loc[:timestamp].tail(NUM_TIMESTEPS))
test = np.reshape(test, (1, test.shape[0], test.shape[1]))
# Increment timestamp
timestamp = X_test_pred.loc[timestamp:].index.values[1]
# Make prediction
y_pred_power = model.predict(test)
y_pred_power = list(y_pred_power[0])
# Add prediction to end of test array
X_test_pred.loc[timestamp, 'power'] = y_pred_power
predicted_values.append(X_test_pred.loc[orig_timestamp:].head(NUM_ACTUAL_PREDICTIONS))
true_values.append(X_test.loc[orig_timestamp:].head(NUM_ACTUAL_PREDICTIONS))
# Get only the power values from the original predicted_values and true_values lists and then reshape them
# into the correct format for sklearn metrics' functions.
predicted_power_values = []
true_power_values = []
for df in predicted_values:
predicted_power_values.append(df[['power']].values)
for df in true_values:
true_power_values.append(df[['power']].values)
predicted_power_values = np.array(predicted_power_values)
predicted_power_values = np.reshape(predicted_power_values,
(predicted_power_values.shape[0], predicted_power_values.shape[1]))
true_power_values = np.array(true_power_values)
true_power_values = np.reshape(true_power_values,
(true_power_values.shape[0], true_power_values.shape[1]))
from sklearn.metrics import r2_score
score = r2_score(true_power_values, predicted_power_values)
score
from sklearn.metrics import mean_squared_error
mse = mean_squared_error(true_power_values, predicted_power_values)
mse | _____no_output_____ | BSD-2-Clause | services/energy_consumption_forecast/lstm/LSTM (Iterative).ipynb | phgupta/XBOS |
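The MSE above is computed on scaled power values; a short sketch for reporting the error in the original units, assuming `scaler` was fit on the single power column (as its earlier one-column `inverse_transform` calls suggest):

true_orig = scaler.inverse_transform(true_power_values.reshape(-1, 1))
pred_orig = scaler.inverse_transform(predicted_power_values.reshape(-1, 1))
rmse_orig = np.sqrt(mean_squared_error(true_orig, pred_orig))
rmse_orig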
**Project 4 Notebook 1****Data Acquisition**Using the Google Chrome web browser extension "Web Scraper", I scraped stories and other data from Fanfiction.net. I searched for Hunger Games stories, filtering for stories that were rated T, and that had Katniss Everdeen (there are 4 fields where you can put characters, and I put Katniss Everdeen in for all 4). Looking at the .csv files in Excel, some of the stories were split into several cells. I later learned that Excel has a limit of 32,767 characters per cell, so that when a cell contains more characters than this limit, the remaining characters are split into several cells over the next rows. This is a limitation of Excel, but not of .csv files in general, and so should not affect loading the .csv files into a pandas dataframe.**Preprocessing issues**On Tuesday 2/23/21, I decided to go back and re-do the preprocessing, but leave the capital letters in. Because so many of the names in the stories are slight variations from modern American English (eg Peeta/Peter, Katniss/Katherine) or don't exist in modern American English, I thought it would be important to leave the capitalization in so that the POS tagger recognizes these words as proper nouns.On 2/24/21, I observed that leaving words capitalized resulted in stop words that were capitalized not being removed. Also, I decided to not do parts of speech tagging, as the tagger will not recognize some words as nouns if they are not capitalized (eg Peeta, Katniss, Haymitch). I will replace capital letters, remove numbers and punctuation, then do ngrams, then remove stop words and proceed to vectorization and topic modeling. This happens in Notebook 2.Later, when I couldn't get stop word removal working from the quadgrams, I decided to tokenize by single word, then use stemming to try to reduce the number of words. | import numpy as np
import nltk
import pandas as pd | _____no_output_____ | MIT | Notebook-1-Project-4-Hunger-Games-Fanfiction-webscraping.ipynb | sutrofog/Sillman-Metis-Project4 |
The data was scraped in two batches and saved in .csv files. I read in the two files, created Pandas DataFrames, and then joined the two DataFrames using append. | data = pd.read_csv('Project-4-data/fanfiction-katniss1_pre_page_69.csv')
data.head()
data.info()
data2=pd.read_csv('Project-4-data/fanfiction-katniss1_p69-end_complete.csv')
data.head()
data2.info() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 1718 entries, 0 to 1717
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 web-scraper-order 1718 non-null object
1 web-scraper-start-url 1718 non-null object
2 story_link 1718 non-null object
3 story_link-href 1718 non-null object
4 story_title 1718 non-null object
5 author_id 1718 non-null object
6 author_id-href 1718 non-null object
7 story_info 1718 non-null object
8 story_text 1718 non-null object
9 next_pages 1693 non-null object
10 next_pages-href 1693 non-null object
dtypes: object(11)
memory usage: 147.8+ KB
| MIT | Notebook-1-Project-4-Hunger-Games-Fanfiction-webscraping.ipynb | sutrofog/Sillman-Metis-Project4 |
Append the dataframes to make a dataframe with the complete dataset. | katniss=data.append(data2)
katniss.head()
katniss.info() | <class 'pandas.core.frame.DataFrame'>
Int64Index: 3443 entries, 0 to 1717
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 web-scraper-order 3443 non-null object
1 web-scraper-start-url 3443 non-null object
2 story_link 3443 non-null object
3 story_link-href 3443 non-null object
4 story_title 3443 non-null object
5 author_id 3443 non-null object
6 author_id-href 3443 non-null object
7 story_info 3443 non-null object
8 story_text 3443 non-null object
9 previous_pages 1700 non-null object
10 previous_pages-href 1700 non-null object
11 next_pages 1693 non-null object
12 next_pages-href 1693 non-null object
dtypes: object(13)
memory usage: 376.6+ KB
| MIT | Notebook-1-Project-4-Hunger-Games-Fanfiction-webscraping.ipynb | sutrofog/Sillman-Metis-Project4 |
Removed some unnecessary columns. | ##Can delete columns "previous_pages" and "next_pages".
##These are links that the scraping extension put in.
katniss.drop(["previous_pages", "previous_pages-href",
"next_pages", "next_pages-href"], axis=1, inplace=True )
katniss.head()
katniss.info()
#replace punctuation with a white space, remove numbers, capital letters
##on 2/23, decided to not replace capital letters
##on 2/24, decided to go back and replace capital letters again, and then not tag parts of speech, as the pos
##tagger will not recognize some names as nouns (eg Katniss, Peeta, Haymitch). Captialized stopwords
##were not being removed, which creates its own mess.
import re
import string
alphanumeric = lambda x: re.sub('\w*\d\w*', ' ', x)
punc_lower = lambda x: re.sub('[%s]' % re.escape(string.punctuation), ' ', x.lower()) #this was used 2/22 to replace
#capital letters and remove punctuation.
#punc_remove = lambda x: re.sub('[%s]' % re.escape(string.punctuation), ' ', x) this is from 2/23
# Apply the cleaning to the full appended dataframe; punc_lower also lower-cases (per the 2/24 note above)
katniss['story_text'] = katniss.story_text.map(alphanumeric).map(punc_lower)
katniss.head()
katniss.to_csv('katniss-no-punc-num.csv')
##save this to a .csv file
import re
import string
#import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
nltk.download('stopwords')
from nltk.tokenize import word_tokenize
stop=stopwords.words('english')
#import texthero as hero
#set(stopwords.words('english'))
#this does not work. It separated all of the story_text into single letters!
#Good thing I saved the last iteration as a .csv. I'll have to load it and figure out what I did wrong.
#katniss['story_text_without_stopwords'] = katniss['story_text'].apply(lambda x: [item for item in x if item not in stop])
katniss.head()
##katniss=pd.read_csv('Project-4-data/katniss-no-capitals.csv')
katniss.head()
#ok the story_text is ok. Whew! Now to figure out how to take out the stop words.
#The reason it did that is because I didn't tokenize by word first.
#I need to tokenize the text by words before taking out the stop words. It needs to see the text in units of words.
nltk.download('punkt')
#Tokenize by word. I imported word_tokenize earlier in the notebook.
#This should create a new column with the story texts tokenized by word.
#Apparently there are still quotation marks in the story texts.
#These need to come out and be replaced by white space
katniss['story_text'] = katniss['story_text'].str.strip(to_strip='"')
katniss.head()
katniss.to_csv('katniss-no-num-punc-quote.csv')
#Still seems to be quotation marks at the end of some of the story texts.
#Will try to tokenize anyway. Getting an error that it expected a "string or bytes-like object"
#need to force the column to a str data type.
###katniss['story_text_wtokenized'] = word_tokenize(katniss['story_text_no_quotes'])
katniss.info()
#Getting an error that it expected a "string or bytes-like object"
#need to force the column to a str data type.
katniss['story_text']=katniss['story_text'].astype(str)
katniss.info()
katniss.head()
#tokenize by word
katniss['story_text'] = katniss['story_text'].apply(word_tokenize)
katniss.info()
katniss.head()
katniss.to_csv('katniss-word-tokenized-wcap-new.csv') | _____no_output_____ | MIT | Notebook-1-Project-4-Hunger-Games-Fanfiction-webscraping.ipynb | sutrofog/Sillman-Metis-Project4 |
I can delete a couple of columns to save space, 'story_text' and 'story_text_no_quotes', using: >katniss.drop(["story_text", "story_text_no_quotes"], axis=1, inplace=True ) | #katniss.to_csv('katniss-word-tokenized_only.csv')
katniss.head()
#Now I can try to take out the stopwords.
katniss['story_text_without_stopwords'] = katniss['story_text'].apply(lambda x: [item for item in x if item not in stop])
katniss.head()
#Super! It worked! Save it as a .csv
katniss.to_csv('katniss-wtok-no-stops-wcaps.csv')
#I'll delete the column that still has the stopwords, to save space. 'story_text'
katniss.drop(["story_text"], axis=1, inplace=True )
katniss.head()
katniss.to_csv('katniss-nostops-wcaps-only.csv') | _____no_output_____ | MIT | Notebook-1-Project-4-Hunger-Games-Fanfiction-webscraping.ipynb | sutrofog/Sillman-Metis-Project4 |
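The introduction above names stemming as the next step for shrinking the vocabulary; a minimal sketch of that step, assuming NLTK's PorterStemmer applied to the stopword-free token lists (the stemmer choice and the output file name are illustrative, not from the original notebook):

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
# Note: PorterStemmer lower-cases tokens as a side effect
katniss['story_text_stemmed'] = katniss['story_text_without_stopwords'].apply(
    lambda tokens: [stemmer.stem(token) for token in tokens])
katniss.to_csv('katniss-stemmed.csv')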
Homework 2

Cross Validation Problem

In this homework, you will use cross validation to analyze the effect on model quality of the number of model parameters and the noise in the observational data. You do this analysis in the context of design of experiments. The two factors are (i) the number of model parameters and (ii) the noise in the observational data; the response will be the $R^2$ of the model (actually the $R^2$ averaged across the folds of cross validation).

You will investigate models of linear pathways with 2, 4, 6, 8, 10 parameters. For example, a two-parameter model is $S_1 \xrightarrow{v_1} S_2 \xrightarrow{v_2} S_3$, where $v_i = k_i s_i$, $k_i$ is a parameter to estimate, and $s_i$ is the concentration of $S_i$. The initial concentration of $S_1 = 10$, and the true value of $k_i$ is $i$. Thus, for a two-parameter model, $k_1 = 1$, $k_2 = 2$. You will generate the synthetic data by adding a noise term to the true model. The noise term is drawn from a normal distribution with mean 0 and standard deviations of 0.2, 0.5, 0.8, 1.0, and 1.5, depending on the experiment.

You will design experiments, implement code to run them, run the experiments, and interpret the results. The raw output of these experiments will be a table structured as the one below. Cell values will be the average $R^2$ across the folds of the cross validation done with one level for each factor.

| | 2 | 4 | 6 | 8 | 10 |
| -- | -- | -- | -- | -- | -- |
| 0.2 | ? | ? | ? | ? | ? |
| 0.5 | ? | ? | ? | ? | ? |
| 0.8 | ? | ? | ? | ? | ? |
| 1.0 | ? | ? | ? | ? | ? |
| 1.5 | ? | ? | ? | ? | ? |

1. (2 pt) **Generate Models.** Write (or generate) the models in Antimony, and produce plots for their true values. Use a simulation time of 10 and 100 points.
1. (1 pt) **Generate Synthetic Data.** Write a function that creates synthetic data given the parameters std and numParameter.
1. (1 pt) **Extend ``CrossValidator``.** You will extend ``CrossValidator`` (in ``common/util_crossvalidation.py``) by creating a subclass ``ExtendedCrossValidator`` that has the method ``calcAvgRsq``. The method takes no argument (except ``self``) and returns the average value of $R^2$ for the folds. Don't forget to document the function and include at least one test.
1. (4 pt) **Implement ``runExperiments``.** This function has inputs: (a) a list of the number of parameters for the models to study and (b) a list of the standard deviations of the noise terms. It returns a dataframe whose columns are the numbers of parameters, whose rows (index) are the standard deviations of noise, and whose values are the average $R^2$ for the folds defined by the levels of the factors. Run experiments that produce the table described above using five-fold cross validation and 100 simulation points.
1. (4 pt) **Calculate Effects.** Using the baseline of a noise standard deviation of 0.8 and 6 parameters, calculate $\mu$, $\alpha_{i,k_i}$, $\gamma_{i,k_i,j,k_j}$.
1. (3 pt) **Analysis.** Answer the following questions:
   1. What is the effect on $R^2$ as the number of parameters increases? Why?
   1. How does the noise standard deviation affect $R^2$? Why?
   1. What are the interaction effects and how do they influence the response (average $R^2$)?

**Please do your homework in a copy of this notebook, maintaining the sections.**

Programming Preliminaries

This section provides the setup to run your python codes. | IS_COLAB = False
#
if IS_COLAB:
!pip install tellurium
!pip install SBstoat
#
# Constants for standalone notebook
if not IS_COLAB:
CODE_DIR = "/home/ubuntu/advancing-biomedical-models/common"
else:
from google.colab import drive
drive.mount('/content/drive')
CODE_DIR = "/content/drive/My Drive/Winter 2021/common"
import sys
sys.path.insert(0, CODE_DIR)
import util_crossvalidation as ucv
from SBstoat.namedTimeseries import NamedTimeseries, TIME
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tellurium as te
END_TIME = 5
NUM_POINT = 100
NOISE_STD = 0.5
# Column names
C_NOISE_STD = "noisestd"
C_NUM_PARAMETER = "no. parameters"
C_VALUE = "value"
#
NOISE_STDS = [0.2, 0.5, 0.8, 1.0, 1.5]
NUM_PARAMETERS = [2, 4, 6, 8, 10]
def isSame(collection1, collection2):
"""
Determines if two collections have the same elements.
"""
diff = set(collection1).symmetric_difference(collection2)
return len(diff) == 0
# Tests
assert(isSame(range(3), [0, 1, 2]))
assert(not isSame(range(4), range(3))) | _____no_output_____ | MIT | assignments/Homework2.ipynb | BioModelTools/topics-course |
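As a concrete reading of tasks 1–2, here is a minimal sketch of how the Antimony models and the noisy synthetic data could be produced; the function names, the Antimony layout, and the noise-injection details are one possible interpretation of the assignment, not the official solution:

def makeModel(numParameter):
    """
    Returns an Antimony string for a linear pathway with numParameter reactions.
    """
    statements = ["S1 = 10;"]  # initial concentration of S1
    for i in range(1, numParameter + 1):
        statements.append("J%d: S%d -> S%d; k%d*S%d;" % (i, i, i + 1, i, i))
        statements.append("k%d = %d;" % (i, i))  # true value of k_i is i
    return "\n".join(statements)

def generateSyntheticData(numParameter, std, endTime=10, numPoint=100):
    """
    Simulates the true model and adds N(0, std) noise to every species column.
    """
    rr = te.loada(makeModel(numParameter))
    arr = rr.simulate(0, endTime, numPoint)
    noisy = np.array(arr)
    noisy[:, 1:] += np.random.normal(0, std, noisy[:, 1:].shape)  # leave the time column exact
    return pd.DataFrame(noisy, columns=arr.colnames)

# Example: plot the true two-parameter model over a simulation time of 10 with 100 points
rr = te.loada(makeModel(2))
rr.simulate(0, 10, 100)
rr.plot()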
Now You Code 4: Syracuse Weather

Write a program to load the Syracuse weather data from Dec 2015 in JSON format into a Python list of dictionaries. The file with the weather data is in your `Now-You-Code` folder: `"NYC4-syr-weather-dec-2015.json"`. You should load this data into a Python list of dictionaries using the `json` package. After you load this data, loop over the list of weather items and record whether or not the `'Mean TemperatureF'` is above or below freezing. Tally this information into a separate Python dictionary, called `stats`, so you can print it out like this:

```{'below-freezing': 4, 'above-freezing': 27}```

Step 1: Problem Analysis

This program should load the December 2015 weather records from the JSON file and count how many days had a mean temperature above or below freezing.

Inputs: the `NYC4-syr-weather-dec-2015.json` file of Syracuse weather for December 2015.

Outputs: the `stats` dictionary with the counts of above-freezing and below-freezing days.

Algorithm (Steps in Program):
1. Load the JSON file into a Python list of dictionaries.
2. Loop over the list and compare each day's `'Mean TemperatureF'` to 32.
3. Increment the matching counter in `stats`.
4. Print `stats`. | # Step 2: Write code
import json

def load_weather_data():
    # Read the JSON file and parse it into a Python list of dictionaries
    with open('NYC4-syr-weather-dec-2015.json') as f:
        weather = json.load(f)
    return weather

def extract_weather_info(day):
    # Pull the field of interest out of a single day's weather record
    info = {}
    info['mean temp'] = day['Mean TemperatureF']
    return info

weather = load_weather_data()
print(weather[0]) | _____no_output_____ | MIT | content/lessons/10/Now-You-Code/NYC4-Syracuse-Weather.ipynb | jferna22-su/ist256 |
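A minimal sketch of the tallying step described in the problem statement above — it assumes every record has a numeric `'Mean TemperatureF'` value and counts 32 °F and warmer as above freezing:

stats = {'below-freezing': 0, 'above-freezing': 0}
for day in load_weather_data():
    if day['Mean TemperatureF'] < 32:
        stats['below-freezing'] += 1
    else:
        stats['above-freezing'] += 1
print(stats)  # the problem statement shows {'below-freezing': 4, 'above-freezing': 27}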
Problem statement

We will consider several linear regression models to determine the most suitable one for the first 20 buildings.

Data:
* http://video.ittensive.com/machine-learning/ashrae/building_metadata.csv.gz
* http://video.ittensive.com/machine-learning/ashrae/weather_train.csv.gz
* http://video.ittensive.com/machine-learning/ashrae/train.0.csv.gz

Competition: https://www.kaggle.com/c/ashrae-energy-prediction/

© ITtensive, 2020 | import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar as calendar
import numpy as np
from scipy.interpolate import interp1d
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Lasso, Ridge, ElasticNet, BayesianRidge
def reduce_mem_usage (df):
start_mem = df.memory_usage().sum() / 1024**2
for col in df.columns:
col_type = df[col].dtypes
if str(col_type)[:5] == "float":
c_min = df[col].min()
c_max = df[col].max()
if c_min > np.finfo("f2").min and c_max < np.finfo("f2").max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo("f4").min and c_max < np.finfo("f4").max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
elif str(col_type)[:3] == "int":
c_min = df[col].min()
c_max = df[col].max()
if c_min > np.iinfo("i1").min and c_max < np.iinfo("i1").max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo("i2").min and c_max < np.iinfo("i2").max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo("i4").min and c_max < np.iinfo("i4").max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo("i8").min and c_max < np.iinfo("i8").max:
df[col] = df[col].astype(np.int64)
elif col == "timestamp":
df[col] = pd.to_datetime(df[col])
elif str(col_type)[:8] != "datetime":
df[col] = df[col].astype("category")
end_mem = df.memory_usage().sum() / 1024**2
    print('Memory usage reduced by', round(start_mem - end_mem, 2), 'MB (minus', round(100 * (start_mem - end_mem) / start_mem, 1), '%)')
return df
buildings = pd.read_csv("http://video.ittensive.com/machine-learning/ashrae/building_metadata.csv.gz")
weather = pd.read_csv("http://video.ittensive.com/machine-learning/ashrae/weather_train.csv.gz")
energy = pd.read_csv("http://video.ittensive.com/machine-learning/ashrae/train.0.csv.gz")
energy = energy[(energy["building_id"]<20)]
energy = pd.merge(left=energy, right=buildings, how="left",
left_on="building_id", right_on="building_id")
energy = energy.set_index(["timestamp", "site_id"])
weather = weather.set_index(["timestamp", "site_id"])
energy = pd.merge(left=energy, right=weather, how="left",
left_index=True, right_index=True)
energy.reset_index(inplace=True)
energy = energy.drop(columns=["meter", "site_id", "year_built",
"square_feet", "floor_count"], axis=1)
del buildings
del weather
energy = reduce_mem_usage(energy)
print (energy.info())
energy["hour"] = energy["timestamp"].dt.hour.astype("int8")
energy["weekday"] = energy["timestamp"].dt.weekday.astype("int8")
for weekday in range(0,7):
energy['is_wday' + str(weekday)] = energy['weekday'].isin([weekday]).astype("int8")
energy["date"] = pd.to_datetime(energy["timestamp"].dt.date)
dates_range = pd.date_range(start='2015-12-31', end='2017-01-01')
us_holidays = calendar().holidays(start=dates_range.min(),
end=dates_range.max())
energy['is_holiday'] = energy['date'].isin(us_holidays).astype("int8")
energy["meter_reading_log"] = np.log(energy["meter_reading"] + 1)
energy_train,energy_test = train_test_split(energy[energy['meter_reading'] > 0],test_size=0.2)
from sklearn.metrics import *
hours = range(0,24)
buildings = range(0,energy_train['building_id'].max() + 1)
lr_columns = ['meter_reading_log','hour','building_id','is_holiday']
for wday in range(0,7):
lr_columns.append('is_wday' + str(wday))
| _____no_output_____ | MIT | ASHRAE/competitive_reg_models.ipynb | Costigun/kaggle_practice |
Linear regression
\begin{equation}z = Ax + By + C, |z-z_0|^2 \rightarrow \min\end{equation}
Lasso (and LARS Lasso)
\begin{equation}\frac{1}{2n}|z-z_0|^2 + a(|A|+|B|) \rightarrow \min\end{equation}
Ridge regression
\begin{equation}|z-z_0|^2 + a(A^2 + B^2) \rightarrow \min\end{equation}
ElasticNet: Lasso + Ridge regression
\begin{equation}\frac{1}{2n}|z-z_0|^2 + \alpha p|A^2+B^2| + (\alpha - p)(|A|+|B|)/2 \rightarrow \min\end{equation} | lr_models = {
"LinearRegression":LinearRegression,
"Lasso-0.01":Lasso,
"Lasso-0.1":Lasso,
"Lasso-1.0":Lasso,
"Ridge-0.01":Ridge,
"Ridge-0.1":Ridge,
"Ridge-1.0":Ridge,
"ELasticNet-1-1":ElasticNet,
"ELasticNet-0.1-1":ElasticNet,
"ELasticNet-1-0.1":ElasticNet,
"ELasticNet-0.1-0.1":ElasticNet,
"BayesianRidge":BayesianRidge
}
energy_train_lr = pd.DataFrame(energy_train,columns=lr_columns)
lr_models_scores = {}
for _ in lr_models:
lr_model = lr_models[_]
energy_lr_scores = [[]] * len(buildings)
for building in buildings:
energy_lr_scores[building] = [0] * len(hours)
energy_train_b = energy_train_lr[energy_train_lr['building_id'] == building]
for hour in hours:
energy_train_bh = energy_train_b[energy_train_b['hour'] == hour]
y = energy_train_bh['meter_reading_log']
x = energy_train_bh.drop(['meter_reading_log','hour','building_id'],axis=1)
if _ in ['Ridge-0.1','Lasso-0.1']:
model = lr_model(alpha=0.1,fit_intercept=False).fit(x,y)
elif _ in ['Ridge-0.01','Lasso-0.01']:
model = lr_model(alpha=0.01,fit_intercept=False).fit(x,y)
elif _ == 'ElasticNet-1-1':
model = lr_model(alpha=1,l1_ratio=1,fit_intercept=False).fit(x,y)
elif _ == 'ElasticNet-1-0.1':
model = lr_model(alpha=1,l1_ratio=0.1,fit_intercept=False).fit(x,y)
elif _ == 'ElasticNet-0.1-1':
model = lr_model(alpha=0.1,l1_ratio=1,fit_intercept=False).fit(x,y)
elif _ == 'ElasticNet-0.1-0.1':
model = lr_model(alpha=0.1,l1_ratio=0.1,fit_intercept=False).fit(x,y)
else:
model = lr_model(fit_intercept=False).fit(x,y)
energy_lr_scores[building][hour] = r2_score(y,model.predict(x))
lr_models_scores[_] = np.mean(energy_lr_scores)
print(lr_models_scores)
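# Illustrative sketch: one possible way to read off the "most suitable model" asked for
# in the problem statement is to take the entry of lr_models_scores with the highest
# mean R^2 across buildings and hours.
best_model_name = max(lr_models_scores, key=lr_models_scores.get)
print('Best model by mean R2:', best_model_name, round(lr_models_scores[best_model_name], 4))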
energy_lr = []
energy_ridge = []
energy_br = []
for building in buildings:
energy_lr.append([])
energy_ridge.append([])
energy_br.append([])
energy_train_b = energy_train_lr[energy_train_lr['building_id'] == building]
for hour in hours:
energy_lr[building].append([0] * (len(lr_columns)-3))
energy_ridge[building].append([0] * (len(lr_columns)-3))
energy_br[building].append([0] * (len(lr_columns)-3))
energy_train_bh = energy_train_b[energy_train_b['hour'] == hour]
y = energy_train_bh['meter_reading_log']
if len(y) > 0:
x = energy_train_bh.drop(['meter_reading_log','hour','building_id'],axis=1)
model = LinearRegression(fit_intercept=False).fit(x,y)
energy_lr[building][hour] = model.coef_
model = Ridge(alpha=0.01,fit_intercept=False).fit(x,y)
energy_ridge[building][hour] = model.coef_
model = BayesianRidge(fit_intercept=False).fit(x,y)
energy_br[building][hour] = model.coef_
print(energy_lr[0][0])
print(energy_ridge[0][0])
print(energy_br[0][0]) | [-0.05204313 5.44504565 5.41921165 5.47881611 5.41753305 5.43838778
5.45137392 5.44059806]
[-0.04938976 5.44244413 5.41674949 5.47670968 5.41516617 5.43591691
5.44949479 5.43872264]
[-0.05138182 5.44439819 5.41859905 5.47829205 5.41694412 5.43777302
5.45090643 5.44013149]
| MIT | ASHRAE/competitive_reg_models.ipynb | Costigun/kaggle_practice |
import matplotlib.pyplot as mpl
import matplotlib.ticker as plticker
import numpy as np
from scipy.optimize import minimize_scalar
import pathlib
if not pathlib.Path("mpl_utils.py").exists():
!curl -O https://raw.githubusercontent.com/joaochenriques/MCTE_2022/main/libs/mpl_utils.py &> /dev/null
import mpl_utils as mut
mut.config_plots()
%config InlineBackend.figure_formats = ['svg']
try:
from tqdm.notebook import tqdm
except ModuleNotFoundError:
!pip install tdqm
from tqdm.notebook import tqdm
from IPython.display import Markdown, display
def printmd(string):
display(Markdown(string)) | _____no_output_____ | MIT | ChannelFlows/Simulation/ChannelFlowSimulation.ipynb | joaochenriques/MCTE_2022 |
|
**Setup the problem** | ρw = 1025 # [kg/m³] salt water density
g = 9.8 # [m/s²] gravity aceleration
T = 12.0*3600.0 + 25.2*60.0 # [s] tide period
L = 20000 # [m] channel length
h = 60 # [m] channel depth
b = 4000 # [m] channel width
a = 1.2 # [m] tidal amplitude
S = h*b # [m²] channel area
twopi = 2*np.pi
ω = twopi / T # [rad/s] tidal frequency
Q0 = g*a*S / (ω*L) # [-] frictionless channel volumetric flow rate
qr = S * np.sqrt(g*h) # flow rate based on wave velocity
Cd = 0.005 # [-] friction coefficient
f = 2*Cd # [-] friction coefficient used in the model is twice the value
# usual used in tidal (non standard model)
Fr_0 = Q0 / ( S * np.sqrt( g * h ) )
Θ_T_star = ( 0.5 / S**2 ) * Q0**2 / ( g * a )
Θ_f_star = Θ_T_star * ( f * L / h )
printmd( "$\mathrm{Fr}_0 = %.3f$" % Fr_0 )
printmd( "$\Theta_\mathrm{f}^* = %.3f$" % Θ_f_star )
printmd( "$\Theta_\mathrm{T}^* = %.3f$" % Θ_T_star )
def local_CT_and_CP( Fr4b, Fr1, B ):
# See Chapter 3 of the MCTE Lecture notes
ζ4 = (1/2.)*Fr1**2 - 1/2.*Fr4b**2 + 1.0
Fr4t = (Fr1 - Fr4b*ζ4 + np.sqrt(B**2*Fr4b**2 - 2*B*Fr1**2 + 2*B*Fr1*Fr4b \
+ B*ζ4**2 - B + Fr1**2 - 2*Fr1*Fr4b*ζ4 + Fr4b**2*ζ4**2))/B
ζ4b = (Fr1 - Fr4t*ζ4)/(Fr4b - Fr4t)
ζ4t = -(Fr1 - Fr4b*ζ4)/(Fr4b - Fr4t)
Fr2t = Fr4t*ζ4t/B
C_T = (Fr4b**2 - Fr4t**2)/Fr1**2
C_P = C_T*Fr2t/Fr1
return C_T, C_P
def find_minus_CP( Fr4b, Fr1, B ):
# function created to discard the C_T when calling "local_CT_and_CP"
C_T, C_P = local_CT_and_CP( Fr4b, Fr1, B )
return -C_P # Minus C_P to allow minimization
def compute_BCT_BCP( Fr_0, B, Q_star ):
Fr1 = np.abs( Fr_0 * Q_star )
if Fr1 < 1E-3:
return 0.0, 0.0 # all zeros
# find the optimal C_P for the channel conditions
res = minimize_scalar( find_minus_CP, args=(Fr1, B), bounds=[0,1],
method='bounded',
options={ 'xatol': 1e-08, 'maxiter': 500, 'disp': 1 } )
Fr4b = res.x # optimal value
C_T, C_P = local_CT_and_CP( Fr4b, Fr1, B )
return B*C_T, B*C_P | _____no_output_____ | MIT | ChannelFlows/Simulation/ChannelFlowSimulation.ipynb | joaochenriques/MCTE_2022 |
**Solution of the ODE**$\displaystyle \frac{dQ^*}{dt^*}=\cos(t^*) - (\Theta_\text{f}^*+BC_\text{T} \Theta_\text{T}^*) \, Q^* \, |Q^*|$$\displaystyle \frac{d E_\text{T}^*}{dt^*}= BC_\text{P} \, |{Q^*}^3|$where $B$, $\Theta_\text{f}^*$ and $\Theta_\text{T}^*$ are constants, and $C_\text{T}$ and $C_\text{P}$ are computed as a function of the local Froude number.This system can be writen as$$\dfrac{d \mathbf{y}^*}{dt^*} = \mathbf{f}^*\!\!\left( \mathbf{y}^*, t^* \right),$$with$$\mathbf{y} = \begin{pmatrix}Q^*\\E_\text{T}^*\end{pmatrix}\tag{Eq. 1}$$and$$\tag{Eq. 2}\mathbf{f}^* = \begin{pmatrix}\cos(t^*) - (\Theta_\text{f}^*+BC_T \Theta_\text{T}^*) \, Q^* |Q^*|\\[4pt]BC_P \, |{Q^*}^3|\end{pmatrix}$$We adopt a first order solution of the type$$\dfrac{\mathbf{y}^*(t_n^*+\Delta t^*)-\mathbf{y}^*(t_n^*)}{\Delta t^*} = \mathbf{f}^*\bigg( t_n^*, \mathbf{y}^*\left(t_n^*\right) \bigg)$$resulting$$\mathbf{y}^*_{n+1} = \mathbf{y}^*_n + \Delta t^* \, \mathbf{f}^*\!\!\left( t^*_n,\mathbf{y}^*_n \right)\tag{Eq. 3}$$where$$\mathbf{y}^*_{n}=\mathbf{y}^*(t_n^*)$$$$\mathbf{y}^*_{n+1}=\mathbf{y}^*(t_n^*+\Delta t^*)$$ Define RHS of the ODE, see Eq. (2) | def f_star( ys, ts, Θ_f_star, Θ_T_star, Fr_0, B_rows ):
( Q_star, E_star ) = ys
BC_T_rows = np.zeros( len( B_rows ) )
BC_P_rows = np.zeros( len( B_rows ) )
B_0 = np.nan
for j, B in enumerate( B_rows ):
# do not repeat the computations if B is equal to the previous iteration
if B_0 != B:
BC_T_j, BC_P_j = compute_BCT_BCP( Fr_0, B, Q_star )
B_0 = B
BC_T_rows[j] = BC_T_j
BC_P_rows[j] = BC_P_j
return np.array(
( np.cos( ts ) - ( Θ_f_star + np.sum(BC_T_rows) * Θ_T_star ) * Q_star * np.abs( Q_star ),
np.sum(BC_P_rows) * np.abs( Q_star )**3 )
) | _____no_output_____ | MIT | ChannelFlows/Simulation/ChannelFlowSimulation.ipynb | joaochenriques/MCTE_2022 |
**Solution with channel bed friction and turbine thrust** | periods = 4
ppp = 100 # points per period
num = int(ppp*periods)
# stores time vector
ts_vec = np.linspace( 0, (2*np.pi) * periods, num )
Delta_ts = ts_vec[1] - ts_vec[0]
# vector that stores the lossless solution time series
ys_lossless_vec = np.zeros( ( num, 2 ) )
# solution of (Eq. 3) without "friction" term
for i, ts in tqdm( enumerate( ts_vec[1:] ) ):
ys_lossless_vec[i+1] = ys_lossless_vec[i] + \
Delta_ts * f_star( ys_lossless_vec[i], ts, 0, 0, 0, [0.0] ) | _____no_output_____ | MIT | ChannelFlows/Simulation/ChannelFlowSimulation.ipynb | joaochenriques/MCTE_2022 |
The blockage factor per turbine row $i$ is$$B_i=\displaystyle \frac{\left( n_\text{T} A_\text{T}\right)_i}{S_i}$$where $\left( n_\text{T} A_\text{T}\right)_i$ is the area of all turbines of row $i$, and $S_i$ is the cross-sectional area of the channel at section $i$. | fig, (ax1, ax2) = mpl.subplots(1,2, figsize=(12, 4.5) )
fig.subplots_adjust( wspace = 0.17 )
B_local = 0.1
n_step = 18
for n_mult in tqdm( ( 0, 1, 2, 4, 8, 16 ) ):
n_rows = n_step * n_mult
B_rows = [B_local] * n_rows
# vector that stores the solution time series
ys_vec = np.zeros( ( num, 2 ) )
# solution of (Eq. 3) with "friction" terms
for i, ts in tqdm( enumerate( ts_vec[1:] ) ):
ys_vec[i+1] = ys_vec[i] + \
Delta_ts * f_star( ys_vec[i], ts, \
Θ_f_star, Θ_T_star, Fr_0,\
B_rows )
ax1.plot( ts_vec/twopi, ys_vec[:,0] )
ax2.plot( ts_vec/twopi, ys_vec[:,1], label="$n_\mathrm{rows}=%i$" % (n_rows) )
ax1.plot( ts_vec/twopi, ys_lossless_vec[:,0], label="frictionless" )
ax1.grid()
ax1.set_title( "$B_i = %4.2f$" % B_local )
ax1.set_xlim( ( 0, 4 ) )
ax1.set_ylim( ( -1.1, 1.1 ) )
ax1.set_xlabel( '$t^*\!/\,(2\pi)$ [-]')
ax1.set_ylabel( '$Q^*$ [-]')
# ax1.legend( loc='lower left', fontsize=12)
ax1.text(-0.15, 1.05, 'a)', transform=ax1.transAxes, size=16, weight='semibold')
ax2.plot( np.nan, np.nan, label="frictionless" )
ax2.grid()
ax2.set_title( "$B_i = %4.2f$" % B_local )
ax2.set_xlim( ( 0, 4 ) )
ax2.set_xlabel( '$t^*\!/\,(2\pi)$ [-]')
ax2.set_ylabel( '$E_\mathrm{T}^*$ [-]')
ax2.legend( loc='upper left', fontsize=14, handlelength=2.9,labelspacing=0.25)
ax2.text(-0.15, 1.05, 'b)', transform=ax2.transAxes, size=16, weight='semibold');
mpl.savefig( 'Friction_model.pdf', bbox_inches='tight', pad_inches=0.02); | _____no_output_____ | MIT | ChannelFlows/Simulation/ChannelFlowSimulation.ipynb | joaochenriques/MCTE_2022 |
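To get a feel for what the local blockage $B_i = 0.1$ used above means in hardware terms, a small back-of-the-envelope sketch using the channel cross-section $S = h\,b$ from the setup and an assumed 20 m rotor diameter (the diameter is an assumption, not a value from the notebook):

D_T = 20.0                   # [m] assumed rotor diameter (not specified in the notebook)
A_T = np.pi / 4.0 * D_T**2   # [m²] swept area of one turbine
n_T = B_local * S / A_T      # turbines per row needed so that B_i = (n_T A_T) / S
print( "Turbines per row for B_i = %.2f: about %.0f" % (B_local, n_T) )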
**Plot the solution as a function of the number of turbines** | n_rows_lst = range( 0, 512+1, 8 ) # number of turbines [-]
Ps_lst = []
B_local = 0.1
ys1_vec = np.zeros( ( num, 2 ) )
for n_rows in tqdm( n_rows_lst ):
B_rows = [B_local]*n_rows
# solution of (Eq. 3) with "friction" terms
# the initial conditions are always (0,0)
for i, ts in enumerate( ts_vec[1:] ):
ys1_vec[i+1] = ys1_vec[i] + \
Delta_ts * f_star( ys1_vec[i], ts, \
Θ_f_star, Θ_T_star, Fr_0,\
B_rows )
# last value of the last period minus the first value of the last period
Ps = ( ys1_vec[-1,1] - ys1_vec[-ppp,1] )/ (2*np.pi)
Ps_lst.append( Ps )
mpl.plot( n_rows_lst, Ps_lst )
mpl.xlim( (0,500) )
mpl.title( "$B_i = %4.2f$" % B_local )
mpl.xlabel( r"number of rows, $n_\mathrm{rows}$")
mpl.ylabel( r"$P_\mathrm{T}^*$")
mpl.grid()
mpl.savefig( 'Friction_model_Power_nTurbines.pdf', bbox_inches='tight', pad_inches=0.02);
| _____no_output_____ | MIT | ChannelFlows/Simulation/ChannelFlowSimulation.ipynb | joaochenriques/MCTE_2022 |
**TOOLS FOR DEMOGRAPHY** Token and Drive | # Token for GEE (Google Earth Engine)
import ee
from google.colab import auth
auth.authenticate_user()
ee.Authenticate()
ee.Initialize()
# Link Google Drive
from google.colab import drive
drive.mount('/content/drive') | Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
| MIT | Phyton/Manejo_Datos.ipynb | jcms2665/100-Days-Of-ML-Code |
Database Management--- | # Install packages
!pip install pyreadstat
!pip install simpledbf
# Load packages
import os # Directories
import csv
import matplotlib.pyplot as plt
import numpy as np # Arrays
import pandas as pd # Data frames
import pyreadstat # Read SPSS/Stata files
os.getcwd()
a="/content/drive/MyDrive/28 Bases/TMODULO.csv"
inegi=pd.read_csv(a)
print('Records imported:', len(inegi))
pd.crosstab(inegi.SEXO, inegi.P6_20, inegi.FAC_PER, aggfunc = sum)
inegi.SEXO
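# Illustrative sketch: the expansion factor FAC_PER used in the weighted crosstab above
# can also weight a single variable (same data, same columns).
inegi.groupby('SEXO')['FAC_PER'].sum()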
| _____no_output_____ | MIT | Phyton/Manejo_Datos.ipynb | jcms2665/100-Days-Of-ML-Code |
Table of Contents
1 Exploratory data analysis
1.1 Describe data
1.1.1 Sample size
1.1.2 Descriptive statistics
1.1.3 Shapiro-Wilk Test
1.1.4 Histograms
1.2 Kendall's Tau correlation
1.3 Correlation Heatmap | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import shapiro, kendalltau
from sklearn import linear_model
import statsmodels.api as sm
df = pd.read_csv('data/cleaned_data_gca.csv') | _____no_output_____ | MIT | 210601 gca data analyses.ipynb | rbnjd/gca_data_analyses |
Exploratory data analysis Describe data Sample size | print('Sample size socio-demographics =', df[df.columns[0]].count())
print('Sample size psychological variables =', df[df.columns[4]].count()) | Sample size socio-demographics = 33
Sample size psychological variables = 34
| MIT | 210601 gca data analyses.ipynb | rbnjd/gca_data_analyses |
Descriptive statistics **Descriptive statistics for numeric data** | descriptive_stat = df.describe()
descriptive_stat = descriptive_stat.T
descriptive_stat['skew'] = df.skew()
descriptive_stat['kurtosis'] = df.kurt()
descriptive_stat.insert(loc=5, column='median', value=df.median())
descriptive_stat=descriptive_stat.apply(pd.to_numeric, errors='ignore')
descriptive_stat | _____no_output_____ | MIT | 210601 gca data analyses.ipynb | rbnjd/gca_data_analyses |
**Descriptive statistics for categorical data** | for col in list(df[['gender','education level']]):
print('variable:', col)
print(df[col].value_counts(dropna=False).to_string())
print('') | variable: gender
Männlich 18
Weiblich 14
Divers 1
NaN 1
variable: education level
Hochschulabschluss 16
Abitur 8
derzeit noch Schüler\*in 5
derzeit noch Schüler/*in 3
Fachhochschulabschluss 1
NaN 1
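The imports above bring in `shapiro` and `kendalltau` for the test and correlation steps listed in the table of contents; a minimal sketch of those steps, assuming `df` has at least two numeric columns:

numeric_cols = df.select_dtypes(include=np.number).columns
for col in numeric_cols:
    stat, p = shapiro(df[col].dropna())
    print(f'{col}: Shapiro-Wilk W={stat:.3f}, p={p:.3f}')

tau, p = kendalltau(df[numeric_cols[0]], df[numeric_cols[1]], nan_policy='omit')
print(f"Kendall's tau between {numeric_cols[0]} and {numeric_cols[1]}: {tau:.3f} (p={p:.3f})")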
| MIT | 210601 gca data analyses.ipynb | rbnjd/gca_data_analyses |