Dataset columns: markdown, code, output, license, path, repo_name
The deck class also supports iteration:
for card in deck:
    print(card)

# reverse iteration
for card in reversed(deck):
    print(card)
_____no_output_____
MIT
01-data-model/01-data-model.ipynb
yuechuanx/fluent-python-code-and-notes
Iteration is often implicit: if a collection does not implement the `__contains__` method, the `in` operator falls back to a sequential, iterative search over the collection.
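A minimal sketch of that fallback (a hypothetical example, not part of the original notebook): a class that defines only `__getitem__`, with no `__contains__` or `__iter__`, still supports `in`, because Python indexes it sequentially from 0 until it finds the item or hits an IndexError.

# hypothetical illustration of the `in` fallback
class Shelf:
    def __init__(self, items):
        self._items = list(items)

    def __getitem__(self, position):
        return self._items[position]

shelf = Shelf(['spam', 'eggs'])
print('eggs' in shelf)   # True  -- found by a sequential scan via __getitem__
print('bacon' in shelf)  # False -- the whole collection was scanned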
Card('Q', 'hearts') in deck
Card('7', 'beasts') in deck
_____no_output_____
MIT
01-data-model/01-data-model.ipynb
yuechuanx/fluent-python-code-and-notes
Sorting the deck. Ordering rule: 2 is the lowest rank and A the highest; for suits, spades > hearts > diamonds > clubs.
card.rank

suit_values = dict(spades=3, hearts=2, diamonds=1, clubs=0)

def spades_high(card):
    rank_value = FrenchDeck.ranks.index(card.rank)
    return rank_value * len(suit_values) + suit_values[card.suit]

for card in sorted(deck, key=spades_high):
    print(card)
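As a worked check of the formula (assuming the standard `FrenchDeck.ranks` ordering 2–10, J, Q, K, A used elsewhere in this notebook): the ace of spades has `rank_value = 12` and suit value 3, so `spades_high` returns 12 * 4 + 3 = 51, the highest possible score, while the two of clubs gets 0 * 4 + 0 = 0, the lowest.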
_____no_output_____
MIT
01-data-model/01-data-model.ipynb
yuechuanx/fluent-python-code-and-notes
FrenchDeck inherits from `object`. By implementing `__len__` and `__getitem__`, FrenchDeck behaves just like Python's own sequence types, so it gets core language features such as iteration and slicing for free. All of the special methods Python supports are listed in the [Data Model](https://docs.python.org/3/reference/datamodel.html) section of the Python documentation. One important point: do not think of `len`, `str`, and the like as ordinary Python methods. Because these operations are used so frequently, Python gives them special treatment: the built-in data structures get a "back door" for efficiency, while user-defined data structures can still be driven through the same generic interface. From the code author's point of view, `len(deck)` and `len([1,2,3])`, two operations whose implementations may be wildly different, are completely uniform at the level of Python syntax.

How to use special methods
* Special methods exist to be called by the Python interpreter.
* Unless you are doing a lot of metaprogramming, your code normally has no need to invoke special methods directly.
* Calling them through the built-in functions is the best choice.

Emulating numeric types: implement a two-dimensional vector (Vector) class. ![image.png](attachment:image.png)
from math import hypot

class Vector:

    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def __repr__(self):
        return 'Vector(%r, %r)' % (self.x, self.y)

    def __abs__(self):
        return hypot(self.x, self.y)

    def __bool__(self):
        return bool(abs(self))

    def __add__(self, other):
        x = self.x + other.x
        y = self.y + other.y
        return Vector(x, y)

    def __mul__(self, scalar):
        return Vector(self.x * scalar, self.y * scalar)

# use the + operator
v1 = Vector(2, 4)
v2 = Vector(2, 1)
v1 + v2

# call the abs built-in
v = Vector(3, 4)
abs(v)

# use the * operator
v * 3
_____no_output_____
MIT
01-data-model/01-data-model.ipynb
yuechuanx/fluent-python-code-and-notes
The JSON data is fetched and stored in numpy arrays as sequences of 45 frames, which is about 1.5 seconds of video [2]. 60% of the dataset has been used for training, 20% for testing and 20% for validation. The training data has 7,989 sequences of 45 frames, each containing the 2D coordinates of the 18 keypoints captured by OpenPose. The validation data consists of 2,224 such sequences and the test data contains 2,598 sequences. The counts deviate slightly from an exact 60/20/20 ratio because the split was done at the video level and the videos differ in duration.
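The code below loads CSVs that were already split. As a rough sketch of how a 60/20/20 split at the video level could be reproduced (hypothetical code; `sequences`, `labels`, and `video_ids` are assumed numpy arrays where `video_ids[i]` names the source video of sequence `i`):

# hypothetical sketch: group-aware 60/20/20 split so no video leaks across splits
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def split_by_video(sequences, labels, video_ids, seed=0):
    # 60% of videos for training, 40% held back
    gss = GroupShuffleSplit(n_splits=1, test_size=0.4, random_state=seed)
    train_idx, rest_idx = next(gss.split(sequences, labels, groups=video_ids))
    # split the held-back videos evenly into validation and test
    gss2 = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=seed)
    val_rel, test_rel = next(gss2.split(sequences[rest_idx], labels[rest_idx],
                                        groups=video_ids[rest_idx]))
    return train_idx, rest_idx[val_rel], rest_idx[test_rel]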
import torch
import numpy as np
import pandas as pd
import ast
from sklearn.model_selection import train_test_split
from torch.utils.data import DataLoader, TensorDataset
from os import listdir

# check if CUDA is available
train_on_gpu = torch.cuda.is_available()

if not train_on_gpu:
    print('CUDA is not available.  Training on CPU ...')
else:
    print('CUDA is available!  Training on GPU ...')

batch_size = 5

filepaths = [str("./20220201") + "/" + str(f) for f in listdir("./20220201/") if f.endswith('.csv')]
data = pd.concat(map(pd.read_csv, filepaths))
data.drop(data.columns[0], axis=1, inplace=True)

y = data[['1']]
x = data.drop(['1', '2'], axis=1)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)

# parse the stringified keypoint arrays and keep only the (x, y) rows
X = []
for i in x_train.values:
    X.append(np.array(ast.literal_eval(i[0]))[0].T.astype(int))
x_train = np.array(X)
x_train = x_train[:, :2, :]

X_t = []
for i in x_test.values:
    X_t.append(np.array(ast.literal_eval(i[0]))[0].T.astype(int))
x_test = np.array(X_t)
x_test = x_test[:, :2, :]

train_data = TensorDataset(torch.tensor(np.array(x_train), dtype=torch.float),
                           torch.tensor(np.array(y_train).squeeze(), dtype=torch.long))
train_loader = DataLoader(train_data, batch_size=5, shuffle=True)

valid_data = TensorDataset(torch.tensor(np.array(x_test), dtype=torch.float),
                           torch.tensor(np.array(y_test).squeeze(), dtype=torch.long))
valid_loader = DataLoader(valid_data, batch_size=5, shuffle=True)

classes = ['laying', 'setting', 'standing']

x_train[0]
_____no_output_____
Apache-2.0
pose_classification/pose_classification_lstm.ipynb
julkar9/deep_learning_nano_degree
---

![title](rnn_lstm_00.png)
![title](rnn_lstm_0.png)
![title](cnn_lstm.png)

Define model structure
from torch import nn import torch.nn.functional as F class TimeDistributed(nn.Module): def __init__(self, module, batch_first=True): super(TimeDistributed, self).__init__() self.module = module self.batch_first = batch_first def forward(self, x): if len(x.size()) <= 2: return self.module(x) # Squash samples and timesteps into a single axis x_reshape = x.contiguous().view(-1, x.size(-1)) # (samples * timesteps, input_size) y = self.module(x_reshape) # We have to reshape Y if self.batch_first: y = y.contiguous().view(x.size(0), -1, y.size(-1)) # (samples, timesteps, output_size) else: y = y.view(-1, x.size(1), y.size(-1)) # (timesteps, samples, output_size) return y class CNN(nn.Module): def __init__(self): super(CNN, self).__init__() # convolutional layer (sees 28x28x1 image tensor) # (W−F+2P)/S+1 = (28 - 3 )/1 + 1 = 26 self.conv1 = nn.Conv1d(2, 16, kernel_size=3, padding=1) self.bnm = nn.BatchNorm1d(16, momentum=0.1) # (W−F+2P)/S+1 = (13 - 3 )/1 + 1 = 11 self.conv2 = nn.Conv1d(16, 32, kernel_size=3 , padding=1) self.bnm2 = nn.BatchNorm1d(32, momentum=0.1) self.conv3 = nn.Conv1d(32, 64, kernel_size=3 , padding=1) self.bnm3 = nn.BatchNorm1d(64, momentum=0.1) self.conv4 = nn.Conv1d(64, 128, kernel_size=3 , padding=1) self.bnm4 = nn.BatchNorm1d(128, momentum=0.1) self.dropout = nn.Dropout(p=0.2) def forward(self, x): x = F.relu(self.dropout(self.bnm(self.conv1(x)))) # print("conv1", x.size()) x = F.relu(self.dropout(self.bnm2(self.conv2(x)))) # print("conv2", x.size()) x = F.relu(self.dropout(self.bnm3(self.conv3(x)))) # print("conv3", x.size()) x = F.relu(self.dropout(self.bnm4(self.conv4(x)))) # print("conv4", x.size()) x = x.view(-1, 2176) return x class Combine(nn.Module): def __init__(self): super(Combine, self).__init__() self.cnn = CNN() self.rnn = nn.LSTM( input_size=128, hidden_size=64, num_layers=1, batch_first=True) self.linear = nn.Linear(64,3) def forward(self, x): batch_size, C, timesteps = x.size() # x = x.view(-1, C, timesteps) c_out = self.cnn(x) r_in = c_out.view(batch_size, timesteps, -1) r_out, (h_n, h_c) = self.rnn(r_in) r_out2 = self.linear(r_out[:, -1, :]) return F.log_softmax(r_out2, dim=1) # define model # model = Sequential() # model.add(TimeDistributed(Conv1D(filters=64, kernel_size=3, activation='relu'), input_shape=(None,n_length,n_features))) # model.add(TimeDistributed(Conv1D(filters=64, kernel_size=3, activation='relu'))) # model.add(TimeDistributed(Dropout(0.5))) # model.add(TimeDistributed(MaxPooling1D(pool_size=2))) # model.add(TimeDistributed(Flatten())) # model.add(LSTM(100)) # model.add(Dropout(0.5)) # model.add(Dense(100, activation='relu')) # model.add(Dense(n_outputs, activation='softmax'))
_____no_output_____
Apache-2.0
pose_classification/pose_classification_lstm.ipynb
julkar9/deep_learning_nano_degree
Train the model
import torch.optim as optim model = Combine() if train_on_gpu: model.cuda() criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(model.parameters(), lr=0.01) # number of epochs to train the model n_epochs = 20 valid_loss_min = np.Inf # track change in validation loss for epoch in range(1, n_epochs+1): # keep track of training and validation loss train_loss = 0.0 valid_loss = 0.0 ################### # train the model # ################### model.train() for data, target in train_loader: # print(target) # print(target) # move tensors to GPU if CUDA is available if train_on_gpu: data, target = data.cuda(), target.cuda() # clear the gradients of all optimized variables optimizer.zero_grad() # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # print(output) # calculate the batch loss loss = criterion(output, target) # backward pass: compute gradient of the loss with respect to model parameters loss.backward() # perform a single optimization step (parameter update) optimizer.step() # update training loss train_loss += loss.item()*data.size(0) ###################### # validate the model # ###################### model.eval() for data, target in valid_loader: # move tensors to GPU if CUDA is available if train_on_gpu: data, target = data.cuda(), target.cuda() # forward pass: compute predicted outputs by passing inputs to the model # print(data.shape) output = model(data) # calculate the batch loss loss = criterion(output, target) # update average validation loss valid_loss += loss.item()*data.size(0) # calculate average losses train_loss = train_loss/len(train_loader.sampler) valid_loss = valid_loss/len(valid_loader.sampler) # print training/validation statistics print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format( epoch, train_loss, valid_loss)) # save model if validation loss has decreased if valid_loss <= valid_loss_min: print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(valid_loss_min,valid_loss)) torch.save(model.state_dict(), 'model_cifar.pt') valid_loss_min = valid_loss
_____no_output_____
Apache-2.0
pose_classification/pose_classification_lstm.ipynb
julkar9/deep_learning_nano_degree
---

Test your model
model.load_state_dict(torch.load('model_cifar.pt')) model import matplotlib.pyplot as plt from sklearn.metrics import confusion_matrix import seaborn as sn import pandas as pd # track test loss test_loss = 0.0 class_correct = list(0. for i in range(3)) class_total = list(0. for i in range(3)) y_pred = [] y_true = [] model.eval() # iterate over test data for data, target in valid_loader: # move tensors to GPU if CUDA is available if train_on_gpu: data, target = data.cuda(), target.cuda() # forward pass: compute predicted outputs by passing inputs to the model output = model(data) y_true.extend(target.cpu()) # Save Truth # calculate the batch loss loss = criterion(output, target) # update test loss test_loss += loss.item()*data.size(0) # convert output probabilities to predicted class _, pred = torch.max(output, 1) y_pred.extend(pred.cpu()) # compare predictions to true label correct_tensor = pred.eq(target.data.view_as(pred)) print(correct_tensor) correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy()) # calculate test accuracy for each object class for i in range(batch_size): label = target.data[i] class_correct[label] += correct[i].item() class_total[label] += 1 # average test loss test_loss = test_loss/len(valid_loader.dataset) print('Test Loss: {:.6f}\n'.format(test_loss)) for i in range(3): if class_total[i] > 0: print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % ( classes[i], 100 * class_correct[i] / class_total[i], np.sum(class_correct[i]), np.sum(class_total[i]))) else: print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i])) print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % ( 100. * np.sum(class_correct) / np.sum(class_total), np.sum(class_correct), np.sum(class_total))) # Build confusion matrix cf_matrix = confusion_matrix(y_true, y_pred) df_cm = pd.DataFrame(cf_matrix/np.sum(cf_matrix) *10, index = [i for i in classes], columns = [i for i in classes]) plt.figure(figsize = (12,7)) sn.heatmap(df_cm, annot=True) plt.savefig('output.png')
_____no_output_____
Apache-2.0
pose_classification/pose_classification_lstm.ipynb
julkar9/deep_learning_nano_degree
To molsysmt.MolSys
from molsysmt.tools import openff_Topology

#openff_Topology.to_molsysmt_MolSys(item)
_____no_output_____
MIT
docs/contents/tools/classes/openff_Topology/to_molsysmt_MolSys.ipynb
dprada/molsysmt
Forecast using Air Passenger Data

Here the famous Air Passengers dataset is used to create one-step-ahead forecast models using recurrent neural networks.
+ LSTM cells take input of shape (n_obs, n_xdims, n_time)
+ Stateful networks require the entire sequence of data to be preprocessed
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from sklearn.preprocessing import MinMaxScaler, RobustScaler
from sklearn.metrics import r2_score, mean_squared_error
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM, Dropout, BatchNormalization, Activation
%matplotlib inline

url = 'https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/master/csv/datasets/AirPassengers.csv'
airPass = pd.read_csv(url)
airPass.index = airPass['time']
airPass.drop(['Unnamed: 0', 'time'], inplace=True, axis=1)
airPass.head()
_____no_output_____
MIT
DeepLearning/ForecastingAirPassengersLSTM.ipynb
mdavis29/DataScience101
Data Preprocessing for Forecasting
+ the data is lagged so that x = time 0 and y = time + 1 (one step ahead)
+ in this example, x is used twice to simulate a multi-dimensional x input for forecasting
+ the X and Y sides are scaled to (0, 1) (we will reverse the scaling when calculating error metrics)
+ Y is scaled so that the loss propagating back is all on the same scale, though this can create instability
n_obs = len(airPass['value'].values) n_ahead = 1 # in this case th n_time_steps = 12 # each observation will have 12 months of data n_obs_trimmed = int(n_obs/n_time_steps) n_xdims = 1 # created by hstacking the sequence to simumlate multi dimensional input n_train_obs = 9 n_outputs = 12 x = np.reshape(airPass['value'].values[0:n_obs_trimmed * n_time_steps ], (-1, 1)) scaler = MinMaxScaler().fit(x) x_scaled = scaler.transform(x) x_reshaped_scaled = np.reshape(x_scaled, (-1,n_xdims ,n_time_steps )) # trains on only the first n_train_observations, lags by n_ahead (in this case one year) X_train = x_reshaped_scaled[0:n_train_obs] y_train = x_reshaped_scaled[n_ahead:(n_train_obs + n_ahead)].squeeze() # squeeze reshapes from 3 to 2d # test on full data set X_test = x_reshaped_scaled[n_ahead:] y_test = x_reshaped_scaled[n_ahead:].squeeze() print('(n_years ie: obs), (n_xdims) , (time_steps ie months to consider)') print('x_train: {0}, y_train: {1}'.format(X_train.shape, y_train.shape)) print('x_test: {0}, y_test: {1}'.format(X_test.shape, y_test.shape))
(n_years ie: obs), (n_xdims) , (time_steps ie months to consider) x_train: (9, 1, 12), y_train: (9, 12) x_test: (11, 1, 12), y_test: (11, 12)
MIT
DeepLearning/ForecastingAirPassengersLSTM.ipynb
mdavis29/DataScience101
Modeling

A Keras LSTM with 12 cells is used, with 12 outputs (one for each month of the year). This forecasting system forecasts an entire year at a time.
+ Dropout is used to prevent overfitting
+ one row of input to this model is essentially 12 months of passenger counts, of shape (1, 1, 12)
from keras.callbacks import EarlyStopping esm = EarlyStopping(patience=4) # design network model = Sequential() model.add(LSTM(12, input_shape=(n_xdims, n_time_steps ),return_sequences=False, dropout=0.2, recurrent_dropout=0.2, stateful=False, batch_size=1)) model.add(Dense(n_outputs )) model.add(Activation('linear')) model.compile(loss='mae', optimizer='adam') model.summary() # fit the model history = model.fit(X_train,y_train, epochs=100, batch_size=1, validation_data=(X_test, y_test), verbose=0, shuffle=False, callbacks=[esm]) print('mse last 5 epochs {}'.format(history.history['val_loss'][-5:]))
_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= lstm_9 (LSTM) (1, 12) 1200 _________________________________________________________________ dense_9 (Dense) (1, 12) 156 _________________________________________________________________ activation_9 (Activation) (1, 12) 0 ================================================================= Total params: 1,356 Trainable params: 1,356 Non-trainable params: 0 _________________________________________________________________ mse last 5 epochs [0.0445328104225072, 0.045325480909510094, 0.04572556675835089, 0.04841325178065083, 0.053399924358183685]
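The parameter count in the summary can be verified by hand: a Keras LSTM layer has 4 × (units × (input_dim + units) + units) weights, so with 12 units and 12 input features that is 4 × (12 × 24 + 12) = 1,200; the Dense layer adds 12 × 12 + 12 = 156, giving the 1,356 total reported above.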
MIT
DeepLearning/ForecastingAirPassengersLSTM.ipynb
mdavis29/DataScience101
Performance on the Entire Dataset

Performance is checked across the entire dataset using mean squared error and the R2 score. The MinMax scaler is inverted to get predictions back on the original scale.
preds = scaler.inverse_transform(np.reshape(model.predict(X_test, batch_size=1), (-1, 1))).flatten()
y_true = airPass['value'].values[n_time_steps:]
val_df = pd.DataFrame({'preds': preds, 'y_true': y_true})

mse = round(mean_squared_error(y_true, preds), 3)
r2 = round(r2_score(y_true, preds), 3)
print('performance on entire data sets mse: {} r2: {}'.format(mse, r2))
performance on entire data sets mse: 997.776 r2: 0.925
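For scale, an MSE of about 998 corresponds to a root-mean-squared error of roughly √998 ≈ 31.6 in the units of the series (monthly totals that range from roughly 100 to just over 600), so the typical prediction error is on the order of a few tens of passengers per month.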
MIT
DeepLearning/ForecastingAirPassengersLSTM.ipynb
mdavis29/DataScience101
Performance on the Test Set

Since the last two years (24 timesteps) were held out as a test, we can measure performance on just that portion. The MinMax scaler is inverted to get predictions back on the original scale. This should show a small drop in performance.
y_true_test = airPass['value'].values[n_time_steps:][-24:]
preds_test = preds[-24:]

mse_test = round(mean_squared_error(y_true_test, preds_test), 3)
r2_test = round(r2_score(y_true_test, preds_test), 3)
print('performance on last two years only in the test set, mse: {} r2: {}'.format(mse_test, r2_test))

val_df.plot()
_____no_output_____
MIT
DeepLearning/ForecastingAirPassengersLSTM.ipynb
mdavis29/DataScience101
Pandas
import pandas as pd

dataframe = pd.read_csv('../data/MobileRating.csv')
dataframe.head()
dataframe.tail(7)

print(len(dataframe))
print(dataframe.shape)

# Accessing individual row
dataframe.loc[3]

dataframe_short = dataframe[40:45]
dataframe_short

dataframe_thin = dataframe[['PhoneId', 'RAM', 'Processor_frequency', 'Height', 'Capacity', 'Rating', 'Sim1_4G']]
dataframe_thin.head()

good_battery_df = dataframe_thin[dataframe_thin['Capacity'] >= 5000]
good_battery_df
good_battery_df.describe()

dataframe_thin.dtypes

good_battery_df[good_battery_df['Rating'] > 4]['Capacity'].mean()

group = good_battery_df.groupby(['RAM'])
for key, df_key in group:
    print(key)
    print(df_key)
    print('-----')

group.mean()
_____no_output_____
MIT
code/DL002-Python Intermediate and Linear Algebra.ipynb
hemendrarajawat/DL-Basic-To-Advanced
Plotting Dataframes
import matplotlib.pyplot as plt
import seaborn as sns

sns.set()

ax = sns.boxplot(x="Internal Memory", y="Weight", data=dataframe)
ax = sns.pairplot(dataframe_thin.drop(['PhoneId'], axis=1), diag_kind='hist')
ax = sns.pairplot(dataframe_thin.drop('PhoneId', axis=1), diag_kind='hist', hue='Sim1_4G')

sns.reset_orig()
_____no_output_____
MIT
code/DL002-Python Intermediate and Linear Algebra.ipynb
hemendrarajawat/DL-Basic-To-Advanced
Vectors

A vector is a collection of coordinates a point has in a given space; it has both magnitude and direction. In geometry, a space of two or more dimensions is called a Euclidean space: a space of any finite number of dimensions, in which points are designated by coordinates (one for each dimension) and the distance between two points is given by a distance formula. The L2 norm is also called the Euclidean norm (magnitude). The magnitude of a vector is given by: $\large \sqrt{\sum_{i=1}^{N} x_i^2}$

Plotting Vectors
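A quick numerical check of the magnitude formula (a small sketch, not part of the original notebook):

# the L2 norm of [3, 4] should be sqrt(3**2 + 4**2) = 5
import numpy as np

v = np.array([3, 4])
print(np.sqrt(np.sum(v**2)))   # 5.0 -- direct use of the formula
print(np.linalg.norm(v))       # 5.0 -- numpy's built-in Euclidean (L2) norm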
import numpy as np
import matplotlib.pyplot as plt

plt.quiver(0, 0, 4, 5, scale_units='xy', angles='xy', scale=1)
plt.xlim(-10, 10)
plt.ylim(-10, 10)
plt.show()

# Plot multiple vectors
plt.quiver(0, 0, 4, 5, scale_units='xy', angles='xy', scale=1, color='r')
plt.quiver(0, 0, -4, 5, scale_units='xy', angles='xy', scale=1, color='b')
plt.xlim(-10, 10)
plt.ylim(-10, 10)
plt.show()

# Creating a method to plot multiple vectors
def plot_vectors(vectors):
    colors = [
        'r', 'y', 'b', 'g', 'c', 'm', 'tan', 'black', 'darkorange', 'limegreen',
        'aqua', 'violet', 'pink', 'magenta', 'teal', 'indigo'
    ]
    i = 0
    for vector in vectors:
        plt.quiver(0, 0, vector[0], vector[1], scale_units='xy', angles='xy',
                   scale=1, color=colors[i % len(colors)], label=vector)
        i += 1
    plt.xlim(-10, 10)
    plt.ylim(-15, 15)
    plt.legend()
    plt.show()

vectors = np.array([[4, 3], [-4, 3], [7, 1], [3, 6]])
plot_vectors(vectors)
_____no_output_____
MIT
code/DL002-Python Intermediate and Linear Algebra.ipynb
hemendrarajawat/DL-Basic-To-Advanced
Vector Addition and Subtraction
# Addition of two vectors
print(vectors[0] + vectors[1])
plot_vectors([vectors[0], vectors[1], vectors[0] + vectors[1]])

# Subtraction of two vectors
print(vectors[0] - vectors[2])
plot_vectors([vectors[0], vectors[2], vectors[0] - vectors[2]])
[-3 2]
MIT
code/DL002-Python Intermediate and Linear Algebra.ipynb
hemendrarajawat/DL-Basic-To-Advanced
Vector Dot Product$\large{\vec{a}\cdot\vec{b} = |\vec{a}| |\vec{b}| \cos(\theta) = a_x b_x + a_y b_y} = a^T b$
print(vectors[0], vectors[2])

dot_product = np.dot(vectors[0], vectors[2])
print(dot_product)
[4 3] [7 1] 31
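As an extra worked example (not in the original notebook), the same identity can be run in reverse to recover the angle between the two vectors from their dot product:

# cos(theta) = a.b / (|a| |b|) = 31 / (5 * sqrt(50)) ~ 0.877
import numpy as np

a, b = np.array([4, 3]), np.array([7, 1])
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(np.degrees(np.arccos(cos_theta)))  # ~28.7 degrees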
MIT
code/DL002-Python Intermediate and Linear Algebra.ipynb
hemendrarajawat/DL-Basic-To-Advanced
Projection of one vector (a) on another vector (b)

$\large{a_b = |\vec{a}| \cos{\theta} = |\vec{a}| \frac{\vec{a} \cdot \vec{b}}{|\vec{a}| |\vec{b}|} = \frac{\vec{a} \cdot \vec{b}}{|\vec{b}|}}$

$\large{ \vec{a_b} = a_b \hat{b} = a_b \frac{\vec{b}}{|\vec{b}|} }$
a = vectors[0]
b = vectors[2]
plot_vectors([a, b])

a_b = np.dot(a, b) / np.linalg.norm(b)
print('Magnitude of projected vector:', a_b)

vec_a_b = (a_b / np.linalg.norm(b)) * b
print('Projected vector:', vec_a_b)
plot_vectors([a, b, vec_a_b])

# Another example
a = vectors[1]
b = vectors[2]
plot_vectors([a, b])

a_b = np.dot(a, b) / np.linalg.norm(b)
print('Magnitude of projected vector:', a_b)

vec_a_b = (a_b / np.linalg.norm(b)) * b
print('Projected vector:', vec_a_b)
plot_vectors([a, b, vec_a_b])
_____no_output_____
MIT
code/DL002-Python Intermediate and Linear Algebra.ipynb
hemendrarajawat/DL-Basic-To-Advanced
Matrices

A matrix is a collection of vectors.
# Row matrix
row_matrix = np.random.random((1, 4))
print(row_matrix)

# Column matrix
column_matrix = np.random.random((4, 1))
print(column_matrix)
[[0.50652656] [0.99386151] [0.6596067 ] [0.88428846]]
MIT
code/DL002-Python Intermediate and Linear Algebra.ipynb
hemendrarajawat/DL-Basic-To-Advanced
Multiplying a matrix with a vector

The vector gets transformed into a new vector (it strays from its original path).
matrix = np.asarray([[1, 2], [2, 4]])
vector = np.asarray([3, 1]).reshape(-1, 1)
# plot_vectors([vector])
print(matrix)
print(vector)

new_vector = np.dot(matrix, vector)
print(new_vector)
plot_vectors([vector, new_vector])
[[1 2] [2 4]] [[3] [1]] [[ 5] [10]]
MIT
code/DL002-Python Intermediate and Linear Algebra.ipynb
hemendrarajawat/DL-Basic-To-Advanced
Matrix Addition and Subtraction
matrix_1 = np.asarray([
    [1, 0, 3],
    [3, 1, 1],
    [0, 2, 5]
])
matrix_2 = np.asarray([
    [0, 1, 2],
    [3, 0, 5],
    [1, 2, 1]
])

print(matrix_1 + matrix_2)
print(matrix_1 - matrix_2)
[[ 1 -1 1] [ 0 1 -4] [-1 0 4]]
MIT
code/DL002-Python Intermediate and Linear Algebra.ipynb
hemendrarajawat/DL-Basic-To-Advanced
Matrix Multiplication
print(np.dot(matrix_1, matrix_2))
[[ 3 7 5] [ 4 5 12] [11 10 15]]
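Each entry of the product is the dot product of a row of `matrix_1` with a column of `matrix_2`; for example, entry (0, 0) is 1·0 + 0·3 + 3·1 = 3 and entry (0, 1) is 1·1 + 0·0 + 3·2 = 7, matching the first row of the output above.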
MIT
code/DL002-Python Intermediate and Linear Algebra.ipynb
hemendrarajawat/DL-Basic-To-Advanced
We worked on this together via VS Code Live Share while talking over Zoom. For sharing the file we have a GitHub repository set up. We met three times in total to work on it.
import pandas as pd import bs4 from bs4 import BeautifulSoup import requests from datetime import datetime as dt import numpy as np import re #2 url = "https://www.spaceweatherlive.com/en/solar-activity/top-50-solar-flares" webpage = requests.get(url) print(webpage) #response 403, this means that the webserver refused to authorize the request #to fix do this headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'} webpage = requests.get(url, headers=headers) print(webpage) #now response 200 soup_content = BeautifulSoup(webpage.content,'html.parser') pretty = soup_content.prettify() #print(pretty) table_html = soup_content.find("table",{"class":"table table-striped"})#['data-value'] #from stackoverflow df = pd.read_html(table_html.prettify())[0] df.rename(columns={'Unnamed: 0':"rank",'Unnamed: 1':"x_class",'Unnamed: 2':"date",'Start':"start_time",'Maximum':"max_time",'End':"end_time",'Unnamed: 7':"movie",'Region':"region"},inplace=True) df.head() dataFrame = df.drop("movie",axis=1) dataFrame.head() for row in dataFrame.iterrows(): date = pd.to_datetime(row[1].date) time_start = pd.to_datetime(row[1].start_time) str_time = str(date)[:11]+str(time_start)[11:] startTime = pd.to_datetime(str_time) dataFrame.at[row[0],'start_time'] = startTime time_max= pd.to_datetime(row[1].max_time) str_time = str(date)[:11]+str(time_max)[11:] maxTime = pd.to_datetime(str_time) dataFrame.at[row[0],'max_time'] = maxTime time_end = pd.to_datetime(row[1].end_time) str_time = str(date)[:11]+str(time_end)[11:] endTime = pd.to_datetime(str_time) dataFrame.at[row[0],'end_time'] = endTime #date_time_obj = dt.strftime(str_time, '%y-%m-%d %H:%M:%S') #dt.combine(date,time_start) dataFrame = dataFrame.replace('-',np.nan) dataFrame_update = dataFrame.drop("date",axis=1) dataFrame_update.rename(columns={'start_time':'start_datetime','max_time':'max_datetime','end_time':'end_datetime'},inplace=True) region_column = dataFrame_update.region dataFrame_update=dataFrame_update.drop(['region'],axis=1) dataFrame_update['region'] = region_column dataFrame_update.start_datetime = dataFrame_update.start_datetime.astype('datetime64[ns]') dataFrame_update.max_datetime = dataFrame_update.max_datetime.astype('datetime64[ns]') dataFrame_update.end_datetime = dataFrame_update.end_datetime.astype('datetime64[ns]') print(dataFrame_update.dtypes) dataFrame_update #Step 3 url = "http://cdaw.gsfc.nasa.gov/CME_list/radio/waves_type2.html" headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'} webpage = requests.get(url, headers=headers) soup_content = BeautifulSoup(webpage.content,'html.parser') #pretty = soup_content.prettify() #print(pretty) row_split = soup_content.find("pre").prettify().split('\n') colomns = [] data = [] regex = r'>(.*)<' for row in row_split[12:-2]: row = row.replace('<a','') data_split = row.split(' ')#remove '' while True: try: data_split.remove('') except: break try: cme_date = re.search( regex ,data_split[9]).groups()[0] #cme except: cme_date = data_split[9] cme_time = data_split[10] cpa = data_split[11] end_date = data_split[2] try: end_frequency = re.search( regex ,data_split[5]).groups()[0] except: end_frequency = data_split[5] end_time = data_split[3] flare_location = data_split[6] flare_region = data_split[7] importance = data_split[8] try: speed = re.search( regex ,data_split[13]).groups()[0] except: speed = data_split[13] 
start_date = data_split[0] try: start_frequency = re.search( regex ,data_split[4]).groups()[0] except: start_frequency = data_split[4] start_time = data_split[1] width = data_split[12] data.append([cme_date,cme_time,cpa,end_date,end_frequency,end_time, flare_location,flare_region,importance,speed,start_date, start_frequency,start_time,width]) columns = ["cme_date","cme_time","cpa","end_date","end_frequency","end_time", "flare_location","flare_region","importance","speed","start_date", "start_frequency","start_time","width"] nasa_df = pd.DataFrame(data,columns = columns) nasa_df.head() #step 4 nasa_df.replace(['--/--'],[np.nan],inplace=True) nasa_df.replace(['-----'],[np.nan],inplace=True) nasa_df.replace(['----'],[np.nan],inplace=True) nasa_df.replace(['????'],[np.nan],inplace=True) nasa_df.replace(['--:--'],[np.nan],inplace=True) nasa_df.replace(['------'],[np.nan],inplace=True) nasa_df['flare_location'].replace(['BACK'],['Back'],inplace=True) nasa_df['flare_location'].replace(['back'],['Back'],inplace=True) nasa_df['flare_region'].replace(['DSF'],['FILA'],inplace=True) nasa_df['width'].replace(['360h'],['360'],inplace=True) nasa_df["width"].replace(['---'],[np.nan],inplace=True) nasa_df.replace(['24:00'], ['23:59'], inplace=True) halo_flare = [] width_lower_bound = [] helper = nasa_df.iterrows() start_datetime = [] end_datetime = [] cme_datetime = [] for i in helper: #print(i[1]['cpa']) if i[1]['cpa'] == "Halo": halo_flare.append(True) else: halo_flare.append(False) if '&gt;' in str(i[1]['width']): width_lower_bound.append(True) nasa_df['width'][i[0]] = i[1]['width'][4:] else: width_lower_bound.append(False) #date time start_datetime.append(pd.to_datetime(str(i[1]['start_date'])+" "+str(i[1]['start_time']))) end_datetime.append(pd.to_datetime((str(i[1]['start_date'])[0:5]+str(i[1]['end_date'])+" "+str(i[1]['end_time'])))) try: cme_datetime.append(pd.to_datetime((str(i[1]['start_date'])[0:5]+str(i[1]['cme_date'])+" "+str(i[1]['cme_time'])))) except: cme_datetime.append(np.datetime64("NaT")) nasa_df.drop(columns = ['start_time', 'start_date', 'end_date', 'end_time', 'cme_date', 'cme_time'], inplace= True) nasa_df['is_flare'] = halo_flare nasa_df['width_lower_bound'] = width_lower_bound nasa_df['start_datetime'] = start_datetime nasa_df['end_datetime'] = end_datetime nasa_df['cme_datetime'] = cme_datetime nasa_df['cpa'].replace(['Halo'],[np.nan],inplace=True) #do not run more than once nasa_df = nasa_df.astype({'cpa':'float','end_frequency':'float','speed':'float','start_frequency':'float','width':'float'}) print(nasa_df.dtypes) nasa_df= nasa_df[['start_datetime','end_datetime','start_frequency','end_frequency','flare_location','flare_region','importance', 'cme_datetime','cpa', 'width','speed','is_flare','width_lower_bound']] nasa_df.head(10) #Part 2 start nasa_x_df = nasa_df[nasa_df['importance'].str[0]=='X'] nasa_x_df["importance"] = nasa_x_df["importance"].str[1:] nasa_x_df = nasa_x_df.astype({'importance':'float'}).sort_values(by='importance',ascending = False) nasa_x_df = nasa_x_df.astype({'importance':'string'}) nasa_x_df["importance"] = 'X' + nasa_x_df["importance"] compare_df = nasa_x_df.head(50) display(compare_df.head()) dataFrame_update.head() #We were able to replicate the top 50 solar flares pretty well, but when comparing #the two dataframes, some importance classification did not line up perfectly. 
#Question 2 def date_compare(date): # date is in the format x days xx:xx:xx date_split = date.split(" ") if int(date_split[0])>=1: return False new_split = date_split[2].split(':') if int(new_split[0])>2: return False return True def comparer(df1,df2): val = 0 start_diff = abs(df1['start_datetime']-df2['start_datetime']) end_diff = abs(df1['end_datetime']- df2['end_datetime']) if date_compare(str(start_diff)): val+=1 if date_compare(str(end_diff)): val+=1 return val arr_ranking_temp = [] for top_i in dataFrame_update.iterrows(): # (0=id,1 =info) temp_dict = dict() for nasa_i in compare_df.iterrows(): temp_dict[nasa_i[0]] = comparer(top_i[1],nasa_i[1]) arr_ranking_temp.append(temp_dict) dict_ranking = dict() count = 0 for i in arr_ranking_temp: if max(i.values()) == 0: dict_ranking[count] = np.nan else: dict_ranking[count] = (max(i,key=i.get), max(i.values())) count += 1 match_rank = [] index = [] for i in dict_ranking: if type(dict_ranking[i]) == tuple: #print(arr_ranking[i]) index.append(dict_ranking[i][0]) match_rank.append(dict_ranking[i][1]) #print(match_rank) #print(index) matchrank_df = pd.DataFrame(data={'match_rank': match_rank}, index=index) matchable = compare_df.merge(matchrank_df, how='right', right_index=True, left_index=True) matchable
_____no_output_____
MIT
_projects/project1.ipynb
M-Sender/cmps3160
We matched the rows by comparing their start and end times. If two rows' start times were within three hours of each other, the pair got a one-point increase in its match rank, and the same goes for end times, so the maximum possible match rank is 2. The NASA dataframe was sorted by importance in decreasing order, so we took its top 50 and compared it with the other dataset.
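A compact way to express that scoring rule (a hypothetical helper, not the project's own code, assuming the datetime columns shown above):

# +1 if the start times are within three hours, +1 if the end times are (max match rank = 2)
import pandas as pd

def match_rank(row_a, row_b, window=pd.Timedelta(hours=3)):
    rank = 0
    if abs(row_a['start_datetime'] - row_b['start_datetime']) < window:
        rank += 1
    if abs(row_a['end_datetime'] - row_b['end_datetime']) < window:
        rank += 1
    return rank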
%matplotlib inline import matplotlib.pyplot as plt # Give each flare an overall rank matchable["overall_rank"] = list(range(1,36)) new_speed = matchable["speed"] / 9 matchable.plot.scatter(x='start_datetime', y='start_frequency', s=new_speed,title="Top 50 Solar Flares, size=speed") #Graph below has the top 50 solar flares plotted by date and the size of each dot is the speed of the flare. You can see that with the intensity, there is a positive tend over time. speeds = nasa_df.speed/9 nasa_df.plot.scatter(x='start_datetime', y='start_frequency',s=speeds, title="Entire Nasa Data Set, size=speed") #Same as above, but with the entire nasa dataset matchable.is_flare.value_counts().plot.pie(title="Top 50 Solar Flares") #Below is the top 50 solar flares the top 50 solar flares for if they had a halo cme. nasa_df.is_flare.value_counts().plot.pie(title="Whole Nasa Dataset") #Below is the whole nasa dataset for if they have a halo cme. #As you can see, there was a higher proportion of halo cmes in the top 50 solar flares than when compared to the whole dataset.
_____no_output_____
MIT
_projects/project1.ipynb
M-Sender/cmps3160
Lambda notebook demo v. 1.0.1 Author: Kyle RawlinsThis notebook provides a demo of the core capabilities of the lambda notebook, aimed at linguists who already have training in semantics (but not necessarily implemented semantics).Last updated Dec 2018. Version history: * 0.5: first version * 0.6: updated to work with refactored class hierarchy (Apr 2013) * 0.6.1: small fixes to adapt to changes in various places (Sep 2013) * 0.7: various fixes to work with alpha release (Jan 2014) * 0.9: substantial updates, merge content from LSA poster (Apr 2014) * 0.95: substantial updates for a series of demos in Apr-May 2014 * 1.0: various changes / fixes, more stand-alone text (2017) * 1.0.1: small tweaks (Nov/Dec 2018) To run through this demo incrementally, use shift-enter (runs and moves to next cell). If you run things out of order, you may encounter problems (missing variables etc.)
reload_lamb()

from lamb.types import TypeMismatch, type_e, type_t, type_property
from lamb.meta import TypedTerm, TypedExpr, LFun, CustomTerm
from IPython.display import display

# Just some basic configuration
meta.constants_use_custom(False)
lang.bracket_setting = lang.BRACKET_FANCY
lamb.display.default(style=lamb.display.DerivStyle.BOXES) # you can also try lamb.display.DerivStyle.PROOF
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
First pitch

Have you ever wanted to type something like this in, and have it actually do something?
%%lamb
||every|| = λ f_<e,t> : λ g_<e,t> : Forall x_e : f(x) >> g(x)
||student|| = L x_e : Student(x)
||danced|| = L x_e : Danced(x)

r = ((every * student) * danced)
r
r.tree()
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
Two problems in formal semantics 1. Type-driven computation could be a lot easier to visualize and check. (Q: could it be made too easy?)2. Grammar fragments as in Montague Grammar: good idea in principle, hard to use in practice. * A **fragment** is a *complete* formalization of *sublanguage* consisting of the *key relevant phenomena* for the problem at hand. (Potential problem-points italicized.)Solution: a system for developing interactive fragments: "*IPython Lambda Notebook*"* Creator can work interactively with analysis -- accelerate development, limit time spent on tedious details.* Reader can explore derivations in ways that are not typically possible in typical paper format.* Creator and reader can be certain that derivations work, verified by the system.* Bring closer together formal semantics and computational modeling.Inspired by: * Von Eijck and Unger (2013): implementation of compositional semantics in Haskell. No interface (beyond standard Haskell terminal); great if you like Haskell. Introduced the idea of a fragment in digital form. * UPenn Lambda calculator (Champollion, Tauberer, and Romero 2007): teaching oriented. (Now under development again.) * `nltk.sem`: implementation of the lambda calculus with a typed metalanguage, interface with theorem provers. No interactive interface. * Jealousy of R studio, Matlab, Mathematica, etc. The role of formalism & fragments What does *formal* mean in semantics? What properties should a theory have? 1. Mathematically precise (lambda calculus, type theory, logic, model theory(?), ...) 2. Complete (covers "all" the relevant data). 3. Predictive (like any scientific theory). 4. Consistent, or at least compatible (with itself, analyses of other phenomena, some unifying conception of the grammar). The *method of fragments* (Partee 1979, Partee and Hendriks 1997) provides a structure for meeting these criteria. * Paper with a fragment provides a working system. (Probably.) * Explicit outer bound for empirical coverage. * Integration with a particular theory of grammar. (To some extent.) * Explicit answer to many detailed questions not necessarily dealt with in the text. **Claim**: fragments are a method of replicability, similar to a computational modeller providing their model. * To be clear, a fragment is neither necessary nor sufficient for having a good theory / analysis / paper...Additional benefit: useful internal check for researcher.>"...But I feel strongly that it is important to try to [work with fully explicit fragments] periodically, because otherwise it is extremely easy to think that you have a solution to a problem when in fact you don't." (Partee 1979, p. 41) The challenges of fragmentsPart 1 of the above quote:>"It can be very frustrating to try to specify frameworks and fragments explicitly; this project has not been entirely rewarding. I would not recommend that one always work with the constraint of full explicitness." (Ibid.) * Fragments can be tedious and time-consuming to write (not to mention hard). * Fragments as traditionally written are in practice not easy for a reader to use. - Dense/unapproachable. With exactness can come a huge chunk of hard-to-digest formalism. E.g. Partee (1979), about 10% of the paper. - Monolithic/non-modular. For the specified sublanguage, everything specified. Outside the bounds of the sublanguage, nothing specified. How does the theory fit in with others? 
- Exact opposite of the modern method -- researchers typically hold most aspects of the grammar constant (implicitly) while changing a few key points. (*Portner and Partee intro*)**Summary:** In practice, the typical payoff for neither the reader nor the writer of a fragment exceeded the effort. A solution: digital fragmentsVon Eijck and Unger 2010: specify a fragment in digital form.* They use Haskell. Type system of Haskell extremely well-suited to natural language semantics.* (Provocative statement) Interface, learning curve of Haskell not well suited to semanticists (or most people)? Benefits of digital fragments (in principle)* Interactive.* Easy to distribute, adapt, modify.* Possibility of modularity. (E.g. abstract a 'library' for compositional systems away from the analysis of a particular phenomenon.)* Bring closer together the CogSci idea of a 'computational model' to the project of natural language semantics.* Connections to computational semantics. (weak..) What sorts of things might we want in a fragment / system for fragments?* Typed lambda calculus.* Logic / logical metalanguage.* Framework for semantic composition. (Broad...)* Model theory? (x)* Interface with theorem provers? (x)IPython Lambda Notebook aims to provide these tools in a usable, interactive, format.* Choose Python, rather than Haskell/Java. Easy learning curve, rapid prototyping, existence of IPython.**Layer 1**: interface using IPython Notebook.**Layer 2**: flexible typed metalanguage.**Layer 3**: composition system for object language, building on layer 2. Layer 1: an interface using IPython/Jupyter Notebook (Perez and Granger 2007) * Client-server system where a specialized IPython "kernel" is running in the background. This kernel implements various tools for formal semantics (see parts 2-3). * Page broken down into cells in which can be entered python code, markdown code, raw text, other formats. * Jupyter: supports display of graphical representations of python objects. * Notebook format uses the "MathJax" framework to enable it to render most math-mode latex. Can have python objects automatically generate decent-looking formulas. Can use latex math mode in documentation as well (e.g. $\lambda x \in D_e : \mathit{CAT}(x)$)This all basically worked off-the-shelf.* Bulk of interface work so far: rendering code for logical and compositional representations.* Future: interactive widgets, etc.
meta.pmw_test1
meta.pmw_test1._repr_latex_()
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
Part 2: a typed metalanguage

The **metalanguage** infrastructure is a set of classes that implement the building blocks of logical expressions, lambda terms, and various combinations of these. This rests on an implementation of a **type system** that matches what semanticists tend to assume.

Starting point (2012): a few implementations of things like predicate logic do exist, this is an intro AI exercise sometimes. I started with the [AIMA python](http://code.google.com/p/aima-python/) _Expr_ class, based on the standard Russell and Norvig AI text. But, had to scrap most of it. Another starting point would have been `nltk.sem` (I was unaware of its existence at the time.)

Preface a cell with `%%lamb` to enter metalanguage formulas directly. The following cell defines a variable `x` that has type e, and exports it to the notebook's environment.
%%lamb reset
x = x_e # define x to have this type

x.type
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
This next cell defines some variables whose values are more complex objects -- in fact, functions in the typed lambda calculus.
%%lamb
test1 = L p_t : L x_e : P(x) & p # based on a Partee et al example
test1b = L x_e : P(x) & Q(x)
t2 = Q(x_e)
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
These are now registered as variables in the python namespace and can be manipulated directly. A typed lambda calculus is fully implemented with all that that entails -- e.g. the value of `test1` includes the whole syntactic structure of the formula, its type, etc. and can be used in constructing new formulas. The following cells build a complex function-argument formula, and following that, does the reduction.(Notice that beta reduction works properly, i.e. bound $x$ in the function is renamed in order to avoid collision with the free `x` in the argument.)
test1(t2)
test1(t2).reduce()

%%lamb
catf = L x_e: Cat(x)
dogf = λx: Dog(x_e)

(catf(x)).type
catf.type
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
Type checking of course is a part of all this. If the types don't match, the computation will throw a `TypeMismatch` exception. The following cell uses python syntax to catch and print such errors.
result = None
try:
    result = test1(x) # function is type <t<et>> so will trigger a type mismatch.  This is a python exception so adds all sorts of extraneous stuff, but look to the bottom
except TypeMismatch as e:
    result = e
result
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
A more complex expression:
%%lamb
p2 = (Cat_<e,t>(x_e) & p_t) >> (Exists y: Dog_<e,t>(y_e))
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
What is going on behind the scenes? The objects manipulated are recursively structured python objects of class TypedExpr.

Class _TypedExpr_: parent class for typed expressions. Key subclasses:
* BinaryOpExpr: parent class for things like conjunction.
* TypedTerm: variables, constants of arbitrary type
* BindingOp: operators that bind a single variable
* LFun: lambda expression

Many straightforward expressions can be parsed. Most expressions are created using a call to TypedExpr.factory, which is abbreviated as "te" in the following examples. The `%%lamb` magic is calling this behind the scenes.

Three ways of instantiating a variable `x` of type `e`:
%%lamb
x = x_e # use cell magic

x = te("x_e") # use factory function to parse string
x

x = meta.TypedTerm("x", types.type_e) # use object constructor
x
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
Various convenience python operators are overloaded, including functional calls. Here is an example repeated from earlier in two forms:
%%lamb
p2 = (Cat_<e,t>(x_e) & p_t) >> (Exists y: Dog_<e,t>(y_e))

p2 = (te("Cat_<e,t>(x)") & te("p_t")) >> te("(Exists y: Dog_<e,t>(y_e))")
p2
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
Let's examine in detail what happens when a function and argument combine.
catf = meta.LFun(types.type_e, te("Cat(x_e)"), "x")
catf
catf(te("y_e"))
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
Building a function-argument expression builds a complex, unreduced expression. This can be explicitly reduced (note that the `reduce_all()` function would be used to apply reduction recursively):
catf(te("y_e")).reduce()
(catf(te("y_e")).reduce()).derivation
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
The metalanguage supports some basic type inference. Type inference happens already on combination of a function and argument into an unreduced expression, not on beta-reduction.
%lamb ttest = L x_X : P_<?,t>(x) # type <?,t>
%lamb tvar = y_t
ttest(tvar)
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
Part 3: composition systems for an object language

On top of the metalanguage are '**composition systems**' for modeling (step-by-step) semantic composition in an object language such as English. This is the part of the lambda notebook that tracks and manipulates mappings between object language elements (words, trees, etc.) and denotations in the metalanguage.

A composition system at its core consists of a set of composition rules; the following cell defines a simple composition system that will be familiar to anyone who has taken a basic course in compositional semantics. (This example is just a redefinition of the default composition system.)
# none of this is strictly necessary, the built-in library already provides effectively this system.
fa = lang.BinaryCompositionOp("FA", lang.fa_fun, reduce=True)
pm = lang.BinaryCompositionOp("PM", lang.pm_fun, commutative=False, reduce=True)
pa = lang.BinaryCompositionOp("PA", lang.pa_fun, allow_none=True)

demo_hk_system = lang.CompositionSystem(name="demo system", rules=[fa, pm, pa])
lang.set_system(demo_hk_system)
demo_hk_system
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
Expressing denotations is done in a `%%lamb` cell, and almost always begins with lexical items. The following cell defines several lexical items that will be familiar from introductory exercises in the Heim & Kratzer 1998 textbook "Semantics in Generative Grammar". These definitions produce items that are subclasses of the class `Composable`.
%%lamb
||cat|| = L x_e: Cat(x)
||gray|| = L x_e: Gray(x)
||john|| = John_e
||julius|| = Julius_e
||inP|| = L x_e : L y_e : In(y, x) # `in` is a reserved word in python
||texas|| = Texas_e
||isV|| = L p_<e,t> : p # `is` is a reserved word in python
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
In the purely type-driven mode, composition is triggered by using the '`*`' operator on a `Composable`. This searches over the available composition operations in the system to see if any results can be had. `inP` and `texas` above should be able to compose using the FA rule:
inP * texas
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
On the other hand `isV` is looking for a property, so we shouldn't expect successful composition. Below this I have given a complete sentence and shown some introspection on that composition result.
julius * isV # will fail due to type mismatches

sentence1 = julius * (isV * (inP * texas))
sentence1
sentence1.trace()
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
Composition will find all possible paths (beware of combinatorial explosion). I have temporarily disabled the fact that standard PM is symmetric/commutative (because of conjunction), to illustrate a case with multiple composition paths:
gray * cat
gray * (cat * (inP * texas))

a = lang.Item("a", isV.content) # identity function for copula as well
isV * (a * (gray * cat * (inP * texas)))

np = ((gray * cat) * (inP * texas))
vp = (isV * (a * np))
sentence2 = julius * vp
sentence2

sentence1.results[0]
sentence1.results[0].tree()
sentence2.results[0].tree()
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
One of the infamous exercise examples from Heim and Kratzer (names different): (1) Julius is a gray cat in Texas fond of John. First let's get rid of all the extra readings, to keep this simple.
demo_hk_system.get_rule("PM").commutative = True

fond = lang.Item("fond", "L x_e : L y_e : Fond(y)(x)")
ofP = lang.Item("of", "L x_e : x")

sentence3 = julius * (isV * (a * (((gray * cat) * (inP * texas)) * (fond * (ofP * john)))))
sentence3
sentence3.tree()
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
The _Composite_ class subclasses _nltk.Tree_, and so supports the things that class does. E.g. []-based paths:
parse_tree3 = sentence3.results[0]
parse_tree3[0][1][1].tree()
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
There is support for traces and indexed pronouns, using the PA rule. (The implementation may not be what you expect.)
binder = lang.Binder(23)
binder2 = lang.Binder(5)
t = lang.Trace(23, types.type_e)
t2 = lang.Trace(5)
display(t, t2, binder)

((t * gray))

b1 = (binder * (binder2 * (t * (inP * t2))))
b2 = (binder2 * (binder * (t * (inP * t2))))
display(b1, b2)

b1.trace()
b1.results[0].tree()
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
Composition in tree structures

Some in-progress work: implementing tree-based computation, and top-down/deferred computation
* using nltk Tree objects.
* system for deferred / uncertain types -- basic inference over unknown types
* arbitrary order of composition expansion. (Of course, some orders will be far less efficient!)
reload_lamb()
lang.set_system(lang.hk3_system)

%%lamb
||gray|| = L x_e : Gray_<e,t>(x)
||cat|| = L x_e : Cat_<e,t>(x)

t2 = Tree("S", ["NP", "VP"])
t2

t2 = Tree("S", ["NP", "VP"])
r2 = lang.hk3_system.compose(t2)
r2.tree()
r2.paths()

Tree = lamb.utils.get_tree_class()
t = Tree("NP", ["gray", Tree("N", ["cat"])])
t

t2 = lang.CompositionTree.tree_factory(t)
r = lang.hk3_system.compose(t2)
r
r.tree()

r = lang.hk3_system.expand_all(t2)
r
r.tree()
r.paths()
_____no_output_____
BSD-3-Clause
notebooks/Lambda Notebook Demo.ipynb
poetaster-org/lambda-notebook
Algorithms 1: Do Something!

Today's exercise is to make a piece of code that completes a useful task, but write it as generalized as possible to be reusable for other people (including Future You)!
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
_____no_output_____
MIT
lab2/lab2-Howard.ipynb
erinleighh/WWU-seminar-2018
Documentation

A "Docstring" is required for every function you write. Otherwise you will forget what it does and how it does it!

One very common docstring format is the "[NumPy/SciPy](https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt)" standard:

Below is a working function with a valid docstring as an example:
def MyFunc(arg1, arg2, kwarg1=5.0): ''' This is a function to calculate the number of quatloos required to reverse the polarity of a neutron flow. Parameters ---------- arg1 : float How many bleeps per blorp arg2 : float The foo/bar parameter kwarg1 : float, optional The quatloo to gold-pressed-latinum exchange rate Returns ------- float A specific resultification index ''' if kwarg1 > 5.0: print("wow, that's a lot of quatloos...") # this is the classical formula we learn in grade school output = arg1 + arg2 * kwarg1 return output # how to use the function x = MyFunc(7,8, kwarg1=9.2) # Check out the function's result print(x) # convert Kelvin to Fahrenheit def TempConvert(temp, K2F = True): ''' This is a function to calculate the temperature in Fahrenheit with a given input in Kelvin, and vice versa. Parameters ---------- temp : float The input temperature K2F : boolean The input temperature's unit, assumes Kelvin to Fahrenheit. Returns ------- float The Fahrenheit equivalent to input. ''' # this is the classical formula we learn in grade school if K2F == True: output = (9/5 * (temp - 273)) + 32 else: output = (5/9 * (temp - 32)) + 273 return output # Insert the degrees and "True" if converting from K to F, "False" if converting from F to K. x = TempConvert(32, False) # check print(x)
273.0
MIT
lab2/lab2-Howard.ipynb
erinleighh/WWU-seminar-2018
Today's Algorithm

Here's the goal: **Which constellation is a given point in?**

This is where you could find the detailed constellation boundary lines data: http://vizier.cfa.harvard.edu/viz-bin/Cat?cat=VI%2F49

You could use this data and do the full "Ray Casting" approach, or even cheat using matplotlib functions! http://stackoverflow.com/a/23453678

**BUT**

A simplified approach has been developed (that you should use!) from [Roman (1987)](http://cdsads.u-strasbg.fr/abs/1987PASP...99..695R)
# This is how to read in the coordinates and constellation names using Pandas # (this is a cleaned up version of Table 1 from Roman (1987) I prepared for you!) df = pd.read_csv('data/data.csv') df # Determine which constellation a given coordinate is in. def howard_constellation(ra, dec): ''' This is a function to determine which constellation a given coordinate is in. Parameters ---------- ra : float The right assention coordinate of the input. dec : float The declination coordinate of the input. index : int The indeces in which the coordinate passes the conditionals. (Includes MORE than just the constellation it's in.) boundsIndex : int The indeces in which the coordinate passes the conditionals. (Includes MORE than just the constellation it's in.) Returns ------- string The constellation the given coordinate is in. plot ''' # This is how to read in the coordinates and constellation names using Pandas df = pd.read_csv('data/data.csv') '''Based on the literature: Read down the column headed "DE_low" until a declination lower than or equal to the declination of the input is reached. Read down the column headed "RA_up" until a right assention greater than or equal to the right assention of the input is reached. Read down the column headed "RA_low" until a right assention lower than or equal to the right assention of the input is reached. The FIRST index where this is true is the constellation in which the coordinate is located.''' index = np.where((dec >= df['DE_low']) & (ra <= df['RA_up']) & (ra >= df['RA_low']))[0] output = df['name'].values[index][0] ''' Attempting to draw the constellation boundaries and the point of the ra/dec input. ''' boundsIndex = np.where(df['name'] == output)[0] plt.plot(df['RA_up'][boundsIndex], df['DE_low'][boundsIndex]) plt.plot(df['RA_low'][boundsIndex], df['DE_low'][boundsIndex]) plt.scatter(ra,dec) plt.show() return output # TESTS FOR YOUR FUNCTION! # these coordinates SHOULD be in constellation "LYR" ra=18.62 dec=38.78 x = constellation(ra, dec) print(x) # these should be in "APS" ra=14.78 dec=-79.03 x = constellation(ra, dec) print(x)
_____no_output_____
MIT
lab2/lab2-Howard.ipynb
erinleighh/WWU-seminar-2018
#Join Data Practice #These are the top 10 most frequently ordered products. How many times was each ordered? #Banana #Bag of Organic Bananas #Organic Strawberries #Organic Baby Spinach #Organic Hass Avocado #Organic Avocado #Large Lemon #Strawberries #Limes #Organic Whole Milk #First, write down which columns you need and which dataframes have them. #Next, merge these into a single dataframe. #Then, use pandas functions from the previous lesson to get the counts of the top 10 most frequently ordered products. import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz %cd instacart_2017_05_01 !ls -lh *.csv #discovered that the 'aisles' and 'departments' files are pretty useless for this task so I eliminated them after exploring them opt = pd.read_csv('order_products__train.csv') opt.head(10) opp = pd.read_csv('order_products__prior.csv') opp.head() ord = pd.read_csv('orders.csv') ord.head() prod = pd.read_csv('products.csv') prod.describe() #Attempting to explore the relevant column to find the exact food items - this didn't really help me but I wanted to try it anyway food = prod[['product_name']] food.head() #attempting to group by food and limit this list to just those items - this also did not help opp.groupby('add_to_cart_order').head(10) #I was finally able to pull out the individual products I needed - I want to figure out if I can do them all at once #Every attempt to combine them produced an error so I did each separately #I discovered that almost all of them are in aisle 24, dept 4 - so many those two sets could be useful after all - jk they're not banana = prod[(prod.product_name == 'Banana')] banana bob = prod[(prod.product_name == 'Bag of Organic Bananas')] bob straw = prod[(prod.product_name == 'Strawberries')] straw ostraw = prod[(prod.product_name == 'Organic Strawberries')] ostraw avo = prod[(prod.product_name == 'Avocado')] avo oha = prod[(prod.product_name == 'Organic Hass Avocado')] oha lime = prod[(prod.product_name == 'Limes')] lime lemon = prod[(prod.product_name == 'Large Lemon')] lemon milk = prod[(prod.product_name == 'Organic Whole Milk')] milk spinach = prod[(prod.product_name == 'Organic Baby Spinach')] spinach #Everything is in Dept 4 with the exception of milk in dept 16 #so, they are all in one list but now I need to get rid of the headings and unnecessary data food_list = [lemon, lime, milk, oha, avo, banana, bob, straw, ostraw, spinach] food_list #Well that sucks #combined these because they hold a lot of the same categories opp_opt = pd.concat([opp, opt], axis=0) opp_opt.describe ord_prod = pd.concat([ord, prod], axis=0) ord_prod.shape #These product ids match the top foods product ids - tbh tho I don't understand what the second column output represents opp_opt['product_id'].value_counts()[:10] #Making list of the ids needed: Top_food = opp_opt['product_id'].value_counts()[:10].index.tolist() Top_food #Making list of top ids and cooresponding order info condition = opp_opt['product_id'].isin(Top_food) small_list = opp_opt[condition] small_list.head(10) #The list has been merged on the common element (product id), inner because it's like joining two sets by the inner part of a venn diagram, but all data from both is included merge1 = pd.merge(small_list, prod, on='product_id', how='inner') merge1 #Trying to make 
the dataset look better and display the info needed - it worked! fin_df = merge1['product_name'].value_counts(sort=True).to_frame() fin_df = fin_df.reset_index() fin_df.columns = ['product_name', 'amount ordered'] fin_df
_____no_output_____
MIT
Join_and_Shape_Data.ipynb
kvinne-anc/Data-Science-Notebooks
Day 20 - Trench Maphttps://adventofcode.com/2021/day/20
from pathlib import Path INPUTS = Path("input.txt").read_text().strip().split("\n") ENHANCER = INPUTS[0] IMAGE = INPUTS[2:] def section_to_decimal(section: str) -> int: output = ''.join('1' if x == '#' else '0' for x in section) return int(output, base=2) assert section_to_decimal(section='...#...#.') == 34 def enhance_image( original: list[str], enhancer: str = ENHANCER, padder: str = ".", ) -> list[str]: extra_row = padder * (len(original[0]) + 4) # Expand the original 2 pixels in every dimension # to more easily grab sections on the edges for enhancing the final image. new_original = [ extra_row, extra_row, *[f"{padder*2}{x}{padder*2}" for x in original], extra_row, extra_row, ] output = [] for i in range(len(new_original) - 2): outrow = "" for j in range(len(new_original[0]) - 2): section = "".join(x[j : j + 3] for x in new_original[i : i + 3]) index = section_to_decimal(section=section) outrow += enhancer[index] output.append(outrow) return output def test_enhance_image(): enhancer = ( "..#.#..#####.#.#.#.###.##.....###.##.#..###.####..#####..#....#..#..##..##" "#..######.###...####..#..#####..##..#.#####...##.#.#..#.##..#.#......#.###" ".######.###.####...#.##.##..#..#..#####.....#.#....###..#.##......#.....#." ".#..#..##..#...##.######.####.####.#.#...#.......#..#.#.#...####.##.#....." ".#..#...##.#.##..#...##.#.##..###.#......#.#.......#.#.#.####.###.##...#.." "...####.#..#..#.##.#....##..#.####....##...##..#...#......#.#.......#....." "..##..####..#...#.#.#...##..#.#..###..#####........#..####......#..#" ) image = [ "#..#.", "#....", "##..#", "..#..", "..###", ] enhanced = enhance_image(original=image, enhancer=enhancer) expected = [ ".##.##.", "#..#.#.", "##.#..#", "####..#", ".#..##.", "..##..#", "...#.#.", ] assert enhanced == expected, "Output:\n" + "\n".join(enhanced) enhanced2 = enhance_image(original=enhanced, enhancer=enhancer, padder=enhancer[0]) expected2 = [ ".......#.", ".#..#.#..", "#.#...###", "#...##.#.", "#.....#.#", ".#.#####.", "..#.#####", "...##.##.", "....###..", ] assert enhanced2 == expected2, "Output:\n" + "\n".join(enhanced2) light_pixels = sum([sum([1 for y in x if y == "#"]) for x in enhanced2]) assert light_pixels == 35, light_pixels test_enhance_image()
_____no_output_____
MIT
2021/day20/main.ipynb
GriceTurrble/advent-of-code-2020-answers
I had some trouble at the next stage, with the AoC site telling me my count was off despite everything seeming to work correctly. What I didn't account for was that the enhancement of a pixel surrounded by all dark pixels results in index `0`, and index 0 of my enhancer was a *light* pixel. This meant that every dark pixel in infinite directions would alternate between light and dark on each iteration (correspondingly, the 512th pixel of the enhancer, which is what a neighborhood of all light pixels maps to, is a dark pixel, completing the alternating pattern). The fix for this is to adjust the enhancement algorithm so that it adds two new rows and columns on the outside, matching the first pixel in the enhancer, on every *even* enhancement. This ensured that the example code still worked the same and that my own enhancer worked correctly. This gotcha had me stumped in part 1 for a while, but a couple lines of code later and it's solved.
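To make those two boundary cases concrete, here is a small added check (not part of the original notebook) that uses the `section_to_decimal` helper defined in the earlier cell:

```python
# An all-dark 3x3 neighborhood maps to the first enhancer pixel (index 0),
# while an all-light one maps to the last (the 512th, index 511).
assert section_to_decimal(section="." * 9) == 0
assert section_to_decimal(section="#" * 9) == 511
```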
pass1 = enhance_image(original=IMAGE, enhancer=ENHANCER) # As noted above, the second pass has to use the first pixel in the ENHANCER as a padder # in order to get back the correct image. pass2 = enhance_image(original=pass1, enhancer=ENHANCER, padder=ENHANCER[0])
_____no_output_____
MIT
2021/day20/main.ipynb
GriceTurrble/advent-of-code-2020-answers
Once we have a final image (in `pass2` above), we have to count the light pixels. This is a simple matter of flattening and summing all instances of `#` in the final list of strings. I could have simplified ever so slightly had I converted the pixels to `1`s and `0`s first, but where's the fun in that?
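For completeness, here is a sketch of that `1`s-and-`0`s variant (added for illustration; it assumes `pass2` from the earlier cell and is not the notebook's own approach):

```python
# Convert each row of '#'/'.' pixels to ints, then sum everything.
binary_rows = [[1 if pixel == "#" else 0 for pixel in row] for row in pass2]
light_pixels = sum(sum(row) for row in binary_rows)
print(f"Number of light pixels: {light_pixels}")
```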
light_pixels = sum([sum([1 for y in x if y == '#']) for x in pass2]) print(f"Number of light pixels: {light_pixels}")
Number of light pixels: 5057
MIT
2021/day20/main.ipynb
GriceTurrble/advent-of-code-2020-answers
Part 2Running the same algorithm 50x isn't much of a deal compared to running it 2x. We just need to be sure to pull the correct padding character on even-numbered iterations, so we have the `padder` line flipping on the result of `i % 2`: when that is 1 (i.e. `i == 1`, `3`, and so on), we are on the 2nd, 4th, etc. pass, which are our even-numbered iterations, and we pad with the enhancer's first pixel instead of `'.'`.
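As a quick added illustration of that alternation (assuming the enhancer's first pixel is `'#'`, as discussed in Part 1), these are the padding characters the first few passes would use:

```python
# Not part of the original notebook: just show which padder each pass gets.
for i in range(4):
    padder = "#" if i % 2 else "."
    print(f"pass {i + 1}: padder = {padder!r}")
```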
image = IMAGE iterations = 50 for i in range(iterations): padder = ENHANCER[0] if i % 2 else "." image = enhance_image(original=image, enhancer=ENHANCER, padder=padder) light_pixels = sum([sum([1 for y in x if y == '#']) for x in image]) print(f"Number of light pixels: {light_pixels}")
Number of light pixels: 18502
MIT
2021/day20/main.ipynb
GriceTurrble/advent-of-code-2020-answers
Error Mitigation using noise-estimation circuit
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, Aer, transpile, IBMQ from qiskit.circuit import Gate from qiskit.tools.visualization import plot_histogram from typing import Union import numpy as np import math import matplotlib.pyplot as plt qr = QuantumRegister(size = 6, name = 'qr') cr = ClassicalRegister(1,name='cr') circ = QuantumCircuit(qr,cr) circ.rx(-np.pi/2,qr[0]) circ.rx(-np.pi/2,qr[1]) circ.rx(-np.pi/2,qr[2]) circ.rx(np.pi/2,qr[3]) circ.rx(np.pi/2,qr[4]) circ.rx(np.pi/2,qr[5]) def one_time_step(num_qubits, J, to_gate=True) -> Union[Gate, QuantumCircuit]: # Define the circuit for one_time_step # J: the value of J*dt qr = QuantumRegister(num_qubits,name='qr') qc = QuantumCircuit(qr) for i in range( (num_qubits-1)//2): qc.cnot(qr[1+2*i],qr[2+2*i]) qc.rx(-J,qr[2*i+1]) qc.rz(-J,qr[2*i+2]) qc.cnot(qr[1+2*i],qr[2+2*i]) for i in range(num_qubits//2): qc.cnot(qr[2*i],qr[2*i+1]) qc.rx(-2*J,qr[2*i]) qc.rz(-2*J,qr[2*i+1]) qc.cnot(qr[2*i],qr[2*i+1]) for i in range( (num_qubits-1)//2): qc.cnot(qr[1+2*i],qr[2+2*i]) qc.rx(-J,qr[2*i+1]) qc.rz(-J,qr[2*i+2]) qc.cnot(qr[1+2*i],qr[2+2*i]) return qc.to_gate(label=' one time step') if to_gate else qc temp_circ = one_time_step(6, 0.1,to_gate=False) temp_circ.draw('mpl') J = 0.25 step = 4 for i in range(step): circ.append(one_time_step(6, J), qr) circ.draw('mpl') for i in range(6): circ.rx(-np.pi/2,qr[i]) circ.measure(qr[5],cr) circ.draw('mpl') simulator = Aer.get_backend('aer_simulator') circ_transpiled = transpile(circ, simulator) job = simulator.run(circ_transpiled, shots = 8192) res = job.result() counts = res.get_counts() plot_histogram(counts) def HeiSim_Step_Original(num_qubits, J, step, real_device=False): qr = QuantumRegister(size = num_qubits, name = 'qr') cr = ClassicalRegister(1,name='cr') circ = QuantumCircuit(qr,cr) circ.rx(-np.pi/2,qr[0]) circ.rx(-np.pi/2,qr[1]) circ.rx(-np.pi/2,qr[2]) circ.rx(np.pi/2,qr[3]) circ.rx(np.pi/2,qr[4]) circ.rx(np.pi/2,qr[5]) for i in range(step): circ.append(one_time_step(num_qubits, J), qr) for i in range(6): circ.rx(-np.pi/2,qr[i]) circ.measure(qr[5],cr) if real_device: provider = IBMQ.get_provider(hub='ibm-q-community',group='ibmquantumawards',project='open-science-22') backend = provider.get_backend(name='ibmq_jakarta') else: backend = Aer.get_backend('aer_simulator') circ_transpiled = transpile(circ, backend) job = backend.run(circ_transpiled, shots = 8192) res = job.result() counts = res.get_counts() counts_0 = counts.get('0') counts_1 = counts.get('1') if counts_0!=8192 and counts_1!=8192: return (counts.get('0') - counts.get('1'))/8192 elif counts_0==8192: return 1 elif counts_1==8192: return -1 M = [] for i in range(15): temp = HeiSim_Step_Original(num_qubits = 6, J = 0.1,step = i) M.append(temp) t = 0.2*np.array(range(15)) plt.plot(t,M) HeiSim_Step_Original(num_qubits = 6, J = 0.2,step = 1,real_device=True)
_____no_output_____
Apache-2.0
QHack_Project.ipynb
Dran-Z/QHack2022-OpenHackerthon
generate mock SEDs using the `provabgs` pipelineThese SEDs will be used to construct BGS-like spectra and DESI-like photometry, which will be used for P1 and S1 mock challenge tests
import os import numpy as np # --- plotting --- import corner as DFM import matplotlib as mpl import matplotlib.pyplot as plt #if 'NERSC_HOST' not in os.environ.keys(): # mpl.rcParams['text.usetex'] = True mpl.rcParams['font.family'] = 'serif' mpl.rcParams['axes.linewidth'] = 1.5 mpl.rcParams['axes.xmargin'] = 1 mpl.rcParams['xtick.labelsize'] = 'x-large' mpl.rcParams['xtick.major.size'] = 5 mpl.rcParams['xtick.major.width'] = 1.5 mpl.rcParams['ytick.labelsize'] = 'x-large' mpl.rcParams['ytick.major.size'] = 5 mpl.rcParams['ytick.major.width'] = 1.5 mpl.rcParams['legend.frameon'] = False from provabgs import infer as Infer from provabgs import models as Models prior = Infer.load_priors([ Infer.UniformPrior(9., 12., label='sed'), Infer.FlatDirichletPrior(4, label='sed'), # flat dirichilet priors Infer.UniformPrior(0., 1., label='sed'), # burst fraction Infer.UniformPrior(0., 13.27, label='sed'), # tburst Infer.UniformPrior(6.9e-5, 7.3e-3, label='sed'),# uniform priors on ZH coeff Infer.UniformPrior(6.9e-5, 7.3e-3, label='sed'),# uniform priors on ZH coeff Infer.UniformPrior(0., 3., label='sed'), # uniform priors on dust1 Infer.UniformPrior(0., 3., label='sed'), # uniform priors on dust2 Infer.UniformPrior(-2.2, 0.4, label='sed') # uniform priors on dust_index ])
_____no_output_____
MIT
nb/mocha_provabgs_mocks.ipynb
changhoonhahn/GQP_mock_challenge
sample $\theta_{\rm obs}$ from prior
# direcotyr on nersc #dat_dir = '/global/cscratch1/sd/chahah/gqp_mc/mini_mocha/provabgs_mocks/' # local direcotry dat_dir = '/Users/chahah/data/gqp_mc/mini_mocha/provabgs_mocks/' theta_obs = np.load(os.path.join(dat_dir, 'provabgs_mock.theta.npy')) z_obs = 0.2 m_nmf = Models.NMF(burst=True, emulator=False) wave_full = [] flux_full = [] wave_obs = np.linspace(3e3, 1e4, 1000) flux_obs = [] for i in range(theta_obs.shape[0]): w, f = m_nmf.sed(theta_obs[i], z_obs) wave_full.append(w) flux_full.append(f) _, f_obs = m_nmf.sed(theta_obs[i], z_obs, wavelength=wave_obs) flux_obs.append(f_obs) fig = plt.figure(figsize=(10,5)) sub = fig.add_subplot(111) for w, f, f_obs in zip(wave_full[:10], flux_full, flux_obs): sub.plot(w, f, c='k', ls=':') sub.plot(wave_obs, f_obs) sub.set_xlim(3e3, 1e4)
_____no_output_____
MIT
nb/mocha_provabgs_mocks.ipynb
changhoonhahn/GQP_mock_challenge
save to file
np.save(os.path.join(dat_dir, 'provabgs_mock.wave_full.npy'), np.array(wave_full)) np.save(os.path.join(dat_dir, 'provabgs_mock.flux_full.npy'), np.array(flux_full)) np.save(os.path.join(dat_dir, 'provabgs_mock.wave_obs.npy'), wave_obs) np.save(os.path.join(dat_dir, 'provabgs_mock.flux_obs.npy'), np.array(flux_obs))
_____no_output_____
MIT
nb/mocha_provabgs_mocks.ipynb
changhoonhahn/GQP_mock_challenge
Load the current S&P 500 ticker list; we will only be trading on these stocks
tickers = open('s&p500_tickers.dat', 'r').read().split('\n') print(tickers)
_____no_output_____
Apache-2.0
Pavol/ray_test.ipynb
AidanMar/reinforcement_learning
Data start and end dates and initialise ticker dataframe dictionary
one_day = pd.Timedelta(days=1) i = 0 cur_day = pd.to_datetime('1992-06-15', format=r'%Y-%m-%d') #pd.to_datetime('1992-06-15') end_day = pd.to_datetime('2020-01-01', format=r'%Y-%m-%d') end_df = pd.read_csv('equity_data/' + (end_day - one_day).strftime(r'%Y%m%d') + '.csv') ticker_df = end_df.loc[end_df.symbol.isin(tickers)] # Tickers that are in the dataframe on the last day ticker_dict = {ticker_df.symbol.iloc[i] : ticker_df.finnhub_id.iloc[i] for i in range(len(ticker_df.index))} # Create a mapping between tickers and finnhub_ids
_____no_output_____
Apache-2.0
Pavol/ray_test.ipynb
AidanMar/reinforcement_learning
For all dates between start and end range, load the day into ticker dict with the key being ticker and dataframe indexed by day
df_columns = pd.read_csv('equity_data/' + cur_day.strftime(r'%Y%m%d') + '.csv').columns ticker_dfs = { ticker : pd.DataFrame(index=pd.date_range(cur_day, end_day - one_day, freq='D'), columns=df_columns) for ticker in tickers } save_df = False if save_df: pbar = tqdm(total=(end_day - cur_day).days) while (cur_day != end_day): pbar.update() try: day_df = pd.read_csv('equity_data/' + cur_day.strftime(r'%Y%m%d') + '.csv') except FileNotFoundError: cur_day += one_day i += 1 continue for ticker in ticker_dict.keys(): if ticker in day_df.symbol.values: row = day_df.loc[day_df.finnhub_id == ticker_dict[ticker]] if row.shape[0] == 2: print(ticker) print(row) if not row.shape[0] == 0: ticker_dfs[ticker].loc[cur_day] = row.values[0, :] cur_day += one_day i += 1 pbar.close() # Loading logic, as the above is slow process and we dont want to perform it every time if save_df: for ticker, ticker_frame in ticker_dfs.items(): ticker_frame.reset_index(inplace=True) ticker_frame.to_feather('equity_data/stored/' + ticker.lower() + '.feather') else: print('Loading from storage...') for symbol in ticker_dict.keys(): ticker_dfs[symbol] = pd.read_feather('equity_data/stored/' + symbol.lower() + '.feather').set_index('index', drop=True) print(ticker_dfs) # Clear the data somewhat, we only want frames with more than 2000 days that have gaps no larger than 7 days to_delete = [] for ticker, frame in ticker_dfs.items(): prev_day = frame.index[-1] frame.dropna(axis='index', how='all', inplace=True) if frame.empty: to_delete.append(ticker) elif len(frame.index) < 2000: to_delete.append(ticker) else: for day in frame.index[::-1][1:]: if (prev_day - day).days > 7: # if gap between datapoints larger than 7 days, remove to_delete.append(ticker) break prev_day = day for ticker in to_delete: del ticker_dfs[ticker] print('Deleting ticker: ' + ticker) print(len(ticker_dfs.keys())) # Align dataframes by date and create an intersection index_intersection = ticker_dfs[list(ticker_dfs.keys())[0]].index print(index_intersection) for ticker, ticker_frame in ticker_dfs.items(): index_intersection = index_intersection.intersection(ticker_frame.index) print(ticker + ': ' + str(len(index_intersection))) print(index_intersection) for ticker in ticker_dfs.keys(): ticker_dfs[ticker] = ticker_dfs[ticker].loc[index_intersection] print(ticker_dfs) # Env config file structure for reference n_assets = 0 n_features = 0 config = { 'initial_balance': 0, 'initial_portfolio': [0]*n_assets, 'tickers': ['']*n_assets, # Tickers to trade, must correspond to tickers in dataframe dict! Implicitly defines number of assets 'indicators': [None]*n_features, # Indicator functions/classes to compute features for each stock, implicitly defines number of features. TODO: Support multidimensional indicators 'max_indicator_lookback': 0, # Number of days after which all indicators can compute proper values 'trading_days': 0, 'start_day_offset': None } class TradingEnv(gym.Env): def __init__(self, env_config): super(TradingEnv, self).__init__() self._env_config = env_config self._tickers = env_config['tickers'] self._indicator_funcs = self._env_config['indicators'] self._max_indicator_lookback = self._env_config['max_indicator_lookback'] # Number of days after which all indicators can compute proper values self._n_assets = len(self._tickers) self._n_features = len(self._indicator_funcs) assert self._n_assets != 0, 'Number of assets must not be zero!' assert self._n_features != 0, 'Number of features must not be zero!' 
self._df_dict = env_config['df_dict'] # Daily OHCL data for each stock, indexed and aligned by day self._days = self._df_dict[self._tickers[0]].index self._trading_days = env_config['trading_days'] # Number of days the algorithm will be trading self._start_day_idx = env_config['start_day_offset'] # Offset of the first trading day from the first dataframe day if self._start_day_idx is not None: assert self._start_day_idx >= self._max_indicator_lookback, 'start_day_offset must be larger than max_indicator_lookback in order to properly initialise all indicators' assert self._start_day_idx + self._trading_days <= len(self._days), 'start_day_idx + trading_days must be lower than the number of days' else: self._start_day_idx assert self._trading_days + self._max_indicator_lookback <= len(self._days) ,'The sum of trading_days + max_indicator_lookback must be lower than the number of days in the dataframe' self._initial_balance = self._env_config['initial_balance'] self._initial_portfolio = self._env_config['initial_portfolio'] if self._env_config['initial_portfolio'] is not None else [0] * self._n_assets assert len(self._initial_portfolio) == self._n_assets, 'Size of initial portfolio must equal the number of assets!' action_shape = (self._n_assets + 1,) obs_shape = (self._n_features*self._n_assets + 1,) self.action_space = gym.spaces.Box(np.full(action_shape, 0), np.full(action_shape, 1), shape=action_shape, dtype=np.float16) # Action space is the assets + cash for rebalancing self.observation_space = gym.spaces.Box(np.full(obs_shape, 0), np.inf, shape=obs_shape, dtype=np.float16) # Observation space is all the features for each asset + cash self.max_episode_steps = self._trading_days def reset(self): self._balance = self._initial_balance self._portfolio = self._initial_portfolio if self._start_day_idx is None: self._start_day_idx = np.random.randint(self._max_indicator_lookback, len(self._days) - self._trading_days) # If no start day chosen, generate a random start self._cur_day_idx = self._start_day_idx self._cur_day = self._days[self._cur_day_idx] self._cur_day_idx += 1 # Advance one day indicators = self._compute_indicators(self._cur_day) # Compute the indicators for the start date return np.append(indicators, self._balance) # Observation is number of indicators * number of assets + 1 def _compute_indicators(self, day): features = np.empty((self._n_features*self._n_assets,)) for (i, ticker) in enumerate(self._tickers): for (j, indicator) in enumerate(self._indicator_funcs): ticker_frame_slice = self._df_dict[ticker].loc[self._days[self._start_day_idx] - pd.Timedelta(days=1)*self._max_indicator_lookback:(day + pd.Timedelta(days=1))] # Get the relevant dataframe up until this day (inclusive) features[i*self._n_features + j] = indicator(ticker_frame_slice) return features def _asset_prices(self, day): # Use open prices on the current day prices = np.empty((self._n_assets,)) for i, ticker in enumerate(self._tickers): prices[i] = self._df_dict[ticker].loc[day].open return prices def _portfolio_val(self, portfolio, balance, day): return np.dot(self._asset_prices(self._cur_day), portfolio) + balance def _rebalance(self, actions): # TODO: Test this more to see if it makes sense weightings = self._softmax(actions) # First weight is for cash prices = self._asset_prices(self._cur_day) # Get the open prices of assets on the current day portfolio_val = np.dot(prices, self._portfolio) + self._balance return (portfolio_val*np.divide(weightings[1:], prices), portfolio_val*weightings[0]) # Rebalanced 
portfolio in the form of (assets, cash) def _reward(self): # For now just compute the increase in portfolio value return 1 - self._portfolio_val(self._portfolio, self._balance, self._cur_day) / self._portfolio_val(self._initial_portfolio, self._initial_balance, self._days[self._start_day_idx]) def step(self, action): self._cur_day = self._days[self._cur_day_idx] #print('Day: ' + str(self._cur_day)) (self._portfolio, self._balance) = self._rebalance(action) obs = np.append(self._compute_indicators(self._cur_day), self._balance) rw = self._reward() done = (self._cur_day_idx - self._start_day_idx) >= self._trading_days info = {} # TODO: Add info here self._cur_day_idx += 1 # Advance one day return obs, rw, done, info def _softmax(self, x): """Compute softmax values for each sets of scores in x.""" e_x = np.exp(x - np.max(x)) return e_x / e_x.sum() close_indicator = lambda df: df.close[-1] n_assets = len(ticker_dfs.keys()) env_config = { 'initial_balance': 1E6, 'initial_portfolio': [0]*n_assets, 'tickers': list(ticker_dfs.keys()), # Tickers to trade, must correspond to tickers in dataframe dict! Implicitly defines number of assets 'indicators': [close_indicator], # Indicator functions/classes to compute features for each stock, implicitly defines number of features. TODO: Support multidimensional indicators 'max_indicator_lookback': 0, # Number of days after which all indicators can compute proper values 'trading_days': 100, 'start_day_offset': None, 'df_dict': ticker_dfs } import ray.rllib.agents.ppo as ppo import ray.rllib.models.catalog as catalog import ray.tune as tune from ray.tune.logger import pretty_print config = ppo.DEFAULT_CONFIG.copy() config["num_gpus"] = 0 config["num_workers"] = 5 config["rollout_fragment_length"] = 100 config["train_batch_size"] = 500 #config["framework"] = "torch" config["env_config"] = env_config config["log_level"] = "DEBUG" config["env"] = TradingEnv model_config = catalog.MODEL_DEFAULTS.copy() model_config["use_lstm"] = True model_config["max_seq_len"] = 100 #trainer = ppo.PPOTrainer(config=config, env=TradingEnv) tune.run(ppo.PPOTrainer, stop={"training_iteration": 100}, config=config, local_dir='ray_results') """ # Can optionally call trainer.restore(path) to load a checkpoint. for i in range(1000): # Perform one iteration of training the policy with PPO result = trainer.train() print(pretty_print(result)) checkpoint = trainer.save() print("checkpoint saved at", checkpoint)"""
_____no_output_____
Apache-2.0
Pavol/ray_test.ipynb
AidanMar/reinforcement_learning
Web Scraping
#Importing the essential libraries #Beautiful Soup is a Python library for pulling data out of HTML and XML files #The Natural Language Toolkit import requests import nltk nltk.download('wordnet') from nltk.stem import WordNetLemmatizer from nltk.sentiment.vader import SentimentIntensityAnalyzer from bs4 import BeautifulSoup import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline import random from wordcloud import WordCloud from html.parser import HTMLParser import bs4 as bs import urllib.request import re import string #we are using request package to make a GET request for the website, which means we're getting data from it. r=requests.get('http://gflenv.com/about-us/') #Setting the correct text encoding of the HTML page r.encoding = 'utf-8' #Extracting the HTML from the request object html = r.text # Printing the first 500 characters in html print(html[:500]) # Creating a BeautifulSoup object from the HTML soup = BeautifulSoup(html) # Getting the text out of the soup text = soup.get_text() #total length len(text) text=text[5000:10000] text_nopunct='' text_nopunct= "".join([char for char in text if char not in string.punctuation]) len(text_nopunct) text_nopunct[2375:3980] text_nopunct=text_nopunct.strip('\n') text_nopunct=text_nopunct.strip('\n\n') text_nopunct=text_nopunct.strip('\n\n\n') text_nopunct[2375:3980] #Creating the tokenizer tokenizer = nltk.tokenize.RegexpTokenizer('\w+') #Tokenizing the text tokens = tokenizer.tokenize(text_nopunct) len(tokens) print(tokens[242:262]) #now we shall make everything lowercase for uniformity #to hold the new lower case words words = [] # Looping through the tokens and make them lower case for word in tokens: words.append(word.lower()) print(words[242:304]) #Stop words are generally the most common words in a language. #English stop words from nltk. stopwords = nltk.corpus.stopwords.words('english') words_new = [] #Now we need to remove the stop words from the words variable #Appending to words_new all words that are in words but not in sw for word in words: if word not in stopwords: words_new.append(word) len(words_new)
_____no_output_____
MIT
GFL Environmental Text Analysis.ipynb
SayantiDutta2000/Financial-Analysis
LemmatizationLemmatization is the algorithmic process of finding the lemma of a word based on its intended meaning. Lemmatization usually refers to the morphological analysis of words, which aims to remove inflectional endings.
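As a quick illustration (added; the example words below are arbitrary and not taken from the scraped text), the WordNet lemmatizer maps inflected forms back to their dictionary form:

```python
from nltk.stem import WordNetLemmatizer

# Assumes the 'wordnet' data downloaded earlier in this notebook.
wn = WordNetLemmatizer()
print(wn.lemmatize("cars"))              # 'car'   (default part of speech is noun)
print(wn.lemmatize("studies"))           # 'study'
print(wn.lemmatize("running", pos="v"))  # 'run'   (verbs need pos='v')
```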
from nltk.stem import WordNetLemmatizer wn = WordNetLemmatizer() lem_words=[] for word in words_new: word=wn.lemmatize(word) lem_words.append(word) len(lem_words) same=0 diff=0 for i in range(0,416): if(lem_words[i]==words_new[i]): same=same+1 elif(lem_words[i]!=words_new[i]): diff=diff+1 print('Number of words Lemmatized=', diff) print('Number of words not Lemmatized=', same) #The frequency distribution of the words freq_dist = nltk.FreqDist(lem_words) #Frequency Distribution Plot plt.subplots(figsize=(20,12)) freq_dist.plot(30) #converting into string res=' '.join([i for i in lem_words if not i.isdigit()]) plt.subplots(figsize=(16,10)) wordcloud = WordCloud( background_color='black', max_words=100, width=1400, height=1200 ).generate(res) plt.imshow(wordcloud) plt.title('GFL Environmental (100 words)') plt.axis('off') plt.show() plt.subplots(figsize=(16,10)) wordcloud = WordCloud( background_color='black', max_words=200, width=1400, height=1200 ).generate(res) plt.imshow(wordcloud) plt.title('GFL Environmental (200 words)') plt.axis('off') plt.show()
_____no_output_____
MIT
GFL Environmental Text Analysis.ipynb
SayantiDutta2000/Financial-Analysis
Everything is the same as in alfabattle_1_parq2, except that event_category is used instead of event_name
import numpy as np import pandas as pd import gc import os import re df = pd.read_csv("../input/alfabattle1/alfabattle2_abattle_train_target.csv") parq0 = pd.read_parquet('../your_parqet0.parquet') parq0['timestamp'] = pd.to_datetime(parq0['timestamp']) parq0 = parq0.sort_values(by=['client', 'timestamp']) parq0 = parq0.merge(df[['session_id', 'client_pin', 'multi_class_target']], left_on=['client', 'session_id'], right_on=['client_pin', 'session_id'], how='left') parq0.drop(['client_pin'], axis=1, inplace=True) parq0['session_id'].loc[parq0['multi_class_target'].isna()] = np.nan parq0['session_id'] = parq0.groupby('client')['session_id'].ffill() parq0['multi_class_target'] = parq0.groupby('client')['multi_class_target'].ffill() parq0['session_id'].dropna(inplace=True) top_event = parq0['event_category'].value_counts(normalize=True)[:60].index concat_list = [] for i in range(10): print(i) parq0 = pd.read_parquet(f'../your_parqet{i}.parquet') parq0['timestamp'] = pd.to_datetime(parq0['timestamp']) parq0 = parq0.sort_values(by=['client', 'timestamp']) parq0 = parq0.merge(df[['session_id', 'client_pin', 'multi_class_target']], left_on=['client', 'session_id'], right_on=['client_pin', 'session_id'], how='left') parq0.drop(['client_pin'], axis=1, inplace=True) parq0['session_id'].loc[parq0['multi_class_target'].isna()] = np.nan parq0['session_id'] = parq0.groupby('client')['session_id'].ffill() parq0['multi_class_target'] = parq0.groupby('client')['multi_class_target'].ffill() parq0['session_id'].dropna(inplace=True) parq0.drop(['device_is_webview', 'page_urlhost', 'page_urlpath_full', 'net_connection_type', 'net_connection_tech', 'application_id'], axis=1, inplace=True) for event in top_event: parq0[event] = (parq0['event_category'] == event).astype('int16') parq0.drop(['timestamp', 'event_type', 'event_category', 'event_name', 'event_label', 'device_screen_name', 'timezone', 'multi_class_target'], axis=1, inplace=True) df_group = parq0.rename({'client':'client_pin'}, axis=1).groupby(['client_pin', 'session_id']).sum() concat_list.append(df_group) del df_group del parq0 gc.collect() df_con = pd.concat(concat_list) df = df.merge(df_con, how='left', on=['client_pin', 'session_id']) df.to_csv('alfa1_train_expend4.csv', index=False)
_____no_output_____
MIT
1.data preprocessing/.ipynb_checkpoints/alfabattle_1_parq3-checkpoint.ipynb
Aindstorm/alfabattle2_1stproblem
COMP 135 day03: Training Linear Regression Models via Analytical Formulas Objectives* Learn how to apply the standard "least squares" formulas for 'training' linear regression in 1 dimension* Learn how to apply the standard "least squares" formulas for 'training' linear regression in many dimensions (with matrix math)* Learn how these formulas minimize *mean squared error*, but maybe not other error metrics Outline* [Part 1: Simplest Linear Regression](part1) with 1-dim features, estimate slope only* * Exercise 1a: When does the formula fail?* * Exercise 1b: Can you show graphically the formula minimizes mean squared error?* * Exercise 1c: What would be optimal weight to minimize mean absolute error?* [Part 2: Simple Linear Regression](part2) with 1-dim features, slope and intercept* [Part 3: General case of linear regression with F features](part3)* [Part 4: What is a matrix inverse?](part4)* [Part 5: When can we trust numerical computation of the inverse?](part5)* [Part 6: General case of linear regression with F features, using numerically stable formulas](part6) Takeaways* Exact formulas exist for estimating the weight coefficients $w$ and bias/intercept $b$ for linear regression* * When $F=1$, just involves ratios of inner products* * When $F>1$, requires matrix multiplication and other operations, solving a linear system with $F+1$ unknowns ($F$ weights and 1 bias)* Prefer `np.linalg.solve` over `np.linalg.inv`.* * Numerical methods for computing inverses (like `np.linalg.inv`) are unreliable if the matrix $A$ is almost singular.* Linear algebra is a very important field of mathematics for understanding when a solution to a linear system of equations exists* These formulas minimize *mean squared error*, but likely may not minimize other error metrics* * Many ML methods are motivated by what is *mathematically convenient*.* * In practice, you should *definitely* consider if another objective is better for your regression task* * * Absolute error? Import libraries
import numpy as np import pandas as pd import sklearn # import plotting libraries import matplotlib import matplotlib.pyplot as plt %matplotlib inline plt.style.use('seaborn') # pretty matplotlib plots import seaborn as sns sns.set('notebook', style='whitegrid', font_scale=1.25) true_slope = 2.345 N = 7 x_N = np.linspace(-1, 1, N); y_N = true_slope * x_N prng = np.random.RandomState(33) ynoise_N = y_N + 0.7 * prng.randn(N) plt.plot(x_N, y_N, 'bs-', label='y') plt.plot(x_N, ynoise_N, 'rs-', label='y + noise'); plt.legend(loc='lower right');
_____no_output_____
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Part 1: Simplest Linear Regression with 1-dim features and only slopeEstimate slope only. We assume the bias/intercept is fixed to zero. Exact formula to estimate "least squares" solution w in 1D:$$w^* = \frac{\sum_n x_n y_n}{\sum_n x_n^2} = \frac{ \mathbf{x}^T \mathbf{y} }{ \mathbf{x}^T \mathbf{x} }$$ Estimate w using the 'true', noise-free y value
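Before the estimates below, here is a short added derivation (not part of the original notebook) of why this formula minimizes mean squared error: set the derivative of the error with respect to $w$ to zero and solve.

$$
J(w) = \frac{1}{N} \sum_{n=1}^N (w x_n - y_n)^2,
\qquad
\frac{dJ}{dw} = \frac{2}{N} \sum_{n=1}^N x_n (w x_n - y_n) = 0
\;\Rightarrow\;
w^* = \frac{\sum_n x_n y_n}{\sum_n x_n^2}
$$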
w_est = np.inner(x_N, y_N) / np.inner(x_N, x_N) print(w_est)
2.3449999999999998
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Estimate w using the noisy y values
w_est = np.inner(x_N, ynoise_N) / np.inner(x_N, x_N) print(w_est)
2.7606807490017067
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Exercise 1a: What if all examples had $x_n = 0$?What would happen? What does the algebra of the formula suggest? Exercise 1b: Can you show graphically that this minimizes *mean squared error*?
def predict_1d(x_N, w): # TODO fix me return 0.0 def calc_mean_squared_error(yhat_N, y_N): # TODO fix me return 0.0 G = 30 w_candidates_G = np.linspace(-3, 6, G) error_G = np.zeros(G) for gg, w in enumerate(w_candidates_G): yhat_N = predict_1d(x_N, w) error_G[gg] = calc_mean_squared_error(yhat_N, ynoise_N) plt.plot(w_candidates_G, error_G, 'r.-', label='objective function'); plt.plot(w_est * np.ones(2), np.asarray([0, 1]), 'b:', label='optimal weight value'); plt.xlabel('w'); plt.ylabel('mean squared error'); plt.legend()
_____no_output_____
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Exercise 1c: What about *mean absolute error*?Does the least-squares estimate of $w$ minimize mean absolute error for this example?
def calc_mean_abs_error(yhat_N, y_N): # TODO fixme return 0.0 G = 100 w_candidates_G = np.linspace(1, 4, G) error_G = np.zeros(G) for gg, w in enumerate(w_candidates_G): yhat_N = predict_1d(x_N, w) error_G[gg] = calc_mean_abs_error(yhat_N, ynoise_N) plt.plot(w_candidates_G, error_G, 'r.-', label='objective function'); plt.plot(w_est * np.ones(2), np.asarray([0, 1]), 'b:', label='optimal weight value'); plt.xlabel('w'); plt.ylabel('mean absolute error');
_____no_output_____
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Part 2: Simpler Linear Regression with slope and biasGoal: estimate slope $w$ and bias $b$ Then the best estimates of the slope and intercept are given by:$$w^* = \frac{ \sum_{n=1}^N (x_n - \bar{x}) (y_n - \bar{y}) }{\sum_{n=1}^N (x_n - \bar{x})^2 }$$and$$b^* = \bar{y} - w^* \bar{x}$$ Using the 'true', noise-free y valueSanity check : we should recover the true-slope, with zero intercept
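Before the sanity check below, a brief added note (a derivation sketch, not from the original notebook) on where these formulas come from: setting the derivative of the mean squared error with respect to $b$ to zero gives the intercept, and substituting it back reduces the slope estimate to the Part 1 formula applied to mean-centered data.

$$
\frac{\partial}{\partial b} \frac{1}{N}\sum_{n=1}^N (w x_n + b - y_n)^2 = 0
\;\Rightarrow\;
b = \bar{y} - w \bar{x},
\qquad
w x_n + b - y_n = w (x_n - \bar{x}) - (y_n - \bar{y})
\;\Rightarrow\;
w^* = \frac{\sum_n (x_n - \bar{x})(y_n - \bar{y})}{\sum_n (x_n - \bar{x})^2}
$$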
xbar = np.mean(x_N) ybar = np.mean(y_N) w_est = np.inner(x_N - xbar, y_N - ybar) / np.inner(x_N - xbar, x_N - xbar) print(w_est) b_est = ybar - w_est * xbar print(b_est)
2.3449999999999998 -1.7938032012157886e-16
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Using the noisy y value
xbar = np.mean(x_N) ybar = np.mean(ynoise_N) w_est = np.inner(x_N - xbar, ynoise_N - ybar) / np.inner(x_N - xbar, x_N - xbar) b_est = ybar - w_est * xbar print("Estimated slope: " + str(w_est)) print("Estimated bias: " + str(b_est))
Estimated slope: 2.7606807490017067 Estimated bias: -0.4138756764186623
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Part 3: General case of Linear RegressionGoal:* estimate the vector $w \in \mathbb{R}^F$ of weight coefficients* estimate the bias scalar $b$ (aka intercept) Given a dataset of $N$ examples and $F$ feature dimensions, where* $\tilde{\mathbf{X}}$ is an $N \times F +1$ matrix of feature vectors, where we'll assume the last column is all ones* $\mathbf{y}$ is an $N \times 1$ column vector of outputsRemember that the formula is: $$\theta^* = (\tilde{\mathbf{X}}^T \tilde{\mathbf{X}} )^{-1} \tilde{\mathbf{X}}^T \mathbf{y}\\~\\w^* = [\theta^*_1 ~ \theta^*_2 \ldots \theta^*_F ]^T\\~\\b^* = \theta^*_{F+1}$$We need to compute a *matrix inverse* to do this.Let's try this out. Step by step.First, print out the $\tilde{X}$ array
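(Added note before the step-by-step walkthrough, not from the original notebook.) This formula is just the solution of the *normal equations*, obtained by setting the gradient of the squared error to zero; it is also exactly the linear system that Part 6 hands to `np.linalg.solve`.

$$
\min_{\theta} \; \lVert \tilde{\mathbf{X}} \theta - \mathbf{y} \rVert_2^2
\quad\Rightarrow\quad
2 \tilde{\mathbf{X}}^T (\tilde{\mathbf{X}} \theta - \mathbf{y}) = 0
\quad\Rightarrow\quad
\tilde{\mathbf{X}}^T \tilde{\mathbf{X}} \, \theta = \tilde{\mathbf{X}}^T \mathbf{y}
$$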
x_N1 = x_N[:,np.newaxis] xtilde_N2 = np.hstack([x_N1, np.ones((x_N.size, 1))]) print(xtilde_N2)
[[-1. 1. ] [-0.66666667 1. ] [-0.33333333 1. ] [ 0. 1. ] [ 0.33333333 1. ] [ 0.66666667 1. ] [ 1. 1. ]]
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Next, print out the $y$ array
print(ynoise_N)
[-2.56819745 -2.68541972 -1.85631918 -0.39928063 0.62995686 1.74174534 2.24038504]
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Next, lets compute the matrix product $\tilde{X}^T \tilde{X}$, which is a $2 \times 2$ matrix
xTx_22 = np.dot(xtilde_N2.T, xtilde_N2) print(xTx_22)
[[ 3.11111111e+00 -2.22044605e-16] [-2.22044605e-16 7.00000000e+00]]
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Next, lets compute the INVERSE of $\tilde{X}^T \tilde{X}$, which is again a $2 \times 2$ matrix
inv_xTx_22 = np.linalg.inv(xTx_22) # compute the inverse! print(inv_xTx_22)
[[3.21428571e-01 1.01959257e-17] [1.01959257e-17 1.42857143e-01]]
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Next, let's compute the optimal $\theta$ vector according to our formula above
theta_G = np.dot(inv_xTx_22, np.dot(xtilde_N2.T, ynoise_N[:,np.newaxis])) # compute theta vector print(theta_G) print("Estimated slope: " + str(theta_G[0])) print("Estimated bias: " + str(theta_G[1]))
Estimated slope: [2.76068075] Estimated bias: [-0.41387568]
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
We should get the SAME results as in our simpler LR case in Part 2. So this formula for the general case looks super easy, right? Not so fast...Let's take a minute and review just what the heck an *inverse* is, before we just blindly implement this formula... Part 4: Linear Algebra Review: What is the inverse of a matrix? Let $A$ be a square matrix with shape $(D, D)$.We say that matrix $A^{-1}$ is the *inverse* of $A$ if the product of $A$ and $A^{-1}$ yields the $D \times D$ *identity* matrix:$$A A^{-1} = I$$If $A^{-1}$ exists, it will also be a $D\times D $ square matrix.In Python, we can compute the inverse of a matrix using `np.linalg.inv`
# Define a square matrix with shape(3,3) A = np.diag(np.asarray([1., -2., 3.])) print(A) # Compute its inverse invA = np.linalg.inv(A) print(invA) np.dot(A, invA) # should equal identity
_____no_output_____
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Remember, in 1 dimensions, the inverse of $a$ is just $1/a$, since $a \cdot \frac{1}{a} = 1.0$
A = np.asarray([[2]]) print(A) invA = np.linalg.inv(A) print(invA)
[[0.5]]
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Does the inverse always exist?No! Remember:* Even when $D=1$, if $A=0$, then the inverse does not exist ($\frac{1}{A}$ is undefined)* When $D \geq 2$, there are *infinitely many* square matrices $A$ that do not have an inverse
# Example 1: A = np.asarray([[0, 0], [0, 1.337]]) print(A) try: np.linalg.inv(A) except Exception as e: print(str(type(e)) + ": " + str(e)) # Example 2: A = np.asarray([[3.4, 3.4], [3.4, 3.4]]) print(A) try: np.linalg.inv(A) except Exception as e: print(str(type(e)) + ": " + str(e)) # Example 3: A = np.asarray([[-1.2, 4.7], [-2.4, 9.4]]) print(A) try: np.linalg.inv(A) except Exception as e: print(str(type(e)) + ": " + str(e))
[[-1.2 4.7] [-2.4 9.4]] <class 'numpy.linalg.LinAlgError'>: Singular matrix
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
What do these examples have in common???The columns of $A$ are not linearly independent!In other words, $A$ is not invertible whenever we can exactly construct one column of $A$ by a linear combination of other columns$$A_{:,D} = c_1 A_{:,1} + c_2 A_{:,2} + \ldots c_{D-1} A_{:,D-1}$$where $c_1$, $c_2$, $\ldots c_{D-1}$ are scalar weights.
# Look, here's the first column: A[:, 0] # And here's it being perfectly reconstructed by a scalar times the second column A[:, 1] * -1.2/4.7 # Example 3: A = np.asarray([[1.0, 2.0, -3.0], [2, 4, -6.0], [1.0, 1.0, 1.0]]) print(A) try: np.linalg.inv(A) except Exception as e: print(str(type(e)) + ": " + str(e))
[[ 1. 2. -3.] [ 2. 4. -6.] [ 1. 1. 1.]] <class 'numpy.linalg.LinAlgError'>: Singular matrix
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Important result from linear algebra: Invertible Matrix TheoremGiven a specific matrix $A$, the following statements are either *all* true or *all* false:* $A$ has an inverse (e.g. a matrix $A^{-1}$ exists s.t. $A A^{-1} = I$)* All $D$ columns of $A$ are linearly independent* The columns of $A$ span the space $\mathbb{R}^D$* $A$ has a non-zero determinantFor more implications, see the *Invertible Matrix Theorem*: Part 5: Is the numerical inverse reliable?Can we always trust the results of `np.linalg.inv`?Not really. Taking inverses is very tricky if the input matrix is not *very* well conditioned. A "good" example, where inverse works
# 3 indep rows of size 3. x_NF = np.random.randn(3, 3) xTx_FF = np.dot(x_NF.T, x_NF) np.linalg.inv(np.dot(x_NF.T, x_NF)) # First, verify the `inv` function computes *something* of the right shape inv_xTx_FF = np.linalg.inv(xTx_FF) print(inv_xTx_FF) # Next, verify the `inv` function result is ACTUALLY the inverse ans_FF = np.dot(xTx_FF, inv_xTx_FF) print(ans_FF) print("\nis this close enough to identity matrix? " + str( np.allclose(ans_FF, np.eye(3))))
[[ 1.00000000e+00 -4.33114505e-16 3.73409184e-16] [ 3.94146478e-16 1.00000000e+00 -4.54028728e-16] [ 1.59364741e-16 8.07877021e-16 1.00000000e+00]] is this close enough to identity matrix? True
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
A *bad* example, where `np.linalg.inv` may be unreliable
# Only 2 indep rows of size 3. should NOT be invertible # verify: determinant is close to zero x_NF = np.random.randn(2, 3) xTx_FF = np.dot(x_NF.T, x_NF) xTx_FF # First, verify the `inv` function computes *something* of the right shape inv_xTx_FF = np.linalg.inv(xTx_FF) print(inv_xTx_FF) # Next, verify the `inv` function result is ACTUALLY the inverse ans_FF = np.dot(xTx_FF, inv_xTx_FF) print(ans_FF) print("\nis this close enough to identity matrix? " + str( np.allclose(ans_FF, np.eye(3))))
[[ 1.4520932 0.49807897 -0.24114104] [-0.20159063 -0.27703514 -1.15131685] [-1.53776313 -0.71804165 -0.59193815]] is this close enough to identity matrix? False
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
What just happened?We just asked for an inverse.NumPy gave us a result that WAS NOT AN INVERSE, but we received NO WARNINGS OR ERRORS!So what should we do? Avoid naively calling `np.linalg.inv` and trusting the result. A better thing to do is use `np.linalg.solve`, as this will be more *stable* (trustworthy). What `np.linalg.solve(A, b)` does is that it uses DIFFERENT algorithm to directly return an answer to the questionWhat vector $\theta$ would be a valid solution to the equation$$A \theta = b$$for some matrix $A$ and vector $b$So for our case, we are requesting a solution (a specific vector $\theta$) to the equation$$\tilde{X}^T \tilde{X} \theta = \tilde{X}^T y$$ Part 6: Returning to general case linear regressionConstruct a simple case with $N=2$ examples and $F=2$ features.For general linear regression, this is an UNDER-determined system (we have 3 unknowns, but only 2 examples).
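Before constructing that example, here is a tiny standalone illustration (added; the matrix and right-hand side below are made up for demonstration) of what `np.linalg.solve` returns:

```python
import numpy as np

# Solve A @ theta = b directly, without ever forming an explicit inverse.
A = np.asarray([[3.0, 1.0],
                [1.0, 2.0]])
b = np.asarray([9.0, 8.0])

theta = np.linalg.solve(A, b)
print(theta)             # [2. 3.]
print(np.dot(A, theta))  # [9. 8.], i.e. we recover b
```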
true_w_F1 = np.asarray([1.0, 1.0])[:,np.newaxis] true_b = np.asarray([0.0]) x_NF = np.asarray([[1.0, 2.0], [1.0, 1.0]]) + np.random.randn(2,2) * 0.001 print(x_NF) y_N1 = np.dot(x_NF, true_w_F1) + true_b print(y_N1)
[[2.99948664] [1.9999558 ]]
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Punchline: there should be INFINITELY many weights $w$ and bias values $b$ that can reconstruct our $y$ **perfectly**Question: Can various estimation strategies find such weights? Try out sklearn
import sklearn.linear_model lr = sklearn.linear_model.LinearRegression() lr.fit(x_NF, y_N1)
_____no_output_____
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Print the estimated weights $w$ and intercept $b$
print(lr.coef_) print(lr.intercept_)
[[0.0013517 1.00134805]] [0.99740683]
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Print the predicted values for $y$, alongside the *true* ones
print("Results for sklearn") print("Predicted y: " + str(np.squeeze(lr.predict(x_NF)))) print("True y: " + str(np.squeeze(y_N1)))
Results for sklearn Predicted y: [2.99948664 1.9999558 ] True y: [2.99948664 1.9999558 ]
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Prep for our formulas: make the $\tilde{\mathbf{X}}$ arrayWill have shape $N \times (F+1)$Let's define $G = F+1$
xtilde_NG = np.hstack([x_NF, np.ones((2, 1))]) print(xtilde_NG) xTx_GG = np.dot(xtilde_NG.T, xtilde_NG)
_____no_output_____
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Try out using our least-squares formula, as implemented with `np.linalg.inv`
inv_xTx_GG = np.linalg.inv(xTx_GG) theta_G1 = np.dot(inv_xTx_GG, np.dot(xtilde_NG.T, y_N1))
_____no_output_____
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Best estimate of the weights and bias (after "unpacking" the vector $\theta$):
w_F = theta_G1[:-1, 0] b = theta_G1[-1] print(w_F) print(b) yhat_N1 = np.dot(xtilde_NG, theta_G1) print("Results for using naive np.linalg.inv") print("Predicted y: " + str(yhat_N1[:,0])) print("True y: " + str(y_N1[:,0]))
Results for using naive np.linalg.inv Predicted y: [1.99803269 0.99984927] True y: [2.99948664 1.9999558 ]
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Expected result: you should see that predictions might be *quite far* from true y values! Try out using our formulas, as implemented with `np.linalg.solve`What should happen: We can find estimated parameters $w, b$ that perfectly predict the $y$
theta_G1 = np.linalg.solve(xTx_GG, np.dot(xtilde_NG.T, y_N1)) w_F = theta_G1[:-1,0] b = theta_G1[-1,0] print(w_F) print(b) yhat_N1 = np.dot(xtilde_NG, theta_G1) print("Results for using more stable formula implementation with np.linalg.solve") print("Predicted y: " + str(yhat_N1[:,0])) print("True y: " + str(y_N1[:,0]))
Results for using more stable formula implementation with np.linalg.solve Predicted y: [2.99948664 1.9999558 ] True y: [2.99948664 1.9999558 ]
MIT
labs/day03_LinearRegression_ExactFormulasForModelTraining.ipynb
ypark12/comp135-20f-assignments
Introduction to DebuggingIn this book, we want to explore _debugging_ - the art and science of fixing bugs in computer software. In particular, we want to explore techniques that _automatically_ answer questions like: Where is the bug? When does it occur? And how can we repair it? But before we start automating the debugging process, we first need to understand what this process is.In this chapter, we introduce basic concepts of systematic software debugging and the debugging process, and at the same time get acquainted with Python and interactive notebooks.
from bookutils import YouTubeVideo, quiz YouTubeVideo("bCHRCehDOq0")
_____no_output_____
MIT
Intro_Debugging.ipynb
doscac/ReducingCode
**Prerequisites*** The book is meant to be a standalone reference; however, a number of _great books on debugging_ are listed at the end,* Knowing a bit of _Python_ is helpful for understanding the code examples in the book. Synopsis To [use the code provided in this chapter](Importing.ipynb), write```python>>> from debuggingbook.Intro_Debugging import <identifier>```and then make use of the following features. In this chapter, we introduce some basics of how failures come to be as well as a general process for debugging. A Simple Function Your Task: Remove HTML Markup Let us start with a simple example. You may have heard of how documents on the Web are made out of text and HTML markup. HTML markup consists of _tags_ in angle brackets that surround the text, providing additional information on how the text should be interpreted. For instance, in the HTML text```htmlThis is <em>emphasized</em>.```the word "emphasized" is enclosed in the HTML tags `<em>` (start) and `</em>` (end), meaning that it should be interpreted (and rendered) in an emphasized way – typically in italics. In your environment, the HTML text gets rendered as> This is emphasized. There are HTML tags for pretty much everything – text markup (bold text, strikethrough text), text structure (titles, lists), references (links) to other documents, and many more. These HTML tags shape the Web as we know it. However, within all the HTML markup, it may become difficult to actually access the _text_ that lies within. We'd like to implement a simple function that removes _HTML markup_ and converts it into text. If our input is```htmlHere's some <strong>strong argument</strong>.```the output should be> Here's some strong argument. Here's a Python function which does exactly this. It takes a (HTML) string and returns the text without markup.
def remove_html_markup(s): tag = False out = "" for c in s: if c == '<': # start of markup tag = True elif c == '>': # end of markup tag = False elif not tag: out = out + c return out
_____no_output_____
MIT
Intro_Debugging.ipynb
doscac/ReducingCode
This function works, but not always. Before we start debugging things, let us first explore its code and how it works. Understanding Python Programs If you're new to Python, you might first have to understand what the above code does. We very much recommend the [Python tutorial](https://docs.python.org/3/tutorial/) to get an idea of how Python works. The most important things for you to understand the above code are these three:1. Python structures programs through _indentation_, so the function and `for` bodies are defined by being indented;2. Python is _dynamically typed_, meaning that the type of variables like `c`, `tag`, or `out` is determined at run-time.3. Most of Python's syntactic features are inspired by other common languages, such as control structures (`while`, `if`, `for`), assignments (`=`), or comparisons (`==`, `!=`, `<`). With that, you can already understand what the above code does: `remove_html_markup()` takes a (HTML) string `s` and then iterates over the individual characters (`for c in s`). By default, these characters are added to the return string `out`. However, if `remove_html_markup()` finds a `<` character (the start of some markup), it stops adding characters to `out` until a `>` character is found. In contrast to other languages, Python makes no difference between strings and characters – there's only strings. As in HTML, strings can be enclosed in single quotes (`'a'`) and in double quotes (`"a"`). This is useful if you want to specify a string that contains quotes, as in `'She said "hello", and then left'` or `"The first character is a 'c'"` Running a Function To find out whether `remove_html_markup()` works correctly, we can *test* it with a few values. For the string```htmlHere's some <strong>strong argument</strong>.```for instance, it produces the correct value:
remove_html_markup("Here's some <strong>strong argument</strong>.")
_____no_output_____
MIT
Intro_Debugging.ipynb
doscac/ReducingCode
Interacting with NotebooksIf you are reading this in the interactive notebook, you can try out `remove_html_markup()` with other values as well. Click on the above cells with the invocation of `remove_html_markup()` and change the value – say, to `remove_html_markup("foo")`. Press Shift+Enter (or click on the play symbol) to execute it and see the result. If you get an error message, go to the above cell with the definition of `remove_html_markup()` and execute this first. You can also run _all_ cells at once; see the Notebook menu for details. (You can actually also change the text by clicking on it, and corect mistaks such as in this sentence.) Executing a single cell does not execute other cells, so if your cell builds on a definition in another cell that you have not executed yet, you will get an error. You can select `Run all cells above` from the menu to ensure all definitions are set. Also keep in mind that, unless overwritten, all definitions are kept across executions. Occasionally, it thus helps to _restart the kernel_ (i.e. start the Python interpreter from scratch) to get rid of older, superfluous definitions. Testing a Function Since one can change not only invocations, but also definitions, we want to ensure that our function works properly now and in the future. To this end, we introduce tests through _assertions_ – a statement that fails if the given _check_ is false. The following assertion, for instance, checks that the above call to `remove_html_markup()` returns the correct value:
assert remove_html_markup("Here's some <strong>strong argument</strong>.") == \ "Here's some strong argument."
_____no_output_____
MIT
Intro_Debugging.ipynb
doscac/ReducingCode
If you change the code of `remove_html_markup()` such that the above assertion fails, you will have introduced a bug. Oops! A Bug! As nice and simple as `remove_html_markup()` is, it is buggy. Some HTML markup is not properly stripped away. Consider this HTML tag, which would render as an input field in a form:```html<input type="text" value="<your name>">```If we feed this string into `remove_html_markup()`, we would expect an empty string as the result. Instead, this is what we get:
remove_html_markup('<input type="text" value="<your name>">')
_____no_output_____
MIT
Intro_Debugging.ipynb
doscac/ReducingCode
Every time we encounter a bug, this means that our earlier tests have failed. We thus need to introduce another test that documents not only how the bug came to be, but also the result we actually expected. The assertion we write now fails with an error message. (The `ExpectError` magic ensures we see the error message, but the rest of the notebook is still executed.)
from ExpectError import ExpectError with ExpectError(): assert remove_html_markup('<input type="text" value="<your name>">') == ""
Traceback (most recent call last): File "<ipython-input-7-c7b482ebf524>", line 2, in <module> assert remove_html_markup('<input type="text" value="<your name>">') == "" AssertionError (expected)
MIT
Intro_Debugging.ipynb
doscac/ReducingCode
With this, we now have our task: _Fix the failure as above._ Visualizing Code To properly understand what is going on here, it helps to draw a diagram of how `remove_html_markup()` works. Technically, `remove_html_markup()` implements a _state machine_ with two states `tag` and `¬ tag`. We change between these states depending on the characters we process. This is visualized in the following diagram:
from graphviz import Digraph, nohtml from IPython.display import display # ignore PASS = "✔" FAIL = "✘" PASS_COLOR = 'darkgreen' # '#006400' # darkgreen FAIL_COLOR = 'red4' # '#8B0000' # darkred STEP_COLOR = 'peachpuff' FONT_NAME = 'Raleway' # ignore def graph(comment="default"): return Digraph(name='', comment=comment, graph_attr={'rankdir': 'LR'}, node_attr={'style': 'filled', 'fillcolor': STEP_COLOR, 'fontname': FONT_NAME}, edge_attr={'fontname': FONT_NAME}) # ignore state_machine = graph() state_machine.node('Start', ) state_machine.edge('Start', '¬ tag') state_machine.edge('¬ tag', '¬ tag', label=" ¬ '<'\nadd character") state_machine.edge('¬ tag', 'tag', label="'<'") state_machine.edge('tag', '¬ tag', label="'>'") state_machine.edge('tag', 'tag', label="¬ '>'") # ignore display(state_machine)
_____no_output_____
MIT
Intro_Debugging.ipynb
doscac/ReducingCode
You see that we start in the non-tag state (`¬ tag`). Here, for every character that is not `'<'`, we add the character to the output and stay in the same state. If we read a `'<'`, though, we switch into the tag state (`tag`) and stay there, adding nothing, until we read a `'>'` character. A First Fix Let us now look at the above state machine, and process through our input:```html<input type="text" value="<your name>">```So what you can see is: We are interpreting the `'>'` of `"<your name>"` as the closing of the tag. However, this is a quoted string, so the `'>'` should be interpreted as a regular character, not as markup. This is an example of _missing functionality:_ We do not handle quoted characters correctly. We haven't claimed yet to take care of all functionality, so we still need to extend our code. So we extend the whole thing. We set up a special "quote" state which processes quoted inputs in tags until the end of the quoted string is reached. This is what the state machine looks like:
# ignore state_machine = graph() state_machine.node('Start') state_machine.edge('Start', '¬ quote\n¬ tag') state_machine.edge('¬ quote\n¬ tag', '¬ quote\n¬ tag', label="¬ '<'\nadd character") state_machine.edge('¬ quote\n¬ tag', '¬ quote\ntag', label="'<'") state_machine.edge('¬ quote\ntag', 'quote\ntag', label="'\"'") state_machine.edge('¬ quote\ntag', '¬ quote\ntag', label="¬ '\"' ∧ ¬ '>'") state_machine.edge('quote\ntag', 'quote\ntag', label="¬ '\"'") state_machine.edge('quote\ntag', '¬ quote\ntag', label="'\"'") state_machine.edge('¬ quote\ntag', '¬ quote\n¬ tag', label="'>'") # ignore display(state_machine)
_____no_output_____
MIT
Intro_Debugging.ipynb
doscac/ReducingCode
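To make the extended state machine above concrete, here is one possible (added) transcription of the three states into code. This is only an illustrative sketch under the assumptions in the diagram, not the notebook's own next step, and the function name is made up:

```python
def remove_html_markup_with_quotes(s):
    tag = False
    quote = False
    out = ""

    for c in s:
        if c == '<' and not quote:
            tag = True         # start of markup, but only outside a quoted string
        elif c == '>' and not quote:
            tag = False        # end of markup, but only outside a quoted string
        elif c == '"' and tag:
            quote = not quote  # toggle the quote state while inside a tag
        elif not tag:
            out = out + c      # regular character outside markup

    return out

assert remove_html_markup_with_quotes('<input type="text" value="<your name>">') == ""
```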