Dataset columns: markdown (string, 0 to 1.02M chars), code (string, 0 to 832k chars), output (string, 0 to 1.02M chars), license (string, 3 to 36 chars), path (string, 6 to 265 chars), repo_name (string, 6 to 127 chars).
The score will always be an integer since it is based on upvotes and downvotes. Before converting, however, we need to check whether there are any null values.
df.isna().sum()
df[df.isnull().any(axis=1)].head(20)
_____no_output_____
MIT
python/redditscore.ipynb
AlexHartford/redditscore
There are only a small number of null values and they appear to be of little use, so removing them seems to be the best option. Once the null values are removed, we can convert the score to an integer.
df = df.dropna()
df['score'] = df['score'].astype('int')
print(df.shape)
df.head(10)
(1941086, 3)
MIT
python/redditscore.ipynb
AlexHartford/redditscore
Initial Data Analysis

Before getting into handling the comment body, a better understanding of the score column needs to be gained.
df['score'].describe() sns.distplot(df["score"], kde=False)
_____no_output_____
MIT
python/redditscore.ipynb
AlexHartford/redditscore
As seen from the standard deviation and the distribution plot, the data is spread over a very wide range, which makes the dataset skewed. To address this, log scaling can be applied, which might be useful later on.
mask = df["score"] > 0 sns.distplot(np.log1p(df["score"][mask]), kde=False)
_____no_output_____
MIT
python/redditscore.ipynb
AlexHartford/redditscore
The positive scores appear to be skewed with a significant majority of values being equal to 1.
mask = df["score"] < 0 sns.distplot(-np.log1p(-df["score"][mask]), kde=False)
_____no_output_____
MIT
python/redditscore.ipynb
AlexHartford/redditscore
The negative scores also seem a little skewed.

Adding another score column

In order to understand the data better and also to create a logistic regression model, a separate column was created with the values positive, negative, or one. A positive score is anything greater than 1, a negative score is anything less than 1, and one is exactly 1. The reason for this classification is how comments on Reddit work: whenever a comment is made it automatically gets an upvote, and therefore if the score is zero it received a downvote.
df['pn_score'] = ""
for i in df['score'].index:
    if df['score'].at[i] > 1:
        df['pn_score'].at[i] = 'positive'
    elif float(df['score'].at[i]) <= 0:
        df['pn_score'].at[i] = 'negative'
    else:
        df['pn_score'].at[i] = 'one'
df.head(10)

pn_counts = df['pn_score'].value_counts()
print(pn_counts)
pn_counts.plot.bar()
plt.ylabel("Number of Samples", fontsize=16)
positive 1053684 one 739088 negative 148314 Name: pn_score, dtype: int64
MIT
python/redditscore.ipynb
AlexHartford/redditscore
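The row-by-row loop in the cell above works, but it is slow on roughly 1.9 million rows. A vectorized sketch of the same labeling rule (assuming the same `df` and that numpy is available as `np`; not part of the original notebook) would be:

import numpy as np

# Label each score as 'positive' (> 1), 'negative' (<= 0), or 'one' (otherwise, i.e. exactly 1),
# mirroring the loop above in a single vectorized call.
conditions = [df['score'] > 1, df['score'] <= 0]
choices = ['positive', 'negative']
df['pn_score'] = np.select(conditions, choices, default='one')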
Again there is an issue with the distribution here: the majority of the dataset has positive score values, while negative scores are much less frequent.

Logistic Regression Model

A combination of logistic regression and linear regression models will be used. The logistic model will be created based on the categorical score values, so it will predict whether a comment will have a positive score, a negative score, or a score of 1. In order for the comments to be meaningful predictors of score, they first need to be turned into a vector of numerical features. The vectorizer used implements Term Frequency-Inverse Document Frequency (TF-IDF) weighting. Additionally, English stop words were removed from the vector.
log_vect = TfidfVectorizer(max_df=0.95, min_df=5, binary=True, stop_words='english')
text_features = log_vect.fit_transform(df.body)
print(text_features.shape)
list(log_vect.vocabulary_)[:10]

encoder = LabelEncoder()
numerical_labels = encoder.fit_transform(df['pn_score'])
training_X, testing_X, training_y, testing_y = train_test_split(text_features, numerical_labels, stratify=numerical_labels)
print(training_y)

logistic_regression = SGDClassifier(loss="log", penalty="l2", max_iter=250)
logistic_regression.fit(training_X, training_y)
pred_labels = logistic_regression.predict(testing_X)

accuracy = accuracy_score(testing_y, pred_labels)
cm = confusion_matrix(testing_y, pred_labels)
print("Accuracy:", accuracy)
print("Classes:", str(encoder.classes_))
print("Confusion Matrix:")
print(cm)
[2 2 2 ... 0 0 1] Accuracy: 0.5961028042005309 Classes: ['negative' 'one' 'positive'] Confusion Matrix: [[ 0 5712 31367] [ 0 37538 147234] [ 0 11687 251734]]
MIT
python/redditscore.ipynb
AlexHartford/redditscore
Since the data is so skewed, simple random over-sampling was used to increase the number of negative scores. The reason for using over-sampling as opposed to under-sampling is that we did not want to lose any comments that could contribute as predictors. This does, however, run the risk of overfitting the data.
count_pos, count_one, count_neg = df['pn_score'].value_counts()
df_pos_score = df[df['pn_score'] == 'positive']
df_neg_score = df[df['pn_score'] == 'negative']
df_one_score = df[df['pn_score'] == 'one']

df_neg_score_over = df_neg_score.sample(count_one, replace=True)
df_score_over = pd.concat([df_pos_score, df_neg_score_over, df_one_score], axis=0)

print('Random over-sampling:')
pn_counts = df_score_over['pn_score'].value_counts()
print(pn_counts)
pn_counts.plot.bar()
plt.ylabel("Number of Samples", fontsize=16)
Random over-sampling: positive 1053684 one 739088 negative 739088 Name: pn_score, dtype: int64
MIT
python/redditscore.ipynb
AlexHartford/redditscore
Similarly to the first model, the comments need to be vectorized.
log_vect_over = TfidfVectorizer(max_df=0.95, min_df=5, binary=True, stop_words='english')
text_features = log_vect_over.fit_transform(df_score_over.body)
print(text_features.shape)
list(log_vect_over.vocabulary_)[:10]
_____no_output_____
MIT
python/redditscore.ipynb
AlexHartford/redditscore
Now that the comments are turned into vectorized features, they can be used in the logistic regression model. In order to achieve better results, the randomly over-sampled data is used.
encoder = LabelEncoder()
numerical_labels = encoder.fit_transform(df_score_over['pn_score'])
training_X, testing_X, training_y, testing_y = train_test_split(text_features, numerical_labels, stratify=numerical_labels)
print(training_y)

logistic_regression_over = SGDClassifier(loss="log", penalty="l2", max_iter=1500)
logistic_regression_over.fit(training_X, training_y)
pred_labels = logistic_regression_over.predict(testing_X)

accuracy = accuracy_score(testing_y, pred_labels)
cm = confusion_matrix(testing_y, pred_labels)
print("Accuracy:", accuracy)
print("Classes:", str(encoder.classes_))
print("Confusion Matrix:")
print(cm)
[2 1 1 ... 0 2 0] Accuracy: 0.4774545196021897 Classes: ['negative' 'one' 'positive'] Confusion Matrix: [[ 33626 10591 140555] [ 16730 25612 142430] [ 13682 6765 242974]]
MIT
python/redditscore.ipynb
AlexHartford/redditscore
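To quantify the per-class behaviour before interpreting the confusion matrix, a short sketch (assuming `testing_y`, `pred_labels`, and `encoder` from the previous cell are still in scope) can print precision and recall for each class:

from sklearn.metrics import classification_report

# Per-class precision, recall, and F1 for the over-sampled logistic model.
print(classification_report(testing_y, pred_labels, target_names=list(encoder.classes_)))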
According to the confusion matrix, the model struggles to recognize comments with a score of 1 and usually mistakes them for positive comments. Compared with the first model, its performance on negative comments has improved the most, which could be a sign of overfitting to the over-sampled data.

Linear Regression Models

There will be two linear regression models: one for predicting the value of positive-score comments and another for predicting the value of negative-score comments. The appropriate one will be used depending on the outcome of the logistic regression model.

Positive Scores

The first linear regression model will predict the value of the positive scores. In order to do that, only the rows with a positive score are necessary.
pos_score_df = df[df.pn_score == 'positive']
pos_score_df.head()
_____no_output_____
MIT
python/redditscore.ipynb
AlexHartford/redditscore
Similarly to the logistic regression, the comments need to be transformed into a vector of numerical values.
pos_vect = TfidfVectorizer(max_df=0.95, min_df=5, binary=True, stop_words='english')
text_features = pos_vect.fit_transform(pos_score_df.body)
print(text_features.shape)
list(pos_vect.vocabulary_)[:10]
_____no_output_____
MIT
python/redditscore.ipynb
AlexHartford/redditscore
Now that the comments are vectorized, the model can be created. In order to eliminate the issue with the large spread noticed during the analysis, the scores are log scaled.
X_train, X_test, y_train, y_test = train_test_split(text_features, np.log1p(pos_score_df['score']))
pos_linear_regression = SGDRegressor(max_iter=1500)
pos_linear_regression.fit(X_train, y_train)
test = pos_linear_regression.predict(X_test)

mse = mean_squared_error(y_test, test)
rmse = np.sqrt(mse)
print()
print("Positive Score Model MSE:", mse)
print("Positive Score Model RMSE:", rmse)
Positive Score Model MSE: 0.8749582956086429 Positive Score Model RMSE: 0.935392054493004
MIT
python/redditscore.ipynb
AlexHartford/redditscore
Based on the RMSE of roughly 0.94 (in log-scaled score units), the model seems to perform reasonably well.

Negative Scores

The second linear regression model will predict the negative scores. Similarly to the first model, only the rows with negative scores are necessary, and the comments need to be vectorized using those rows.
neg_score_df = df[df.pn_score == 'negative']
neg_score_df.head()

neg_vect = TfidfVectorizer(max_df=0.95, min_df=5, binary=True, stop_words='english')
text_features = neg_vect.fit_transform(neg_score_df.body)
print(text_features.shape)
list(neg_vect.vocabulary_)[:10]

X_train, X_test, y_train, y_test = train_test_split(text_features, -np.log1p(-neg_score_df["score"]))
neg_linear_regression = SGDRegressor(max_iter=1500)
neg_linear_regression.fit(X_train, y_train)
test = neg_linear_regression.predict(X_test)

mse = mean_squared_error(y_test, test)
rmse = np.sqrt(mse)
print()
print("Negative Score Model MSE:", mse)
print("Negative Score Model RMSE:", rmse)
Negative Score Model MSE: 1.0130192799066775 Negative Score Model RMSE: 1.0064885890593482
MIT
python/redditscore.ipynb
AlexHartford/redditscore
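Both linear models are trained on log1p-transformed targets, so their predictions live in log space. A small helper (a sketch assuming numpy as `np`; the name `log_to_score` is not from the original notebook) can map predictions back to raw Reddit scores:

import numpy as np

def log_to_score(pred, negative=False):
    # Positive model was fit on  log1p(score)   -> invert with  expm1(pred)
    # Negative model was fit on -log1p(-score)  -> invert with -expm1(-pred)
    return -np.expm1(-pred) if negative else np.expm1(pred)

# Example: a negative-model prediction of -1.099 corresponds to a raw score of about -2.
print(log_to_score(-1.099, negative=True))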
The results are similar to those of the first model.

Combining Models

First, the logistic regression model will be used to predict whether the score is negative, positive, or one; then, depending on the outcome, the appropriate linear regression model will be used to predict the score value.
a = (["You sir a simple idiot. Or a Russian bot. Either way not worth an actual sentence on why I didn't vote for that loon."])
logistic_result = logistic_regression_over.predict(log_vect_over.transform(a))
print('Logistic Result: ')
print(logistic_result)
print()

if logistic_result == 2:
    linear_result = pos_linear_regression.predict(pos_vect.transform(a))
    print('Linear Result: ')
    print(linear_result)
elif logistic_result == 0:
    linear_result = neg_linear_regression.predict(neg_vect.transform(a))
    print('Linear Result: ')
    print(linear_result)
Logistic Result: [0] Linear Result: [-1.09910057]
MIT
python/redditscore.ipynb
AlexHartford/redditscore
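The if/elif logic above can be wrapped into a single helper for reuse. This is a hypothetical sketch (the function name `predict_score` and the `log_to_score` helper from the earlier sketch are not part of the original notebook) assuming the fitted models and vectorizers are in scope:

def predict_score(comment):
    # Classify the comment as negative / one / positive, then estimate its raw score.
    label = logistic_regression_over.predict(log_vect_over.transform([comment]))[0]
    if label == 2:    # 'positive' in encoder.classes_
        log_pred = pos_linear_regression.predict(pos_vect.transform([comment]))[0]
        return 'positive', log_to_score(log_pred)
    elif label == 0:  # 'negative'
        log_pred = neg_linear_regression.predict(neg_vect.transform([comment]))[0]
        return 'negative', log_to_score(log_pred, negative=True)
    return 'one', 1

print(predict_score("This is the best explanation I have read all week, thank you!"))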
Lastly, we want to pickle our models and vectorizers for deployment.
import pickle

pickle.dump(logistic_regression_over, open('logreg.pkl', 'wb'))
pickle.dump(pos_linear_regression, open('poslinreg.pkl', 'wb'))
pickle.dump(neg_linear_regression, open('neglinreg.pkl', 'wb'))
pickle.dump(log_vect_over, open('log_vect.pkl', 'wb'))
pickle.dump(pos_vect, open('pos_vect.pkl', 'wb'))
pickle.dump(neg_vect, open('neg_vect.pkl', 'wb'))
_____no_output_____
MIT
python/redditscore.ipynb
AlexHartford/redditscore
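For completeness, a matching loading sketch (assuming the pickle files written above are in the working directory); at prediction time the saved vectorizer must be reused, since refitting a new one would produce a different vocabulary:

import pickle

# Reload the persisted classifier together with its matching vectorizer.
with open('logreg.pkl', 'rb') as f:
    logreg = pickle.load(f)
with open('log_vect.pkl', 'rb') as f:
    log_vect = pickle.load(f)

print(logreg.predict(log_vect.transform(["example comment"])))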
`Note:` All assignments should be done inside the notebook (double tap on each of these text cells to edit).

`Question 1`: IRENE UMOH 17100310866

`Question 2`: What do you understand by natural language processing?
NLP is a subfield of artificial intelligence (AI). In simple terms, Natural Language Processing is the ability of a computer program to learn, understand, and interpret the natural language of humans in any given context, whether written or spoken. Computers go through large amounts of text data, which undergoes data preprocessing. Afterwards, an algorithm is developed to process the preprocessed data. This algorithm is developed either through a rules-based system or a machine learning-based system.

`Question 3`: List some of the applications of natural language processing
a. automatic translation
b. analysis and categorization of medical records to predict illnesses and diseases
c. checking plagiarism and proofreading
d. stock forecasting and insights into financial trading
e. customer service and feedback analysis

`Question 4`: What are some of the challenges of NLP?
a. Precision: human speech is very ambiguous and imprecise, which can make it hard for a computer program to analyse it and give a precise output
b. Tone of voice and inflection in semantic analysis; for example, computers cannot detect sarcasm
c. Language continually evolves, and this could lead to some computer programs becoming obsolete
d. Errors in text and speech can be a hindrance during text analysis, because they make it difficult for the machine to understand and interpret
e. Contextual words and phrases, homonyms, and synonyms

`Question 5`: Using some of the string operations learnt from this week's topic (like `set(), istitle(), split(), replace(), etc.`), declare 4 different strings (sentences) and apply the functions/methods.
a = ' In the end, he realized he could see sound and hear words. '
b = 'I ate a sock because people on the Internet told me to'
c = 'She had a car and she also had a car'
d = 'The skeleton had skeletons of his own in the closet'
e = 'peered'

a1 = a.split(' ')
b1 = b.split(' ')
c1 = c.split(' ')
d1 = d.split(' ')
e1 = e.split('e')

len(a1)
a1
[w for w in b1 if len(w) > 3]
[w for w in b1 if w.istitle()]
[w for w in d1 if w.endswith('n')]
[w for w in d1 if w.startswith('s')]
[w for w in a1 if w.splitlines()]

len(set(c1))
set(c1)
len(set([w.lower() for w in c1]))
set([w.lower() for w in c1])
set([w.upper() for w in c1])

e1
'e'.join(e1)

a2 = a.strip()
a2.split(' ')
a2
a2.find('a')
a2.rfind('a')
a3 = a2.replace('e', 'q')
a3

c2 = c.strip()
c2.split()
c2
c2.find('a')
c2.rfind('a')
c3 = c2.replace('car', 'child')
c3
_____no_output_____
MIT
IRENE UMOH Assignment (Week 1 and 2).ipynb
ireneumoh24/ISM416
contiguous() arranges a tensor into the standard (row-major, C-style) memory layout.
a = torch.randn(3, 4, 5) b = a.permute(1, 2, 0) b_cont = b.contiguous() a_cont = a.contiguous() # a has "standard layout" (also known as C layout in numpy) descending strides, and no memory gaps (stride(i-1) == size(i)*stride(i)) print (a.shape, a.stride(), a.data_ptr()) # b has same storage as a (data_ptr), but has the strides and sizes swapped around print (b.shape, b.stride(), b.data_ptr()) # b_cont is in new storage, where it has been arranged in standard layout (which is "contiguous") print (b_cont.shape, b_cont.stride(), b_cont.data_ptr()) # a_cont is exactly as a, as a was contiguous all along print (a_cont.shape, a_cont.stride(), a_cont.data_ptr()) class Generator(nn.Module): def __init__(self, vocab_size, embedding_size, hidden_size, num_layers, bidirectionary_gru=False): super(Generator, self).__init__() # Initialize the embedding layer with the # - size of input (i.e. no. of words in input vocab) # - no. of hidden nodes in the embedding layer self.embedding = nn.Embedding(vocab_size, embedding_size, padding_idx=0) self.bidirectionary = bidirectionary_gru # Initialize the GRU with the # - size of the input (i.e. embedding layer) # - size of the hidden layer self.gru = nn.GRU( input_size=embedding_size, hidden_size=hidden_size, num_layers=num_layers, bias=True, # default True batch_first=True, # if True, then the input and output tensors are provided as (batch, seq, feature), otherwise, (seq, batch, feature) will be used. dropout=0, # default 0 bidirectional=bidirectionary_gru) # Initialize the "classifier" layer to map the RNN outputs # to the vocabulary. Remember we need to -1 because the # vectorized sentence we left out one token for both x and y: # - size of hidden_size of the GRU output. # - size of vocabulary self.classifier = nn.Linear(hidden_size * (2 if self.bidirectionary else 1), vocab_size) def forward(self, inputs, activate_by_softmax=False, initial_hidden_states=None): # vocab_size: V # embed_size: E # hidden_size: H # num_layers: L # num_directions: Dir # sequence_len: Seq = max_sent_len-1 # batch_size: n # single shape: (, Seq) ~~> (Seq, V) ==> via x weights:(V, E) ==> (Seq, E) # batched shape: (n, Seq) ~~> (n, Seq, V) ==> via x weights:(V, E) ==> (n, Seq, E) embedded = self.embedding(inputs) # single # input => final GRU output shape, for each token in each sentence: (Seq, E) ==> (Seq, H*Dir) # initial => final hidden states shape, for each sentence as a whole: (L*Dir, H) ==> (L*Dir, H) # # batched # input => final GRU output shape, for each token in each sentence: (n, Seq, E) ==> (n, Seq, H*Dir) # initial => final hidden states shape, for each sentence as a whole: (L*Dir, n, H) ==> (L*Dir, n, H) gru_output, final_hidden_states = self.gru(embedded, initial_hidden_states) # Matrix manipulation magic. batch_size, sequence_len, directional_hidden_size = gru_output.shape # Technically, linear layer takes a 2-D matrix as input, so more manipulation... # single shape: (Seq, H*Dir) ==> (Seq, H*Dir) # batched shape: (n, Seq, H*Dir) ==> (n*Seq, H*Dir) classification_inputs = gru_output.contiguous().view(batch_size * sequence_len, directional_hidden_size) # Apply dropout. # if the data size is relatively small, the dropout rate can be higher. 
normally 0.1~0.8 classification_inputs = F.dropout(classification_inputs, 0.5) # Put it through the classifier # single shape: (Seq, H*Dir) ==> via x weights:(H*Dir, V) ==> (Seq, V) # batched shape: (n*Seq, H*Dir) ==> via x weights:(H*Dir, V) ==> (n*Seq, V) output = self.classifier(classification_inputs) # reshape it to [batch_size x sequence_len x vocab_size] # single shape: (Seq, V) ==> via reshape ==> (Seq, V) # batched shape: (n*Seq, V) ==> via reshape ==> (n, Seq, V) output = output.view(batch_size, sequence_len, -1) # classification output shape: (n, Seq, V) # final hidden states shape, for each sentence as a whole: (L*Dir, n, H) return (F.softmax(output,dim=2), final_hidden_states) if activate_by_softmax else (output, final_hidden_states) # Set the hidden_size of the GRU embed_size = 12 hidden_size = 10 num_layers = 7 bidirectional = True _encoder = Generator(len(kilgariff_data.vocab), embed_size, hidden_size, num_layers, bidirectionary_gru=bidirectional) # Take a batch. batch_size = 15 dataloader = DataLoader(dataset=kilgariff_data, batch_size=batch_size, shuffle=True) batch0 = next(iter(dataloader)) inputs0, lengths0 = batch0['x'], batch0['x_len'] targets0 = batch0['y'] initial_hidden_states = torch.zeros(num_layers*(2 if bidirectional else 1), batch_size, hidden_size) print('Input shape:\t', inputs0.shape) print("Vocab shape:\t", len(kilgariff_data.vocab)) print('Target shape:\t', targets0.shape) output0, hidden0 = _encoder(inputs0, initial_hidden_states=initial_hidden_states) print('Hidden shape:\t', hidden0.shape) print('Output shape:\t', output0.shape) _, predicted_indices = torch.max(output0, dim=1) print(predicted_indices.shape) device = 'cuda' if torch.cuda.is_available() else 'cpu' _hyper_para_names = ['embed_size', 'hidden_size', 'num_layers', 'loss_func', 'learning_rate', 'optimizer', 'batch_size'] Hyperparams = namedtuple('Hyperparams', _hyper_para_names) hyperparams = Hyperparams(embed_size=250, hidden_size=250, num_layers=1, loss_func=nn.CrossEntropyLoss, learning_rate=0.03, optimizer=optim.Adam, batch_size=245) hyperparams # Training routine. def train(num_epochs, dataloader, model, criterion, optimizer): losses = [] plt.ion() for _e in range(num_epochs): for batch in tqdm(dataloader): # Zero gradient. optimizer.zero_grad() x = batch['x'].to(device) x_len = batch['x_len'].to(device) y = batch['y'].to(device) # Feed forward. output, hidden = model(x, activate_by_softmax=False) # Compute loss: # Shape of the `output` is [batch_size x sequence_len x vocab_size] # Shape of `y` is [batch_size x sequence_len] # CrossEntropyLoss expects `output` to be [batch_size x vocab_size x sequence_len] _, prediction = torch.max(output, dim=2) loss = criterion(output.permute(0, 2, 1), y) loss.backward() optimizer.step() losses.append(loss.float().data) clear_output(wait=True) plt.plot(losses) plt.pause(0.05) print(hidden.shape) def initialize_data_model_optim_loss(hyperparams): # Initialize the dataset and dataloader. kilgariff_data = KilgariffDataset(tokenized_text) dataloader = DataLoader(dataset=kilgariff_data, batch_size=hyperparams.batch_size, shuffle=True) # Loss function. criterion = hyperparams.loss_func(ignore_index=kilgariff_data.vocab.token2id['<pad>'], reduction='mean') # Model. model = Generator(len(kilgariff_data.vocab), hyperparams.embed_size, hyperparams.hidden_size, hyperparams.num_layers).to(device) # Optimizer. 
optimizer = hyperparams.optimizer(model.parameters(), lr=hyperparams.learning_rate) return dataloader, model, optimizer, criterion def generate_example(model, temperature=1.0, max_len=100, hidden_state=None): start_token, start_idx = '<s>', 2 # Start state. inputs = torch.tensor(kilgariff_data.vocab.token2id[start_token]).unsqueeze(0).unsqueeze(0).to(device) sentence = [start_token] i = 0 while i < max_len and sentence[-1] not in ['</s>', '<pad>']: i += 1 embedded = model.embedding(inputs) output, hidden_state = model.gru(embedded, hidden_state) batch_size, sequence_len, hidden_size = output.shape output = output.contiguous().view(batch_size * sequence_len, hidden_size) output = model.classifier(output).view(batch_size, sequence_len, -1).squeeze(0) #_, prediction = torch.max(F.softmax(output, dim=2), dim=2) word_weights = output.div(temperature).exp().cpu() if len(word_weights.shape) > 1: word_weights = word_weights[-1] # Pick the last word. word_idx = torch.multinomial(word_weights, 1).view(-1) sentence.append(kilgariff_data.vocab[int(word_idx)]) inputs = tensor([kilgariff_data.vocab.token2id[word] for word in sentence]).unsqueeze(0).to(device) print(' '.join(sentence)) hyperparams = Hyperparams(embed_size=200, hidden_size=250, num_layers=3, loss_func=nn.CrossEntropyLoss, learning_rate=0.03, optimizer=optim.Adam, batch_size=300) dataloader, model, optimizer, criterion = initialize_data_model_optim_loss(hyperparams) train(3, dataloader, model, criterion, optimizer) for _ in range(10): generate_example(model) import json torch.save(model.state_dict(), 'gru-model.pth') hyperparams_str = Hyperparams(embed_size=250, hidden_size=250, num_layers=1, loss_func='nn.CrossEntropyLoss', learning_rate=0.03, optimizer='optim.Adam', batch_size=250) with open('gru-model.json', 'w') as fout: json.dump(dict(hyperparams_str._asdict()), fout)
_____no_output_____
MIT
mywork/Session 6 - GRU Language Model.ipynb
mingsqtt/textanalytics_ml
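As a complement to the stride printout at the top of the previous cell, a minimal sketch (assuming only that torch is installed) of the practical consequence: `view()` requires a contiguous layout, while calling `contiguous()` first copies the data into standard order.

import torch

a = torch.randn(3, 4, 5)
b = a.permute(1, 2, 0)            # same storage as a, but non-standard strides

print(b.is_contiguous())          # False
try:
    b.view(-1)                    # view() needs a contiguous tensor
except RuntimeError as err:
    print("view failed:", err)

flat = b.contiguous().view(-1)    # copy into standard layout, then flatten
print(flat.shape)                 # torch.Size([60])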
class NodoArbol:
    def __init__(self, dato, hijo_izq=None, hijo_der=None):
        self.dato = dato
        self.left = hijo_izq
        self.right = hijo_der


class BinarySearchTree:
    def __init__(self):
        self.__root = None

    def insert(self, value):
        if self.__root == None:
            self.__root = NodoArbol(value, None, None)
        else:
            # Check whether value is less than the root; if so, insert on the left,
            # BUT the left subtree may already contain many elements
            self.__insert_nodo__(self.__root, value)

    def __insert_nodo__(self, nodo, value):
        if nodo.dato == value:
            pass
        elif value < nodo.dato:  # smaller values go to the left
            if nodo.left == None:  # if there is room on the left, it goes there
                nodo.left = NodoArbol(value, None, None)  # insert the node
            else:
                self.__insert_nodo__(nodo.left, value)  # keep searching the left subtree
        else:
            if nodo.right == None:
                nodo.right = NodoArbol(value, None, None)
            else:
                self.__insert_nodo__(nodo.right, value)  # keep searching the right subtree

    def buscar(self, value):
        if self.__root == None:
            return None
        else:
            # Recursive search
            return self.__busca_nodo(self.__root, value)

    def __busca_nodo(self, nodo, value):
        if nodo == None:
            return None
        elif nodo.dato == value:
            return nodo.dato
        elif value < nodo.dato:
            return self.__busca_nodo(nodo.left, value)
        else:
            return self.__busca_nodo(nodo.right, value)

    def transversal(self, format="inorden"):
        if format == "inorden":
            self.__recorrido_in(self.__root)
        elif format == "preorden":
            self.__recorrido_pre(self.__root)
        elif format == "posorden":
            self.__recorrido_pos(self.__root)
        else:
            print("Formato de recorrido no valido")

    def __recorrido_pre(self, nodo):
        if nodo != None:
            print(nodo.dato, end=",")
            self.__recorrido_pre(nodo.left)
            self.__recorrido_pre(nodo.right)

    def __recorrido_in(self, nodo):
        if nodo != None:
            self.__recorrido_in(nodo.left)
            print(nodo.dato, end=",")
            self.__recorrido_in(nodo.right)

    def __recorrido_pos(self, nodo):
        if nodo != None:
            self.__recorrido_pos(nodo.left)
            self.__recorrido_pos(nodo.right)
            print(nodo.dato, end=",")


bst = BinarySearchTree()
bst.insert(50)
bst.insert(30)
bst.insert(20)

res = bst.buscar(30)  # the stored value, or None if not found
print("Dato: " + str(res))
print(bst.buscar(40))

print("Recorrido pre:")
bst.transversal(format="preorden")
print("\n Recorrido in:")
bst.transversal(format="inorden")
print("\n Recorrido pos:")
bst.transversal(format="posorden")
_____no_output_____
MIT
Tarea26.ipynb
Ed-10/Daa_2021_1
Explicit Runge Kutta methods and their Butcher tables

Authors: Brandon Clark & Zach Etienne

This tutorial notebook stores known explicit Runge Kutta-like methods as Butcher tables in a Python dictionary format.

**Notebook Status:** Validated

**Validation Notes:** This tutorial notebook has been confirmed to be **self-consistent with its corresponding NRPy+ module**, as documented [below](code_validation). In addition, each of these Butcher tables has been verified to yield an RK method to the expected local truncation error in a challenging battery of ODE tests, in the [RK Butcher Table Validation tutorial notebook](Tutorial-RK_Butcher_Table_Validation.ipynb).

NRPy+ Source Code for this module: [MoLtimestepping/RK_Butcher_Table_Dictionary.py](../edit/MoLtimestepping/RK_Butcher_Table_Dictionary.py)

Introduction: The family of explicit [Runge Kutta](https://en.wikipedia.org/w/index.php?title=Runge%E2%80%93Kutta_methods&oldid=898536315)-like methods is commonly used when numerically solving ordinary differential equation (ODE) initial value problems of the form$$ y'(t) = f(y,t),\ \ \ y(t_0)=y_0.$$These methods can be extended to solve time-dependent partial differential equations (PDEs) via the [Method of Lines](https://en.wikipedia.org/w/index.php?title=Method_of_lines&oldid=855390257). In the Method of Lines, the above ODE can be generalized to $N$ coupled ODEs, all written as first-order-in-time PDEs of the form$$ \partial_{t}\mathbf{u}(t,x,y,u_1,u_2,u_3,...)=\mathbf{f}(t,x,y,...,u_1,u_{1,x},...),$$where $\mathbf{u}$ and $\mathbf{f}$ are vectors. The spatial partial derivatives of components of $\mathbf{u}$, e.g., $u_{1,x}$, may be computed using approximate numerical differentiation, like finite differences. As any explicit Runge-Kutta method has its own unique local truncation error, can in principle be used to solve time-dependent PDEs using the Method of Lines, and may be stable under different Courant-Friedrichs-Lewy (CFL) conditions, it is useful to have multiple methods at one's disposal. **This module provides a number of such methods.** More details about the Method of Lines are discussed in the [Tutorial-RK_Butcher_Table_Generating_C_Code](Tutorial-RK_Butcher_Table_Generating_C_Code.ipynb) module, where we generate the C code to implement the Method of Lines, and additional description can be found in the [Numerically Solving the Scalar Wave Equation: A Complete C Code](Tutorial-Start_to_Finish-ScalarWave.ipynb) NRPy+ tutorial notebook.

Table of Contents$$\label{toc}$$

This notebook is organized as follows:
1. [Step 1](initializenrpy): Initialize needed Python modules
1. [Step 2](introbutcher): The Family of Explicit Runge-Kutta-Like Schemes (Butcher Tables)
    1. [Step 2a](codebutcher): Generating a Dictionary of Butcher Tables for Explicit Runge Kutta Techniques
        1. [Step 2.a.i](euler): Euler's Method
        1. [Step 2.a.ii](rktwoheun): RK2 Heun's Method
        1. [Step 2.a.iii](rk2mp): RK2 Midpoint Method
        1. [Step 2.a.iv](rk2ralston): RK2 Ralston's Method
        1. [Step 2.a.v](rk3): Kutta's Third-order Method
        1. [Step 2.a.vi](rk3heun): RK3 Heun's Method
        1. [Step 2.a.vii](rk3ralston): RK3 Ralston's Method
        1. [Step 2.a.viii](ssprk3): Strong Stability Preserving Runge-Kutta (SSPRK3) Method
        1. [Step 2.a.ix](rkfour): Classic RK4 Method
        1. [Step 2.a.x](dp5): RK5 Dormand-Prince Method
        1. [Step 2.a.xi](dp5alt): RK5 Dormand-Prince Method Alternative
        1. [Step 2.a.xii](ck5): RK5 Cash-Karp Method
        1. [Step 2.a.xiii](dp6): RK6 Dormand-Prince Method
        1. [Step 2.a.xiv](l6): RK6 Luther Method
        1. [Step 2.a.xv](dp8): RK8 Dormand-Prince Method
1. [Step 3](code_validation): Code Validation against `MoLtimestepping.RK_Butcher_Table_Dictionary` NRPy+ module
1. [Step 4](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file

Step 1: Initialize needed Python modules [Back to [top](toc)\]$$\label{initializenrpy}$$

Let's start by importing all the needed modules from Python:
# Step 1: Initialize needed Python modules import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
_____no_output_____
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Step 2: The Family of Explicit Runge-Kutta-Like Schemes (Butcher Tables) [Back to [top](toc)\]$$\label{introbutcher}$$In general, a predictor-corrector method performs an estimate timestep from $n$ to $n+1$, using e.g., a Runge Kutta method, to get a prediction of the solution at timestep $n+1$. This is the "predictor" step. Then it uses this prediction to perform another, "corrector" step, designed to increase the accuracy of the solution.Let us focus on the ordinary differential equation (ODE)$$ y'(t) = f(y,t), $$which acts as an analogue for a generic PDE $\partial_{t}u(t,x,y,...)=f(t,x,y,...,u,u_x,...)$.The general family of Runge Kutta "explicit" timestepping methods are implemented using the following scheme:$$y_{n+1} = y_n + \sum_{i=1}^s b_ik_i $$where \begin{align}k_1 &= \Delta tf(y_n, t_n) \\k_2 &= \Delta tf(y_n + [a_{21}k_1], t_n + c_2\Delta t) \\k_3 &= \Delta tf(y_n +[a_{31}k_1 + a_{32}k_2], t_n + c_3\Delta t) \\& \ \ \vdots \\k_s &= \Delta tf(y_n +[a_{s1}k_1 + a_{s2}k_2 + \cdots + a_{s, s-1}k_{s-1}], t_n + c_s\Delta t)\end{align}Note $s$ is the number of right-hand side evaluations necessary for any given method, i.e., for RK2 $s=2$ and for RK4 $s=4$, and for RK6 $s=7$. These schemes are often written in the form of a so-called "Butcher tableau". or "Butcher table":$$\begin{array}{c|ccccc} 0 & \\ c_2 & a_{21} & \\ c_3 & a_{31} & a_{32} & \\ \vdots & \vdots & & \ddots \\ c_s & a_{s_1} & a_{s2} & \cdots & a_{s,s-1} \\ \hline & b_1 & b_2 & \cdots & b_{s-1} & b_s\end{array} $$As an example, the "classic" fourth-order Runge Kutta (RK4) method obtains the solution $y(t)$ to the single-variable ODE $y'(t) = f(y(t),t)$ at time $t_{n+1}$ from $t_n$ via:\begin{align}k_1 &= \Delta tf(y_n, t_n), \\k_2 &= \Delta tf(y_n + \frac{1}{2}k_1, t_n + \frac{\Delta t}{2}), \\k_3 &= \Delta tf(y_n + \frac{1}{2}k_2, t_n + \frac{\Delta t}{2}), \\k_4 &= \Delta tf(y_n + k_3, t_n + \Delta t), \\y_{n+1} &= y_n + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4) + \mathcal{O}\big((\Delta t)^5\big).\end{align}Its corresponding Butcher table is constructed as follows:$$\begin{array}{c|cccc} 0 & \\ 1/2 & 1/2 & \\ 1/2 & 0 & 1/2 & \\ 1 & 0 & 0 & 1 & \\ \hline & 1/6 & 1/3 & 1/3 & 1/6\end{array} $$This is one example of many explicit [Runge Kutta methods](https://en.wikipedia.org/w/index.php?title=List_of_Runge%E2%80%93Kutta_methods&oldid=896594269). Throughout the following sections we will highlight different Runge Kutta schemes and their Butcher tables from the first-order Euler's method up to and including an eighth-order method. Step 2.a: Generating a Dictionary of Butcher Tables for Explicit Runge Kutta Techniques [Back to [top](toc)\]$$\label{codebutcher}$$We can store all of the Butcher tables in Python's **Dictionary** format using the curly brackets {} and 'key':value pairs. The 'key' will be the *name* of the Runge Kutta method and the value will be the Butcher table itself stored as a list of lists. The convergence order for each Runge Kutta method is also stored. We will construct the dictionary `Butcher_dict` one Butcher table at a time in the following sections.
# Step 2a: Generating a Dictionary of Butcher Tables for Explicit Runge Kutta Techniques # Initialize the dictionary Butcher_dict Butcher_dict = {}
_____no_output_____
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
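To make the stored format concrete, the sketch below (a hypothetical helper, not part of the NRPy+ module, which generates C code for this purpose in a separate notebook) shows how one `(table, order)` entry maps onto the generic explicit RK update written above:

def rk_step(f, y_n, t_n, dt, butcher_entry):
    # One explicit RK step y_n -> y_{n+1} from a Butcher_dict-style (table, order) entry.
    table, _order = butcher_entry
    *a_rows, b_row = table                      # last row holds the weights b_i (first entry is "")
    b = [float(b_i) for b_i in b_row[1:]]
    k = []
    for row in a_rows:                          # row i is [c_i, a_i1, ..., a_i,i-1]
        c_i = float(row[0])
        y_arg = y_n + sum(float(a_ij) * k_j for a_ij, k_j in zip(row[1:], k))
        k.append(dt * f(y_arg, t_n + c_i * dt))
    return y_n + sum(b_i * k_i for b_i, k_i in zip(b, k))

# Example (run after the 'RK4' entry below has been added to Butcher_dict):
# rk_step(lambda y, t: y, 1.0, 0.0, 0.1, Butcher_dict['RK4'])   # ~1.10517, close to exp(0.1)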
Step 2.a.i: Euler's Method [Back to [top](toc)\]$$\label{euler}$$[Forward Euler's method](https://en.wikipedia.org/w/index.php?title=Euler_method&oldid=896152463) is a first order Runge Kutta method. Euler's method obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:$$y_{n+1} = y_{n} + \Delta tf(y_{n}, t_{n})$$with the trivial corresponding Butcher table $$\begin{array}{c|c}0 & \\ \hline & 1 \end{array}$$
# Step 2.a.i: Euler's Method Butcher_dict['Euler'] = ( [[sp.sympify(0)], ["", sp.sympify(1)]] , 1)
_____no_output_____
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Step 2.a.ii: RK2 Heun's Method [Back to [top](toc)\]$$\label{rktwoheun}$$[Heun's method](https://en.wikipedia.org/w/index.php?title=Heun%27s_method&oldid=866896936) is a second-order RK method that obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:\begin{align}k_1 &= \Delta tf(y_n, t_n), \\k_2 &= \Delta tf(y_n + k_1, t_n + \Delta t), \\y_{n+1} &= y_n + \frac{1}{2}(k_1 + k_2) + \mathcal{O}\big((\Delta t)^3\big).\end{align}with corresponding Butcher table$$\begin{array}{c|cc} 0 & \\ 1 & 1 & \\ \hline & 1/2 & 1/2\end{array} $$
# Step 2.a.ii: RK2 Heun's Method Butcher_dict['RK2 Heun'] = ( [[sp.sympify(0)], [sp.sympify(1), sp.sympify(1)], ["", sp.Rational(1,2), sp.Rational(1,2)]] , 2)
_____no_output_____
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Step 2.a.iii: RK2 Midpoint Method [Back to [top](toc)\]$$\label{rk2mp}$$[Midpoint method](https://en.wikipedia.org/w/index.php?title=Midpoint_method&oldid=886630580) is a second-order RK method that obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:\begin{align}k_1 &= \Delta tf(y_n, t_n), \\k_2 &= \Delta tf(y_n + \frac{1}{2}k_1, t_n + \frac{1}{2}\Delta t), \\y_{n+1} &= y_n + k_2 + \mathcal{O}\big((\Delta t)^3\big).\end{align}with corresponding Butcher table$$\begin{array}{c|cc} 0 & \\ 1/2 & 1/2 & \\ \hline & 0 & 1\end{array} $$
# Step 2.a.iii: RK2 Midpoint (MP) Method Butcher_dict['RK2 MP'] = ( [[sp.sympify(0)], [sp.Rational(1,2), sp.Rational(1,2)], ["", sp.sympify(0), sp.sympify(1)]] , 2)
_____no_output_____
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Step 2.a.iv: RK2 Ralston's Method [Back to [top](toc)\]$$\label{rk2ralston}$$Ralston's method (see [Ralston (1962)](https://www.ams.org/journals/mcom/1962-16-080/S0025-5718-1962-0150954-0/S0025-5718-1962-0150954-0.pdf)) is a second-order RK method that obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:\begin{align}k_1 &= \Delta tf(y_n, t_n), \\k_2 &= \Delta tf(y_n + \frac{2}{3}k_1, t_n + \frac{2}{3}\Delta t), \\y_{n+1} &= y_n + \frac{1}{4}k_1 + \frac{3}{4}k_2 + \mathcal{O}\big((\Delta t)^3\big).\end{align}with corresponding Butcher table$$\begin{array}{c|cc} 0 & \\ 2/3 & 2/3 & \\ \hline & 1/4 & 3/4\end{array} $$
# Step 2.a.iv: RK2 Ralston's Method Butcher_dict['RK2 Ralston'] = ( [[sp.sympify(0)], [sp.Rational(2,3), sp.Rational(2,3)], ["", sp.Rational(1,4), sp.Rational(3,4)]] , 2)
_____no_output_____
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Step 2.a.v: Kutta's Third-order Method [Back to [top](toc)\]$$\label{rk3}$$[Kutta's third-order method](https://en.wikipedia.org/w/index.php?title=List_of_Runge%E2%80%93Kutta_methods&oldid=896594269) obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:\begin{align}k_1 &= \Delta tf(y_n, t_n), \\k_2 &= \Delta tf(y_n + \frac{1}{2}k_1, t_n + \frac{1}{2}\Delta t), \\k_3 &= \Delta tf(y_n - k_1 + 2k_2, t_n + \Delta t) \\y_{n+1} &= y_n + \frac{1}{6}k_1 + \frac{2}{3}k_2 + \frac{1}{6}k_3 + \mathcal{O}\big((\Delta t)^4\big).\end{align}with corresponding Butcher table\begin{array}{c|ccc} 0 & \\ 1/2 & 1/2 & \\ 1 & -1 & 2 & \\ \hline & 1/6 & 2/3 & 1/6\end{array}
# Step 2.a.v: Kutta's Third-order Method Butcher_dict['RK3'] = ( [[sp.sympify(0)], [sp.Rational(1,2), sp.Rational(1,2)], [sp.sympify(1), sp.sympify(-1), sp.sympify(2)], ["", sp.Rational(1,6), sp.Rational(2,3), sp.Rational(1,6)]] , 3)
_____no_output_____
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Step 2.a.vi: RK3 Heun's Method [Back to [top](toc)\]$$\label{rk3heun}$$[Heun's third-order method](https://en.wikipedia.org/w/index.php?title=List_of_Runge%E2%80%93Kutta_methods&oldid=896594269) obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:\begin{align}k_1 &= \Delta tf(y_n, t_n), \\k_2 &= \Delta tf(y_n + \frac{1}{3}k_1, t_n + \frac{1}{3}\Delta t), \\k_3 &= \Delta tf(y_n + \frac{2}{3}k_2, t_n + \frac{2}{3}\Delta t) \\y_{n+1} &= y_n + \frac{1}{4}k_1 + \frac{3}{4}k_3 + \mathcal{O}\big((\Delta t)^4\big).\end{align}with corresponding Butcher table\begin{array}{c|ccc} 0 & \\ 1/3 & 1/3 & \\ 2/3 & 0 & 2/3 & \\ \hline & 1/4 & 0 & 3/4\end{array}
# Step 2.a.vi: RK3 Heun's Method Butcher_dict['RK3 Heun'] = ( [[sp.sympify(0)], [sp.Rational(1,3), sp.Rational(1,3)], [sp.Rational(2,3), sp.sympify(0), sp.Rational(2,3)], ["", sp.Rational(1,4), sp.sympify(0), sp.Rational(3,4)]] , 3)
_____no_output_____
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Step 2.a.vii: RK3 Ralston's Method [Back to [top](toc)\]$$\label{rk3ralston}$$Ralston's third-order method (see [Ralston (1962)](https://www.ams.org/journals/mcom/1962-16-080/S0025-5718-1962-0150954-0/S0025-5718-1962-0150954-0.pdf)) obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:\begin{align}k_1 &= \Delta tf(y_n, t_n), \\k_2 &= \Delta tf(y_n + \frac{1}{2}k_1, t_n + \frac{1}{2}\Delta t), \\k_3 &= \Delta tf(y_n + \frac{3}{4}k_2, t_n + \frac{3}{4}\Delta t) \\y_{n+1} &= y_n + \frac{2}{9}k_1 + \frac{1}{3}k_2 + \frac{4}{9}k_3 + \mathcal{O}\big((\Delta t)^4\big).\end{align}with corresponding Butcher table\begin{array}{c|ccc} 0 & \\ 1/2 & 1/2 & \\ 3/4 & 0 & 3/4 & \\ \hline & 2/9 & 1/3 & 4/9\end{array}
# Step 2.a.vii: RK3 Ralston's Method
Butcher_dict['RK3 Ralston'] = ( [[0], [sp.Rational(1,2), sp.Rational(1,2)], [sp.Rational(3,4), sp.sympify(0), sp.Rational(3,4)], ["", sp.Rational(2,9), sp.Rational(1,3), sp.Rational(4,9)]] , 3)
_____no_output_____
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Step 2.a.viii: Strong Stability Preserving Runge-Kutta (SSPRK3) Method [Back to [top](toc)\]$\label{ssprk3}$The [Strong Stability Preserving Runge-Kutta (SSPRK3)](https://en.wikipedia.org/wiki/List_of_Runge%E2%80%93Kutta_methodsKutta's_third-order_method) method obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:\begin{align}k_1 &= \Delta tf(y_n, t_n), \\k_2 &= \Delta tf(y_n + k_1, t_n + \Delta t), \\k_3 &= \Delta tf(y_n + \frac{1}{4}k_1 + \frac{1}{4}k_2, t_n + \frac{1}{2}\Delta t) \\y_{n+1} &= y_n + \frac{1}{6}k_1 + \frac{1}{6}k_2 + \frac{2}{3}k_3 + \mathcal{O}\big((\Delta t)^4\big).\end{align}with corresponding Butcher table\begin{array}{c|ccc} 0 & \\ 1 & 1 & \\ 1/2 & 1/4 & 1/4 & \\ \hline & 1/6 & 1/6 & 2/3\end{array}
# Step 2.a.viii: Strong Stability Preserving Runge-Kutta (SSPRK3) Method Butcher_dict['SSPRK3'] = ( [[0], [sp.sympify(1), sp.sympify(1)], [sp.Rational(1,2), sp.Rational(1,4), sp.Rational(1,4)], ["", sp.Rational(1,6), sp.Rational(1,6), sp.Rational(2,3)]] , 3)
_____no_output_____
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Step 2.a.ix: Classic RK4 Method [Back to [top](toc)\]$$\label{rkfour}$$The [classic RK4 method](https://en.wikipedia.org/w/index.php?title=Runge%E2%80%93Kutta_methods&oldid=894771467) obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:\begin{align}k_1 &= \Delta tf(y_n, t_n), \\k_2 &= \Delta tf(y_n + \frac{1}{2}k_1, t_n + \frac{\Delta t}{2}), \\k_3 &= \Delta tf(y_n + \frac{1}{2}k_2, t_n + \frac{\Delta t}{2}), \\k_4 &= \Delta tf(y_n + k_3, t_n + \Delta t), \\y_{n+1} &= y_n + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4) + \mathcal{O}\big((\Delta t)^5\big).\end{align}with corresponding Butcher table$$\begin{array}{c|cccc} 0 & \\ 1/2 & 1/2 & \\ 1/2 & 0 & 1/2 & \\ 1 & 0 & 0 & 1 & \\ \hline & 1/6 & 1/3 & 1/3 & 1/6\end{array} $$
# Step 2.a.ix: Classic RK4 Method
Butcher_dict['RK4'] = ( [[sp.sympify(0)], [sp.Rational(1,2), sp.Rational(1,2)], [sp.Rational(1,2), sp.sympify(0), sp.Rational(1,2)], [sp.sympify(1), sp.sympify(0), sp.sympify(0), sp.sympify(1)], ["", sp.Rational(1,6), sp.Rational(1,3), sp.Rational(1,3), sp.Rational(1,6)]] , 4)
_____no_output_____
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Step 2.a.x: RK5 Dormand-Prince Method [Back to [top](toc)\]$$\label{dp5}$$The fifth-order Dormand-Prince (DP) method from the RK5(4) family (see [Dormand, J. R.; Prince, P. J. (1980)](https://www.sciencedirect.com/science/article/pii/0771050X80900133?via%3Dihub)) Butcher table is:$$\begin{array}{c|ccccccc} 0 & \\ \frac{1}{5} & \frac{1}{5} & \\ \frac{3}{10} & \frac{3}{40} & \frac{9}{40} & \\ \frac{4}{5} & \frac{44}{45} & \frac{-56}{15} & \frac{32}{9} & \\ \frac{8}{9} & \frac{19372}{6561} & \frac{−25360}{2187} & \frac{64448}{6561} & \frac{−212}{729} & \\ 1 & \frac{9017}{3168} & \frac{−355}{33} & \frac{46732}{5247} & \frac{49}{176} & \frac{−5103}{18656} & \\ 1 & \frac{35}{384} & 0 & \frac{500}{1113} & \frac{125}{192} & \frac{−2187}{6784} & \frac{11}{84} & \\ \hline & \frac{35}{384} & 0 & \frac{500}{1113} & \frac{125}{192} & \frac{−2187}{6784} & \frac{11}{84} & 0\end{array} $$
# Step 2.a.x: RK5 Dormand-Prince Method Butcher_dict['DP5'] = ( [[0], [sp.Rational(1,5), sp.Rational(1,5)], [sp.Rational(3,10),sp.Rational(3,40), sp.Rational(9,40)], [sp.Rational(4,5), sp.Rational(44,45), sp.Rational(-56,15), sp.Rational(32,9)], [sp.Rational(8,9), sp.Rational(19372,6561), sp.Rational(-25360,2187), sp.Rational(64448,6561), sp.Rational(-212,729)], [sp.sympify(1), sp.Rational(9017,3168), sp.Rational(-355,33), sp.Rational(46732,5247), sp.Rational(49,176), sp.Rational(-5103,18656)], [sp.sympify(1), sp.Rational(35,384), sp.sympify(0), sp.Rational(500,1113), sp.Rational(125,192), sp.Rational(-2187,6784), sp.Rational(11,84)], ["", sp.Rational(35,384), sp.sympify(0), sp.Rational(500,1113), sp.Rational(125,192), sp.Rational(-2187,6784), sp.Rational(11,84), sp.sympify(0)]] , 5)
_____no_output_____
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Step 2.a.xi: RK5 Dormand-Prince Method Alternative [Back to [top](toc)\]$$\label{dp5alt}$$The fifth-order Dormand-Prince (DP) method from the RK6(5) family (see [Dormand, J. R.; Prince, P. J. (1981)](https://www.sciencedirect.com/science/article/pii/0771050X81900103)) Butcher table is:$$\begin{array}{c|ccccccc} 0 & \\ \frac{1}{10} & \frac{1}{10} & \\ \frac{2}{9} & \frac{-2}{81} & \frac{20}{81} & \\ \frac{3}{7} & \frac{615}{1372} & \frac{-270}{343} & \frac{1053}{1372} & \\ \frac{3}{5} & \frac{3243}{5500} & \frac{-54}{55} & \frac{50949}{71500} & \frac{4998}{17875} & \\ \frac{4}{5} & \frac{-26492}{37125} & \frac{72}{55} & \frac{2808}{23375} & \frac{-24206}{37125} & \frac{338}{459} & \\ 1 & \frac{5561}{2376} & \frac{-35}{11} & \frac{-24117}{31603} & \frac{899983}{200772} & \frac{-5225}{1836} & \frac{3925}{4056} & \\ \hline & \frac{821}{10800} & 0 & \frac{19683}{71825} & \frac{175273}{912600} & \frac{395}{3672} & \frac{785}{2704} & \frac{3}{50}\end{array}$$
# Step 2.a.xi: RK5 Dormand-Prince Method Alternative Butcher_dict['DP5alt'] = ( [[0], [sp.Rational(1,10), sp.Rational(1,10)], [sp.Rational(2,9), sp.Rational(-2, 81), sp.Rational(20, 81)], [sp.Rational(3,7), sp.Rational(615, 1372), sp.Rational(-270, 343), sp.Rational(1053, 1372)], [sp.Rational(3,5), sp.Rational(3243, 5500), sp.Rational(-54, 55), sp.Rational(50949, 71500), sp.Rational(4998, 17875)], [sp.Rational(4, 5), sp.Rational(-26492, 37125), sp.Rational(72, 55), sp.Rational(2808, 23375), sp.Rational(-24206, 37125), sp.Rational(338, 459)], [sp.sympify(1), sp.Rational(5561, 2376), sp.Rational(-35, 11), sp.Rational(-24117, 31603), sp.Rational(899983, 200772), sp.Rational(-5225, 1836), sp.Rational(3925, 4056)], ["", sp.Rational(821, 10800), sp.sympify(0), sp.Rational(19683, 71825), sp.Rational(175273, 912600), sp.Rational(395, 3672), sp.Rational(785, 2704), sp.Rational(3, 50)]] , 5)
_____no_output_____
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Step 2.a.xii: RK5 Cash-Karp Method [Back to [top](toc)\]$$\label{ck5}$$The fifth-order Cash-Karp Method (see [J. R. Cash, A. H. Karp. (1980)](https://dl.acm.org/citation.cfm?doid=79505.79507)) Butcher table is:$$\begin{array}{c|cccccc} 0 & \\ \frac{1}{5} & \frac{1}{5} & \\ \frac{3}{10} & \frac{3}{40} & \frac{9}{40} & \\ \frac{3}{5} & \frac{3}{10} & \frac{−9}{10} & \frac{6}{5} & \\ 1 & \frac{−11}{54} & \frac{5}{2} & \frac{−70}{27} & \frac{35}{27} & \\ \frac{7}{8} & \frac{1631}{55296} & \frac{175}{512} & \frac{575}{13824} & \frac{44275}{110592} & \frac{253}{4096} & \\ \hline & \frac{37}{378} & 0 & \frac{250}{621} & \frac{125}{594} & 0 & \frac{512}{1771} \end{array}$$
# Step 2.a.xii: RK5 Cash-Karp Method Butcher_dict['CK5'] = ( [[0], [sp.Rational(1,5), sp.Rational(1,5)], [sp.Rational(3,10),sp.Rational(3,40), sp.Rational(9,40)], [sp.Rational(3,5), sp.Rational(3,10), sp.Rational(-9,10), sp.Rational(6,5)], [sp.sympify(1), sp.Rational(-11,54), sp.Rational(5,2), sp.Rational(-70,27), sp.Rational(35,27)], [sp.Rational(7,8), sp.Rational(1631,55296), sp.Rational(175,512), sp.Rational(575,13824), sp.Rational(44275,110592), sp.Rational(253,4096)], ["",sp.Rational(37,378), sp.sympify(0), sp.Rational(250,621), sp.Rational(125,594), sp.sympify(0), sp.Rational(512,1771)]] , 5)
_____no_output_____
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Step 2.a.xiii: RK6 Dormand-Prince Method [Back to [top](toc)\]$$\label{dp6}$$The sixth-order Dormand-Prince method (see [Dormand, J. R.; Prince, P. J. (1981)](https://www.sciencedirect.com/science/article/pii/0771050X81900103)) Butcher Table is$$\begin{array}{c|cccccccc} 0 & \\ \frac{1}{10} & \frac{1}{10} & \\ \frac{2}{9} & \frac{-2}{81} & \frac{20}{81} & \\ \frac{3}{7} & \frac{615}{1372} & \frac{-270}{343} & \frac{1053}{1372} & \\ \frac{3}{5} & \frac{3243}{5500} & \frac{-54}{55} & \frac{50949}{71500} & \frac{4998}{17875} & \\ \frac{4}{5} & \frac{-26492}{37125} & \frac{72}{55} & \frac{2808}{23375} & \frac{-24206}{37125} & \frac{338}{459} & \\ 1 & \frac{5561}{2376} & \frac{-35}{11} & \frac{-24117}{31603} & \frac{899983}{200772} & \frac{-5225}{1836} & \frac{3925}{4056} & \\ 1 & \frac{465467}{266112} & \frac{-2945}{1232} & \frac{-5610201}{14158144} & \frac{10513573}{3212352} & \frac{-424325}{205632} & \frac{376225}{454272} & 0 & \\ \hline & \frac{61}{864} & 0 & \frac{98415}{321776} & \frac{16807}{146016} & \frac{1375}{7344} & \frac{1375}{5408} & \frac{-37}{1120} & \frac{1}{10}\end{array}$$
# Step 2.a.xiii: RK6 Dormand-Prince Method Butcher_dict['DP6'] = ( [[0], [sp.Rational(1,10), sp.Rational(1,10)], [sp.Rational(2,9), sp.Rational(-2, 81), sp.Rational(20, 81)], [sp.Rational(3,7), sp.Rational(615, 1372), sp.Rational(-270, 343), sp.Rational(1053, 1372)], [sp.Rational(3,5), sp.Rational(3243, 5500), sp.Rational(-54, 55), sp.Rational(50949, 71500), sp.Rational(4998, 17875)], [sp.Rational(4, 5), sp.Rational(-26492, 37125), sp.Rational(72, 55), sp.Rational(2808, 23375), sp.Rational(-24206, 37125), sp.Rational(338, 459)], [sp.sympify(1), sp.Rational(5561, 2376), sp.Rational(-35, 11), sp.Rational(-24117, 31603), sp.Rational(899983, 200772), sp.Rational(-5225, 1836), sp.Rational(3925, 4056)], [sp.sympify(1), sp.Rational(465467, 266112), sp.Rational(-2945, 1232), sp.Rational(-5610201, 14158144), sp.Rational(10513573, 3212352), sp.Rational(-424325, 205632), sp.Rational(376225, 454272), sp.sympify(0)], ["", sp.Rational(61, 864), sp.sympify(0), sp.Rational(98415, 321776), sp.Rational(16807, 146016), sp.Rational(1375, 7344), sp.Rational(1375, 5408), sp.Rational(-37, 1120), sp.Rational(1,10)]] , 6)
_____no_output_____
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Step 2.a.xiv: RK6 Luther's Method [Back to [top](toc)\]$$\label{l6}$$Luther's sixth-order method (see [H. A. Luther (1968)](http://www.ams.org/journals/mcom/1968-22-102/S0025-5718-68-99876-1/S0025-5718-68-99876-1.pdf)) Butcher table is:$$\begin{array}{c|ccccccc} 0 & \\ 1 & 1 & \\ \frac{1}{2} & \frac{3}{8} & \frac{1}{8} & \\ \frac{2}{3} & \frac{8}{27} & \frac{2}{27} & \frac{8}{27} & \\ \frac{(7-q)}{14} & \frac{(-21 + 9q)}{392} & \frac{(-56 + 8q)}{392} & \frac{(336 - 48q)}{392} & \frac{(-63 + 3q)}{392} & \\ \frac{(7+q)}{14} & \frac{(-1155 - 255q)}{1960} & \frac{(-280 - 40q)}{1960} & \frac{320q}{1960} & \frac{(63 + 363q)}{1960} & \frac{(2352 + 392q)}{1960} & \\ 1 & \frac{(330 + 105q)}{180} & \frac{2}{3} & \frac{(-200 + 280q)}{180} & \frac{(126 - 189q)}{180} & \frac{(-686 - 126q)}{180} & \frac{(490 - 70q)}{180} & \\ \hline & \frac{1}{20} & 0 & \frac{16}{45} & 0 & \frac{49}{180} & \frac{49}{180} & \frac{1}{20}\end{array}$$where $q = \sqrt{21}$.
# Step 2.a.xiv: RK6 Luther's Method q = sp.sqrt(21) Butcher_dict['L6'] = ( [[0], [sp.sympify(1), sp.sympify(1)], [sp.Rational(1,2), sp.Rational(3,8), sp.Rational(1,8)], [sp.Rational(2,3), sp.Rational(8,27), sp.Rational(2,27), sp.Rational(8,27)], [(7 - q)/14, (-21 + 9*q)/392, (-56 + 8*q)/392, (336 -48*q)/392, (-63 + 3*q)/392], [(7 + q)/14, (-1155 - 255*q)/1960, (-280 - 40*q)/1960, (-320*q)/1960, (63 + 363*q)/1960, (2352 + 392*q)/1960], [sp.sympify(1), ( 330 + 105*q)/180, sp.Rational(2,3), (-200 + 280*q)/180, (126 - 189*q)/180, (-686 - 126*q)/180, (490 - 70*q)/180], ["", sp.Rational(1, 20), sp.sympify(0), sp.Rational(16, 45), sp.sympify(0), sp.Rational(49, 180), sp.Rational(49, 180), sp.Rational(1, 20)]] , 6)
_____no_output_____
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Step 2.a.xv: RK8 Dormand-Prince Method [Back to [top](toc)\]$$\label{dp8}$$The eighth-order Dormand-Prince Method (see [Dormand, J. R.; Prince, P. J. (1981)](https://www.sciencedirect.com/science/article/pii/0771050X81900103)) Butcher table is:$$\begin{array}{c|ccccccccc} 0 & \\ \frac{1}{18} & \frac{1}{18} & \\ \frac{1}{12} & \frac{1}{48} & \frac{1}{16} & \\ \frac{1}{8} & \frac{1}{32} & 0 & \frac{3}{32} & \\ \frac{5}{16} & \frac{5}{16} & 0 & \frac{-75}{64} & \frac{75}{64} & \\ \frac{3}{8} & \frac{3}{80} & 0 & 0 & \frac{3}{16} & \frac{3}{20} & \\ \frac{59}{400} & \frac{29443841}{614563906} & 0 & 0 & \frac{77736538}{692538347} & \frac{-28693883}{1125000000} & \frac{23124283}{1800000000} & \\ \frac{93}{200} & \frac{16016141}{946692911} & 0 & 0 & \frac{61564180}{158732637} & \frac{22789713}{633445777} & \frac{545815736}{2771057229} & \frac{-180193667}{1043307555} & \\ \frac{5490023248}{9719169821} & \frac{39632708}{573591083} & 0 & 0 & \frac{-433636366}{683701615} & \frac{-421739975}{2616292301} & \frac{100302831}{723423059} & \frac{790204164}{839813087} & \frac{800635310}{3783071287} & \\ \frac{13}{20} & \frac{246121993}{1340847787} & 0 & 0 & \frac{-37695042795}{15268766246} & \frac{-309121744}{1061227803} & \frac{-12992083}{490766935} & \frac{6005943493}{2108947869} & \frac{393006217}{1396673457} & \frac{123872331}{1001029789} & \\ \frac{1201146811}{1299019798} & \frac{-1028468189}{846180014} & 0 & 0 & \frac{8478235783}{508512852} & \frac{1311729495}{1432422823} & \frac{-10304129995}{1701304382} & \frac{-48777925059}{3047939560} & \frac{15336726248}{1032824649} & \frac{-45442868181}{3398467696} & \frac{3065993473}{597172653} & \\ 1 & \frac{185892177}{718116043} & 0 & 0 & \frac{-3185094517}{667107341} & \frac{-477755414}{1098053517} & \frac{-703635378}{230739211} & \frac{5731566787}{1027545527} & \frac{5232866602}{850066563} & \frac{-4093664535}{808688257} & \frac{3962137247}{1805957418} & \frac{65686358}{487910083} & \\ 1 & \frac{403863854}{491063109} & 0 & 0 & \frac{-5068492393}{434740067} & \frac{-411421997}{543043805} & \frac{652783627}{914296604} & \frac{11173962825}{925320556} & \frac{-13158990841}{6184727034} & \frac{3936647629}{1978049680} & \frac{-160528059}{685178525} & \frac{248638103}{1413531060} & 0 & \\ & \frac{14005451}{335480064} & 0 & 0 & 0 & 0 & \frac{-59238493}{1068277825} & \frac{181606767}{758867731} & \frac{561292985}{797845732} & \frac{-1041891430}{1371343529} & \frac{760417239}{1151165299} & \frac{118820643}{751138087} & \frac{-528747749}{2220607170} & \frac{1}{4}\end{array}$$
# Step 2.a.xv: RK8 Dormand-Prince Method Butcher_dict['DP8']=( [[0], [sp.Rational(1, 18), sp.Rational(1, 18)], [sp.Rational(1, 12), sp.Rational(1, 48), sp.Rational(1, 16)], [sp.Rational(1, 8), sp.Rational(1, 32), sp.sympify(0), sp.Rational(3, 32)], [sp.Rational(5, 16), sp.Rational(5, 16), sp.sympify(0), sp.Rational(-75, 64), sp.Rational(75, 64)], [sp.Rational(3, 8), sp.Rational(3, 80), sp.sympify(0), sp.sympify(0), sp.Rational(3, 16), sp.Rational(3, 20)], [sp.Rational(59, 400), sp.Rational(29443841, 614563906), sp.sympify(0), sp.sympify(0), sp.Rational(77736538, 692538347), sp.Rational(-28693883, 1125000000), sp.Rational(23124283, 1800000000)], [sp.Rational(93, 200), sp.Rational(16016141, 946692911), sp.sympify(0), sp.sympify(0), sp.Rational(61564180, 158732637), sp.Rational(22789713, 633445777), sp.Rational(545815736, 2771057229), sp.Rational(-180193667, 1043307555)], [sp.Rational(5490023248, 9719169821), sp.Rational(39632708, 573591083), sp.sympify(0), sp.sympify(0), sp.Rational(-433636366, 683701615), sp.Rational(-421739975, 2616292301), sp.Rational(100302831, 723423059), sp.Rational(790204164, 839813087), sp.Rational(800635310, 3783071287)], [sp.Rational(13, 20), sp.Rational(246121993, 1340847787), sp.sympify(0), sp.sympify(0), sp.Rational(-37695042795, 15268766246), sp.Rational(-309121744, 1061227803), sp.Rational(-12992083, 490766935), sp.Rational(6005943493, 2108947869), sp.Rational(393006217, 1396673457), sp.Rational(123872331, 1001029789)], [sp.Rational(1201146811, 1299019798), sp.Rational(-1028468189, 846180014), sp.sympify(0), sp.sympify(0), sp.Rational(8478235783, 508512852), sp.Rational(1311729495, 1432422823), sp.Rational(-10304129995, 1701304382), sp.Rational(-48777925059, 3047939560), sp.Rational(15336726248, 1032824649), sp.Rational(-45442868181, 3398467696), sp.Rational(3065993473, 597172653)], [sp.sympify(1), sp.Rational(185892177, 718116043), sp.sympify(0), sp.sympify(0), sp.Rational(-3185094517, 667107341), sp.Rational(-477755414, 1098053517), sp.Rational(-703635378, 230739211), sp.Rational(5731566787, 1027545527), sp.Rational(5232866602, 850066563), sp.Rational(-4093664535, 808688257), sp.Rational(3962137247, 1805957418), sp.Rational(65686358, 487910083)], [sp.sympify(1), sp.Rational(403863854, 491063109), sp.sympify(0), sp.sympify(0), sp.Rational(-5068492393, 434740067), sp.Rational(-411421997, 543043805), sp.Rational(652783627, 914296604), sp.Rational(11173962825, 925320556), sp.Rational(-13158990841, 6184727034), sp.Rational(3936647629, 1978049680), sp.Rational(-160528059, 685178525), sp.Rational(248638103, 1413531060), sp.sympify(0)], ["", sp.Rational(14005451, 335480064), sp.sympify(0), sp.sympify(0), sp.sympify(0), sp.sympify(0), sp.Rational(-59238493, 1068277825), sp.Rational(181606767, 758867731), sp.Rational(561292985, 797845732), sp.Rational(-1041891430, 1371343529), sp.Rational(760417239, 1151165299), sp.Rational(118820643, 751138087), sp.Rational(-528747749, 2220607170), sp.Rational(1, 4)]] , 8)
_____no_output_____
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Step 3: Code validation against `MoLtimestepping.RK_Butcher_Table_Dictionary` NRPy+ module [Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the dictionary of Butcher tables between1. this tutorial and 2. the NRPy+ [MoLtimestepping.RK_Butcher_Table_Dictionary](../edit/MoLtimestepping/RK_Butcher_Table_Dictionary.py) module.We analyze all key/value entries in the dictionary for consistency.
# Step 3: Code validation against MoLtimestepping.RK_Butcher_Table_Dictionary NRPy+ module import sys # Standard Python module for multiplatform OS-level functions from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict as B_dict valid = True for key, value in Butcher_dict.items(): if Butcher_dict[key] != B_dict[key]: valid = False print(key) if valid == True and len(Butcher_dict.items()) == len(B_dict.items()): print("The dictionaries match!") else: print("ERROR: Dictionaries don't match!") sys.exit(1)
The dictionaries match!
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-RK_Butcher_Table_Dictionary.pdf](Tutorial-RK_Butcher_Table_Dictionary.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-RK_Butcher_Table_Dictionary")
Created Tutorial-RK_Butcher_Table_Dictionary.tex, and compiled LaTeX file to PDF file Tutorial-RK_Butcher_Table_Dictionary.pdf
BSD-2-Clause
Tutorial-RK_Butcher_Table_Dictionary.ipynb
stevenrbrandt/nrpytutorial
Load Audio files
al_train = AudioLoader(directory='../data/train') # al_test = AudioLoader(directory="../data/test",tts_file=r'/trsTest.txt') df_train_audio_data = al_train.get_audio_info_with_data() # df_test_audio_data = al_test.get_audio_info_with_data() # rp = ResultPickler() # rp.load_data("../models/LoadedAudioInfo.pkl") # data_dict = rp.get_data() # # data_dict.keys() # df_train_audio_data = data_dict['TrainAudioInfoWithoutTTS'] df_train_audio_data # instantiate audio manuplator class am_train = AudioManipulator(df_train_audio_data) # Plot Time Series data of vis.plot_series(df_train_audio_data.loc[0,"TimeSeriesData"])
_____no_output_____
MIT
notebooks/AudioManipulation.ipynb
DePacifier/AMH-STT
Preprocessing the audio Data
- change the duration to the same size
- convert channels to stereo by duplicating the other channel
- standardize the sampling rate to the same one
- Data Augmentation
- Extract Features

Convert Channels to Stereo by duplicating the other channel
am_train.convert_stereo_audio() am_train.get_audio_info() # am_train.get_audio_info().head().loc[0,"TimeSeriesData"].shape num_rows, y_len = am_train.get_audio_info().loc[0,"TimeSeriesData"].shape num_rows,y_len
_____no_output_____
MIT
notebooks/AudioManipulation.ipynb
DePacifier/AMH-STT
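The `convert_stereo_audio` call above is part of the project's AudioManipulator class; the snippet below is only a hypothetical sketch of the underlying idea, namely stacking a mono channel twice so the array takes the (2, n_samples) shape seen above.
import numpy as np

mono = np.random.randn(44100)        # one second of dummy mono audio
stereo = np.stack([mono, mono])      # duplicate the channel
print(stereo.shape)                  # (2, 44100)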
Change the duration to the same size From our data exploration, we found that most audio files have a duration between 2 and 6 seconds. To reduce bias, we will pad all the audio files to make them equal in length to the maximum.
am_train.resize_audio() am_train.get_audio_info() am_train.get_audio_info().loc[0,"TimeSeriesData"].shape
_____no_output_____
MIT
notebooks/AudioManipulation.ipynb
DePacifier/AMH-STT
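The resizing above is handled by `resize_audio`; the sketch below (with a hypothetical `pad_or_truncate` helper, not the project's implementation) illustrates the usual approach of zero-padding shorter signals and trimming longer ones to a common length.
import numpy as np

def pad_or_truncate(signal, target_len):
    signal = np.asarray(signal)
    if len(signal) >= target_len:
        return signal[:target_len]               # trim long clips
    return np.pad(signal, (0, target_len - len(signal)), mode='constant')  # zero-pad short clips

sr = 44100
clip = np.random.randn(3 * sr)                   # a 3-second dummy clip
resized = pad_or_truncate(clip, 6 * sr)          # padded to 6 seconds
print(resized.shape)                             # (264600,)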
Standardize Sampling Rate
# count sampling rate frequencies pd.DataFrame({"count": df_train_audio_data.groupby("SamplingRate")["SamplingRate"].count()}) am_train.resample_audio() am_train.get_audio_info()
_____no_output_____
MIT
notebooks/AudioManipulation.ipynb
DePacifier/AMH-STT
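`resample_audio` performs the resampling internally; as a rough illustration of what resampling does (an assumption about the approach, not the project's code), a polyphase filter from SciPy maps a signal from one rate to another:
import numpy as np
from scipy.signal import resample_poly

orig_sr, target_sr = 48000, 44100
signal = np.random.randn(2 * orig_sr)                     # 2 seconds at 48 kHz
resampled = resample_poly(signal, up=target_sr, down=orig_sr)
print(len(signal), len(resampled))                        # 96000 88200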
Our sampling rate is already the same across our data, but we have resampled it to 44100 Hz regardless. Now that we have our processed data, we will save the preprocessed files to a folder in .wav format.
am_train.write_wave_files("../data/train/wav/")
_____no_output_____
MIT
notebooks/AudioManipulation.ipynb
DePacifier/AMH-STT
Augmentation Feature Extraction We can now extract features, but first we convert back to mono since the librosa library expects mono-channel audio.
# features = am_train.extract_features() # features.head() # vis.plot_spectrogram(features.loc[0,'Melspectogram']) # vis.plot_spectrogram(features.loc[0,'Melspectogram_db'])
_____no_output_____
MIT
notebooks/AudioManipulation.ipynb
DePacifier/AMH-STT
Fictitious Names Introduction: This time you will create the data again. Special thanks to [Chris Albon](http://chrisalbon.com/) for sharing the dataset and materials. All the credit for this exercise belongs to him. To understand more about joins, go [here](https://blog.codinghorror.com/a-visual-explanation-of-sql-joins/). Step 1. Import the necessary libraries
import pandas as pd
_____no_output_____
BSD-3-Clause
05_Merge/Fictitous Names/Exercises_with_solutions.ipynb
ktats/pandas_exercises
Step 2. Create the 3 DataFrames based on the following raw data
raw_data_1 = { 'subject_id': ['1', '2', '3', '4', '5'], 'first_name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'], 'last_name': ['Anderson', 'Ackerman', 'Ali', 'Aoni', 'Atiches']} raw_data_2 = { 'subject_id': ['4', '5', '6', '7', '8'], 'first_name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'], 'last_name': ['Bonder', 'Black', 'Balwner', 'Brice', 'Btisan']} raw_data_3 = { 'subject_id': ['1', '2', '3', '4', '5', '7', '8', '9', '10', '11'], 'test_id': [51, 15, 15, 61, 16, 14, 15, 1, 61, 16]}
_____no_output_____
BSD-3-Clause
05_Merge/Fictitous Names/Exercises_with_solutions.ipynb
ktats/pandas_exercises
Step 3. Assign each to a variable called data1, data2, data3
data1, data2, data3 = (pd.DataFrame(d) for d in (raw_data_1, raw_data_2, raw_data_3)) data3
_____no_output_____
BSD-3-Clause
05_Merge/Fictitous Names/Exercises_with_solutions.ipynb
ktats/pandas_exercises
Step 4. Join the two dataframes along rows and assign all_data
all_data = pd.concat([data1, data2]) all_data
_____no_output_____
BSD-3-Clause
05_Merge/Fictitous Names/Exercises_with_solutions.ipynb
ktats/pandas_exercises
Step 5. Join the two dataframes along columns and assign to all_data_col
all_data_col = pd.concat([data1, data2], axis = 1) all_data_col
_____no_output_____
BSD-3-Clause
05_Merge/Fictitous Names/Exercises_with_solutions.ipynb
ktats/pandas_exercises
Step 6. Print data3
data3
_____no_output_____
BSD-3-Clause
05_Merge/Fictitous Names/Exercises_with_solutions.ipynb
ktats/pandas_exercises
Step 7. Merge all_data and data3 along the subject_id value
pd.merge(all_data, data3, on='subject_id')
_____no_output_____
BSD-3-Clause
05_Merge/Fictitous Names/Exercises_with_solutions.ipynb
ktats/pandas_exercises
Step 8. Merge only the data that has the same 'subject_id' on both data1 and data2
pd.merge(data1, data2, on='subject_id', how='inner')
_____no_output_____
BSD-3-Clause
05_Merge/Fictitous Names/Exercises_with_solutions.ipynb
ktats/pandas_exercises
Step 9. Merge all values in data1 and data2, with matching records from both sides where available.
pd.merge(data1, data2, on='subject_id', how='outer')
_____no_output_____
BSD-3-Clause
05_Merge/Fictitous Names/Exercises_with_solutions.ipynb
ktats/pandas_exercises
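As an optional check that is not part of the original exercise, the `indicator` argument labels which side of the outer merge each `subject_id` came from:
pd.merge(data1, data2, on='subject_id', how='outer', indicator=True)['_merge'].value_counts()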
In this example, we will compare the development of a pairs trading strategy using backtrader and vectorbt.
import numpy as np import pandas as pd import datetime import collections import math import pytz import scipy.stats as st SYMBOL1 = 'PEP' SYMBOL2 = 'KO' FROMDATE = datetime.datetime(2017, 1, 1, tzinfo=pytz.utc) TODATE = datetime.datetime(2019, 1, 1, tzinfo=pytz.utc) PERIOD = 100 CASH = 100000 COMMPERC = 0.005 # 0.5% ORDER_PCT1 = 0.1 ORDER_PCT2 = 0.1 UPPER = st.norm.ppf(1 - 0.05 / 2) LOWER = -st.norm.ppf(1 - 0.05 / 2) MODE = 'OLS' # OLS, log_return
_____no_output_____
Apache-2.0
examples/PairsTrading.ipynb
zhnagchulan/vectorbt
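For reference, the entry thresholds defined above are simply the two-sided 95% quantiles of the standard normal distribution, i.e. roughly ±1.96 standard deviations of the spread's z-score:
print(round(UPPER, 3), round(LOWER, 3))   # 1.96 -1.96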
Data
import vectorbt as vbt start_date = FROMDATE.replace(tzinfo=pytz.utc) end_date = TODATE.replace(tzinfo=pytz.utc) data = vbt.YFData.download([SYMBOL1, SYMBOL2], start=start_date, end=end_date) data = data.loc[(data.wrapper.index >= start_date) & (data.wrapper.index < end_date)] print(data.data[SYMBOL1].iloc[[0, -1]]) print(data.data[SYMBOL2].iloc[[0, -1]])
Open High Low Close \ Date 2017-01-03 00:00:00+00:00 91.831129 91.962386 91.192316 91.577354 2018-12-31 00:00:00+00:00 102.775161 103.249160 101.604091 102.682220 Volume Dividends Stock Splits Date 2017-01-03 00:00:00+00:00 3741200 0.0 0 2018-12-31 00:00:00+00:00 5019100 0.0 0 Open High Low Close \ Date 2017-01-03 00:00:00+00:00 35.801111 36.068542 35.611321 36.059914 2018-12-31 00:00:00+00:00 43.810820 43.856946 43.321878 43.681664 Volume Dividends Stock Splits Date 2017-01-03 00:00:00+00:00 14711000 0.0 0 2018-12-31 00:00:00+00:00 10576300 0.0 0
Apache-2.0
examples/PairsTrading.ipynb
zhnagchulan/vectorbt
backtrader Adapted version of https://github.com/mementum/backtrader/blob/master/contrib/samples/pair-trading/pair-trading.py
import backtrader as bt import backtrader.feeds as btfeeds import backtrader.indicators as btind class CommInfoFloat(bt.CommInfoBase): """Commission schema that keeps size as float.""" params = ( ('stocklike', True), ('commtype', bt.CommInfoBase.COMM_PERC), ('percabs', True), ) def getsize(self, price, cash): if not self._stocklike: return self.p.leverage * (cash / self.get_margin(price)) return self.p.leverage * (cash / price) class OLSSlopeIntercept(btind.PeriodN): """Calculates a linear regression using OLS.""" _mindatas = 2 # ensure at least 2 data feeds are passed packages = ( ('pandas', 'pd'), ('statsmodels.api', 'sm'), ) lines = ('slope', 'intercept',) params = ( ('period', 10), ) def next(self): p0 = pd.Series(self.data0.get(size=self.p.period)) p1 = pd.Series(self.data1.get(size=self.p.period)) p1 = sm.add_constant(p1) intercept, slope = sm.OLS(p0, p1).fit().params self.lines.slope[0] = slope self.lines.intercept[0] = intercept class Log(btind.Indicator): """Calculates log.""" lines = ('log',) def next(self): self.l.log[0] = math.log(self.data[0]) class OLSSpread(btind.PeriodN): """Calculates the z-score of the OLS spread.""" _mindatas = 2 # ensure at least 2 data feeds are passed lines = ('spread', 'spread_mean', 'spread_std', 'zscore',) params = (('period', 10),) def __init__(self): data0_log = Log(self.data0) data1_log = Log(self.data1) slint = OLSSlopeIntercept(data0_log, data1_log, period=self.p.period) spread = data0_log - (slint.slope * data1_log + slint.intercept) self.l.spread = spread self.l.spread_mean = bt.ind.SMA(spread, period=self.p.period) self.l.spread_std = bt.ind.StdDev(spread, period=self.p.period) self.l.zscore = (spread - self.l.spread_mean) / self.l.spread_std class LogReturns(btind.PeriodN): """Calculates the log returns.""" lines = ('logret',) params = (('period', 1),) def __init__(self): self.addminperiod(self.p.period + 1) def next(self): self.l.logret[0] = math.log(self.data[0] / self.data[-self.p.period]) class LogReturnSpread(btind.PeriodN): """Calculates the spread of the log returns.""" _mindatas = 2 # ensure at least 2 data feeds are passed lines = ('logret0', 'logret1', 'spread', 'spread_mean', 'spread_std', 'zscore',) params = (('period', 10),) def __init__(self): self.l.logret0 = LogReturns(self.data0, period=1) self.l.logret1 = LogReturns(self.data1, period=1) self.l.spread = self.l.logret0 - self.l.logret1 self.l.spread_mean = bt.ind.SMA(self.l.spread, period=self.p.period) self.l.spread_std = bt.ind.StdDev(self.l.spread, period=self.p.period) self.l.zscore = (self.l.spread - self.l.spread_mean) / self.l.spread_std class PairTradingStrategy(bt.Strategy): """Basic pair trading strategy.""" params = dict( period=PERIOD, order_pct1=ORDER_PCT1, order_pct2=ORDER_PCT2, printout=True, upper=UPPER, lower=LOWER, mode=MODE ) def log(self, txt, dt=None): if self.p.printout: dt = dt or self.data.datetime[0] dt = bt.num2date(dt) print('%s, %s' % (dt.isoformat(), txt)) def notify_order(self, order): if order.status in [bt.Order.Submitted, bt.Order.Accepted]: return # Await further notifications if order.status == order.Completed: if order.isbuy(): buytxt = 'BUY COMPLETE {}, size = {:.2f}, price = {:.2f}'.format( order.data._name, order.executed.size, order.executed.price) self.log(buytxt, order.executed.dt) else: selltxt = 'SELL COMPLETE {}, size = {:.2f}, price = {:.2f}'.format( order.data._name, order.executed.size, order.executed.price) self.log(selltxt, order.executed.dt) elif order.status in [order.Expired, order.Canceled, order.Margin]: self.log('%s 
,' % order.Status[order.status]) pass # Simply log # Allow new orders self.orderid = None def __init__(self): # To control operation entries self.orderid = None self.order_pct1 = self.p.order_pct1 self.order_pct2 = self.p.order_pct2 self.upper = self.p.upper self.lower = self.p.lower self.status = 0 # Signals performed with PD.OLS : if self.p.mode == 'log_return': self.transform = LogReturnSpread(self.data0, self.data1, period=self.p.period) elif self.p.mode == 'OLS': self.transform = OLSSpread(self.data0, self.data1, period=self.p.period) else: raise ValueError("Unknown mode") self.spread = self.transform.spread self.zscore = self.transform.zscore # For tracking self.spread_sr = pd.Series(dtype=float, name='spread') self.zscore_sr = pd.Series(dtype=float, name='zscore') self.short_signal_sr = pd.Series(dtype=bool, name='short_signals') self.long_signal_sr = pd.Series(dtype=bool, name='long_signals') def next(self): if self.orderid: return # if an order is active, no new orders are allowed self.spread_sr[self.data0.datetime.datetime()] = self.spread[0] self.zscore_sr[self.data0.datetime.datetime()] = self.zscore[0] self.short_signal_sr[self.data0.datetime.datetime()] = False self.long_signal_sr[self.data0.datetime.datetime()] = False if self.zscore[0] > self.upper and self.status != 1: # Check conditions for shorting the spread & place the order self.short_signal_sr[self.data0.datetime.datetime()] = True # Placing the order self.log('SELL CREATE {}, price = {:.2f}, target pct = {:.2%}'.format( self.data0._name, self.data0.close[0], -self.order_pct1)) self.order_target_percent(data=self.data0, target=-self.order_pct1) self.log('BUY CREATE {}, price = {:.2f}, target pct = {:.2%}'.format( self.data1._name, self.data1.close[0], self.order_pct2)) self.order_target_percent(data=self.data1, target=self.order_pct2) self.status = 1 elif self.zscore[0] < self.lower and self.status != 2: # Check conditions for longing the spread & place the order self.long_signal_sr[self.data0.datetime.datetime()] = True # Place the order self.log('SELL CREATE {}, price = {:.2f}, target pct = {:.2%}'.format( self.data1._name, self.data1.close[0], -self.order_pct2)) self.order_target_percent(data=self.data1, target=-self.order_pct2) self.log('BUY CREATE {}, price = {:.2f}, target pct = {:.2%}'.format( self.data0._name, self.data0.close[0], self.order_pct1)) self.order_target_percent(data=self.data0, target=self.order_pct1) self.status = 2 def stop(self): if self.p.printout: print('==================================================') print('Starting Value - %.2f' % self.broker.startingcash) print('Ending Value - %.2f' % self.broker.getvalue()) print('==================================================') class DataAnalyzer(bt.analyzers.Analyzer): """Analyzer to extract OHLCV.""" def create_analysis(self): self.rets0 = {} self.rets1 = {} def next(self): self.rets0[self.strategy.datetime.datetime()] = [ self.data0.open[0], self.data0.high[0], self.data0.low[0], self.data0.close[0], self.data0.volume[0] ] self.rets1[self.strategy.datetime.datetime()] = [ self.data1.open[0], self.data1.high[0], self.data1.low[0], self.data1.close[0], self.data1.volume[0] ] def get_analysis(self): return self.rets0, self.rets1 class CashValueAnalyzer(bt.analyzers.Analyzer): """Analyzer to extract cash and value.""" def create_analysis(self): self.rets = {} def notify_cashvalue(self, cash, value): self.rets[self.strategy.datetime.datetime()] = (cash, value) def get_analysis(self): return self.rets class OrderAnalyzer(bt.analyzers.Analyzer): 
"""Analyzer to extract order price, size, value, and paid commission.""" def create_analysis(self): self.rets0 = {} self.rets1 = {} def notify_order(self, order): if order.status == order.Completed: if order.data._name == SYMBOL1: rets = self.rets0 else: rets = self.rets1 rets[self.strategy.datetime.datetime()] = ( order.executed.price, order.executed.size, -order.executed.size * order.executed.price, order.executed.comm ) def get_analysis(self): return self.rets0, self.rets1 def prepare_cerebro(data0, data1, use_analyzers=True, **params): # Create a cerebro cerebro = bt.Cerebro() # Add the 1st data to cerebro cerebro.adddata(data0) # Add the 2nd data to cerebro cerebro.adddata(data1) # Add the strategy cerebro.addstrategy(PairTradingStrategy, **params) # Add the commission - only stocks like a for each operation cerebro.broker.setcash(CASH) # Add the commission - only stocks like a for each operation comminfo = CommInfoFloat(commission=COMMPERC) cerebro.broker.addcommissioninfo(comminfo) if use_analyzers: # Add analyzers cerebro.addanalyzer(DataAnalyzer) cerebro.addanalyzer(CashValueAnalyzer) cerebro.addanalyzer(OrderAnalyzer) return cerebro class PandasData(btfeeds.PandasData): params = ( # Possible values for datetime (must always be present) # None : datetime is the "index" in the Pandas Dataframe # -1 : autodetect position or case-wise equal name # >= 0 : numeric index to the colum in the pandas dataframe # string : column name (as index) in the pandas dataframe ('datetime', None), ('open', 'Open'), ('high', 'High'), ('low', 'Low'), ('close', 'Close'), ('volume', 'Volume'), ('openinterest', None), ) # Create the 1st data data0 = PandasData(dataname=data.data[SYMBOL1], name=SYMBOL1) # Create the 2nd data data1 = PandasData(dataname=data.data[SYMBOL2], name=SYMBOL2) # Prepare a cerebro cerebro = prepare_cerebro(data0, data1) # And run it bt_strategy = cerebro.run()[0] %matplotlib inline import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = (13, 8) cerebro.plot(iplot=False) # Extract OHLCV bt_s1_rets, bt_s2_rets = bt_strategy.analyzers.dataanalyzer.get_analysis() data_cols = ['open', 'high', 'low', 'close', 'volume'] bt_s1_ohlcv = pd.DataFrame.from_dict(bt_s1_rets, orient='index', columns=data_cols) bt_s2_ohlcv = pd.DataFrame.from_dict(bt_s2_rets, orient='index', columns=data_cols) print(bt_s1_ohlcv.shape) print(bt_s2_ohlcv.shape) print(bt_s1_ohlcv.iloc[[0, -1]]) print(bt_s2_ohlcv.iloc[[0, -1]]) try: np.testing.assert_allclose(bt_s1_ohlcv.values, data.data[SYMBOL1].iloc[:, :5].values) np.testing.assert_allclose(bt_s2_ohlcv.values, data.data[SYMBOL2].iloc[:, :5].values) except AssertionError as e: print(e) # Extract cash and value series bt_cashvalue_rets = bt_strategy.analyzers.cashvalueanalyzer.get_analysis() bt_cashvalue_df = pd.DataFrame.from_dict(bt_cashvalue_rets, orient='index', columns=['cash', 'value']) bt_cash = bt_cashvalue_df['cash'] bt_value = bt_cashvalue_df['value'] print(bt_cash.iloc[[0, -1]]) print(bt_value.iloc[[0, -1]]) # Extract order info bt_s1_order_rets, bt_s2_order_rets = bt_strategy.analyzers.orderanalyzer.get_analysis() order_cols = ['order_price', 'order_size', 'order_value', 'order_comm'] bt_s1_orders = pd.DataFrame.from_dict(bt_s1_order_rets, orient='index', columns=order_cols) bt_s2_orders = pd.DataFrame.from_dict(bt_s2_order_rets, orient='index', columns=order_cols) print(bt_s1_orders.iloc[[0, -1]]) print(bt_s2_orders.iloc[[0, -1]]) # Extract spread and z-score bt_spread = bt_strategy.spread_sr bt_zscore = bt_strategy.zscore_sr 
print(bt_spread.iloc[[0, -1]]) print(bt_zscore.iloc[[0, -1]]) # Extract signals bt_short_signals = bt_strategy.short_signal_sr bt_long_signals = bt_strategy.long_signal_sr print(bt_short_signals[bt_short_signals]) print(bt_long_signals[bt_long_signals]) # How fast is bt? cerebro = prepare_cerebro(data0, data1, use_analyzers=False, printout=False) %timeit cerebro.run(preload=False)
1.07 s ± 16.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Apache-2.0
examples/PairsTrading.ipynb
zhnagchulan/vectorbt
vectorbt Using Portfolio.from_orders
from numba import njit @njit def rolling_logret_zscore_nb(a, b, period): """Calculate the log return spread.""" spread = np.full_like(a, np.nan, dtype=np.float_) spread[1:] = np.log(a[1:] / a[:-1]) - np.log(b[1:] / b[:-1]) zscore = np.full_like(a, np.nan, dtype=np.float_) for i in range(a.shape[0]): from_i = max(0, i + 1 - period) to_i = i + 1 if i < period - 1: continue spread_mean = np.mean(spread[from_i:to_i]) spread_std = np.std(spread[from_i:to_i]) zscore[i] = (spread[i] - spread_mean) / spread_std return spread, zscore @njit def ols_spread_nb(a, b): """Calculate the OLS spread.""" a = np.log(a) b = np.log(b) _b = np.vstack((b, np.ones(len(b)))).T slope, intercept = np.dot(np.linalg.inv(np.dot(_b.T, _b)), np.dot(_b.T, a)) spread = a - (slope * b + intercept) return spread[-1] @njit def rolling_ols_zscore_nb(a, b, period): """Calculate the z-score of the rolling OLS spread.""" spread = np.full_like(a, np.nan, dtype=np.float_) zscore = np.full_like(a, np.nan, dtype=np.float_) for i in range(a.shape[0]): from_i = max(0, i + 1 - period) to_i = i + 1 if i < period - 1: continue spread[i] = ols_spread_nb(a[from_i:to_i], b[from_i:to_i]) spread_mean = np.mean(spread[from_i:to_i]) spread_std = np.std(spread[from_i:to_i]) zscore[i] = (spread[i] - spread_mean) / spread_std return spread, zscore # Calculate OLS z-score using Numba for a nice speedup if MODE == 'OLS': vbt_spread, vbt_zscore = rolling_ols_zscore_nb( bt_s1_ohlcv['close'].values, bt_s2_ohlcv['close'].values, PERIOD ) elif MODE == 'log_return': vbt_spread, vbt_zscore = rolling_logret_zscore_nb( bt_s1_ohlcv['close'].values, bt_s2_ohlcv['close'].values, PERIOD ) else: raise ValueError("Unknown mode") vbt_spread = pd.Series(vbt_spread, index=bt_s1_ohlcv.index, name='spread') vbt_zscore = pd.Series(vbt_zscore, index=bt_s1_ohlcv.index, name='zscore') # Assert equality of bt and vbt z-score arrays pd.testing.assert_series_equal(bt_spread, vbt_spread[bt_spread.index]) pd.testing.assert_series_equal(bt_zscore, vbt_zscore[bt_zscore.index]) # Generate short and long spread signals vbt_short_signals = (vbt_zscore > UPPER).rename('short_signals') vbt_long_signals = (vbt_zscore < LOWER).rename('long_signals') vbt_short_signals, vbt_long_signals = pd.Series.vbt.signals.clean( vbt_short_signals, vbt_long_signals, entry_first=False, broadcast_kwargs=dict(columns_from='keep')) def plot_spread_and_zscore(spread, zscore): fig = vbt.make_subplots(rows=2, cols=1, shared_xaxes=True, vertical_spacing=0.05) spread.vbt.plot(add_trace_kwargs=dict(row=1, col=1), fig=fig) zscore.vbt.plot(add_trace_kwargs=dict(row=2, col=1), fig=fig) vbt_short_signals.vbt.signals.plot_as_exit_markers(zscore, add_trace_kwargs=dict(row=2, col=1), fig=fig) vbt_long_signals.vbt.signals.plot_as_entry_markers(zscore, add_trace_kwargs=dict(row=2, col=1), fig=fig) fig.update_layout(height=500) fig.add_shape( type="rect", xref='paper', yref='y2', x0=0, y0=UPPER, x1=1, y1=LOWER, fillcolor="gray", opacity=0.2, layer="below", line_width=0, ) return fig plot_spread_and_zscore(vbt_spread, vbt_zscore).show_svg() # Assert equality of bt and vbt signal arrays pd.testing.assert_series_equal( bt_short_signals[bt_short_signals], vbt_short_signals[vbt_short_signals] ) pd.testing.assert_series_equal( bt_long_signals[bt_long_signals], vbt_long_signals[vbt_long_signals] ) # Build percentage order size symbol_cols = pd.Index([SYMBOL1, SYMBOL2], name='symbol') vbt_order_size = pd.DataFrame(index=bt_s1_ohlcv.index, columns=symbol_cols) vbt_order_size[SYMBOL1] = np.nan vbt_order_size[SYMBOL2] = np.nan 
vbt_order_size.loc[vbt_short_signals, SYMBOL1] = -ORDER_PCT1 vbt_order_size.loc[vbt_long_signals, SYMBOL1] = ORDER_PCT1 vbt_order_size.loc[vbt_short_signals, SYMBOL2] = ORDER_PCT2 vbt_order_size.loc[vbt_long_signals, SYMBOL2] = -ORDER_PCT2 # Execute at the next bar vbt_order_size = vbt_order_size.vbt.fshift(1) print(vbt_order_size[~vbt_order_size.isnull().any(axis=1)]) # Simulate the portfolio vbt_close_price = pd.concat((bt_s1_ohlcv['close'], bt_s2_ohlcv['close']), axis=1, keys=symbol_cols) vbt_open_price = pd.concat((bt_s1_ohlcv['open'], bt_s2_ohlcv['open']), axis=1, keys=symbol_cols) def simulate_from_orders(): """Simulate using `Portfolio.from_orders`.""" return vbt.Portfolio.from_orders( vbt_close_price, # current close as reference price size=vbt_order_size, price=vbt_open_price, # current open as execution price size_type='targetpercent', val_price=vbt_close_price.vbt.fshift(1), # previous close as group valuation price init_cash=CASH, fees=COMMPERC, cash_sharing=True, # share capital between assets in the same group group_by=True, # all columns belong to the same group call_seq='auto', # sell before buying freq='d' # index frequency for annualization ) vbt_pf = simulate_from_orders() print(vbt_pf.orders.records_readable) # Proof that both bt and vbt produce the same result pd.testing.assert_series_equal(bt_cash, vbt_pf.cash().rename('cash')) pd.testing.assert_series_equal(bt_value, vbt_pf.value().rename('value')) print(vbt_pf.stats()) # Plot portfolio from functools import partial def plot_orders(portfolio, column=None, add_trace_kwargs=None, fig=None): portfolio.orders.plot(column=column, add_trace_kwargs=add_trace_kwargs, fig=fig) vbt_pf.plot(subplots=[ ('symbol1_orders', dict( title=f"Orders ({SYMBOL1})", yaxis_title="Price", check_is_not_grouped=False, plot_func=partial(plot_orders, column=SYMBOL1), pass_column=False )), ('symbol2_orders', dict( title=f"Orders ({SYMBOL2})", yaxis_title="Price", check_is_not_grouped=False, plot_func=partial(plot_orders, column=SYMBOL2), pass_column=False )) ]).show_svg() # How fast is vbt? %timeit simulate_from_orders()
3.04 ms ± 5.31 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Apache-2.0
examples/PairsTrading.ipynb
zhnagchulan/vectorbt
While Portfolio.from_orders is a very convenient and optimized function for simulating portfolios, it requires some prior steps to produce the size array. In the example above, we needed to manually run the calculation of the spread z-score, generate the signals from the z-score, build the size array from the signals, and make sure that all arrays are perfectly aligned. All of these steps must be repeated and adapted accordingly once there is more than one hyperparameter combination to test. Nevertheless, dividing the pipeline into clearly separated backtesting steps helps us to analyze each step thoroughly and actually does wonders for strategy development and debugging. Using Portfolio.from_order_func Portfolio.from_order_func follows a different (self-contained) approach where as many steps as possible should be defined in the simulation function itself. It sequentially processes timestamps one by one and executes orders based on the logic the user defined, rather than parsing this logic from arrays. While this makes order execution less transparent, as you cannot analyze each piece of data on the fly anymore (sadly, no pandas and plotting within Numba), it has one big advantage over other vectorized methods: event-driven order processing. This gives the best flexibility (you can write any logic), safety (a lower probability of exposing yourself to look-ahead and other biases), and performance (you traverse the data only once). Of the methods shown here, this one is the most similar to backtrader.
from vectorbt.portfolio import nb as portfolio_nb from vectorbt.base.reshape_fns import flex_select_auto_nb from vectorbt.portfolio.enums import SizeType, Direction from collections import namedtuple Memory = namedtuple("Memory", ('spread', 'zscore', 'status')) Params = namedtuple("Params", ('period', 'upper', 'lower', 'order_pct1', 'order_pct2')) @njit def pre_group_func_nb(c, _period, _upper, _lower, _order_pct1, _order_pct2): """Prepare the current group (= pair of columns).""" assert c.group_len == 2 # In contrast to bt, we don't have a class instance that we could use to store arrays, # so let's create a namedtuple acting as a container for our arrays # ( you could also pass each array as a standalone object, but a single object is more convenient) spread = np.full(c.target_shape[0], np.nan, dtype=np.float_) zscore = np.full(c.target_shape[0], np.nan, dtype=np.float_) # Note that namedtuples aren't mutable, you can't simply assign a value, # thus make status variable an array of one element for an easy assignment status = np.full(1, 0, dtype=np.int_) memory = Memory(spread, zscore, status) # Treat each param as an array with value per group, and select the combination of params for this group period = flex_select_auto_nb(0, c.group, np.asarray(_period), True) upper = flex_select_auto_nb(0, c.group, np.asarray(_upper), True) lower = flex_select_auto_nb(0, c.group, np.asarray(_lower), True) order_pct1 = flex_select_auto_nb(0, c.group, np.asarray(_order_pct1), True) order_pct2 = flex_select_auto_nb(0, c.group, np.asarray(_order_pct2), True) # Put all params into a container (again, this is optional) params = Params(period, upper, lower, order_pct1, order_pct2) # Create an array that will store our two target percentages used by order_func_nb # we do it here instead of in pre_segment_func_nb to initialize the array once, instead of in each row size = np.empty(c.group_len, dtype=np.float_) # The returned tuple is passed as arguments to the function below return (memory, params, size) @njit def pre_segment_func_nb(c, memory, params, size, mode): """Prepare the current segment (= row within group).""" # We want to perform calculations once we reach full window size if c.i < params.period - 1: size[0] = np.nan # size of nan means no order size[1] = np.nan return (size,) # z-core is calculated using a window (=period) of spread values # This window can be specified as a slice window_slice = slice(max(0, c.i + 1 - params.period), c.i + 1) # Here comes the same as in rolling_ols_zscore_nb if mode == 'OLS': a = c.close[window_slice, c.from_col] b = c.close[window_slice, c.from_col + 1] memory.spread[c.i] = ols_spread_nb(a, b) elif mode == 'log_return': logret_a = np.log(c.close[c.i, c.from_col] / c.close[c.i - 1, c.from_col]) logret_b = np.log(c.close[c.i, c.from_col + 1] / c.close[c.i - 1, c.from_col + 1]) memory.spread[c.i] = logret_a - logret_b else: raise ValueError("Unknown mode") spread_mean = np.mean(memory.spread[window_slice]) spread_std = np.std(memory.spread[window_slice]) memory.zscore[c.i] = (memory.spread[c.i] - spread_mean) / spread_std # Check if any bound is crossed # Since zscore is calculated using close, use zscore of the previous step # This way we are executing signals defined at the previous bar # Same logic as in PairTradingStrategy if memory.zscore[c.i - 1] > params.upper and memory.status[0] != 1: size[0] = -params.order_pct1 size[1] = params.order_pct2 # Here we specify the order of execution # call_seq_now defines order for the current group (2 elements) 
c.call_seq_now[0] = 0 c.call_seq_now[1] = 1 memory.status[0] = 1 elif memory.zscore[c.i - 1] < params.lower and memory.status[0] != 2: size[0] = params.order_pct1 size[1] = -params.order_pct2 c.call_seq_now[0] = 1 # execute the second order first to release funds early c.call_seq_now[1] = 0 memory.status[0] = 2 else: size[0] = np.nan size[1] = np.nan # Group value is converted to shares using previous close, just like in bt # Note that last_val_price contains valuation price of all columns, not just the current pair c.last_val_price[c.from_col] = c.close[c.i - 1, c.from_col] c.last_val_price[c.from_col + 1] = c.close[c.i - 1, c.from_col + 1] return (size,) @njit def order_func_nb(c, size, price, commperc): """Place an order (= element within group and row).""" # Get column index within group (if group starts at column 58 and current column is 59, # the column within group is 1, which can be used to get size) group_col = c.col - c.from_col return portfolio_nb.order_nb( size=size[group_col], price=price[c.i, c.col], size_type=SizeType.TargetPercent, fees=commperc ) def simulate_from_order_func(): """Simulate using `Portfolio.from_order_func`.""" return vbt.Portfolio.from_order_func( vbt_close_price, order_func_nb, vbt_open_price.values, COMMPERC, # *args for order_func_nb pre_group_func_nb=pre_group_func_nb, pre_group_args=(PERIOD, UPPER, LOWER, ORDER_PCT1, ORDER_PCT2), pre_segment_func_nb=pre_segment_func_nb, pre_segment_args=(MODE,), fill_pos_record=False, # a bit faster init_cash=CASH, cash_sharing=True, group_by=True, freq='d' ) vbt_pf2 = simulate_from_order_func() print(vbt_pf2.orders.records_readable) # Proof that both bt and vbt produce the same result pd.testing.assert_series_equal(bt_cash, vbt_pf2.cash().rename('cash')) pd.testing.assert_series_equal(bt_value, vbt_pf2.value().rename('value')) # How fast is vbt? %timeit simulate_from_order_func()
4.44 ms ± 20.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Apache-2.0
examples/PairsTrading.ipynb
zhnagchulan/vectorbt
Numba paradise (or hell?) - fastest
def simulate_nb_from_order_func(): """Simulate using `simulate_nb`.""" # iterate over 502 rows and 2 columns, each element is a potential order target_shape = vbt_close_price.shape # number of columns in the group - exactly two group_lens = np.array([2]) # build default call sequence (orders are executed from the left to the right column) call_seq = portfolio_nb.build_call_seq(target_shape, group_lens) # initial cash per group init_cash = np.array([CASH], dtype=np.float_) order_records, log_records = portfolio_nb.simulate_nb( target_shape=target_shape, close=vbt_close_price.values, # used for target percentage, but we override the valuation price group_lens=group_lens, init_cash=init_cash, cash_sharing=True, call_seq=call_seq, segment_mask=np.full(target_shape, True), # used for disabling some segments pre_group_func_nb=pre_group_func_nb, pre_group_args=(PERIOD, UPPER, LOWER, ORDER_PCT1, ORDER_PCT2), pre_segment_func_nb=pre_segment_func_nb, pre_segment_args=(MODE,), order_func_nb=order_func_nb, order_args=(vbt_open_price.values, COMMPERC), fill_pos_record=False ) return target_shape, group_lens, call_seq, init_cash, order_records, log_records target_shape, group_lens, call_seq, init_cash, order_records, log_records = simulate_nb_from_order_func() # Print order records in a readable format print(vbt.Orders(vbt_close_price.vbt.wrapper, order_records, vbt_close_price).records_readable) # Proof that both bt and vbt produce the same cash from vectorbt.records import nb as records_nb col_map = records_nb.col_map_nb(order_records['col'], target_shape[1]) cash_flow = portfolio_nb.cash_flow_nb(target_shape, order_records, col_map, False) cash_flow_grouped = portfolio_nb.cash_flow_grouped_nb(cash_flow, group_lens) cash_grouped = portfolio_nb.cash_grouped_nb(target_shape, cash_flow_grouped, group_lens, init_cash) pd.testing.assert_series_equal(bt_cash, bt_cash.vbt.wrapper.wrap(cash_grouped)) # Proof that both bt and vbt produce the same value asset_flow = portfolio_nb.asset_flow_nb(target_shape, order_records, col_map, Direction.All) assets = portfolio_nb.assets_nb(asset_flow) asset_value = portfolio_nb.asset_value_nb(vbt_close_price.values, assets) asset_value_grouped = portfolio_nb.asset_value_grouped_nb(asset_value, group_lens) value = portfolio_nb.value_nb(cash_grouped, asset_value_grouped) pd.testing.assert_series_equal(bt_value, bt_value.vbt.wrapper.wrap(value)) # To produce more complex metrics such as stats, it's advisable to use Portfolio, # which can be easily constructed from the arguments and outputs of simulate_nb vbt_pf3 = vbt.Portfolio( wrapper=vbt_close_price.vbt(freq='d', group_by=True).wrapper, close=vbt_close_price, order_records=order_records, log_records=log_records, init_cash=init_cash, cash_sharing=True, call_seq=call_seq ) print(vbt_pf3.stats()) # How fast is vbt? %timeit simulate_nb_from_order_func()
2.24 ms ± 3.67 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Apache-2.0
examples/PairsTrading.ipynb
zhnagchulan/vectorbt
As you can see, writing Numba isn't straightforward and requires at least intermediate knowledge of NumPy. That's why Portfolio.from_orders and other class methods based on arrays are usually a good starting point. Multiple parameters Now, why spend all this energy porting a strategy to vectorbt? Right: for hyperparameter optimization.*The example below is just for demo purposes; brute-forcing many combinations on a single data sample easily leads to overfitting.*
periods = np.arange(10, 105, 5) uppers = np.arange(1.5, 2.2, 0.1) lowers = -1 * np.arange(1.5, 2.2, 0.1) def simulate_mult_from_order_func(periods, uppers, lowers): """Simulate multiple parameter combinations using `Portfolio.from_order_func`.""" # Build param grid param_product = vbt.utils.params.create_param_product([periods, uppers, lowers]) param_tuples = list(zip(*param_product)) param_columns = pd.MultiIndex.from_tuples(param_tuples, names=['period', 'upper', 'lower']) # We need two price columns per param combination vbt_close_price_mult = vbt_close_price.vbt.tile(len(param_columns), keys=param_columns) vbt_open_price_mult = vbt_open_price.vbt.tile(len(param_columns), keys=param_columns) return vbt.Portfolio.from_order_func( vbt_close_price_mult, order_func_nb, vbt_open_price_mult.values, COMMPERC, # *args for order_func_nb pre_group_func_nb=pre_group_func_nb, pre_group_args=( np.array(param_product[0]), np.array(param_product[1]), np.array(param_product[2]), ORDER_PCT1, ORDER_PCT2 ), pre_segment_func_nb=pre_segment_func_nb, pre_segment_args=(MODE,), fill_pos_record=False, init_cash=CASH, cash_sharing=True, group_by=param_columns.names, freq='d' ) vbt_pf_mult = simulate_mult_from_order_func(periods, uppers, lowers) print(vbt_pf_mult.total_return().sort_values()) vbt_pf_mult.total_return().vbt.histplot().show_svg() # How fast is vbt? %timeit simulate_mult_from_order_func(periods, uppers, lowers)
2.16 s ± 16.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Apache-2.0
examples/PairsTrading.ipynb
zhnagchulan/vectorbt
Predictions
'''preds = reg.predict_proba(X_test) preds_valid = clf.predict_proba(X_valid) print(f"BEST VALID SCORE FOR {dataset_name} : {clf.best_cost}") print(f"FINAL TEST SCORE FOR {dataset_name} : {test_auc}")'''
_____no_output_____
MIT
stock_fit.ipynb
LiziCyber/tabnet
Save and load Model
# save tabnet model saving_path_name = "./tabnet_model_test_1" saved_filepath = reg.save_model(saving_path_name) # define new model with basic parameters and load state dict weights loaded_reg = TabNetRegressor() loaded_reg.load_model(saved_filepath)
_____no_output_____
MIT
stock_fit.ipynb
LiziCyber/tabnet
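As a quick sanity check (a sketch assuming `X_valid` and numpy from the earlier cells are still in scope), the reloaded model should reproduce the original model's predictions:
import numpy as np

original_preds = reg.predict(X_valid)
reloaded_preds = loaded_reg.predict(X_valid)
assert np.allclose(original_preds, reloaded_preds)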
Global explainability : feat importance summing to 1
reg.feature_importances_
_____no_output_____
MIT
stock_fit.ipynb
LiziCyber/tabnet
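A minimal sketch for inspecting the importances: rank the features by their global TabNet importance (here by column index, since the original feature names are not shown in this notebook):
import numpy as np

order = np.argsort(reg.feature_importances_)[::-1]
for idx in order[:10]:
    print(idx, reg.feature_importances_[idx])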
Local explainability and masks
explain_matrix, masks = reg.explain(X_valid) fig, axs = plt.subplots(1, 3, figsize=(20,20)) for i in range(3): axs[i].imshow(masks[i][:50]) axs[i].set_title(f"mask {i}")
_____no_output_____
MIT
stock_fit.ipynb
LiziCyber/tabnet
Modeling and Simulation in PythonChapter 1Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0) JupyterWelcome to Modeling and Simulation, welcome to Python, and welcome to Jupyter.This is a Jupyter notebook, which is a development environment where you can write and run Python code. Each notebook is divided into cells. Each cell contains either text (like this cell) or Python code. Selecting and running cellsTo select a cell, click in the left margin next to the cell. You should see a blue frame surrounding the selected cell.To edit a code cell, click inside the cell. You should see a green frame around the selected cell, and you should see a cursor inside the cell.To edit a text cell, double-click inside the cell. Again, you should see a green frame around the selected cell, and you should see a cursor inside the cell.To run a cell, hold down SHIFT and press ENTER. * If you run a text cell, Jupyter typesets the text and displays the result.* If you run a code cell, it runs the Python code in the cell and displays the result, if any.To try it out, edit this cell, change some of the text, and then press SHIFT-ENTER to run it. Adding and removing cellsYou can add and remove cells from a notebook using the buttons in the toolbar and the items in the menu, both of which you should see at the top of this notebook.Try the following exercises:1. From the Insert menu select "Insert cell below" to add a cell below this one. By default, you get a code cell, as you can see in the pulldown menu that says "Code".2. In the new cell, add a print statement like `print('Hello')`, and run it.3. Add another cell, select the new cell, and then click on the pulldown menu that says "Code" and select "Markdown". This makes the new cell a text cell.4. In the new cell, type some text, and then run it.5. Use the arrow buttons in the toolbar to move cells up and down.6. Use the cut, copy, and paste buttons to delete, add, and move cells.7. As you make changes, Jupyter saves your notebook automatically, but if you want to make sure, you can press the save button, which looks like a floppy disk from the 1990s.8. Finally, when you are done with a notebook, select "Close and Halt" from the File menu. Using the notebooksThe notebooks for each chapter contain the code from the chapter along with addition examples, explanatory text, and exercises. I recommend you 1. Read the chapter first to understand the concepts and vocabulary, 2. Run the notebook to review what you learned and see it in action, and then3. Attempt the exercises.If you try to work through the notebooks without reading the book, you're gonna have a bad time. The notebooks contain some explanatory text, but it is probably not enough to make sense if you have not read the book. If you are working through a notebook and you get stuck, you might want to re-read (or read!) the corresponding section of the book. Importing modsimThe following cell imports `modsim`, which is a collection of functions we will use throughout the book. Whenever you start the notebook, you will have to run the following cell. It does three things:1. It uses a Jupyter "magic command" to specify whether figures should appear in the notebook, or pop up in a new window.2. It configures Jupyter to display some values that would otherwise be invisible. 3. It imports everything defined in `modsim`.Select the following cell and press SHIFT-ENTER to run it.
# Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' # import functions from the modsim library from modsim import * print('If this cell runs successfully, it produces no output other than this message.')
_____no_output_____
MIT
notebooks/chap01.ipynb
erhardt/ModSimPy
The first time you run this on a new installation of Python, it might produce a warning message in pink. That's probably ok, but if you get a message that says `modsim.py depends on Python 3.7 features`, that means you have an older version of Python, and some features in `modsim.py` won't work correctly.If you need a newer version of Python, I recommend installing Anaconda. You'll find more information in the preface of the book. The penny mythThe following cells contain code from the beginning of Chapter 1.`modsim` defines `UNITS`, which contains variables representing pretty much every unit you've ever heard of. It uses [Pint](https://pint.readthedocs.io/en/latest/), which is a Python library that provides tools for computing with units.The following lines create new variables named `meter` and `second`.
meter = UNITS.meter second = UNITS.second
_____no_output_____
MIT
notebooks/chap01.ipynb
erhardt/ModSimPy
To find out what other units are defined, type `UNITS.` (including the period) in the next cell and then press TAB. You should see a pop-up menu with a list of units. Create a variable named `a` and give it the value of acceleration due to gravity.
a = 9.8 * meter / second**2
_____no_output_____
MIT
notebooks/chap01.ipynb
erhardt/ModSimPy
Create `t` and give it the value 4 seconds.
t = 4 * second
_____no_output_____
MIT
notebooks/chap01.ipynb
erhardt/ModSimPy
Compute the distance a penny would fall after `t` seconds with constant acceleration `a`. Notice that the units of the result are correct.
a * t**2 / 2
_____no_output_____
MIT
notebooks/chap01.ipynb
erhardt/ModSimPy
**Exercise**: Compute the velocity of the penny after `t` seconds. Check that the units of the result are correct.
# Solution goes here
_____no_output_____
MIT
notebooks/chap01.ipynb
erhardt/ModSimPy
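One possible solution (not part of the original notebook): with constant acceleration, the velocity is simply `a * t`, and the result carries units of meters per second.
v = a * t
v   # ~39.2 meter / second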
**Exercise**: Why would it be nonsensical to add `a` and `t`? What happens if you try?
# Solution goes here
_____no_output_____
MIT
notebooks/chap01.ipynb
erhardt/ModSimPy
The error messages you get from Python are big and scary, but if you read them carefully, they contain a lot of useful information.
1. Start from the bottom and read up.
2. The last line usually tells you what type of error happened, and sometimes additional information.
3. The previous lines are a "traceback" of what was happening when the error occurred. The first section of the traceback shows the code you wrote. The following sections are often from Python libraries.
In this example, you should get a `DimensionalityError`, which is defined by Pint to indicate that you have violated a rule of dimensional analysis: you cannot add quantities with different dimensions. Before you go on, you might want to delete the erroneous code so the notebook can run without errors. Falling pennies Now let's solve the falling penny problem. Set `h` to the height of the Empire State Building:
h = 381 * meter
_____no_output_____
MIT
notebooks/chap01.ipynb
erhardt/ModSimPy
Compute the time it would take a penny to fall, assuming constant acceleration.$ a t^2 / 2 = h $$ t = \sqrt{2 h / a}$
t = sqrt(2 * h / a)
_____no_output_____
MIT
notebooks/chap01.ipynb
erhardt/ModSimPy
Given `t`, we can compute the velocity of the penny when it lands.$v = a t$
v = a * t
_____no_output_____
MIT
notebooks/chap01.ipynb
erhardt/ModSimPy
We can convert from one set of units to another like this:
mile = UNITS.mile hour = UNITS.hour v.to(mile/hour)
_____no_output_____
MIT
notebooks/chap01.ipynb
erhardt/ModSimPy
**Exercise:** Suppose you bring a 10 foot pole to the top of the Empire State Building and use it to drop the penny from `h` plus 10 feet.Define a variable named `foot` that contains the unit `foot` provided by `UNITS`. Define a variable named `pole_height` and give it the value 10 feet.What happens if you add `h`, which is in units of meters, to `pole_height`, which is in units of feet? What happens if you write the addition the other way around?
# Solution goes here # Solution goes here
_____no_output_____
MIT
notebooks/chap01.ipynb
erhardt/ModSimPy
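A possible solution sketch: Pint converts compatible units automatically, and the result of an addition takes the units of the left operand.
foot = UNITS.foot
pole_height = 10 * foot
print(h + pole_height)     # ~384.05 meter
print(pole_height + h)     # 1260.0 foot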
**Exercise:** In reality, air resistance limits the velocity of the penny. At about 18 m/s, the force of air resistance equals the force of gravity and the penny stops accelerating.As a simplification, let's assume that the acceleration of the penny is `a` until the penny reaches 18 m/s, and then 0 afterwards. What is the total time for the penny to fall 381 m?You can break this question into three parts:1. How long until the penny reaches 18 m/s with constant acceleration `a`.2. How far would the penny fall during that time?3. How long to fall the remaining distance with constant velocity 18 m/s?Suggestion: Assign each intermediate result to a variable with a meaningful name. And assign units to all quantities!
# Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here
_____no_output_____
MIT
notebooks/chap01.ipynb
erhardt/ModSimPy
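One possible solution, broken into the three parts suggested above and using the variables already defined in this notebook:
v_term = 18 * meter / second     # terminal velocity
t1 = v_term / a                  # time to reach 18 m/s (~1.84 s)
d1 = a * t1**2 / 2               # distance fallen during that time (~16.5 m)
d2 = h - d1                      # remaining distance (~364.5 m)
t2 = d2 / v_term                 # time at constant velocity (~20.2 s)
t1 + t2                          # total time, ~22.1 seconds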
Global Imports
%pylab inline
Populating the interactive namespace from numpy and matplotlib
MIT
Notebooks/HNSCC_Imports.ipynb
theandygross/CancerData
External Package Imports
import os as os import pickle as pickle import pandas as pd
_____no_output_____
MIT
Notebooks/HNSCC_Imports.ipynb
theandygross/CancerData
Module Imports
from Stats.Scipy import * from Stats.Survival import * from Helpers.Pandas import * from Figures.FigureHelpers import * from Figures.Pandas import * from Figures.Boxplots import * from Figures.Survival import draw_survival_curve, survival_and_stats from Figures.Survival import draw_survival_curves from Figures.Survival import survival_stat_plot import Data.Firehose as FH
_____no_output_____
MIT
Notebooks/HNSCC_Imports.ipynb
theandygross/CancerData
Tweaking Display Parameters
pd.set_option('precision', 3) pd.set_option('display.width', 300) plt.rcParams['font.size'] = 12 '''Color schemes for paper taken from http://colorbrewer2.org/''' colors = plt.rcParams['axes.color_cycle'] colors_st = ['#CA0020', '#F4A582', '#92C5DE', '#0571B0'] colors_th = ['#E66101', '#FDB863', '#B2ABD2', '#5E3C99']
_____no_output_____
MIT
Notebooks/HNSCC_Imports.ipynb
theandygross/CancerData
Function to Pull a Firehose Run Container
def get_run(firehose_dir, version='Latest'): ''' Helper to get a run from the file-system. ''' path = '{}/ucsd_analyses'.format(firehose_dir) if version == 'Latest': version = sorted(os.listdir(path))[-1] run = pickle.load(open('{}/{}/RunObject.p'.format(path, version), 'rb')) return run
_____no_output_____
MIT
Notebooks/HNSCC_Imports.ipynb
theandygross/CancerData
Read In Data Here we read in the pre-processed data that we downloaded and initialized in the [download_data notebook](download_data.ipynb).
print 'populating namespace with data' OUT_PATH = '/cellar/users/agross/TCGA_Code/CancerData/Data' RUN_DATE = '2014_07_15' VERSION = 'all' CANCER = 'HNSC' FIGDIR = '../Figures/' if not os.path.isdir(FIGDIR): os.makedirs(FIGDIR) run_path = '{}/Firehose__{}/'.format(OUT_PATH, RUN_DATE) run = get_run(run_path, 'Run_' + VERSION) run.data_path = run_path run.result_path = run_path + 'ucsd_analyses' run.report_path = run_path + 'ucsd_analyses/Run_all' cancer = run.load_cancer(CANCER) cancer.path = '{}/{}'.format(run.report_path , cancer.name) clinical = cancer.load_clinical() mut = cancer.load_data('Mutation') mut.uncompress() cn = cancer.load_data('CN_broad') cn.uncompress() rna = FH.read_rnaSeq(run.data_path, cancer.name, tissue_code='All') mirna = FH.read_miRNASeq(run.data_path, cancer.name, tissue_code='All') keepers_o = pd.read_csv('/cellar/users/agross/TCGA_Code/TCGA_Working/Data/Firehose__2014_04_16/' + 'old_keepers.csv', index_col=0, squeeze=True) keepers_o = array(keepers_o)
_____no_output_____
MIT
Notebooks/HNSCC_Imports.ipynb
theandygross/CancerData
Update Clinical Data
from Processing.ProcessClinicalDataPortal import update_clinical_object p = '/cellar/users/agross/TCGA_Code/TCGA/Data' path = p + '/Followup_R9/HNSC/' clinical = update_clinical_object(clinical, path) clinical.clinical.shape #hpv = clinical.hpv surv = clinical.survival.survival_5y age = clinical.clinical.age.astype(float) old = pd.Series(1.*(age>=75), name='old') p = '/cellar/users/agross/TCGA_Code/TCGA/Data' f = p + '/MAFs/PR_TCGA_HNSC_PAIR_Capture_All_Pairs_QCPASS_v4.aggregated.capture.tcga.uuid.automated.somatic.maf.txt' mut_new = pd.read_table(f, skiprows=4, low_memory=False) keep = (mut_new.Variant_Classification.isin(['Silent', 'Intron', "3'UTR", "5'UTR"])==False) mut_new = mut_new[keep] mut_new['barcode'] = mut_new.Tumor_Sample_Barcode.map(lambda s: s[:12]) mut_new = mut_new.groupby(['barcode','Hugo_Symbol']).size().unstack().fillna(0).T mut_new = mut.df.combine_first(mut_new).fillna(0) gistic = FH.get_gistic_gene_matrix(run.data_path, cancer.name) del_3p = gistic.ix['3p14.2'].median(0) del_3p.name = '3p_deletion'
_____no_output_____
MIT
Notebooks/HNSCC_Imports.ipynb
theandygross/CancerData
HPV Data
p = '/cellar/users/agross/TCGA_Code/TCGA/' hpv_all = pd.read_csv(p + '/Extra_Data/hpv_summary_3_20_13_distribute.csv', index_col=0) hpv = hpv_all.Molecular_HPV.map({0:'HPV-', 1:'HPV+'}) hpv.name = 'HPV' hpv_seq = hpv status = clinical.clinical[['hpvstatusbyishtesting','hpvstatusbyp16testing']] status = status.replace('[Not Evaluated]', nan) hpv_clin = (status.dropna() == 'Positive').sum(1) hpv_clin = hpv_clin.map({2: 'HPV+', 0:'HPV-', 1:nan}).dropna() hpv_clin.value_counts() hpv_clin.ix[hpv_clin.index.diff(hpv_seq.index)].value_counts() hpv_new = pd.read_table(p + '/Data/Followup_R6/HNSC/auxiliary_hnsc.txt', skiprows=[1], index_col=0, na_values=['[Not Available]']) hpv_new = hpv_new['hpv_status'] hpv = (hpv_seq.dropna() == 'HPV+').combine_first(hpv_new == 'Positive') hpv.name = 'HPV' hpv.value_counts() n = ti(hpv==False) fisher_exact_test(del_3p<0, mut_new.ix['TP53'].ix[n.diff(keepers_o)]>0)
_____no_output_____
MIT
Notebooks/HNSCC_Imports.ipynb
theandygross/CancerData
This notebook has been added only for practice alongside studying the lesson material.
import numpy as np i = np.identity(3) b = i == 0 print(i[b].shape) x = np.full((3, 3), 8) print(np.dot(i, x))
[[8. 8. 8.] [8. 8. 8.] [8. 8. 8.]]
MIT
quera/13609/46444/solution.ipynb
TheMn/Quera-College-ML-Course
3 - Displaying Histograms and Crossplots Created by: Andy McDonald The following tutorial illustrates how to display well data from a LAS file on histograms and crossplots. Loading Well Data from CSV The following cells load data in from a CSV file and replace the null values (-999.25) with Not a Number (NaN) values. More detail can be found in 1. Loading and Displaying Well Data From CSV.
import os import pandas as pd import numpy as np import matplotlib.pyplot as plt root = '/users/kai/desktop/data_science/data/dongara' well_name = 'dongara_20' file_format = '.csv' well = pd.read_csv(os.path.join(root,well_name+file_format), header=0) well.replace(-999.25, np.nan, inplace=True) cols = well.columns[well.dtypes.eq('object')] well[cols] = well[cols].apply(pd.to_numeric, errors='coerce') well.head(10)
_____no_output_____
MIT
03 - Displaying Histograms and Crossplots.ipynb
will6309/Petrophysics-Python-Series
Displaying data on a histogram Displaying a simple histogram can be done by calling the .hist function on the well dataframe and specifying the column.
well.hist(column="GR")
_____no_output_____
MIT
03 - Displaying Histograms and Crossplots.ipynb
will6309/Petrophysics-Python-Series
The number of bins can be controled by the bins parameter:
well.hist(column="GR", bins = 30)
_____no_output_____
MIT
03 - Displaying Histograms and Crossplots.ipynb
will6309/Petrophysics-Python-Series
We can also change the opacity of the bars by using the alpha parameter:
well.hist(column="GR", bins = 30, alpha = 0.5)
_____no_output_____
MIT
03 - Displaying Histograms and Crossplots.ipynb
will6309/Petrophysics-Python-Series
Plotting multiple histograms on one plot It can be more efficient to loop over the columns (curves) within the dataframe and create a plot with multiple histograms, rather than duplicating the previous line multiple times. First we need to create a list of our curve names.
cols_to_plot = list(well)
_____no_output_____
MIT
03 - Displaying Histograms and Crossplots.ipynb
will6309/Petrophysics-Python-Series
We can remove the depth curve from our list and focus on our curves. The same line can be applied to other curves that need removing.
cols_to_plot.remove("DEPT") #Setup the number of rows and columns for our plot rows = 5 cols = 2 fig=plt.figure(figsize=(10,10)) for i, feature in enumerate(cols_to_plot): ax=fig.add_subplot(rows,cols,i+1) well[feature].hist(bins=20,ax=ax,facecolor='green', alpha=0.6) ax.set_title(feature+" Distribution") ax.set_axisbelow(True) ax.grid(color='whitesmoke') plt.tight_layout() plt.show()
_____no_output_____
MIT
03 - Displaying Histograms and Crossplots.ipynb
will6309/Petrophysics-Python-Series
Displaying data on a crossplot (Scatterplot) As seen in the first notebook, we can display a crossplot by simply doing the following. Using the c argument, we can add a third curve to colour the data.
well.plot(kind="scatter", x="NPHI", y="RHOB", c="GR", colormap="YlOrRd_r", ylim=(3,2))
_____no_output_____
MIT
03 - Displaying Histograms and Crossplots.ipynb
will6309/Petrophysics-Python-Series
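As a quick numeric companion to the crossplot above, the linear correlation between neutron porosity and bulk density can be computed directly; for this kind of data it is typically strongly negative.
print(well[["NPHI", "RHOB"]].corr())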
We can take the above crossplot and create a 3D version. First we need to make sure the Jupyter notebook is set up for displaying 3D plots using the following command.
%matplotlib inline from mpl_toolkits.mplot3d import Axes3D fig = plt.figure(figsize=(5,5)) ax = fig.add_subplot(111, projection="3d") ax.scatter(well["NPHI"], well["RHOB"], well["GR"], alpha= 0.3, c="r")
_____no_output_____
MIT
03 - Displaying Histograms and Crossplots.ipynb
will6309/Petrophysics-Python-Series
If we want to have multiple crossplots on view, we can do this by:
fig, ax = plt.subplots(figsize=(10,10)) ax1 = plt.subplot2grid((2,2), (0,0), rowspan=1, colspan=1) ax2 = plt.subplot2grid((2,2), (0,1), rowspan=1, colspan=1) ax3 = plt.subplot2grid((2,2), (1,0), rowspan=1, colspan=1) ax4 = plt.subplot2grid((2,2), (1,1), rowspan=1, colspan=1) ax1.scatter(x= "NPHI", y= "RHOB", data= well, marker="s", alpha= 0.2) ax1.set_ylim(3, 1.8) ax1.set_ylabel("RHOB (g/cc)") ax1.set_xlabel("NPHI (dec)") ax2.scatter(x= "GR", y= "RHOB", data= well, marker="p", alpha= 0.2) ax1.set_ylim(3, 1.8) ax2.set_ylabel("RHOB (g/cc)") ax2.set_xlabel("GR (API)") ax3.scatter(x= "DT", y= "RHOB", data= well, marker="*", alpha= 0.2) ax3.set_ylim(3, 1.8) ax3.set_ylabel("RHOB (g/cc)") ax3.set_xlabel("DT (us/ft)") ax4.scatter(x= "GR", y= "DT", data= well, marker="D", alpha= 0.2) ax4.set_ylabel("DT (us/ft)") ax4.set_xlabel("GR (API)") plt.tight_layout()
_____no_output_____
MIT
03 - Displaying Histograms and Crossplots.ipynb
will6309/Petrophysics-Python-Series
Ranking multiple systems In this notebook, we consider the situation where we have scores from multiple different automated scoring systems, each with different levels of performance. We evaluate these systems against the same as well as different pairs of raters and show that:
1. When using the same pair of raters to evaluate all of the systems, all metrics including PRMSE are able to rank the systems accurately.
2. However, when a different pair of raters is chosen for each system, the conventional agreement metrics are not able to rank the systems accurately whereas PRMSE still does.
import itertools
import json

import pandas as pd
import numpy as np
import seaborn as sns

from matplotlib import pyplot as plt
from pathlib import Path
from rsmtool.utils.prmse import prmse_true

from simulation.dataset import Dataset
from simulation.utils import (compute_agreement_one_system_one_rater_pair,
                              compute_agreement_multiple_systems_one_rater_pair,
                              compute_ranks_from_metrics)
_____no_output_____
MIT
notebooks/ranking_multiple_systems.ipynb
EducationalTestingService/prmse-simulations
Step 1: Setup To set up the experiment, we first load the dataset that we already created and saved in the `making_a_dataset.ipynb` notebook. For convenience and replicability, we have pre-defined many of the parameters used in our notebooks and saved them in the file `settings.json`. We load this file below.
# load the dataset file
dataset = Dataset.from_file('../data/default.dataset')

# let's remind ourselves what the dataset looks like
print(dataset)

# load the experimental settings file
experiment_settings = json.load(open('settings.json', 'r'))

# now get the data frames for our loaded dataset
df_scores, df_rater_metadata, df_system_metadata = dataset.to_frames()
_____no_output_____
MIT
notebooks/ranking_multiple_systems.ipynb
EducationalTestingService/prmse-simulations
Step 2: Evaluate all systems against the same pair of raters First, we evaluate the scores assigned by all our simulated systems against the same pair of simulated human raters from the dataset. To simulate the more usual scenario, we sample two raters from the "average" rater category.
# define our pre-selected rater category
chosen_rater_category = "average"

# get the list of rater IDs in this category
rater_ids = df_rater_metadata[df_rater_metadata['rater_category'] == chosen_rater_category]['rater_id']

# choose 2 rater IDs randomly from these
chosen_rater_pair = rater_id1, rater_id2 = rater_ids.sample(n=2, random_state=1234567890).values.tolist()

# print this pair out
print(f'we chose the rater pair: {chosen_rater_pair}')
we chose the rater pair: ['h_107', 'h_101']
MIT
notebooks/ranking_multiple_systems.ipynb
EducationalTestingService/prmse-simulations
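As a quick sanity check (a sketch that only uses the metadata columns referenced above), we can confirm that both sampled raters do belong to the chosen category:

# look up the metadata rows for the two sampled raters
df_rater_metadata[df_rater_metadata['rater_id'].isin(chosen_rater_pair)]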
Now, we compute the agreement metrics as well as the PRMSE values for all of the simulated systems in our dataset against our pre-selected rater pair.
# initialize some lists that will hold our metric and PRMSE values for each category
metric_dfs = []
prmse_series = []

# iterate over each system category
for system_category in dataset.system_categories:

    # get the system IDs that belong to this system category
    system_ids_for_category = df_system_metadata[df_system_metadata['system_category'] == system_category]['system_id']

    # compute the agreement metrics for all of the systems in this category against our chosen rater pair
    df_metrics_for_category = compute_agreement_multiple_systems_one_rater_pair(df_scores,
                                                                                system_ids_for_category,
                                                                                chosen_rater_pair[0],
                                                                                chosen_rater_pair[1],
                                                                                include_mean=True)

    # note that `compute_agreement_multiple_systems_one_rater_pair()` returns the metric values
    # against both the average of the two raters' scores as well as the first rater's scores;
    # for this analysis, we choose to use only the metric values against the average
    df_metrics_for_category = df_metrics_for_category[df_metrics_for_category['reference'] == 'h1-h2 mean']
    df_metrics_for_category.drop('reference', axis=1, inplace=True)

    # save the system category in the data frame too since we need it for plotting
    df_metrics_for_category['system_category'] = system_category

    # save this metrics dataframe in the list
    metric_dfs.append(df_metrics_for_category)

    # compute the PRMSE values for all of the systems in this category against our chosen rater pair
    prmse_series_for_category = system_ids_for_category.apply(lambda system_id: prmse_true(df_scores[system_id],
                                                                                           df_scores[[rater_id1, rater_id2]]))

    # save these PRMSE values in the list
    prmse_series.append(prmse_series_for_category)

# combine all of the per-category agreement metric values into a single data frame
df_metrics_same_rater_pair_with_categories = pd.concat(metric_dfs).reset_index(drop=True)

# and combine all of the per-category PRMSE values and add them as another column in the same data frame
df_metrics_same_rater_pair_with_categories['PRMSE'] = pd.concat(prmse_series)
_____no_output_____
MIT
notebooks/ranking_multiple_systems.ipynb
EducationalTestingService/prmse-simulations
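Before ranking, it can be helpful to eyeball the category-level averages. This is only a sketch: apart from 'system_category' and 'PRMSE', the exact metric columns depend on what compute_agreement_multiple_systems_one_rater_pair() returns.

# average each numeric metric (including PRMSE) within a system category
df_metrics_same_rater_pair_with_categories.groupby('system_category').mean(numeric_only=True)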
Next, we use each metric's values to compute a rank for every system.
# compute the ranks given the metric values
df_ranks_same_rater_pair = compute_ranks_from_metrics(df_metrics_same_rater_pair_with_categories)

# now compute a longer version of this rank data frame that is more amenable to plotting
df_ranks_same_rater_pair_long = df_ranks_same_rater_pair.melt(id_vars=['system_category', 'system_id'],
                                                              var_name='metric',
                                                              value_name='rank')

# plot the ranks
sns.catplot(x='system_category', y='rank', data=df_ranks_same_rater_pair_long, col='metric', kind='box')
plt.show()
_____no_output_____
MIT
notebooks/ranking_multiple_systems.ipynb
EducationalTestingService/prmse-simulations
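As a numeric complement to the boxplots, we can also tabulate the mean rank each metric assigns to each system category. A sketch using only the 'metric', 'system_category', and 'rank' columns created by the melt above:

# mean rank per metric and system category
(df_ranks_same_rater_pair_long
     .groupby(['metric', 'system_category'])['rank']
     .mean()
     .unstack('system_category'))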