markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
How To Break Into the FieldNow you have had a closer look at the data, and you saw how I approached looking at how the survey respondents think you should break into the field. Let's recreate those results, as well as take a look at another question. | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import HowToBreakIntoTheField as t
%matplotlib inline
df = pd.read_csv('./survey_results_public.csv')
schema = pd.read_csv('./survey_results_schema.csv')
df.head() | _____no_output_____ | CC0-1.0 | Chapter01__Introduction_to_Data_Science/How To Break Into the Field - Solution .ipynb | marceloestevam/Nanodegree_DataScientist |
Question 1**1.** In order to understand how to break into the field, we will look at the **CousinEducation** field. Use the **schema** dataset to answer this question. Write a function called **get_description** that takes the **schema dataframe** and the **column** as a string, and returns a string of the description for that column. | def get_description(column_name, schema=schema):
'''
INPUT - schema - pandas dataframe with the schema of the developers survey
column_name - string - the name of the column you would like to know about
OUTPUT -
desc - string - the description of the column
'''
desc = list(schema[schema['Column'] == column_name]['Question'])[0]
return desc
#test your code
#Check your function against solution - you shouldn't need to change any of the below code
get_description(df.columns[0]) # This should return a string of the first column description
#Check your function against solution - you shouldn't need to change any of the below code
descrips = set(get_description(col) for col in df.columns)
t.check_description(descrips) | _____no_output_____ | CC0-1.0 | Chapter01__Introduction_to_Data_Science/How To Break Into the Field - Solution .ipynb | marceloestevam/Nanodegree_DataScientist |
The question we have been focused on has been around how to break into the field. Use your **get_description** function below to take a closer look at the **CousinEducation** column. | get_description('CousinEducation') | _____no_output_____ | CC0-1.0 | Chapter01__Introduction_to_Data_Science/How To Break Into the Field - Solution .ipynb | marceloestevam/Nanodegree_DataScientist |
Question 2**2.** Provide a pandas series of the different **CousinEducation** status values in the dataset. Store this pandas series in **cous_ed_vals**. If you are correct, you should see a bar chart of the proportion of individuals in each status. If it looks terrible, and you get no information from it, then you followed directions. However, we should clean this up! | cous_ed_vals = df.CousinEducation.value_counts()#Provide a pandas series of the counts for each CousinEducation status
cous_ed_vals # assure this looks right
# The below should be a bar chart of the proportion of individuals in your ed_vals
# if it is set up correctly.
(cous_ed_vals/df.shape[0]).plot(kind="bar");
plt.title("Formal Education"); | _____no_output_____ | CC0-1.0 | Chapter01__Introduction_to_Data_Science/How To Break Into the Field - Solution .ipynb | marceloestevam/Nanodegree_DataScientist |
We definitely need to clean this. Above is an example of what happens when you do not clean your data. Below I am using the same code you saw in the earlier video to take a look at the data after it has been cleaned. | possible_vals = ["Take online courses", "Buy books and work through the exercises",
"None of these", "Part-time/evening courses", "Return to college",
"Contribute to open source", "Conferences/meet-ups", "Bootcamp",
"Get a job as a QA tester", "Participate in online coding competitions",
"Master's degree", "Participate in hackathons", "Other"]
def clean_and_plot(df, title='Method of Educating Suggested', plot=True):
'''
INPUT
df - a dataframe holding the CousinEducation column
title - string the title of your plot
plot - bool providing whether or not you want a plot back
OUTPUT
props_study_df - a dataframe with the proportion of individuals who suggested each method
Displays a plot of pretty things related to the CousinEducation column.
'''
study = df['CousinEducation'].value_counts().reset_index()
study.rename(columns={'index': 'method', 'CousinEducation': 'count'}, inplace=True)
study_df = t.total_count(study, 'method', 'count', possible_vals)
study_df.set_index('method', inplace=True)
if plot:
(study_df/study_df.sum()).plot(kind='bar', legend=None);
plt.title(title);
plt.show()
props_study_df = study_df/study_df.sum()
return props_study_df
props_df = clean_and_plot(df) | _____no_output_____ | CC0-1.0 | Chapter01__Introduction_to_Data_Science/How To Break Into the Field - Solution .ipynb | marceloestevam/Nanodegree_DataScientist |
Question 4**4.** I wonder if some of the individuals might have bias towards their own degrees. Complete the function below that will apply to the elements of the **FormalEducation** column in **df**. | def higher_ed(formal_ed_str):
'''
INPUT
formal_ed_str - a string of one of the values from the Formal Education column
OUTPUT
return 1 if the string is in ("Master's degree", "Professional degree")
return 0 otherwise
'''
if formal_ed_str in ("Master's degree", "Professional degree"):
return 1
else:
return 0
df["FormalEducation"].apply(higher_ed)[:5] #Test your function to assure it provides 1 and 0 values for the df
# Check your code here
df['HigherEd'] = df["FormalEducation"].apply(higher_ed)
higher_ed_perc = df['HigherEd'].mean()
t.higher_ed_test(higher_ed_perc) | _____no_output_____ | CC0-1.0 | Chapter01__Introduction_to_Data_Science/How To Break Into the Field - Solution .ipynb | marceloestevam/Nanodegree_DataScientist |
Question 5**5.** Now we would like to find out if individuals who completed one of these programs feel differently than those that did not. Store a dataframe of only the individuals who had **HigherEd** equal to 1 in **ed_1**. Similarly, store a dataframe of only the **HigherEd** equal to 0 values in **ed_0**. Notice, you have already created the **HigherEd** column using the check code portion above, so here you only need to subset the dataframe using this newly created column. | ed_1 = df[df['HigherEd'] == 1] # Subset df to only those with HigherEd of 1
ed_0 = df[df['HigherEd'] == 0] # Subset df to only those with HigherEd of 0
print(ed_1['HigherEd'][:5]) #Assure it looks like what you would expect
print(ed_0['HigherEd'][:5]) #Assure it looks like what you would expect
#Check your subset is correct - you should get a plot that was created using pandas styling
#which you can learn more about here: https://pandas.pydata.org/pandas-docs/stable/style.html
ed_1_perc = clean_and_plot(ed_1, 'Higher Formal Education', plot=False)
ed_0_perc = clean_and_plot(ed_0, 'Max of Bachelors Higher Ed', plot=False)
comp_df = pd.merge(ed_1_perc, ed_0_perc, left_index=True, right_index=True)
comp_df.columns = ['ed_1_perc', 'ed_0_perc']
comp_df['Diff_HigherEd_Vals'] = comp_df['ed_1_perc'] - comp_df['ed_0_perc']
comp_df.style.bar(subset=['Diff_HigherEd_Vals'], align='mid', color=['#d65f5f', '#5fba7d']) | _____no_output_____ | CC0-1.0 | Chapter01__Introduction_to_Data_Science/How To Break Into the Field - Solution .ipynb | marceloestevam/Nanodegree_DataScientist |
Question 6**6.** What can you conclude from the above plot? Change the dictionary to mark **True** for the keys of any statements you can conclude, and **False** for any of the statements you cannot conclude. | sol = {'Everyone should get a higher level of formal education': False,
'Regardless of formal education, online courses are the top suggested form of education': True,
'There is less than a 1% difference between suggestions of the two groups for all forms of education': False,
'Those with higher formal education suggest it more than those who do not have it': True}
t.conclusions(sol) | _____no_output_____ | CC0-1.0 | Chapter01__Introduction_to_Data_Science/How To Break Into the Field - Solution .ipynb | marceloestevam/Nanodegree_DataScientist |
Label encoding | # Applying label encoding to the SERVICE_TYPE column
df_product_and_complaint['S_ID'] = df_product_and_complaint['SERVICE_TYPE'].factorize()[0]
# factorize()[0] returns the integer code for each row, aligned with the
# row order of the SERVICE_TYPE column (a toy factorize example follows after this cell)
# Creates a dataframe mapping each SERVICE_TYPE to its respective S_ID
category_id_df = df_product_and_complaint[['SERVICE_TYPE', 'S_ID']].drop_duplicates()
# Dictionaries for future use. Creating our cheatsheets for what each encoded label represents.
category_to_id = dict(category_id_df.values) # Creates a service_type: S_ID key-value pair
id_to_category = dict(category_id_df[['S_ID', 'SERVICE_TYPE']].values) # Creates a S_ID: SERVICE_TYPE key-value pair
df_product_and_complaint.head(10)
#Now that we have encoded our columns, time to move on to the next step -- cleaning the fricken text data
#But save our dataframe here so we don't run into memory issues later and we can start from a new starting point
## Pickling reduced dataframe
#with open('df_product_and_complaint.pickle', 'wb') as to_write:
# pickle.dump(df_product_and_complaint, to_write)
# Loading Pickled DataFrame
with open('df_product_and_complaint.pickle', 'rb') as to_read:
df_product_and_complaint = pickle.load(to_read)
# Reviewing our Loaded Dataframe
print(df_product_and_complaint.info())
print('--------------------------------------------------------------------------------------')
print(df_product_and_complaint.head().to_string()) | <class 'pandas.core.frame.DataFrame'>
Int64Index: 7307 entries, 0 to 7311
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 SERVICE_TYPE 7307 non-null object
1 MAIN_DESCRIPTION 7307 non-null object
2 S_ID 7307 non-null int64
dtypes: int64(1), object(2)
memory usage: 228.3+ KB
None
--------------------------------------------------------------------------------------
SERVICE_TYPE MAIN_DESCRIPTION S_ID
0 Animal assistance involving livestock - Other action DOG WITH JAW TRAPPED IN MAGAZINE RACK,B15 0
1 Animal assistance involving livestock - Other action ASSIST RSPCA WITH FOX TRAPPED,B15 0
2 Animal rescue from below ground - Domestic pet DOG CAUGHT IN DRAIN,B15 1
3 Animal rescue from water - Farm animal HORSE TRAPPED IN LAKE,J17 2
4 Animal assistance involving livestock - Other action RABBIT TRAPPED UNDER SOFA,B15 0
| MIT | .ipynb_checkpoints/ASAR-checkpoint.ipynb | siddharthshenoy/Keywordandsum |
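As a quick, self-contained illustration of what `factorize()` produces (a hypothetical toy example, not part of the original notebook):

```python
import pandas as pd

# Toy data standing in for the SERVICE_TYPE column
s = pd.Series(["Animal assistance", "Animal rescue", "Animal assistance"])

codes, uniques = s.factorize()
print(codes)    # [0 1 0] -- integer label for each row, in row order
print(uniques)  # Index(['Animal assistance', 'Animal rescue'], dtype='object')

# The same "cheatsheet" dictionaries built in the notebook
category_to_id = {cat: i for i, cat in enumerate(uniques)}
id_to_category = {i: cat for i, cat in enumerate(uniques)}
print(category_to_id, id_to_category)
```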
Text Pre-Processing | # Looking at a sample text
sample_complaint = list(df_product_and_complaint.MAIN_DESCRIPTION[:5])[4]
# Converting to a list for TfidfVectorizer to use
list_sample_complaint = []
list_sample_complaint.append(sample_complaint)
list_sample_complaint
# Observing what words are extracted from a TfidfVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
tf_idf3 = TfidfVectorizer(stop_words='english')
check3 = tf_idf3.fit_transform(list_sample_complaint)
print(tf_idf3.get_feature_names()) | ['b15', 'rabbit', 'sofa', 'trapped']
| MIT | .ipynb_checkpoints/ASAR-checkpoint.ipynb | siddharthshenoy/Keywordandsum |
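To see the actual TF-IDF weights rather than just the extracted vocabulary, the sparse matrix can be densified. A small follow-up sketch reusing the `tf_idf3` and `check3` objects defined above:

```python
import pandas as pd

# One row per document (here, the single sample complaint), one column per term
weights = pd.DataFrame(check3.toarray(), columns=tf_idf3.get_feature_names())
print(weights.round(3))
```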
Model/classifier selection train/stratified/test splits | # Split the data into X and y data sets
X, y = df_product_and_complaint.MAIN_DESCRIPTION, df_product_and_complaint.SERVICE_TYPE
print('X shape:', X.shape, 'y shape:', y.shape)
from sklearn.model_selection import train_test_split
X_train_val, X_test, y_train_val, y_test = train_test_split(X, y,
test_size=0.2, # 80% train/cv, 20% test
stratify=y,
random_state=seed)
print('X_train', X_train_val.shape)
print('y_train', y_train_val.shape)
print('X_test', X_test.shape)
print('y_test', y_test.shape)
# Performing Text Pre-Processing
# Import tfidfVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
# Text Preprocessing
# The text needs to be transformed into vectors so that the algorithms can make predictions.
# Here we use the Term Frequency - Inverse Document Frequency (TF-IDF) weighting
# to evaluate how important A WORD is to A DOCUMENT in a COLLECTION OF DOCUMENTS.
# tfidf1 = 1-gram only.
tfidf1 = TfidfVectorizer(sublinear_tf=True, # set to true to scale the term frequency in logarithmic scale.
min_df=5,
stop_words='english')
X_train_val_tfidf1 = tfidf1.fit_transform(X_train_val).toarray()
X_test_tfidf1 = tfidf1.transform(X_test)
# tfidf2 = unigram and bigram
tfidf2 = TfidfVectorizer(sublinear_tf=True, # set to true to scale the term frequency in logarithmic scale.
min_df=5,
ngram_range=(1,2), # we consider unigrams and bigrams
stop_words='english')
X_train_val_tfidf2 = tfidf2.fit_transform(X_train_val).toarray()
X_test_tfidf2 = tfidf2.transform(X_test)
# # StratifiedKFold -> 5 splits
# ## We use stratified k-fold so that each fold preserves the class proportions
# # (important because the categories are imbalanced)
kf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed) | _____no_output_____ | MIT | .ipynb_checkpoints/ASAR-checkpoint.ipynb | siddharthshenoy/Keywordandsum |
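As a quick aside, here is a tiny, self-contained illustration (toy labels, not the survey data) of why the stratified splitter is used: each fold keeps roughly the same class proportions as the full dataset.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

y_toy = np.array([0] * 8 + [1] * 2)  # imbalanced toy labels
X_toy = np.zeros((10, 1))
skf = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X_toy, y_toy):
    print(np.bincount(y_toy[test_idx]))  # each test fold holds 4 of class 0 and 1 of class 1
```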
Baseline Model - Train/Stratified CV with MultinomialNB() | print('1-gram number of (rows, features):', X_train_val_tfidf1.shape)
def metric_cv_stratified(model, X_train_val, y_train_val, n_splits, name):
"""
Accepts a Model Object, converted X_train_val and y_train_val, n_splits, name
and returns a dataframe with various cross-validated metric scores
over a stratified n_splits kfold for a multi-class classifier.
"""
# Start timer
import timeit
start = timeit.default_timer()
### Computations below
# StratifiedKFold
## We use stratified k-fold so that each fold preserves the class proportions
# (important because the categories are imbalanced)
from sklearn.model_selection import StratifiedKFold # incase user forgest to import
kf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
# Initializing Metrics
accuracy = 0.0
micro_f1 = 0.0
macro_precision = 0.0
macro_recall = 0.0
macro_f1 = 0.0
weighted_precision = 0.0
weighted_recall = 0.0
weighted_f1 = 0.0
roc_auc = 0.0 #Not considering this score in this case
# Storing metrics
from sklearn.model_selection import cross_val_score # incase user forgets to import
accuracy = np.mean(cross_val_score(model, X_train_val, y_train_val, cv=kf, scoring='accuracy'))
micro_f1 = np.mean(cross_val_score(model, X_train_val, y_train_val, cv=kf, scoring='f1_micro'))
macro_precision = np.mean(cross_val_score(model, X_train_val, y_train_val, cv=kf, scoring='precision_macro'))
macro_recall = np.mean(cross_val_score(model, X_train_val, y_train_val, cv=kf, scoring='recall_macro'))
macro_f1 = np.mean(cross_val_score(model, X_train_val, y_train_val, cv=kf, scoring='f1_macro'))
weighted_precision = np.mean(cross_val_score(model, X_train_val, y_train_val, cv=kf, scoring='precision_weighted'))
weighted_recall = np.mean(cross_val_score(model, X_train_val, y_train_val, cv=kf, scoring='recall_weighted'))
weighted_f1 = np.mean(cross_val_score(model, X_train_val, y_train_val, cv=kf, scoring='f1_weighted'))
# Stop timer
stop = timeit.default_timer()
elapsed_time = stop - start
return pd.DataFrame({'Model' : [name],
'Accuracy' : [accuracy],
'Micro F1' : [micro_f1],
'Macro Precision': [macro_precision],
'Macro Recall' : [macro_recall],
'Macro F1score' : [macro_f1],
'Weighted Precision': [weighted_precision],
'Weighted Recall' : [weighted_recall],
'Weighted F1' : [weighted_f1],
'Time taken': [elapsed_time] # timetaken: to be used for comparison later
})
# ## Data Science Story:
# # Testing on MultinomialNB first
# # Initialize Model Object
# mnb = MultinomialNB()
# results_cv_stratified_1gram = metric_cv_stratified(mnb, X_train_val_tfidf1, y_train_val, 5, 'MultinomialNB1')
# results_cv_stratified_2gram = metric_cv_stratified(mnb, X_train_val_tfidf2, y_train_val, 5, 'MultinomialNB2')
results_cv_stratified_1gram
results_cv_stratified_2gram | _____no_output_____ | MIT | .ipynb_checkpoints/ASAR-checkpoint.ipynb | siddharthshenoy/Keywordandsum |
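Note that `metric_cv_stratified` calls `cross_val_score` once per metric, so each model is re-fit eight separate times per fold. A leaner alternative (a sketch, not used in the original notebook; the `seed=42` default is a placeholder for the notebook's seed) is `cross_validate`, which accepts several scorers in a single pass:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_validate

def metric_cv_multi(model, X, y, n_splits=5, seed=42):
    """Cross-validate all metrics in one pass instead of one cross_val_score call per scorer."""
    kf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scoring = ['accuracy', 'f1_micro', 'precision_macro', 'recall_macro',
               'f1_macro', 'precision_weighted', 'recall_weighted', 'f1_weighted']
    scores = cross_validate(model, X, y, cv=kf, scoring=scoring)
    return {name: np.mean(scores['test_' + name]) for name in scoring}
```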
1-gram | # ## Testing on all Models using 1-gram
# # Initialize Model Object
# gnb = GaussianNB()
# mnb = MultinomialNB()
# logit = LogisticRegression(random_state=seed)
# randomforest = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0)
# linearsvc = LinearSVC()
# ## We do NOT want these two. They take FOREVER to train AND predict
# # knn = KNeighborsClassifier()
# # decisiontree = DecisionTreeClassifier(random_state=seed)
# # to concat all models
# results_cv_straitified_1gram = pd.concat([metric_cv_stratified(mnb, X_train_val_tfidf1, y_train_val, 5, 'MultinomialNB1'),
# metric_cv_stratified(gnb, X_train_val_tfidf1, y_train_val, 5, 'GaussianNB1'),
# metric_cv_stratified(logit, X_train_val_tfidf1, y_train_val, 5, 'LogisticRegression1'),
# metric_cv_stratified(randomforest, X_train_val_tfidf1, y_train_val, 5, 'RandomForest1'),
# metric_cv_stratified(linearsvc, X_train_val_tfidf1, y_train_val, 5, 'LinearSVC1')
# ], axis=0).reset_index()
results_cv_straitified_1gram
#with open('results_cv_straitified_1gram_df.pickle', 'wb') as to_write:
# pickle.dump(results_cv_straitified_1gram, to_write)
with open('results_cv_straitified_1gram_df.pickle', 'rb') as to_read:
results_cv_straitified_1gram = pickle.load(to_read)
## Testing on all Models using 2-gram
# # Initialize Model Object
# gnb = GaussianNB()
# mnb = MultinomialNB()
# logit = LogisticRegression(random_state=seed)
# knn = KNeighborsClassifier()
# decisiontree = DecisionTreeClassifier(random_state=seed)
# randomforest = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0)
# linearsvc = LinearSVC()
# # # to concat all models
# results_cv_straitified_2gram = pd.concat([metric_cv_stratified(mnb, X_train_val_tfidf2, y_train_val, 5, 'MultinomialNB2'),
# metric_cv_stratified(gnb, X_train_val_tfidf2, y_train_val, 5, 'GaussianNB2'),
# metric_cv_stratified(logit, X_train_val_tfidf2, y_train_val, 5, 'LogisticRegression2'),
# metric_cv_stratified(randomforest, X_train_val_tfidf2, y_train_val, 5, 'RandomForest2'),
# metric_cv_stratified(linearsvc, X_train_val_tfidf2, y_train_val, 5, 'LinearSVC2')
# ], axis=0).reset_index()
results_cv_straitified_2gram
results_cv_straitified_1gram
#with open('results_cv_straitified_2gram_df.pickle', 'wb') as to_write:
# pickle.dump(results_cv_straitified_2gram, to_write)
with open('results_cv_straitified_2gram_df.pickle', 'rb') as to_read:
results_cv_straitified_2gram = pickle.load(to_read) | _____no_output_____ | MIT | .ipynb_checkpoints/ASAR-checkpoint.ipynb | siddharthshenoy/Keywordandsum |
Using GloVe50d | #Each complaint is mapped to a feature vector by averaging the word embeddings of all words in the review.
#These features are then fed into the defined function above for train/cross validation.
# ## Using pre-trained GloVe
# #download from https://nlp.stanford.edu/projects/glove/
# glove_file = glove_dir = 'glove.6B.50d.txt'
# w2v_output_file = 'glove.6B.50d.txt.w2v'
# # The following utility converts file formats
# gensim.scripts.glove2word2vec.glove2word2vec(glove_file, w2v_output_file)
# # Now we can load it!
# glove_model_50d = gensim.models.KeyedVectors.load_word2vec_format(w2v_output_file, binary=False)
# # Pickle glove model so we don't have to do the above steps again and keep the damn glove.6b.50d in our folder
# with open('glove_model_50d.pickle', 'wb') as to_write:
# pickle.dump(glove_model_50d, to_write)
# Load pickled glove_model
with open('glove_model_50d.pickle', 'rb') as to_read:
glove_model_50d = pickle.load(to_read)
num_features = 50 # depends on the pre-trained model you are loading
def complaint_to_wordlist(review, remove_stopwords=False):
"""
Convert a complaint to a list of words. Removal of stop words is optional.
"""
# remove non-letters
review_text = re.sub("[^a-zA-Z]"," ", review)
# convert to lower case and split at whitespace
words = review_text.lower().split()
# remove stop words (false by default)
if remove_stopwords:
stops = set(stopwords.words("english"))
words = [w for w in words if not w in stops]
return words # list of tokenized and cleaned words
# num_features refer to the dimensionality of the model you are using
# model refers to the trained word2vec/glove model
# words refer to the words in a single document/entry
def make_feature_vec(words, model, num_features):
"""
Average the word vectors for a set of words
"""
feature_vec = np.zeros((num_features,), # creates a zero matrix of (num_features, )
dtype="float32") # pre-initialize (for speed)
# Initialize a counter for the number of words in a complaint
nwords = 0.
index2word_set = set(model.index2word) # words known to the model
# Loop over each word in the comment and, if it is in the model's vocabulary, add its feature vector to the total
for word in words: # for each word in the list of words
if word in index2word_set: # if each word is found in the words known to the model
nwords = nwords + 1. # add 1 to nwords
feature_vec = np.add(feature_vec, model[word])
# Divide by the number of words to get the average
if nwords > 0:
feature_vec = np.divide(feature_vec, nwords)
return feature_vec
# complaints refers to the whole corpus you intend to put in.
# Therefore you need to append all these info from your df into a list first
def get_avg_feature_vecs(complaints, model, num_features):
"""
Calculate average feature vectors for ALL complaints
"""
# Initialize a counter for indexing
counter = 0
# pre-initialize (for speed)
complaint_feature_vecs = np.zeros((len(complaints),num_features), dtype='float32')
for complaint in complaints: # each complaint is made up of tokenized/cleaned/stopwords removed words
complaint_feature_vecs[counter] = make_feature_vec(complaint, model, num_features)
counter = counter + 1
return complaint_feature_vecs
# # Tokenizing and vectorizing our Train_Val Complaints (80%)
# clean_train_val_complaints = []
# for complaint in X_train_val:
# clean_train_val_complaints.append(complaint_to_wordlist(complaint, True))
# X_train_val_glove_features = get_avg_feature_vecs(clean_train_val_complaints, glove_model_50d, num_features)
# # Tokenizing and vectorizing our Test Complaints (20%)
# clean_test_complaints = []
# for complaint in X_test:
# clean_test_complaints.append(complaint_to_wordlist(complaint, True))
# X_test_glove_features = get_avg_feature_vecs(clean_test_complaints, glove_model_50d, num_features)
# ## Run the X_train_val_word2vec_features into our defined function for scoring
# # Initialize Model Object
# gnb = GaussianNB()
# mnb = MultinomialNB()
# logit = LogisticRegression(random_state=seed)
# randomforest = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0)
# linearsvc = LinearSVC()
# # to concat all models
# results_cv_straitified_glove50d = pd.concat([
# # metric_cv_stratified(mnb, X_train_val_glove_features, y_train_val, 5, 'MultinomialNB_glove50d'),
# metric_cv_stratified(gnb, X_train_val_glove_features, y_train_val, 5, 'GaussianNB_glove50d'),
# metric_cv_stratified(logit, X_train_val_glove_features, y_train_val, 5, 'LogisticRegression_glove50d'),
# metric_cv_stratified(randomforest, X_train_val_glove_features, y_train_val, 5, 'RandomForest_glove50d'),
# metric_cv_stratified(linearsvc, X_train_val_glove_features, y_train_val, 5, 'LinearSVC_glove50d')
# ], axis=0).reset_index()
# # Saving Results into a DF
# with open('results_cv_straitified_glove50d.pickle', 'wb') as to_write:
# pickle.dump(results_cv_straitified_glove50d, to_write)
# Opening Results
with open('results_cv_straitified_glove50d.pickle', 'rb') as to_read:
results_cv_straitified_glove50d = pickle.load(to_read)
results_cv_straitified_glove50d | _____no_output_____ | MIT | .ipynb_checkpoints/ASAR-checkpoint.ipynb | siddharthshenoy/Keywordandsum |
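As a small usage sketch of the helper functions above (assuming `glove_model_50d` is still loaded and the NLTK stopwords are available), a single raw complaint can be embedded like this:

```python
# Tokenize one raw complaint and average its 50-d GloVe word vectors
words = complaint_to_wordlist("RABBIT TRAPPED UNDER SOFA,B15", remove_stopwords=True)
vec = make_feature_vec(words, glove_model_50d, num_features=50)
print(vec.shape)  # (50,)
print(vec[:5])    # first few averaged components
```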
Using GloVe100d | del glove_model_50d, results_cv_straitified_glove50d
# ## Using pre-trained GloVe
# # download from https://nlp.stanford.edu/projects/glove/
# num_features = 100 # depends on the pre-trained model you are loading
# glove_file = glove_dir = 'glove.6B.' + str(num_features) + 'd.txt'
# w2v_output_file = 'glove.6B.' + str(num_features) + 'd.txt.w2v'
# # The following utility converts file formats
# gensim.scripts.glove2word2vec.glove2word2vec(glove_file, w2v_output_file)
# # Now we can load it!
# glove_model_100d = gensim.models.KeyedVectors.load_word2vec_format(w2v_output_file, binary=False)
# # Pickle glove model so we don't have to do the above steps again and keep the damn glove.6b.50d in our folder
# with open('glove_model_' + str(num_features) + 'd.pickle', 'wb') as to_write:
# pickle.dump(glove_model_100d, to_write)
# Load pickled glove_model
with open('glove_model_100d.pickle', 'rb') as to_read:
glove_model_100d = pickle.load(to_read)
# # For Train_Val Complaints (80%)
# clean_train_val_complaints = []
# for complaint in X_train_val:
# clean_train_val_complaints.append(complaint_to_wordlist(complaint, True))
# X_train_val_glove_features = get_avg_feature_vecs(clean_train_val_complaints, glove_model_100d, num_features)
# # For Test Complaints (20%)
# clean_test_complaints = []
# for complaint in X_test:
# clean_test_complaints.append(complaint_to_wordlist(complaint, True))
# X_test_glove_features = get_avg_feature_vecs(clean_test_complaints, glove_model_100d, num_features)
# ## Run the X_train_val_word2vec_features into our defined function for scoring
# # Initialize Model Object
# gnb = GaussianNB()
# mnb = MultinomialNB()
# logit = LogisticRegression(random_state=seed)
# randomforest = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0)
# linearsvc = LinearSVC()
# ## We do NOT want these two. They take FOREVER to train AND predict
# # knn = KNeighborsClassifier()
# # decisiontree = DecisionTreeClassifier(random_state=seed)
# # to concat all models
# results_cv_straitified_glove100d = pd.concat([
# # metric_cv_stratified(mnb, X_train_val_glove_features, y_train_val, 5, 'MultinomialNB_glove50d'),
# metric_cv_stratified(gnb, X_train_val_glove_features, y_train_val, 5, 'GaussianNB_glove100d'),
# metric_cv_stratified(logit, X_train_val_glove_features, y_train_val, 5, 'LogisticRegression_glove100d'),
# metric_cv_stratified(randomforest, X_train_val_glove_features, y_train_val, 5, 'RandomForest_glove100d'),
# metric_cv_stratified(linearsvc, X_train_val_glove_features, y_train_val, 5, 'LinearSVC_glove100d')
# ], axis=0).reset_index()
# with open('results_cv_straitified_glove100d.pickle', 'wb') as to_write:
# pickle.dump(results_cv_straitified_glove100d, to_write)
# Opening Results
with open('results_cv_straitified_glove100d.pickle', 'rb') as to_read:
results_cv_straitified_glove100d = pickle.load(to_read)
results_cv_straitified_glove100d | _____no_output_____ | MIT | .ipynb_checkpoints/ASAR-checkpoint.ipynb | siddharthshenoy/Keywordandsum |
Using GloVe200d | del glove_model_100d, results_cv_straitified_glove100d
# ## Using pre-trained GloVe
# # download from https://nlp.stanford.edu/projects/glove/
# num_features = 200 # depends on the pre-trained model you are loading
# glove_file = glove_dir = 'glove.6B.' + str(num_features) + 'd.txt'
# w2v_output_file = 'glove.6B.' + str(num_features) + 'd.txt.w2v'
# # The following utility converts file formats
# gensim.scripts.glove2word2vec.glove2word2vec(glove_file, w2v_output_file)
# # Now we can load it!
# glove_model_200d = gensim.models.KeyedVectors.load_word2vec_format(w2v_output_file, binary=False)
# # Pickle glove model so we don't have to do the above steps again and keep the damn glove.6b.50d in our folder
# with open('glove_model_' + str(num_features) + 'd.pickle', 'wb') as to_write:
# pickle.dump(glove_model_200d, to_write)
with open('glove_model_200d.pickle', 'rb') as to_read:
glove_model_200d = pickle.load(to_read)
# # For Train_Val Complaints (80%)
# clean_train_val_complaints = []
# for complaint in X_train_val:
# clean_train_val_complaints.append(complaint_to_wordlist(complaint, True))
# X_train_val_glove_features = get_avg_feature_vecs(clean_train_val_complaints, glove_model_200d, num_features)
# #Already run above
# #For Test Complaints (20%)
# clean_test_complaints = []
# for complaint in X_test:
# clean_test_complaints.append(complaint_to_wordlist(complaint, True))
# X_test_glove_features = get_avg_feature_vecs(clean_test_complaints, glove_model_200d, num_features)
# ## Run the X_train_val_word2vec_features into our defined function for scoring
# # Initialize Model Object
# gnb = GaussianNB()
# mnb = MultinomialNB()
# logit = LogisticRegression(random_state=seed)
# randomforest = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0)
# linearsvc = LinearSVC()
# ## We do NOT want these two. They take FOREVER to train AND predict
# # knn = KNeighborsClassifier()
# # decisiontree = DecisionTreeClassifier(random_state=seed)
# # to concat all models
# results_cv_straitified_glove200d = pd.concat([
# # metric_cv_stratified(mnb, X_train_val_glove_features, y_train_val, 5, 'MultinomialNB_glove50d'),
# metric_cv_stratified(gnb, X_train_val_glove_features, y_train_val, 5, 'GaussianNB_glove200d'),
# metric_cv_stratified(logit, X_train_val_glove_features, y_train_val, 5, 'LogisticRegression_glove200d'),
# metric_cv_stratified(randomforest, X_train_val_glove_features, y_train_val, 5, 'RandomForest_glove200d'),
# metric_cv_stratified(linearsvc, X_train_val_glove_features, y_train_val, 5, 'LinearSVC_glove200d')
# ], axis=0).reset_index()
# with open('results_cv_straitified_glove200d.pickle', 'wb') as to_write:
# pickle.dump(results_cv_straitified_glove200d, to_write)
with open('results_cv_straitified_glove200d.pickle', 'rb') as to_read:
results_cv_straitified_glove200d = pickle.load(to_read)
results_cv_straitified_glove200d | _____no_output_____ | MIT | .ipynb_checkpoints/ASAR-checkpoint.ipynb | siddharthshenoy/Keywordandsum |
Using GloVe300d | del glove_model_200d, results_cv_straitified_glove200d
# ## Using pre-trained GloVe
# # download from https://nlp.stanford.edu/projects/glove/
# num_features = 300 # depends on the pre-trained model you are loading
# glove_file = glove_dir = 'glove.6B.' + str(num_features) + 'd.txt'
# w2v_output_file = 'glove.6B.' + str(num_features) + 'd.txt.w2v'
# # The following utility converts file formats
# gensim.scripts.glove2word2vec.glove2word2vec(glove_file, w2v_output_file)
# # Now we can load it!
# glove_model_300d = gensim.models.KeyedVectors.load_word2vec_format(w2v_output_file, binary=False)
# # Pickle glove model so we don't have to do the above steps again and keep the damn glove.6b.50d in our folder
# with open('glove_model_' + str(num_features) + 'd.pickle', 'wb') as to_write:
# pickle.dump(glove_model_300d, to_write)
# # Load pickled glove_model
# with open('glove_model_300d.pickle', 'rb') as to_read:
# glove_model_300d = pickle.load(to_read)
# # For Train_Val Complaints (80%)
# clean_train_val_complaints = []
# for complaint in X_train_val:
# clean_train_val_complaints.append(complaint_to_wordlist(complaint, True))
# X_train_val_glove_features = get_avg_feature_vecs(clean_train_val_complaints, glove_model_300d, num_features)
# #Already run above
# # For Test Complaints (20%)
# clean_test_complaints = []
# for complaint in X_test:
# clean_test_complaints.append(complaint_to_wordlist(complaint, True))
# X_test_glove_features = get_avg_feature_vecs(clean_test_complaints, glove_model_300d, num_features)
# ## Run the X_train_val_word2vec_features into our defined function for scoring
# # Initialize Model Object
# gnb = GaussianNB()
# mnb = MultinomialNB()
# logit = LogisticRegression(random_state=seed)
# randomforest = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0)
# linearsvc = LinearSVC()
# ## We do NOT want these two. They take FOREVER to train AND predict
# # knn = KNeighborsClassifier()
# # decisiontree = DecisionTreeClassifier(random_state=seed)
# # to concat all models
# results_cv_straitified_glove300d= pd.concat([
# # metric_cv_stratified(mnb, X_train_val_glove_features, y_train_val, 5, 'MultinomialNB_glove50d'),
# metric_cv_stratified(gnb, X_train_val_glove_features, y_train_val, 5, 'GaussianNB_glove300d'),
# metric_cv_stratified(logit, X_train_val_glove_features, y_train_val, 5, 'LogisticRegression_glove300d'),
# metric_cv_stratified(randomforest, X_train_val_glove_features, y_train_val, 5, 'RandomForest_glove300d'),
# metric_cv_stratified(linearsvc, X_train_val_glove_features, y_train_val, 5, 'LinearSVC_glove300d')
# ], axis=0).reset_index()
# with open('results_cv_straitified_glove300d.pickle', 'wb') as to_write:
# pickle.dump(results_cv_straitified_glove300d, to_write)
# Opening Results
with open('results_cv_straitified_glove300d.pickle', 'rb') as to_read:
results_cv_straitified_glove300d = pickle.load(to_read)
results_cv_straitified_glove300d | _____no_output_____ | MIT | .ipynb_checkpoints/ASAR-checkpoint.ipynb | siddharthshenoy/Keywordandsum |
GoogleNews Word2Vec300d | del glove_model_300d, results_cv_straitified_glove300d
# ## Using pre-trained GoogleNews Word2Vec
# # download from https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit
# num_features = 300 # depends on the pre-trained model you are loading
# # Path to where the word2vec file lives
# google_vec_file = 'GoogleNews-vectors-negative300.bin'
# # Load it! This might take a few minutes...
# word2vec_model_300d = gensim.models.KeyedVectors.load_word2vec_format(google_vec_file, binary=True)
# # it is just loading all the different weights (embedding) into python
# # Pickle word2vec 300d model so we don't have to do the above steps again and keep the damn file in our folder
# with open('word2vec_model_' + str(num_features) + 'd.pickle', 'wb') as to_write:
# pickle.dump(word2vec_model_300d, to_write)
# Load pickled glove_model
with open('word2vec_model_300d.pickle', 'rb') as to_read:
word2vec_model_300d = pickle.load(to_read)
# # For Train_Val Complaints (80%)
# clean_train_val_complaints = []
# for complaint in X_train_val:
# clean_train_val_complaints.append(complaint_to_wordlist(complaint, True))
# X_train_val_glove_features = get_avg_feature_vecs(clean_train_val_complaints, word2vec_model_300d, num_features)
# ## Run the X_train_val_word2vec_features into our defined function for scoring
# # Initialize Model Object
# gnb = GaussianNB()
# mnb = MultinomialNB()
# logit = LogisticRegression(random_state=seed)
# randomforest = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0)
# linearsvc = LinearSVC()
# ## We do NOT want these two. They take FOREVER to train AND predict
# # knn = KNeighborsClassifier()
# # decisiontree = DecisionTreeClassifier(random_state=seed)
# # to concat all models
# results_cv_straitified_word2vec300d= pd.concat([
# # metric_cv_stratified(mnb, X_train_val_glove_features, y_train_val, 5, 'MultinomialNB_glove50d'),
# metric_cv_stratified(gnb, X_train_val_glove_features, y_train_val, 5, 'GaussianNB_word2vec300d'),
# metric_cv_stratified(logit, X_train_val_glove_features, y_train_val, 5, 'LogisticRegression_word2vec300d'),
# metric_cv_stratified(randomforest, X_train_val_glove_features, y_train_val, 5, 'RandomForest_word2vec300d'),
# metric_cv_stratified(linearsvc, X_train_val_glove_features, y_train_val, 5, 'LinearSVC_word2vec300d')
# ], axis=0).reset_index()
# with open('results_cv_straitified_word2vec300d.pickle', 'wb') as to_write:
# pickle.dump(results_cv_straitified_word2vec300d, to_write)
# Opening Results
with open('results_cv_straitified_word2vec300d.pickle', 'rb') as to_read:
results_cv_straitified_word2vec300d = pickle.load(to_read)
results_cv_straitified_word2vec300d | _____no_output_____ | MIT | .ipynb_checkpoints/ASAR-checkpoint.ipynb | siddharthshenoy/Keywordandsum |
For full documentation on this project, see [here](https://new-languages-for-nlp.github.io/course-materials/w2/projects.html) This notebook: - Loads project file from GitHub- Loads assets from GitHub repo- installs the custom language object - converts the training data to spaCy binary- configure the project.yml file - train the model - assess performance - package the model (or push to huggingface) 1 Prepare the Notebook Environment | # @title Colab comes with spaCy v2, needs upgrade to v3
GPU = True # @param {type:"boolean"}
# Install spaCy v3 and libraries for GPUs and transformers
!pip install spacy --upgrade
if GPU:
!pip install 'spacy[transformers,cuda111]'
!pip install wandb spacy-huggingface-hub | _____no_output_____ | MIT | New_Language_Training_(Colab).ipynb | New-Languages-for-NLP/kanbun |
The notebook will pull project files from your GitHub repository. Note that you need to set the language (lang), treebank (same as the repo name), test_size and package name in the project.yml file in your repository. | private_repo = False # @param {type:"boolean"}
repo_name = "kanbun" # @param {type:"string"}
!rm -rf /content/newlang_project
!rm -rf $repo_name
if private_repo:
git_access_token = "" # @param {type:"string"}
git_url = (
f"https://{git_access_token}@github.com/New-Languages-for-NLP/{repo_name}/"
)
!git clone $git_url -b main
!cp -r ./$repo_name/newlang_project .
!mkdir newlang_project/assets/
!mkdir newlang_project/configs/
!mkdir newlang_project/corpus/
!mkdir newlang_project/metrics/
!mkdir newlang_project/packages/
!mkdir newlang_project/training/
!mkdir newlang_project/assets/$repo_name
!cp -r ./$repo_name/* newlang_project/assets/$repo_name/
!rm -rf ./$repo_name
else:
!python -m spacy project clone newlang_project --repo https://github.com/New-Languages-for-NLP/$repo_name --branch main
!python -m spacy project assets /content/newlang_project
# Install the custom language object from Cadet
!python -m spacy project run install /content/newlang_project | _____no_output_____ | MIT | New_Language_Training_(Colab).ipynb | New-Languages-for-NLP/kanbun |
2 Prepare the Data for Training | # @title (optional) cell that corrects a problem when your tokens have no pos value
%%writefile /usr/local/lib/python3.7/dist-packages/spacy/training/converters/conllu_to_docs.py
import re
from .conll_ner_to_docs import n_sents_info
from ...training import iob_to_biluo, biluo_tags_to_spans
from ...tokens import Doc, Token, Span
from ...vocab import Vocab
from wasabi import Printer
def conllu_to_docs(
input_data,
n_sents=10,
append_morphology=False,
ner_map=None,
merge_subtokens=False,
no_print=False,
**_
):
"""
Convert conllu files into JSON format for use with train cli.
append_morphology parameter enables appending morphology to tags, which is
useful for languages such as Spanish, where UD tags are not so rich.
Extract NER tags if available and convert them so that they follow
BILUO and the Wikipedia scheme
"""
MISC_NER_PATTERN = "^((?:name|NE)=)?([BILU])-([A-Z_]+)|O$"
msg = Printer(no_print=no_print)
n_sents_info(msg, n_sents)
sent_docs = read_conllx(
input_data,
append_morphology=append_morphology,
ner_tag_pattern=MISC_NER_PATTERN,
ner_map=ner_map,
merge_subtokens=merge_subtokens,
)
sent_docs_to_merge = []
for sent_doc in sent_docs:
sent_docs_to_merge.append(sent_doc)
if len(sent_docs_to_merge) % n_sents == 0:
yield Doc.from_docs(sent_docs_to_merge)
sent_docs_to_merge = []
if sent_docs_to_merge:
yield Doc.from_docs(sent_docs_to_merge)
def has_ner(input_data, ner_tag_pattern):
"""
Check the MISC column for NER tags.
"""
for sent in input_data.strip().split("\n\n"):
lines = sent.strip().split("\n")
if lines:
while lines[0].startswith("#"):
lines.pop(0)
for line in lines:
parts = line.split("\t")
id_, word, lemma, pos, tag, morph, head, dep, _1, misc = parts
for misc_part in misc.split("|"):
if re.match(ner_tag_pattern, misc_part):
return True
return False
def read_conllx(
input_data,
append_morphology=False,
merge_subtokens=False,
ner_tag_pattern="",
ner_map=None,
):
"""Yield docs, one for each sentence"""
vocab = Vocab() # need vocab to make a minimal Doc
for sent in input_data.strip().split("\n\n"):
lines = sent.strip().split("\n")
if lines:
while lines[0].startswith("#"):
lines.pop(0)
doc = conllu_sentence_to_doc(
vocab,
lines,
ner_tag_pattern,
merge_subtokens=merge_subtokens,
append_morphology=append_morphology,
ner_map=ner_map,
)
yield doc
def get_entities(lines, tag_pattern, ner_map=None):
"""Find entities in the MISC column according to the pattern and map to
final entity type with `ner_map` if mapping present. Entity tag is 'O' if
the pattern is not matched.
lines (str): CONLL-U lines for one sentences
tag_pattern (str): Regex pattern for entity tag
ner_map (dict): Map old NER tag names to new ones, '' maps to O.
RETURNS (list): List of BILUO entity tags
"""
miscs = []
for line in lines:
parts = line.split("\t")
id_, word, lemma, pos, tag, morph, head, dep, _1, misc = parts
if "-" in id_ or "." in id_:
continue
miscs.append(misc)
iob = []
for misc in miscs:
iob_tag = "O"
for misc_part in misc.split("|"):
tag_match = re.match(tag_pattern, misc_part)
if tag_match:
prefix = tag_match.group(2)
suffix = tag_match.group(3)
if prefix and suffix:
iob_tag = prefix + "-" + suffix
if ner_map:
suffix = ner_map.get(suffix, suffix)
if suffix == "":
iob_tag = "O"
else:
iob_tag = prefix + "-" + suffix
break
iob.append(iob_tag)
return iob_to_biluo(iob)
def conllu_sentence_to_doc(
vocab,
lines,
ner_tag_pattern,
merge_subtokens=False,
append_morphology=False,
ner_map=None,
):
"""Create an Example from the lines for one CoNLL-U sentence, merging
subtokens and appending morphology to tags if required.
lines (str): The non-comment lines for a CoNLL-U sentence
ner_tag_pattern (str): The regex pattern for matching NER in MISC col
RETURNS (Example): An example containing the annotation
"""
# create a Doc with each subtoken as its own token
# if merging subtokens, each subtoken orth is the merged subtoken form
if not Token.has_extension("merged_orth"):
Token.set_extension("merged_orth", default="")
if not Token.has_extension("merged_lemma"):
Token.set_extension("merged_lemma", default="")
if not Token.has_extension("merged_morph"):
Token.set_extension("merged_morph", default="")
if not Token.has_extension("merged_spaceafter"):
Token.set_extension("merged_spaceafter", default="")
words, spaces, tags, poses, morphs, lemmas = [], [], [], [], [], []
heads, deps = [], []
subtok_word = ""
in_subtok = False
for i in range(len(lines)):
line = lines[i]
parts = line.split("\t")
id_, word, lemma, pos, tag, morph, head, dep, _1, misc = parts
if "." in id_:
continue
if "-" in id_:
in_subtok = True
if "-" in id_:
in_subtok = True
subtok_word = word
subtok_start, subtok_end = id_.split("-")
subtok_spaceafter = "SpaceAfter=No" not in misc
continue
if merge_subtokens and in_subtok:
words.append(subtok_word)
else:
words.append(word)
if in_subtok:
if id_ == subtok_end:
spaces.append(subtok_spaceafter)
else:
spaces.append(False)
elif "SpaceAfter=No" in misc:
spaces.append(False)
else:
spaces.append(True)
if in_subtok and id_ == subtok_end:
subtok_word = ""
in_subtok = False
id_ = int(id_) - 1
head = (int(head) - 1) if head not in ("0", "_") else id_
tag = pos if tag == "_" else tag
morph = morph if morph != "_" else ""
dep = "ROOT" if dep == "root" else dep
lemmas.append(lemma)
if pos == "_":
pos = ""
poses.append(pos)
tags.append(tag)
morphs.append(morph)
heads.append(head)
deps.append(dep)
doc = Doc(
vocab,
words=words,
spaces=spaces,
tags=tags,
pos=poses,
deps=deps,
lemmas=lemmas,
morphs=morphs,
heads=heads,
)
for i in range(len(doc)):
doc[i]._.merged_orth = words[i]
doc[i]._.merged_morph = morphs[i]
doc[i]._.merged_lemma = lemmas[i]
doc[i]._.merged_spaceafter = spaces[i]
ents = get_entities(lines, ner_tag_pattern, ner_map)
doc.ents = biluo_tags_to_spans(doc, ents)
if merge_subtokens:
doc = merge_conllu_subtokens(lines, doc)
# create final Doc from custom Doc annotation
words, spaces, tags, morphs, lemmas, poses = [], [], [], [], [], []
heads, deps = [], []
for i, t in enumerate(doc):
words.append(t._.merged_orth)
lemmas.append(t._.merged_lemma)
spaces.append(t._.merged_spaceafter)
morphs.append(t._.merged_morph)
if append_morphology and t._.merged_morph:
tags.append(t.tag_ + "__" + t._.merged_morph)
else:
tags.append(t.tag_)
poses.append(t.pos_)
heads.append(t.head.i)
deps.append(t.dep_)
doc_x = Doc(
vocab,
words=words,
spaces=spaces,
tags=tags,
morphs=morphs,
lemmas=lemmas,
pos=poses,
deps=deps,
heads=heads,
)
doc_x.ents = [Span(doc_x, ent.start, ent.end, label=ent.label) for ent in doc.ents]
return doc_x
def merge_conllu_subtokens(lines, doc):
# identify and process all subtoken spans to prepare attrs for merging
subtok_spans = []
for line in lines:
parts = line.split("\t")
id_, word, lemma, pos, tag, morph, head, dep, _1, misc = parts
if "-" in id_:
subtok_start, subtok_end = id_.split("-")
subtok_span = doc[int(subtok_start) - 1 : int(subtok_end)]
subtok_spans.append(subtok_span)
# create merged tag, morph, and lemma values
tags = []
morphs = {}
lemmas = []
for token in subtok_span:
tags.append(token.tag_)
lemmas.append(token.lemma_)
if token._.merged_morph:
for feature in token._.merged_morph.split("|"):
field, values = feature.split("=", 1)
if field not in morphs:
morphs[field] = set()
for value in values.split(","):
morphs[field].add(value)
# create merged features for each morph field
for field, values in morphs.items():
morphs[field] = field + "=" + ",".join(sorted(values))
# set the same attrs on all subtok tokens so that whatever head the
# retokenizer chooses, the final attrs are available on that token
for token in subtok_span:
token._.merged_orth = token.orth_
token._.merged_lemma = " ".join(lemmas)
token.tag_ = "_".join(tags)
token._.merged_morph = "|".join(sorted(morphs.values()))
token._.merged_spaceafter = (
True if subtok_span[-1].whitespace_ else False
)
with doc.retokenize() as retokenizer:
for span in subtok_spans:
retokenizer.merge(span)
return doc
# Convert the conllu files from inception to spaCy binary format
# Read the conll files with ner data and as ents to spaCy docs
!python -m spacy project run convert /content/newlang_project -F
# test/train split
!python -m spacy project run split /content/newlang_project
# Debug the data
!python -m spacy project run debug /content/newlang_project | _____no_output_____ | MIT | New_Language_Training_(Colab).ipynb | New-Languages-for-NLP/kanbun |
3 Model Training | # train the model
!python -m spacy project run train /content/newlang_project | _____no_output_____ | MIT | New_Language_Training_(Colab).ipynb | New-Languages-for-NLP/kanbun |
If you get `ValueError: Could not find gold transition - see logs above.`, you may not have sufficient data to train on: https://github.com/explosion/spaCy/discussions/7282 | # Evaluate the model using the test data
!python -m spacy project run evaluate /content/newlang_project
# Find the path for your meta.json file
# You'll need to add newlang_project/ + the path from the training step just after "✔ Saved pipeline to output directory"
!ls newlang_project/training/urban-giggle/model-last
# Update meta.json
import spacy
import srsly
# Change path to match that from the training cell where it says "✔ Saved pipeline to output directory"
meta_path = "newlang_project/training/urban-giggle/model-last/meta.json"
# Replace values below for your project
my_meta = {
"lang": "yi",
"name": "yiddish_sm",
"version": "0.0.1",
"description": "Yiddish pipeline optimized for CPU. Components: tok2vec, tagger, parser, senter, lemmatizer.",
"author": "New Languages for NLP",
"email": "[email protected]",
"url": "https://newnlp.princeton.edu",
"license": "MIT",
}
meta = spacy.util.load_meta(meta_path)
meta.update(my_meta)
srsly.write_json(meta_path, meta)
meta | _____no_output_____ | MIT | New_Language_Training_(Colab).ipynb | New-Languages-for-NLP/kanbun |
Download the trained model to your computer. | # Save the model to disk in a format that can be easily downloaded and re-used.
!python -m spacy package ./newlang_project/training/urban-giggle/model-last newlang_project/export
from google.colab import files
# replace with the path in the previous cell under "✔ Successfully created zipped Python package"
files.download(
"newlang_project/export/yi_yiddish_sm-0.0.1/dist/yi_yiddish_sm-0.0.1.tar.gz"
)
# once on your computer, you can pip install en_pipeline-0.0.0.tar.gz
# Add to 4_trained_models folder in GitHub | _____no_output_____ | MIT | New_Language_Training_(Colab).ipynb | New-Languages-for-NLP/kanbun |
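Once the tarball is installed locally (`pip install yi_yiddish_sm-0.0.1.tar.gz`, matching the package name set in meta.json above), the pipeline can be loaded like any other spaCy model. A minimal sketch (the sample sentence is a placeholder):

```python
import spacy

# Load the freshly packaged pipeline by its registered name
nlp = spacy.load("yi_yiddish_sm")

doc = nlp("Replace this with a sentence in the target language.")
for token in doc:
    print(token.text, token.pos_, token.lemma_)
```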
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer-primer). Design an LRU cache Constraints and assumptions* What are we caching? * We are caching the results of web queries* Can we assume inputs are valid or do we have to validate them? * Assume they're valid* Can we assume this fits memory? * Yes Solution | %%writefile lru_cache.py
class Node(object):
def __init__(self, query, results):
self.query = query
self.results = results
self.next = None
class LinkedList(object):
def __init__(self):
self.head = None
self.tail = None
def move_to_front(self, node): pass  # ... (elided; see the sketch after this cell)
def append_to_front(self, node): pass  # ...
def remove_from_tail(self): pass  # ...
class Cache(object):
def __init__(self, MAX_SIZE):
self.MAX_SIZE = MAX_SIZE
self.size = 0
self.lookup = {} # key: query, value: node
self.linked_list = LinkedList()
def get(self, query):
"""Get the stored query result from the cache.
Accessing a node updates its position to the front of the LRU list.
"""
node = self.lookup.get(query)
if node is None:
return None
self.linked_list.move_to_front(node)
return node.results
def set(self, results, query):
"""Set the result for the given query key in the cache.
When updating an entry, updates its position to the front of the LRU list.
If the entry is new and the cache is at capacity, removes the oldest entry
before the new entry is added.
"""
node = self.lookup.get(query)
if node is not None:
# Key exists in cache, update the value
node.results = results
self.linked_list.move_to_front(node)
else:
# Key does not exist in cache
if self.size == self.MAX_SIZE:
# Remove the oldest entry from the linked list and lookup
self.lookup.pop(self.linked_list.tail.query, None)
self.linked_list.remove_from_tail()
else:
self.size += 1
# Add the new key and value
new_node = Node(query, results)
self.linked_list.append_to_front(new_node)
self.lookup[query] = new_node | Overwriting lru_cache.py
| CC-BY-4.0 | something-learned/Interview/system-design-primer/solutions/object_oriented_design/lru_cache/lru_cache.ipynb | gopala-kr/Code-Rush-101 |
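The linked-list operations above are intentionally left as stubs. One possible way to fill them in is sketched below, assuming a doubly linked list so every operation stays O(1); the `prev` pointer is an addition not present in the original design sketch.

```python
class Node(object):
    def __init__(self, query, results):
        self.query = query
        self.results = results
        self.prev = None
        self.next = None


class LinkedList(object):
    def __init__(self):
        self.head = None
        self.tail = None

    def append_to_front(self, node):
        node.prev = None
        node.next = self.head
        if self.head is not None:
            self.head.prev = node
        self.head = node
        if self.tail is None:
            self.tail = node

    def move_to_front(self, node):
        if node is self.head:
            return
        # Unlink the node from its current position
        if node.prev is not None:
            node.prev.next = node.next
        if node.next is not None:
            node.next.prev = node.prev
        if node is self.tail:
            self.tail = node.prev
        # Relink the node at the head
        node.prev = None
        node.next = self.head
        if self.head is not None:
            self.head.prev = node
        self.head = node

    def remove_from_tail(self):
        if self.tail is None:
            return
        old_tail = self.tail
        self.tail = old_tail.prev
        if self.tail is not None:
            self.tail.next = None
        else:
            self.head = None
        old_tail.prev = None
        old_tail.next = None
```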
[Table of Contents](./table_of_contents.ipynb) H Infinity filter | %matplotlib inline
#format the book
import book_format
book_format.set_style() | _____no_output_____ | CC-BY-4.0 | Appendix-D-HInfinity-Filters.ipynb | wjdghksdl26/Kalman-and-Bayesian-Filters-in-Python |
I am still mulling over how to write this chapter. In the meantime, Professor Dan Simon at Cleveland State University has an accessible introduction here: http://academic.csuohio.edu/simond/courses/eec641/hinfinity.pdf In one sentence, the $H_\infty$ (H infinity) filter is like a Kalman filter, but it is robust in the face of non-Gaussian, non-predictable inputs. My FilterPy library contains an H-Infinity filter. I've pasted some test code below which implements the filter designed by Simon in the article above. Hope it helps. | import numpy as np
import matplotlib.pyplot as plt
from filterpy.hinfinity import HInfinityFilter
dt = 0.1
f = HInfinityFilter(2, 1, dim_u=1, gamma=.01)
f.F = np.array([[1., dt],
[0., 1.]])
f.H = np.array([[0., 1.]])
f.G = np.array([[dt**2 / 2, dt]]).T
f.P = 0.01
f.W = np.array([[0.0003, 0.005],
[0.0050, 0.100]])/ 1000 #process noise
f.V = 0.01
f.Q = 0.01
u = 1. #acceleration of 1 f/sec**2
xs = []
vs = []
for i in range(1,40):
f.update (5)
#print(f.x.T)
xs.append(f.x[0,0])
vs.append(f.x[1,0])
f.predict(u=u)
plt.subplot(211)
plt.plot(xs)
plt.title('position')
plt.subplot(212)
plt.plot(vs)
plt.title('velocity'); | _____no_output_____ | CC-BY-4.0 | Appendix-D-HInfinity-Filters.ipynb | wjdghksdl26/Kalman-and-Bayesian-Filters-in-Python |
Generative Adversarial Networks:label:`sec_basic_gan`Throughout most of this book, we have talked about how to make predictions. In some form or another, we used deep neural networks learned mappings from data examples to labels. This kind of learning is called discriminative learning, as in, we'd like to be able to discriminate between photos cats and photos of dogs. Classifiers and regressors are both examples of discriminative learning. And neural networks trained by backpropagation have upended everything we thought we knew about discriminative learning on large complicated datasets. Classification accuracies on high-res images has gone from useless to human-level (with some caveats) in just 5-6 years. We will spare you another spiel about all the other discriminative tasks where deep neural networks do astoundingly well.But there is more to machine learning than just solving discriminative tasks. For example, given a large dataset, without any labels, we might want to learn a model that concisely captures the characteristics of this data. Given such a model, we could sample synthetic data examples that resemble the distribution of the training data. For example, given a large corpus of photographs of faces, we might want to be able to generate a new photorealistic image that looks like it might plausibly have come from the same dataset. This kind of learning is called generative modeling.Until recently, we had no method that could synthesize novel photorealistic images. But the success of deep neural networks for discriminative learning opened up new possibilities. One big trend over the last three years has been the application of discriminative deep nets to overcome challenges in problems that we do not generally think of as supervised learning problems. The recurrent neural network language models are one example of using a discriminative network (trained to predict the next character) that once trained can act as a generative model.In 2014, a breakthrough paper introduced Generative adversarial networks (GANs) :cite:`Goodfellow.Pouget-Abadie.Mirza.ea.2014`, a clever new way to leverage the power of discriminative models to get good generative models. At their heart, GANs rely on the idea that a data generator is good if we cannot tell fake data apart from real data. In statistics, this is called a two-sample test - a test to answer the question whether datasets $X=\{x_1,\ldots, x_n\}$ and $X'=\{x'_1,\ldots, x'_n\}$ were drawn from the same distribution. The main difference between most statistics papers and GANs is that the latter use this idea in a constructive way. In other words, rather than just training a model to say "hey, these two datasets do not look like they came from the same distribution", they use the [two-sample test](https://en.wikipedia.org/wiki/Two-sample_hypothesis_testing) to provide training signals to a generative model. This allows us to improve the data generator until it generates something that resembles the real data. At the very least, it needs to fool the classifier. Even if our classifier is a state of the art deep neural network.:label:`fig_gan`The GAN architecture is illustrated in :numref:`fig_gan`.As you can see, there are two pieces in GAN architecture - first off, we need a device (say, a deep network but it really could be anything, such as a game rendering engine) that might potentially be able to generate data that looks just like the real thing. If we are dealing with images, this needs to generate images. 
If we are dealing with speech, it needs to generate audio sequences, and so on. We call this the generator network. The second component is the discriminator network. It attempts to distinguish fake and real data from each other. Both networks are in competition with each other. The generator network attempts to fool the discriminator network. At that point, the discriminator network adapts to the new fake data. This information, in turn, is used to improve the generator network, and so on. The discriminator is a binary classifier that distinguishes whether the input $x$ is real (from real data) or fake (from the generator). Typically, the discriminator outputs a scalar prediction $o\in\mathbb R$ for input $\mathbf x$, such as using a dense layer with hidden size 1, and then applies the sigmoid function to obtain the predicted probability $D(\mathbf x) = 1/(1+e^{-o})$. Assume the label $y$ for the true data is $1$ and $0$ for the fake data. We train the discriminator to minimize the cross-entropy loss, *i.e.*,$$ \min_D \{ - y \log D(\mathbf x) - (1-y)\log(1-D(\mathbf x)) \},$$For the generator, it first draws some parameter $\mathbf z\in\mathbb R^d$ from a source of randomness, *e.g.*, a normal distribution $\mathbf z \sim \mathcal{N} (0, 1)$. We often call $\mathbf z$ the latent variable. It then applies a function to generate $\mathbf x'=G(\mathbf z)$. The goal of the generator is to fool the discriminator into classifying $\mathbf x'=G(\mathbf z)$ as true data, *i.e.*, we want $D( G(\mathbf z)) \approx 1$. In other words, for a given discriminator $D$, we update the parameters of the generator $G$ to maximize the cross-entropy loss when $y=0$, *i.e.*,$$ \max_G \{ - (1-y) \log(1-D(G(\mathbf z))) \} = \max_G \{ - \log(1-D(G(\mathbf z))) \}.$$If the generator does a perfect job, then $D(\mathbf x')\approx 1$, so the above loss is near 0, which results in gradients that are too small to make good progress for the discriminator. So commonly we minimize the following loss instead:$$ \min_G \{ - y \log(D(G(\mathbf z))) \} = \min_G \{ - \log(D(G(\mathbf z))) \}, $$which just feeds $\mathbf x'=G(\mathbf z)$ into the discriminator but gives it the label $y=1$. To sum up, $D$ and $G$ are playing a "minimax" game with the comprehensive objective function:$$\min_D \max_G \{ -E_{x \sim \text{Data}} \log D(\mathbf x) - E_{z \sim \text{Noise}} \log(1 - D(G(\mathbf z))) \}.$$Many GAN applications are in the context of images. For demonstration purposes, we will content ourselves with fitting a much simpler distribution first. We will illustrate what happens if we use GANs to build the world's most inefficient estimator of parameters for a Gaussian. Let us get started. | %matplotlib inline
from mxnet import autograd, gluon, init, np, npx
from mxnet.gluon import nn
from d2l import mxnet as d2l
npx.set_np() | _____no_output_____ | MIT | python/d2l-en/mxnet/chapter_generative-adversarial-networks/gan.ipynb | rtp-aws/devpost_aws_disaster_recovery |
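To make the two loss functions described above concrete, here is a tiny numerical sketch of my own (not part of the original notebook); the value $D(\mathbf x)=0.9$ for a real example is an arbitrary illustration:

```
import math

def d_loss(d_real, d_fake):
    # discriminator cross-entropy: one real example labeled 1 plus one fake example labeled 0
    return -math.log(d_real) - math.log(1 - d_fake)

def g_loss(d_fake):
    # generator loss minimized in practice: -log D(G(z))
    return -math.log(d_fake)

for d_fake in (0.1, 0.5, 0.9):
    print(f'D(G(z)) = {d_fake:.1f} -> discriminator loss {d_loss(0.9, d_fake):.3f}, '
          f'generator loss {g_loss(d_fake):.3f}')
```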
Generate Some "Real" DataSince this is going to be the world's lamest example, we simply generate data drawn from a Gaussian. | X = np.random.normal(0.0, 1, (1000, 2))
A = np.array([[1, 2], [-0.1, 0.5]])
b = np.array([1, 2])
data = np.dot(X, A) + b | _____no_output_____ | MIT | python/d2l-en/mxnet/chapter_generative-adversarial-networks/gan.ipynb | rtp-aws/devpost_aws_disaster_recovery |
Let us see what we got. This should be a Gaussian shifted in some rather arbitrary way with mean $b$ and covariance matrix $A^TA$. | d2l.set_figsize()
d2l.plt.scatter(data[:100, (0)].asnumpy(), data[:100, (1)].asnumpy());
print(f'The covariance matrix is\n{np.dot(A.T, A)}')
batch_size = 8
data_iter = d2l.load_array((data,), batch_size) | _____no_output_____ | MIT | python/d2l-en/mxnet/chapter_generative-adversarial-networks/gan.ipynb | rtp-aws/devpost_aws_disaster_recovery |
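As a quick sanity check of my own (not in the original notebook), the sample mean and covariance of `data` should indeed be close to $b$ and $A^TA$:

```
# empirical moments of the generated data
centered = data - data.mean(axis=0)
emp_cov = np.dot(centered.T, centered) / (len(data) - 1)
print('empirical mean:', data.mean(axis=0))   # should be close to b = [1, 2]
print('empirical covariance:\n', emp_cov)     # should be close to A^T A
```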
GeneratorOur generator network will be the simplest network possible - a single-layer linear model. This is because we will be driving that linear network with a Gaussian data generator. Hence, it literally only needs to learn the parameters to fake things perfectly. | net_G = nn.Sequential()
net_G.add(nn.Dense(2)) | _____no_output_____ | MIT | python/d2l-en/mxnet/chapter_generative-adversarial-networks/gan.ipynb | rtp-aws/devpost_aws_disaster_recovery |
DiscriminatorFor the discriminator we will be a bit more discriminating: we will use an MLP with 3 layers to make things a bit more interesting. | net_D = nn.Sequential()
net_D.add(nn.Dense(5, activation='tanh'),
nn.Dense(3, activation='tanh'),
nn.Dense(1)) | _____no_output_____ | MIT | python/d2l-en/mxnet/chapter_generative-adversarial-networks/gan.ipynb | rtp-aws/devpost_aws_disaster_recovery |
TrainingFirst we define a function to update the discriminator. | #@save
def update_D(X, Z, net_D, net_G, loss, trainer_D):
"""Update discriminator."""
batch_size = X.shape[0]
ones = np.ones((batch_size,), ctx=X.ctx)
zeros = np.zeros((batch_size,), ctx=X.ctx)
with autograd.record():
real_Y = net_D(X)
fake_X = net_G(Z)
# Do not need to compute gradient for `net_G`, detach it from
# computing gradients.
fake_Y = net_D(fake_X.detach())
loss_D = (loss(real_Y, ones) + loss(fake_Y, zeros)) / 2
loss_D.backward()
trainer_D.step(batch_size)
return float(loss_D.sum()) | _____no_output_____ | MIT | python/d2l-en/mxnet/chapter_generative-adversarial-networks/gan.ipynb | rtp-aws/devpost_aws_disaster_recovery |
The generator is updated similarly. Here we reuse the cross-entropy loss but change the label of the fake data from $0$ to $1$. | #@save
def update_G(Z, net_D, net_G, loss, trainer_G):
"""Update generator."""
batch_size = Z.shape[0]
ones = np.ones((batch_size,), ctx=Z.ctx)
with autograd.record():
# We could reuse `fake_X` from `update_D` to save computation
fake_X = net_G(Z)
# Recomputing `fake_Y` is needed since `net_D` is changed
fake_Y = net_D(fake_X)
loss_G = loss(fake_Y, ones)
loss_G.backward()
trainer_G.step(batch_size)
return float(loss_G.sum()) | _____no_output_____ | MIT | python/d2l-en/mxnet/chapter_generative-adversarial-networks/gan.ipynb | rtp-aws/devpost_aws_disaster_recovery |
Both the discriminator and the generator perform binary logistic regression with the cross-entropy loss. We use Adam to smooth the training process. In each iteration, we first update the discriminator and then the generator. We visualize both losses and generated examples. | def train(net_D, net_G, data_iter, num_epochs, lr_D, lr_G, latent_dim, data):
loss = gluon.loss.SigmoidBCELoss()
net_D.initialize(init=init.Normal(0.02), force_reinit=True)
net_G.initialize(init=init.Normal(0.02), force_reinit=True)
trainer_D = gluon.Trainer(net_D.collect_params(),
'adam', {'learning_rate': lr_D})
trainer_G = gluon.Trainer(net_G.collect_params(),
'adam', {'learning_rate': lr_G})
animator = d2l.Animator(xlabel='epoch', ylabel='loss',
xlim=[1, num_epochs], nrows=2, figsize=(5, 5),
legend=['discriminator', 'generator'])
animator.fig.subplots_adjust(hspace=0.3)
for epoch in range(num_epochs):
# Train one epoch
timer = d2l.Timer()
metric = d2l.Accumulator(3) # loss_D, loss_G, num_examples
for X in data_iter:
batch_size = X.shape[0]
Z = np.random.normal(0, 1, size=(batch_size, latent_dim))
metric.add(update_D(X, Z, net_D, net_G, loss, trainer_D),
update_G(Z, net_D, net_G, loss, trainer_G),
batch_size)
# Visualize generated examples
Z = np.random.normal(0, 1, size=(100, latent_dim))
fake_X = net_G(Z).asnumpy()
animator.axes[1].cla()
animator.axes[1].scatter(data[:, 0], data[:, 1])
animator.axes[1].scatter(fake_X[:, 0], fake_X[:, 1])
animator.axes[1].legend(['real', 'generated'])
# Show the losses
loss_D, loss_G = metric[0]/metric[2], metric[1]/metric[2]
animator.add(epoch + 1, (loss_D, loss_G))
print(f'loss_D {loss_D:.3f}, loss_G {loss_G:.3f}, '
f'{metric[2] / timer.stop():.1f} examples/sec') | _____no_output_____ | MIT | python/d2l-en/mxnet/chapter_generative-adversarial-networks/gan.ipynb | rtp-aws/devpost_aws_disaster_recovery |
Now we specify the hyperparameters to fit the Gaussian distribution. | lr_D, lr_G, latent_dim, num_epochs = 0.05, 0.005, 2, 20
train(net_D, net_G, data_iter, num_epochs, lr_D, lr_G,
latent_dim, data[:100].asnumpy()) | loss_D 0.693, loss_G 0.693, 549.8 examples/sec
| MIT | python/d2l-en/mxnet/chapter_generative-adversarial-networks/gan.ipynb | rtp-aws/devpost_aws_disaster_recovery |
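After training, we could draw fresh latent samples and push them through the trained generator to inspect the fitted distribution. A short sketch of my own, mirroring the visualization inside `train`:

```
# map new latent samples through the trained generator
Z = np.random.normal(0, 1, size=(100, latent_dim))
fake_X = net_G(Z)
print('mean of generated samples:', fake_X.mean(axis=0))  # should approach b = [1, 2] if training converged
```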
Unstructured Profilers **Data profiling** - *is the process of examining a dataset and collecting statistical or informational summaries about said dataset.*

The Profiler class inside the DataProfiler is designed to generate *data profiles*; it ingests either a Data class or a Pandas DataFrame. Currently, the Data class supports loading the following file formats:

* Any delimited (CSV, TSV, etc.)
* JSON object
* Avro
* Parquet
* Text files
* Pandas Series/Dataframe

Once the data is loaded, the Profiler can calculate statistics and predict the entities (via the Labeler) of every column (csv) or key-value (JSON) store, as well as dataset-wide information such as the number of nulls, duplicates, etc. This example looks specifically at the unstructured data types for unstructured profiling. This means that only text files, lists of strings, single-column pandas dataframes/series, or DataProfile Data objects in string format will work with the unstructured profiler.

Reporting One of the primary purposes of the Profiler is to quickly identify what is in the dataset. This can be useful for analyzing a dataset prior to use or determining which columns could be useful for a given purpose. In terms of reporting, there are multiple reporting options:

* **Pretty**: Floats are rounded to four decimal places, and lists are shortened.
* **Compact**: Similar to pretty, but removes detailed statistics
* **Serializable**: Output is json serializable and not prettified
* **Flat**: Nested output is returned as a flattened dictionary

The **Pretty** and **Compact** reports are the two most commonly used reports and include `global_stats` and `data_stats` for the given dataset. `global_stats` contains overall properties of the data such as samples used and file encoding. `data_stats` contains specific properties and statistics for each text sample.

For unstructured profiles, the report looks like this:

```
"global_stats": {
    "samples_used": int,
    "empty_line_count": int,
    "file_type": string,
    "encoding": string
},
"data_stats": {
    "data_label": {
        "entity_counts": {
            "word_level": dict(int),
            "true_char_level": dict(int),
            "postprocess_char_level": dict(int)
        },
        "times": dict(float)
    },
    "statistics": {
        "vocab": list(char),
        "words": list(string),
        "word_count": dict(int),
        "times": dict(float)
    }
}
``` | import os
import sys
import json
try:
sys.path.insert(0, '..')
import dataprofiler as dp
except ImportError:
import dataprofiler as dp
data_path = "../dataprofiler/tests/data"
# remove extra tf logging
import tensorflow as tf
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
data = dp.Data(os.path.join(data_path, "txt/discussion_reddit.txt"))
profile = dp.Profiler(data)
report = profile.report(report_options={"output_format": "pretty"})
print(json.dumps(report, indent=4)) | _____no_output_____ | Apache-2.0 | examples/unstructured_profilers.ipynb | taylorfturner/DataProfiler |
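The other report formats listed above can be requested the same way. A small sketch of my own, assuming the lowercase option names `"serializable"` and `"flat"`:

```
# same profile, alternative report formats
report_serializable = profile.report(report_options={"output_format": "serializable"})
report_flat = profile.report(report_options={"output_format": "flat"})
print(list(report_flat)[:5])  # first few keys of the flattened dictionary
```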
Profiler Type It should be noted that, in addition to reading the input data from text files, DataProfiler accepts the input data as a pandas dataframe, a pandas series, a list, or a Data object (when an unstructured format is selected), provided the Profiler is explicitly chosen as unstructured. | # run data profiler and get the report
import pandas as pd
data = dp.Data(os.path.join(data_path, "csv/SchoolDataSmall.csv"), options={"data_format": "records"})
profile = dp.Profiler(data, profiler_type='unstructured')
report = profile.report(report_options={"output_format":"pretty"})
print(json.dumps(report, indent=4)) | _____no_output_____ | Apache-2.0 | examples/unstructured_profilers.ipynb | taylorfturner/DataProfiler |
Profiler options The DataProfiler has the ability to turn components on and off as needed. This is accomplished via the `ProfilerOptions` class. For example, if a user doesn't require vocab count information, they may desire to turn off that functionality. Below, let's remove the vocab count and set the stop words. A full list of options is available in the Profiler section of the [DataProfiler documentation](https://capitalone.github.io/DataProfiler). | data = dp.Data(os.path.join(data_path, "txt/discussion_reddit.txt"))
profile_options = dp.ProfilerOptions()
# Setting multiple options via set
profile_options.set({ "*.vocab.is_enabled": False, "*.is_case_sensitive": True })
# Set options via directly setting them
profile_options.unstructured_options.text.stop_words = ["These", "are", "stop", "words"]
profile = dp.Profiler(data, options=profile_options)
report = profile.report(report_options={"output_format": "compact"})
# Print the report
print(json.dumps(report, indent=4)) | _____no_output_____ | Apache-2.0 | examples/unstructured_profilers.ipynb | taylorfturner/DataProfiler |
Updating Profiles Beyond just profiling, one of the unique aspects of the DataProfiler is the ability to update existing profiles. For an update to work correctly, the schema (columns / keys) of the new data must match that of the original profile. | # Load and profile a CSV file
data = dp.Data(os.path.join(data_path, "txt/sentence-3x.txt"))
profile = dp.Profiler(data)
# Update the profile with new data:
new_data = dp.Data(os.path.join(data_path, "txt/sentence-3x.txt"))
profile.update_profile(new_data)
# Take a peek at the data
print(data.data)
print(new_data.data)
# Report the compact version of the profile
report = profile.report(report_options={"output_format": "compact"})
print(json.dumps(report, indent=4)) | _____no_output_____ | Apache-2.0 | examples/unstructured_profilers.ipynb | taylorfturner/DataProfiler |
Merging Profiles Merging profiles is an alternative method for updating profiles. In particular, multiple profiles can be generated separately, then added together with a simple `+` command: `profile3 = profile1 + profile2` | # Load a CSV file with a schema
data1 = dp.Data(os.path.join(data_path, "txt/sentence-3x.txt"))
profile1 = dp.Profiler(data1)
# Load another CSV file with the same schema
data2 = dp.Data(os.path.join(data_path, "txt/sentence-3x.txt"))
profile2 = dp.Profiler(data2)
# Merge the profiles
profile3 = profile1 + profile2
# Report the compact version of the profile
report = profile3.report(report_options={"output_format":"compact"})
print(json.dumps(report, indent=4)) | _____no_output_____ | Apache-2.0 | examples/unstructured_profilers.ipynb | taylorfturner/DataProfiler |
As you can see, the `update_profile` function and the `+` operator function similarly. The reason the `+` operator is important is that it's possible to *save and load profiles*, which we cover next. Saving and Loading a Profile Not only can the Profiler create and update profiles, it's also possible to save, load, and then manipulate profiles. | # Load data
data = dp.Data(os.path.join(data_path, "txt/sentence-3x.txt"))
# Generate a profile
profile = dp.Profiler(data)
# Save a profile to disk for later (saves as pickle file)
profile.save(filepath="my_profile.pkl")
# Load a profile from disk
loaded_profile = dp.Profiler.load("my_profile.pkl")
# Report the compact version of the profile
report = profile.report(report_options={"output_format":"compact"})
print(json.dumps(report, indent=4)) | _____no_output_____ | Apache-2.0 | examples/unstructured_profilers.ipynb | taylorfturner/DataProfiler |
With the ability to save and load profiles, profiles can be generated on multiple machines and then merged. Further, profiles can be stored and later used in applications such as change point detection, synthetic data generation, and more. | # Load multiple files via the Data class
filenames = ["txt/sentence-3x.txt",
"txt/sentence.txt"]
data_objects = []
for filename in filenames:
data_objects.append(dp.Data(os.path.join(data_path, filename)))
print(data_objects)
# Generate and save profiles
for i in range(len(data_objects)):
profile = dp.Profiler(data_objects[i])
report = profile.report(report_options={"output_format":"compact"})
print(json.dumps(report, indent=4))
profile.save(filepath="data-"+str(i)+".pkl")
# Load profiles and add them together
profile = None
for i in range(len(data_objects)):
if profile is None:
profile = dp.Profiler.load("data-"+str(i)+".pkl")
else:
profile += dp.Profiler.load("data-"+str(i)+".pkl")
# Report the compact version of the profile
report = profile.report(report_options={"output_format":"compact"})
print(json.dumps(report, indent=4)) | _____no_output_____ | Apache-2.0 | examples/unstructured_profilers.ipynb | taylorfturner/DataProfiler |
Functions | def adf_test(time_series):
"""
    param time_series: takes a time series (list or pandas Series) as input
    return: True/False indicating stationarity according to the ADF test, alongside the detailed test output as a pandas Series
"""
dftest = adfuller(time_series, autolag='AIC')
dfoutput = pd.Series(dftest[0:4],
index=[
'Test Statistic', 'p-value', '#Lags Used',
'Number of Observations Used'
])
for key, value in dftest[4].items():
dfoutput['Critical Value (%s)' % key] = value
if dfoutput['p-value'] < 0.01:
return True, dfoutput
else:
return False, dfoutput
def kpss_test(time_series):
kpsstest = kpss(time_series, regression='c')
dfoutput = pd.Series(kpsstest[0:3],
index=['Test Statistic', 'p-value', 'Lags Used'])
for key, value in kpsstest[3].items():
dfoutput['Critical Value (%s)' % key] = value
if dfoutput['p-value'] < 0.01:
return False, dfoutput
else:
return True, dfoutput
def most_frequent(list):
counter = 0
num = list[0]
for i in list:
curr_frequency = list.count(i)
if curr_frequency > counter:
counter = curr_frequency
num = i
return num
def identify_cont_disc(df):
"""
:param df: the metric data column(s) that has no NAN or constant values
:return: list of continuous metrics and their corresponding data column(s)
"""
raw_feature_list = df.columns
raw_feature_list = list(raw_feature_list.values)
# feature_list = df.columns
discrete_features = []
continuous_features = []
for colum in raw_feature_list:
if len(df[colum].unique()) < 20:
# print(colum, ': ', df[colum].unique())
discrete_features.append(colum)
else:
# print(colum, ": continuous features")
continuous_features.append(colum)
df_cont = df[continuous_features].copy()
df_disc = df[discrete_features].copy()
return continuous_features, discrete_features
def analysisPeriod(df_raw, feature, time_feature, plot=False, verbose=False):
"""
:param df_raw: data set
:param feature: metric name
:param time_feature: time series name
:param plot: visual analysis functionality
:param verbose: print details on the console
:return: stationary, seasonal, period, decomposed series
"""
## INITIALIZATION: time series should be normalised into [0, 1]
seasonal = False
stationary = False
df_ts = df_raw.copy()
# Stationary Check
# ADF TEST: Augmented Dickey–Fuller test
# KPSS TEST: Kwiatkowski–Phillips–Schmidt–Shin TEST
adf_result, adf_output = adf_test(df_ts[feature])
kpss_result, kpss_output = kpss_test(df_ts[feature])
if verbose:
print('adf-Test')
print(adf_result)
print(adf_output)
print('kpss-Test')
print(kpss_result)
print(kpss_output)
    # Combine the two tests: the series is labeled stationary only when the ADF
    # and KPSS tests agree; disagreement indicates difference/trend stationarity.
    if adf_result and kpss_result:
        stationary = True
    elif adf_result and not kpss_result:
        stationary = False
        print("Difference Stationary")
    elif not adf_result and kpss_result:
        stationary = False
        print("Trend Stationary")
    else:
        stationary = False
# First: checking flat line.
if np.all(np.isclose(df_ts[feature].values, df_ts[feature].values[0])):
print('Constant series')
seasonal = False
period = 1
result_add = None
else:
# If not flat line then:
# Seasonality Check:
# Automatic find the period based on Time Index
# Shift windows to find autocorrelations
shift_ = []
for i in np.arange(len(df_ts[feature])):
shift_.append(df_ts[feature].autocorr(lag=i))
shift_ = np.array(shift_)
# if max of Autocorelation greater than 0.9, we have seasonal
if max(shift_) >= 0.9:
seasonal = True
# find peaks of autocorelation -> in order to find local maxima
# peaks, _ = find_peaks(shift_, height=0.5)
peaks = find_peaks_cwt(shift_, np.arange(1, 10))
# turn peaks into differences between peaks
diff = []
for i in np.arange(len(peaks) - 1):
diff.append(peaks[i + 1] - peaks[i])
if len(diff) == 0: # can't find peaks
first_period = 1 # need to check again this!
else:
# return the most distance between peaks -> that is period of data
first_period = most_frequent(list(diff))
if verbose:
#print('Candidate periods:', set(diff))
for eachdiff in diff:
print(df_ts[feature].autocorr(lag=eachdiff), end='\t')
print()
if (plot == True) & (verbose == True):
plt.figure(figsize=(20, 3))
sm.graphics.tsa.plot_acf(df_ts[feature].squeeze(),
lags=int(first_period))
# if period is too large
if first_period > int(len(df_ts) / 2):
if verbose:
print('Frequency for Moving Average is over half size!')
first_period = int(len(df_ts) / 2)
# SEASONAL ANALYSIS
if verbose:
print('First period:', first_period)
df_ts.index = pd.to_datetime(df_ts[time_feature],
format='%Y-%m-%d %H:%M:%S')
rolling_mean = df_ts[feature].rolling(window=int(first_period)).mean()
exp1 = pd.Series(df_ts[feature].ewm(span=int(first_period),
adjust=False).mean())
exp1.index = pd.to_datetime(df_ts[time_feature],
format='%Y-%m-%d %H:%M:%S')
if (verbose == True) & (plot == True):
df_ori = df_ts[[feature, time_feature]].copy()
df_ori.set_index(time_feature, inplace=True)
fig, ax = plt.subplots(figsize=(15, 4))
df_ori.plot(ax=ax)
exp1.plot(ax=ax)
ax.legend([
'Original Series',
'Moving Average Series with P=%d' % first_period
])
plt.show()
# Using Moving Average
result_add = seasonal_decompose(exp1,
model='additive',
extrapolate_trend='freq',
freq=first_period)
# Using STL
# from statsmodels.tsa.seasonal import STL
# stl = STL(exp1, period=first_period, robust=True)
# result_add = stl.fit()
# Only check the seasonal series to find again the best period
arr_seasonal_ = pd.Series(result_add.seasonal + result_add.resid)
# if seasonal is flat
if np.all(np.isclose(arr_seasonal_, arr_seasonal_[0])):
if verbose == True:
print('Seasonal + Residual become flat')
seasonal = False
period = 1
else:
# if seasonal is not flat
# Continue to use autocorrelation to find the period
shift_ = []
for i in np.arange(len(arr_seasonal_)):
shift_.append(arr_seasonal_.autocorr(lag=i))
shift_ = np.array(shift_)
# Find peaks again for seasonal + residual
peaks, _ = find_peaks(shift_, height=0.85, distance=7)
# peaks = find_peaks_cwt(shift_,np.arange(1,10))
# Looking for possible periods
if len(peaks) < 2:
if df_ts[feature].autocorr(lag=first_period) > 0.80:
period = first_period
seasonal = True
else:
period = 1
seasonal = False
result_add = None
# result_add = seasonal_decompose(df_ts[feature], model='additive', extrapolate_trend='freq',freq=period)
else:
diff = []
for i in np.arange(len(peaks)):
if i + 1 < len(peaks):
diff.append(peaks[i + 1] - peaks[i])
if verbose:
print('Candidate periods:', set(diff))
for eachdiff in diff:
print(df_ts[feature].autocorr(lag=eachdiff), end='\t')
print()
if verbose:
print('Peaks of autocorr:', diff)
if 2 * most_frequent(list(diff)) > len(df_ts):
seasonal = False
period = 1
result_add = None
else:
seasonal = True
period = most_frequent(list(diff))
if (plot == True) & (verbose == True):
sm.graphics.tsa.plot_acf(exp1.squeeze(), lags=int(period) * 2)
plt.show()
# Final Decomposition
result_add = seasonal_decompose(df_ts[feature],
model='additive',
extrapolate_trend='freq',
freq=period)
# plot results of decomposition
if plot:
plt.rcParams.update({'figure.figsize': (10, 10)})
result_add.plot()
plt.show()
plt.figure(figsize=(20, 3))
plt.plot(df_ts[feature].values, label="Timeseries")
plt.axvline(x=0, color='r', ls='--')
plt.axvline(x=period, color='r', ls='--')
plt.grid(True)
plt.axis('tight')
plt.legend(loc="best", fontsize=13)
plt.show()
continuous, discrete = identify_cont_disc(df_raw[[feature]])
return stationary, seasonal, period, result_add, continuous, discrete | _____no_output_____ | MIT | .ipynb_checkpoints/Stationarity-Decomposition-Periodicity-checkpoint.ipynb | ahtshamzafar1/Time-Series-Data-Analysis |
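Before running the full analysis below, here is a small sketch of my own (assuming numpy and pandas are imported earlier in the notebook) showing how `adf_test` and `kpss_test` behave on two synthetic series, stationary white noise versus a non-stationary random walk:

```
# white noise should be flagged stationary, a random walk should not
noise = pd.Series(np.random.normal(0, 1, 500))
walk = pd.Series(np.random.normal(0, 1, 500).cumsum())
for name, series in [('white noise', noise), ('random walk', walk)]:
    adf_ok, _ = adf_test(series)
    kpss_ok, _ = kpss_test(series)
    print(name, '-> ADF stationary:', adf_ok, '| KPSS stationary:', kpss_ok)
```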
Timeseries Analysis | df_weather=pd.read_csv(r'C:\Users\ahtis\OneDrive\Desktop\ARIMA\data\data.csv')
df_weather = df_weather[1:60]
df_weather = df_weather.dropna()
feature_name = "glucose"
df_weather["Timestamp"] = pd.to_datetime(df_weather["Timestamp"], format='%Y-%m-%d %H:%M:%S', utc=True)
df_weather["Timestamp"] = pd.DatetimeIndex(df_weather["Timestamp"], tz='Europe/Berlin')
Timestamp = df_weather.columns[0]
stationary, seasonal, period, resultdfs, continuous, discrete = analysisPeriod(
df_weather.head(2500),
feature=feature_name,
time_feature=Timestamp,
plot=True,
verbose=True)
print("Timeseries %s is Stationary? %s " % (feature_name, stationary))
print("Timeseries %s is Seasonal? %s " % (feature_name, seasonal))
if seasonal and period > 1:
print("Period for Timeseries %s = %s " % (feature_name, period))
if seasonal and period == 1:
print("Period for Timeseries %s is not found" % (feature_name, period))
if continuous:
print("Timeseries %s is Continuous" % (feature_name))
else:
print("Timeseries %s is Discrete" % (feature_name)) | _____no_output_____ | MIT | .ipynb_checkpoints/Stationarity-Decomposition-Periodicity-checkpoint.ipynb | ahtshamzafar1/Time-Series-Data-Analysis |
This notebook can be used to generate fake structural variant test data for testing the genome finishing module. Given a source FASTA, BAM, and paired-end FASTQs plus insertion parameters, it creates a directory with the following files: `ref.fa`, `reads.1.fq`, `reads.2.fq`. | import sys
from django.core.management import setup_environ
import settings
setup_environ(settings)
import random
import re
import os
from Bio import SeqIO
import pysam
from genome_finish.millstone_de_novo_fns import get_avg_genome_coverage
# def _make_fake_insertion(ref_endpoints, ins_endpoints):
ref_endpoints = (2930000, 2940000)
ins_endpoints = (2932000, 2933000)
desired_coverage = 40
output_no_insertion_ref = True
test_number = 6
template_dir = '/home/wahern/projects/millstone/genome_designer/test_data/genome_finish_test/mg1655_test/templates'
source_fasta = os.path.join(template_dir, 'mg1655.fa')
source_bam = os.path.join(template_dir, 'lib1_rec07.bwa_align.bam')
source_fq1 = os.path.join(template_dir, 'lib1_rec07.1.fq')
source_fq2 = os.path.join(template_dir, 'lib1_rec07.2.fq')
test_dir = '/home/wahern/projects/millstone/genome_designer/test_data/genome_finish_test/mg1655_test'
output_dir = os.path.join(test_dir, str(test_number))
output_fasta = os.path.join(output_dir, 'ref.fa')
output_fq1 = os.path.join(output_dir, 'reads.1.fq')
output_fq2 = os.path.join(output_dir, 'reads.2.fq')
assert not os.path.exists(output_dir)
# Get sample
if desired_coverage:
coverage = get_avg_genome_coverage(source_bam)
if desired_coverage > coverage:
raise Exception('Desired coverage:' + str(desired_coverage) +
' is greater than the genome\'s average coverage of ' +
str(coverage))
read_sampling_fraction = desired_coverage / coverage
linecount = 0
with open(source_fq1) as fh:
for line in fh:
linecount+=1
include_read = [random.random() < read_sampling_fraction for i in xrange(linecount)]
os.mkdir(output_dir)
# Get source seqrecord
with open(source_fasta, 'r') as source_fasta_fh:
source_seqrecord = SeqIO.parse(source_fasta_fh, 'fasta').next()
output_seqrecord = (
source_seqrecord[ref_endpoints[0]:ins_endpoints[0]] +
source_seqrecord[ins_endpoints[1]:ref_endpoints[1]]
)
# Sanity check
assert len(output_seqrecord) == ref_endpoints[1] - ref_endpoints[0] - (
ins_endpoints[1] - ins_endpoints[0])
# Add some metadata to the header.
output_seqrecord.id = source_seqrecord.id
output_seqrecord.description = source_seqrecord.description + (
', FAKE_SHORT: ' +
'{ref_endpoints:' + str(ref_endpoints) + ', ' +
'ins_endpoints:' + str(ins_endpoints) + '}')
# Write output fasta
with open(output_fasta, 'w') as fh:
SeqIO.write([output_seqrecord], fh, 'fasta')
# Get reads in region
qnames_in_region = {}
source_af = pysam.AlignmentFile(source_bam)
for read in source_af.fetch('NC_000913', ref_endpoints[0], ref_endpoints[1]):
if (not read.is_unmapped and
ref_endpoints[0] <= read.reference_start <= ref_endpoints[1] and
ref_endpoints[0] <= read.reference_end <= ref_endpoints[1]):
qnames_in_region[read.qname] = True
source_af.close()
# Go through fastqs and write reads in ROI to file
p1 = re.compile('@(\S+)')
for input_fq_path, output_fq_path in [(source_fq1, output_fq1), (source_fq2, output_fq2)]:
counter = 0
if desired_coverage:
iterator = iter(include_read)
with open(input_fq_path, 'r') as in_fh, open(output_fq_path, 'w') as out_fh:
for line in in_fh:
m1 = p1.match(line)
if m1:
qname = m1.group(1)
if qname in qnames_in_region:
if desired_coverage:
if iterator.next():
out_fh.write(line)
out_fh.write(in_fh.next())
out_fh.write(in_fh.next())
out_fh.write(in_fh.next())
else:
out_fh.write(line)
out_fh.write(in_fh.next())
out_fh.write(in_fh.next())
out_fh.write(in_fh.next())
if output_no_insertion_ref:
# Get source seqrecord
with open(source_fasta, 'r') as source_fasta_fh:
source_seqrecord = SeqIO.parse(source_fasta_fh, 'fasta').next()
output_seqrecord = (
source_seqrecord[ref_endpoints[0]:ref_endpoints[1]]
)
# Add some metadata to the header.
output_seqrecord.id = source_seqrecord.id
output_seqrecord.description = source_seqrecord.description + (
', FAKE_SHORT: ' +
'{ref_endpoints:' + str(ref_endpoints) + ', ' +
'ins_endpoints:None}')
# Write output fasta
no_ins_fasta = os.path.join(output_dir, 'no_ins_ref.fa')
with open(no_ins_fasta, 'w') as fh:
SeqIO.write([output_seqrecord], fh, 'fasta')
| _____no_output_____ | MIT | genome_designer/debug/make_new_refs_clean.ipynb | churchlab/millstone |
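A quick post-hoc check of my own (not in the original notebook): re-read the generated `ref.fa` and confirm that its length equals the reference window minus the simulated insertion span.

```
# verify the written reference length
with open(output_fasta) as fh:
    written = SeqIO.parse(fh, 'fasta').next()
expected_len = (ref_endpoints[1] - ref_endpoints[0]) - (ins_endpoints[1] - ins_endpoints[0])
print(len(written))
print(expected_len)
```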
Think Bayes: Chapter 5This notebook presents code and exercises from Think Bayes, second edition.Copyright 2016 Allen B. DowneyMIT License: https://opensource.org/licenses/MIT | from __future__ import print_function, division
% matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite, Beta
import thinkplot | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
OddsThe following function converts from probabilities to odds. | def Odds(p):
return p / (1-p) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
And this function converts from odds to probabilities. | def Probability(o):
return o / (o+1) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
If 20% of bettors think my horse will win, that corresponds to odds of 1:4, or 0.25. | p = 0.2
Odds(p) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
If the odds in favor of my horse are 1:5, that corresponds to a probability of 1/6. | o = 1/5
Probability(o) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
We can use the odds form of Bayes's theorem to solve the cookie problem: | prior_odds = 1
likelihood_ratio = 0.75 / 0.5
post_odds = prior_odds * likelihood_ratio
post_odds | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
And then we can compute the posterior probability, if desired. | post_prob = Probability(post_odds)
post_prob | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
If we draw another cookie and it's chocolate, we can do another update: | likelihood_ratio = 0.25 / 0.5
post_odds *= likelihood_ratio
post_odds | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
And convert back to probability. | post_prob = Probability(post_odds)
post_prob | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
Oliver's bloodThe likelihood ratio is also useful for talking about the strength of evidence without getting bogged down talking about priors.As an example, we'll solve this problem from MacKay's *Information Theory, Inference, and Learning Algorithms*:> Two people have left traces of their own blood at the scene of a crime. A suspect, Oliver, is tested and found to have type 'O' blood. The blood groups of the two traces are found to be of type 'O' (a common type in the local population, having frequency 60%) and of type 'AB' (a rare type, with frequency 1%). Do these data [the traces found at the scene] give evidence in favor of the proposition that Oliver was one of the people [who left blood at the scene]?If Oliver is one of the people who left blood at the crime scene, then he accounts for the 'O' sample, so the probability of the data is just the probability that a random member of the population has type 'AB' blood, which is 1%.If Oliver did not leave blood at the scene, then we have two samples to account for. If we choose two random people from the population, what is the chance of finding one with type 'O' and one with type 'AB'? Well, there are two ways it might happen: the first person we choose might have type 'O' and the second 'AB', or the other way around. So the total probability is $2 (0.6) (0.01) = 1.2$%.So the likelihood ratio is: | like1 = 0.01
like2 = 2 * 0.6 * 0.01
likelihood_ratio = like1 / like2
likelihood_ratio | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
Since the ratio is less than 1, it is evidence *against* the hypothesis that Oliver left blood at the scene. But it is weak evidence. For example, if the prior odds were 1 (that is, 50% probability), the posterior odds would be 0.83, which corresponds to a probability of: | post_odds = 1 * like1 / like2
Probability(post_odds) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
So this evidence doesn't "move the needle" very much. **Exercise:** Suppose other evidence had made you 90% confident of Oliver's guilt. How much would this exculpatory evince change your beliefs? What if you initially thought there was only a 10% chance of his guilt?Notice that evidence with the same strength has a different effect on probability, depending on where you started. | # Solution
post_odds = Odds(0.9) * like1 / like2
Probability(post_odds)
# Solution
post_odds = Odds(0.1) * like1 / like2
Probability(post_odds) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
Comparing distributionsLet's get back to the Kim Rhode problem from Chapter 4:> At the 2016 Summer Olympics in the Women's Skeet event, Kim Rhode faced Wei Meng in the bronze medal match. They each hit 15 of 25 targets, sending the match into sudden death. In the first round, both hit 1 of 2 targets. In the next two rounds, they each hit 2 targets. Finally, in the fourth round, Rhode hit 2 and Wei hit 1, so Rhode won the bronze medal, making her the first Summer Olympian to win an individual medal at six consecutive summer games.>But after all that shooting, what is the probability that Rhode is actually a better shooter than Wei? If the same match were held again, what is the probability that Rhode would win?I'll start with a uniform distribution for `x`, the probability of hitting a target, but we should check whether the results are sensitive to that choice.First I create a Beta distribution for each of the competitors, and update it with the results. | rhode = Beta(1, 1, label='Rhode')
rhode.Update((22, 11))
wei = Beta(1, 1, label='Wei')
wei.Update((21, 12)) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
Based on the data, the distribution for Rhode is slightly farther right than the distribution for Wei, but there is a lot of overlap. | thinkplot.Pdf(rhode.MakePmf())
thinkplot.Pdf(wei.MakePmf())
thinkplot.Config(xlabel='x', ylabel='Probability') | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
To compute the probability that Rhode actually has a higher value of `x`, there are two options:

1. Sampling: we could draw random samples from the posterior distributions and compare them.
2. Enumeration: we could enumerate all possible pairs of values and add up the "probability of superiority".

I'll start with sampling. The Beta object provides a method that draws a random value from a Beta distribution: | iters = 1000
count = 0
for _ in range(iters):
x1 = rhode.Random()
x2 = wei.Random()
if x1 > x2:
count += 1
count / iters | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
`Beta` also provides `Sample`, which returns a NumPy array, so we an perform the comparisons using array operations: | rhode_sample = rhode.Sample(iters)
wei_sample = wei.Sample(iters)
np.mean(rhode_sample > wei_sample) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
The other option is to make `Pmf` objects that approximate the Beta distributions, and enumerate pairs of values: | def ProbGreater(pmf1, pmf2):
total = 0
for x1, prob1 in pmf1.Items():
for x2, prob2 in pmf2.Items():
if x1 > x2:
total += prob1 * prob2
return total
pmf1 = rhode.MakePmf(1001)
pmf2 = wei.MakePmf(1001)
ProbGreater(pmf1, pmf2)
pmf1.ProbGreater(pmf2)
pmf1.ProbLess(pmf2) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
**Exercise:** Run this analysis again with a different prior and see how much effect it has on the results. (One possible sketch appears just after the `flip` helper below.)

SimulationTo make predictions about a rematch, we have two options again:

1. Sampling. For each simulated match, we draw a random value of `x` for each contestant, then simulate 25 shots and count hits.
2. Computing a mixture. If we knew `x` exactly, the distribution of hits, `k`, would be binomial. Since we don't know `x`, the distribution of `k` is a mixture of binomials with different values of `x`.

I'll do it by sampling first. | import random
def flip(p):
return random.random() < p | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
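As one possible take on the prior-sensitivity exercise above (my own sketch, not part of the original notebook), we can swap the uniform prior for a more informative Beta(5, 5) and recompute the probability that Rhode is the better shooter:

```
# rerun the comparison with a more informative prior
rhode2 = Beta(5, 5, label='Rhode')
rhode2.Update((22, 11))
wei2 = Beta(5, 5, label='Wei')
wei2.Update((21, 12))
np.mean(rhode2.Sample(1000) > wei2.Sample(1000))
```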
`flip` returns True with probability `p` and False with probability `1-p`Now we can simulate 1000 rematches and count wins and losses. | iters = 1000
wins = 0
losses = 0
for _ in range(iters):
x1 = rhode.Random()
x2 = wei.Random()
count1 = count2 = 0
for _ in range(25):
if flip(x1):
count1 += 1
if flip(x2):
count2 += 1
if count1 > count2:
wins += 1
if count1 < count2:
losses += 1
wins/iters, losses/iters | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
Or, realizing that the distribution of `k` is binomial, we can simplify the code using NumPy: | rhode_rematch = np.random.binomial(25, rhode_sample)
thinkplot.Hist(Pmf(rhode_rematch))
wei_rematch = np.random.binomial(25, wei_sample)
np.mean(rhode_rematch > wei_rematch)
np.mean(rhode_rematch < wei_rematch) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
Alternatively, we can make a mixture that represents the distribution of `k`, taking into account our uncertainty about `x`: | from thinkbayes2 import MakeBinomialPmf
def MakeBinomialMix(pmf, label=''):
mix = Pmf(label=label)
for x, prob in pmf.Items():
binom = MakeBinomialPmf(n=25, p=x)
for k, p in binom.Items():
mix[k] += prob * p
return mix
rhode_rematch = MakeBinomialMix(rhode.MakePmf(), label='Rhode')
wei_rematch = MakeBinomialMix(wei.MakePmf(), label='Wei')
thinkplot.Pdf(rhode_rematch)
thinkplot.Pdf(wei_rematch)
thinkplot.Config(xlabel='hits')
rhode_rematch.ProbGreater(wei_rematch), rhode_rematch.ProbLess(wei_rematch) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
Alternatively, we could use MakeMixture: | from thinkbayes2 import MakeMixture
def MakeBinomialMix2(pmf):
binomials = Pmf()
for x, prob in pmf.Items():
binom = MakeBinomialPmf(n=25, p=x)
binomials[binom] = prob
return MakeMixture(binomials) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
Here's how we use it. | rhode_rematch = MakeBinomialMix2(rhode.MakePmf())
wei_rematch = MakeBinomialMix2(wei.MakePmf())
rhode_rematch.ProbGreater(wei_rematch), rhode_rematch.ProbLess(wei_rematch) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
**Exercise:** Run this analysis again with a different prior and see how much effect it has on the results. Distributions of sums and differencesSuppose we want to know the total number of targets the two contestants will hit in a rematch. There are two ways we might compute the distribution of this sum:1. Sampling: We can draw samples from the distributions and add them up.2. Enumeration: We can enumerate all possible pairs of values.I'll start with sampling: | iters = 1000
pmf = Pmf()
for _ in range(iters):
k = rhode_rematch.Random() + wei_rematch.Random()
pmf[k] += 1
pmf.Normalize()
thinkplot.Hist(pmf) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
Or we could use `Sample` and NumPy: | ks = rhode_rematch.Sample(iters) + wei_rematch.Sample(iters)
pmf = Pmf(ks)
thinkplot.Hist(pmf) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
Alternatively, we could compute the distribution of the sum by enumeration: | def AddPmfs(pmf1, pmf2):
pmf = Pmf()
for v1, p1 in pmf1.Items():
for v2, p2 in pmf2.Items():
pmf[v1 + v2] += p1 * p2
return pmf | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
Here's how it's used: | pmf = AddPmfs(rhode_rematch, wei_rematch)
thinkplot.Pdf(pmf) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
The `Pmf` class provides a `+` operator that does the same thing. | pmf = rhode_rematch + wei_rematch
thinkplot.Pdf(pmf) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
**Exercise:** The Pmf class also provides the `-` operator, which computes the distribution of the difference in values from two distributions. Use the distributions from the previous section to compute the distribution of the differential between Rhode and Wei in a rematch. On average, how many clays should we expect Rhode to win by? What is the probability that Rhode wins by 10 or more? | # Solution
pmf = rhode_rematch - wei_rematch
thinkplot.Pdf(pmf)
# Solution
# On average, we expect Rhode to win by about 1 clay.
pmf.Mean(), pmf.Median(), pmf.Mode()
# Solution
# But there is, according to this model, a 2% chance that she could win by 10.
sum([p for (x, p) in pmf.Items() if x >= 10]) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
Distribution of maximumSuppose Kim Rhode continues to compete in six more Olympics. What should we expect her best result to be? Once again, there are two ways we can compute the distribution of the maximum:

1. Sampling.
2. Analysis of the CDF.

Here's a simple version by sampling: | iters = 1000
pmf = Pmf()
for _ in range(iters):
ks = rhode_rematch.Sample(6)
pmf[max(ks)] += 1
pmf.Normalize()
thinkplot.Hist(pmf) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
And here's a version using NumPy. I'll generate an array with 6 rows and 10 columns: | iters = 1000
ks = rhode_rematch.Sample((6, iters))
ks | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
Compute the maximum in each column: | maxes = np.max(ks, axis=0)
maxes[:10] | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
And then plot the distribution of maximums: | pmf = Pmf(maxes)
thinkplot.Hist(pmf) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
Or we can figure it out analytically. If the maximum is less-than-or-equal-to some value `k`, all 6 random selections must be less-than-or-equal-to `k`, so: $ CDF_{max}(x) = CDF(x)^6 $`Pmf` provides a method that computes and returns this `Cdf`, so we can compute the distribution of the maximum like this: | pmf = rhode_rematch.Max(6).MakePmf()
thinkplot.Hist(pmf) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
**Exercise:** Here's how Pmf.Max works:

```
    def Max(self, k):
        """Computes the CDF of the maximum of k selections from this dist.

        k: int

        returns: new Cdf
        """
        cdf = self.MakeCdf()
        cdf.ps **= k
        return cdf
```

Write a function that takes a Pmf and an integer `k` and returns a Pmf that represents the distribution of the minimum of `k` values drawn from the given Pmf. Use your function to compute the distribution of the minimum score Kim Rhode would be expected to shoot in six competitions. | def Min(pmf, k):
cdf = pmf.MakeCdf()
cdf.ps = 1 - (1-cdf.ps)**k
return cdf
pmf = Min(rhode_rematch, 6).MakePmf()
thinkplot.Hist(pmf) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
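For reference (my own note, not in the original), the solution above works because the minimum exceeds $x$ only if all $k$ draws exceed $x$: $$P(\mathrm{min} > x) = \left(1 - \mathrm{CDF}(x)\right)^k \quad\Longrightarrow\quad \mathrm{CDF}_{\mathrm{min}}(x) = 1 - \left(1 - \mathrm{CDF}(x)\right)^k,$$ which is exactly the transformation applied to `cdf.ps` in `Min`.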
Exercises **Exercise:** Suppose you are having a dinner party with 10 guests and 4 of them are allergic to cats. Because you have cats, you expect 50% of the allergic guests to sneeze during dinner. At the same time, you expect 10% of the non-allergic guests to sneeze. What is the distribution of the total number of guests who sneeze? | # Solution
n_allergic = 4
n_non = 6
p_allergic = 0.5
p_non = 0.1
pmf = MakeBinomialPmf(n_allergic, p_allergic) + MakeBinomialPmf(n_non, p_non)
thinkplot.Hist(pmf)
# Solution
pmf.Mean() | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
**Exercise** [This study from 2015](http://onlinelibrary.wiley.com/doi/10.1111/apt.13372/full) showed that many subjects diagnosed with non-celiac gluten sensitivity (NCGS) were not able to distinguish gluten flour from non-gluten flour in a blind challenge.Here is a description of the study:>"We studied 35 non-CD subjects (31 females) that were on a gluten-free diet (GFD), in a double-blind challenge study. Participants were randomised to receive either gluten-containing flour or gluten-free flour for 10 days, followed by a 2-week washout period and were then crossed over. The main outcome measure was their ability to identify which flour contained gluten.>"The gluten-containing flour was correctly identified by 12 participants (34%)..."Since 12 out of 35 participants were able to identify the gluten flour, the authors conclude "Double-blind gluten challenge induces symptom recurrence in just one-third of patients fulfilling the clinical diagnostic criteria for non-coeliac gluten sensitivity."This conclusion seems odd to me, because if none of the patients were sensitive to gluten, we would expect some of them to identify the gluten flour by chance. So the results are consistent with the hypothesis that none of the subjects are actually gluten sensitive.We can use a Bayesian approach to interpret the results more precisely. But first we have to make some modeling decisions.1. Of the 35 subjects, 12 identified the gluten flour based on resumption of symptoms while they were eating it. Another 17 subjects wrongly identified the gluten-free flour based on their symptoms, and 6 subjects were unable to distinguish. So each subject gave one of three responses. To keep things simple I follow the authors of the study and lump together the second two groups; that is, I consider two groups: those who identified the gluten flour and those who did not.2. I assume (1) people who are actually gluten sensitive have a 95% chance of correctly identifying gluten flour under the challenge conditions, and (2) subjects who are not gluten sensitive have only a 40% chance of identifying the gluten flour by chance (and a 60% chance of either choosing the other flour or failing to distinguish).Using this model, estimate the number of study participants who are sensitive to gluten. What is the most likely number? What is the 95% credible interval? | # Solution
# Here's a class that models the study
class Gluten(Suite):
def Likelihood(self, data, hypo):
"""Computes the probability of the data under the hypothesis.
data: tuple of (number who identified, number who did not)
hypothesis: number of participants who are gluten sensitive
"""
# compute the number who are gluten sensitive, `gs`, and
# the number who are not, `ngs`
gs = hypo
yes, no = data
n = yes + no
ngs = n - gs
pmf1 = MakeBinomialPmf(gs, 0.95)
pmf2 = MakeBinomialPmf(ngs, 0.4)
pmf = pmf1 + pmf2
return pmf[yes]
# Solution
prior = Gluten(range(0, 35+1))
thinkplot.Pdf(prior)
# Solution
posterior = prior.Copy()
data = 12, 23
posterior.Update(data)
# Solution
thinkplot.Pdf(posterior)
thinkplot.Config(xlabel='# who are gluten sensitive',
ylabel='PMF', legend=False)
# Solution
posterior.CredibleInterval(95) | _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
**Exercise** Coming soon: the space invaders problem. | # Solution
# Solution
# Solution
# Solution
# Solution
# Solution
# Solution
# Solution
| _____no_output_____ | MIT | code/.ipynb_checkpoints/chap05soln-checkpoint.ipynb | proTao/LearningBayes |
'The 80/20 Pandas Tutorial: 5 Key Methods for the Majority of Your Data Transformation Needs'
> An opinionated pandas tutorial on my preferred methods to accomplish the most essential data transformation tasks in a way that will make veteran R and tidyverse users smile.

- toc: false
- badges: true
- comments: true
- categories: [pandas, tidyverse]
- hide: false
- image: images/80_20_pandas.png

Ahh, pandas. In addition to being everyone's favorite vegetarian bear from south central China, it's also _the_ python library for working with tabular data, a.k.a. dataframes. When you dive into pandas, you'll quickly find out that there is a lot going on; indeed there are [hundreds](https://pandas.pydata.org/docs/reference/frame.html) of methods for operating on dataframes. But luckily for us, as with many areas of life, there is a [Pareto Principle](https://en.wikipedia.org/wiki/Pareto_principle), or 80/20 rule, that will help us focus on the small set of methods that collectively solve the majority of our data transformation needs.

If you're like me, then pandas is not your first data-handling tool; maybe you've been using SQL or R with `data.table` or `dplyr`. If so, that's great because you already have a sense for the key operations we need when working with tabular data. In their book, [R for Data Science](https://r4ds.had.co.nz/), Garrett Grolemund and Hadley Wickham describe five essential operations for manipulating dataframes. I've found that these cover the majority of my data transformation tasks to prepare data for analysis, visualization, and modeling.

1. filtering rows based on data values
2. sorting rows based on data values
3. selecting columns by name
4. adding new columns based on the existing columns
5. creating grouped summaries of the dataset

I would add that we also need a way to build up more complex transformations by chaining these fundamental operations together sequentially. Before we dive in, here's the TLDR on the pandas methods that I prefer for accomplishing these tasks, along with their equivalents from SQL and `dplyr` in R.

| description | pandas | SQL | dplyr |
|---|---|---|---|
| filter rows based on data values | `query()` | `WHERE` | `filter()` |
| sort rows based on data values | `sort_values()` | `ORDER BY` | `arrange()` |
| select columns by name | `filter()` | `SELECT` | `select()` |
| add new columns based on the existing columns | `assign()` | `AS` | `mutate()` |
| create grouped summaries of the dataset | `groupby()` `apply()` | `GROUP BY` | `group_by()` `summarise()` |
| chain operations together | `.` | | `%>%` |

Imports and Data | import pandas as pd
import numpy as np | _____no_output_____ | Apache-2.0 | _notebooks/2020-11-25-8020-pandas.ipynb | mcb00/blog_bak |
We'll use the [nycflights13](https://github.com/hadley/nycflights13) dataset which contains data on the 336,776 flights that departed from New York City in 2013. | # pull some data into a pandas dataframe
flights = pd.read_csv('https://www.openintro.org/book/statdata/nycflights.csv')
flights.head() | _____no_output_____ | Apache-2.0 | _notebooks/2020-11-25-8020-pandas.ipynb | mcb00/blog_bak |
Select rows based on their values with `query()` `query()` lets you retain a subset of rows based on the values of the data; it's like `dplyr::filter()` in R or `WHERE` in SQL. Its argument is a string specifying the condition to be met for rows to be included in the result. You specify the condition as an expression involving the column names and comparison operators like `<`, `>`, `<=`, `>=`, `==` (equal), and `!=` (not equal). You can specify compound expressions using `and` and `or`, and you can even check if the column value matches any items in a list. | #hide_output
# compare one column to a value
flights.query('month == 6')
# compare two column values
flights.query('arr_delay > dep_delay')
# using arithmetic
flights.query('arr_delay > 0.5 * air_time')
# using "and"
flights.query('month == 6 and day == 1')
# using "or"
flights.query('origin == "JFK" or dest == "JFK"')
# column value matching any item in a list
flights.query('carrier in ["AA", "UA"]') | _____no_output_____ | Apache-2.0 | _notebooks/2020-11-25-8020-pandas.ipynb | mcb00/blog_bak |
You may have noticed that it seems to be much more popular to filter pandas data frames using boolean indexing.Indeed when I ask my favorite search engine how to filter a pandas dataframe on its values, I find[this tutorial](https://cmdlinetips.com/2018/02/how-to-subset-pandas-dataframe-based-on-values-of-a-column/),[this blog post](https://medium.com/swlh/3-ways-to-filter-pandas-dataframe-by-column-values-dfb6609b31de),[various](https://stackoverflow.com/questions/17071871/how-to-select-rows-from-a-dataframe-based-on-column-values)[questions](https://stackoverflow.com/questions/11869910/pandas-filter-rows-of-dataframe-with-operator-chaining)on Stack Overflow,and even [the pandas documentation](https://pandas.pydata.org/pandas-docs/stable/getting_started/intro_tutorials/03_subset_data.html),all espousing boolean indexing.Here's what it looks like. | #hide_output
# canonical boolean indexing
flights[(flights['carrier'] == "AA") & (flights['origin'] == "JFK")]
# the equivalent use of query()
flights.query('carrier == "AA" and origin == "JFK"') | _____no_output_____ | Apache-2.0 | _notebooks/2020-11-25-8020-pandas.ipynb | mcb00/blog_bak |
There are a few reasons I prefer `query()` over boolean indexing.

1. `query()` does not require me to type the dataframe name again, whereas boolean indexing requires me to type it every time I wish to refer to a column.
1. `query()` makes the code easier to read and understand, especially when expressions get complex.
1. `query()` is [more computationally efficient](https://jakevdp.github.io/PythonDataScienceHandbook/03.12-performance-eval-and-query.html) than boolean indexing.
1. `query()` can safely be used in dot chains, which we'll see very soon.

Select columns by name with `filter()`

`filter()` lets you pick out a specific set of columns by name; it's analogous to `dplyr::select()` in R or `SELECT` in SQL. You can either provide exactly the column names you want, or you can grab all columns whose names contain a given substring or which match a given regular expression. This isn't a big deal when your dataframe has only a few columns, but is particularly useful when you have a dataframe with tens or hundreds of columns. | #hide_output
# select a list of columns
flights.filter(['origin', 'dest'])
# select columns containing a particular substring
flights.filter(like='time')
# select columns matching a regular expression
flights.filter(regex='e$') | _____no_output_____ | Apache-2.0 | _notebooks/2020-11-25-8020-pandas.ipynb | mcb00/blog_bak |
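As one more small illustration of my own (not from the original post), anchoring the regex at the start of the name grabs all of the arrival-related columns:

```
# select columns whose names start with "arr"
flights.filter(regex='^arr')
```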
Sort rows with `sort_values()` `sort_values()` changes the order of the rows based on the data values; it's like`dplyr::arrange()` in R or `ORDER BY` in SQL.You can specify one or more columns on which to sort, where their order denotes the sorting priority. You can also specify whether to sort in ascending or descending order. | #hide_output
# sort by a single column
flights.sort_values('air_time')
# sort by a single column in descending order
flights.sort_values('air_time', ascending=False)
# sort by carrier, then within carrier, sort by descending distance
flights.sort_values(['carrier', 'distance'], ascending=[True, False]) | _____no_output_____ | Apache-2.0 | _notebooks/2020-11-25-8020-pandas.ipynb | mcb00/blog_bak |
Add new columns with `assign()` `assign()` adds new columns which can be functions of the existing columns; it's like `dplyr::mutate()` from R. | #hide_output
# add a new column based on other columns
flights.assign(speed = lambda x: x.distance / x.air_time)
# another new column based on existing columns
flights.assign(gain = lambda x: x.dep_delay - x.arr_delay) | _____no_output_____ | Apache-2.0 | _notebooks/2020-11-25-8020-pandas.ipynb | mcb00/blog_bak |
If you're like me, this way of using `assign()` might seem a little strange at first. Let's break it down. In the call to `assign()` the keyword argument `speed` tells pandas the name of our new column. The business to the right of the `=` is an inline lambda function that takes the dataframe we passed to `assign()` and returns the column we want to add. I like using `x` as the lambda argument because it's easy to type and it evokes tabular data (think [design matrix](https://en.wikipedia.org/wiki/Design_matrix)), which reminds me that it refers to the entire dataframe. We can then access the other columns in our dataframe using the dot like `x.other_column`. It's true that you can skip the whole lambda business and refer to the dataframe to which you are assigning directly inside the assign. That might look like this.

```
flights.assign(speed = flights.distance / flights.air_time)
```

I prefer using a lambda for the following reasons.

1. If you gave your dataframe a good name, using the lambda will save you from typing the name every time you want to refer to a column.
1. The lambda makes your code more portable. Since you refer to the dataframe as a generic `x`, you can reuse this same assignment code on a dataframe with a different name.
1. Most importantly, the lambda will allow you to harness the power of dot chaining.

Chain transformations together with the dot chain

One of the awesome things about pandas is that the `object.method()` paradigm lets us easily build up complex dataframe transformations from a sequence of method calls. In R, this is effectively accomplished by the pipe `%>%` operator. For example, suppose we want to look at high-speed flights from JFK to Honolulu, which would require us to query for JFK to Honolulu flights, assign a speed column, and maybe sort on that new speed column. We can say: | #hide_output
# neatly chain method calls together
(
flights
.query('origin == "JFK"')
.query('dest == "HNL"')
.assign(speed = lambda x: x.distance / x.air_time)
.sort_values(by='speed', ascending=False)
.query('speed > 8.0')
) | _____no_output_____ | Apache-2.0 | _notebooks/2020-11-25-8020-pandas.ipynb | mcb00/blog_bak |
We compose the dot chain by wrapping the entire expression in parentheses and indenting each line within. The first line is the name of the dataframe on which we are operating. Each subsequent line has a single method call. There are a few great things about writing the code this way:
1. Readability - It's easy to scan down the left margin of the code to see what's happening. The first line gives us our noun (the dataframe) and each subsequent line starts with a verb. You could read this as "take `flights`, then query the rows where origin is JFK, then query for rows where destination is HNL, then assign a new column called speed, then sort the dataframe by speed, then query only for the rows where speed is greater than 8.0."
1. Flexibility - It's easy to comment out individual lines and re-run the cell. It's also easy to reorder operations, since only one thing happens on each line.
1. Neatness - We have not polluted our workspace with any intermediate variables, nor have we wasted any mental energy thinking of names for any temporary variables.

By default, dot chains do not modify the original dataframe; they just output a temporary result that we can inspect directly in the output. If you want to store the result, or pass it along to another function (e.g. for plotting), you can simply assign the entire dot chain to a variable. | #hide_output
# store the output of the dot chain in a new dataframe
flights_high_speed = (
flights
.assign(speed = lambda x: x.distance / x.air_time)
.query('speed > 8.0')
) | _____no_output_____ | Apache-2.0 | _notebooks/2020-11-25-8020-pandas.ipynb | mcb00/blog_bak |
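And since the stored result — or the chain itself — is just a dataframe, it can be handed straight to a plotting call. A minimal sketch, assuming matplotlib is available:

```python
import matplotlib.pyplot as plt

# end the chain with a plot call instead of storing the result
(
    flights
    .assign(speed = lambda x: x.distance / x.air_time)
    .query('origin == "JFK" and dest == "HNL"')
    .plot(kind='scatter', x='dep_delay', y='speed')
)
plt.show()
```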
Collapsing rows into grouped summaries with `groupby()`

`groupby()` combined with `apply()` gives us flexibility and control over our grouped summaries; it's like `dplyr::group_by()` and `dplyr::summarise()` in R. This is the primary pattern I use for SQL-style groupby operations in pandas. Specifically, it unlocks the following essential functionality you're used to having in SQL.
1. specify the names of the aggregation columns we create
1. specify which aggregation function to use on which columns
1. compose more complex aggregations, such as the proportion of rows meeting some condition
1. aggregate over arbitrary functions of multiple columns

Let's check out the departure delay stats for each carrier. | # grouped summary with groupby and apply
(
flights
.groupby(['carrier'])
.apply(lambda d: pd.Series({
'n_flights': len(d),
'med_delay': d.dep_delay.median(),
'avg_delay': d.dep_delay.mean(),
}))
.head()
) | _____no_output_____ | Apache-2.0 | _notebooks/2020-11-25-8020-pandas.ipynb | mcb00/blog_bak |
While you might be used to `apply()` acting over the rows or columns of a dataframe, here we're calling apply on a grouped dataframe object, so it's acting over the _groups_. According to the [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html):

> The function passed to apply must take a dataframe as its first argument and return a dataframe, a series or a scalar. apply will then take care of combining the results back together into a single dataframe or series. apply is therefore a highly flexible grouping method.

We need to supply `apply()` with a function that takes each chunk of the grouped dataframe and returns (in our case) a series object with one element for each new aggregation column. Notice that I use a lambda to specify the function we pass to `apply()`, and that I name its argument `d`, which reminds me that it's a dataframe. My lambda returns a pandas series whose index entries specify the new aggregation column names, and whose values constitute the results of the aggregations for each group. Pandas will then stitch everything back together into a lovely dataframe. Notice how nice the code looks when we use this pattern. Each aggregation is specified on its own line, which makes it easy to see which aggregation columns we're creating and allows us to comment, uncomment, and reorder the aggregations without breaking anything.

Here are some more complex aggregations to illustrate some useful patterns. | # more complex grouped summary
(
flights
.groupby(['carrier'])
.apply(lambda d: pd.Series({
'avg_gain': np.mean(d.dep_delay - d.arr_delay),
'pct_delay_gt_30': np.mean(d.dep_delay > 30),
'pct_late_dep_early_arr': np.mean((d.dep_delay > 0) & (d.arr_delay < 0)),
'avg_arr_given_dep_delay_gt_0': d.query('dep_delay > 0').arr_delay.mean(),
'cor_arr_delay_dep_delay': np.corrcoef(d.dep_delay, d.arr_delay)[0,1],
}))
.head()
) | _____no_output_____ | Apache-2.0 | _notebooks/2020-11-25-8020-pandas.ipynb | mcb00/blog_bak |
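Per the documentation quoted above, the applied function may also return a single scalar, giving one value per group; and because the grouped summary is itself a dataframe, it drops into a dot chain like anything else. A minimal sketch (the 10,000-flight cutoff is just an arbitrary illustration):

```python
# returning a scalar from apply() gives a series with one value per group
flights.groupby('carrier').apply(lambda d: d.arr_delay.max())

# the grouped summary is a dataframe, so it chains like any other
(
    flights
    .groupby(['carrier'])
    .apply(lambda d: pd.Series({
        'n_flights': len(d),
        'avg_delay': d.dep_delay.mean(),
    }))
    .query('n_flights > 10000')
    .sort_values('avg_delay', ascending=False)
)
```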
Data Transfer

This notebook has information regarding the data transfer per latitude, in 12-day chunks, run for 60 days. | from lusee.observation import LObservation
from lusee.lunar_satellite import LSatellite, ObservedSatellite
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
from scipy.optimize import curve_fit
import time | _____no_output_____ | MIT | LatitudeTable.ipynb | kssumanth27/notebooks |
The demodulation function below follows the formula from the Excel sheet. Each variable closely matches the variables from the Excel sheet. | def demodulation(dis_range, rate_pw2, extra_ant_gain):
R = np.array([430,1499.99,1500,1999.99,2000,2999.99,3000,4499.99,4500,7499.99,7500,10000])
Pt_error = np.array([11.00,11.00,8.50,8.50,6.00,6.00,4.00,4.00,3.00,3.00,2.50,2.50])
Antenna_gain = np.arange(11)
SANT = np.array([21.8,21.8,21.6,21.2,20.6,19.9,18.9,17.7,16.4,14.6,12.6])
Srange_max = 8887.0 #Slant Range
Srange_min = 2162.0
Srange_mean = 6297.0
Freq_MHz = 2250.0
Asset_EIRP = 13.0 + extra_ant_gain#dBW
Srange = dis_range
free_space_path_loss = -20*np.log10(4*np.pi*Freq_MHz*1000000*Srange*1000/300000000)
R_interp = np.linspace(430,10000,1000)
Pt_error_intp = interp1d(R,Pt_error)
Off_pt_angle = 0
Pt_error_main = Pt_error_intp(Srange)
Antenna_gain_intp = np.linspace(0,10,1000)
SANT_intp = interp1d(Antenna_gain,SANT,fill_value="extrapolate")
#print(Off_pt_angle + Pt_error_main)
SANT_main = SANT_intp(Off_pt_angle+Pt_error_main)
Antenna_return_loss = 15
Mismatch_loss = 10*np.log10(1-(10**(-Antenna_return_loss/20))**2)
SC_noise_temp = 26.8
SCGT = SANT_main + Mismatch_loss - SC_noise_temp
Uplink_CN0 = Asset_EIRP + free_space_path_loss + SCGT - 10*np.log10(1.38e-23)
Mod_loss = 0.0 #says calculated but given
Implementation_loss = -1.0 #assumed
Pll_bw_Hz = 700 #assumed
Pll_bw_dB = 10*np.log10(Pll_bw_Hz)
SN_loop = 17.9470058901322
Carrier_margin = Uplink_CN0 + Implementation_loss - Pll_bw_dB - SN_loop
Coded_symb_rt_input = rate_pw2
Coded_symb_rt = 2**Coded_symb_rt_input
Code_rate = 0.662430862918876 #theory
Data_rate = Coded_symb_rt * Code_rate
EbN0 = Uplink_CN0 + Implementation_loss - 10*np.log10(Data_rate*1000)
Threshold_EbN0 = 2.1
Data_demod_margin = EbN0 - Threshold_EbN0
return Data_demod_margin | _____no_output_____ | MIT | LatitudeTable.ipynb | kssumanth27/notebooks |
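As a quick sanity check of the link budget — a minimal sketch, assuming the cell above has been run — the margin can be evaluated at the mean slant range hard-coded in the function (6297 km) with no extra antenna gain:

```python
# data demodulation margin (dB) at the mean slant range, rate exponent 12, no extra gain
print(demodulation(6297.0, 12, 0.0))
```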
The function below and the curve_fit call are used to calculate the antenna gain that is added to the EIRP in the function above. | def ext_gain(x,a,b,c):
return a*x**2 + b*x + c
gain_data = [6.5,4.5,0]
ang_gain = [90,60,30]
popt,pcov = curve_fit(ext_gain,ang_gain,gain_data)
popt | _____no_output_____ | MIT | LatitudeTable.ipynb | kssumanth27/notebooks |
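The fitted coefficients feed straight back into the link budget: evaluate the extra gain at a pass altitude and step the rate exponent down until the margin clears 3 dB, as the main loop further down does. A minimal sketch with an assumed sample altitude of 60 degrees and an assumed distance of 4000 km:

```python
# extra antenna gain from the fitted curve at a 60 degree altitude
gain_60 = ext_gain(60, *popt)

# lower the rate exponent until the demodulation margin exceeds 3 dB
pw2 = 12
while demodulation(4000.0, pw2, gain_60) <= 3.0:
    pw2 -= 1
print(pw2, 2**pw2)  # usable rate exponent and corresponding rate at 4000 km
```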
The cell below plots 2D histograms of altitude (deg) versus distance (km) for 13 different latitudes, from 30 to -90 degrees. | maxi = np.zeros(shape = 13)
mini = np.zeros(shape = 13)
avg = np.zeros(shape = 13)
for i in range(13):
num = 30+i*(-10)
obs = LObservation(lunar_day = "FY2024", lun_lat_deg = num, deltaT_sec=10*60)
S = LSatellite()
obsat = ObservedSatellite(obs,S)
transits = obsat.get_transit_indices()
trans_time = np.array([])
dist_lun = np.array([])
alt_lun = np.array([])
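# collect the transit durations, distances, and altitudes over all visible passes for this latitude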
for j in range(len(transits)):
k,l = transits[j]
trans_time = np.append(trans_time,l-k)
dist_lun = np.append(dist_lun,obsat.dist_km()[k:l])
alt_lun = np.append(alt_lun,obsat.alt_rad()[k:l]/np.pi*180)
maxi[i] = np.max(trans_time)*10/(60)
mini[i] = np.min(trans_time)*10/(60)
avg[i] = np.average(trans_time)*10/(60)
fig, axs = plt.subplots(figsize =(10, 7))
type(dist_lun)
# was not sure on how wide the bins should be
plt.hist2d(dist_lun, alt_lun,bins = [20,20])
plt.title("Time availability (10mins) -- Lat = %i deg" %num)
axs.set_xlabel('distance (km)')
axs.set_ylabel('altitude (deg)')
cbar = plt.colorbar()
cbar.set_label('Transit 10Minutes') | _____no_output_____ | MIT | LatitudeTable.ipynb | kssumanth27/notebooks |
The cell below calculates the data transferred in kb. The variables that are commented out will be removed in the next revision of this file. Disclaimer: this cell takes around 90 minutes to run, most of which is spent calculating the variables from lusee.lunar_satellite, followed by the repetitive use of the demodulation function. I'll try to create a numpy array that saves the calculations from the LObservation function, so that the repeated, time-consuming process can be avoided. I didn't color-code the speeds yet; I don't know how to do that right away and might need some time. I'm not comfortable with arrays yet, so I used lists to save the data transfers, which I'll optimize in future versions. | t0 = time.time()
#main_max = np.zeros(shape = 13)
#main_min = np.zeros(shape = 13)
#main_mean = np.zeros(shape = 13)
#counti_list = []
#counti_list_max = []
#counti_list_min = []
#counti_list_mean = []
datai_list = []
datai_list_max = []
datai_list_min = []
datai_list_mean = []
for i in range(13): # This loop iterates every calculation for 13 latitudes
num = 30+i*(-10)
obs = LObservation(lunar_day = "2025-02-01 13:00:00 to 2025-04-01 16:00:00",lun_lat_deg = num, deltaT_sec=60)
S = LSatellite()
obsat = ObservedSatellite(obs,S)
transits = obsat.get_transit_indices()
# counti = np.zeros(shape = 49)
# counti_max = np.zeros(shape = 49)
# counti_min = np.zeros(shape = 49)
#counti_mean = np.zeros(shape = 49)
## The datai_max, min, mean are not written right now.
datai = np.zeros(shape = 49)
datai_max = np.zeros(shape = 49)
datai_min = np.zeros(shape = 49)
datai_mean = np.zeros(shape = 49)
print("loop number",i)
for c in range(49): #This loop iterates for 12 day chunks
#maxcount = np.array([])
#mincount = np.array([])
#countcount = np.array([])
#count_decoy = 0
#count_decoy = np.zeros((len(transits)))
for t in range(len(transits)): # This loop iterates for each visible transit range
#count_decoy = 0
ti,tf = transits[t]
if ti>24*60*(c) and ti<24*60*(c+12):
for talt in obsat.alt_rad()[ti:tf]:
if talt > 0.3: # This if loop checks for the altitude > 0.3 rad; and calculates the data
# transferred for wrt the distance by caluculating the demodulation (> 3.0)
dis_r = obsat.dist_km()[ti:tf][np.where(obsat.alt_rad()[ti:tf] == talt)]
#print(dis_r)
pw2 = 12
extra_gain = ext_gain(talt*180/(np.pi),*popt)
demod = demodulation(dis_r,pw2,extra_gain)
while demod <= 3.0:
#r
pw2 = pw2 - 1
demod= demodulation(dis_r,pw2,extra_gain)
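# accumulate 60 seconds of transfer at the highest coded symbol rate (2**pw2) whose demod margin clears 3 dB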
datai[c] = datai[c] + 60*2**pw2
#counti[c] = counti[c] + 1
#count_decoy = count_decoy + 1
#count_decoy = counti[c]-count_decoy
#print(count_decoy)
#countcount = np.append(countcount,count_decoy)
#counti_max[c] = np.max(countcount)
#counti_min[c] = np.min(countcount)
#counti_mean[c] = np.average(countcount)
#print(countcount)
#counti_list_max.append(counti_max.tolist())
#counti_list.append(counti.tolist())
#counti_list_min.append(counti_min.tolist())
#counti_list_mean.append(counti_mean.tolist())
datai_list.append(datai.tolist())
#main_max[i] = np.max(counti_max)
#main_min[i] = np.min(counti_min)
#main_mean[i] = np.mean(counti_mean)
print(main_max)
print(main_min)
print(main_mean)
t1 = time.time()
print("time elapsed: {}s".format(t1-t0)) | loop number 0
loop number 1
loop number 2
loop number 3
loop number 4
loop number 5
loop number 6
loop number 7
loop number 8
loop number 9
loop number 10
loop number 11
loop number 12
[ 82. 124. 194. 317. 394. 427. 443. 452. 455. 454. 448. 437. 416.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 290. 377. 415.]
[ 39.24483393 48.92472818 82.43428713 149.45121242 189.05772082
219.24983257 243.30419658 256.93997946 261.02817024 303.46666957
379.54920344 406.90021949 415.97606692]
time elapsed: 5399.114231586456s
| MIT | LatitudeTable.ipynb | kssumanth27/notebooks |
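Picking up the note above about saving the LObservation calculations, one possible approach — a sketch only; the file names are hypothetical, and it assumes the imports from the top of the notebook and that `alt_rad()`, `dist_km()`, and `get_transit_indices()` return array-like values as used above — is to compute the geometry once per latitude, store it with numpy, and reload it on later runs:

```python
import numpy as np

def cache_transits(prefix="transit_cache"):
    # compute the expensive LObservation / ObservedSatellite quantities once per latitude
    for i in range(13):
        num = 30 + i * (-10)
        obs = LObservation(lunar_day="2025-02-01 13:00:00 to 2025-04-01 16:00:00",
                           lun_lat_deg=num, deltaT_sec=60)
        obsat = ObservedSatellite(obs, LSatellite())
        np.savez(f"{prefix}_lat{num}.npz",
                 alt_rad=obsat.alt_rad(),
                 dist_km=obsat.dist_km(),
                 transits=np.array(obsat.get_transit_indices()))

def load_transits(num, prefix="transit_cache"):
    # reload the cached geometry for a given latitude
    d = np.load(f"{prefix}_lat{num}.npz")
    return d["alt_rad"], d["dist_km"], d["transits"]
```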