Finally, a little bit of glue to tie it all together.
def animate_param(data, arg_name='n_neighbors', arg_list=[]):
    frame_data = generate_frame_data(data, arg_name, arg_list)
    return create_animation(frame_data, arg_name, arg_list)
notebooks/AnimatingUMAP.ipynb
lmcinnes/umap
bsd-3-clause
Now we can create an animation. It will be embedded as an HTML5 video into this notebook.
animate_param(data, 'n_neighbors', [3, 4, 5, 7, 10, 15, 25, 50, 100, 200])
animate_param(data, 'min_dist', [0.0, 0.01, 0.1, 0.2, 0.4, 0.6, 0.9])
animate_param(data, 'local_connectivity', [0.1, 0.2, 0.5, 1, 2, 5, 10])
animate_param(data, 'set_op_mix_ratio', np.linspace(0.0, 1.0, 10))
notebooks/AnimatingUMAP.ipynb
lmcinnes/umap
bsd-3-clause
To start, we follow exactly the steps we did "by hand". Let us summarize the "algorithm": move all terms to the left-hand side, multiply the equation by $x$, factor the left-hand side, and read the solutions off the factors.
enacba = sym.Eq(x + 2/x, 3)
enacba
01a_enacbe.ipynb
mrcinv/matpy
gpl-2.0
Let us turn on the prettier formula output that SymPy provides.
sym.init_printing()  # prettier formula output
enacba
# move all terms to the left-hand side and multiply by x
leva = (enacba.lhs - enacba.rhs)*x
leva
# expand the left-hand side
leva = sym.expand(leva)
leva
# factor the left-hand side
leva = sym.factor(leva)
leva
01a_enacbe.ipynb
mrcinv/matpy
gpl-2.0
From here on it becomes rather complicated to extract the solutions programmatically from the last result. If we are only interested in the solutions, we can skip the procedure above and simply use the solve function.
# the solutions of the equation are easiest to obtain with the solve function
resitve = sym.solve(enacba)
resitve
01a_enacbe.ipynb
mrcinv/matpy
gpl-2.0
Graphical solution. We can visualize the solutions of the equation graphically. We are looking for the values of $x$ at which the left-hand side equals the right-hand side. If we draw the graphs of the left and right sides in the same figure, the solutions of the equation are exactly the x-coordinates of the intersections of the two graphs. For plotting we use the matplotlib library. We draw the graph of a function by tabulating it at many points. To make computing with these tables easier, we also use the numpy library, which is designed for working with vectors and matrices.
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
t = np.arange(-1, 3, 0.01)  # sequence of x values at which we tabulate the function
leva_f = sym.lambdify(x, enacba.lhs)   # lambdify turns the left-hand side of the equation into a Python function that we can apply to t
desna_f = sym.lambdify(x, enacba.rhs)  # similarly for the right-hand side (rhs - right hand side, lhs - left hand side)
plt.plot(t, leva_f(t))                 # the function leva_f acts componentwise on the array t
plt.plot(t, [desna_f(ti) for ti in t]) # the function desna_f is a constant (the number 3) and therefore does not return an array of the same length as t
plt.ylim(0, 5)
plt.plot(resitve, [leva_f(r) for r in resitve], 'or')
plt.show()
01a_enacbe.ipynb
mrcinv/matpy
gpl-2.0
Exercise: Find all solutions of the equation $$x^2-2=1/x.$$ Use sympy.solve and present the solutions graphically; a sketch of one possible approach follows. next: inequalities >>
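The following is not part of the original notebook, but a minimal sketch of one way the exercise could be approached, assuming the symbol x and the imports from the cells above (sympy as sym, numpy as np, matplotlib.pyplot as plt); the variable names are illustrative only.

naloga = sym.Eq(x**2 - 2, 1/x)           # the equation x^2 - 2 = 1/x
resitve_naloga = sym.solve(naloga, x)    # multiplying by x gives x^3 - 2x - 1 = 0, which has three real roots
print(resitve_naloga)

# graphical check: plot both sides on each side of x = 0 and mark the solutions
for t in (np.arange(-3, -0.05, 0.01), np.arange(0.05, 3, 0.01)):
    plt.plot(t, t**2 - 2, 'b')
    plt.plot(t, 1/t, 'g')
plt.plot([float(r) for r in resitve_naloga], [float(r)**2 - 2 for r in resitve_naloga], 'or')
plt.ylim(-5, 7)
plt.show()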
import disqus
%reload_ext disqus
%disqus matpy
01a_enacbe.ipynb
mrcinv/matpy
gpl-2.0
Input
bloboftext = """
This little piggy went to market,
This little piggy stayed home,
This little piggy had roast beef,
This little piggy had none,
And this little piggy went wee wee wee all the way home.
"""
text/2015-07-23_nltk-and-POS.ipynb
csiu/datasci
mit
Workflow
- Tokenization to break text into units, e.g. words, phrases, or symbols
- Stop word removal to get rid of common words, e.g. this, a, is
## Tokenization
bagofwords = nltk.word_tokenize(bloboftext.lower())
print len(bagofwords)

## Stop word removal
stop = stopwords.words('english')
bagofwords = [i for i in bagofwords if i not in stop]
print len(bagofwords)
text/2015-07-23_nltk-and-POS.ipynb
csiu/datasci
mit
About stemmers and lemmatisation
- Stemming to reduce a word to its roots, e.g. having => hav
- Lemmatisation to determine a word's lemma/canonical form, e.g. having => have

English Stemmers and Lemmatizers
For stemming English words with NLTK, you can choose between the PorterStemmer or the LancasterStemmer. The Porter Stemming Algorithm is the oldest stemming algorithm supported in NLTK, originally published in 1979. The Lancaster Stemming Algorithm is much newer, published in 1990, and can be more aggressive than the Porter stemming algorithm. The WordNet Lemmatizer uses the WordNet Database to look up lemmas. Lemmas differ from stems in that a lemma is a canonical form of the word, while a stem may not be a real word.

Resources:
- PorterStemmer or the SnowballStemmer (Snowball == Porter2)
- Stemming and Lemmatization
- What are the major differences and benefits of Porter and Lancaster Stemming algorithms?
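As a quick, hedged illustration of the difference (not part of the original notebook), the snippet below runs a few sample words through NLTK's Porter and Snowball stemmers and the WordNet lemmatizer; it assumes NLTK is installed and the wordnet corpus has been downloaded.

from nltk.stem import PorterStemmer, SnowballStemmer, WordNetLemmatizer

words = ['having', 'piggy', 'went', 'roast']   # illustrative words only
porter = PorterStemmer()
snowball = SnowballStemmer('english')
lemmatizer = WordNetLemmatizer()               # requires the wordnet corpus

for w in words:
    # compare the stemmed forms with the lemma (treating each word as a verb)
    print(w, porter.stem(w), snowball.stem(w), lemmatizer.lemmatize(w, pos='v'))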
snowball_stemmer = SnowballStemmer("english")

## Which words were stemmed?
_original = set(bagofwords)
_stemmed = set([snowball_stemmer.stem(i) for i in bagofwords])
print 'BEFORE:\t%s' % ', '.join(map(lambda x: '"%s"' % x, _original - _stemmed))
print ' AFTER:\t%s' % ', '.join(map(lambda x: '"%s"' % x, _stemmed - _original))
del _original, _stemmed

## Proceed with stemming
bagofwords = [snowball_stemmer.stem(i) for i in bagofwords]
text/2015-07-23_nltk-and-POS.ipynb
csiu/datasci
mit
Count & POS tag of each stemmed/non-stop word
Meaning of POS tags: Penn Part of Speech Tags
- NN Noun, singular or mass
- VBD Verb, past tense
for token, count in Counter(bagofwords).most_common():
    print '%d\t%s\t%s' % (count, nltk.pos_tag([token])[0][1], token)
text/2015-07-23_nltk-and-POS.ipynb
csiu/datasci
mit
Proportion of POS tags
record = {}
for token, count in Counter(bagofwords).most_common():
    postag = nltk.pos_tag([token])[0][1]
    if record.has_key(postag):
        record[postag] += count
    else:
        record[postag] = count

recordpd = pd.DataFrame.from_dict([record]).T
recordpd.columns = ['count']

N = sum(recordpd['count'])
recordpd['percent'] = recordpd['count']/N*100
recordpd
text/2015-07-23_nltk-and-POS.ipynb
csiu/datasci
mit
The first step is to create an object of the SampleSize class with the parameter of interest, the sample size calculation method, and the stratification status. In this example, we want to calculate the sample size for proportions, using the Wald method for a stratified design. This is achieved with the following snippet of code.

python SampleSize(parameter="proportion", method="wald", stratification=True)

Because we are using a stratified sample design, it is best to specify the expected coverage levels by stratum. If that information is not available, aggregated values can be used across the strata. The 2017 Senegal DHS published the coverage rates by region, hence we have the information available by stratum. To provide the information to Samplics, we use a Python dictionary as follows

python expected_coverage = { "Dakar": 0.849, "Ziguinchor": 0.809, "Diourbel": 0.682, "Saint-Louis": 0.806, "Tambacounda": 0.470, "Kaolack": 0.797, "Thies": 0.834, "Louga": 0.678, "Fatick": 0.766, "Kolda": 0.637, "Matam": 0.687, "Kaffrine": 0.766, "Kedougou": 0.336, "Sedhiou": 0.742, }

Now, we want to calculate the sample size with a desired precision of 0.07, which means that we want the expected vaccination coverage rates to have 7% half confidence intervals, e.g. an expected rate of 90% will have a confidence interval of [83%, 97%]. Note that the desired precision can be specified by stratum in the same way as the target coverage, using a Python dictionary. Given that information, we can calculate the sample size using the SampleSize class as follows.
# target coverage rates
expected_coverage = {
    "Dakar": 0.849,
    "Ziguinchor": 0.809,
    "Diourbel": 0.682,
    "Saint-Louis": 0.806,
    "Tambacounda": 0.470,
    "Kaolack": 0.797,
    "Thies": 0.834,
    "Louga": 0.678,
    "Fatick": 0.766,
    "Kolda": 0.637,
    "Matam": 0.687,
    "Kaffrine": 0.766,
    "Kedougou": 0.336,
    "Sedhiou": 0.742,
}

# Declare the sample size calculation parameters
sen_vaccine_wald = SampleSize(
    parameter="proportion", method="wald", stratification=True
)

# calculate the sample size
sen_vaccine_wald.calculate(target=expected_coverage, half_ci=0.07)

# show the calculated sample size
print("\nCalculated sample sizes by stratum:")
sen_vaccine_wald.samp_size
docs/source/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
The sample size calculation above assumes that the design effect (DEFF) is equal to 1. A design effect of 1 corresponds to a sampling design with a variance equivalent to a simple random selection of the same sample size. In the context of complex sampling designs, DEFF is often different from 1. Stage sampling and unequal weights usually increase the design effect above 1. The 2017 Senegal DHS indicated a design effect equal to 1.963 (1.401^2) for basic vaccination. Hence, to calculate the sample size, we will use the design effect provided by DHS.
sen_vaccine_wald.calculate(target=expected_coverage, half_ci=0.07, deff=1.401 ** 2)
sen_vaccine_wald.to_dataframe()
docs/source/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
Since the sample design is stratified, the sample size calculation will be more precise if DEFF is specified at the stratum level, which is available from the 2017 Senegal DHS report. Some regions have a design effect below 1. To be conservative with our sample size calculation, we will use 1.21 as the minimum design effect in the sample size calculation.
# Target design effects by stratum
expected_deff = {
    "Dakar": 1.100 ** 2,
    "Ziguinchor": 1.100 ** 2,
    "Diourbel": 1.346 ** 2,
    "Saint-Louis": 1.484 ** 2,
    "Tambacounda": 1.366 ** 2,
    "Kaolack": 1.360 ** 2,
    "Thies": 1.109 ** 2,
    "Louga": 1.902 ** 2,
    "Fatick": 1.100 ** 2,
    "Kolda": 1.217 ** 2,
    "Matam": 1.403 ** 2,
    "Kaffrine": 1.256 ** 2,
    "Kedougou": 2.280 ** 2,
    "Sedhiou": 1.335 ** 2,
}

# Calculate sample sizes using deff at the stratum level
sen_vaccine_wald.calculate(target=expected_coverage, half_ci=0.07, deff=expected_deff)

# Convert sample sizes to a dataframe
sen_vaccine_wald.to_dataframe()
docs/source/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
The sample size calculation above does not account for attrition of the sample due to non-response. In the 2017 Senegal DHS, the overall household and women response rate was about 94.2%.
# Calculate sample sizes with a resp_rate of 94.2%
sen_vaccine_wald.calculate(
    target=expected_coverage, half_ci=0.07, deff=expected_deff, resp_rate=0.942
)

# Convert sample sizes to a dataframe
sen_vaccine_wald.to_dataframe(
    col_names=["region", "vaccine_coverage", "precision", "number_12_23_months"]
)
docs/source/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
Fleiss method
The World Health Organization (WHO) recommends using the Fleiss method for calculating sample sizes for vaccination coverage surveys (see https://www.who.int/immunization/documents/who_ivb_18.09/en/). To use the Fleiss method, the examples shown above stay the same, with method="fleiss".
sen_vaccine_fleiss = SampleSize(
    parameter="proportion", method="fleiss", stratification=True
)

sen_vaccine_fleiss.calculate(
    target=expected_coverage, half_ci=0.07, deff=expected_deff, resp_rate=0.942
)

sen_vaccine_sample = sen_vaccine_fleiss.to_dataframe(
    col_names=["region", "vaccine_coverage", "precision", "number_12_23_months"]
)
sen_vaccine_sample
docs/source/tutorial/sample_size_calculation.ipynb
survey-methods/samplics
mit
2. Explore the data and process the missing values
train.info()
test.info()

# Define the function to fill the missing values
def replace_nan(data):
    # in the 'START_PACK' and 'OFFER_GROUP' columns, replace NaN with 'Unknown'
    data['START_PACK'] = data['START_PACK'].fillna('Unknown')
    data['OFFER_GROUP'] = data['OFFER_GROUP'].fillna('Unknown')
    # convert the date columns to datetime format
    data['ACT_DATE'] = pd.to_datetime(data['ACT_DATE'], format='%Y-%m-%d', errors='ignore')
    data['BIRTHDAY'] = pd.to_datetime(data['BIRTHDAY'], format='%Y-%m-%d', errors='ignore')
    # in the GENDER column, replace NaN with M, since 16034 of the 28600 records have the value M
    data['GENDER'] = data['GENDER'].fillna('M')
    # by the problem statement, NaN in 'MLLS_STATE' means the subscriber is not a member of the loyalty program
    data['MLLS_STATE'] = data['MLLS_STATE'].fillna('No')
    # by the problem statement, NaN in 'OBLIG_NUM' means the subscriber has not used installment plans
    data['OBLIG_NUM'] = data['OBLIG_NUM'].fillna(0.0)
    # NaN in 'ASSET_TYPE_LAST' probably means the subscriber has not bought equipment from the company
    data['ASSET_TYPE_LAST'] = data['ASSET_TYPE_LAST'].fillna('Not buying')
    # in the 'USAGE_AREA' column, replace NaN with 'Undefined'
    data['USAGE_AREA'] = data['USAGE_AREA'].fillna('Undefined')
    # in the remaining columns, replace NaN with 0.0, assuming missing data means no activity
    data['REFILL_OCT_16'] = data['REFILL_OCT_16'].fillna(0.0)
    data['REFILL_NOV_16'] = data['REFILL_NOV_16'].fillna(0.0)
    data['OUTGOING_OCT_16'] = data['OUTGOING_OCT_16'].fillna(0.0)
    data['OUTGOING_NOV_16'] = data['OUTGOING_NOV_16'].fillna(0.0)
    data['GPRS_OCT_16'] = data['GPRS_OCT_16'].fillna(0.0)
    data['GPRS_NOV_16'] = data['GPRS_NOV_16'].fillna(0.0)
    data['REVENUE_OCT_16'] = data['REVENUE_OCT_16'].fillna(0.0)
    data['REVENUE_NOV_16'] = data['REVENUE_NOV_16'].fillna(0.0)

# convert BYR to BYN
def byr_to_byn(data):
    data['REFILL_OCT_16'] = data['REFILL_OCT_16']/10000.0
    data['REFILL_NOV_16'] = data['REFILL_NOV_16']/10000.0

# Create several new features
def new_features(data):
    # days from the activation date until 1 December 2016
    data['AGE_ACT'] = [int(i.days) for i in (pd.datetime(2016, 12, 1) - data['ACT_DATE'])]
    # day of the week on which the activation took place
    data['WEEKDAY'] = data['ACT_DATE'].dt.dayofweek
    # add the subscriber's birth year and replace missing values with the mean
    data['BIRTH_YEAR'] = pd.DatetimeIndex(data['BIRTHDAY']).year
    data['BIRTH_YEAR'] = data['BIRTH_YEAR'].fillna(data['BIRTH_YEAR'].mean())
    # add a column with the subscriber's age at activation
    data['AGE_AB'] = pd.DatetimeIndex(data['ACT_DATE']).year - data['BIRTH_YEAR']
    # add columns with the differences between the November and October figures
    data['REFIL_DELTA'] = data['REFILL_NOV_16'] - data['REFILL_OCT_16']
    data['OUTGOING_DELTA'] = data['OUTGOING_NOV_16'] - data['OUTGOING_OCT_16']
    data['GPRS_DELTA'] = data['GPRS_NOV_16'] - data['GPRS_OCT_16']
    data['REVENUE_DELTA'] = data['REVENUE_NOV_16'] - data['REVENUE_OCT_16']
    # drop the 'BIRTHDAY' and 'ACT_DATE' columns
    del data['BIRTHDAY']
    del data['ACT_DATE']

# convert BYR to BYN
byr_to_byn(train)
byr_to_byn(test)

# Process the training data
replace_nan(train)
new_features(train)

# Process the test data
replace_nan(test)
new_features(test)

train.info()
Vasilev_Sergey_eng.ipynb
Vasilyeu/mobile_customer
mit
Now we have test and train data sets without missing data and with a few new features.

3. Preparing data for machine learning
# Conversion of categorical data
le = LabelEncoder()
for n in ['STATUS', 'TP_CURRENT', 'START_PACK', 'OFFER_GROUP', 'GENDER', 'MLLS_STATE',
          'PORTED_IN', 'PORTED_OUT', 'OBLIG_ON_START', 'ASSET_TYPE_LAST',
          'DEVICE_TYPE_BUS', 'USAGE_AREA']:
    le.fit(train[n])
    train[n] = le.transform(train[n])
    test[n] = le.transform(test[n])

# Standardization of data
features = list(train.columns)
del features[0]
del features[22]
scaler = StandardScaler()
for n in features:
    scaler.fit(train[n])
    train[n] = scaler.transform(train[n])
    test[n] = scaler.transform(test[n])

# Break train into training and test set
X_train, X_test, y_train, y_test = train_test_split(train[features], train.ACTIVITY_DEC_16,
                                                    test_size=0.20, random_state=123)
Vasilev_Sergey_eng.ipynb
Vasilyeu/mobile_customer
mit
4. Build the first model using all features
# Ensemble of classifiers by Weighted Average Probabilities
clf1 = LogisticRegression(random_state=42)
clf2 = RandomForestClassifier(random_state=42)
clf3 = SGDClassifier(loss='log', random_state=42)
eclf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('sgd', clf3)],
                        voting='soft', weights=[1, 1, 1])

# Quality control of the model by cross-validation with calculation of ROC AUC
for clf, label in zip([clf1, clf2, clf3, eclf],
                      ['Logistic Regression', 'Random Forest', 'SGD', 'Ensemble']):
    scores2 = cross_val_score(estimator=clf, X=X_train, y=y_train, cv=10, scoring='roc_auc')
    print("ROC AUC: %0.6f (+/- %0.6f) [%s]" % (scores2.mean(), scores2.std(), label))
Vasilev_Sergey_eng.ipynb
Vasilyeu/mobile_customer
mit
On the training data, the best result is provided by an ensemble of three algorithms.

5. Determine the importance of attributes using the Random Forest
# Build a forest and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250, random_state=0)
forest.fit(X_train, y_train)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_], axis=0)
indices = np.argsort(importances)[::-1]

# Print the feature ranking
print("Feature ranking:")
for f in range(X_train.shape[1]):
    print("%d. %s (%f)" % (f + 1, list(X_train.columns)[indices[f]], importances[indices[f]]))

# Plot the feature importances
plt.figure()
plt.title("Feature importances")
plt.bar(range(X_train.shape[1]), importances[indices], color="r", yerr=std[indices], align="center")
plt.xticks(range(X_train.shape[1]), indices)
plt.xlim([-1, X_train.shape[1]])
plt.show()
Vasilev_Sergey_eng.ipynb
Vasilyeu/mobile_customer
mit
As we can see, the most important features are STATUS, USAGE_AREA, DEVICE_TYPE_BUS and REVENUE_NOV_16.

6. Select the features for classification
# Create a list of features sorted by importance
imp_features = []
for i in indices:
    imp_features.append(features[i])

# the best accuracy is obtained by using the 17 most important features
best_features = imp_features[:17]
X_train2 = X_train[best_features]

# Quality control of the model by cross-validation with calculation of ROC AUC
for clf, label in zip([clf1, clf2, clf3, eclf],
                      ['Logistic Regression', 'Random Forest', 'SGD', 'Ensemble']):
    scores2 = cross_val_score(estimator=clf, X=X_train2, y=y_train, cv=10, scoring='roc_auc')
    print("ROC AUC: %0.6f (+/- %0.6f) [%s]" % (scores2.mean(), scores2.std(), label))
Vasilev_Sergey_eng.ipynb
Vasilyeu/mobile_customer
mit
7. Building a classifier based on test data
# ROC curves on test data
colors = ['black', 'orange', 'blue', 'green']
linestyles = [':', '--', '-.', '-']
for clf, label, clr, ls in zip([clf1, clf2, clf3, eclf],
                               ['Logistic Regression', 'Random Forest', 'SGD', 'Ensemble'],
                               colors, linestyles):
    y_pred = clf.fit(X_train[best_features], y_train).predict_proba(X_test[best_features])[:, 1]
    fpr, tpr, thresholds = roc_curve(y_true=y_test, y_score=y_pred)
    roc_auc = auc(x=fpr, y=tpr)
    plt.plot(fpr, tpr, color=clr, linestyle=ls, label='%s (auc = %0.2f)' % (label, roc_auc))

plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1], linestyle='--', color='gray', linewidth=2)
plt.xlim([-0.1, 1.1])
plt.ylim([-0.1, 1.1])
plt.grid()
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.show()
Vasilev_Sergey_eng.ipynb
Vasilyeu/mobile_customer
mit
The ROC AUC values obtained for cross-validation and for the test sample are the same, which indicates that the model is neither overfitted nor underfitted.

8. Getting the final result
result_pred = eclf.fit(X_train[best_features], y_train).predict_proba(test[best_features])
result = pd.DataFrame(test['USER_ID'])
result['ACTIVITY_DEC_16_PROB'] = list(result_pred[:, 1])
result.to_csv('result.csv', encoding='utf8', index=None)
Vasilev_Sergey_eng.ipynb
Vasilyeu/mobile_customer
mit
We also prepared a simple tabulated file with a description of each GSM. It will be useful for calculating the LFC.
experiments = pd.read_table("GSE6207_experiments.tab")
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
We can look into this file:
experiments
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
Now we select the GSMs that are controls.
controls = experiments[experiments.Type == 'control'].Experiment.tolist()
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
Using GEOparse we can download experiments and look into the data:
gse = GEOparse.get_GEO("GSE6207")
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
The GPL we are interested in:
gse.gpls['GPL570'].columns
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
And the columns that are available for an exemplary GSM:
gse.gsms["GSM143385"].columns
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
We take the opportunity to check if everything is OK with the control samples. For this we just use a simple histogram. To obtain a table with each GSM as a column, ID_REF as the index and VALUE in each cell, we use the pivot_samples method of the GSE object (we restrict the columns to the controls):
pivoted_control_samples = gse.pivot_samples('VALUE')[controls]
pivoted_control_samples.head()
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
And we plot:
pivoted_control_samples.hist()
sns.despine(offset=10, trim=True)
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
Next we would like to filter out probes that are not expressed. A gene is considered expressed (by the definition used here) when its average log2 intensity in the control samples is above the 0.25 quantile, i.e. we filter out the worst 25% of genes.
pivoted_control_samples_average = pivoted_control_samples.median(axis=1)
print "Number of probes before filtering: ", len(pivoted_control_samples_average)

expression_threshold = pivoted_control_samples_average.quantile(0.25)
expressed_probes = pivoted_control_samples_average[pivoted_control_samples_average >= expression_threshold].index.tolist()
print "Number of probes above threshold: ", len(expressed_probes)
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
We can see that the filtering succeeded. Now we can pivot all the samples and filter out probes that are not expressed:
samples = gse.pivot_samples("VALUE").ix[expressed_probes]
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
The most important thing is to calculate the log fold changes. For each time point we have to identify the control and transfected samples and subtract the VALUEs (they are already provided log2-transformed, so we subtract the control from the transfection). In the end we create a new DataFrame with the LFCs:
lfc_results = {}
sequence = ['4 hours', '8 hours', '16 hours', '24 hours', '32 hours', '72 hours', '120 hours']
for time, group in experiments.groupby("Time"):
    print time
    control_name = group[group.Type == "control"].Experiment.iloc[0]
    transfection_name = group[group.Type == "transfection"].Experiment.iloc[0]
    lfc_results[time] = (samples[transfection_name] - samples[control_name]).to_dict()
lfc_results = pd.DataFrame(lfc_results)[sequence]
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
Let's look at the data sorted by 24-hours time-point:
lfc_results.sort("24 hours").head()
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
We are interested in the gene expression changes upon transfection. Thus, we have to annotate each probe with an ENTREZ gene ID and remove probes without an ENTREZ ID or with multiple assignments. Although this strategy might not be optimal, after this we average the LFC for each gene over its probes.
# annotate with GPL
lfc_result_annotated = lfc_results.reset_index().merge(
    gse.gpls['GPL570'].table[["ID", "ENTREZ_GENE_ID"]],
    left_on='index', right_on="ID").set_index('index')
del lfc_result_annotated["ID"]
# remove probes without ENTREZ
lfc_result_annotated = lfc_result_annotated.dropna(subset=["ENTREZ_GENE_ID"])
# remove probes with more than one gene assigned
lfc_result_annotated = lfc_result_annotated[~lfc_result_annotated.ENTREZ_GENE_ID.str.contains("///")]
# for each gene average LFC over probes
lfc_result_annotated = lfc_result_annotated.groupby("ENTREZ_GENE_ID").median()
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
We can now look at the data:
lfc_result_annotated.sort("24 hours").head()
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
At this point our job is basically done. However, we might want to check whether the experiments worked out at all. To do this we will use hsa-miR-124a-3p targets predicted by the MIRZA-G algorithm. The targets should be downregulated. First we read the MIRZA-G results:
header = ["GeneID", "miRNA", "Total score without conservation", "Total score with conservation"]
miR124_targets = pd.read_table("seed-mirza-g_all_mirnas_per_gene_scores_miR_124a.tab", names=header)
miR124_targets.head()
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
We shall extract targets as a simple list of strings:
miR124_targets_list = map(str, miR124_targets.GeneID.tolist())
print "Number of targets:", len(miR124_targets_list)
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
As can be seen, there are a lot of targets (genes that possess a seed match in their 3'UTRs). We will use all of them. As a first step we will annotate genes as targets or not and add this information as a column to the DataFrame:
lfc_result_annotated["Is miR-124a target"] = [i in miR124_targets_list for i in lfc_result_annotated.index]
cols_to_plot = [i for i in lfc_result_annotated.columns if "hour" in i]
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
In the end we can plot the results:
a = sns.pointplot(data=lfc_result_annotated[lfc_result_annotated["Is miR-124a target"]][cols_to_plot],
                  color=c2, label="miR-124a target")
b = sns.pointplot(data=lfc_result_annotated[~lfc_result_annotated["Is miR-124a target"]][cols_to_plot],
                  color=c1, label="No miR-124a target")
sns.despine()
pl.legend([pl.mpl.patches.Patch(color=c2), pl.mpl.patches.Patch(color=c1)],
          ["miR-124a target", "No miR-124a target"], frameon=True, loc='lower left')
pl.xlabel("Time after transfection")
pl.ylabel("Median log2 fold change")
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
guma44/GEOparse
bsd-3-clause
Columns of interest
- loan_status -- Current status of the loan
- loan_amnt -- The listed amount of the loan applied for by the borrower. If at some point in time the credit department reduces the loan amount, then it will be reflected in this value.
- int_rate -- interest rate of the loan
- grade -- LC assigned loan grade
- sub_grade -- LC assigned sub loan grade
- purpose -- A category provided by the borrower for the loan request. -- dummy
- annual_inc -- The self-reported annual income provided by the borrower during registration.
- emp_length -- Employment length in years. Possible values are between 0 and 10 where 0 means less than one year and 10 means ten or more years. -- dummy
- fico_range_low
- fico_range_high
- home_ownership -- The home ownership status provided by the borrower during registration or obtained from the credit report. Our values are: RENT, OWN, MORTGAGE, OTHER
- tot_cur_bal -- Total current balance of all accounts
- num_actv_bc_tl -- number of active bank accounts
- (avg_cur_bal -- average current balance of all accounts)
- mort_acc -- number of mortgage accounts
- num_actv_rev_tl -- Number of currently active revolving trades
- dti -- A ratio calculated using the borrower's total monthly debt payments on the total debt obligations, excluding mortgage and the requested LC loan, divided by the borrower's self-reported monthly income.
- pub_rec_bankruptcies -- Number of public record bankruptcies
- delinq_amnt --
- title --
- mths_since_last_delinq -- The number of months since the borrower's last delinquency.
- mths_since_recent_revol_delinq -- Months since most recent revolving delinquency.
- total_cu_tl -- Number of finance trades
- last_credit_pull_d -- The most recent month LC pulled credit for this loan
## 2015
df_app_2015 = pd.read_csv('data/LoanStats3d_securev1.csv.zip', compression='zip',
                          low_memory=False, header=1)
df_app_2015.loan_status.unique()
df_app_2015.head(5)
df_app_2015['delinq_amnt'].unique()
df_app_2015.info(max_cols=111)
df_app_2015.groupby('title').loan_amnt.mean()
df_app_2015.groupby('purpose').loan_amnt.mean()
df_app_2015['emp_length'].unique()
Data_Preprocessing/LendingClub_DataExploratory.ipynb
kayzhou22/DSBiz_Project_LendingClub
mit
Descriptive Analysis
- Annual income distribution
- Total loan amount groupby interest rate chunks
- Average loan amount groupby grade
- Average loan amount groupby
## selected columns
df = df_app_2015.ix[:, ['loan_status', 'loan_amnt', 'int_rate', 'grade', 'sub_grade',
                        'purpose',
                        'annual_inc', 'emp_length', 'home_ownership',
                        'fico_range_low', 'fico_range_high',
                        'num_actv_bc_tl', 'tot_cur_bal', 'mort_acc', 'num_actv_rev_tl',
                        'pub_rec_bankruptcies', 'dti']]
df.head(3)
len(df.dropna())
df.shape
df.loan_status.unique()
len(df[df['loan_status'] == 'Fully Paid'])
len(df[df['loan_status'] == 'Default'])
len(df[df['loan_status'] == 'Charged Off'])
len(df[df['loan_status'] == 'Late (31-120 days)'])
df.info()
df.loan_status.unique()

## Convert applicable fields to numeric (I only select "Interest Rate" to use for this analysis)
df.ix[:, 'int_rate'] = df.ix[:, ['int_rate']]\
    .applymap(lambda e: pd.to_numeric(str(e).rstrip()[:-1], errors='coerce'))
df.info()
df = df.rename(columns={"int_rate": "int_rate(%)"})
df.head(3)
#len(df.dropna(thresh= , axis=1).columns)
df.describe()

# 1. Loan Amount distribution
# create plots and histogram to visualize total loan amounts
fig = pl.figure(figsize=(8, 10))

ax1 = fig.add_subplot(211)
ax1.plot(range(len(df)), sorted(df.loan_amnt), '.', color='purple')
ax1.set_xlabel('Loan Applicant Count')
ax1.set_ylabel('Loan Amount ($)')
ax1.set_title('Fig 1a - Sorted Issued Loan Amount (2015)', size=15)

# all_ histogram
# pick upper bound 900 to exclude too large numbers
ax2 = fig.add_subplot(212)
ax2.hist(df.loan_amnt, range=(df.loan_amnt.min(), 36000), color='purple')
ax2.set_xlabel('Loan Amount -$', size=12)
ax2.set_ylabel('Counts', size=12)
ax2.set_title('Fig 1b - Sorted Issued Loan Amount (2015)', size=15)
Data_Preprocessing/LendingClub_DataExploratory.ipynb
kayzhou22/DSBiz_Project_LendingClub
mit
Fig 1a shows the sorted issued loan amounts from low to high.
Fig 1b is a histogram showing the distribution of the issued loan amounts.

Observation
The loan amounts vary from $1,000 to $35,000, and the most frequent loan amounts issued are around $10,000.
inc_75 = df.describe().loc['75%', 'annual_inc']
count_75 = int(len(df)*0.75)

# 2. Applicant Annual Income Distribution
fig = pl.figure(figsize=(8, 16))

ax0 = fig.add_subplot(311)
ax0.plot(range(len(df.annual_inc)), sorted(df.annual_inc), '.', color='blue')
ax0.set_xlabel('Loan Applicant Count')
ax0.set_ylabel('Applicant Annual Income ($)')
ax0.set_title('Fig 2a - Sorted Applicant Annual Income-all ($) (2015)', size=15)

# use 75% quantile to plot the graph and histograms -- excluding extreme values
inc_75 = df.describe().loc['75%', 'annual_inc']
inc_below75 = df.annual_inc[df.annual_inc <= inc_75]
count_75 = int(len(df)*0.75)

ax1 = fig.add_subplot(312)
ax1.plot(range(count_75), sorted(df.annual_inc)[:count_75], '.', color='blue')
ax1.set_xlabel('Loan Applicant Count')
ax1.set_ylabel('Applicant Annual Income ($)')
ax1.set_title('Fig 2b - Sorted Applicant Annual Income-75% ($) (2015)', size=15)

# all_ histogram
# pick upper bound 900 to exclude too large numbers
ax2 = fig.add_subplot(313)
ax2.hist(df.annual_inc, range=(df.annual_inc.min(), inc_75), color='blue')
ax2.set_xlabel('Applicant Annual Income -$', size=12)
ax2.set_ylabel('Counts', size=12)
ax2.set_title('Fig 2c - Sorted Applicant Income-75% ($) (2015)', size=15)
Data_Preprocessing/LendingClub_DataExploratory.ipynb
kayzhou22/DSBiz_Project_LendingClub
mit
Fig 2a and Fig 2b both show the sorted applicant annual income from low to high. The former includes extreme values, and the latter plots only the values below the 75% quantile, which looks more sensible.
Fig 2c is a histogram showing the distribution of the applicants' income (below the 75% quantile).

Observation
The most frequent annual incomes of the applicants are between $40,000 and $60,000.
# 3. Loan amount and Applicant Annual Income
# View all
pl.figure(figsize=(6, 4))
pl.plot(df.annual_inc, df.loan_amnt, '.')
pl.ylim(0, 40000)
pl.xlim(0, 0.2e7)  # df.annual_inc.max()
pl.title('Fig 3a - Loan Amount VS Applicant Annual Income_all', size=15)
pl.ylabel('Loan Amount ($)', size=15)
pl.xlabel('Applicant Annual Income ($)', size=15)
Data_Preprocessing/LendingClub_DataExploratory.ipynb
kayzhou22/DSBiz_Project_LendingClub
mit
Fig 3a shows the approved loan amount against the applicants' annual income.
Observation:
We can see that there are a few people whose self-reported income is very high, while the majority of applicants have an income of less than $100,000. These extreme values indicate a possibility of outliers.

Method to deal with outliers
- Locate outliers using the Median-Absolute-Deviation (MAD) test and remove them for further analysis; a sketch of such a filter is shown below.
- Pick samples to set the outlier range using the mean of the outlier boundaries -- the method could be improved by using random sampling.
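As an illustration only (not part of the original notebook), the following is a minimal sketch of a MAD-based outlier filter for the income column; the 0.6745 constant and the 3.5 cutoff are the conventional modified z-score choices, and the column name annual_inc comes from the cells above.

import numpy as np

def mad_outlier_mask(values, cutoff=3.5):
    # modified z-score: 0.6745 * (x - median) / MAD; points beyond the cutoff are flagged
    values = np.asarray(values, dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median))
    modified_z = 0.6745 * (values - median) / mad
    return np.abs(modified_z) > cutoff

# keep only the rows whose annual income is not flagged as an outlier
outliers = mad_outlier_mask(df.annual_inc)
df_no_outliers = df[~outliers]
print(df.shape, df_no_outliers.shape)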
# 3b
pl.figure(figsize=(6, 4))
pl.plot(df.annual_inc, df.loan_amnt, '.')
pl.ylim(0, 40000)
pl.xlim(0, inc_75)
pl.title('Fig 3b - Loan Amount VS Applicant Annual Income_75%', size=15)
pl.ylabel('Loan Amount ($)', size=15)
pl.xlabel('Applicant Annual Income ($)', size=15)
Data_Preprocessing/LendingClub_DataExploratory.ipynb
kayzhou22/DSBiz_Project_LendingClub
mit
Fig 3b is a plot of the loan amount vs. applicant annual income with all extreme income amounts excluded.

Observation:
Now it is clearer to see that there is a fairly "rigid" standard for determining loan amounts based on income; however, there are still exceptions (sparse points above the "division line").
pl.plot(np.log(df.annual_inc), np.log(df.loan_amnt), '.')

# 4. Average loan amount groupby grade
mean_loan_grade = df.groupby('grade')['loan_amnt'].mean()
mean_loan_grade
sum_loan_grade = df.groupby('grade')['loan_amnt'].sum()
sum_loan_grade

fig = pl.figure(figsize=(8, 12))  # 16,5

ax0 = fig.add_subplot(211)
ax0.plot(range(len(mean_loan_grade)), mean_loan_grade, 'o', color='blue')
ax0.set_ylim(0, 23000)
ax0.set_xlim(-0.5, len(mean_loan_grade))
ax0.set_xticks(range(len(mean_loan_grade)))
ax0.set_xticklabels(('A', 'B', 'C', 'D', 'E', 'F', 'G'))
ax0.set_xlabel('Grade')
ax0.set_ylabel('Average Loan Amount ($)')
ax0.set_title('Fig 4a - Average Loan Amount by Grade ($) (2015)', size=15)

ax1 = fig.add_subplot(212)
ax1.plot(range(len(sum_loan_grade)), sum_loan_grade, 'o', color='brown')
ax1.set_ylim(0, 2.3e9)
ax1.set_xlim(-0.5, len(sum_loan_grade))
ax1.set_xticks(range(len(sum_loan_grade)))
ax1.set_xticklabels(('A', 'B', 'C', 'D', 'E', 'F', 'G'))
ax1.set_xlabel('Grade')
ax1.set_ylabel('Total Loan Amount ($)')
ax1.set_title('Fig 4b - Total Loan Amount by Grade ($) (2015)', size=15)
Data_Preprocessing/LendingClub_DataExploratory.ipynb
kayzhou22/DSBiz_Project_LendingClub
mit
Initialize
folder = '../zHat/'
metric = get_metric()
MES/polAvg/calculations/exactSolutions.ipynb
CELMA-project/CELMA
lgpl-3.0
Define the function to take the derivative of

NOTE: These do not need to be fulfilled in order to get convergence:
- z must be periodic
- The field $f(\rho, \theta)$ must be of class infinity in $z=0$ and $z=2\pi$
- The field $f(\rho, \theta)$ must be continuous in the $\rho$ direction with $f(\rho, \theta + \pi)$

But this needs to be fulfilled:
1. The field $f(\rho, \theta)$ must be single valued when $\rho\to0$
2. Eventual BC in $\rho$ must be satisfied
# We need Lx
from boututils.options import BOUTOptions
myOpts = BOUTOptions(folder)
Lx = eval(myOpts.geom['Lx'])

# Z hat function
# NOTE: The function is not continuous over origo
s = 2
c = pi
w = pi/2
the_vars['f'] = ((1/2)*(tanh(s*(z-(c-w/2)))-tanh(s*(z-(c+w/2)))))*sin(3*2*pi*x/Lx)
MES/polAvg/calculations/exactSolutions.ipynb
CELMA-project/CELMA
lgpl-3.0
Calculating the solution
the_vars['S'] = (integrate(the_vars['f'], (z, 0, 2*np.pi))/(2*np.pi)).evalf()
MES/polAvg/calculations/exactSolutions.ipynb
CELMA-project/CELMA
lgpl-3.0
DecisionTreeClassifier is capable of both binary (where the labels are [-1, 1]) classification and multiclass (where the labels are [0, …, K-1]) classification.

Applying to Iris Dataset
from sklearn.datasets import load_iris
from sklearn import tree

iris = load_iris()
iris.data[0:5]
iris.feature_names
X = iris.data[:, 2:]
y = iris.target
y

clf = tree.DecisionTreeClassifier(random_state=42)
clf = clf.fit(X, y)

from sklearn.tree import export_graphviz
export_graphviz(clf, out_file="tree.dot",
                feature_names=iris.feature_names[2:],
                class_names=iris.target_names,
                rounded=True, filled=True)

import graphviz
dot_data = tree.export_graphviz(clf, out_file=None,
                                feature_names=iris.feature_names[2:],
                                class_names=iris.target_names,
                                rounded=True, filled=True)
graph = graphviz.Source(dot_data)

import numpy as np
import seaborn as sns
sns.set_style('whitegrid')
import matplotlib.pyplot as plt
%matplotlib inline
test05_machine_learning/Code snippets.ipynb
GitYiheng/reinforcement_learning_test
mit
Start Here
df = sns.load_dataset('iris')
df.head()
col = ['petal_length', 'petal_width']
X = df.loc[:, col]
species_to_num = {'setosa': 0, 'versicolor': 1, 'virginica': 2}
df['tmp'] = df['species'].map(species_to_num)
y = df['tmp']
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, y)
X[0:5]
X.values
X.values.reshape(-1, 1)
Xv = X.values.reshape(-1, 1)
Xv
h = 0.02  # set the spacing
Xv.min()
Xv.max() + 1
x_min, x_max = Xv.min(), Xv.max() + 1
y.min()
y.max() + 1
y_min, y_max = y.min(), y.max() + 1
y_min
y_max
np.arange(x_min, x_max, h)
np.arange(y_min, y_max, h)
np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
xx
yy
xx.ravel()
xx.ravel?
yy.ravel()
np.c_[xx.ravel(), yy.ravel()]
np.c_?
pd.DataFrame(np.c_[xx.ravel(), yy.ravel()])
z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
z
xx.shape
z.shape
z = z.reshape(xx.shape)
z.shape
plt.contourf?
test05_machine_learning/Code snippets.ipynb
GitYiheng/reinforcement_learning_test
mit
matplotlib documentation
fig = plt.figure(figsize=(16, 10))
ax = plt.contourf(xx, yy, z, cmap='afmhot', alpha=0.3);

fig = plt.figure(figsize=(16, 10))
plt.scatter(X.values[:, 0], X.values[:, 1], c=y, s=80, alpha=0.9, edgecolors='g');

fig = plt.figure(figsize=(16, 10))
ax = plt.contourf(xx, yy, z, cmap='afmhot', alpha=0.3);
plt.scatter(X.values[:, 0], X.values[:, 1], c=y, s=80, alpha=0.9, edgecolors='g');
test05_machine_learning/Code snippets.ipynb
GitYiheng/reinforcement_learning_test
mit
Part 2: For three or more files
Set 1: download and unzip the files, and read the data.
Create a list of all files, and two dictionaries to connect each file to its URL and its .csv file name. Check which files exist by using os.path.exists inside for and if loops, and print the results. Only download the files that don't exist by putting that code in the else branch. Add some print commands in the loop to show which file is downloading and to report when it is done. Unzip the files, and use the zf list and the data list to read the 3 .csv files respectively. Since the 3 sets of data are the same kind of data, I first create a blank data frame outside the for loop and then use the append command to merge all the data. Use the shape and tail commands to check the data.
import os
import requests
import zipfile
import pandas as pd

zipfiles = ['HCEPDB_moldata_set1.zip', 'HCEPDB_moldata_set2.zip', 'HCEPDB_moldata_set3.zip']
url = {'HCEPDB_moldata_set1.zip': 'http://faculty.washington.edu/dacb/HCEPDB_moldata_set1.zip',
       'HCEPDB_moldata_set2.zip': 'http://faculty.washington.edu/dacb/HCEPDB_moldata_set2.zip',
       'HCEPDB_moldata_set3.zip': 'http://faculty.washington.edu/dacb/HCEPDB_moldata_set3.zip'}
csvfile = {'HCEPDB_moldata_set1.zip': 'HCEPDB_moldata_set1.csv',
           'HCEPDB_moldata_set2.zip': 'HCEPDB_moldata_set2.csv',
           'HCEPDB_moldata_set3.zip': 'HCEPDB_moldata_set3.csv'}
zf = []
data = []
alldata = pd.DataFrame()

for i in range(len(zipfiles)):
    # check whether the file exists.
    if os.path.exists(zipfiles[i]):
        print(zipfiles[i], 'exists.')
    else:
        print(zipfiles[i], "doesn't exist.")
        # Download files.
        print(zipfiles[i], 'is downloading.')
        req = requests.get(url[zipfiles[i]])
        assert req.status_code == 200
        with open(zipfiles[i], 'wb') as f:
            f.write(req.content)
        print(zipfiles[i], 'is downloaded.')
    # Unzip and read .csv files.
    zf.append(zipfile.ZipFile(zipfiles[i]))
    data.append(pd.read_csv(zf[i].open(csvfile[zipfiles[i]])))
    alldata = alldata.append(data[i], ignore_index=True)

# Check data
print('\nCheck data')
print('shape of', csvfile[zipfiles[0]], '=', data[0].shape,
      '\nshape of', csvfile[zipfiles[1]], '=', data[1].shape,
      '\nshape of', csvfile[zipfiles[2]], '=', data[2].shape,
      '\nshape of all data =', alldata.shape)
print('\n')
alldata.tail()
SEDS_Hw/seds-hw-2-procedural-python-part-1-danielfather7/SEDS-HW2.ipynb
danielfather7/teach_Python
gpl-3.0
Set 2: analyze the data
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import math

alldata['(xi-x)^2'] = (alldata['mass'] - alldata['mass'].mean())**2
SD = math.sqrt(sum(alldata['(xi-x)^2'])/alldata.shape[0])
M = alldata['mass'].mean()
print('standard deviation of mass = ', SD, ', mean of mass = ', M, "\n")

alldata['mass_group'] = pd.cut(alldata['mass'],
                               bins=[min(alldata['mass']), M-3*SD, M-2*SD, M-SD, M+SD, M+2*SD, M+3*SD, max(alldata['mass'])],
                               labels=["<(-3SD)", "-3SD~-2SD", "-2SD~-SD", "-SD~+SD", "+SD~+2SD", "+2SD~+3SD", ">(+3SD)"])
count = pd.value_counts(alldata['mass_group'], normalize=True)
print("Count numbers in each group(%)\n", count, "\n")
print("within 1 standard deviation:", count[3],
      "\nwithin 2 standard deviations:", count[2]+count[3]+count[4],
      "\nwithin 3 standard deviations:", count[2]+count[3]+count[4]+count[1]+count[5], "\n")
print("Conclusion: mass is nearly normally distributed!")
SEDS_Hw/seds-hw-2-procedural-python-part-1-danielfather7/SEDS-HW2.ipynb
danielfather7/teach_Python
gpl-3.0
Benchmark of vectorization
pi = np.array([.3, .3, .4])
A = np.array([[.2, .3, .5], [.1, .5, .4], [.6, .1, .3]])
B = np.array([[0.1, 0.5, 0.4], [0.2, 0.4, 0.4], [0.3, 0.6, 0.1]])
states, sequence = HMM.sim_HMM(A, B, pi, 100)

%timeit HMM.Baum_Welch(A, B, pi, sequence, 1000, 0, scale=True)
%timeit HMM.hmm_unoptimized.Baum_Welch(A, B, pi, sequence, 1000, 0, scale=True)
%timeit HMM.Baum_Welch(A, B, pi, sequence, 1000, 0, scale=False)
%timeit HMM.hmm_unoptimized.Baum_Welch(A, B, pi, sequence, 1000, 0, scale=False)
Code_Report.ipynb
xiaozhouw/663
mit
As for the optimization, we employed vectorization to avoid the use of triple for-loops in the update section of the Baum-Welch algorithm. We used broadcasting with numpy.newaxis to implement the Baum-Welch algorithm much faster. As we can see from the Benchmark part of the report, under the class HMM we have 2 functions for the Baum-Welch algorithm, called Baum_Welch and Baum_Welch_fast. In Baum_Welch_fast, vectorization is applied when calculating $\xi$, while in Baum_Welch we use a for loop. Notice that in Baum_Welch all other parts are implemented with vectorization. This is just an example of how vectorization greatly improves the speed. Notice that the run time for the vectorized Baum-Welch algorithm is 2.43 s per loop (with scaling) and 1 s per loop (without scaling), compared to 4.01 s per loop (with scaling) and 261 s per loop (without scaling). Other functions are implemented with vectorization as well. Vectorization greatly improves our time performance. A generic illustration of this broadcasting trick is sketched below, before we move on to the simulations.
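The following sketch is not taken from the HMM package; it is a generic, self-contained illustration (with made-up variable names) of the broadcasting idea described above: the triple loop that fills $\xi_t(i,j) \propto \alpha_t(i)\,A_{ij}\,B_{j,o_{t+1}}\,\beta_{t+1}(j)$ is replaced by a single expression using numpy.newaxis.

import numpy as np

def xi_loop(alpha, beta, A, B, obs):
    # naive version: triple loop over time and the two state indices
    T, N = alpha.shape
    xi = np.zeros((T - 1, N, N))
    for t in range(T - 1):
        for i in range(N):
            for j in range(N):
                xi[t, i, j] = alpha[t, i] * A[i, j] * B[j, obs[t + 1]] * beta[t + 1, j]
        xi[t] /= xi[t].sum()
    return xi

def xi_vectorized(alpha, beta, A, B, obs):
    # same quantity via broadcasting: (T-1,N,1) * (N,N) * (T-1,1,N) -> (T-1,N,N)
    num = alpha[:-1, :, np.newaxis] * A[np.newaxis, :, :] * \
          (B[:, obs[1:]].T * beta[1:])[:, np.newaxis, :]
    return num / num.sum(axis=(1, 2), keepdims=True)

# tiny check that both versions agree on random inputs
N, M, T = 3, 4, 6
rng = np.random.RandomState(0)
A = rng.dirichlet(np.ones(N), size=N)
B = rng.dirichlet(np.ones(M), size=N)
obs = rng.randint(M, size=T)
alpha, beta = rng.rand(T, N), rng.rand(T, N)
print(np.allclose(xi_loop(alpha, beta, A, B, obs), xi_vectorized(alpha, beta, A, B, obs)))

Simulations

Effect of chain length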
A = np.array([[0.1, 0.5, 0.4], [0.3, 0.5, 0.2], [0.7, 0.2, 0.1]])
B = np.array([[0.1, 0.1, 0.1, 0.7], [0.5, 0.5, 0, 0], [0.7, 0.1, 0.1, 0.1]])
pi = np.array([0.25, 0.25, 0.5])

A_init = np.array([[0.2, 0.6, 0.2], [0.25, 0.5, 0.25], [0.6, 0.2, 0.2]])
B_init = np.array([[0.05, 0.1, 0.15, 0.7], [0.4, 0.4, 0.1, 0.1], [0.6, 0.2, 0.2, 0.2]])
pi_init = np.array([0.3, 0.3, 0.4])

lengths = [50, 100, 200, 500, 1000]
acc = []
k = 30
for i in lengths:
    mean_acc = 0
    for j in range(k):
        states, sequence = HMM.sim_HMM(A, B, pi, i)
        Ahat, Bhat, pihat = HMM.Baum_Welch(A_init, B_init, pi_init, sequence, 10, 0, True)
        seq_hat = HMM.Viterbi(Ahat, Bhat, pihat, sequence)
        mean_acc = mean_acc + np.mean(seq_hat == states)
    acc.append(mean_acc/k)
plt.plot(lengths, acc)
Code_Report.ipynb
xiaozhouw/663
mit
From the plot we can see that the length of the chain does have an effect on the performance of the Baum-Welch algorithm and Viterbi decoding. We can see that when the chain is too long, the algorithms tend to give bad results.

Effects of initial values in the Baum-Welch Algorithm
A = np.array([[0.1, 0.5, 0.4], [0.3, 0.5, 0.2], [0.7, 0.2, 0.1]])
B = np.array([[0.1, 0.1, 0.1, 0.7], [0.5, 0.5, 0, 0], [0.7, 0.1, 0.1, 0.1]])
pi = np.array([0.25, 0.25, 0.5])

############INITIAL VALUES 1###############
A_init = np.array([[0.2, 0.6, 0.2], [0.25, 0.5, 0.25], [0.6, 0.2, 0.2]])
B_init = np.array([[0.05, 0.1, 0.15, 0.7], [0.4, 0.4, 0.1, 0.1], [0.6, 0.2, 0.2, 0.2]])
pi_init = np.array([0.3, 0.3, 0.4])
k = 50
acc = np.zeros(k)
for i in range(k):
    states, sequence = HMM.sim_HMM(A, B, pi, 500)
    Ahat, Bhat, pihat = HMM.Baum_Welch(A_init, B_init, pi_init, sequence, 10, 0, False)
    seq_hat = HMM.Viterbi(Ahat, Bhat, pihat, sequence)
    acc[i] = np.mean(seq_hat == states)
print("Accuracy: ", np.mean(acc))

############INITIAL VALUES 2###############
A_init = np.array([[0.5, 0.25, 0.25], [0.1, 0.4, 0.5], [0.25, 0.1, 0.65]])
B_init = np.array([[0.3, 0.4, 0.2, 0.1], [0.1, 0.5, 0.2, 0.2], [0.1, 0.1, 0.4, 0.4]])
pi_init = np.array([0.5, 0.2, 0.3])
k = 50
acc = np.zeros(k)
for i in range(k):
    states, sequence = HMM.sim_HMM(A, B, pi, 500)
    Ahat, Bhat, pihat = HMM.Baum_Welch(A_init, B_init, pi_init, sequence, 10, 0, True)
    seq_hat = HMM.Viterbi(Ahat, Bhat, pihat, sequence)
    acc[i] = np.mean(seq_hat == states)
print("Accuracy: ", np.mean(acc))

############INITIAL VALUES 3###############
A_init = np.array([[0.2, 0.6, 0.2], [0.25, 0.5, 0.25], [0.6, 0.2, 0.2]])
B_init = np.array([[0.3, 0.4, 0.2, 0.1], [0.1, 0.5, 0.2, 0.2], [0.1, 0.1, 0.4, 0.4]])
pi_init = np.array([0.5, 0.2, 0.3])
k = 50
acc = np.zeros(k)
for i in range(k):
    states, sequence = HMM.sim_HMM(A, B, pi, 500)
    Ahat, Bhat, pihat = HMM.Baum_Welch(A_init, B_init, pi_init, sequence, 10, 0, True)
    seq_hat = HMM.Viterbi(Ahat, Bhat, pihat, sequence)
    acc[i] = np.mean(seq_hat == states)
print("Accuracy: ", np.mean(acc))

############INITIAL VALUES 4###############
A_init = np.array([[0.5, 0.25, 0.25], [0.1, 0.4, 0.5], [0.25, 0.1, 0.65]])
B_init = np.array([[0.05, 0.1, 0.15, 0.7], [0.4, 0.4, 0.1, 0.1], [0.6, 0.2, 0.2, 0.2]])
pi_init = np.array([0.5, 0.2, 0.3])
k = 50
acc = np.zeros(k)
for i in range(k):
    states, sequence = HMM.sim_HMM(A, B, pi, 500)
    Ahat, Bhat, pihat = HMM.Baum_Welch(A_init, B_init, pi_init, sequence, 10, 0, True)
    seq_hat = HMM.Viterbi(Ahat, Bhat, pihat, sequence)
    acc[i] = np.mean(seq_hat == states)
print("Accuracy: ", np.mean(acc))
Code_Report.ipynb
xiaozhouw/663
mit
From this part, we can see that the choice of initial values is critically important. Because the Baum-Welch algorithm does not guarantee a global maximum, a bad choice of initial values can trap the Baum-Welch algorithm in a local maximum. Moreover, comparing initial values 3 and 4, our experiments show that the initial values for the emission matrix $B$ are the more important ones. The initial parameters represent your beliefs.

Effect of the number of iterations in the Baum-Welch Algorithm
############INITIAL VALUES 1###############
A_init = np.array([[0.2, 0.6, 0.2], [0.25, 0.5, 0.25], [0.6, 0.2, 0.2]])
B_init = np.array([[0.05, 0.1, 0.15, 0.7], [0.4, 0.4, 0.1, 0.1], [0.6, 0.2, 0.2, 0.2]])
pi_init = np.array([0.3, 0.3, 0.4])
n_iter = [1, 5, 10, 25, 50, 100, 500]
k = 30
acc = np.zeros([k, len(n_iter)])
for j in range(k):
    states, sequence = HMM.sim_HMM(A, B, pi, 100)
    t = 0
    for i in n_iter:
        Ahat, Bhat, pihat = HMM.Baum_Welch(A_init, B_init, pi_init, sequence, i, 0, False)
        seq_hat = HMM.Viterbi(Ahat, Bhat, pihat, sequence)
        acc[j, t] = np.mean(seq_hat == states)
        t += 1
plt.plot(n_iter, np.mean(acc, axis=0))
Code_Report.ipynb
xiaozhouw/663
mit
With this initial condition, we can see one feature of the Baum-Welch algorithm: it tends to overfit the data, in the sense that $P(Y|\theta_{final})>P(Y|\theta_{true})$.
############INITIAL VALUES 2###############
A_init = np.array([[0.5, 0.25, 0.25], [0.1, 0.4, 0.5], [0.25, 0.1, 0.65]])
B_init = np.array([[0.3, 0.4, 0.2, 0.1], [0.1, 0.5, 0.2, 0.2], [0.1, 0.1, 0.4, 0.4]])
pi_init = np.array([0.5, 0.2, 0.3])
n_iter = [1, 5, 10, 25, 50, 100, 500]
k = 30
acc = np.zeros([k, len(n_iter)])
for j in range(k):
    states, sequence = HMM.sim_HMM(A, B, pi, 100)
    t = 0
    for i in n_iter:
        Ahat, Bhat, pihat = HMM.Baum_Welch(A_init, B_init, pi_init, sequence, i, 0, False)
        seq_hat = HMM.Viterbi(Ahat, Bhat, pihat, sequence)
        acc[j, t] = np.mean(seq_hat == states)
        t += 1
plt.plot(n_iter, np.mean(acc, axis=0))

############INITIAL VALUES 3###############
A_init = np.array([[0.2, 0.6, 0.2], [0.25, 0.5, 0.25], [0.6, 0.2, 0.2]])
B_init = np.array([[0.3, 0.4, 0.2, 0.1], [0.1, 0.5, 0.2, 0.2], [0.1, 0.1, 0.4, 0.4]])
pi_init = np.array([0.5, 0.2, 0.3])
n_iter = [1, 5, 10, 25, 50, 100, 500]
k = 30
acc = np.zeros([k, len(n_iter)])
for j in range(k):
    states, sequence = HMM.sim_HMM(A, B, pi, 100)
    t = 0
    for i in n_iter:
        Ahat, Bhat, pihat = HMM.Baum_Welch(A_init, B_init, pi_init, sequence, i, 0, False)
        seq_hat = HMM.Viterbi(Ahat, Bhat, pihat, sequence)
        acc[j, t] = np.mean(seq_hat == states)
        t += 1
plt.plot(n_iter, np.mean(acc, axis=0))

############INITIAL VALUES 4###############
A_init = np.array([[0.5, 0.25, 0.25], [0.1, 0.4, 0.5], [0.25, 0.1, 0.65]])
B_init = np.array([[0.05, 0.1, 0.15, 0.7], [0.4, 0.4, 0.1, 0.1], [0.6, 0.2, 0.2, 0.2]])
pi_init = np.array([0.5, 0.2, 0.3])
n_iter = [1, 5, 10, 25, 50, 100, 500]
k = 30
acc = np.zeros([k, len(n_iter)])
for j in range(k):
    states, sequence = HMM.sim_HMM(A, B, pi, 100)
    t = 0
    for i in n_iter:
        Ahat, Bhat, pihat = HMM.Baum_Welch(A_init, B_init, pi_init, sequence, i, 0, False)
        seq_hat = HMM.Viterbi(Ahat, Bhat, pihat, sequence)
        acc[j, t] = np.mean(seq_hat == states)
        t += 1
plt.plot(n_iter, np.mean(acc, axis=0))
Code_Report.ipynb
xiaozhouw/663
mit
In other situations, increasing the number of iterations in the Baum-Welch algorithm tends to fit the data better.

Applications
dat = pd.read_csv("data/weather-test2-1000.txt", skiprows=1, header=None)
dat.head(5)
seq = dat[1].map({"no": 0, "yes": 1}).tolist()
states = dat[0].map({"sunny": 0, "rainy": 1, "foggy": 2})
initial_A = np.array([[0.7, 0.2, 0.1], [0.3, 0.6, 0.1], [0.1, 0.6, 0.3]])
initial_B = np.array([[0.9, 0.1], [0.1, 0.9], [0.4, 0.6]])
initial_pi = np.array([0.4, 0.4, 0.2])
Ahat, Bhat, pihat = HMM.Baum_Welch(initial_A, initial_B, initial_pi, seq,
                                   max_iter=100, threshold=0, scale=True)
states_hat = HMM.Viterbi(Ahat, Bhat, pihat, seq)
print(np.mean(states_hat == states))
Code_Report.ipynb
xiaozhouw/663
mit
Comparative Analysis
A = np.array([[0.1, 0.5, 0.4], [0.3, 0.5, 0.2], [0.7, 0.2, 0.1]])
B = np.array([[0.1, 0.1, 0.1, 0.7], [0.5, 0.5, 0, 0], [0.7, 0.1, 0.1, 0.1]])
pi = np.array([0.25, 0.25, 0.5])
A_init = np.array([[0.2, 0.6, 0.2], [0.25, 0.5, 0.25], [0.6, 0.2, 0.2]])
B_init = np.array([[0.05, 0.1, 0.15, 0.7], [0.4, 0.4, 0.1, 0.1], [0.6, 0.2, 0.2, 0.2]])
pi_init = np.array([0.3, 0.3, 0.4])
states, sequence = HMM.sim_HMM(A, B, pi, 100)
Code_Report.ipynb
xiaozhouw/663
mit
Comparing Viterbi decoding
mod = hmm.MultinomialHMM(n_components=3)
mod.startprob_ = pi
mod.transmat_ = A
mod.emissionprob_ = B
res_1 = mod.decode(np.array(sequence).reshape([100, 1]))[1]
res_2 = HMM.Viterbi(A, B, pi, sequence)
np.array_equal(res_1, res_2)

%timeit -n100 mod.decode(np.array(sequence).reshape([100, 1]))
%timeit -n100 HMM.Viterbi(A, B, pi, sequence)
Code_Report.ipynb
xiaozhouw/663
mit
From the above we can see that we coded our Viterbi algorithm correctly, but the runtime is not good enough. When we checked the source code of hmmlearn, we saw that they used C to make things faster. In the future, we might want to implement this algorithm in C++ and wrap it for Python.

Comparing the Baum-Welch Algorithm
k = 50
acc = []
for i in range(k):
    Ahat, Bhat, pihat = HMM.Baum_Welch(A_init, B_init, pi_init, sequence,
                                       max_iter=10, threshold=0, scale=True)
    states_hat = HMM.Viterbi(Ahat, Bhat, pihat, sequence)
    acc.append(np.mean(states_hat == states))
plt.plot(acc)

k = 50
acc = []
for i in range(k):
    mod = hmm.MultinomialHMM(n_components=3)
    mod = mod.fit(np.array(sequence).reshape([100, 1]))
    pred_states = mod.decode(np.array(sequence).reshape([100, 1]))[1]
    acc.append(np.mean(pred_states == states))
plt.plot(acc)
Code_Report.ipynb
xiaozhouw/663
mit
Lab Task 1: Create the convolutional base The 6 lines of code below define the convolutional base using a common pattern: a stack of Conv2D and MaxPooling2D layers. As input, a CNN takes tensors of shape (image_height, image_width, color_channels), ignoring the batch size. If you are new to these dimensions, color_channels refers to (R,G,B). In this example, you will configure our CNN to process inputs of shape (32, 32, 3), which is the format of CIFAR images. You can do this by passing the argument input_shape to our first layer.
# TODO 1 - Write a code to configure our CNN to process inputs of CIFAR images.
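The TODO above is meant to be completed by the lab participant. Purely as a hedged illustration (not the official lab solution), one way to write the convolutional base described above, plus the dense head referred to in the next paragraph, is the following tf.keras sketch:

from tensorflow.keras import layers, models

# convolutional base: a stack of Conv2D and MaxPooling2D layers over (32, 32, 3) CIFAR images
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))

# dense head: flatten the (4, 4, 64) feature maps and add two Dense layers (10 CIFAR classes)
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))
model.summary()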
courses/machine_learning/deepdive2/image_understanding/labs/cnn.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
As you can see, our (4, 4, 64) outputs were flattened into vectors of shape (1024) before going through two Dense layers.

Lab Task 2: Compile and train the model
# TODO 2 - Write a code to compile and train a model
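Again as a hedged sketch rather than the official solution, compiling and training could look like this; it assumes the CIFAR-10 arrays train_images, train_labels, test_images, test_labels loaded earlier in the lab and the model built in Task 1, and uses the standard tf.keras API.

import tensorflow as tf

# compile with an optimizer, a loss suited to integer labels and logits, and an accuracy metric
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# train for a few epochs, validating on the test split
history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))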
courses/machine_learning/deepdive2/image_understanding/labs/cnn.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Lab Task 3: Evaluate the model
# TODO 3 - Write a code to evaluate a model.

# Print the test accuracy.
print(test_acc)
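As a hedged sketch of what this TODO might contain (assuming the trained model and the test split from the previous steps), the evaluation step could be:

# evaluate on the held-out test set; model.evaluate returns the loss and the accuracy metric
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)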
courses/machine_learning/deepdive2/image_understanding/labs/cnn.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Graphical excellence and integrity
Find a data-focused visualization on one of the following websites that is a positive example of the principles that Tufte describes in The Visual Display of Quantitative Information.
- Vox
- Upshot
- 538
- BuzzFeed
Upload the image for the visualization to this directory and display the image inline in this notebook.
# Add your filename and uncomment the following line:
Image(filename='main.0 (1).png')
assignments/assignment04/TheoryAndPracticeEx01.ipynb
ajhenrikson/phys202-2015-work
mit
Define manufactured solutions
We have that
$$S = \nabla\cdot(S_n\nabla_\perp\phi) = S_n\nabla_\perp^2\phi + \nabla S_n\cdot \nabla_\perp \phi = S_n\nabla_\perp^2\phi + \nabla_\perp S_n\cdot \nabla_\perp \phi$$
We will use the Delp2 operator for the perpendicular Laplace operator (as the y-derivatives vanish in cylinder geometry). We have
Delp2$(f)=g^{xx}\partial_x^2 f + g^{zz}\partial_z^2 f + 2g^{xz}\partial_x\partial_z f + G^1\partial_x f + G^3\partial_z f$
Using the cylinder geometry, we get that
Delp2$(f)=\partial_x^2 f + \frac{1}{x^2}\partial_z^2 f + \frac{1}{x}\partial_x f$
Further on, due to orthogonality we have that
$$\nabla_\perp S_n\cdot \nabla_\perp \phi = \mathbf{e}^i\cdot \mathbf{e}^i(\partial_i S_n)(\partial_i \phi) = g^{xx}(\partial_x S_n)(\partial_x \phi) + g^{zz}(\partial_z S_n)(\partial_z \phi) = (\partial_x S_n)(\partial_x \phi) + \frac{1}{x^2}(\partial_z S_n)(\partial_z \phi)$$
This gives
$$S = \nabla\cdot(S_n\nabla_\perp\phi) = S_n\partial_x^2 \phi + S_n\frac{1}{x^2}\partial_z^2 \phi + S_n\frac{1}{x}\partial_x \phi + (\partial_x S_n)(\partial_x \phi) + \frac{1}{x^2}(\partial_z S_n)(\partial_z \phi)$$
We will use this to calculate the analytical solution.

NOTE:
- z must be periodic
- The field $f(\rho, \theta)$ must be of class infinity in $z=0$ and $z=2\pi$
- The field $f(\rho, \theta)$ must be single valued when $\rho\to0$
- The field $f(\rho, \theta)$ must be continuous in the $\rho$ direction with $f(\rho, \theta + \pi)$
- Eventual BC in $\rho$ must be satisfied
# We need Lx
from boututils.options import BOUTOptions
myOpts = BOUTOptions(folder)
Lx = eval(myOpts.geom['Lx'])

# Two normal gaussians
# The gaussian
# In cartesian coordinates we would like
# f = exp(-(1/(2*w^2))*((x-x0)^2 + (y-y0)^2))
# In cylindrical coordinates, this translates to
# f = exp(-(1/(2*w^2))*(x^2 + y^2 + x0^2 + y0^2 - 2*(x*x0+y*y0) ))
#   = exp(-(1/(2*w^2))*(rho^2 + rho0^2 - 2*rho*rho0*(cos(theta)*cos(theta0)+sin(theta)*sin(theta0)) ))
#   = exp(-(1/(2*w^2))*(rho^2 + rho0^2 - 2*rho*rho0*(cos(theta - theta0)) ))

w = 0.8*Lx
rho0 = 0.3*Lx
theta0 = 5*pi/4
the_vars['phi'] = exp(-(1/(2*w**2))*(x**2 + rho0**2 - 2*x*rho0*(cos(z - theta0)) ))

w = 0.5*Lx
rho0 = 0.2*Lx
theta0 = 0
the_vars['S_n'] = exp(-(1/(2*w**2))*(x**2 + rho0**2 - 2*x*rho0*(cos(z - theta0)) ))
MES/divOfScalarTimesVector/2a-divSource/calculations/exactSolutions.ipynb
CELMA-project/CELMA
lgpl-3.0
Calculate the solution
the_vars['S'] = the_vars['S_n']*Delp2(the_vars['phi'], metric=metric)\
              + metric.g11*DDX(the_vars['S_n'], metric=metric)*DDX(the_vars['phi'], metric=metric)\
              + metric.g33*DDZ(the_vars['S_n'], metric=metric)*DDZ(the_vars['phi'], metric=metric)
MES/divOfScalarTimesVector/2a-divSource/calculations/exactSolutions.ipynb
CELMA-project/CELMA
lgpl-3.0
Analyzing structured data with TensorFlow Data Validation
This notebook demonstrates how TensorFlow Data Validation (TFDV) can be used to analyze and validate structured data, including generating descriptive statistics, inferring and fine-tuning a schema, checking for and fixing anomalies, and detecting drift and skew. It's important to understand your dataset's characteristics, including how it might change over time in your production pipeline. It's also important to look for anomalies in your data, and to compare your training, evaluation, and serving datasets to make sure that they're consistent. TFDV is the tool to achieve this. You are going to use a variant of the Cover Type dataset. For more information about the dataset refer to the dataset summary page.

Lab setup
Make sure to set the Jupyter kernel for this notebook to tfx.

Import packages and check the versions
import os
import tempfile
import tensorflow as tf
import tensorflow_data_validation as tfdv
import time

from apache_beam.options.pipeline_options import PipelineOptions, GoogleCloudOptions, StandardOptions, SetupOptions, DebugOptions, WorkerOptions
from google.protobuf import text_format
from tensorflow_metadata.proto.v0 import schema_pb2, statistics_pb2

print('TensorFlow version: {}'.format(tf.__version__))
print('TensorFlow Data Validation version: {}'.format(tfdv.__version__))
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Set the GCS locations of datasets used during the lab
TRAINING_DATASET='gs://workshop-datasets/covertype/training/dataset.csv'
TRAINING_DATASET_WITH_MISSING_VALUES='gs://workshop-datasets/covertype/training_missing/dataset.csv'
EVALUATION_DATASET='gs://workshop-datasets/covertype/evaluation/dataset.csv'
EVALUATION_DATASET_WITH_ANOMALIES='gs://workshop-datasets/covertype/evaluation_anomalies/dataset.csv'
SERVING_DATASET='gs://workshop-datasets/covertype/serving/dataset.csv'
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Set the local path to the lab's folder.
LAB_ROOT_FOLDER='/home/mlops-labs/lab-31-tfdv-structured-data'
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Configure GCP project, region, and staging bucket
PROJECT_ID = 'mlops-workshop'
REGION = 'us-central1'
STAGING_BUCKET = 'gs://{}-staging'.format(PROJECT_ID)
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Computing and visualizing descriptive statistics

TFDV can compute descriptive statistics that provide a quick overview of the data in terms of the features that are present and the shapes of their value distributions. Internally, TFDV uses Apache Beam's data-parallel processing framework to scale the computation of statistics over large datasets. For applications that wish to integrate more deeply with TFDV (e.g., attach statistics generation at the end of a data-generation pipeline), the API also exposes a Beam PTransform for statistics generation.

Let's start by using tfdv.generate_statistics_from_csv to compute statistics for the training data split. Notice that although your dataset is in Google Cloud Storage, you will run your computation locally on the notebook's host, using the Beam DirectRunner. Later in the lab, you will use Cloud Dataflow to calculate statistics on a remote distributed cluster.
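Before running the CSV-based cell below, a brief aside (not part of the original lab): if the data were exported as TFRecord files of serialized tf.Example records, the equivalent call would look roughly like the sketch below. The file pattern is a hypothetical placeholder; no such export is provided by this lab.

```python
# Hedged sketch: generating the same statistics from TFRecords of serialized
# tf.Example records instead of CSV. The GCS path is a placeholder.
tfrecord_stats = tfdv.generate_statistics_from_tfrecord(
    data_location='gs://your-bucket/covertype/training/*.tfrecord'
)
tfdv.visualize_statistics(tfrecord_stats)
```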
train_stats = tfdv.generate_statistics_from_csv( data_location=TRAINING_DATASET_WITH_MISSING_VALUES )
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
You can now use tfdv.visualize_statistics to create a visualization of your data. tfdv.visualize_statistics uses Facets, which provides succinct, interactive visualizations to aid in understanding and analyzing machine learning datasets.
tfdv.visualize_statistics(train_stats)
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
The interactive widget you see is Facets Overview.

- Numeric features and categorical features are visualized separately, including charts showing the distributions for each feature.
- Features with missing or zero values display a percentage in red as a visual indicator that there may be issues with examples in those features. The percentage is the percentage of examples that have missing or zero values for that feature.
- Try clicking "expand" above the charts to change the display.
- Try hovering over bars in the charts to display bucket ranges and counts.
- Try switching between the log and linear scales.
- Try selecting "quantiles" from the "Chart to show" menu, and hover over the markers to show the quantile percentages.

Inferring Schema

Now let's use tfdv.infer_schema to create a schema for the data. A schema defines constraints for the data that are relevant for ML. Example constraints include the data type of each feature, whether it's numerical or categorical, or the frequency of its presence in the data. For categorical features the schema also defines the domain - the list of acceptable values. Since writing a schema can be a tedious task, especially for datasets with lots of features, TFDV provides a method to generate an initial version of the schema based on the descriptive statistics.

Infer the schema from the training dataset statistics
schema = tfdv.infer_schema(train_stats)
tfdv.display_schema(schema=schema)
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
In general, TFDV uses conservative heuristics to infer stable data properties from the statistics in order to avoid overfitting the schema to the specific dataset. It is strongly advised to review the inferred schema and refine it as needed, to capture any domain knowledge about the data that TFDV's heuristics might have missed.

In our case tfdv.infer_schema did not interpret the Soil_Type and Cover_Type fields properly. Although both fields are encoded as integers, they should be interpreted as categorical rather than numeric. You can use TFDV to manually update the schema, including specifying which features are categorical and which ones are quantitative, and defining feature domains.

Fine tune the schema

You are going to modify the schema:

- Mark Soil_Type and Cover_Type as categorical features. Notice that at this point you don't set the domain of Soil_Type, as enumerating all possible values is not possible without a full scan of the dataset. After you re-generate the statistics using the correct feature designations, you can retrieve the domain from the new statistics and finalize the schema.
- Set constraints on the values of the Slope feature. You know that the slope is measured in degrees of arc and should be in the 0-90 range.
tfdv.get_feature(schema, 'Soil_Type').type = schema_pb2.FeatureType.BYTES
tfdv.set_domain(schema, 'Soil_Type', schema_pb2.StringDomain(name='Soil_Type', value=[]))

tfdv.set_domain(schema, 'Cover_Type', schema_pb2.IntDomain(name='Cover_Type', min=1, max=7, is_categorical=True))

tfdv.get_feature(schema, 'Slope').type = schema_pb2.FeatureType.FLOAT
tfdv.set_domain(schema, 'Slope', schema_pb2.FloatDomain(name='Slope', min=0, max=90))

tfdv.display_schema(schema=schema)
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Generate new statistics using the updated schema.
stats_options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)

train_stats = tfdv.generate_statistics_from_csv(
    data_location=TRAINING_DATASET_WITH_MISSING_VALUES,
    stats_options=stats_options,
)

tfdv.visualize_statistics(train_stats)
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Finalize the schema

The train_stats object is an instance of the DatasetFeatureStatisticsList class from the statistics_pb2 module, which is a Python wrapper around the statistics.proto protobuf. You can use the protobuf Python interface to retrieve the generated statistics, including the inferred domains of categorical features.
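Before pulling out the Soil_Type domain in the next cell, here is a generic, hedged sketch (not in the original lab) of walking that proto; the field names follow the statistics.proto definition used by TFDV:

```python
# Illustrative walk over the DatasetFeatureStatisticsList proto: list each
# feature's name and its type enum value.
for feature in train_stats.datasets[0].features:
    print(feature.path.step[0], feature.type)
```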
soil_type_stats = [feature for feature in train_stats.datasets[0].features
                   if feature.path.step[0] == 'Soil_Type'][0].string_stats
soil_type_domain = [bucket.label for bucket in soil_type_stats.rank_histogram.buckets]

tfdv.set_domain(schema, 'Soil_Type', schema_pb2.StringDomain(name='Soil_Type', value=soil_type_domain))
tfdv.display_schema(schema=schema)
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Creating statistics using Cloud Dataflow

Previously, you created descriptive statistics using local compute. This may work for smaller datasets, but for large datasets you may not have enough local compute power. The tfdv.generate_statistics_* functions can utilize DataflowRunner to run Beam processing on a distributed Dataflow cluster.

To run TFDV on Google Cloud Dataflow, the TFDV library must be installed on the Dataflow workers. There are different ways to install additional packages on Dataflow. You are going to use the Python setup.py file approach. You also configure tfdv.generate_statistics_from_csv to use the final schema created in the previous steps.

Configure Dataflow settings

Create the setup.py configured to install TFDV.
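As an aside (not used in this lab), pure-Python pip dependencies can also be shipped to Dataflow workers with a requirements file. A minimal sketch, assuming a requirements.txt listing tensorflow_data_validation sits next to the notebook:

```python
# Hedged alternative to the setup.py approach: ship pip dependencies with a
# requirements file (assumes ./requirements.txt exists and lists
# tensorflow_data_validation==0.15.0).
from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions

alt_options = PipelineOptions()
alt_options.view_as(SetupOptions).requirements_file = 'requirements.txt'
```

The lab sticks with the setup.py approach, written out below.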
%%writefile setup.py

from setuptools import setup

setup(
    name='tfdv',
    description='TFDV Runtime.',
    version='0.1',
    install_requires=[
        'tensorflow_data_validation==0.15.0'
    ]
)
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Regenerate statistics

Re-generate the statistics using Dataflow and the final schema. You can monitor the job progress using the Dataflow UI.
options = PipelineOptions()
options.view_as(GoogleCloudOptions).project = PROJECT_ID
options.view_as(GoogleCloudOptions).region = REGION
options.view_as(GoogleCloudOptions).job_name = "tfdv-{}".format(time.strftime("%Y%m%d-%H%M%S"))
options.view_as(GoogleCloudOptions).staging_location = STAGING_BUCKET + '/staging/'
options.view_as(GoogleCloudOptions).temp_location = STAGING_BUCKET + '/tmp/'
options.view_as(StandardOptions).runner = 'DataflowRunner'
options.view_as(SetupOptions).setup_file = os.path.join(LAB_ROOT_FOLDER, 'setup.py')

stats_options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)

train_stats = tfdv.generate_statistics_from_csv(
    data_location=TRAINING_DATASET_WITH_MISSING_VALUES,
    stats_options=stats_options,
    pipeline_options=options,
    output_path=STAGING_BUCKET + '/output/'
)

tfdv.visualize_statistics(train_stats)
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Analyzing evaluation data

So far we've only been looking at the training data. It's important that our evaluation data is consistent with our training data, including that it uses the same schema. It's also important that the evaluation data includes examples of roughly the same ranges of values for our numerical features as our training data, so that our coverage of the loss surface during evaluation is roughly the same as during training. The same is true for categorical features. Otherwise, we may have training issues that are not identified during evaluation, because we didn't evaluate part of our loss surface.

You will now generate statistics for the evaluation split and visualize both training and evaluation splits on the same chart:

- The training and evaluation datasets overlay, making it easy to compare them.
- The charts now include a percentages view, which can be combined with log or the default linear scales.
- Click expand on the Numeric Features chart, and select the log scale. Review the Slope feature, and notice the difference in the max. Will that cause problems?
stats_options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)

eval_stats = tfdv.generate_statistics_from_csv(
    data_location=EVALUATION_DATASET_WITH_ANOMALIES,
    stats_options=stats_options
)

tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
                          lhs_name='EVAL DATASET', rhs_name='TRAIN_DATASET')
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Checking for anomalies

Does our evaluation dataset match the schema from our training dataset? This is especially important for categorical features, where we want to identify the range of acceptable values. What would happen if we tried to evaluate using data with categorical feature values that were not in our training dataset? What about numeric features that are outside the ranges in our training dataset?
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Fixing evaluation anomalies in the schema

It looks like we have some new values for Soil_Type and some out-of-range values for Slope in our evaluation data that we didn't have in our training data. Whether these should be considered anomalies depends on our domain knowledge of the data. If an anomaly truly indicates a data error, then the underlying data should be fixed. Otherwise, we can simply update the schema to include the values in the eval dataset.

In our case, you are going to add the 5151 value to the domain of Soil_Type, as 5151 is a valid USFS Ecological Landtype Units code representing an unspecified soil type. The out-of-range values in Slope are data errors and should be fixed at the source.
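As a hedged aside (not the fix applied in this lab), when many new categorical values are expected over time, the schema can instead be relaxed to tolerate a small fraction of out-of-domain values rather than enumerating each one:

```python
# Alternative to appending individual values: allow up to 10% of Soil_Type
# values to fall outside the listed domain.
tfdv.get_feature(schema, 'Soil_Type').distribution_constraints.min_domain_mass = 0.9
```

Here, though, you simply append the single missing value to the domain, as the next cell does.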
tfdv.get_domain(schema, 'Soil_Type').value.append('5151')
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Re-validate with the updated schema
updated_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(updated_anomalies)
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
The unexpected string values error in Soil_Type is gone, but the out-of-range error in Slope is still there. Let's pretend you have fixed the data at the source and re-validate using the evaluation split without the corrupted Slope values.
stats_options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)

eval_stats = tfdv.generate_statistics_from_csv(
    data_location=EVALUATION_DATASET,
    stats_options=stats_options
)

updated_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(updated_anomalies)

tfdv.display_schema(schema=schema)
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Schema environments

In supervised learning we need to include labels in our dataset, but when we serve the model for inference the labels will not be included. In cases like that, introducing slight schema variations is necessary. For example, in this dataset the Cover_Type feature is included as the label for training, but it's missing in the serving data. If you validate the serving data statistics against the current schema you get an anomaly.
stats_options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)

eval_stats = tfdv.generate_statistics_from_csv(
    data_location=SERVING_DATASET,
    stats_options=stats_options
)

serving_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(serving_anomalies)
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Environments can be used to address such scenarios. In particular, specific features in the schema can be associated with specific environments.
schema.default_environment.append('TRAINING')
schema.default_environment.append('SERVING')

tfdv.get_feature(schema, 'Cover_Type').not_in_environment.append('SERVING')
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
If you validate the serving statistics against the SERVING environment in the schema, you will not get an anomaly.
serving_anomalies = tfdv.validate_statistics(eval_stats, schema, environment='SERVING')
tfdv.display_anomalies(serving_anomalies)
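Related to this serving check, TFDV can also flag training/serving skew, which the introduction of this notebook mentions. A hedged sketch (not part of the original lab): the threshold is an arbitrary example value, and eval_stats here holds the serving split statistics computed above.

```python
# Illustrative skew check between training and serving statistics for a
# categorical feature; the 0.01 threshold is an example, not a recommendation.
tfdv.get_feature(schema, 'Soil_Type').skew_comparator.infinity_norm.threshold = 0.01

skew_anomalies = tfdv.validate_statistics(statistics=train_stats,
                                          schema=schema,
                                          serving_statistics=eval_stats)
tfdv.display_anomalies(skew_anomalies)
```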
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Freezing the schema

When the schema is finalized it can be saved as a text file and managed under source control like any other code artifact.
output_dir = os.path.join(tempfile.mkdtemp(), 'covertype_schema')

tf.io.gfile.makedirs(output_dir)
schema_file = os.path.join(output_dir, 'schema.pbtxt')
tfdv.write_schema_text(schema, schema_file)

!cat {schema_file}
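As an illustrative follow-up (not part of the original lab), the frozen schema can later be loaded back from the saved text file, for example in a downstream pipeline step:

```python
# Reload the schema written above and render it, to confirm the round trip.
reloaded_schema = tfdv.load_schema_text(schema_file)
tfdv.display_schema(schema=reloaded_schema)
```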
examples/tfdv-structured-data/tfdv-covertype.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Table of Contents

1.- Anaconda
2.- GIT
3.- IPython
4.- Jupyter Notebook
5.- Inside IPython and Kernels
6.- Magics

<div id='anaconda' />

1.- Anaconda

Although Python is an open-source, cross-platform language, installing it with the usual scientific packages used to be overly complicated. Fortunately, there is now an all-in-one scientific Python distribution, Anaconda (by Continuum Analytics), that is free, cross-platform, and easy to install.

Note: There are other distributions and installation options (like Canopy, WinPython, Python(x, y), and others).

Why use Anaconda:
1. User-level install of the version of Python you want.
2. Able to install/update packages completely independently of system libraries or admin privileges.
3. No risk of messing up required system libraries.
4. Comes with the conda manager, which allows us to handle packages and manage environments.
5. It "completely" solves the problem of package dependencies.
6. The most important scientific packages (NumPy, SciPy, Scikit-Learn and others) are compiled with MKL support.
7. Many scientific communities are using it!

Note: In this course we will use Python 3.

Installation

Download the installation script here. Run in a terminal:
```bash
bash Anaconda3-4.3.1-Linux-x86_64.sh
```
Then modify the PATH environment variable in your ~/.bashrc, appending the next line:
```bash
export PATH=~/anaconda3/bin:$PATH
```
Run source ~/.bashrc and test your installation by calling the python interpreter!

Conda and useful commands

Install packages
```bash
conda install package_name
```
Update packages
```bash
conda update package_name
conda update --all
```
Search packages
```bash
conda search package_pattern
```
Clean installation
```bash
conda clean {--lock, --tarballs, --index-cache, --packages, --source-cache, --all}
```

Environments

Isolated distributions of packages.

Create an environment
```bash
conda create --name env_name python=version packages_to_install_in_env
conda create --name python2 python=2.7 anaconda
conda create --name python26 python=2.6 python
```
Switch to an environment
```bash
source activate env_name
```
List all available environments
```bash
conda info --envs
```
Delete an environment
```bash
conda remove --name env_name --all
```
Important Note: If you install packages with pip, they will be installed in the currently active environment.

For more info about conda see here.

<div id='git' />

2.- Git

Git is a version control system (VCS) for tracking changes in computer files and coordinating work on those files among multiple people. It is primarily used for software development, but it can be used to keep track of changes in any files. As a distributed revision control system it is aimed at speed, data integrity, and support for distributed, non-linear workflows.

Online providers supporting Git include GitHub (https://github.com), Bitbucket (https://bitbucket.org), Google code (https://code.google.com), Gitorious (https://gitorious.org), and SourceForge (https://sourceforge.net).

In order to get your git repository ready for use, follow these instructions:

1. Create the project directory.
```bash
mkdir project && cd project
```
2. Initialize the local directory as a Git repository.
```bash
git init
```
3. Add the files in your new local repository. This stages them for the first commit.
```bash
touch README
git add .
# To unstage a file, use 'git reset HEAD YOUR-FILE'.
```
4. Commit the files that you've staged in your local repository.
```bash
git commit -m "First commit"
# To remove this commit and modify the file, use 'git reset --soft HEAD~1',
# then add and commit the file again.
```
5. Add the URL for the remote repository where your local repository will be pushed.
```bash
git remote add origin remote_repository_URL  # Sets the new remote
git remote -v                                # Verifies the new remote URL
```
6. Push the changes in your local repository to GitHub.
```bash
git push -u origin master
```

<div id='ipython' />

3.- IPython

IPython is just an improved version of the standard Python shell that provides tools for interactive computing in Python. Here are some cool features of IPython:

- Better syntax highlighting.
- Code completion.
- Direct access to bash/linux commands (cd, ls, pwd, rm, mkdir, etc.).
- Additional commands can be executed with: !command.
- The who command to see defined variables in the current session.
- Inspect objects with ?.
- And magics, which we will see briefly.

<div id='jupyter' />

4.- Jupyter Notebook

It is a web-based interactive environment that combines code, rich text, images, videos, animations, mathematics, and plots into a single document. This modern tool is an ideal gateway to high-performance numerical computing and data science in Python.

New paragraph

This is rich text with links, equations:

$$ \hat{f}(\xi) = \int_{-\infty}^{+\infty} f(x) \, \mathrm{e}^{-i \xi x} dx, $$

code with syntax highlighting

```python
def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n-1) + fibonacci(n-2)
```

images:

and plots:
import numpy as np
import matplotlib.pyplot as plt

xgrid = np.linspace(-3, 3, 50)
f1 = np.exp(-xgrid**2)
f2 = np.tanh(xgrid)

plt.figure(figsize=(8, 6))
plt.plot(xgrid, f1, 'bo-')
plt.plot(xgrid, f2, 'ro-')
plt.title('Just a demo plot')
plt.grid()
plt.show()
01_intro/01_intro.ipynb
mavillan/SciProg
gpl-3.0
IPython also comes with a sophisticated display system that lets us insert rich web elements in the notebook. Here you can see an example of how to add YouTube videos to a notebook.
from IPython.display import YouTubeVideo

YouTubeVideo('HrxX9TBj2zY')
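As an illustrative aside (not part of the original notebook), other rich elements can be embedded the same way through IPython.display; the image URL below is a placeholder:

```python
# Embed raw HTML and a remote image in the notebook output.
from IPython.display import HTML, Image

display(HTML('<b>Bold HTML rendered inside the notebook</b>'))
Image(url='https://example.com/some_image.png')  # placeholder URL
```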
01_intro/01_intro.ipynb
mavillan/SciProg
gpl-3.0
<div id='inside' />

5.- Inside IPython and Kernels

The IPython Kernel is a separate IPython process which is responsible for running user code, and things like computing possible completions. Frontends, like the notebook or the Qt console, communicate with the IPython Kernel using JSON messages sent over ZeroMQ sockets.

The core execution machinery for the kernel is shared with terminal IPython:

A kernel process can be connected to more than one frontend simultaneously. In this case, the different frontends will have access to the same variables.

The Client-Server architecture

The Notebook frontend does something extra. In addition to running your code, it stores code and output, together with markdown notes, in an editable document called a notebook. When you save it, this is sent from your browser to the notebook server, which saves it on disk as a JSON file with a .ipynb extension.

The notebook server, not the kernel, is responsible for saving and loading notebooks, so you can edit notebooks even if you don't have the kernel for that language; you just won't be able to run code. The kernel doesn't know anything about the notebook document: it just gets sent cells of code to execute when the user runs them.

Other Kernels

There are two ways to develop a kernel for another language. Wrapper kernels reuse the communications machinery from IPython, and implement only the core execution part. Native kernels implement execution and communications in the target language:

Note: To see a list of all available kernels (and installation instructions) visit here.

Convert notebooks to other formats

It is also possible to convert the original JSON notebook to the following formats: html, latex, pdf, rst, markdown and script. For that you must run
```bash
jupyter-nbconvert --to FORMAT notebook.ipynb
```
with FORMAT as one of the above options. Let's convert this notebook to HTML!

<div id='magics' />

6.- Magics

IPython magics are custom commands that let you interact with your OS and filesystem. There are line magics % (which only affect the behavior of that line) and cell magics %% (which affect the whole cell). Here we test some useful magics:
# this will list all magic commands
%lsmagic

# also work in ls, cd, mkdir, etc
%pwd

%history

# this will execute and show the output of the program
%run ./hola_mundo.py

def naive_loop():
    for i in range(100):
        for j in range(100):
            for k in range(100):
                a = 1+1
    return None

%timeit -n 10 naive_loop()

%time naive_loop()

%%bash
cd ..
ls
01_intro/01_intro.ipynb
mavillan/SciProg
gpl-3.0
The %%capture cell magic lets you capture the standard output and error output of some code into a Python variable. Here is an example (the outputs are captured in the output Python variable).
%%capture output
!ls

%%writefile myfile.txt
Holanda que talca!

!cat myfile.txt

!rm myfile.txt
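As an illustrative follow-up (not in the original notebook), the captured text is then available on the output object created by %%capture:

```python
# Inspect what %%capture stored.
print(output.stdout)   # the captured standard output as a string
output.show()          # replay the captured output in the notebook
```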
01_intro/01_intro.ipynb
mavillan/SciProg
gpl-3.0
Writing our own magics!

In this section we will create a new cell magic that compiles and executes C++ code in the Notebook.
from IPython.core.magic import register_cell_magic
01_intro/01_intro.ipynb
mavillan/SciProg
gpl-3.0
To create a new cell magic, we create a function that takes a line (containing possible options) and a cell's contents as its arguments, and we decorate it with @register_cell_magic.
@register_cell_magic
def cpp(line, cell):
    """Compile, execute C++ code, and return the standard output."""

    # We first retrieve the current IPython interpreter instance.
    ip = get_ipython()

    # We define the source and executable filenames.
    source_filename = '_temp.cpp'
    program_filename = '_temp'

    # We write the code to the C++ file.
    with open(source_filename, 'w') as f:
        f.write(cell)

    # We compile the C++ code into an executable.
    compile = ip.getoutput("g++ {0:s} -o {1:s}".format(
        source_filename, program_filename))

    # We execute the executable and return the output.
    output = ip.getoutput('./{0:s}'.format(program_filename))
    print('\n'.join(output))

%%cpp
#include<iostream>
int main()
{
    std::cout << "Hello world!";
}
01_intro/01_intro.ipynb
mavillan/SciProg
gpl-3.0
This cell magic is currently only available in your interactive session. To distribute it, you need to create an IPython extension. This is a regular Python module or package that extends IPython.

To create an IPython extension, copy the definition of the cpp() function (without the decorator) to a Python module, named cpp_ext.py for example. Then, add the following at the end of the file:

```python
def load_ipython_extension(ipython):
    """This function is called when the extension is loaded.
    It accepts an IPython InteractiveShell instance. We can register
    the magic with the `register_magic_function` method of the shell
    instance."""
    ipython.register_magic_function(cpp, 'cell')
```

Then, you can load the extension with %load_ext cpp_ext. The cpp_ext.py file needs to be in the PYTHONPATH, for example in the current directory.
%load_ext cpp_ext
01_intro/01_intro.ipynb
mavillan/SciProg
gpl-3.0
You can then use this integer model to index into an array of rock properties:
import numpy as np

vps = np.array([2320, 2350, 2350])
rhos = np.array([2650, 2600, 2620])
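As a hedged illustration (the name wedge below is a hypothetical stand-in for the integer model built earlier in the notebook, which is not shown here), the indexing step looks roughly like this:

```python
# `wedge` stands in for the earlier integer model: a 2D array whose entries
# are layer indices 0, 1, 2. The all-zeros array here is only a placeholder.
wedge = np.zeros((40, 100), dtype=int)

vp_model = vps[wedge]         # map each layer index to its Vp
rho_model = rhos[wedge]       # ...and to its density
impedance = vp_model * rho_model
print(impedance.shape)
```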
docs/_userguide/_A_quick_wedge_model.ipynb
agile-geoscience/bruges
apache-2.0