Dataset columns (with string-length ranges):
markdown: stringlengths 0 to 1.02M
code: stringlengths 0 to 832k
output: stringlengths 0 to 1.02M
license: stringlengths 3 to 36
path: stringlengths 6 to 265
repo_name: stringlengths 6 to 127
Hyperparameter Tuning in KNN: manually finding the optimal value of the n_neighbors parameter
# Find the optimal value of the n_neighbors parameter
models = {f'KNN_{i}': KNeighborsClassifier(n_neighbors=i) for i in range(2,31)}

# run the model only for fold number 4, i.e. the 5th fold
accuracy, confusion_matrices, classification_report = run(fold=4, df=df_optimal_KNN, models=models, print_details=True)

x = [i for i in range(2,31)]
y = accuracy
plt.plot(x,y)
plt.xlabel('Number of Nearest Neighbors')
plt.ylabel('Accuracy Score')
plt.title("Optimal n_neighbors value")
plt.show()
_____no_output_____
MIT
Diabetes.ipynb
AryanMethil/Diabetes-KNN-vs-Naive-Bayes
Using Grid Search to find optimal values of n_neighbors and p
from sklearn import model_selection from sklearn import metrics def hyperparameter_tune_and_run(df,num_folds,models,target_name,param_grid,evaluation_metric,print_details=False): X=df.drop(labels=[target_name,'kfolds'],axis=1).values y=df[target_name] model_name,model_constructor=list(models.items())[0] model = model_selection.GridSearchCV( estimator = model_constructor, param_grid = param_grid, scoring = evaluation_metric, verbose = 10, cv = num_folds ) model.fit(X,y) if(print_details==True): print(f"Best score : {model.best_score_}") print("Best parameters : ") best_parameters=model.best_estimator_.get_params() for param_name in sorted(param_grid.keys()): print(f"\t{param_name}: {best_parameters[param_name]}") return model models={'KNN': KNeighborsClassifier()} param_grid = { "n_neighbors" : [i for i in range(2,31)], "p" : [2,3] } model = hyperparameter_tune_and_run(df=df_optimal_KNN,num_folds=5,models=models,target_name='Outcome',param_grid=param_grid,evaluation_metric="accuracy",print_details=True)
Fitting 5 folds for each of 58 candidates, totalling 290 fits [CV] n_neighbors=2, p=2 .............................................. [CV] .................. n_neighbors=2, p=2, score=0.770, total= 0.0s [CV] n_neighbors=2, p=2 .............................................. [CV] .................. n_neighbors=2, p=2, score=0.730, total= 0.0s [CV] n_neighbors=2, p=2 .............................................. [CV] .................. n_neighbors=2, p=2, score=0.785, total= 0.0s [CV] n_neighbors=2, p=2 .............................................. [CV] .................. n_neighbors=2, p=2, score=0.755, total= 0.0s [CV] n_neighbors=2, p=2 .............................................. [CV] .................. n_neighbors=2, p=2, score=0.740, total= 0.0s [CV] n_neighbors=2, p=3 .............................................. [CV] .................. n_neighbors=2, p=3, score=0.775, total= 0.0s [CV] n_neighbors=2, p=3 .............................................. [CV] .................. n_neighbors=2, p=3, score=0.725, total= 0.0s [CV] n_neighbors=2, p=3 .............................................. [CV] .................. n_neighbors=2, p=3, score=0.790, total= 0.0s [CV] n_neighbors=2, p=3 .............................................. [CV] .................. n_neighbors=2, p=3, score=0.745, total= 0.0s [CV] n_neighbors=2, p=3 .............................................. [CV] .................. n_neighbors=2, p=3, score=0.745, total= 0.0s [CV] n_neighbors=3, p=2 .............................................. [CV] .................. n_neighbors=3, p=2, score=0.750, total= 0.0s [CV] n_neighbors=3, p=2 .............................................. [CV] .................. n_neighbors=3, p=2, score=0.750, total= 0.0s [CV] n_neighbors=3, p=2 .............................................. [CV] .................. n_neighbors=3, p=2, score=0.740, total= 0.0s [CV] n_neighbors=3, p=2 .............................................. [CV] .................. n_neighbors=3, p=2, score=0.755, total= 0.0s [CV] n_neighbors=3, p=2 .............................................. [CV] .................. n_neighbors=3, p=2, score=0.745, total= 0.0s [CV] n_neighbors=3, p=3 .............................................. [CV] .................. n_neighbors=3, p=3, score=0.745, total= 0.0s [CV] n_neighbors=3, p=3 ..............................................
MIT
Diabetes.ipynb
AryanMethil/Diabetes-KNN-vs-Naive-Bayes
Comparison between KNN and NB
1. Dataset when KNN was considered for feature selection
2. Dataset when NB was considered for feature selection
# Compare between KNN and Naive Bayes models={ 'KNN': KNeighborsClassifier(n_neighbors=12,p=3), 'Gaussian Naive Bayes': GaussianNB(), } # accuracies => list of 5 lists. Each list will contain 3 values ie KNN accuracy, Gaussian Naive Bayes accuracies,confusion_matrices,classification_reports=[],[],[] for f in range(5): accuracy,confusion_matrix,classification_report=run(f,df_optimal_KNN,models=models,print_details=True) accuracies.append(accuracy) confusion_matrices.append(confusion_matrix) classification_reports.append(classification_report) print(accuracies) x_axis_labels=['Predicted Normal','Predicted Diabetic'] y_axis_labels=['True Normal','True Diabetic'] import seaborn as sns # Heatmap of confusion matrix of 5th fold of KNN sns.heatmap(confusion_matrices[4][0],xticklabels=x_axis_labels,yticklabels=y_axis_labels,annot=True) # Heatmap of confusion matrix of 5th fold of Naive Bayes sns.heatmap(confusion_matrices[4][1],xticklabels=x_axis_labels,yticklabels=y_axis_labels,annot=True) # Classification report of 5th fold of KNN print("KNN") print(classification_reports[4][0]) # Classification report of 5th fold of Naive Bayes print("Naive Bayes") print(classification_reports[4][1]) accuracies,confusion_matrices,classification_reports=[],[],[] for f in range(5): accuracy,confusion_matrix,classification_report=run(f,df_optimal_NB,models=models,print_details=True) accuracies.append(accuracy) confusion_matrices.append(confusion_matrix) classification_reports.append(classification_report) import seaborn as sns # Heatmap of confusion matrix of 5th fold of KNN sns.heatmap(confusion_matrices[4][0],xticklabels=x_axis_labels,yticklabels=y_axis_labels,annot=True) # Heatmap of confusion matrix of 5th fold of Naive Bayes sns.heatmap(confusion_matrices[4][1],xticklabels=x_axis_labels,yticklabels=y_axis_labels,annot=True) # Classification report of 5th fold of KNN print("KNN") print(classification_reports[4][0]) # Classification report of 5th fold of Naive Bayes print("Naive Bayes") print(classification_reports[4][1])
_____no_output_____
MIT
Diabetes.ipynb
AryanMethil/Diabetes-KNN-vs-Naive-Bayes
Getting info on Priming experiment dataset that's needed for modeling

Info:
* __Which gradient(s) to simulate?__
* For each gradient to simulate:
  * Infer total richness of starting community
  * Get distribution of total OTU abundances per fraction
    * Number of sequences per sample
  * Infer total abundance of each target taxon

User variables
import os

baseDir = '/home/nick/notebook/SIPSim/dev/priming_exp/'
workDir = os.path.join(baseDir, 'exp_info')

otuTableFile = '/var/seq_data/priming_exp/data/otu_table.txt'
otuTableSumFile = '/var/seq_data/priming_exp/data/otu_table_summary.txt'
metaDataFile = '/var/seq_data/priming_exp/data/allsample_metadata_nomock.txt'

#otuRepFile = '/var/seq_data/priming_exp/otusn.pick.fasta'
#otuTaxFile = '/var/seq_data/priming_exp/otusn_tax/otusn_tax_assignments.txt'
#genomeDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/genomes/'
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Init
import glob

%load_ext rpy2.ipython

%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
library(fitdistrplus)

if not os.path.isdir(workDir):
    os.makedirs(workDir)
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Loading OTU table (filter to just bulk samples)
%%R -i otuTableFile tbl = read.delim(otuTableFile, sep='\t') # filter tbl = tbl %>% select(ends_with('.NA')) tbl %>% ncol %>% print tbl[1:4,1:4] %%R tbl.h = tbl %>% gather('sample', 'count', 1:ncol(tbl)) %>% separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) tbl.h %>% head
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Which gradient(s) to simulate?
%%R -w 900 -h 400 tbl.h.s = tbl.h %>% group_by(sample) %>% summarize(total_count = sum(count)) %>% separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) ggplot(tbl.h.s, aes(day, total_count, color=rep %>% as.character)) + geom_point() + facet_grid(isotope ~ treatment) + theme( text = element_text(size=16) ) %%R tbl.h.s$sample[grepl('700', tbl.h.s$sample)] %>% as.vector %>% sort
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Notes

Samples to simulate:
* Isotope:
  * 12C vs 13C
* Treatment:
  * 700
* Days:
  * 14
  * 28
  * 45
%%R
# bulk soil samples for gradients to simulate
samples.to.use = c(
    "X12C.700.14.05.NA",
    "X12C.700.28.03.NA",
    "X12C.700.45.01.NA",
    "X13C.700.14.08.NA",
    "X13C.700.28.06.NA",
    "X13C.700.45.01.NA"
)
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Total richness of starting (bulk-soil) community

Method:
* Total number of OTUs in OTU table (i.e., gamma richness)
* Just looking at bulk soil samples

Loading just bulk soil
%%R -i otuTableFile tbl = read.delim(otuTableFile, sep='\t') # filter tbl = tbl %>% select(ends_with('.NA')) tbl$OTUId = rownames(tbl) tbl %>% ncol %>% print tbl[1:4,1:4] %%R tbl.h = tbl %>% gather('sample', 'count', 1:(ncol(tbl)-1)) %>% separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) tbl.h %>% head %%R -w 800 tbl.s = tbl.h %>% filter(count > 0) %>% group_by(sample, isotope, treatment, day, rep, fraction) %>% summarize(n_taxa = n()) ggplot(tbl.s, aes(day, n_taxa, color=rep %>% as.character)) + geom_point() + facet_grid(isotope ~ treatment) + theme_bw() + theme( text = element_text(size=16), axis.text.x = element_blank() ) %%R -w 800 -h 350 # filter to just target samples tbl.s.f = tbl.s %>% filter(sample %in% samples.to.use) ggplot(tbl.s.f, aes(day, n_taxa, fill=rep %>% as.character)) + geom_bar(stat='identity') + facet_grid(. ~ isotope) + labs(y = 'Number of taxa') + theme_bw() + theme( text = element_text(size=16), axis.text.x = element_blank() ) %%R message('Bulk soil total observed richness: ') tbl.s.f %>% select(-fraction) %>% as.data.frame %>% print
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Number of taxa in all fractions corresponding to each bulk soil sample
* Trying to see the difference between richness of bulk vs gradients (veil line effect)
%%R -i otuTableFile # loading OTU table tbl = read.delim(otuTableFile, sep='\t') %>% select(-ends_with('.NA')) tbl.h = tbl %>% gather('sample', 'count', 2:ncol(tbl)) %>% separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) tbl.h %>% head %%R # basename of fractions samples.to.use.base = gsub('\\.[0-9]+\\.NA', '', samples.to.use) samps = tbl.h$sample %>% unique fracs = sapply(samples.to.use.base, function(x) grep(x, samps, value=TRUE)) for (n in names(fracs)){ n.frac = length(fracs[[n]]) cat(n, '-->', 'Number of fraction samples: ', n.frac, '\n') } %%R # function for getting all OTUs in a sample n.OTUs = function(samples, otu.long){ otu.long.f = otu.long %>% filter(sample %in% samples, count > 0) n.OTUs = otu.long.f$OTUId %>% unique %>% length return(n.OTUs) } num.OTUs = lapply(fracs, n.OTUs, otu.long=tbl.h) num.OTUs = do.call(rbind, num.OTUs) %>% as.data.frame colnames(num.OTUs) = c('n_taxa') num.OTUs$sample = rownames(num.OTUs) num.OTUs %%R tbl.s.f %>% as.data.frame %%R # joining with bulk soil sample summary table num.OTUs$data = 'fractions' tbl.s.f$data = 'bulk_soil' tbl.j = rbind(num.OTUs, tbl.s.f %>% ungroup %>% select(sample, n_taxa, data)) %>% mutate(isotope = gsub('X|\\..+', '', sample), sample = gsub('\\.[0-9]+\\.NA', '', sample)) tbl.j %%R -h 300 -w 800 ggplot(tbl.j, aes(sample, n_taxa, fill=data)) + geom_bar(stat='identity', position='dodge') + facet_grid(. ~ isotope, scales='free_x') + labs(y = 'Number of OTUs') + theme( text = element_text(size=16) # axis.text.x = element_text(angle=90) )
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Distribution of total sequences per fraction
* Number of sequences per sample
* Using all samples to assess this one
* Just fraction samples

__Method:__
* Total number of sequences (total abundance) per sample

Loading OTU table
%%R -i otuTableFile tbl = read.delim(otuTableFile, sep='\t') # filter tbl = tbl %>% select(-ends_with('.NA')) tbl %>% ncol %>% print tbl[1:4,1:4] %%R tbl.h = tbl %>% gather('sample', 'count', 2:ncol(tbl)) %>% separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) tbl.h %>% head %%R -h 400 tbl.h.s = tbl.h %>% group_by(sample) %>% summarize(total_seqs = sum(count)) p = ggplot(tbl.h.s, aes(total_seqs)) + theme_bw() + theme( text = element_text(size=16) ) p1 = p + geom_histogram(binwidth=200) p2 = p + geom_density() grid.arrange(p1,p2,ncol=1)
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Distribution fitting
%%R -w 700 -h 350 plotdist(tbl.h.s$total_seqs) %%R -w 450 -h 400 descdist(tbl.h.s$total_seqs, boot=1000) %%R f.n = fitdist(tbl.h.s$total_seqs, 'norm') f.ln = fitdist(tbl.h.s$total_seqs, 'lnorm') f.ll = fitdist(tbl.h.s$total_seqs, 'logis') #f.c = fitdist(tbl.s$count, 'cauchy') f.list = list(f.n, f.ln, f.ll) plot.legend = c('normal', 'log-normal', 'logistic') par(mfrow = c(2,1)) denscomp(f.list, legendtext=plot.legend) qqcomp(f.list, legendtext=plot.legend) %%R gofstat(list(f.n, f.ln, f.ll), fitnames=plot.legend) %%R summary(f.ln)
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Notes:
* best fit:
  * lognormal
  * mean = 10.113
  * sd = 1.192

Does sample size correlate to buoyant density?

Loading OTU table
%%R -i otuTableFile tbl = read.delim(otuTableFile, sep='\t') # filter tbl = tbl %>% select(-ends_with('.NA')) %>% select(-starts_with('X0MC')) tbl = tbl %>% gather('sample', 'count', 2:ncol(tbl)) %>% mutate(sample = gsub('^X', '', sample)) tbl %>% head %%R # summarize tbl.s = tbl %>% group_by(sample) %>% summarize(total_count = sum(count)) tbl.s %>% head(n=3)
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Loading metadata
%%R -i metaDataFile tbl.meta = read.delim(metaDataFile, sep='\t') tbl.meta %>% head(n=3)
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Determining association
%%R -w 700 tbl.j = inner_join(tbl.s, tbl.meta, c('sample' = 'Sample')) ggplot(tbl.j, aes(Density, total_count, color=rep)) + geom_point() + facet_grid(Treatment ~ Day) %%R -w 600 -h 350 ggplot(tbl.j, aes(Density, total_count)) + geom_point(aes(color=Treatment)) + geom_smooth(method='lm') + labs(x='Buoyant density', y='Total sequences') + theme_bw() + theme( text = element_text(size=16) )
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Number of taxa along the gradient
%%R tbl.s = tbl %>% filter(count > 0) %>% group_by(sample) %>% summarize(n_taxa = sum(count > 0)) tbl.j = inner_join(tbl.s, tbl.meta, c('sample' = 'Sample')) tbl.j %>% head(n=3) %%R -w 900 -h 600 ggplot(tbl.j, aes(Density, n_taxa, fill=rep, color=rep)) + #geom_area(stat='identity', alpha=0.5, position='dodge') + geom_point() + geom_line() + labs(x='Buoyant density', y='Number of taxa') + facet_grid(Treatment ~ Day) + theme_bw() + theme( text = element_text(size=16), legend.position = 'none' )
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Notes:
* Many taxa out to the tails of the gradient.
* It seems that the DNA fragments were quite diffuse in the gradients.

Total abundance of each target taxon: bulk soil approach
* Getting relative abundances from bulk soil samples
  * This has the caveat of likely undersampling richness vs using all gradient fraction samples.
    * i.e., veil line effect
%%R -i otuTableFile # loading OTU table tbl = read.delim(otuTableFile, sep='\t') # filter tbl = tbl %>% select(matches('OTUId'), ends_with('.NA')) tbl %>% ncol %>% print tbl[1:4,1:4] %%R # long table format w/ selecting samples of interest tbl.h = tbl %>% gather('sample', 'count', 2:ncol(tbl)) %>% separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) %>% filter(sample %in% samples.to.use, count > 0) tbl.h %>% head %%R message('Number of samples: ', tbl.h$sample %>% unique %>% length) message('Number of OTUs: ', tbl.h$OTUId %>% unique %>% length) %%R tbl.hs = tbl.h %>% group_by(OTUId) %>% summarize( total_count = sum(count), mean_count = mean(count), median_count = median(count), sd_count = sd(count) ) %>% filter(total_count > 0) tbl.hs %>% head
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
For each sample, writing a table of OTU_ID and count
%%R -i workDir setwd(workDir) samps = tbl.h$sample %>% unique %>% as.vector for(samp in samps){ outFile = paste(c(samp, 'OTU.txt'), collapse='_') tbl.p = tbl.h %>% filter(sample == samp, count > 0) write.table(tbl.p, outFile, sep='\t', quote=F, row.names=F) message('Table written: ', outFile) message(' Number of OTUs: ', tbl.p %>% nrow) }
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Making directories for simulations
p = os.path.join(workDir, '*_OTU.txt') files = glob.glob(p) baseDir = os.path.split(workDir)[0] newDirs = [os.path.split(x)[1].rstrip('.NA_OTU.txt') for x in files] newDirs = [os.path.join(baseDir, x) for x in newDirs] for newDir,f in zip(newDirs, files): if not os.path.isdir(newDir): print 'Making new directory: {}'.format(newDir) os.makedirs(newDir) else: print 'Directory exists: {}'.format(newDir) # symlinking file linkPath = os.path.join(newDir, os.path.split(f)[1]) if not os.path.islink(linkPath): os.symlink(f, linkPath)
Directory exists: /home/nick/notebook/SIPSim/dev/priming_exp/X13C.700.28.06 Directory exists: /home/nick/notebook/SIPSim/dev/priming_exp/X12C.700.28.03 Directory exists: /home/nick/notebook/SIPSim/dev/priming_exp/X13C.700.14.08 Directory exists: /home/nick/notebook/SIPSim/dev/priming_exp/X13C.700.45.01 Directory exists: /home/nick/notebook/SIPSim/dev/priming_exp/X12C.700.45.01 Directory exists: /home/nick/notebook/SIPSim/dev/priming_exp/X12C.700.14.05
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Rank-abundance distribution for each sample
%%R -i otuTableFile tbl = read.delim(otuTableFile, sep='\t') # filter tbl = tbl %>% select(matches('OTUId'), ends_with('.NA')) tbl %>% ncol %>% print tbl[1:4,1:4] %%R # long table format w/ selecting samples of interest tbl.h = tbl %>% gather('sample', 'count', 2:ncol(tbl)) %>% separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) %>% filter(sample %in% samples.to.use, count > 0) tbl.h %>% head %%R # ranks of relative abundances tbl.r = tbl.h %>% group_by(sample) %>% mutate(perc_rel_abund = count / sum(count) * 100, rank = row_number(-perc_rel_abund)) %>% unite(day_rep, day, rep, sep='-') tbl.r %>% as.data.frame %>% head(n=3) %%R -w 900 -h 350 ggplot(tbl.r, aes(rank, perc_rel_abund)) + geom_point() + # labs(x='Buoyant density', y='Number of taxa') + facet_wrap(~ day_rep) + theme_bw() + theme( text = element_text(size=16), legend.position = 'none' )
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Taxon abundance range for each sample-fraction
%%R -i otuTableFile tbl = read.delim(otuTableFile, sep='\t') # filter tbl = tbl %>% select(-ends_with('.NA')) %>% select(-starts_with('X0MC')) tbl = tbl %>% gather('sample', 'count', 2:ncol(tbl)) %>% mutate(sample = gsub('^X', '', sample)) tbl %>% head %%R tbl.ar = tbl %>% #mutate(fraction = gsub('.+\\.', '', sample) %>% as.numeric) %>% #mutate(treatment = gsub('(.+)\\..+', '\\1', sample)) %>% group_by(sample) %>% mutate(rel_abund = count / sum(count)) %>% summarize(abund_range = max(rel_abund) - min(rel_abund)) %>% ungroup() %>% separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) tbl.ar %>% head(n=3) %%R -w 800 tbl.ar = tbl.ar %>% mutate(fraction = as.numeric(fraction)) ggplot(tbl.ar, aes(fraction, abund_range, fill=rep, color=rep)) + geom_point() + geom_line() + labs(x='Buoyant density', y='Range of relative abundance values') + facet_grid(treatment ~ day) + theme_bw() + theme( text = element_text(size=16), legend.position = 'none' )
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Total abundance of each target taxon: all fraction samples approach
* Getting relative abundances from all fraction samples for the gradient
  * I will need to calculate (mean|max?) relative abundances for each taxon and then re-scale so that cumsum = 1 (see the sketch below)
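As a side note, the re-scaling step mentioned in the last bullet is just a normalization of the per-taxon abundance statistics so that they sum to 1. A minimal Python sketch, assuming numpy and using hypothetical variable names (the notebook itself carries out the real calculation in R below):

```python
import numpy as np

# hypothetical per-taxon mean relative abundances taken across fraction samples
mean_rel_abund = np.array([0.30, 0.12, 0.05, 0.02, 0.01])

# re-scale so the values sum to 1 (i.e., the cumulative sum ends at 1)
rescaled = mean_rel_abund / mean_rel_abund.sum()

print(rescaled.sum())        # 1.0
print(np.cumsum(rescaled))   # last element is 1.0
```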
%%R -i otuTableFile # loading OTU table tbl = read.delim(otuTableFile, sep='\t') %>% select(-ends_with('.NA')) tbl.h = tbl %>% gather('sample', 'count', 2:ncol(tbl)) %>% separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) tbl.h %>% head %%R # basename of fractions samples.to.use.base = gsub('\\.[0-9]+\\.NA', '', samples.to.use) samps = tbl.h$sample %>% unique fracs = sapply(samples.to.use.base, function(x) grep(x, samps, value=TRUE)) for (n in names(fracs)){ n.frac = length(fracs[[n]]) cat(n, '-->', 'Number of fraction samples: ', n.frac, '\n') } %%R # function for getting mean OTU abundance from all fractions OTU.abund = function(samples, otu.long){ otu.rel.abund = otu.long %>% filter(sample %in% samples, count > 0) %>% ungroup() %>% group_by(sample) %>% mutate(total_count = sum(count)) %>% ungroup() %>% mutate(perc_abund = count / total_count * 100) %>% group_by(OTUId) %>% summarize(mean_perc_abund = mean(perc_abund), median_perc_abund = median(perc_abund), max_perc_abund = max(perc_abund)) return(otu.rel.abund) } ## calling function otu.rel.abund = lapply(fracs, OTU.abund, otu.long=tbl.h) otu.rel.abund = do.call(rbind, otu.rel.abund) %>% as.data.frame otu.rel.abund$sample = gsub('\\.[0-9]+$', '', rownames(otu.rel.abund)) otu.rel.abund %>% head %%R -h 600 -w 900 # plotting otu.rel.abund.l = otu.rel.abund %>% gather('abund_stat', 'value', mean_perc_abund, median_perc_abund, max_perc_abund) otu.rel.abund.l$OTUId = reorder(otu.rel.abund.l$OTUId, -otu.rel.abund.l$value) ggplot(otu.rel.abund.l, aes(OTUId, value, color=abund_stat)) + geom_point(shape='O', alpha=0.7) + scale_y_log10() + facet_grid(abund_stat ~ sample) + theme_bw() + theme( text = element_text(size=16), axis.text.x = element_blank(), legend.position = 'none' )
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
For each sample, writing a table of OTU_ID and count
%%R -i workDir setwd(workDir) # each sample is a file samps = otu.rel.abund.l$sample %>% unique %>% as.vector for(samp in samps){ outFile = paste(c(samp, 'frac_OTU.txt'), collapse='_') tbl.p = otu.rel.abund %>% filter(sample == samp, mean_perc_abund > 0) write.table(tbl.p, outFile, sep='\t', quote=F, row.names=F) cat('Table written: ', outFile, '\n') cat(' Number of OTUs: ', tbl.p %>% nrow, '\n') }
_____no_output_____
MIT
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb
arischwartz/test
Here Z ∼ binomial(1, 0.5) is the protected attribute. Features related to the protected attribute are sampled from X ∼ N(μ, I) with μ = 1 when Z = 0 and μ = 2 when Z = 1. Other features not related to the protected attribute Z are generated with μ = 0. First 4 features are correlated with z. The first 10 features are correlated with y according to a logistic regression model y = logit(β^TX) with β ∼ N(μ_β, 0.1), where μ_β = 5 for the first 6 features and μ_β = 0 for all others.
z = np.zeros(1000) for j in range(1000): z[j] = np.random.binomial(1,0.5) x_correlated = np.zeros((1000,4)) x_uncorrelated = np.zeros((1000,16)) for j in range(16): for i in range (1000): if j < 4: x_correlated[i][j] = np.random.normal((z[i]*2 + 10), 1, 1) x_uncorrelated[i][j] = np.random.normal(0,1,1) x = np.concatenate((x_correlated,x_uncorrelated),axis=1) x = np.concatenate((x,np.reshape(z,(1000,1))),axis=1) b = np.zeros(21) noise = np.random.normal(0,1,1000) for i in range (10): b[i] = np.random.normal(5,0.1,1) y = logit(NormalizeData(np.dot(x,b)) + noise.T) for i in range (len(y)): if y[i] > 0: y[i] = int(1) else: y[i] = int(0) column = [] for i in range(21): column.append(str(i+1)) dataframe = pd.DataFrame(x, columns = column) model_dtree = d_tree.DecisionTree(20,0,'21',1) model_dtree.fit(dataframe,y) fairness_importance = model_dtree._fairness_importance() feature = [] score = [] for key, value in fairness_importance.items(): print(key, value) feature.append(key) score.append((value)) utils.draw_plot(feature,score,"Results/Synthetic/eqop.pdf") model_dtree_dp = d_tree.DecisionTree(20,0,'21',2) model_dtree_dp.fit(dataframe,y) fairness_importance_dp = model_dtree_dp._fairness_importance() feature = [] score_dp = [] for key, value in fairness_importance_dp.items(): print(key, value) feature.append(key) score_dp.append((value)) utils.draw_plot(feature,score_dp,"Results/Synthetic/DP.pdf") count_z0 = count_z1 = 0 count0 = count1 = 0 z0 = z1 = 0 for i in range (1000): if y[i] == 0: count0+=1 else: count1+=1 if x[i][20] == 0: count_z0 += 1 else: count_z1 +=1 if x[i][20] == 0: z0+=1 else: z1+=1 print(count0,count1, count_z0,count_z1,z0,z1)
809 191 104 87 498 502
MIT
benchmark/synthetic.ipynb
DebolinaHalder/599
![Annif_logo.png](attachment:Annif_logo.png)

Annif tutorial with Jupyter notebook

[Annif](https://annif.org/) is an open-source subject indexing tool for new documents and aims to improve the discoverability of vast amounts of electronic documents. To accomplish the automatic subject indexing task, Annif uses ML/NLP algorithms to leverage existing training data in the form of a subject vocabulary and metadata. For the purpose of a test use-case service of Annif at CSC, small subsets of yso-finna-theses records (i.e., the yso-finna-small.tsv.gz file as provided by the Annif tutorial dataset) from the [Finna.fi](https://www.finna.fi/?lng=en-gb) discovery service are used. The backend models of this Annif instance include a handful of different subject indexing algorithms, namely Maui, TF-IDF, ensemble and Omikuji (Parabel/Bonsai) methods. These models are trained in a supercomputing (Puhti) environment using a Singularity container for the Annif application. This tutorial uses REST API calls to interact with a given Annif webserver, which can be either the [Annif webserver](https://api.annif.org/v1/ui/) hosted by the National Library of Finland or a [test case webserver](https://annif.rahtiapp.fi/v1/ui/) hosted by CSC

Learning Objectives

Upon completion of this tutorial, you will learn how to:
- List available trained projects in a given Annif webserver
- Perform subject indexing with Annif using different projects (subject vocabularies and existing metadata)

List all available projects from a given Annif webserver

All available projects from an Annif webserver can be retrieved using an Annif REST API GET call.

>**Note**: One can make the REST API call using the following *curl* command in a command-line environment: curl -X GET --header 'Accept: application/json' 'https://annif.rahtiapp.fi/v1/projects'

Below is the Python way of making a REST API GET call to the Annif server and then converting the resulting JSON data into a table
import requests import json from pandas import json_normalize headers = {'Accept': 'application/json'} base_url='https://annif.rahtiapp.fi/v1/projects' # Annif webserver hosted by CSC #base_url='https://api.annif.org/v1/projects' # Annif webserver by NatLibFi response = requests.get(base_url, headers=headers) d=response.json() # print resulting json file: print(response.text) json_normalize(d['projects'])
_____no_output_____
Apache-2.0
annif.ipynb
CSCfi/annif-utils
Perform subject indexing with Annif

There are mainly six types of projects in the test case example of [Annif](https://annif.rahtiapp.fi) hosted at CSC, which runs on the Rahti container cloud. Let's see how to do subject indexing with each of these projects.

1. Perform subject indexing for your own text using the YSO TF-IDF project (projectid: yso-tfidf-en)

This can be accomplished using a swagger API POST call

> **Note**: Here is the curl command for subject indexing: curl -X POST --header 'Content-Type: application/x-www-form-urlencoded' --header 'Accept: application/json' -d 'text=frequently occurring or otherwise salient terms in the document are matched with terms in the vocabulary&limit=10' 'https://annif.rahtiapp.fi/v1/projects/yso-tfidf-en/suggest'

Below is the Python approach:
projectid='yso-tfidf-en' text='frequently occurring or otherwise salient terms in the document are matched with terms in the vocabulary' url= base_url+ '/' + projectid +'/suggest' data = {'text': text} headers = {'Content-Type': 'application/x-www-form-urlencoded','Accept': 'application/json'} response = requests.post(url, headers=headers, data=data) d=response.json() # print(response.text) display(json_normalize(d['results'])) data=json_normalize(d['results']) data.loc[:,['label','score','uri']].plot('label',kind='bar')
_____no_output_____
Apache-2.0
annif.ipynb
CSCfi/annif-utils
2. Perform subject indexing with the YSO ensemble project (projectid: 'yso-ensemble-en')

This can be accomplished using a swagger API POST call

> **Note**: curl command - curl -X POST --header 'Content-Type: application/x-www-form-urlencoded' --header 'Accept: application/json' -d 'text=frequently occurring or otherwise salient terms in the document are matched with terms in the vocabulary&limit=10' 'https://annif.rahtiapp.fi/v1/projects/yso-ensemble-en/suggest'

Using Python:
projectid='yso-ensemble-en' text='frequently occurring or otherwise salient terms in the document are matched with terms in the vocabulary' url= base_url+ '/' + projectid +'/suggest' data = {'text': text} headers = {'Content-Type': 'application/x-www-form-urlencoded','Accept': 'application/json'} response = requests.post(url, headers=headers, data=data) d=response.json() # print(response.text) display(json_normalize(d['results'])) data=json_normalize(d['results']) data.loc[:,['label','score','uri']].plot('label',kind='bar')
_____no_output_____
Apache-2.0
annif.ipynb
CSCfi/annif-utils
3. Perform subject indexing for your own text using the project 'yso-maui-en'

This can be accomplished using a swagger API POST call

> **curl command**: curl -X POST --header 'Content-Type: application/x-www-form-urlencoded' --header 'Accept: application/json' -d 'text=frequently occurring or otherwise salient terms in the document are matched with terms in the vocabulary&limit=10' 'https://annif.rahtiapp.fi/v1/projects/yso-maui-en/suggest'

Using the Python approach:
projectid='yso-maui-en' text='frequently occurring or otherwise salient terms in the document are matched with terms in the vocabulary' url= base_url+ '/' + projectid +'/suggest' data = {'text': text} headers = { 'Content-Type': 'application/x-www-form-urlencoded','Accept': 'application/json'} response = requests.post(url, headers=headers, data=data) d=response.json() # print(response.text) display(json_normalize(d['results'])) data=json_normalize(d['results']) data.loc[:,['label','score','uri']].plot('label',kind='bar')
_____no_output_____
Apache-2.0
annif.ipynb
CSCfi/annif-utils
4. Perform subject indexing for your own text using the project 'yso-omikuji-parabel-en'

This can be accomplished using a swagger API POST call

> **curl command**: curl -X POST --header 'Content-Type: application/x-www-form-urlencoded' --header 'Accept: application/json' -d 'text=frequently occurring or otherwise salient terms in the document are matched with terms in the vocabulary&limit=10' 'https://annif.rahtiapp.fi/v1/projects/yso-omikuji-parabel-en/suggest'

Using the Python approach:
projectid='yso-omikuji-parabel-en' text='frequently occurring or otherwise salient terms in the document are matched with terms in the vocabulary' url= base_url+ '/' + projectid +'/suggest' data = {'text': text} headers = {'Content-Type': 'application/x-www-form-urlencoded','Accept': 'application/json'} response = requests.post(url, headers=headers, data=data) d=response.json() # print(response.text) display(json_normalize(d['results'])) data=json_normalize(d['results']) data.loc[:,['label','score','uri']].plot('label',kind='bar')
_____no_output_____
Apache-2.0
annif.ipynb
CSCfi/annif-utils
5. Perform subject indexing for your own text using the project 'yso-omikuji-bonsai-en'

This can be accomplished using a swagger API POST call

> **curl command**: curl -X POST --header 'Content-Type: application/x-www-form-urlencoded' --header 'Accept: application/json' -d 'text=frequently occurring or otherwise salient terms in the document are matched with terms in the vocabulary&limit=10' 'https://annif.rahtiapp.fi/v1/projects/yso-omikuji-bonsai-en/suggest'

Using the Python approach:
projectid='yso-omikuji-bonsai-en' text='frequently occurring or otherwise salient terms in the document are matched with terms in the vocabulary' url= base_url+ '/' + projectid +'/suggest' data = {'text': text} headers = {'Content-Type': 'application/x-www-form-urlencoded','Accept': 'application/json'} response = requests.post(url, headers=headers, data=data) d=response.json() # print(response.text) display(json_normalize(d['results'])) data=json_normalize(d['results']) data.loc[:,['label','score','uri']].plot('label',kind='bar')
_____no_output_____
Apache-2.0
annif.ipynb
CSCfi/annif-utils
6. Perform subject indexing for your own text using the project 'yso-nn-ensemble-en'

This can be accomplished using a swagger API POST call

> **curl command**: curl -X POST --header 'Content-Type: application/x-www-form-urlencoded' --header 'Accept: application/json' -d 'text=frequently occurring or otherwise salient terms in the document are matched with terms in the vocabulary&limit=10' 'https://annif.rahtiapp.fi/v1/projects/yso-nn-ensemble-en/suggest'

Using the Python approach:
projectid='yso-nn-ensemble-en' text='frequently occurring or otherwise salient terms in the document are matched with terms in the vocabulary' url= base_url+ '/' + projectid +'/suggest' data = {'text': text} headers = { 'Content-Type': 'application/x-www-form-urlencoded','Accept': 'application/json'} response = requests.post(url, headers=headers, data=data) d=response.json() display(json_normalize(d['results'])) data=json_normalize(d['results']) data.loc[:,['label','score','uri']].plot('label',kind='bar')
_____no_output_____
Apache-2.0
annif.ipynb
CSCfi/annif-utils
Several exercises: Jupyter Notebook, iPython and ipyparallel, and HPC MPICH

The content of this notebook is borrowed extensively from Daan Van Hauwermeiren, from his tutorial on ipyparallel, stored on GitHub: https://github.com/DaanVanHauwermeiren/ipyparallel-tutorial/blob/master/02-ipyparallel-tutorial-direct-interface.ipynb. Many adaptations have been made to accommodate this demonstration and this HPC MPI environment.

Prior to running these steps in this notebook, the following details must be completed outside the context of Jupyter, and generally will need to be facilitated by a systems administrator with appropriate rights and knowledge of HPC and MPI.
1) HPC and MPICH have been configured, with compute nodes running
2) Related to HPC and MPICH, an NFS/NIS environment exists to facilitate the 'scientist' user environment across all of the computing resources in the cluster
3) The ipyparallel client/engine environment must be configured and started that supports MPI/MPICH
4) Ensure that the "IPython Cluster" called "mpi" in the JupyterHub is running

Once the above details have been accomplished, import the IPython ipyparallel module and create a Client instance
# import the IPython ipyparallel module and create a Client instance
# In this demonstration, an MPI-oriented client is created, referenced by the 'mpi' profile
# There are 4 mpi engines that have been configured and running on 4 separate HPC compute nodes
import ipyparallel as ipp
rc = ipp.Client(profile='mpi')

# Show that there are engines running, responding
rc.ids

# Create an ipyparallel object, constructed via list-access to the client:
vobject = rc[:]
_____no_output_____
CC0-1.0
mpi.ipynb
craiggardner/jupyter_notebooks
Python’s builtin map() function allows a function to be applied to a sequence element-by-element. This type of code is typically trivial to parallelize. In fact, since IPython’s interface is all about functions anyway, you can just use the builtin map() with a RemoteFunction, or a vobject’s map() method.

Do an arbitrary serial computation using just the power of the HPC head node, and show how long it takes to compute
%%time
serial_result = list(map(lambda x:x**2**2, range(30)))
CPU times: user 18 Β΅s, sys: 3 Β΅s, total: 21 Β΅s Wall time: 25.5 Β΅s
CC0-1.0
mpi.ipynb
craiggardner/jupyter_notebooks
Now do the same computation using the MPI compute nodes of the HPC cluster, and show how long it takes
%%time
parallel_result = vobject.map_sync(lambda x:x**2**2, range(30))

serial_result==parallel_result
_____no_output_____
CC0-1.0
mpi.ipynb
craiggardner/jupyter_notebooks
Remote function decorators

Remote functions are just like normal functions, but when they are called, they execute on one or more engines, rather than locally. Here we will demonstrate the @parallel function decorator, which creates parallel functions that break up element-wise operations and distribute them to remote workers. It also reconstructs the result from each worker as the result is returned.
# First, we'll enable blocking, which will be explored more thoroughly later.
# In short, blocking will ensure that each task won't proceed until all the remotely distributed work is complete.
@vobject.remote(block=True)
# Define a function called "getpid" that ... well, you can see the description
def getpid():
    '''
    import library os and return the process number (pid) corresponding with the execution
    '''
    import os
    return os.getpid()

# Using our newly defined function, show the process id of the engine running on each compute node
getpid()
_____no_output_____
CC0-1.0
mpi.ipynb
craiggardner/jupyter_notebooks
We'll use numpy to create some complicated (random) arrays, then use those arrays for some big computations that should benefit from distributed HPC compute resources.
import numpy as np

A = np.random.random((64,48))

# Create a little function that can do the calculations as a distribution among multiple, parallel compute nodes
@vobject.parallel(block=True)
def pmul(A,B):
    return A*B
_____no_output_____
CC0-1.0
mpi.ipynb
craiggardner/jupyter_notebooks
We want to be able to compare the amount of time it takes to do the calculation locally on the HPC head node and the amount of time it takes to do the calculation among the distributed compute nodes.

First, do the calculation locally, then do it remotely
%%time
C_local = A*A

%%time
C_remote = pmul(A,A)

(C_local == C_remote).all()
_____no_output_____
CC0-1.0
mpi.ipynb
craiggardner/jupyter_notebooks
Create a simple, new function that can be called locally but that will execute remotely, in parallel. It's just a simple instruction that will "echo" the output of what is run on the remote worker
@vobject.parallel(block=True)
def echo(x):
    return str(x)

echo(range(5))

echo.map(range(5))
_____no_output_____
CC0-1.0
mpi.ipynb
craiggardner/jupyter_notebooks
Blocking execution

In blocking mode, the IPython ipyparallel object (called vobject in these examples; defined at the beginning of this notebook) submits the command to the controller, which places the command in the engines’ queues for execution. The apply() call then blocks until the engines are done executing the command.
# Show function names (on the remote worker) that begin with the string "apply"
[x for x in dir(vobject) if x.startswith('apply')]

vobject.block = True
vobject['a'] = 5
vobject['b'] = 10
vobject.apply(lambda x: a+b+x, 27)

vobject.block = False
vobject.apply_sync(lambda x: a+b+x, 27)
_____no_output_____
CC0-1.0
mpi.ipynb
craiggardner/jupyter_notebooks
Python commands can be executed as strings on specific engines by using a vobject’s execute method:
rc[::2].execute('c=a+b')
rc[1::2].execute('c=a-b')
vobject['c'] # shorthand for vobject.pull('c', block=True)
_____no_output_____
CC0-1.0
mpi.ipynb
craiggardner/jupyter_notebooks
Non-blocking execution

In non-blocking mode, apply() submits the command to be executed and then returns an AsyncResult object immediately. The AsyncResult object gives you a way of getting a result at a later time through its get() method.

More info on the AsyncResult object: http://ipyparallel.readthedocs.io/en/6.0.2/asyncresult.html#parallel-asyncresult

This allows you to quickly submit long-running commands without blocking your local Python/IPython session:
# define our function
def wait(t):
    import time
    tic = time.time()
    time.sleep(t)
    return time.time()-tic

# In non-blocking mode
ar = vobject.apply_async(wait, 3)

# Now block for the result, and the output won't display until after 3 seconds
ar.get()

# Again in non-blocking mode, with longer wait (10 seconds)
ar = vobject.apply_async(wait, 10)

# Poll to see if the result is ready
# If you run this fast enough following the previous step, the output will be "False"
# But if you wait for 10 seconds before executing this step, the output will be "True"
ar.ready()

# ask for the result, but wait a maximum of 1 second:
ar.get(1)
_____no_output_____
CC0-1.0
mpi.ipynb
craiggardner/jupyter_notebooks
Often, it is desirable to wait until a set of AsyncResult objects are done. For this, there is the method wait(). This method takes a tuple of AsyncResult objects (or msg_ids or indices to the client’s History), and blocks until all of the associated results are ready.

In proper Jupyter Notebook fashion, the step progress indicator will show as a '*' character until the instruction is completed. Output will not be displayed until the instruction is completed.
vobject.block=False

# A trivial list of AsyncResult objects
pr_list = [vobject.apply_async(wait, 3) for i in range(10)]

# Wait until all of the clients have completed the instruction
vobject.wait(pr_list)

# Then, their results are ready using get() or the `.r` attribute
pr_list[0].get()
_____no_output_____
CC0-1.0
mpi.ipynb
craiggardner/jupyter_notebooks
Scatter and gather

Sometimes it is useful to partition a sequence and push the partitions to different engines. In MPI language, this is known as scatter/gather, and we follow that terminology. However, it is important to remember that in IPython’s Client class, scatter() is from the interactive IPython session to the engines and gather() is from the engines back to the interactive IPython session. For scatter/gather operations between engines, MPI, pyzmq, or some other direct interconnect should be used.
vobject.scatter('a',range(16))

vobject['a']

vobject.gather('a')
_____no_output_____
CC0-1.0
mpi.ipynb
craiggardner/jupyter_notebooks
parallel list comprehensions

In many cases list comprehensions are nicer than using the map function. While we don’t have fully parallel list comprehensions, it is simple to get the basic effect using scatter() and gather(). The %px magic executes a single Python command on the engines specified by the targets attribute of the view instance.
vobject.scatter('x', range(64))

# Parallel execution on engines: [0, 1, 2, 3]
%px y = [i**10 for i in x]

y = vobject.gather('y')
print(y.get()[-10:])
[210832519264920576, 253295162119140625, 303305489096114176, 362033331456891249, 430804206899405824, 511116753300641401, 604661760000000000, 713342911662882601, 839299365868340224, 984930291881790849]
CC0-1.0
mpi.ipynb
craiggardner/jupyter_notebooks
example: monte carlo approximation of pi

A simple toy problem to get a handle on multiple engines is a Monte Carlo approximation of π. Let's say we have a dartboard with a round target inscribed on a square board. If you threw darts randomly, and they land evenly distributed on the square board, how many darts would you expect to hit the target? The fraction of darts that hit the target estimates the ratio of the two areas, as worked out below.
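For a unit square with a quarter circle of radius 1 inscribed (which is what the sampling in the code below uses), the probability that a uniformly random point lands inside the circle is the ratio of the areas:

$$P(\text{hit}) = \frac{\tfrac{1}{4}\pi \cdot 1^2}{1^2} = \frac{\pi}{4} \quad\Rightarrow\quad \pi \approx 4\cdot\frac{\text{hits}}{\text{throws}}$$

This is exactly why `mcpi` returns `4.*s/nsamples`.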
from random import random
from math import pi

vobject['random'] = random

def mcpi(nsamples):
    s = 0
    for i in range(nsamples):
        x = random()
        y = random()
        if x*x + y*y <= 1:
            s+=1
    return 4.*s/nsamples

def multi_mcpi(view, nsamples):
    p = len(view.targets)
    if nsamples % p:
        # ensure even divisibility
        nsamples += p - (nsamples%p)
    subsamples = nsamples//p
    ar = view.apply(mcpi, subsamples)
    return sum(ar)/p

def check_pi(tol=1e-5, step=10, verbose=False):
    guess = 0
    spi = pi
    steps = 0
    while abs(spi-guess)/spi > tol:
        for i in range(step):
            x = random()
            y = random()
            if x*x+y*y <= 1:
                guess += 4.
        steps += step
        spi = pi*steps
        if verbose:
            print(spi, guess, abs(spi-guess)/spi)
    return steps, guess/steps

%%time
mcpi(int(1e9))  # 1e9 means 10 to the 9th power; "e" stands for "exponent"

%%time
multi_mcpi(vobject, int(1e9))

check_pi()
_____no_output_____
CC0-1.0
mpi.ipynb
craiggardner/jupyter_notebooks
def what_is_installed(): import pycaret from pycaret import show_versions show_versions() try: what_is_installed() except: !pip install pycaret-ts-alpha what_is_installed() import numpy as np import pandas as pd from pycaret.datasets import get_data from pycaret.time_series import TSForecastingExperiment #### Exogenous variables ---- data = pd.DataFrame({'a': np.random.randn(200), 'b': np.random.randn(200), 'c': np.random.randn(200)}) #### Produce dependent variable based on exogenous variables ---- # NOTE: Only dependent on 'a' and 'c' but not 'b' data['y'] = data['a'].shift(4) + data['c'].shift(8) data.dropna(inplace=True) data.shape #### Create Time Series Forecasting Experiment ---- exp = TSForecastingExperiment() global_plot_settings = {"renderer": "colab"} exp.setup(data=data, target="y", seasonal_period=1, fh=8, fig_kwargs=global_plot_settings, session_id=42) exp.plot_model(plot="acf")
_____no_output_____
MIT
time_series/pycaret/pycaret_ts_ccf.ipynb
ngupta23/medium_articles
**Not much to go by in terms of forecasting y just by itself (without exogenous variables)**
exp.plot_model(plot="ccf")
_____no_output_____
MIT
time_series/pycaret/pycaret_ts_ccf.ipynb
ngupta23/medium_articles
Q-Learning
> Off-Policy Temporal Difference Learning.
import gym
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import clear_output
from time import sleep

env_name = "Taxi-v3"
epsilon = 1
decay_rate = 0.001
min_epsilon = 0.01
max_episodes = 2500
print_interval = 100
test_episodes = 3
lr = 0.4
gamma = 0.99

env = gym.make(env_name)
env = gym.wrappers.Monitor(env, "./vid", force=True)
n_states = env.observation_space.n
n_actions = env.action_space.n
print(f"Number of states: {n_states}\n"
      f"Number of actions: {n_actions}")

q_table = np.zeros((n_states, n_actions))

def choose_action(state):
    global q_table
    if epsilon > np.random.uniform():
        action = env.action_space.sample()
    else:
        action = np.argmax(q_table[state, :])
    return action

def update_table(state, action, reward, done, next_state):
    global q_table
    q_table[state, action] += lr * (reward + gamma * np.max(q_table[next_state, :]) * (1 - done) - q_table[state, action])
_____no_output_____
MIT
Q_Learning/Taxi_env.ipynb
alirezakazemipour/Q-Table-Numpy
Pseudocode
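In equation form, the tabular Q-learning scheme used by the `choose_action` and `update_table` functions above is: pick actions ε-greedily,

$$a = \begin{cases} \text{random action} & \text{with probability } \varepsilon \\ \arg\max_a Q(s, a) & \text{otherwise} \end{cases}$$

and after observing reward $r$ and next state $s'$, update

$$Q(s,a) \leftarrow Q(s,a) + \alpha\left[r + \gamma\,(1-\text{done})\max_{a'} Q(s',a') - Q(s,a)\right]$$

where $\alpha$ is `lr`, $\gamma$ is `gamma`, and $\varepsilon$ is decayed toward `min_epsilon` after every episode, as in the training loop below.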
running_reward = []
for episode in range(1, 1 + max_episodes):
    state = env.reset()
    done = False
    episode_reward = 0
    while not done:
        action = choose_action(state)
        next_state, reward, done, _ = env.step(action)
        update_table(state, action, reward, done, next_state)
        episode_reward += reward
        if done:
            break
        state = next_state

    epsilon = epsilon - decay_rate if epsilon - decay_rate > min_epsilon else min_epsilon

    if episode == 1:
        running_reward.append(episode_reward)
    else:
        running_reward.append(0.99 * running_reward[-1] + 0.01 * episode_reward)

    if episode % print_interval == 0:
        print(f"Ep:{episode}| "
              f"Ep_reward:{episode_reward}| "
              f"Running_reward:{running_reward[-1]:.3f}| "
              f"Epsilon:{epsilon:.3f}| ")

plt.figure()
plt.style.use("ggplot")
plt.plot(np.arange(max_episodes), running_reward)
plt.title("Running_reward")

for episode in range(1, 1 + test_episodes):
    state = env.reset()
    done = False
    episode_reward = 0
    while not done:
        action = choose_action(state)
        next_state, reward, done, _ = env.step(action)
        env.render()
        clear_output(wait=True)
        sleep(0.2)
        episode_reward += reward
        if done:
            break
        state = next_state
    print(f"Ep:{episode}| "
          f"Ep_reward:{episode_reward}| ")

env.close()
Ep:3| Ep_reward:4|
MIT
Q_Learning/Taxi_env.ipynb
alirezakazemipour/Q-Table-Numpy
Stationary phreatic flow between two water courses above a semi-pervious layer (Wesseling)

Constant precipitation N on a strip of land between two parallel water courses with water level hs causes a rise h(x) of the groundwater level that induces a groundwater flow towards the water courses. The phreatic groundwater is separated from an aquifer below by a semi-pervious layer. The interaction between the phreatic groundwater system and the aquifer is determined by the resistance c of the semi-pervious layer and the groundwater head H in the aquifer below. A solution was published by Wesseling & Wesseling (1984). A practical discussion is provided by Van Drecht (1997, in Dutch).
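For reference, the analytical solution as implemented in the code below (with p the precipitation, called N in the text above) uses the leakage factor and the dimensionless half-spacing

$$\lambda = \sqrt{T\,c}, \qquad \alpha = \frac{L}{2\lambda}$$

so that the phreatic level and the fluxes are

$$h(x) = h_s + \left(H - h_s + p\,c\right)\left(\tanh\alpha\,\sinh\frac{x}{\lambda} - \cosh\frac{x}{\lambda} + 1\right)$$

$$q_d = \left(\frac{H - h_s}{c} + p\right)\frac{\tanh\alpha}{\alpha}, \qquad q_s = q_d - p$$

where $q_d$ is the discharge to the water courses and $q_s$ the seepage through the semi-pervious layer.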
import numpy as np %matplotlib inline import matplotlib.pyplot as plt plt.style.use('seaborn-whitegrid') def hx(x,L,hs,p,H,c,T): """Return phreatic groundwater level h(x) between two water courses Parameters: x : numpy array Distance from centre of the water course (m) L : float Distance between the centre of both water courses (m) hs : float water level in the water courses (m) p : float precipitation (m/day) H : float groundwater head in the deep aquifer (m) c : float resistance of semi-pervious layer (day) T : float transmissivity of the deep aquifer (m2/day) Returns ------- numpy array """ labda = np.sqrt(T*c) alpha = L / (2*labda) hx = hs + (H - hs + p*c)* (np.tanh(alpha) * np.sinh(x/labda) - np.cosh(x/labda) + 1) return hx def qd(L,hs,p,H,c,T): """Return groundwater discharge to water courses""" labda = np.sqrt(T*c) alpha = L / (2*labda) qd = ((H-hs)/c+p)*np.tanh(alpha)/alpha return qd def qs(L,hs,p,H,c,T): """Return groundwater seepage""" #labda = np.sqrt(T*c) #alpha = L / (2*labda) #qd = -p + ((H-hs)/c+p)*np.tanh(alpha)/alpha qs = -p + qd(L,hs,p,H,c,T) return qs L = 400. hs = 0.0 p = 300./365./1000. H = 0.0 c = 100000. T = 15. x = np.linspace(1,L/2,25) # h0 p = 300./365./1000. H = 0.0 c = 10. h0 = hx(x,L,hs,p,H,c,T) qd0 = qd(L,hs,p,H,c,T) qs0 = qs(L,hs,p,H,c,T) # h1 p = 300./365./1000. H = -0.025 c = 10. h1 = hx(x,L,hs,p,H,c,T) qd1 = qd(L,hs,p,H,c,T) qs1 = qs(L,hs,p,H,c,T) # h2 p = 300./365./1000. H = 0.025 c = 10. h2 = hx(x,L,hs,p,H,c,T) qd2 = qd(L,hs,p,H,c,T) qs2 = qs(L,hs,p,H,c,T) # h3 p = 7./1000. H = 0.0 c = 10. h3 = hx(x,L,hs,p,H,c,T) qd3 = qd(L,hs,p,H,c,T) qs3 = qs(L,hs,p,H,c,T) # h4 p = 300./365./1000. H = 0.20 c = 10. h4 = hx(x,L,hs,p,H,c,T) qd4 = qd(L,hs,p,H,c,T) qs4 = qs(L,hs,p,H,c,T) # h5 p = 7./1000. H = 0.20 c = 10. h5 = hx(x,L,hs,p,H,c,T) qd5 = qd(L,hs,p,H,c,T) qs5 = qs(L,hs,p,H,c,T) plt.rcParams['figure.figsize'] = [10, 7] fig, ax = plt.subplots() #4c4c4c gray #f65b00 orange #fabb01 yellow #0000FF dark blue #0080FF light blue ax.plot(x, h0, '-', color='#0080FF', label='dH = 0, N = 0.8 mm/d'); ax.plot(x, h3, '-', color='#0000FF', label='dH = 0cm, N = 7 mm/d'); ax.plot(x, h1, '--', color='#0080FF', label='dH = -2.5cm, N = 0.8 mm/d'); ax.plot(x, h2, '--', color='#0080FF', label='dH = +2.5cm, N = 0.8 mm/d'); ax.plot(x, h4, '-', color='#fabb01', label='dH = +20cm, N = 0.8 mm/d'); ax.plot(x, h5, '-', color='#f65b00', label='dH = +20cm, N = 7 mm/d'); ax.set_ylim(-0.05, 0.3) ax.set_xlim(0, L/2) ax.set_xlabel('afstand tot de beek (m)',fontsize=11) ax.set_ylabel('opbolling van het grondwater (m)', fontsize = 11) #ax.set_title('') #lg = ax.legend(loc="upper right") ax.grid(True) #legframe = plt.legend.get_frame() #lg.get_frame().set_alpha(0.5) #lg.get_frame().set_facecolor('white') ax.legend(loc="upper right", fancybox=True, frameon=True, facecolor='white', framealpha=1, ) plt.show() fig.savefig('wesseling_hx.png') # slootafvoer for q in [0.8/1000,7/1000,qd0,qd3,qd4,qd5]: m3dagkm = q*L*1000 lsdkm = m3dagkm*1/86.4 print(m3dagkm,lsdkm) # kwelflux for q in [0.8/1000,7/1000,qs0,qs3,qs4,qs5]: m3dagkm = q*L*1000 lsdkm = m3dagkm*1/86.4 print(m3dagkm,lsdkm)
320.0 3.7037037037037033 2800.0000000000005 32.40740740740741 -308.63433088123435 -3.5721566074216935 -2628.53571800518 -30.422867106541435 181.26361767539473 2.097958537909661 -2138.637769448551 -24.752751961210077
MIT
Watercourse/Stationary freatic flow between two water courses above a semi-pervious layer (Wesseling).ipynb
tdmeij/GWF
Text Classification with PySpark - Multiclass Text Classification

Task
- Predict the subject category given a course title or text
import pyspark from pyspark import SparkContext sc = SparkContext(master='local[2]') # lunch UI sc # create spark seassion from pyspark.sql import SparkSession spark = SparkSession.builder.appName("Text Classifier").getOrCreate() # read the dataset and load df = spark.read.csv('udemy.csv',header=True, inferSchema=True) df.show(5) df.columns df = df.select('course_title','subject') df.show(5) df.groupby('subject').count().sort("count",ascending=False).show() # getting values count using pandas # df.toPandas()['subject'].value_counts() # check for missing values df.toPandas()['subject'].isnull().sum() # drop missing values df = df.dropna(subset= ['subject']) # check for missing values df.toPandas()['subject'].isnull().sum() df.show(5)
+--------------------+----------------+ | course_title| subject| +--------------------+----------------+ |Ultimate Investme...|Business Finance| |Complete GST Cour...|Business Finance| |Financial Modelin...|Business Finance| |Beginner to Pro -...|Business Finance| |How To Maximize Y...|Business Finance| +--------------------+----------------+ only showing top 5 rows
Apache-2.0
getStarted/TextClassification/textClassification.ipynb
iamhimanshu0/Spark
Feature Extraction

Build features:
+ count vectorizer
+ tfIDF
+ wordEmbeddings
+ hashingTF
+ etc...

We have 2 things in Pipeline stages:
- Transformer
- Estimator

**Transformer** (Data to Data)

A function that takes data and fits/transforms it into augmented data or features, i.e. Extractors, Vectorizers, Scalers (Tokenizer, StopWordsRemover, CountVectorizer, IDF)

**Estimator** (Data to model)

A function that takes data as input, fits the data and produces a model we can use to predict, i.e. LogisticRegression
from pyspark.ml.feature import Tokenizer, StopWordsRemover, CountVectorizer, IDF, StringIndexer # dir(pyspark.ml.feature) # Stages for the pipeline tokenizer = Tokenizer(inputCol='course_title', outputCol='mytokens') stopwordRemover = StopWordsRemover(inputCol='mytokens',outputCol='filtered_tokens') vectorizer = CountVectorizer(inputCol='filtered_tokens',outputCol='rawFeatures') idf = IDF(inputCol='rawFeatures', outputCol='vectorizedFeatures') # work on taget variable (subject) # label encoding/indexing labelEncoder = StringIndexer(inputCol='subject',outputCol='label').fit(df) labelEncoder.transform(df).show(5) # labelEncoder.labels # making dict to labels label_dict = { 'Web Development':0.0, 'Business Finance':1.0, 'Musical Instruments':2.0, 'Graphic Design':3.0 } df = labelEncoder.transform(df) df.show(5) # split dataset (train_df, test_df) = df.randomSplit((0.7,0.3),seed=42) train_df.show(2) # machine learning model (Estimator) (data to model) from pyspark.ml.classification import LogisticRegression lr = LogisticRegression(featuresCol='vectorizedFeatures', labelCol = 'label' )
_____no_output_____
Apache-2.0
getStarted/TextClassification/textClassification.ipynb
iamhimanshu0/Spark
Building the pipeline
from pyspark.ml import Pipeline pipeline = Pipeline( stages=[tokenizer, stopwordRemover, vectorizer, idf, lr] ) pipeline.stages # model building lr_model = pipeline.fit(train_df) lr_model # get predicction on test data predictions = lr_model.transform(test_df) # predictions.show() predictions.columns predictions.select('rawPrediction', 'probability','subject','label','prediction').show(10)
+--------------------+--------------------+-------------------+-----+----------+ | rawPrediction| probability| subject|label|prediction| +--------------------+--------------------+-------------------+-----+----------+ |[8.30964874634511...|[0.87877993991729...|Musical Instruments| 2.0| 0.0| |[-1.3744065857781...|[1.90975343878318...|Musical Instruments| 2.0| 2.0| |[0.60822716351824...|[3.28451283099288...|Musical Instruments| 2.0| 2.0| |[-1.0584564885297...|[3.70732079181542...| Business Finance| 1.0| 1.0| |[24.6296077836821...|[0.99999999906211...| Web Development| 0.0| 0.0| |[22.0136686708729...|[0.99999999049941...| Web Development| 0.0| 0.0| |[19.9225858177008...|[0.99999995276066...| Web Development| 0.0| 0.0| |[-5.7386799100009...|[5.78822181193782...|Musical Instruments| 2.0| 2.0| |[-19.060576929776...|[1.71813778453453...| Graphic Design| 3.0| 3.0| |[-2.4736166619785...|[1.84538870784594...|Musical Instruments| 2.0| 2.0| +--------------------+--------------------+-------------------+-----+----------+ only showing top 10 rows
Apache-2.0
getStarted/TextClassification/textClassification.ipynb
iamhimanshu0/Spark
Model evaluation
+ Accuracy
+ Precision
+ F1 Score
+ etc.
from pyspark.ml.evaluation import MulticlassClassificationEvaluator evaluator = MulticlassClassificationEvaluator(predictionCol='prediction',labelCol='label') accuracy = evaluator.evaluate(predictions) accuracy*100 """ Method 2: precision, f1score classification report """ from pyspark.mllib.evaluation import MulticlassMetrics lr_metric = MulticlassMetrics(predictions['label','prediction'].rdd) print("Accuracy ", lr_metric.accuracy) print("precision ", lr_metric.precision(1.0)) print("f1Score ", lr_metric.fMeasure(1.0)) print("recall ", lr_metric.recall(1.0))
Accuracy 0.9182509505703422 precision 0.9544159544159544 f1Score 0.9178082191780822 recall 0.8839050131926122
Apache-2.0
getStarted/TextClassification/textClassification.ipynb
iamhimanshu0/Spark
Confusion matrix
- convert to pandas
- sklearn
y_true = predictions.select('label') y_true = y_true.toPandas() y_predict = predictions.select('prediction') y_predict = y_predict.toPandas() from sklearn.metrics import confusion_matrix, classification_report cm = confusion_matrix(y_true, y_predict) cm
_____no_output_____
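`classification_report` is imported above but never called; a hedged sketch of how it could be used here. The `target_names` order assumes the index order from the `label_dict` defined earlier (0 = Web Development, ..., 3 = Graphic Design):

```
# assumed label order, matching label_dict above
target_names = ['Web Development', 'Business Finance',
                'Musical Instruments', 'Graphic Design']
print(classification_report(y_true, y_predict, target_names=target_names))
```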
Apache-2.0
getStarted/TextClassification/textClassification.ipynb
iamhimanshu0/Spark
Making a prediction on one sample
+ sample as df
+ apply pipeline
# sample as a one-row DataFrame (note: the row must be a tuple, hence the trailing comma)
exl = spark.createDataFrame(
    [("Building Machine Learning Apps with Python and PySpark",)],
    # column name
    ['course_title']
)

exl.show()
# show the full title (no truncation)
exl.show(truncate=False)

# making a prediction with the fitted pipeline
prediction_ex1 = lr_model.transform(exl)
prediction_ex1.show(truncate=True)
prediction_ex1.columns

prediction_ex1.select('course_title','rawPrediction','probability','prediction').show()

label_dict

# save and load the model
modelPath = "models/pyspark_lr_model"
lr_model.write().save(modelPath)

# loading the saved pipeline model
from pyspark.ml.pipeline import PipelineModel
persistedModel = PipelineModel.load(modelPath)

# making a prediction with the reloaded model
loadModel = persistedModel.transform(exl)
loadModel.show(truncate=True)
loadModel.select('course_title','rawPrediction','probability','prediction').show()
+--------------------+--------------------+--------------------+----------+ | course_title| rawPrediction| probability|prediction| +--------------------+--------------------+--------------------+----------+ |Building Machine ...|[14.7174498131555...|[0.99999814636182...| 0.0| +--------------------+--------------------+--------------------+----------+
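A small convenience sketch (an assumption, not part of the original notebook) that wraps the single-sample workflow above into a helper returning the predicted subject name via the `label_dict` defined earlier:

```
# invert label_dict: numeric prediction -> subject name
inv_label_dict = {v: k for k, v in label_dict.items()}

def predict_subject(title, model=lr_model):
    sample = spark.createDataFrame([(title,)], ['course_title'])
    pred = model.transform(sample).select('prediction').collect()[0][0]
    return inv_label_dict[pred]

predict_subject("Building Machine Learning Apps with Python and PySpark")  # 'Web Development'
```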
Apache-2.0
getStarted/TextClassification/textClassification.ipynb
iamhimanshu0/Spark
______
Copyright by Pierian Data Inc.
For more information, visit us at www.pieriandata.com

Text Methods

A normal Python string has a variety of method calls available:
mystring = 'hello' mystring.capitalize() mystring.isdigit() help(str)
Help on class str in module builtins: class str(object) | str(object='') -> str | str(bytes_or_buffer[, encoding[, errors]]) -> str | | Create a new string object from the given object. If encoding or | errors is specified, then the object must expose a data buffer | that will be decoded using the given encoding and error handler. | Otherwise, returns the result of object.__str__() (if defined) | or repr(object). | encoding defaults to sys.getdefaultencoding(). | errors defaults to 'strict'. | | Methods defined here: | | __add__(self, value, /) | Return self+value. | | __contains__(self, key, /) | Return key in self. | | __eq__(self, value, /) | Return self==value. | | __format__(self, format_spec, /) | Return a formatted version of the string as described by format_spec. | | __ge__(self, value, /) | Return self>=value. | | __getattribute__(self, name, /) | Return getattr(self, name). | | __getitem__(self, key, /) | Return self[key]. | | __getnewargs__(...) | | __gt__(self, value, /) | Return self>value. | | __hash__(self, /) | Return hash(self). | | __iter__(self, /) | Implement iter(self). | | __le__(self, value, /) | Return self<=value. | | __len__(self, /) | Return len(self). | | __lt__(self, value, /) | Return self<value. | | __mod__(self, value, /) | Return self%value. | | __mul__(self, value, /) | Return self*value. | | __ne__(self, value, /) | Return self!=value. | | __repr__(self, /) | Return repr(self). | | __rmod__(self, value, /) | Return value%self. | | __rmul__(self, value, /) | Return value*self. | | __sizeof__(self, /) | Return the size of the string in memory, in bytes. | | __str__(self, /) | Return str(self). | | capitalize(self, /) | Return a capitalized version of the string. | | More specifically, make the first character have upper case and the rest lower | case. | | casefold(self, /) | Return a version of the string suitable for caseless comparisons. | | center(self, width, fillchar=' ', /) | Return a centered string of length width. | | Padding is done using the specified fill character (default is a space). | | count(...) | S.count(sub[, start[, end]]) -> int | | Return the number of non-overlapping occurrences of substring sub in | string S[start:end]. Optional arguments start and end are | interpreted as in slice notation. | | encode(self, /, encoding='utf-8', errors='strict') | Encode the string using the codec registered for encoding. | | encoding | The encoding in which to encode the string. | errors | The error handling scheme to use for encoding errors. | The default is 'strict' meaning that encoding errors raise a | UnicodeEncodeError. Other possible values are 'ignore', 'replace' and | 'xmlcharrefreplace' as well as any other name registered with | codecs.register_error that can handle UnicodeEncodeErrors. | | endswith(...) | S.endswith(suffix[, start[, end]]) -> bool | | Return True if S ends with the specified suffix, False otherwise. | With optional start, test S beginning at that position. | With optional end, stop comparing S at that position. | suffix can also be a tuple of strings to try. | | expandtabs(self, /, tabsize=8) | Return a copy where all tab characters are expanded using spaces. | | If tabsize is not given, a tab size of 8 characters is assumed. | | find(...) | S.find(sub[, start[, end]]) -> int | | Return the lowest index in S where substring sub is found, | such that sub is contained within S[start:end]. Optional | arguments start and end are interpreted as in slice notation. | | Return -1 on failure. | | format(...) 
| S.format(*args, **kwargs) -> str | | Return a formatted version of S, using substitutions from args and kwargs. | The substitutions are identified by braces ('{' and '}'). | | format_map(...) | S.format_map(mapping) -> str | | Return a formatted version of S, using substitutions from mapping. | The substitutions are identified by braces ('{' and '}'). | | index(...) | S.index(sub[, start[, end]]) -> int | | Return the lowest index in S where substring sub is found, | such that sub is contained within S[start:end]. Optional | arguments start and end are interpreted as in slice notation. | | Raises ValueError when the substring is not found. | | isalnum(self, /) | Return True if the string is an alpha-numeric string, False otherwise. | | A string is alpha-numeric if all characters in the string are alpha-numeric and | there is at least one character in the string. | | isalpha(self, /) | Return True if the string is an alphabetic string, False otherwise. | | A string is alphabetic if all characters in the string are alphabetic and there | is at least one character in the string. | | isascii(self, /) | Return True if all characters in the string are ASCII, False otherwise. | | ASCII characters have code points in the range U+0000-U+007F. | Empty string is ASCII too. | | isdecimal(self, /) | Return True if the string is a decimal string, False otherwise. | | A string is a decimal string if all characters in the string are decimal and | there is at least one character in the string. | | isdigit(self, /) | Return True if the string is a digit string, False otherwise. | | A string is a digit string if all characters in the string are digits and there | is at least one character in the string. | | isidentifier(self, /) | Return True if the string is a valid Python identifier, False otherwise. | | Use keyword.iskeyword() to test for reserved identifiers such as "def" and | "class". | | islower(self, /) | Return True if the string is a lowercase string, False otherwise. | | A string is lowercase if all cased characters in the string are lowercase and | there is at least one cased character in the string. | | isnumeric(self, /) | Return True if the string is a numeric string, False otherwise. | | A string is numeric if all characters in the string are numeric and there is at | least one character in the string. | | isprintable(self, /) | Return True if the string is printable, False otherwise. | | A string is printable if all of its characters are considered printable in | repr() or if it is empty. | | isspace(self, /) | Return True if the string is a whitespace string, False otherwise. | | A string is whitespace if all characters in the string are whitespace and there | is at least one character in the string. | | istitle(self, /) | Return True if the string is a title-cased string, False otherwise. | | In a title-cased string, upper- and title-case characters may only | follow uncased characters and lowercase characters only cased ones. | | isupper(self, /) | Return True if the string is an uppercase string, False otherwise. | | A string is uppercase if all cased characters in the string are uppercase and | there is at least one cased character in the string. | | join(self, iterable, /) | Concatenate any number of strings. | | The string whose method is called is inserted in between each given string. | The result is returned as a new string. | | Example: '.'.join(['ab', 'pq', 'rs']) -> 'ab.pq.rs' | | ljust(self, width, fillchar=' ', /) | Return a left-justified string of length width. 
| | Padding is done using the specified fill character (default is a space). | | lower(self, /) | Return a copy of the string converted to lowercase. | | lstrip(self, chars=None, /) | Return a copy of the string with leading whitespace removed. | | If chars is given and not None, remove characters in chars instead. | | partition(self, sep, /) | Partition the string into three parts using the given separator. | | This will search for the separator in the string. If the separator is found, | returns a 3-tuple containing the part before the separator, the separator | itself, and the part after it. | | If the separator is not found, returns a 3-tuple containing the original string | and two empty strings. | | replace(self, old, new, count=-1, /) | Return a copy with all occurrences of substring old replaced by new. | | count | Maximum number of occurrences to replace. | -1 (the default value) means replace all occurrences. | | If the optional argument count is given, only the first count occurrences are | replaced. | | rfind(...) | S.rfind(sub[, start[, end]]) -> int | | Return the highest index in S where substring sub is found, | such that sub is contained within S[start:end]. Optional | arguments start and end are interpreted as in slice notation. | | Return -1 on failure. | | rindex(...) | S.rindex(sub[, start[, end]]) -> int | | Return the highest index in S where substring sub is found, | such that sub is contained within S[start:end]. Optional | arguments start and end are interpreted as in slice notation. | | Raises ValueError when the substring is not found. | | rjust(self, width, fillchar=' ', /) | Return a right-justified string of length width. | | Padding is done using the specified fill character (default is a space). | | rpartition(self, sep, /) | Partition the string into three parts using the given separator. | | This will search for the separator in the string, starting at the end. If | the separator is found, returns a 3-tuple containing the part before the | separator, the separator itself, and the part after it. | | If the separator is not found, returns a 3-tuple containing two empty strings | and the original string. | | rsplit(self, /, sep=None, maxsplit=-1) | Return a list of the words in the string, using sep as the delimiter string. | | sep | The delimiter according which to split the string. | None (the default value) means split according to any whitespace, | and discard empty strings from the result. | maxsplit | Maximum number of splits to do. | -1 (the default value) means no limit. | | Splits are done starting at the end of the string and working to the front. | | rstrip(self, chars=None, /) | Return a copy of the string with trailing whitespace removed. | | If chars is given and not None, remove characters in chars instead. | | split(self, /, sep=None, maxsplit=-1) | Return a list of the words in the string, using sep as the delimiter string. | | sep | The delimiter according which to split the string. | None (the default value) means split according to any whitespace, | and discard empty strings from the result. | maxsplit | Maximum number of splits to do. | -1 (the default value) means no limit. | | splitlines(self, /, keepends=False) | Return a list of the lines in the string, breaking at line boundaries. | | Line breaks are not included in the resulting list unless keepends is given and | true. | | startswith(...) | S.startswith(prefix[, start[, end]]) -> bool | | Return True if S starts with the specified prefix, False otherwise. 
| With optional start, test S beginning at that position. | With optional end, stop comparing S at that position. | prefix can also be a tuple of strings to try. | | strip(self, chars=None, /) | Return a copy of the string with leading and trailing whitespace removed. | | If chars is given and not None, remove characters in chars instead. | | swapcase(self, /) | Convert uppercase characters to lowercase and lowercase characters to uppercase. | | title(self, /) | Return a version of the string where each word is titlecased. | | More specifically, words start with uppercased characters and all remaining | cased characters have lower case. | | translate(self, table, /) | Replace each character in the string using the given translation table. | | table | Translation table, which must be a mapping of Unicode ordinals to | Unicode ordinals, strings, or None. | | The table must implement lookup/indexing via __getitem__, for instance a | dictionary or list. If this operation raises LookupError, the character is | left untouched. Characters mapped to None are deleted. | | upper(self, /) | Return a copy of the string converted to uppercase. | | zfill(self, width, /) | Pad a numeric string with zeros on the left, to fill a field of the given width. | | The string is never truncated. | | ---------------------------------------------------------------------- | Static methods defined here: | | __new__(*args, **kwargs) from builtins.type | Create and return a new object. See help(type) for accurate signature. | | maketrans(x, y=None, z=None, /) | Return a translation table usable for str.translate(). | | If there is only one argument, it must be a dictionary mapping Unicode | ordinals (integers) or characters to Unicode ordinals, strings or None. | Character keys will be then converted to ordinals. | If there are two arguments, they must be strings of equal length, and | in the resulting dictionary, each character in x will be mapped to the | character at the same position in y. If there is a third argument, it | must be a string, whose characters will be mapped to None in the result.
Apache-2.0
07-Text-Methods.ipynb
srijikabanerjee/demo2
Pandas and Text

Pandas can do a lot more than what we show here. Full online documentation on things like advanced string indexing and regular expressions with pandas can be found here: https://pandas.pydata.org/docs/user_guide/text.html

Text Methods on Pandas String Column
import pandas as pd names = pd.Series(['andrew','bobo','claire','david','4']) names names.str.capitalize() names.str.isdigit()
_____no_output_____
Apache-2.0
07-Text-Methods.ipynb
srijikabanerjee/demo2
Splitting, Grabbing, and Expanding
tech_finance = ['GOOG,APPL,AMZN','JPM,BAC,GS'] len(tech_finance) tickers = pd.Series(tech_finance) tickers tickers.str.split(',') tickers.str.split(',').str[0] tickers.str.split(',',expand=True)
_____no_output_____
Apache-2.0
07-Text-Methods.ipynb
srijikabanerjee/demo2
Cleaning or Editing Strings
messy_names = pd.Series(["andrew ","bo;bo"," claire "]) # Notice the "mis-alignment" on the right hand side due to spacing in "andrew " and " claire " messy_names messy_names.str.replace(";","") messy_names.str.strip() messy_names.str.replace(";","").str.strip() messy_names.str.replace(";","").str.strip().str.capitalize()
_____no_output_____
Apache-2.0
07-Text-Methods.ipynb
srijikabanerjee/demo2
Alternative with Custom apply() call
def cleanup(name): name = name.replace(";","") name = name.strip() name = name.capitalize() return name messy_names messy_names.apply(cleanup)
_____no_output_____
Apache-2.0
07-Text-Methods.ipynb
srijikabanerjee/demo2
Which one is more efficient?
import timeit # code snippet to be executed only once setup = ''' import pandas as pd import numpy as np messy_names = pd.Series(["andrew ","bo;bo"," claire "]) def cleanup(name): name = name.replace(";","") name = name.strip() name = name.capitalize() return name ''' # code snippet whose execution time is to be measured stmt_pandas_str = ''' messy_names.str.replace(";","").str.strip().str.capitalize() ''' stmt_pandas_apply = ''' messy_names.apply(cleanup) ''' stmt_pandas_vectorize=''' np.vectorize(cleanup)(messy_names) ''' timeit.timeit(setup = setup, stmt = stmt_pandas_str, number = 10000) timeit.timeit(setup = setup, stmt = stmt_pandas_apply, number = 10000) timeit.timeit(setup = setup, stmt = stmt_pandas_vectorize, number = 10000)
_____no_output_____
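The three timings above are returned without labels; a small sketch (not in the original notebook) that prints them side by side. Absolute numbers will vary by machine; only the relative ordering is meaningful:

```
results = {
    'pandas .str chain': timeit.timeit(setup=setup, stmt=stmt_pandas_str, number=10000),
    'pandas .apply':     timeit.timeit(setup=setup, stmt=stmt_pandas_apply, number=10000),
    'np.vectorize':      timeit.timeit(setup=setup, stmt=stmt_pandas_vectorize, number=10000),
}
for name, seconds in results.items():
    print(f"{name:<20s} {seconds:.3f} s")
```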
Apache-2.0
07-Text-Methods.ipynb
srijikabanerjee/demo2
Texas updates their data daily at noon CDT
from selenium import webdriver
import time
import pandas as pd
import pendulum
import re
import yaml

from selenium.webdriver.chrome.options import Options
chrome_options = Options()
#chrome_options.add_argument("--disable-extensions")
#chrome_options.add_argument("--disable-gpu")
#chrome_options.add_argument("--no-sandbox")  # linux only
chrome_options.add_argument("--start-maximized")
# chrome_options.add_argument("--headless")
chrome_options.add_argument("user-agent=[Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:73.0) Gecko/20100101 Firefox/73.0]")

with open('config.yaml', 'r') as f:
    config = yaml.safe_load(f.read())

state = 'TX'
scrape_timestamp = pendulum.now().strftime('%Y%m%d%H%M%S')

# TX positive cases by county
url = 'https://www.dshs.state.tx.us/news/updates.shtm#coronavirus'
# ArcGIS dashboard (this URL overrides the one above)
url = 'https://txdshs.maps.arcgis.com/apps/opsdashboard/index.html#/ed483ecd702b4298ab01e8b9cafc8b83'

def fetch():
    driver = webdriver.Chrome('../20190611 - Parts recommendation/chromedriver', options=chrome_options)
    driver.get(url)
    time.sleep(5)

    datatbl = driver.find_element_by_class_name('feature-list')
    datatbl.find_elements_by_class_name('external-html')
    datatbl = datatbl.text.split('\n')

    # rows come back as alternating county / case-count lines
    data = []
    for i in range(0, len(datatbl), 2):
        data.append([datatbl[i], datatbl[i+1]])

    page_source = driver.page_source
    driver.close()

    return pd.DataFrame(data, columns=['county','positive_cases']), page_source

def save(df, source):
    df.to_csv(f"{config['data_folder']}/{state}_county_{scrape_timestamp}.txt", sep='|', index=False)

    with open(f"{config['data_source_backup_folder']}/{state}_county_{scrape_timestamp}.html", 'w') as f:
        f.write(source)

def run():
    df, source = fetch()
    save(df, source)
_____no_output_____
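The cell above only defines `fetch`/`save`/`run`; a hedged usage sketch that runs the scraper and reads the saved pipe-delimited file back as a sanity check (assumes the chromedriver path in `fetch()` is valid on this machine):

```
run()

# re-read the file written by save() to confirm it parses
saved = pd.read_csv(f"{config['data_folder']}/{state}_county_{scrape_timestamp}.txt", sep='|')
saved.head()
```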
MIT
TX by county.ipynb
kirbs-/covid-19-dataset
Convert Dataset Formats

This recipe demonstrates how to use FiftyOne to convert datasets on disk between common formats.

Setup

If you haven't already, install FiftyOne:
!pip install fiftyone import fiftyone as fo
_____no_output_____
Apache-2.0
docs/source/recipes/convert_datasets.ipynb
pixta-dev/fiftyone
If the above import fails due to a `cv2` error, it is an issue with OpenCV in Colab environments. [Follow these instructions to resolve it.](https://github.com/voxel51/fiftyone/issues/1494#issuecomment-1003148448)

This notebook contains bash commands. To run it as a notebook, you must install the [Jupyter bash kernel](https://github.com/takluyver/bash_kernel) via the command below.

Alternatively, you can just copy + paste the code blocks into your shell.
pip install bash_kernel python -m bash_kernel.install
_____no_output_____
Apache-2.0
docs/source/recipes/convert_datasets.ipynb
pixta-dev/fiftyone
In this recipe we'll use the [FiftyOne Dataset Zoo](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/zoo_datasets.html) to download some open source datasets to work with.

Specifically, we'll need [TensorFlow](https://www.tensorflow.org/) and [TensorFlow Datasets](https://www.tensorflow.org/datasets) installed to [access the datasets](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/zoo_datasets.html#customizing-your-ml-backend):
pip install tensorflow tensorflow-datasets
_____no_output_____
Apache-2.0
docs/source/recipes/convert_datasets.ipynb
pixta-dev/fiftyone
Download datasets

Download the test split of the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) from the [FiftyOne Dataset Zoo](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/zoo_datasets.html) using the command below:
# Download the test split of CIFAR-10 fiftyone zoo datasets download cifar10 --split test
Downloading split 'test' to '~/fiftyone/cifar10/test' Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ~/fiftyone/cifar10/tmp-download/cifar-10-python.tar.gz 170500096it [00:04, 35887670.65it/s] Extracting ~/fiftyone/cifar10/tmp-download/cifar-10-python.tar.gz to ~/fiftyone/cifar10/tmp-download 100% |β–ˆβ–ˆβ–ˆ| 10000/10000 [5.2s elapsed, 0s remaining, 1.8K samples/s] Dataset info written to '~/fiftyone/cifar10/info.json'
Apache-2.0
docs/source/recipes/convert_datasets.ipynb
pixta-dev/fiftyone
Download the validation split of the [KITTI dataset]( http://www.cvlibs.net/datasets/kitti) from the [FiftyOne Dataset Zoo](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/zoo_datasets.html) using the command below:
# Download the validation split of KITTI fiftyone zoo datasets download kitti --split validation
Split 'validation' already downloaded
Apache-2.0
docs/source/recipes/convert_datasets.ipynb
pixta-dev/fiftyone
The fiftyone convert command The [FiftyOne CLI](https://voxel51.com/docs/fiftyone/cli/index.html) provides a number of utilities for importing and exporting datasets in a variety of common (or custom) formats.Specifically, the `fiftyone convert` command provides a convenient way to convert datasets on disk between formats by specifying the [fiftyone.types.Dataset](https://voxel51.com/docs/fiftyone/api/fiftyone.types.htmlfiftyone.types.dataset_types.Dataset) type of the input and desired output.FiftyOne provides a collection of [builtin types](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlsupported-formats) that you can use to read/write datasets in common formats out-of-the-box: | Dataset format | Import Supported? | Export Supported? | Conversion Supported? || ---------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------- | ----------------- | --------------------- || [ImageDirectory](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlimagedirectory) | βœ“ | βœ“ | βœ“ || [VideoDirectory](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlvideodirectory) | βœ“ | βœ“ | βœ“ || [FiftyOneImageClassificationDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlfiftyoneimageclassificationdataset) | βœ“ | βœ“ | βœ“ || [ImageClassificationDirectoryTree](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlimageclassificationdirectorytree) | βœ“ | βœ“ | βœ“ || [TFImageClassificationDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmltfimageclassificationdataset) | βœ“ | βœ“ | βœ“ || [FiftyOneImageDetectionDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlfiftyoneimagedetectiondataset) | βœ“ | βœ“ | βœ“ || [COCODetectionDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlcocodetectiondataset) | βœ“ | βœ“ | βœ“ || [VOCDetectionDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlvocdetectiondataset) | βœ“ | βœ“ | βœ“ || [KITTIDetectionDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlkittidetectiondataset) | βœ“ | βœ“ | βœ“ || [YOLODataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlyolodataset) | βœ“ | βœ“ | βœ“ || [TFObjectDetectionDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmltfobjectdetectiondataset) | βœ“ | βœ“ | βœ“ || [CVATImageDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlcvatimagedataset) | βœ“ | βœ“ | βœ“ || [CVATVideoDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlcvatvideodataset) | βœ“ | βœ“ | βœ“ || [FiftyOneImageLabelsDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlfiftyoneimagelabelsdataset) | βœ“ | βœ“ | βœ“ || [FiftyOneVideoLabelsDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlfiftyonevideolabelsdataset) | βœ“ | βœ“ | βœ“ || [BDDDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlbdddataset) | βœ“ | βœ“ | βœ“ | In addition, you can define your own [custom dataset types](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlcustom-formats) to read/write datasets in your own formats.The usage of the `fiftyone convert` command is 
as follows:
fiftyone convert -h
usage: fiftyone convert [-h] [--input-dir INPUT_DIR] [--input-type INPUT_TYPE] [--output-dir OUTPUT_DIR] [--output-type OUTPUT_TYPE] Convert datasets on disk between supported formats. Examples:: # Convert an image classification directory tree to TFRecords format fiftyone convert \ --input-dir /path/to/image-classification-directory-tree \ --input-type fiftyone.types.ImageClassificationDirectoryTree \ --output-dir /path/for/tf-image-classification-dataset \ --output-type fiftyone.types.TFImageClassificationDataset # Convert a COCO detection dataset to CVAT image format fiftyone convert \ --input-dir /path/to/coco-detection-dataset \ --input-type fiftyone.types.COCODetectionDataset \ --output-dir /path/for/cvat-image-dataset \ --output-type fiftyone.types.CVATImageDataset optional arguments: -h, --help show this help message and exit --input-dir INPUT_DIR the directory containing the dataset --input-type INPUT_TYPE the fiftyone.types.Dataset type of the input dataset --output-dir OUTPUT_DIR the directory to which to write the output dataset --output-type OUTPUT_TYPE the fiftyone.types.Dataset type to output
Apache-2.0
docs/source/recipes/convert_datasets.ipynb
pixta-dev/fiftyone
Convert CIFAR-10 dataset

When you downloaded the test split of the CIFAR-10 dataset above, it was written to disk as a dataset in [fiftyone.types.FiftyOneImageClassificationDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.html#fiftyoneimageclassificationdataset) format.

You can verify this by printing information about the downloaded dataset:
fiftyone zoo datasets info cifar10
***** Dataset description ***** The CIFAR-10 dataset consists of 60000 32 x 32 color images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. Dataset size: 132.40 MiB Source: https://www.cs.toronto.edu/~kriz/cifar.html ***** Supported splits ***** test, train ***** Dataset location ***** ~/fiftyone/cifar10 ***** Dataset info ***** { "name": "cifar10", "zoo_dataset": "fiftyone.zoo.datasets.torch.CIFAR10Dataset", "dataset_type": "fiftyone.types.dataset_types.FiftyOneImageClassificationDataset", "num_samples": 10000, "downloaded_splits": { "test": { "split": "test", "num_samples": 10000 } }, "classes": [ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck" ] }
Apache-2.0
docs/source/recipes/convert_datasets.ipynb
pixta-dev/fiftyone
The snippet below uses `fiftyone convert` to convert the test split of the CIFAR-10 dataset to [fiftyone.types.ImageClassificationDirectoryTree](https://voxel51.com/docs/fiftyone/user_guide/export_datasets.html#imageclassificationdirectorytree) format, which stores classification datasets on disk in a directory tree structure with images organized per-class:

```
β”œβ”€β”€ <classA>/
β”‚   β”œβ”€β”€ <image1>.<ext>
β”‚   β”œβ”€β”€ <image2>.<ext>
β”‚   └── ...
β”œβ”€β”€ <classB>/
β”‚   β”œβ”€β”€ <image1>.<ext>
β”‚   β”œβ”€β”€ <image2>.<ext>
β”‚   └── ...
└── ...
```
INPUT_DIR=$(fiftyone zoo datasets find cifar10 --split test) OUTPUT_DIR=/tmp/fiftyone/cifar10-dir-tree fiftyone convert \ --input-dir ${INPUT_DIR} --input-type fiftyone.types.FiftyOneImageClassificationDataset \ --output-dir ${OUTPUT_DIR} --output-type fiftyone.types.ImageClassificationDirectoryTree
Loading dataset from '~/fiftyone/cifar10/test' Input format 'fiftyone.types.dataset_types.FiftyOneImageClassificationDataset' 100% |β–ˆβ–ˆβ–ˆ| 10000/10000 [4.2s elapsed, 0s remaining, 2.4K samples/s] Import complete Exporting dataset to '/tmp/fiftyone/cifar10-dir-tree' Export format 'fiftyone.types.dataset_types.ImageClassificationDirectoryTree' 100% |β–ˆβ–ˆβ–ˆ| 10000/10000 [6.2s elapsed, 0s remaining, 1.7K samples/s] Export complete
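The same conversion could likely also be done from a Python session instead of the CLI; a hedged sketch using the FiftyOne Python API (`Dataset.from_dir` + `Dataset.export`), mirroring the paths used above:

```
import os
import fiftyone as fo

# import the downloaded split, then export it in the target format
dataset = fo.Dataset.from_dir(
    dataset_dir=os.path.expanduser("~/fiftyone/cifar10/test"),
    dataset_type=fo.types.FiftyOneImageClassificationDataset,
)
dataset.export(
    export_dir="/tmp/fiftyone/cifar10-dir-tree",
    dataset_type=fo.types.ImageClassificationDirectoryTree,
)
```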
Apache-2.0
docs/source/recipes/convert_datasets.ipynb
pixta-dev/fiftyone
Let's verify that the conversion happened as expected:
ls -lah /tmp/fiftyone/cifar10-dir-tree/ ls -lah /tmp/fiftyone/cifar10-dir-tree/airplane/ | head
total 8000 drwxr-xr-x 1002 voxel51 wheel 31K Jul 14 11:08 . drwxr-xr-x 12 voxel51 wheel 384B Jul 14 11:08 .. -rw-r--r-- 1 voxel51 wheel 1.2K Jul 14 11:23 000004.jpg -rw-r--r-- 1 voxel51 wheel 1.1K Jul 14 11:23 000011.jpg -rw-r--r-- 1 voxel51 wheel 1.1K Jul 14 11:23 000022.jpg -rw-r--r-- 1 voxel51 wheel 1.3K Jul 14 11:23 000028.jpg -rw-r--r-- 1 voxel51 wheel 1.2K Jul 14 11:23 000045.jpg -rw-r--r-- 1 voxel51 wheel 1.2K Jul 14 11:23 000053.jpg -rw-r--r-- 1 voxel51 wheel 1.3K Jul 14 11:23 000075.jpg
Apache-2.0
docs/source/recipes/convert_datasets.ipynb
pixta-dev/fiftyone
Now let's convert the classification directory tree to [TFRecords](https://voxel51.com/docs/fiftyone/user_guide/export_datasets.html#tfimageclassificationdataset) format!
INPUT_DIR=/tmp/fiftyone/cifar10-dir-tree OUTPUT_DIR=/tmp/fiftyone/cifar10-tfrecords fiftyone convert \ --input-dir ${INPUT_DIR} --input-type fiftyone.types.ImageClassificationDirectoryTree \ --output-dir ${OUTPUT_DIR} --output-type fiftyone.types.TFImageClassificationDataset
Loading dataset from '/tmp/fiftyone/cifar10-dir-tree' Input format 'fiftyone.types.dataset_types.ImageClassificationDirectoryTree' 100% |β–ˆβ–ˆβ–ˆ| 10000/10000 [4.0s elapsed, 0s remaining, 2.5K samples/s] Import complete Exporting dataset to '/tmp/fiftyone/cifar10-tfrecords' Export format 'fiftyone.types.dataset_types.TFImageClassificationDataset' 0% ||--| 1/10000 [23.2ms elapsed, 3.9m remaining, 43.2 samples/s] 2020-07-14 11:24:15.187387: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2020-07-14 11:24:15.201384: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f83df428f60 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2020-07-14 11:24:15.201405: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 100% |β–ˆβ–ˆβ–ˆ| 10000/10000 [8.2s elapsed, 0s remaining, 1.3K samples/s] Export complete
Apache-2.0
docs/source/recipes/convert_datasets.ipynb
pixta-dev/fiftyone
Let's verify that the conversion happened as expected:
ls -lah /tmp/fiftyone/cifar10-tfrecords
total 29696 drwxr-xr-x 3 voxel51 wheel 96B Jul 14 11:24 . drwxr-xr-x 4 voxel51 wheel 128B Jul 14 11:24 .. -rw-r--r-- 1 voxel51 wheel 14M Jul 14 11:24 tf.records
Apache-2.0
docs/source/recipes/convert_datasets.ipynb
pixta-dev/fiftyone
Convert KITTI dataset

When you downloaded the validation split of the KITTI dataset above, it was written to disk as a dataset in [fiftyone.types.FiftyOneImageDetectionDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.html#fiftyoneimagedetectiondataset) format.

You can verify this by printing information about the downloaded dataset:
fiftyone zoo datasets info kitti
***** Dataset description ***** KITTI contains a suite of vision tasks built using an autonomous driving platform. The full benchmark contains many tasks such as stereo, optical flow, visual odometry, etc. This dataset contains the object detection dataset, including the monocular images and bounding boxes. The dataset contains 7481 training images annotated with 3D bounding boxes. A full description of the annotations can be found in the README of the object development kit on the KITTI homepage. Dataset size: 5.27 GiB Source: http://www.cvlibs.net/datasets/kitti ***** Supported splits ***** test, train, validation ***** Dataset location ***** ~/fiftyone/kitti ***** Dataset info ***** { "name": "kitti", "zoo_dataset": "fiftyone.zoo.datasets.tf.KITTIDataset", "dataset_type": "fiftyone.types.dataset_types.FiftyOneImageDetectionDataset", "num_samples": 423, "downloaded_splits": { "validation": { "split": "validation", "num_samples": 423 } }, "classes": [ "Car", "Van", "Truck", "Pedestrian", "Person_sitting", "Cyclist", "Tram", "Misc" ] }
Apache-2.0
docs/source/recipes/convert_datasets.ipynb
pixta-dev/fiftyone
The snippet below uses `fiftyone convert` to convert the validation split of the KITTI dataset to [fiftyone.types.COCODetectionDataset](https://voxel51.com/docs/fiftyone/user_guide/export_datasets.html#cocodetectiondataset) format, which writes the dataset to disk with annotations in [COCO format](https://cocodataset.org/#format-data).
INPUT_DIR=$(fiftyone zoo datasets find kitti --split validation) OUTPUT_DIR=/tmp/fiftyone/kitti-coco fiftyone convert \ --input-dir ${INPUT_DIR} --input-type fiftyone.types.FiftyOneImageDetectionDataset \ --output-dir ${OUTPUT_DIR} --output-type fiftyone.types.COCODetectionDataset
Loading dataset from '~/fiftyone/kitti/validation' Input format 'fiftyone.types.dataset_types.FiftyOneImageDetectionDataset' 100% |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 423/423 [1.2s elapsed, 0s remaining, 351.0 samples/s] Import complete Exporting dataset to '/tmp/fiftyone/kitti-coco' Export format 'fiftyone.types.dataset_types.COCODetectionDataset' 100% |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 423/423 [4.4s elapsed, 0s remaining, 96.1 samples/s] Export complete
Apache-2.0
docs/source/recipes/convert_datasets.ipynb
pixta-dev/fiftyone
Let's verify that the conversion happened as expected:
ls -lah /tmp/fiftyone/kitti-coco/ ls -lah /tmp/fiftyone/kitti-coco/data | head cat /tmp/fiftyone/kitti-coco/labels.json | python -m json.tool 2> /dev/null | head -20 echo "..." cat /tmp/fiftyone/kitti-coco/labels.json | python -m json.tool 2> /dev/null | tail -20
{ "info": { "year": "", "version": "", "description": "Exported from FiftyOne", "contributor": "", "url": "https://voxel51.com/fiftyone", "date_created": "2020-07-14T11:24:40" }, "licenses": [], "categories": [ { "id": 0, "name": "Car", "supercategory": "none" }, { "id": 1, "name": "Cyclist", "supercategory": "none" ... "area": 4545.8, "segmentation": null, "iscrowd": 0 }, { "id": 3196, "image_id": 422, "category_id": 3, "bbox": [ 367.2, 107.3, 36.2, 105.2 ], "area": 3808.2, "segmentation": null, "iscrowd": 0 } ] }
Apache-2.0
docs/source/recipes/convert_datasets.ipynb
pixta-dev/fiftyone
Now let's convert from COCO format to [CVAT Image format](https://voxel51.com/docs/fiftyone/user_guide/export_datasets.html#cvatimageformat)!
INPUT_DIR=/tmp/fiftyone/kitti-coco OUTPUT_DIR=/tmp/fiftyone/kitti-cvat fiftyone convert \ --input-dir ${INPUT_DIR} --input-type fiftyone.types.COCODetectionDataset \ --output-dir ${OUTPUT_DIR} --output-type fiftyone.types.CVATImageDataset
Loading dataset from '/tmp/fiftyone/kitti-coco' Input format 'fiftyone.types.dataset_types.COCODetectionDataset' 100% |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 423/423 [2.0s elapsed, 0s remaining, 206.4 samples/s] Import complete Exporting dataset to '/tmp/fiftyone/kitti-cvat' Export format 'fiftyone.types.dataset_types.CVATImageDataset' 100% |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 423/423 [1.3s elapsed, 0s remaining, 323.7 samples/s] Export complete
Apache-2.0
docs/source/recipes/convert_datasets.ipynb
pixta-dev/fiftyone
Let's verify that the conversion happened as expected:
ls -lah /tmp/fiftyone/kitti-cvat cat /tmp/fiftyone/kitti-cvat/labels.xml | head -20 echo "..." cat /tmp/fiftyone/kitti-cvat/labels.xml | tail -20
<?xml version="1.0" encoding="utf-8"?> <annotations> <version>1.1</version> <meta> <task> <size>423</size> <mode>annotation</mode> <labels> <label> <name>Car</name> <attributes> </attributes> </label> <label> <name>Cyclist</name> <attributes> </attributes> </label> <label> <name>Misc</name> ... <box label="Pedestrian" xtl="360" ytl="116" xbr="402" ybr="212"> </box> <box label="Pedestrian" xtl="396" ytl="120" xbr="430" ybr="212"> </box> <box label="Pedestrian" xtl="413" ytl="112" xbr="483" ybr="212"> </box> <box label="Pedestrian" xtl="585" ytl="80" xbr="646" ybr="215"> </box> <box label="Pedestrian" xtl="635" ytl="94" xbr="688" ybr="212"> </box> <box label="Pedestrian" xtl="422" ytl="85" xbr="469" ybr="210"> </box> <box label="Pedestrian" xtl="457" ytl="93" xbr="520" ybr="213"> </box> <box label="Pedestrian" xtl="505" ytl="101" xbr="548" ybr="206"> </box> <box label="Pedestrian" xtl="367" ytl="107" xbr="403" ybr="212"> </box> </image> </annotations>
Apache-2.0
docs/source/recipes/convert_datasets.ipynb
pixta-dev/fiftyone
Cleanup

You can clean up the files generated by this recipe by running the command below:
rm -rf /tmp/fiftyone
_____no_output_____
Apache-2.0
docs/source/recipes/convert_datasets.ipynb
pixta-dev/fiftyone
Session 1 Homework Solution
=========================

This is the homework for the first session of the MolSSI Python Scripting Level 2 Workshop. This homework is intended to give you practice with the material covered in the first session.

Goals:
- Utilize pandas to read in and work with the data in a csv file.
- Utilize matplotlib to create plots and subplots of data stored in a pandas dataframe.
- Utilize pandas to extract specific information from a dataframe.
- Utilize matplotlib to construct a plot with multiple groups of data.

Exercise 1

Using the `PubChemElements_all.csv` file used during Session 1, create a plot of the Ionization Energy trend of the periodic table. The trend can be visualized by plotting the Ionization Energy of each element against its Atomic Number. HINT: Use pandas to read in the csv file and plot the data using matplotlib.

Exercise 2

Create a pair of subplots comparing the Ionization Energy and Electronegativity trends. The Electronegativity trend can be plotted in the same way as the Ionization Energy, by plotting the Electronegativity of each element against its Atomic Number.

Exercise 3

Create a function that will assign a color coding to a particular Standard State of an element, i.e. red for gases, blue for solids, etc. Use the apply function from pandas to apply the function across the periodic table, creating a column of assigned colors. Create a pair of subplots utilizing the assigned color as the color of the marker:
- Atomic Mass against the Melting Point of each element
- Ionization Energy against the Melting Point of each element

Exercise 1 Solution
# Import necessary packages for the homework: import os import pandas as pd import matplotlib.pyplot as plt # %matplotlib notebook # Create a filepath to the periodic table csv file. file_path = os.path.join("data", "PubChemElements_all.csv") # Use pandas to read the csv file into a table. df = pd.read_csv(file_path) # Print the first 5 rows of the table to get a quick glance at its contents. df.head() # Create a simple scatter plot of the Ionization Energy vs the Atomic number of each element. ion_fig, ion_ax = plt.subplots() ion_ax.scatter('AtomicNumber', 'IonizationEnergy', data=df) ion_ax.set_xlabel('Atomic Number') ion_ax.set_ylabel('Ionization Energy')
_____no_output_____
BSD-3-Clause
book/homework_1_solutions.ipynb
janash/python-analysis
Exercise 2 Solution
# Create a set of subplots of the two trends: Ionization Energy and Electronegativity. comparison_fig, comparison_ax = plt.subplots(1, 2) # Add the first subplot. comparison_ax[0].scatter('AtomicNumber', 'IonizationEnergy', data=df) comparison_ax[0].set_xlabel('Atomic Number') comparison_ax[0].set_ylabel('Ionization') comparison_fig.tight_layout() # Add the second subplot. comparison_ax[1].scatter('AtomicNumber', 'Electronegativity', data=df) comparison_ax[1].set_xlabel('Atomic Number') comparison_ax[1].set_ylabel('Electronegativity') comparison_fig.tight_layout() comparison_fig
_____no_output_____
BSD-3-Clause
book/homework_1_solutions.ipynb
janash/python-analysis
Exercise 3 Solution
# Determine possible states stored in the Dataframe states = pd.unique(df['StandardState']) states # Create a function that returns a different color for each type of standard state. def assign_state_color(standard_state): state_markers = {'Gas': 'r', 'Solid': 'b', 'Liquid': 'g', 'Expected to be a Solid': 'y', 'Expected to be a Gas': 'k'} return state_markers[standard_state] # Apply the function to the dataframe, creating a new column. df['StateMarker'] = df['StandardState'].apply(assign_state_color) # Create a plot that uses the colors assigned to the table of the Atomic Mass vs the Melting Point. state_fig, state_ax = plt.subplots(1, 2, figsize=(8,4)) state_ax[0].set_xlabel('Atomic Mass') state_ax[0].set_ylabel('Melting Point') # Add each state as a separate scatter to the same subplot. for state in states: dataframe = df[df['StandardState'] == state] line = state_ax[0].scatter('AtomicMass', 'MeltingPoint', data=dataframe, color=dataframe.iloc[0]['StateMarker']) line.set_label(state) state_ax[1].set_xlabel('Ionization Energy') state_ax[1].set_ylabel('Melting Point') for state in states: dataframe = df[df['StandardState'] == state] line = state_ax[1].scatter('IonizationEnergy', 'MeltingPoint', data=dataframe, color=dataframe.iloc[0]['StateMarker']) line.set_label(state) # Create a legend state_ax[1].legend(states) state_fig.tight_layout()
_____no_output_____
BSD-3-Clause
book/homework_1_solutions.ipynb
janash/python-analysis
Generator function for Armstrong numbers
def genFunc():
    start = 1
    end = 1000
    for i in range(start, end + 1):
        # only consider numbers with more than one digit
        if i >= 10:
            order = len(str(i))  # number of digits
            sum = 0
            temp = i
            # sum each digit raised to the power of the digit count
            while temp > 0:
                dig = temp % 10
                sum += dig ** order
                temp //= 10
            # an Armstrong number equals that sum
            if i == sum:
                yield i

for x in genFunc():
    print(x)
153 370 371 407
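An equivalent, more compact check (an alternative sketch, not a fix of the generator above) that treats the number as a string of digits; it reproduces the same output for 10 to 1000:

```
def is_armstrong(n):
    digits = str(n)
    return n == sum(int(d) ** len(digits) for d in digits)

print([n for n in range(10, 1001) if is_armstrong(n)])  # [153, 370, 371, 407]
```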
Apache-2.0
Assignment Day 9 q2.ipynb
gopi2650/letsupgrade-python
define categorical / numeric columns
d.hist(figsize=[20, 20], sharey=False, bins=50) d.hist(figsize=[20, 20], sharey=True) print() col_identity = {'ignore': ['accident_id','provider_and_id','provider_code'], 'numeric' : ['license_acquiring_date', 'accident_year','accident_month'], 'category' : ['age_group', 'sex', 'vehicle_type', 'safety_measures', 'population_type', 'home_region', 'home_district', 'home_natural_area', 'home_municipal_status', 'home_residence_type', 'medical_type', 'safety_measures_use', 'car_id', 'involve_id']} col_target = 'late_deceased' if not 'data' in globals(): data_fname = 'data/involved_hebrew_dummies.parquet' if os.path.isfile(data_fname): data = pd.read_parquet(data_fname) else: rel_cols = col_identity['category'] + col_identity['numeric'] data = d[col_identity['numeric']].fillna(-1) data['license_acquiring_date'] = data['license_acquiring_date'].replace(0, -1) dummies = pd.get_dummies(d[col_identity['category']], columns=col_identity['category'], prefix_sep='==') data = pd.concat([dummies, data], axis=1) data.to_parquet(data_fname) data.head() y = d[col_target].fillna(0) y[y == 2] = 1 y = y.astype(np.int16).values x = data.fillna(-1).values y.sum(), y.shape[0], x.shape
_____no_output_____
MIT
datascience/2018_10_27_anyway_data_trial_1.ipynb
neuhofmo/anyway_projects
balance the data and train a forest of decision trees
from sklearn.model_selection import train_test_split from imblearn.ensemble import BalancedRandomForestClassifier # !!!!!! balanced! from sklearn.metrics import f1_score, precision_score, recall_score, balanced_accuracy_score del df X_train, X_test, y_train, y_test = train_test_split(x, y, random_state=42, test_size=0.25) print(X_train.shape, X_test.shape, y_train.shape, y_test.shape) print("%.4f, %.4f"%(y_train.sum() / len(y_train), y_test.sum() / len(y_test))) brf = BalancedRandomForestClassifier(n_estimators=30, random_state=0) brf.fit(X_train, y_train) y_pred = brf.predict(X_test) print('f1 score = %.3f'%(f1_score(y_test, y_pred, average='weighted'))) print('precision = %.3f'%(precision_score(y_test, y_pred, average='weighted'))) print('recall = %.3f'%(recall_score(y_test, y_pred, average='weighted'))) print('accuracy(balanced) = %.3f'%(balanced_accuracy_score(y_test, y_pred))) importance = pd.DataFrame(list(zip(data.columns, brf.feature_importances_)), columns=['feature', 'importance']).sort_values('importance', ascending=False) importance[0:20].style.bar() if not os.path.isdir('models'): os.mkdir('models') from sklearn.externals import joblib joblib.dump(brf, 'models/2018_10_28_death_risk_balanced_RF_classifier_01.joblib') import lime #eplain? #see http://explained.ai/rf-importance/index.html from sklearn.metrics import f1_score, precision_score, recall_score, balanced_accuracy_score def metric(model, x, y): return balanced_accuracy_score(y, model.predict(x)) def permutation_importances(model, X_train, y_train): baseline = metric(model, X_train, y_train) imp = [] for col in X_train.columns: save = X_train[col].copy() X_train[col] = np.random.permutation(X_train[col]) m = metric(model, X_train, y_train) X_train[col] = save imp.append(baseline - m) return np.array(imp) #permutation_importances(brf, data, y_train) d.groupby('accident_year').count()['accident_id'].plot(kind='bar')
_____no_output_____
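The `permutation_importances` helper above is defined but never called, and it indexes columns by name, so it expects a DataFrame rather than the numpy arrays used for training. A hedged usage sketch that re-creates the same split while keeping the DataFrame columns (same `random_state` and `test_size`, so the rows match `X_test`):

```
# DataFrame version of the split (rows match X_train/X_test above)
Xdf_train, Xdf_test, ydf_train, ydf_test = train_test_split(
    data.fillna(-1), y, random_state=42, test_size=0.25)

# pass a copy because the helper shuffles columns in place
perm_imp = permutation_importances(brf, Xdf_test.copy(), ydf_test)

perm_importance = pd.DataFrame(
    {'feature': data.columns, 'importance': perm_imp}
).sort_values('importance', ascending=False)
perm_importance.head(10)
```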
MIT
datascience/2018_10_27_anyway_data_trial_1.ipynb
neuhofmo/anyway_projects
Data Support
fan_smoothing_window = 60  # time width of the smoothing window

def load_df(df_name):
    df = pd.read_csv(
        df_name, usecols=[0, 1, 2, 3, 4, 5, 6]
    )
    df.rename(index=str, columns={  # remove units for easier indexing
        'Time (s)': 'Time',
        'Temperature': 'Thermistor',
        'Error (deg C)': 'Error',
        'Setpoint Reached': 'Reached'
    }, inplace=True)
    df.dropna(how='all', inplace=True)
    df.index = pd.to_timedelta(df['Time'], unit='s')  # set index to units of seconds
    df.Time = df.Time / 60  # set Time to units of minutes
    return df

def smooth_fan(df):
    df['FanSmooth'] = df.Fan.rolling(fan_smoothing_window, win_type='hamming').mean()
_____no_output_____
BSD-3-Clause
results/Temperature Control Plots.ipynb
ethanjli/punchcard-microfluidics
Plotting Support
figure_width = 17.5 figure_temps_height = 4 figure_complete_height = 7.5 figure_complete_height_ratio = (3, 2) box_width_shrink_factor = 0.875 # to fit the figure legend on the right ylabel_position = -0.08 min_temp = 20 max_temp = 100 legend_location = 'center right' reached_color = 'gainsboro' # light gray setpoint_color = 'tab:green' thermistor_color = 'tab:orange' fan_color = 'tab:blue' heater_color = 'tab:red' def fig_temps(title): (fig, ax_temp) = plt.subplots( figsize=(figure_width, figure_temps_height) ) ax_temp.set_title(title) return (fig, ax_temp) def fig_complete(title): (fig, (ax_temp, ax_duties)) = plt.subplots( nrows=2, sharex=True, gridspec_kw={ 'height_ratios': figure_complete_height_ratio }, figsize=(figure_width, figure_complete_height) ) ax_temp.set_title(title) return (fig, (ax_temp, ax_duties)) def plot_setpoint_reached(df, ax, label=True): legend_label = 'Reached\nSetpoint' if not label: legend_label = '_' + legend_label # hide the label from the legend (groups, _) = ndi.label(df.Reached.values.tolist()) df = pd.DataFrame({ 'Time': df.Time, 'ReachedGroup': groups }) result = ( df .loc[df.ReachedGroup != 0] .groupby('ReachedGroup')['Time'] .agg(['first', 'last']) ) for (i, (group_start, group_end)) in enumerate(result.values.tolist()): ax.axvspan(group_start, group_end, facecolor=reached_color, label=legend_label) if i == 0: legend_label = '_' + legend_label # hide subsequent labels from the legend def plot_temps(df, ax, label_x=True): ax.plot(df.Time, df.Setpoint, color=setpoint_color, label='Setpoint') ax.plot(df.Time, df.Thermistor, color=thermistor_color, label='Thermistor') ax.set_xlim([df.Time[0], df.Time[-1]]) ax.set_ylim([min_temp, max_temp]) if label_x: ax.set_xlabel('Time (min)') ax.set_ylabel('Temperature\n(Β°C)') def plot_efforts(df, ax): ax.plot(df.Time, df.FanSmooth, color=fan_color, label='Fan') ax.plot(df.Time, df.Heater, color=heater_color, label='Heater') ax.set_xlabel('Time (min)') ax.set_ylabel('Duty\nCycle') def shrink_ax_width(ax, shrink_factor): box = ax.get_position() ax.set_position([box.x0, box.y0, box.width * shrink_factor, box.height]) def fig_plot_complete(df, title): (fig, (ax_temp, ax_duties)) = fig_complete(title) plot_setpoint_reached(df, ax_temp) plot_temps(df, ax_temp, label_x=False) ax_temp.yaxis.set_label_coords(ylabel_position, 0.5) shrink_ax_width(ax_temp, box_width_shrink_factor) plot_setpoint_reached(df, ax_duties, label=False) plot_efforts(df, ax_duties) ax_duties.yaxis.set_label_coords(ylabel_position, 0.5) shrink_ax_width(ax_duties, box_width_shrink_factor) fig.legend(loc=legend_location) def fig_plot_temps(df, title): (fig, ax_temp) = fig_temps(title) plot_setpoint_reached(df, ax_temp) plot_temps(df, ax_temp) ax_temp.yaxis.set_label_coords(ylabel_position, 0.5) shrink_ax_width(ax_temp, box_width_shrink_factor) fig.legend(loc=legend_location)
_____no_output_____
BSD-3-Clause
results/Temperature Control Plots.ipynb
ethanjli/punchcard-microfluidics
Stepwise Sequence
df_stepwise = load_df('20190117 Thermal Subsystem Testing Data - Fifth Test.csv') smooth_fan(df_stepwise) df_stepwise fig_plot_temps(df_stepwise, 'Stepwise Adjustment Control Sequence') plt.savefig('stepwise_control.pdf', format='pdf') plt.savefig('stepwise_control.png', format='png') fig_plot_complete(df_stepwise, 'Stepwise Control Sequence')
_____no_output_____
BSD-3-Clause
results/Temperature Control Plots.ipynb
ethanjli/punchcard-microfluidics
Lysis Sequence
df_lysis = load_df('20190117 Thermal Subsystem Testing Data - Fourth Test.csv') smooth_fan(df_lysis) df_lysis fig_plot_temps(df_lysis, 'Thermal Lysis Control Sequence') fig_plot_complete(df_lysis, 'Thermal Lysis Control Sequence') plt.savefig('thermal_lysis.pdf', format='pdf') plt.savefig('thermal_lysis.png', format='png')
'HelveticaNeueLTStd_Md.otf' can not be subsetted into a Type 3 font. The entire font will be embedded in the output. 'HelveticaNeueLTStd_Roman.otf' can not be subsetted into a Type 3 font. The entire font will be embedded in the output. 'HelveticaNeueLTStd_Md.otf' can not be subsetted into a Type 3 font. The entire font will be embedded in the output. 'HelveticaNeueLTStd_Roman.otf' can not be subsetted into a Type 3 font. The entire font will be embedded in the output.
BSD-3-Clause
results/Temperature Control Plots.ipynb
ethanjli/punchcard-microfluidics
**Table of Contents:**
* Introduction
* The RMS Titanic
* Import Libraries
* Getting the Data
* Data Exploration/Analysis
* Data Preprocessing
    - Missing Data
    - Converting Features
    - Creating Categories
    - Creating new Features
* Building Machine Learning Models
    - Training 8 different models
    - Which is the best model?
    - K-Fold Cross Validation
* Random Forest
    - What is Random Forest?
    - Feature importance
    - Hyperparameter Tuning
* Further Evaluation
    - Confusion Matrix
    - Precision and Recall
    - F-Score
    - Precision Recall Curve
    - ROC AUC Curve
    - ROC AUC Score
* Submission
* Summary

**Introduction**

In this kernel I will go through the whole process of creating a machine learning model on the famous Titanic dataset, which is used by many people all over the world. It provides information on the fate of passengers on the Titanic, summarized according to economic status (class), sex, age and survival. In this challenge, we are asked to predict whether a passenger on the Titanic survived or not.

**The RMS Titanic**

RMS Titanic was a British passenger liner that sank in the North Atlantic Ocean in the early morning hours of 15 April 1912, after it collided with an iceberg during its maiden voyage from Southampton to New York City. There were an estimated 2,224 passengers and crew aboard the ship, and more than 1,500 died, making it one of the deadliest commercial peacetime maritime disasters in modern history. The RMS Titanic was the largest ship afloat at the time it entered service and was the second of three Olympic-class ocean liners operated by the White Star Line. The Titanic was built by the Harland and Wolff shipyard in Belfast. Thomas Andrews, her architect, died in the disaster.

![Titanic](http://titanic2ship.com/wp-content/uploads/2013/10/ColorPlans-CyrilCodus-LG.jpg)

**Import Libraries**
# linear algebra import numpy as np # data processing import pandas as pd # data visualization import seaborn as sns %matplotlib inline from matplotlib import pyplot as plt from matplotlib import style # Algorithms from sklearn import linear_model from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.linear_model import Perceptron from sklearn.linear_model import SGDClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC, LinearSVC from sklearn.naive_bayes import GaussianNB
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
**Getting the Data**
test_df = pd.read_csv("../input/test.csv") train_df = pd.read_csv("../input/train.csv")
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
**Data Exploration/Analysis**
train_df.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 891 entries, 0 to 890 Data columns (total 12 columns): PassengerId 891 non-null int64 Survived 891 non-null int64 Pclass 891 non-null int64 Name 891 non-null object Sex 891 non-null object Age 714 non-null float64 SibSp 891 non-null int64 Parch 891 non-null int64 Ticket 891 non-null object Fare 891 non-null float64 Cabin 204 non-null object Embarked 889 non-null object dtypes: float64(2), int64(5), object(5) memory usage: 83.6+ KB
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
**The training-set has 891 examples and 11 features + the target variable (survived)**. 2 of the features are floats, 5 are integers and 5 are objects. Below I have listed the features with a short description:

    survival:    Survival
    PassengerId: Unique Id of a passenger
    pclass:      Ticket class
    sex:         Sex
    Age:         Age in years
    sibsp:       # of siblings / spouses aboard the Titanic
    parch:       # of parents / children aboard the Titanic
    ticket:      Ticket number
    fare:        Passenger fare
    cabin:       Cabin number
    embarked:    Port of Embarkation
train_df.describe()
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Above we can see that **38% of the training set survived the Titanic**. We can also see that passenger ages range from 0.4 to 80. On top of that, we can already detect some features that contain missing values, like the 'Age' feature.
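A quick check of the 38% figure quoted above (a small addition, not in the original kernel):

```
# mean of the 0/1 Survived column = overall survival rate
print(f"Overall survival rate: {train_df['Survived'].mean():.1%}")  # ~38.4%
```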
train_df.head(15)
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
From the table above, we can note a few things. First of all, we **need to convert a lot of features into numeric ones** later on, so that the machine learning algorithms can process them. Furthermore, we can see that the **features have widely different ranges**, which we will need to bring onto roughly the same scale. We can also spot some more features that contain missing values (NaN = not a number), which we need to deal with.

**Let's take a more detailed look at what data is actually missing:**
total = train_df.isnull().sum().sort_values(ascending=False) percent_1 = train_df.isnull().sum()/train_df.isnull().count()*100 percent_2 = (round(percent_1, 1)).sort_values(ascending=False) missing_data = pd.concat([total, percent_2], axis=1, keys=['Total', '%']) missing_data.head(5)
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
The Embarked feature has only 2 missing values, which can easily be filled. It will be much trickier to deal with the 'Age' feature, which has 177 missing values. The 'Cabin' feature needs further investigation, but it looks like we might want to drop it from the dataset, since 77% of it is missing.
train_df.columns.values
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
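One straightforward way to act on these observations is sketched below: fill the two missing 'Embarked' values with the most common port, impute 'Age' with the median, and drop 'Cabin'. This is only an illustration of the idea on a throwaway copy; the notebook applies its own, more careful treatment of 'Age' and 'Cabin' later on.

# illustrative handling of the missing values discussed above (not the final preprocessing)
tmp = train_df.copy()
tmp['Embarked'] = tmp['Embarked'].fillna(tmp['Embarked'].mode()[0])
tmp['Age'] = tmp['Age'].fillna(tmp['Age'].median())
tmp = tmp.drop('Cabin', axis=1)   # 77% missing, so dropping is one option
print(tmp.isnull().sum().sum())   # 0 for this simplified copy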
Above you can see the 11 features + the target variable (survived). **What features could contribute to a high survival rate ?** To me it would make sense if everything except 'PassengerId', 'Ticket' and 'Name' were correlated with a high survival rate.

**1. Age and Sex:**
survived = 'survived'
not_survived = 'not survived'
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10, 4))
women = train_df[train_df['Sex']=='female']
men = train_df[train_df['Sex']=='male']
ax = sns.distplot(women[women['Survived']==1].Age.dropna(), bins=18, label=survived, ax=axes[0], kde=False)
ax = sns.distplot(women[women['Survived']==0].Age.dropna(), bins=40, label=not_survived, ax=axes[0], kde=False)
ax.legend()
ax.set_title('Female')
ax = sns.distplot(men[men['Survived']==1].Age.dropna(), bins=18, label=survived, ax=axes[1], kde=False)
ax = sns.distplot(men[men['Survived']==0].Age.dropna(), bins=40, label=not_survived, ax=axes[1], kde=False)
ax.legend()
_ = ax.set_title('Male')
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
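To quantify the age effect visible in the histograms above, here is a hedged sketch that bins 'Age' and computes the survival rate per band and sex (the bin edges are an illustrative choice, not the grouping used later in the notebook):

# survival rate per (illustrative) age band and sex; rows with missing Age are excluded
age_bands = pd.cut(train_df['Age'], bins=[0, 5, 14, 18, 30, 40, 80])
print(train_df.groupby([age_bands, 'Sex'])['Survived'].mean().unstack())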
You can see that men have a high probability of survival when they are between 18 and 30 years old, which is also somewhat true for women, though less pronounced. For women the survival chances are higher between 14 and 40. For men the probability of survival is very low between the ages of 5 and 18, but that isn't true for women. Another thing to note is that infants also have a slightly higher probability of survival. Since there seem to be **certain ages which have increased odds of survival**, and because I want every feature to be roughly on the same scale, I will create age groups later on.

**3. Embarked, Pclass and Sex:**
FacetGrid = sns.FacetGrid(train_df, row='Embarked', size=4.5, aspect=1.6)
FacetGrid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette=None, order=None, hue_order=None)
FacetGrid.add_legend()
/opt/conda/lib/python3.6/site-packages/seaborn/axisgrid.py:230: UserWarning: The `size` paramter has been renamed to `height`; please update your code.
  warnings.warn(msg, UserWarning)
/opt/conda/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
  return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
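The patterns in the point plots above can be cross-checked numerically; a small sketch (shown for orientation only, rows with missing Embarked are excluded):

# mean survival rate per port of embarkation and sex
print(train_df.groupby(['Embarked', 'Sex'])['Survived'].mean().unstack())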
Embarked seems to be correlated with survival, depending on the gender. Women who embarked at port Q or port S have a higher chance of survival; the inverse is true if they embarked at port C. Men have a high survival probability if they embarked at port C, but a low probability if they embarked at port Q or S. Pclass also seems to be correlated with survival. We will generate another plot of it below.

**4. Pclass:**
sns.barplot(x='Pclass', y='Survived', data=train_df)
/opt/conda/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
  return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
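The bar plot can be backed up with the exact class-wise survival rates; a short sketch:

# mean survival rate and passenger count per ticket class
print(train_df.groupby('Pclass')['Survived'].agg(['mean', 'count']))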