Columns: markdown, code, output, license, path, repo_name
How does it perform on a different distribution?
n = 1000 X, y = datasets.make_circles(n_samples=n, shuffle=True, noise=0.05, random_state=None, factor = 0.4) plt.scatter(X[:,0], X[:,1]) # k-means fails km = KMeans(n_clusters = 2) km.fit(X) plt.scatter(X[:,0], X[:,1], c = km.predict(X))
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
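For context, `spectral_clustering(X, epsilon)` is defined earlier in the notebook; below is a minimal, hedged sketch of the kind of epsilon-neighborhood similarity matrix such a function typically starts from (the names here are illustrative, not the notebook's own):

```python
import numpy as np
from sklearn.metrics import pairwise_distances

def epsilon_similarity_matrix(X, epsilon):
    # A[i, j] = 1 when points i and j are within distance epsilon of each other.
    A = (pairwise_distances(X) < epsilon).astype(float)
    np.fill_diagonal(A, 0)  # no self-loops
    return A
```

When `epsilon` is too small, some rows of the matrix can be all zeros (isolated points), making the degree matrix singular; this is one plausible source of the singularity issues mentioned below.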
By adjusting the distance parameter `epsilon`, we can find a way to cluster the two circular blobs. We do run into singularity issues for some values of `epsilon`, but otherwise the results are plotted below.
fig, axs = plt.subplots(11, figsize=(8,50)) for i in range(3,11): epsilon = i/10 try: axs[i].scatter(X[:,0], X[:,1], c = spectral_clustering(X,epsilon)) axs[i].set_title(label = "epsilon = " + str(epsilon)) except: print("Error when epsilon = ", epsilon)
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
Topic: Challenge Set 1 (MTA Subway Turnstile Data) - Subject: Explore MTA turnstile data - Date: 09/29/2018 - Name: Brenner Heintz
import pandas as pd import numpy as np import random import itertools import calendar import datetime as dt import seaborn as sns import matplotlib.pyplot as plt import matplotlib.dates as mdates %matplotlib inline %xmode import matplotlib.style as style style.use('fivethirtyeight') sns.set_context('notebook', font_scale=1.2) %config InlineBackend.figure_format = 'svg'
_____no_output_____
MIT
weekly_challenges/challenge_set_1_heintz.ipynb
athena15/metis
Downloaded data from: http://web.mta.info/developers/turnstile.html
Downloaded 3 weeks of data:
- Saturday, September 22, 2018
- Saturday, September 15, 2018
- Saturday, September 08, 2018
Documentation at: http://web.mta.info/developers/resources/nyct/turnstile/ts_Field_Description.txt
Map of the MTA system: http://web.mta.info/maps/submap.html
**Challenge 1**
Open up a new IPython notebook. Download a few MTA turnstile data files. Open up a file, use csv reader to read it, and ensure there is a column for each feature (C/A, UNIT, SCP, STATION). These are the first four columns.
df1 = pd.read_csv('http://web.mta.info/developers/data/nyct/turnstile/turnstile_180908.txt') df2 = pd.read_csv('http://web.mta.info/developers/data/nyct/turnstile/turnstile_180915.txt') df3 = pd.read_csv('http://web.mta.info/developers/data/nyct/turnstile/turnstile_180922.txt') frames = [df1, df2, df3] df = pd.concat(frames) df.info(2) df.head(2)
_____no_output_____
MIT
weekly_challenges/challenge_set_1_heintz.ipynb
athena15/metis
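The challenge wording also mentions checking the columns with Python's built-in csv reader. A minimal sketch of that check, reusing one of the data URLs above (only the header row is actually needed):

```python
import csv
import urllib.request

# Read the header row of one turnstile file with the plain csv reader.
url = 'http://web.mta.info/developers/data/nyct/turnstile/turnstile_180908.txt'
with urllib.request.urlopen(url) as resp:
    header = next(csv.reader(resp.read().decode('utf-8').splitlines()))
print(header[:4])  # expected: ['C/A', 'UNIT', 'SCP', 'STATION']
```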
**Challenge 2**
"Let's turn this into a time series. Create a new column that specifies the date and time of each entry."
df['DATE'] = pd.to_datetime(df['DATE'], format='%m/%d/%Y') # df['DATETIME'] = pd.to_datetime(df.DATE + ' ' + df.TIME, format='%m/%d/%Y')
_____no_output_____
MIT
weekly_challenges/challenge_set_1_heintz.ipynb
athena15/metis
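The cell above converts `DATE` but leaves the combined date-and-time column commented out (with a date-only format string that would not parse the time). A minimal sketch of one way to build that column, assuming it is run before `DATE` is converted to a datetime, i.e. while both columns are still strings:

```python
# Combine the DATE and TIME string columns into a single DATETIME column.
df['DATETIME'] = pd.to_datetime(df['DATE'] + ' ' + df['TIME'],
                                format='%m/%d/%Y %H:%M:%S')
```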
**Challenge 3**
These counts are for every n hours. (What is n?) We want total daily entries.
df['STATION_KEY'] = df['C/A'] + ' ' + df['UNIT'] + ' ' + df['STATION'] df['EXITS'] = df['EXITS '] df.drop('EXITS ', axis=1, inplace=True) # Reset index because index was duplicated on all 3 original dataframes df.reset_index(inplace=True) df['ENTRY_DIFFS'] = df.groupby(['STATION_KEY','SCP'])['ENTRIES'].diff(periods=-1)*-1 df['EXIT_DIFFS'] = df.groupby(['STATION_KEY','SCP'])['EXITS'].diff(periods=-1)*-1 df['TOTAL'] = df['ENTRY_DIFFS'] + df['EXIT_DIFFS'] df = df[(df['ENTRY_DIFFS'] < 2E5) & (df['ENTRY_DIFFS'] > 0) & (df['EXIT_DIFFS'] < 2E5) & (df['EXIT_DIFFS'] > 0)] df.head(1) df.groupby(['STATION_KEY', 'SCP','DATE'])['ENTRY_DIFFS'].sum()
_____no_output_____
MIT
weekly_challenges/challenge_set_1_heintz.ipynb
athena15/metis
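One quick, hedged way to answer the "what is n?" question empirically is to look at when the audits are recorded; the most common `TIME` stamps fall on four-hour marks, suggesting n is roughly 4:

```python
# The raw counts are cumulative register values reported at (roughly) fixed audit times;
# the most frequent TIME values show readings about every four hours.
print(df['TIME'].value_counts().head(8))
```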
**Challenge 4**
Now plot the daily time series for a turnstile.
x = df.groupby(['STATION_KEY', 'SCP','DATE'])['ENTRY_DIFFS'].sum() x = pd.DataFrame(x) x.reset_index(inplace=True) x.head(2) x_values = x[(x['SCP']=='02-00-00') & (x['STATION_KEY']=='A002 R051 59 ST')]['DATE'] y_values = x[(x['SCP']=='02-00-00') & (x['STATION_KEY']=='A002 R051 59 ST')]['ENTRY_DIFFS'] y_values = y_values.astype(int) fig, ax = plt.subplots() fig.set_size_inches(8,4) fig.autofmt_xdate() ax.xaxis.set_major_locator(mdates.WeekdayLocator()) ax.xaxis.set_major_formatter(mdates.DateFormatter('%b. %d')) ax.set_xlabel('Date') ax.set_ylabel('Daily Traffic') ax.set_title('Daily Turnstile Entries') plt.tight_layout() plt.plot(x_values,y_values, linewidth=1.5, color='r')
_____no_output_____
MIT
weekly_challenges/challenge_set_1_heintz.ipynb
athena15/metis
**Challenge 5**
We want to combine the numbers together -- for each ControlArea/UNIT/STATION combo, for each day, add the counts from each turnstile belonging to that combo.
df.head(3) df['UNIT'].nunique() df.groupby(['C/A', 'UNIT', 'SCP', 'DATE']).sum()
_____no_output_____
MIT
weekly_challenges/challenge_set_1_heintz.ipynb
athena15/metis
**Challenge 6**
Similarly, combine everything in each station: come up with a [(date1, count1), (date2, count2), ...] time series for each STATION by adding up all the turnstiles in a station.
station_df = df.groupby(['STATION', 'DATE']).sum() station_df.reset_index(inplace=True)
_____no_output_____
MIT
weekly_challenges/challenge_set_1_heintz.ipynb
athena15/metis
**Challenge 7**
Plot the time series (either daily or your preferred level of granularity) for a station.
x_values = station_df[station_df['STATION'] == '1 AV']['DATE'] y_values = station_df[station_df['STATION'] == '1 AV']['TOTAL'] y_values = y_values.astype(int) fig, ax = plt.subplots() fig.set_size_inches(8,4) fig.autofmt_xdate() ax.xaxis.set_major_locator(mdates.WeekdayLocator()) ax.xaxis.set_major_formatter(mdates.DateFormatter('%b. %d')) ax.set_xlabel('Date') ax.set_ylabel('Daily Traffic') ax.set_title('Daily Turnstile Entries (1st Ave Station)') # plt.tight_layout() dates = mdates.date2num(x_values) plt.plot_date(dates, y_values, fmt='-', color='purple', linewidth=2);
_____no_output_____
MIT
weekly_challenges/challenge_set_1_heintz.ipynb
athena15/metis
**Challenge 8**
Select a station and find the total daily counts for this station. Then plot those daily counts for each week separately. To clarify: if I have 10 weeks of data on the 28th St 6 station, I will add 10 lines to the same figure (e.g. running plt.plot(week_count_list) once for each week). Each plot will have 7 points of data.
fig, ax = plt.subplots() fig.set_size_inches(8,4) fig.autofmt_xdate() ax.xaxis.set_major_locator(mdates.WeekdayLocator()) ax.xaxis.set_major_formatter(mdates.DateFormatter('%b. %d')) ax.set_xlabel('Date') ax.set_ylabel('Daily Entries') ax.set_title('Daily Turnstile Entries (First Ave Station)') plt.plot(x_values[:8],y_values[:8], linewidth=1.5, color='r') plt.plot(x_values[8:16],y_values[8:16], linewidth=1.5, color='g') plt.plot(x_values[16:24],y_values[16:24], linewidth=1.5, color='b') plt.legend(labels=['Week 1', 'Week 2', 'Week 3'], loc='best')
_____no_output_____
MIT
weekly_challenges/challenge_set_1_heintz.ipynb
athena15/metis
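The cell above hard-codes the three weekly index slices. A more general sketch, assuming the `x_values`/`y_values` series from Challenge 7 are still in scope and that the installed pandas supports `Series.dt.isocalendar()` (pandas >= 1.1), loops over calendar weeks instead:

```python
# Plot one line per ISO calendar week instead of using hard-coded index slices.
weekly = pd.DataFrame({'date': x_values.values, 'entries': y_values.values})
weekly['week'] = pd.to_datetime(weekly['date']).dt.isocalendar().week
fig, ax = plt.subplots(figsize=(8, 4))
for week, grp in weekly.groupby('week'):
    ax.plot(range(len(grp)), grp['entries'], linewidth=1.5, label='Week {}'.format(week))
ax.set_xlabel('Day within week')
ax.set_ylabel('Daily Entries')
ax.legend(loc='best')
```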
**Challenge 9**
Over multiple weeks, sum total ridership for each station and sort them, so you can find out the stations with the highest traffic during the time you investigate.
total_ridership_counts = df.groupby('STATION').sum() total_ridership_counts.reset_index(inplace=True) total_ridership_counts.head(3)
_____no_output_____
MIT
weekly_challenges/challenge_set_1_heintz.ipynb
athena15/metis
**Challenge 10**
Make a single list of these total ridership values and plot it with plt.hist(total_ridership_counts).
y_vals = total_ridership_counts['TOTAL'] fig, ax = plt.subplots() fig.set_size_inches(8,4) ax.set_xlabel('Entries') ax.set_ylabel('Number of Stations') ax.set_title('Histogram of Total Entries') ax.set_xlim(0,3000000) plt.ticklabel_format(style='plain', axis='x') plt.hist(y_vals, bins=30);
_____no_output_____
MIT
weekly_challenges/challenge_set_1_heintz.ipynb
athena15/metis
Notebook 1: Homology matrix generation from genome sequences
In this notebook, I will be applying the notebooks accompanying the paper by Norsigian et al., 2020 (doi:10.1038/s41596-019-0254-3). I will apply this to the P. thermo model we've been working on and to the M10EXG strain that we used to validate our model. This is the first notebook in the tutorial, used to create a homology matrix from genome sequences. There are four major steps in this notebook:
1. Download the genome annotations (GenBank files) from NCBI, and generate fasta files (protein & nucleotide) from them
2. Perform BLASTp to find homologous proteins in strains of interest
3. Use best bidirectional hits to create a gene presence/absence matrix
4. Supplementary for best practice: use BLASTn to check if we have missed any unannotated open reading frames, and retain these genes in the orthology matrix as well as use them to guide future manual curation
#import packages needed import pandas as pd from glob import glob from Bio import Entrez, SeqIO import sys import cobra import decimal
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
__NOTE__: to be able to import Entrez and SeqIO, I need to change the folder name from 'bio' to 'Bio' (in C:\Users\vivmol\AppData\Local\Continuum\anaconda3\envs\g-thermo\Lib\site-packages) and then it works. So be careful whenever I install Biopython again that this needs to be fixed. Here I will be working with strains in the facultative anaerobic clade of the genus. I will also add genomes that are obligate aerobes, to see if that could highlight what changed between these species that made them become obligate aerobes.
# Load the information on the five strains we will be working with in this tutorial StrainsOfInterest=pd.read_excel('Strain Information.xlsx') StrainsOfInterest #The Reference Genome is as Described in the Base Reconstruction; here the reference is referenceStrainID='NCIMB11955' targetStrainIDs=list(StrainsOfInterest['NCBI ID'])
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
1. Download genome annotations (GenBank files) to generate fasta files
Download genomes from NCBI
Download the genome annotations (GenBank files) from NCBI for the strains of interest.
# define a function to download the annotated genebank files from NCBI def dl_genome(id, folder='genomes'): # be sure get CORRECT ID files=glob('%s/*.gb'%folder) out_file = '%s/%s.gb'%(folder, id) if out_file in files: print (out_file, 'already downloaded') return else: print ('downloading %s from NCBI'%id) from Bio import Entrez Entrez.email = "[email protected]" #Insert email here for NCBI handle = Entrez.efetch(db="nucleotide", id=id, rettype="gb", retmode="text") fout = open(out_file,'w') fout.write(handle.read()) fout.close() # execute the above function, and download the GenBank files for 8 P. thermo strains for strain in targetStrainIDs: dl_genome(strain, folder='genomes') #also download the reference strain info dl_genome(referenceStrainID, folder='genomes')
downloading CP016622.1 from NCBI
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Examine the Downloaded Strains
# define a function to gather information of the downloaded strains from the GenBank files def get_strain_info(folder='genomes'): files = glob('%s/*.gb'%folder) strain_info = [] for file in files: handle = open(file) record = SeqIO.read(handle, "genbank") for f in record.features: if f.type=='source': info = {} info['file'] = file info['id'] = file.split('\\')[-1].split('.')[0] for q in f.qualifiers.keys(): info[q] = '|'.join(f.qualifiers[q]) strain_info.append(info) return pd.DataFrame(strain_info) # information on the downloaded strain get_strain_info(folder='genomes')
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Generate FASTA files for both Protein and Nucleotide Pipelines
From the GenBank files, we can extract sequence and annotation information to generate fasta files for the protein and nucleotide analyses. The resulting fasta files will then be used in step 2 as input for BLAST.
# define a function to parse the Genbank file to generate fasta files for both protein and nucleotide sequences def parse_genome(id, type='prot', in_folder='genomes', out_folder='prots', overwrite=1): in_file = '%s/%s.gb'%(in_folder, id) out_file='%s/%s.fa'%(out_folder, id) files =glob('%s/*.fa'%out_folder) if out_file in files and overwrite==0: print (out_file, 'already parsed') return else: print ('parsing %s'%id) handle = open(in_file) fout = open(out_file,'w') x = 0 records = SeqIO.parse(handle, "genbank") for record in records: for f in record.features: if f.type=='CDS': seq=f.extract(record.seq) if type=='nucl': seq=str(seq) else: seq=str(seq.translate()) if 'locus_tag' in f.qualifiers.keys(): locus = f.qualifiers['locus_tag'][0] elif 'gene' in f.qualifiers.keys(): locus = f.qualifiers['gene'][0] else: locus = 'gene_%i'%x x+=1 fout.write('>%s\n%s\n'%(locus, seq)) fout.close() # Generate fasta files for 5 strains of interest for strain in targetStrainIDs: parse_genome(strain, type='prot', in_folder='genomes', out_folder='prots') parse_genome(strain, type='nucl', in_folder='genomes', out_folder='nucl') #Also generate fasta files for the reference strain parse_genome(referenceStrainID, type='nucl', in_folder='genomes', out_folder='nucl') parse_genome(referenceStrainID, type='prots', in_folder='genomes', out_folder='prots')
parsing NCIMB11955 parsing NCIMB11955
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
2. Perform BLAST to find homologous proteins in strains of interest
Make a BLAST DB for each of the target strains for both Protein and Nucleotide Pipelines
In this tutorial, we will run both BLASTp for proteins and BLASTn for nucleotides. BLASTp will be used as the main approach to identify homologous proteins in the reference strain and the other strains of interest, while BLASTn will be used as a supplementary method to check for any unannotated genes.
# Define a function to make blast database for either protein of nucleotide def make_blast_db(id,folder='prots',db_type='prot'): import os out_file ='%s/%s.fa.pin'%(folder, id) files =glob('%s/*.fa.pin'%folder) if out_file in files: print (id, 'already has a blast db') return if db_type=='nucl': ext='fna' else: ext='fa' cmd_line='makeblastdb -in %s/%s.%s -dbtype %s' %(folder, id, ext, db_type) print ('making blast db with following command line...') print (cmd_line) os.system(cmd_line) sys.path.append('..\\..\\..\\..\\..\\..\\Program Files\\NCBI\\blast-2.10.1+\\bin') # make protein sequence databases # Because we are performing bi-directional blast, we make databases from both reference strain and strains of interest for strain in targetStrainIDs: make_blast_db(strain,folder='prots',db_type='prot') make_blast_db(referenceStrainID,folder='prots',db_type='prot')
making blast db with following command line... makeblastdb -in prots/2501416905.fa -dbtype prot making blast db with following command line... makeblastdb -in prots/NCIMB11955.fa -dbtype prot
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Define functions to run protein BLAST and get sequence lengths
- BLASTp will be the main approach used here to identify homologous proteins between strains.
- Aside from sequence similarity, we also want to ensure the coverage of the sequence mapping is sufficient. Therefore, we need to identify the sequence length for each protein and compare it with the alignment length.
# define a function to run BLASTp def run_blastp(seq,db,in_folder='prots', out_folder='bbh', out=None,outfmt=6,evalue=0.001,threads=1): import os if out==None: out='%s/%s_vs_%s.txt'%(out_folder, seq, db) print(out) files =glob('%s/*.txt'%out_folder) if out in files: print (seq, 'already blasted') return print ('blasting %s vs %s'%(seq, db)) db = '%s/%s.fa'%(in_folder, db) seq = '%s/%s.fa'%(in_folder, seq) cmd_line='blastp -db %s -query %s -out %s -evalue %s -outfmt %s -num_threads %i' \ %(db, seq, out, evalue, outfmt, threads) print ('running blastp with following command line...') print (cmd_line) os.system(cmd_line) return out # define a function to get sequence length def get_gene_lens(query, in_folder='prots'): file = '%s/%s.fa'%(in_folder, query) handle = open(file) records = SeqIO.parse(handle, "fasta") out = [] for record in records: out.append({'gene':record.name, 'gene_length':len(record.seq)}) out = pd.DataFrame(out) return out
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
3. Use Bi-Directional BLASTp Best Hits to create gene presence/absence matrix
Obtain Bi-Directional BLASTp Best Hits
From the above BLASTp results, we can obtain bi-directional BLASTp best hits to identify homologous proteins. Note that, besides the gene similarity score, the coverage of the alignment is also used to filter the mapping results.
# define a function to get Bi-Directional BLASTp Best Hits def get_bbh(query, subject, in_folder='bbh'): #Utilize the defined protein BLAST function run_blastp(query, subject) run_blastp(subject, query) query_lengths = get_gene_lens(query, in_folder='prots') subject_lengths = get_gene_lens(subject, in_folder='prots') #Define the output file of this BLAST out_file = '%s/%s_vs_%s_parsed.csv'%(in_folder,query, subject) files=glob('%s/*_parsed.csv'%in_folder) #Combine the results of the protein BLAST into a dataframe print ('parsing BBHs for', query, subject) cols = ['gene', 'subject', 'PID', 'alnLength', 'mismatchCount', 'gapOpenCount', 'queryStart', 'queryEnd', 'subjectStart', 'subjectEnd', 'eVal', 'bitScore'] bbh=pd.read_csv('%s/%s_vs_%s.txt'%(in_folder,query, subject), sep='\t', names=cols) bbh = pd.merge(bbh, query_lengths) bbh['COV'] = bbh['alnLength']/bbh['gene_length'] bbh2=pd.read_csv('%s/%s_vs_%s.txt'%(in_folder,subject, query), sep='\t', names=cols) bbh2 = pd.merge(bbh2, subject_lengths) bbh2['COV'] = bbh2['alnLength']/bbh2['gene_length'] out = pd.DataFrame() # Filter the genes based on coverage bbh = bbh[bbh.COV>=0.25] bbh2 = bbh2[bbh2.COV>=0.25] #Delineate the best hits from the BLAST for g in bbh.gene.unique(): res = bbh[bbh.gene==g] if len(res)==0: continue best_hit = res.loc[res.PID.idxmax()] best_gene = best_hit.subject res2 = bbh2[bbh2.gene==best_gene] if len(res2)==0: continue best_hit2 = res2.loc[res2.PID.idxmax()] best_gene2 = best_hit2.subject if g==best_gene2: best_hit['BBH'] = '<=>' else: best_hit['BBH'] = '->' out=pd.concat([out, pd.DataFrame(best_hit).transpose()]) #Save the final file to a designated CSV file out.to_csv(out_file) # Execute the BLAST for each target strain against the reference strain, save results to 'bbh' i.e. "bidirectional best # hits" folder to create # homology matrix for strain in targetStrainIDs: get_bbh(referenceStrainID,strain, in_folder='bbh')
bbh/NCIMB11955_vs_2501416905.txt blasting NCIMB11955 vs 2501416905 running blastp with following command line... blastp -db prots/2501416905.fa -query prots/NCIMB11955.fa -out bbh/NCIMB11955_vs_2501416905.txt -evalue 0.001 -outfmt 6 -num_threads 1 bbh/2501416905_vs_NCIMB11955.txt blasting 2501416905 vs NCIMB11955 running blastp with following command line... blastp -db prots/NCIMB11955.fa -query prots/2501416905.fa -out bbh/2501416905_vs_NCIMB11955.txt -evalue 0.001 -outfmt 6 -num_threads 1 parsing BBHs for NCIMB11955 2501416905
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Parse the BLAST Results into one Homology Matrix of the Reconstruction Genes
For the homology matrix, we want to find, for each gene in the reference annotation, whether there is a homologous gene in the other strains, and then later filter this down to metabolic genes.
#Load all the BLAST files between the reference strain and target strains blast_files=glob('%s/*_parsed.csv'%'bbh') for blast in blast_files: bbh=pd.read_csv(blast) print (blast,bbh.shape)
bbh\NCIMB11955_vs_2501416905_parsed.csv (3520, 16)
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
In this section of the notebook, I will deviate from the published tutorial. In the tutorial, they map the orthologous genes onto the curated model of the reference genome. In reality, we are curious as to how homologous the genomes are to one another, and how many metabolic genes the different strains have in common. So we need to compare the orthologues to the reference genome and not the reference model. I'll try to adapt the scripts so that we can get a homology matrix that captures this. Make a single dataframe where one column is all the genes in the reference organism, and the other column is a different strain and contains the PID. Then, for each column, you can count the number of genes which have a PID above a selected identity threshold and compare it to the total number of genes.
#import all the csv files compare = pd.read_csv('bbh/NCIMB11955_vs_2501416905_parsed.csv') #filter out all other columns that i won't use later compare = compare[['gene', 'PID']] #list of all ORFs found in the reference genome with open('prots/NCIMB11955.fa') as fasta_file: # Will close handle cleanly NCIMB_ids = [] for seq_record in SeqIO.parse(fasta_file, 'fasta'): # (generator) NCIMB_ids.append(seq_record.id) comparison = pd.DataFrame({'gene': NCIMB_ids}) comparison = pd.merge(comparison, compare, on='gene',how="outer") strains = ['NCIMB11955', 'M10EXG'] comparison.columns = strains
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Now that we have a dataframe that compares the PID scores for each gene in the reference genome to the other strains, we can start to 'sum up' what percentage of the genes in the reference have a matched gene in the strain. For this, we will set a threshold of 90% sequence identity (the value used in the code below) for a gene to be counted as a true homologue. One can change this arbitrarily set value if they like. The reference genome has 3708 ORFs annotated to it.
columns = list(comparison) for i in columns: #iterate through the columns if i in 'NCIMB11955': # skip reference column continue else: #now go through each row in this column common_genes = [] for index,row in comparison.iterrows(): value = row[i] if value > 90: #selected threshold level common_genes.append(1) elif value < 90: common_genes.append(0) homology = sum(common_genes) fraction = 100*homology/3708 print(i,': ', fraction)
M10EXG : 88.9967637540453
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
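The row-by-row loop above works, but the same percentages can be computed with a vectorized comparison; a minimal sketch assuming the `comparison` dataframe and the 90% PID threshold used above:

```python
# Vectorized equivalent of the loop above: fraction of the 3708 reference ORFs
# whose best hit has a PID above the threshold in each non-reference column.
for strain in comparison.columns.drop('NCIMB11955'):
    fraction = 100 * (comparison[strain] > 90).sum() / 3708
    print(strain, ':', fraction)
```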
Try the same as above, but use the E-value as the threshold instead.
#import all the csv files compare = pd.read_csv('bbh/NCIMB11955_vs_2501416905_parsed.csv') #filter out all other columns that i won't use later compare_eval = compare[['gene', 'eVal']] #make dataframe for first comparison, and then add on the rest comparison_eval = pd.DataFrame({'gene': NCIMB_ids}) comparison_eval = pd.merge(comparison_eval, compare_eval, on='gene',how="outer") strains = ['NCIMB11955', 'M10EXG'] comparison_eval.columns = strains columns = list(comparison_eval) for i in columns: #iterate through the columns if i in 'NCIMB11955': # skip reference column continue else: #now go through each row in this column common_genes = [] for index,row in comparison_eval.iterrows(): value = row[i] if value < 5E-5 : #selected threshold level common_genes.append(1) elif value >5E-5: continue homology = sum(common_genes) fraction = (homology/3708)*100 print(i,': ', fraction)
M10EXG : 94.47141316073355
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Now that we have done the above, we can start to look at the overlap between the strains, so that we can create a Venn diagram of the ORFs that are unique to each strain and of those that overlap.
__Approach__
- Import the fasta files in the prots folder: this is a list of all the ORFs in each strain.
- Make a list, per strain, of all genes for that strain that fit the comparison threshold (for this, use an E-value < 1E-5 for now), i.e. all the genes that these two strains have in common.
- Then make an overlap of all the lists: this should give, for each strain, which genes are unique and which overlap with one another.
#import total strain lists, for each strain with open('prots/2501416905.fa') as fasta_file: # Will close handle cleanly M10EXG_ids = [] for seq_record in SeqIO.parse(fasta_file, 'fasta'): # (generator) M10EXG_ids.append(seq_record.id)
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Next, make a dataframe for the comparison of the reference (here NCIMB11955) to the other strain. The dataframe should contain the gene names of the reference organism, as well as those of the strain being mapped against.
compare_eval ol_reference = [] #the overlapping genes, with the reference strain ID ol_strain =[] #the overlapping genes, with the reference strain ID for index,row in compare.iterrows(): value = row['eVal'] if value > 1E-5: #selected threshold level continue #we don't want to save these anywhere elif value < 1E-5: ol_reference.append(row['gene']) ol_strain.append(row['subject']) NCIMB_M10EXG_OL = pd.DataFrame({'NCIMB11955': ol_reference, 'M10EXG': ol_strain}) # so NCIMB_M10EXG_OL is now a dataframe that maps each gene in the NCIMB to a gene in M10EXG len(NCIMB_M10EXG_OL)
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
In the example above, you then see which 3498 genes of the target strain match the reference strain. We can then see how many genes each strain has on its own and, from that, make the Venn diagram.
len(NCIMB_ids)
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
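The 259 unique ORFs quoted below presumably follow from subtracting the overlap from the total ORF count; a one-line sketch assuming the `NCIMB_ids` list and `NCIMB_M10EXG_OL` dataframe defined above:

```python
# Unique NCIMB11955 ORFs = total ORFs minus ORFs with a significant hit in M10EXG.
print(len(NCIMB_ids) - len(NCIMB_M10EXG_OL))
```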
So NCIMB has 259 unique ORFs. In the above, we had NCIMB as the reference strain. To find how many truly unique genes there are in M10EXG, we would need to repeat the above analysis but now with M10EXG as the reference strain. That will be done below.
StrainsOfInterest=pd.read_excel('Strain InformationB.xlsx') StrainsOfInterest #switch reference and target here referenceStrainID='2501416905' targetStrainIDs=list(StrainsOfInterest['NCBI ID']) for strain in targetStrainIDs: get_bbh(referenceStrainID,strain, in_folder='bbh')
bbh/2501416905_vs_NCIMB11955.txt blasting 2501416905 vs NCIMB11955 running blastp with following command line... blastp -db prots/NCIMB11955.fa -query prots/2501416905.fa -out bbh/2501416905_vs_NCIMB11955.txt -evalue 0.001 -outfmt 6 -num_threads 1 bbh/NCIMB11955_vs_2501416905.txt blasting NCIMB11955 vs 2501416905 running blastp with following command line... blastp -db prots/2501416905.fa -query prots/NCIMB11955.fa -out bbh/NCIMB11955_vs_2501416905.txt -evalue 0.001 -outfmt 6 -num_threads 1 parsing BBHs for 2501416905 NCIMB11955
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Now import this file and make it into a dataframe so we can find the unique genes.
#import all the csv files compare = pd.read_csv('bbh/2501416905_vs_NCIMB11955_parsed.csv') #filter out all other columns that i won't use later compare_eval = compare[['gene', 'eVal', 'subject']] strains = ['M10EXG', 'eVal', 'NCIMB11955'] compare_eval.columns = strains
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Now that we have done the above, we can look at the overlap from M10EXG to the NCIMB strain.
__Approach__
- Make a list, per strain, of all genes for that strain that fit the comparison threshold (for this, use an E-value < 1E-5 for now), i.e. all the genes that these two strains have in common.
ol_reference = [] #the overlapping genes, with the reference strain ID ol_strain =[] #the overlapping genes, with the reference strain ID for index,row in compare_eval.iterrows(): value = row['eVal'] if value > 1E-5: #selected threshold level continue #we don't want to save these anywhere elif value < 1E-5: ol_reference.append(row['M10EXG']) ol_strain.append(row['NCIMB11955']) M10EXG_NCIMB_OL = pd.DataFrame({'M10EXG': ol_reference, 'NCIMB': ol_strain}) # so M10EXG_NCIMB_OL is now a dataframe that maps each gene in the M10EXG to a gene in NCIMB11955 len(M10EXG_NCIMB_OL)
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Five fewer genes are mapped as overlapping here (compared to the NCIMB-as-reference direction), but this is within the expected error range.
len(M10EXG_ids)
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
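Earlier, the goal of drawing a Venn diagram of shared and unique ORFs was mentioned; a minimal sketch of how it could be drawn from the counts summarized just below, assuming the matplotlib_venn package (not used elsewhere in this notebook) is installed:

```python
import matplotlib.pyplot as plt
from matplotlib_venn import venn2

# Two-set Venn diagram: unique NCIMB11955 ORFs, unique M10EXG ORFs, shared ORFs.
venn2(subsets=(259, 234, 3498), set_labels=('NCIMB11955', 'M10EXG'))
plt.show()
```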
So we would have 234 unique genes in M10EXG. So overall, we have 3498 genes that overlap, 259 unique to NCIMB and 234 unique to M10EXG. This very closely matches the results of the KBASE pipeline we ran, which is good.
Filter for metabolic genes
Now that we know the overlap in the total protein set, we want to find out what percentage of the metabolic genes overlap between the strains. So first I will filter all the genes from the target organism down to the ones that are metabolic genes, and then apply this list to the list of bi-directional hits, to see how many of the metabolic genes overlap. I can't get the function definition to work, so I will just write a script that makes a dataframe linking each gene to a possible EC code. Then we can filter the bbh file made previously for these metabolic genes and get the answer that way.
genes = [] EC = [] x = 0 for seq_record in SeqIO.parse("genomes/NCIMB11955.gb", "genbank"): for f in seq_record.features: if f.type=='CDS': if 'locus_tag' in f.qualifiers.keys(): locus = f.qualifiers['locus_tag'][0] elif 'gene' in f.qualifiers.keys(): locus = f.qualifiers['gene'][0] else: locus = 'gene_%i'%x x+=1 try: synonyms = f.qualifiers['gene_synonym'] #here it will check that it has one of the gene_synonyms as ec code, i.e. is metabolic check = [] for a in synonyms: if a[0].isdigit(): ec = a check.append(1) else: continue if sum(check) > 0: genes.append(locus) EC.append(a) except KeyError: continue NCIMB_met = pd.DataFrame({'Gene': genes, 'EC': EC})
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Now I'll do the same for the M10EXG genome/annotation.
genes = [] EC = [] x = 0 for seq_record in SeqIO.parse("genomes/2501416905.gb", "genbank"): for f in seq_record.features: if f.type=='CDS': if 'locus_tag' in f.qualifiers.keys(): locus = f.qualifiers['locus_tag'][0] elif 'gene' in f.qualifiers.keys(): locus = f.qualifiers['gene'][0] else: locus = 'gene_%i'%x x+=1 try: synonyms = f.qualifiers['gene_synonym'] #here it will check that it has one of the gene_synonyms as ec code, i.e. is metabolic check = [] for a in synonyms: if a[0].isdigit(): ec = a check.append(1) else: continue if sum(check) > 0: genes.append(locus) EC.append(a) except KeyError: continue M10EXG_met = pd.DataFrame({'Gene': genes, 'EC': EC}) len(NCIMB_met) len(M10EXG_met)
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
So, this shows we have 1424 metabolic genes in the reference strain and 1417 in the M10EXG strain. Note, there are of course many hypothetical proteins annotated in the genome; those are ignored. Now, I will have to filter the list of BBHs down to just the genes that are metabolic.
Approach:
- NCIMB_M10EXG_OL has all the CDS that are considered hits here (with the threshold of 1E-5 set). I will iterate through each row and make a new dataframe which contains the filtered set of genes for comparing.
NOTE: at the end of comparing with NCIMB11955 as reference, I will do the same with M10EXG as reference, to find which genes are unique there as well.
NCIMB = [] M10EXG = [] for row, index in NCIMB_M10EXG_OL.iterrows(): ref_gene = index['NCIMB11955'] try: NCIMB_met.loc[NCIMB_met["Gene"] == ref_gene,'EC'].values[0] #if it is a metaoblic gene this will give a hit NCIMB.append(index['NCIMB11955']) M10EXG.append(index['M10EXG']) except IndexError: #if the hit isn't metabolic, we can ignore it continue #make data frame OL_metabolic = pd.DataFrame({'NCIMB11955':NCIMB, 'M10EXG':M10EXG}) len(OL_metabolic)
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
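The row-wise lookup above is quadratic in the number of genes; an equivalent, faster sketch using a pandas merge, assuming the same `NCIMB_M10EXG_OL` and `NCIMB_met` dataframes:

```python
# Keep only overlap rows whose NCIMB11955 gene appears in the metabolic-gene table.
OL_metabolic = (NCIMB_M10EXG_OL
                .merge(NCIMB_met, left_on='NCIMB11955', right_on='Gene', how='inner')
                [['NCIMB11955', 'M10EXG']])
print(len(OL_metabolic))
```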
So we have 1384 metabolic genes that overlap. That means there are 40 genes unique to NCIMB, and for M10EXG we need to check how many are still unmatched. Finally, I will make a list of the unique metabolic genes, which can be supplied as supplementary material and given a short look through to see if anything unexpected appears.
Approach:
- NCIMB_met and M10EXG_met have all the metabolic genes with their corresponding EC codes. I will filter out the significant hits here for the NCIMB strain first. I need to repeat the above analysis for M10EXG to see properly how many unique genes it has (this will be done further on in this notebook).
- Then try to find some more information about each E.C. code, e.g. from the KEGG database I've used before. I can look into the type of reactions (i.e. category) and/or the exact reaction name.
genes = [] ECs =[] for row, index in NCIMB_met.iterrows(): gene = index['Gene'] try: OL_metabolic.loc[OL_metabolic["NCIMB11955"] == gene,'M10EXG'].values[0] #If the gene is in the overlap dataframe it will give an output continue except IndexError: #i.e. if the gene doesn't have an overlap in M10EXG genes.append(gene) ECs.append(index['EC']) #make dataframe NCIMB_unique = pd.DataFrame({'Unique gene':genes, 'EC':ECs}) len(NCIMB_unique)
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Now we know that for NCIMB, there are 40 unique metabolic genes and 1384 overlapping genes. I will do the same as above but for the M10EXG strain to find how many are unique in that strain when that is used as reference sequence.
M10EXG = [] NCIMB = [] for row, index in M10EXG_NCIMB_OL.iterrows(): ref_gene = index['M10EXG'] try: M10EXG_met.loc[M10EXG_met["Gene"] == ref_gene,'EC'].values[0] #if it is a metaoblic gene this will give a hit M10EXG.append(index['M10EXG']) NCIMB.append(index['NCIMB']) except IndexError: #if the hit isn't metabolic, we can ignore it continue #make data frame OL_metabolic_M10EXG = pd.DataFrame({'M10EXG':M10EXG, 'NCIMB11955':NCIMB}) len(OL_metabolic_M10EXG)
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Again there is a difference of a few genes here, but it falls within an error range if you consider the size of the total metabolic gene set, so this is fine. Next, we will make a dataframe of the metabolic genes that are unique to M10EXG.
genes = [] ECs =[] for row, index in M10EXG_met.iterrows(): gene = index['Gene'] try: OL_metabolic_M10EXG.loc[OL_metabolic_M10EXG["M10EXG"] == gene,'NCIMB11955'].values[0] #If the gene is in the overlap dataframe it will give an output continue except IndexError: #i.e. if the gene doesn't have an overlap in M10EXG genes.append(gene) ECs.append(index['EC']) #make dataframe M10EXG_unique = pd.DataFrame({'Unique gene':genes, 'EC':ECs}) len(M10EXG_unique)
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
So we have 29 unique genes in M10EXG, and 40 in our strain. Now that we have two dataframes, one for each strain, with the unique metabolic genes, we can try to find a bit more information about them. I will try to get a name for each reaction, as well as which pathway they are a part of. First I will prepare a dataframe from the KEGG site that contains the information about which pathway each EC code belongs to.
df = pd.read_csv('http://rest.kegg.jp/link/ec/pathway', header=None, sep = '\t') df.columns = ['Pathway', 'EC'] #rename the columns #remove all 'path:' and 'rn:' df['Pathway'] = df['Pathway'].str.replace(r'path:ec', '') df['EC'] = df['EC'].str.replace(r'ec:', '') #remove the rows with 'path_map' to prevent duplication df = df[~df['Pathway'].str.contains("map")] #now link the pathway code to the name of the pathway it is involved in df_groups = pd.read_csv('http://rest.kegg.jp/list/pathway', header=None, sep = '\t') df_groups.columns = ['Pathway', 'Name'] df_groups['Pathway'] = df_groups['Pathway'].str.replace(r'path:map', '') #now filter out the IDs I dont want to include #i want to remove all rows below number 153 df_groups = df_groups[0:154] #then merge the df and df_groups together df = pd.merge(df_groups,df,on='Pathway',how='left')
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Now I can map the EC code from the unique metabolic reaction lists to these pathway classes.
pathway =[] for index, row in NCIMB_unique.iterrows(): ec = row['EC'] types =[] found = df.loc[df["EC"] == ec] for indexa, rowa in found.iterrows() : types.append(rowa['Name']) pathway.append(types) NCIMB_unique['Pathway'] = pathway pathway =[] for index, row in M10EXG_unique.iterrows(): ec = row['EC'] types =[] found = df.loc[df["EC"] == ec] for indexa, rowa in found.iterrows() : types.append(rowa['Name']) pathway.append(types) M10EXG_unique['Pathway'] = pathway
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Now I'll also add a column which looks at the KO terms of the recognized EC codes, so that we get a bit more information about which reactions are unique in each strain. First I will prepare a dataframe that contains the EC codes linked to the KO ontology terms for these annotations, which we can use to map the unique reactions. There will probably still be some that are not mapped; those we will need to check by hand.
df = pd.read_csv('http://rest.kegg.jp/link/ec/ko', header=None, sep = '\t') df.columns = ['KO', 'EC'] #rename the columns #remove all 'ko:' and 'ec:' df['KO'] = df['KO'].str.replace(r'ko:', '') df['EC'] = df['EC'].str.replace(r'ec:', '') #now import the list of KO terms with more meaningful description ko = pd.read_csv('http://rest.kegg.jp/list/ko', header=None, sep = '\t') ko.columns = ['KO', 'Name'] ko['KO'] = ko['KO'].str.replace(r'ko:', '') #link the two dataframes together df = pd.merge(df,ko,on='KO',how='left')
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Now we can map the EC code to a KO term.
ko_term =[] for index, row in NCIMB_unique.iterrows(): ec = row['EC'] types =[] found = df.loc[df["EC"] == ec] for indexa, rowa in found.iterrows() : types.append(rowa['Name']) ko_term.append(types) NCIMB_unique['KO'] = ko_term ko_term =[] for index, row in M10EXG_unique.iterrows(): ec = row['EC'] types =[] found = df.loc[df["EC"] == ec] for indexa, rowa in found.iterrows() : types.append(rowa['Name']) ko_term.append(types) M10EXG_unique['KO'] = ko_term
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Now I will export these two tables and do some manual inspection of them.
M10EXG_unique.to_csv('M10EXG_unique.csv') NCIMB_unique.to_csv('NCIMB_unique.csv')
_____no_output_____
Apache-2.0
notebooks/Genome comparison/55. Execute Sequence Comparison to Generate Homology Matrix -new.ipynb
biosustain/p-thermo
Introduction
In this exercise you'll apply more advanced encodings to encode the categorical variables to improve your classifier model. The encodings you will implement are:
- Count Encoding
- Target Encoding
- Leave-one-out Encoding
- CatBoost Encoding
- Feature embedding with SVD
You'll refit the classifier after each encoding to check its performance on hold-out data. First, run the next cell to repeat the work you did in the last exercise.
import numpy as np import pandas as pd from sklearn import preprocessing, metrics import lightgbm as lgb # Set up code checking # This can take a few seconds, thanks for your patience from learntools.core import binder binder.bind(globals()) from learntools.feature_engineering.ex2 import * clicks = pd.read_parquet('../input/feature-engineering-data/baseline_data.pqt')
_____no_output_____
Apache-2.0
notebooks/feature_engineering/raw/ex2.ipynb
dansbecker/learntools
Here I'll define a couple functions to help test the new encodings.
def get_data_splits(dataframe, valid_fraction=0.1): """ Splits a dataframe into train, validation, and test sets. First, orders by the column 'click_time'. Set the size of the validation and test sets with the valid_fraction keyword argument. """ dataframe = dataframe.sort_values('click_time') valid_rows = int(len(dataframe) * valid_fraction) train = dataframe[:-valid_rows * 2] # valid size == test size, last two sections of the data valid = dataframe[-valid_rows * 2:-valid_rows] test = dataframe[-valid_rows:] return train, valid, test def train_model(train, valid, test=None, feature_cols=None): if feature_cols is None: feature_cols = train.columns.drop(['click_time', 'attributed_time', 'is_attributed']) dtrain = lgb.Dataset(train[feature_cols], label=train['is_attributed']) dvalid = lgb.Dataset(valid[feature_cols], label=valid['is_attributed']) param = {'num_leaves': 64, 'objective': 'binary', 'metric': 'auc', 'seed': 7} num_round = 1000 print("Training model!") bst = lgb.train(param, dtrain, num_round, valid_sets=[dvalid], early_stopping_rounds=20, verbose_eval=False) valid_pred = bst.predict(valid[feature_cols]) valid_score = metrics.roc_auc_score(valid['is_attributed'], valid_pred) print(f"Validation AUC score: {valid_score}") if test is not None: test_pred = bst.predict(test[feature_cols]) test_score = metrics.roc_auc_score(test['is_attributed'], test_pred) return bst, valid_score, test_score else: return bst, valid_score
_____no_output_____
Apache-2.0
notebooks/feature_engineering/raw/ex2.ipynb
dansbecker/learntools
Run this cell to get a baseline score. If your encodings do better than this, you can keep them.
print("Baseline model") train, valid, test = get_data_splits(clicks) _ = train_model(train, valid)
_____no_output_____
Apache-2.0
notebooks/feature_engineering/raw/ex2.ipynb
dansbecker/learntools
1) Categorical encodings and leakage
These encodings are all based on statistics calculated from the dataset like counts and means. Considering this, what data should you be using to calculate the encodings?
Uncomment the following line after you've decided your answer.
q_1.solution()
_____no_output_____
Apache-2.0
notebooks/feature_engineering/raw/ex2.ipynb
dansbecker/learntools
2) Count encodings
Here, encode the categorical features `['ip', 'app', 'device', 'os', 'channel']` using the count of each value in the data set. Using `CountEncoder` from the `category_encoders` library, fit the encoding using the categorical feature columns defined in `cat_features`. Then apply the encodings to the train and validation sets, adding them as new columns with names suffixed `"_count"`.
import category_encoders as ce cat_features = ['ip', 'app', 'device', 'os', 'channel'] train, valid, test = get_data_splits(clicks) # Create the count encoder count_enc = ____ # Learn encoding from the training set ____ # Apply encoding to the train and validation sets as new columns # Make sure to add `_count` as a suffix to the new columns train_encoded = ____ valid_encoded = ____ q_2.check() # Uncomment if you need some guidance q_2.hint() q_2.solution() #%%RM_IF(PROD)%% cat_features = ['ip', 'app', 'device', 'os', 'channel'] train, valid, test = get_data_splits(clicks) # Create the count encoder count_enc = ce.CountEncoder(cols=cat_features) # Learn encoding from the training set count_enc.fit(train[cat_features]) # Apply encoding to the train and validation sets train_encoded = train.join(count_enc.transform(train[cat_features]).add_suffix('_count')) valid_encoded = valid.join(count_enc.transform(valid[cat_features]).add_suffix('_count')) q_2.assert_check_passed() # Train the model on the encoded datasets # This can take around 30 seconds to complete _ = train_model(train_encoded, valid_encoded)
_____no_output_____
Apache-2.0
notebooks/feature_engineering/raw/ex2.ipynb
dansbecker/learntools
Count encoding improved our model's score!
3) Why is count encoding effective?
At first glance, it could be surprising that count encoding helps make accurate models. Why do you think count encoding is a good idea, or how does it improve the model score?
Uncomment the following line after you've decided your answer.
q_3.solution()
_____no_output_____
Apache-2.0
notebooks/feature_engineering/raw/ex2.ipynb
dansbecker/learntools
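To make the question concrete, here is a small illustration, on a made-up toy frame rather than the clicks data, of what `CountEncoder` actually does: each category value is replaced by how often it appears, so rare values get small numbers and common values get large ones.

```python
import pandas as pd
import category_encoders as ce

# Toy example: count-encode a single categorical column.
toy = pd.DataFrame({'device': ['iphone', 'iphone', 'pixel', 'iphone', 'nokia']})
print(ce.CountEncoder(cols=['device']).fit_transform(toy))
# iphone -> 3, pixel -> 1, nokia -> 1: rare categories become identifiable by their low counts.
```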
4) Target encoding
Here you'll try some supervised encodings that use the labels (the targets) to transform categorical features. The first one is target encoding. Create the target encoder from the `category_encoders` library. Then, learn the encodings from the training dataset, apply the encodings to all the datasets, and retrain the model.
cat_features = ['ip', 'app', 'device', 'os', 'channel'] train, valid, test = get_data_splits(clicks) # Create the target encoder. You can find this easily by using tab completion. # Start typing ce. the press Tab to bring up a list of classes and functions. target_enc = ____ # Learn encoding from the training set. Use the 'is_attributed' column as the target. ____ # Apply encoding to the train and validation sets as new columns # Make sure to add `_target` as a suffix to the new columns train_encoded = ____ valid_encoded = ____ q_4.check() # Uncomment these if you need some guidance #q_4.hint() #q_4.solution() #%%RM_IF(PROD)%% cat_features = ['ip', 'app', 'device', 'os', 'channel'] target_enc = ce.TargetEncoder(cols=cat_features) train, valid, test = get_data_splits(clicks) target_enc.fit(train[cat_features], train['is_attributed']) train_encoded = train.join(target_enc.transform(train[cat_features]).add_suffix('_target')) valid_encoded = valid.join(target_enc.transform(valid[cat_features]).add_suffix('_target')) q_4.assert_check_passed() _ = train_model(train_encoded, valid_encoded)
_____no_output_____
Apache-2.0
notebooks/feature_engineering/raw/ex2.ipynb
dansbecker/learntools
5) Try removing IP encoding
Try leaving `ip` out of the encoded features and retrain the model with target encoding again. You should find that the score increases and is above the baseline score! Why do you think the score is below baseline when we encode the IP address but above baseline when we don't?
Uncomment the following line after you've decided your answer.
# q_5.solution()
_____no_output_____
Apache-2.0
notebooks/feature_engineering/raw/ex2.ipynb
dansbecker/learntools
6) CatBoost Encoding
The CatBoost encoder is supposed to work well with the LightGBM model. Encode the categorical features with `CatBoostEncoder` and train the model on the encoded data again.
train, valid, test = get_data_splits(clicks) # Create the CatBoost encoder cb_enc = ____ # Learn encoding from the training set ____ # Apply encoding to the train and validation sets as new columns # Make sure to add `_cb` as a suffix to the new columns train_encoded = ____ valid_encoded = ____ q_6.check() # Uncomment these if you need some guidance #q_6.hint() #q_6.solution() #%%RM_IF(PROD)%% cat_features = ['app', 'device', 'os', 'channel'] train, valid, _ = get_data_splits(clicks) cb_enc = ce.CatBoostEncoder(cols=cat_features, random_state=7) # Learn encodings on the train set cb_enc.fit(train[cat_features], train['is_attributed']) # Apply encodings to each set train_encoded = train.join(cb_enc.transform(train[cat_features]).add_suffix('_cb')) valid_encoded = valid.join(cb_enc.transform(valid[cat_features]).add_suffix('_cb')) q_6.assert_check_passed() # Train on the encoded datasets so the CatBoost features are actually evaluated _ = train_model(train_encoded, valid_encoded)
_____no_output_____
Apache-2.0
notebooks/feature_engineering/raw/ex2.ipynb
dansbecker/learntools
The CatBoost encodings work the best, so we'll keep those.
encoded = cb_enc.transform(clicks[cat_features]) for col in encoded: clicks.insert(len(clicks.columns), col + '_cb', encoded[col])
_____no_output_____
Apache-2.0
notebooks/feature_engineering/raw/ex2.ipynb
dansbecker/learntools
Copyright 2018 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
_____no_output_____
Apache-2.0
site/en/r1/tutorials/eager/eager_basics.ipynb
atharva1503/docs
Eager execution basics
This is an introductory tutorial for using TensorFlow. It will cover:
* Importing required packages
* Creating and using Tensors
* Using GPU acceleration
* Datasets
Import TensorFlow
To get started, import the `tensorflow` module and enable eager execution. Eager execution enables a more interactive frontend to TensorFlow, the details of which we will discuss much later.
from __future__ import absolute_import, division, print_function, unicode_literals try: # %tensorflow_version only exists in Colab. %tensorflow_version 2.x except Exception: pass import tensorflow.compat.v1 as tf
_____no_output_____
Apache-2.0
site/en/r1/tutorials/eager/eager_basics.ipynb
atharva1503/docs
TensorsA Tensor is a multi-dimensional array. Similar to NumPy `ndarray` objects, `Tensor` objects have a data type and a shape. Additionally, Tensors can reside in accelerator (like GPU) memory. TensorFlow offers a rich library of operations ([tf.add](https://www.tensorflow.org/api_docs/python/tf/add), [tf.matmul](https://www.tensorflow.org/api_docs/python/tf/matmul), [tf.linalg.inv](https://www.tensorflow.org/api_docs/python/tf/linalg/inv) etc.) that consume and produce Tensors. These operations automatically convert native Python types. For example:
print(tf.add(1, 2)) print(tf.add([1, 2], [3, 4])) print(tf.square(5)) print(tf.reduce_sum([1, 2, 3])) print(tf.encode_base64("hello world")) # Operator overloading is also supported print(tf.square(2) + tf.square(3))
_____no_output_____
Apache-2.0
site/en/r1/tutorials/eager/eager_basics.ipynb
atharva1503/docs
Each Tensor has a shape and a datatype
x = tf.matmul([[1]], [[2, 3]]) print(x.shape) print(x.dtype)
_____no_output_____
Apache-2.0
site/en/r1/tutorials/eager/eager_basics.ipynb
atharva1503/docs
The most obvious differences between NumPy arrays and TensorFlow Tensors are:
1. Tensors can be backed by accelerator memory (like GPU, TPU).
2. Tensors are immutable.
NumPy Compatibility
Conversion between TensorFlow Tensors and NumPy ndarrays is quite simple as:
* TensorFlow operations automatically convert NumPy ndarrays to Tensors.
* NumPy operations automatically convert Tensors to NumPy ndarrays.
Tensors can be explicitly converted to NumPy ndarrays by invoking the `.numpy()` method on them. These conversions are typically cheap as the array and Tensor share the underlying memory representation if possible. However, sharing the underlying representation isn't always possible since the Tensor may be hosted in GPU memory while NumPy arrays are always backed by host memory, and the conversion will thus involve a copy from GPU to host memory.
import numpy as np ndarray = np.ones([3, 3]) print("TensorFlow operations convert numpy arrays to Tensors automatically") tensor = tf.multiply(ndarray, 42) print(tensor) print("And NumPy operations convert Tensors to numpy arrays automatically") print(np.add(tensor, 1)) print("The .numpy() method explicitly converts a Tensor to a numpy array") print(tensor.numpy())
_____no_output_____
Apache-2.0
site/en/r1/tutorials/eager/eager_basics.ipynb
atharva1503/docs
GPU acceleration
Many TensorFlow operations can be accelerated by using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation (and copies the tensor between CPU and GPU memory if necessary). Tensors produced by an operation are typically backed by the memory of the device on which the operation executed. For example:
x = tf.random.uniform([3, 3]) print("Is there a GPU available: "), print(tf.test.is_gpu_available()) print("Is the Tensor on GPU #0: "), print(x.device.endswith('GPU:0'))
_____no_output_____
Apache-2.0
site/en/r1/tutorials/eager/eager_basics.ipynb
atharva1503/docs
Device Names
The `Tensor.device` property provides a fully qualified string name of the device hosting the contents of the tensor. This name encodes many details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of a TensorFlow program. The string ends with `GPU:<N>` if the tensor is placed on the `N`-th GPU on the host.
Explicit Device Placement
The term "placement" in TensorFlow refers to how individual operations are assigned (placed on) a device for execution. As mentioned above, when there is no explicit guidance provided, TensorFlow automatically decides which device to execute an operation, and copies Tensors to that device if needed. However, TensorFlow operations can be explicitly placed on specific devices using the `tf.device` context manager. For example:
import time def time_matmul(x): start = time.time() for loop in range(10): tf.matmul(x, x) result = time.time()-start print("10 loops: {:0.2f}ms".format(1000*result)) # Force execution on CPU print("On CPU:") with tf.device("CPU:0"): x = tf.random_uniform([1000, 1000]) assert x.device.endswith("CPU:0") time_matmul(x) # Force execution on GPU #0 if available if tf.test.is_gpu_available(): with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc. x = tf.random_uniform([1000, 1000]) assert x.device.endswith("GPU:0") time_matmul(x)
_____no_output_____
Apache-2.0
site/en/r1/tutorials/eager/eager_basics.ipynb
atharva1503/docs
Datasets
This section demonstrates the use of the [`tf.data.Dataset` API](https://www.tensorflow.org/r1/guide/datasets) to build pipelines to feed data to your model. It covers:
* Creating a `Dataset`.
* Iteration over a `Dataset` with eager execution enabled.
We recommend using the `Dataset`s API for building performant, complex input pipelines from simple, re-usable pieces that will feed your model's training or evaluation loops.
If you're familiar with TensorFlow graphs, the API for constructing the `Dataset` object remains exactly the same when eager execution is enabled, but the process of iterating over elements of the dataset is slightly simpler. You can use Python iteration over the `tf.data.Dataset` object and do not need to explicitly create a `tf.data.Iterator` object. As a result, the discussion on iterators in the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasets) is not relevant when eager execution is enabled.
Create a source `Dataset`
Create a _source_ dataset using one of the factory functions like [`Dataset.from_tensors`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensors), [`Dataset.from_tensor_slices`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices) or using objects that read from files like [`TextLineDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) or [`TFRecordDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TFRecordDataset). See the [TensorFlow Guide](https://www.tensorflow.org/r1/guide/datasets#reading_input_data) for more information.
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6]) # Create a CSV file import tempfile _, filename = tempfile.mkstemp() with open(filename, 'w') as f: f.write("""Line 1 Line 2 Line 3 """) ds_file = tf.data.TextLineDataset(filename)
_____no_output_____
Apache-2.0
site/en/r1/tutorials/eager/eager_basics.ipynb
atharva1503/docs
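The difference between `from_tensors` and `from_tensor_slices` can be confusing at first: the former wraps the whole input as a single element, while the latter yields one element per slice along the first dimension. A small sketch, assuming eager execution is enabled as elsewhere in this notebook:

```python
ds_whole = tf.data.Dataset.from_tensors([1, 2, 3])         # one element: [1 2 3]
ds_slices = tf.data.Dataset.from_tensor_slices([1, 2, 3])  # three elements: 1, 2, 3

print('from_tensors:')
for elem in ds_whole:
    print(elem)

print('from_tensor_slices:')
for elem in ds_slices:
    print(elem)
```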
Apply transformationsUse transformation functions like [`map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), [`batch`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), [`shuffle`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) etc. to apply transformations to the records of the dataset. See the [API documentation for `tf.data.Dataset`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) for details.
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2) ds_file = ds_file.batch(2)
_____no_output_____
Apache-2.0
site/en/r1/tutorials/eager/eager_basics.ipynb
atharva1503/docs
IterateWhen eager execution is enabled, `Dataset` objects support iteration.If you're familiar with the use of `Dataset`s in TensorFlow graphs, note that there is no need to call `Dataset.make_one_shot_iterator()` or `get_next()`.
print('Elements of ds_tensors:') for x in ds_tensors: print(x) print('\nElements in ds_file:') for x in ds_file: print(x)
_____no_output_____
Apache-2.0
site/en/r1/tutorials/eager/eager_basics.ipynb
atharva1503/docs
Use the CPU here because the following computation needs a lot of memory.
device = 'cpu' train_features, train_labels = train_features.to(device), train_labels.to(device) num_train_data = train_labels.shape[0] num_class = torch.max(train_labels) + 1 torch.manual_seed(args.rng_seed) torch.cuda.manual_seed_all(args.rng_seed) perm = torch.randperm(num_train_data).to(device) print(perm)
tensor([36044, 49165, 37807, ..., 42128, 15898, 31476])
MIT
notebooks/knn.ipynb
Bhaskers-Blu-Org2/metric-transfer.pytorch
soft label
fig = plt.figure(dpi=200) for num_labeled_data in [50, 100, 250, 500, 1000, 2000, 4000, 8000]: index_labeled = [] index_unlabeled = [] data_per_class = num_labeled_data // args.num_class for c in range(10): indexes_c = perm[train_labels[perm] == c] index_labeled.append(indexes_c[:data_per_class]) index_unlabeled.append(indexes_c[data_per_class:]) index_labeled = torch.cat(index_labeled) index_unlabeled = torch.cat(index_unlabeled) # index_labeled = perm[:num_labeled_data] # index_unlabeled = perm[num_labeled_data:] # calculate similarity matrix dist = torch.mm(train_features, train_features[index_labeled].t()) dist[index_labeled, torch.arange(num_labeled_data)] = 0 K = min(num_labeled_data, 200) yd, yi = dist.topk(K, dim=1, largest=True, sorted=True) candidates = train_labels.view(1,-1).expand(num_train_data, -1) retrieval = torch.gather(candidates, 1, index_labeled[yi]) retrieval_one_hot = torch.zeros(num_train_data * K, num_class).to(device) retrieval_one_hot.scatter_(1, retrieval.view(-1, 1), 1) temperature = 0.1 yd_transform = (yd / temperature).exp_() probs = torch.sum(torch.mul(retrieval_one_hot.view(num_train_data, -1 , num_class), yd_transform.view(num_train_data, -1, 1)), 1) probs.div_(probs.sum(dim=1, keepdim=True)) probs_sorted, predictions = probs.sort(1, True) correct = predictions.eq(train_labels.data.view(-1,1)) confidence = probs_sorted[:, 0] # - probs_sorted[:, 1] correct = correct[index_unlabeled, :] confidence = confidence[index_unlabeled] n = confidence.shape[0] arange = 1 + np.arange(n) idx = confidence.sort(descending=True)[1] correct_sorted = correct[idx, 0].numpy() accuracies = np.cumsum(correct_sorted) / arange xs = arange / n plt.plot(xs, accuracies, label='num_labeled_data={}'.format(num_labeled_data)) # save pseudo labels unlabeled_probs_top1, unlabeled_indexes_top1 = probs_sorted[:, 0][index_unlabeled].sort(0, True) pseudo_indexes = index_unlabeled[unlabeled_indexes_top1] pseudo_labels = predictions[index_unlabeled, 0][unlabeled_indexes_top1] pseudo_probs = probs[index_unlabeled][unlabeled_indexes_top1] assert torch.all(pseudo_labels == pseudo_probs.max(1)[1]) save_dict = { 'pseudo_indexes': pseudo_indexes, 'pseudo_labels': pseudo_labels, 'pseudo_probs': pseudo_probs, 'labeled_indexes': index_labeled, 'unlabeled_indexes': index_unlabeled, } torch.save(save_dict, os.path.join(args.save_path, '{}.pth.tar'.format(num_labeled_data))) acc = (pseudo_labels == train_labels[pseudo_indexes]).float().mean().item() print('num_labeled={:4}, acc={:2.2f}, AUC={:2.2f}'.format(num_labeled_data, acc*100, accuracies.mean() * 100)) plt.xlabel('ratio of data') plt.ylabel('top1 accuracy') # plt.xticks(np.arange(0, 1.05, 0.1)) # plt.yticks(np.arange(0.36, 1.01, 0.05)) plt.grid() legend = plt.legend(loc='upper left', bbox_to_anchor=(1, 1)) plt.show()
num_labeled=  50, acc=38.82, AUC=54.82
num_labeled= 100, acc=45.43, AUC=62.99
num_labeled= 250, acc=58.77, AUC=77.08
num_labeled= 500, acc=67.35, AUC=84.90
(per-split labeled-index tensor printouts omitted; output truncated before the larger label budgets)
MIT
notebooks/knn.ipynb
Bhaskers-Blu-Org2/metric-transfer.pytorch
IntroductionThe aim of this project is to conduct a sentiment analysis of Expo 2020 using Python and Twitter's API.**What is Expo 2020?**> World expos were initiated in London in 1851. A world expo is a global gathering aimed at finding solutions to the challenges of its time and at creating an enriching, immersive experience. It moves to a different host city each edition and revolves around a set of themes. The current world expo is taking place in Dubai, UAE, between October 2021 and March 2022. For more information please visit [expo2020dubai](https://www.expo2020dubai.com/)**The goal of this project**> As discussed above, the main goal is to conduct a sentiment analysis while learning the basics of data science and big data projects. Since Expo 2020 is an educational exhibition that revolves around modern-day problems and is currently hosted in the Middle East, the aim is to measure the awareness of Arab society - *by "Arab society" we mean anyone who posts their opinions in Arabic.*This part is about discovering and collecting as much data as possible relating to the topic at hand. I will need tweets written in Arabic. Each step is further illustrated in its own markdown.
import tweepy import pandas as pd import numpy as np import configparser import matplotlib.pyplot as plt import random
_____no_output_____
MIT
tweets-collection.ipynb
dalalbinhumaid/expo-2020-sentiment-analysis
1. Configuration and Authentication ---This is the setup and API authentication part. Prior to using the API it is necessary to create a developer account; the account grants you two levels of access: a user level and an application/project level. I will be using **configparser** to ensure my API keys are not visible, and I suggest you do the same. The following is how to set up the configuration process:1. create a project from the developer's portal2. generate your API and access keys3. save them in a 'config.ini' file in the following format: ``` ini [twitter] CONSUMER_KEY = 'YOUR CONSUMER KEY' CONSUMER_SECRET = 'YOUR CONSUMER SECRET' ACCESS_TOKEN = 'YOUR ACCESS TOKEN' ACCESS_TOKEN_SECRET = 'YOUR ACCESS TOKEN SECRET' ``` 4. install configparser by running `pip install configparser`> **Note:** If you don't plan on using the config parser, make sure you remove the import and change the next cell accordingly to eliminate any errors. Also make sure you adhere to the same variable names!
# read the file from 'config.ini' config = configparser.ConfigParser() config.read('config.ini') # API Variables CONSUMER_KEY = config['twitter']['CONSUMER_KEY'] CONSUMER_SECRET = config['twitter']['CONSUMER_SECRET'] ACCESS_TOKEN = config['twitter']['ACCESS_TOKEN'] ACCESS_TOKEN_SECRET = config['twitter']['ACCESS_TOKEN_SECRET'] # authenticate using tweepy def twitter_setup(): auth = tweepy.OAuth1UserHandler(CONSUMER_KEY, CONSUMER_SECRET) # project access auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET) # user access api = tweepy.API(auth = auth) return api extractor = twitter_setup()
_____no_output_____
MIT
tweets-collection.ipynb
dalalbinhumaid/expo-2020-sentiment-analysis
2. Data Collection---After setting up the credentials and authenticating the project, I can start extracting data using **tweepy's** API. The aim is to search different terms and hashtags in order to collect as many entries as the API allows. There are several limitations since I only have `Elevated Access`; the main one is that the search API only goes back 7 days. One way this was managed was by starting the project early. The ideal would be to search the entire archive, but since the project aims to measure opinion at the current time, the regular search will suffice.I have created a function that, when called, extracts tweets based on a local list of search queries. This list can hold as many queries as you would like. The function parses the needed information and stores it in a data frame, appending to the previous data frame on each iteration. This is convenient when storing the data in .csv files.
def extract_tweets(): tweets = [] # main data frame data = [] # temporary data frame columns_header = ['ID', 'Tweet', 'Timestamp', 'Likes', 'Retweets', 'Length'] search_terms = ['@expo2020dubai -filter:retweets', '#expo2020 -filter:retweets', '#اكسبو -filter:retweets', 'اكسبو دبي -filter:retweets'] # search terms # fetch the tweets once prior to the iteration to append things correctly collected_tweets = tweepy.Cursor(extractor.search_tweets, q='expo dubai -filter:retweets', lang='ar', tweet_mode='extended').items(600) for tweet in collected_tweets: data.append([tweet.id, tweet.full_text, tweet.created_at,tweet.favorite_count, tweet.retweet_count, len(tweet.full_text)]) tweets = pd.DataFrame(data=data, columns=columns_header) # store in original data frame for term in search_terms: data = [] collected_tweets = tweepy.Cursor(extractor.search_tweets, q=term, lang='ar', tweet_mode='extended').items(600) for tweet in collected_tweets: data.append([tweet.id, tweet.full_text, tweet.created_at, tweet.favorite_count, tweet.retweet_count, len(tweet.full_text)]) df = pd.DataFrame(data=data, columns=columns_header) frames = [tweets, df] tweets = pd.concat(frames) # append the data frame to the previous one # since we are appending data frames the index value changes each time # here the goal is to create a new index that is incremented by one tweets.insert(0, 'index', range(0, len(tweets))) tweets = tweets.set_index('index') # random number to ensure files don't get overwritten tweets.to_csv('Tweets\\tweets155.csv') return tweets tweets = extract_tweets()
_____no_output_____
MIT
tweets-collection.ipynb
dalalbinhumaid/expo-2020-sentiment-analysis
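One practical note on the rate limits mentioned above: tweepy can sleep automatically until the rate-limit window resets instead of raising an error partway through a long collection run. A sketch of an alternative setup helper (`wait_on_rate_limit` is a standard keyword argument of `tweepy.API`; the function name here is just illustrative):

```python
def twitter_setup_with_backoff():
    # Same credentials as twitter_setup(), but the client pauses when the
    # rate limit is hit rather than failing in the middle of extract_tweets().
    auth = tweepy.OAuth1UserHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
    return tweepy.API(auth=auth, wait_on_rate_limit=True)
```

Swapping this in as `extractor = twitter_setup_with_backoff()` would make long `Cursor` runs more resilient.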
3. Preliminary Data Exploration
display(tweets.head()) display(tweets.tail()) print('total of collected tweets is ', len(tweets)) tweets.info()
<class 'pandas.core.frame.DataFrame'> Int64Index: 706 entries, 0 to 705 Data columns (total 6 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 ID 706 non-null int64 1 Tweet 706 non-null object 2 Timestamp 706 non-null datetime64[ns, UTC] 3 Likes 706 non-null int64 4 Retweets 706 non-null int64 5 Length 706 non-null int64 dtypes: datetime64[ns, UTC](1), int64(4), object(1) memory usage: 38.6+ KB
MIT
tweets-collection.ipynb
dalalbinhumaid/expo-2020-sentiment-analysis
4. Data Visualization
plt.plot(tweets['Timestamp'], tweets['Likes']) plt.gcf().autofmt_xdate() plt.show() plt.scatter(tweets['Length'], tweets['Likes'], color='pink') plt.show() # tweets1 = pd.read_csv('tweets1.csv') # tweets2 = pd.read_csv('tweets3.csv') # tweets3 = pd.read_csv('tweets327.csv') # tweets4 = pd.read_csv('tweets1439.csv') # tweets5 = pd.read_csv('tweets526.csv') # tweets6 = pd.read_csv('tweets546.csv') # tweets7 = pd.read_csv('tweets129.csv') # tweets8 = pd.read_csv('tweets1700.csv') # tweets1 = tweets1.drop(['Unnamed: 0'], axis=1) # tweets2 = tweets2.drop(['Unnamed: 0'], axis=1) # tweets3 = tweets3.drop(['index'], axis=1) # tweets4 = tweets4.drop(['index'], axis=1) # tweets5 = tweets5.drop(['index'], axis=1) # tweets6 = tweets6.drop(['index'], axis=1) # tweets7 = tweets7.drop(['index'], axis=1) # tweets8 = tweets8.drop(['index'], axis=1) # frames = [tweets1, tweets2, tweets3, tweets4, tweets5, tweets6, tweets7, tweets8] # final_tweets = pd.concat(frames) # append the data frame to the previous one # final_tweets.insert(0, 'index', range(0, len(final_tweets))) # final_tweets = final_tweets.set_index('index') # display(final_tweets.head()) # display(final_tweets.tail()) # final_tweets.to_csv('dirty_tweets_updated.csv')
_____no_output_____
MIT
tweets-collection.ipynb
dalalbinhumaid/expo-2020-sentiment-analysis
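One more view that may be useful here is the number of collected tweets per day, as a quick sanity check on how the 7-day search window is covered. A sketch, assuming the tz-aware `Timestamp` column shown in `df.info()` above:

```python
# Count tweets per calendar day and plot them as a bar chart
tweets_per_day = tweets.groupby(tweets['Timestamp'].dt.date).size()
tweets_per_day.plot(kind='bar', color='skyblue')
plt.xlabel('Date')
plt.ylabel('Number of tweets')
plt.gcf().autofmt_xdate()
plt.show()
```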
Plotly
import plotly.graph_objects as go
import dash
import dash_core_components as dcc
import dash_html_components as html

# All columns except the leading 'date' column are country traces
country_list = df_plot.columns[1:]

fig = go.Figure()
for each in country_list:
    fig.add_trace(go.Scatter(
        x = df_plot.date,
        y = df_plot[each],
        mode = 'markers+lines',
        name = each,
        line_width = 2,
        marker_size = 4,
        opacity = 0.9
    ))
fig.update_layout(
    xaxis_title = 'Time',
    yaxis_title = "Confirmed infected people (source johns hopkins, log-scale)"
)
fig.update_yaxes(type = 'log')
fig.update_layout(xaxis_rangeslider_visible = True)
fig.show(renderer = 'chrome')

option_list = []
for each in country_list:
    label_dict = {}
    label_dict['label'] = each
    label_dict['value'] = each
    option_list.append(label_dict)

app = dash.Dash()
app.layout = html.Div([
    html.Label('Multi-Select Country'),
    dcc.Dropdown(
        id = 'country_drop_down',
        options = option_list,
        value = ['Canada', 'India'],
        multi = True
    ),
    dcc.Graph(figure = fig, id = 'main_window_slope')
])

from dash.dependencies import Input, Output

@app.callback(
    Output('main_window_slope', 'figure'),
    [Input('country_drop_down', 'value')]
)
def update_figure(country_list):
    traces = []
    for each in country_list:
        traces.append(dict(
            x = df_plot.date,
            y = df_plot[each],
            mode = 'markers+lines',
            name = each,
            line_width = 2,
            marker_size = 4,
            opacity = 0.9
        ))
    return {
        'data': traces,
        'layout': dict(
            width = 1280,
            height = 720,
            xaxis_title = "Time",
            yaxis_title = "Confirmed infected people (source johns hopkins, log-scale)",
        )
    }

app.run_server(debug = True, use_reloader = False)
Dash is running on http://127.0.0.1:8050/
 * Serving Flask app "__main__" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
FTL
notebooks/EDA.ipynb
simran-grewal/COVID-19-Data-Analysis
Copyright 2019 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
_____no_output_____
Apache-2.0
Copy_of_get_started.ipynb
dlminvestments/cloudml-template
Get started with TensorBoard In machine learning, to improve something you often need to be able to measure it. TensorBoard is a tool for providing the measurements and visualizations needed during the machine learning workflow. It enables tracking experiment metrics like loss and accuracy, visualizing the model graph, projecting embeddings to a lower dimensional space, and much more.This quickstart will show how to quickly get started with TensorBoard. The remaining guides in this website provide more details on specific capabilities, many of which are not included here.
# Load the TensorBoard notebook extension %load_ext tensorboard import tensorflow as tf import datetime # Clear any logs from previous runs !rm -rf ./logs/
_____no_output_____
Apache-2.0
Copy_of_get_started.ipynb
dlminvestments/cloudml-template
Using the [MNIST](https://en.wikipedia.org/wiki/MNIST_database) dataset as the example, normalize the data and write a function that creates a simple Keras model for classifying the images into 10 classes.
mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 def create_model(): return tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(512, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ])
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz 11493376/11490434 [==============================] - 0s 0us/step
Apache-2.0
Copy_of_get_started.ipynb
dlminvestments/cloudml-template
Using TensorBoard with Keras Model.fit() When training with Keras's [Model.fit()](https://www.tensorflow.org/api_docs/python/tf/keras/models/Model#fit), adding the `tf.keras.callbacks.TensorBoard` callback ensures that logs are created and stored. Additionally, enable histogram computation every epoch with `histogram_freq=1` (this is off by default).Place the logs in a timestamped subdirectory to allow easy selection of different training runs.
model = create_model() model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S") tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1) model.fit(x=x_train, y=y_train, epochs=5, validation_data=(x_test, y_test), callbacks=[tensorboard_callback])
Train on 60000 samples, validate on 10000 samples Epoch 1/5 60000/60000 [==============================] - 15s 246us/sample - loss: 0.2217 - accuracy: 0.9343 - val_loss: 0.1019 - val_accuracy: 0.9685 Epoch 2/5 60000/60000 [==============================] - 14s 229us/sample - loss: 0.0975 - accuracy: 0.9698 - val_loss: 0.0787 - val_accuracy: 0.9758 Epoch 3/5 60000/60000 [==============================] - 14s 231us/sample - loss: 0.0718 - accuracy: 0.9771 - val_loss: 0.0698 - val_accuracy: 0.9781 Epoch 4/5 60000/60000 [==============================] - 14s 227us/sample - loss: 0.0540 - accuracy: 0.9820 - val_loss: 0.0685 - val_accuracy: 0.9795 Epoch 5/5 60000/60000 [==============================] - 14s 228us/sample - loss: 0.0433 - accuracy: 0.9862 - val_loss: 0.0623 - val_accuracy: 0.9823
Apache-2.0
Copy_of_get_started.ipynb
dlminvestments/cloudml-template
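The Keras TensorBoard callback accepts a few more options than shown above. A hedged sketch of commonly used ones (all keyword arguments of `tf.keras.callbacks.TensorBoard`, with the same `log_dir` convention as above):

```python
tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir=log_dir,
    histogram_freq=1,     # weight histograms every epoch (as above)
    write_graph=True,     # log the model graph (the default)
    write_images=False,   # optionally log model weights as images
    update_freq='epoch',  # write scalar metrics per epoch ('batch' or an integer also work)
)
```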
Start TensorBoard through the command line or within a notebook experience. The two interfaces are generally the same. In notebooks, use the `%tensorboard` line magic. On the command line, run the same command without "%".
%tensorboard --logdir logs/fit
_____no_output_____
Apache-2.0
Copy_of_get_started.ipynb
dlminvestments/cloudml-template
A brief overview of the dashboards shown (tabs in top navigation bar):* The **Scalars** dashboard shows how the loss and metrics change with every epoch. You can use it to also track training speed, learning rate, and other scalar values.* The **Graphs** dashboard helps you visualize your model. In this case, the Keras graph of layers is shown which can help you ensure it is built correctly. * The **Distributions** and **Histograms** dashboards show the distribution of a Tensor over time. This can be useful to visualize weights and biases and verify that they are changing in an expected way.Additional TensorBoard plugins are automatically enabled when you log other types of data. For example, the Keras TensorBoard callback lets you log images and embeddings as well. You can see what other plugins are available in TensorBoard by clicking on the "inactive" dropdown towards the top right. Using TensorBoard with other methods When training with methods such as [`tf.GradientTape()`](https://www.tensorflow.org/api_docs/python/tf/GradientTape), use `tf.summary` to log the required information.Use the same dataset as above, but convert it to `tf.data.Dataset` to take advantage of batching capabilities:
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)) train_dataset = train_dataset.shuffle(60000).batch(64) test_dataset = test_dataset.batch(64)
_____no_output_____
Apache-2.0
Copy_of_get_started.ipynb
dlminvestments/cloudml-template
The training code follows the [advanced quickstart](https://www.tensorflow.org/tutorials/quickstart/advanced) tutorial, but shows how to log metrics to TensorBoard. Choose loss and optimizer:
loss_object = tf.keras.losses.SparseCategoricalCrossentropy() optimizer = tf.keras.optimizers.Adam()
_____no_output_____
Apache-2.0
Copy_of_get_started.ipynb
dlminvestments/cloudml-template
Create stateful metrics that can be used to accumulate values during training and logged at any point:
# Define our metrics train_loss = tf.keras.metrics.Mean('train_loss', dtype=tf.float32) train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('train_accuracy') test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32) test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('test_accuracy')
_____no_output_____
Apache-2.0
Copy_of_get_started.ipynb
dlminvestments/cloudml-template
Define the training and test functions:
def train_step(model, optimizer, x_train, y_train): with tf.GradientTape() as tape: predictions = model(x_train, training=True) loss = loss_object(y_train, predictions) grads = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(grads, model.trainable_variables)) train_loss(loss) train_accuracy(y_train, predictions) def test_step(model, x_test, y_test): predictions = model(x_test) loss = loss_object(y_test, predictions) test_loss(loss) test_accuracy(y_test, predictions)
_____no_output_____
Apache-2.0
Copy_of_get_started.ipynb
dlminvestments/cloudml-template
Set up summary writers to write the summaries to disk in a different logs directory:
current_time = datetime.datetime.now().strftime("%Y%m%d-%H%M%S") train_log_dir = 'logs/gradient_tape/' + current_time + '/train' test_log_dir = 'logs/gradient_tape/' + current_time + '/test' train_summary_writer = tf.summary.create_file_writer(train_log_dir) test_summary_writer = tf.summary.create_file_writer(test_log_dir)
_____no_output_____
Apache-2.0
Copy_of_get_started.ipynb
dlminvestments/cloudml-template
Start training. Use `tf.summary.scalar()` to log metrics (loss and accuracy) during training/testing within the scope of the summary writers to write the summaries to disk. You have control over which metrics to log and how often to do it. Other `tf.summary` functions enable logging other types of data.
model = create_model() # reset our model EPOCHS = 5 for epoch in range(EPOCHS): for (x_train, y_train) in train_dataset: train_step(model, optimizer, x_train, y_train) with train_summary_writer.as_default(): tf.summary.scalar('loss', train_loss.result(), step=epoch) tf.summary.scalar('accuracy', train_accuracy.result(), step=epoch) for (x_test, y_test) in test_dataset: test_step(model, x_test, y_test) with test_summary_writer.as_default(): tf.summary.scalar('loss', test_loss.result(), step=epoch) tf.summary.scalar('accuracy', test_accuracy.result(), step=epoch) template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}' print (template.format(epoch+1, train_loss.result(), train_accuracy.result()*100, test_loss.result(), test_accuracy.result()*100)) # Reset metrics every epoch train_loss.reset_states() test_loss.reset_states() train_accuracy.reset_states() test_accuracy.reset_states()
Epoch 1, Loss: 0.24321186542510986, Accuracy: 92.84333801269531, Test Loss: 0.13006582856178284, Test Accuracy: 95.9000015258789 Epoch 2, Loss: 0.10446818172931671, Accuracy: 96.84833526611328, Test Loss: 0.08867532759904861, Test Accuracy: 97.1199951171875 Epoch 3, Loss: 0.07096975296735764, Accuracy: 97.80166625976562, Test Loss: 0.07875105738639832, Test Accuracy: 97.48999786376953 Epoch 4, Loss: 0.05380449816584587, Accuracy: 98.34166717529297, Test Loss: 0.07712937891483307, Test Accuracy: 97.56999969482422 Epoch 5, Loss: 0.041443776339292526, Accuracy: 98.71833038330078, Test Loss: 0.07514958828687668, Test Accuracy: 97.5
Apache-2.0
Copy_of_get_started.ipynb
dlminvestments/cloudml-template
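As noted above, `tf.summary` can log more than scalars; histograms and images follow the same writer/`step` pattern. A small sketch reusing `train_summary_writer`, `model`, and the last training batch from the loop above (the reshape and cast are only for display purposes):

```python
with train_summary_writer.as_default():
    # Distribution of the first Dense layer's kernel weights after training
    tf.summary.histogram('dense_kernel', model.layers[1].weights[0], step=EPOCHS)

    # A few input digits as images: (batch, height, width, channels) with values in [0, 1]
    sample_images = tf.cast(tf.reshape(x_train[:3], [-1, 28, 28, 1]), tf.float32)
    tf.summary.image('sample_digits', sample_images, step=EPOCHS)
```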
Open TensorBoard again, this time pointing it at the new log directory. We could have also started TensorBoard to monitor training while it progresses.
%tensorboard --logdir logs/gradient_tape
_____no_output_____
Apache-2.0
Copy_of_get_started.ipynb
dlminvestments/cloudml-template
That's it! You have now seen how to use TensorBoard both through the Keras callback and through `tf.summary` for more custom scenarios. TensorBoard.dev: Host and share your ML experiment results[TensorBoard.dev](https://tensorboard.dev) is a free public service that enables you to upload your TensorBoard logs and get a permalink that can be shared with everyone in academic papers, blog posts, social media, etc. This can enable better reproducibility and collaboration.To use TensorBoard.dev, run the following command:
!tensorboard dev upload \ --logdir logs/fit \ --name "(optional) My latest experiment" \ --description "(optional) Simple comparison of several hyperparameters" \ --one_shot
2020-12-17 00:52:27.342972: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 ***** TensorBoard Uploader ***** This will upload your TensorBoard logs to https://tensorboard.dev/ from the following directory: logs/fit This TensorBoard will be visible to everyone. Do not upload sensitive data. Your use of this service is subject to Google's Terms of Service <https://policies.google.com/terms> and Privacy Policy <https://policies.google.com/privacy>, and TensorBoard.dev's Terms of Service <https://tensorboard.dev/policy/terms/>. This notice will not be shown again while you are logged into the uploader. To log out, run `tensorboard dev auth revoke`. Continue? (yes/NO) Yes Please visit this URL to authorize this application: https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=373649185512-8v619h5kft38l4456nm2dj4ubeqsrvh6.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=openid+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email&state=IdEQglmQZEYpNFRPpVZ7G44Y0MLdJI&prompt=consent&access_type=offline Enter the authorization code:
Apache-2.0
Copy_of_get_started.ipynb
dlminvestments/cloudml-template
Commonly Available Datasets MNIST DatasetThe original MNIST dataset contains 70,000 handwritten digits as 28x28 images: http://yann.lecun.com/exdb/mnist/ Note that scikit-learn's `load_digits()` used below loads a much smaller variant: 1,797 digits as 8x8 images.
from sklearn.datasets import load_digits import matplotlib.pyplot as plt mnist = load_digits() X = mnist.data Y = mnist.target print(X.shape) print(Y.shape) example = X[42] print(Y[42]) img = example.reshape((8,8)) print(img) plt.imshow(img,cmap="gray") plt.show()
_____no_output_____
MIT
ml_repo/2. Working with Libraries/Some Common Datasets (Updated).ipynb
sachinpr0001/data_science
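If the full 70,000-sample, 28x28 MNIST is needed instead of the small 8x8 digits above, scikit-learn can fetch it from OpenML. A sketch (the download takes a moment, and the `as_frame` argument requires a reasonably recent scikit-learn version):

```python
from sklearn.datasets import fetch_openml

mnist_full = fetch_openml('mnist_784', version=1, as_frame=False)
X_full, Y_full = mnist_full.data, mnist_full.target
print(X_full.shape)  # (70000, 784) -- each row is a flattened 28x28 image
print(Y_full.shape)  # (70000,)
```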
Boston DatasetHousing Prices Dataset
from sklearn.datasets import load_boston boston = load_boston() X = boston.data Y = boston.target print(X.shape) print(Y.shape)
(506,)
MIT
ml_repo/2. Working with Libraries/Some Common Datasets (Updated).ipynb
sachinpr0001/data_science
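Note that `load_boston` is deprecated in recent scikit-learn releases (and removed in version 1.2) because of ethical concerns about the dataset; the California housing data is a common drop-in replacement for regression examples. A sketch:

```python
from sklearn.datasets import fetch_california_housing

housing = fetch_california_housing()
X = housing.data
Y = housing.target
print(X.shape)  # (20640, 8)
print(Y.shape)  # (20640,)
```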
Loading & Visualising MNIST Dataset using Pandas & Matplotlib
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

df = pd.read_csv("../Datasets/MNIST-2/mnist_train.csv")
df.shape
df.head(n=3)
print(type(df))

data = df.values
np.random.shuffle(data)
print(type(data))
print(data.shape)

X = data[ : ,1: ]
Y = data[ : ,0]
print(X.shape,Y.shape)

## Try to visualise one image
def drawImg(X,Y,i):
    plt.imshow(X[i].reshape(28,28),cmap='gray')
    plt.title("Label "+ str(Y[i]))
    plt.show()

for i in range(1):
    drawImg(X,Y,i)

## Split this dataset =>
split = int(0.80*X.shape[0])
print(split)
X_train,Y_train = X[ :split, :], Y[:split]
X_test,Y_test = X[split: , :], Y[split: ]
print(X_train.shape,Y_train.shape)
print(X_test.shape,Y_test.shape)

# Randomization
a = np.array([1,2,3,4,5])
np.random.shuffle(a)
print(a)

# Randomly shuffle a 2 D array
a = np.array([[1,2,3], [4,5,6], [7,8,9]])
np.random.shuffle(a)
print(a)

# Try to plot a visualisation (Grid of first 25 images 5 X 5)
plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.imshow(X_train[i].reshape(28,28),cmap='gray')
    plt.title(Y_train[i])
    plt.axis("off")

# last thing
from sklearn.model_selection import train_test_split
XT,Xt,YT,Yt = train_test_split(X,Y,test_size=0.2,random_state=5)
print(XT.shape,YT.shape)
print(Xt.shape,Yt.shape)
(33600, 784) (33600,) (8400, 784) (8400,)
MIT
ml_repo/2. Working with Libraries/Some Common Datasets (Updated).ipynb
sachinpr0001/data_science
Python fundamentalsA quick introduction to the [Python programming language](https://www.python.org/) and [Jupyter notebooks](https://jupyter.org/). ([We're using Python 3, not Python 2](https://pythonclock.org/).) Basic data types and the print() function
# variable assignment # https://www.digitalocean.com/community/tutorials/how-to-use-variables-in-python-3 # strings -- enclose in single or double quotes, just make sure they match my_name = 'Cody' # numbers int_num = 6 float_num = 6.4 # the print function print(8) print('Hello!') print(my_name) print(int_num) print(float_num) # booleans print(True) print(False) print(4 > 6) print(6 == 6) print('ell' in 'Hello')
_____no_output_____
MIT
completed/00. Python Fundamentals (Part 1).ipynb
cjwinchester/cfj-2017
Basic mathYou can do [basic math](https://www.digitalocean.com/community/tutorials/how-to-do-math-in-python-3-with-operators) with Python. (You can also do [more advanced math](https://docs.python.org/3/library/math.html).)
# addition add_eq = 4 + 2 # subtraction sub_eq = 4 - 2 # multiplication mult_eq = 4 * 2 # division div_eq = 4 / 2 # etc.
_____no_output_____
MIT
completed/00. Python Fundamentals (Part 1).ipynb
cjwinchester/cfj-2017
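A few more operators come up constantly beyond the four above: `**` for exponents, `%` for remainders, and `//` for integer division, with the `math` module covering the rest. A quick sketch:

```python
import math

print(2 ** 10)            # exponent: 1024
print(10 % 3)             # remainder (modulo): 1
print(10 // 3)            # integer (floor) division: 3
print(math.sqrt(16))      # 4.0
print(round(math.pi, 2))  # 3.14
```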
ListsA comma-separated collection of items between square brackets: `[]`. Python keeps track of the order of things inside a list.
# create a list: name, hometown, age # an item's position in the list is the key thing cody = ['Cody', 'Midvale, WY', 32] # create another list of mixed data my_list = [1, 2, 3, 'hello', True, ['a', 'b', 'c']] # use len() to get the number of items in the list my_list_count = len(my_list) print('There are', my_list_count, 'items in my list.') # use square brackets [] to access items in a list # (counting starts at zero in Python) # get the first item first_item = my_list[0] print(first_item) # you can do negative indexing to get items from the end of your list # get the last item last_item = my_list[-1] print(last_item) # Use colons to get a range of items in a list # get the first two items # the last number in a list slice is the first list item that's ~not~ included in the result my_range = my_list[0:2] print(my_range) # if you leave the last number off, it takes the item at the first number's index and everything afterward # get everything from the third item onward my_open_range = my_list[2:] print(my_open_range) # Use append() to add things to a list my_list.append(5) print(my_list) # Use pop() to remove items from the end of a list my_list.pop() print(my_list) # use join() to join items from a list into a string with a delimiter of your choosing letter_list = ['a', 'b', 'c'] joined_list = '-'.join(letter_list) print(joined_list)
_____no_output_____
MIT
completed/00. Python Fundamentals (Part 1).ipynb
cjwinchester/cfj-2017
DictionariesA data structure that maps _keys_ to _values_ inside curly brackets: `{}`. Items in the dictionary are separated by commas. Historically, Python did not keep track of the order of items in a dictionary (plain dictionaries do preserve insertion order as of Python 3.7); if you want to be explicit about insertion order, use an [OrderedDict](https://docs.python.org/3/library/collections.html#collections.OrderedDict) instead.
my_dict = {'name': 'Cody', 'title': 'Training director', 'organization': 'IRE'} # Access items in a dictionary using square brackets and the key (typically a string) my_name = my_dict['name'] print(my_name) # You can also use the `get()` method to retrieve values # you can optionally provide a second argument as the default value # if the key doesn't exist (otherwise defaults to `None`) my_name = my_dict.get('name', 'Jefferson Humperdink') print(my_name) # Use the .keys() method to get the keys of a dictionary print(my_dict.keys()) # Use the .values() method to get the values print(my_dict.values()) # add items to a dictionary using square brackets, the name of the key (typically a string) # and set the value like you'd set a variable, with = my_dict['my_age'] = 32 print(my_dict) # delete an item from a dictionary with `del` del my_dict['my_age'] print(my_dict)
_____no_output_____
MIT
completed/00. Python Fundamentals (Part 1).ipynb
cjwinchester/cfj-2017
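A quick sketch of the `OrderedDict` mentioned above; it behaves like a regular dictionary but makes insertion order explicit (plain dicts also preserve order in Python 3.7+):

```python
from collections import OrderedDict

ordered = OrderedDict()
ordered['first'] = 1
ordered['second'] = 2
ordered['third'] = 3

# Keys come back in the order they were inserted
print(list(ordered.keys()))  # ['first', 'second', 'third']
```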
Commenting your codePython skips lines that begin with a hashtag -- these lines are used to write comments to help explain the code to others (and to your future self).Multi-line comments are typically written between triple quotes: """ """ (technically these are string literals, but Python ignores them when they aren't assigned to anything).
# this is a one-line comment """ This is a multi-line comment ~~~ """
_____no_output_____
MIT
completed/00. Python Fundamentals (Part 1).ipynb
cjwinchester/cfj-2017
Comparison operatorsWhen you want to [compare values](https://docs.python.org/3/reference/expressions.html#value-comparisons), you can use these symbols:- `<` means less than- `>` means greater than- `==` means equal- `>=` means greater than or equal- `<=` means less than or equal- `!=` means not equal
4 > 6 'Hello!' == 'Hello!' (2 + 2) != (4 * 2) 100.2 >= 100
_____no_output_____
MIT
completed/00. Python Fundamentals (Part 1).ipynb
cjwinchester/cfj-2017
String functionsPython has a number of built-in methods to work with strings. They're useful if, say, you're using Python to clean data. Here are a few of them: _strip()_Call `strip()` on a string to remove whitespace from either side. It's like using the `=TRIM()` function in Excel.
whitespace_str = ' hello! ' print(whitespace_str.strip())
_____no_output_____
MIT
completed/00. Python Fundamentals (Part 1).ipynb
cjwinchester/cfj-2017
_upper()_ and _lower()_Call `.upper()` on a string to make the characters uppercase. Call `.lower()` on a string to make the characters lowercase. This can be useful when testing strings for equality.
my_name = 'Cody' my_name_upper = my_name.upper() print(my_name_upper) my_name_lower = my_name.lower() print(my_name_lower)
_____no_output_____
MIT
completed/00. Python Fundamentals (Part 1).ipynb
cjwinchester/cfj-2017
_replace()_Use `.replace()` to substitute bits of text.
company = 'Bausch & Lomb' company_no_ampersand = company.replace('&', 'and') print(company_no_ampersand)
_____no_output_____
MIT
completed/00. Python Fundamentals (Part 1).ipynb
cjwinchester/cfj-2017
_split()_Use `.split()` to split a string on some delimiter. If you don't specify a delimiter, it uses a single space as the default.
date = '6/4/2011' date_split = date.split('/') print(date_split)
_____no_output_____
MIT
completed/00. Python Fundamentals (Part 1).ipynb
cjwinchester/cfj-2017
_zfill()_Among other things, you can use `.zfill()` to add zero padding -- for instance, if you're working with ZIP code data that was saved as a number somewhere and you've lost the leading zeroes for that handful of ZIP codes that begin with 0._Note: `.zfill()` is a string method, so if you want to apply it to a number, you'll need to first coerce it to a string with `str()`._
mangled_zip = '2301' fixed_zip = mangled_zip.zfill(5) print(fixed_zip) num_zip = 2301 fixed_num_zip = str(num_zip).zfill(5) print(fixed_num_zip)
_____no_output_____
MIT
completed/00. Python Fundamentals (Part 1).ipynb
cjwinchester/cfj-2017
_slicing_Like lists, strings are _iterables_, so you can use slicing to grab chunks.
my_string = 'supercalifragilisticexpialidocious' chunk = my_string[9:20] print(chunk)
_____no_output_____
MIT
completed/00. Python Fundamentals (Part 1).ipynb
cjwinchester/cfj-2017
_startswith()_, _endswith()_ and _in_If you need to test whether a string starts with a series of characters, use `.startswith()`. If you need to test whether a string ends with a series of characters, use `.endswith()`. If you need to test whether a string is part of another string -- or is an item in a list of strings -- use the `in` operator.These are case sensitive, so you'd typically `.upper()` or `.lower()` the strings you're comparing to ensure an apples-to-apples comparison.
str_to_test = 'hello' print(str_to_test.startswith('hel')) print(str_to_test.endswith('lo')) print('el' in str_to_test) print(str_to_test in ['hi', 'whatsup', 'salutations', 'hello'])
_____no_output_____
MIT
completed/00. Python Fundamentals (Part 1).ipynb
cjwinchester/cfj-2017