| column | dtype | min | max | nullable |
|---|---|---|---|---|
| hexsha | stringlengths | 40 | 40 | |
| size | int64 | 6 | 14.9M | |
| ext | stringclasses (1 value) | | | |
| lang | stringclasses (1 value) | | | |
| max_stars_repo_path | stringlengths | 6 | 260 | |
| max_stars_repo_name | stringlengths | 6 | 119 | |
| max_stars_repo_head_hexsha | stringlengths | 40 | 41 | |
| max_stars_repo_licenses | sequence | | | |
| max_stars_count | int64 | 1 | 191k | ⌀ |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24 | 24 | ⌀ |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24 | 24 | ⌀ |
| max_issues_repo_path | stringlengths | 6 | 260 | |
| max_issues_repo_name | stringlengths | 6 | 119 | |
| max_issues_repo_head_hexsha | stringlengths | 40 | 41 | |
| max_issues_repo_licenses | sequence | | | |
| max_issues_count | int64 | 1 | 67k | ⌀ |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24 | 24 | ⌀ |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24 | 24 | ⌀ |
| max_forks_repo_path | stringlengths | 6 | 260 | |
| max_forks_repo_name | stringlengths | 6 | 119 | |
| max_forks_repo_head_hexsha | stringlengths | 40 | 41 | |
| max_forks_repo_licenses | sequence | | | |
| max_forks_count | int64 | 1 | 105k | ⌀ |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24 | 24 | ⌀ |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24 | 24 | ⌀ |
| avg_line_length | float64 | 2 | 1.04M | |
| max_line_length | int64 | 2 | 11.2M | |
| alphanum_fraction | float64 | 0 | 1 | |
| cells | sequence | | | |
| cell_types | sequence | | | |
| cell_type_groups | sequence | | | |

The data rows below are pipe-delimited in this column order, one record per notebook (⌀ marks nullable columns).
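Rendered flat, each record is hard to read, so it is usually easier to work with the rows programmatically. A minimal sketch, assuming the split has been exported locally as JSON Lines; the `notebooks.jsonl` filename is an assumption for illustration, not part of this card:

```python
import json

# Each line is one notebook-level record following the schema above.
records = []
with open("notebooks.jsonl", "r", encoding="utf-8") as fh:  # hypothetical local export
    for line in fh:
        records.append(json.loads(line))

rec = records[0]
print(rec["hexsha"], rec["size"], rec["lang"])  # commit hash, size in bytes, "Jupyter Notebook"
print(len(rec["cells"]), "cell groups")         # grouped runs of cells; see cell_type_groups
```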
d04075594dd3c3d31c326b1add11c238c50f42bf | 983,203 | ipynb | Jupyter Notebook | 0-preprocess_datasets.ipynb | carpenterlab/2021_Haghighi_submitted | e62226c27d4a20ad2f8412eb7223a573e2da5942 | [
"BSD-3-Clause"
] | 6 | 2021-07-06T12:28:20.000Z | 2022-02-26T12:25:45.000Z | 0-preprocess_datasets.ipynb | carpenterlab/2021_Haghighi_NeurIPS_Dataset_submitted | e62226c27d4a20ad2f8412eb7223a573e2da5942 | [
"BSD-3-Clause"
] | 3 | 2022-02-25T20:20:23.000Z | 2022-02-26T00:46:52.000Z | 0-preprocess_datasets.ipynb | carpenterlab/2021_Haghighi_NeurIPS_Dataset_submitted | e62226c27d4a20ad2f8412eb7223a573e2da5942 | [
"BSD-3-Clause"
] | 2 | 2022-01-08T18:33:11.000Z | 2022-02-25T23:56:14.000Z | 93.33615 | 66,351 | 0.754752 | [
[
[
"### Cell Painting morphological (CP) and L1000 gene expression (GE) profiles for the following datasets:\n \n- **CDRP**-BBBC047-Bray-CP-GE (Cell line: U2OS) : \n * $\\bf{CP}$ There are 30,430 unique compounds for CP dataset, median number of replicates --> 4\n * $\\bf{GE}$ There are 21,782 unique compounds for GE dataset, median number of replicates --> 3\n * 20,131 compounds are present in both datasets.\n\n- **CDRP-bio**-BBBC036-Bray-CP-GE (Cell line: U2OS) : \n * $\\bf{CP}$ There are 2,242 unique compounds for CP dataset, median number of replicates --> 8\n * $\\bf{GE}$ There are 1,917 unique compounds for GE dataset, median number of replicates --> 2\n * 1916 compounds are present in both datasets.\n\n \n- **LUAD**-BBBC041-Caicedo-CP-GE (Cell line: A549) : \n * $\\bf{CP}$ There are 593 unique alleles for CP dataset, median number of replicates --> 8\n * $\\bf{GE}$ There are 529 unique alleles for GE dataset, median number of replicates --> 8\n * 525 alleles are present in both datasets.\n \n \n- **TA-ORF**-BBBC037-Rohban-CP-GE (Cell line: U2OS) :\n * $\\bf{CP}$ There are 323 unique alleles for CP dataset, median number of replicates --> 5\n * $\\bf{GE}$ There are 327 unique alleles for GE dataset, median number of replicates --> 2\n * 150 alleles are present in both datasets.\n \n- **LINCS**-Pilot1-CP-GE (Cell line: U2OS) :\n * $\\bf{CP}$ There are 1570 unique compounds across 7 doses for CP dataset, median number of replicates --> 5\n * $\\bf{GE}$ There are 1402 unique compounds for GE dataset, median number of replicates --> 3\n * $N_{p/d}$: 6984 compounds are present in both datasets.\n--------------------------------------------\n #### Link to the processed profiles:\n \n https://cellpainting-datasets.s3.us-east-1.amazonaws.com/Rosetta-GE-CP",
"_____no_output_____"
]
],
[
[
"%matplotlib notebook\n%load_ext autoreload\n%autoreload 2\nimport numpy as np\nimport scipy.spatial\nimport pandas as pd\nimport sklearn.decomposition\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport os\nfrom cmapPy.pandasGEXpress.parse import parse\nfrom utils.replicateCorrs import replicateCorrs\nfrom utils.saveAsNewSheetToExistingFile import saveAsNewSheetToExistingFile,saveDF_to_CSV_GZ_no_timestamp\nfrom importlib import reload\nfrom utils.normalize_funcs import standardize_per_catX\n# sns.set_style(\"whitegrid\")\n# np.__version__\npd.__version__",
"_____no_output_____"
]
],
[
[
"### Input / ouput files:",
"_____no_output_____"
],
[
"- **CDRPBIO**-BBBC047-Bray-CP-GE (Cell line: U2OS) : \n * $\\bf{CP}$ \n * Input:\n * Output:\n \n * $\\bf{GE}$ \n * Input: .mat files that are generated using https://github.com/broadinstitute/2014_wawer_pnas\n * Output:\n \n- **LUAD**-BBBC041-Caicedo-CP-GE (Cell line: A549) : \n * $\\bf{CP}$ \n * Input:\n * Output:\n \n * $\\bf{GE}$ \n * Input:\n * Output:\n \n- **TA-ORF**-BBBC037-Rohban-CP-GE (Cell line: U2OS) :\n * $\\bf{CP}$ \n * Input:\n * Output:\n \n * $\\bf{GE}$ \n * Input: https://data.broadinstitute.org/icmap/custom/TA/brew/pc/TA.OE005_U2OS_72H/\n * Output:\n \n### Reformat Cell-Painting Data Sets\n- CDRP and TA-ORF are in /storage/data/marziehhaghighi/Rosetta/raw-profiles/\n- Luad is already processed by Juan, source of the files is at /storage/luad/profiles_cp\n in case you want to reformat",
"_____no_output_____"
]
],
[
[
"fileName='RepCorrDF'\n### dirs on gpu cluster\n# rawProf_dir='/storage/data/marziehhaghighi/Rosetta/raw-profiles/'\n# procProf_dir='/home/marziehhaghighi/workspace_rosetta/workspace/'\n\n### dirs on ec2\nrawProf_dir='/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/raw-profiles/'\n# procProf_dir='./'\nprocProf_dir='/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/'\n# s3://imaging-platform/projects/2018_04_20_Rosetta/workspace/preprocessed_data\n# aws s3 sync preprocessed_data s3://cellpainting-datasets/Rosetta-GE-CP/preprocessed_data --profile jumpcpuser\n\nfilename='../../results/RepCor/'+fileName+'.xlsx'\n",
"_____no_output_____"
],
[
"# ls ../../\n# https://cellpainting-datasets.s3.us-east-1.amazonaws.com/",
"_____no_output_____"
]
],
[
[
"# CDRP-BBBC047-Bray",
"_____no_output_____"
],
[
"### GE - L1000 - CDRP",
"_____no_output_____"
]
],
[
[
"os.listdir(rawProf_dir+'/l1000_CDRP/')",
"_____no_output_____"
],
[
"cdrp_dataDir=rawProf_dir+'/l1000_CDRP/'\ncpd_info = pd.read_csv(cdrp_dataDir+\"/compounds.txt\", sep=\"\\t\", dtype=str)\ncpd_info.columns",
"_____no_output_____"
],
[
"from scipy.io import loadmat\nx = loadmat(cdrp_dataDir+'cdrp.all.prof.mat')\n\nk1=x['metaWell']['pert_id'][0][0]\nk2=x['metaGen']['AFFX_PROBE_ID'][0][0]\nk3=x['metaWell']['pert_dose'][0][0]\nk4=x['metaWell']['det_plate'][0][0]\n# pert_dose\n# x['metaWell']['pert_id'][0][0][0][0][0]\npertID = []\nprobID=[]\nfor r in range(len(k1)):\n v = k1[r][0][0]\n pertID.append(v)\n# probID.append(k2[r][0][0])\n\nfor r in range(len(k2)):\n probID.append(k2[r][0][0])\n \npert_dose=[]\ndet_plate=[]\nfor r in range(len(k3)):\n pert_dose.append(k3[r][0])\n det_plate.append(k4[r][0][0]) \n \ndataArray=x['pclfc'];\ncdrp_l1k_rep = pd.DataFrame(data=dataArray,columns=probID)\ncdrp_l1k_rep['pert_id']=pertID\ncdrp_l1k_rep['pert_dose']=pert_dose\ncdrp_l1k_rep['det_plate']=det_plate\ncdrp_l1k_rep['BROAD_CPD_ID']=cdrp_l1k_rep['pert_id'].str[:13]\ncdrp_l1k_rep2=pd.merge(cdrp_l1k_rep, cpd_info, how='left',on=['BROAD_CPD_ID'])\nl1k_features_cdrp=cdrp_l1k_rep2.columns[cdrp_l1k_rep2.columns.str.contains(\"_at\")]\ncdrp_l1k_rep2['pert_id_dose']=cdrp_l1k_rep2['BROAD_CPD_ID']+'_'+cdrp_l1k_rep2['pert_dose'].round(2).astype(str)\ncdrp_l1k_rep2['pert_sample_dose']=cdrp_l1k_rep2['pert_id']+'_'+cdrp_l1k_rep2['pert_dose'].round(2).astype(str)\n\n# cdrp_l1k_df.head()\nprint(cpd_info.shape,cdrp_l1k_rep.shape,cdrp_l1k_rep2.shape)\n\ncdrp_l1k_rep2['pert_id_dose']=cdrp_l1k_rep2['pert_id_dose'].replace('DMSO_-666.0', 'DMSO')\ncdrp_l1k_rep2['pert_sample_dose']=cdrp_l1k_rep2['pert_sample_dose'].replace('DMSO_-666.0', 'DMSO')\n\nsaveDF_to_CSV_GZ_no_timestamp(cdrp_l1k_rep2,procProf_dir+'preprocessed_data/CDRP-BBBC047-Bray/L1000/replicate_level_l1k.csv.gz');\n# cdrp_l1k_rep2.head()",
"(32324, 4) (68120, 981) (68120, 986)\n"
],
[
"# cpd_info",
"_____no_output_____"
]
],
[
[
"### CP - CDRP",
"_____no_output_____"
]
],
[
[
"profileType=['_augmented','_normalized']\n\nbioactiveFlag=\"\";# either \"-bioactive\" or \"\"\n\nplates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/')\nfor pt in profileType[1:2]:\n repLevelCDRP0=[]\n for p in plates:\n# repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP/'+p+'/'+p+pt+'.csv'))\n repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP'+bioactiveFlag+'/'+p+'/'+p+pt+'.csv')) #if bioactive\n repLevelCDRP = pd.concat(repLevelCDRP0)\n metaCDRP1=pd.read_csv(rawProf_dir+'/CP_CDRP/metadata/metadata_CDRP.csv')\n # metaCDRP1=metaCDRP1.rename(columns={\"PlateName\":\"Metadata_Plate_Map_Name\",'Well':'Metadata_Well'})\n # metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower()\n repLevelCDRP2=pd.merge(repLevelCDRP, metaCDRP1, how='left',on=['Metadata_broad_sample'])\n# repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter'].round(0).astype(int).astype(str)\n# repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_pert_id']+'_'+(repLevelCDRP2['Metadata_mmoles_per_liter']*2).round(0).astype(int).astype(str)\n repLevelCDRP2[\"Metadata_mmoles_per_liter2\"]=(repLevelCDRP2[\"Metadata_mmoles_per_liter\"]*2).round(2)\n repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter2'].astype(str)\n\n repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_Sample_Dose'].replace('DMSO_0.0', 'DMSO')\n repLevelCDRP2['Metadata_pert_id']=repLevelCDRP2['Metadata_pert_id'].replace(np.nan, 'DMSO')\n \n# repLevelCDRP2.to_csv(procProf_dir+'preprocessed_data/CDRPBIO-BBBC036-Bray/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip')\n\n# ,\n if bioactiveFlag:\n dataFolderName='CDRPBIO-BBBC036-Bray'\n saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\\\n '/CellPainting/replicate_level_cp'+pt+'.csv.gz')\n else:\n# sgfsgf\n dataFolderName='CDRP-BBBC047-Bray'\n saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\\\n '/CellPainting/replicate_level_cp'+pt+'.csv.gz')\n\n print(metaCDRP1.shape,repLevelCDRP.shape,repLevelCDRP2.shape)",
"_____no_output_____"
],
[
"dataFolderName='CDRP-BBBC047-Bray'\ncp_feats=repLevelCDRP.columns[repLevelCDRP.columns.str.contains(\"Cells_|Cytoplasm_|Nuclei_\")].tolist()\nfeatures_to_remove =find_correlation(repLevelCDRP2[cp_feats], threshold=0.9, remove_negative=False)\nrepLevelCDRP2_var_sel=repLevelCDRP2.drop(columns=features_to_remove)\nsaveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2_var_sel,procProf_dir+'preprocessed_data/'+dataFolderName+\\\n '/CellPainting/replicate_level_cp'+'_normalized_variable_selected'+'.csv.gz')",
"/home/ubuntu/anaconda3/lib/python3.8/site-packages/pandas/core/generic.py:3167: RuntimeWarning: compression has no effect when passing file-like object as input.\n formatter.save()\n"
],
[
"# features_to_remove\n# features_to_remove\n# features_to_remove",
"_____no_output_____"
],
[
"repLevelCDRP2['Nuclei_Texture_Variance_RNA_3_0']",
"_____no_output_____"
],
[
"# repLevelCDRP2.shape\n# cp_scaled.columns[cp_scaled.columns.str.contains(\"Cells_|Cytoplasm_|Nuclei_\")].tolist()",
"_____no_output_____"
]
],
[
[
"# CDRP-bio-BBBC036-Bray",
"_____no_output_____"
],
[
"### GE - L1000 - CDRPBIO",
"_____no_output_____"
]
],
[
[
"bioactiveFlag=\"-bioactive\";# either \"-bioactive\" or \"\"\nplates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/')",
"_____no_output_____"
],
[
"# plates",
"_____no_output_____"
],
[
"cdrp_l1k_rep2_bioactive=cdrp_l1k_rep2[cdrp_l1k_rep2[\"pert_sample_dose\"].isin(repLevelCDRP2.Metadata_Sample_Dose.unique().tolist())]\n",
"_____no_output_____"
],
[
"cdrp_l1k_rep.det_plate",
"_____no_output_____"
]
],
[
[
"### CP - CDRPBIO",
"_____no_output_____"
]
],
[
[
"profileType=['_augmented','_normalized','_normalized_variable_selected']\n\nbioactiveFlag=\"-bioactive\";# either \"-bioactive\" or \"\"\n\nplates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/')\nfor pt in profileType:\n repLevelCDRP0=[]\n for p in plates:\n# repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP/'+p+'/'+p+pt+'.csv'))\n repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP'+bioactiveFlag+'/'+p+'/'+p+pt+'.csv')) #if bioactive\n repLevelCDRP = pd.concat(repLevelCDRP0)\n metaCDRP1=pd.read_csv(rawProf_dir+'/CP_CDRP/metadata/metadata_CDRP.csv')\n # metaCDRP1=metaCDRP1.rename(columns={\"PlateName\":\"Metadata_Plate_Map_Name\",'Well':'Metadata_Well'})\n # metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower()\n repLevelCDRP2=pd.merge(repLevelCDRP, metaCDRP1, how='left',on=['Metadata_broad_sample'])\n# repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter'].round(0).astype(int).astype(str)\n# repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_pert_id']+'_'+(repLevelCDRP2['Metadata_mmoles_per_liter']*2).round(0).astype(int).astype(str)\n repLevelCDRP2[\"Metadata_mmoles_per_liter2\"]=(repLevelCDRP2[\"Metadata_mmoles_per_liter\"]*2).round(2)\n repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter2'].astype(str)\n\n repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_Sample_Dose'].replace('DMSO_0.0', 'DMSO')\n repLevelCDRP2['Metadata_pert_id']=repLevelCDRP2['Metadata_pert_id'].replace(np.nan, 'DMSO')\n \n# repLevelCDRP2.to_csv(procProf_dir+'preprocessed_data/CDRPBIO-BBBC036-Bray/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip')\n\n# ,\n if bioactiveFlag:\n dataFolderName='CDRPBIO-BBBC036-Bray'\n saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\\\n '/CellPainting/replicate_level_cp'+pt+'.csv.gz')\n else:\n dataFolderName='CDRP-BBBC047-Bray'\n saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\\\n '/CellPainting/replicate_level_cp'+pt+'.csv.gz')\n\n print(metaCDRP1.shape,repLevelCDRP.shape,repLevelCDRP2.shape)",
"_____no_output_____"
]
],
[
[
"# LUAD-BBBC041-Caicedo",
"_____no_output_____"
],
[
"### GE - L1000 - LUAD",
"_____no_output_____"
]
],
[
[
"os.listdir(rawProf_dir+'/l1000_LUAD/input/')",
"_____no_output_____"
],
[
"os.listdir(rawProf_dir+'/l1000_LUAD/output/')",
"_____no_output_____"
],
[
"luad_dataDir=rawProf_dir+'/l1000_LUAD/'\nluad_info1 = pd.read_csv(luad_dataDir+\"/input/TA.OE014_A549_96H.map\", sep=\"\\t\", dtype=str)\nluad_info2 = pd.read_csv(luad_dataDir+\"/input/TA.OE015_A549_96H.map\", sep=\"\\t\", dtype=str)\nluad_info=pd.concat([luad_info1, luad_info2], ignore_index=True)\nluad_info.head()",
"_____no_output_____"
],
[
"luad_l1k_df = parse(luad_dataDir+\"/output/high_rep_A549_8reps_141230_ZSPCINF_n4232x978.gctx\").data_df.T.reset_index()\nluad_l1k_df=luad_l1k_df.rename(columns={\"cid\":\"id\"})\n# cdrp_l1k_df['XX']=cdrp_l1k_df['cid'].str[0]\n# cdrp_l1k_df['BROAD_CPD_ID']=cdrp_l1k_df['cid'].str[2:15]\nluad_l1k_df2=pd.merge(luad_l1k_df, luad_info, how='inner',on=['id'])\nluad_l1k_df2=luad_l1k_df2.rename(columns={\"x_mutation_status\":\"allele\"})\n\nl1k_features=luad_l1k_df2.columns[luad_l1k_df2.columns.str.contains(\"_at\")]\nluad_l1k_df2['allele']=luad_l1k_df2['allele'].replace('UnTrt', 'DMSO')\nprint(luad_info.shape,luad_l1k_df.shape,luad_l1k_df2.shape)\nsaveDF_to_CSV_GZ_no_timestamp(luad_l1k_df2,procProf_dir+'/preprocessed_data/LUAD-BBBC041-Caicedo/L1000/replicate_level_l1k.csv.gz')",
"(5945, 54) (4232, 979) (4232, 1032)\n"
],
[
"luad_l1k_df_scaled = standardize_per_catX(luad_l1k_df2,'det_plate',l1k_features.tolist());\nx_l1k_luad=replicateCorrs(luad_l1k_df_scaled.reset_index(drop=True),'allele',l1k_features,1)\n# x_l1k_luad=replicateCorrs(luad_l1k_df2[luad_l1k_df2['allele']!='DMSO'].reset_index(drop=True),'allele',l1k_features,1)\n# saveAsNewSheetToExistingFile(filename,x_l1k_luad[2],'l1k-luad')",
"here3\n"
]
],
[
[
"### CP - LUAD",
"_____no_output_____"
]
],
[
[
"profileType=['_augmented','_normalized','_normalized_variable_selected']\nplates=os.listdir('/storage/luad/profiles_cp/LUAD-BBBC043-Caicedo/')\nfor pt in profileType[1:2]:\n repLevelLuad0=[]\n for p in plates:\n repLevelLuad0.append(pd.read_csv('/storage/luad/profiles_cp/LUAD-BBBC043-Caicedo/'+p+'/'+p+pt+'.csv'))\n repLevelLuad = pd.concat(repLevelLuad0)\n metaLuad1=pd.read_csv(rawProf_dir+'/CP_LUAD/metadata/combined_platemaps_AHB_20150506_ssedits.csv')\n metaLuad1=metaLuad1.rename(columns={\"PlateName\":\"Metadata_Plate_Map_Name\",'Well':'Metadata_Well'})\n metaLuad1['Metadata_Well']=metaLuad1['Metadata_Well'].str.lower()\n # metaLuad2=pd.read_csv('~/workspace_rosetta/workspace/raw_profiles/CP_LUAD/metadata/barcode_platemap.csv')\n # Y[Y['Metadata_Well']=='g05']['Nuclei_Texture_Variance_Mito_5_0']\n repLevelLuad2=pd.merge(repLevelLuad, metaLuad1, how='inner',on=['Metadata_Plate_Map_Name','Metadata_Well'])\n repLevelLuad2['x_mutation_status']=repLevelLuad2['x_mutation_status'].replace(np.nan, 'DMSO')\n cp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains(\"Cells_|Cytoplasm_|Nuclei_\")]\n# repLevelLuad2.to_csv(procProf_dir+'preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip')\n saveDF_to_CSV_GZ_no_timestamp(repLevelLuad2,procProf_dir+'preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt+'.csv.gz')\n print(metaLuad1.shape,repLevelLuad.shape,repLevelLuad2.shape) ",
"_____no_output_____"
],
[
"pt=['_normalized']\n# Read save data\nrepLevelLuad2=pd.read_csv('./preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt[0]+'.csv.gz')\n\n# repLevelTA.head()\ncp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains(\"Cells_|Cytoplasm_|Nuclei_\")]\ncols2remove0=[i for i in cp_features if ((repLevelLuad2[i].isnull()).sum(axis=0)/repLevelLuad2.shape[0])>0.05]\nprint(cols2remove0)\nrepLevelLuad2=repLevelLuad2.drop(cols2remove0, axis=1);\ncp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains(\"Cells_|Cytoplasm_|Nuclei_\")]\nrepLevelLuad2 = repLevelLuad2.interpolate()\nrepLevelLuad2 = standardize_per_catX(repLevelLuad2,'Metadata_Plate',cp_features.tolist());\ndf1=repLevelLuad2[~repLevelLuad2['x_mutation_status'].isnull()].reset_index(drop=True)\nx_cp_luad=replicateCorrs(df1,'x_mutation_status',cp_features,1)\nsaveAsNewSheetToExistingFile(filename,x_cp_luad[2],'cp-luad')",
"_____no_output_____"
]
],
[
[
"# TA-ORF-BBBC037-Rohban",
"_____no_output_____"
],
[
"### GE - L1000 ",
"_____no_output_____"
]
],
[
[
"taorf_datadir=rawProf_dir+'/l1000_TA_ORF/'\ngene_info = pd.read_csv(taorf_datadir+\"TA.OE005_U2OS_72H.map.txt\", sep=\"\\t\", dtype=str)\n# gene_info.columns\n# TA.OE005_U2OS_72H_INF_n729x22268.gctx\n# TA.OE005_U2OS_72H_QNORM_n729x978.gctx\n# TA.OE005_U2OS_72H_ZSPCINF_n729x22268.gctx\n# TA.OE005_U2OS_72H_ZSPCQNORM_n729x978.gctx\ntaorf_l1k0 = parse(taorf_datadir+\"TA.OE005_U2OS_72H_ZSPCQNORM_n729x978.gctx\")\n# taorf_l1k0 = parse(taorf_datadir+\"TA.OE005_U2OS_72H_QNORM_n729x978.gctx\")\ntaorf_l1k_df0=taorf_l1k0.data_df\ntaorf_l1k_df=taorf_l1k_df0.T.reset_index()\nl1k_features=taorf_l1k_df.columns[taorf_l1k_df.columns.str.contains(\"_at\")]\ntaorf_l1k_df=taorf_l1k_df.rename(columns={\"cid\":\"id\"})\ntaorf_l1k_df2=pd.merge(taorf_l1k_df, gene_info, how='inner',on=['id'])\n# print(taorf_l1k_df.shape,gene_info.shape,taorf_l1k_df2.shape)\ntaorf_l1k_df2.head()\n# x_genesymbol_mutation\ntaorf_l1k_df2['pert_id']=taorf_l1k_df2['pert_id'].replace('CMAP-000', 'DMSO')\n# compression_opts = dict(method='zip',archive_name='out.csv') \n# taorf_l1k_df2.to_csv(procProf_dir+'preprocessed_data/TA-ORF-BBBC037-Rohban/L1000/replicate_level_l1k.csv.gz',index=False,compression=compression_opts)\nsaveDF_to_CSV_GZ_no_timestamp(taorf_l1k_df2,procProf_dir+'preprocessed_data/TA-ORF-BBBC037-Rohban/L1000/replicate_level_l1k.csv.gz')\nprint(gene_info.shape,taorf_l1k_df.shape,taorf_l1k_df2.shape)\n# gene_info.head()",
"/home/ubuntu/anaconda3/lib/python3.8/site-packages/pandas/core/generic.py:3167: RuntimeWarning: compression has no effect when passing file-like object as input.\n formatter.save()\n"
],
[
"taorf_l1k_df2.groupby(['x_genesymbol_mutation']).size().describe()",
"_____no_output_____"
],
[
"taorf_l1k_df2.groupby(['pert_id']).size().describe()",
"_____no_output_____"
]
],
[
[
"#### Check Replicate Correlation",
"_____no_output_____"
]
],
[
[
"# df1=taorf_l1k_df2[taorf_l1k_df2['pert_id']!='CMAP-000']\n\ndf1_scaled = standardize_per_catX(taorf_l1k_df2,'det_plate',l1k_features.tolist());\ndf1_scaled2=df1_scaled[df1_scaled['pert_id']!='DMSO']\nx=replicateCorrs(df1_scaled2,'pert_id',l1k_features,1)",
"here3\n"
]
],
[
[
"### CP - TAORF",
"_____no_output_____"
]
],
[
[
"profileType=['_augmented','_normalized','_normalized_variable_selected']\nplates=os.listdir(rawProf_dir+'TA-ORF-BBBC037-Rohban/')\nfor pt in profileType[0:1]:\n repLevelTA0=[]\n for p in plates:\n repLevelTA0.append(pd.read_csv(rawProf_dir+'TA-ORF-BBBC037-Rohban/'+p+'/'+p+pt+'.csv'))\n repLevelTA = pd.concat(repLevelTA0)\n metaTA1=pd.read_csv(rawProf_dir+'/CP_TA_ORF/metadata/metadata_TA.csv')\n metaTA2=pd.read_csv(rawProf_dir+'/CP_TA_ORF/metadata/metadata_TA_2.csv')\n# metaTA2=metaTA2.rename(columns={\"Metadata_broad_sample\":\"Metadata_broad_sample_2\",'Metadata_Treatment':'Gene Allele Name'})\n metaTA=pd.merge(metaTA2, metaTA1, how='left',on=['Metadata_broad_sample'])\n# metaTA2=metaTA2.rename(columns={\"Metadata_Treatment\":\"Metadata_pert_name\"})\n# repLevelTA2=pd.merge(repLevelTA, metaTA2, how='left',on=['Metadata_pert_name'])\n repLevelTA2=pd.merge(repLevelTA, metaTA, how='left',on=['Metadata_broad_sample'])\n\n# repLevelTA2=repLevelTA2.rename(columns={\"Gene Allele Name\":\"Allele\"})\n repLevelTA2['Metadata_broad_sample']=repLevelTA2['Metadata_broad_sample'].replace(np.nan, 'DMSO')\n saveDF_to_CSV_GZ_no_timestamp(repLevelTA2,procProf_dir+'/preprocessed_data/TA-ORF-BBBC037-Rohban/CellPainting/replicate_level_cp'+pt+'.csv.gz')\n print(metaTA.shape,repLevelTA.shape,repLevelTA2.shape)\n",
"(323, 4) (1920, 1801) (1920, 1804)\n"
],
[
"# repLevelTA.head()\ncp_features=repLevelTA2.columns[repLevelTA2.columns.str.contains(\"Cells_|Cytoplasm_|Nuclei_\")]\ncols2remove0=[i for i in cp_features if ((repLevelTA2[i].isnull()).sum(axis=0)/repLevelTA2.shape[0])>0.05]\nprint(cols2remove0)\nrepLevelTA2=repLevelTA2.drop(cols2remove0, axis=1);\n# cp_features=list(set(cp_features)-set(cols2remove0))\n# repLevelTA2=repLevelTA2.replace('nan', np.nan)\nrepLevelTA2 = repLevelTA2.interpolate()\ncp_features=repLevelTA2.columns[repLevelTA2.columns.str.contains(\"Cells_|Cytoplasm_|Nuclei_\")]\nrepLevelTA2 = standardize_per_catX(repLevelTA2,'Metadata_Plate',cp_features.tolist());\ndf1=repLevelTA2[~repLevelTA2['Metadata_broad_sample'].isnull()].reset_index(drop=True)\nx_taorf_cp=replicateCorrs(df1,'Metadata_broad_sample',cp_features,1)\n# saveAsNewSheetToExistingFile(filename,x_taorf_cp[2],'cp-taorf')",
"[]\n"
],
[
"# plates",
"_____no_output_____"
]
],
[
[
"# LINCS-Pilot1",
"_____no_output_____"
],
[
"### GE - L1000 - LINCS",
"_____no_output_____"
]
],
[
[
"os.listdir(rawProf_dir+'/l1000_LINCS/2016_04_01_a549_48hr_batch1_L1000/')",
"_____no_output_____"
],
[
"os.listdir(rawProf_dir+'/l1000_LINCS/metadata/')",
"_____no_output_____"
],
[
"data_meta_match_ls=[['level_3','level_3_q2norm_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'],\n ['level_4W','level_4W_zspc_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'],\n ['level_4','level_4_zspc_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'],\n ['level_5_modz','level_5_modz_n9482x978.gctx','col_meta_level_5_REP.A_A549_only_n9482.txt'],\n ['level_5_rank','level_5_rank_n9482x978.gctx','col_meta_level_5_REP.A_A549_only_n9482.txt']]",
"_____no_output_____"
],
[
"lincs_dataDir=rawProf_dir+'/l1000_LINCS/'\nlincs_pert_info = pd.read_csv(lincs_dataDir+\"/metadata/REP.A_A549_pert_info.txt\", sep=\"\\t\", dtype=str)\nlincs_meta_level3 = pd.read_csv(lincs_dataDir+\"/metadata/col_meta_level_3_REP.A_A549_only_n27837.txt\", sep=\"\\t\", dtype=str)\n# lincs_info1 = pd.read_csv(lincs_dataDir+\"/metadata/REP.A_A549_pert_info.txt\", sep=\"\\t\", dtype=str)\nprint(lincs_meta_level3.shape)\nlincs_meta_level3.head()\n# lincs_info2 = pd.read_csv(lincs_dataDir+\"/input/TA.OE015_A549_96H.map\", sep=\"\\t\", dtype=str)\n# lincs_info=pd.concat([lincs_info1, lincs_info2], ignore_index=True)\n# lincs_info.head()",
"(27837, 45)\n"
],
[
"# lincs_meta_level3.groupby('distil_id').size()\nlincs_meta_level3['distil_id'].unique().shape",
"_____no_output_____"
],
[
"# lincs_meta_level3.columns.tolist()\n# lincs_meta_level3.pert_id",
"_____no_output_____"
],
[
"ls /home/ubuntu/workspace_rosetta/workspace/software/2018_04_20_Rosetta/preprocessed_data/LINCS-Pilot1/CellPainting",
"\u001b[0m\u001b[01;34mCellPainting\u001b[0m/ \u001b[01;34mL1000\u001b[0m/\r\n"
],
[
"# procProf_dir+'preprocessed_data/LINCS-Pilot1/'\nprocProf_dir",
"_____no_output_____"
],
[
"for el in data_meta_match_ls:\n lincs_l1k_df=parse(lincs_dataDir+\"/2016_04_01_a549_48hr_batch1_L1000/\"+el[1]).data_df.T.reset_index()\n lincs_meta0 = pd.read_csv(lincs_dataDir+\"/metadata/\"+el[2], sep=\"\\t\", dtype=str)\n lincs_meta=pd.merge(lincs_meta0, lincs_pert_info, how='left',on=['pert_id'])\n lincs_meta=lincs_meta.rename(columns={\"distil_id\":\"cid\"})\n lincs_l1k_df2=pd.merge(lincs_l1k_df, lincs_meta, how='inner',on=['cid'])\n lincs_l1k_df2['pert_id_dose']=lincs_l1k_df2['pert_id']+'_'+lincs_l1k_df2['nearest_dose'].astype(str)\n lincs_l1k_df2['pert_id_dose']=lincs_l1k_df2['pert_id_dose'].replace('DMSO_-666', 'DMSO')\n# lincs_l1k_df2.to_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+el[0]+'.csv.gz',index=False,compression='gzip')\n saveDF_to_CSV_GZ_no_timestamp(lincs_l1k_df2,procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+el[0]+'.csv.gz')",
"_____no_output_____"
],
[
"# lincs_l1k_df2",
"_____no_output_____"
],
[
"lincs_l1k_rep['pert_id_dose'].unique()",
"_____no_output_____"
],
[
"lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[1][0]+'.csv.gz')\n# l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains(\"_at\")]\n# x=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id',l1k_features,1)\n# # saveAsNewSheetToExistingFile(filename,x[2],'l1k-lincs')\n# # lincs_l1k_rep.head()",
"_____no_output_____"
],
[
"lincs_l1k_rep.pert_id.unique().shape",
"_____no_output_____"
],
[
"lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')\nlincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains('dose')]",
"_____no_output_____"
],
[
"lincs_l1k_rep[['pert_dose', 'pert_dose_unit', 'pert_idose', 'nearest_dose']]",
"_____no_output_____"
],
[
"lincs_l1k_rep['nearest_dose'].unique()",
"_____no_output_____"
],
[
"# lincs_l1k_rep.rna_plate.unique()",
"_____no_output_____"
],
[
"lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')\nl1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains(\"_at\")]\nlincs_l1k_rep = standardize_per_catX(lincs_l1k_rep,'det_plate',l1k_features.tolist());\nx=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id',l1k_features,1)",
"_____no_output_____"
],
[
"lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')\nl1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains(\"_at\")]\nlincs_l1k_rep = standardize_per_catX(lincs_l1k_rep,'det_plate',l1k_features.tolist());\nx_l1k_lincs=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id_dose',l1k_features,1)\nsaveAsNewSheetToExistingFile(filename,x_l1k_lincs[2],'l1k-lincs')",
"/home/ubuntu/workspace_rosetta/workspace/software/2018_04_20_Rosetta/utils/replicateCorrs.py:44: RuntimeWarning: Mean of empty slice\n repCorrDf.loc[u,'RepCor']=np.nanmean(repCorr)\n/home/ubuntu/anaconda3/lib/python3.6/site-packages/numpy/lib/nanfunctions.py:1113: RuntimeWarning: Mean of empty slice\n return np.nanmean(a, axis, out=out, keepdims=keepdims)\n"
],
[
"lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')\nl1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains(\"_at\")]\nlincs_l1k_rep = standardize_per_catX(lincs_l1k_rep,'det_plate',l1k_features.tolist());\nx_l1k_lincs=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id_dose',l1k_features,1)\nsaveAsNewSheetToExistingFile(filename,x_l1k_lincs[2],'l1k-lincs')",
"_____no_output_____"
],
[
"saveAsNewSheetToExistingFile(filename,x[2],'l1k-lincs')",
"_____no_output_____"
]
],
[
[
"raw data\n",
"_____no_output_____"
]
],
[
[
"# set(repLevelLuad2)-set(Y1.columns)",
"_____no_output_____"
],
[
"# Y1[['Allele', 'Category', 'Clone ID', 'Gene Symbol']].head()",
"_____no_output_____"
],
[
"# repLevelLuad2[repLevelLuad2['PublicID']=='BRDN0000553807'][['Col','InsertLength','NCBIGeneID','Name','OtherDescriptions','PublicID','Row','Symbol','Transcript','Vector','pert_type','x_mutation_status']].head()",
"_____no_output_____"
]
],
[
[
"#### Check Replicate Correlation",
"_____no_output_____"
],
[
"### CP - LINCS",
"_____no_output_____"
]
],
[
[
"# Ran the following on:\n# https://ec2-54-242-99-61.compute-1.amazonaws.com:5006/notebooks/workspace_nucleolar/2020_07_20_Nucleolar_Calico/1-NucleolarSizeMetrics.ipynb\n# Metadata\ndef recode_dose(x, doses, return_level=False):\n closest_index = np.argmin([np.abs(dose - x) for dose in doses])\n if np.isnan(x):\n return 0\n if return_level:\n return closest_index + 1\n else:\n return doses[closest_index]\n \nprimary_dose_mapping = [0.04, 0.12, 0.37, 1.11, 3.33, 10, 20]\n\n\nmetadata=pd.read_csv(\"/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/raw-profiles/CP_LINCS/metadata/matadata_lincs_2.csv\")\nmetadata['Metadata_mmoles_per_liter']=metadata.mmoles_per_liter.values.round(2)\nmetadata=metadata.rename(columns={\"Assay_Plate_Barcode\": \"Metadata_Plate\",'broad_sample':'Metadata_broad_sample','well_position':'Metadata_Well'})\n\nlincs_submod_root_dir=\"/home/ubuntu/datasetsbucket/lincs-cell-painting/\"\n\nprofileType=['_augmented','_normalized','_normalized_dmso',\\\n '_normalized_feature_select','_normalized_feature_select_dmso']\n# profileType=['_normalized']\n# plates=metadata.Assay_Plate_Barcode.unique().tolist()\nplates=metadata.Metadata_Plate.unique().tolist()\nfor pt in profileType[4:5]:\n repLevelLINCS0=[]\n \n for p in plates:\n profile_add=lincs_submod_root_dir+\"/profiles/2016_04_01_a549_48hr_batch1/\"+p+\"/\"+p+pt+\".csv.gz\"\n if os.path.exists(profile_add):\n repLevelLINCS0.append(pd.read_csv(profile_add))\n \n repLevelLINCS = pd.concat(repLevelLINCS0)\n meta_lincs1=metadata.rename(columns={\"broad_sample\": \"Metadata_broad_sample\"})\n # metaCDRP1=metaCDRP1.rename(columns={\"PlateName\":\"Metadata_Plate_Map_Name\",'Well':'Metadata_Well'})\n # metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower()\n \n repLevelLINCS2=pd.merge(repLevelLINCS,meta_lincs1,how='left', on=[\"Metadata_broad_sample\",\"Metadata_Well\",\"Metadata_Plate\",'Metadata_mmoles_per_liter'])\n \n\n repLevelLINCS2 = repLevelLINCS2.assign(Metadata_dose_recode=(repLevelLINCS2.Metadata_mmoles_per_liter.apply(\n lambda x: recode_dose(x, primary_dose_mapping, return_level=False))))\n repLevelLINCS2['Metadata_pert_id_dose']=repLevelLINCS2['Metadata_pert_id']+'_'+repLevelLINCS2['Metadata_dose_recode'].astype(str)\n# repLevelLINCS2['Metadata_Sample_Dose']=repLevelLINCS2['Metadata_broad_sample']+'_'+repLevelLINCS2['Metadata_dose_recode'].astype(str)\n repLevelLINCS2['Metadata_pert_id_dose']=repLevelLINCS2['Metadata_pert_id_dose'].replace(np.nan, 'DMSO')\n# saveDF_to_CSV_GZ_no_timestamp(repLevelLINCS2,procProf_dir+'/preprocessed_data/LINCS-Pilot1/CellPainting/replicate_level_cp'+pt+'.csv.gz')\n print(meta_lincs1.shape,repLevelLINCS.shape,repLevelLINCS2.shape)",
"(53760, 17) (52223, 1243) (52223, 1257)\n"
],
[
"# (8120, 15) (52223, 1810) (688699, 1825)\n# repLevelLINCS",
"_____no_output_____"
],
[
"# pd.merge(repLevelLINCS,meta_lincs1,how='left', on=[\"Metadata_broad_sample\"]).shape\nrepLevelLINCS.shape,meta_lincs1.shape",
"_____no_output_____"
],
[
"(8120, 15) (52223, 1238) (52223, 1253)",
"_____no_output_____"
],
[
"csv_l1k_lincs=pd.read_csv('./preprocessed_data/LINCS-Pilot1/L1000/replicate_level_l1k'+'.csv.gz')\ncsv_pddf=pd.read_csv('./preprocessed_data/LINCS-Pilot1/CellPainting/replicate_level_cp'+pt[0]+'.csv.gz')",
"_____no_output_____"
],
[
"csv_l1k_lincs.head()",
"_____no_output_____"
],
[
"csv_l1k_lincs.pert_id_dose.unique()",
"_____no_output_____"
],
[
"csv_pddf.Metadata_pert_id_dose.unique()",
"_____no_output_____"
]
],
[
[
"#### Read saved data",
"_____no_output_____"
]
],
[
[
"repLevelLINCS2.groupby(['Metadata_pert_id']).size()",
"_____no_output_____"
],
[
"repLevelLINCS2.groupby(['Metadata_pert_id_dose']).size().describe()",
"_____no_output_____"
],
[
"repLevelLINCS2.Metadata_Plate.unique().shape",
"_____no_output_____"
],
[
"repLevelLINCS2['Metadata_pert_id_dose'].unique().shape",
"_____no_output_____"
],
[
"# csv_pddf['Metadata_mmoles_per_liter'].round(0).unique()\n# np.sort(csv_pddf['Metadata_mmoles_per_liter'].unique())",
"_____no_output_____"
],
[
"csv_pddf.groupby(['Metadata_dose_recode']).size()#.median()",
"_____no_output_____"
],
[
"# repLevelLincs2=csv_pddf.copy()\nimport gc\ncp_features=repLevelLincs2.columns[repLevelLincs2.columns.str.contains(\"Cells_|Cytoplasm_|Nuclei_\")]\ncols2remove0=[i for i in cp_features if ((repLevelLincs2[i].isnull()).sum(axis=0)/repLevelLincs2.shape[0])>0.05]\nprint(cols2remove0)\nrepLevelLincs3=repLevelLincs2.drop(cols2remove0, axis=1);\nprint('here0')\n# cp_features=list(set(cp_features)-set(cols2remove0))\n# repLevelTA2=repLevelTA2.replace('nan', np.nan)\ndel repLevelLincs2\ngc.collect()\nprint('here0')\ncp_features=repLevelLincs3.columns[repLevelLincs3.columns.str.contains(\"Cells_|Cytoplasm_|Nuclei_\")]\nrepLevelLincs3[cp_features] = repLevelLincs3[cp_features].interpolate()\nprint('here1')\nrepLevelLincs3 = standardize_per_catX(repLevelLincs3,'Metadata_Plate',cp_features.tolist());\nprint('here1')\n\n# df0=repLevelCDRP3[repLevelCDRP3['Metadata_broad_sample']!='DMSO'].reset_index(drop=True)\n# repSizeDF=repLevelLincs3.groupby(['Metadata_broad_sample']).size().reset_index()\nrepSizeDF=repLevelLincs3.groupby(['Metadata_pert_id_dose']).size().reset_index()\n\nhighRepComp=repSizeDF[repSizeDF[0]>1].Metadata_pert_id_dose.tolist()\nhighRepComp.remove('DMSO')\n# df0=repLevelLincs3[(repLevelLincs3['Metadata_broad_sample'].isin(highRepComp)) &\\\n# (repLevelLincs3['Metadata_dose_recode']==1.11)]\ndf0=repLevelLincs3[(repLevelLincs3['Metadata_pert_id_dose'].isin(highRepComp))]\nx_lincs_cp=replicateCorrs(df0,'Metadata_pert_id_dose',cp_features,1)\n# saveAsNewSheetToExistingFile(filename,x_lincs_cp[2],'cp-lincs')",
"['Cells_RadialDistribution_FracAtD_DNA_1of4', 'Cells_RadialDistribution_FracAtD_DNA_2of4', 'Cells_RadialDistribution_FracAtD_DNA_3of4', 'Cells_RadialDistribution_FracAtD_DNA_4of4', 'Cells_RadialDistribution_MeanFrac_DNA_1of4', 'Cells_RadialDistribution_MeanFrac_DNA_2of4', 'Cells_RadialDistribution_MeanFrac_DNA_3of4', 'Cells_RadialDistribution_MeanFrac_DNA_4of4', 'Cells_RadialDistribution_RadialCV_DNA_1of4', 'Cells_RadialDistribution_RadialCV_DNA_2of4', 'Cells_RadialDistribution_RadialCV_DNA_3of4', 'Cells_RadialDistribution_RadialCV_DNA_4of4', 'Cytoplasm_RadialDistribution_FracAtD_DNA_1of4', 'Cytoplasm_RadialDistribution_FracAtD_DNA_2of4', 'Cytoplasm_RadialDistribution_FracAtD_DNA_3of4', 'Cytoplasm_RadialDistribution_FracAtD_DNA_4of4', 'Cytoplasm_RadialDistribution_MeanFrac_DNA_1of4', 'Cytoplasm_RadialDistribution_MeanFrac_DNA_2of4', 'Cytoplasm_RadialDistribution_MeanFrac_DNA_3of4', 'Cytoplasm_RadialDistribution_MeanFrac_DNA_4of4', 'Cytoplasm_RadialDistribution_RadialCV_DNA_1of4', 'Cytoplasm_RadialDistribution_RadialCV_DNA_2of4', 'Cytoplasm_RadialDistribution_RadialCV_DNA_3of4', 'Cytoplasm_RadialDistribution_RadialCV_DNA_4of4', 'Nuclei_RadialDistribution_FracAtD_DNA_1of4', 'Nuclei_RadialDistribution_FracAtD_DNA_2of4', 'Nuclei_RadialDistribution_FracAtD_DNA_3of4', 'Nuclei_RadialDistribution_FracAtD_DNA_4of4', 'Nuclei_RadialDistribution_MeanFrac_DNA_1of4', 'Nuclei_RadialDistribution_MeanFrac_DNA_2of4', 'Nuclei_RadialDistribution_MeanFrac_DNA_3of4', 'Nuclei_RadialDistribution_MeanFrac_DNA_4of4', 'Nuclei_RadialDistribution_RadialCV_DNA_1of4', 'Nuclei_RadialDistribution_RadialCV_DNA_2of4', 'Nuclei_RadialDistribution_RadialCV_DNA_3of4', 'Nuclei_RadialDistribution_RadialCV_DNA_4of4']\nhere0\nhere0\nhere1\nhere1\nhere2\n"
],
[
"repSizeDF",
"_____no_output_____"
],
[
"# repLevelLincs2=csv_pddf.copy()\n\n# cp_features=repLevelLincs2.columns[repLevelLincs2.columns.str.contains(\"Cells_|Cytoplasm_|Nuclei_\")]\n# cols2remove0=[i for i in cp_features if ((repLevelLincs2[i].isnull()).sum(axis=0)/repLevelLincs2.shape[0])>0.05]\n# print(cols2remove0)\n# repLevelLincs3=repLevelLincs2.drop(cols2remove0, axis=1);\n# # cp_features=list(set(cp_features)-set(cols2remove0))\n# # repLevelTA2=repLevelTA2.replace('nan', np.nan)\n# repLevelLincs3 = repLevelLincs3.interpolate()\n\n# repLevelLincs3 = standardize_per_catX(repLevelLincs3,'Metadata_Plate',cp_features.tolist());\n\n# cp_features=repLevelLincs3.columns[repLevelLincs3.columns.str.contains(\"Cells_|Cytoplasm_|Nuclei_\")]\n# # df0=repLevelCDRP3[repLevelCDRP3['Metadata_broad_sample']!='DMSO'].reset_index(drop=True)\n# # repSizeDF=repLevelLincs3.groupby(['Metadata_broad_sample']).size().reset_index()\nrepSizeDF=repLevelLincs3.groupby(['Metadata_pert_id']).size().reset_index()\n\nhighRepComp=repSizeDF[repSizeDF[0]>1].Metadata_pert_id.tolist()\n# highRepComp.remove('DMSO')\n# df0=repLevelLincs3[(repLevelLincs3['Metadata_broad_sample'].isin(highRepComp)) &\\\n# (repLevelLincs3['Metadata_dose_recode']==1.11)]\ndf0=repLevelLincs3[(repLevelLincs3['Metadata_pert_id'].isin(highRepComp))]\nx_lincs_cp=replicateCorrs(df0,'Metadata_pert_id',cp_features,1)\n# saveAsNewSheetToExistingFile(filename,x_lincs_cp[2],'cp-lincs')",
"here2\n"
],
[
"# x=replicateCorrs(df0,'Metadata_broad_sample',cp_features,1)\n# highRepComp[-1]\n",
"_____no_output_____"
],
[
"saveAsNewSheetToExistingFile(filename,x[2],'cp-lincs')",
"_____no_output_____"
],
[
"# repLevelLincs3.Metadata_Plate\nrepLevelLincs3.head()",
"_____no_output_____"
],
[
"# csv_pddf[(csv_pddf['Metadata_dose_recode']==0.04) & (csv_pddf['Metadata_pert_id']==\"BRD-A00147595\")][['Metadata_Plate','Metadata_Well']].drop_duplicates()",
"_____no_output_____"
],
[
"# csv_pddf[(csv_pddf['Metadata_dose_recode']==0.04) & (csv_pddf['Metadata_pert_id']==\"BRD-A00147595\") &\n# (csv_pddf['Metadata_Plate']=='SQ00015196') & (csv_pddf['Metadata_Well']==\"B12\")][csv_pddf.columns[1820:]].drop_duplicates()",
"_____no_output_____"
],
[
"# def standardize_per_catX(df,column_name):\ncolumn_name='Metadata_Plate'\nrepLevelLincs_scaled_perPlate=repLevelLincs3.copy()\nrepLevelLincs_scaled_perPlate[cp_features.tolist()]=repLevelLincs3[cp_features.tolist()+[column_name]].groupby(column_name).transform(lambda x: (x - x.mean()) / x.std()).values",
"_____no_output_____"
],
[
"# def standardize_per_catX(df,column_name):\n# # column_name='Metadata_Plate'\n# cp_features=df.columns[df.columns.str.contains(\"Cells_|Cytoplasm_|Nuclei_\")]\n# df_scaled_perPlate=df.copy()\n# df_scaled_perPlate[cp_features.tolist()]=\\\n# df[cp_features.tolist()+[column_name]].groupby(column_name)\\\n# .transform(lambda x: (x - x.mean()) / x.std()).values\n# return df_scaled_perPlate",
"_____no_output_____"
],
[
"df0=repLevelLincs_scaled_perPlate[(repLevelLincs_scaled_perPlate['Metadata_Sample_Dose'].isin(highRepComp))]\nx=replicateCorrs(df0,'Metadata_broad_sample',cp_features,1)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
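As the bracketing in the record above shows, `cells` holds runs of consecutive cells, each cell a `[source, output]` pair (with `_____no_output_____` as the empty-output sentinel), and `cell_type_groups` gives the per-cell kinds for each run. A minimal sketch of rebuilding an `.ipynb` from one record; the nbformat-v4 field names are standard, but routing all captured output to a single stdout stream is a simplification:

```python
import json

NO_OUTPUT = "_____no_output_____"  # sentinel used for empty outputs in the rows above

def record_to_ipynb(rec, path):
    """Rebuild a minimal nbformat-v4 notebook from one dataset record."""
    nb_cells = []
    # `cells` is a list of runs of consecutive cells; each cell is a
    # [source, output] pair, and `cell_type_groups` gives per-cell kinds.
    for cell_run, kind_run in zip(rec["cells"], rec["cell_type_groups"]):
        for (source, output), kind in zip(cell_run, kind_run):
            if kind == "markdown":
                nb_cells.append({"cell_type": "markdown", "metadata": {}, "source": source})
            else:
                outputs = [] if output == NO_OUTPUT else [
                    {"output_type": "stream", "name": "stdout", "text": output}
                ]
                nb_cells.append({"cell_type": "code", "metadata": {},
                                 "execution_count": None, "source": source,
                                 "outputs": outputs})
    nb = {"nbformat": 4, "nbformat_minor": 5, "metadata": {}, "cells": nb_cells}
    with open(path, "w", encoding="utf-8") as fh:
        json.dump(nb, fh, indent=1)

# e.g. record_to_ipynb(records[0], "0-preprocess_datasets.ipynb")
```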
d0407e18e3245cace734256562da38de76f6b234 | 4,371 | ipynb | Jupyter Notebook | docs/_build/.jupyter_cache/executed/a16bf6dd769bf7c2933e79b3ba4efcae/base.ipynb | cancermqiao/CancerMBook | bd26c0e3e1f76f66b75aacf75b3cb8602715e803 | [
"Apache-2.0"
] | null | null | null | docs/_build/.jupyter_cache/executed/a16bf6dd769bf7c2933e79b3ba4efcae/base.ipynb | cancermqiao/CancerMBook | bd26c0e3e1f76f66b75aacf75b3cb8602715e803 | [
"Apache-2.0"
] | null | null | null | docs/_build/.jupyter_cache/executed/a16bf6dd769bf7c2933e79b3ba4efcae/base.ipynb | cancermqiao/CancerMBook | bd26c0e3e1f76f66b75aacf75b3cb8602715e803 | [
"Apache-2.0"
] | null | null | null | 23.755435 | 102 | 0.466712 | [
[
[
"import torch\n\nx = torch.ones(5) # input tensor\ny = torch.zeros(3) # expected output\nw = torch.randn(5, 3, requires_grad=True)\nb = torch.randn(3, requires_grad=True)\nz = torch.matmul(x, w)+b\nloss = torch.nn.functional.binary_cross_entropy_with_logits(z, y)",
"_____no_output_____"
],
[
"print('Gradient function for z =', z.grad_fn)\nprint('Gradient function for loss =', loss.grad_fn)",
"Gradient function for z = <AddBackward0 object at 0x7ff7fd77e940>\nGradient function for loss = <BinaryCrossEntropyWithLogitsBackward object at 0x7ff7fd77efa0>\n"
],
[
"loss.backward()\nprint(w.grad)\nprint(b.grad)",
"tensor([[0.0159, 0.2858, 0.0136],\n [0.0159, 0.2858, 0.0136],\n [0.0159, 0.2858, 0.0136],\n [0.0159, 0.2858, 0.0136],\n [0.0159, 0.2858, 0.0136]])\ntensor([0.0159, 0.2858, 0.0136])\n"
],
[
"z = torch.matmul(x, w)+b\nprint(z.requires_grad)\n\nwith torch.no_grad():\n z = torch.matmul(x, w)+b\nprint(z.requires_grad)",
"True\nFalse\n"
],
[
"z = torch.matmul(x, w)+b\nz_det = z.detach()\nprint(z_det.requires_grad)",
"False\n"
],
[
"inp = torch.eye(5, requires_grad=True)\nout = (inp+1).pow(2)\nout.backward(torch.ones_like(inp), retain_graph=True)\nprint(\"First call\\n\", inp.grad)\nout.backward(torch.ones_like(inp), retain_graph=True)\nprint(\"\\nSecond call\\n\", inp.grad)\ninp.grad.zero_()\nout.backward(torch.ones_like(inp), retain_graph=True)\nprint(\"\\nCall after zeroing gradients\\n\", inp.grad)",
"First call\n tensor([[4., 2., 2., 2., 2.],\n [2., 4., 2., 2., 2.],\n [2., 2., 4., 2., 2.],\n [2., 2., 2., 4., 2.],\n [2., 2., 2., 2., 4.]])\n\nSecond call\n tensor([[8., 4., 4., 4., 4.],\n [4., 8., 4., 4., 4.],\n [4., 4., 8., 4., 4.],\n [4., 4., 4., 8., 4.],\n [4., 4., 4., 4., 8.]])\n\nCall after zeroing gradients\n tensor([[4., 2., 2., 2., 2.],\n [2., 4., 2., 2., 2.],\n [2., 2., 4., 2., 2.],\n [2., 2., 2., 4., 2.],\n [2., 2., 2., 2., 4.]])\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
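The record above is a short autograd walkthrough: it prints `grad_fn` objects, computes gradients with `loss.backward()`, disables tracking with `torch.no_grad()` and `detach()`, and shows that repeated `backward()` calls accumulate into `.grad` until `grad.zero_()` is called. That accumulation is exactly why training loops zero gradients on every step; a minimal illustrative sketch (the model and data here are invented for the example, not taken from the record):

```python
import torch

model = torch.nn.Linear(5, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.ones(8, 5), torch.zeros(8, 3)

for step in range(3):
    opt.zero_grad()   # clear accumulated grads (cf. inp.grad.zero_() in the record)
    loss = torch.nn.functional.binary_cross_entropy_with_logits(model(x), y)
    loss.backward()   # populates .grad on every parameter of the model
    opt.step()
    print(step, loss.item())
```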
d0407e434f2461756a42c9425d08d0e7ea48c1dc | 85,540 | ipynb | Jupyter Notebook | thecounted_visual.ipynb | KikiCS/thecounted | a40613ee8eb015967dd037a5692113d3c0290208 | [
"MIT"
] | null | null | null | thecounted_visual.ipynb | KikiCS/thecounted | a40613ee8eb015967dd037a5692113d3c0290208 | [
"MIT"
] | null | null | null | thecounted_visual.ipynb | KikiCS/thecounted | a40613ee8eb015967dd037a5692113d3c0290208 | [
"MIT"
] | null | null | null | 49.387991 | 25,163 | 0.489315 | [
[
[
"# Introduction\nVisualization of statistics that support the claims of Black Lives Matter movement, data from 2015 and 2016.\n\nData source: https://www.theguardian.com/us-news/ng-interactive/2015/jun/01/about-the-counted\n\nIdea from BuzzFeed article: https://www.buzzfeednews.com/article/peteraldhous/race-and-police-shootings",
"_____no_output_____"
],
[
"### Imports\nLibraries and data",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\n\nfrom bokeh.io import output_notebook, show, export_png\nfrom bokeh.plotting import figure, output_file\nfrom bokeh.models import HoverTool, ColumnDataSource,NumeralTickFormatter\nfrom bokeh.palettes import Spectral4, PuBu4\nfrom bokeh.transform import dodge\nfrom bokeh.layouts import gridplot",
"_____no_output_____"
],
[
"selectcolumns=['raceethnicity','armed']\ndf1 = pd.read_csv('the-counted-2015.csv',usecols=selectcolumns)\ndf1.head()",
"_____no_output_____"
],
[
"df2 = pd.read_csv('the-counted-2016.csv',usecols=selectcolumns)\ndf2.head()",
"_____no_output_____"
],
[
"df=pd.concat([df1,df2])\ndf.shape # df contains \"The Counted\" data from both 2015 and 2016",
"_____no_output_____"
]
],
[
[
"Source for ethnicities percentage in 2015: https://www.statista.com/statistics/270272/percentage-of-us-population-by-ethnicities/\n\nSource for population total: https://en.wikipedia.org/wiki/Demography_of_the_United_States#Vital_statistics_from_1935",
"_____no_output_____"
]
],
[
[
"ethndic={\"White\": 61.72,\n \"Latino\": 17.66,\n \"Black\": 12.38,\n \"Others\": (5.28+2.05+0.73+0.17)\n }\n#print(type(ethndic))\nprint(ethndic)\npopulation=(321442000 + 323100000)/2 # average between 2015 and 2016 data\n# estimates by ethnicity\nethnestim={\"White\": round((population*ethndic[\"White\"]/100)),\n \"Latino\": round((population*ethndic[\"Latino\"]/100)),\n \"Black\": round((population*ethndic[\"Black\"]/100)),\n \"Others\": round((population*ethndic[\"Others\"]/100))\n }\nprint(ethnestim)",
"{'White': 61.72, 'Latino': 17.66, 'Black': 12.38, 'Others': 8.23}\n{'White': 198905661, 'Latino': 56913059, 'Black': 39897150, 'Others': 26522903}\n"
]
],
[
[
"# Analysis",
"_____no_output_____"
]
],
[
[
"df.groupby(by='raceethnicity').describe()",
"_____no_output_____"
]
],
[
[
"Check if there are any missing values:",
"_____no_output_____"
]
],
[
[
"df.isna().sum()",
"_____no_output_____"
],
[
"df = df[(df.raceethnicity != 'Arab-American') & (df.raceethnicity != 'Unknown')]\n# no data available about the percentage of this ethnicity over population, so it is discarded\ndf.replace(to_replace=['Asian/Pacific Islander','Native American','Other'],value='Others',inplace=True)\n# those categories all fall under Others in the population percentages found online\ndf.replace(to_replace=['Hispanic/Latino'],value='Latino',inplace=True)\n# this value is renamed for consistency with population ethnicity data",
"_____no_output_____"
],
[
"df.groupby(by='raceethnicity').describe()",
"_____no_output_____"
],
[
"def givepercent (dtf,ethnicity):\n # Function to compute percentages by ethnicity\n return round(((dtf.raceethnicity == ethnicity).sum()/(dtf.shape[0])*100),2)",
"_____no_output_____"
],
[
"killed={\"White\":(df.raceethnicity == 'White').sum(),\n \"Latino\": (df.raceethnicity == 'Latino').sum(),\n \"Black\": (df.raceethnicity == 'Black').sum(),\n \"Others\": (df.raceethnicity == 'Others').sum()\n }\nprint(killed)\nkilledperc={\"White\": givepercent(df,'White'), \n \"Latino\": givepercent(df,'Latino'),\n \"Black\": givepercent(df,'Black'),\n \"Others\": givepercent(df,'Others')\n }\nprint(killedperc)",
"{'White': 1158, 'Latino': 378, 'Black': 573, 'Others': 83}\n{'White': 52.83, 'Latino': 17.24, 'Black': 26.14, 'Others': 3.79}\n"
],
[
"df.groupby(by='armed').describe()",
"_____no_output_____"
]
],
[
[
"The analysis is limited to the value *No*, but could consider *Disputed* and *Non-lethal firearm*, which constitute other 108 data points.",
"_____no_output_____"
]
],
[
[
"dfunarmed = df[(df.armed == 'No')]\ndfunarmed.groupby(by='raceethnicity').describe()",
"_____no_output_____"
],
[
"unarmed={\"White\":(dfunarmed.raceethnicity == 'White').sum(),\n \"Latino\": (dfunarmed.raceethnicity == 'Latino').sum(),\n \"Black\": (dfunarmed.raceethnicity == 'Black').sum(),\n \"Others\": (dfunarmed.raceethnicity == 'Others').sum()\n }\nprint(unarmed)\nunarmedperc={\"White\":givepercent(dfunarmed,'White'),\n \"Latino\": givepercent(dfunarmed,'Latino'),\n \"Black\": givepercent(dfunarmed,'Black'),\n \"Others\": givepercent(dfunarmed,'Others')\n }\nprint(unarmedperc)",
"{'White': 201, 'Latino': 67, 'Black': 121, 'Others': 11}\n{'White': 50.25, 'Latino': 16.75, 'Black': 30.25, 'Others': 2.75}\n"
],
[
"def percent1ethn (portion,population,decimals):\n # Function to compute the percentage of the portion killed over a given population\n return round((portion/population*100),decimals)",
"_____no_output_____"
],
[
"killed1ethn={\"White\": percent1ethn(killed['White'],ethnestim['White'],6), \n \"Latino\": percent1ethn(killed['Latino'],ethnestim['Latino'],6), \n \"Black\": percent1ethn(killed['Black'],ethnestim['Black'],6), \n \"Others\": percent1ethn(killed['Others'],ethnestim['Others'],6)\n }\nprint(killed1ethn)\nunarmedoverkilled={\"White\": percent1ethn(unarmed['White'],killed['White'],2), \n \"Latino\": percent1ethn(unarmed['Latino'],killed['Latino'],2), \n \"Black\": percent1ethn(unarmed['Black'],killed['Black'],2), \n \"Others\": percent1ethn(unarmed['Others'],killed['Others'],2)\n }\nprint(unarmedoverkilled)",
"{'White': 0.000582, 'Latino': 0.000664, 'Black': 0.001436, 'Others': 0.000313}\n{'White': 17.36, 'Latino': 17.72, 'Black': 21.12, 'Others': 13.25}\n"
]
],
[
[
"**Hypothesis testing**",
"_____no_output_____"
]
],
[
[
"whitesample=ethnestim['White']\nblacksample=ethnestim['Black']\nwkilled=killed['White']\nbkilled=killed['Black']\npw=wkilled/whitesample\npb=bkilled/blacksample\n\n#happened by chance?\n#Hnull pw-pb = 0 (no difference between white and black)\n#Halt pw-pb != 0 (the two proportions are different)\n\n#Significance level = 5%\n\n# Test statistic: Z-statistic\ndifference=pb-pw\nprint(difference)\nstandarderror=np.sqrt(((pw*(1-pw))/whitesample)+((pb*(1-pb))/blacksample))\nzstat=(difference)/standarderror\nprint(zstat)\n\n# Z-score for significance level\nzscore=1.96\nif zstat > zscore:\n print(\"The null hypothesis is rejected.\")\nelse:\n print(\"The null hypothesis is not rejected.\")",
"8.540072690469898e-06\n13.688442014042046\nThe null hypothesis is rejected.\n"
],
[
"whitesample=killed['White']\nblacksample=killed['Black']\nwunarmed=unarmed['White']\nbunarmed=unarmed['Black']\npw=wunarmed/whitesample\npb=bunarmed/blacksample\n\n#happened by chance?\n#Hnull pw-pb = 0 (no difference between white and black)\n#Halt pw-pb != 0 (the two proportions are different)\n\n#Significance level = 5%\n\n# Test statistic: Z-statistic\ndifference=pb-pw\nprint(difference)\nstandarderror=np.sqrt(((pw*(1-pw))/whitesample)+((pb*(1-pb))/blacksample))\nzstat=(difference)/standarderror\nprint(zstat)\n\n# Z-score for significance level\nzscore=1.96\nif zstat > zscore:\n print(\"The null hypothesis is rejected.\")\nelse:\n print(\"The null hypothesis is not rejected.\")",
"0.037594154934035034\n1.8463487953468565\nThe null hypothesis is not rejected.\n"
],
[
"ethnicities = list(ethndic.keys())\npopulethn = list(ethndic.values())\nkilled = list(killedperc.values())\nunarmed = list(unarmedperc.values())\n\ndata1 = {'ethnicities' : ethnicities,\n 'populethn' : populethn,\n 'killed' : killed,\n 'unarmed' : unarmed}\n\nsource = ColumnDataSource(data=data1)",
"_____no_output_____"
]
],
[
[
"# Results",
"_____no_output_____"
]
],
[
[
"TOOLS = \"pan,wheel_zoom,box_zoom,reset,save,box_select\"\npalette=Spectral4\ntitlefontsize='16pt'\n\ncplot = figure(title=\"The Counted (data from 2015 and 2016)\", tools=TOOLS,\n x_range=ethnicities, y_range=(0, 75))#, sizing_mode='scale_both')\n\ncplot.vbar(x=dodge('ethnicities', 0.25, range=cplot.x_range),top='populethn', source=source,\n width=0.4,line_width=0 ,line_color=None, legend='Ethnicity % over population',\n color=str(Spectral4[0]), name='populethn')\n\ncplot.vbar(x=dodge('ethnicities', -0.25, range=cplot.x_range), top='killed', source=source,\n width=0.4, line_width=0 ,line_color=None, legend=\"Killed % over total killed\",\n color=str(Spectral4[2]), name=\"killed\")\n\ncplot.vbar(x=dodge('ethnicities', 0.0, range=cplot.x_range), top='unarmed', source=source,\n width=0.4, line_width=0 ,line_color=None, legend=\"Unarmed % over total unarmed\",\n color=str(Spectral4[1]), name=\"unarmed\")\n\ncplot.add_tools(HoverTool(names=[\"unarmed\"],\n tooltips=[\n ( 'Population', '@populethn{(00.00)}%' ),\n ( 'Killed', '@killed{(00.00)}%' ),\n ( 'Unarmed', '@unarmed{(00.00)}%' )], # Fields beginning with @ display values from ColumnDataSource. \n mode='vline'))\n\n#cplot.x_range.range_padding = 0.1\ncplot.xgrid.grid_line_color = None\n\ncplot.legend.location = \"top_right\"\ncplot.xaxis.axis_label = \"Ethnicity\"\ncplot.xaxis.axis_label_text_font_size='18pt'\n\ncplot.xaxis.minor_tick_line_color = None\ncplot.title.text_font_size=titlefontsize\ncplot.legend.label_text_font_size='16pt'\ncplot.xaxis.major_label_text_font_size='16pt'\ncplot.yaxis.major_label_text_font_size='16pt'",
"_____no_output_____"
],
[
"perckillethn = list(killed1ethn.values())\n\ndata2 = {'ethnicities' : ethnicities,\n 'perckillethn' : perckillethn}\n\nsource = ColumnDataSource(data=dict(data2, color=PuBu4))",
"_____no_output_____"
],
[
"plot2 = figure(title=\"Killed % over population with same ethnicity\",\n tools=TOOLS, x_range=ethnicities, y_range=(0, max(perckillethn)*1.2))#, sizing_mode='scale_both')\n\nplot2.vbar(x=dodge('ethnicities', 0.0, range=cplot.x_range), top='perckillethn', source=source,\n width=0.4, line_width=0 ,line_color=None, legend=\"\",\n color='color', name=\"perckillethn\")\n\nplot2.add_tools(HoverTool(names=[\"perckillethn\"],\n tooltips=[\n ( 'Killed', '@perckillethn{(0.00000)}%' )],\n #( 'Unarmed', '@unarmed{(00.00)}%' )], # Fields beginning with @ display values from ColumnDataSource. \n mode='vline'))\n\n#plot2.x_range.range_padding = 0.1\nplot2.xgrid.grid_line_color = None\n\nplot2.xaxis.axis_label = \"Ethnicity\"\nplot2.xaxis.axis_label_text_font_size='18pt'\n\nplot2.xaxis.minor_tick_line_color = None\nplot2.title.text_font_size=titlefontsize\nplot2.xaxis.major_label_text_font_size='16pt'\nplot2.yaxis.major_label_text_font_size='16pt'\nplot2.yaxis[0].formatter = NumeralTickFormatter(format=\"0.0000\")",
"_____no_output_____"
],
[
"percunarmethn = list(unarmedoverkilled.values())\n\ndata3 = {'ethnicities' : ethnicities,\n 'percunarmethn' : percunarmethn}\n\nsource = ColumnDataSource(data=dict(data3, color=PuBu4))",
"_____no_output_____"
],
[
"plot3 = figure(title=\"Unarmed % over killed with same ethnicity\",\n tools=TOOLS, x_range=ethnicities, y_range=(0, max(percunarmethn)*1.2))#, sizing_mode='scale_both')\n\nplot3.vbar(x=dodge('ethnicities', 0.0, range=cplot.x_range), top='percunarmethn', source=source,\n width=0.4, line_width=0 ,line_color=None, legend=\"\",\n color='color', name=\"percunarmethn\")\n\nplot3.add_tools(HoverTool(names=[\"percunarmethn\"],\n tooltips=[\n ( 'Unarmed', '@percunarmethn{(00.00)}%' )],\n #( 'Unarmed', '@unarmed{(00.00)}%' )], # Fields beginning with @ display values from ColumnDataSource. \n mode='vline'))\n\n#plot3.x_range.range_padding = 0.1\nplot3.xgrid.grid_line_color = None\n\nplot3.xaxis.axis_label = \"Ethnicity\"\nplot3.xaxis.axis_label_text_font_size='18pt'\n\nplot3.xaxis.minor_tick_line_color = None\nplot3.title.text_font_size=titlefontsize\nplot3.xaxis.major_label_text_font_size='16pt'\nplot3.yaxis.major_label_text_font_size='16pt'",
"_____no_output_____"
],
[
"output_file(\"thecounted.html\", title=\"The Counted Visualization\")\n\noutput_notebook()\ngplot=gridplot([cplot, plot2, plot3], sizing_mode='stretch_both', ncols=3)#, plot_width=800, plot_height=600)\nshow(gplot) # open a browser\nexport_png(gplot, filename=\"bokeh_thecounted.png\")",
"_____no_output_____"
]
],
[
[
"Hover on the bar charts to read the percentage values.",
"_____no_output_____"
],
[
"# Conclusions\nThe plot shows that if the people shot by police were proportional to the population distribution, the orange and green bar charts should have been almost the same height as the blue ones. Although this is true for Latino ethnicity, it is not for the Black one: this is the second most represented among killed and among those killed who were unarmed.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d040804c6f24c7b30489a37b5572c93cb8a1344f | 2,547 | ipynb | Jupyter Notebook | classes/01 preprocessing/01_preprocessing_03.ipynb | mariolpantunes/ml101 | 71072bb4b0d2e9d7894c2de699d1ada569752ec3 | [
"MIT"
] | 2 | 2022-01-20T17:16:21.000Z | 2022-01-28T19:26:54.000Z | classes/01 preprocessing/01_preprocessing_03.ipynb | mariolpantunes/ml101 | 71072bb4b0d2e9d7894c2de699d1ada569752ec3 | [
"MIT"
] | null | null | null | classes/01 preprocessing/01_preprocessing_03.ipynb | mariolpantunes/ml101 | 71072bb4b0d2e9d7894c2de699d1ada569752ec3 | [
"MIT"
] | 1 | 2022-03-02T09:23:15.000Z | 2022-03-02T09:23:15.000Z | 2,547 | 2,547 | 0.647428 | [
[
[
"# ML 101\n\n## RDP and line simplification\n\nRDP is a line simplifaction algorithm that can be used to reduce the number of points.",
"_____no_output_____"
]
],
[
[
"!pip install git+git://github.com/mariolpantunes/knee@main#egg=knee",
"_____no_output_____"
],
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = [16, 9]\nimport numpy as np\nimport knee.linear_fit as lf\nimport knee.rdp as rdp\n\n# generate some data\nx = np.arange(0, 100, 0.1)\ny = np.sin(x)\n\nprint(len(x))\n\nplt.plot(x,y)\nplt.show()",
"_____no_output_____"
],
[
"# generate the points\npoints = np.array([x,y]).T\nreduced, removed = rdp.rdp(points, 0.01, cost=lf.Linear_Metrics.rpd)\nreduced_points = points[reduced]\n\nprint(f'Lenght = {len(reduced_points)}')\n\nplt.plot(x,y)\nx = reduced_points[:,0]\ny = reduced_points[:,1]\nplt.plot(x,y, 'ro')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Knee detection\n\nElbow/knee detection is a method to select te ideal cut-off point in a performance curve",
"_____no_output_____"
]
],
[
[
"# generate some data\nx = np.arange(0.1, 10, 0.1)\ny = 1/(x**2)\n\nplt.plot(x,y)\nplt.show()",
"_____no_output_____"
],
[
"import knee.lmethod as lmethod\n\n# generate the points\npoints = np.array([x,y]).T\n\nidx = lmethod.knee(points)\n\nplt.plot(x,y)\nplt.plot(x[idx], y[idx], 'ro')\nplt.show()",
"_____no_output_____"
],
[
"import knee.dfdt as dfdt\n\n# generate the points\npoints = np.array([x,y]).T\n\nidx = dfdt.knee(points)\n\nprint(idx)\n\nplt.plot(x,y)\nplt.plot(x[idx], y[idx], 'ro')\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0408a9baf0e7440f70b13b046376da6cc26ac6b | 14,605 | ipynb | Jupyter Notebook | notebooks/OW_Get_Data.ipynb | Jordan-Ireland/Unit2-Sprint4 | 3b4b1fbbb60c0f1de462a6f659ca88c4572e72ea | [
"MIT"
] | null | null | null | notebooks/OW_Get_Data.ipynb | Jordan-Ireland/Unit2-Sprint4 | 3b4b1fbbb60c0f1de462a6f659ca88c4572e72ea | [
"MIT"
] | null | null | null | notebooks/OW_Get_Data.ipynb | Jordan-Ireland/Unit2-Sprint4 | 3b4b1fbbb60c0f1de462a6f659ca88c4572e72ea | [
"MIT"
] | null | null | null | 32.894144 | 146 | 0.504074 | [
[
[
"import json\nimport requests\nfrom bs4 import BeautifulSoup\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.linear_model import LinearRegression\nfrom scipy.stats import norm\n\ndf = pd.read_csv('user_info.csv')\n\nfrom selenium import webdriver\noptions = webdriver.ChromeOptions()\noptions.add_argument('--ignore-certificate-errors')\noptions.add_argument('--incognito')\noptions.add_argument('--headless')\ndriver = webdriver.Chrome(\"../assets/chromedriver\", options=options)",
"_____no_output_____"
],
[
"class Player:\n def __init__(self, name, level, rating, prestige, games_won, qps, medals, hero):\n self.name = name\n self.level = level\n self.rating = rating\n self.prestige = prestige\n self.qps = qps\n self.medals = medals\n self.hero = hero\n self.kd_ratio = [i/(1+sum([qps.elims,qps.deaths])) for i in [qps.elims,qps.deaths]]\n self.games_won = games_won\n\nclass Stats:\n def __init__(self, elims=0, dmg_done=0, deaths=0, solo_kills=0):\n self.elims = elims\n self.dmg_done = dmg_done\n self.deaths = deaths\n self.solo_kills = solo_kills\n \nclass Medals:\n def __init__(self, bronze=0, silver=0, gold=0):\n self.bronze = bronze\n self.silver = silver\n self.gold = gold\n \nhero_list = ['ana','ashe','baptiste','bastion','brigitte','dVa','doomfist',\n 'genji','hanzo','junkrat','lucio','mccree','mei','mercy','moira',\n 'orisa','pharah','reaper','reinhardt','roadhog','soldier76','sombra',\n 'symmetra','torbjorn','tracer','widowmaker','winston','wreckingBall',\n 'zarya','zenyatta','sigma']\n\ndef create_player(js):\n heroes = {}\n if 'quickPlayStats' not in js:\n for hero in hero_list:\n heroes.update({hero: Stats(0,0,0,0)})\n return Player(js['name'], js['level'],js['rating'],js['prestige'], 0, Stats(), Medals(), heroes)\n if 'careerStats' not in js['quickPlayStats']:\n for hero in hero_list:\n heroes.update({hero: Stats(0,0,0,0)})\n return Player(js['name'], js['level'],js['rating'],js['prestige'], 0, Stats(), Medals(), heroes)\n if js.get('quickPlayStats',{}).get('careerStats',{}) == None or 'allHeroes' not in js.get('quickPlayStats',{}).get('careerStats',{}):\n for hero in hero_list:\n heroes.update({hero: Stats(0,0,0,0)})\n return Player(js['name'], js['level'],js['rating'],js['prestige'], 0, Stats(), Medals(), heroes)\n \n elims = 0\n damageDone = 0\n deaths = 0\n soloKills = 0\n\n if js['quickPlayStats']['careerStats']['allHeroes']['combat'] != None:\n\n if 'eliminations' in js['quickPlayStats']['careerStats']['allHeroes']['combat']:\n elims = js['quickPlayStats']['careerStats']['allHeroes']['combat']['eliminations']\n\n if 'damageDone' in js['quickPlayStats']['careerStats']['allHeroes']['combat']:\n damageDone = js['quickPlayStats']['careerStats']['allHeroes']['combat']['damageDone']\n\n if 'deaths' in js['quickPlayStats']['careerStats']['allHeroes']['combat']:\n deaths = js['quickPlayStats']['careerStats']['allHeroes']['combat']['deaths']\n\n if 'soloKills' in js['quickPlayStats']['careerStats']['allHeroes']['combat']:\n soloKills = js['quickPlayStats']['careerStats']['allHeroes']['combat']['soloKills']\n \n qps = Stats(elims,damageDone,deaths,soloKills)\n\n medals = Medals(js['quickPlayStats']['awards'].get('medalsBronze'),\n js['quickPlayStats']['awards'].get('medalsSilver'),\n js['quickPlayStats']['awards'].get('medalsGold'))\n \n for hero in hero_list:\n print(hero)\n if hero in js['quickPlayStats']['careerStats']:\n elims = 0\n damageDone = 0\n deaths = 0\n soloKills = 0\n \n if js['quickPlayStats']['careerStats'][hero]['combat'] != None:\n \n if 'eliminations' in js['quickPlayStats']['careerStats'][hero]['combat']:\n elims = js['quickPlayStats']['careerStats'][hero]['combat']['eliminations']\n\n if 'damageDone' in js['quickPlayStats']['careerStats'][hero]['combat']:\n damageDone = js['quickPlayStats']['careerStats'][hero]['combat']['damageDone']\n\n if 'deaths' in js['quickPlayStats']['careerStats'][hero]['combat']:\n deaths = js['quickPlayStats']['careerStats'][hero]['combat']['deaths']\n\n if 'soloKills' in js['quickPlayStats']['careerStats'][hero]['combat']:\n soloKills = 
js['quickPlayStats']['careerStats'][hero]['combat']['soloKills']\n \n heroes.update({hero: Stats(elims,damageDone,deaths,soloKills)})\n else:\n heroes.update({hero: Stats(0,0,0,0)})\n \n return Player(js['name'], js['level'],js['rating'],js['prestige'], js['quickPlayStats']['games']['won'], qps, medals, heroes)\n\ndef df_object(p):\n item = [p.name,p.level,p.rating,p.prestige,p.games_won,p.qps.elims,p.qps.dmg_done,\n p.qps.deaths,p.qps.solo_kills,p.medals.bronze,p.medals.silver,p.medals.gold]\n \n for hero in hero_list:\n item.extend([p.hero[hero].elims,p.hero[hero].dmg_done,p.hero[hero].deaths,p.hero[hero].solo_kills])\n \n return item",
"_____no_output_____"
],
[
"usernames = pd.read_csv('../assets/data/usernames_scraped_fixed.csv')\nusernames.head()\nlen(usernames['users'])",
"_____no_output_____"
],
[
"##dataframe setup\ncolumns = ['username','level','rating','prestige','games_won','qps_elims','qps_dmg_done',\n 'qps_deaths','qps_solo_kills','medals_bronze','medals_silver','medals_gold']\n\nfor hero in hero_list:\n hero_data = [f'{hero}_elims',f'{hero}_dmg_done',f'{hero}_deaths',f'{hero}_solo_kills']\n columns.extend(hero_data)\n\ndata = pd.DataFrame(columns=columns)\n\namount = 0\nfor user in usernames['users'].values:\n url = f\"https://ow-api.com/v1/stats/pc/us/{user}/complete\"\n print(url)\n response = requests.get(url)\n j = json.loads(response.text)\n u = create_player(j)\n data.loc[len(data), :] = df_object(u)\n amount += 1\n \n percent = np.round((amount/len(usernames['users'])),decimals=2)\n clear_output()\n progress = widgets.IntProgress(\n value=amount,\n min=0,\n max=len(usernames['users'].values),\n step=1,\n description=f'{percent}%',\n bar_style='info', # 'success', 'info', 'warning', 'danger' or ''\n orientation='horizontal'\n )\n display(progress)",
"_____no_output_____"
],
[
"data.head()\ndata.tail()\n\ndf = pd.read_csv('user_info.csv')\nprint(df.shape)\ndf = df.append(data)\ndf.shape, data.shape\n\ndata.to_csv('user_info.csv',index=False)",
"(8, 136)\n"
],
[
"# def s(username):\n# global search\n# search = username\n \n# interactive(s, username='')",
"_____no_output_____"
],
[
"# usernames = pd.read_csv('usernames_scraped_fixed.csv')\n# usernames.head()\n\n# df = pd.read_csv('usernames_scraped.csv')",
"_____no_output_____"
],
[
"# username_scraped = []\n\n# def str2bool(v):\n# return v.lower() in (\"True\", \"true\")\n\n# for name in df['users']:\n# driver.get(f\"https://playoverwatch.com/en-us/search?q={name}\")\n# time.sleep(2)\n# page_source = driver.page_source\n \n# soup = BeautifulSoup(page_source)\n# players = soup.find_all('a', class_=\"player-badge\")\n \n# for element in players:\n# locked = str2bool(element.find(\"div\", {\"data-visibility-private\": True})['data-visibility-private'])\n# if(locked == False):\n# username_scraped.append(element.find(class_='player-badge-name').text.replace('#', '-'))\n \n# print(len(username_scraped))",
"_____no_output_____"
],
[
"# print(len(username_scraped))\n\n# df1 = pd.read_csv('usernames_scraped_fixed.csv')\n\n# df2 = pd.DataFrame(username_scraped,columns=['users'])\n\n# df1 = df1.append(df2)\n\n# df1.to_csv('usernames_scraped_fixed.csv',index=False)\n\n# df1.shape",
"_____no_output_____"
],
[
"# usernames['users'].values",
"_____no_output_____"
],
[
"# def on_change(b):\n# global player\n# player = name=dropbox.value\n# print('player')\n\n# dropbox = widgets.Select(\n# options=usernames['users'].values,\n# value=usernames['users'].values[0],\n# description='User:',\n# disabled=False\n# )\n# dropbox.observe(on_change, names='value')\n \n# display(dropbox)",
"_____no_output_____"
],
[
"# player",
"_____no_output_____"
],
[
"# soup = BeautifulSoup(page_source)\n\n# players = soup.find_all('a', class_=\"player-badge\")\n \n# def f(name):\n# return name\n\n# def on_button_clicked(b):\n# global player\n# player = name=b.description\n\n# displays = []\n# for element in players:\n# locked = str2bool(element.find(\"div\", {\"data-visibility-private\": True})['data-visibility-private'])\n# if(locked == True):\n# tooltip = 'Sorry, player has their profile set to private...'\n# icon = 'lock'\n# else:\n# tooltip = \"Click to view this player\"\n# icon = 'unlock'\n# button = widgets.Button(\n# description=element.find(class_='player-badge-name').text.capitalize().replace('#', '-'),\n# disabled=locked,\n# button_style='', # 'success', 'info', 'warning', 'danger' or ''\n# icon=icon,\n# tooltip=tooltip\n# )\n# out = widgets.Output()\n \n# button.on_click(on_button_clicked)\n# display(button,out)",
"_____no_output_____"
],
[
"# url = f\"https://ow-api.com/v1/stats/pc/us/{player}/complete\"\n# print(url)\n# response = requests.get(url)\n\n# print(response)",
"_____no_output_____"
],
[
"# j = json.loads(response.text)\n# if(j['private'] == True):\n# print(\"Sorry can't load this profile. it's private\")\n# else:\n# print(j['name'])",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d04096ca5994998ca9056f14a6dc04b3cb9a650b | 8,538 | ipynb | Jupyter Notebook | codes/data/mnist/generate_mnist.ipynb | NIRVANALAN/CE7454_2020 | 17efb7607c87b1a7a0fcaa94587202d2550308df | [
"MIT"
] | 221 | 2019-08-15T02:19:23.000Z | 2022-02-22T00:53:06.000Z | codes/data/mnist/generate_mnist.ipynb | jiegev5/CE7454_2019 | 3c20cc26db657c23a30971a1d0440fab57723798 | [
"MIT"
] | 1 | 2020-08-26T08:01:20.000Z | 2020-10-06T14:43:51.000Z | codes/data/mnist/generate_mnist.ipynb | jiegev5/CE7454_2019 | 3c20cc26db657c23a30971a1d0440fab57723798 | [
"MIT"
] | 77 | 2019-08-15T07:06:00.000Z | 2021-11-25T10:12:09.000Z | 54.730769 | 4,896 | 0.774654 | [
[
[
"## Download MNIST",
"_____no_output_____"
]
],
[
[
"# For Google Colaboratory\nimport sys, os\nif 'google.colab' in sys.modules:\n from google.colab import drive\n drive.mount('/content/gdrive')\n file_name = 'generate_mnist.ipynb'\n import subprocess\n path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode(\"utf-8\")\n print(path_to_file)\n path_to_file = path_to_file.replace(file_name,\"\").replace('\\n',\"\")\n os.chdir(path_to_file)\n !pwd",
"_____no_output_____"
],
[
"import torch\nimport torchvision\nimport torchvision.transforms as transforms\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"trainset = torchvision.datasets.MNIST(root='./temp', train=True,\n download=True, transform=transforms.ToTensor())\ntestset = torchvision.datasets.MNIST(root='./temp', train=False,\n download=True, transform=transforms.ToTensor())",
"_____no_output_____"
],
[
"idx=4\npic, label =trainset[idx]\npic=pic.squeeze()\nplt.imshow(pic.numpy(), cmap='gray')\nplt.show()\nprint(label)",
"_____no_output_____"
],
[
"train_data=torch.Tensor(60000,28,28)\ntrain_label=torch.LongTensor(60000)\nfor idx , example in enumerate(trainset):\n train_data[idx]=example[0].squeeze()\n train_label[idx]=example[1]\ntorch.save(train_data,'train_data.pt')\ntorch.save(train_label,'train_label.pt')",
"_____no_output_____"
],
[
"test_data=torch.Tensor(10000,28,28)\ntest_label=torch.LongTensor(10000)\nfor idx , example in enumerate(testset):\n test_data[idx]=example[0].squeeze()\n test_label[idx]=example[1]\ntorch.save(test_data,'test_data.pt')\ntorch.save(test_label,'test_label.pt')",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0409d07bbb904b2c92e84b0085ea301fc12724b | 7,242 | ipynb | Jupyter Notebook | Python Basic Assignment -5.ipynb | MuhammedAbin/iNeuron-Assignment | 3bf3eed637d2ccc9cbf61a07efc11a1f0bc5a953 | [
"Apache-2.0"
] | null | null | null | Python Basic Assignment -5.ipynb | MuhammedAbin/iNeuron-Assignment | 3bf3eed637d2ccc9cbf61a07efc11a1f0bc5a953 | [
"Apache-2.0"
] | null | null | null | Python Basic Assignment -5.ipynb | MuhammedAbin/iNeuron-Assignment | 3bf3eed637d2ccc9cbf61a07efc11a1f0bc5a953 | [
"Apache-2.0"
] | null | null | null | 25.956989 | 129 | 0.460094 | [
[
[
"# 1. Write a Python Program to Find LCM?",
"_____no_output_____"
]
],
[
[
"# Given some numbers, LCM of these numbers is the smallest positive integer that is divisible by all the numbers numbers\ndef LCM(ls): \n lar = max(ls) \n flag=1\n while(flag==1): \n for i in ls:\n if(lar%i!=0): \n flag=1\n break \n else:\n flag=0\n continue\n lcm=lar\n lar += 1 \n return lcm \nn=int(input(\"Enter the range of your input \"))\nls=[]\nfor i in range(n):\n a=int(input(\"Enter the number \"))\n ls.append(a)\nprint(\"The LCM of entered numbers is\", LCM(ls))",
"Enter the range of your input 3\nEnter the number 4\nEnter the number 6\nEnter the number 8\nThe LCM of entered numbers is 24\n"
]
],
[
[
"# 2. Write a Python Program to Find HCF?",
"_____no_output_____"
]
],
[
[
"def HCF(ls):\n lar=0\n for i in range(2,min(ls)+1) :\n count=0\n for j in ls:\n if j%i==0:\n count=count+1\n if count==len(ls):\n lar=i\n return lar\n if count!=len(ls):\n return 1\nn=int(input(\"Enter the range of your input \"))\nls=[]\nfor i in range(n):\n a=int(input(\"Enter the number \"))\n ls.append(a)\nprint(\"The LCM of entered numbers is\", HCF(ls))",
"Enter the range of your input 3\nEnter the number 4\nEnter the number 6\nEnter the number 8\nThe LCM of entered numbers is 2\n"
]
],
[
[
"# 3. Write a Python Program to Convert Decimal to Binary, Octal and Hexadecimal?",
"_____no_output_____"
]
],
[
[
"a=int(input(\"Enter the Decimal value: \"))\ndef ToBinary(x):\n binary=\" \"\n while(x!=0 and x>0):\n binary=binary+str(x%2)\n x=int(x/2)\n return(binary[::-1])\ndef ToOctal(x):\n if(x<8):\n print(x)\n else:\n octal=\" \"\n while(x!=0 and x>0):\n octal=octal+str(x%8)\n x=int(x/8)\n return(octal[::-1])\ndef ToHexa(x):\n if(x<10):\n print(x)\n else:\n ls={10:\"A\", 11:\"B\", 12:\"C\", 13:\"D\", 14:\"E\", 15:\"F\"}\n Hexa=\" \"\n while(x!=0 and x>0):\n if(x%16) in ls.keys():\n Hexa=Hexa+ls[x%16]\n else:\n Hexa=Hexa+str(x%16)\n x=int(x/16)\n return(Hexa[::-1])\nprint(\"Binary of\",a,\"is\", ToBinary(a))\nprint(\"Octal of\",a,\"is\", ToOctal(a))\nprint(\"Hexadecimal of\",a,\"is\", ToHexa(a))",
"Enter the Decimal value: 15\nBinary of 15 is 1111 \nOctal of 15 is 17 \nHexadecimal of 15 is F \n"
]
],
[
[
"# 4. Write a Python Program To Find ASCII value of a character?",
"_____no_output_____"
]
],
[
[
"a=input(\"Enter the character to find ASCII: \")\nprint(\"ASCII of \",a ,\"is\", ord(a))",
"Enter the character to find ASCII: g\nASCII of g is 103\n"
]
],
[
[
"# 5.Write a Python Program to Make a Simple Calculator with 4 basic mathematical operations?",
"_____no_output_____"
]
],
[
[
"class Calculator:\n def Add(self,x,y):\n return x+y\n def Subtract(self,x,y):\n return x-y\n def Multiply(self,x,y):\n return x*y\n def Divide(self,x,y):\n try:\n return x/y\n except Exception as es:\n print(\"An error has occured\",es)\nCalc = Calculator()\nprint(\"Choose your Calculator Operation: \")\nprint(\"1 : Addition\")\nprint(\"2 : Subtraction\")\nprint(\"3 : Multipilication\")\nprint(\"4 : Division\")\na=int(input(\"Enter Your Selection: \"))\nx=int(input(\"Enter Your 1st number: \"))\ny=int(input(\"Enter Your 2nd number: \"))\nif(a==1):\n print(\"You chose Addition: \")\n print(\"The result of your operation is\", Calc.Add(x,y))\nelif(a==2):\n print(\"You chose Subtraction: \")\n print(\"The result of your operation is\", Calc.Subtract(x,y))\nelif(a==3):\n print(\"You chose Multipilication: \")\n print(\"The result of your operation is\", Calc.Multiply(x,y))\nelif(a==4):\n print(\"You chose Division: \")\n print(\"The result of your operation is\", Calc.Divide(x,y))\nelse:\n print(\"You chose a wrong operation\")",
"Choose your Calculator Operation: \n1 : Addition\n2 : Subtraction\n3 : Multipilication\n4 : Division\nEnter Your Selection: 1\nEnter Your 1st number: 5\nEnter Your 2nd number: 5\nYou chose addition: \nThe result of your opperation is 10\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d040b322cfe0acc6cdc29ab043f0b599e4560126 | 2,193 | ipynb | Jupyter Notebook | classification-fraud-detection/solution-0-setup/1.0 download [data].ipynb | nfaggian/ml-examples | e70441d6ec1309f5cc68e1370a9e9379ca0a91d7 | [
"Apache-2.0"
] | 2 | 2020-04-27T04:13:40.000Z | 2020-04-27T04:13:45.000Z | classification-fraud-detection/solution-0-setup/1.0 download [data].ipynb | nfaggian/ml-examples | e70441d6ec1309f5cc68e1370a9e9379ca0a91d7 | [
"Apache-2.0"
] | 2 | 2020-03-31T11:54:17.000Z | 2020-04-06T07:38:35.000Z | classification-fraud-detection/solution-0-setup/1.0 download [data].ipynb | nfaggian/ml-examples | e70441d6ec1309f5cc68e1370a9e9379ca0a91d7 | [
"Apache-2.0"
] | null | null | null | 26.743902 | 512 | 0.604651 | [
[
[
"# Download the ULB Fraud Dataset\n\nAndrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. Calibrating Probability with Undersampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015\n\nTERMS OF USE:\nThis data is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/. It is provided through the Google Cloud Public Datasets Program and is provided \"AS IS\" without any warranty, express or implied, from Google. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\nfrom google.cloud import bigquery",
"_____no_output_____"
],
[
"client = bigquery.Client()\n\nsql = \"\"\" \nSELECT * FROM `bigquery-public-data.ml_datasets.ulb_fraud_detection`\n\"\"\"\n\ndf = client.query(sql).to_dataframe()",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.to_csv('../dataset/creditcard.csv', index=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d040c2142565b6a3944580f821e03e958d4315e4 | 127,844 | ipynb | Jupyter Notebook | Wrangling of @WeLoveDogs Twitter dataset using Python/wrangle_act.ipynb | navpreetmattu/dprojects | c65e20774541b39d5ccdb013e5d99c68014fd0e1 | [
"MIT"
] | null | null | null | Wrangling of @WeLoveDogs Twitter dataset using Python/wrangle_act.ipynb | navpreetmattu/dprojects | c65e20774541b39d5ccdb013e5d99c68014fd0e1 | [
"MIT"
] | null | null | null | Wrangling of @WeLoveDogs Twitter dataset using Python/wrangle_act.ipynb | navpreetmattu/dprojects | c65e20774541b39d5ccdb013e5d99c68014fd0e1 | [
"MIT"
] | null | null | null | 46.522562 | 24,132 | 0.607099 | [
[
[
"\n# Data Wrangling, Analysis and Visualization of @WeLoveDogs twitter data.\n",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport tweepy as ty\nimport requests\nimport json\nimport io\nimport time",
"_____no_output_____"
]
],
[
[
"## Gathering",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('twitter-archive-enhanced.csv')",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"image_response = requests.get(r'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv')",
"_____no_output_____"
],
[
"image_df = pd.read_csv(io.StringIO(image_response.content.decode('utf-8')), sep='\\t')",
"_____no_output_____"
],
[
"image_df.head()",
"_____no_output_____"
],
[
"image_df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2075 entries, 0 to 2074\nData columns (total 12 columns):\ntweet_id 2075 non-null int64\njpg_url 2075 non-null object\nimg_num 2075 non-null int64\np1 2075 non-null object\np1_conf 2075 non-null float64\np1_dog 2075 non-null bool\np2 2075 non-null object\np2_conf 2075 non-null float64\np2_dog 2075 non-null bool\np3 2075 non-null object\np3_conf 2075 non-null float64\np3_dog 2075 non-null bool\ndtypes: bool(3), float64(3), int64(2), object(4)\nmemory usage: 152.1+ KB\n"
],
[
"CONSUMER_KEY = '<My Consumer Key>'\nCONSUMER_SECRET = '<My Consumer Secret>'\nACCESS_TOKEN = '<My Access Token>'\nACCESS_TOKEN_SECRET = '<My Access Token Secret>'",
"_____no_output_____"
],
[
"auth = ty.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)",
"_____no_output_____"
],
[
"auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)",
"_____no_output_____"
],
[
"api = ty.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)",
"_____no_output_____"
],
[
"tweet_count = 0\nfor id in df['tweet_id']:\n with open('tweet_json.txt', 'a') as file:\n try:\n start = time.time()\n tweet = api.get_status(id, tweet_mode='extended')\n st = json.dumps(tweet._json)\n file.writelines(st + '\\n')\n tweet_count += 1\n if tweet_count % 20 == 0 or tweet_count == len(df): \n end = time.time()\n print('Tweet id: {0}\\tDownload Time: {1} sec\\tTweets Downloaded: {2}.'.format(id, (end-start), tweet_count))\n except Exception as e:\n print('Exception occured for tweet {0} : {1}'.format(id, str(e)))",
"Exception occured for tweet 888202515573088257 : [{'code': 144, 'message': 'No status found with that ID.'}]\nTweet id: 888078434458587136\tDownload Time: 0.5414106845855713 sec\tTweets Downloaded: 20.\nTweet id: 884562892145688576\tDownload Time: 0.5888540744781494 sec\tTweets Downloaded: 40.\nTweet id: 880465832366813184\tDownload Time: 0.9378821849822998 sec\tTweets Downloaded: 60.\nTweet id: 877316821321428993\tDownload Time: 1.639167308807373 sec\tTweets Downloaded: 80.\nException occured for tweet 873697596434513921 : [{'code': 144, 'message': 'No status found with that ID.'}]\nException occured for tweet 872668790621863937 : [{'code': 34, 'message': 'Sorry, that page does not exist.'}]\nTweet id: 872620804844003328\tDownload Time: 0.8575863838195801 sec\tTweets Downloaded: 100.\nException occured for tweet 869988702071779329 : [{'code': 144, 'message': 'No status found with that ID.'}]\nTweet id: 868880397819494401\tDownload Time: 0.9696576595306396 sec\tTweets Downloaded: 120.\nException occured for tweet 866816280283807744 : [{'code': 144, 'message': 'No status found with that ID.'}]\nTweet id: 863907417377173506\tDownload Time: 0.547405481338501 sec\tTweets Downloaded: 140.\nException occured for tweet 861769973181624320 : [{'code': 144, 'message': 'No status found with that ID.'}]\nTweet id: 860177593139703809\tDownload Time: 3.960103750228882 sec\tTweets Downloaded: 160.\nTweet id: 856330835276025856\tDownload Time: 0.5950930118560791 sec\tTweets Downloaded: 180.\nTweet id: 852912242202992640\tDownload Time: 0.5863792896270752 sec\tTweets Downloaded: 200.\nTweet id: 849051919805034497\tDownload Time: 1.6753406524658203 sec\tTweets Downloaded: 220.\nTweet id: 845812042753855489\tDownload Time: 0.5766551494598389 sec\tTweets Downloaded: 240.\nException occured for tweet 845459076796616705 : [{'code': 144, 'message': 'No status found with that ID.'}]\nException occured for tweet 842892208864923648 : [{'code': 144, 'message': 'No status found with that ID.'}]\nTweet id: 841680585030541313\tDownload Time: 0.8575961589813232 sec\tTweets Downloaded: 260.\nTweet id: 838561493054533637\tDownload Time: 0.5514862537384033 sec\tTweets Downloaded: 280.\nException occured for tweet 837012587749474308 : [{'code': 144, 'message': 'No status found with that ID.'}]\nTweet id: 835574547218894849\tDownload Time: 0.5296859741210938 sec\tTweets Downloaded: 300.\nTweet id: 833722901757046785\tDownload Time: 0.8748948574066162 sec\tTweets Downloaded: 320.\nTweet id: 831670449226514432\tDownload Time: 0.8104021549224854 sec\tTweets Downloaded: 340.\nTweet id: 828708714936930305\tDownload Time: 0.5546495914459229 sec\tTweets Downloaded: 360.\nException occured for tweet 827228250799742977 : [{'code': 144, 'message': 'No status found with that ID.'}]\nTweet id: 826476773533745153\tDownload Time: 0.5224771499633789 sec\tTweets Downloaded: 380.\nTweet id: 823333489516937216\tDownload Time: 0.7836413383483887 sec\tTweets Downloaded: 400.\nTweet id: 821107785811234820\tDownload Time: 0.5480344295501709 sec\tTweets Downloaded: 420.\nTweet id: 819004803107983360\tDownload Time: 0.6690359115600586 sec\tTweets Downloaded: 440.\nTweet id: 816829038950027264\tDownload Time: 0.7895915508270264 sec\tTweets Downloaded: 460.\nTweet id: 813910438903693312\tDownload Time: 0.5418002605438232 sec\tTweets Downloaded: 480.\nTweet id: 812466873996607488\tDownload Time: 0.560112476348877 sec\tTweets Downloaded: 500.\nTweet id: 808344865868283904\tDownload Time: 0.5334210395812988 sec\tTweets Downloaded: 520.\nTweet id: 
805207613751304193\tDownload Time: 0.5202441215515137 sec\tTweets Downloaded: 540.\nException occured for tweet 802247111496568832 : [{'code': 144, 'message': 'No status found with that ID.'}]\nTweet id: 801854953262350336\tDownload Time: 0.5162663459777832 sec\tTweets Downloaded: 560.\nTweet id: 799297110730567681\tDownload Time: 0.5272245407104492 sec\tTweets Downloaded: 580.\nTweet id: 797236660651966464\tDownload Time: 0.8399364948272705 sec\tTweets Downloaded: 600.\nTweet id: 794332329137291264\tDownload Time: 0.5941627025604248 sec\tTweets Downloaded: 620.\nTweet id: 792883833364439040\tDownload Time: 0.9444077014923096 sec\tTweets Downloaded: 640.\nTweet id: 789986466051088384\tDownload Time: 0.5694825649261475 sec\tTweets Downloaded: 660.\nTweet id: 787397959788929025\tDownload Time: 0.5749082565307617 sec\tTweets Downloaded: 680.\nTweet id: 784826020293709826\tDownload Time: 0.5000665187835693 sec\tTweets Downloaded: 700.\nTweet id: 781661882474196992\tDownload Time: 0.5563163757324219 sec\tTweets Downloaded: 720.\nTweet id: 779123168116150273\tDownload Time: 0.5381345748901367 sec\tTweets Downloaded: 740.\nTweet id: 776819012571455488\tDownload Time: 0.548180341720581 sec\tTweets Downloaded: 760.\nException occured for tweet 775096608509886464 : [{'code': 144, 'message': 'No status found with that ID.'}]\nTweet id: 773704687002451968\tDownload Time: 0.5356895923614502 sec\tTweets Downloaded: 780.\nTweet id: 771171053431250945\tDownload Time: 0.8415000438690186 sec\tTweets Downloaded: 800.\nException occured for tweet 771004394259247104 : [{'code': 179, 'message': 'Sorry, you are not authorized to see this status.'}]\nException occured for tweet 770743923962707968 : [{'code': 144, 'message': 'No status found with that ID.'}]\nTweet id: 768554158521745409\tDownload Time: 0.5458414554595947 sec\tTweets Downloaded: 820.\nTweet id: 765371061932261376\tDownload Time: 0.56394362449646 sec\tTweets Downloaded: 840.\nTweet id: 761334018830917632\tDownload Time: 0.5361735820770264 sec\tTweets Downloaded: 860.\nTweet id: 759446261539934208\tDownload Time: 0.5952544212341309 sec\tTweets Downloaded: 880.\n"
],
[
"print('There are {} records for which tweets does not exist in Twitter database.'.format(len(df) - tweet_count))",
"There are 15 records for which tweets does not exist in Twitter database.\n"
],
[
"tweet_ids = []\nfavorite_count = []\nretweet_count = []",
"_____no_output_____"
],
[
"with open('tweet_json.txt', 'r') as file:\n for line in file.readlines():\n data = json.loads(line)\n tweet_ids.append(data['id'])\n favorite_count.append(data['favorite_count'])\n retweet_count.append(data['retweet_count'])",
"_____no_output_____"
],
[
"favorite_retweet_df = pd.DataFrame(data={'tweet_id': tweet_ids, 'favorite_count': favorite_count, \n 'retweet_count': retweet_count})",
"_____no_output_____"
],
[
"favorite_retweet_df.head()",
"_____no_output_____"
],
[
"favorite_retweet_df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2341 entries, 0 to 2340\nData columns (total 3 columns):\ntweet_id 2341 non-null int64\nfavorite_count 2341 non-null int64\nretweet_count 2341 non-null int64\ndtypes: int64(3)\nmemory usage: 54.9 KB\n"
]
],
[
[
"## Assessing\n\n### Quality Issues\n\n - The WeRateDogs Twitter archive contains some retweets also. We only need to consider original tweets for this project.\n - `tweet_id` column having integer datatype in all the dataframes. Conversion to string required. Since, We are not going to do any mathematical operations with it.\n - `rating_denominator` value should be 10 since we are giving rating out of 10.\n - For some tweets, The values of `rating_numerator` column are very high, possibly outliers.\n - `name` column has **None** string and *a*, *an*, *the* as values.\n - Extract dog **stages** from the tweet text (if present) for null values.\n - `timestamp` column is given as **string**. Convert it to date.\n - Since we are not using retweets, `in_reply_to_status_id`, `in_reply_to_user_id`, `source`, `retweeted_status_id`, `retweeted_status_user_id`, `retweeted_status_timestamp` columns are not required.\n - `image_df` contains tweets that do not belong to a dog. \n - `source` column in main dataframe is of no use in the analysis as it only tell us about the source of tweet.\n \n### Tidiness Issues\n\n - A dog can have one stage at a time, still we are having 4 columns to store one piece of information.\n - We are having 3 predicted breeds of dogs in the image prediction file. But only required the one with higher probability, given that it is a breed of dog.\n - All 3 dataframes contains same tweet_id column, which we can use to merge them and use as one dataframe for our analysis.",
"_____no_output_____"
]
],
[
[
"sum(df.duplicated())",
"_____no_output_____"
],
[
"# name and stages having wrong values for no. of records\ndf.tail()",
"_____no_output_____"
],
[
"# tweet id is int64, retweet related columns are not required\n# name and dog stages showing full data but most of them are None.\n# timestamp having string datatype\ndf.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2356 entries, 0 to 2355\nData columns (total 17 columns):\ntweet_id 2356 non-null int64\nin_reply_to_status_id 78 non-null float64\nin_reply_to_user_id 78 non-null float64\ntimestamp 2356 non-null object\nsource 2356 non-null object\ntext 2356 non-null object\nretweeted_status_id 181 non-null float64\nretweeted_status_user_id 181 non-null float64\nretweeted_status_timestamp 181 non-null object\nexpanded_urls 2297 non-null object\nrating_numerator 2356 non-null int64\nrating_denominator 2356 non-null int64\nname 2356 non-null object\ndoggo 2356 non-null object\nfloofer 2356 non-null object\npupper 2356 non-null object\npuppo 2356 non-null object\ndtypes: float64(4), int64(3), object(10)\nmemory usage: 313.0+ KB\n"
],
[
"# rating_numerator is having min value of 0 and max of 1776 (outliers)\ndf.describe()",
"_____no_output_____"
],
[
"# denominator value should be 10\nsum(df['rating_denominator'] != 10)",
"_____no_output_____"
],
[
"df.rating_numerator.value_counts()",
"_____no_output_____"
]
],
[
[
"## Cleaning",
"_____no_output_____"
]
],
[
[
"# Making a copy of the data so that original data will be unchanged\ndf_clean = df.copy()\nimage_df_clean = image_df.copy()\nfavorite_retweet_df_clean = favorite_retweet_df.copy()",
"_____no_output_____"
]
],
[
[
"## Quality Issues\n\n### Define\n\n - Replacing all the **None** strings and *a*, *an* and *the* to **np.nan** in `name` column using pandas **_replace()_** function. \n \n### Code",
"_____no_output_____"
]
],
[
[
"df_clean['name'] = df_clean['name'].replace({'None': np.nan, 'a': np.nan, 'an': np.nan, 'the': np.nan})",
"_____no_output_____"
]
],
[
[
"### Test",
"_____no_output_____"
]
],
[
[
"df_clean.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2356 entries, 0 to 2355\nData columns (total 17 columns):\ntweet_id 2356 non-null int64\nin_reply_to_status_id 78 non-null float64\nin_reply_to_user_id 78 non-null float64\ntimestamp 2356 non-null object\nsource 2356 non-null object\ntext 2356 non-null object\nretweeted_status_id 181 non-null float64\nretweeted_status_user_id 181 non-null float64\nretweeted_status_timestamp 181 non-null object\nexpanded_urls 2297 non-null object\nrating_numerator 2356 non-null int64\nrating_denominator 2356 non-null int64\nname 1541 non-null object\ndoggo 2356 non-null object\nfloofer 2356 non-null object\npupper 2356 non-null object\npuppo 2356 non-null object\ndtypes: float64(4), int64(3), object(10)\nmemory usage: 313.0+ KB\n"
]
],
[
[
"### Define\n\n- `tweet_id` column having integer datatype in all the dataframes. Converting it to string using pandas **_astype()_** function.\n- `timestamp` column is given as string. Converting it to date using pandas **to_datetime()** function.\n\n### Code",
"_____no_output_____"
]
],
[
[
"df_clean['tweet_id'] = df_clean['tweet_id'].astype(str)\nimage_df_clean['tweet_id'] = image_df_clean['tweet_id'].astype(str)\nfavorite_retweet_df_clean['tweet_id'] = favorite_retweet_df_clean['tweet_id'].astype(str)",
"_____no_output_____"
],
[
"df_clean.timestamp = pd.to_datetime(df_clean.timestamp)",
"_____no_output_____"
]
],
[
[
"### Test",
"_____no_output_____"
]
],
[
[
"df_clean.dtypes",
"_____no_output_____"
],
[
"image_df_clean.dtypes",
"_____no_output_____"
],
[
"favorite_retweet_df_clean.dtypes",
"_____no_output_____"
]
],
[
[
"### Define\n\n- Converting `rating_denominator` column value to 10 value when it is not, using pandas boolean indexing. Since we are giving rating out of 10.\n\n### Code",
"_____no_output_____"
]
],
[
[
"df_clean.loc[df_clean.rating_denominator != 10, 'rating_denominator'] = 10",
"_____no_output_____"
]
],
[
[
"### Test",
"_____no_output_____"
]
],
[
[
"sum(df_clean.rating_denominator != 10)",
"_____no_output_____"
]
],
[
[
"### Define\n\n- Removing retweets since we are only concerned about original tweets. Since, there is null present in `in_reply_to_status_id`, `retweeted_status_id` columns for retweets. Droping these records using the pandas drop() function.\n\n### Code",
"_____no_output_____"
]
],
[
[
"df_clean.drop(index=df_clean[df_clean.retweeted_status_id.notnull()].index, inplace=True)",
"_____no_output_____"
],
[
"df_clean.drop(index=df_clean[df_clean.in_reply_to_status_id.notnull()].index, inplace=True)",
"_____no_output_____"
]
],
[
[
"### Test",
"_____no_output_____"
]
],
[
[
"sum(df_clean.retweeted_status_id.notnull())",
"_____no_output_____"
],
[
"sum(df_clean.in_reply_to_status_id.notnull())",
"_____no_output_____"
]
],
[
[
"### Define\n\n- Removing `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id`, `retweeted_status_user_id`, `retweeted_status_timestamp` columns since they are related to retweet info. Hence, not required.\n\n### Code",
"_____no_output_____"
]
],
[
[
"df_clean.drop(columns=['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id', \n 'retweeted_status_user_id', 'retweeted_status_timestamp'], inplace=True)",
"_____no_output_____"
]
],
[
[
"### Test",
"_____no_output_____"
]
],
[
[
"df_clean.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 2097 entries, 0 to 2355\nData columns (total 12 columns):\ntweet_id 2097 non-null object\ntimestamp 2097 non-null datetime64[ns]\nsource 2097 non-null object\ntext 2097 non-null object\nexpanded_urls 2094 non-null object\nrating_numerator 2097 non-null int64\nrating_denominator 2097 non-null int64\nname 1425 non-null object\ndoggo 2097 non-null object\nfloofer 2097 non-null object\npupper 2097 non-null object\npuppo 2097 non-null object\ndtypes: datetime64[ns](1), int64(2), object(9)\nmemory usage: 213.0+ KB\n"
]
],
[
[
"### Define\n\n- Removing `source` column from df_clean using pandas **_drop_** function.\n\n### Code",
"_____no_output_____"
]
],
[
[
"df_clean.drop(columns=['source'], inplace=True)",
"_____no_output_____"
]
],
[
[
"### Test",
"_____no_output_____"
]
],
[
[
"df_clean.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 2097 entries, 0 to 2355\nData columns (total 11 columns):\ntweet_id 2097 non-null object\ntimestamp 2097 non-null datetime64[ns]\ntext 2097 non-null object\nexpanded_urls 2094 non-null object\nrating_numerator 2097 non-null int64\nrating_denominator 2097 non-null int64\nname 1425 non-null object\ndoggo 2097 non-null object\nfloofer 2097 non-null object\npupper 2097 non-null object\npuppo 2097 non-null object\ndtypes: datetime64[ns](1), int64(2), object(8)\nmemory usage: 196.6+ KB\n"
]
],
[
[
"## Tidiness Issues\n\n### Define\n\n- Replacing 3 predictions to one dog breed with higher probability, given that it is a breed of dog, with the use of pandas **_apply()_** function.\n\n### Code",
"_____no_output_____"
]
],
[
[
"non_dog_ind = image_df_clean.query('not p1_dog and not p2_dog and not p3_dog').index",
"_____no_output_____"
],
[
"image_df_clean.drop(index=non_dog_ind, inplace=True)",
"_____no_output_____"
],
[
"def get_priority_dog(dog):\n return dog['p1'] if dog['p1_dog'] else dog['p2'] if dog['p2_dog'] else dog['p3']",
"_____no_output_____"
],
[
"image_df_clean['dog_breed'] = image_df_clean.apply(get_priority_dog, axis=1)",
"_____no_output_____"
],
[
"image_df_clean.drop(columns=['p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog', 'img_num'], \n inplace=True)",
"_____no_output_____"
]
],
[
[
"### Test",
"_____no_output_____"
]
],
[
[
"image_df_clean.head()",
"_____no_output_____"
]
],
[
[
"### Define\n\n- There are 4 column present for specifing dog breed, which can be done using one column only. Creating a column name `dog_stage` and adding present dog breed in it.\n\n### Code",
"_____no_output_____"
]
],
[
[
"def get_dog_stage(dog):\n if dog['doggo'] != 'None':\n return dog['doggo']\n elif dog['floofer'] != 'None':\n return dog['floofer']\n elif dog['pupper'] != 'None':\n return dog['pupper']\n else:\n return dog['puppo'] # if last entry is also nan, we have to return nan anyway",
"_____no_output_____"
],
[
"df_clean['dog_stage'] = df_clean.apply(get_dog_stage, axis=1)",
"_____no_output_____"
],
[
"df_clean.drop(columns=['doggo', 'floofer', 'pupper', 'puppo'], inplace=True)",
"_____no_output_____"
]
],
[
[
"### Test",
"_____no_output_____"
]
],
[
[
"df_clean.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 2097 entries, 0 to 2355\nData columns (total 8 columns):\ntweet_id 2097 non-null object\ntimestamp 2097 non-null datetime64[ns]\ntext 2097 non-null object\nexpanded_urls 2094 non-null object\nrating_numerator 2097 non-null int64\nrating_denominator 2097 non-null int64\nname 1425 non-null object\ndog_stage 2097 non-null object\ndtypes: datetime64[ns](1), int64(2), object(5)\nmemory usage: 147.4+ KB\n"
]
],
[
[
"### Define\n\n- All 3 dataframes contains same `tweet_id` column, which we can use ot merge them and use as one dataframe for our analysis.\n\n### Code",
"_____no_output_____"
]
],
[
[
"df_clean = pd.merge(df_clean, image_df_clean, on='tweet_id')",
"_____no_output_____"
],
[
"df_clean = pd.merge(df_clean, favorite_retweet_df_clean, on='tweet_id')",
"_____no_output_____"
]
],
[
[
"### Test",
"_____no_output_____"
]
],
[
[
"df_clean.head()",
"_____no_output_____"
]
],
[
[
"## Quality Issues\n\n### Define\n\n- Try to extract dog dstage from tweet text using regular expressions and series **_str.extract()_** function.\n\n### Code",
"_____no_output_____"
]
],
[
[
"stages = df_clean[df_clean.dog_stage == 'None'].text.str.extract(r'(doggo|pupper|floof|puppo|pup)', expand=True)",
"_____no_output_____"
],
[
"len(df_clean[df_clean.dog_stage == 'None'])",
"_____no_output_____"
],
[
"df_clean.loc[stages.index, 'dog_stage'] = stages[0]",
"_____no_output_____"
]
],
[
[
"### Test",
"_____no_output_____"
]
],
[
[
"len(df_clean[df_clean.dog_stage.isnull()])",
"_____no_output_____"
]
],
[
[
"### Define\n\n- Removing outliers from `rating_numerator`\n\n### Code",
"_____no_output_____"
]
],
[
[
"df_clean.boxplot(column=['rating_numerator'], figsize=(20,8), vert=False)",
"_____no_output_____"
]
],
[
[
"- As clear from the boxplot, `rating_numerator` has a number of outliers which can affect our analysis. So, removing all the rating points above 15 abd below 7 to reduce the effect of outliers.",
"_____no_output_____"
]
],
[
[
"df_clean.drop(index=df_clean.query('rating_numerator > 15 or rating_numerator < 7').index, inplace=True)",
"_____no_output_____"
]
],
[
[
"### Test",
"_____no_output_____"
]
],
[
[
"df_clean.boxplot(column=['rating_numerator'], figsize=(20,8), vert=False)",
"_____no_output_____"
]
],
[
[
"- As we can see now, there are no outliers present in the datat anymore.",
"_____no_output_____"
],
[
"## Storing Data in a CSV file",
"_____no_output_____"
]
],
[
[
"df_clean.to_csv('twitter_archive_master.csv', index=False)\nprint('Save Done !')",
"Save Done !\n"
]
],
[
[
"## Data Analysis and Visualization",
"_____no_output_____"
]
],
[
[
"ax = df_clean.plot.scatter('rating_numerator', 'favorite_count', figsize=(10, 10), title='Rating VS. Favorites')\nax.set_xlabel('Ratings')\nax.set_ylabel('Favorites')",
"_____no_output_____"
]
],
[
[
"## Insight 1:\n\n- Number of favorite count is increasing with the rating. i.e. Dogs getting more rating in the tweets are likely to receive more likes (favorites). ",
"_____no_output_____"
]
],
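[
[
"# Added sketch (not in the original analysis): quantify Insight 1 with the\n# Pearson correlation between rating and favorite count on df_clean.\ndf_clean[['rating_numerator', 'favorite_count']].corr(method='pearson')",
"_____no_output_____"
]
],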
[
[
"df_clean.dog_breed.value_counts()",
"_____no_output_____"
]
],
[
[
"## Insight 2\n\n- Most of the pictures present in @WeLoveDogs twitter account are of Golden Retriever, followed by Labrador Retriever, Pembroke and Chihuahua .",
"_____no_output_____"
]
],
[
[
"df_clean.loc[df_clean.favorite_count.idxmax()][['dog_breed', 'favorite_count']]",
"_____no_output_____"
],
[
"df_clean.loc[df_clean.favorite_count.idxmin()][['dog_breed', 'favorite_count']]",
"_____no_output_____"
],
[
"df_clean.loc[df_clean.retweet_count.idxmax()][['dog_breed', 'retweet_count']]",
"_____no_output_____"
],
[
"df_clean.loc[df_clean.retweet_count.idxmin()][['dog_breed', 'retweet_count']]",
"_____no_output_____"
]
],
[
[
"## Insight 3\n\n- The dog with highest number of favorites (likes) is an Labrador retriever while the one with lowest number of favorites is an english setter.\n- The same dogs who got highest and lowest favorites count also received highest and lowest retweet count respectively.\n\n**So, if a tweet got more likes(favorites), it got better chances to have more retweets than the one who got low number of likes(favorites) and vice versa.**",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d040c597c43fee720180887dab8e7d60816e50d9 | 449,536 | ipynb | Jupyter Notebook | analysis/optimus5/invase_optimus5_conv_full_data.ipynb | willshi88/scrambler | fd77c05824fc99e6965d204c4f5baa1e3b0c4fb3 | [
"MIT"
] | 19 | 2021-04-30T04:12:58.000Z | 2022-03-07T19:09:32.000Z | analysis/optimus5/invase_optimus5_conv_full_data.ipynb | willshi88/scrambler | fd77c05824fc99e6965d204c4f5baa1e3b0c4fb3 | [
"MIT"
] | 4 | 2021-07-02T15:07:27.000Z | 2021-08-01T12:41:28.000Z | analysis/optimus5/invase_optimus5_conv_full_data.ipynb | willshi88/scrambler | fd77c05824fc99e6965d204c4f5baa1e3b0c4fb3 | [
"MIT"
] | 4 | 2021-06-28T09:41:01.000Z | 2022-02-28T09:13:29.000Z | 272.942319 | 21,332 | 0.899779 | [
[
[
"import keras\nimport keras.backend as K\nfrom keras.datasets import mnist\nfrom keras.models import Sequential, Model, load_model\nfrom keras.layers import Dense, Dropout, Activation, Flatten, Input, Lambda\nfrom keras.layers import Conv2D, MaxPooling2D, AveragePooling2D, Conv1D, MaxPooling1D, LSTM, ConvLSTM2D, GRU, BatchNormalization, LocallyConnected2D, Permute, TimeDistributed, Bidirectional\nfrom keras.layers import Concatenate, Reshape, Conv2DTranspose, Embedding, Multiply, Activation\n\nfrom functools import partial\n\nfrom collections import defaultdict\n\nimport os\nimport pickle\nimport numpy as np\n\nimport scipy.sparse as sp\nimport scipy.io as spio\n\nimport isolearn.io as isoio\nimport isolearn.keras as isol\n\nimport matplotlib.pyplot as plt\n\nfrom sequence_logo_helper import dna_letter_at, plot_dna_logo\n\nfrom sklearn import preprocessing\nimport pandas as pd\n\nimport tensorflow as tf\n\nfrom keras.backend.tensorflow_backend import set_session\n\ndef contain_tf_gpu_mem_usage() :\n config = tf.ConfigProto()\n config.gpu_options.allow_growth = True\n sess = tf.Session(config=config)\n set_session(sess)\n\ncontain_tf_gpu_mem_usage()\n",
"Using TensorFlow backend.\n"
],
[
"\n#optimus 5-prime functions \ndef test_data(df, model, test_seq, obs_col, output_col='pred'):\n '''Predict mean ribosome load using model and test set UTRs'''\n \n # Scale the test set mean ribosome load\n scaler = preprocessing.StandardScaler()\n scaler.fit(df[obs_col].reshape(-1,1))\n \n # Make predictions\n predictions = model.predict(test_seq).reshape(-1)\n \n # Inverse scaled predicted mean ribosome load and return in a column labeled 'pred'\n df.loc[:,output_col] = scaler.inverse_transform(predictions)\n return df\n\n\ndef one_hot_encode(df, col='utr', seq_len=50):\n # Dictionary returning one-hot encoding of nucleotides. \n nuc_d = {'a':[1,0,0,0],'c':[0,1,0,0],'g':[0,0,1,0],'t':[0,0,0,1], 'n':[0,0,0,0]}\n \n # Creat empty matrix.\n vectors=np.empty([len(df),seq_len,4])\n \n # Iterate through UTRs and one-hot encode\n for i,seq in enumerate(df[col].str[:seq_len]): \n seq = seq.lower()\n a = np.array([nuc_d[x] for x in seq])\n vectors[i] = a\n return vectors\n\n\ndef r2(x,y):\n slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)\n return r_value**2\n\n\n#Train data\ndf = pd.read_csv(\"../../../seqprop/examples/optimus5/GSM3130435_egfp_unmod_1.csv\")\n\ndf.sort_values('total_reads', inplace=True, ascending=False)\ndf.reset_index(inplace=True, drop=True)\ndf = df.iloc[:280000]\n# The training set has 260k UTRs and the test set has 20k UTRs.\n#e_test = df.iloc[:20000].copy().reset_index(drop = True)\ne_train = df.iloc[20000:].copy().reset_index(drop = True)\ne_train.loc[:,'scaled_rl'] = preprocessing.StandardScaler().fit_transform(e_train.loc[:,'rl'].values.reshape(-1,1))\n\nseq_e_train = one_hot_encode(e_train,seq_len=50)\nx_train = seq_e_train\nx_train = np.reshape(x_train, (x_train.shape[0], 1, x_train.shape[1], x_train.shape[2]))\ny_train = np.array(e_train['scaled_rl'].values)\ny_train = np.reshape(y_train, (y_train.shape[0],1))\n\nprint(\"x_train.shape = \" + str(x_train.shape))\nprint(\"y_train.shape = \" + str(y_train.shape))\n",
"x_train.shape = (260000, 1, 50, 4)\ny_train.shape = (260000, 1)\n"
],
[
"#Load Predictor\npredictor_path = 'optimusRetrainedMain.hdf5'\n\npredictor = load_model(predictor_path)\n\npredictor.trainable = False\npredictor.compile(optimizer=keras.optimizers.SGD(lr=0.1), loss='mean_squared_error')\n",
"WARNING:tensorflow:From /home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nColocations handled automatically by placer.\nWARNING:tensorflow:From /home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.\nWARNING:tensorflow:From /home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.cast instead.\n"
],
[
"#Generate (original) predictions\n\npred_train = predictor.predict(x_train[:, 0, ...], batch_size=32)\n\ny_train = (y_train >= 0.)\ny_train = np.concatenate([1. - y_train, y_train], axis=1)\n\npred_train = (pred_train >= 0.)\npred_train = np.concatenate([1. - pred_train, pred_train], axis=1)\n",
"_____no_output_____"
],
[
"from keras.layers import Input, Dense, Multiply, Flatten, Reshape, Conv2D, MaxPooling2D, GlobalMaxPooling2D, Activation\nfrom keras.layers import BatchNormalization\nfrom keras.models import Sequential, Model\nfrom keras.optimizers import Adam\nfrom keras import regularizers\nfrom keras import backend as K\n\nimport tensorflow as tf\nimport numpy as np\n\nfrom keras.layers import Layer, InputSpec\nfrom keras import initializers, regularizers, constraints\n\nclass InstanceNormalization(Layer):\n def __init__(self, axes=(1, 2), trainable=True, **kwargs):\n super(InstanceNormalization, self).__init__(**kwargs)\n self.axes = axes\n self.trainable = trainable\n def build(self, input_shape):\n self.beta = self.add_weight(name='beta',shape=(input_shape[-1],),\n initializer='zeros',trainable=self.trainable)\n self.gamma = self.add_weight(name='gamma',shape=(input_shape[-1],),\n initializer='ones',trainable=self.trainable)\n def call(self, inputs):\n mean, variance = tf.nn.moments(inputs, self.axes, keep_dims=True)\n return tf.nn.batch_normalization(inputs, mean, variance, self.beta, self.gamma, 1e-6)\n\ndef bernoulli_sampling (prob):\n \"\"\" Sampling Bernoulli distribution by given probability.\n\n Args:\n - prob: P(Y = 1) in Bernoulli distribution.\n\n Returns:\n - samples: samples from Bernoulli distribution\n \"\"\" \n\n n, x_len, y_len, d = prob.shape\n samples = np.random.binomial(1, prob, (n, x_len, y_len, d))\n\n return samples\n\nclass INVASE():\n \"\"\"INVASE class.\n\n Attributes:\n - x_train: training features\n - y_train: training labels\n - model_type: invase or invase_minus\n - model_parameters:\n - actor_h_dim: hidden state dimensions for actor\n - critic_h_dim: hidden state dimensions for critic\n - n_layer: the number of layers\n - batch_size: the number of samples in mini batch\n - iteration: the number of iterations\n - activation: activation function of models\n - learning_rate: learning rate of model training\n - lamda: hyper-parameter of INVASE\n \"\"\"\n\n def __init__(self, x_train, y_train, model_type, model_parameters):\n\n self.lamda = model_parameters['lamda']\n self.actor_h_dim = model_parameters['actor_h_dim']\n self.critic_h_dim = model_parameters['critic_h_dim']\n self.n_layer = model_parameters['n_layer']\n self.batch_size = model_parameters['batch_size']\n self.iteration = model_parameters['iteration']\n self.activation = model_parameters['activation']\n self.learning_rate = model_parameters['learning_rate']\n\n #Modified Code\n self.x_len = x_train.shape[1]\n self.y_len = x_train.shape[2]\n self.dim = x_train.shape[3] \n self.label_dim = y_train.shape[1]\n\n self.model_type = model_type\n\n optimizer = Adam(self.learning_rate)\n\n # Build and compile critic\n self.critic = self.build_critic()\n self.critic.compile(loss='categorical_crossentropy', \n optimizer=optimizer, metrics=['acc'])\n\n # Build and compile the actor\n self.actor = self.build_actor()\n self.actor.compile(loss=self.actor_loss, optimizer=optimizer)\n\n if self.model_type == 'invase':\n # Build and compile the baseline\n self.baseline = self.build_baseline()\n self.baseline.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['acc'])\n\n\n def actor_loss(self, y_true, y_pred):\n \"\"\"Custom loss for the actor.\n\n Args:\n - y_true:\n - actor_out: actor output after sampling\n - critic_out: critic output \n - baseline_out: baseline output (only for invase)\n - y_pred: output of the actor network\n\n Returns:\n - loss: actor loss\n \"\"\"\n \n y_pred = K.reshape(y_pred, 
(K.shape(y_pred)[0], self.x_len*self.y_len*1))\n y_true = y_true[:, 0, 0, :]\n\n # Actor output\n actor_out = y_true[:, :self.x_len*self.y_len*1]\n # Critic output\n critic_out = y_true[:, self.x_len*self.y_len*1:(self.x_len*self.y_len*1+self.label_dim)]\n\n if self.model_type == 'invase':\n # Baseline output\n baseline_out = \\\n y_true[:, (self.x_len*self.y_len*1+self.label_dim):(self.x_len*self.y_len*1+2*self.label_dim)]\n # Ground truth label\n y_out = y_true[:, (self.x_len*self.y_len*1+2*self.label_dim):] \n elif self.model_type == 'invase_minus':\n # Ground truth label\n y_out = y_true[:, (self.x_len*self.y_len*1+self.label_dim):] \n\n # Critic loss\n critic_loss = -tf.reduce_sum(y_out * tf.log(critic_out + 1e-8), axis = 1) \n\n if self.model_type == 'invase': \n # Baseline loss\n baseline_loss = -tf.reduce_sum(y_out * tf.log(baseline_out + 1e-8), \n axis = 1) \n # Reward\n Reward = -(critic_loss - baseline_loss)\n elif self.model_type == 'invase_minus':\n Reward = -critic_loss\n\n # Policy gradient loss computation. \n custom_actor_loss = \\\n Reward * tf.reduce_sum(actor_out * K.log(y_pred + 1e-8) + \\\n (1-actor_out) * K.log(1-y_pred + 1e-8), axis = 1) - \\\n self.lamda * tf.reduce_mean(y_pred, axis = 1)\n\n # custom actor loss\n custom_actor_loss = tf.reduce_mean(-custom_actor_loss)\n\n return custom_actor_loss\n\n\n def build_actor(self):\n \"\"\"Build actor.\n\n Use feature as the input and output selection probability\n \"\"\"\n actor_model = Sequential()\n \n actor_model.add(Conv2D(self.actor_h_dim, (1, 7), padding='same', activation='linear'))\n actor_model.add(InstanceNormalization())\n actor_model.add(Activation(self.activation))\n for _ in range(self.n_layer - 2):\n actor_model.add(Conv2D(self.actor_h_dim, (1, 7), padding='same', activation='linear'))\n actor_model.add(InstanceNormalization())\n actor_model.add(Activation(self.activation))\n actor_model.add(Conv2D(1, (1, 1), padding='same', activation='sigmoid'))\n\n feature = Input(shape=(self.x_len, self.y_len, self.dim), dtype='float32')\n selection_probability = actor_model(feature)\n\n return Model(feature, selection_probability)\n\n\n def build_critic(self):\n \"\"\"Build critic.\n\n Use selected feature as the input and predict labels\n \"\"\"\n critic_model = Sequential()\n \n critic_model.add(Conv2D(self.critic_h_dim, (1, 7), padding='same', activation='linear'))\n critic_model.add(InstanceNormalization())\n critic_model.add(Activation(self.activation))\n for _ in range(self.n_layer - 2):\n critic_model.add(Conv2D(self.critic_h_dim, (1, 7), padding='same', activation='linear'))\n critic_model.add(InstanceNormalization())\n critic_model.add(Activation(self.activation))\n critic_model.add(Flatten())\n critic_model.add(Dense(self.critic_h_dim, activation=self.activation))\n critic_model.add(Dropout(0.2))\n critic_model.add(Dense(self.label_dim, activation ='softmax'))\n\n ## Inputs\n # Features\n feature = Input(shape=(self.x_len, self.y_len, self.dim), dtype='float32')\n # Binary selection\n selection = Input(shape=(self.x_len, self.y_len, 1), dtype='float32') \n\n # Element-wise multiplication\n critic_model_input = Multiply()([feature, selection])\n y_hat = critic_model(critic_model_input)\n\n return Model([feature, selection], y_hat)\n\n\n def build_baseline(self):\n \"\"\"Build baseline.\n\n Use the feature as the input and predict labels\n \"\"\"\n baseline_model = Sequential()\n \n baseline_model.add(Conv2D(self.critic_h_dim, (1, 7), padding='same', activation='linear'))\n 
baseline_model.add(InstanceNormalization())\n baseline_model.add(Activation(self.activation))\n for _ in range(self.n_layer - 2):\n baseline_model.add(Conv2D(self.critic_h_dim, (1, 7), padding='same', activation='linear'))\n baseline_model.add(InstanceNormalization())\n baseline_model.add(Activation(self.activation))\n baseline_model.add(Flatten())\n baseline_model.add(Dense(self.critic_h_dim, activation=self.activation))\n baseline_model.add(Dropout(0.2))\n baseline_model.add(Dense(self.label_dim, activation ='softmax'))\n\n # Input\n feature = Input(shape=(self.x_len, self.y_len, self.dim), dtype='float32') \n # Output \n y_hat = baseline_model(feature)\n\n return Model(feature, y_hat)\n\n\n def train(self, x_train, y_train):\n \"\"\"Train INVASE.\n\n Args:\n - x_train: training features\n - y_train: training labels\n \"\"\"\n\n for iter_idx in range(self.iteration):\n\n ## Train critic\n # Select a random batch of samples\n idx = np.random.randint(0, x_train.shape[0], self.batch_size)\n x_batch = x_train[idx,:]\n y_batch = y_train[idx,:]\n\n # Generate a batch of selection probability\n selection_probability = self.actor.predict(x_batch) \n # Sampling the features based on the selection_probability\n selection = bernoulli_sampling(selection_probability) \n # Critic loss\n critic_loss = self.critic.train_on_batch([x_batch, selection], y_batch) \n # Critic output\n critic_out = self.critic.predict([x_batch, selection])\n\n # Baseline output\n if self.model_type == 'invase': \n # Baseline loss\n baseline_loss = self.baseline.train_on_batch(x_batch, y_batch) \n # Baseline output\n baseline_out = self.baseline.predict(x_batch)\n\n ## Train actor\n # Use multiple things as the y_true: \n # - selection, critic_out, baseline_out, and ground truth (y_batch)\n if self.model_type == 'invase':\n y_batch_final = np.concatenate((np.reshape(selection, (y_batch.shape[0], -1)), \n np.asarray(critic_out), \n np.asarray(baseline_out), \n y_batch), axis = 1)\n elif self.model_type == 'invase_minus':\n y_batch_final = np.concatenate((np.reshape(selection, (y_batch.shape[0], -1)), \n np.asarray(critic_out), \n y_batch), axis = 1)\n\n y_batch_final = y_batch_final[:, None, None, :]\n \n # Train the actor\n actor_loss = self.actor.train_on_batch(x_batch, y_batch_final)\n\n if self.model_type == 'invase':\n # Print the progress\n dialog = 'Iterations: ' + str(iter_idx) + \\\n ', critic accuracy: ' + str(critic_loss[1]) + \\\n ', baseline accuracy: ' + str(baseline_loss[1]) + \\\n ', actor loss: ' + str(np.round(actor_loss,4))\n elif self.model_type == 'invase_minus':\n # Print the progress\n dialog = 'Iterations: ' + str(iter_idx) + \\\n ', critic accuracy: ' + str(critic_loss[1]) + \\\n ', actor loss: ' + str(np.round(actor_loss,4))\n\n if iter_idx % 100 == 0:\n print(dialog)\n\n def importance_score(self, x):\n \"\"\"Return feature importance score.\n\n Args:\n - x: feature\n\n Returns:\n - feature_importance: instance-wise feature importance for x\n \"\"\" \n feature_importance = self.actor.predict(x) \n return np.asarray(feature_importance)\n\n\n def predict(self, x):\n \"\"\"Predict outcomes.\n\n Args:\n - x: feature\n\n Returns:\n - y_hat: predictions \n \"\"\" \n # Generate a batch of selection probability\n selection_probability = self.actor.predict(x) \n # Sampling the features based on the selection_probability\n selection = bernoulli_sampling(selection_probability) \n # Prediction \n y_hat = self.critic.predict([x, selection])\n\n return np.asarray(y_hat)\n",
"_____no_output_____"
],
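[
"# Minimal sanity check (added sketch, not in the original notebook):\n# bernoulli_sampling should return a 0/1 mask with the same shape as the\n# probability map it receives.\ndemo_prob = np.full((2, 1, 8, 1), 0.5)\ndemo_mask = bernoulli_sampling(demo_prob)\nassert demo_mask.shape == demo_prob.shape\nassert set(np.unique(demo_mask)) <= {0, 1}\nprint(demo_mask[0, 0, :, 0])",
"_____no_output_____"
],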
[
"#Gradient saliency/backprop visualization\n\nimport matplotlib.collections as collections\nimport operator\nimport matplotlib.pyplot as plt\n\nimport matplotlib.cm as cm\nimport matplotlib.colors as colors\n\nimport matplotlib as mpl\nfrom matplotlib.text import TextPath\nfrom matplotlib.patches import PathPatch, Rectangle\nfrom matplotlib.font_manager import FontProperties\nfrom matplotlib import gridspec\nfrom matplotlib.ticker import FormatStrFormatter\n\ndef plot_importance_scores(importance_scores, ref_seq, figsize=(12, 2), score_clip=None, sequence_template='', plot_start=0, plot_end=96) :\n\n end_pos = ref_seq.find(\"#\")\n \n fig = plt.figure(figsize=figsize)\n \n ax = plt.gca()\n \n if score_clip is not None :\n importance_scores = np.clip(np.copy(importance_scores), -score_clip, score_clip)\n \n max_score = np.max(np.sum(importance_scores[:, :], axis=0)) + 0.01\n\n for i in range(0, len(ref_seq)) :\n mutability_score = np.sum(importance_scores[:, i])\n dna_letter_at(ref_seq[i], i + 0.5, 0, mutability_score, ax)\n\n plt.sca(ax)\n plt.xlim((0, len(ref_seq)))\n plt.ylim((0, max_score))\n plt.axis('off')\n plt.yticks([0.0, max_score], [0.0, max_score], fontsize=16)\n \n for axis in fig.axes :\n axis.get_xaxis().set_visible(False)\n axis.get_yaxis().set_visible(False)\n \n plt.tight_layout()\n \n plt.show()\n",
"_____no_output_____"
],
[
"#Execute INVASE benchmark on synthetic datasets\n\nmask_penalty = 0.5#0.1\nhidden_dims = 32\nn_layers = 4\nepochs = 25\nbatch_size = 128\n\nmodel_parameters = {\n 'lamda': mask_penalty,\n 'actor_h_dim': hidden_dims, \n 'critic_h_dim': hidden_dims,\n 'n_layer': n_layers,\n 'batch_size': batch_size,\n 'iteration': int(x_train.shape[0] * epochs / batch_size), \n 'activation': 'relu', \n 'learning_rate': 0.0001\n}\n\nencoder = isol.OneHotEncoder(50)\n\nscore_clip = None\n\nallFiles = [\"optimus5_synthetic_random_insert_if_uorf_1_start_1_stop_variable_loc_512.csv\",\n \"optimus5_synthetic_random_insert_if_uorf_1_start_2_stop_variable_loc_512.csv\",\n \"optimus5_synthetic_random_insert_if_uorf_2_start_1_stop_variable_loc_512.csv\",\n \"optimus5_synthetic_random_insert_if_uorf_2_start_2_stop_variable_loc_512.csv\",\n \"optimus5_synthetic_examples_3.csv\"]\n\n#Train INVASE\ninvase_model = INVASE(x_train, pred_train, 'invase', model_parameters)\n\ninvase_model.train(x_train, pred_train) \n\nfor csv_to_open in allFiles :\n \n #Load dataset for benchmarking \n dataset_name = csv_to_open.replace(\".csv\", \"\")\n benchmarkSet = pd.read_csv(csv_to_open)\n \n seq_e_test = one_hot_encode(benchmarkSet, seq_len=50)\n x_test = seq_e_test[:, None, ...]\n \n print(x_test.shape)\n \n pred_test = predictor.predict(x_test[:, 0, ...], batch_size=32)\n y_test = pred_test\n\n y_test = (y_test >= 0.)\n y_test = np.concatenate([1. - y_test, y_test], axis=1)\n\n pred_test = (pred_test >= 0.)\n pred_test = np.concatenate([1. - pred_test, pred_test], axis=1) \n\n importance_scores_test = invase_model.importance_score(x_test)\n \n #Evaluate INVASE model on train and test data\n\n invase_pred_train = invase_model.predict(x_train)\n invase_pred_test = invase_model.predict(x_test)\n\n print(\"Training Accuracy = \" + str(np.sum(np.argmax(invase_pred_train, axis=1) == np.argmax(pred_train, axis=1)) / float(pred_train.shape[0])))\n print(\"Test Accuracy = \" + str(np.sum(np.argmax(invase_pred_test, axis=1) == np.argmax(pred_test, axis=1)) / float(pred_test.shape[0])))\n\n for plot_i in range(0, 3) :\n\n print(\"Test sequence \" + str(plot_i) + \":\")\n\n plot_dna_logo(x_test[plot_i, 0, :, :], sequence_template='N'*50, plot_sequence_template=True, figsize=(12, 1), plot_start=0, plot_end=50)\n plot_importance_scores(np.maximum(importance_scores_test[plot_i, 0, :, :].T, 0.), encoder.decode(x_test[plot_i, 0, :, :]), figsize=(12, 1), score_clip=score_clip, sequence_template='N'*50, plot_start=0, plot_end=50)\n\n #Save predicted importance scores\n\n model_name = \"invase_\" + dataset_name + \"_conv_full_data\"\n\n np.save(model_name + \"_importance_scores_test\", importance_scores_test)\n",
"Iterations: 0, critic accuracy: 0.390625, baseline accuracy: 0.609375, actor loss: -11.6949\nIterations: 100, critic accuracy: 0.6171875, baseline accuracy: 0.640625, actor loss: -1.1258\nIterations: 200, critic accuracy: 0.6484375, baseline accuracy: 0.6640625, actor loss: -0.1016\nIterations: 300, critic accuracy: 0.5546875, baseline accuracy: 0.59375, actor loss: -0.5168\nIterations: 400, critic accuracy: 0.59375, baseline accuracy: 0.5703125, actor loss: -1.1017\nIterations: 500, critic accuracy: 0.65625, baseline accuracy: 0.609375, actor loss: -0.796\nIterations: 600, critic accuracy: 0.6015625, baseline accuracy: 0.6171875, actor loss: -1.1892\nIterations: 700, critic accuracy: 0.6171875, baseline accuracy: 0.6015625, actor loss: -0.5415\nIterations: 800, critic accuracy: 0.6484375, baseline accuracy: 0.6796875, actor loss: -0.5854\nIterations: 900, critic accuracy: 0.609375, baseline accuracy: 0.578125, actor loss: -0.7383\nIterations: 1000, critic accuracy: 0.671875, baseline accuracy: 0.78125, actor loss: -2.6702\nIterations: 1100, critic accuracy: 0.6015625, baseline accuracy: 0.71875, actor loss: -1.918\nIterations: 1200, critic accuracy: 0.5703125, baseline accuracy: 0.6875, actor loss: -2.127\nIterations: 1300, critic accuracy: 0.609375, baseline accuracy: 0.6953125, actor loss: -2.5803\nIterations: 1400, critic accuracy: 0.5546875, baseline accuracy: 0.703125, actor loss: -2.9973\nIterations: 1500, critic accuracy: 0.65625, baseline accuracy: 0.8046875, actor loss: -5.6637\nIterations: 1600, critic accuracy: 0.6171875, baseline accuracy: 0.765625, actor loss: -4.6882\nIterations: 1700, critic accuracy: 0.6171875, baseline accuracy: 0.8828125, actor loss: -6.1885\nIterations: 1800, critic accuracy: 0.625, baseline accuracy: 0.875, actor loss: -6.5106\nIterations: 1900, critic accuracy: 0.640625, baseline accuracy: 0.875, actor loss: -7.0292\nIterations: 2000, critic accuracy: 0.578125, baseline accuracy: 0.8515625, actor loss: -5.8939\nIterations: 2100, critic accuracy: 0.5703125, baseline accuracy: 0.90625, actor loss: -7.6099\nIterations: 2200, critic accuracy: 0.625, baseline accuracy: 0.8671875, actor loss: -6.8492\nIterations: 2300, critic accuracy: 0.59375, baseline accuracy: 0.8984375, actor loss: -7.5056\nIterations: 2400, critic accuracy: 0.6640625, baseline accuracy: 0.921875, actor loss: -7.237\nIterations: 2500, critic accuracy: 0.625, baseline accuracy: 0.890625, actor loss: -5.3026\nIterations: 2600, critic accuracy: 0.6484375, baseline accuracy: 0.890625, actor loss: -6.5486\nIterations: 2700, critic accuracy: 0.546875, baseline accuracy: 0.8984375, actor loss: -6.223\nIterations: 2800, critic accuracy: 0.578125, baseline accuracy: 0.859375, actor loss: -6.8185\nIterations: 2900, critic accuracy: 0.5859375, baseline accuracy: 0.90625, actor loss: -6.8197\nIterations: 3000, critic accuracy: 0.625, baseline accuracy: 0.921875, actor loss: -6.7912\nIterations: 3100, critic accuracy: 0.6015625, baseline accuracy: 0.9375, actor loss: -6.8764\nIterations: 3200, critic accuracy: 0.6015625, baseline accuracy: 0.8515625, actor loss: -5.4695\nIterations: 3300, critic accuracy: 0.5703125, baseline accuracy: 0.9140625, actor loss: -6.5216\nIterations: 3400, critic accuracy: 0.609375, baseline accuracy: 0.8984375, actor loss: -5.6773\nIterations: 3500, critic accuracy: 0.6875, baseline accuracy: 0.9296875, actor loss: -5.1542\nIterations: 3600, critic accuracy: 0.59375, baseline accuracy: 0.8828125, actor loss: -5.1956\nIterations: 3700, critic accuracy: 0.625, baseline 
accuracy: 0.9296875, actor loss: -5.6007\nIterations: 3800, critic accuracy: 0.65625, baseline accuracy: 0.8828125, actor loss: -4.5584\nIterations: 3900, critic accuracy: 0.625, baseline accuracy: 0.921875, actor loss: -5.4642\nIterations: 4000, critic accuracy: 0.625, baseline accuracy: 0.9453125, actor loss: -6.1186\nIterations: 4100, critic accuracy: 0.5703125, baseline accuracy: 0.8984375, actor loss: -4.9557\nIterations: 4200, critic accuracy: 0.5703125, baseline accuracy: 0.9375, actor loss: -5.6133\nIterations: 4300, critic accuracy: 0.625, baseline accuracy: 0.9453125, actor loss: -5.0569\nIterations: 4400, critic accuracy: 0.6484375, baseline accuracy: 0.90625, actor loss: -4.8592\nIterations: 4500, critic accuracy: 0.6328125, baseline accuracy: 0.8984375, actor loss: -4.5599\nIterations: 4600, critic accuracy: 0.5390625, baseline accuracy: 0.953125, actor loss: -5.5266\nIterations: 4700, critic accuracy: 0.59375, baseline accuracy: 0.9140625, actor loss: -4.9825\nIterations: 4800, critic accuracy: 0.578125, baseline accuracy: 0.9609375, actor loss: -5.0746\nIterations: 4900, critic accuracy: 0.703125, baseline accuracy: 0.90625, actor loss: -4.6208\nIterations: 5000, critic accuracy: 0.6328125, baseline accuracy: 0.90625, actor loss: -3.8968\nIterations: 5100, critic accuracy: 0.59375, baseline accuracy: 0.9375, actor loss: -4.678\nIterations: 5200, critic accuracy: 0.609375, baseline accuracy: 0.8828125, actor loss: -3.6911\nIterations: 5300, critic accuracy: 0.546875, baseline accuracy: 0.9140625, actor loss: -4.1596\nIterations: 5400, critic accuracy: 0.65625, baseline accuracy: 0.9296875, actor loss: -3.6451\nIterations: 5500, critic accuracy: 0.5859375, baseline accuracy: 0.9453125, actor loss: -4.4246\nIterations: 5600, critic accuracy: 0.6328125, baseline accuracy: 0.96875, actor loss: -4.284\nIterations: 5700, critic accuracy: 0.625, baseline accuracy: 0.9296875, actor loss: -4.0006\nIterations: 5800, critic accuracy: 0.5390625, baseline accuracy: 0.9375, actor loss: -4.5173\nIterations: 5900, critic accuracy: 0.65625, baseline accuracy: 0.9296875, actor loss: -3.8275\nIterations: 6000, critic accuracy: 0.6015625, baseline accuracy: 0.953125, actor loss: -4.0728\nIterations: 6100, critic accuracy: 0.65625, baseline accuracy: 0.9609375, actor loss: -4.1116\nIterations: 6200, critic accuracy: 0.6640625, baseline accuracy: 0.96875, actor loss: -3.5808\nIterations: 6300, critic accuracy: 0.6015625, baseline accuracy: 0.9296875, actor loss: -3.7118\nIterations: 6400, critic accuracy: 0.65625, baseline accuracy: 0.9375, actor loss: -3.3352\nIterations: 6500, critic accuracy: 0.640625, baseline accuracy: 0.96875, actor loss: -4.1158\nIterations: 6600, critic accuracy: 0.7109375, baseline accuracy: 0.9609375, actor loss: -3.5784\nIterations: 6700, critic accuracy: 0.6328125, baseline accuracy: 0.9140625, actor loss: -3.5007\nIterations: 6800, critic accuracy: 0.4921875, baseline accuracy: 0.9453125, actor loss: -4.053\nIterations: 6900, critic accuracy: 0.6015625, baseline accuracy: 0.9453125, actor loss: -3.4824\nIterations: 7000, critic accuracy: 0.6640625, baseline accuracy: 0.9140625, actor loss: -3.1074\nIterations: 7100, critic accuracy: 0.546875, baseline accuracy: 0.953125, actor loss: -3.521\nIterations: 7200, critic accuracy: 0.5859375, baseline accuracy: 0.9140625, actor loss: -2.9394\nIterations: 7300, critic accuracy: 0.609375, baseline accuracy: 0.953125, actor loss: -3.8403\nIterations: 7400, critic accuracy: 0.6328125, baseline accuracy: 0.9140625, actor loss: 
-2.6146\nIterations: 7500, critic accuracy: 0.59375, baseline accuracy: 0.9375, actor loss: -3.491\nIterations: 7600, critic accuracy: 0.6328125, baseline accuracy: 0.9375, actor loss: -2.9986\nIterations: 7700, critic accuracy: 0.625, baseline accuracy: 0.9609375, actor loss: -3.3308\nIterations: 7800, critic accuracy: 0.625, baseline accuracy: 0.9296875, actor loss: -3.4045\nIterations: 7900, critic accuracy: 0.59375, baseline accuracy: 0.9375, actor loss: -3.2892\nIterations: 8000, critic accuracy: 0.5859375, baseline accuracy: 0.9453125, actor loss: -2.9703\nIterations: 8100, critic accuracy: 0.625, baseline accuracy: 0.90625, actor loss: -2.4756\nIterations: 8200, critic accuracy: 0.578125, baseline accuracy: 0.9375, actor loss: -3.5445\nIterations: 8300, critic accuracy: 0.6015625, baseline accuracy: 0.953125, actor loss: -3.3239\nIterations: 8400, critic accuracy: 0.640625, baseline accuracy: 0.9375, actor loss: -3.0634\nIterations: 8500, critic accuracy: 0.59375, baseline accuracy: 0.9609375, actor loss: -3.2403\nIterations: 8600, critic accuracy: 0.6484375, baseline accuracy: 0.9765625, actor loss: -3.0564\nIterations: 8700, critic accuracy: 0.625, baseline accuracy: 0.90625, actor loss: -2.5869\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d040c5ed1866698e04536e31afd7d84035146fcf | 617,194 | ipynb | Jupyter Notebook | 08-Time-Series-Forecasting/03_moving_average.ipynb | iVibudh/TensorFlow-for-DeepLearning | 7d0b8e26d0051cfeec4b368c05cf5aea3a578126 | [
"MIT"
] | null | null | null | 08-Time-Series-Forecasting/03_moving_average.ipynb | iVibudh/TensorFlow-for-DeepLearning | 7d0b8e26d0051cfeec4b368c05cf5aea3a578126 | [
"MIT"
] | null | null | null | 08-Time-Series-Forecasting/03_moving_average.ipynb | iVibudh/TensorFlow-for-DeepLearning | 7d0b8e26d0051cfeec4b368c05cf5aea3a578126 | [
"MIT"
] | null | null | null | 851.302069 | 106,797 | 0.950098 | [
[
[
"<a href=\"https://colab.research.google.com/github/iVibudh/TensorFlow-for-DeepLearning/blob/main/08-Time-Series-Forecasting/moving_average.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Moving average",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c03_moving_average.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c03_moving_average.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"## Setup",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\n\nkeras = tf.keras",
"_____no_output_____"
],
[
"def plot_series(time, series, format=\"-\", start=0, end=None, label=None):\n plt.plot(time[start:end], series[start:end], format, label=label)\n plt.xlabel(\"Time\")\n plt.ylabel(\"Value\")\n if label:\n plt.legend(fontsize=14)\n plt.grid(True)\n \ndef trend(time, slope=0):\n return slope * time\n\ndef seasonal_pattern(season_time):\n \"\"\"Just an arbitrary pattern, you can change it if you wish\"\"\"\n return np.where(season_time < 0.4,\n np.cos(season_time * 2 * np.pi),\n 1 / np.exp(3 * season_time))\n\ndef seasonality(time, period, amplitude=1, phase=0):\n \"\"\"Repeats the same pattern at each period\"\"\"\n season_time = ((time + phase) % period) / period\n return amplitude * seasonal_pattern(season_time)\n\ndef white_noise(time, noise_level=1, seed=None):\n rnd = np.random.RandomState(seed)\n return rnd.randn(len(time)) * noise_level",
"_____no_output_____"
]
],
[
[
"## Trend and Seasonality",
"_____no_output_____"
]
],
[
[
"time = np.arange(4 * 365 + 1)\n\nslope = 0.05\nbaseline = 10\namplitude = 40\nseries = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)\n\nnoise_level = 5\nnoise = white_noise(time, noise_level, seed=42)\n\nseries += noise\n\nplt.figure(figsize=(10, 6))\nplot_series(time, series)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Naive Forecast",
"_____no_output_____"
]
],
[
[
"split_time = 1000\ntime_train = time[:split_time]\nx_train = series[:split_time]\ntime_valid = time[split_time:]\nx_valid = series[split_time:]\n\nnaive_forecast = series[split_time - 1:-1]\n\nplt.figure(figsize=(10, 6))\nplot_series(time_valid, x_valid, start=0, end=150, label=\"Series\")\nplot_series(time_valid, naive_forecast, start=1, end=151, label=\"Forecast\")",
"_____no_output_____"
]
],
[
[
"Now let's compute the mean absolute error between the forecasts and the predictions in the validation period:",
"_____no_output_____"
]
],
[
[
"keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy()",
"_____no_output_____"
]
],
[
[
"That's our baseline, now let's try a moving average.",
"_____no_output_____"
],
[
"## Moving Average",
"_____no_output_____"
]
],
[
[
"def moving_average_forecast(series, window_size):\n \"\"\"Forecasts the mean of the last few values.\n If window_size=1, then this is equivalent to naive forecast\"\"\"\n forecast = []\n for time in range(len(series) - window_size):\n forecast.append(series[time:time + window_size].mean())\n return np.array(forecast)",
"_____no_output_____"
],
[
"def moving_average_forecast(series, window_size):\n \"\"\"Forecasts the mean of the last few values.\n If window_size=1, then this is equivalent to naive forecast\n This implementation is *much* faster than the previous one\"\"\"\n mov = np.cumsum(series)\n mov[window_size:] = mov[window_size:] - mov[:-window_size]\n return mov[window_size - 1:-1] / window_size",
"_____no_output_____"
],
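[
"# Sanity check (added sketch): the vectorized implementation above should match\n# a direct rolling-mean computation on a small example.\nsmall_series = np.arange(10, dtype=np.float64)\nexpected = np.array([small_series[t:t + 3].mean() for t in range(len(small_series) - 3)])\nnp.testing.assert_allclose(moving_average_forecast(small_series, 3), expected)\nprint('fast implementation matches the naive rolling mean')",
"_____no_output_____"
],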
[
"moving_avg = moving_average_forecast(series, 30)[split_time - 30:]\n\nplt.figure(figsize=(10, 6))\nplot_series(time_valid, x_valid, label=\"Series\")\nplot_series(time_valid, moving_avg, label=\"Moving average (30 days)\")",
"_____no_output_____"
],
[
"keras.metrics.mean_absolute_error(x_valid, moving_avg).numpy()",
"_____no_output_____"
]
],
[
[
"That's worse than naive forecast! The moving average does not anticipate trend or seasonality, so let's try to remove them by using differencing. Since the seasonality period is 365 days, we will subtract the value at time *t* – 365 from the value at time *t*.",
"_____no_output_____"
]
],
[
[
"time_a = np.array(range(1, 21))\na = np.array([1.1, 1.5, 1.6, 1.4, 1.5, 1.6,1.7, 1.8, 1.9, 2, \n 2.1, 2.5, 2.6, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3])\nprint(\"a: \", a, len(a))\nprint(\"time_a: \", time_a, len (time_a))\n\nma_a = moving_average_forecast(a, 10)\nprint(\"ma_a: \", ma_a)\n\ndiff_a = (a[10:] - a[:-10]) ### This removes TREND + SEASONALITY, so, Only Noise is left\ndiff_time = time_a[10:]\nprint(\"diff_time: \", diff_time)\nprint(\"diff_a: \", diff_a)",
"a: [1.1 1.5 1.6 1.4 1.5 1.6 1.7 1.8 1.9 2. 2.1 2.5 2.6 2.4 2.5 2.6 2.7 2.8\n 2.9 3. ] 20\ntime_a: [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20] 20\nma_a: [1.61 1.71 1.81 1.91 2.01 2.11 2.21 2.31 2.41 2.51]\ndiff_time: [11 12 13 14 15 16 17 18 19 20]\ndiff_a: [1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]\n"
],
[
"a[10:]",
"_____no_output_____"
],
[
"a[:-10]",
"_____no_output_____"
],
[
"diff_series = (series[365:] - series[:-365])\ndiff_time = time[365:]\n\nplt.figure(figsize=(10, 6))\nplot_series(diff_time, diff_series, label=\"Series(t) – Series(t–365)\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"Focusing on the validation period:",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(10, 6))\nplot_series(time_valid, diff_series[split_time - 365:], label=\"Series(t) – Series(t–365)\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"Great, the trend and seasonality seem to be gone, so now we can use the moving average:",
"_____no_output_____"
]
],
[
[
"diff_moving_avg = moving_average_forecast(diff_series, 50)[split_time - 365 - 50:]\n\nplt.figure(figsize=(10, 6))\nplot_series(time_valid, diff_series[split_time - 365:], label=\"Series(t) – Series(t–365)\")\nplot_series(time_valid, diff_moving_avg, label=\"Moving Average of Diff\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"Now let's bring back the trend and seasonality by adding the past values from t – 365:",
"_____no_output_____"
]
],
[
[
"diff_moving_avg_plus_past = series[split_time - 365:-365] + diff_moving_avg\n\nplt.figure(figsize=(10, 6))\nplot_series(time_valid, x_valid, label=\"Series\")\nplot_series(time_valid, diff_moving_avg_plus_past, label=\"Forecasts\")\nplt.show()",
"_____no_output_____"
],
[
"keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_past).numpy()",
"_____no_output_____"
]
],
[
[
"Better than naive forecast, good. However the forecasts look a bit too random, because we're just adding past values, which were noisy. Let's use a moving averaging on past values to remove some of the noise:",
"_____no_output_____"
]
],
[
[
"diff_moving_avg_plus_smooth_past = moving_average_forecast(series[split_time - 370:-359], 11) + diff_moving_avg\n# moving_average_forecast(series[split_time - 370:-359], 11) = Past series have been smoothed to reduce the effects of the past noise\n\nplt.figure(figsize=(10, 6))\nplot_series(time_valid, x_valid, label=\"Series\")\nplot_series(time_valid, diff_moving_avg_plus_smooth_past, label=\"Forecasts\")\nplt.show()",
"_____no_output_____"
],
[
"keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_smooth_past).numpy()",
"_____no_output_____"
]
],
[
[
"That's starting to look pretty good! Let's see if we can do better with a Machine Learning model.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d040d50fb00b6afc3b739020b2edb73a3d64d1d6 | 25,500 | ipynb | Jupyter Notebook | week4_approx/practice_approx_qlearning.ipynb | Innuendo1975/Practical_RL | 0aeec50ac838042124a5902c302a85800d92421f | [
"MIT"
] | null | null | null | week4_approx/practice_approx_qlearning.ipynb | Innuendo1975/Practical_RL | 0aeec50ac838042124a5902c302a85800d92421f | [
"MIT"
] | null | null | null | week4_approx/practice_approx_qlearning.ipynb | Innuendo1975/Practical_RL | 0aeec50ac838042124a5902c302a85800d92421f | [
"MIT"
] | 1 | 2019-11-24T00:54:58.000Z | 2019-11-24T00:54:58.000Z | 42.14876 | 6,536 | 0.676706 | [
[
[
"# Approximate q-learning\n\nIn this notebook you will teach a __tensorflow__ neural network to do Q-learning.",
"_____no_output_____"
],
[
"__Frameworks__ - we'll accept this homework in any deep learning framework. This particular notebook was designed for tensorflow, but you will find it easy to adapt it to almost any python-based deep learning framework.",
"_____no_output_____"
]
],
[
[
"#XVFB will be launched if you run on a server\nimport os\nif os.environ.get(\"DISPLAY\") is not str or len(os.environ.get(\"DISPLAY\"))==0:\n !bash ../xvfb start\n %env DISPLAY=:1",
"Starting virtual X frame buffer: Xvfb../xvfb: line 12: start-stop-daemon: command not found\n.\nenv: DISPLAY=:1\n"
],
[
"import gym\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"env = gym.make(\"CartPole-v0\").env\nenv.reset()\nn_actions = env.action_space.n\nstate_dim = env.observation_space.shape\n\nplt.imshow(env.render(\"rgb_array\"))",
"_____no_output_____"
],
[
"state_dim",
"_____no_output_____"
]
],
[
[
"# Approximate (deep) Q-learning: building the network\n\nTo train a neural network policy one must have a neural network policy. Let's build it.\n\n\nSince we're working with a pre-extracted features (cart positions, angles and velocities), we don't need a complicated network yet. In fact, let's build something like this for starters:\n\n\n\nFor your first run, please only use linear layers (L.Dense) and activations. Stuff like batch normalization or dropout may ruin everything if used haphazardly. \n\nAlso please avoid using nonlinearities like sigmoid & tanh: agent's observations are not normalized so sigmoids may become saturated from init.\n\nIdeally you should start small with maybe 1-2 hidden layers with < 200 neurons and then increase network size if agent doesn't beat the target score.",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nimport keras\nimport keras.layers as L\ntf.reset_default_graph()\nsess = tf.InteractiveSession()\nkeras.backend.set_session(sess)",
"Using TensorFlow backend.\n"
],
[
"network = keras.models.Sequential()\nnetwork.add(L.InputLayer(state_dim))\n\n# let's create a network for approximate q-learning following guidelines above\n# <YOUR CODE: stack more layers!!!1 >\nnetwork.add(L.Dense(128, activation='relu'))\nnetwork.add(L.Dense(196, activation='relu'))\nnetwork.add(L.Dense(n_actions, activation='linear'))",
"_____no_output_____"
],
[
"import numpy as np\nimport random\n\n\ndef get_action(state, epsilon=0):\n \"\"\"\n sample actions with epsilon-greedy policy\n recap: with p = epsilon pick random action, else pick action with highest Q(s,a)\n \"\"\"\n \n q_values = network.predict(state[None])[0]\n n = len(q_values)\n if random.random() < epsilon:\n return random.randint(0, n - 1)\n else:\n return np.argmax(q_values)\n\n# return <epsilon-greedily selected action>\n",
"_____no_output_____"
],
[
"assert network.output_shape == (None, n_actions), \"please make sure your model maps state s -> [Q(s,a0), ..., Q(s, a_last)]\"\nassert network.layers[-1].activation == keras.activations.linear, \"please make sure you predict q-values without nonlinearity\"\n\n# test epsilon-greedy exploration\ns = env.reset()\nassert np.shape(get_action(s)) == (), \"please return just one action (integer)\"\nfor eps in [0., 0.1, 0.5, 1.0]:\n state_frequencies = np.bincount([get_action(s, epsilon=eps) for i in range(10000)], minlength=n_actions)\n best_action = state_frequencies.argmax()\n assert abs(state_frequencies[best_action] - 10000 * (1 - eps + eps / n_actions)) < 200\n for other_action in range(n_actions):\n if other_action != best_action:\n assert abs(state_frequencies[other_action] - 10000 * (eps / n_actions)) < 200\n print('e=%.1f tests passed'%eps)",
"e=0.0 tests passed\ne=0.1 tests passed\ne=0.5 tests passed\ne=1.0 tests passed\n"
]
],
[
[
"### Q-learning via gradient descent\n\nWe shall now train our agent's Q-function by minimizing the TD loss:\n$$ L = { 1 \\over N} \\sum_i (Q_{\\theta}(s,a) - [r(s,a) + \\gamma \\cdot max_{a'} Q_{-}(s', a')]) ^2 $$\n\n\nWhere\n* $s, a, r, s'$ are current state, action, reward and next state respectively\n* $\\gamma$ is a discount factor defined two cells above.\n\nThe tricky part is with $Q_{-}(s',a')$. From an engineering standpoint, it's the same as $Q_{\\theta}$ - the output of your neural network policy. However, when doing gradient descent, __we won't propagate gradients through it__ to make training more stable (see lectures).\n\nTo do so, we shall use `tf.stop_gradient` function which basically says \"consider this thing constant when doingbackprop\".",
"_____no_output_____"
]
],
[
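[
"# Quick illustration (added sketch, not part of the original assignment):\n# tf.stop_gradient makes a tensor behave like a constant during backprop.\ndemo_x = tf.Variable(3.0)\ndemo_y = demo_x ** 2\nprint(tf.gradients(demo_y, [demo_x])) # a gradient tensor exists\nprint(tf.gradients(tf.stop_gradient(demo_y), [demo_x])) # [None]: gradient is blocked",
"_____no_output_____"
],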
[
"# Create placeholders for the <s, a, r, s'> tuple and a special indicator for game end (is_done = True)\nstates_ph = keras.backend.placeholder(dtype='float32', shape=(None,) + state_dim)\nactions_ph = keras.backend.placeholder(dtype='int32', shape=[None])\nrewards_ph = keras.backend.placeholder(dtype='float32', shape=[None])\nnext_states_ph = keras.backend.placeholder(dtype='float32', shape=(None,) + state_dim)\nis_done_ph = keras.backend.placeholder(dtype='bool', shape=[None])",
"_____no_output_____"
],
[
"#get q-values for all actions in current states\npredicted_qvalues = network(states_ph)\n\n#select q-values for chosen actions\npredicted_qvalues_for_actions = tf.reduce_sum(predicted_qvalues * tf.one_hot(actions_ph, n_actions), axis=1)",
"_____no_output_____"
],
[
"gamma = 0.99\n\n# compute q-values for all actions in next states\n# predicted_next_qvalues = <YOUR CODE - apply network to get q-values for next_states_ph>\npredicted_next_qvalues = network(next_states_ph)\n\n# compute V*(next_states) using predicted next q-values\nnext_state_values = tf.reduce_max(predicted_next_qvalues, axis=1)\n\n# compute \"target q-values\" for loss - it's what's inside square parentheses in the above formula.\ntarget_qvalues_for_actions = rewards_ph + gamma * next_state_values\n\n# at the last state we shall use simplified formula: Q(s,a) = r(s,a) since s' doesn't exist\ntarget_qvalues_for_actions = tf.where(is_done_ph, rewards_ph, target_qvalues_for_actions)",
"_____no_output_____"
],
[
"#mean squared error loss to minimize\nloss = (predicted_qvalues_for_actions - tf.stop_gradient(target_qvalues_for_actions)) ** 2\nloss = tf.reduce_mean(loss)\n\n# training function that resembles agent.update(state, action, reward, next_state) from tabular agent\ntrain_step = tf.train.AdamOptimizer(1e-4).minimize(loss)",
"_____no_output_____"
],
[
"assert tf.gradients(loss, [predicted_qvalues_for_actions])[0] is not None, \"make sure you update q-values for chosen actions and not just all actions\"\nassert tf.gradients(loss, [predicted_next_qvalues])[0] is None, \"make sure you don't propagate gradient w.r.t. Q_(s',a')\"\nassert predicted_next_qvalues.shape.ndims == 2, \"make sure you predicted q-values for all actions in next state\"\nassert next_state_values.shape.ndims == 1, \"make sure you computed V(s') as maximum over just the actions axis and not all axes\"\nassert target_qvalues_for_actions.shape.ndims == 1, \"there's something wrong with target q-values, they must be a vector\"",
"_____no_output_____"
]
],
[
[
"### Playing the game",
"_____no_output_____"
]
],
[
[
"def generate_session(t_max=1000, epsilon=0, train=False):\n \"\"\"play env with approximate q-learning agent and train it at the same time\"\"\"\n total_reward = 0\n s = env.reset()\n \n for t in range(t_max):\n a = get_action(s, epsilon=epsilon) \n next_s, r, done, _ = env.step(a)\n \n if train:\n sess.run(train_step,{\n states_ph: [s], actions_ph: [a], rewards_ph: [r], \n next_states_ph: [next_s], is_done_ph: [done]\n })\n\n total_reward += r\n s = next_s\n if done: break\n \n return total_reward",
"_____no_output_____"
],
[
"epsilon = 0.5",
"_____no_output_____"
],
[
"for i in range(1000):\n session_rewards = [generate_session(epsilon=epsilon, train=True) for _ in range(100)]\n print(\"epoch #{}\\tmean reward = {:.3f}\\tepsilon = {:.3f}\".format(i, np.mean(session_rewards), epsilon))\n \n epsilon *= 0.99\n assert epsilon >= 1e-4, \"Make sure epsilon is always nonzero during training\"\n \n if np.mean(session_rewards) > 300:\n print (\"You Win!\")\n break\n",
"epoch #0\tmean reward = 16.740\tepsilon = 0.500\nepoch #1\tmean reward = 16.610\tepsilon = 0.495\nepoch #2\tmean reward = 14.050\tepsilon = 0.490\nepoch #3\tmean reward = 15.400\tepsilon = 0.485\nepoch #4\tmean reward = 14.950\tepsilon = 0.480\nepoch #5\tmean reward = 19.480\tepsilon = 0.475\nepoch #6\tmean reward = 16.150\tepsilon = 0.471\nepoch #7\tmean reward = 17.380\tepsilon = 0.466\nepoch #8\tmean reward = 15.730\tepsilon = 0.461\nepoch #9\tmean reward = 25.080\tepsilon = 0.457\nepoch #10\tmean reward = 29.930\tepsilon = 0.452\nepoch #11\tmean reward = 37.970\tepsilon = 0.448\nepoch #12\tmean reward = 45.640\tepsilon = 0.443\nepoch #13\tmean reward = 46.730\tepsilon = 0.439\nepoch #14\tmean reward = 43.390\tepsilon = 0.434\nepoch #15\tmean reward = 79.620\tepsilon = 0.430\nepoch #16\tmean reward = 92.650\tepsilon = 0.426\nepoch #17\tmean reward = 125.850\tepsilon = 0.421\nepoch #18\tmean reward = 100.480\tepsilon = 0.417\nepoch #19\tmean reward = 150.340\tepsilon = 0.413\nepoch #20\tmean reward = 161.240\tepsilon = 0.409\nepoch #21\tmean reward = 131.490\tepsilon = 0.405\nepoch #22\tmean reward = 184.870\tepsilon = 0.401\nepoch #23\tmean reward = 191.210\tepsilon = 0.397\nepoch #24\tmean reward = 247.040\tepsilon = 0.393\nepoch #25\tmean reward = 207.300\tepsilon = 0.389\nepoch #26\tmean reward = 249.010\tepsilon = 0.385\nepoch #27\tmean reward = 252.960\tepsilon = 0.381\nepoch #28\tmean reward = 276.790\tepsilon = 0.377\nepoch #29\tmean reward = 376.700\tepsilon = 0.374\nYou Win!\n"
]
],
[
[
"### How to interpret results\n\n\nWelcome to the f.. world of deep f...n reinforcement learning. Don't expect agent's reward to smoothly go up. Hope for it to go increase eventually. If it deems you worthy.\n\nSeriously though,\n* __ mean reward__ is the average reward per game. For a correct implementation it may stay low for some 10 epochs, then start growing while oscilating insanely and converges by ~50-100 steps depending on the network architecture. \n* If it never reaches target score by the end of for loop, try increasing the number of hidden neurons or look at the epsilon.\n* __ epsilon__ - agent's willingness to explore. If you see that agent's already at < 0.01 epsilon before it's is at least 200, just reset it back to 0.1 - 0.5.",
"_____no_output_____"
],
[
"### Record videos\n\nAs usual, we now use `gym.wrappers.Monitor` to record a video of our agent playing the game. Unlike our previous attempts with state binarization, this time we expect our agent to act ~~(or fail)~~ more smoothly since there's no more binarization error at play.\n\nAs you already did with tabular q-learning, we set epsilon=0 for final evaluation to prevent agent from exploring himself to death.",
"_____no_output_____"
]
],
[
[
"#record sessions\nimport gym.wrappers\nenv = gym.wrappers.Monitor(gym.make(\"CartPole-v0\"),directory=\"videos\",force=True)\nsessions = [generate_session(epsilon=0, train=False) for _ in range(100)]\nenv.close()\n",
"_____no_output_____"
],
[
"#show video\nfrom IPython.display import HTML\nimport os\n\nvideo_names = list(filter(lambda s:s.endswith(\".mp4\"),os.listdir(\"./videos/\")))\n\nHTML(\"\"\"\n<video width=\"640\" height=\"480\" controls>\n <source src=\"{}\" type=\"video/mp4\">\n</video>\n\"\"\".format(\"./videos/\"+video_names[-1])) #this may or may not be _last_ video. Try other indices",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
],
[
"### Submit to coursera",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"from submit2 import submit_cartpole\nsubmit_cartpole(generate_session, '[email protected]', 'gMmaBajRboD6YXKK')",
"Submitted to Coursera platform. See results on assignment page!\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
d040e82f601ddd907df1960083df81789a5a6af7 | 247,456 | ipynb | Jupyter Notebook | api-examples/ndbc-wavewatch-iii.ipynb | evan1997123/PlanetOSDatathon | 91c324619c05f28dcacc622d97369813926fb52d | [
"MIT"
] | 1 | 2020-05-12T19:47:47.000Z | 2020-05-12T19:47:47.000Z | api-examples/ndbc-wavewatch-iii.ipynb | evan1997123/PlanetOSDatathon | 91c324619c05f28dcacc622d97369813926fb52d | [
"MIT"
] | null | null | null | api-examples/ndbc-wavewatch-iii.ipynb | evan1997123/PlanetOSDatathon | 91c324619c05f28dcacc622d97369813926fb52d | [
"MIT"
] | null | null | null | 545.057269 | 152,242 | 0.933168 | [
[
[
"# NOAA Wave Watch 3 and NDBC Buoy Data Comparison",
"_____no_output_____"
],
[
"*Note: this notebook requires python3.*\n\nThis notebook demostrates how to compare [WaveWatch III Global Ocean Wave Model](http://data.planetos.com/datasets/noaa_ww3_global_1.25x1d:noaa-wave-watch-iii-nww3-ocean-wave-model?utm_source=github&utm_medium=notebook&utm_campaign=ndbc-wavewatch-iii-notebook) and [NOAA NDBC buoy data](http://data.planetos.com/datasets/noaa_ndbc_stdmet_stations?utm_source=github&utm_medium=notebook&utm_campaign=ndbc-wavewatch-iii-notebook) using the Planet OS API.\n\nAPI documentation is available at http://docs.planetos.com. If you have questions or comments, join the [Planet OS Slack community](http://slack.planetos.com/) to chat with our development team.\n\n\nFor general information on usage of IPython/Jupyter and Matplotlib, please refer to their corresponding documentation. https://ipython.org/ and http://matplotlib.org/. This notebook also makes use of the [matplotlib basemap toolkit.](http://matplotlib.org/basemap/index.html)",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport dateutil.parser\nimport datetime\nfrom urllib.request import urlopen, Request\nimport simplejson as json\nfrom datetime import date, timedelta, datetime\nimport matplotlib.dates as mdates\nfrom mpl_toolkits.basemap import Basemap",
"_____no_output_____"
]
],
[
[
"**Important!** You'll need to replace apikey below with your actual Planet OS API key, which you'll find [on the Planet OS account settings page.](#http://data.planetos.com/account/settings/?utm_source=github&utm_medium=notebook&utm_campaign=ww3-api-notebook) and NDBC buoy station name in which you are intrested.",
"_____no_output_____"
]
],
[
[
"dataset_id = 'noaa_ndbc_stdmet_stations'\n## stations with wave height available: '46006', '46013', '46029'\n## stations without wave height: icac1', '41047', 'bepb6', '32st0', '51004'\n## stations too close to coastline (no point to compare to ww3)'sacv4', 'gelo1', 'hcef1'\nstation = '46029'\napikey = open('APIKEY').readlines()[0].strip() #'<YOUR API KEY HERE>'",
"_____no_output_____"
]
],
[
[
"Let's first query the API to see what stations are available for the [NDBC Standard Meteorological Data dataset.](http://data.planetos.com/datasets/noaa_ndbc_stdmet_stations?utm_source=github&utm_medium=notebook&utm_campaign=ndbc-wavewatch-iii-notebook)",
"_____no_output_____"
]
],
[
[
"API_url = 'http://api.planetos.com/v1/datasets/%s/stations?apikey=%s' % (dataset_id, apikey)\nrequest = Request(API_url)\nresponse = urlopen(request)\nAPI_data_locations = json.loads(response.read())\n# print(API_data_locations)",
"_____no_output_____"
]
],
[
[
"Now we'll use matplotlib to visualize the stations on a simple basemap.",
"_____no_output_____"
]
],
[
[
"m = Basemap(projection='merc',llcrnrlat=-80,urcrnrlat=80,\\\n llcrnrlon=-180,urcrnrlon=180,lat_ts=20,resolution='c')\nfig=plt.figure(figsize=(15,10))\nm.drawcoastlines()\n##m.fillcontinents()\nfor i in API_data_locations['station']:\n x,y=m(API_data_locations['station'][i]['SpatialExtent']['coordinates'][0],\n API_data_locations['station'][i]['SpatialExtent']['coordinates'][1])\n plt.scatter(x,y,color='r')\nx,y=m(API_data_locations['station'][station]['SpatialExtent']['coordinates'][0],\n API_data_locations['station'][station]['SpatialExtent']['coordinates'][1])\nplt.scatter(x,y,s=100,color='b')",
"_____no_output_____"
]
],
[
[
"Let's examine the last five days of data. For the WaveWatch III forecast, we'll use the reference time parameter to pull forecast data from the 18:00 model run from five days ago.",
"_____no_output_____"
]
],
[
[
"## Find suitable reference time values\natthemoment = datetime.utcnow()\natthemoment = atthemoment.strftime('%Y-%m-%dT%H:%M:%S') \n\nbefore5days = datetime.utcnow() - timedelta(days=5)\nbefore5days_long = before5days.strftime('%Y-%m-%dT%H:%M:%S')\nbefore5days_short = before5days.strftime('%Y-%m-%d')\n\nstart = before5days_long\nend = atthemoment\n\nreftime_start = str(before5days_short) + 'T18:00:00'\nreftime_end = reftime_start",
"_____no_output_____"
]
],
[
[
"API request for NOAA NDBC buoy station data",
"_____no_output_____"
]
],
[
[
"API_url = \"http://api.planetos.com/v1/datasets/{0}/point?station={1}&apikey={2}&start={3}&end={4}&count=1000\".format(dataset_id,station,apikey,start,end)\nprint(API_url)",
"http://api.planetos.com/v1/datasets/noaa_ndbc_stdmet_stations/point?station=46029&apikey=8428878e4b944abeb84790e832c633fc&start=2018-06-30T08:33:58&end=2018-07-05T08:33:58&count=1000\n"
],
[
"request = Request(API_url)\nresponse = urlopen(request)\nAPI_data_buoy = json.loads(response.read())",
"_____no_output_____"
],
[
"buoy_variables = []\nfor k,v in set([(j,i['context']) for i in API_data_buoy['entries'] for j in i['data'].keys()]):\n buoy_variables.append(k)",
"_____no_output_____"
]
],
[
[
"Find buoy station coordinates to use them later for finding NOAA Wave Watch III data",
"_____no_output_____"
]
],
[
[
"for i in API_data_buoy['entries']:\n #print(i['axes']['time'])\n if i['context'] == 'time_latitude_longitude':\n longitude = (i['axes']['longitude'])\n latitude = (i['axes']['latitude'])\n\nprint ('Latitude: '+ str(latitude))\nprint ('Longitude: '+ str(longitude))\n",
"Latitude: 46.159000396728516\nLongitude: -124.51399993896484\n"
]
],
[
[
"API request for NOAA WaveWatch III (NWW3) Ocean Wave Model near the point of selected station. Note that data may not be available at the requested reference time. If the response is empty, try removing the reference time parameters `reftime_start` and `reftime_end` from the query.",
"_____no_output_____"
]
],
[
[
"API_url = 'http://api.planetos.com/v1/datasets/noaa_ww3_global_1.25x1d/point?lat={0}&lon={1}&verbose=true&apikey={2}&count=100&end={3}&reftime_start={4}&reftime_end={5}'.format(latitude,longitude,apikey,end,reftime_start,reftime_end)\nrequest = Request(API_url)\nresponse = urlopen(request)\nAPI_data_ww3 = json.loads(response.read())\nprint(API_url)",
"http://api.planetos.com/v1/datasets/noaa_ww3_global_1.25x1d/point?lat=46.159000396728516&lon=-124.51399993896484&verbose=true&apikey=8428878e4b944abeb84790e832c633fc&count=100&end=2018-07-05T08:33:58&reftime_start=2018-06-30T18:00:00&reftime_end=2018-06-30T18:00:00\n"
],
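[
"# Optional fallback (added sketch): data may be unavailable at the requested\n# reference time, in which case the response above has no entries. If so, retry\n# the same query without the reftime_start/reftime_end parameters.\nif len(API_data_ww3['entries']) == 0:\n    API_url = 'http://api.planetos.com/v1/datasets/noaa_ww3_global_1.25x1d/point?lat={0}&lon={1}&verbose=true&apikey={2}&count=100&end={3}'.format(latitude,longitude,apikey,end)\n    API_data_ww3 = json.loads(urlopen(Request(API_url)).read())",
"_____no_output_____"
],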
[
"ww3_variables = []\nfor k,v in set([(j,i['context']) for i in API_data_ww3['entries'] for j in i['data'].keys()]):\n ww3_variables.append(k)",
"_____no_output_____"
]
],
[
[
"Manually review the list of WaveWatch and NDBC data variables to determine which parameters are equivalent for comparison.",
"_____no_output_____"
]
],
[
[
"print(ww3_variables)\nprint(buoy_variables)",
"['Wind_speed_surface', 'v-component_of_wind_surface', 'u-component_of_wind_surface', 'Primary_wave_direction_surface', 'Significant_height_of_combined_wind_waves_and_swell_surface', 'Significant_height_of_swell_waves_ordered_sequence_of_data', 'Mean_period_of_swell_waves_ordered_sequence_of_data', 'Mean_period_of_wind_waves_surface', 'Direction_of_swell_waves_ordered_sequence_of_data', 'Primary_wave_mean_period_surface', 'Secondary_wave_direction_surface', 'Secondary_wave_mean_period_surface', 'Wind_direction_from_which_blowing_surface', 'Direction_of_wind_waves_surface', 'Significant_height_of_wind_waves_surface']\n['wind_spd', 'wave_height', 'air_temperature', 'dewpt_temperature', 'mean_wave_dir', 'average_wpd', 'dominant_wpd', 'sea_surface_temperature', 'air_pressure', 'wind_dir', 'water_level', 'visibility', 'gust']\n"
]
],
[
[
"Next we'll build a dictionary of corresponding variables that we want to compare.",
"_____no_output_____"
]
],
[
[
"buoy_model = {'wave_height':'Significant_height_of_combined_wind_waves_and_swell_surface',\n 'mean_wave_dir':'Primary_wave_direction_surface',\n 'average_wpd':'Primary_wave_mean_period_surface',\n 'wind_spd':'Wind_speed_surface'}",
"_____no_output_____"
]
],
[
[
"Read data from the JSON responses and convert the values to floats for plotting. Note that depending on the dataset, some variables have different timesteps than others, so a separate time array for each variable is recommended.",
"_____no_output_____"
]
],
[
[
"def append_data(in_string):\n if in_string == None:\n return np.nan\n elif in_string == 'None':\n return np.nan\n else:\n return float(in_string)\n\nww3_data = {}\nww3_times = {}\nbuoy_data = {}\nbuoy_times = {}\nfor k,v in buoy_model.items():\n ww3_data[v] = []\n ww3_times[v] = []\n buoy_data[k] = []\n buoy_times[k] = []\n\nfor i in API_data_ww3['entries']:\n for j in i['data']:\n if j in buoy_model.values():\n ww3_data[j].append(append_data(i['data'][j]))\n ww3_times[j].append(dateutil.parser.parse(i['axes']['time']))\n \nfor i in API_data_buoy['entries']:\n for j in i['data']:\n if j in buoy_model.keys():\n buoy_data[j].append(append_data(i['data'][j]))\n buoy_times[j].append(dateutil.parser.parse(i['axes']['time']))\nfor i in ww3_data:\n ww3_data[i] = np.array(ww3_data[i])\n ww3_times[i] = np.array(ww3_times[i])",
"_____no_output_____"
]
],
[
[
"Finally, let's plot the data using matplotlib.",
"_____no_output_____"
]
],
[
[
"buoy_label = \"NDBC Station %s\" % station\nww3_label = \"WW3 at %s\" % reftime_start\nfor k,v in buoy_model.items():\n if np.abs(np.nansum(buoy_data[k]))>0:\n fig=plt.figure(figsize=(10,5))\n plt.title(k+' '+v)\n plt.plot(ww3_times[v],ww3_data[v], label=ww3_label)\n plt.plot(buoy_times[k],buoy_data[k],'*',label=buoy_label)\n plt.legend(bbox_to_anchor=(1.5, 0.22), loc=1, borderaxespad=0.)\n plt.xlabel('Time')\n plt.ylabel(k)\n fig.autofmt_xdate()\n plt.grid()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d040e9985e53c0af4d9b39efba1b75245645551b | 2,766 | ipynb | Jupyter Notebook | examples/Tutorials/ExportToInterConnect.ipynb | joamatab/SiPANN | 1bb303e1494806bb31c5c0052a844be4cc7e5803 | [
"MIT"
] | 11 | 2020-08-06T08:25:57.000Z | 2022-03-13T14:29:06.000Z | examples/Tutorials/ExportToInterConnect.ipynb | joamatab/SiPANN | 1bb303e1494806bb31c5c0052a844be4cc7e5803 | [
"MIT"
] | 4 | 2020-04-24T20:30:24.000Z | 2020-05-22T18:22:54.000Z | examples/Tutorials/ExportToInterConnect.ipynb | BYUCamachoLab/SiPANN | d5cb94ce4303c0a068902f697fe08b73f62fee6a | [
"MIT"
] | 3 | 2021-03-21T23:42:13.000Z | 2021-05-24T20:26:06.000Z | 26.09434 | 403 | 0.605206 | [
[
[
"# SCEE and Interconnect",
"_____no_output_____"
],
[
"The SCEE module in SiPANN also has built in functionality to export any of it's models directly into a format readable by Lumerical Interconnect via the `export_interconnect()` function. This gives the user multiple options (Interconnect or Simphony) to cascade devices into complex structures. To export to a Interconnect file is as simple as a function call. First we declare all of our imports:",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom SiPANN import scee",
"_____no_output_____"
]
],
[
[
"Then make our device and calculate it's scattering parameters (we arbitrarily choose a half ring resonator here)",
"_____no_output_____"
]
],
[
[
"r = 10000\nw = 500\nt = 220\nwavelength = np.linspace(1500, 1600)\ngap = 100\n\nhr = scee.HalfRing(w, t, r, gap)\nsparams = hr.sparams(wavelength)",
"_____no_output_____"
]
],
[
[
"And then export. Note `export_interconnect` takes in wavelengths in nms, but the Lumerical file will have frequency in meters, as is standard in Interconnect. To export:",
"_____no_output_____"
]
],
[
[
"filename = \"halfring_10microns_sparams.txt\"\nscee.export_interconnect(sparams, wavelength, filename)",
"_____no_output_____"
]
],
[
[
"As a final parameter, `export_interconnect` also has a `clear=True` parameter that will empty the file being written to before writing. If you'd like to append to an existing file, simply set `clear=False`.\n\nThis is available as a jupyter notebook [here](https://github.com/contagon/SiPANN/blob/master/examples/Tutorials/ExportToInterConnect.ipynb)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d040ef4167e453337d0f59fa6701e314b3a27163 | 273,121 | ipynb | Jupyter Notebook | Desafios_aula01respostas.ipynb | Adrianacms/ImersaoDados_Alura | 811782e21608a7ddb18b9f3f5db431b122dbe5eb | [
"MIT"
] | 1 | 2021-05-08T19:02:38.000Z | 2021-05-08T19:02:38.000Z | Desafios_aula01respostas.ipynb | Adrianacms/ImersaoDados_Alura | 811782e21608a7ddb18b9f3f5db431b122dbe5eb | [
"MIT"
] | null | null | null | Desafios_aula01respostas.ipynb | Adrianacms/ImersaoDados_Alura | 811782e21608a7ddb18b9f3f5db431b122dbe5eb | [
"MIT"
] | null | null | null | 45.550534 | 22,346 | 0.386148 | [
[
[
"# Aula 1",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"url_dados = 'https://github.com/alura-cursos/imersaodados3/blob/main/dados/dados_experimentos.zip?raw=true'",
"_____no_output_____"
],
[
"dados = pd.read_csv(url_dados, compression = 'zip')\ndados",
"_____no_output_____"
],
[
"dados.head()",
"_____no_output_____"
],
[
"dados.shape",
"_____no_output_____"
],
[
"dados['tratamento']",
"_____no_output_____"
],
[
"dados['tratamento'].unique()",
"_____no_output_____"
],
[
"dados['tempo'].unique()",
"_____no_output_____"
],
[
"dados['dose'].unique()",
"_____no_output_____"
],
[
"dados['droga'].unique()",
"_____no_output_____"
],
[
"dados['g-0'].unique()",
"_____no_output_____"
],
[
"dados['tratamento'].value_counts()",
"_____no_output_____"
],
[
"dados['dose'].value_counts()",
"_____no_output_____"
],
[
"dados['tratamento'].value_counts(normalize = True)",
"_____no_output_____"
],
[
"dados['dose'].value_counts(normalize = True)",
"_____no_output_____"
],
[
"dados['tratamento'].value_counts().plot.pie()",
"_____no_output_____"
],
[
"dados['tempo'].value_counts().plot.pie()",
"_____no_output_____"
],
[
"dados['tempo'].value_counts().plot.bar()",
"_____no_output_____"
],
[
"dados_filtrados = dados[dados['g-0'] > 0]\ndados_filtrados.head()",
"_____no_output_____"
]
],
[
[
"#Desafios Aula 1",
"_____no_output_____"
],
[
"## Desafio 01: Investigar por que a classe tratamento é tão desbalanceada?",
"_____no_output_____"
],
[
"Dependendo o tipo de pesquisa é possível usar o mesmo controle para mais de um caso. Repare que o grupo de controle é um grupo onde não estamos aplicando o efeito de uma determinada droga. Então, esse mesmo grupo pode ser utilizado como controle para cada uma das drogas estudadas. ",
"_____no_output_____"
],
[
"Um ponto relevante da base de dados que estamos trabalhando é que todos os dados de controle estão relacionados ao estudo de apenas uma droga.",
"_____no_output_____"
]
],
[
[
"print(f\"Total de dados {len(dados['id'])}\\n\")\nprint(f\"Quantidade de drogas {len(dados.groupby(['droga', 'tratamento']).count()['id'])}\\n\")\ndisplay(dados.query('tratamento == \"com_controle\"').value_counts('droga'))\nprint()\ndisplay(dados.query('droga == \"cacb2b860\"').value_counts('tratamento'))\nprint()",
"Total de dados 23814\n\nQuantidade de drogas 3289\n\n"
]
],
[
[
"## Desafio 02: Plotar as 5 últimas linhas da tabela",
"_____no_output_____"
]
],
[
[
"dados.tail()",
"_____no_output_____"
]
],
[
[
"Outra opção seria usar o seguinte comando:",
"_____no_output_____"
]
],
[
[
"dados[-5:]",
"_____no_output_____"
]
],
[
[
"## Desafio 03: Proporção das classes tratamento.",
"_____no_output_____"
]
],
[
[
"dados['tratamento'].value_counts(normalize = True)",
"_____no_output_____"
]
],
[
[
"## Desafio 04: Quantas tipos de drogas foram investigadas.",
"_____no_output_____"
]
],
[
[
"dados['droga'].unique().shape[0]",
"_____no_output_____"
]
],
[
[
"Outra opção de solução:",
"_____no_output_____"
]
],
[
[
"len(dados['droga'].unique())",
"_____no_output_____"
]
],
[
[
"## Desafio 05: Procurar na documentação o método query(pandas).",
"_____no_output_____"
],
[
"https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.query.html",
"_____no_output_____"
],
[
"## Desafio 06: Renomear as colunas tirando o hífen.",
"_____no_output_____"
]
],
[
[
"dados.columns",
"_____no_output_____"
],
[
"nome_das_colunas = dados.columns",
"_____no_output_____"
],
[
"novo_nome_coluna = []\nfor coluna in nome_das_colunas:\n coluna = coluna.replace('-', '_')\n novo_nome_coluna.append(coluna)\ndados.columns = novo_nome_coluna \ndados.head()",
"_____no_output_____"
]
],
[
[
"Agora podemos comparar o resultado usando Query com o resultado usando máscara + slice",
"_____no_output_____"
]
],
[
[
"dados_filtrados = dados[dados['g_0'] > 0]\ndados_filtrados.head()",
"_____no_output_____"
],
[
"dados_filtrados = dados.query('g_0 > 0')\ndados_filtrados.head()",
"_____no_output_____"
]
],
[
[
"## Desafio 07: Deixar os gráficos bonitões. (Matplotlib.pyplot)",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"valore_tempo = dados['tempo'].value_counts(ascending=True)\nvalore_tempo.sort_index()",
"_____no_output_____"
],
[
"plt.figure(figsize=(15, 10))\nvalore_tempo = dados['tempo'].value_counts(ascending=True)\nax = valore_tempo.sort_index().plot.bar()\nax.set_title('Janelas de tempo', fontsize=20)\nax.set_xlabel('Tempo', fontsize=18)\nax.set_ylabel('Quantidade', fontsize=18)\nplt.xticks(rotation = 0, fontsize=16)\nplt.yticks(fontsize=16)\nplt.show()",
"_____no_output_____"
]
],
[
[
"##Desafio 08: Resumo do que você aprendeu com os dados",
"_____no_output_____"
],
[
"Nesta aula utilizei a biblioteca Pandas, diversas funcionalidades da mesma para explorar dados. Durante a análise de dados, descobri fatores importantes para a obtenção de insights e também aprendi como plotar os gráficos de pizza e de colunas discutindo pontos positivos e negativos. ",
"_____no_output_____"
],
[
"Para mais informações a base dados estudada na imersão é uma versão simplificada [deste desafio](https://www.kaggle.com/c/lish-moa/overview/description) do Kaggle (em inglês).\n\n\nTambém recomendo acessar o\n[Connectopedia](https://clue.io/connectopedia/). O Connectopedia é um dicionário gratuito de termos e conceitos que incluem definições de viabilidade de células e expressão de genes. \n\n\nO desafio do Kaggle também está relacionado a estes artigos científicos:\n\nCorsello et al. “Discovering the anticancer potential of non-oncology drugs by systematic viability profiling,” Nature Cancer, 2020, https://doi.org/10.1038/s43018-019-0018-6\n\nSubramanian et al. “A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles,” Cell, 2017, https://doi.org/10.1016/j.cell.2017.10.049",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
d040f9132106b5bfb59d30ea1f433dfea82eef49 | 8,353 | ipynb | Jupyter Notebook | 2 - Lists.ipynb | grahampullan/pythonical | 34ca0dfc825b76c7f8e38b4db2bb31739a239bf5 | [
"MIT"
] | null | null | null | 2 - Lists.ipynb | grahampullan/pythonical | 34ca0dfc825b76c7f8e38b4db2bb31739a239bf5 | [
"MIT"
] | null | null | null | 2 - Lists.ipynb | grahampullan/pythonical | 34ca0dfc825b76c7f8e38b4db2bb31739a239bf5 | [
"MIT"
] | null | null | null | 24.072046 | 366 | 0.558243 | [
[
[
"# Part 2 - Lists\n\nWe learned about **variables** in Part 1. Sometimes it makes sense to group lots of items of information together in a **list**. This is a good idea when the items are all connected in some way.\n\nFor example, we might want to store the names of our friends. We could create several variables and assign the name of one friend to each of them (remember to do a Shift-Return on your keyboard, or click the Run button, to run the code):",
"_____no_output_____"
]
],
[
[
"friend_1 = \"Fred\"\nfriend_2 = \"Jane\"\nfriend_3 = \"Rob\"\nfriend_4 = \"Sophie\"\nfriend_5 = \"Rachel\"",
"_____no_output_____"
]
],
[
[
"This works OK. All the names are stored. But the problem is that they are all stored separately - Python does not know that they are all part of the same collection of friends.\n\nBut there's a really nice way to solve this problem: **lists**. We can create a list like this:",
"_____no_output_____"
]
],
[
[
"friends = [ \"Fred\", \"Jane\", \"Rob\", \"Sophie\", \"Rachel\" ]",
"_____no_output_____"
]
],
[
[
"We create a list using square brackets `[` `]`, with each item in the list separated by a comma `,`.\n\n### List methods\nThere are some special **list** tools (called **methods**) that we can use on our `friends` list. \n\nWe can add a name to the list using `append`:",
"_____no_output_____"
]
],
[
[
"friends.append(\"David\")\nprint(friends)",
"_____no_output_____"
]
],
[
[
"We can put the names in alphabetical order using `sort`:",
"_____no_output_____"
]
],
[
[
"friends.sort()\nprint(friends)",
"_____no_output_____"
]
],
[
[
"We can reverse the order using `reverse`:",
"_____no_output_____"
]
],
[
[
"friends.reverse()\nprint(friends)",
"_____no_output_____"
]
],
[
[
"These **list** **methods** allow us to do lots of things like this without having to write much code ourselves.\n\n### What is a list index?\n\nLet's make a new list. This time, we will start with an empty list, and then add to it:",
"_____no_output_____"
]
],
[
[
"shapes = []\nprint(shapes)",
"_____no_output_____"
],
[
"shapes.append(\"triangle\")\nshapes.append(\"square\")\nshapes.append(\"pentagon\")\nshapes.append(\"hexagon\")\nprint(shapes)",
"_____no_output_____"
]
],
[
[
"These lists are great, but we might not always want to operate on the whole list at once. Instead, we might want to pick out one particular item in the list. In a **list**, each item is numbered, *starting from zero*. This number is called an **index**. So, remembering that the numbering *starts from zero*, we can access the first item in the list like this:",
"_____no_output_____"
]
],
[
[
"print(shapes[0])",
"_____no_output_____"
]
],
[
[
"Notice that the **index** is surrounded by square brackets `[` `]`. We can get the second item in `shapes` like this:",
"_____no_output_____"
]
],
[
[
"print(shapes[1])",
"_____no_output_____"
]
],
[
[
"and the last one like this:",
"_____no_output_____"
]
],
[
[
"print(shapes[3])",
"_____no_output_____"
]
],
[
[
"Python also has a special way of indexing the last item in a list, and this is especially useful if you're not sure how many items there are in your list. You can use a negative index:",
"_____no_output_____"
]
],
[
[
"print(shapes[-1])",
"_____no_output_____"
]
],
[
[
"We can ask Python to tell us the index of an item in the list:",
"_____no_output_____"
]
],
[
[
"idx = shapes.index(\"square\")\nprint(idx)",
"_____no_output_____"
]
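,
[
"# Extra illustration: index() reports only the *first* occurrence of an item,\n# so the duplicate \"a\" later in this list is ignored.\nletters = [\"a\", \"b\", \"a\"]\nprint(letters.index(\"a\"))",
"_____no_output_____"
]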
],
[
[
"Actually, the `index` method tells us the index of the first time that an item appears in a list.\n\n### Displaying our data in a bar chart",
"_____no_output_____"
],
[
"Let's have another look at our `shapes` list:",
"_____no_output_____"
]
],
[
[
"print(shapes)",
"_____no_output_____"
]
],
[
[
"and let's make another list that has the number of sides that each shape has:",
"_____no_output_____"
]
],
[
[
"sides=[3,4,5,6]\nprint(sides)",
"_____no_output_____"
]
],
[
[
"It would be nice to draw a bar chart showing the number of sides that each shape has. But drawing all those bars and axes would take quite a lot of code. Don't worry - someone has already done this, and we can use the code that they have written using the **import** command.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"(The `%matplotlib inline` is a Jupyter notebook command that means the plots we make will appear right here in our notebook)",
"_____no_output_____"
],
[
"Now we can plot our bar chart using only three lines of code:",
"_____no_output_____"
]
],
[
[
"plt.bar(shapes,sides)\nplt.xlabel(\"Shape\")\nplt.ylabel(\"Number of sides\");",
"_____no_output_____"
]
],
[
[
"That's a really nice, neat bar chart!",
"_____no_output_____"
],
[
"Try changing the lists and re-plotting the bar chart. You can add another shape, for example.",
"_____no_output_____"
],
[
"### What have we covered in this notebook?\nWe've learned about **lists** and the list **methods** that we can use to work with them. We've also learned about how to access an item in the list using an **index**. We then plotted a bar chart using code written by someone else - we used **import** to get this code.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
d040fe4094d9b2172be82825f7315f4ff472f0aa | 18,433 | ipynb | Jupyter Notebook | pyt0/Demo_Data_Visualization.ipynb | nsingh216/edu | 95cbd3dc25c9b15160ba4b146b52eeeb2fa54ecb | [
"Apache-2.0"
] | null | null | null | pyt0/Demo_Data_Visualization.ipynb | nsingh216/edu | 95cbd3dc25c9b15160ba4b146b52eeeb2fa54ecb | [
"Apache-2.0"
] | null | null | null | pyt0/Demo_Data_Visualization.ipynb | nsingh216/edu | 95cbd3dc25c9b15160ba4b146b52eeeb2fa54ecb | [
"Apache-2.0"
] | null | null | null | 24.676037 | 569 | 0.435035 | [
[
[
"<a href=\"https://colab.research.google.com/github/osipov/edu/blob/master/pyt0/Demo_Data_Visualization.ipynb\" target=\"_blank\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\"/></a>",
"_____no_output_____"
],
[
"# Data Visualization\n",
"_____no_output_____"
]
],
[
[
"%matplotlib inline",
"_____no_output_____"
],
[
"import torch as pt\nimport matplotlib.pyplot as plt\n\nx = pt.linspace(0, 10, 100)\n\nfig = plt.figure()\nplt.plot(x, pt.sin(x), '-')\nplt.plot(x, pt.cos(x), '--')\nplt.show() # not needed in notebook, but needed in production",
"_____no_output_____"
]
],
[
[
"## You can save your plots...",
"_____no_output_____"
]
],
[
[
"fig.savefig('my_figure.png')",
"_____no_output_____"
],
[
"!ls -lh my_figure.png\n# For Windows, comment out the above and replace with below\n# On Windows, comment out above and uncomment below\n#!dir my_figure.png\"",
"_____no_output_____"
]
],
[
[
"## ...and reload saved images for display inside the notebook",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image\nImage('my_figure.png')",
"_____no_output_____"
],
[
"# matplotlib supports many different file types\nfig.canvas.get_supported_filetypes()",
"_____no_output_____"
]
],
[
[
"## MATLAB-Style Interface",
"_____no_output_____"
]
],
[
[
"plt.figure() # create a plot figure\n\n# create the first of two panels and set current axis\nplt.subplot(2, 1, 1) # (rows, columns, panel number)\nplt.plot(x, pt.sin(x))\n\n# create the second panel and set current axis\nplt.subplot(2, 1, 2)\nplt.plot(x, pt.cos(x));",
"_____no_output_____"
]
],
[
[
"## Grids",
"_____no_output_____"
]
],
[
[
"plt.style.use('seaborn-whitegrid')\nfig = plt.figure()\nax = plt.axes()",
"_____no_output_____"
]
],
[
[
"## Draw a Function",
"_____no_output_____"
]
],
[
[
"plt.style.use('seaborn-whitegrid')\nfig = plt.figure()\nax = plt.axes()\nx = pt.linspace(0, 10, 1000)\nax.plot(x, pt.sin(x));",
"_____no_output_____"
]
],
[
[
"## Specify axes limits...\n",
"_____no_output_____"
]
],
[
[
"plt.plot(x, pt.sin(x))\n\nplt.xlim(-1, 11)\nplt.ylim(-1.5, 1.5);",
"_____no_output_____"
]
],
[
[
"## Flipping the Axes Limits",
"_____no_output_____"
]
],
[
[
"plt.plot(x, pt.sin(x))\n\nplt.xlim(10, 0)\nplt.ylim(1.2, -1.2);",
"_____no_output_____"
]
],
[
[
"## Axis",
"_____no_output_____"
]
],
[
[
"plt.plot(x, pt.sin(x))\nplt.axis([-1, 11, -1.5, 1.5]);",
"_____no_output_____"
]
],
[
[
"## ...or let matplotlib \"tighten\" the axes...",
"_____no_output_____"
]
],
[
[
"plt.plot(x, pt.sin(x))\nplt.axis('tight');",
"_____no_output_____"
]
],
[
[
"## ...or make the limits equal",
"_____no_output_____"
]
],
[
[
"plt.plot(x, pt.sin(x))\nplt.axis('equal');",
"_____no_output_____"
]
],
[
[
"## Add titles and axis labels",
"_____no_output_____"
]
],
[
[
"plt.plot(x, pt.sin(x))\nplt.title(\"A Sine Curve\")\nplt.xlabel(\"x\")\nplt.ylabel(\"sin(x)\");",
"_____no_output_____"
]
],
[
[
"## ...and a legend",
"_____no_output_____"
]
],
[
[
"plt.plot(x, pt.sin(x), '-g', label='sin(x)')\nplt.plot(x, pt.cos(x), ':b', label='cos(x)')\nplt.axis('equal')\n\nplt.legend();",
"_____no_output_____"
]
],
[
[
"## Object-Oriented Interface",
"_____no_output_____"
]
],
[
[
"# First create a grid of plots\n# ax will be an array of two Axes objects\nfig, ax = plt.subplots(2)\n\n# Call plot() method on the appropriate object\nax[0].plot(x, pt.sin(x))\nax[1].plot(x, pt.cos(x));",
"_____no_output_____"
]
],
[
[
"## OO interface to axes",
"_____no_output_____"
]
],
[
[
"ax = plt.axes()\nax.plot(x, pt.sin(x))\nax.set(xlim=(0, 10), ylim=(-2, 2),\n xlabel='x', ylabel='sin(x)',\n title='A Simple Plot');",
"_____no_output_____"
]
],
[
[
"## Interface Differences\n| MATLAB-Style | OO Style |\n|--------------|-----------------|\n| plt.xlabel() | ax.set_xlabel() |\n| plt.ylabel() | ax.set_ylabel() |\n| plt.xlim() | ax.set_xlim() |\n| plt.ylim() | ax.set_ylim() |\n| plt.title() | ax.set_title() |",
"_____no_output_____"
],
[
"## Custom legends",
"_____no_output_____"
]
],
[
[
"x = pt.linspace(0, 10, 1000)\n\nplt.style.use('classic')\nplt.figure(figsize=(12,6))\n\nplt.rc('xtick', labelsize=20)\nplt.rc('ytick', labelsize=20)\n\nfig, ax = plt.subplots()\nax.plot(x, pt.sin(x), '-b', label='Sine')\nax.plot(x, pt.cos(x), '--r', label='Cosine')\nax.axis('equal')\nleg = ax.legend()",
"_____no_output_____"
],
[
"ax.legend(loc='upper left', frameon=False)\nfig",
"_____no_output_____"
],
[
"ax.legend(frameon=False, loc='lower center', ncol=2)\nfig",
"_____no_output_____"
]
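,
[
"# Extra illustration (hedged sketch): the same labelled plot built with each\n# interface from the comparison table above; fig2/ax2 are just local names.\n# MATLAB-style:\nplt.figure()\nplt.plot(x, pt.sin(x))\nplt.xlabel('x')\nplt.ylabel('sin(x)')\n\n# OO style:\nfig2, ax2 = plt.subplots()\nax2.plot(x, pt.sin(x))\nax2.set_xlabel('x')\nax2.set_ylabel('sin(x)')",
"_____no_output_____"
]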
],
[
[
"## Many ways to specify color...",
"_____no_output_____"
]
],
[
[
"plt.plot(x, pt.sin(x - 0), color='blue') # specify color by name\nplt.plot(x, pt.sin(x - 1), color='g') # short color code (rgbcmyk)\nplt.plot(x, pt.sin(x - 2), color='0.75') # Grayscale between 0 and 1\nplt.plot(x, pt.sin(x - 3), color='#FFDD44') # Hex code (RRGGBB from 00 to FF)\nplt.plot(x, pt.sin(x - 4), color=(1.0,0.2,0.3)) # RGB tuple, values 0 to 1\nplt.plot(x, pt.sin(x - 5), color='chartreuse'); # all HTML color names supported",
"_____no_output_____"
]
],
[
[
"## Specifying different line styles...",
"_____no_output_____"
]
],
[
[
"plt.plot(x, x + 0, linestyle='solid')\nplt.plot(x, x + 1, linestyle='dashed')\nplt.plot(x, x + 2, linestyle='dashdot')\nplt.plot(x, x + 3, linestyle='dotted');\n\n# For short, you can use the following codes:\nplt.plot(x, x + 4, linestyle='-') # solid\nplt.plot(x, x + 5, linestyle='--') # dashed\nplt.plot(x, x + 6, linestyle='-.') # dashdot\nplt.plot(x, x + 7, linestyle=':'); # dotted",
"_____no_output_____"
]
],
[
[
"## Specify different plot markers ",
"_____no_output_____"
]
],
[
[
"rnd1 = pt.manual_seed(0)\nrnd2 = pt.manual_seed(1)\nfor marker in 'o.,x+v^<>sd':\n plt.plot(pt.rand(5, generator = rnd1), pt.rand(5, generator = rnd2), marker,\n label='marker={}'.format(marker))\nplt.legend(numpoints=1)\nplt.xlim(0, 1.8);",
"_____no_output_____"
]
],
[
[
"## Scatterplots with Colors and Sizes",
"_____no_output_____"
]
],
[
[
"pt.manual_seed(0);\nx = pt.randn(100)\ny = pt.randn(100)\ncolors = pt.rand(100)\nsizes = 1000 * pt.rand(100)\n\nplt.scatter(x, y, c=colors, s=sizes, alpha=0.3,\n cmap='viridis')\nplt.colorbar(); # show color scale",
"_____no_output_____"
]
],
[
[
"## Visualizing Multiple Dimensions",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import load_iris\niris = load_iris()\nfeatures = iris.data.T\n\nplt.scatter(features[0], features[1], alpha=0.2,\n s=100*features[3], c=iris.target, cmap='viridis')\nplt.xlabel(iris.feature_names[0])\nplt.ylabel(iris.feature_names[1]);",
"_____no_output_____"
]
],
[
[
"## Histograms",
"_____no_output_____"
]
],
[
[
"data = pt.randn(10000)\nplt.hist(data);",
"_____no_output_____"
],
[
"plt.hist(data, bins=30, alpha=0.5,\n histtype='stepfilled', color='steelblue',\n edgecolor='none')",
"_____no_output_____"
]
],
[
[
"## Display a grid of images",
"_____no_output_____"
]
],
[
[
"# load images of the digits 0 through 5 and visualize several of them\nfrom sklearn.datasets import load_digits\ndigits = load_digits(n_class=6)\n\nfig, ax = plt.subplots(8, 8, figsize=(6, 6))\nfor i, axi in enumerate(ax.flat):\n axi.imshow(digits.images[i], cmap='binary')\n axi.set(xticks=[], yticks=[])",
"_____no_output_____"
]
],
[
[
"Copyright 2020 CounterFactual.AI LLC. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d041122a1cec894b2d19d4c522db2d57a94101fa | 201,490 | ipynb | Jupyter Notebook | 03_classification.ipynb | matyama/homl | e8abd9ff4572c7a7002fb535b30728698a98e816 | [
"MIT"
] | null | null | null | 03_classification.ipynb | matyama/homl | e8abd9ff4572c7a7002fb535b30728698a98e816 | [
"MIT"
] | null | null | null | 03_classification.ipynb | matyama/homl | e8abd9ff4572c7a7002fb535b30728698a98e816 | [
"MIT"
] | null | null | null | 137.723855 | 33,456 | 0.884585 | [
[
[
"# Classification",
"_____no_output_____"
],
[
"## MNIST",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import fetch_openml\n\nmnist = fetch_openml('mnist_784', version=1)\nmnist.keys()",
"_____no_output_____"
],
[
"X, y = mnist['data'], mnist['target']\nX.shape, y.shape",
"_____no_output_____"
],
[
"%matplotlib inline\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n\nsome_digit = X[0]\nsome_digit_img = some_digit.reshape(28, 28)\n\nplt.imshow(some_digit_img, cmap='binary')\nplt.axis('off')\nplt.show()",
"_____no_output_____"
],
[
"y[0]",
"_____no_output_____"
],
[
"import numpy as np\n\ny = y.astype(np.uint8)\n\ny[0]",
"_____no_output_____"
],
[
"# MNIST is already split into training and test set\nX_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]",
"_____no_output_____"
]
],
[
[
"## Training a Binary Classifier\nFor start let's make a binary classifier that will indentify single digit - digit 5.",
"_____no_output_____"
]
],
[
[
"y_train_5, y_test_5 = (y_train == 5), (y_test == 5)",
"_____no_output_____"
],
[
"from sklearn.linear_model import SGDClassifier\n\nsgd_clf = SGDClassifier(random_state=42)\nsgd_clf.fit(X_train, y_train_5)\n\nsgd_clf.predict([some_digit])",
"_____no_output_____"
]
],
[
[
"## Performance Measures",
"_____no_output_____"
],
[
"### Measuring Accuracy Using Cross-Validation\n\n#### Implementing Cross-Validation\nFollowing code is roughly equivalent to *Scikit-Learn*'s function `cross_val_score`.",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import StratifiedKFold\nfrom sklearn.base import clone\n\nskfolds = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)\n\nfor train_ix, test_ix in skfolds.split(X_train, y_train_5):\n clone_clf = clone(sgd_clf)\n \n X_train_folds = X_train[train_ix]\n y_train_folds = y_train_5[train_ix]\n \n X_test_folds = X_train[test_ix]\n y_test_folds = y_train_5[test_ix]\n \n clone_clf.fit(X_train_folds, y_train_folds)\n y_pred = clone_clf.predict(X_test_folds)\n n_correct = np.sum(y_pred == y_test_folds)\n print(n_correct / len(y_pred))",
"0.9669\n0.91625\n0.96785\n"
],
[
"from sklearn.model_selection import cross_val_score\n\ncross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring='accuracy')",
"_____no_output_____"
]
],
[
[
"This seems pretty good! However, let's check a classifier that always classifies an image as **not 5**.",
"_____no_output_____"
]
],
[
[
"from sklearn.base import BaseEstimator\n\nclass Never5Classifier(BaseEstimator):\n \n def fit(self, X, y=None):\n return self\n \n def predict(self, X):\n return np.zeros((len(X), 1), dtype=bool)\n\nnever_5_clf = Never5Classifier()\ncross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring='accuracy')",
"_____no_output_____"
]
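,
[
"# Quick check (extra illustration): the positive class is rare - only about\n# 10% of the training images are 5s, which is what makes the accuracy above\n# look so good.\ny_train_5.mean()",
"_____no_output_____"
]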
],
[
[
"Over 90% accuracy! Well, the problem is that just about 10% of the whole dataset are images of 5 (there are 10 numbers in total). Hence the 90% accuracy.",
"_____no_output_____"
],
[
"### Confusion Matrix\nThe idea of a *confusion matrix* is to count the number of times class A is classified as class B and so on. \n\nTo compute the confusion matrix one must first get predicions (here on the train set, let's keep test set aside). We can take predictions for a cross-validation with `cross_val_predict` and pass them to `confusion_matrix`.\n\nFor a binary classification the confusion matrix looks like this:\n\n| | N | P |\n|-----|----|----|\n| N | TN | FP |\n| P | FN | TP |\n\nRows are the *actual* class and columns are the predicted class, furthermore\n* *P* - *positive* (class)\n* *N* - *negative* (class)\n* *TN* - *true negative*\n* *TP* - *true positive*\n* *FN* - *false negative*\n* *FP* - *false negative*",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import cross_val_predict\nfrom sklearn.metrics import confusion_matrix\n\ny_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)\nconfusion_matrix(y_train_5, y_train_pred)",
"_____no_output_____"
],
[
"y_train_perfect_predictions = y_train_5 # pretend we reached perfection\nconfusion_matrix(y_train_5, y_train_perfect_predictions)",
"_____no_output_____"
]
],
[
[
"### Precision and Recall\n\n**Precision** is the accuracy of positive predictions and is defined as $\\text{precision} = \\frac{TP}{TP + FP}$\n\n*Trivial way to ensure 100% precision is to make single prediction and make sure it's correct.*\n\n**Recall (sensitivity, true positive rate)** is the ratio of positive instances that are correctly detected and is defined as $\\text{recall} = \\frac{TP}{TP + FN}$\n\nIntuitive notion of precision and recall:\n* *precision* - how often is the predictor correct when the actual class is the positive one\n* *recall* - how likely does the predictor detect the positive class",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import precision_score, recall_score\n\nprecision = precision_score(y_train_5, y_train_pred)\nrecall = recall_score(y_train_5, y_train_pred)\nprecision, recall",
"_____no_output_____"
]
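,
[
"# Cross-check (extra illustration): recompute precision and recall directly\n# from the confusion matrix entries, using the formulas above.\ntn, fp, fn, tp = confusion_matrix(y_train_5, y_train_pred).ravel()\ntp / (tp + fp), tp / (tp + fn)",
"_____no_output_____"
]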
],
[
[
"Precision and recall are handy but it's even better to have single score based on which we can compare classifiers.\n\n$\\mathbf{F_1}$ score is the *harmonic mean* of precision and recall. Regular mean puts the same weight to all values, harmonic mean gives much more importance to lower values. So in order to have high $F_1$ score, both precision and mean must be high.\n\n$$\nF_1 = \\frac{2}{\\frac{1}{\\text{precision}} + \\frac{1}{\\text{recall}}} = 2 \\times \\frac{\\text{precision} \\times \\text{recall}}{\\text{precision} + \\text{recall}} = \\frac{TP}{TP + \\frac{FN + FP}{2}}\n$$",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import f1_score\n\nf1_score(y_train_5, y_train_pred)",
"_____no_output_____"
]
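,
[
"# Cross-check (extra illustration): F1 really is the harmonic mean of the\n# precision and recall values computed earlier.\n2 * precision * recall / (precision + recall)",
"_____no_output_____"
]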
],
[
[
"### Precision/Recall Trade-off\n*Increasing precision reduces recall and vice versa.*\n\nHow does the classification work? The `SGDClassifier`, for instance, computes for each instance a score based on a *decision function*. If this score is greater than *decision threshold*, it assigns the instance to the positive class. Shifting this threshold will likely result a change in precision and recall.",
"_____no_output_____"
]
],
[
[
"y_scores = sgd_clf.decision_function([some_digit])\ny_scores",
"_____no_output_____"
],
[
"def predict_some_digit(threshold):\n return (y_scores > threshold)\n\n# Raising the threshold decreases recall\npredict_some_digit(threshold=0), predict_some_digit(threshold=8000)",
"_____no_output_____"
]
],
[
[
"From the example above, increasing the decision threshold decreases recall (`some_digit` is actually a 5 and with the increased thresholt is is no longer recognized).\n\nBut how to decide which threshold to use?",
"_____no_output_____"
]
],
[
[
"y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3, method='decision_function')",
"_____no_output_____"
],
[
"from sklearn.metrics import precision_recall_curve\n\ndef plot_precision_recall_vs_threshold(precisions, recalls, thresholds):\n plt.plot(thresholds, precisions[:-1], 'b--', label='Precision')\n plt.plot(thresholds, recalls[:-1], 'g-', label='Recall')\n plt.xlabel('Threshold')\n plt.legend(loc='center right', fontsize=16)\n plt.grid(True)\n plt.axis([-50000, 50000, 0, 1])\n\nprecisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)\n\nrecall_90_precision = recalls[np.argmax(precisions >= 0.9)]\nthreshold_90_precision = thresholds[np.argmax(precisions >= 0.9)]\n\nplt.figure(figsize=(8, 4))\n\n# plot precision and recall curves vs decision threshold\nplot_precision_recall_vs_threshold(precisions, recalls, thresholds)\n\n# plot threshold corresponding to 90% precision\nplt.plot([threshold_90_precision, threshold_90_precision], [0., 0.9], 'r:')\n\n# plot precision level up to 90% precision threshold\nplt.plot([-50000, threshold_90_precision], [0.9, 0.9], 'r:')\n\n# plot recall level up to 90% precision threshold\nplt.plot([-50000, threshold_90_precision], [recall_90_precision, recall_90_precision], 'r:')\n\n# plot points on precision and recall curves corresponding to 90% precision threshold\nplt.plot([threshold_90_precision], [0.9], 'ro')\nplt.plot([threshold_90_precision], [recall_90_precision], 'ro')\n\nplt.show()",
"_____no_output_____"
],
[
"plt.figure(figsize=(8, 6))\n\n# plot precision vs recall\nplt.plot(recalls, precisions, \"b-\", linewidth=2)\nplt.xlabel('Precision', fontsize=16)\nplt.ylabel('Recall', fontsize=16)\n\n# style the plot\nplt.axis([0, 1, 0, 1])\nplt.grid(True)\nplt.title('Precision vs Recall')\n\n# plot 90% precision point\nplt.plot([recall_90_precision], [0.9], 'ro')\nplt.plot([recall_90_precision, recall_90_precision], [0., 0.9], 'r:')\nplt.plot([0.0, recall_90_precision], [0.9, 0.9], 'r:')\n\nplt.show()",
"_____no_output_____"
],
[
"y_train_pred_90 = (y_scores >= threshold_90_precision)\n\nprecision_90 = precision_score(y_train_5, y_train_pred_90)\nrecall_90_precision = recall_score(y_train_5, y_train_pred_90)\nprecision_90, recall_90_precision",
"_____no_output_____"
]
],
[
[
"### The ROC Curve\nThe **receiver operating characteristic** curve is similar to precesion-recall curve but instead plots *true positive rate (recall, sensitivity)* agains *false positive rate* (FPR). The FPR is 1 minus *true negative rate rate (specificity*. I.e. ROC curve plots *sensitivity* against 1 - *specificity*.",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import roc_curve\n\ndef plot_roc_curve(fpr, tpr, label=None):\n plt.plot(fpr, tpr, linewidth=2, label=label)\n plt.plot([0, 1], [0, 1], 'k--')\n plt.axis([0, 1, 0, 1])\n plt.xlabel('False Positive Rate', fontsize=16)\n plt.ylabel('True Positive Rate', fontsize=16)\n plt.grid(True)\n\nfpr, tpr, thresholds = roc_curve(y_train_5, y_scores)\nfpr_90 = fpr[np.argmax(tpr >= recall_90_precision)]\n\nplt.figure(figsize=(8, 6))\n\n# plot the ROC curve\nplot_roc_curve(fpr, tpr)\n\n# plot point of 90% precision on the ROC curve\nplt.plot([fpr_90], [recall_90_precision], 'ro')\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"Another way to compare classifiers is to measure the **area under the curve (AUC)**. Prfect classifier would have AUC score of 1 whereas completely random one would have 0.5 (this corresponds to the diagonal line in the ROC plot).",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import roc_auc_score\n\nroc_auc_score(y_train_5, y_scores)",
"_____no_output_____"
]
],
[
[
"As a rule of thumb, use PR curve when\n* positive class is rare\n* we care more about the false positives\n\notherwise ROC curve might be better.\n\n*For instance in the plot above, it might seem that the AUC is quite good but that's just because there's only few examples of the positive class (5s). In this case, the PR curve presents much more realistic view.*\n\nFollowing example shows a DT which does not have a `decision_function` method. Instead, it has `predict_proba` method returning class probabilities. In general *Scikit-Learn* models will have one or the other method or both.",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestClassifier\n\nforest_clf = RandomForestClassifier(random_state=42)\n\ny_proba_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3, method='predict_proba')\ny_scores_forest = y_proba_forest[:, 1] # score = probability of the positive class\n\nfpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5, y_scores_forest)\nrecall_90_precision_forest = tpr_forest[np.argmax(fpr_forest >= fpr_90)]\n\nplt.figure(figsize=(8, 6))\n\n# plot the ROC curve of the SGD\nplot_roc_curve(fpr, tpr, label='SGD')\n\n# plot the ROC curve of the Random Forest\nplot_roc_curve(fpr_forest, tpr_forest, label='Random Forest')\n\n# plot point of 90% precision on the SGD ROC curve\nplt.plot([fpr_90], [recall_90_precision], 'ro')\n\n# plot point of 90% precision on the Random Forest ROC curve\nplt.plot([fpr_90], [recall_90_precision_forest], 'ro')\n\nplt.legend(loc='lower right', fontsize=16)\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Multiclass Classification\n\n**Multiclass (Multinominal) Classifiers**:\n* *Logistic Regression*\n* *Random Forrest*\n* *Naive Bayes*\n\n**Binary Classifiers**:\n* *SGD*\n* *SVM*\n\nStrategies to turn binary classifiers into multiclass:\n* **One-versus-the-rest (OvR)**: Train one classifier per class. When predicting class for new instance, get the score from each one and choose the class with the highest score.\n* **One-versus-one (OvO)**: Train one classifier for each pair of classes (for $N$ classes it's $N \\times (N - 1) / 2$ classifiers). When predicting, run the instance through all classifiers and choose class which wins the most duels. Main advantage is that each classifier needs only portion of the training set which contains it's pair of classes which is good for classifiers which don't scale well (e.g. SVM).",
"_____no_output_____"
]
],
[
[
"from sklearn.svm import SVC\n\nsvm_clf = SVC(gamma=\"auto\", random_state=42)\nsvm_clf.fit(X_train[:1000], y_train[:1000])\n\nsvm_clf.predict([some_digit])",
"_____no_output_____"
],
[
"some_digit_scores = svm_clf.decision_function([some_digit])\nsome_digit_scores",
"_____no_output_____"
],
[
"some_digit_class = np.argmax(some_digit_scores)\nsvm_clf.classes_[some_digit_class]",
"_____no_output_____"
]
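,
[
"# Extra illustration: with OvO, N classes need N * (N - 1) / 2 classifiers;\n# for the 10 MNIST digit classes that is 45 pairwise classifiers.\nn = len(svm_clf.classes_)\nn * (n - 1) // 2",
"_____no_output_____"
]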
],
[
[
"One can manually select the strategy by wrapping the model class into `OneVsRestClassifier` or `OneVsOneClassifier`.",
"_____no_output_____"
]
],
[
[
"from sklearn.multiclass import OneVsRestClassifier\n\novr_clf = OneVsRestClassifier(SVC(gamma=\"auto\", random_state=42))\novr_clf.fit(X_train[:1000], y_train[:1000])\n\novr_clf.predict([some_digit])",
"_____no_output_____"
],
[
"len(ovr_clf.estimators_)",
"_____no_output_____"
]
],
[
[
"`SGDClassifier` uses *OvR* under the hood",
"_____no_output_____"
]
],
[
[
"sgd_clf.fit(X_train, y_train)\nsgd_clf.predict([some_digit])",
"_____no_output_____"
],
[
"sgd_clf.decision_function([some_digit])",
"_____no_output_____"
],
[
"cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring='accuracy')",
"_____no_output_____"
]
],
[
[
"CV on the SGD classifier shows pretty good accuracy compared to dummy (random) classifier which would have around 10%. This can be improved even further by simply scaling the input.",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import StandardScaler\n\nscaler = StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train.astype(np.float64))\n\ncross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring='accuracy')",
"_____no_output_____"
]
],
[
[
"### Error Analysis",
"_____no_output_____"
]
],
[
[
"y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3)\n\nconf_mx = confusion_matrix(y_train, y_train_pred)\nconf_mx",
"_____no_output_____"
],
[
"plt.matshow(conf_mx, cmap=plt.cm.gray)\nplt.title('Training set confusion matrix for the SGD classifier')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Let's transform the confusion matrix a bit to focus on the errors:\n1. divide each value by the number of instances (images in this case) in that class\n1. fill diagonal with zeros to keep just the errors",
"_____no_output_____"
]
],
[
[
"row_sums = conf_mx.sum(axis=1, keepdims=True)\nnorm_conf_mx = conf_mx / row_sums\nnp.fill_diagonal(norm_conf_mx, 0)\n\nplt.matshow(norm_conf_mx, cmap=plt.cm.gray)\nplt.title('Class-normalized confusion matrix with 0 on diagonal')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Multilabel Classification\n*Multilabel classification* refers to a classification task where the classifier predicts multiple classes at once (output is a boolean vector).",
"_____no_output_____"
]
],
[
[
"from sklearn.neighbors import KNeighborsClassifier\n\ny_train_large = (y_train >= 7)\ny_train_odd = (y_train % 2 == 1)\ny_multilabel = np.c_[y_train_large, y_train_odd]\n\nknn_clf = KNeighborsClassifier()\nknn_clf.fit(X_train, y_multilabel)",
"_____no_output_____"
],
[
"knn_clf.predict([some_digit])",
"_____no_output_____"
],
[
"# This takes too long to evaluate but normally it would output the F1 score\n\n# y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3, n_jobs=-1)\n# f1_score(y_multilabel, y_train_knn_pred, average='macro')",
"_____no_output_____"
]
],
[
[
"## Multioutput Classification\n*Multioutput-multiclass* or just *multioutput classification* is a generalization of multilabel classification where each label can be multiclass (categorical, not just boolean).\n\nFollowing example removes noise from images. In this setup the output is one label per pixel (multilabel) and each pixel's label can have multiple values - pixel intensities (multioutput).",
"_____no_output_____"
]
],
[
[
"# modified training set\nnoise = np.random.randint(0, 100, (len(X_train), 784))\nX_train_mod = X_train + noise\n\n# modified test set\nnoise = np.random.randint(0, 100, (len(X_test), 784))\nX_test_mod = X_test + noise\n\n# targets are original images\ny_train_mod = X_train\ny_test_mod = X_test",
"_____no_output_____"
],
[
"some_index = 0\n\n# noisy image\nplt.subplot(121)\nplt.imshow(X_test_mod[some_index].reshape(28, 28), cmap='binary')\nplt.axis('off')\n\n# original image\nplt.subplot(122)\nplt.imshow(y_test_mod[some_index].reshape(28, 28), cmap='binary')\nplt.axis('off')\n\nplt.show()",
"_____no_output_____"
],
[
"knn_clf.fit(X_train_mod, y_train_mod)\n\nclean_digit = knn_clf.predict([X_test_mod[some_index]])\n\nplt.imshow(clean_digit.reshape(28, 28), cmap='binary')\nplt.axis('off')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Extra Material",
"_____no_output_____"
],
[
"### Dummy Classifier",
"_____no_output_____"
]
],
[
[
"from sklearn.dummy import DummyClassifier\n\ndummy_clf = DummyClassifier(strategy='prior')\n\ny_probas_dummy = cross_val_predict(dummy_clf, X_train, y_train_5, cv=3, method='predict_proba')\ny_scores_dummy = y_probas_dummy[:, 1]\n\nfprr, tprr, thresholdsr = roc_curve(y_train_5, y_scores_dummy)\nplot_roc_curve(fprr, tprr)",
"_____no_output_____"
]
],
[
[
"## Exercises",
"_____no_output_____"
],
[
"### Data Augmentation",
"_____no_output_____"
]
],
[
[
"from scipy.ndimage.interpolation import shift\n\ndef shift_image(image, dx, dy):\n image = image.reshape((28, 28))\n shifted_image = shift(image, [dy, dx], cval=0, mode='constant')\n return shifted_image.reshape([-1])\n\nimage = X_train[1000]\nshifted_image_down = shift_image(image, 0, 5)\nshifted_image_left = shift_image(image, -5, 0)\n\nplt.figure(figsize=(12, 3))\n\n# original image\nplt.subplot(131)\nplt.title('Original', fontsize=14)\nplt.imshow(image.reshape(28, 28), interpolation='nearest', cmap='Greys')\n\n# image shifted down\nplt.subplot(132)\nplt.title('Shifted down', fontsize=14)\nplt.imshow(shifted_image_down.reshape(28, 28), interpolation='nearest', cmap='Greys')\n\n# image shifted left\nplt.subplot(133)\nplt.title('Shifted left', fontsize=14)\nplt.imshow(shifted_image_left.reshape(28, 28), interpolation='nearest', cmap='Greys')\n\nplt.show()",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score\n\nX_train_augmented = [image for image in X_train]\ny_train_augmented = [label for label in y_train]\n\nshifts = ((1, 0), (-1, 0), (0, 1), (0, -1))\n\nfor dx, dy in shifts:\n for image, label in zip(X_train, y_train):\n X_train_augmented.append(shift_image(image, dx, dy))\n y_train_augmented.append(label)\n\nX_train_augmented = np.array(X_train_augmented)\ny_train_augmented = np.array(y_train_augmented)\n\nshuffle_idx = np.random.permutation(len(X_train_augmented))\nX_train_augmented = X_train_augmented[shuffle_idx]\ny_train_augmented = y_train_augmented[shuffle_idx]\n\n# Best params without augmentation\nknn_clf = KNeighborsClassifier(n_neighbors=4, weights='distance')\nknn_clf.fit(X_train_augmented, y_train_augmented)\n\n# Accuracy without augmentation: 0.9714\ny_pred = knn_clf.predict(X_test)\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
d04112658c89a89d1051b3e9bb388f956cea35df | 6,468 | ipynb | Jupyter Notebook | Collecting Data using Web Scraping.ipynb | SMSesay/Data_Analyst_Capstone | 6a7d3517ff0421f24ffe78759280f30115758aaf | [
"MIT"
] | null | null | null | Collecting Data using Web Scraping.ipynb | SMSesay/Data_Analyst_Capstone | 6a7d3517ff0421f24ffe78759280f30115758aaf | [
"MIT"
] | null | null | null | Collecting Data using Web Scraping.ipynb | SMSesay/Data_Analyst_Capstone | 6a7d3517ff0421f24ffe78759280f30115758aaf | [
"MIT"
] | null | null | null | 38.730539 | 444 | 0.473253 | [
[
[
"<center>\n <img src=\"https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png\" width=\"300\" alt=\"cognitiveclass.ai logo\" />\n</center>\n",
"_____no_output_____"
],
[
"# **Hands-on Lab : Web Scraping**\n",
"_____no_output_____"
],
[
"Estimated time needed: **30 to 45** minutes\n",
"_____no_output_____"
],
[
"## Objectives\n",
"_____no_output_____"
],
[
"In this lab you will perform the following:\n",
"_____no_output_____"
],
[
"* Extract information from a given web site\n* Write the scraped data into a csv file.\n",
"_____no_output_____"
],
[
"## Extract information from the given web site\n\nYou will extract the data from the below web site: <br>\n",
"_____no_output_____"
]
],
[
[
"#this url contains the data you need to scrape\nurl = \"https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DA0321EN-SkillsNetwork/labs/datasets/Programming_Languages.html\"",
"_____no_output_____"
]
],
[
[
"The data you need to scrape is the **name of the programming language** and **average annual salary**.<br> It is a good idea to open the url in your web broswer and study the contents of the web page before you start to scrape.\n",
"_____no_output_____"
],
[
"Import the required libraries\n",
"_____no_output_____"
]
],
[
[
"# Your code here\nfrom bs4 import BeautifulSoup\nimport requests\nimport pandas as pd",
"_____no_output_____"
]
],
[
[
"Download the webpage at the url\n",
"_____no_output_____"
]
],
[
[
"#your code goes here\ndata = requests.get(url).text",
"_____no_output_____"
]
],
[
[
"Create a soup object\n",
"_____no_output_____"
]
],
[
[
"#your code goes here\nsoup = BeautifulSoup(data, 'html5lib')",
"_____no_output_____"
]
],
[
[
"Scrape the `Language name` and `annual average salary`.\n",
"_____no_output_____"
]
],
[
[
"#your code goes here\nlang_data = pd.DataFrame(columns=['Language', 'Avg_Salary'])\ntable = soup.find('table')\nfor row in table.find_all('tr'):\n \n cols = row.find_all('td')\n lang_name = cols[1].getText()\n avg_salary = cols[3].getText()\n lang_data = lang_data.append({\"Language\":lang_name, \"Avg_Salary\":avg_salary}, ignore_index=True)\n #print(\"{}----------{}\".format(lang_name, avg_salary))",
"_____no_output_____"
]
],
[
[
"Save the scrapped data into a file named *popular-languages.csv*\n",
"_____no_output_____"
]
],
[
[
"# your code goes here\n#Drop the first row\n#lang_data.drop(0, axis=0, inplace=True)\nlang_data.to_csv('popular-languages.csv', index=False)",
"_____no_output_____"
]
],
[
[
"## Authors\n",
"_____no_output_____"
],
[
"Ramesh Sannareddy\n",
"_____no_output_____"
],
[
"### Other Contributors\n",
"_____no_output_____"
],
[
"Rav Ahuja\n",
"_____no_output_____"
],
[
"## Change Log\n",
"_____no_output_____"
],
[
"| Date (YYYY-MM-DD) | Version | Changed By | Change Description |\n| ----------------- | ------- | ----------------- | ---------------------------------- |\n| 2020-10-17 | 0.1 | Ramesh Sannareddy | Created initial version of the lab |\n",
"_____no_output_____"
],
[
"Copyright © 2020 IBM Corporation. This notebook and its source code are released under the terms of the [MIT License](https://cognitiveclass.ai/mit-license/?utm_medium=Exinfluencer\\&utm_source=Exinfluencer\\&utm_content=000026UJ\\&utm_term=10006555\\&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDA0321ENSkillsNetwork21426264-2021-01-01).\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0411f65ced5e6ac3f7e4f96ba0015689ccf020a | 44,223 | ipynb | Jupyter Notebook | examples/getting_started/2_Pipeline.ipynb | odidev/datashader | 0091d0ac48b6dd5c8a9203e1f123822aaa57bfff | [
"BSD-3-Clause"
] | null | null | null | examples/getting_started/2_Pipeline.ipynb | odidev/datashader | 0091d0ac48b6dd5c8a9203e1f123822aaa57bfff | [
"BSD-3-Clause"
] | 3 | 2020-04-23T08:51:10.000Z | 2020-05-26T10:45:44.000Z | examples/getting_started/2_Pipeline.ipynb | odidev/datashader | 0091d0ac48b6dd5c8a9203e1f123822aaa57bfff | [
"BSD-3-Clause"
] | 3 | 2020-01-07T14:48:46.000Z | 2020-01-09T13:56:02.000Z | 61.763966 | 1,200 | 0.668657 | [
[
[
"Datashader provides a flexible series of processing stages that map from raw data into viewable images. As shown in the [Introduction](1-Introduction.ipynb), using datashader can be as simple as calling ``datashade()``, but understanding each of these stages will help you get the most out of the library. \n\nThe stages in a datashader pipeline are similar to those in a [3D graphics shading pipeline](https://en.wikipedia.org/wiki/Graphics_pipeline):\n\n\n\nHere the computational steps are listed across the top of the diagram, while the data structures or objects are listed along the bottom. Breaking up the computations in this way is what makes Datashader able to handle arbitrarily large datasets, because only one stage (Aggregation) requires access to the entire dataset. The remaining stages use a fixed-sized data structure regardless of the input dataset, allowing you to use any visualization or embedding methods you prefer without running into performance limitations.\n\nIn this notebook, we'll first put together a simple, artificial example to get some data, and then show how to configure and customize each of the data-processing stages involved:\n\n1. [Projection](#Projection)\n2. [Aggregation](#Aggregation)\n3. [Transformation](#Transformation)\n4. [Colormapping](#Colormapping)\n5. [Embedding](#Embedding)\n\n## Data\n\nFor an example, we'll construct a dataset made of five overlapping 2D Gaussian distributions with different σs (spatial scales). By default we'll have 10,000 datapoints from each category, but you should see sub-second response times even for 1 million datapoints per category if you increase `num`. ",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nfrom collections import OrderedDict as odict\n\nnum=10000\nnp.random.seed(1)\n\ndists = {cat: pd.DataFrame(odict([('x',np.random.normal(x,s,num)), \n ('y',np.random.normal(y,s,num)), \n ('val',val), \n ('cat',cat)])) \n for x, y, s, val, cat in \n [( 2, 2, 0.03, 10, \"d1\"), \n ( 2, -2, 0.10, 20, \"d2\"), \n ( -2, -2, 0.50, 30, \"d3\"), \n ( -2, 2, 1.00, 40, \"d4\"), \n ( 0, 0, 3.00, 50, \"d5\")] }\n\ndf = pd.concat(dists,ignore_index=True)\ndf[\"cat\"]=df[\"cat\"].astype(\"category\")",
"_____no_output_____"
]
],
[
[
"Datashader can work many different data objects provided by different data libraries depending on the type of data involved, such as columnar data in [Pandas](http://pandas.pydata.org) or [Dask](http://dask.pydata.org) dataframes, gridded multidimensional array data using [xarray](http://xarray.pydata.org), columnar data on GPUs using [cuDF](https://github.com/rapidsai/cudf), multidimensional arrays on GPUs using [CuPy](https://cupy.chainer.org/), and ragged arrays using [SpatialPandas](https://github.com/holoviz/spatialpandas) (see the [Performance User Guide](../10_Performance.ipynb) for a guide to selecting an appropriate library). Here, we're using a Pandas dataframe, with 50,000 rows by default:",
"_____no_output_____"
]
],
[
[
"df.tail()",
"_____no_output_____"
]
],
[
[
"To illustrate this dataset, we'll make a quick-and-dirty Datashader plot that dumps these x,y coordinates into an image:",
"_____no_output_____"
]
],
[
[
"import datashader as ds\nimport datashader.transfer_functions as tf\n\n%time tf.shade(ds.Canvas().points(df,'x','y'))",
"_____no_output_____"
]
],
[
[
"Without any special tweaking, datashader is able to reveal the overall shape of this distribution faithfully: four summed 2D normal distributions of different variances, arranged at the corners of a square, overlapping another very high-variance 2D normal distribution centered in the square. This immediately obvious structure makes a great starting point for exploring the data, and you can then customize each of the various stages involved as described below.\n\nOf course, this is just a static plot, and you can't see what the axes are, so we can instead embed this data into an interactive plot if we prefer:",
"_____no_output_____"
],
[
"Here, if you are running a live Python process, you can enable the \"wheel zoom\" tool on the right, zoom in anywhere in the distribution, and datashader will render a new image that shows the full distribution at that new location. If you are viewing this on a static web site, zooming will simply make the existing set of pixels larger, because this dynamic updating requires Python.\n\nNow that you can see the overall result, we'll unpack each of the steps in the Datashader pipeline and show how this image is constructed from the data.\n\n\n## Projection\n\nDatashader is designed to render datasets projected on to a 2D rectangular grid, eventually generating an image where each pixel corresponds to one cell in that grid. The ***Projection*** stage is primarily conceptual, as it consists of you deciding what you want to plot and how you want to plot it:\n\n- **Variables**: Select which variable you want to have on the *x* axis, and which one for the *y* axis. If those variables are not already columns in your dataframe (e.g. if you want to do a coordinate transformation), you'll need to create suitable columns mapping directly to *x* and *y* for use in the next step. For this example, the \"x\" and \"y\" columns are conveniently named `x` and `y` already, but any column name can be used for these axes.\n- **Ranges**: Decide what ranges of those values you want to map onto the scene. If you omit the ranges, datashader will calculate the ranges from the data values, but you will often wish to supply explicit ranges for three reasons:\n 1. Calculating the ranges requires a complete pass over the data, which takes nearly as much time as actually aggregating the data, so your plots will be about twice as fast if you specify the ranges.\n 2. Real-world datasets often have some outliers with invalid values, which can make it difficult to see the real data, so after your first plot you will often want to specify only the range that appears to have valid data.\n 3. Over the valid range of data, you will often be mainly interested in a specific region, allowing you to zoom in to that area (though with an interactive plot you can always do that as needed).\n- **Axis types**: Decide whether you want `'linear'` or `'log'` axes.\n- **Resolution**: Decide what size of aggregate array you are going to want. \n\nHere's an example of specifying a ``Canvas`` (a.k.a. \"Scene\") object for a 200x200-pixel image covering the range +/-8.0 on both axes:",
"_____no_output_____"
]
],
[
[
"canvas = ds.Canvas(plot_width=300, plot_height=300, \n x_range=(-8,8), y_range=(-8,8), \n x_axis_type='linear', y_axis_type='linear')",
"_____no_output_____"
]
],
[
[
"At this stage, no computation has actually been done -- the `canvas` object is a purely declarative, recording your preferences to be applied in the next stage. \n\n<!-- Need to move the Points/Lines/Rasters discussion into the section above once the API is rationalized, and rename Canvas to Scene. -->\n\n\n## Aggregation\n\n<!-- This section really belongs under Scene, above-->\n\nOnce a `Canvas` object has been specified, it can then be used to guide aggregating the data into a fixed-sized grid. Data is assumed to consist of a series of items, each of which has some visible representation (its rendering as a \"glyph\") that is combined with the representation of other items to produce an aggregate representation of the whole set of items in the rectangular grid. The available glyph types for representing a data item are currently:\n - **Canvas.points**: each data item is a coordinate location (an x,y pair), mapping into the single closest grid cell to that datapoint's location.\n - **Canvas.line**: each data item is a coordinate location, mapping into every grid cell falling between this point's location and the next in a straight line segment.\n - **Canvas.area**: each data item is a coordinate location, rendered as a shape filling the axis-aligned area between this point, the next point, and a baseline (e.g. zero, filling the area between a line and a base).\n - **Canvas.trimesh**: each data item is a triple of coordinate locations specifying a triangle, filling in the region bounded by that triangle.\n - **Canvas.polygons**: each data item is a sequence of coordinate locations specifying a polygon, filling in the region bounded by that polygon (minus holes if specified separately).\n - **Canvas.raster**: the collection of data items is an array specifying regularly spaced axis-aligned rectangles forming a regular grid; each cell in this array is rendered as a filled rectangle.\n - **Canvas.quadmesh**: the collection of data items is an array specifying irregularly spaced quadrilaterals forming a grid that is regular in the input space but can have arbitrary rectilinear or curvilinear shapes in the aggregate grid; each cell in this array is rendered as a filled quadrilateral.\n\nThese types are each covered in detail in the [User Guide](../user_guide/). Datashader can be extended to add additional types here and in each section below; see [Extending Datashader](../user_guide/9-Extending.ipynb) for more details. Many other plots like time series and network graphs can be constructed out of these basic primitives.\n\n\n<!-- (to here) -->",
"_____no_output_____"
],
[
"### Reductions\n\nOne you have determined your mapping, you'll next need to choose a reduction operator to use when aggregating multiple datapoints into a given pixel. For points, each datapoint is mapped into a single pixel, while the other glyphs have spatial extent and can thus map into multiple pixels, each of which operates the same way. All glyphs act like points if the entire glyph is contained within that pixel. Here we will talk only about \"datapoints\" for simplicity, which for an area-based glyph should be interpreted as \"the part of that glyph that falls into this pixel\".\n\nAll of the currently supported reduction operators are incremental, which means that we can efficiently process datasets in a single pass. Given an aggregate bin to update (typically corresponding to one eventual pixel) and a new datapoint, the reduction operator updates the state of the bin in some way. (Actually, datapoints are normally processed in batches for efficiency, but it's simplest to think about the operator as being applied per data point, and the mathematical result should be the same.) A large number of useful [reduction operators]((https://datashader.org/api.html#reductions) are supplied in `ds.reductions`, including:\n\n**`count(column=None)`**:\n increment an integer count each time a datapoint maps to this bin. The resulting aggregate array will be an unsigned integer type, allowing counts to be distinguished from the other types that are normally floating point.\n \n**`any(column=None)`**:\n the bin is set to 1 if any datapoint maps to it, and 0 otherwise.\n \n**`sum(column)`**:\n add the value of the given column for this datapoint to a running total for this bin.\n \n**`by(column, reduction)`**:\n given a bin with categorical data (i.e., [Pandas' `categorical` datatype](https://pandas-docs.github.io/pandas-docs-travis/categorical.html)), aggregate each category separately, accumulating the given datapoint in an appropriate category within this bin. These categories can later be collapsed into a single aggregate if needed; see examples below.\n \n**`summary(name1=op1,name2=op2,...)`**:\n allows multiple reduction operators to be computed in a single pass over the data; just provide a name for each resulting aggregate and the corresponding reduction operator to use when creating that aggregate. If multiple aggregates are needed for the same dataset and the same Canvas, using `summary` will generally be much more efficient than making multiple separate passes over the dataset.\n \nThe API documentation contains the complete list of [reduction operators]((https://datashader.org/api.html#reductions) provided, including `mean`, `min`, `max`, `var` (variance), `std` (standard deviation). The reductions are also imported into the ``datashader`` namespace for convenience, so that they can be accessed like ``ds.mean()`` here.\n\nFor the operators above, those accepting a `column` argument will only do the operation if the value of that column for this datapoint is not `NaN`. E.g. `count` with a column specified will count the datapoints having non-`NaN` values for that column.\n\nOnce you have selected your reduction operator, you can compute the aggregation for each pixel-sized aggregate bin:",
"_____no_output_____"
]
],
[
[
"canvas.points(df, 'x', 'y', agg=ds.count())",
"_____no_output_____"
]
],
[
[
"The result of will be an [xarray](http://xarray.pydata.org) `DataArray` data structure containing the bin values (typically one value per bin, but more for multiple category or multiple-aggregate operators) along with axis range and type information.\n\nWe can visualize this array in many different ways by customizing the pipeline stages described in the following sections, but for now we'll use HoloViews to render images using the default parameters to show the effects of a few different aggregate operators:",
"_____no_output_____"
]
],
[
[
"tf.Images(tf.shade( canvas.points(df,'x','y', ds.count()), name=\"count()\"),\n tf.shade( canvas.points(df,'x','y', ds.any()), name=\"any()\"),\n tf.shade( canvas.points(df,'x','y', ds.mean('y')), name=\"mean('y')\"),\n tf.shade(50-canvas.points(df,'x','y', ds.mean('val')), name=\"50- mean('val')\"))",
"_____no_output_____"
]
],
[
[
"Here ``count()`` renders each bin's count in a different color, to show the true distribution, while ``any()`` turns on a pixel if any point lands in that bin, and ``mean('y')`` averages the `y` column for every datapoint that falls in that bin. Of course, since ever datapoint falling into a bin happens to have the same `y` value, the mean reduction with `y` simply scales each pixel by its `y` location. \n\nFor the last image above, we specified that the `val` column should be used for the `mean` reduction, which in this case results in each category being assigned a different color, because in our dataset all items in the same category happen to have the same `val`. Here we also manipulated the result of the aggregation before displaying it by subtracting it from 50, as detailed in the next section.\n\n\n\n## Transformation\n\nNow that the data has been projected and aggregated into a gridded data structure, it can be processed in any way you like, before converting it to an image as will be described in the following section. At this stage, the data is still stored as bin data, not pixels, which makes a very wide variety of operations and transformations simple to express. \n\nFor instance, instead of plotting all the data, we can easily plot only those bins in the 99th percentile by count (left), or apply any [NumPy ufunc](http://docs.scipy.org/doc/numpy/reference/ufuncs.html) to the bin values (whether or not it makes any sense!):",
"_____no_output_____"
]
],
[
[
"agg = canvas.points(df, 'x', 'y')\n\ntf.Images(tf.shade(agg.where(agg>=np.percentile(agg,99)), name=\"99th Percentile\"),\n tf.shade(np.power(agg,2), name=\"Numpy square ufunc\"),\n tf.shade(np.sin(agg), name=\"Numpy sin ufunc\"))",
"_____no_output_____"
]
],
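[
[
"Separately, the `summary` reduction listed earlier computes several aggregates in a single pass over the data. Here is a minimal sketch (assuming the same `df` and `canvas` as above; the names `counts` and `mean_val` are arbitrary labels chosen for this example, not part of the API):",
"_____no_output_____"
]
],
[
[
"# compute two aggregates in one pass, then shade each resulting array.\nmulti = canvas.points(df, 'x', 'y', agg=ds.summary(counts=ds.count(), mean_val=ds.mean('val')))\n\ntf.Images(tf.shade(multi['counts'], name=\"counts\"),\n tf.shade(multi['mean_val'], name=\"mean_val\"))",
"_____no_output_____"
]
],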
[
[
"The [xarray documentation](http://xarray.pydata.org/en/stable/computation.html) describes all the various transformations you can apply from within xarray, and of course you can always extract the data values and operate on them outside of xarray for any transformation not directly supported by xarray, then construct a suitable xarray object for use in the following stage. Once the data is in the aggregate array, you generally don't have to worry much about optimization, because it's a fixed-sized grid regardless of your data size, and so it is very straightforward to apply arbitrary transformations to the aggregates.\n\nThe above examples focus on a single aggregate, but there are many ways that you can use multiple data values per bin as well. For instance, you can apply any aggregation \"categorically\", aggregating `by` some categorical value so that datapoints for each unique value are aggregated independently:",
"_____no_output_____"
]
],
[
[
"aggc = canvas.points(df, 'x', 'y', ds.by('cat', ds.count()))\naggc",
"_____no_output_____"
]
],
[
[
"Here the `count()` aggregate has been collected into not just a 2D aggregate array, but a whole stack of aggregate arrays, one per `cat` value, making the aggregate be three dimensional (x,y,cat) rather than just two (x,y). With this 3D aggregate of counts per category, you can then select a specific category or subset of them for further processing, where `.sum(dim='cat')` will collapse across such a subset to give a single aggregate array:",
"_____no_output_____"
]
],
[
[
"agg_d3_d5=aggc.sel(cat=['d3', 'd5']).sum(dim='cat')\n\ntf.Images(tf.shade(aggc.sel(cat='d3'), name=\"Category d3\"),\n tf.shade(agg_d3_d5, name=\"Categories d3 and d5\")) ",
"_____no_output_____"
]
],
[
[
"You can also combine multiple aggregates however you like, as long as they were all constructed using the same Canvas object (which ensures that their aggregate arrays are the same size) and cover the same axis ranges:",
"_____no_output_____"
]
],
[
[
"tf.Images(tf.shade(agg_d3_d5.where(aggc.sel(cat='d3') == aggc.sel(cat='d5')), name=\"d3+d5 where d3==d5\"),\n tf.shade( agg.where(aggc.sel(cat='d3') == aggc.sel(cat='d5')), name=\"d1+d2+d3+d4+d5 where d3==d5\"))",
"_____no_output_____"
]
],
[
[
"The above two results are using the same mask (only those bins `where` the counts for 'd3' and 'd5' are equal), but applied to different aggregates (either just the `d3` and `d5` categories, or the entire set of counts).\n\n## Colormapping\n\nAs you can see above, the usual way to visualize an aggregate array is to map from each array bin into a color for a corresponding pixel in an image. The above examples use the `tf.shade()` method, which maps a scalar aggregate bin value into an RGB (color) triple and an alpha (opacity) value. By default, the colors are chosen from the colormap ['lightblue','darkblue'] (i.e., `#ADD8E6` to `#00008B`), with intermediate colors chosen as a linear interpolation independently for the red, green, and blue color channels (e.g. `AD` to `00` for the red channel, in this case). The alpha (opacity) value is set to 0 for empty bins and 1 for non-empty bins, allowing the page background to show through wherever there is no data. You can supply any colormap you like, including Bokeh palettes, Matplotlib colormaps, or a list of colors (using the color names from `ds.colors`, integer triples, or hexadecimal strings):",
"_____no_output_____"
]
],
[
[
"from bokeh.palettes import RdBu9\ntf.Images(tf.shade(agg,cmap=[\"darkred\", \"yellow\"], name=\"darkred, yellow\"),\n tf.shade(agg,cmap=[(230,230,0), \"orangered\", \"#300030\"], name=\"yellow, orange red, dark purple\"), \n tf.shade(agg,cmap=list(RdBu9), name=\"Bokeh RdBu9\"),\n tf.shade(agg,cmap=\"black\", name=\"Black\"))",
"_____no_output_____"
]
],
[
[
"As a special case (\"Black\", above), if you supply only a single color, the color will be kept constant at the given value but the alpha (opacity) channel will vary with the data.",
"_____no_output_____"
],
[
"#### Colormapping categorical data\n\nIf you want to use `tf.shade` with a categorical aggregate, you can use a colormap just as for a non-categorical aggregate if you first select a single category using something like `aggc.sel(cat='d3')` or else collapse all categories into a single aggregate using something like `aggc.sum(dim='cat')`. \n\nIf you want to visualize all the categories in one image, you can use `tf.shade` with the categorical aggregate directly, which will assign a color to each category and then calculate the transparency and color of each pixel according to each category's contribution to that pixel:",
"_____no_output_____"
]
],
[
[
"color_key = dict(d1='blue', d2='green', d3='red', d4='orange', d5='purple')\n\ntf.Images(tf.shade(aggc, name=\"Default color key\"),\n tf.shade(aggc, color_key=color_key, name=\"Custom color key\"))",
"_____no_output_____"
]
],
[
[
"Here the different colors mix not just visually due to blurring, but are actually mixed mathematically per pixel, with pixels that include data from multiple categories taking intermediate color values. The total (summed) data values across all categories are used to calculate the alpha channel, with the previously computed color being revealed to a greater or lesser extent depending on the value of the aggregate for that bin. See [Colormapping with negative values](#Colormapping-with-negative-values) below for more details on how these colors and transparencies are calculated.",
"_____no_output_____"
],
[
"The default color key for categorical data provides distinguishable colors for a couple of dozen categories, but you can provide an explicit color_key if you prefer. Choosing colors for different categories is more of an art than a science, because the colors not only need to be distinguishable, their combinations also need to be distinguishable if those categories ever overlap in nearby pixels, or else the results will be ambiguous. In practice, only a few categories can be reliably distinguished in this way, but [zooming in](3_Interactivity.ipynb) can be used to help disambiguate overlapping colors, as long as the basic set of colors is itself distinguishable.",
"_____no_output_____"
],
[
"#### Transforming data values for colormapping\n\nIn each of the above examples, you may have noticed that we were never required to specify any parameters about the data values; the plots just appear like magic. That magic is implemented in `tf.shade`. What `tf.shade` does for a 2D aggregate (non-categorical) is:\n\n1. **Mask** out all bins with a `NaN` value (for floating-point arrays) or a zero value (for the unsigned integer arrays that are returned from `count`); these bins will not have any effect on subsequent computations. \n\n2. **Transform** the bin values using a specified scalar function `how`. Calculates the value of that function for the difference between each bin value and the minimum non-masked bin value. E.g. for `how=\"linear\"`, simply returns the difference unchanged. Other `how` functions are discussed below.\n\n3. **Map** the resulting transformed data array into the provided colormap. First finds the value span (*l*,*h*) for the resulting transformed data array -- what are the lowest and highest non-masked values? -- and then maps the range (*l*,*h*) into the full range of the colormap provided. If a colormap is used, masked values are given a fully transparent alpha value, and non-masked ones are given a fully opaque alpha value. If a single color is used, the alpha value starts at `min_alpha` and increases proportionally to the mapped data value up to the full `alpha` value.\n\nThe result is thus auto-ranged to show whatever data values are found in the aggregate bins, with the `span` argument (described below) allowing you to override the range explicitly if you need to.\n\nAs described in [Plotting Pitfalls](../user_guide/1_Plotting_Pitfalls.ipynb), auto-ranging is only part of what is required to reveal the structure of the dataset; it's also crucial to automatically and potentially nonlinearly map from the aggregate values (e.g. bin counts) into the colormap. If we used a linear mapping, we'd see very little of the structure of the data:",
"_____no_output_____"
]
],
[
[
"tf.shade(agg,how='linear')",
"_____no_output_____"
]
],
[
[
"In the linear version, you can see that the bins that have zero count show the background color, since they have been masked out using the alpha channel of the image, and that the rest of the pixels have been mapped to colors near the bottom of the colormap. If you peer closely at it, you may even be able to see that one pixel (from the smallest Gaussian) has been mapped to the highest color in the colormap (here dark blue). But no other structure is visible, because the highest-count bin is so much higher than all of the other bins:",
"_____no_output_____"
]
],
[
[
"top15=agg.values.flat[np.argpartition(agg.values.flat, -15)[-15:]]\nprint(sorted(top15))\nprint(sorted(np.round(top15*255.0/agg.values.max()).astype(int)))",
"_____no_output_____"
]
],
[
[
"I.e., if using a colormap with 255 colors, the largest bin (`agg.values.max()`) is mapped to the highest color, but with a linear scale all of the other bins map to only the first 24 colors, leaving all intermediate colors unused. If we want to see any structure for these intermediate ranges, we need to transform these numerical values somehow before displaying them. For instance, if we take the logarithm of these large values, they will be mapped into a more tractable range:",
"_____no_output_____"
]
],
[
[
"print(np.log1p(sorted(top15)))",
"_____no_output_____"
]
],
[
[
"So we can plot the logarithms of the values (``how='log'``, below), which is an arbitrary transform but is appropriate for many types of data. Alternatively, we can make a histogram of the numeric values, then assign a pixel color to each equal-sized histogram bin to ensure even usage of every displayable color (``how='eq_hist'``; see [plotting pitfalls](../user_guide/1_Plotting_Pitfalls.ipynb). We can even supply any arbitrary transformation to the colormapper as a callable, such as a twenty-third root:",
"_____no_output_____"
]
],
[
[
"tf.Images(tf.shade(agg,how='log', name=\"log\"),\n tf.shade(agg,how='eq_hist', name=\"eq_hist\"),\n tf.shade(agg,how=lambda d, m: np.where(m, np.nan, d)**(1/23.), name=\"23rd root\"))",
"_____no_output_____"
]
],
[
[
"Usually, however, such custom operations are done directly on the aggregate during the ***Transformation*** stage; the `how` operations are meant for simple, well-defined transformations solely for the final steps of visualization, which allows the main aggregate array to stay in the original units and scale in which it was measured. Using `how` also helps simplify the subsequent ***Embedding*** stage, letting it provide one of a fixed set of legend types, either linear (for `how=linear`), logarithmic (for `how=log`) or percentile (for `how=eq_hist`). See the [shade docs](https://datashader.org/api.html#datashader.transfer_functions.shade) for more details on the `how` functions. \n\nFor categorical aggregates, the `shade` function works similarly to providing a single color to a non-categorical aggregate, with the alpha (opacity) calculated from the total value across all categories (and the color calculated as a weighted mixture of the colors for each category).",
"_____no_output_____"
],
[
"#### Controlling ranges for colormapping\n\nBy default, `shade` will autorange on the aggregate array, mapping the lowest and highest values of the aggregate array into the lowest and highest values of the colormap (or the available alpha values, for single colors). You can instead focus on a specific `span` of the aggregate data values, mapping that span into the available colors or the available alpha values:",
"_____no_output_____"
]
],
[
[
"tf.Images(tf.shade(agg,cmap=[\"grey\", \"blue\"], name=\"gb 0 20\", span=[0,20], how=\"linear\"),\n tf.shade(agg,cmap=[\"grey\", \"blue\"], name=\"gb 50 200\", span=[50,200], how=\"linear\"),\n tf.shade(agg,cmap=\"green\", name=\"Green 10 20\", span=[10,20], how=\"linear\"))",
"_____no_output_____"
]
],
[
[
"On the left, all counts above 20 are mapped to the highest value in the colormap (blue in this case), losing the ability to distinguish between values above 20 but providing the maximum color precision for the specific range 0 to 20. In the middle, all values 0 to 50 map to the first color in the colormap (grey in this case), and the colors are then linearly interpolated up to 200, with all values 200 and above mapping to the highest value in the colormap (blue in this case). With the single color mapping to alpha on the right, counts up to 10 are all mapped to `min_alpha`, counts 20 and above are all mapped to the specified `alpha` (255 in this case), and alpha is scaled linearly in between.",
"_____no_output_____"
],
[
"For plots that scale with alpha (i.e., categorical or single-color non-categorical plots), you can control the range of alpha values generated by setting `min_alpha` (lower bound) and `alpha` (upper bound), on a scale 0 to 255):",
"_____no_output_____"
]
],
[
[
"tf.Images(tf.shade(agg,cmap=\"green\", name=\"Green\"),\n tf.shade(agg,cmap=\"green\", name=\"No min_alpha\", min_alpha=0), \n tf.shade(agg,cmap=\"green\", name=\"Small alpha range\", min_alpha=50, alpha=80))",
"_____no_output_____"
]
],
[
[
"Here you can see that the faintest pixels are more visible with the default `min_alpha` (normally 40, left) than if you explicitly set the `min_alpha=0` (middle), which is why the `min_alpha` default is non-zero; otherwise low values would be indistinguishable from the background (see [Plotting Pitfalls](../user_guide/1_Plotting_Pitfalls.ipynb)).\n\nYou can combine `span` and `alpha` ranges to specifically control the data value range that maps to an opacity range, for single-color and categorical plotting:",
"_____no_output_____"
]
],
[
[
"tf.Images(tf.shade(agg,cmap=\"green\", name=\"g 0,20\", span=[ 0,20], how=\"linear\"),\n tf.shade(agg,cmap=\"green\", name=\"g 10,20\", span=[10,20], how=\"linear\"),\n tf.shade(agg,cmap=\"green\", name=\"g 10,20 0\", span=[10,20], how=\"linear\", min_alpha=0))",
"_____no_output_____"
],
[
"tf.Images(tf.shade(aggc, name=\"eq_hist\"),\n tf.shade(aggc, name=\"linear\", how='linear'),\n tf.shade(aggc, name=\"span 0,10\", how='linear', span=(0,10)), \n tf.shade(aggc, name=\"span 0,10\", how='linear', span=(0,20), min_alpha=0))",
"_____no_output_____"
]
],
[
[
"The categorical examples above focus on counts, but `ds.by` works on other aggregate types as well, colorizing by category but aggregating by sum, mean, etc. (but see the [following section](#Colormapping-with-negative-values) for details on how to interpret such colors):",
"_____no_output_____"
]
],
[
[
"agg_c = canvas.points(df,'x','y', ds.by('cat', ds.count()))\nagg_s = canvas.points(df,'x','y', ds.by(\"cat\", ds.sum(\"val\")))\nagg_m = canvas.points(df,'x','y', ds.by(\"cat\", ds.mean(\"val\")))\n\ntf.Images(tf.shade(agg_c), tf.shade(agg_s), tf.shade(agg_m))",
"_____no_output_____"
]
],
[
[
"#### Colormapping with negative values\n\nThe above examples all use positive data values to avoid confusion when there is no colorbar or other explicit indication of a z (color) axis range. Negative values are also supported, in which case for a non-categorical plot you should normally use a [diverging colormap](https://colorcet.holoviz.org/user_guide/Continuous.html#Diverging-colormaps,-for-plotting-magnitudes-increasing-or-decreasing-from-a-central-point:):",
"_____no_output_____"
]
],
[
[
"from colorcet import coolwarm, CET_D8\ndfn = df.copy()\ndfn.val.replace({20:-20, 30:0, 40:-40}, inplace=True)\naggn = ds.Canvas().points(dfn,'x','y', agg=ds.mean(\"val\"))\n\ntf.Images(tf.shade(aggn, name=\"Sequential\", cmap=[\"lightblue\",\"blue\"], how=\"linear\"),\n tf.shade(aggn, name=\"DivergingW\", cmap=coolwarm[::-1], span=(-50,50), how=\"linear\"),\n tf.shade(aggn, name=\"DivergingB\", cmap=CET_D8[::-1], span=(-50,50), how=\"linear\"))",
"_____no_output_____"
]
],
[
[
"In both of the above plots, values with no data are transparent as usual, showing white. With a sequential lightblue to blue colormap, increasing `val` numeric values are mapped to the colormap in order, with the smallest values (-40; large blob in the top left) getting the lowest color value (lightblue), less negative values (-20, blob in the bottom right) getting an intermediate color, and the largest average values (50, large distribution in the background) getting the highest color. Looking at such a plot, viewers have no easy way to determine which values are negative. Using a diverging colormap (right two plots) and forcing the span to be symmetric around zero ensures that negative values are plotted in one color range (reds) and positive are plotted in a clearly different range (blues). Note that when using a diverging colormap with transparent values, you should carefully consider what you want to happen around the zero point; here values with nearly zero average (blob in bottom left) disappear when using a white-centered diverging map (\"coolwarm\"), while they show up but in a neutral color when using a diverging map with a contrasting central color (\"CET_D8\").\n\nFor categorical plots of values that can be negative, the results are often quite difficult to interpret, for the same reason as for the Sequential case above:",
"_____no_output_____"
]
],
[
[
"agg_c = canvas.points(dfn,'x','y', ds.by('cat', ds.count()))\nagg_s = canvas.points(dfn,'x','y', ds.by(\"cat\", ds.sum(\"val\")))\nagg_m = canvas.points(dfn,'x','y', ds.by(\"cat\", ds.mean(\"val\")))\n\ntf.Images(tf.shade(agg_c, name=\"count\"), \n tf.shade(agg_s, name=\"sum\"), \n tf.shade(agg_s, name=\"sum baseline=0\", color_baseline=0))",
"_____no_output_____"
]
],
[
[
"Here a `count` aggregate ignores the negative values and thus works the same as when values were positive, but `sum` and other aggregates like `mean` take the negative values into account. By default, a pixel with the lowest value (whether negative or positive) maps to `min_alpha`, and the highest maps to `alpha`. The color is determined by how different each category's value is from the minimum value across all categories; categories with high values relative to the minimum contribute more to the color. There is not currently any way to tell which data values are positive or negative, as you can using a diverging colormap in the non-categorical case.\n\nInstead of using the default of the data minimum, you can pass a specific `color_baseline`, which is appropriate if your data has a well-defined reference value such as zero. Here, when we pass `color_baseline=0` the negative values are essentially ignored for color calculations, which can be seen on the green blob, where any orange data point is fully orange despite the presence of green-category datapoints; the middle plot `sum` shows a more appropriate color mixture in that case.",
"_____no_output_____"
],
[
"#### Spreading\n\nOnce an image has been created, it can be further transformed with a set of functions from `ds.transfer_functions`.\n\nFor instance, because it can be difficult to see individual dots, particularly for zoomed-in plots, you can transform the image to replace each non-transparent pixel with a shape, such as a circle (default) or square. This process is called spreading:",
"_____no_output_____"
]
],
[
[
"img = tf.shade(aggc, name=\"Original image\")\n \ntf.Images(img,\n tf.spread(img, name=\"spread 1px\"),\n tf.spread(img, px=2, name=\"spread 2px\"),\n tf.spread(img, px=3, shape='square', name=\"spread square\"))",
"_____no_output_____"
]
],
[
[
"As you can see, spreading is very effective for isolated datapoints, which is what it's normally used for, but it has overplotting-like effects for closely spaced points like in the green and purple regions above, and so it would not normally be used when the datapoints are dense.\n\nSpreading can be used with a custom mask, as long as it is square and an odd width and height (so that it will be centered over the original pixel):",
"_____no_output_____"
]
],
[
[
"mask = np.array([[1, 1, 1, 1, 1],\n [1, 0, 0, 0, 1],\n [1, 0, 0, 0, 1],\n [1, 0, 0, 0, 1],\n [1, 1, 1, 1, 1]])\n\ntf.spread(img, mask=mask)",
"_____no_output_____"
]
],
[
[
"To support interactive zooming, where spreading would be needed only in sparse regions of the dataset, we provide the dynspread function. `dynspread` will dynamically calculate the spreading size to use by counting the fraction of non-masked bins that have non-masked neighbors; see the\n[dynspread docs](https://datashader.org/api.html#datashader.transfer_functions.dynspread) for more details.\n\n\n#### Other image transfer_functions\n\nOther useful image operations are also provided, such as setting the background color or combining images:",
"_____no_output_____"
]
],
[
[
"tf.Images(tf.set_background(img,\"black\", name=\"Black bg\"),\n tf.stack(img,tf.shade(aggc.sel(cat=['d2', 'd3']).sum(dim='cat')), name=\"Sum d2 and d3 colors\"),\n tf.stack(img,tf.shade(aggc.sel(cat=['d2', 'd3']).sum(dim='cat')), how='saturate', name=\"d2+d3 saturated\")) ",
"_____no_output_____"
]
],
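[
[
"As a quick illustration of the `dynspread` function mentioned above, here is a minimal sketch applied to the earlier `img`; the `threshold` and `max_px` values are arbitrary choices for this example, not recommendations:",
"_____no_output_____"
]
],
[
[
"# dynamically choose a spreading amount based on how isolated the pixels are.\ntf.Images(img, tf.dynspread(img, threshold=0.5, max_px=4))",
"_____no_output_____"
]
],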
[
[
"See [the API docs](https://datashader.org/api.html#transfer-functions) for more details. Image composition operators to provide for the `how` argument of `tf.stack` (e.g. `over` (default), `source`, `add`, and `saturate`) are listed in [composite.py](https://raw.githubusercontent.com/holoviz/datashader/master/datashader/composite.py) and illustrated [here](http://cairographics.org/operators).\n\n## Moving on\n\nThe steps outlined above represent a complete pipeline from data to images, which is one way to use Datashader. However, in practice one will usually want to add one last additional step, which is to embed these images into a plotting program to be able to get axes, legends, interactive zooming and panning, etc. The [next notebook](3_Interactivity.ipynb) shows how to do such embedding.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d04124d4ae07617b765f3946744f5c6b62480abe | 41,228 | ipynb | Jupyter Notebook | module1/1.Assignment/1.Assignment_AppliedModeling_Module1.ipynb | CVanchieri/DS-Unit2-Sprint3-AppliedModeling | 4714cd731ab2e9bb93fe86cc8dda55aed0077315 | [
"MIT"
] | null | null | null | module1/1.Assignment/1.Assignment_AppliedModeling_Module1.ipynb | CVanchieri/DS-Unit2-Sprint3-AppliedModeling | 4714cd731ab2e9bb93fe86cc8dda55aed0077315 | [
"MIT"
] | null | null | null | module1/1.Assignment/1.Assignment_AppliedModeling_Module1.ipynb | CVanchieri/DS-Unit2-Sprint3-AppliedModeling | 4714cd731ab2e9bb93fe86cc8dda55aed0077315 | [
"MIT"
] | null | null | null | 41,228 | 41,228 | 0.512419 | [
[
[
"Lambda School Data Science, Unit 2: Predictive Modeling\n\n# Applied Modeling, Module 1\n\nYou will use your portfolio project dataset for all assignments this sprint.\n\n## Assignment\n\nComplete these tasks for your project, and document your decisions.\n\n- [ ] Choose your target. Which column in your tabular dataset will you predict?\n- [ ] Choose which observations you will use to train, validate, and test your model. And which observations, if any, to exclude.\n- [ ] Determine whether your problem is regression or classification.\n- [ ] Choose your evaluation metric.\n- [ ] Begin with baselines: majority class baseline for classification, or mean baseline for regression, with your metric of choice.\n- [ ] Begin to clean and explore your data.\n- [ ] Begin to choose which features, if any, to exclude. Would some features \"leak\" information from the future?\n\n## Reading\n\n### ROC AUC\n- [Machine Learning Meets Economics](http://blog.mldb.ai/blog/posts/2016/01/ml-meets-economics/)\n- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)\n- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)\n\n### Imbalanced Classes\n- [imbalance-learn](https://github.com/scikit-learn-contrib/imbalanced-learn)\n- [Learning from Imbalanced Classes](https://www.svds.com/tbt-learning-imbalanced-classes/)\n\n### Last lesson\n- [Attacking discrimination with smarter machine learning](https://research.google.com/bigpicture/attacking-discrimination-in-ml/), by Google Research, with interactive visualizations. _\"A threshold classifier essentially makes a yes/no decision, putting things in one category or another. We look at how these classifiers work, ways they can potentially be unfair, and how you might turn an unfair classifier into a fairer one. As an illustrative example, we focus on loan granting scenarios where a bank may grant or deny a loan based on a single, automatically computed number such as a credit score.\"_\n- [How Shopify Capital Uses Quantile Regression To Help Merchants Succeed](https://engineering.shopify.com/blogs/engineering/how-shopify-uses-machine-learning-to-help-our-merchants-grow-their-business)\n- [Maximizing Scarce Maintenance Resources with Data: Applying predictive modeling, precision at k, and clustering to optimize impact](https://towardsdatascience.com/maximizing-scarce-maintenance-resources-with-data-8f3491133050), by Lambda DS3 student Michael Brady. His blog post extends the Tanzania Waterpumps scenario, far beyond what's in the lecture notebook.\n- [Notebook about how to calculate expected value from a confusion matrix by treating it as a cost-benefit matrix](https://github.com/podopie/DAT18NYC/blob/master/classes/13-expected_value_cost_benefit_analysis.ipynb)\n- [Simple guide to confusion matrix terminology](https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/) by Kevin Markham, with video\n- [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415)",
"_____no_output_____"
],
[
"# Import the final Liverpool Football Club data file.",
"_____no_output_____"
]
],
[
[
"# import pandas library as pd.\nimport pandas as pd \n\n# read in the LiverpoolFootballClub_all csv file.\nLPFC = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/LSDS-DataSets/master/EnglishPremierLeagueData/LiverpoolFootballClubData_EPL.csv')\n# show the data frame shape.\nprint(LPFC.shape)\n# show the data frame with headers.\nLPFC.head()",
"(1003, 161)\n"
]
],
[
[
"## Organizing & cleaning.",
"_____no_output_____"
]
],
[
[
"# group the columns we want to use.\ncolumns = [\"Div\", \"Date\", \"HomeTeam\", \"AwayTeam\", \"FTHG\", \"FTAG\", \"FTR\", \n \"HTHG\", \"HTAG\", \"HTR\", \"HS\", \"AS\", \"HST\", \"AST\", \"HHW\", \"AHW\", \n \"HC\", \"AC\", \"HF\", \"AF\", \"HO\", \"AO\", \"HY\", \"AY\", \"HR\", \"AR\", \"HBP\", \"ABP\"]\n# create a new data frame with just the grouped columns.\nLPFC = LPFC[columns]\n# show the data frame shape.\nprint(LPFC.shape)\n# show the data frame with headers.\nLPFC.head()",
"(1003, 28)\n"
],
[
"# relableing columns for better understanding.\nLPFC = LPFC.rename(columns={\"Div\": \"Division\", \"Date\": \"GameDate\", \"FTHG\": \"FullTimeHomeGoals\", \"FTAG\": \"FullTimeAwayGoals\", \"FTR\": \"FullTimeResult\", \"HTHG\": \"HalfTimeHomeGoals\", \n \"HTAG\": \"HalfTimeAwayGoals\", \"HTR\": \"HalfTimeResult\", \"HS\": \"HomeShots\", \"AS\": \"AwayShots\", \n \"HST\": \"HomeShotsOnTarget\", \"AST\": \"AwayShotsOnTarget\", \"HHW\": \"HomeShotsHitFrame\", \n \"AHW\": \"AwayShotsHitFrame\", \"HC\": \"HomeCorners\", \"AC\": \"AwayCorners\", \"HF\": \"HomeFouls\", \n \"AF\": \"AwayFouls\", \"HO\": \"HomeOffSides\", \"AO\": \"AwayOffSides\", \"HY\": \"HomeYellowCards\", \n \"AY\": \"AwayYellowCards\", \"HR\": \"HomeRedCards\", \"AR\": \"AwayRedCards\", \"HBP\": \"HomeBookingPoints_Y5_R10\", \n \"ABP\": \"AwayBookingPoints_Y5_R10\"})\n# show the data frame with headers.\nLPFC.head()",
"_____no_output_____"
]
],
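[
[
"A quick exploration sketch (an illustrative addition; it assumes the renamed columns above): check for missing values and the balance of the target classes.",
"_____no_output_____"
]
],
[
[
"# check for missing values in each column.\nprint(LPFC.isnull().sum().sort_values(ascending=False).head())\n# check the class balance of the target column.\nprint(LPFC['FullTimeResult'].value_counts(normalize=True))",
"_____no_output_____"
]
],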
[
[
"## Baseline accuracy score.",
"_____no_output_____"
]
],
[
[
"# import accuracy_score from sklearn.metrics library.\nfrom sklearn.metrics import accuracy_score\n\n# determine 'majority class' baseline starting point for every prediction.\n# single out the target, 'FullTimeResult' column.\ntarget = LPFC['FullTimeResult']\n# create the majority class with setting the 'mode' on the target data.\nmajority_class = target.mode()[0]\n# create the y_pred data.\ny_pred = [majority_class] * len(target)\n# accuracy score for the majority class baseline = frequency of the majority class.\nac = accuracy_score(target, y_pred)\nprint(\"'Majority Baseline' Accuracy Score =\", ac)",
"'Majority Baseline' Accuracy Score = 0.4745762711864407\n"
]
],
[
[
"## Train/test split the data frame, train/val/test.",
"_____no_output_____"
]
],
[
[
"df = LPFC.copy()",
"_____no_output_____"
],
[
"target = 'FullTimeResult'\ny = df[target]",
"_____no_output_____"
],
[
"# import train_test_split from sklearn.model_selection library.\nfrom sklearn.model_selection import train_test_split\n\ntarget = ['FullTimeResult']\ny = df[target]\n\n# split data into train, test.\nX_train, X_val, y_train, y_val = train_test_split(df, y, test_size=0.20,\n stratify=y, random_state=42)\n# show the data frame shapes.\nprint(\"train =\", X_train.shape, y_train.shape, \"val =\", X_val.shape, y_val.shape)",
"train = (802, 28) (802, 1) val = (201, 28) (201, 1)\n"
]
],
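[
[
"The heading above mentions train/val/test, but the cell above only produces train and val sets. A minimal sketch of a three-way split (hypothetical variable names, not used in the cells below): first carve out a held-out test set, then split the remainder into train and val.",
"_____no_output_____"
]
],
[
[
"# carve out a 20% test set, then split the remainder 75/25 into train/val.\nX_trainval, X_test, y_trainval, y_test = train_test_split(df, y, test_size=0.20,\n stratify=y, random_state=42)\nX_train2, X_val2, y_train2, y_val2 = train_test_split(X_trainval, y_trainval, test_size=0.25,\n stratify=y_trainval, random_state=42)\nprint(\"train =\", X_train2.shape, \"val =\", X_val2.shape, \"test =\", X_test.shape)",
"_____no_output_____"
]
],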
[
[
"## LogisticREgression model.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom datetime import datetime\n\ndef wrangle(X):\n \"\"\"Wrangle train, validate, and test sets in the same way\"\"\"\n \n # prevent SettingWithCopyWarning with a copy.\n X = X.copy()\n \n\n # make 'GameDate' useable with datetime.\n X['GameDate'] = pd.to_datetime(X['GameDate'], infer_datetime_format=True) \n \n # create new columns for 'YearOfGame', 'MonthOfGame', 'DayOfGame'.\n X['YearOfGame'] = X['GameDate'].dt.year\n X['MonthOfGame'] = X['GameDate'].dt.month\n X['DayOfGame'] = X['GameDate'].dt.day\n \n # removing 'FullTimeHomeGoals', 'FullTimeAwayGoals' as they directly coorelated to the result.\n dropped_columns = ['FullTimeHomeGoals', 'FullTimeAwayGoals', 'Division', 'GameDate']\n X = X.drop(columns=dropped_columns)\n \n # return the wrangled dataframe\n return X\n\nX_train = wrangle(X_train)\nX_val = wrangle(X_val)",
"_____no_output_____"
],
[
"# create the target as status_group.\ntarget = 'FullTimeResult'\n# set the features, remove target and id column.\ntrain_features = X_train.drop(columns=[target])\n# group all the numeric features.\nnumeric_features = train_features.select_dtypes(include='number').columns.tolist()\n# group the cardinality of the nonnumeric features.\ncardinality = train_features.select_dtypes(exclude='number').nunique()\n# group all categorical features with cardinality <= 100.\ncategorical_features = cardinality[cardinality <= 500].index.tolist()\n# create features with numeric + categorical\nfeatures = numeric_features + categorical_features\n# create the new vaules with the new features/target data.\nX_train = X_train[features]\nX_val = X_val[features]",
"_____no_output_____"
],
[
"!pip install category_encoders",
"Collecting category_encoders\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/a0/52/c54191ad3782de633ea3d6ee3bb2837bda0cf3bc97644bb6375cf14150a0/category_encoders-2.1.0-py2.py3-none-any.whl (100kB)\n\r\u001b[K |███▎ | 10kB 16.1MB/s eta 0:00:01\r\u001b[K |██████▌ | 20kB 7.2MB/s eta 0:00:01\r\u001b[K |█████████▉ | 30kB 10.0MB/s eta 0:00:01\r\u001b[K |█████████████ | 40kB 6.4MB/s eta 0:00:01\r\u001b[K |████████████████▍ | 51kB 7.7MB/s eta 0:00:01\r\u001b[K |███████████████████▋ | 61kB 9.1MB/s eta 0:00:01\r\u001b[K |██████████████████████▉ | 71kB 10.3MB/s eta 0:00:01\r\u001b[K |██████████████████████████▏ | 81kB 11.6MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▍ | 92kB 12.8MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 102kB 9.2MB/s \n\u001b[?25hRequirement already satisfied: statsmodels>=0.6.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.10.1)\nRequirement already satisfied: patsy>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.5.1)\nRequirement already satisfied: numpy>=1.11.3 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (1.16.5)\nRequirement already satisfied: pandas>=0.21.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.24.2)\nRequirement already satisfied: scikit-learn>=0.20.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.21.3)\nRequirement already satisfied: scipy>=0.19.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (1.3.1)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from patsy>=0.4.1->category_encoders) (1.12.0)\nRequirement already satisfied: python-dateutil>=2.5.0 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders) (2.5.3)\nRequirement already satisfied: pytz>=2011k in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders) (2018.9)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.20.0->category_encoders) (0.14.0)\nInstalling collected packages: category-encoders\nSuccessfully installed category-encoders-2.1.0\n"
],
[
"import category_encoders as ce\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler\n\npipeline = make_pipeline(\n ce.OneHotEncoder(use_cat_names=True), \n SimpleImputer(strategy='median'), \n StandardScaler(),\n LogisticRegression(solver='lbfgs', multi_class='auto', max_iter=1000, n_jobs=-1)\n)\n\n# Fit on train, score on val\npipeline.fit(X_train, y_train)\nprint ('Training Accuracy', pipeline.score(X_train, y_train))\nprint('Validation Accuracy', pipeline.score(X_val, y_val))\ny_pred = pipeline.predict(X_val)",
"/usr/local/lib/python3.6/dist-packages/sklearn/utils/validation.py:724: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n y = column_or_1d(y, warn=True)\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d041348da376c4c1341afd3f595853258afaa5a1 | 226,557 | ipynb | Jupyter Notebook | udemy-ds-bc/Py_DS_ML_bootcamp/00-my-practice/05-Data-Visualization-with-Matplotlib/02_my_matplotlib_exercise.ipynb | JennEYoon/python-ml | 637b5a4a7b7596d1888156e5999ce16da8be33bc | [
"Apache-2.0"
] | 1 | 2021-06-20T23:35:50.000Z | 2021-06-20T23:35:50.000Z | udemy-ds-bc/Py_DS_ML_bootcamp/00-my-practice/05-Data-Visualization-with-Matplotlib/02_my_matplotlib_exercise.ipynb | JennEYoon/python-ml | 637b5a4a7b7596d1888156e5999ce16da8be33bc | [
"Apache-2.0"
] | null | null | null | udemy-ds-bc/Py_DS_ML_bootcamp/00-my-practice/05-Data-Visualization-with-Matplotlib/02_my_matplotlib_exercise.ipynb | JennEYoon/python-ml | 637b5a4a7b7596d1888156e5999ce16da8be33bc | [
"Apache-2.0"
] | 1 | 2021-04-14T03:41:36.000Z | 2021-04-14T03:41:36.000Z | 346.947933 | 25,420 | 0.932993 | [
[
[
"___\n\n<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n___\n# Matplotlib Exercises \n\nWelcome to the exercises for reviewing matplotlib! Take your time with these, Matplotlib can be tricky to understand at first. These are relatively simple plots, but they can be hard if this is your first time with matplotlib, feel free to reference the solutions as you go along.\n\nAlso don't worry if you find the matplotlib syntax frustrating, we actually won't be using it that often throughout the course, we will switch to using seaborn and pandas built-in visualization capabilities. But, those are built-off of matplotlib, which is why it is still important to get exposure to it!\n\n** * NOTE: ALL THE COMMANDS FOR PLOTTING A FIGURE SHOULD ALL GO IN THE SAME CELL. SEPARATING THEM OUT INTO MULTIPLE CELLS MAY CAUSE NOTHING TO SHOW UP. * **\n\n# Exercises\n\nFollow the instructions to recreate the plots using this data:\n\n## Data",
"_____no_output_____"
]
],
[
[
"import numpy as np\nx = np.arange(0,100)\ny = x*2\nz = x**2",
"_____no_output_____"
]
],
[
[
"** Import matplotlib.pyplot as plt and set %matplotlib inline if you are using the jupyter notebook. What command do you use if you aren't using the jupyter notebook?**",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Exercise 1\n\n** Follow along with these steps: **\n* ** Create a figure object called fig using plt.figure() **\n* ** Use add_axes to add an axis to the figure canvas at [0,0,1,1]. Call this new axis ax. **\n* ** Plot (x,y) on that axes and set the labels and titles to match the plot below:**",
"_____no_output_____"
]
],
[
[
"# Functional Method\nfig = plt.figure()\nax = fig.add_axes([0, 0, 1, 1])\nax.plot(x, y)\nax.set_title('title')\nax.set_xlabel('X')\nax.set_ylabel('Y')",
"_____no_output_____"
]
],
[
[
"## Exercise 2\n** Create a figure object and put two axes on it, ax1 and ax2. Located at [0,0,1,1] and [0.2,0.5,.2,.2] respectively.**",
"_____no_output_____"
]
],
[
[
"# create figure canvas\nfig = plt.figure()\n# create axes\nax1 = fig.add_axes([0,0,1,1])\nax2 = fig.add_axes([0.2,0.5,.2,.2])\n\nplt.xticks(np.arange(0, 1.2, step=0.2))\nplt.yticks(np.arange(0, 1.2, step=0.2))",
"_____no_output_____"
]
],
[
[
"** Now plot (x,y) on both axes. And call your figure object to show it.**",
"_____no_output_____"
]
],
[
[
"# create figure canvas\nfig = plt.figure()\n# create axes\nax1 = fig.add_axes([0,0,1,1])\nax2 = fig.add_axes([0.2,0.5,.2,.2])\n\nax1.set_xlabel('x1')\nax1.set_ylabel('y1')\nax2.set_xlabel('x2')\nax2.set_ylabel('y2')\n\nax1.plot(x, y, 'r-')\nax2.plot(x, y, 'b--')\n\nplt.xticks(np.arange(0, 120, step=20))\nplt.yticks(np.arange(0, 220, step=50))",
"_____no_output_____"
]
],
[
[
"## Exercise 3\n\n** Create the plot below by adding two axes to a figure object at [0,0,1,1] and [0.2,0.5,.4,.4]**",
"_____no_output_____"
]
],
[
[
"fig = plt.figure()\nax1 = fig.add_axes([0,0,1,1])\nax2 = fig.add_axes([0.2,0.5,.4,.4])",
"_____no_output_____"
]
],
[
[
"** Now use x,y, and z arrays to recreate the plot below. Notice the xlimits and y limits on the inserted plot:**",
"_____no_output_____"
]
],
[
[
"fig = plt.figure()\nax1 = fig.add_axes([0,0,1,1])\nax2 = fig.add_axes([0.2,0.5,.4,.4])\nax1.plot(x, z)\nax2.plot(x, y, 'r--') # zoom using xlimit (20, 22), ylimit (30, 50)\nax2.set_xlim([20, 22])\nax2.set_ylim([30, 50])\n\nax2.set_title('zoom')\nax2.set_xlabel('X')\nax2.set_ylabel('Y')\n\nax1.set_xlabel('X')\nax1.set_ylabel('Z')\n",
"_____no_output_____"
]
],
[
[
"## Exercise 4\n\n** Use plt.subplots(nrows=1, ncols=2) to create the plot below.**",
"_____no_output_____"
]
],
[
[
"fig, axes = plt.subplots(nrows=1, ncols=2)\n# axes object is an array of subplot axis.\nplt.tight_layout() # add space between rows & columns.",
"_____no_output_____"
]
],
[
[
"** Now plot (x,y) and (x,z) on the axes. Play around with the linewidth and style**",
"_____no_output_____"
]
],
[
[
"fig, axes = plt.subplots(nrows=1, ncols=2)\n# axes object is an array of subplot axis.\n\naxes[0].plot(x, y, 'b--', lw=3)\naxes[1].plot(x, z, 'r-.', lw=2)\nplt.tight_layout() # add space between rows & columns.\n\n",
"_____no_output_____"
]
],
[
[
"** See if you can resize the plot by adding the figsize() argument in plt.subplots() are copying and pasting your previous code.**",
"_____no_output_____"
]
],
[
[
"fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10, 7))\n# axes object is an array of subplot axis.\n\naxes[0].plot(x, y, 'b--', lw=3)\naxes[1].plot(x, z, 'r-.', lw=2)\nplt.tight_layout() # add space between rows & columns.",
"_____no_output_____"
]
],
[
[
"# Great Job!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0413c58832335769b3c357373bcc200aa7b036a | 39,282 | ipynb | Jupyter Notebook | report-annexes/pa-play-with-w2v-enwiki-lemmatized.ipynb | Penguinazor/mse.pa | 3bf8441c0667997242a30f7dd72279493b73afc2 | [
"MIT"
] | null | null | null | report-annexes/pa-play-with-w2v-enwiki-lemmatized.ipynb | Penguinazor/mse.pa | 3bf8441c0667997242a30f7dd72279493b73afc2 | [
"MIT"
] | null | null | null | report-annexes/pa-play-with-w2v-enwiki-lemmatized.ipynb | Penguinazor/mse.pa | 3bf8441c0667997242a30f7dd72279493b73afc2 | [
"MIT"
] | null | null | null | 29.872243 | 382 | 0.54987 | [
[
[
"# Turn on Auto-Complete\n%config IPCompleter.greedy=True",
"_____no_output_____"
],
[
"# Start logging process at root level\nimport logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)\nlogging.root.setLevel(level=logging.INFO)",
"_____no_output_____"
],
[
"# Load model and dictionary\n#model_id_current = 99999\n#model_path_current = \"models/enwiki-full-dict-\"+str(model_id_current)+\".model\"\n#model_path_99999 = \"models/enwiki-20190319-lemmatized-99999.model\"\nmodel_path_current =\"models/enwiki-20190409-lemmatized.model\"\n\ndictionary_full_wikien_lem_path = \"dictionaries/enwiki-20190409-dict-lemmatized.txt.bz2\"",
"_____no_output_____"
],
[
"# Load word2vec unlemmatized model\nfrom gensim.models import Word2Vec\nmodel = Word2Vec.load(model_path_current, mmap='r')",
"2019-04-23 12:51:02,012 : INFO : 'pattern' package found; tag filters are available for English\n2019-04-23 12:51:02,021 : INFO : loading Word2Vec object from models/enwiki-20190409-lemmatized.model\n2019-04-23 12:52:19,106 : INFO : loading wv recursively from models/enwiki-20190409-lemmatized.model.wv.* with mmap=r\n2019-04-23 12:52:19,109 : INFO : loading vectors from models/enwiki-20190409-lemmatized.model.wv.vectors.npy with mmap=r\n2019-04-23 12:52:19,130 : INFO : setting ignored attribute vectors_norm to None\n2019-04-23 12:52:19,135 : INFO : loading vocabulary recursively from models/enwiki-20190409-lemmatized.model.vocabulary.* with mmap=r\n2019-04-23 12:52:19,137 : INFO : loading trainables recursively from models/enwiki-20190409-lemmatized.model.trainables.* with mmap=r\n2019-04-23 12:52:19,139 : INFO : loading syn1neg from models/enwiki-20190409-lemmatized.model.trainables.syn1neg.npy with mmap=r\n2019-04-23 12:52:19,143 : INFO : loading vectors_lockf from models/enwiki-20190409-lemmatized.model.trainables.vectors_lockf.npy with mmap=r\n2019-04-23 12:52:19,146 : INFO : setting ignored attribute cum_table to None\n2019-04-23 12:52:19,147 : INFO : loaded models/enwiki-20190409-lemmatized.model\n"
],
[
"# Custom lemmatizer function to play with word\nfrom gensim.utils import lemmatize\n#vocabulary = set(wv.index2word)\ndef lem(word):\n try:\n return lemmatize(word)[0].decode(\"utf-8\")\n except:\n pass\n \nprint(lem(\"dog\"))\nprint(lem(\"that\"))",
"dog/NN\nNone\n"
],
[
"# Testing similarity\nprint(\"Most similar to\",\"woman\")\nprint(model.wv.most_similar(lem(\"woman\")))",
"Most similar to woman\n[('man/NN', 0.6361385583877563), ('individual/NN', 0.5763572454452515), ('person/NN', 0.5568535327911377), ('girl/NN', 0.5561789274215698), ('female/JJ', 0.54877769947052), ('female/NN', 0.5409574508666992), ('lesbian/NN', 0.5204395055770874), ('gender/NN', 0.5188673734664917), ('child/NN', 0.4933265447616577), ('feminist/NN', 0.4769814610481262)]\n"
],
[
"print(\"Most similar to\",\"doctor\")\nprint(model.wv.most_similar(lem(\"doctor\")))",
"Most similar to doctor\n[('dentist/NN', 0.5610849261283875), ('nardole/NN', 0.5584279894828796), ('nurse/NN', 0.5565731525421143), ('physician/NN', 0.5523163080215454), ('dolittle/RB', 0.5519494414329529), ('psychiatrist/NN', 0.5512733459472656), ('veterinarian/NN', 0.523552417755127), ('naakudu/RB', 0.5198249816894531), ('zhivago/VB', 0.5178859233856201), ('senocak/RB', 0.5164788961410522)]\n"
],
[
"# Saving some ram by using the KeyedVectors instance\nwv = model.wv\n#del model",
"_____no_output_____"
],
[
"# Testing similarity with KeyedVectors\nprint(\"Most similar to\",\"woman\")\nprint(wv.most_similar(lem(\"woman\")))\nprint(\"\\nMost similar to\",\"man\")\nprint(wv.most_similar(lem(\"man\")))\nprint(\"\\nMost similar to\",\"doctor\")\nprint(wv.most_similar(lem(\"doctor\")))\nprint(\"\\nMost similar to\",\"doctor\",\"cosmul\")\nprint(wv.most_similar_cosmul(positive=[lem(\"doctor\")]))",
"Most similar to woman\n[('man/NN', 0.6361385583877563), ('individual/NN', 0.5763572454452515), ('person/NN', 0.5568535327911377), ('girl/NN', 0.5561789274215698), ('female/JJ', 0.54877769947052), ('female/NN', 0.5409574508666992), ('lesbian/NN', 0.5204395055770874), ('gender/NN', 0.5188673734664917), ('child/NN', 0.4933265447616577), ('feminist/NN', 0.4769814610481262)]\n\nMost similar to man\n[('woman/NN', 0.6361386179924011), ('boy/NN', 0.5653619170188904), ('person/NN', 0.5352815389633179), ('girl/NN', 0.5206164121627808), ('individual/NN', 0.49065500497817993), ('soldier/NN', 0.4820939302444458), ('spiderlongclose/VB', 0.46156802773475647), ('someone/NN', 0.4581664502620697), ('mcglurk/NN', 0.4518073499202728), ('one/NN', 0.4458999037742615)]\n\nMost similar to doctor\n[('dentist/NN', 0.5610849261283875), ('nardole/NN', 0.5584279894828796), ('nurse/NN', 0.5565731525421143), ('physician/NN', 0.5523163080215454), ('dolittle/RB', 0.5519494414329529), ('psychiatrist/NN', 0.5512733459472656), ('veterinarian/NN', 0.523552417755127), ('naakudu/RB', 0.5198249816894531), ('zhivago/VB', 0.5178859233856201), ('senocak/RB', 0.5164788961410522)]\n\nMost similar to doctor cosmul\n[('dentist/NN', 0.7805417776107788), ('nardole/NN', 0.7792133092880249), ('nurse/NN', 0.7782858610153198), ('physician/NN', 0.7761574387550354), ('dolittle/RB', 0.7759740352630615), ('psychiatrist/NN', 0.7756359577178955), ('veterinarian/NN', 0.7617754936218262), ('naakudu/RB', 0.7599117755889893), ('zhivago/VB', 0.7589422464370728), ('senocak/RB', 0.7582387328147888)]\n"
],
[
"print(\"similarity of doctor + woman - man\")\nwv.most_similar(positive=[lem(\"doctor\"),lem(\"woman\")], negative=[lem(\"man\")])",
"similarity of doctor + woman - man\n"
],
[
"# Get cosmul of logic\nprint(\"cosmul of doctor + woman - man\")\nwv.most_similar_cosmul(positive=[lem(\"doctor\"),lem(\"woman\")], negative=[lem(\"man\")])",
"cosmul of doctor + woman - man\n"
],
[
"# Ways to retrive word vector\nprint(\"Get item dog\")\nvec_dog = wv.__getitem__(\"dog/NN\")\nvec_dog = wv.get_vector(\"dog/NN\")\nvec_dog = wv.word_vec(\"dog/NN\")\nprint(\"vec_dog\", vec_dog.shape, vec_dog[:10])",
"Get item dog\nvec_dog (300,) [-0.13817333 -1.8090004 -0.45946687 -2.2184215 1.4197063 0.19401991\n -0.4230487 -2.7905297 -3.1192808 0.02542385]\n"
],
[
"# Get similar words to vector\nprint(\"Similar by vector to dog vector at top 10\")\nprint(wv.similar_by_vector(vector=vec_dog, topn=10, restrict_vocab=None))\nprint(\"Most similar to dog vector\")\nprint(wv.most_similar(positive=[vec_dog]))\nprint(\"Similar to cat vector\")\nvec_cat = wv.word_vec(\"cat/NN\")\nprint(wv.most_similar(positive=[vec_cat]))",
"Similar by vector to dog vector at top 10\n[('dog/NN', 1.0000001192092896), ('cat/NN', 0.7325705289840698), ('puppy/NN', 0.7017959356307983), ('dogs/NN', 0.6933757066726685), ('dachshund/NN', 0.6861976981163025), ('poodle/NN', 0.6616296768188477), ('rottweiler/JJ', 0.6588126420974731), ('hound/NN', 0.6473891139030457), ('pet/NN', 0.6361759901046753), ('pekingese/JJ', 0.6296164989471436)]\nMost similar to dog vector\n[('dog/NN', 1.0000001192092896), ('cat/NN', 0.7325705289840698), ('puppy/NN', 0.7017959356307983), ('dogs/NN', 0.6933757066726685), ('dachshund/NN', 0.6861976981163025), ('poodle/NN', 0.6616296768188477), ('rottweiler/JJ', 0.6588126420974731), ('hound/NN', 0.6473891139030457), ('pet/NN', 0.6361759901046753), ('pekingese/JJ', 0.6296164989471436)]\nSimilar to cat vector\n[('cat/NN', 1.0), ('dog/NN', 0.7325705885887146), ('meow/VB', 0.6924092769622803), ('kitten/NN', 0.6764312982559204), ('catjuan/JJ', 0.639890193939209), ('rabbit/NN', 0.6368660926818848), ('meow/NN', 0.635756254196167), ('fraidy/JJ', 0.6297948360443115), ('poodle/NN', 0.6230608224868774), ('pet/NN', 0.6119345426559448)]\n"
],
[
"# closer to __ than __\nprint(\"closer to dog than cat\")\nprint(wv.words_closer_than(\"dog/NN\", \"cat/NN\"))\nprint(\"\\ncloser to cat than dog\")\nprint(wv.words_closer_than(\"cat/NN\", \"dog/NN\"))",
"closer to dog than cat\n[]\n\ncloser to cat than dog\n[]\n"
],
[
"# Normalized Vector\nvec_king_norm = wv.word_vec(\"king/NN\", use_norm=True)\nprint(\"vec_king_norm:\",vec_king_norm.shape, vec_king_norm[:10])\n# Not normalized vectore\nvec_king_unnorm = wv.word_vec(\"king/NN\", use_norm=False)\nprint(\"vec_king_unnorm:\",vec_king_norm.shape, vec_king_unnorm[:10])",
"vec_king_norm: (300,) [ 0.02464886 0.09053605 0.00468578 -0.01604057 0.0808396 0.10550086\n 0.01262516 -0.0464116 -0.06513052 -0.08347644]\nvec_king_unnorm: (300,) [ 0.6677862 2.4528 0.12694712 -0.43457067 2.190104 2.8582263\n 0.34204054 -1.2573817 -1.764514 -2.2615411 ]\n"
],
[
"wv.most_similar(positive=[vec_king_norm], negative=[vec_king_unnorm])",
"_____no_output_____"
],
[
"# Generate random vector\nimport numpy as np\nvec_random = np.random.rand(300,)\nvec_random_norm = vec_random / vec_random.max(axis=0)\nprint(\"similar to random vector\")\nprint(wv.most_similar(positive=[vec_random]))\nprint(\"\\n similar to nomalized random vector\")\nprint(wv.most_similar(positive=[vec_random_norm]))",
"similar to random vector\n[('parigine/VB', 0.28092506527900696), ('nmcue/NN', 0.2804727852344513), ('mozart_kv/NN', 0.2786831855773926), ('tabuur/VB', 0.27440786361694336), ('tanantella/NN', 0.2741648852825165), ('kalfhani/NN', 0.27331894636154175), ('小法廷/VB', 0.27157384157180786), ('molagholi/VB', 0.2650672495365143), ('kriesinger/NN', 0.2646629810333252), ('shōhōtei/NN', 0.26364269852638245)]\n\n similar to nomalized random vector\n[('parigine/VB', 0.28092506527900696), ('nmcue/NN', 0.2804727852344513), ('mozart_kv/NN', 0.2786831855773926), ('tabuur/VB', 0.27440786361694336), ('tanantella/NN', 0.2741648852825165), ('kalfhani/NN', 0.27331894636154175), ('小法廷/VB', 0.27157384157180786), ('molagholi/VB', 0.2650672495365143), ('kriesinger/NN', 0.2646629810333252), ('shōhōtei/NN', 0.26364269852638245)]\n"
],
[
"# Get similarity from a random vector and normilized king vector\nprint(\"similarity from a normalized random vector to normalized vector of king\")\nwv.most_similar(positive=[vec_random_norm,vec_king_norm])",
"similarity from a normalized random vector to normalized vector of king\n"
],
[
"# Get similarity from a random vector and unormalized king vector\nprint(\"similarity from a random vector to unormalized vector of king\")\nwv.most_similar(positive=[vec_random,vec_king_unnorm])",
"similarity from a random vector to unormalized vector of king\n"
],
[
"# Get cosine similarities from a vector to an array of vectors\nprint(\"cosine similarity from a random vector to unormalized vector of king\")\nwv.cosine_similarities(vec_random, [vec_king_unnorm])",
"cosine similarity from a random vector to unormalized vector of king\n"
],
[
"# Tests analogies based on a text file\nanalogy_scores = wv.accuracy('datasets/questions-words.txt')\n#print(analogy_scores)",
"_____no_output_____"
],
[
"# The the distance of two words\nprint(\"distance between dog and cat\")\nwv.distance(\"dog/NN\",\"cat/NN\")",
"distance between dog and cat\n"
],
[
"# Get the distance of a word for the list of word\nprint(\"distance from dog to king and cat\")\nwv.distances(\"dog/NN\",[\"king/NN\",\"cat/NN\"])",
"distance from dog to king and cat\n"
],
[
"# Evaluate pairs of words\n#wv.evaluate_word_pairs(\"datasets/SimLex-999.txt\")",
"_____no_output_____"
],
[
"# Get sentence similarities\n\nfrom gensim.models import KeyedVectors\nfrom gensim.utils import simple_preprocess \n\ndef tokemmized(sentence, vocabulary):\n tokens = [lem(word) for word in simple_preprocess(sentence)]\n return [word for word in tokens if word in vocabulary] \n\ndef compute_sentence_similarity(sentence_1, sentence_2, model_wv):\n vocabulary = set(model_wv.index2word)\n tokens_1 = tokemmized(sentence_1, vocabulary)\n tokens_2 = tokemmized(sentence_2, vocabulary)\n del vocabulary\n print(tokens_1, tokens_2)\n return model_wv.n_similarity(tokens_1, tokens_2)\n\nsimilarity = compute_sentence_similarity('this is a sentence', 'this is also a sentence', wv)\nprint(similarity,\"\\n\")\n\nsimilarity = compute_sentence_similarity('the cat is a mammal', 'the bird is a aves', wv)\nprint(similarity,\"\\n\")\n\nsimilarity = compute_sentence_similarity('the cat is a mammal', 'the dog is a mammal', wv)\nprint(similarity)",
"['be/VB', 'sentence/NN'] ['be/VB', 'also/RB', 'sentence/NN']\n0.9267933550381176 \n\n['cat/NN', 'be/VB', 'mammal/NN'] ['bird/NN', 'be/VB', 'ave/NN']\n0.6503839221443558 \n\n['cat/NN', 'be/VB', 'mammal/NN'] ['dog/NN', 'be/VB', 'mammal/NN']\n0.9425444280677167\n"
],
[
"# Analogy with not normalized vectors\nprint(\"france is to paris as berlin is to ?\")\nwv.most_similar([wv['france/NN'] - wv['paris/NN'] + wv['berlin/NN']])",
"france is to paris as berlin is to ?\n"
],
[
"# Analogy with normalized Vector\nvec_france_norm = wv.word_vec('france/NN', use_norm=True)\nvec_paris_norm = wv.word_vec('paris/NN', use_norm=True)\nvec_berlin_norm = wv.word_vec('berlin/NN', use_norm=True)\nvec_germany_norm = wv.word_vec('germany/NN', use_norm=True)\nvec_country_norm = wv.word_vec('country/NN', use_norm=True)\nprint(\"france is to paris as berlin is to ?\")\nwv.most_similar([vec_france_norm - vec_paris_norm + vec_berlin_norm])",
"france is to paris as berlin is to ?\n"
],
[
"# Cosine Similarities\nprint(\"cosine_similarities of france and paris\")\nprint(wv.cosine_similarities(vec_france_norm, [vec_paris_norm]),wv.distance(\"france/NN\", \"paris/NN\"))\nprint(\"cosine_similarities of france and berlin\")\nprint(wv.cosine_similarities(vec_france_norm, [vec_berlin_norm]),wv.distance(\"france/NN\", \"berlin/NN\"))\nprint(\"cosine_similarities of france and germany\")\nprint(wv.cosine_similarities(vec_france_norm, [vec_germany_norm]),wv.distance(\"france/NN\", \"germany/NN\"))\nprint(\"cosine_similarities of france and country\")\nprint(wv.cosine_similarities(vec_france_norm, [vec_country_norm]),wv.distance(\"france/NN\", \"country/NN\"))",
"cosine_similarities of france and paris\n[0.62629485] 0.37370521250384203\ncosine_similarities of france and berlin\n[0.26217574] 0.7378242844644337\ncosine_similarities of france and germany\n[0.56096226] 0.4390377899399447\ncosine_similarities of france and country\n[0.35918537] 0.6408146341093731\n"
],
[
"# Analogy\nprint(\"king is to man what woman is to ?\")\n#wv.most_similar([wv['man/NN'] - wv['woman/NN'] + wv['king/NN']])\nwv.most_similar([wv['king/NN'] - wv['man/NN'] + wv['woman/NN']])",
"Man is to Woman what King is to ?\n"
],
[
"# Analogy\nprint(\"paris is to france as germany is to ?\")\nwv.most_similar([wv['paris/NN'] - wv['france/NN'] + wv['germany/NN']])",
"paris is to france as germany is to ?\n"
],
[
"# Analogy\nprint(\"cat is to mammal as sparrow is to ?\")\nwv.most_similar([wv['cat/NN'] - wv['mammal/NN'] + wv['bird/NN']])",
"cat is to mammal as sparrow is to ?\n"
],
[
"# Analogy\nprint(\"grass is to green as sky is to ?\")\nwv.most_similar([wv['sky/NN'] - wv['blue/NN'] + wv['green/NN']])",
"grass is to green as sky is to ?\n"
],
[
"# Analogy\nprint(\"athens is to greece as baghdad is to ?\")\nwv.most_similar([wv['athens/NN'] - wv['greece/NN'] + wv['afghanistan/NN']])",
"athens is to greece as baghdad is to ?\n"
],
[
"wv.most_similar([wv[\"country/NN\"]])",
"_____no_output_____"
],
[
"wv.most_similar([wv[\"capital/NN\"]])",
"_____no_output_____"
],
[
"wv.most_similar([wv[\"paris/NN\"]-wv[\"capital/NN\"]])",
"_____no_output_____"
],
[
"wv.most_similar([wv[\"bern/NN\"]-wv[\"capital/NN\"]])",
"_____no_output_____"
],
[
"wv.most_similar([wv[\"switzerland/NN\"]-wv[\"bern/NN\"]])",
"_____no_output_____"
],
[
"wv.distance(\"dog/NN\", \"dogs/NN\")",
"_____no_output_____"
],
[
"wv.cosine_similarities(wv[\"dog/NN\"],[wv[\"dogs/NN\"]])",
"_____no_output_____"
],
[
"wv.distance(\"switzerland/NN\", \"bern/NN\")",
"_____no_output_____"
],
[
"wv.cosine_similarities(wv[\"switzerland/NN\"],[wv[\"bern/NN\"]])",
"_____no_output_____"
],
[
"wv.distance(\"paris/NN\", \"bern/NN\")",
"_____no_output_____"
],
[
"wv.cosine_similarities(wv[\"paris/NN\"],[wv[\"bern/NN\"]])",
"_____no_output_____"
],
[
"wv.cosine_similarities(wv[\"paris/NN\"],[wv[\"dog/NN\"]])",
"_____no_output_____"
],
[
"# Analogy\nprint(\"capital + science\")\nwv.most_similar([wv['capital/NN'] + wv['science/NN']])",
"_____no_output_____"
],
[
"wv.cosine_similarities(wv[\"education/NN\"], [wv[\"natality/NN\"],wv[\"salubrity/NN\"],wv[\"economy/NN\"]]\n\n#wv.distance(\"education\",\"natality\")\n\n# education, natality, salubrity, economy\n\n#wv.most_similar_cosmul(positive=[\"doctor\",\"woman\"], negative=[\"man\"])",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d04149df3a1b53a1634b16ec39b12e23175bc497 | 27,051 | ipynb | Jupyter Notebook | FinalProject/Code/tool_app.ipynb | riggiobill/brmfcr-finalproject | 45b467f299c6d7411de5c84688fdfe5c9cf66e24 | [
"MIT"
] | null | null | null | FinalProject/Code/tool_app.ipynb | riggiobill/brmfcr-finalproject | 45b467f299c6d7411de5c84688fdfe5c9cf66e24 | [
"MIT"
] | null | null | null | FinalProject/Code/tool_app.ipynb | riggiobill/brmfcr-finalproject | 45b467f299c6d7411de5c84688fdfe5c9cf66e24 | [
"MIT"
] | null | null | null | 31.164747 | 280 | 0.552882 | [
[
[
"import pandas as pd\nimport psycopg2\nimport sqlalchemy\nimport config\nimport json\nimport numpy as np\nimport scrape",
"_____no_output_____"
],
[
"scrape.scrape_bls()",
"Table 'blsdata' already exists.\n"
],
[
"from sqlalchemy import create_engine\nfrom config import password\nEngine = create_engine(f\"postgresql://postgres:{password}@localhost:5432/Employee_Turnover\")\nConnection = Engine.connect()\ninitial_df=pd.read_sql(\"select * from turnover_data\", Connection)\nbls_df=pd.read_sql(\"select * from blsdata\", Connection)\nConnection.close()",
"_____no_output_____"
],
[
"bls_df.drop(bls_df.index[[35,36]])\nbls_df.drop(\"index\",axis=1,inplace=True)\nbls_df.head()",
"_____no_output_____"
],
[
"# Clean data: remove less helpful columns; rename values to be more user-friendly\ndf = initial_df.drop(['index','EducationField','EmployeeCount','EmployeeNumber','Education','StandardHours','JobRole','MaritalStatus',\n 'DailyRate','MonthlyRate','HourlyRate','Over18','OverTime','TotalWorkingYears'], axis=1).drop_duplicates()\ndf.rename(columns={'Attrition': 'Employment Status','BusinessTravel':'Business Travel','DistanceFromHome':'Commute (Miles)','EnvironmentSatisfaction':'Environment Satisfaction',\n 'JobInvolvement':'Job Involvement','JobLevel':'Job Level','JobSatisfaction':'Job Satisfaction',\n 'MonthlyIncome':'Monthly Income','NumCompaniesWorked':'Num Companies Worked','PercentSalaryHike':'Last Increase %',\n 'PerformanceRating':'Performance Rating','RelationshipSatisfaction':'Relationship Satisfaction','StockOptionLevel':'Stock Option Level',\n 'TrainingTimesLastYear':'Training Last Year','WorkLifeBalance':'Work/Life Balance','YearsAtCompany':'Tenure (Years)',\n 'YearsInCurrentRole':'Years in Role','YearsSinceLastPromotion':'Years Since Promotion','YearsWithCurrManager':'Years with Manager'}, inplace = True)\n\ndf['Employment Status'] = df['Employment Status'].replace(['No','Yes'],['Active','Terminated'])\ndf['Business Travel'] = df['Business Travel'].replace(['Travel_Rarely','Travel_Frequently','Non-Travel'],['Rare','Frequent','None'])\n\ncolumns = list(df)\nprint(columns)\ndf.head()",
"_____no_output_____"
],
[
"factors = ['Age', 'Business Travel', 'Department', 'Commute (Miles)', 'Environment Satisfaction', 'Gender', 'Job Involvement', 'Job Level', 'Job Satisfaction', 'Monthly Income', 'Performance Rating', 'Relationship Satisfaction', 'Stock Option Level', 'Training Last Year']\n",
"_____no_output_____"
],
[
"count18to29 = 0\ncount30to39 = 0\ncount40to49 = 0\ncount50to59 = 0\ncount60up = 0\n\n\nfor x in df['Age']:\n \n if x >=18 and x<30:\n count18to29 += 1 \n \n elif x >= 30 and x<40:\n count30to39 += 1\n \n elif x >= 40 and x<50:\n count40to49 += 1\n \n elif x >= 50 and x<60:\n count50to59 += 1\n \n elif x >= 60: \n count60up += 1\n \n else: pass\n ",
"_____no_output_____"
],
[
"age18to29_df= df.loc[(df['Age']>=18) & (df['Age']<30)]\nage18to29_act_df=age18to29_df.loc[(df['Employment Status'] == 'Active')]\nage18to29_term_df=age18to29_df.loc[(df['Employment Status'] == 'Terminated')]\n\nage30to39_df= df.loc[(df['Age']>=30) & (df['Age']<40)]\nage30to39_act_df=age30to39_df.loc[(df['Employment Status'] == 'Active')]\nage30to39_term_df=age30to39_df.loc[(df['Employment Status'] == 'Terminated')]\n\nage40to49_df= df.loc[(df['Age']>=40) & (df['Age']<50)]\nage40to49_act_df=age40to49_df.loc[(df['Employment Status'] == 'Active')]\nage40to49_term_df=age40to49_df.loc[(df['Employment Status'] == 'Terminated')]\n\nage50to59_df= df.loc[(df['Age']>=50) & (df['Age']<60)]\nage50to59_act_df=age50to59_df.loc[(df['Employment Status'] == 'Active')]\nage50to59_term_df=age50to59_df.loc[(df['Employment Status'] == 'Terminated')]\n\nage60up_df= df.loc[(df['Age']>=60)]\nage60up_act_df=age60up_df.loc[(df['Employment Status'] == 'Active')]\nage60up_term_df=age60up_df.loc[(df['Employment Status'] == 'Terminated')]\n\n\nper18to29Act=round((len(age18to29_act_df)/count18to29*100),1)\nper18to29Term=round((len(age18to29_term_df)/count18to29*100),1)\nper30to39Act=round((len(age30to39_act_df)/count30to39*100),1)\nper30to39Term=round((len(age30to39_term_df)/count30to39*100),1)\nper40to49Act=round((len(age40to49_act_df)/count40to49*100),1)\nper40to49Term=round((len(age40to49_term_df)/count40to49*100),1)\nper50to59Act=round((len(age50to59_act_df)/count50to59*100),1)\nper50to59Term=round((len(age50to59_term_df)/count50to59*100),1)\nper60upAct=round((len(age60up_act_df)/count60up*100),1)\nper60upTerm=round((len(age60up_term_df)/count60up*100),1)\n",
"_____no_output_____"
],
[
"age_dict={}\n\nage_dict['18_to_29Act'] = per18to29Act\nage_dict['18_to_29Term'] = per18to29Term\nage_dict['30_to_39Act'] = per30to39Act\nage_dict['30_to_39Term'] = per30to39Term\nage_dict['40_to_49Act'] = per40to49Act\nage_dict['40_to_49Term'] = per40to49Term\nage_dict['50_to_59Act'] = per50to59Act\nage_dict['50_to_59Term'] = per50to59Term\nage_dict['60upAct'] = per60upAct\nage_dict['60upTerm'] = per60upTerm\n\nparsed = json.loads(json.dumps(age_dict))\njson_age=json.dumps(parsed, indent=4, sort_keys=True)\n",
"_____no_output_____"
],
[
"new_df=df.groupby([\"Business Travel\",\"Employment Status\"]).count().reset_index()\nnew_df.head(8)",
"_____no_output_____"
],
[
"trav_df = new_df[\"Age\"].astype(float) \ntrav_df.head(6)",
"_____no_output_____"
],
[
"none_act=trav_df[2]\nnone_term=trav_df[3]\nnone_total=none_act + none_term\nrare_act=trav_df[4]\nrare_term=trav_df[5]\nrare_total=rare_act + rare_term\nfreq_act=trav_df[0]\nfreq_term=trav_df[1]\nfreq_total=freq_act+freq_term\n\nnone_act_rate=(none_act/none_total*100).round(1)\nnone_term_rate=(none_term/none_total*100).round(1)\nrare_act_rate=(rare_act/rare_total*100).round(1)\nrare__term_rate=(rare_term/rare_total*100).round(1)\nfreq_act_rate=(freq_act/freq_total*100).round(1)\nfreq_term_rate=(freq_term/freq_total*100).round(1)",
"_____no_output_____"
],
[
"trav_dict = {}\n\ntrav_dict['factor'] = 'Business Travel'\ntrav_dict['category'] = ['None','Rare','Frequent']\ntrav_dict['counts'] = [none_total,rare_total,freq_total]\ntrav_dict['Active'] = [none_act_rate,rare_act_rate,freq_act_rate]\ntrav_dict['Terminated'] = [none_term_rate, rare_term_rate, freq_term_rate]",
"_____no_output_____"
],
[
"new_dep_df=df.groupby([\"Department\",\"Employment Status\"]).count().reset_index()\nnew_dep_df.head()",
"_____no_output_____"
],
[
"dept_df = new_dep_df[\"Age\"].astype(float) \ndept_df.head(6)",
"_____no_output_____"
],
[
"hr_act=dept_df[0]\nhr_term=dept_df[1]\nhr_total=hr_act + hr_term\nrd_act=dept_df[2]\nrd_term=dept_df[3]\nrd_total=rd_act + rd_term\nsales_act=dept_df[4]\nsales_term=dept_df[5]\nsales_total=sales_act+sales_term\n\nhr_act_rate=(hr_act/hr_total*100).round(1)\nhr_term_rate=(hr_term/hr_total*100).round(1)\nrd_act_rate=(rd_act/rd_total*100).round(1)\nrd_term_rate=(rd_term/rd_total*100).round(1)\nsales_act_rate=(sales_act/sales_total*100).round(1)\nsales_term_rate=(sales_term/sales_total*100).round(1)",
"_____no_output_____"
],
[
"dept_dict = {}\n\ndept_dict['factor'] = 'Department'\ndept_dict['category'] = ['HR','R&D','Sales']\ndept_dict['counts'] = [hr_total,rd_total,sales_total]\ndept_dict['Active'] = [hr_act_rate,rd_act_rate,sales_act_rate]\ndept_dict['Terminated'] = [hr_term_rate, rd_term_rate, sales_term_rate]",
"_____no_output_____"
],
[
"jobIn_df=df.groupby([\"Job Involvement\",\"Employment Status\"]).count().reset_index() \n\njobIn_df.head(8)",
"_____no_output_____"
],
[
"JI=jobIn_df[\"Age\"].astype(float)\nJI.head(8)\ntype(JI)",
"_____no_output_____"
],
[
"JI1_act=JI[0]\nJI1_term=JI[1]\nJI1_total=(JI1_act + JI1_term)\nJI2_act=JI[2]\nJI2_term=JI[3]\nJI2_total=(JI2_act + JI2_term)\nJI3_act=JI[4]\nJI3_term=JI[5]\nJI3_total=(JI3_act + JI3_term)\nJI4_act=JI[6]\nJI4_term=JI[7]\nJI4_total=(JI4_act + JI4_term)\n\nrateJI1_act=(JI1_act/JI1_total*100).round(1)\nrateJI1_term=(JI1_term/JI1_total*100).round(1)\nrateJI2_act=(JI2_act/JI2_total*100).round(1)\nrateJI2_term=(JI2_term/JI2_total*100).round(1)\nrateJI3_act=(JI3_act/JI3_total*100).round(1)\nrateJI3_term=(JI3_term/JI3_total*100).round(1)\nrateJI4_act=(JI4_act/JI4_total*100).round(1)\nrateJI4_term=(JI4_term/JI4_total*100).round(1)\n\njobInvol_dict={}\n\njobInvol_dict['factor'] = 'Job Involvement'\njobInvol_dict['category'] = ['1','2','3','4']\njobInvol_dict['counts'] = [JI1_total, JI2_total, JI3_total, JI4_total]\njobInvol_dict['Active'] = [rateJI1_act, rateJI2_act, rateJI3_act, rateJI4_act]\njobInvol_dict['Terminated'] = [rateJI1_term, rateJI2_term, rateJI3_term, rateJI4_term]\n",
"_____no_output_____"
],
[
"gender_df=df.groupby([\"Gender\",\"Employment Status\"]).count().reset_index()\ngender_df = gender_df[\"Age\"].astype(float) \ngender_df.head()\n",
"_____no_output_____"
],
[
"fem_act_count=gender_df[0]\nfem_term_count=gender_df[1]\nfem_count=fem_act_count+fem_term_count\nmale_act_count=gender_df[2]\nmale_term_count=gender_df[3]\nmale_count=male_act_count+male_term_count\nfem_act_rate=(fem_act_count/fem_count*100).round(1)\nfem_term_rate=(fem_term_count/fem_count*100).round(1)\nmale_act_rate=(male_act_count/male_count*100).round(1)\nmale_term_rate=(male_term_count/male_count*100).round(1)\nprint(male_act_rate,male_term_rate)",
"_____no_output_____"
],
[
"gender_dict = {}\n\ngender_dict['factor'] = 'Gender'\ngender_dict['category'] = ['Male','Female']\ngender_dict['counts'] = [male_count, fem_count]\ngender_dict['Active'] = [male_act_rate, fem_act_rate]\ngender_dict['Terminated'] = [male_term_rate, fem_term_rate]",
"_____no_output_____"
],
[
"perf_df=df.groupby([\"Performance Rating\",\"Employment Status\"]).count().reset_index() \n",
"_____no_output_____"
],
[
"perf_df=perf_df[\"Age\"].astype(float)\n",
"_____no_output_____"
],
[
"count1_total=0\ncount2_total=0\ncount3_act=perf_df[0]\ncount3_term=perf_df[1]\ncount4_act=perf_df[2]\ncount4_term=perf_df[3]\ncount3_total=count3_act+count3_term\ncount4_total=count4_act+count4_term\n\nrate1_act=0\nrate1_term=0\nrate2_act=0\nrate2_term=0\nrate3_act=(count3_act/count3_total*100).round(1)\nrate3_term=(count3_term/count3_total*100).round(1)\nrate4_act=(count4_act/count4_total*100).round(1)\nrate4_term=(count4_term/count4_total*100).round(1)\n\nperf_dict={}\n\nperf_dict['factor'] = 'Performance Rating'\nperf_dict['category'] = ['1','2','3','4']\nperf_dict['counts'] = [count1_total, count2_total, count3_total, count4_total]\nperf_dict['Active'] = [rate1_act, rate2_act, rate3_act, rate4_act]\nperf_dict['Terminated'] = [rate1_term,rate2_term,rate3_term,rate4_term]\n",
"_____no_output_____"
],
[
"# Job Satisfaction \njs_df=df.groupby([\"Job Satisfaction\",\"Employment Status\"]).count().reset_index() \n",
"_____no_output_____"
],
[
"js=js_df[\"Age\"].astype(int)",
"_____no_output_____"
],
[
"actjs1 = js[0]\nterjs1 = js[1]\nalljs1 = js[0] + js[1]\nactjs2 = js[2]\nterjs2 = js[3]\nalljs2 = js[2] + js[3]\nactjs3 = js[4]\nterjs3 = js[5]\nalljs3 = js[4] + js[5]\nactjs4 = js[6]\nterjs4 = js[7]\nalljs4 = js[6] + js[7]\n\nrate_actjs1 = (actjs1/alljs1*100).round(1)\nrate_actjs2 = (actjs2/alljs2*100).round(1)\nrate_actjs3 = (actjs3/alljs3*100).round(1)\nrate_actjs4 = (actjs4/alljs4*100).round(1)\nrate_termjs1 = (terjs1/alljs1*100).round(1)\nrate_termjs2 = (terjs2/alljs2*100).round(1)\nrate_termjs3 = (terjs3/alljs3*100).round(1)\nrate_termjs4 = (terjs4/alljs4*100).round(1)\n\njs_dict = {}\n\njs_dict['factor'] = 'Job Satisfaction'\njs_dict['category'] = ['1','2','3','4']\njs_dict['counts'] = [alljs1,alljs2,alljs3,alljs4]\njs_dict['Active'] = [rate_actjs1, rate_actjs2, rate_actjs3, rate_actjs4]\njs_dict['Terminated'] = [rate_termjs1, rate_termjs2, rate_termjs3, rate_termjs4]\n",
"_____no_output_____"
],
[
"# Monthly Income\nbins = [1000, 3000, 5000, 7000, 20000]\nlabels=['<$3000','$3000-4999','$5000-6999','$7000 and up']\n\ngroups = df.groupby(['Employment Status', pd.cut(df['Monthly Income'], bins=bins, labels=labels)])\nmi=groups.size().reset_index().rename(columns={0:\"Count\"})",
"_____no_output_____"
],
[
"mi1_act=int(mi.iloc[0,2])\nmi1_term=int(mi.iloc[4,2])\nmi1_total=int((mi1_act + mi1_term))\nmi2_act=int(mi.iloc[1,2])\nmi2_term=int(mi.iloc[5,2])\nmi2_total=int((mi2_act + mi2_term))\nmi3_act=int(mi.iloc[2,2])\nmi3_term=int(mi.iloc[6,2])\nmi3_total=int((mi3_act + mi3_term))\nmi4_act=int(mi.iloc[3,2])\nmi4_term=int(mi.iloc[7,2])\nmi4_total=int((mi4_act + mi4_term))\n\nrate_mi1_act=(mi1_act/mi1_total*100)\nrate_mi1_term=(mi1_term/mi1_total*100)\nrate_mi2_act=(mi2_act/mi2_total*100)\nrate_mi2_term=(mi2_term/mi2_total*100)\nrate_mi3_act=(mi3_act/mi3_total*100)\nrate_mi3_term=(mi3_term/mi3_total*100)\nrate_mi4_act=(mi4_act/mi4_total*100)\nrate_mi4_term=(mi4_term/mi4_total*100)\n\nmi_dict={}\n\nmi_dict['factor'] = 'Monthly Income'\nmi_dict['category'] = [labels]\nmi_dict['counts'] = [mi1_total, mi2_total, mi3_total, mi4_total]\nmi_dict['Active'] = [rate_mi1_act, rate_mi2_act, rate_mi3_act, rate_mi4_act]\nmi_dict['Terminated'] = [rate_mi1_term, rate_mi2_term, rate_mi3_term, rate_mi4_term]",
"_____no_output_____"
],
[
"# Stock Options\nso_df=df.groupby([\"Stock Option Level\",\"Employment Status\"]).count().reset_index() ",
"_____no_output_____"
],
[
"so=so_df[\"Age\"].astype(float)\n",
"_____no_output_____"
],
[
"actso1 = so[0]\nterso1 = so[1]\nallso1 = so[0] + so[1]\nactso2 = so[2]\nterso2 = so[3]\nallso2 = so[2] + so[3]\nactso3 = so[4]\nterso3 = so[5]\nallso3 = so[4] + so[5]\nactso4 = so[6]\nterso4 = so[7]\nallso4 = so[6] + so[7]\n\nrate_actso1 = (actso1/allso1*100).round(1)\nrate_actso2 = (actso2/allso2*100).round(1)\nrate_actso3 = (actso3/allso3*100).round(1)\nrate_actso4 = (actso4/allso4*100).round(1)\nrate_termso1 = (terso1/allso1*100).round(1)\nrate_termso2 = (terso2/allso2*100).round(1)\nrate_termso3 = (terso3/allso3*100).round(1)\nrate_termso4 = (terso4/allso4*100).round(1)\n\nso_dict = {}\n\nso_dict['factor'] = 'Stock Option Level'\nso_dict['category'] = ['1','2','3','4']\nso_dict['counts'] = [allso1,allso2,allso3,allso4]\nso_dict['Active'] = [rate_actso1, rate_actso2, rate_actso3, rate_actso4]\nso_dict['Terminated'] = [rate_termso1, rate_termso2, rate_termso3, rate_termso4]",
"_____no_output_____"
],
[
"bins = [0,1,3,5,7]\nlabels=['None','1 or 2', '3 or 4', '5 or 6']\n\ngroupsT = df.groupby(['Employment Status', pd.cut(df['Training Last Year'], bins=bins, labels=labels)])\ntr=groupsT.size().reset_index().rename(columns={0:\"Count\",\"Training Last Year\":\"Trainings Last Year\"})\ntr.head(8)",
"_____no_output_____"
],
[
"tr1_act=int(tr.iloc[0,2])\ntr1_term=int(tr.iloc[4,2])\ntr1_total=int((tr1_act + tr1_term))\ntr2_act=int(tr.iloc[1,2])\ntr2_term=int(tr.iloc[5,2])\ntr2_total=int((tr2_act + tr2_term))\ntr3_act=int(tr.iloc[2,2])\ntr3_term=int(tr.iloc[6,2])\ntr3_total=int((tr3_act + tr3_term))\ntr4_act=int(tr.iloc[3,2])\ntr4_term=int(tr.iloc[7,2])\ntr4_total=int((tr4_act + tr4_term))\n\nrate_tr1_act=(tr1_act/tr1_total*100).round()\nrate_tr1_term=(tr1_term/tr1_total*100)\nrate_tr2_act=(tr2_act/tr2_total*100)\nrate_tr2_term=(tr2_term/tr2_total*100)\nrate_tr3_act=(tr3_act/tr3_total*100)\nrate_tr3_term=(tr3_term/tr3_total*100)\nrate_tr4_act=(tr4_act/tr4_total*100)\nrate_tr4_term=(tr4_term/tr4_total*100)\n\ntr_dict={}\n\ntr_dict['factor'] = 'Monthly Income'\ntr_dict['category'] = [labels]\ntr_dict['counts'] = [tr1_total, tr2_total, tr3_total, tr4_total]\ntr_dict['Active'] = [rate_tr1_act, rate_tr2_act, rate_tr3_act, rate_tr4_act]\ntr_dict['Terminated'] = [rate_tr1_term, rate_tr2_term, rate_tr3_term, rate_tr4_term]",
"_____no_output_____"
],
[
"bins = [0, 10, 20, 30]\nlabels=['<10 mi','10-19 mi','20-29 mi']\ngroups = df.groupby(['Employment Status', pd.cut(df['Commute (Miles)'], bins=bins, labels=labels)])\ncm=groups.size().reset_index().rename(columns={0:\"Count\"})\ncm.head(6)\n ",
"_____no_output_____"
],
[
"cm1_act=int(cm.iloc[0,2])\ncm1_term=int(cm.iloc[3,2])\ncm1_total=int((cm1_act + cm1_term))\ncm2_act=int(cm.iloc[1,2])\ncm2_term=int(cm.iloc[4,2])\ncm2_total=int((cm2_act + cm2_term))\ncm3_act=int(cm.iloc[2,2])\ncm3_term=int(cm.iloc[5,2])\ncm3_total=int((cm3_act + cm3_term))",
"_____no_output_____"
],
[
"rate_cm1_act=(cm1_act/cm1_total*100)\nrate_cm1_term=(cm1_term/cm1_total*100)\nrate_cm2_act=(cm2_act/cm2_total*100)\nrate_cm2_term=(cm2_term/cm2_total*100)\nrate_cm3_act=(cm3_act/cm3_total*100)\nrate_cm3_term=(cm3_term/cm3_total*100)\n ",
"_____no_output_____"
],
[
"cm_dict={}\n\ncm_dict['factor'] = 'Commute (Miles)'\ncm_dict['Category'] = [labels]\ncm_dict['Count'] = [cm1_total, cm2_total, cm3_total]\ncm_dict['Active'] = [rate_cm1_act, rate_cm2_act, rate_cm3_act]\ncm_dict['Terminated'] = [rate_cm1_term, rate_cm2_term, rate_cm3_term]",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0414c8fd8f41b40af6483bb4be757559493b526 | 141,814 | ipynb | Jupyter Notebook | intro-neural-networks/gradient-descent/GradientDescent.ipynb | basilcea/deep-learning-v2-pytorch | 7dc874d915537853f5b1ee05742d86e62062a906 | [
"MIT"
] | null | null | null | intro-neural-networks/gradient-descent/GradientDescent.ipynb | basilcea/deep-learning-v2-pytorch | 7dc874d915537853f5b1ee05742d86e62062a906 | [
"MIT"
] | null | null | null | intro-neural-networks/gradient-descent/GradientDescent.ipynb | basilcea/deep-learning-v2-pytorch | 7dc874d915537853f5b1ee05742d86e62062a906 | [
"MIT"
] | null | null | null | 406.34384 | 104,148 | 0.933293 | [
[
[
"# Implementing the Gradient Descent Algorithm\n\nIn this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\n#Some helper functions for plotting and drawing lines\n\ndef plot_points(X, y):\n admitted = X[np.argwhere(y==1)]\n rejected = X[np.argwhere(y==0)]\n plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')\n plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')\n\ndef display(m, b, color='g--'):\n plt.xlim(-0.05,1.05)\n plt.ylim(-0.05,1.05)\n x = np.arange(-10, 10, 0.1)\n plt.plot(x, m*x+b, color)",
"_____no_output_____"
]
],
[
[
"## Reading and plotting the data",
"_____no_output_____"
]
],
[
[
"data = pd.read_csv('data.csv', header=None)\nX = np.array(data[[0,1]])\ny = np.array(data[2])\nplot_points(X,y)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## TODO: Implementing the basic functions\nHere is your turn to shine. Implement the following formulas, as explained in the text.\n- Sigmoid activation function\n\n$$\\sigma(x) = \\frac{1}{1+e^{-x}}$$\n\n- Output (prediction) formula\n\n$$\\hat{y} = \\sigma(w_1 x_1 + w_2 x_2 + b)$$\n\n- Error function\n\n$$Error(y, \\hat{y}) = - y \\log(\\hat{y}) - (1-y) \\log(1-\\hat{y})$$\n\n- The function that updates the weights\n\n$$ w_i \\longrightarrow w_i + \\alpha (y - \\hat{y}) x_i$$\n\n$$ b \\longrightarrow b + \\alpha (y - \\hat{y})$$",
"_____no_output_____"
]
],
[
[
"# Implement the following functions\n\n# Activation (sigmoid) function\ndef sigmoid(x):\n exp = np.exp(-x)\n return 1/(1+exp)\n\n# Output (prediction) formula\ndef output_formula(features, weights, bias):\n return sigmoid(np.dot(features,weights)+bias)\n \n \n \n\n# Error (log-loss) formula\ndef error_formula(y, output):\n return -y*np.log(output)- (1 - y) * np.log(1-output)\n# Gradient descent step\ndef update_weights(x, y, weights, bias, learnrate):\n output = output_formula(x , weights , bias)\n error = y- output\n weights += learnrate*x*error\n bias += learnrate*error\n return weights, bias\n\n\n# # Activation (sigmoid) function\n# def sigmoid(x):\n# return 1 / (1 + np.exp(-x))\n\n# def output_formula(features, weights, bias):\n# return sigmoid(np.dot(features, weights) + bias)\n\n# def error_formula(y, output):\n# return - y*np.log(output) - (1 - y) * np.log(1-output)\n\n# def update_weights(x, y, weights, bias, learnrate):\n# output = output_formula(x, weights, bias)\n# d_error = y - output\n# weights += learnrate * d_error * x\n# bias += learnrate * d_error\n# return weights, bias\n",
"_____no_output_____"
]
],
[
[
"## Training function\nThis function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm.",
"_____no_output_____"
]
],
[
[
"np.random.seed(44)\n\nepochs = 100\nlearnrate = 0.01\n\ndef train(features, targets, epochs, learnrate, graph_lines=False):\n \n errors = []\n n_records, n_features = features.shape\n last_loss = None\n weights = np.random.normal(scale=1 / n_features**.5, size=n_features)\n bias = 0\n for e in range(epochs):\n del_w = np.zeros(weights.shape)\n for x, y in zip(features, targets):\n output = output_formula(x, weights, bias)\n error = error_formula(y, output)\n weights, bias = update_weights(x, y, weights, bias, learnrate)\n \n # Printing out the log-loss error on the training set\n out = output_formula(features, weights, bias)\n loss = np.mean(error_formula(targets, out))\n errors.append(loss)\n if e % (epochs / 10) == 0:\n print(\"\\n========== Epoch\", e,\"==========\")\n if last_loss and last_loss < loss:\n print(\"Train loss: \", loss, \" WARNING - Loss Increasing\")\n else:\n print(\"Train loss: \", loss)\n last_loss = loss\n predictions = out > 0.5\n accuracy = np.mean(predictions == targets)\n print(\"Accuracy: \", accuracy)\n if graph_lines and e % (epochs / 100) == 0:\n display(-weights[0]/weights[1], -bias/weights[1])\n \n\n # Plotting the solution boundary\n plt.title(\"Solution boundary\")\n display(-weights[0]/weights[1], -bias/weights[1], 'black')\n\n # Plotting the data\n plot_points(features, targets)\n plt.show()\n\n # Plotting the error\n plt.title(\"Error Plot\")\n plt.xlabel('Number of epochs')\n plt.ylabel('Error')\n plt.plot(errors)\n plt.show()",
"_____no_output_____"
]
],
[
[
"## Time to train the algorithm!\nWhen we run the function, we'll obtain the following:\n- 10 updates with the current training loss and accuracy\n- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.\n- A plot of the error function. Notice how it decreases as we go through more epochs.",
"_____no_output_____"
]
],
[
[
"train(X, y, epochs, learnrate, True)",
"\n========== Epoch 0 ==========\nTrain loss: 0.7135845195381633\nAccuracy: 0.4\n\n========== Epoch 10 ==========\nTrain loss: 0.6225835210454962\nAccuracy: 0.59\n\n========== Epoch 20 ==========\nTrain loss: 0.5548744083669508\nAccuracy: 0.74\n\n========== Epoch 30 ==========\nTrain loss: 0.501606141872473\nAccuracy: 0.84\n\n========== Epoch 40 ==========\nTrain loss: 0.4593334641861401\nAccuracy: 0.86\n\n========== Epoch 50 ==========\nTrain loss: 0.42525543433469976\nAccuracy: 0.93\n\n========== Epoch 60 ==========\nTrain loss: 0.39734615716713984\nAccuracy: 0.93\n\n========== Epoch 70 ==========\nTrain loss: 0.3741469765239074\nAccuracy: 0.93\n\n========== Epoch 80 ==========\nTrain loss: 0.3545997336816197\nAccuracy: 0.94\n\n========== Epoch 90 ==========\nTrain loss: 0.3379273658879921\nAccuracy: 0.94\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
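"code",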
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0414e9754f3e7629fc48f588118f018946d0ca0 | 1,999 | ipynb | Jupyter Notebook | ml1-decision-tree-pass-or-fail.ipynb | raveeshwara/littleml | 277a4f8761046879a9511782611328b2cccb629c | [
"CC0-1.0"
] | null | null | null | ml1-decision-tree-pass-or-fail.ipynb | raveeshwara/littleml | 277a4f8761046879a9511782611328b2cccb629c | [
"CC0-1.0"
] | null | null | null | ml1-decision-tree-pass-or-fail.ipynb | raveeshwara/littleml | 277a4f8761046879a9511782611328b2cccb629c | [
"CC0-1.0"
] | null | null | null | 22.460674 | 90 | 0.502251 | [
[
[
"from sklearn import tree\nclassifier = tree.DecisionTreeClassifier()",
"_____no_output_____"
],
[
"trainData = [[34],[35]]\ntrainLabel= [0, 1] # 0 - Fail, 1 - Pass\nclassifier.fit(trainData, trainLabel)",
"_____no_output_____"
],
[
"testData = [[25], [40], [90], [35], [34]]\ntestLabel = classifier.predict(testData)\nprint(testLabel)",
"[0 1 1 1 0]\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
d0415bee394c3ba453e906fc286365fc624cebb9 | 26,304 | ipynb | Jupyter Notebook | code/notebooks/coupon.ipynb | nzw0301/Understanding-Negative-Samples-in-Instance-Discriminative-Self-supervised-Representation-Learning | 957173bd8ec5b5e00994099d8b4467c74b802303 | [
"MIT"
] | 4 | 2021-10-06T07:04:43.000Z | 2022-01-28T09:31:29.000Z | code/notebooks/coupon.ipynb | nzw0301/Understanding-Negative-Samples | 957173bd8ec5b5e00994099d8b4467c74b802303 | [
"MIT"
] | null | null | null | code/notebooks/coupon.ipynb | nzw0301/Understanding-Negative-Samples | 957173bd8ec5b5e00994099d8b4467c74b802303 | [
"MIT"
] | null | null | null | 79.709091 | 18,788 | 0.819153 | [
[
[
"import json\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.integrate import quad\nfrom scipy.special import comb\nfrom tabulate import tabulate\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Expected numbers on Table 3.",
"_____no_output_____"
]
],
[
[
"rows = []\ndatasets = {\n 'Binary': 2,\n 'AG news': 4,\n 'CIFAR10': 10,\n 'CIFAR100': 100,\n 'Wiki3029': 3029, \n}\n\n\ndef expectations(C: int) -> float:\n \"\"\"\n C is the number of latent classes.\n \"\"\"\n e = 0.\n\n for k in range(1, C + 1):\n e += C / k\n return e\n\n\nfor dataset_name, C in datasets.items():\n e = expectations(C)\n\n rows.append((dataset_name, C, np.ceil(e)))",
"_____no_output_____"
],
[
"# ImageNet is non-uniform label distribution on the training dataset\n\ndata = json.load(open(\"./imagenet_count.json\"))\ncounts = np.array(list(data.values()))\ntotal_num = np.sum(counts)\nprob = counts / total_num\n\n\ndef integrand(t: float, prob: np.ndarray) -> float:\n return 1. - np.prod(1 - np.exp(-prob * t))\n\n\nrows.append((\"ImageNet\", len(prob), np.ceil(quad(integrand, 0, np.inf, args=(prob))[0])))\n",
"_____no_output_____"
],
[
"print(tabulate(rows, headers=[\"Dataset\", \"\\# classes\", \"\\mathbb{E}[K+1]\"]))",
"Dataset \\# classes \\mathbb{E}[K+1]\n--------- ------------ -----------------\nBinary 2 3\nAG news 4 9\nCIFAR10 10 30\nCIFAR100 100 519\nWiki3029 3029 26030\nImageNet 1000 7709\n"
]
],
[
[
"## Probability $\\upsilon$",
"_____no_output_____"
]
],
[
[
"def prob(C, N):\n \"\"\"\n C: the number of latent class \n N: the number of samples to draw \n \"\"\"\n\n theoretical = []\n for n in range(C, N + 1):\n p = 0.\n for m in range(C - 1):\n p += comb(C - 1, m) * ((-1) ** m) * np.exp((n - 1) * np.log(1. - (m + 1) / C))\n\n theoretical.append((n, max(p, 0.)))\n\n return np.array(theoretical)\n",
"_____no_output_____"
],
[
"# example of CIFAR-10\n\nC = 10\nfor N in [32, 63, 128, 256, 512]:\n p = np.sum(prob(C, N).T[1])\n print(\"{:3d} {:.7f}\".format(N, p))\n",
" 32 0.6909756\n 63 0.9869351\n128 0.9999861\n256 1.0000000\n512 1.0000000\n"
],
[
"# example of CIFAR-100\n\nC = 100\nps = []\nns = []\n\nfor N in 128 * np.arange(1, 9):\n p = np.sum(prob(C, N).T[1])\n print(\"{:4d} {}\".format(N, p))\n ps.append(p)\n ns.append(N)",
" 128 0.0004517171443332115\n 256 0.0005750103110269027\n 384 0.10845377001311465\n 512 0.5531327628081966\n 640 0.8510308810769567\n 768 0.956899070354311\n 896 0.9882414056661265\n1024 0.9970649738141432\n"
]
],
[
[
"## Simulation",
"_____no_output_____"
]
],
[
[
"n_loop = 10\n\nrnd = np.random.RandomState(7)\nlabels = np.arange(C).repeat(100)\n\nresults = {}\n\nfor N in ns:\n\n num_iters = int(len(labels) / N)\n total_samples_for_bounds = float(num_iters * N * (n_loop))\n\n for _ in range(n_loop):\n rnd.shuffle(labels)\n\n for batch_id in range(len(labels) // N):\n\n if len(set(labels[N * batch_id:N * (batch_id + 1)])) == C:\n\n results[N] = results.get(N, 0.) + N / total_samples_for_bounds\n else:\n results[N] = results.get(N, 0.) + 0.\n\nxs = []\nys = []\nfor k, v in results.items():\n print(k, v)\n ys.append(v)\n xs.append(k)\n\n",
"128 0.0\n256 0.0\n384 0.13076923076923075\n512 0.5789473684210534\n640 0.8733333333333351\n768 0.984615384615382\n896 0.9999999999999972\n1024 0.9999999999999984\n"
],
[
"plt.plot(ns, ps, label=\"Theoretical\")\nplt.plot(xs, ys, label=\"Empirical\")\nplt.ylabel(\"probability\")\nplt.xlabel(\"$K+1$\")\nplt.title(\"CIFAR-100 simulation\")\nplt.legend()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d041669b245be7c188d24dc7551765a77dead4b0 | 21,225 | ipynb | Jupyter Notebook | notebooks/organism/rice/rg-export-usage.ipynb | tomwhite/gwas-analysis | 5b219607b8311722f16f7df8a8aad09ba69dc448 | [
"Apache-2.0"
] | 19 | 2020-03-18T01:06:58.000Z | 2022-02-06T19:59:30.000Z | notebooks/organism/rice/rg-export-usage.ipynb | tomwhite/gwas-analysis | 5b219607b8311722f16f7df8a8aad09ba69dc448 | [
"Apache-2.0"
] | 39 | 2020-01-20T19:50:19.000Z | 2021-01-07T19:01:48.000Z | notebooks/organism/rice/rg-export-usage.ipynb | tomwhite/gwas-analysis | 5b219607b8311722f16f7df8a8aad09ba69dc448 | [
"Apache-2.0"
] | 5 | 2020-03-13T20:47:24.000Z | 2022-01-13T09:43:35.000Z | 31.491098 | 155 | 0.398492 | [
[
[
"### 3K Rice Genome GWAS Dataset Export Usage",
"_____no_output_____"
],
[
"Data for this was exported as single Hail MatrixTable (`.mt`) as well as individual variants (`csv.gz`), samples (`csv`), and call datasets (`zarr`).",
"_____no_output_____"
]
],
[
[
"from pathlib import Path\nimport pandas as pd\nimport numpy as np\nimport hail as hl\nimport zarr\nhl.init()",
"Running on Apache Spark version 2.4.4\nSparkUI available at http://8352602c2ab9:4041\nWelcome to\n __ __ <>__\n / /_/ /__ __/ /\n / __ / _ `/ / /\n /_/ /_/\\_,_/_/_/ version 0.2.32-a5876a0a2853\nLOGGING: writing to /home/eczech/repos/gwas-analysis/notebooks/organism/rice/hail-20200514-1737-0.2.32-a5876a0a2853.log\n"
],
[
"path = Path('~/data/gwas/rice-snpseek/1M_GWAS_SNP_Dataset/rg-3k-gwas-export').expanduser()\npath",
"_____no_output_____"
],
[
"!du -sh {str(path)}/*",
"582M\t/home/eczech/data/gwas/rice-snpseek/1M_GWAS_SNP_Dataset/rg-3k-gwas-export/rg-3k-gwas-export.calls.zarr\n336K\t/home/eczech/data/gwas/rice-snpseek/1M_GWAS_SNP_Dataset/rg-3k-gwas-export/rg-3k-gwas-export.cols.csv\n471M\t/home/eczech/data/gwas/rice-snpseek/1M_GWAS_SNP_Dataset/rg-3k-gwas-export/rg-3k-gwas-export.mt\n7.5M\t/home/eczech/data/gwas/rice-snpseek/1M_GWAS_SNP_Dataset/rg-3k-gwas-export/rg-3k-gwas-export.rows.csv.gz\n"
]
],
[
[
"#### Hail",
"_____no_output_____"
]
],
[
[
"# The entire table with row, col, and call data:\nhl.read_matrix_table(str(path / 'rg-3k-gwas-export.mt')).describe()",
"----------------------------------------\nGlobal fields:\n None\n----------------------------------------\nColumn fields:\n 's': str\n 'acc_seq_no': int64\n 'acc_stock_id': int64\n 'acc_gs_acc': float64\n 'acc_gs_variety_name': str\n 'acc_igrc_acc_src': int64\n 'pt_APANTH_REPRO': float64\n 'pt_APSH': float64\n 'pt_APCO_REV_POST': float64\n 'pt_APCO_REV_REPRO': float64\n 'pt_AWCO_LREV': float64\n 'pt_AWCO_REV': float64\n 'pt_AWDIST': float64\n 'pt_BLANTHPR_VEG': float64\n 'pt_BLANTHDI_VEG': float64\n 'pt_BLPUB_VEG': float64\n 'pt_BLSCO_ANTH_VEG': float64\n 'pt_BLSCO_REV_VEG': float64\n 'pt_CCO_REV_VEG': float64\n 'pt_CUAN_REPRO': float64\n 'pt_ENDO': float64\n 'pt_FLA_EREPRO': float64\n 'pt_FLA_REPRO': float64\n 'pt_INANTH': float64\n 'pt_LIGCO_REV_VEG': float64\n 'pt_LIGSH': float64\n 'pt_LPCO_REV_POST': float64\n 'pt_LPPUB': float64\n 'pt_LSEN': float64\n 'pt_NOANTH': float64\n 'pt_PEX_REPRO': float64\n 'pt_PTH': float64\n 'pt_SCCO_REV': float64\n 'pt_SECOND_BR_REPRO': float64\n 'pt_SLCO_REV': float64\n 'pt_SPKF': float64\n 'pt_SLLT_CODE': float64\n----------------------------------------\nRow fields:\n 'locus': locus<GRCh37>\n 'alleles': array<str>\n 'rsid': str\n 'cm_position': float64\n----------------------------------------\nEntry fields:\n 'GT': call\n----------------------------------------\nColumn key: ['s']\nRow key: ['locus', 'alleles']\n----------------------------------------\n"
]
],
[
[
"### Pandas",
"_____no_output_____"
],
[
"Sample data contains phenotypes prefixed by `pt_` and `s` (sample_id) in the MatrixTable matches to the `s` in this table, as does the order:",
"_____no_output_____"
]
],
[
[
"pd.read_csv(path / 'rg-3k-gwas-export.cols.csv').head()",
"_____no_output_____"
]
],
[
[
"Variant data shouldn't be needed for much, but it's here:",
"_____no_output_____"
]
],
[
[
"pd.read_csv(path / 'rg-3k-gwas-export.rows.csv.gz').head()",
"_____no_output_____"
]
],
[
[
"### Zarr",
"_____no_output_____"
],
[
"Call data (dense and mean imputed in this case) can be sliced from a zarr array:",
"_____no_output_____"
]
],
[
[
"gt = zarr.open(str(path / 'rg-3k-gwas-export.calls.zarr'), mode='r')\n# Get calls for 10 variants and 5 samples\ngt[5:15, 5:10]",
"_____no_output_____"
]
],
[
[
"### Selecting Phenotypes",
"_____no_output_____"
],
[
"Pick a phenotype:\n \n- Definitions are in https://s3-ap-southeast-1.amazonaws.com/oryzasnp-atcg-irri-org/3kRG-phenotypes/3kRG_PhenotypeData_v20170411.xlsx\n - The \">2007 Dictionary\" sheet\n- Choose one with low sparsity\n",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(path / 'rg-3k-gwas-export.cols.csv')\ndf.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2113 entries, 0 to 2112\nData columns (total 37 columns):\ns 2113 non-null object\nacc_seq_no 2113 non-null int64\nacc_stock_id 2113 non-null int64\nacc_gs_acc 2113 non-null float64\nacc_gs_variety_name 2113 non-null object\nacc_igrc_acc_src 2113 non-null int64\npt_APANTH_REPRO 91 non-null float64\npt_APSH 133 non-null float64\npt_APCO_REV_POST 552 non-null float64\npt_APCO_REV_REPRO 2108 non-null float64\npt_AWCO_LREV 133 non-null float64\npt_AWCO_REV 2112 non-null float64\npt_AWDIST 30 non-null float64\npt_BLANTHPR_VEG 133 non-null float64\npt_BLANTHDI_VEG 13 non-null float64\npt_BLPUB_VEG 2112 non-null float64\npt_BLSCO_ANTH_VEG 133 non-null float64\npt_BLSCO_REV_VEG 2111 non-null float64\npt_CCO_REV_VEG 2110 non-null float64\npt_CUAN_REPRO 2111 non-null float64\npt_ENDO 1976 non-null float64\npt_FLA_EREPRO 133 non-null float64\npt_FLA_REPRO 2109 non-null float64\npt_INANTH 133 non-null float64\npt_LIGCO_REV_VEG 2111 non-null float64\npt_LIGSH 1430 non-null float64\npt_LPCO_REV_POST 2109 non-null float64\npt_LPPUB 1430 non-null float64\npt_LSEN 2110 non-null float64\npt_NOANTH 133 non-null float64\npt_PEX_REPRO 2112 non-null float64\npt_PTH 2109 non-null float64\npt_SCCO_REV 1985 non-null float64\npt_SECOND_BR_REPRO 1427 non-null float64\npt_SLCO_REV 1914 non-null float64\npt_SPKF 2111 non-null float64\npt_SLLT_CODE 2109 non-null float64\ndtypes: float64(32), int64(3), object(2)\nmemory usage: 610.9+ KB\n"
],
[
"# First 1k variants with samples having data for this phenotype\nmask = df['pt_FLA_REPRO'].notnull()\ngtp = gt[:1000][:,mask]\ngtp.shape, gtp.dtype",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
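"code",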
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
d0416efdd87664718f280b37ef154f20048a0f60 | 12,846 | ipynb | Jupyter Notebook | notebooks/cugraph_benchmarks/pagerank_benchmark.ipynb | hlinsen/cugraph | ad92c1e8a7219b3eb57104f5242452d3c5a6e9a6 | [
"Apache-2.0"
] | 991 | 2018-12-05T22:07:52.000Z | 2022-03-31T10:45:45.000Z | notebooks/cugraph_benchmarks/pagerank_benchmark.ipynb | hlinsen/cugraph | ad92c1e8a7219b3eb57104f5242452d3c5a6e9a6 | [
"Apache-2.0"
] | 1,929 | 2018-12-06T14:06:18.000Z | 2022-03-31T20:01:00.000Z | notebooks/cugraph_benchmarks/pagerank_benchmark.ipynb | hlinsen/cugraph | ad92c1e8a7219b3eb57104f5242452d3c5a6e9a6 | [
"Apache-2.0"
] | 227 | 2018-12-06T18:10:15.000Z | 2022-03-28T19:03:15.000Z | 27.507495 | 316 | 0.502258 | [
[
[
"# PageRank Performance Benchmarking\n# Skip notebook test\n\nThis notebook benchmarks performance of running PageRank within cuGraph against NetworkX. NetworkX contains several implementations of PageRank. This benchmark will compare cuGraph versus the defaukt Nx implementation as well as the SciPy version\n\nNotebook Credits\n\n Original Authors: Bradley Rees\n Last Edit: 08/16/2020\n \nRAPIDS Versions: 0.15\n\nTest Hardware\n\n GV100 32G, CUDA 10,0\n Intel(R) Core(TM) CPU i7-7800X @ 3.50GHz\n 32GB system memory\n \n",
"_____no_output_____"
],
[
"### Test Data\n\n| File Name | Num of Vertices | Num of Edges |\n|:---------------------- | --------------: | -----------: |\n| preferentialAttachment | 100,000 | 999,970 |\n| caidaRouterLevel | 192,244 | 1,218,132 |\n| coAuthorsDBLP | 299,067 | 1,955,352 |\n| dblp-2010 | 326,186 | 1,615,400 |\n| citationCiteseer | 268,495 | 2,313,294 |\n| coPapersDBLP | 540,486 | 30,491,458 |\n| coPapersCiteseer | 434,102 | 32,073,440 |\n| as-Skitter | 1,696,415 | 22,190,596 |\n\n\n",
"_____no_output_____"
],
[
"### Timing \nWhat is not timed: Reading the data\n\nWhat is timmed: (1) creating a Graph, (2) running PageRank\n\nThe data file is read in once for all flavors of PageRank. Each timed block will craete a Graph and then execute the algorithm. The results of the algorithm are not compared. If you are interested in seeing the comparison of results, then please see PageRank in the __notebooks__ repo. ",
"_____no_output_____"
],
[
"## NOTICE\n_You must have run the __dataPrep__ script prior to running this notebook so that the data is downloaded_\n\nSee the README file in this folder for a discription of how to get the data",
"_____no_output_____"
],
[
"## Now load the required libraries",
"_____no_output_____"
]
],
[
[
"# Import needed libraries\nimport gc\nimport time\nimport rmm\nimport cugraph\nimport cudf",
"_____no_output_____"
],
[
"# NetworkX libraries\nimport networkx as nx\nfrom scipy.io import mmread",
"_____no_output_____"
],
[
"try: \n import matplotlib\nexcept ModuleNotFoundError:\n os.system('pip install matplotlib')",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt; plt.rcdefaults()\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"### Define the test data",
"_____no_output_____"
]
],
[
[
"# Test File\ndata = {\n 'preferentialAttachment' : './data/preferentialAttachment.mtx',\n 'caidaRouterLevel' : './data/caidaRouterLevel.mtx',\n 'coAuthorsDBLP' : './data/coAuthorsDBLP.mtx',\n 'dblp' : './data/dblp-2010.mtx',\n 'citationCiteseer' : './data/citationCiteseer.mtx',\n 'coPapersDBLP' : './data/coPapersDBLP.mtx',\n 'coPapersCiteseer' : './data/coPapersCiteseer.mtx',\n 'as-Skitter' : './data/as-Skitter.mtx'\n}",
"_____no_output_____"
]
],
[
[
"### Define the testing functions",
"_____no_output_____"
]
],
[
[
"# Data reader - the file format is MTX, so we will use the reader from SciPy\ndef read_mtx_file(mm_file):\n print('Reading ' + str(mm_file) + '...')\n M = mmread(mm_file).asfptype()\n \n return M",
"_____no_output_____"
],
[
"# CuGraph PageRank\n\ndef cugraph_call(M, max_iter, tol, alpha):\n\n gdf = cudf.DataFrame()\n gdf['src'] = M.row\n gdf['dst'] = M.col\n \n print('\\tcuGraph Solving... ')\n \n t1 = time.time()\n \n # cugraph Pagerank Call\n G = cugraph.DiGraph()\n G.from_cudf_edgelist(gdf, source='src', destination='dst', renumber=False)\n \n df = cugraph.pagerank(G, alpha=alpha, max_iter=max_iter, tol=tol)\n t2 = time.time() - t1\n \n return t2\n ",
"_____no_output_____"
],
[
"# Basic NetworkX PageRank\n\ndef networkx_call(M, max_iter, tol, alpha):\n nnz_per_row = {r: 0 for r in range(M.get_shape()[0])}\n for nnz in range(M.getnnz()):\n nnz_per_row[M.row[nnz]] = 1 + nnz_per_row[M.row[nnz]]\n for nnz in range(M.getnnz()):\n M.data[nnz] = 1.0/float(nnz_per_row[M.row[nnz]])\n\n M = M.tocsr()\n if M is None:\n raise TypeError('Could not read the input graph')\n if M.shape[0] != M.shape[1]:\n raise TypeError('Shape is not square')\n\n # should be autosorted, but check just to make sure\n if not M.has_sorted_indices:\n print('sort_indices ... ')\n M.sort_indices()\n\n z = {k: 1.0/M.shape[0] for k in range(M.shape[0])}\n \n print('\\tNetworkX Solving... ')\n \n # start timer\n t1 = time.time()\n \n Gnx = nx.DiGraph(M)\n\n pr = nx.pagerank(Gnx, alpha, z, max_iter, tol)\n \n t2 = time.time() - t1\n\n return t2",
"_____no_output_____"
],
[
"# SciPy PageRank\n\ndef networkx_scipy_call(M, max_iter, tol, alpha):\n nnz_per_row = {r: 0 for r in range(M.get_shape()[0])}\n for nnz in range(M.getnnz()):\n nnz_per_row[M.row[nnz]] = 1 + nnz_per_row[M.row[nnz]]\n for nnz in range(M.getnnz()):\n M.data[nnz] = 1.0/float(nnz_per_row[M.row[nnz]])\n\n M = M.tocsr()\n if M is None:\n raise TypeError('Could not read the input graph')\n if M.shape[0] != M.shape[1]:\n raise TypeError('Shape is not square')\n\n # should be autosorted, but check just to make sure\n if not M.has_sorted_indices:\n print('sort_indices ... ')\n M.sort_indices()\n\n z = {k: 1.0/M.shape[0] for k in range(M.shape[0])}\n\n # SciPy Pagerank Call\n print('\\tSciPy Solving... ')\n t1 = time.time()\n \n Gnx = nx.DiGraph(M) \n \n pr = nx.pagerank_scipy(Gnx, alpha, z, max_iter, tol)\n t2 = time.time() - t1\n\n return t2",
"_____no_output_____"
]
],
[
[
"### Run the benchmarks",
"_____no_output_____"
]
],
[
[
"# arrays to capture performance gains\ntime_cu = []\ntime_nx = []\ntime_sp = []\nperf_nx = []\nperf_sp = []\nnames = []\n\n# init libraries by doing a simple task \nv = './data/preferentialAttachment.mtx'\nM = read_mtx_file(v)\ntrapids = cugraph_call(M, 100, 0.00001, 0.85)\ndel M\n\n\nfor k,v in data.items():\n gc.collect()\n\n # Saved the file Name\n names.append(k)\n \n # read the data\n M = read_mtx_file(v)\n \n # call cuGraph - this will be the baseline\n trapids = cugraph_call(M, 100, 0.00001, 0.85)\n time_cu.append(trapids)\n \n # Now call NetworkX\n tn = networkx_call(M, 100, 0.00001, 0.85)\n speedUp = (tn / trapids)\n perf_nx.append(speedUp)\n time_nx.append(tn)\n \n # Now call SciPy\n tsp = networkx_scipy_call(M, 100, 0.00001, 0.85)\n speedUp = (tsp / trapids)\n perf_sp.append(speedUp) \n time_sp.append(tsp)\n \n print(\"cuGraph (\" + str(trapids) + \") Nx (\" + str(tn) + \") SciPy (\" + str(tsp) + \")\" )\n del M",
"_____no_output_____"
]
],
[
[
"### plot the output",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nplt.figure(figsize=(10,8))\n\nbar_width = 0.35\nindex = np.arange(len(names))\n\n_ = plt.bar(index, perf_nx, bar_width, color='g', label='vs Nx')\n_ = plt.bar(index + bar_width, perf_sp, bar_width, color='b', label='vs SciPy')\n\nplt.xlabel('Datasets')\nplt.ylabel('Speedup')\nplt.title('PageRank Performance Speedup')\nplt.xticks(index + (bar_width / 2), names)\nplt.xticks(rotation=90) \n\n# Text on the top of each barplot\nfor i in range(len(perf_nx)):\n plt.text(x = (i - 0.55) + bar_width, y = perf_nx[i] + 25, s = round(perf_nx[i], 1), size = 12)\n\nfor i in range(len(perf_sp)):\n plt.text(x = (i - 0.1) + bar_width, y = perf_sp[i] + 25, s = round(perf_sp[i], 1), size = 12)\n\n\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Dump the raw stats",
"_____no_output_____"
]
],
[
[
"perf_nx",
"_____no_output_____"
],
[
"perf_sp",
"_____no_output_____"
],
[
"time_cu",
"_____no_output_____"
],
[
"time_nx",
"_____no_output_____"
],
[
"time_sp",
"_____no_output_____"
]
],
[
[
"___\nCopyright (c) 2020, NVIDIA CORPORATION.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.\n___",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d04178a7b604389c6fdf889b6d41965ef6ad0313 | 1,273 | ipynb | Jupyter Notebook | nbcollection/tests/data/my_notebooks/sub_path1/notebook2.ipynb | jonathansick/nbcollection | 516c17230c0eac09170275b4cd1f46b2f9bfb9da | [
"BSD-3-Clause"
] | 6 | 2021-04-13T23:08:14.000Z | 2021-11-14T03:23:20.000Z | nbcollection/tests/data/my_notebooks/sub_path1/notebook2.ipynb | jonathansick/nbcollection | 516c17230c0eac09170275b4cd1f46b2f9bfb9da | [
"BSD-3-Clause"
] | 11 | 2020-07-08T14:10:47.000Z | 2022-01-18T16:04:34.000Z | nbcollection/tests/data/my_notebooks/sub_path1/notebook2.ipynb | adrn/nbstatic | a1101efbf140d872a8220a6bbb3d95f29a9887f0 | [
"BSD-3-Clause"
] | 3 | 2020-07-21T19:55:24.000Z | 2021-09-21T15:44:26.000Z | 17.929577 | 47 | 0.527101 | [
[
[
"# My Notebook 2",
"_____no_output_____"
]
],
[
[
"import os",
"_____no_output_____"
],
[
"print(\"I am notebook 2\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
]
] |
d04178f934cb4930e639715151964d272670e712 | 8,121 | ipynb | Jupyter Notebook | nick L/2021 fall/word2vec2.ipynb | excellalabs/howard-vip-notebooks-winter-2021 | 2c62ab75fc9e75dc88a2d3d1945bba975dfb67d7 | [
"MIT"
] | null | null | null | nick L/2021 fall/word2vec2.ipynb | excellalabs/howard-vip-notebooks-winter-2021 | 2c62ab75fc9e75dc88a2d3d1945bba975dfb67d7 | [
"MIT"
] | null | null | null | nick L/2021 fall/word2vec2.ipynb | excellalabs/howard-vip-notebooks-winter-2021 | 2c62ab75fc9e75dc88a2d3d1945bba975dfb67d7 | [
"MIT"
] | null | null | null | 22.495845 | 162 | 0.535771 | [
[
[
"pip install contractions",
"_____no_output_____"
],
[
"import pandas as pd\nimport boto3, sys,sagemaker \nimport pandas as pd \nimport pandas as pd\nimport numpy as np\nimport nltk\nimport string\nimport contractions\nfrom nltk.tokenize import word_tokenize\nfrom nltk.tokenize import sent_tokenize\nfrom nltk.corpus import stopwords, wordnet\nfrom nltk.stem import WordNetLemmatizer\n# plt.xticks(rotation=70)\npd.options.mode.chained_assignment = None\npd.set_option('display.max_colwidth', 100)\n%matplotlib inline\npd.set_option('display.max_rows', None)\npd.set_option('display.max_columns', None)\n",
"_____no_output_____"
],
[
"df = pd.read_csv('cleanedText.csv', index_col=0)",
"_____no_output_____"
],
[
"df['no_punc'][1]",
"_____no_output_____"
],
[
"table = df.loc[:,['TEXT']]",
"_____no_output_____"
],
[
"nltk.download('punkt')\ntable['tokenizedSent'] = table['TEXT'].apply(sent_tokenize)\n",
"_____no_output_____"
],
[
"table['tokenizedSent'][1]",
"_____no_output_____"
],
[
"table['tokenized_words'] = table['tokenizedSent'].apply(lambda x: [word_tokenize(sent) for sent in x])\n# punc = string.punctuation\n# table['no_punc/numbers'] = table['tokenizedSent'].apply(lambda x: [word.lower() for word in x if word not in punc])",
"_____no_output_____"
],
[
"table['tokenized_words'][1]",
"_____no_output_____"
],
[
" table['no_punc/numbers'] = table['tokenized_words'].apply(lambda x: [[word.lower() for word in sent if word not in punc and word.isalpha()] for sent in x])",
"_____no_output_____"
],
[
" table['no_punc/numbers'][1]",
"_____no_output_____"
],
[
"nltk.download('stopwords')\nstop_words = set(stopwords.words('english'))\ntable['stopwords_removed'] = table['no_punc/numbers'].apply(lambda x: [[word for word in sent if word not in stop_words]for sent in x])",
"_____no_output_____"
],
[
"table['stopwords_removed'][1]",
"_____no_output_____"
],
[
"nltk.download('averaged_perceptron_tagger')\ntable['pos_tags'] = table['stopwords_removed'].apply(lambda x: [nltk.tag.pos_tag(sent) for sent in x if len(sent)> 0 ])",
"_____no_output_____"
],
[
"table['pos_tags'][1]",
"_____no_output_____"
],
[
"nltk.download('wordnet')\ndef get_wordnet_pos(tag):\n if tag.startswith('J'):\n return wordnet.ADJ\n elif tag.startswith('V'):\n return wordnet.VERB\n elif tag.startswith('N'):\n return wordnet.NOUN\n elif tag.startswith('R'):\n return wordnet.ADV\n else:\n return wordnet.NOUN\ntable['wordnet_pos'] = table['pos_tags'].apply(lambda x: [[(word, get_wordnet_pos(pos_tag)) for (word, pos_tag) in sent]for sent in x])",
"_____no_output_____"
],
[
"table['wordnet_pos'][1]",
"_____no_output_____"
],
[
"wnl = WordNetLemmatizer()\ntable['lemmatized'] = table['wordnet_pos'].apply(lambda x: [[wnl.lemmatize(word, tag) for word, tag in sent] for sent in x])\n",
"_____no_output_____"
],
[
"table.head()",
"_____no_output_____"
],
[
"table.to_csv('sentenceRowClean.csv')",
"_____no_output_____"
],
[
"pip install gensim",
"_____no_output_____"
],
[
"import logging\nfrom gensim.models.word2vec import Word2Vec\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)\nw2v_model2 = Word2Vec(table['lemmatized'], vector_size=100, min_count=2)",
"_____no_output_____"
],
[
"y = list(table['lemmatized'])\nsentences = []\nfor row in y:\n for sent in row:\n sentences.append(sent)\n ",
"_____no_output_____"
],
[
"import logging\nfrom gensim.models.word2vec import Word2Vec\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)\nw2v_model2 = Word2Vec(sentences, vector_size=100, min_count=2)",
"_____no_output_____"
],
[
"w2v_model2.wv.most_similar(['disease'])",
"_____no_output_____"
],
[
"w2v_model2.wv.most_similar(['cancer'])",
"_____no_output_____"
],
[
"w2v_model2.save('w2vec2')",
"_____no_output_____"
],
[
"model1[w]",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d041838f841fe9d5736648a8526950cf671a89c9 | 81,503 | ipynb | Jupyter Notebook | Matplotlib/Violin_and_Box_Plot_Practice.ipynb | iamleeg/AIPND | a8be775c03c67e52da53b938886d0a1319e518c0 | [
"MIT"
] | null | null | null | Matplotlib/Violin_and_Box_Plot_Practice.ipynb | iamleeg/AIPND | a8be775c03c67e52da53b938886d0a1319e518c0 | [
"MIT"
] | null | null | null | Matplotlib/Violin_and_Box_Plot_Practice.ipynb | iamleeg/AIPND | a8be775c03c67e52da53b938886d0a1319e518c0 | [
"MIT"
] | null | null | null | 245.490964 | 35,476 | 0.894507 | [
[
[
"# prerequisite package imports\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sb\n\n%matplotlib inline\n\nfrom solutions_biv import violinbox_solution_1",
"_____no_output_____"
]
],
[
[
"We'll continue to make use of the fuel economy dataset in this workspace.",
"_____no_output_____"
]
],
[
[
"fuel_econ = pd.read_csv('./data/fuel_econ.csv')\nfuel_econ.head()",
"_____no_output_____"
]
],
[
[
"**Task**: What is the relationship between the size of a car and the size of its engine? The cars in this dataset are categorized into one of five different vehicle classes based on size. Starting from the smallest, they are: {Minicompact Cars, Subcompact Cars, Compact Cars, Midsize Cars, and Large Cars}. The vehicle classes can be found in the 'VClass' variable, while the engine sizes are in the 'displ' column (in liters). **Hint**: Make sure that the order of vehicle classes makes sense in your plot!",
"_____no_output_____"
]
],
[
[
"# YOUR CODE HERE\ncar_classes = ['Minicompact Cars', 'Subcompact Cars', 'Compact Cars', 'Midsize Cars', 'Large Cars']\nvclasses = pd.api.types.CategoricalDtype(ordered = True, categories = car_classes)\nfuel_econ['VClass'] = fuel_econ['VClass'].astype(vclasses)\nsb.violinplot(data = fuel_econ, x = 'VClass', y = 'displ')\nplt.xticks(rotation = 15)",
"_____no_output_____"
],
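    [
        "# An added sketch: the same comparison drawn as a box plot (the solution text below discusses both);\n# boxes make the medians and quartiles easier to compare across vehicle classes.\nsb.boxplot(data = fuel_econ, x = 'VClass', y = 'displ', color = sb.color_palette()[0])\nplt.xticks(rotation = 15)",
        "_____no_output_____"
    ],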
[
"# run this cell to check your work against ours\nviolinbox_solution_1()",
"I used a violin plot to depict the data in this case; you might have chosen a box plot instead. One of the interesting things about the relationship between variables is that it isn't consistent. Compact cars tend to have smaller engine sizes than the minicompact and subcompact cars, even though those two vehicle sizes are smaller. The box plot would make it easier to see that the median displacement for the two smallest vehicle classes is greater than the third quartile of the compact car class.\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0418e48dd29cd10515f953e669da1c9459d0719 | 80,354 | ipynb | Jupyter Notebook | examples/Entrainment-LF17/plot_Entrainment-LF17.ipynb | jithuraju1290/gotmtool | 8cb811a2b02a68b6f14136745fe65a931ec77ef5 | [
"MIT"
] | null | null | null | examples/Entrainment-LF17/plot_Entrainment-LF17.ipynb | jithuraju1290/gotmtool | 8cb811a2b02a68b6f14136745fe65a931ec77ef5 | [
"MIT"
] | null | null | null | examples/Entrainment-LF17/plot_Entrainment-LF17.ipynb | jithuraju1290/gotmtool | 8cb811a2b02a68b6f14136745fe65a931ec77ef5 | [
"MIT"
] | null | null | null | 264.322368 | 69,644 | 0.90507 | [
[
[
"# Langmuir-enhanced entrainment\n\nThis notebook reproduces Fig. 15 of [Li et al., 2019](https://doi.org/10.1029/2019MS001810).",
"_____no_output_____"
]
],
[
[
"import sys\nimport numpy as np\nfrom scipy import stats\nimport matplotlib.pyplot as plt \nfrom matplotlib import cm\nfrom mpl_toolkits.axes_grid1.inset_locator import inset_axes\nsys.path.append(\"../../../gotmtool\")\nfrom gotmtool import *",
"_____no_output_____"
],
[
"def plot_hLL_dpedt(hLL, dpedt, casename_list, ax=None, xlabel_on=True):\n if ax is None: \n ax = plt.gca()\n idx_WD05 = [('WD05' in casename) for casename in casename_list]\n idx_WD08 = [('WD08' in casename) for casename in casename_list]\n idx_WD10 = [('WD10' in casename) for casename in casename_list]\n b0_str = [casename[2:4] for casename in casename_list]\n b0 = np.array([float(tmp[0])*100 if 'h' in tmp else float(tmp) for tmp in b0_str])\n b0_min = b0.min()\n b0_max = b0.max()\n ax.plot(hLL, dpedt, color='k', linewidth=1, linestyle=':', zorder=1)\n im = ax.scatter(hLL[idx_WD05], dpedt[idx_WD05], c=b0[idx_WD05], marker='d', edgecolors='k',\n linewidth=1, zorder=2, label='$U_{10}=5$ m s$^{-1}$', cmap='bone_r', vmin=b0_min, vmax=b0_max)\n ax.scatter(hLL[idx_WD08], dpedt[idx_WD08], c=b0[idx_WD08], marker='s', edgecolors='k',\n linewidth=1, zorder=2, label='$U_{10}=8$ m s$^{-1}$', cmap='bone_r', vmin=b0_min, vmax=b0_max)\n ax.scatter(hLL[idx_WD10], dpedt[idx_WD10], c=b0[idx_WD10], marker='^', edgecolors='k',\n linewidth=1, zorder=2, label='$U_{10}=10$ m s$^{-1}$', cmap='bone_r', vmin=b0_min, vmax=b0_max)\n ax.legend(loc='upper left')\n # add colorbar\n ax_inset = inset_axes(ax, width=\"30%\", height=\"3%\", loc='lower right',\n bbox_to_anchor=(-0.05, 0.1, 1, 1),\n bbox_transform=ax.transAxes,\n borderpad=0,)\n cb = plt.colorbar(im, cax=ax_inset, orientation='horizontal', shrink=0.35,\n ticks=[5, 100, 300, 500])\n cb.ax.set_xticklabels(['-5','-100','-300','-500'])\n ax.text(0.75, 0.2, '$Q_0$ (W m$^{-2}$)', color='black', transform=ax.transAxes,\n fontsize=10, va='top', ha='left')\n # get axes ratio\n ll, ur = ax.get_position() * plt.gcf().get_size_inches()\n width, height = ur - ll\n axes_ratio = height / width\n # add arrow and label\n add_arrow(ax, 0.6, 0.2, 0.3, 0.48, axes_ratio, color='gray', text='Increasing Convection')\n add_arrow(ax, 0.3, 0.25, -0.2, 0.1, axes_ratio, color='black', text='Increasing Langmuir')\n add_arrow(ax, 0.65, 0.75, -0.25, 0.01, axes_ratio, color='black', text='Increasing Langmuir')\n ax.set_xscale('log')\n ax.set_yscale('log')\n if xlabel_on:\n ax.set_xlabel('$h/\\kappa L$', fontsize=14)\n ax.set_ylabel('$d\\mathrm{PE}/dt$', fontsize=14)\n ax.set_xlim([3e-3, 4e1])\n ax.set_ylim([2e-4, 5e-2])\n # set the tick labels font\n for label in (ax.get_xticklabels() + ax.get_yticklabels()):\n label.set_fontsize(14)\n\ndef plot_hLL_R(hLL, R, colors, legend_list, ax=None, xlabel_on=True):\n if ax is None: \n ax = plt.gca()\n ax.axhline(y=1, linewidth=1, color='black')\n nm = R.shape[0]\n for i in np.arange(nm):\n ax.scatter(hLL, R[i,:], color=colors[i], edgecolors='k', linewidth=0.5, zorder=10)\n ax.set_xscale('log')\n ax.set_xlim([3e-3, 4e1])\n if xlabel_on:\n ax.set_xlabel('$h/L_L$', fontsize=14)\n ax.set_ylabel('$R$', fontsize=14)\n # set the tick labels font\n for label in (ax.get_xticklabels() + ax.get_yticklabels()):\n label.set_fontsize(14)\n # legend\n if nm > 1:\n xshift = 0.2 + 0.05*(11-nm)\n xx = np.arange(nm)+1\n xx = xx*0.06+xshift\n yy = np.ones(xx.size)*0.1\n for i in np.arange(nm):\n ax.text(xx[i], yy[i], legend_list[i], color='black', transform=ax.transAxes,\n fontsize=12, rotation=45, va='bottom', ha='left')\n ax.scatter(xx[i], 0.07, s=60, color=colors[i], edgecolors='k', linewidth=1, transform=ax.transAxes)\n\ndef add_arrow(ax, x, y, dx, dy, axes_ratio, color='black', text=None):\n ax.arrow(x, y, dx, dy, width=0.006, color=color, transform=ax.transAxes)\n if text is not None:\n dl = np.sqrt(dx**2+dy**2)\n xx = x + 0.5*dx + dy/dl*0.06\n yy = y + 0.5*dy - 
dx/dl*0.06\n angle = np.degrees(np.arctan(dy/dx*axes_ratio))\n ax.text(xx, yy, text, color=color, transform=ax.transAxes, fontsize=11,\n rotation=angle, va='center', ha='center')",
"_____no_output_____"
]
],
[
[
"### Load LF17 data",
"_____no_output_____"
]
],
[
[
"# load LF17 data\nlf17_data = np.load('LF17_dPEdt.npz')\nus0 = lf17_data['us0']\nb0 = lf17_data['b0']\nustar = lf17_data['ustar']\nhb = lf17_data['hb']\ndpedt = lf17_data['dpedt']\ncasenames = lf17_data['casenames']\nncase = len(casenames)\n\n# get parameter h/L_L= w*^3/u*^2/u^s(0)\ninds = us0==0\nus0[inds] = np.nan\nhLL = b0*hb/ustar**2/us0",
"_____no_output_____"
]
],
[
[
"### Compute the rate of change in potential energy in GOTM runs",
"_____no_output_____"
]
],
[
[
"turbmethods = [\n 'GLS-C01A',\n 'KPP-CVMix',\n 'KPPLT-VR12',\n 'KPPLT-LF17',\n ]\nntm = len(turbmethods)\ncmap = cm.get_cmap('rainbow')\nif ntm == 1:\n colors = ['gray']\nelse:\n colors = cmap(np.linspace(0,1,ntm))\n",
"_____no_output_____"
],
[
"m = Model(name='Entrainment-LF17', environ='../../.gotm_env.yaml')\ngotmdir = m.environ['gotmdir_run']+'/'+m.name\nprint(gotmdir)",
"/Users/qingli/develop/dev_gotmtool/gotm/run/Entrainment-LF17\n"
],
[
"# Coriolis parameter (s^{-1})\nf = 4*np.pi/86400*np.sin(np.pi/4)\n# Inertial period (s)\nTi = 2*np.pi/f\n# get dPEdt from GOTM run\nrdpedt = np.zeros([ntm, ncase])\nfor i in np.arange(ntm):\n print(turbmethods[i])\n for j in np.arange(ncase):\n sim = Simulation(path=gotmdir+'/'+casenames[j]+'/'+turbmethods[i])\n var_gotm = sim.load_data().Epot\n epot_gotm = var_gotm.data.squeeze()\n dtime = var_gotm.time - var_gotm.time[0]\n time_gotm = (dtime.dt.days*86400.+dtime.dt.seconds).data\n # starting index for the last inertial period\n t0_gotm = time_gotm[-1]-Ti\n tidx0_gotm = np.argmin(np.abs(time_gotm-t0_gotm))\n # linear fit\n xx_gotm = time_gotm[tidx0_gotm:]-time_gotm[tidx0_gotm]\n yy_gotm = epot_gotm[tidx0_gotm:]-epot_gotm[tidx0_gotm]\n slope_gotm, intercept_gotm, r_value_gotm, p_value_gotm, std_err_gotm = stats.linregress(xx_gotm,yy_gotm)\n rdpedt[i,j] = slope_gotm/dpedt[j]",
"GLS-C01A\nKPP-CVMix\nKPPLT-VR12\nKPPLT-LF17\n"
],
[
"fig, axarr = plt.subplots(2, 1, sharex='col')\nfig.set_size_inches(6, 7)\nplt.subplots_adjust(left=0.15, right=0.95, bottom=0.09, top=0.95, hspace=0.1)\nplot_hLL_dpedt(hLL, dpedt, casenames, ax=axarr[0])\nplot_hLL_R(hLL, rdpedt, colors, turbmethods, ax=axarr[1])\naxarr[0].text(0.04, 0.14, '(a)', color='black', transform=axarr[0].transAxes,\n fontsize=14, va='top', ha='left')\naxarr[1].text(0.88, 0.94, '(b)', color='black', transform=axarr[1].transAxes,\n fontsize=14, va='top', ha='left')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d041a3d8eb1715f0c95f596b3a99ab145b673844 | 12,750 | ipynb | Jupyter Notebook | 05-statistics.ipynb | eduardo-rodrigues/2020-03-03_DESY_Scikit-HEP_HandsOn | 5fc9443799b2e8b6fbcbb08c64727ba9514994d4 | [
"BSD-3-Clause"
] | 1 | 2021-03-08T17:34:06.000Z | 2021-03-08T17:34:06.000Z | 05-statistics.ipynb | eduardo-rodrigues/2020-03-03_DESY_Scikit-HEP_HandsOn | 5fc9443799b2e8b6fbcbb08c64727ba9514994d4 | [
"BSD-3-Clause"
] | 2 | 2020-02-28T15:51:38.000Z | 2020-03-01T10:45:02.000Z | 05-statistics.ipynb | eduardo-rodrigues/2020-03-03_DESY_Scikit-HEP_HandsOn | 5fc9443799b2e8b6fbcbb08c64727ba9514994d4 | [
"BSD-3-Clause"
] | 2 | 2020-02-29T11:10:20.000Z | 2020-05-18T20:02:33.000Z | 122.596154 | 9,600 | 0.879686 | [
[
[
"<center><h1><b><span style=\"color:blue\">Statistics</span></b></h1></center>\n\n\n#### **Quick intro to the following packages**\n- `hepstats`.\n\nI will not discuss here the `pyhf` package, which is very niche.\nPlease refer to the [GitHub repository](https://github.com/scikit-hep/pyhf) or related material at https://scikit-hep.org/resources.",
"_____no_output_____"
],
[
"## **`hepstats` - statistics tools and utilities**\n\nThe package contains 2 submodules:\n- `hypotests`: provides tools to do hypothesis tests such as discovery test and computations of upper limits or confidence intervals.\n- `modeling`: includes the Bayesian Block algorithm that can be used to improve the binning of histograms.\n\nNote: feel free to complement the introduction below with the several tutorials available from the [GitHub repository](https://github.com/scikit-hep/hepstats).",
"_____no_output_____"
],
[
"### **1. Adaptive binning determination**\n\nThe Bayesian Block algorithm produces histograms that accurately represent the underlying distribution while being robust to statistical fluctuations.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n\nfrom hepstats.modeling import bayesian_blocks\n\ndata = np.append(np.random.laplace(size=10000), np.random.normal(5., 1., size=15000))\n\nbblocks = bayesian_blocks(data)\n\nplt.hist(data, bins=1000, label='Fine Binning', density=True)\nplt.hist(data, bins=bblocks, label='Bayesian Blocks', histtype='step', linewidth=2, density=True)\nplt.legend(loc=2);",
"_____no_output_____"
]
],
[
[
"### **2. Modelling**\n\nSo far the modelling functionality in the package makes use of the `zfit` package, see for example the `README` file and the `notebooks` directory in the [GitHub repository](https://github.com/scikit-hep/hepstats). Usage of `zfit` will be the topic of the last notebook in this tutorial, hence no further examples on `hepstats` are provided at this stage.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d041ab68d1247c39bc8b0ed68f8030b200258d90 | 113,263 | ipynb | Jupyter Notebook | data/Output-Python/Tirmzi_istep4-Copy2.ipynb | maroniea/xsede-spm | d272c62ecce7df9f6923b456e54d59cd0738e6c6 | [
"MIT"
] | 2 | 2021-10-04T22:22:40.000Z | 2021-10-04T22:44:46.000Z | data/Output-Python/Tirmzi_istep4-Copy2.ipynb | maroniea/xsede-spm | d272c62ecce7df9f6923b456e54d59cd0738e6c6 | [
"MIT"
] | 1 | 2021-07-27T02:00:38.000Z | 2021-07-27T02:00:38.000Z | data/Tirmzi_istep4-Copy2.ipynb | maroniea/xsede-spm | d272c62ecce7df9f6923b456e54d59cd0738e6c6 | [
"MIT"
] | null | null | null | 116.167179 | 20,224 | 0.85041 | [
[
[
"##Tirmzi Analysis\nn=1000 m+=1000 nm-=120 istep= 4 min=150 max=700",
"_____no_output_____"
]
],
[
[
"import sys\nsys.path\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nfrom scipy import signal",
"_____no_output_____"
],
[
"ls",
" Volume in drive C is Local Disk\n Volume Serial Number is AA1B-997A\n\n Directory of C:\\Users\\emaro\\OneDrive - University of Mount Union\\XSEDE Summer 2021\\xsede-spm\\data\n\n07/28/2021 11:41 AM <DIR> .\n07/28/2021 11:41 AM <DIR> ..\n07/28/2021 11:39 AM <DIR> .ipynb_checkpoints\n07/28/2021 11:37 AM 6,163 C' v. Z for 1nm thick sample 06-28-2021.png\n07/28/2021 11:37 AM 6,179 C' v. Z for varying sample thickness, 06-28-2021.png\n07/28/2021 11:37 AM 5,715 Cz v. Z for varying sample thickness, 06-28-2021.png\n07/28/2021 11:42 AM <DIR> FortranOutputTest\n07/28/2021 11:28 AM <DIR> PythonOutputTest\n07/28/2021 11:41 AM 91,187 Tirmzi_istep4-Copy2.ipynb\n 4 File(s) 109,244 bytes\n 5 Dir(s) 142,724,673,536 bytes free\n"
],
[
"import capsol.newanalyzecapsol as ac",
"_____no_output_____"
],
[
"ac.get_gridparameters",
"_____no_output_____"
],
[
"import glob",
"_____no_output_____"
],
[
"folders = glob.glob(\"FortranOutputTest/*/\")\nfolders\n",
"_____no_output_____"
],
[
"all_data= dict() \nfor folder in folders:\n params = ac.get_gridparameters(folder + 'capsol.in')\n data = ac.np.loadtxt(folder + 'Z-U.dat')\n process_data = ac.process_data(params, data, smoothing=False, std=5*10**-9)\n all_data[folder]= (process_data)\nall_params= dict()\nfor folder in folders:\n params=ac.get_gridparameters(folder + 'capsol.in')\n all_params[folder]= (params)",
"_____no_output_____"
],
[
"all_data",
"_____no_output_____"
],
[
"all_data.keys()",
"_____no_output_____"
],
[
"for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 1.0}:\n data=all_data[key]\n thickness =all_params[key]['Thickness_sample']\n rtip= all_params[key]['Rtip']\n er=all_params[key]['eps_r']\n plt.plot(data['z'], data['c'], label= f'{rtip} nm, {er}, {thickness} nm')\n \n \nplt.title('C v. Z for 1nm thick sample') \nplt.ylabel(\"C(m)\")\nplt.xlabel(\"Z(m)\")\nplt.legend()\nplt.savefig(\"C' v. Z for 1nm thick sample 06-28-2021.png\")",
"No handles with labels found to put in legend.\n"
],
[
"for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 10.0}:\n data=all_data[key]\n thickness =all_params[key]['Thickness_sample']\n rtip= all_params[key]['Rtip']\n er=all_params[key]['eps_r']\n plt.plot(data['z'], data['c'], label= f'{rtip} nm, {er}, {thickness} nm')\n \n \nplt.title('C v. Z for 10nm thick sample') \nplt.ylabel(\"C(m)\")\nplt.xlabel(\"Z(m)\")\nplt.legend()\nplt.savefig(\"C' v. Z for varying sample thickness, 06-28-2021.png\")",
"_____no_output_____"
],
[
"for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 100.0}:\n data=all_data[key]\n thickness =all_params[key]['Thickness_sample']\n rtip= all_params[key]['Rtip']\n er=all_params[key]['eps_r']\n plt.plot(data['z'], data['c'], label= f'{rtip} nm, {er}, {thickness} nm')\n \n \nplt.title('C v. Z for 100nm sample') \nplt.ylabel(\"C(m)\")\nplt.xlabel(\"Z(m)\")\nplt.legend()\nplt.savefig(\"C' v. Z for varying sample thickness, 06-28-2021.png\")",
"No handles with labels found to put in legend.\n"
],
[
"for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 500.0}:\n data=all_data[key]\n thickness =all_params[key]['Thickness_sample']\n rtip= all_params[key]['Rtip']\n er=all_params[key]['eps_r']\n plt.plot(data['z'], data['c'], label= f'{rtip} nm, {er}, {thickness} nm')\n \n \nplt.title('C v. Z for 500nm sample') \nplt.ylabel(\"C(m)\")\nplt.xlabel(\"Z(m)\")\nplt.legend()\nplt.savefig(\"C' v. Z for varying sample thickness, 06-28-2021.png\")",
"No handles with labels found to put in legend.\n"
]
],
[
[
"cut off last experiment because capacitance was off the scale",
"_____no_output_____"
]
],
[
[
"for params in all_params.values():\n print(params['Thickness_sample'])\n print(params['m-'])",
"10.0\n20\n"
],
[
"all_params",
"_____no_output_____"
],
[
"for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 1.0}:\n data=all_data[key]\n thickness=all_params[key]['Thickness_sample']\n rtip= all_params[key]['Rtip']\n er=all_params[key]['eps_r']\n s=slice(4,-3)\n plt.plot(data['z'][s], data['cz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )\n \nplt.title('Cz vs. Z for 1.0nm') \nplt.ylabel(\"Cz\")\nplt.xlabel(\"Z(m)\")\nplt.legend()\nplt.savefig(\"Cz v. Z for varying sample thickness, 06-28-2021.png\")",
"No handles with labels found to put in legend.\n"
],
[
"for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 10.0}:\n data=all_data[key]\n thickness=all_params[key]['Thickness_sample']\n rtip= all_params[key]['Rtip']\n er=all_params[key]['eps_r']\n s=slice(4,-3)\n plt.plot(data['z'][s], data['cz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )\n \nplt.title('Cz vs. Z for 10.0nm') \nplt.ylabel(\"Cz\")\nplt.xlabel(\"Z(m)\")\nplt.legend()\nplt.savefig(\"Cz v. Z for varying sample thickness, 06-28-2021.png\")",
"_____no_output_____"
],
[
"for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 100.0}:\n data=all_data[key]\n thickness=all_params[key]['Thickness_sample']\n rtip= all_params[key]['Rtip']\n er=all_params[key]['eps_r']\n s=slice(4,-3)\n plt.plot(data['z'][s], data['cz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )\n \nplt.title('Cz vs. Z for 100.0nm') \nplt.ylabel(\"Cz\")\nplt.xlabel(\"Z(m)\")\nplt.legend()\nplt.savefig(\"Cz v. Z for varying sample thickness, 06-28-2021.png\")",
"No handles with labels found to put in legend.\n"
],
[
"for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 500.0}:\n data=all_data[key]\n thickness=all_params[key]['Thickness_sample']\n rtip= all_params[key]['Rtip']\n er=all_params[key]['eps_r']\n s=slice(4,-3)\n plt.plot(data['z'][s], data['cz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )\n \nplt.title('Cz vs. Z for 500.0nm') \nplt.ylabel(\"Cz\")\nplt.xlabel(\"Z(m)\")\nplt.legend()\nplt.savefig(\"Cz v. Z for varying sample thickness, 06-28-2021.png\")",
"No handles with labels found to put in legend.\n"
],
[
"hoepker_data= np.loadtxt(\"Default Dataset (2).csv\" , delimiter= \",\")\nhoepker_data",
"_____no_output_____"
],
[
"for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 1.0}:\n data=all_data[key]\n thickness=all_params[key]['Thickness_sample']\n rtip= all_params[key]['Rtip']\n er=all_params[key]['eps_r']\n s=slice(5,-5)\n plt.plot(data['z'][s], data['czz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )\n \nplt.title('Czz vs. Z for 1.0nm') \nplt.ylabel(\"Czz\")\nplt.xlabel(\"Z(m)\")\nplt.legend()\nplt.savefig(\"Czz v. Z for varying sample thickness, 06-28-2021.png\")",
"_____no_output_____"
],
[
"params",
"_____no_output_____"
],
[
"for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 10.0}:\n data=all_data[key]\n thickness=all_params[key]['Thickness_sample']\n rtip= all_params[key]['Rtip']\n er=all_params[key]['eps_r']\n s=slice(5,-5)\n plt.plot(data['z'][s], data['czz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )\n \nplt.title('Czz vs. Z for 10.0nm') \nplt.ylabel(\"Czz\")\nplt.xlabel(\"Z(m)\")\nplt.legend()\nplt.savefig(\"Czz v. Z for varying sample thickness, 06-28-2021.png\")",
"_____no_output_____"
],
[
"for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 100.0}:\n data=all_data[key]\n thickness=all_params[key]['Thickness_sample']\n rtip= all_params[key]['Rtip']\n er=all_params[key]['eps_r']\n s=slice(5,-5)\n plt.plot(data['z'][s], data['czz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )\n \nplt.title('Czz vs. Z for 100.0nm') \nplt.ylabel(\"Czz\")\nplt.xlabel(\"Z(m)\")\nplt.legend()\nplt.savefig(\"Czz v. Z for varying sample thickness, 06-28-2021.png\")",
"_____no_output_____"
],
[
"for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 500.0}:\n data=all_data[key]\n thickness=all_params[key]['Thickness_sample']\n rtip= all_params[key]['Rtip']\n er=all_params[key]['eps_r']\n s=slice(5,-5)\n plt.plot(data['z'][s], data['czz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )\n \nplt.title('Czz vs. Z for 500.0 nm') \nplt.ylabel(\"Czz\")\nplt.xlabel(\"Z(m)\")\nplt.legend()\nplt.savefig(\"Czz v. Z for varying sample thickness, 06-28-2021.png\")",
"_____no_output_____"
],
[
"for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 1.0}:\n data=all_data[key]\n thickness=all_params[key]['Thickness_sample']\n rtip= all_params[key]['Rtip']\n er=all_params[key]['eps_r']\n s=slice(8,-8)\n plt.plot(data['z'][s], data['alpha'][s], label=f'{rtip} nm, {er}, {thickness} nm' )\n \nplt.title('alpha vs. Z for 1.0nm') \nplt.ylabel(\"$\\\\alpha$\")\nplt.xlabel(\"Z(m)\")\nplt.legend()\nplt.savefig(\"Alpha v. Z for varying sample thickness, 06-28-2021.png\")",
"_____no_output_____"
],
[
"for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 10.0}:\n data=all_data[key]\n thickness=all_params[key]['Thickness_sample']\n rtip= all_params[key]['Rtip']\n er=all_params[key]['eps_r']\n s=slice(8,-8)\n plt.plot(data['z'][s], data['alpha'][s], label=f'{rtip} nm, {er}, {thickness} nm' )\n \nplt.title('Alpha vs. Z for 10.0 nm') \nplt.ylabel(\"$\\\\alpha$\")\nplt.xlabel(\"Z(m)\")\nplt.legend()\nplt.savefig(\"Czz v. Z for varying sample thickness, 06-28-2021.png\")",
"_____no_output_____"
],
[
"for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 100.0}:\n data=all_data[key]\n thickness=all_params[key]['Thickness_sample']\n rtip= all_params[key]['Rtip']\n er=all_params[key]['eps_r']\n s=slice(8,-8)\n plt.plot(data['z'][s], data['alpha'][s], label=f'{rtip} nm, {er}, {thickness} nm' )\n \nplt.title('Alpha vs. Z for 100.0nm') \nplt.ylabel(\"$\\\\alpha$\")\nplt.xlabel(\"Z(m)\")\nplt.legend()\nplt.savefig(\"Czz v. Z for varying sample thickness, 06-28-2021.png\")",
"_____no_output_____"
],
[
"for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 500.0}:\n data=all_data[key]\n thickness=all_params[key]['Thickness_sample']\n rtip= all_params[key]['Rtip']\n er=all_params[key]['eps_r']\n s=slice(8,-8)\n plt.plot(data['z'][s], data['alpha'][s], label=f'{rtip} nm, {er}, {thickness} nm' )\n \nplt.title('Alpha vs. Z for 500.0nm')\nplt.ylabel(\"$\\\\alpha$\")\nplt.xlabel(\"Z(m)\")\nplt.legend()\nplt.savefig(\"Czz v. Z for varying sample thickness, 06-28-2021.png\")",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"from scipy.optimize import curve_fit\n",
"_____no_output_____"
],
[
"def Cz_model(z, a, n, b,):\n return(a*z**n + b)",
"_____no_output_____"
],
[
"all_data.keys()",
"_____no_output_____"
],
[
"data= all_data['capsol-calc\\\\0001-capsol\\\\']\nz= data['z'][1:-1]\ncz= data['cz'][1:-1]",
"_____no_output_____"
],
[
"popt, pcov= curve_fit(Cz_model, z, cz, p0=[cz[0]*z[0], -1, 0])\na=popt[0]\nn=popt[1]\nb=popt[2]\nstd_devs= np.sqrt(pcov.diagonal())\nsigma_a = std_devs[0]\nsigma_n = std_devs[1]\nmodel_output= Cz_model(z, a, n, b)\nrmse= np.sqrt(np.mean((cz - model_output)**2))\n",
"_____no_output_____"
],
[
"f\"a= {a} ± {sigma_a}\"",
"_____no_output_____"
],
[
"f\"n= {n}± {sigma_n}\"",
"_____no_output_____"
],
[
"model_output",
"_____no_output_____"
],
[
"\"Root Mean Square Error\"",
"_____no_output_____"
],
[
"rmse/np.mean(-cz)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d041b7fb3ed02f3e842bf0efb1af02d82b87659d | 122,178 | ipynb | Jupyter Notebook | notebooks/TD Learning Black Scholes.ipynb | FinTechies/HedgingRL | 28297e3e4edc3c4c1a26eb58e49c470c59a6697d | [
"MIT"
] | 21 | 2017-04-22T03:46:03.000Z | 2022-03-05T13:41:51.000Z | notebooks/TD Learning Black Scholes.ipynb | FinTechies/HedgingRL | 28297e3e4edc3c4c1a26eb58e49c470c59a6697d | [
"MIT"
] | null | null | null | notebooks/TD Learning Black Scholes.ipynb | FinTechies/HedgingRL | 28297e3e4edc3c4c1a26eb58e49c470c59a6697d | [
"MIT"
] | 11 | 2019-02-20T14:09:49.000Z | 2022-02-23T02:52:37.000Z | 82.330189 | 29,672 | 0.746247 | [
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"import pandas as pd",
"_____no_output_____"
],
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"_____no_output_____"
],
[
"r = np.random.randn((1000))\nS0 = 1\nS = np.cumsum(r) + S0",
"_____no_output_____"
],
[
"T = 2\nmu = 0.\nsigma = 0.01\nS0 = 20\ndt = 0.01\nN = round(T/dt)\nt = np.linspace(0, T, N)\nW = np.random.standard_normal(size = N) \nW = np.cumsum(W)*np.sqrt(dt) ### standard brownian motion ###\nX = (mu-0.5*sigma**2)*t + sigma*W \nS = S0*np.exp(X) ### geometric brownian motion ###\nplt.plot(t, S)\nplt.show()",
"_____no_output_____"
],
[
"from ipywidgets import interact, interactive, fixed, interact_manual\nimport ipywidgets as widgets",
"_____no_output_____"
],
[
"from blackscholes import geometric_brownian_motion, blackScholes\nfrom scipy.stats import norm",
"_____no_output_____"
],
[
"geometric_brownian_motion(mu=0., sigma=0.01, s0=1, dt=0.01);",
"_____no_output_____"
],
[
"t = 2.\ndt = 0.01\nN = int(round(t / dt))\nnp.linspace(0, t, N)\ntt = np.linspace(0, t, N)\nW = norm((N))",
"_____no_output_____"
],
[
"@interact(mu=(-0.02, 0.05, 0.01), sigma=(0.01, 0.1, 0.005), S0=(1,100,10), dt=(0.001, 0.1, 0.001))\ndef plot_gbm(mu, sigma, S0, dt):\n s, t = geometric_brownian_motion(mu=mu, sigma=sigma, t=2, dt=dt, s0=S0)\n pd.Series(t, s).plot()\n plt.show()",
"_____no_output_____"
],
[
"df.ix[0.1:,:].gamma.plot()",
"_____no_output_____"
],
[
"tau = np.clip( np.linspace(1.0, .0, 101), 0.0000001, 100)\nS = 1.\nK = 1.\nsigma = 1\ndf = pd.DataFrame.from_dict(blackScholes(tau, S, K, sigma))\ndf.index = tau",
"_____no_output_____"
    ]
],
[
[
"## Q-learning",
"_____no_output_____"
],
[
"- Initialize $V(s)$ arbitrarily\n- Repeat for each episode\n- Initialize s\n- Repeat (for each step of episode)\n- - $\\alpha \\leftarrow$ action given by $\\pi$ for $s$\n- - Take action a, observe reward r, and next state s'\n- - $V(s) \\leftarrow V(s) + \\alpha [r = \\gamma V(s') - V(s)]$ \n- - $s \\leftarrow s'$\n- until $s$ is terminal",
"_____no_output_____"
]
],
[
[
"import td",
"_____no_output_____"
],
[
"import scipy as sp",
"_____no_output_____"
],
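    [
        "# A minimal tabular TD(0) sketch of the update rule above (an illustration only, not the td.TD API,\n# whose internals aren't shown in this notebook); the state chain, policy and rewards are placeholders.\nalpha, gamma = 0.05, 0.1\nV = np.zeros(10)                       # value table for 10 discrete states\nfor episode in range(100):\n    s = 0\n    while s < 9:                       # state 9 is terminal\n        s_next = s + 1                 # placeholder policy: always step right\n        r = 0.01 * np.random.randn()   # placeholder reward\n        V[s] = V[s] + alpha * (r + gamma * V[s_next] - V[s])\n        s = s_next",
        "_____no_output_____"
    ],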
[
"α = 0.05\nγ = 0.1\ntd_learning = td.TD(α, γ)",
"_____no_output_____"
]
],
[
[
"## Black Scholes",
"_____no_output_____"
],
[
"$${\\displaystyle d_{1}={\\frac {1}{\\sigma {\\sqrt {T-t}}}}\\left[\\ln \\left({\\frac {S_{t}}{K}}\\right)+(r-q+{\\frac {1}{2}}\\sigma ^{2})(T-t)\\right]}$$",
"_____no_output_____"
],
[
"$${\\displaystyle C(S_{t},t)=e^{-r(T-t)}[FN(d_{1})-KN(d_{2})]\\,}$$",
"_____no_output_____"
],
[
"$${\\displaystyle d_{2}=d_{1}-\\sigma {\\sqrt {T-t}}={\\frac {1}{\\sigma {\\sqrt {T-t}}}}\\left[\\ln \\left({\\frac {S_{t}}{K}}\\right)+(r-q-{\\frac {1}{2}}\\sigma ^{2})(T-t)\\right]}$$",
"_____no_output_____"
]
],
[
[
"d_1 = lambda σ, T, t, S, K: 1. / σ / np.sqrt(T - t) * (np.log(S / K) + 0.5 * (σ ** 2) * (T-t))\nd_2 = lambda σ, T, t, S, K: 1. / σ / np.sqrt(T - t) * (np.log(S / K) - 0.5 * (σ ** 2) * (T-t))\n\ncall = lambda σ, T, t, S, K: S * sp.stats.norm.cdf( d_1(σ, T, t, S, K) ) - K * sp.stats.norm.cdf( d_2(σ, T, t, S, K) )",
"_____no_output_____"
],
[
"plt.plot(np.linspace(0.1, 4., 100), call(1., 1., .9, np.linspace(0.1, 4., 100), 1.))",
"_____no_output_____"
],
[
"d_1(1., 1., 0., 1.9, 1)",
"_____no_output_____"
],
[
"plt.plot(d_1(1., 1., 0., np.linspace(0.1, 2.9, 10), 1))",
"_____no_output_____"
],
[
"plt.plot(np.linspace(0.01, 1.9, 100), sp.stats.norm.cdf(d_1(1., 1., 0.2, np.linspace(0.01, 1.9, 100), 1)))\nplt.plot(np.linspace(0.01, 1.9, 100), sp.stats.norm.cdf(d_1(1., 1., 0.6, np.linspace(0.01, 1.9, 100), 1)))\nplt.plot(np.linspace(0.01, 1.9, 100), sp.stats.norm.cdf(d_1(1., 1., 0.9, np.linspace(0.01, 1.9, 100), 1)))\nplt.plot(np.linspace(0.01, 1.9, 100), sp.stats.norm.cdf(d_1(1., 1., 0.99, np.linspace(0.01, 1.9, 100), 1)))",
"_____no_output_____"
],
[
"def iterate_series(n=1000, S0 = 1):\n while True:\n r = np.random.randn((n))\n S = np.cumsum(r) + S0\n yield S, r",
"_____no_output_____"
],
[
"def iterate_world(n=1000, S0=1, N=5):\n for (s, r) in take(N, iterate_series(n=n, S0=S0)):\n t, t_0 = 0, 0\n for t in np.linspace(0, len(s)-1, 100):\n r = s[int(t)] / s[int(t_0)]\n yield r, s[int(t)]\n t_0 = t",
"_____no_output_____"
],
[
"from cytoolz import take",
"_____no_output_____"
],
[
"import gym\nimport gym_bs",
"_____no_output_____"
],
[
"from test_cem_future import *",
"[2017-05-10 23:26:52,381] Making new env: fbs-v1\n"
],
[
"import pandas as pd\nimport numpy as np",
"_____no_output_____"
],
[
"# df.iloc[3] = (0.2, 1, 3)\ndf",
"_____no_output_____"
],
[
"rwd, df, agent = noisy_evaluation(np.array([0.1, 0, 0]))\nrwd\ndf\nagent;",
"_____no_output_____"
],
[
"env.observation_space",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d041c1dd6da23c2d01447360f6f24aa519572354 | 6,553 | ipynb | Jupyter Notebook | 001-Jupyter/001-Tutorials/003-IPython-in-Depth/examples/IPython Kernel/Plotting in the Notebook.ipynb | willirath/jupyter-jsc-notebooks | e64aa9c6217543c4ffb5535e7a478b2c9457629a | [
"BSD-3-Clause"
] | null | null | null | 001-Jupyter/001-Tutorials/003-IPython-in-Depth/examples/IPython Kernel/Plotting in the Notebook.ipynb | willirath/jupyter-jsc-notebooks | e64aa9c6217543c4ffb5535e7a478b2c9457629a | [
"BSD-3-Clause"
] | null | null | null | 001-Jupyter/001-Tutorials/003-IPython-in-Depth/examples/IPython Kernel/Plotting in the Notebook.ipynb | willirath/jupyter-jsc-notebooks | e64aa9c6217543c4ffb5535e7a478b2c9457629a | [
"BSD-3-Clause"
] | 1 | 2022-01-13T18:49:12.000Z | 2022-01-13T18:49:12.000Z | 25.597656 | 387 | 0.558065 | [
[
[
"# Plotting with Matplotlib",
"_____no_output_____"
],
[
"IPython works with the [Matplotlib](http://matplotlib.org/) plotting library, which integrates Matplotlib with IPython's display system and event loop handling.",
"_____no_output_____"
],
[
"## matplotlib mode",
"_____no_output_____"
],
[
"To make plots using Matplotlib, you must first enable IPython's matplotlib mode.\n\nTo do this, run the `%matplotlib` magic command to enable plotting in the current Notebook.\n\nThis magic takes an optional argument that specifies which Matplotlib backend should be used. Most of the time, in the Notebook, you will want to use the `inline` backend, which will embed plots inside the Notebook:",
"_____no_output_____"
]
],
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"You can also use Matplotlib GUI backends in the Notebook, such as the Qt backend (`%matplotlib qt`). This will use Matplotlib's interactive Qt UI in a floating window to the side of your browser. Of course, this only works if your browser is running on the same system as the Notebook Server. You can always call the `display` function to paste figures into the Notebook document.",
"_____no_output_____"
],
[
"## Making a simple plot",
"_____no_output_____"
],
[
"With matplotlib enabled, plotting should just work.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
],
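    [
        "# The `display` function mentioned above pastes a figure into the notebook output.\n# A small sketch (it works with the inline backend as well as GUI backends):\nfrom IPython.display import display\nfig, ax = plt.subplots()\nax.plot([0, 1, 2], [0, 1, 4])\ndisplay(fig)",
        "_____no_output_____"
    ],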
[
"x = np.linspace(0, 3*np.pi, 500)\nplt.plot(x, np.sin(x**2))\nplt.title('A simple chirp');",
"_____no_output_____"
]
],
[
[
"These images can be resized by dragging the handle in the lower right corner. Double clicking will return them to their original size.",
"_____no_output_____"
],
[
"One thing to be aware of is that by default, the `Figure` object is cleared at the end of each cell, so you will need to issue all plotting commands for a single figure in a single cell.",
"_____no_output_____"
],
[
"## Loading Matplotlib demos with %load",
"_____no_output_____"
],
[
"IPython's `%load` magic can be used to load any Matplotlib demo by its URL:",
"_____no_output_____"
]
],
[
[
"# %load http://matplotlib.org/mpl_examples/showcase/integral_demo.py\n\"\"\"\nPlot demonstrating the integral as the area under a curve.\n\nAlthough this is a simple example, it demonstrates some important tweaks:\n\n * A simple line plot with custom color and line width.\n * A shaded region created using a Polygon patch.\n * A text label with mathtext rendering.\n * figtext calls to label the x- and y-axes.\n * Use of axis spines to hide the top and right spines.\n * Custom tick placement and labels.\n\"\"\"\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import Polygon\n\n\ndef func(x):\n return (x - 3) * (x - 5) * (x - 7) + 85\n\n\na, b = 2, 9 # integral limits\nx = np.linspace(0, 10)\ny = func(x)\n\nfig, ax = plt.subplots()\nplt.plot(x, y, 'r', linewidth=2)\nplt.ylim(bottom=0)\n\n# Make the shaded region\nix = np.linspace(a, b)\niy = func(ix)\nverts = [(a, 0)] + list(zip(ix, iy)) + [(b, 0)]\npoly = Polygon(verts, facecolor='0.9', edgecolor='0.5')\nax.add_patch(poly)\n\nplt.text(0.5 * (a + b), 30, r\"$\\int_a^b f(x)\\mathrm{d}x$\",\n horizontalalignment='center', fontsize=20)\n\nplt.figtext(0.9, 0.05, '$x$')\nplt.figtext(0.1, 0.9, '$y$')\n\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.xaxis.set_ticks_position('bottom')\n\nax.set_xticks((a, b))\nax.set_xticklabels(('$a$', '$b$'))\nax.set_yticks([])\n\nplt.show()\n",
"_____no_output_____"
]
],
[
[
"Matplotlib 1.4 introduces an interactive backend for use in the notebook,\ncalled 'nbagg'. You can enable this with `%matplotlib notebook`.\n\nWith this backend, you will get interactive panning and zooming of matplotlib figures in your browser.",
"_____no_output_____"
]
],
[
[
"%matplotlib widget",
"_____no_output_____"
],
[
"plt.figure()\nx = np.linspace(0, 5 * np.pi, 1000)\nfor n in range(1, 4):\n plt.plot(np.sin(n * x))\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d041e7a8c339d3343681ad7713552cd8684e0cd9 | 101,928 | ipynb | Jupyter Notebook | day2/nn_qso_finder.ipynb | mjvakili/MLcourse | c5748b3616629eed235ee113f3f00e2f45f7eaad | [
"MIT"
] | 2 | 2019-12-02T15:22:08.000Z | 2020-09-20T04:32:22.000Z | day2/nn_qso_finder.ipynb | lauramog/MLcourse | c5748b3616629eed235ee113f3f00e2f45f7eaad | [
"MIT"
] | null | null | null | day2/nn_qso_finder.ipynb | lauramog/MLcourse | c5748b3616629eed235ee113f3f00e2f45f7eaad | [
"MIT"
] | 5 | 2019-12-03T02:37:46.000Z | 2020-09-20T04:32:24.000Z | 205.5 | 40,006 | 0.869947 | [
[
[
"<a href=\"https://colab.research.google.com/github/mjvakili/MLcourse/blob/master/day2/nn_qso_finder.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"Let's start by importing the libraries that we need for this exercise.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\nimport matplotlib\nfrom sklearn.model_selection import train_test_split\n#matplotlib settings\nmatplotlib.rcParams['xtick.major.size'] = 7\nmatplotlib.rcParams['xtick.labelsize'] = 'x-large'\nmatplotlib.rcParams['ytick.major.size'] = 7\nmatplotlib.rcParams['ytick.labelsize'] = 'x-large'\nmatplotlib.rcParams['xtick.top'] = False\nmatplotlib.rcParams['ytick.right'] = False\nmatplotlib.rcParams['ytick.direction'] = 'in'\nmatplotlib.rcParams['xtick.direction'] = 'in'\nmatplotlib.rcParams['font.size'] = 15\nmatplotlib.rcParams['figure.figsize'] = [7,7]",
"_____no_output_____"
],
[
"#We need the astroml library to fetch the photometric datasets of sdss qsos and stars\npip install astroml",
"Requirement already satisfied: astroml in /usr/local/lib/python3.6/dist-packages (0.4.1)\nRequirement already satisfied: scikit-learn>=0.18 in /usr/local/lib/python3.6/dist-packages (from astroml) (0.21.3)\nRequirement already satisfied: numpy>=1.4 in /usr/local/lib/python3.6/dist-packages (from astroml) (1.17.4)\nRequirement already satisfied: astropy>=1.2 in /usr/local/lib/python3.6/dist-packages (from astroml) (3.0.5)\nRequirement already satisfied: scipy>=0.11 in /usr/local/lib/python3.6/dist-packages (from astroml) (1.3.2)\nRequirement already satisfied: matplotlib>=0.99 in /usr/local/lib/python3.6/dist-packages (from astroml) (3.1.1)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.18->astroml) (0.14.0)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=0.99->astroml) (0.10.0)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=0.99->astroml) (2.6.1)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=0.99->astroml) (2.4.5)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=0.99->astroml) (1.1.0)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from cycler>=0.10->matplotlib>=0.99->astroml) (1.12.0)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib>=0.99->astroml) (41.6.0)\n"
],
[
"from astroML.datasets import fetch_dr7_quasar\nfrom astroML.datasets import fetch_sdss_sspp\n\nquasars = fetch_dr7_quasar()\nstars = fetch_sdss_sspp()",
"_____no_output_____"
],
[
"# Data procesing taken from \n#https://www.astroml.org/book_figures/chapter9/fig_star_quasar_ROC.html by Jake Van der Plus\n\n# stack colors into matrix X\nNqso = len(quasars)\nNstars = len(stars)\nX = np.empty((Nqso + Nstars, 4), dtype=float)\n\nX[:Nqso, 0] = quasars['mag_u'] - quasars['mag_g']\nX[:Nqso, 1] = quasars['mag_g'] - quasars['mag_r']\nX[:Nqso, 2] = quasars['mag_r'] - quasars['mag_i']\nX[:Nqso, 3] = quasars['mag_i'] - quasars['mag_z']\n\nX[Nqso:, 0] = stars['upsf'] - stars['gpsf']\nX[Nqso:, 1] = stars['gpsf'] - stars['rpsf']\nX[Nqso:, 2] = stars['rpsf'] - stars['ipsf']\nX[Nqso:, 3] = stars['ipsf'] - stars['zpsf']\n\ny = np.zeros(Nqso + Nstars, dtype=int)\ny[:Nqso] = 1",
"_____no_output_____"
],
[
"X = X/np.max(X, axis=0)",
"_____no_output_____"
],
[
"# split into training and test sets\nX_train, X_test, y_train, y_test = train_test_split(X, y, train_size = 0.9)",
"_____no_output_____"
],
[
"#Now let's build a simple Sequential model in which fully connected layers come after one another\nmodel = tf.keras.models.Sequential([\n tf.keras.layers.Flatten(), #this flattens input\n tf.keras.layers.Dense(128, activation = \"relu\"),\n tf.keras.layers.Dense(64, activation = \"relu\"),\n tf.keras.layers.Dense(32, activation = \"relu\"),\n tf.keras.layers.Dense(32, activation = \"relu\"),\n tf.keras.layers.Dense(1, activation=\"sigmoid\")\n])\nmodel.compile(optimizer='adam', loss='binary_crossentropy')\nhistory = model.fit(X_train, y_train, validation_data = (X_test, y_test), batch_size = 32, epochs=20, verbose = 1)\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\nInstructions for updating:\nIf using Keras pass *_constraint arguments to layers.\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/nn_impl.py:183: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.where in 2.0, which has the same broadcast rule as np.where\nTrain on 389738 samples, validate on 43305 samples\nEpoch 1/20\n389738/389738 [==============================] - 24s 61us/sample - loss: 0.0495 - val_loss: 0.0281\nEpoch 2/20\n389738/389738 [==============================] - 23s 59us/sample - loss: 0.0311 - val_loss: 0.0275\nEpoch 3/20\n389738/389738 [==============================] - 23s 59us/sample - loss: 0.0290 - val_loss: 0.0336\nEpoch 4/20\n389738/389738 [==============================] - 23s 58us/sample - loss: 0.0278 - val_loss: 0.0261\nEpoch 5/20\n389738/389738 [==============================] - 23s 58us/sample - loss: 0.0269 - val_loss: 0.0238\nEpoch 6/20\n389738/389738 [==============================] - 25s 64us/sample - loss: 0.0263 - val_loss: 0.0231\nEpoch 7/20\n389738/389738 [==============================] - 23s 58us/sample - loss: 0.0259 - val_loss: 0.0250\nEpoch 8/20\n389738/389738 [==============================] - 23s 58us/sample - loss: 0.0257 - val_loss: 0.0230\nEpoch 9/20\n389738/389738 [==============================] - 23s 59us/sample - loss: 0.0254 - val_loss: 0.0224\nEpoch 10/20\n389738/389738 [==============================] - 23s 60us/sample - loss: 0.0251 - val_loss: 0.0284\nEpoch 11/20\n389738/389738 [==============================] - 23s 59us/sample - loss: 0.0248 - val_loss: 0.0223\nEpoch 12/20\n389738/389738 [==============================] - 23s 58us/sample - loss: 0.0245 - val_loss: 0.0234\nEpoch 13/20\n389738/389738 [==============================] - 23s 59us/sample - loss: 0.0245 - val_loss: 0.0228\nEpoch 14/20\n389738/389738 [==============================] - 23s 59us/sample - loss: 0.0243 - val_loss: 0.0269\nEpoch 15/20\n389738/389738 [==============================] - 23s 58us/sample - loss: 0.0243 - val_loss: 0.0227\nEpoch 16/20\n389738/389738 [==============================] - 23s 58us/sample - loss: 0.0242 - val_loss: 0.0230\nEpoch 17/20\n389738/389738 [==============================] - 23s 58us/sample - loss: 0.0239 - val_loss: 0.0248\nEpoch 18/20\n389738/389738 [==============================] - 23s 58us/sample - loss: 0.0239 - val_loss: 0.0221\nEpoch 19/20\n389738/389738 [==============================] - 25s 63us/sample - loss: 0.0239 - val_loss: 0.0217\nEpoch 20/20\n389738/389738 [==============================] - 23s 60us/sample - loss: 0.0236 - val_loss: 0.0235\n"
],
[
"loss = history.history['loss']\nval_loss = history.history['val_loss']\nepochs = range(len(loss))\nplt.plot(epochs, loss, lw = 5, label='Training loss')\nplt.plot(epochs, val_loss, lw = 5, label='validation loss')\n\nplt.title('Loss')\nplt.legend(loc=0)\nplt.show()",
"_____no_output_____"
],
[
"prob = model.predict_proba(X_test) #model probabilities",
"_____no_output_____"
],
[
"from sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import roc_curve",
"_____no_output_____"
],
[
"fpr, tpr, thresholds = roc_curve(y_test, prob)",
"_____no_output_____"
],
[
"plt.loglog(fpr, tpr, lw = 4)\nplt.xlabel('false positive rate')\nplt.ylabel('true positive rate')\nplt.xlim(0.0, 0.15)\nplt.ylim(0.6, 1.01)\nplt.show()\n\nplt.plot(thresholds, tpr, lw = 4)\nplt.plot(thresholds, fpr, lw = 4)\nplt.xlim(0,1)\nplt.yscale(\"log\")\nplt.show()\n#plt.xlabel('false positive rate')\n#plt.ylabel('true positive rate')\n##plt.xlim(0.0, 0.15)\n#plt.ylim(0.6, 1.01)",
"/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:4: UserWarning: Attempted to set non-positive left xlim on a log-scaled axis.\nInvalid limit will be ignored.\n after removing the cwd from sys.path.\n"
],
[
"#Now let's look at the confusion matrix\ny_pred = model.predict(X_test)\nz_pred = np.zeros(y_pred.shape[0], dtype = int)\nmask = np.where(y_pred>.5)[0]\nz_pred[mask] = 1\nconfusion_matrix(y_test, z_pred.astype(int))",
"_____no_output_____"
],
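    [
        "# A minimal starting sketch for class weighting (relevant to Exercise 3 below): QSOs are rarer than\n# stars, so up-weight them in the loss. The 1 : n_neg/n_pos ratio is an assumption, not a tuned value.\nn_pos = y_train.sum()\nn_neg = len(y_train) - n_pos\nclass_weight = {0: 1.0, 1: n_neg / n_pos}\n# model.fit(X_train, y_train, validation_data=(X_test, y_test),\n#           batch_size=32, epochs=20, class_weight=class_weight)",
        "_____no_output_____"
    ],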
[
"import os, signal\nos.kill(os.getpid(), signal.SIGKILL)",
"_____no_output_____"
]
],
[
[
"#Exercise1:\n\nTry to change the number of layers, batchsize, as well as the default learning rate, one at a time. See which one can make a more significant impact on the performance of the model.\n\n#Exercise 2:\nWrite a simple function for visualizing the predicted decision boundaries in the feature space. Try to identify the regions of the parameter space which contribute significantly to the false positive rates.\n\n#Exercise 3:\nThis dataset is a bit imbalanced in that the QSOs are outnumbered by the stars. Can you think of a wighting scheme to pass to the loss function, such that the detection rate of QSOs increases?",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d041ef9b73b3e7f2dddd9b00a8d4bc8b150051fc | 1,906 | ipynb | Jupyter Notebook | PandasQureys.ipynb | nealonleo9/SQL | 8f369ea9911afc651dba62eb7da4cdd8a2f11b10 | [
"MIT"
] | null | null | null | PandasQureys.ipynb | nealonleo9/SQL | 8f369ea9911afc651dba62eb7da4cdd8a2f11b10 | [
"MIT"
] | null | null | null | PandasQureys.ipynb | nealonleo9/SQL | 8f369ea9911afc651dba62eb7da4cdd8a2f11b10 | [
"MIT"
] | null | null | null | 24.126582 | 223 | 0.483736 | [
[
[
"<a href=\"https://colab.research.google.com/github/nealonleo9/SQL/blob/main/PandasQureys.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"df1.query('age == 10')",
"_____no_output_____"
],
[
"You can also achieve this result via the traditional filtering method.\r\n\r\nfilter_1 = df['Mon'] > df['Tues']\r\ndf[filter_1]",
"_____no_output_____"
],
[
"If needed you can also use an environment variable to filter your data.\r\n Make sure to put an \"@\" sign in front of your variable within the string.\r\n\r\ndinner_limit=120\r\ndf.query('Thurs > @dinner_limit')",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0420237f434fa5b1a5aecdcc25fab097008f791 | 23,141 | ipynb | Jupyter Notebook | udacity-pytorch-final-lab-guide-part-1.ipynb | styluna7/notebooks | f0973ac4e067d62a989584810dce9088811bfad5 | [
"MIT"
] | null | null | null | udacity-pytorch-final-lab-guide-part-1.ipynb | styluna7/notebooks | f0973ac4e067d62a989584810dce9088811bfad5 | [
"MIT"
] | null | null | null | udacity-pytorch-final-lab-guide-part-1.ipynb | styluna7/notebooks | f0973ac4e067d62a989584810dce9088811bfad5 | [
"MIT"
] | null | null | null | 23,141 | 23,141 | 0.720194 | [
[
[
"# Udacity PyTorch Scholarship Final Lab Challenge Guide \n**A hands-on guide to get 90% + accuracy and complete the challenge**",
"_____no_output_____"
],
[
"**By [Soumya Ranjan Behera](https://www.linkedin.com/in/soumya044)**",
"_____no_output_____"
],
[
"## This Tutorial will be divided into Two Parts, \n### [1. Model Building and Training](https://www.kaggle.com/soumya044/udacity-pytorch-final-lab-guide-part-1/)\n### [2. Submit in Udcaity's Workspace for evaluation](https://www.kaggle.com/soumya044/udacity-pytorch-final-lab-guide-part-2/)",
"_____no_output_____"
],
[
"**Note:** This tutorial is like a template or guide for newbies to overcome the fear of the final lab challenge. My intent is not to promote plagiarism or any means of cheating. Users are encourage to take this tutorial as a baseline and build their own better model. Cheers!",
"_____no_output_____"
],
[
"**Fork this Notebook and Run it from Top-To-Bottom Step by Step**",
"_____no_output_____"
],
[
"# Part 1: Build and Train a Model",
"_____no_output_____"
],
[
"**Credits:** The dataset credit goes to [Lalu Erfandi Maula Yusnu](https://www.kaggle.com/nunenuh)",
"_____no_output_____"
],
[
"## 1. Import Data set and visualiza some data",
"_____no_output_____"
]
],
[
[
"import numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\nimport os\nprint(os.listdir(\"../input/\"))\n\n# Any results you write to the current directory are saved as output.",
"_____no_output_____"
]
],
[
[
"**Import some visualization Libraries**",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n%matplotlib inline\nimport cv2",
"_____no_output_____"
],
[
"# Set Train and Test Directory Variables\nTRAIN_DATA_DIR = \"../input/flower_data/flower_data/train/\"\nVALID_DATA_DIR = \"../input/flower_data/flower_data/valid/\"",
"_____no_output_____"
],
[
"#Visualiza Some Images of any Random Directory-cum-Class\nFILE_DIR = str(np.random.randint(1,103))\nprint(\"Class Directory: \",FILE_DIR)\nfor file_name in os.listdir(os.path.join(TRAIN_DATA_DIR, FILE_DIR))[1:3]:\n img_array = cv2.imread(os.path.join(TRAIN_DATA_DIR, FILE_DIR, file_name))\n img_array = cv2.resize(img_array,(224, 224), interpolation = cv2.INTER_CUBIC)\n plt.imshow(img_array)\n plt.show()\n print(img_array.shape)",
"_____no_output_____"
]
],
[
[
"## 2. Data Preprocessing (Image Augmentation)",
"_____no_output_____"
],
[
"**Import PyTorch libraries**",
"_____no_output_____"
]
],
[
[
"import torch\nimport torchvision\nfrom torchvision import datasets, models, transforms\nimport torch.nn as nn\ntorch.__version__",
"_____no_output_____"
]
],
[
[
"**Note:** **Look carefully! Kaggle uses v1.0.0 while Udcaity workspace has v0.4.0 (Some issues may arise but we'll solve them)**",
"_____no_output_____"
]
],
[
[
"# check if CUDA is available\ntrain_on_gpu = torch.cuda.is_available()\n\nif not train_on_gpu:\n print('CUDA is not available. Training on CPU ...')\nelse:\n print('CUDA is available! Training on GPU ...')",
"_____no_output_____"
]
],
[
[
"**Make a Class Variable i.e a list of Target Categories (List of 102 species) **",
"_____no_output_____"
]
],
[
[
"# I used os.listdir() to maintain the ordering \nclasses = os.listdir(VALID_DATA_DIR)",
"_____no_output_____"
]
],
[
[
"**Load and Transform (Image Augmentation)** \nSoucre: https://github.com/udacity/deep-learning-v2-pytorch/blob/master/convolutional-neural-networks/cifar-cnn/cifar10_cnn_augmentation.ipynb",
"_____no_output_____"
]
],
[
[
"# Load and transform data using ImageFolder\n\n# VGG-16 Takes 224x224 images as input, so we resize all of them\ndata_transform = transforms.Compose([transforms.RandomResizedCrop(224),\n transforms.ToTensor(),\n transforms.Normalize(mean=[0.485, 0.456, 0.406],\n std=[0.229, 0.224, 0.225])])\n\ntrain_data = datasets.ImageFolder(TRAIN_DATA_DIR, transform=data_transform)\ntest_data = datasets.ImageFolder(VALID_DATA_DIR, transform=data_transform)\n\n# print out some data stats\nprint('Num training images: ', len(train_data))\nprint('Num test images: ', len(test_data))",
"_____no_output_____"
]
],
[
[
"### Find more on Image Transforms using PyTorch Here (https://pytorch.org/docs/stable/torchvision/transforms.html)",
"_____no_output_____"
],
[
"## 3. Make a DataLoader",
"_____no_output_____"
]
],
[
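    [
        "# Optional refinement (a sketch, not required by the challenge): validation images are usually\n# transformed deterministically so scores aren't affected by random crops. Same ImageNet stats as above.\nvalid_transform = transforms.Compose([transforms.Resize(256),\n                                      transforms.CenterCrop(224),\n                                      transforms.ToTensor(),\n                                      transforms.Normalize(mean=[0.485, 0.456, 0.406],\n                                                           std=[0.229, 0.224, 0.225])])\n# test_data = datasets.ImageFolder(VALID_DATA_DIR, transform=valid_transform)",
        "_____no_output_____"
    ],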
[
"# define dataloader parameters\nbatch_size = 32\nnum_workers=0\n\n# prepare data loaders\ntrain_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, \n num_workers=num_workers, shuffle=True)\ntest_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, \n num_workers=num_workers, shuffle=True)",
"_____no_output_____"
]
],
[
[
"**Visualize Sample Images**",
"_____no_output_____"
]
],
[
[
"# Visualize some sample data\n\n# obtain one batch of training images\ndataiter = iter(train_loader)\nimages, labels = dataiter.next()\nimages = images.numpy() # convert images to numpy for display\n\n# plot the images in the batch, along with the corresponding labels\nfig = plt.figure(figsize=(25, 4))\nfor idx in np.arange(20):\n ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])\n plt.imshow(np.transpose(images[idx], (1, 2, 0)))\n ax.set_title(classes[labels[idx]])",
"_____no_output_____"
]
],
[
[
"**Here plt.imshow() clips our data into [0,....,255] range to show the images. The Warning message is due to our Transform Function. We can Ignore it.**",
"_____no_output_____"
],
[
"## 4. Use a Pre-Trained Model (VGG16) \nHere we used a VGG16. You can experiment with other models. \nReferences: https://github.com/udacity/deep-learning-v2-pytorch/blob/master/transfer-learning/Transfer_Learning_Solution.ipynb",
"_____no_output_____"
],
[
"**Try More Models: ** https://pytorch.org/docs/stable/torchvision/models.html",
"_____no_output_____"
]
],
[
[
"# Load the pretrained model from pytorch\nmodel = models.<ModelNameHere>(pretrained=True) \nprint(model)",
"_____no_output_____"
]
],
[
[
"### We can see from above output that the last ,i.e, 6th Layer is a Fully-connected Layer with in_features=4096, out_features=1000",
"_____no_output_____"
]
],
[
[
"print(model.classifier[6].in_features) \nprint(model.classifier[6].out_features)\n# The above lines work for vgg only. For other models refer to print(model) and look for last FC layer",
"_____no_output_____"
]
],
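[
[
"# Sketch for other architectures (kept commented so this notebook sticks to VGG16):\n# in ResNet-style models the last fully-connected layer is model.fc rather than\n# model.classifier[6]. Assuming torchvision's resnet18:\n# resnet = models.resnet18(pretrained=True)\n# print(resnet.fc.in_features)   # 512\n# print(resnet.fc.out_features)  # 1000",
"_____no_output_____"
]
],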
[
[
"**Freeze Training for all 'Features Layers', Only Train Classifier Layers**",
"_____no_output_____"
]
],
[
[
"# Freeze training for all \"features\" layers\nfor param in model.features.parameters():\n param.requires_grad = False\n\n\n#For models like ResNet or Inception use the following,\n\n# Freeze training for all \"features\" layers\n# for _, param in model.named_parameters():\n# param.requires_grad = False",
"_____no_output_____"
]
],
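[
[
"# Quick check that freezing worked (sketch): after the loop above, only the\n# classifier parameters should still require gradients\nn_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)\nn_total = sum(p.numel() for p in model.parameters())\nprint('trainable parameters: {} / {}'.format(n_trainable, n_total))",
"_____no_output_____"
]
],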
[
[
"## Let's Add our own Last Layer which will have 102 out_features for 102 species",
"_____no_output_____"
]
],
[
[
"# VGG16 \nn_inputs = model.classifier[6].in_features\n\n#Others\n# n_inputs = model.fc.in_features\n\n# add last linear layer (n_inputs -> 102 flower classes)\n# new layers automatically have requires_grad = True\nlast_layer = nn.Linear(n_inputs, len(classes))\n\n# VGG16\nmodel.classifier[6] = last_layer\n\n# Others\n#model.fc = last_layer\n\n# if GPU is available, move the model to GPU\nif train_on_gpu:\n model.cuda()\n\n# check to see that your last layer produces the expected number of outputs\n\n#VGG\nprint(model.classifier[6].out_features)\n#Others\n#print(model.fc.out_features)",
"_____no_output_____"
]
],
[
[
"# 5. Specify our Loss Function and Optimzer",
"_____no_output_____"
]
],
[
[
"import torch.optim as optim\n\n# specify loss function (categorical cross-entropy)\ncriterion = #TODO\n\n# specify optimizer (stochastic gradient descent) and learning rate = 0.01 or 0.001\noptimizer = #TODO",
"_____no_output_____"
]
],
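[
[
"# Optional (sketch): the training loop below has a commented-out adjust_learning_rate()\n# call; PyTorch's built-in StepLR is a simple alternative, e.g. decay the LR 10x every\n# 7 epochs and call scheduler.step() once per epoch. Left commented to keep the run unchanged.\n# scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)",
"_____no_output_____"
]
],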
[
[
"# 6. Train our Model and Save necessary checkpoints",
"_____no_output_____"
]
],
[
[
"# Define epochs (between 50-200)\nepochs = 20\n# initialize tracker for minimum validation loss\nvalid_loss_min = np.Inf # set initial \"min\" to infinity\n\n# Some lists to keep track of loss and accuracy during each epoch\nepoch_list = []\ntrain_loss_list = []\nval_loss_list = []\ntrain_acc_list = []\nval_acc_list = []\n# Start epochs\nfor epoch in range(epochs):\n \n #adjust_learning_rate(optimizer, epoch)\n \n # monitor training loss\n train_loss = 0.0\n val_loss = 0.0\n \n ###################\n # train the model #\n ###################\n # Set the training mode ON -> Activate Dropout Layers\n model.train() # prepare model for training\n # Calculate Accuracy \n correct = 0\n total = 0\n \n # Load Train Images with Labels(Targets)\n for data, target in train_loader:\n \n if train_on_gpu:\n data, target = data.cuda(), target.cuda()\n \n # clear the gradients of all optimized variables\n optimizer.zero_grad()\n # forward pass: compute predicted outputs by passing inputs to the model\n output = model(data)\n \n if type(output) == tuple:\n output, _ = output\n \n # Calculate Training Accuracy \n predicted = torch.max(output.data, 1)[1] \n # Total number of labels\n total += len(target)\n # Total correct predictions\n correct += (predicted == target).sum()\n \n # calculate the loss\n loss = criterion(output, target)\n # backward pass: compute gradient of the loss with respect to model parameters\n loss.backward()\n # perform a single optimization step (parameter update)\n optimizer.step()\n # update running training loss\n train_loss += loss.item()*data.size(0)\n \n # calculate average training loss over an epoch\n train_loss = train_loss/len(train_loader.dataset)\n \n # Avg Accuracy\n accuracy = 100 * correct / float(total)\n \n # Put them in their list\n train_acc_list.append(accuracy)\n train_loss_list.append(train_loss)\n \n \n # Implement Validation like K-fold Cross-validation \n \n # Set Evaluation Mode ON -> Turn Off Dropout\n model.eval() # Required for Evaluation/Test\n\n # Calculate Test/Validation Accuracy \n correct = 0\n total = 0\n with torch.no_grad():\n for data, target in test_loader:\n\n\n if train_on_gpu:\n data, target = data.cuda(), target.cuda()\n\n # Predict Output\n output = model(data)\n if type(output) == tuple:\n output, _ = output\n\n # Calculate Loss\n loss = criterion(output, target)\n val_loss += loss.item()*data.size(0)\n # Get predictions from the maximum value\n predicted = torch.max(output.data, 1)[1]\n\n # Total number of labels\n total += len(target)\n\n # Total correct predictions\n correct += (predicted == target).sum()\n \n # calculate average training loss and accuracy over an epoch\n val_loss = val_loss/len(test_loader.dataset)\n accuracy = 100 * correct/ float(total)\n \n # Put them in their list\n val_acc_list.append(accuracy)\n val_loss_list.append(val_loss)\n \n # Print the Epoch and Training Loss Details with Validation Accuracy \n print('Epoch: {} \\tTraining Loss: {:.4f}\\t Val. acc: {:.2f}%'.format(\n epoch+1, \n train_loss,\n accuracy\n ))\n # save model if validation loss has decreased\n if val_loss <= valid_loss_min:\n print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(\n valid_loss_min,\n val_loss))\n # Save Model State on Checkpoint\n torch.save(model.state_dict(), 'model.pt')\n valid_loss_min = val_loss\n # Move to next epoch\n epoch_list.append(epoch + 1)",
"_____no_output_____"
]
],
[
[
"## Load Model State from Checkpoint",
"_____no_output_____"
]
],
[
[
"model.load_state_dict(torch.load('model.pt'))",
"_____no_output_____"
]
],
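[
[
"# Sketch: if you later reload this checkpoint on a CPU-only machine, remap the\n# GPU-saved tensors onto the CPU (same file as above; left commented here)\n# model.load_state_dict(torch.load('model.pt', map_location='cpu'))",
"_____no_output_____"
]
],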
[
[
"## Save the whole Model (Pickling)",
"_____no_output_____"
]
],
[
[
"#Save/Pickle the Model\ntorch.save(model, 'classifier.pth')",
"_____no_output_____"
]
],
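[
[
"# Sketch: restoring the pickled model later is a one-liner, but it needs the same\n# torch/torchvision code importable at load time, which is why the state_dict\n# checkpoint above is the preferred format\n# model = torch.load('classifier.pth')",
"_____no_output_____"
]
],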
[
[
"# 7. Visualize Model Training and Validation",
"_____no_output_____"
]
],
[
[
"# Training / Validation Loss\nplt.plot(epoch_list,train_loss_list)\nplt.plot(val_loss_list)\nplt.xlabel(\"Epochs\")\nplt.ylabel(\"Loss\")\nplt.title(\"Training/Validation Loss vs Number of Epochs\")\nplt.legend(['Train', 'Valid'], loc='upper right')\nplt.show()",
"_____no_output_____"
],
[
"# Train/Valid Accuracy\nplt.plot(epoch_list,train_acc_list)\nplt.plot(val_acc_list)\nplt.xlabel(\"Epochs\")\nplt.ylabel(\"Training/Validation Accuracy\")\nplt.title(\"Accuracy vs Number of Epochs\")\nplt.legend(['Train', 'Valid'], loc='best')\nplt.show()",
"_____no_output_____"
]
],
[
[
"From the above graphs we get some really impressive results",
"_____no_output_____"
],
[
"**Overall Accuracy\n**",
"_____no_output_____"
]
],
[
[
"val_acc = sum(val_acc_list[:]).item()/len(val_acc_list)\nprint(\"Validation Accuracy of model = {} %\".format(val_acc))",
"_____no_output_____"
]
],
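[
[
"# The mean above averages over every epoch, including the weak early ones; the\n# best single-epoch validation accuracy is usually the more informative number\nbest_acc = max(val_acc_list)\nprint(\"Best validation accuracy = {:.2f} % (epoch {})\".format(best_acc, val_acc_list.index(best_acc) + 1))",
"_____no_output_____"
]
],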
[
[
"# 8. Test our Model Performance ",
"_____no_output_____"
]
],
[
[
"# obtain one batch of test images\ndataiter = iter(test_loader)\nimages, labels = dataiter.next()\nimg = images.numpy()\n\n# move model inputs to cuda, if GPU available\nif train_on_gpu:\n images = images.cuda()\n\nmodel.eval() # Required for Evaluation/Test\n# get sample outputs\noutput = model(images)\nif type(output) == tuple:\n output, _ = output\n# convert output probabilities to predicted class\n_, preds_tensor = torch.max(output, 1)\npreds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())\n\n# plot the images in the batch, along with predicted and true labels\nfig = plt.figure(figsize=(20, 5))\nfor idx in np.arange(12):\n ax = fig.add_subplot(3, 4, idx+1, xticks=[], yticks=[])\n plt.imshow(np.transpose(img[idx], (1, 2, 0)))\n ax.set_title(\"Pr: {} Ac: {}\".format(classes[preds[idx]], classes[labels[idx]]),\n color=(\"green\" if preds[idx]==labels[idx].item() else \"red\"))",
"_____no_output_____"
]
],
[
[
"**We can see that the Correctly Classifies Results are Marked in \"Green\" and the misclassifies ones are \"Red\"**",
"_____no_output_____"
],
[
"## 8.1 Test our Model Performance with Gabriele Picco's Program",
"_____no_output_____"
],
[
"**Credits: ** **Gabriele Picco** (https://github.com/GabrielePicco/deep-learning-flower-identifier)",
"_____no_output_____"
],
[
"**Special Instruction:** \n1. **Uncomment the following two code cells while running the notebook.**\n2. Comment these two blocks while **Commit**, otherwise you will get an error \"Too many Output Files\" in Kaggle Only.\n3. If you find a solution to this then let me know.",
"_____no_output_____"
]
],
[
[
"# !git clone https://github.com/GabrielePicco/deep-learning-flower-identifier\n# !pip install airtable\n# import sys\n# sys.path.insert(0, 'deep-learning-flower-identifier')",
"_____no_output_____"
],
[
"# from test_model_pytorch_facebook_challenge import calc_accuracy\n# calc_accuracy(model, input_image_size=224, use_google_testset=False)",
"_____no_output_____"
]
],
[
[
"## **Congrats! We got almost 90% accuracy with just a simple configuration!** \n(We will get almost 90% accuracy in Gabriele's Test Suite. Just Uncomment above two code cells and see.)",
"_____no_output_____"
],
[
"# 9. Export our Model Checkpoint File or Model Pickle File",
"_____no_output_____"
],
[
"**Just Right-click on Below link and Copy the Link** \n**And Proceed to [Part 2 Tutorial](https://www.kaggle.com/soumya044/udacity-pytorch-final-lab-guide-part-2/)**",
"_____no_output_____"
],
[
"## Links Here: \n**Model State Checkpoint File: [model.pt](./model.pt)** (Preferred) \n**Classifier Pickle File: [classifier.pth](./classifier.pth)** \n(Right-click on model.pt and copy the link address) \n\n* If the links don't work then just modify the (link) as ./model.pt or ./classifier.pth",
"_____no_output_____"
],
[
"# **Proceed To Part 2: [Click Here](https://www.kaggle.com/soumya044/udacity-pytorch-final-lab-guide-part-2/)**",
"_____no_output_____"
],
[
"# Thank You \n\nIf you liked this kernel please **Upvote**. Don't forget to drop a comment or suggestion. \n\n### *Soumya Ranjan Behera*\nLet's stay Connected! [LinkedIn](https://www.linkedin.com/in/soumya044) \n\n**Happy Coding !**",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0420b1de4bd6bd00f7224fa8525fa88bfe63e6a | 449,841 | ipynb | Jupyter Notebook | test_signal_generation/create_signals.ipynb | jain-nikunj/radioML | 24bcfce5f189e22679881c3eea3819e7a19e7301 | [
"MIT"
] | 5 | 2018-03-07T03:46:32.000Z | 2021-02-20T11:59:40.000Z | test_signal_generation/create_signals.ipynb | jain-nikunj/radioML | 24bcfce5f189e22679881c3eea3819e7a19e7301 | [
"MIT"
] | 1 | 2018-03-11T03:19:03.000Z | 2018-03-11T03:19:03.000Z | test_signal_generation/create_signals.ipynb | jain-nikunj/radioML | 24bcfce5f189e22679881c3eea3819e7a19e7301 | [
"MIT"
] | 3 | 2018-03-14T18:16:25.000Z | 2018-11-14T07:19:41.000Z | 845.565789 | 57,586 | 0.93714 | [
[
[
"import numpy as np\nfrom scipy import pi\nimport matplotlib.pyplot as plt\nimport pickle as cPickle\n#Sine wave\n\nN = 128\n\ndef get_sine_wave():\n x_sin = np.array([0.0 for i in range(N)])\n # print(x_sin)\n for i in range(N):\n # print(\"h\")\n x_sin[i] = np.sin(2.0*pi*i/16.0)\n\n plt.plot(x_sin)\n plt.title('Sine wave')\n plt.show()\n\n y_sin = np.fft.fftshift(np.fft.fft(x_sin[:16], 16))\n plt.plot(abs(y_sin))\n plt.title('FFT sine wave')\n plt.show()\n\n return x_sin\n\ndef get_bpsk_carrier():\n x = np.fromfile('gnuradio_dumps/bpsk_carrier', dtype = 'float32')\n x_bpsk_carrier = x[9000:9000+N]\n plt.plot(x_bpsk_carrier)\n plt.title('BPSK carrier')\n plt.show()\n\n # y_bpsk_carrier = np.fft.fft(x_bpsk_carrier, N)\n # plt.plot(abs(y_bpsk_carrier))\n # plt.title('FFT BPSK carrier')\n # plt.show()\n\ndef get_qpsk_carrier():\n x = np.fromfile('gnuradio_dumps/qpsk_carrier', dtype = 'float32')\n x_qpsk_carrier = x[12000:12000+N]\n plt.plot(x_qpsk_carrier)\n plt.title('QPSK carrier')\n plt.show()\n\n\n # y_qpsk_carrier = np.fft.fft(x_qpsk_carrier, N)\n # plt.plot(abs(y_qpsk_carrier))\n # plt.title('FFT QPSK carrier')\n # plt.show()\n\ndef get_bpsk():\n x = np.fromfile('gnuradio_dumps/bpsk', dtype = 'complex64')\n x_bpsk = x[9000:9000+N]\n plt.plot(x_bpsk.real)\n plt.plot(x_bpsk.imag)\n plt.title('BPSK')\n plt.show()\n\n\n\n # y_bpsk = np.fft.fft(x_bpsk, N)\n # plt.plot(abs(y_bpsk))\n # plt.title('FFT BPSK')\n # plt.show()\n\ndef get_qpsk():\n x = np.fromfile('gnuradio_dumps/qpsk', dtype = 'complex64')\n x_qpsk = x[11000:11000+N]\n plt.plot(x_qpsk.real)\n plt.plot(x_qpsk.imag)\n plt.title('QPSK')\n plt.show()\n\n\n\n # y_qpsk = np.fft.fft(x_bpsk, N)\n # plt.plot(abs(y_bqsk))\n # plt.title('FFT QPSK')\n # plt.show()\n\ndef load_dataset(location=\"../../datasets/radioml.dat\"):\n f = open(location, \"rb\")\n ds = cPickle.load(f, encoding = 'latin-1')\n return ds\n\n\ndef get_from_dataset(dataset, key):\n \"\"\"Returns complex version of dataset[key][500]\"\"\"\n xr = dataset[key][500][0]\n xi = dataset[key][500][1]\n plt.plot(xr)\n plt.plot(xi)\n plt.title(key)\n plt.show()\n return xr\n\nx_sin = get_sine_wave()\nx_bpsk_carrier = get_bpsk_carrier()\nx_qpsk_carrier = get_qpsk_carrier()\nx_bpsk = get_bpsk()\nx_qpsk = get_qpsk()\n\nds = load_dataset()\nx_amssb = get_from_dataset(dataset=ds, key=('AM-SSB', 16))\nx_amdsb = get_from_dataset(dataset=ds, key= ('AM-DSB', 18))\nx_gfsk = get_from_dataset(dataset=ds, key=('GFSK', 18))\n\n",
"_____no_output_____"
],
[
"nfft = 16\ncyclo_averaging = 8\noffsets = [0,1,2,3,4,5,6,7]\n\ndef compute_cyclo_fft(data, nfft): \n \n data_reshape = np.reshape(data, (-1, nfft)) \n y = np.fft.fftshift(np.fft.fft(data_reshape, axis=1), axes=1) \n \n \n return y.T\n\ndef compute_cyclo_ifft(data, nfft):\n return np.fft.fftshift(np.fft.fft(data))\n\ndef single_fft_cyclo(fft, offset):\n left = np.roll(fft, -offset)\n right = np.roll(fft, offset)\n spec = right * np.conj(left)\n return spec\n\n\ndef create_sc(spec, offset):\n left = np.roll(spec, -offset)\n right = np.roll(spec, offset)\n denom = left * right \n denom_norm = np.sqrt(denom)\n return np.divide(spec, denom_norm)\n\n\ndef cyclo_stationary(data):\n # fft\n cyc_fft = compute_cyclo_fft(data, nfft)\n \n # average\n num_ffts = int(cyc_fft.shape[0])\n cyc_fft = cyc_fft[:num_ffts] \n \n cyc_fft = np.mean(np.reshape(cyc_fft, (nfft, cyclo_averaging)), axis=1)\n \n print(cyc_fft)\n plt.title('cyc_fft')\n plt.plot(abs(cyc_fft))\n plt.show()\n\n specs = np.zeros((len(offsets)*16), dtype=np.complex64)\n scs = np.zeros((len(offsets)*16), dtype=np.complex64)\n cdp = {offset: 0 for offset in offsets}\n for j, offset in enumerate(offsets):\n spec = single_fft_cyclo(cyc_fft, offset)\n print(spec)\n plt.plot(abs(spec))\n plt.title(offset)\n plt.show()\n sc = create_sc(spec, offset)\n specs[j*16:j*16+16] = spec\n scs[j*16:j*16+16] = sc\n cdp[offset] = max(sc)\n return specs, scs, cdp\n\n\nspecs, scs, cdp = cyclo_stationary(x_sin)\nplt.plot(np.arange(128), scs.real)\nplt.plot(np.arange(128), scs.imag)\n\nplt.show()\n",
"[ 6.69535287e-17 +0.00000000e+00j 6.19398243e-16 -3.33066907e-16j\n 3.40853099e-16 +1.04444071e-15j -2.53085959e-15 -3.55271368e-15j\n 6.63698404e-16 -3.30291350e-15j 3.97353638e-16 +5.27355937e-16j\n 7.92254681e-16 -2.53602855e-15j -5.60213720e-16 +8.00000000e+00j\n 8.71865222e-16 +0.00000000e+00j -1.07603878e-14 -8.00000000e+00j\n 7.92254681e-16 +2.53602855e-15j -2.75290419e-15 +3.10862447e-15j\n 6.63698404e-16 +3.30291350e-15j 6.19398243e-16 -1.94289029e-16j\n 3.40853099e-16 -1.04444071e-15j -3.28026013e-15 -5.55111512e-17j]\n"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
d0421509443889b31ae3e5248ccab98e7ebf2213 | 428,985 | ipynb | Jupyter Notebook | 06_shakespeare_exercise.ipynb | flaviomerenda/tutorial | f58eb202d4a346542994c7e7ed948f1a6b98714d | [
"MIT"
] | 14 | 2018-10-08T16:17:51.000Z | 2022-02-09T23:10:36.000Z | 06_shakespeare_exercise.ipynb | flaviomerenda/tutorial | f58eb202d4a346542994c7e7ed948f1a6b98714d | [
"MIT"
] | null | null | null | 06_shakespeare_exercise.ipynb | flaviomerenda/tutorial | f58eb202d4a346542994c7e7ed948f1a6b98714d | [
"MIT"
] | 11 | 2018-10-08T16:14:27.000Z | 2020-07-29T17:24:30.000Z | 81.46316 | 524 | 0.664979 | [
[
[
"# Exercise: Find correspondences between old and modern english ",
"_____no_output_____"
],
[
"The purpose of this execise is to use two vecsigrafos, one built on UMBC and Wordnet and another one produced by directly running Swivel against a corpus of Shakespeare's complete works, to try to find corelations between old and modern English, e.g. \"thou\" -> \"you\", \"dost\" -> \"do\", \"raiment\" -> \"clothing\". For example, you can try to pick a set of 100 words in \"ye olde\" English corpus and see how they correlate to UMBC over WordNet. \n\n",
"_____no_output_____"
],
[
"Next, we prepare the embeddings from the Shakespeare corpus and load a UMBC vecsigrafo, which will provide the two vector spaces to correlate.",
"_____no_output_____"
],
[
"## Download a small text corpus",
"_____no_output_____"
],
[
"First, we download the corpus into our environment. We will use the Shakespeare's complete works corpus, published as part of Project Gutenberg and pbublicly available.",
"_____no_output_____"
]
],
[
[
"import os",
"_____no_output_____"
],
[
"%ls",
"\u001b[0m\u001b[01;34m5class\u001b[0m/ figures_5class.h5\n5class.zip figures_5class_weights.h5\ncaptions_5class_cross.h5 quality5class.h5\ncaptions_5class_cross_weights.h5 qualityMix5class.h5\ncaptions_5class.h5 qualityUni5class.h5\ncaptions_5class_weights.h5 \u001b[01;34msample_data\u001b[0m/\ncross.h5 title_abstract_5class.h5\ncross_weights.h5 title_abstract_5class_weights.h5\nfigures_5class_cross.h5 \u001b[01;34mtutorial\u001b[0m/\nfigures_5class_cross_weights.h5\n"
],
[
"#!rm -r tutorial\n!git clone https://github.com/HybridNLP2018/tutorial",
"fatal: destination path 'tutorial' already exists and is not an empty directory.\n"
]
],
[
[
"Let us see if the corpus is where we think it is:",
"_____no_output_____"
]
],
[
[
"%cd tutorial/lit\n%ls ",
"/content/tutorial/lit\n\u001b[0m\u001b[01;34mcoocs\u001b[0m/ shakespeare_complete_works.txt \u001b[01;34mswivel\u001b[0m/ wget-log\n"
]
],
[
[
"Downloading Swivel",
"_____no_output_____"
]
],
[
[
"!wget http://expertsystemlab.com/hybridNLP18/swivel.zip\n!unzip swivel.zip\n!rm swivel/*\n!rm swivel.zip",
"\nRedirecting output to ‘wget-log.1’.\nArchive: swivel.zip\n inflating: swivel/analogy.cc \n inflating: swivel/distributed.sh \n inflating: swivel/eval.mk \n inflating: swivel/fastprep.cc \n inflating: swivel/fastprep.mk \n inflating: swivel/glove_to_shards.py \n inflating: swivel/nearest.py \n inflating: swivel/prep.py \n inflating: swivel/README.md \n inflating: swivel/swivel.py \n inflating: swivel/text2bin.py \n inflating: swivel/vecs.py \n inflating: swivel/wordsim.py \n"
]
],
[
[
"## Learn the Swivel embeddings over the Old Shakespeare corpus",
"_____no_output_____"
],
[
"### Calculating the co-occurrence matrix",
"_____no_output_____"
]
],
[
[
"corpus_path = '/content/tutorial/lit/shakespeare_complete_works.txt'\ncoocs_path = '/content/tutorial/lit/coocs'\nshard_size = 512\nfreq=3\n!python /content/tutorial/scripts/swivel/prep.py --input={corpus_path} --output_dir={coocs_path} --shard_size={shard_size} --min_count={freq}",
"running with flags \n/content/tutorial/scripts/swivel/prep.py:\n --bufsz: The number of co-occurrences to buffer\n (default: '16777216')\n (an integer)\n --input: The input text.\n (default: '')\n --max_vocab: The maximum vocabulary size\n (default: '1048576')\n (an integer)\n --min_count: The minimum number of times a word should occur to be included in\n the vocabulary\n (default: '5')\n (an integer)\n --output_dir: Output directory for Swivel data\n (default: '/tmp/swivel_data')\n --shard_size: The size for each shard\n (default: '4096')\n (an integer)\n --vocab: Vocabulary to use instead of generating one\n (default: '')\n --window_size: The window size\n (default: '10')\n (an integer)\n\ntensorflow.python.platform.app:\n -h,--[no]help: show this help\n (default: 'false')\n --[no]helpfull: show full help\n (default: 'false')\n --[no]helpshort: show this help\n (default: 'false')\n\nabsl.flags:\n --flagfile: Insert flag definitions from the given file into the command line.\n (default: '')\n --undefok: comma-separated list of flag names that it is okay to specify on\n the command line even if the program does not define a flag with that name.\n IMPORTANT: flags in this list that have arguments MUST use the --flag=value\n format.\n (default: '')\n\nvocabulary contains 23552 tokens\nComputing co-occurrences: 140000..., last lid 1820, sum(1820)=188.256746\nwriting shard 2116/2116\nWrote vocab and sum files to /content/tutorial/lit/coocs\nWrote vocab and sum files to /content/tutorial/lit/coocs\ndone!\n"
],
[
"%ls {coocs_path} | head -n 10",
"col_sums.txt\ncol_vocab.txt\nrow_sums.txt\nrow_vocab.txt\nshard-000-000.pb\nshard-000-001.pb\nshard-000-002.pb\nshard-000-003.pb\nshard-000-004.pb\nshard-000-005.pb\n"
]
],
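[
[
"# Quick peek (optional) at the vocabulary prep.py just wrote: row_vocab.txt\n# holds one token per line\n!head -n 10 {coocs_path}/row_vocab.txt",
"_____no_output_____"
]
],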
[
[
"### Learning the embeddings from the matrix",
"_____no_output_____"
]
],
[
[
"vec_path = '/content/tutorial/lit/vec/'\n!python /content/tutorial/scripts/swivel/swivel.py --input_base_path={coocs_path} \\\n --output_base_path={vec_path} \\\n --num_epochs=20 --dim=300 \\\n --submatrix_rows={shard_size} --submatrix_cols={shard_size}",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/input.py:187: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\nInstructions for updating:\nTo construct input pipelines, use the `tf.data` module.\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/input.py:187: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\nInstructions for updating:\nTo construct input pipelines, use the `tf.data` module.\nWARNING:tensorflow:From /content/tutorial/scripts/swivel/swivel.py:495: Supervisor.__init__ (from tensorflow.python.training.supervisor) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease switch to tf.train.MonitoredTrainingSession\nWARNING:tensorflow:Issue encountered when serializing global_step.\nType is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.\n'Tensor' object has no attribute 'to_proto'\n2018-10-08 13:14:16.156023: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2018-10-08 13:14:16.156566: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 0 with properties: \nname: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235\npciBusID: 0000:00:04.0\ntotalMemory: 11.17GiB freeMemory: 11.10GiB\n2018-10-08 13:14:16.156611: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1490] Adding visible gpu devices: 0\n2018-10-08 13:14:18.064223: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] Device interconnect StreamExecutor with strength 1 edge matrix:\n2018-10-08 13:14:18.064387: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] 0 \n2018-10-08 13:14:18.064482: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0: N \n2018-10-08 13:14:18.064823: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.\n2018-10-08 13:14:18.069298: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10759 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7)\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\nINFO:tensorflow:Starting standard services.\nINFO:tensorflow:Saving checkpoint to path /content/tutorial/lit/vec/model.ckpt\nINFO:tensorflow:Starting queue runners.\nINFO:tensorflow:global_step/sec: 0\nWARNING:tensorflow:Issue encountered when serializing global_step.\nType is unsupported, or the types of the items don't match field type in CollectionDef. 
Note this is a warning and probably safe to ignore.\n'Tensor' object has no attribute 'to_proto'\nINFO:tensorflow:Recording summary at step 0.\nINFO:tensorflow:local_step=10 global_step=10 loss=19.0, 0.0% complete\n[... progress lines for steps 20 through 4710 omitted; the loss mostly fluctuates between ~14 and ~24, with occasional spikes into the hundreds, as training advances to ~11% ...]\nINFO:tensorflow:local_step=4720 global_step=4720
loss=18.3, 11.2% complete\nINFO:tensorflow:local_step=4730 global_step=4730 loss=18.9, 11.2% complete\nINFO:tensorflow:local_step=4740 global_step=4740 loss=18.8, 11.2% complete\nINFO:tensorflow:local_step=4750 global_step=4750 loss=18.8, 11.2% complete\nINFO:tensorflow:local_step=4760 global_step=4760 loss=17.4, 11.2% complete\nINFO:tensorflow:local_step=4770 global_step=4770 loss=18.9, 11.3% complete\nINFO:tensorflow:local_step=4780 global_step=4780 loss=18.5, 11.3% complete\nINFO:tensorflow:local_step=4790 global_step=4790 loss=18.3, 11.3% complete\nINFO:tensorflow:local_step=4800 global_step=4800 loss=15.0, 11.3% complete\nINFO:tensorflow:local_step=4810 global_step=4810 loss=19.8, 11.4% complete\nINFO:tensorflow:local_step=4820 global_step=4820 loss=18.7, 11.4% complete\nINFO:tensorflow:local_step=4830 global_step=4830 loss=19.1, 11.4% complete\nINFO:tensorflow:local_step=4840 global_step=4840 loss=21.1, 11.4% complete\nINFO:tensorflow:local_step=4850 global_step=4850 loss=15.0, 11.5% complete\nINFO:tensorflow:local_step=4860 global_step=4860 loss=17.8, 11.5% complete\nINFO:tensorflow:local_step=4870 global_step=4870 loss=17.9, 11.5% complete\nINFO:tensorflow:local_step=4880 global_step=4880 loss=18.0, 11.5% complete\nINFO:tensorflow:local_step=4890 global_step=4890 loss=18.7, 11.6% complete\nINFO:tensorflow:local_step=4900 global_step=4900 loss=19.0, 11.6% complete\nINFO:tensorflow:local_step=4910 global_step=4910 loss=19.9, 11.6% complete\nINFO:tensorflow:local_step=4920 global_step=4920 loss=15.3, 11.6% complete\nINFO:tensorflow:local_step=4930 global_step=4930 loss=19.2, 11.6% complete\nINFO:tensorflow:local_step=4940 global_step=4940 loss=20.4, 11.7% complete\nINFO:tensorflow:local_step=4950 global_step=4950 loss=19.0, 11.7% complete\nINFO:tensorflow:local_step=4960 global_step=4960 loss=18.1, 11.7% complete\nINFO:tensorflow:local_step=4970 global_step=4970 loss=19.3, 11.7% complete\nINFO:tensorflow:local_step=4980 global_step=4980 loss=18.8, 11.8% complete\nINFO:tensorflow:local_step=4990 global_step=4990 loss=17.8, 11.8% complete\nINFO:tensorflow:local_step=5000 global_step=5000 loss=17.7, 11.8% complete\nINFO:tensorflow:local_step=5010 global_step=5010 loss=18.5, 11.8% complete\nINFO:tensorflow:local_step=5020 global_step=5020 loss=19.8, 11.9% complete\nINFO:tensorflow:local_step=5030 global_step=5030 loss=18.6, 11.9% complete\nINFO:tensorflow:local_step=5040 global_step=5040 loss=21.3, 11.9% complete\nINFO:tensorflow:local_step=5050 global_step=5050 loss=17.7, 11.9% complete\nINFO:tensorflow:local_step=5060 global_step=5060 loss=18.8, 12.0% complete\nINFO:tensorflow:local_step=5070 global_step=5070 loss=19.0, 12.0% complete\nINFO:tensorflow:local_step=5080 global_step=5080 loss=312.3, 12.0% complete\nINFO:tensorflow:local_step=5090 global_step=5090 loss=18.1, 12.0% complete\nINFO:tensorflow:local_step=5100 global_step=5100 loss=19.9, 12.1% complete\nINFO:tensorflow:local_step=5110 global_step=5110 loss=19.0, 12.1% complete\nINFO:tensorflow:local_step=5120 global_step=5120 loss=19.5, 12.1% complete\nINFO:tensorflow:local_step=5130 global_step=5130 loss=17.7, 12.1% complete\nINFO:tensorflow:local_step=5140 global_step=5140 loss=20.0, 12.1% complete\nINFO:tensorflow:local_step=5150 global_step=5150 loss=19.0, 12.2% complete\nINFO:tensorflow:local_step=5160 global_step=5160 loss=18.7, 12.2% complete\nINFO:tensorflow:local_step=5170 global_step=5170 loss=19.6, 12.2% complete\nINFO:tensorflow:local_step=5180 global_step=5180 loss=18.5, 12.2% 
complete\nINFO:tensorflow:local_step=5190 global_step=5190 loss=19.2, 12.3% complete\nINFO:tensorflow:local_step=5200 global_step=5200 loss=19.7, 12.3% complete\nINFO:tensorflow:local_step=5210 global_step=5210 loss=19.6, 12.3% complete\nINFO:tensorflow:local_step=5220 global_step=5220 loss=17.9, 12.3% complete\nINFO:tensorflow:local_step=5230 global_step=5230 loss=18.7, 12.4% complete\nINFO:tensorflow:local_step=5240 global_step=5240 loss=18.1, 12.4% complete\nINFO:tensorflow:local_step=5250 global_step=5250 loss=19.5, 12.4% complete\nINFO:tensorflow:local_step=5260 global_step=5260 loss=18.8, 12.4% complete\nINFO:tensorflow:local_step=5270 global_step=5270 loss=18.1, 12.5% complete\nINFO:tensorflow:local_step=5280 global_step=5280 loss=22.3, 12.5% complete\nINFO:tensorflow:local_step=5290 global_step=5290 loss=18.1, 12.5% complete\nINFO:tensorflow:local_step=5300 global_step=5300 loss=20.0, 12.5% complete\nINFO:tensorflow:local_step=5310 global_step=5310 loss=18.0, 12.5% complete\nINFO:tensorflow:local_step=5320 global_step=5320 loss=18.0, 12.6% complete\nINFO:tensorflow:local_step=5330 global_step=5330 loss=19.3, 12.6% complete\nINFO:tensorflow:local_step=5340 global_step=5340 loss=18.2, 12.6% complete\nINFO:tensorflow:local_step=5350 global_step=5350 loss=15.9, 12.6% complete\nINFO:tensorflow:local_step=5360 global_step=5360 loss=17.4, 12.7% complete\nINFO:tensorflow:local_step=5370 global_step=5370 loss=17.7, 12.7% complete\nINFO:tensorflow:local_step=5380 global_step=5380 loss=18.0, 12.7% complete\nINFO:tensorflow:local_step=5390 global_step=5390 loss=19.7, 12.7% complete\nINFO:tensorflow:local_step=5400 global_step=5400 loss=18.2, 12.8% complete\nINFO:tensorflow:local_step=5410 global_step=5410 loss=19.5, 12.8% complete\nINFO:tensorflow:local_step=5420 global_step=5420 loss=18.5, 12.8% complete\nINFO:tensorflow:local_step=5430 global_step=5430 loss=19.3, 12.8% complete\nINFO:tensorflow:local_step=5440 global_step=5440 loss=18.3, 12.9% complete\nINFO:tensorflow:local_step=5450 global_step=5450 loss=17.7, 12.9% complete\nINFO:tensorflow:local_step=5460 global_step=5460 loss=19.4, 12.9% complete\nINFO:tensorflow:local_step=5470 global_step=5470 loss=17.7, 12.9% complete\nINFO:tensorflow:local_step=5480 global_step=5480 loss=19.2, 12.9% complete\nINFO:tensorflow:local_step=5490 global_step=5490 loss=18.7, 13.0% complete\nINFO:tensorflow:local_step=5500 global_step=5500 loss=18.1, 13.0% complete\nINFO:tensorflow:local_step=5510 global_step=5510 loss=18.0, 13.0% complete\nINFO:tensorflow:local_step=5520 global_step=5520 loss=18.7, 13.0% complete\nINFO:tensorflow:local_step=5530 global_step=5530 loss=19.4, 13.1% complete\nINFO:tensorflow:local_step=5540 global_step=5540 loss=17.7, 13.1% complete\nINFO:tensorflow:local_step=5550 global_step=5550 loss=19.3, 13.1% complete\nINFO:tensorflow:local_step=5560 global_step=5560 loss=19.8, 13.1% complete\nINFO:tensorflow:local_step=5570 global_step=5570 loss=19.1, 13.2% complete\nINFO:tensorflow:local_step=5580 global_step=5580 loss=18.2, 13.2% complete\nINFO:tensorflow:local_step=5590 global_step=5590 loss=18.2, 13.2% complete\nINFO:tensorflow:local_step=5600 global_step=5600 loss=17.6, 13.2% complete\nINFO:tensorflow:local_step=5610 global_step=5610 loss=17.9, 13.3% complete\nINFO:tensorflow:local_step=5620 global_step=5620 loss=18.4, 13.3% complete\nINFO:tensorflow:local_step=5630 global_step=5630 loss=18.7, 13.3% complete\nINFO:tensorflow:local_step=5640 global_step=5640 loss=18.5, 13.3% complete\nINFO:tensorflow:local_step=5650 global_step=5650 
loss=19.1, 13.4% complete\nINFO:tensorflow:local_step=5660 global_step=5660 loss=18.9, 13.4% complete\nINFO:tensorflow:local_step=5670 global_step=5670 loss=18.9, 13.4% complete\nINFO:tensorflow:local_step=5680 global_step=5680 loss=19.6, 13.4% complete\nINFO:tensorflow:local_step=5690 global_step=5690 loss=18.9, 13.4% complete\nINFO:tensorflow:local_step=5700 global_step=5700 loss=18.7, 13.5% complete\nINFO:tensorflow:local_step=5710 global_step=5710 loss=17.5, 13.5% complete\nINFO:tensorflow:local_step=5720 global_step=5720 loss=19.5, 13.5% complete\nINFO:tensorflow:local_step=5730 global_step=5730 loss=20.6, 13.5% complete\nINFO:tensorflow:local_step=5740 global_step=5740 loss=18.0, 13.6% complete\nINFO:tensorflow:local_step=5750 global_step=5750 loss=19.8, 13.6% complete\nINFO:tensorflow:local_step=5760 global_step=5760 loss=19.7, 13.6% complete\nINFO:tensorflow:local_step=5770 global_step=5770 loss=18.4, 13.6% complete\nINFO:tensorflow:local_step=5780 global_step=5780 loss=19.4, 13.7% complete\nINFO:tensorflow:local_step=5790 global_step=5790 loss=20.5, 13.7% complete\nINFO:tensorflow:local_step=5800 global_step=5800 loss=17.6, 13.7% complete\nINFO:tensorflow:local_step=5810 global_step=5810 loss=18.4, 13.7% complete\nINFO:tensorflow:local_step=5820 global_step=5820 loss=19.1, 13.8% complete\nINFO:tensorflow:local_step=5830 global_step=5830 loss=20.1, 13.8% complete\nINFO:tensorflow:local_step=5840 global_step=5840 loss=19.0, 13.8% complete\nINFO:tensorflow:local_step=5850 global_step=5850 loss=18.1, 13.8% complete\nINFO:tensorflow:local_step=5860 global_step=5860 loss=20.0, 13.8% complete\nINFO:tensorflow:local_step=5870 global_step=5870 loss=19.3, 13.9% complete\nINFO:tensorflow:local_step=5880 global_step=5880 loss=18.4, 13.9% complete\nINFO:tensorflow:local_step=5890 global_step=5890 loss=18.2, 13.9% complete\nINFO:tensorflow:local_step=5900 global_step=5900 loss=18.1, 13.9% complete\nINFO:tensorflow:local_step=5910 global_step=5910 loss=18.3, 14.0% complete\nINFO:tensorflow:local_step=5920 global_step=5920 loss=17.3, 14.0% complete\nINFO:tensorflow:local_step=5930 global_step=5930 loss=19.1, 14.0% complete\nINFO:tensorflow:local_step=5940 global_step=5940 loss=20.5, 14.0% complete\nINFO:tensorflow:local_step=5950 global_step=5950 loss=17.8, 14.1% complete\nINFO:tensorflow:local_step=5960 global_step=5960 loss=19.8, 14.1% complete\nINFO:tensorflow:local_step=5970 global_step=5970 loss=18.0, 14.1% complete\nINFO:tensorflow:local_step=5980 global_step=5980 loss=18.3, 14.1% complete\nINFO:tensorflow:local_step=5990 global_step=5990 loss=18.3, 14.2% complete\nINFO:tensorflow:local_step=6000 global_step=6000 loss=18.8, 14.2% complete\nINFO:tensorflow:local_step=6010 global_step=6010 loss=267.1, 14.2% complete\nINFO:tensorflow:local_step=6020 global_step=6020 loss=18.8, 14.2% complete\nINFO:tensorflow:local_step=6030 global_step=6030 loss=21.6, 14.2% complete\nINFO:tensorflow:local_step=6040 global_step=6040 loss=344.7, 14.3% complete\nINFO:tensorflow:local_step=6050 global_step=6050 loss=18.4, 14.3% complete\nINFO:tensorflow:local_step=6060 global_step=6060 loss=18.8, 14.3% complete\nINFO:tensorflow:local_step=6070 global_step=6070 loss=18.6, 14.3% complete\nINFO:tensorflow:local_step=6080 global_step=6080 loss=17.2, 14.4% complete\nINFO:tensorflow:local_step=6090 global_step=6090 loss=18.3, 14.4% complete\nINFO:tensorflow:local_step=6100 global_step=6100 loss=18.7, 14.4% complete\nINFO:tensorflow:local_step=6110 global_step=6110 loss=18.7, 14.4% 
complete\nINFO:tensorflow:local_step=6120 global_step=6120 loss=20.3, 14.5% complete\nINFO:tensorflow:local_step=6130 global_step=6130 loss=18.3, 14.5% complete\nINFO:tensorflow:local_step=6140 global_step=6140 loss=18.8, 14.5% complete\nINFO:tensorflow:local_step=6150 global_step=6150 loss=318.2, 14.5% complete\nINFO:tensorflow:local_step=6160 global_step=6160 loss=18.1, 14.6% complete\nINFO:tensorflow:local_step=6170 global_step=6170 loss=18.7, 14.6% complete\nINFO:tensorflow:local_step=6180 global_step=6180 loss=18.3, 14.6% complete\nINFO:tensorflow:local_step=6190 global_step=6190 loss=20.5, 14.6% complete\nINFO:tensorflow:local_step=6200 global_step=6200 loss=18.8, 14.7% complete\nINFO:tensorflow:local_step=6210 global_step=6210 loss=18.8, 14.7% complete\nINFO:tensorflow:local_step=6220 global_step=6220 loss=19.4, 14.7% complete\nINFO:tensorflow:local_step=6230 global_step=6230 loss=19.1, 14.7% complete\nINFO:tensorflow:local_step=6240 global_step=6240 loss=18.8, 14.7% complete\nINFO:tensorflow:local_step=6250 global_step=6250 loss=19.2, 14.8% complete\nINFO:tensorflow:local_step=6260 global_step=6260 loss=18.9, 14.8% complete\nINFO:tensorflow:local_step=6270 global_step=6270 loss=19.7, 14.8% complete\nINFO:tensorflow:local_step=6280 global_step=6280 loss=19.7, 14.8% complete\nINFO:tensorflow:local_step=6290 global_step=6290 loss=18.6, 14.9% complete\nINFO:tensorflow:local_step=6300 global_step=6300 loss=19.1, 14.9% complete\nINFO:tensorflow:local_step=6310 global_step=6310 loss=18.7, 14.9% complete\nINFO:tensorflow:local_step=6320 global_step=6320 loss=20.3, 14.9% complete\nINFO:tensorflow:local_step=6330 global_step=6330 loss=19.1, 15.0% complete\nINFO:tensorflow:local_step=6340 global_step=6340 loss=18.6, 15.0% complete\nINFO:tensorflow:local_step=6350 global_step=6350 loss=19.3, 15.0% complete\nINFO:tensorflow:local_step=6360 global_step=6360 loss=18.2, 15.0% complete\nINFO:tensorflow:local_step=6370 global_step=6370 loss=18.4, 15.1% complete\nINFO:tensorflow:local_step=6380 global_step=6380 loss=18.9, 15.1% complete\nINFO:tensorflow:local_step=6390 global_step=6390 loss=17.8, 15.1% complete\nINFO:tensorflow:local_step=6400 global_step=6400 loss=18.8, 15.1% complete\nINFO:tensorflow:local_step=6410 global_step=6410 loss=17.9, 15.1% complete\nINFO:tensorflow:local_step=6420 global_step=6420 loss=18.6, 15.2% complete\nINFO:tensorflow:local_step=6430 global_step=6430 loss=18.8, 15.2% complete\nINFO:tensorflow:local_step=6440 global_step=6440 loss=18.5, 15.2% complete\nINFO:tensorflow:local_step=6450 global_step=6450 loss=19.2, 15.2% complete\nINFO:tensorflow:local_step=6460 global_step=6460 loss=18.6, 15.3% complete\nINFO:tensorflow:local_step=6470 global_step=6470 loss=18.5, 15.3% complete\nINFO:tensorflow:local_step=6480 global_step=6480 loss=19.0, 15.3% complete\nINFO:tensorflow:local_step=6490 global_step=6490 loss=18.0, 15.3% complete\nINFO:tensorflow:local_step=6500 global_step=6500 loss=18.4, 15.4% complete\nINFO:tensorflow:local_step=6510 global_step=6510 loss=17.8, 15.4% complete\nINFO:tensorflow:local_step=6520 global_step=6520 loss=18.0, 15.4% complete\nINFO:tensorflow:local_step=6530 global_step=6530 loss=18.0, 15.4% complete\nINFO:tensorflow:local_step=6540 global_step=6540 loss=18.7, 15.5% complete\nINFO:tensorflow:local_step=6550 global_step=6550 loss=18.1, 15.5% complete\nINFO:tensorflow:local_step=6560 global_step=6560 loss=18.5, 15.5% complete\nINFO:tensorflow:local_step=6570 global_step=6570 loss=17.8, 15.5% complete\nINFO:tensorflow:local_step=6580 
global_step=6580 loss=18.0, 15.5% complete\nINFO:tensorflow:local_step=6590 global_step=6590 loss=19.1, 15.6% complete\nINFO:tensorflow:local_step=6600 global_step=6600 loss=18.0, 15.6% complete\nINFO:tensorflow:local_step=6610 global_step=6610 loss=18.3, 15.6% complete\nINFO:tensorflow:local_step=6620 global_step=6620 loss=17.6, 15.6% complete\nINFO:tensorflow:local_step=6630 global_step=6630 loss=18.4, 15.7% complete\nINFO:tensorflow:local_step=6640 global_step=6640 loss=18.6, 15.7% complete\nINFO:tensorflow:local_step=6650 global_step=6650 loss=19.0, 15.7% complete\nINFO:tensorflow:local_step=6660 global_step=6660 loss=18.2, 15.7% complete\nINFO:tensorflow:local_step=6670 global_step=6670 loss=17.9, 15.8% complete\nINFO:tensorflow:local_step=6680 global_step=6680 loss=17.9, 15.8% complete\nINFO:tensorflow:local_step=6690 global_step=6690 loss=19.5, 15.8% complete\nINFO:tensorflow:local_step=6700 global_step=6700 loss=18.4, 15.8% complete\nINFO:tensorflow:local_step=6710 global_step=6710 loss=17.7, 15.9% complete\nINFO:tensorflow:local_step=6720 global_step=6720 loss=19.2, 15.9% complete\nINFO:tensorflow:local_step=6730 global_step=6730 loss=18.8, 15.9% complete\nINFO:tensorflow:local_step=6740 global_step=6740 loss=16.3, 15.9% complete\nINFO:tensorflow:local_step=6750 global_step=6750 loss=18.0, 15.9% complete\nINFO:tensorflow:local_step=6760 global_step=6760 loss=18.1, 16.0% complete\nINFO:tensorflow:local_step=6770 global_step=6770 loss=18.9, 16.0% complete\nINFO:tensorflow:local_step=6780 global_step=6780 loss=17.9, 16.0% complete\nINFO:tensorflow:local_step=6790 global_step=6790 loss=17.8, 16.0% complete\nINFO:tensorflow:local_step=6800 global_step=6800 loss=254.3, 16.1% complete\nINFO:tensorflow:local_step=6810 global_step=6810 loss=19.0, 16.1% complete\nINFO:tensorflow:local_step=6820 global_step=6820 loss=17.6, 16.1% complete\nINFO:tensorflow:local_step=6830 global_step=6830 loss=19.2, 16.1% complete\nINFO:tensorflow:local_step=6840 global_step=6840 loss=18.3, 16.2% complete\nINFO:tensorflow:local_step=6850 global_step=6850 loss=18.2, 16.2% complete\nINFO:tensorflow:local_step=6860 global_step=6860 loss=17.9, 16.2% complete\nINFO:tensorflow:local_step=6870 global_step=6870 loss=18.7, 16.2% complete\nINFO:tensorflow:local_step=6880 global_step=6880 loss=18.8, 16.3% complete\nINFO:tensorflow:local_step=6890 global_step=6890 loss=18.0, 16.3% complete\nINFO:tensorflow:local_step=6900 global_step=6900 loss=18.3, 16.3% complete\nINFO:tensorflow:local_step=6910 global_step=6910 loss=18.1, 16.3% complete\nINFO:tensorflow:local_step=6920 global_step=6920 loss=18.0, 16.4% complete\nINFO:tensorflow:local_step=6930 global_step=6930 loss=21.6, 16.4% complete\nINFO:tensorflow:local_step=6940 global_step=6940 loss=17.4, 16.4% complete\nINFO:tensorflow:local_step=6950 global_step=6950 loss=15.0, 16.4% complete\nINFO:tensorflow:local_step=6960 global_step=6960 loss=18.0, 16.4% complete\nINFO:tensorflow:local_step=6970 global_step=6970 loss=21.1, 16.5% complete\nINFO:tensorflow:local_step=6980 global_step=6980 loss=18.5, 16.5% complete\nINFO:tensorflow:local_step=6990 global_step=6990 loss=18.0, 16.5% complete\nINFO:tensorflow:local_step=7000 global_step=7000 loss=18.5, 16.5% complete\nINFO:tensorflow:local_step=7010 global_step=7010 loss=18.0, 16.6% complete\nINFO:tensorflow:local_step=7020 global_step=7020 loss=18.3, 16.6% complete\nINFO:tensorflow:local_step=7030 global_step=7030 loss=18.9, 16.6% complete\nINFO:tensorflow:local_step=7040 global_step=7040 loss=19.0, 16.6% 
complete\nINFO:tensorflow:local_step=7050 global_step=7050 loss=19.0, 16.7% complete\nINFO:tensorflow:local_step=7060 global_step=7060 loss=20.2, 16.7% complete\nINFO:tensorflow:local_step=7070 global_step=7070 loss=17.7, 16.7% complete\nINFO:tensorflow:local_step=7080 global_step=7080 loss=18.0, 16.7% complete\nINFO:tensorflow:local_step=7090 global_step=7090 loss=19.1, 16.8% complete\nINFO:tensorflow:local_step=7100 global_step=7100 loss=19.0, 16.8% complete\nINFO:tensorflow:local_step=7110 global_step=7110 loss=18.8, 16.8% complete\nINFO:tensorflow:local_step=7120 global_step=7120 loss=18.8, 16.8% complete\nINFO:tensorflow:local_step=7130 global_step=7130 loss=18.4, 16.8% complete\nINFO:tensorflow:local_step=7140 global_step=7140 loss=18.6, 16.9% complete\nINFO:tensorflow:local_step=7150 global_step=7150 loss=17.5, 16.9% complete\nINFO:tensorflow:local_step=7160 global_step=7160 loss=19.0, 16.9% complete\nINFO:tensorflow:local_step=7170 global_step=7170 loss=17.5, 16.9% complete\nINFO:tensorflow:local_step=7180 global_step=7180 loss=20.1, 17.0% complete\nINFO:tensorflow:local_step=7190 global_step=7190 loss=19.4, 17.0% complete\nINFO:tensorflow:local_step=7200 global_step=7200 loss=373.0, 17.0% complete\nINFO:tensorflow:local_step=7210 global_step=7210 loss=19.3, 17.0% complete\nINFO:tensorflow:local_step=7220 global_step=7220 loss=18.1, 17.1% complete\nINFO:tensorflow:local_step=7230 global_step=7230 loss=19.4, 17.1% complete\nINFO:tensorflow:local_step=7240 global_step=7240 loss=18.6, 17.1% complete\nINFO:tensorflow:local_step=7250 global_step=7250 loss=18.7, 17.1% complete\nINFO:tensorflow:local_step=7260 global_step=7260 loss=17.7, 17.2% complete\nINFO:tensorflow:local_step=7270 global_step=7270 loss=19.4, 17.2% complete\nINFO:tensorflow:local_step=7280 global_step=7280 loss=18.2, 17.2% complete\nINFO:tensorflow:local_step=7290 global_step=7290 loss=18.4, 17.2% complete\nINFO:tensorflow:local_step=7300 global_step=7300 loss=19.7, 17.2% complete\nINFO:tensorflow:local_step=7310 global_step=7310 loss=19.5, 17.3% complete\nINFO:tensorflow:local_step=7320 global_step=7320 loss=18.4, 17.3% complete\nINFO:tensorflow:local_step=7330 global_step=7330 loss=18.8, 17.3% complete\nINFO:tensorflow:local_step=7340 global_step=7340 loss=17.9, 17.3% complete\nINFO:tensorflow:local_step=7350 global_step=7350 loss=18.5, 17.4% complete\nINFO:tensorflow:local_step=7360 global_step=7360 loss=19.0, 17.4% complete\nINFO:tensorflow:local_step=7370 global_step=7370 loss=18.6, 17.4% complete\nINFO:tensorflow:local_step=7380 global_step=7380 loss=17.8, 17.4% complete\nINFO:tensorflow:local_step=7390 global_step=7390 loss=19.2, 17.5% complete\nINFO:tensorflow:Recording summary at step 7393.\nINFO:tensorflow:global_step/sec: 123.316\nINFO:tensorflow:local_step=7400 global_step=7400 loss=19.7, 17.5% complete\nINFO:tensorflow:local_step=7410 global_step=7410 loss=18.2, 17.5% complete\nINFO:tensorflow:local_step=7420 global_step=7420 loss=18.7, 17.5% complete\nINFO:tensorflow:local_step=7430 global_step=7430 loss=17.7, 17.6% complete\nINFO:tensorflow:local_step=7440 global_step=7440 loss=19.8, 17.6% complete\nINFO:tensorflow:local_step=7450 global_step=7450 loss=18.0, 17.6% complete\nINFO:tensorflow:local_step=7460 global_step=7460 loss=18.7, 17.6% complete\nINFO:tensorflow:local_step=7470 global_step=7470 loss=17.8, 17.7% complete\nINFO:tensorflow:local_step=7480 global_step=7480 loss=17.9, 17.7% complete\nINFO:tensorflow:local_step=7490 global_step=7490 loss=18.7, 17.7% complete\nINFO:tensorflow:local_step=7500 
global_step=7500 loss=19.1, 17.7% complete\nINFO:tensorflow:local_step=7510 global_step=7510 loss=19.2, 17.7% complete\nINFO:tensorflow:local_step=7520 global_step=7520 loss=18.0, 17.8% complete\nINFO:tensorflow:local_step=7530 global_step=7530 loss=17.9, 17.8% complete\nINFO:tensorflow:local_step=7540 global_step=7540 loss=17.6, 17.8% complete\nINFO:tensorflow:local_step=7550 global_step=7550 loss=19.3, 17.8% complete\nINFO:tensorflow:local_step=7560 global_step=7560 loss=18.1, 17.9% complete\nINFO:tensorflow:local_step=7570 global_step=7570 loss=18.6, 17.9% complete\nINFO:tensorflow:local_step=7580 global_step=7580 loss=18.9, 17.9% complete\nINFO:tensorflow:local_step=7590 global_step=7590 loss=19.3, 17.9% complete\nINFO:tensorflow:local_step=7600 global_step=7600 loss=19.1, 18.0% complete\nINFO:tensorflow:local_step=7610 global_step=7610 loss=17.7, 18.0% complete\nINFO:tensorflow:local_step=7620 global_step=7620 loss=19.2, 18.0% complete\nINFO:tensorflow:local_step=7630 global_step=7630 loss=19.6, 18.0% complete\nINFO:tensorflow:local_step=7640 global_step=7640 loss=18.5, 18.1% complete\nINFO:tensorflow:local_step=7650 global_step=7650 loss=18.7, 18.1% complete\nINFO:tensorflow:local_step=7660 global_step=7660 loss=19.0, 18.1% complete\nINFO:tensorflow:local_step=7670 global_step=7670 loss=19.5, 18.1% complete\nINFO:tensorflow:local_step=7680 global_step=7680 loss=14.4, 18.1% complete\nINFO:tensorflow:local_step=7690 global_step=7690 loss=20.0, 18.2% complete\nINFO:tensorflow:local_step=7700 global_step=7700 loss=18.7, 18.2% complete\nINFO:tensorflow:local_step=7710 global_step=7710 loss=18.7, 18.2% complete\nINFO:tensorflow:local_step=7720 global_step=7720 loss=19.5, 18.2% complete\nINFO:tensorflow:local_step=7730 global_step=7730 loss=19.0, 18.3% complete\nINFO:tensorflow:local_step=7740 global_step=7740 loss=16.4, 18.3% complete\nINFO:tensorflow:local_step=7750 global_step=7750 loss=18.7, 18.3% complete\nINFO:tensorflow:local_step=7760 global_step=7760 loss=20.1, 18.3% complete\nINFO:tensorflow:local_step=7770 global_step=7770 loss=18.6, 18.4% complete\nINFO:tensorflow:local_step=7780 global_step=7780 loss=18.0, 18.4% complete\nINFO:tensorflow:local_step=7790 global_step=7790 loss=19.0, 18.4% complete\nINFO:tensorflow:local_step=7800 global_step=7800 loss=17.4, 18.4% complete\nINFO:tensorflow:local_step=7810 global_step=7810 loss=18.9, 18.5% complete\nINFO:tensorflow:local_step=7820 global_step=7820 loss=18.1, 18.5% complete\nINFO:tensorflow:local_step=7830 global_step=7830 loss=19.1, 18.5% complete\nINFO:tensorflow:local_step=7840 global_step=7840 loss=18.2, 18.5% complete\nINFO:tensorflow:local_step=7850 global_step=7850 loss=19.8, 18.5% complete\nINFO:tensorflow:local_step=7860 global_step=7860 loss=17.9, 18.6% complete\nINFO:tensorflow:local_step=7870 global_step=7870 loss=19.0, 18.6% complete\nINFO:tensorflow:local_step=7880 global_step=7880 loss=18.8, 18.6% complete\nINFO:tensorflow:local_step=7890 global_step=7890 loss=20.0, 18.6% complete\nINFO:tensorflow:local_step=7900 global_step=7900 loss=19.2, 18.7% complete\nINFO:tensorflow:local_step=7910 global_step=7910 loss=18.9, 18.7% complete\nINFO:tensorflow:local_step=7920 global_step=7920 loss=19.3, 18.7% complete\nINFO:tensorflow:local_step=7930 global_step=7930 loss=19.0, 18.7% complete\nINFO:tensorflow:local_step=7940 global_step=7940 loss=18.1, 18.8% complete\nINFO:tensorflow:local_step=7950 global_step=7950 loss=20.0, 18.8% complete\nINFO:tensorflow:local_step=7960 global_step=7960 loss=18.9, 18.8% 
complete\nINFO:tensorflow:local_step=7970 global_step=7970 loss=19.4, 18.8% complete\nINFO:tensorflow:local_step=7980 global_step=7980 loss=19.1, 18.9% complete\nINFO:tensorflow:local_step=7990 global_step=7990 loss=18.7, 18.9% complete\nINFO:tensorflow:local_step=8000 global_step=8000 loss=18.5, 18.9% complete\nINFO:tensorflow:local_step=8010 global_step=8010 loss=18.1, 18.9% complete\nINFO:tensorflow:local_step=8020 global_step=8020 loss=18.9, 19.0% complete\nINFO:tensorflow:local_step=8030 global_step=8030 loss=18.3, 19.0% complete\nINFO:tensorflow:local_step=8040 global_step=8040 loss=19.2, 19.0% complete\nINFO:tensorflow:local_step=8050 global_step=8050 loss=19.4, 19.0% complete\nINFO:tensorflow:local_step=8060 global_step=8060 loss=19.2, 19.0% complete\nINFO:tensorflow:local_step=8070 global_step=8070 loss=18.4, 19.1% complete\nINFO:tensorflow:local_step=8080 global_step=8080 loss=20.4, 19.1% complete\nINFO:tensorflow:local_step=8090 global_step=8090 loss=18.2, 19.1% complete\nINFO:tensorflow:local_step=8100 global_step=8100 loss=18.2, 19.1% complete\nINFO:tensorflow:local_step=8110 global_step=8110 loss=19.8, 19.2% complete\nINFO:tensorflow:local_step=8120 global_step=8120 loss=18.1, 19.2% complete\nINFO:tensorflow:local_step=8130 global_step=8130 loss=18.9, 19.2% complete\nINFO:tensorflow:local_step=8140 global_step=8140 loss=18.5, 19.2% complete\nINFO:tensorflow:local_step=8150 global_step=8150 loss=19.5, 19.3% complete\nINFO:tensorflow:local_step=8160 global_step=8160 loss=19.2, 19.3% complete\nINFO:tensorflow:local_step=8170 global_step=8170 loss=19.5, 19.3% complete\nINFO:tensorflow:local_step=8180 global_step=8180 loss=19.6, 19.3% complete\nINFO:tensorflow:local_step=8190 global_step=8190 loss=18.8, 19.4% complete\nINFO:tensorflow:local_step=8200 global_step=8200 loss=18.7, 19.4% complete\nINFO:tensorflow:local_step=8210 global_step=8210 loss=18.5, 19.4% complete\nINFO:tensorflow:local_step=8220 global_step=8220 loss=18.0, 19.4% complete\nINFO:tensorflow:local_step=8230 global_step=8230 loss=17.7, 19.4% complete\nINFO:tensorflow:local_step=8240 global_step=8240 loss=18.5, 19.5% complete\nINFO:tensorflow:local_step=8250 global_step=8250 loss=17.7, 19.5% complete\nINFO:tensorflow:local_step=8260 global_step=8260 loss=18.4, 19.5% complete\nINFO:tensorflow:local_step=8270 global_step=8270 loss=18.7, 19.5% complete\nINFO:tensorflow:local_step=8280 global_step=8280 loss=19.9, 19.6% complete\nINFO:tensorflow:local_step=8290 global_step=8290 loss=18.7, 19.6% complete\nINFO:tensorflow:local_step=8300 global_step=8300 loss=18.9, 19.6% complete\nINFO:tensorflow:local_step=8310 global_step=8310 loss=19.0, 19.6% complete\nINFO:tensorflow:local_step=8320 global_step=8320 loss=18.9, 19.7% complete\nINFO:tensorflow:local_step=8330 global_step=8330 loss=18.1, 19.7% complete\nINFO:tensorflow:local_step=8340 global_step=8340 loss=18.0, 19.7% complete\nINFO:tensorflow:local_step=8350 global_step=8350 loss=15.6, 19.7% complete\nINFO:tensorflow:local_step=8360 global_step=8360 loss=19.5, 19.8% complete\nINFO:tensorflow:local_step=8370 global_step=8370 loss=20.3, 19.8% complete\nINFO:tensorflow:local_step=8380 global_step=8380 loss=18.2, 19.8% complete\nINFO:tensorflow:local_step=8390 global_step=8390 loss=18.4, 19.8% complete\nINFO:tensorflow:local_step=8400 global_step=8400 loss=19.4, 19.8% complete\nINFO:tensorflow:local_step=8410 global_step=8410 loss=19.4, 19.9% complete\nINFO:tensorflow:local_step=8420 global_step=8420 loss=17.6, 19.9% complete\nINFO:tensorflow:local_step=8430 global_step=8430 
loss=19.0, 19.9% complete\nINFO:tensorflow:local_step=8440 global_step=8440 loss=19.2, 19.9% complete\nINFO:tensorflow:local_step=8450 global_step=8450 loss=298.8, 20.0% complete\nINFO:tensorflow:local_step=8460 global_step=8460 loss=18.6, 20.0% complete\nINFO:tensorflow:local_step=8470 global_step=8470 loss=18.1, 20.0% complete\nINFO:tensorflow:local_step=8480 global_step=8480 loss=18.1, 20.0% complete\nINFO:tensorflow:local_step=8490 global_step=8490 loss=13.3, 20.1% complete\nINFO:tensorflow:local_step=8500 global_step=8500 loss=17.9, 20.1% complete\nINFO:tensorflow:local_step=8510 global_step=8510 loss=21.1, 20.1% complete\nINFO:tensorflow:local_step=8520 global_step=8520 loss=18.5, 20.1% complete\nINFO:tensorflow:local_step=8530 global_step=8530 loss=18.0, 20.2% complete\nINFO:tensorflow:local_step=8540 global_step=8540 loss=17.9, 20.2% complete\nINFO:tensorflow:local_step=8550 global_step=8550 loss=18.0, 20.2% complete\nINFO:tensorflow:local_step=8560 global_step=8560 loss=15.3, 20.2% complete\nINFO:tensorflow:local_step=8570 global_step=8570 loss=18.8, 20.3% complete\nINFO:tensorflow:local_step=8580 global_step=8580 loss=17.9, 20.3% complete\nINFO:tensorflow:local_step=8590 global_step=8590 loss=17.8, 20.3% complete\nINFO:tensorflow:local_step=8600 global_step=8600 loss=18.4, 20.3% complete\nINFO:tensorflow:local_step=8610 global_step=8610 loss=18.7, 20.3% complete\nINFO:tensorflow:local_step=8620 global_step=8620 loss=18.6, 20.4% complete\nINFO:tensorflow:local_step=8630 global_step=8630 loss=18.5, 20.4% complete\nINFO:tensorflow:local_step=8640 global_step=8640 loss=14.9, 20.4% complete\nINFO:tensorflow:local_step=8650 global_step=8650 loss=21.3, 20.4% complete\nINFO:tensorflow:local_step=8660 global_step=8660 loss=18.4, 20.5% complete\nINFO:tensorflow:local_step=8670 global_step=8670 loss=18.2, 20.5% complete\nINFO:tensorflow:local_step=8680 global_step=8680 loss=18.8, 20.5% complete\nINFO:tensorflow:local_step=8690 global_step=8690 loss=18.3, 20.5% complete\nINFO:tensorflow:local_step=8700 global_step=8700 loss=16.9, 20.6% complete\nINFO:tensorflow:local_step=8710 global_step=8710 loss=14.8, 20.6% complete\nINFO:tensorflow:local_step=8720 global_step=8720 loss=18.4, 20.6% complete\nINFO:tensorflow:local_step=8730 global_step=8730 loss=19.4, 20.6% complete\nINFO:tensorflow:local_step=8740 global_step=8740 loss=17.9, 20.7% complete\nINFO:tensorflow:local_step=8750 global_step=8750 loss=18.5, 20.7% complete\nINFO:tensorflow:local_step=8760 global_step=8760 loss=19.2, 20.7% complete\nINFO:tensorflow:local_step=8770 global_step=8770 loss=18.4, 20.7% complete\nINFO:tensorflow:local_step=8780 global_step=8780 loss=19.2, 20.7% complete\nINFO:tensorflow:local_step=8790 global_step=8790 loss=19.7, 20.8% complete\nINFO:tensorflow:local_step=8800 global_step=8800 loss=19.2, 20.8% complete\nINFO:tensorflow:local_step=8810 global_step=8810 loss=18.3, 20.8% complete\nINFO:tensorflow:local_step=8820 global_step=8820 loss=18.2, 20.8% complete\nINFO:tensorflow:local_step=8830 global_step=8830 loss=18.0, 20.9% complete\nINFO:tensorflow:local_step=8840 global_step=8840 loss=18.3, 20.9% complete\nINFO:tensorflow:local_step=8850 global_step=8850 loss=18.5, 20.9% complete\nINFO:tensorflow:local_step=8860 global_step=8860 loss=17.9, 20.9% complete\nINFO:tensorflow:local_step=8870 global_step=8870 loss=18.4, 21.0% complete\nINFO:tensorflow:local_step=8880 global_step=8880 loss=17.8, 21.0% complete\nINFO:tensorflow:local_step=8890 global_step=8890 loss=18.4, 21.0% 
complete\nINFO:tensorflow:local_step=8900 global_step=8900 loss=18.1, 21.0% complete\nINFO:tensorflow:local_step=8910 global_step=8910 loss=20.0, 21.1% complete\nINFO:tensorflow:local_step=8920 global_step=8920 loss=18.4, 21.1% complete\nINFO:tensorflow:local_step=8930 global_step=8930 loss=18.0, 21.1% complete\nINFO:tensorflow:local_step=8940 global_step=8940 loss=18.3, 21.1% complete\nINFO:tensorflow:local_step=8950 global_step=8950 loss=17.8, 21.1% complete\nINFO:tensorflow:local_step=8960 global_step=8960 loss=18.2, 21.2% complete\nINFO:tensorflow:local_step=8970 global_step=8970 loss=18.2, 21.2% complete\nINFO:tensorflow:local_step=8980 global_step=8980 loss=18.1, 21.2% complete\nINFO:tensorflow:local_step=8990 global_step=8990 loss=19.0, 21.2% complete\nINFO:tensorflow:local_step=9000 global_step=9000 loss=18.2, 21.3% complete\nINFO:tensorflow:local_step=9010 global_step=9010 loss=19.3, 21.3% complete\nINFO:tensorflow:local_step=9020 global_step=9020 loss=18.4, 21.3% complete\nINFO:tensorflow:local_step=9030 global_step=9030 loss=18.0, 21.3% complete\nINFO:tensorflow:local_step=9040 global_step=9040 loss=17.7, 21.4% complete\nINFO:tensorflow:local_step=9050 global_step=9050 loss=18.6, 21.4% complete\nINFO:tensorflow:local_step=9060 global_step=9060 loss=19.8, 21.4% complete\nINFO:tensorflow:local_step=9070 global_step=9070 loss=18.7, 21.4% complete\nINFO:tensorflow:local_step=9080 global_step=9080 loss=18.6, 21.5% complete\nINFO:tensorflow:local_step=9090 global_step=9090 loss=19.9, 21.5% complete\nINFO:tensorflow:local_step=9100 global_step=9100 loss=18.4, 21.5% complete\nINFO:tensorflow:local_step=9110 global_step=9110 loss=17.8, 21.5% complete\nINFO:tensorflow:local_step=9120 global_step=9120 loss=18.3, 21.6% complete\nINFO:tensorflow:local_step=9130 global_step=9130 loss=18.1, 21.6% complete\nINFO:tensorflow:local_step=9140 global_step=9140 loss=18.1, 21.6% complete\nINFO:tensorflow:local_step=9150 global_step=9150 loss=19.4, 21.6% complete\nINFO:tensorflow:local_step=9160 global_step=9160 loss=19.0, 21.6% complete\nINFO:tensorflow:local_step=9170 global_step=9170 loss=14.3, 21.7% complete\nINFO:tensorflow:local_step=9180 global_step=9180 loss=19.3, 21.7% complete\nINFO:tensorflow:local_step=9190 global_step=9190 loss=18.5, 21.7% complete\nINFO:tensorflow:local_step=9200 global_step=9200 loss=18.5, 21.7% complete\nINFO:tensorflow:local_step=9210 global_step=9210 loss=18.2, 21.8% complete\nINFO:tensorflow:local_step=9220 global_step=9220 loss=18.3, 21.8% complete\nINFO:tensorflow:local_step=9230 global_step=9230 loss=20.1, 21.8% complete\nINFO:tensorflow:local_step=9240 global_step=9240 loss=19.0, 21.8% complete\nINFO:tensorflow:local_step=9250 global_step=9250 loss=18.6, 21.9% complete\nINFO:tensorflow:local_step=9260 global_step=9260 loss=19.0, 21.9% complete\nINFO:tensorflow:local_step=9270 global_step=9270 loss=18.6, 21.9% complete\nINFO:tensorflow:local_step=9280 global_step=9280 loss=17.8, 21.9% complete\nINFO:tensorflow:local_step=9290 global_step=9290 loss=17.9, 22.0% complete\nINFO:tensorflow:local_step=9300 global_step=9300 loss=17.6, 22.0% complete\nINFO:tensorflow:local_step=9310 global_step=9310 loss=277.3, 22.0% complete\nINFO:tensorflow:local_step=9320 global_step=9320 loss=19.4, 22.0% complete\nINFO:tensorflow:local_step=9330 global_step=9330 loss=18.2, 22.0% complete\nINFO:tensorflow:local_step=9340 global_step=9340 loss=19.2, 22.1% complete\nINFO:tensorflow:local_step=9350 global_step=9350 loss=19.5, 22.1% complete\nINFO:tensorflow:local_step=9360 
global_step=9360 loss=18.1, 22.1% complete\nINFO:tensorflow:local_step=9370 global_step=9370 loss=18.8, 22.1% complete\nINFO:tensorflow:local_step=9380 global_step=9380 loss=19.1, 22.2% complete\nINFO:tensorflow:local_step=9390 global_step=9390 loss=18.9, 22.2% complete\nINFO:tensorflow:local_step=9400 global_step=9400 loss=18.0, 22.2% complete\nINFO:tensorflow:local_step=9410 global_step=9410 loss=18.7, 22.2% complete\nINFO:tensorflow:local_step=9420 global_step=9420 loss=18.9, 22.3% complete\nINFO:tensorflow:local_step=9430 global_step=9430 loss=16.6, 22.3% complete\nINFO:tensorflow:local_step=9440 global_step=9440 loss=18.4, 22.3% complete\nINFO:tensorflow:local_step=9450 global_step=9450 loss=19.1, 22.3% complete\nINFO:tensorflow:local_step=9460 global_step=9460 loss=17.4, 22.4% complete\nINFO:tensorflow:local_step=9470 global_step=9470 loss=18.1, 22.4% complete\nINFO:tensorflow:local_step=9480 global_step=9480 loss=18.8, 22.4% complete\nINFO:tensorflow:local_step=9490 global_step=9490 loss=19.3, 22.4% complete\nINFO:tensorflow:local_step=9500 global_step=9500 loss=18.2, 22.4% complete\nINFO:tensorflow:local_step=9510 global_step=9510 loss=18.2, 22.5% complete\nINFO:tensorflow:local_step=9520 global_step=9520 loss=18.5, 22.5% complete\nINFO:tensorflow:local_step=9530 global_step=9530 loss=18.4, 22.5% complete\nINFO:tensorflow:local_step=9540 global_step=9540 loss=19.2, 22.5% complete\nINFO:tensorflow:local_step=9550 global_step=9550 loss=18.8, 22.6% complete\nINFO:tensorflow:local_step=9560 global_step=9560 loss=18.0, 22.6% complete\nINFO:tensorflow:local_step=9570 global_step=9570 loss=19.1, 22.6% complete\nINFO:tensorflow:local_step=9580 global_step=9580 loss=18.4, 22.6% complete\nINFO:tensorflow:local_step=9590 global_step=9590 loss=17.8, 22.7% complete\nINFO:tensorflow:local_step=9600 global_step=9600 loss=19.1, 22.7% complete\nINFO:tensorflow:local_step=9610 global_step=9610 loss=18.7, 22.7% complete\nINFO:tensorflow:local_step=9620 global_step=9620 loss=17.7, 22.7% complete\nINFO:tensorflow:local_step=9630 global_step=9630 loss=21.5, 22.8% complete\nINFO:tensorflow:local_step=9640 global_step=9640 loss=18.0, 22.8% complete\nINFO:tensorflow:local_step=9650 global_step=9650 loss=17.8, 22.8% complete\nINFO:tensorflow:local_step=9660 global_step=9660 loss=17.8, 22.8% complete\nINFO:tensorflow:local_step=9670 global_step=9670 loss=18.0, 22.8% complete\nINFO:tensorflow:local_step=9680 global_step=9680 loss=283.7, 22.9% complete\nINFO:tensorflow:local_step=9690 global_step=9690 loss=18.8, 22.9% complete\nINFO:tensorflow:local_step=9700 global_step=9700 loss=19.8, 22.9% complete\nINFO:tensorflow:local_step=9710 global_step=9710 loss=18.1, 22.9% complete\nINFO:tensorflow:local_step=9720 global_step=9720 loss=17.7, 23.0% complete\nINFO:tensorflow:local_step=9730 global_step=9730 loss=298.2, 23.0% complete\nINFO:tensorflow:local_step=9740 global_step=9740 loss=17.3, 23.0% complete\nINFO:tensorflow:local_step=9750 global_step=9750 loss=18.8, 23.0% complete\nINFO:tensorflow:local_step=9760 global_step=9760 loss=19.2, 23.1% complete\nINFO:tensorflow:local_step=9770 global_step=9770 loss=15.0, 23.1% complete\nINFO:tensorflow:local_step=9780 global_step=9780 loss=17.2, 23.1% complete\nINFO:tensorflow:local_step=9790 global_step=9790 loss=20.8, 23.1% complete\nINFO:tensorflow:local_step=9800 global_step=9800 loss=19.1, 23.2% complete\nINFO:tensorflow:local_step=9810 global_step=9810 loss=19.2, 23.2% complete\nINFO:tensorflow:local_step=9820 global_step=9820 loss=17.8, 23.2% 
complete\nINFO:tensorflow:local_step=9830 global_step=9830 loss=19.2, 23.2% complete\nINFO:tensorflow:local_step=9840 global_step=9840 loss=15.2, 23.3% complete\nINFO:tensorflow:local_step=9850 global_step=9850 loss=18.0, 23.3% complete\nINFO:tensorflow:local_step=9860 global_step=9860 loss=18.4, 23.3% complete\nINFO:tensorflow:local_step=9870 global_step=9870 loss=17.9, 23.3% complete\nINFO:tensorflow:local_step=9880 global_step=9880 loss=19.8, 23.3% complete\nINFO:tensorflow:local_step=9890 global_step=9890 loss=17.9, 23.4% complete\nINFO:tensorflow:local_step=9900 global_step=9900 loss=19.5, 23.4% complete\nINFO:tensorflow:local_step=9910 global_step=9910 loss=18.4, 23.4% complete\nINFO:tensorflow:local_step=9920 global_step=9920 loss=18.8, 23.4% complete\nINFO:tensorflow:local_step=9930 global_step=9930 loss=293.9, 23.5% complete\nINFO:tensorflow:local_step=9940 global_step=9940 loss=17.9, 23.5% complete\nINFO:tensorflow:local_step=9950 global_step=9950 loss=17.3, 23.5% complete\nINFO:tensorflow:local_step=9960 global_step=9960 loss=18.1, 23.5% complete\nINFO:tensorflow:local_step=9970 global_step=9970 loss=18.9, 23.6% complete\nINFO:tensorflow:local_step=9980 global_step=9980 loss=18.3, 23.6% complete\nINFO:tensorflow:local_step=9990 global_step=9990 loss=20.0, 23.6% complete\nINFO:tensorflow:local_step=10000 global_step=10000 loss=19.1, 23.6% complete\nINFO:tensorflow:local_step=10010 global_step=10010 loss=18.2, 23.7% complete\nINFO:tensorflow:local_step=10020 global_step=10020 loss=16.3, 23.7% complete\nINFO:tensorflow:local_step=10030 global_step=10030 loss=18.3, 23.7% complete\nINFO:tensorflow:local_step=10040 global_step=10040 loss=18.1, 23.7% complete\nINFO:tensorflow:local_step=10050 global_step=10050 loss=18.3, 23.7% complete\nINFO:tensorflow:local_step=10060 global_step=10060 loss=19.2, 23.8% complete\nINFO:tensorflow:local_step=10070 global_step=10070 loss=18.8, 23.8% complete\nINFO:tensorflow:local_step=10080 global_step=10080 loss=18.8, 23.8% complete\nINFO:tensorflow:local_step=10090 global_step=10090 loss=19.1, 23.8% complete\nINFO:tensorflow:local_step=10100 global_step=10100 loss=19.5, 23.9% complete\nINFO:tensorflow:local_step=10110 global_step=10110 loss=19.2, 23.9% complete\nINFO:tensorflow:local_step=10120 global_step=10120 loss=18.3, 23.9% complete\nINFO:tensorflow:local_step=10130 global_step=10130 loss=18.8, 23.9% complete\nINFO:tensorflow:local_step=10140 global_step=10140 loss=18.8, 24.0% complete\nINFO:tensorflow:local_step=10150 global_step=10150 loss=19.0, 24.0% complete\nINFO:tensorflow:local_step=10160 global_step=10160 loss=18.9, 24.0% complete\nINFO:tensorflow:local_step=10170 global_step=10170 loss=18.7, 24.0% complete\nINFO:tensorflow:local_step=10180 global_step=10180 loss=18.2, 24.1% complete\nINFO:tensorflow:local_step=10190 global_step=10190 loss=18.0, 24.1% complete\nINFO:tensorflow:local_step=10200 global_step=10200 loss=18.5, 24.1% complete\nINFO:tensorflow:local_step=10210 global_step=10210 loss=18.4, 24.1% complete\nINFO:tensorflow:local_step=10220 global_step=10220 loss=18.4, 24.1% complete\nINFO:tensorflow:local_step=10230 global_step=10230 loss=18.2, 24.2% complete\nINFO:tensorflow:local_step=10240 global_step=10240 loss=18.2, 24.2% complete\nINFO:tensorflow:local_step=10250 global_step=10250 loss=19.6, 24.2% complete\nINFO:tensorflow:local_step=10260 global_step=10260 loss=18.8, 24.2% complete\nINFO:tensorflow:local_step=10270 global_step=10270 loss=19.7, 24.3% complete\nINFO:tensorflow:local_step=10280 global_step=10280 loss=19.9, 24.3% 
complete\nINFO:tensorflow:local_step=10290 global_step=10290 loss=17.6, 24.3% complete\nINFO:tensorflow:local_step=10300 global_step=10300 loss=15.0, 24.3% complete\nINFO:tensorflow:local_step=10310 global_step=10310 loss=17.9, 24.4% complete\nINFO:tensorflow:local_step=10320 global_step=10320 loss=18.4, 24.4% complete\nINFO:tensorflow:local_step=10330 global_step=10330 loss=18.6, 24.4% complete\nINFO:tensorflow:local_step=10340 global_step=10340 loss=18.2, 24.4% complete\nINFO:tensorflow:local_step=10350 global_step=10350 loss=19.7, 24.5% complete\nINFO:tensorflow:local_step=10360 global_step=10360 loss=17.4, 24.5% complete\nINFO:tensorflow:local_step=10370 global_step=10370 loss=18.0, 24.5% complete\nINFO:tensorflow:local_step=10380 global_step=10380 loss=18.5, 24.5% complete\nINFO:tensorflow:local_step=10390 global_step=10390 loss=18.3, 24.6% complete\nINFO:tensorflow:local_step=10400 global_step=10400 loss=19.2, 24.6% complete\nINFO:tensorflow:local_step=10410 global_step=10410 loss=18.8, 24.6% complete\nINFO:tensorflow:local_step=10420 global_step=10420 loss=20.8, 24.6% complete\nINFO:tensorflow:local_step=10430 global_step=10430 loss=19.5, 24.6% complete\nINFO:tensorflow:local_step=10440 global_step=10440 loss=17.7, 24.7% complete\nINFO:tensorflow:local_step=10450 global_step=10450 loss=18.2, 24.7% complete\nINFO:tensorflow:local_step=10460 global_step=10460 loss=18.7, 24.7% complete\nINFO:tensorflow:local_step=10470 global_step=10470 loss=19.2, 24.7% complete\nINFO:tensorflow:local_step=10480 global_step=10480 loss=19.0, 24.8% complete\nINFO:tensorflow:local_step=10490 global_step=10490 loss=18.2, 24.8% complete\nINFO:tensorflow:local_step=10500 global_step=10500 loss=18.3, 24.8% complete\nINFO:tensorflow:local_step=10510 global_step=10510 loss=18.6, 24.8% complete\nINFO:tensorflow:local_step=10520 global_step=10520 loss=17.7, 24.9% complete\nINFO:tensorflow:local_step=10530 global_step=10530 loss=18.3, 24.9% complete\nINFO:tensorflow:local_step=10540 global_step=10540 loss=18.0, 24.9% complete\nINFO:tensorflow:local_step=10550 global_step=10550 loss=18.8, 24.9% complete\nINFO:tensorflow:local_step=10560 global_step=10560 loss=18.2, 25.0% complete\nINFO:tensorflow:local_step=10570 global_step=10570 loss=19.2, 25.0% complete\nINFO:tensorflow:local_step=10580 global_step=10580 loss=18.0, 25.0% complete\nINFO:tensorflow:local_step=10590 global_step=10590 loss=18.6, 25.0% complete\nINFO:tensorflow:local_step=10600 global_step=10600 loss=18.0, 25.0% complete\nINFO:tensorflow:local_step=10610 global_step=10610 loss=19.1, 25.1% complete\nINFO:tensorflow:local_step=10620 global_step=10620 loss=17.8, 25.1% complete\nINFO:tensorflow:local_step=10630 global_step=10630 loss=18.8, 25.1% complete\nINFO:tensorflow:local_step=10640 global_step=10640 loss=18.0, 25.1% complete\nINFO:tensorflow:local_step=10650 global_step=10650 loss=18.6, 25.2% complete\nINFO:tensorflow:local_step=10660 global_step=10660 loss=18.5, 25.2% complete\nINFO:tensorflow:local_step=10670 global_step=10670 loss=18.3, 25.2% complete\nINFO:tensorflow:local_step=10680 global_step=10680 loss=18.7, 25.2% complete\nINFO:tensorflow:local_step=10690 global_step=10690 loss=19.2, 25.3% complete\nINFO:tensorflow:local_step=10700 global_step=10700 loss=19.0, 25.3% complete\nINFO:tensorflow:local_step=10710 global_step=10710 loss=18.0, 25.3% complete\nINFO:tensorflow:local_step=10720 global_step=10720 loss=18.8, 25.3% complete\nINFO:tensorflow:local_step=10730 global_step=10730 loss=18.1, 25.4% complete\nINFO:tensorflow:local_step=10740 
global_step=10740 loss=18.8, 25.4% complete\nINFO:tensorflow:local_step=10750 global_step=10750 loss=19.3, 25.4% complete\nINFO:tensorflow:local_step=10760 global_step=10760 loss=17.8, 25.4% complete\nINFO:tensorflow:local_step=10770 global_step=10770 loss=18.6, 25.4% complete\nINFO:tensorflow:local_step=10780 global_step=10780 loss=18.3, 25.5% complete\nINFO:tensorflow:local_step=10790 global_step=10790 loss=19.3, 25.5% complete\nINFO:tensorflow:local_step=10800 global_step=10800 loss=18.0, 25.5% complete\nINFO:tensorflow:local_step=10810 global_step=10810 loss=18.0, 25.5% complete\nINFO:tensorflow:local_step=10820 global_step=10820 loss=17.5, 25.6% complete\nINFO:tensorflow:local_step=10830 global_step=10830 loss=18.7, 25.6% complete\nINFO:tensorflow:local_step=10840 global_step=10840 loss=18.6, 25.6% complete\nINFO:tensorflow:local_step=10850 global_step=10850 loss=18.1, 25.6% complete\nINFO:tensorflow:local_step=10860 global_step=10860 loss=17.9, 25.7% complete\nINFO:tensorflow:local_step=10870 global_step=10870 loss=18.5, 25.7% complete\nINFO:tensorflow:local_step=10880 global_step=10880 loss=18.0, 25.7% complete\nINFO:tensorflow:local_step=10890 global_step=10890 loss=19.0, 25.7% complete\nINFO:tensorflow:local_step=10900 global_step=10900 loss=17.7, 25.8% complete\nINFO:tensorflow:local_step=10910 global_step=10910 loss=17.8, 25.8% complete\nINFO:tensorflow:local_step=10920 global_step=10920 loss=17.7, 25.8% complete\nINFO:tensorflow:local_step=10930 global_step=10930 loss=16.9, 25.8% complete\nINFO:tensorflow:local_step=10940 global_step=10940 loss=19.3, 25.9% complete\nINFO:tensorflow:local_step=10950 global_step=10950 loss=18.5, 25.9% complete\nINFO:tensorflow:local_step=10960 global_step=10960 loss=17.7, 25.9% complete\nINFO:tensorflow:local_step=10970 global_step=10970 loss=18.6, 25.9% complete\nINFO:tensorflow:local_step=10980 global_step=10980 loss=18.1, 25.9% complete\nINFO:tensorflow:local_step=10990 global_step=10990 loss=16.3, 26.0% complete\nINFO:tensorflow:local_step=11000 global_step=11000 loss=17.6, 26.0% complete\nINFO:tensorflow:local_step=11010 global_step=11010 loss=18.8, 26.0% complete\nINFO:tensorflow:local_step=11020 global_step=11020 loss=14.9, 26.0% complete\nINFO:tensorflow:local_step=11030 global_step=11030 loss=18.4, 26.1% complete\nINFO:tensorflow:local_step=11040 global_step=11040 loss=18.1, 26.1% complete\nINFO:tensorflow:local_step=11050 global_step=11050 loss=15.2, 26.1% complete\nINFO:tensorflow:local_step=11060 global_step=11060 loss=18.6, 26.1% complete\nINFO:tensorflow:local_step=11070 global_step=11070 loss=18.0, 26.2% complete\nINFO:tensorflow:local_step=11080 global_step=11080 loss=14.0, 26.2% complete\nINFO:tensorflow:local_step=11090 global_step=11090 loss=16.0, 26.2% complete\nINFO:tensorflow:local_step=11100 global_step=11100 loss=19.0, 26.2% complete\nINFO:tensorflow:local_step=11110 global_step=11110 loss=18.8, 26.3% complete\nINFO:tensorflow:local_step=11120 global_step=11120 loss=18.0, 26.3% complete\nINFO:tensorflow:local_step=11130 global_step=11130 loss=18.2, 26.3% complete\nINFO:tensorflow:local_step=11140 global_step=11140 loss=17.9, 26.3% complete\nINFO:tensorflow:local_step=11150 global_step=11150 loss=16.9, 26.3% complete\nINFO:tensorflow:local_step=11160 global_step=11160 loss=18.1, 26.4% complete\nINFO:tensorflow:local_step=11170 global_step=11170 loss=18.0, 26.4% complete\nINFO:tensorflow:local_step=11180 global_step=11180 loss=18.1, 26.4% complete\nINFO:tensorflow:local_step=11190 global_step=11190 loss=18.5, 26.4% 
complete\nINFO:tensorflow:local_step=11200 global_step=11200 loss=21.4, 26.5% complete\n[... ~900 repetitive TensorFlow training-log lines truncated: loss remains mostly in the 14-22 range, with occasional transient spikes above 150, as training advances from 26.5% to 47.9% complete; a summary is recorded at step 14956 (global_step/sec: 126.049) ...]\nINFO:tensorflow:local_step=20270 global_step=20270 loss=18.6, 47.9% complete\nINFO:tensorflow:local_step=20280 
global_step=20280 loss=18.3, 47.9% complete\nINFO:tensorflow:local_step=20290 global_step=20290 loss=17.7, 47.9% complete\nINFO:tensorflow:local_step=20300 global_step=20300 loss=17.7, 48.0% complete\nINFO:tensorflow:local_step=20310 global_step=20310 loss=18.6, 48.0% complete\nINFO:tensorflow:local_step=20320 global_step=20320 loss=18.8, 48.0% complete\nINFO:tensorflow:local_step=20330 global_step=20330 loss=18.2, 48.0% complete\nINFO:tensorflow:local_step=20340 global_step=20340 loss=17.2, 48.1% complete\nINFO:tensorflow:local_step=20350 global_step=20350 loss=18.3, 48.1% complete\nINFO:tensorflow:local_step=20360 global_step=20360 loss=19.6, 48.1% complete\nINFO:tensorflow:local_step=20370 global_step=20370 loss=19.2, 48.1% complete\nINFO:tensorflow:local_step=20380 global_step=20380 loss=18.6, 48.2% complete\nINFO:tensorflow:local_step=20390 global_step=20390 loss=19.3, 48.2% complete\nINFO:tensorflow:local_step=20400 global_step=20400 loss=17.7, 48.2% complete\nINFO:tensorflow:local_step=20410 global_step=20410 loss=17.8, 48.2% complete\nINFO:tensorflow:local_step=20420 global_step=20420 loss=18.0, 48.3% complete\nINFO:tensorflow:local_step=20430 global_step=20430 loss=18.1, 48.3% complete\nINFO:tensorflow:local_step=20440 global_step=20440 loss=19.5, 48.3% complete\nINFO:tensorflow:local_step=20450 global_step=20450 loss=18.0, 48.3% complete\nINFO:tensorflow:local_step=20460 global_step=20460 loss=17.8, 48.3% complete\nINFO:tensorflow:local_step=20470 global_step=20470 loss=18.7, 48.4% complete\nINFO:tensorflow:local_step=20480 global_step=20480 loss=18.3, 48.4% complete\nINFO:tensorflow:local_step=20490 global_step=20490 loss=17.8, 48.4% complete\nINFO:tensorflow:local_step=20500 global_step=20500 loss=18.5, 48.4% complete\nINFO:tensorflow:local_step=20510 global_step=20510 loss=18.3, 48.5% complete\nINFO:tensorflow:local_step=20520 global_step=20520 loss=18.7, 48.5% complete\nINFO:tensorflow:local_step=20530 global_step=20530 loss=18.6, 48.5% complete\nINFO:tensorflow:local_step=20540 global_step=20540 loss=17.6, 48.5% complete\nINFO:tensorflow:local_step=20550 global_step=20550 loss=19.3, 48.6% complete\nINFO:tensorflow:local_step=20560 global_step=20560 loss=18.4, 48.6% complete\nINFO:tensorflow:local_step=20570 global_step=20570 loss=18.9, 48.6% complete\nINFO:tensorflow:local_step=20580 global_step=20580 loss=18.2, 48.6% complete\nINFO:tensorflow:local_step=20590 global_step=20590 loss=18.5, 48.7% complete\nINFO:tensorflow:local_step=20600 global_step=20600 loss=18.2, 48.7% complete\nINFO:tensorflow:local_step=20610 global_step=20610 loss=18.8, 48.7% complete\nINFO:tensorflow:local_step=20620 global_step=20620 loss=18.1, 48.7% complete\nINFO:tensorflow:local_step=20630 global_step=20630 loss=18.6, 48.7% complete\nINFO:tensorflow:local_step=20640 global_step=20640 loss=18.1, 48.8% complete\nINFO:tensorflow:local_step=20650 global_step=20650 loss=18.6, 48.8% complete\nINFO:tensorflow:local_step=20660 global_step=20660 loss=18.5, 48.8% complete\nINFO:tensorflow:local_step=20670 global_step=20670 loss=18.9, 48.8% complete\nINFO:tensorflow:local_step=20680 global_step=20680 loss=19.0, 48.9% complete\nINFO:tensorflow:local_step=20690 global_step=20690 loss=19.9, 48.9% complete\nINFO:tensorflow:local_step=20700 global_step=20700 loss=19.5, 48.9% complete\nINFO:tensorflow:local_step=20710 global_step=20710 loss=18.4, 48.9% complete\nINFO:tensorflow:local_step=20720 global_step=20720 loss=18.3, 49.0% complete\nINFO:tensorflow:local_step=20730 global_step=20730 loss=18.8, 49.0% 
complete\nINFO:tensorflow:local_step=20740 global_step=20740 loss=18.0, 49.0% complete\nINFO:tensorflow:local_step=20750 global_step=20750 loss=19.1, 49.0% complete\nINFO:tensorflow:local_step=20760 global_step=20760 loss=276.3, 49.1% complete\nINFO:tensorflow:local_step=20770 global_step=20770 loss=18.8, 49.1% complete\nINFO:tensorflow:local_step=20780 global_step=20780 loss=16.0, 49.1% complete\nINFO:tensorflow:local_step=20790 global_step=20790 loss=18.1, 49.1% complete\nINFO:tensorflow:local_step=20800 global_step=20800 loss=18.2, 49.1% complete\nINFO:tensorflow:local_step=20810 global_step=20810 loss=18.8, 49.2% complete\nINFO:tensorflow:local_step=20820 global_step=20820 loss=18.8, 49.2% complete\nINFO:tensorflow:local_step=20830 global_step=20830 loss=18.4, 49.2% complete\nINFO:tensorflow:local_step=20840 global_step=20840 loss=18.5, 49.2% complete\nINFO:tensorflow:local_step=20850 global_step=20850 loss=18.8, 49.3% complete\nINFO:tensorflow:local_step=20860 global_step=20860 loss=18.7, 49.3% complete\nINFO:tensorflow:local_step=20870 global_step=20870 loss=17.8, 49.3% complete\nINFO:tensorflow:local_step=20880 global_step=20880 loss=18.8, 49.3% complete\nINFO:tensorflow:local_step=20890 global_step=20890 loss=18.3, 49.4% complete\nINFO:tensorflow:local_step=20900 global_step=20900 loss=269.8, 49.4% complete\nINFO:tensorflow:local_step=20910 global_step=20910 loss=18.8, 49.4% complete\nINFO:tensorflow:local_step=20920 global_step=20920 loss=18.3, 49.4% complete\nINFO:tensorflow:local_step=20930 global_step=20930 loss=18.0, 49.5% complete\nINFO:tensorflow:local_step=20940 global_step=20940 loss=18.3, 49.5% complete\nINFO:tensorflow:local_step=20950 global_step=20950 loss=15.3, 49.5% complete\nINFO:tensorflow:local_step=20960 global_step=20960 loss=17.7, 49.5% complete\nINFO:tensorflow:local_step=20970 global_step=20970 loss=18.6, 49.6% complete\nINFO:tensorflow:local_step=20980 global_step=20980 loss=18.2, 49.6% complete\nINFO:tensorflow:local_step=20990 global_step=20990 loss=18.8, 49.6% complete\nINFO:tensorflow:local_step=21000 global_step=21000 loss=18.7, 49.6% complete\nINFO:tensorflow:local_step=21010 global_step=21010 loss=17.9, 49.6% complete\nINFO:tensorflow:local_step=21020 global_step=21020 loss=18.2, 49.7% complete\nINFO:tensorflow:local_step=21030 global_step=21030 loss=19.0, 49.7% complete\nINFO:tensorflow:local_step=21040 global_step=21040 loss=18.9, 49.7% complete\nINFO:tensorflow:local_step=21050 global_step=21050 loss=18.1, 49.7% complete\nINFO:tensorflow:local_step=21060 global_step=21060 loss=17.7, 49.8% complete\nINFO:tensorflow:local_step=21070 global_step=21070 loss=19.6, 49.8% complete\nINFO:tensorflow:local_step=21080 global_step=21080 loss=19.2, 49.8% complete\nINFO:tensorflow:local_step=21090 global_step=21090 loss=300.9, 49.8% complete\nINFO:tensorflow:local_step=21100 global_step=21100 loss=15.4, 49.9% complete\nINFO:tensorflow:local_step=21110 global_step=21110 loss=17.1, 49.9% complete\nINFO:tensorflow:local_step=21120 global_step=21120 loss=18.2, 49.9% complete\nINFO:tensorflow:local_step=21130 global_step=21130 loss=21.0, 49.9% complete\nINFO:tensorflow:local_step=21140 global_step=21140 loss=18.3, 50.0% complete\nINFO:tensorflow:local_step=21150 global_step=21150 loss=18.1, 50.0% complete\nINFO:tensorflow:local_step=21160 global_step=21160 loss=19.5, 50.0% complete\nINFO:tensorflow:local_step=21170 global_step=21170 loss=17.3, 50.0% complete\nINFO:tensorflow:local_step=21180 global_step=21180 loss=17.5, 50.0% 
complete\nINFO:tensorflow:local_step=21190 global_step=21190 loss=17.9, 50.1% complete\nINFO:tensorflow:local_step=21200 global_step=21200 loss=18.5, 50.1% complete\nINFO:tensorflow:local_step=21210 global_step=21210 loss=18.7, 50.1% complete\nINFO:tensorflow:local_step=21220 global_step=21220 loss=17.5, 50.1% complete\nINFO:tensorflow:local_step=21230 global_step=21230 loss=18.7, 50.2% complete\nINFO:tensorflow:local_step=21240 global_step=21240 loss=18.2, 50.2% complete\nINFO:tensorflow:local_step=21250 global_step=21250 loss=18.3, 50.2% complete\nINFO:tensorflow:local_step=21260 global_step=21260 loss=18.1, 50.2% complete\nINFO:tensorflow:local_step=21270 global_step=21270 loss=17.4, 50.3% complete\nINFO:tensorflow:local_step=21280 global_step=21280 loss=17.7, 50.3% complete\nINFO:tensorflow:local_step=21290 global_step=21290 loss=18.4, 50.3% complete\nINFO:tensorflow:local_step=21300 global_step=21300 loss=18.7, 50.3% complete\nINFO:tensorflow:local_step=21310 global_step=21310 loss=18.8, 50.4% complete\nINFO:tensorflow:local_step=21320 global_step=21320 loss=17.9, 50.4% complete\nINFO:tensorflow:local_step=21330 global_step=21330 loss=15.5, 50.4% complete\nINFO:tensorflow:local_step=21340 global_step=21340 loss=18.5, 50.4% complete\nINFO:tensorflow:local_step=21350 global_step=21350 loss=17.4, 50.4% complete\nINFO:tensorflow:local_step=21360 global_step=21360 loss=18.0, 50.5% complete\nINFO:tensorflow:local_step=21370 global_step=21370 loss=17.9, 50.5% complete\nINFO:tensorflow:local_step=21380 global_step=21380 loss=18.1, 50.5% complete\nINFO:tensorflow:local_step=21390 global_step=21390 loss=18.1, 50.5% complete\nINFO:tensorflow:local_step=21400 global_step=21400 loss=18.1, 50.6% complete\nINFO:tensorflow:local_step=21410 global_step=21410 loss=18.6, 50.6% complete\nINFO:tensorflow:local_step=21420 global_step=21420 loss=287.3, 50.6% complete\nINFO:tensorflow:local_step=21430 global_step=21430 loss=18.3, 50.6% complete\nINFO:tensorflow:local_step=21440 global_step=21440 loss=19.2, 50.7% complete\nINFO:tensorflow:local_step=21450 global_step=21450 loss=18.4, 50.7% complete\nINFO:tensorflow:local_step=21460 global_step=21460 loss=18.1, 50.7% complete\nINFO:tensorflow:local_step=21470 global_step=21470 loss=17.9, 50.7% complete\nINFO:tensorflow:local_step=21480 global_step=21480 loss=17.8, 50.8% complete\nINFO:tensorflow:local_step=21490 global_step=21490 loss=18.2, 50.8% complete\nINFO:tensorflow:local_step=21500 global_step=21500 loss=17.2, 50.8% complete\nINFO:tensorflow:local_step=21510 global_step=21510 loss=21.6, 50.8% complete\nINFO:tensorflow:local_step=21520 global_step=21520 loss=19.0, 50.9% complete\nINFO:tensorflow:local_step=21530 global_step=21530 loss=18.5, 50.9% complete\nINFO:tensorflow:local_step=21540 global_step=21540 loss=17.3, 50.9% complete\nINFO:tensorflow:local_step=21550 global_step=21550 loss=19.2, 50.9% complete\nINFO:tensorflow:local_step=21560 global_step=21560 loss=18.3, 50.9% complete\nINFO:tensorflow:local_step=21570 global_step=21570 loss=18.0, 51.0% complete\nINFO:tensorflow:local_step=21580 global_step=21580 loss=18.4, 51.0% complete\nINFO:tensorflow:local_step=21590 global_step=21590 loss=19.2, 51.0% complete\nINFO:tensorflow:local_step=21600 global_step=21600 loss=18.3, 51.0% complete\nINFO:tensorflow:local_step=21610 global_step=21610 loss=18.0, 51.1% complete\nINFO:tensorflow:local_step=21620 global_step=21620 loss=18.5, 51.1% complete\nINFO:tensorflow:local_step=21630 global_step=21630 loss=17.8, 51.1% complete\nINFO:tensorflow:local_step=21640 
global_step=21640 loss=18.9, 51.1% complete\nINFO:tensorflow:local_step=21650 global_step=21650 loss=18.6, 51.2% complete\nINFO:tensorflow:local_step=21660 global_step=21660 loss=17.8, 51.2% complete\nINFO:tensorflow:local_step=21670 global_step=21670 loss=17.8, 51.2% complete\nINFO:tensorflow:local_step=21680 global_step=21680 loss=17.8, 51.2% complete\nINFO:tensorflow:local_step=21690 global_step=21690 loss=18.2, 51.3% complete\nINFO:tensorflow:local_step=21700 global_step=21700 loss=18.6, 51.3% complete\nINFO:tensorflow:local_step=21710 global_step=21710 loss=19.1, 51.3% complete\nINFO:tensorflow:local_step=21720 global_step=21720 loss=17.7, 51.3% complete\nINFO:tensorflow:local_step=21730 global_step=21730 loss=18.8, 51.3% complete\nINFO:tensorflow:local_step=21740 global_step=21740 loss=17.8, 51.4% complete\nINFO:tensorflow:local_step=21750 global_step=21750 loss=18.3, 51.4% complete\nINFO:tensorflow:local_step=21760 global_step=21760 loss=18.2, 51.4% complete\nINFO:tensorflow:local_step=21770 global_step=21770 loss=16.2, 51.4% complete\nINFO:tensorflow:local_step=21780 global_step=21780 loss=18.5, 51.5% complete\nINFO:tensorflow:local_step=21790 global_step=21790 loss=18.5, 51.5% complete\nINFO:tensorflow:local_step=21800 global_step=21800 loss=18.6, 51.5% complete\nINFO:tensorflow:local_step=21810 global_step=21810 loss=18.2, 51.5% complete\nINFO:tensorflow:local_step=21820 global_step=21820 loss=18.1, 51.6% complete\nINFO:tensorflow:local_step=21830 global_step=21830 loss=18.6, 51.6% complete\nINFO:tensorflow:local_step=21840 global_step=21840 loss=18.4, 51.6% complete\nINFO:tensorflow:local_step=21850 global_step=21850 loss=17.8, 51.6% complete\nINFO:tensorflow:local_step=21860 global_step=21860 loss=19.0, 51.7% complete\nINFO:tensorflow:local_step=21870 global_step=21870 loss=17.8, 51.7% complete\nINFO:tensorflow:local_step=21880 global_step=21880 loss=18.9, 51.7% complete\nINFO:tensorflow:local_step=21890 global_step=21890 loss=18.5, 51.7% complete\nINFO:tensorflow:local_step=21900 global_step=21900 loss=18.1, 51.7% complete\nINFO:tensorflow:local_step=21910 global_step=21910 loss=18.8, 51.8% complete\nINFO:tensorflow:local_step=21920 global_step=21920 loss=17.7, 51.8% complete\nINFO:tensorflow:local_step=21930 global_step=21930 loss=17.8, 51.8% complete\nINFO:tensorflow:local_step=21940 global_step=21940 loss=18.0, 51.8% complete\nINFO:tensorflow:local_step=21950 global_step=21950 loss=19.3, 51.9% complete\nINFO:tensorflow:local_step=21960 global_step=21960 loss=19.2, 51.9% complete\nINFO:tensorflow:local_step=21970 global_step=21970 loss=19.8, 51.9% complete\nINFO:tensorflow:local_step=21980 global_step=21980 loss=18.3, 51.9% complete\nINFO:tensorflow:local_step=21990 global_step=21990 loss=16.2, 52.0% complete\nINFO:tensorflow:local_step=22000 global_step=22000 loss=15.8, 52.0% complete\nINFO:tensorflow:local_step=22010 global_step=22010 loss=16.6, 52.0% complete\nINFO:tensorflow:local_step=22020 global_step=22020 loss=18.3, 52.0% complete\nINFO:tensorflow:local_step=22030 global_step=22030 loss=18.1, 52.1% complete\nINFO:tensorflow:local_step=22040 global_step=22040 loss=18.8, 52.1% complete\nINFO:tensorflow:local_step=22050 global_step=22050 loss=17.9, 52.1% complete\nINFO:tensorflow:local_step=22060 global_step=22060 loss=19.4, 52.1% complete\nINFO:tensorflow:local_step=22070 global_step=22070 loss=17.6, 52.2% complete\nINFO:tensorflow:local_step=22080 global_step=22080 loss=18.6, 52.2% complete\nINFO:tensorflow:local_step=22090 global_step=22090 loss=19.3, 52.2% 
complete\nINFO:tensorflow:local_step=22100 global_step=22100 loss=18.1, 52.2% complete\nINFO:tensorflow:local_step=22110 global_step=22110 loss=19.9, 52.2% complete\nINFO:tensorflow:local_step=22120 global_step=22120 loss=18.5, 52.3% complete\nINFO:tensorflow:local_step=22130 global_step=22130 loss=17.9, 52.3% complete\nINFO:tensorflow:local_step=22140 global_step=22140 loss=19.7, 52.3% complete\nINFO:tensorflow:local_step=22150 global_step=22150 loss=17.7, 52.3% complete\nINFO:tensorflow:local_step=22160 global_step=22160 loss=18.1, 52.4% complete\nINFO:tensorflow:local_step=22170 global_step=22170 loss=19.1, 52.4% complete\nINFO:tensorflow:local_step=22180 global_step=22180 loss=18.2, 52.4% complete\nINFO:tensorflow:local_step=22190 global_step=22190 loss=17.8, 52.4% complete\nINFO:tensorflow:local_step=22200 global_step=22200 loss=19.1, 52.5% complete\nINFO:tensorflow:local_step=22210 global_step=22210 loss=18.2, 52.5% complete\nINFO:tensorflow:local_step=22220 global_step=22220 loss=18.5, 52.5% complete\nINFO:tensorflow:local_step=22230 global_step=22230 loss=19.5, 52.5% complete\nINFO:tensorflow:local_step=22240 global_step=22240 loss=18.3, 52.6% complete\nINFO:tensorflow:local_step=22250 global_step=22250 loss=18.2, 52.6% complete\nINFO:tensorflow:local_step=22260 global_step=22260 loss=18.4, 52.6% complete\nINFO:tensorflow:local_step=22270 global_step=22270 loss=18.6, 52.6% complete\nINFO:tensorflow:local_step=22280 global_step=22280 loss=17.7, 52.6% complete\nINFO:tensorflow:local_step=22290 global_step=22290 loss=18.0, 52.7% complete\nINFO:tensorflow:local_step=22300 global_step=22300 loss=18.6, 52.7% complete\nINFO:tensorflow:local_step=22310 global_step=22310 loss=17.9, 52.7% complete\nINFO:tensorflow:local_step=22320 global_step=22320 loss=19.2, 52.7% complete\nINFO:tensorflow:local_step=22330 global_step=22330 loss=18.1, 52.8% complete\nINFO:tensorflow:local_step=22340 global_step=22340 loss=17.9, 52.8% complete\nINFO:tensorflow:local_step=22350 global_step=22350 loss=18.6, 52.8% complete\nINFO:tensorflow:local_step=22360 global_step=22360 loss=17.8, 52.8% complete\nINFO:tensorflow:local_step=22370 global_step=22370 loss=19.4, 52.9% complete\nINFO:tensorflow:local_step=22380 global_step=22380 loss=17.9, 52.9% complete\nINFO:tensorflow:local_step=22390 global_step=22390 loss=18.4, 52.9% complete\nINFO:tensorflow:local_step=22400 global_step=22400 loss=17.9, 52.9% complete\nINFO:tensorflow:local_step=22410 global_step=22410 loss=19.8, 53.0% complete\nINFO:tensorflow:local_step=22420 global_step=22420 loss=17.8, 53.0% complete\nINFO:tensorflow:local_step=22430 global_step=22430 loss=18.3, 53.0% complete\nINFO:tensorflow:local_step=22440 global_step=22440 loss=232.0, 53.0% complete\nINFO:tensorflow:local_step=22450 global_step=22450 loss=229.4, 53.0% complete\nINFO:tensorflow:local_step=22460 global_step=22460 loss=18.5, 53.1% complete\nINFO:tensorflow:local_step=22470 global_step=22470 loss=18.9, 53.1% complete\nINFO:tensorflow:Recording summary at step 22475.\nINFO:tensorflow:global_step/sec: 125.318\nINFO:tensorflow:local_step=22480 global_step=22480 loss=18.2, 53.1% complete\nINFO:tensorflow:local_step=22490 global_step=22490 loss=19.2, 53.1% complete\nINFO:tensorflow:local_step=22500 global_step=22500 loss=18.9, 53.2% complete\nINFO:tensorflow:local_step=22510 global_step=22510 loss=19.0, 53.2% complete\nINFO:tensorflow:local_step=22520 global_step=22520 loss=19.1, 53.2% complete\nINFO:tensorflow:local_step=22530 global_step=22530 loss=18.1, 53.2% 
complete\nINFO:tensorflow:local_step=22540 global_step=22540 loss=18.7, 53.3% complete\nINFO:tensorflow:local_step=22550 global_step=22550 loss=17.9, 53.3% complete\nINFO:tensorflow:local_step=22560 global_step=22560 loss=19.1, 53.3% complete\nINFO:tensorflow:local_step=22570 global_step=22570 loss=19.1, 53.3% complete\nINFO:tensorflow:local_step=22580 global_step=22580 loss=18.4, 53.4% complete\nINFO:tensorflow:local_step=22590 global_step=22590 loss=19.1, 53.4% complete\nINFO:tensorflow:local_step=22600 global_step=22600 loss=267.8, 53.4% complete\nINFO:tensorflow:local_step=22610 global_step=22610 loss=256.1, 53.4% complete\nINFO:tensorflow:local_step=22620 global_step=22620 loss=18.0, 53.4% complete\nINFO:tensorflow:local_step=22630 global_step=22630 loss=18.7, 53.5% complete\nINFO:tensorflow:local_step=22640 global_step=22640 loss=18.3, 53.5% complete\nINFO:tensorflow:local_step=22650 global_step=22650 loss=18.3, 53.5% complete\nINFO:tensorflow:local_step=22660 global_step=22660 loss=18.0, 53.5% complete\nINFO:tensorflow:local_step=22670 global_step=22670 loss=255.1, 53.6% complete\nINFO:tensorflow:local_step=22680 global_step=22680 loss=17.9, 53.6% complete\nINFO:tensorflow:local_step=22690 global_step=22690 loss=18.1, 53.6% complete\nINFO:tensorflow:local_step=22700 global_step=22700 loss=15.1, 53.6% complete\nINFO:tensorflow:local_step=22710 global_step=22710 loss=18.3, 53.7% complete\nINFO:tensorflow:local_step=22720 global_step=22720 loss=18.8, 53.7% complete\nINFO:tensorflow:local_step=22730 global_step=22730 loss=18.5, 53.7% complete\nINFO:tensorflow:local_step=22740 global_step=22740 loss=18.3, 53.7% complete\nINFO:tensorflow:local_step=22750 global_step=22750 loss=19.0, 53.8% complete\nINFO:tensorflow:local_step=22760 global_step=22760 loss=18.1, 53.8% complete\nINFO:tensorflow:local_step=22770 global_step=22770 loss=18.0, 53.8% complete\nINFO:tensorflow:local_step=22780 global_step=22780 loss=18.6, 53.8% complete\nINFO:tensorflow:local_step=22790 global_step=22790 loss=18.9, 53.9% complete\nINFO:tensorflow:local_step=22800 global_step=22800 loss=18.4, 53.9% complete\nINFO:tensorflow:local_step=22810 global_step=22810 loss=18.2, 53.9% complete\nINFO:tensorflow:local_step=22820 global_step=22820 loss=18.7, 53.9% complete\nINFO:tensorflow:local_step=22830 global_step=22830 loss=19.0, 53.9% complete\nINFO:tensorflow:local_step=22840 global_step=22840 loss=19.9, 54.0% complete\nINFO:tensorflow:local_step=22850 global_step=22850 loss=18.5, 54.0% complete\nINFO:tensorflow:local_step=22860 global_step=22860 loss=18.3, 54.0% complete\nINFO:tensorflow:local_step=22870 global_step=22870 loss=18.4, 54.0% complete\nINFO:tensorflow:local_step=22880 global_step=22880 loss=18.9, 54.1% complete\nINFO:tensorflow:local_step=22890 global_step=22890 loss=18.4, 54.1% complete\nINFO:tensorflow:local_step=22900 global_step=22900 loss=18.3, 54.1% complete\nINFO:tensorflow:local_step=22910 global_step=22910 loss=18.0, 54.1% complete\nINFO:tensorflow:local_step=22920 global_step=22920 loss=17.6, 54.2% complete\nINFO:tensorflow:local_step=22930 global_step=22930 loss=18.3, 54.2% complete\nINFO:tensorflow:local_step=22940 global_step=22940 loss=18.1, 54.2% complete\nINFO:tensorflow:local_step=22950 global_step=22950 loss=19.7, 54.2% complete\nINFO:tensorflow:local_step=22960 global_step=22960 loss=18.5, 54.3% complete\nINFO:tensorflow:local_step=22970 global_step=22970 loss=18.3, 54.3% complete\nINFO:tensorflow:local_step=22980 global_step=22980 loss=18.9, 54.3% 
complete\nINFO:tensorflow:local_step=22990 global_step=22990 loss=18.9, 54.3% complete\nINFO:tensorflow:local_step=23000 global_step=23000 loss=19.6, 54.3% complete\nINFO:tensorflow:local_step=23010 global_step=23010 loss=17.8, 54.4% complete\nINFO:tensorflow:local_step=23020 global_step=23020 loss=18.7, 54.4% complete\nINFO:tensorflow:local_step=23030 global_step=23030 loss=19.5, 54.4% complete\nINFO:tensorflow:local_step=23040 global_step=23040 loss=18.4, 54.4% complete\nINFO:tensorflow:local_step=23050 global_step=23050 loss=18.1, 54.5% complete\nINFO:tensorflow:local_step=23060 global_step=23060 loss=18.5, 54.5% complete\nINFO:tensorflow:local_step=23070 global_step=23070 loss=18.8, 54.5% complete\nINFO:tensorflow:local_step=23080 global_step=23080 loss=18.6, 54.5% complete\nINFO:tensorflow:local_step=23090 global_step=23090 loss=18.2, 54.6% complete\nINFO:tensorflow:local_step=23100 global_step=23100 loss=17.7, 54.6% complete\nINFO:tensorflow:local_step=23110 global_step=23110 loss=18.6, 54.6% complete\nINFO:tensorflow:local_step=23120 global_step=23120 loss=17.8, 54.6% complete\nINFO:tensorflow:local_step=23130 global_step=23130 loss=19.6, 54.7% complete\nINFO:tensorflow:local_step=23140 global_step=23140 loss=17.7, 54.7% complete\nINFO:tensorflow:local_step=23150 global_step=23150 loss=18.0, 54.7% complete\nINFO:tensorflow:local_step=23160 global_step=23160 loss=18.5, 54.7% complete\nINFO:tensorflow:local_step=23170 global_step=23170 loss=18.9, 54.7% complete\nINFO:tensorflow:local_step=23180 global_step=23180 loss=17.7, 54.8% complete\nINFO:tensorflow:local_step=23190 global_step=23190 loss=17.6, 54.8% complete\nINFO:tensorflow:local_step=23200 global_step=23200 loss=18.0, 54.8% complete\nINFO:tensorflow:local_step=23210 global_step=23210 loss=18.0, 54.8% complete\nINFO:tensorflow:local_step=23220 global_step=23220 loss=18.2, 54.9% complete\nINFO:tensorflow:local_step=23230 global_step=23230 loss=15.2, 54.9% complete\nINFO:tensorflow:local_step=23240 global_step=23240 loss=18.4, 54.9% complete\nINFO:tensorflow:local_step=23250 global_step=23250 loss=18.3, 54.9% complete\nINFO:tensorflow:local_step=23260 global_step=23260 loss=17.7, 55.0% complete\nINFO:tensorflow:local_step=23270 global_step=23270 loss=18.8, 55.0% complete\nINFO:tensorflow:local_step=23280 global_step=23280 loss=18.1, 55.0% complete\nINFO:tensorflow:local_step=23290 global_step=23290 loss=18.1, 55.0% complete\nINFO:tensorflow:local_step=23300 global_step=23300 loss=17.9, 55.1% complete\nINFO:tensorflow:local_step=23310 global_step=23310 loss=18.3, 55.1% complete\nINFO:tensorflow:local_step=23320 global_step=23320 loss=17.8, 55.1% complete\nINFO:tensorflow:local_step=23330 global_step=23330 loss=19.1, 55.1% complete\nINFO:tensorflow:local_step=23340 global_step=23340 loss=18.1, 55.2% complete\nINFO:tensorflow:local_step=23350 global_step=23350 loss=18.6, 55.2% complete\nINFO:tensorflow:local_step=23360 global_step=23360 loss=17.8, 55.2% complete\nINFO:tensorflow:local_step=23370 global_step=23370 loss=18.9, 55.2% complete\nINFO:tensorflow:local_step=23380 global_step=23380 loss=18.4, 55.2% complete\nINFO:tensorflow:local_step=23390 global_step=23390 loss=15.1, 55.3% complete\nINFO:tensorflow:local_step=23400 global_step=23400 loss=18.0, 55.3% complete\nINFO:tensorflow:local_step=23410 global_step=23410 loss=18.7, 55.3% complete\nINFO:tensorflow:local_step=23420 global_step=23420 loss=18.1, 55.3% complete\nINFO:tensorflow:local_step=23430 global_step=23430 loss=18.5, 55.4% complete\nINFO:tensorflow:local_step=23440 
global_step=23440 loss=18.6, 55.4% complete\nINFO:tensorflow:local_step=23450 global_step=23450 loss=17.4, 55.4% complete\nINFO:tensorflow:local_step=23460 global_step=23460 loss=17.9, 55.4% complete\nINFO:tensorflow:local_step=23470 global_step=23470 loss=17.8, 55.5% complete\nINFO:tensorflow:local_step=23480 global_step=23480 loss=17.1, 55.5% complete\nINFO:tensorflow:local_step=23490 global_step=23490 loss=18.6, 55.5% complete\nINFO:tensorflow:local_step=23500 global_step=23500 loss=18.1, 55.5% complete\nINFO:tensorflow:local_step=23510 global_step=23510 loss=18.5, 55.6% complete\nINFO:tensorflow:local_step=23520 global_step=23520 loss=18.2, 55.6% complete\nINFO:tensorflow:local_step=23530 global_step=23530 loss=17.9, 55.6% complete\nINFO:tensorflow:local_step=23540 global_step=23540 loss=19.2, 55.6% complete\nINFO:tensorflow:local_step=23550 global_step=23550 loss=18.1, 55.6% complete\nINFO:tensorflow:local_step=23560 global_step=23560 loss=19.1, 55.7% complete\nINFO:tensorflow:local_step=23570 global_step=23570 loss=18.4, 55.7% complete\nINFO:tensorflow:local_step=23580 global_step=23580 loss=18.4, 55.7% complete\nINFO:tensorflow:local_step=23590 global_step=23590 loss=18.2, 55.7% complete\nINFO:tensorflow:local_step=23600 global_step=23600 loss=17.9, 55.8% complete\nINFO:tensorflow:local_step=23610 global_step=23610 loss=19.2, 55.8% complete\nINFO:tensorflow:local_step=23620 global_step=23620 loss=19.2, 55.8% complete\nINFO:tensorflow:local_step=23630 global_step=23630 loss=19.0, 55.8% complete\nINFO:tensorflow:local_step=23640 global_step=23640 loss=17.7, 55.9% complete\nINFO:tensorflow:local_step=23650 global_step=23650 loss=18.6, 55.9% complete\nINFO:tensorflow:local_step=23660 global_step=23660 loss=18.2, 55.9% complete\nINFO:tensorflow:local_step=23670 global_step=23670 loss=15.1, 55.9% complete\nINFO:tensorflow:local_step=23680 global_step=23680 loss=18.0, 56.0% complete\nINFO:tensorflow:local_step=23690 global_step=23690 loss=18.9, 56.0% complete\nINFO:tensorflow:local_step=23700 global_step=23700 loss=17.9, 56.0% complete\nINFO:tensorflow:local_step=23710 global_step=23710 loss=17.6, 56.0% complete\nINFO:tensorflow:local_step=23720 global_step=23720 loss=17.4, 56.0% complete\nINFO:tensorflow:local_step=23730 global_step=23730 loss=18.2, 56.1% complete\nINFO:tensorflow:local_step=23740 global_step=23740 loss=18.7, 56.1% complete\nINFO:tensorflow:local_step=23750 global_step=23750 loss=18.9, 56.1% complete\nINFO:tensorflow:local_step=23760 global_step=23760 loss=18.8, 56.1% complete\nINFO:tensorflow:local_step=23770 global_step=23770 loss=18.1, 56.2% complete\nINFO:tensorflow:local_step=23780 global_step=23780 loss=17.6, 56.2% complete\nINFO:tensorflow:local_step=23790 global_step=23790 loss=17.7, 56.2% complete\nINFO:tensorflow:local_step=23800 global_step=23800 loss=19.1, 56.2% complete\nINFO:tensorflow:local_step=23810 global_step=23810 loss=18.8, 56.3% complete\nINFO:tensorflow:local_step=23820 global_step=23820 loss=19.0, 56.3% complete\nINFO:tensorflow:local_step=23830 global_step=23830 loss=17.9, 56.3% complete\nINFO:tensorflow:local_step=23840 global_step=23840 loss=17.9, 56.3% complete\nINFO:tensorflow:local_step=23850 global_step=23850 loss=18.1, 56.4% complete\nINFO:tensorflow:local_step=23860 global_step=23860 loss=17.5, 56.4% complete\nINFO:tensorflow:local_step=23870 global_step=23870 loss=18.5, 56.4% complete\nINFO:tensorflow:local_step=23880 global_step=23880 loss=19.0, 56.4% complete\nINFO:tensorflow:local_step=23890 global_step=23890 loss=15.1, 56.5% 
complete\nINFO:tensorflow:local_step=23900 global_step=23900 loss=17.4, 56.5% complete\nINFO:tensorflow:local_step=23910 global_step=23910 loss=18.1, 56.5% complete\nINFO:tensorflow:local_step=23920 global_step=23920 loss=18.3, 56.5% complete\nINFO:tensorflow:local_step=23930 global_step=23930 loss=20.8, 56.5% complete\nINFO:tensorflow:local_step=23940 global_step=23940 loss=18.0, 56.6% complete\nINFO:tensorflow:local_step=23950 global_step=23950 loss=20.5, 56.6% complete\nINFO:tensorflow:local_step=23960 global_step=23960 loss=18.4, 56.6% complete\nINFO:tensorflow:local_step=23970 global_step=23970 loss=18.7, 56.6% complete\nINFO:tensorflow:local_step=23980 global_step=23980 loss=18.8, 56.7% complete\nINFO:tensorflow:local_step=23990 global_step=23990 loss=18.7, 56.7% complete\nINFO:tensorflow:local_step=24000 global_step=24000 loss=18.6, 56.7% complete\nINFO:tensorflow:local_step=24010 global_step=24010 loss=18.4, 56.7% complete\nINFO:tensorflow:local_step=24020 global_step=24020 loss=19.2, 56.8% complete\nINFO:tensorflow:local_step=24030 global_step=24030 loss=18.1, 56.8% complete\nINFO:tensorflow:local_step=24040 global_step=24040 loss=18.5, 56.8% complete\nINFO:tensorflow:local_step=24050 global_step=24050 loss=18.0, 56.8% complete\nINFO:tensorflow:local_step=24060 global_step=24060 loss=18.1, 56.9% complete\nINFO:tensorflow:local_step=24070 global_step=24070 loss=16.9, 56.9% complete\nINFO:tensorflow:local_step=24080 global_step=24080 loss=19.0, 56.9% complete\nINFO:tensorflow:local_step=24090 global_step=24090 loss=18.8, 56.9% complete\nINFO:tensorflow:local_step=24100 global_step=24100 loss=17.6, 56.9% complete\nINFO:tensorflow:local_step=24110 global_step=24110 loss=18.2, 57.0% complete\nINFO:tensorflow:local_step=24120 global_step=24120 loss=17.4, 57.0% complete\nINFO:tensorflow:local_step=24130 global_step=24130 loss=18.1, 57.0% complete\nINFO:tensorflow:local_step=24140 global_step=24140 loss=17.2, 57.0% complete\nINFO:tensorflow:local_step=24150 global_step=24150 loss=18.2, 57.1% complete\nINFO:tensorflow:local_step=24160 global_step=24160 loss=17.5, 57.1% complete\nINFO:tensorflow:local_step=24170 global_step=24170 loss=18.8, 57.1% complete\nINFO:tensorflow:local_step=24180 global_step=24180 loss=18.3, 57.1% complete\nINFO:tensorflow:local_step=24190 global_step=24190 loss=17.7, 57.2% complete\nINFO:tensorflow:local_step=24200 global_step=24200 loss=19.3, 57.2% complete\nINFO:tensorflow:local_step=24210 global_step=24210 loss=18.4, 57.2% complete\nINFO:tensorflow:local_step=24220 global_step=24220 loss=20.1, 57.2% complete\nINFO:tensorflow:local_step=24230 global_step=24230 loss=17.9, 57.3% complete\nINFO:tensorflow:local_step=24240 global_step=24240 loss=17.4, 57.3% complete\nINFO:tensorflow:local_step=24250 global_step=24250 loss=18.2, 57.3% complete\nINFO:tensorflow:local_step=24260 global_step=24260 loss=18.8, 57.3% complete\nINFO:tensorflow:local_step=24270 global_step=24270 loss=17.9, 57.3% complete\nINFO:tensorflow:local_step=24280 global_step=24280 loss=18.9, 57.4% complete\nINFO:tensorflow:local_step=24290 global_step=24290 loss=295.1, 57.4% complete\nINFO:tensorflow:local_step=24300 global_step=24300 loss=19.2, 57.4% complete\nINFO:tensorflow:local_step=24310 global_step=24310 loss=17.7, 57.4% complete\nINFO:tensorflow:local_step=24320 global_step=24320 loss=18.1, 57.5% complete\nINFO:tensorflow:local_step=24330 global_step=24330 loss=18.5, 57.5% complete\nINFO:tensorflow:local_step=24340 global_step=24340 loss=18.6, 57.5% complete\nINFO:tensorflow:local_step=24350 
global_step=24350 loss=15.5, 57.5% complete\nINFO:tensorflow:local_step=24360 global_step=24360 loss=19.3, 57.6% complete\nINFO:tensorflow:local_step=24370 global_step=24370 loss=15.5, 57.6% complete\nINFO:tensorflow:local_step=24380 global_step=24380 loss=17.8, 57.6% complete\nINFO:tensorflow:local_step=24390 global_step=24390 loss=18.3, 57.6% complete\nINFO:tensorflow:local_step=24400 global_step=24400 loss=17.5, 57.7% complete\nINFO:tensorflow:local_step=24410 global_step=24410 loss=17.4, 57.7% complete\nINFO:tensorflow:local_step=24420 global_step=24420 loss=18.4, 57.7% complete\nINFO:tensorflow:local_step=24430 global_step=24430 loss=18.8, 57.7% complete\nINFO:tensorflow:local_step=24440 global_step=24440 loss=18.7, 57.8% complete\nINFO:tensorflow:local_step=24450 global_step=24450 loss=18.1, 57.8% complete\nINFO:tensorflow:local_step=24460 global_step=24460 loss=18.9, 57.8% complete\nINFO:tensorflow:local_step=24470 global_step=24470 loss=18.5, 57.8% complete\nINFO:tensorflow:local_step=24480 global_step=24480 loss=18.0, 57.8% complete\nINFO:tensorflow:local_step=24490 global_step=24490 loss=19.1, 57.9% complete\nINFO:tensorflow:local_step=24500 global_step=24500 loss=17.7, 57.9% complete\nINFO:tensorflow:local_step=24510 global_step=24510 loss=18.5, 57.9% complete\nINFO:tensorflow:local_step=24520 global_step=24520 loss=18.3, 57.9% complete\nINFO:tensorflow:local_step=24530 global_step=24530 loss=19.1, 58.0% complete\nINFO:tensorflow:local_step=24540 global_step=24540 loss=19.1, 58.0% complete\nINFO:tensorflow:local_step=24550 global_step=24550 loss=18.5, 58.0% complete\nINFO:tensorflow:local_step=24560 global_step=24560 loss=19.1, 58.0% complete\nINFO:tensorflow:local_step=24570 global_step=24570 loss=18.0, 58.1% complete\nINFO:tensorflow:local_step=24580 global_step=24580 loss=19.0, 58.1% complete\nINFO:tensorflow:local_step=24590 global_step=24590 loss=18.9, 58.1% complete\nINFO:tensorflow:local_step=24600 global_step=24600 loss=18.6, 58.1% complete\nINFO:tensorflow:local_step=24610 global_step=24610 loss=18.0, 58.2% complete\nINFO:tensorflow:local_step=24620 global_step=24620 loss=18.4, 58.2% complete\nINFO:tensorflow:local_step=24630 global_step=24630 loss=18.1, 58.2% complete\nINFO:tensorflow:local_step=24640 global_step=24640 loss=17.8, 58.2% complete\nINFO:tensorflow:local_step=24650 global_step=24650 loss=18.1, 58.2% complete\nINFO:tensorflow:local_step=24660 global_step=24660 loss=17.6, 58.3% complete\nINFO:tensorflow:local_step=24670 global_step=24670 loss=19.2, 58.3% complete\nINFO:tensorflow:local_step=24680 global_step=24680 loss=19.2, 58.3% complete\nINFO:tensorflow:local_step=24690 global_step=24690 loss=19.0, 58.3% complete\nINFO:tensorflow:local_step=24700 global_step=24700 loss=18.0, 58.4% complete\nINFO:tensorflow:local_step=24710 global_step=24710 loss=17.9, 58.4% complete\nINFO:tensorflow:local_step=24720 global_step=24720 loss=17.7, 58.4% complete\nINFO:tensorflow:local_step=24730 global_step=24730 loss=18.7, 58.4% complete\nINFO:tensorflow:local_step=24740 global_step=24740 loss=18.8, 58.5% complete\nINFO:tensorflow:local_step=24750 global_step=24750 loss=288.3, 58.5% complete\nINFO:tensorflow:local_step=24760 global_step=24760 loss=19.0, 58.5% complete\nINFO:tensorflow:local_step=24770 global_step=24770 loss=17.9, 58.5% complete\nINFO:tensorflow:local_step=24780 global_step=24780 loss=19.2, 58.6% complete\nINFO:tensorflow:local_step=24790 global_step=24790 loss=18.0, 58.6% complete\nINFO:tensorflow:local_step=24800 global_step=24800 loss=18.1, 58.6% 
complete\nINFO:tensorflow:local_step=24810 global_step=24810 loss=17.8, 58.6% complete\nINFO:tensorflow:local_step=24820 global_step=24820 loss=18.6, 58.6% complete\nINFO:tensorflow:local_step=24830 global_step=24830 loss=18.7, 58.7% complete\nINFO:tensorflow:local_step=24840 global_step=24840 loss=17.8, 58.7% complete\nINFO:tensorflow:local_step=24850 global_step=24850 loss=18.3, 58.7% complete\nINFO:tensorflow:local_step=24860 global_step=24860 loss=18.6, 58.7% complete\nINFO:tensorflow:local_step=24870 global_step=24870 loss=19.2, 58.8% complete\nINFO:tensorflow:local_step=24880 global_step=24880 loss=18.2, 58.8% complete\nINFO:tensorflow:local_step=24890 global_step=24890 loss=17.5, 58.8% complete\nINFO:tensorflow:local_step=24900 global_step=24900 loss=18.1, 58.8% complete\nINFO:tensorflow:local_step=24910 global_step=24910 loss=18.1, 58.9% complete\nINFO:tensorflow:local_step=24920 global_step=24920 loss=18.8, 58.9% complete\nINFO:tensorflow:local_step=24930 global_step=24930 loss=17.6, 58.9% complete\nINFO:tensorflow:local_step=24940 global_step=24940 loss=17.5, 58.9% complete\nINFO:tensorflow:local_step=24950 global_step=24950 loss=18.5, 59.0% complete\nINFO:tensorflow:local_step=24960 global_step=24960 loss=18.9, 59.0% complete\nINFO:tensorflow:local_step=24970 global_step=24970 loss=19.4, 59.0% complete\nINFO:tensorflow:local_step=24980 global_step=24980 loss=18.7, 59.0% complete\nINFO:tensorflow:local_step=24990 global_step=24990 loss=17.7, 59.1% complete\nINFO:tensorflow:local_step=25000 global_step=25000 loss=18.7, 59.1% complete\nINFO:tensorflow:local_step=25010 global_step=25010 loss=18.4, 59.1% complete\nINFO:tensorflow:local_step=25020 global_step=25020 loss=17.8, 59.1% complete\nINFO:tensorflow:local_step=25030 global_step=25030 loss=16.0, 59.1% complete\nINFO:tensorflow:local_step=25040 global_step=25040 loss=18.6, 59.2% complete\nINFO:tensorflow:local_step=25050 global_step=25050 loss=17.9, 59.2% complete\nINFO:tensorflow:local_step=25060 global_step=25060 loss=18.1, 59.2% complete\nINFO:tensorflow:local_step=25070 global_step=25070 loss=17.5, 59.2% complete\nINFO:tensorflow:local_step=25080 global_step=25080 loss=18.7, 59.3% complete\nINFO:tensorflow:local_step=25090 global_step=25090 loss=18.3, 59.3% complete\nINFO:tensorflow:local_step=25100 global_step=25100 loss=18.1, 59.3% complete\nINFO:tensorflow:local_step=25110 global_step=25110 loss=18.4, 59.3% complete\nINFO:tensorflow:local_step=25120 global_step=25120 loss=18.5, 59.4% complete\nINFO:tensorflow:local_step=25130 global_step=25130 loss=18.1, 59.4% complete\nINFO:tensorflow:local_step=25140 global_step=25140 loss=18.1, 59.4% complete\nINFO:tensorflow:local_step=25150 global_step=25150 loss=297.4, 59.4% complete\nINFO:tensorflow:local_step=25160 global_step=25160 loss=18.4, 59.5% complete\nINFO:tensorflow:local_step=25170 global_step=25170 loss=18.1, 59.5% complete\nINFO:tensorflow:local_step=25180 global_step=25180 loss=18.4, 59.5% complete\nINFO:tensorflow:local_step=25190 global_step=25190 loss=19.0, 59.5% complete\nINFO:tensorflow:local_step=25200 global_step=25200 loss=17.4, 59.5% complete\nINFO:tensorflow:local_step=25210 global_step=25210 loss=17.5, 59.6% complete\nINFO:tensorflow:local_step=25220 global_step=25220 loss=20.0, 59.6% complete\nINFO:tensorflow:local_step=25230 global_step=25230 loss=17.8, 59.6% complete\nINFO:tensorflow:local_step=25240 global_step=25240 loss=19.0, 59.6% complete\nINFO:tensorflow:local_step=25250 global_step=25250 loss=18.5, 59.7% complete\nINFO:tensorflow:local_step=25260 
global_step=25260 loss=17.8, 59.7% complete\nINFO:tensorflow:local_step=25270 global_step=25270 loss=18.5, 59.7% complete\nINFO:tensorflow:local_step=25280 global_step=25280 loss=18.6, 59.7% complete\nINFO:tensorflow:local_step=25290 global_step=25290 loss=18.8, 59.8% complete\nINFO:tensorflow:local_step=25300 global_step=25300 loss=18.5, 59.8% complete\nINFO:tensorflow:local_step=25310 global_step=25310 loss=19.0, 59.8% complete\nINFO:tensorflow:local_step=25320 global_step=25320 loss=18.1, 59.8% complete\nINFO:tensorflow:local_step=25330 global_step=25330 loss=17.4, 59.9% complete\nINFO:tensorflow:local_step=25340 global_step=25340 loss=17.7, 59.9% complete\nINFO:tensorflow:local_step=25350 global_step=25350 loss=18.8, 59.9% complete\nINFO:tensorflow:local_step=25360 global_step=25360 loss=15.4, 59.9% complete\nINFO:tensorflow:local_step=25370 global_step=25370 loss=18.8, 59.9% complete\nINFO:tensorflow:local_step=25380 global_step=25380 loss=19.1, 60.0% complete\nINFO:tensorflow:local_step=25390 global_step=25390 loss=18.3, 60.0% complete\nINFO:tensorflow:local_step=25400 global_step=25400 loss=17.4, 60.0% complete\nINFO:tensorflow:local_step=25410 global_step=25410 loss=17.3, 60.0% complete\nINFO:tensorflow:local_step=25420 global_step=25420 loss=17.6, 60.1% complete\nINFO:tensorflow:local_step=25430 global_step=25430 loss=18.3, 60.1% complete\nINFO:tensorflow:local_step=25440 global_step=25440 loss=18.1, 60.1% complete\nINFO:tensorflow:local_step=25450 global_step=25450 loss=18.4, 60.1% complete\nINFO:tensorflow:local_step=25460 global_step=25460 loss=165.0, 60.2% complete\nINFO:tensorflow:local_step=25470 global_step=25470 loss=18.0, 60.2% complete\nINFO:tensorflow:local_step=25480 global_step=25480 loss=18.6, 60.2% complete\nINFO:tensorflow:local_step=25490 global_step=25490 loss=18.2, 60.2% complete\nINFO:tensorflow:local_step=25500 global_step=25500 loss=18.0, 60.3% complete\nINFO:tensorflow:local_step=25510 global_step=25510 loss=18.6, 60.3% complete\nINFO:tensorflow:local_step=25520 global_step=25520 loss=18.0, 60.3% complete\nINFO:tensorflow:local_step=25530 global_step=25530 loss=18.1, 60.3% complete\nINFO:tensorflow:local_step=25540 global_step=25540 loss=18.4, 60.3% complete\nINFO:tensorflow:local_step=25550 global_step=25550 loss=18.1, 60.4% complete\nINFO:tensorflow:local_step=25560 global_step=25560 loss=18.8, 60.4% complete\nINFO:tensorflow:local_step=25570 global_step=25570 loss=18.2, 60.4% complete\nINFO:tensorflow:local_step=25580 global_step=25580 loss=18.2, 60.4% complete\nINFO:tensorflow:local_step=25590 global_step=25590 loss=17.8, 60.5% complete\nINFO:tensorflow:local_step=25600 global_step=25600 loss=17.8, 60.5% complete\nINFO:tensorflow:local_step=25610 global_step=25610 loss=18.5, 60.5% complete\nINFO:tensorflow:local_step=25620 global_step=25620 loss=18.2, 60.5% complete\nINFO:tensorflow:local_step=25630 global_step=25630 loss=17.2, 60.6% complete\nINFO:tensorflow:local_step=25640 global_step=25640 loss=17.7, 60.6% complete\nINFO:tensorflow:local_step=25650 global_step=25650 loss=18.3, 60.6% complete\nINFO:tensorflow:local_step=25660 global_step=25660 loss=18.2, 60.6% complete\nINFO:tensorflow:local_step=25670 global_step=25670 loss=18.2, 60.7% complete\nINFO:tensorflow:local_step=25680 global_step=25680 loss=18.5, 60.7% complete\nINFO:tensorflow:local_step=25690 global_step=25690 loss=18.4, 60.7% complete\nINFO:tensorflow:local_step=25700 global_step=25700 loss=17.7, 60.7% complete\nINFO:tensorflow:local_step=25710 global_step=25710 loss=17.9, 60.8% 
complete\nINFO:tensorflow:local_step=25720 global_step=25720 loss=18.5, 60.8% complete\nINFO:tensorflow:local_step=25730 global_step=25730 loss=17.1, 60.8% complete\nINFO:tensorflow:local_step=25740 global_step=25740 loss=17.5, 60.8% complete\nINFO:tensorflow:local_step=25750 global_step=25750 loss=18.0, 60.8% complete\nINFO:tensorflow:local_step=25760 global_step=25760 loss=17.9, 60.9% complete\nINFO:tensorflow:local_step=25770 global_step=25770 loss=17.9, 60.9% complete\nINFO:tensorflow:local_step=25780 global_step=25780 loss=18.7, 60.9% complete\nINFO:tensorflow:local_step=25790 global_step=25790 loss=18.2, 60.9% complete\nINFO:tensorflow:local_step=25800 global_step=25800 loss=18.2, 61.0% complete\nINFO:tensorflow:local_step=25810 global_step=25810 loss=18.0, 61.0% complete\nINFO:tensorflow:local_step=25820 global_step=25820 loss=18.5, 61.0% complete\nINFO:tensorflow:local_step=25830 global_step=25830 loss=18.0, 61.0% complete\nINFO:tensorflow:local_step=25840 global_step=25840 loss=18.0, 61.1% complete\nINFO:tensorflow:local_step=25850 global_step=25850 loss=18.2, 61.1% complete\nINFO:tensorflow:local_step=25860 global_step=25860 loss=18.0, 61.1% complete\nINFO:tensorflow:local_step=25870 global_step=25870 loss=18.6, 61.1% complete\nINFO:tensorflow:local_step=25880 global_step=25880 loss=18.5, 61.2% complete\nINFO:tensorflow:local_step=25890 global_step=25890 loss=18.4, 61.2% complete\nINFO:tensorflow:local_step=25900 global_step=25900 loss=18.8, 61.2% complete\nINFO:tensorflow:local_step=25910 global_step=25910 loss=18.2, 61.2% complete\nINFO:tensorflow:local_step=25920 global_step=25920 loss=18.4, 61.2% complete\nINFO:tensorflow:local_step=25930 global_step=25930 loss=18.6, 61.3% complete\nINFO:tensorflow:local_step=25940 global_step=25940 loss=18.1, 61.3% complete\nINFO:tensorflow:local_step=25950 global_step=25950 loss=18.5, 61.3% complete\nINFO:tensorflow:local_step=25960 global_step=25960 loss=17.9, 61.3% complete\nINFO:tensorflow:local_step=25970 global_step=25970 loss=18.1, 61.4% complete\nINFO:tensorflow:local_step=25980 global_step=25980 loss=18.3, 61.4% complete\nINFO:tensorflow:local_step=25990 global_step=25990 loss=18.8, 61.4% complete\nINFO:tensorflow:local_step=26000 global_step=26000 loss=18.2, 61.4% complete\nINFO:tensorflow:local_step=26010 global_step=26010 loss=18.0, 61.5% complete\nINFO:tensorflow:local_step=26020 global_step=26020 loss=17.8, 61.5% complete\nINFO:tensorflow:local_step=26030 global_step=26030 loss=18.3, 61.5% complete\nINFO:tensorflow:local_step=26040 global_step=26040 loss=18.0, 61.5% complete\nINFO:tensorflow:local_step=26050 global_step=26050 loss=17.9, 61.6% complete\nINFO:tensorflow:local_step=26060 global_step=26060 loss=19.1, 61.6% complete\nINFO:tensorflow:local_step=26070 global_step=26070 loss=18.1, 61.6% complete\nINFO:tensorflow:local_step=26080 global_step=26080 loss=17.7, 61.6% complete\nINFO:tensorflow:local_step=26090 global_step=26090 loss=18.6, 61.6% complete\nINFO:tensorflow:local_step=26100 global_step=26100 loss=18.2, 61.7% complete\nINFO:tensorflow:local_step=26110 global_step=26110 loss=18.3, 61.7% complete\nINFO:tensorflow:local_step=26120 global_step=26120 loss=18.3, 61.7% complete\nINFO:tensorflow:local_step=26130 global_step=26130 loss=19.0, 61.7% complete\nINFO:tensorflow:local_step=26140 global_step=26140 loss=18.6, 61.8% complete\nINFO:tensorflow:local_step=26150 global_step=26150 loss=18.7, 61.8% complete\nINFO:tensorflow:local_step=26160 global_step=26160 loss=18.4, 61.8% complete\nINFO:tensorflow:local_step=26170 
global_step=26170 loss=17.6, 61.8% complete\nINFO:tensorflow:local_step=26180 global_step=26180 loss=19.2, 61.9% complete\nINFO:tensorflow:local_step=26190 global_step=26190 loss=17.7, 61.9% complete\nINFO:tensorflow:local_step=26200 global_step=26200 loss=18.8, 61.9% complete\nINFO:tensorflow:local_step=26210 global_step=26210 loss=17.7, 61.9% complete\nINFO:tensorflow:local_step=26220 global_step=26220 loss=18.2, 62.0% complete\nINFO:tensorflow:local_step=26230 global_step=26230 loss=18.3, 62.0% complete\nINFO:tensorflow:local_step=26240 global_step=26240 loss=18.9, 62.0% complete\nINFO:tensorflow:local_step=26250 global_step=26250 loss=17.9, 62.0% complete\nINFO:tensorflow:local_step=26260 global_step=26260 loss=18.7, 62.1% complete\nINFO:tensorflow:local_step=26270 global_step=26270 loss=18.1, 62.1% complete\nINFO:tensorflow:local_step=26280 global_step=26280 loss=18.3, 62.1% complete\nINFO:tensorflow:local_step=26290 global_step=26290 loss=18.6, 62.1% complete\nINFO:tensorflow:local_step=26300 global_step=26300 loss=17.8, 62.1% complete\nINFO:tensorflow:local_step=26310 global_step=26310 loss=18.0, 62.2% complete\nINFO:tensorflow:local_step=26320 global_step=26320 loss=16.4, 62.2% complete\nINFO:tensorflow:local_step=26330 global_step=26330 loss=16.6, 62.2% complete\nINFO:tensorflow:local_step=26340 global_step=26340 loss=16.9, 62.2% complete\nINFO:tensorflow:local_step=26350 global_step=26350 loss=19.3, 62.3% complete\nINFO:tensorflow:local_step=26360 global_step=26360 loss=18.3, 62.3% complete\nINFO:tensorflow:local_step=26370 global_step=26370 loss=19.5, 62.3% complete\nINFO:tensorflow:local_step=26380 global_step=26380 loss=19.0, 62.3% complete\nINFO:tensorflow:local_step=26390 global_step=26390 loss=18.1, 62.4% complete\nINFO:tensorflow:local_step=26400 global_step=26400 loss=18.2, 62.4% complete\nINFO:tensorflow:local_step=26410 global_step=26410 loss=18.1, 62.4% complete\nINFO:tensorflow:local_step=26420 global_step=26420 loss=18.7, 62.4% complete\nINFO:tensorflow:local_step=26430 global_step=26430 loss=18.7, 62.5% complete\nINFO:tensorflow:local_step=26440 global_step=26440 loss=17.5, 62.5% complete\nINFO:tensorflow:local_step=26450 global_step=26450 loss=18.0, 62.5% complete\nINFO:tensorflow:local_step=26460 global_step=26460 loss=18.6, 62.5% complete\nINFO:tensorflow:local_step=26470 global_step=26470 loss=18.0, 62.5% complete\nINFO:tensorflow:local_step=26480 global_step=26480 loss=18.1, 62.6% complete\nINFO:tensorflow:local_step=26490 global_step=26490 loss=18.1, 62.6% complete\nINFO:tensorflow:local_step=26500 global_step=26500 loss=17.4, 62.6% complete\nINFO:tensorflow:local_step=26510 global_step=26510 loss=18.9, 62.6% complete\nINFO:tensorflow:local_step=26520 global_step=26520 loss=18.4, 62.7% complete\nINFO:tensorflow:local_step=26530 global_step=26530 loss=18.0, 62.7% complete\nINFO:tensorflow:local_step=26540 global_step=26540 loss=18.1, 62.7% complete\nINFO:tensorflow:local_step=26550 global_step=26550 loss=18.9, 62.7% complete\nINFO:tensorflow:local_step=26560 global_step=26560 loss=19.0, 62.8% complete\nINFO:tensorflow:local_step=26570 global_step=26570 loss=18.1, 62.8% complete\nINFO:tensorflow:local_step=26580 global_step=26580 loss=18.3, 62.8% complete\nINFO:tensorflow:local_step=26590 global_step=26590 loss=18.8, 62.8% complete\nINFO:tensorflow:local_step=26600 global_step=26600 loss=17.9, 62.9% complete\nINFO:tensorflow:local_step=26610 global_step=26610 loss=18.4, 62.9% complete\nINFO:tensorflow:local_step=26620 global_step=26620 loss=15.6, 62.9% 
INFO:tensorflow:local_step=26630 global_step=26630 loss=18.6, 62.9% complete
[... repetitive training log trimmed: local_step/global_step 26630 → 35710, loss mostly 17-19 with occasional spikes (~150-350), progress 62.9% → 84.4% complete; summary recorded at step 30060, global_step/sec: 126.417 ...]
global_step=35710 loss=19.0, 84.4% complete\nINFO:tensorflow:local_step=35720 global_step=35720 loss=18.0, 84.4% complete\nINFO:tensorflow:local_step=35730 global_step=35730 loss=19.1, 84.4% complete\nINFO:tensorflow:local_step=35740 global_step=35740 loss=19.3, 84.5% complete\nINFO:tensorflow:local_step=35750 global_step=35750 loss=17.3, 84.5% complete\nINFO:tensorflow:local_step=35760 global_step=35760 loss=17.8, 84.5% complete\nINFO:tensorflow:local_step=35770 global_step=35770 loss=327.8, 84.5% complete\nINFO:tensorflow:local_step=35780 global_step=35780 loss=19.1, 84.5% complete\nINFO:tensorflow:local_step=35790 global_step=35790 loss=19.0, 84.6% complete\nINFO:tensorflow:local_step=35800 global_step=35800 loss=18.2, 84.6% complete\nINFO:tensorflow:local_step=35810 global_step=35810 loss=18.2, 84.6% complete\nINFO:tensorflow:local_step=35820 global_step=35820 loss=19.2, 84.6% complete\nINFO:tensorflow:local_step=35830 global_step=35830 loss=18.4, 84.7% complete\nINFO:tensorflow:local_step=35840 global_step=35840 loss=17.5, 84.7% complete\nINFO:tensorflow:local_step=35850 global_step=35850 loss=18.0, 84.7% complete\nINFO:tensorflow:local_step=35860 global_step=35860 loss=18.9, 84.7% complete\nINFO:tensorflow:local_step=35870 global_step=35870 loss=19.0, 84.8% complete\nINFO:tensorflow:local_step=35880 global_step=35880 loss=18.8, 84.8% complete\nINFO:tensorflow:local_step=35890 global_step=35890 loss=18.2, 84.8% complete\nINFO:tensorflow:local_step=35900 global_step=35900 loss=18.5, 84.8% complete\nINFO:tensorflow:local_step=35910 global_step=35910 loss=18.6, 84.9% complete\nINFO:tensorflow:local_step=35920 global_step=35920 loss=18.0, 84.9% complete\nINFO:tensorflow:local_step=35930 global_step=35930 loss=14.6, 84.9% complete\nINFO:tensorflow:local_step=35940 global_step=35940 loss=19.0, 84.9% complete\nINFO:tensorflow:local_step=35950 global_step=35950 loss=14.8, 84.9% complete\nINFO:tensorflow:local_step=35960 global_step=35960 loss=18.5, 85.0% complete\nINFO:tensorflow:local_step=35970 global_step=35970 loss=17.9, 85.0% complete\nINFO:tensorflow:local_step=35980 global_step=35980 loss=17.9, 85.0% complete\nINFO:tensorflow:local_step=35990 global_step=35990 loss=17.7, 85.0% complete\nINFO:tensorflow:local_step=36000 global_step=36000 loss=17.8, 85.1% complete\nINFO:tensorflow:local_step=36010 global_step=36010 loss=19.1, 85.1% complete\nINFO:tensorflow:local_step=36020 global_step=36020 loss=18.1, 85.1% complete\nINFO:tensorflow:local_step=36030 global_step=36030 loss=18.1, 85.1% complete\nINFO:tensorflow:local_step=36040 global_step=36040 loss=19.3, 85.2% complete\nINFO:tensorflow:local_step=36050 global_step=36050 loss=17.7, 85.2% complete\nINFO:tensorflow:local_step=36060 global_step=36060 loss=18.8, 85.2% complete\nINFO:tensorflow:local_step=36070 global_step=36070 loss=17.5, 85.2% complete\nINFO:tensorflow:local_step=36080 global_step=36080 loss=18.5, 85.3% complete\nINFO:tensorflow:local_step=36090 global_step=36090 loss=18.3, 85.3% complete\nINFO:tensorflow:local_step=36100 global_step=36100 loss=18.9, 85.3% complete\nINFO:tensorflow:local_step=36110 global_step=36110 loss=18.4, 85.3% complete\nINFO:tensorflow:local_step=36120 global_step=36120 loss=17.8, 85.3% complete\nINFO:tensorflow:local_step=36130 global_step=36130 loss=18.3, 85.4% complete\nINFO:tensorflow:local_step=36140 global_step=36140 loss=17.8, 85.4% complete\nINFO:tensorflow:local_step=36150 global_step=36150 loss=19.6, 85.4% complete\nINFO:tensorflow:local_step=36160 global_step=36160 loss=19.0, 85.4% 
complete\nINFO:tensorflow:local_step=36170 global_step=36170 loss=15.6, 85.5% complete\nINFO:tensorflow:local_step=36180 global_step=36180 loss=18.2, 85.5% complete\nINFO:tensorflow:local_step=36190 global_step=36190 loss=18.4, 85.5% complete\nINFO:tensorflow:local_step=36200 global_step=36200 loss=19.1, 85.5% complete\nINFO:tensorflow:local_step=36210 global_step=36210 loss=18.3, 85.6% complete\nINFO:tensorflow:local_step=36220 global_step=36220 loss=18.2, 85.6% complete\nINFO:tensorflow:local_step=36230 global_step=36230 loss=19.1, 85.6% complete\nINFO:tensorflow:local_step=36240 global_step=36240 loss=18.4, 85.6% complete\nINFO:tensorflow:local_step=36250 global_step=36250 loss=17.7, 85.7% complete\nINFO:tensorflow:local_step=36260 global_step=36260 loss=17.9, 85.7% complete\nINFO:tensorflow:local_step=36270 global_step=36270 loss=18.2, 85.7% complete\nINFO:tensorflow:local_step=36280 global_step=36280 loss=18.6, 85.7% complete\nINFO:tensorflow:local_step=36290 global_step=36290 loss=18.7, 85.8% complete\nINFO:tensorflow:local_step=36300 global_step=36300 loss=18.5, 85.8% complete\nINFO:tensorflow:local_step=36310 global_step=36310 loss=18.1, 85.8% complete\nINFO:tensorflow:local_step=36320 global_step=36320 loss=17.6, 85.8% complete\nINFO:tensorflow:local_step=36330 global_step=36330 loss=17.0, 85.8% complete\nINFO:tensorflow:local_step=36340 global_step=36340 loss=17.6, 85.9% complete\nINFO:tensorflow:local_step=36350 global_step=36350 loss=18.2, 85.9% complete\nINFO:tensorflow:local_step=36360 global_step=36360 loss=17.3, 85.9% complete\nINFO:tensorflow:local_step=36370 global_step=36370 loss=20.8, 85.9% complete\nINFO:tensorflow:local_step=36380 global_step=36380 loss=18.2, 86.0% complete\nINFO:tensorflow:local_step=36390 global_step=36390 loss=18.2, 86.0% complete\nINFO:tensorflow:local_step=36400 global_step=36400 loss=18.0, 86.0% complete\nINFO:tensorflow:local_step=36410 global_step=36410 loss=18.7, 86.0% complete\nINFO:tensorflow:local_step=36420 global_step=36420 loss=19.0, 86.1% complete\nINFO:tensorflow:local_step=36430 global_step=36430 loss=17.9, 86.1% complete\nINFO:tensorflow:local_step=36440 global_step=36440 loss=18.1, 86.1% complete\nINFO:tensorflow:local_step=36450 global_step=36450 loss=18.8, 86.1% complete\nINFO:tensorflow:local_step=36460 global_step=36460 loss=18.6, 86.2% complete\nINFO:tensorflow:local_step=36470 global_step=36470 loss=18.7, 86.2% complete\nINFO:tensorflow:local_step=36480 global_step=36480 loss=15.9, 86.2% complete\nINFO:tensorflow:local_step=36490 global_step=36490 loss=17.7, 86.2% complete\nINFO:tensorflow:local_step=36500 global_step=36500 loss=18.4, 86.2% complete\nINFO:tensorflow:local_step=36510 global_step=36510 loss=18.7, 86.3% complete\nINFO:tensorflow:local_step=36520 global_step=36520 loss=17.6, 86.3% complete\nINFO:tensorflow:local_step=36530 global_step=36530 loss=17.7, 86.3% complete\nINFO:tensorflow:local_step=36540 global_step=36540 loss=17.6, 86.3% complete\nINFO:tensorflow:local_step=36550 global_step=36550 loss=17.8, 86.4% complete\nINFO:tensorflow:local_step=36560 global_step=36560 loss=18.3, 86.4% complete\nINFO:tensorflow:local_step=36570 global_step=36570 loss=17.7, 86.4% complete\nINFO:tensorflow:local_step=36580 global_step=36580 loss=19.7, 86.4% complete\nINFO:tensorflow:local_step=36590 global_step=36590 loss=19.0, 86.5% complete\nINFO:tensorflow:local_step=36600 global_step=36600 loss=18.6, 86.5% complete\nINFO:tensorflow:local_step=36610 global_step=36610 loss=18.4, 86.5% complete\nINFO:tensorflow:local_step=36620 
global_step=36620 loss=18.8, 86.5% complete\nINFO:tensorflow:local_step=36630 global_step=36630 loss=210.3, 86.6% complete\nINFO:tensorflow:local_step=36640 global_step=36640 loss=19.0, 86.6% complete\nINFO:tensorflow:local_step=36650 global_step=36650 loss=18.5, 86.6% complete\nINFO:tensorflow:local_step=36660 global_step=36660 loss=21.1, 86.6% complete\nINFO:tensorflow:local_step=36670 global_step=36670 loss=18.2, 86.6% complete\nINFO:tensorflow:local_step=36680 global_step=36680 loss=18.3, 86.7% complete\nINFO:tensorflow:local_step=36690 global_step=36690 loss=18.0, 86.7% complete\nINFO:tensorflow:local_step=36700 global_step=36700 loss=18.7, 86.7% complete\nINFO:tensorflow:local_step=36710 global_step=36710 loss=18.0, 86.7% complete\nINFO:tensorflow:local_step=36720 global_step=36720 loss=18.0, 86.8% complete\nINFO:tensorflow:local_step=36730 global_step=36730 loss=18.2, 86.8% complete\nINFO:tensorflow:local_step=36740 global_step=36740 loss=15.6, 86.8% complete\nINFO:tensorflow:local_step=36750 global_step=36750 loss=17.5, 86.8% complete\nINFO:tensorflow:local_step=36760 global_step=36760 loss=19.2, 86.9% complete\nINFO:tensorflow:local_step=36770 global_step=36770 loss=18.8, 86.9% complete\nINFO:tensorflow:local_step=36780 global_step=36780 loss=17.2, 86.9% complete\nINFO:tensorflow:local_step=36790 global_step=36790 loss=18.1, 86.9% complete\nINFO:tensorflow:local_step=36800 global_step=36800 loss=18.5, 87.0% complete\nINFO:tensorflow:local_step=36810 global_step=36810 loss=19.4, 87.0% complete\nINFO:tensorflow:local_step=36820 global_step=36820 loss=22.0, 87.0% complete\nINFO:tensorflow:local_step=36830 global_step=36830 loss=16.4, 87.0% complete\nINFO:tensorflow:local_step=36840 global_step=36840 loss=18.3, 87.1% complete\nINFO:tensorflow:local_step=36850 global_step=36850 loss=15.9, 87.1% complete\nINFO:tensorflow:local_step=36860 global_step=36860 loss=18.2, 87.1% complete\nINFO:tensorflow:local_step=36870 global_step=36870 loss=18.2, 87.1% complete\nINFO:tensorflow:local_step=36880 global_step=36880 loss=19.4, 87.1% complete\nINFO:tensorflow:local_step=36890 global_step=36890 loss=18.4, 87.2% complete\nINFO:tensorflow:local_step=36900 global_step=36900 loss=19.4, 87.2% complete\nINFO:tensorflow:local_step=36910 global_step=36910 loss=17.4, 87.2% complete\nINFO:tensorflow:local_step=36920 global_step=36920 loss=17.6, 87.2% complete\nINFO:tensorflow:local_step=36930 global_step=36930 loss=17.8, 87.3% complete\nINFO:tensorflow:local_step=36940 global_step=36940 loss=18.1, 87.3% complete\nINFO:tensorflow:local_step=36950 global_step=36950 loss=19.0, 87.3% complete\nINFO:tensorflow:local_step=36960 global_step=36960 loss=17.7, 87.3% complete\nINFO:tensorflow:local_step=36970 global_step=36970 loss=17.8, 87.4% complete\nINFO:tensorflow:local_step=36980 global_step=36980 loss=18.7, 87.4% complete\nINFO:tensorflow:local_step=36990 global_step=36990 loss=17.9, 87.4% complete\nINFO:tensorflow:local_step=37000 global_step=37000 loss=19.0, 87.4% complete\nINFO:tensorflow:local_step=37010 global_step=37010 loss=18.0, 87.5% complete\nINFO:tensorflow:local_step=37020 global_step=37020 loss=18.8, 87.5% complete\nINFO:tensorflow:local_step=37030 global_step=37030 loss=17.9, 87.5% complete\nINFO:tensorflow:local_step=37040 global_step=37040 loss=17.9, 87.5% complete\nINFO:tensorflow:local_step=37050 global_step=37050 loss=18.1, 87.5% complete\nINFO:tensorflow:local_step=37060 global_step=37060 loss=19.2, 87.6% complete\nINFO:tensorflow:local_step=37070 global_step=37070 loss=18.5, 87.6% 
complete\nINFO:tensorflow:local_step=37080 global_step=37080 loss=18.1, 87.6% complete\nINFO:tensorflow:local_step=37090 global_step=37090 loss=18.6, 87.6% complete\nINFO:tensorflow:local_step=37100 global_step=37100 loss=18.1, 87.7% complete\nINFO:tensorflow:local_step=37110 global_step=37110 loss=19.1, 87.7% complete\nINFO:tensorflow:local_step=37120 global_step=37120 loss=19.4, 87.7% complete\nINFO:tensorflow:local_step=37130 global_step=37130 loss=17.8, 87.7% complete\nINFO:tensorflow:local_step=37140 global_step=37140 loss=17.3, 87.8% complete\nINFO:tensorflow:local_step=37150 global_step=37150 loss=19.6, 87.8% complete\nINFO:tensorflow:local_step=37160 global_step=37160 loss=17.7, 87.8% complete\nINFO:tensorflow:local_step=37170 global_step=37170 loss=19.2, 87.8% complete\nINFO:tensorflow:local_step=37180 global_step=37180 loss=17.7, 87.9% complete\nINFO:tensorflow:local_step=37190 global_step=37190 loss=18.5, 87.9% complete\nINFO:tensorflow:local_step=37200 global_step=37200 loss=18.9, 87.9% complete\nINFO:tensorflow:local_step=37210 global_step=37210 loss=18.5, 87.9% complete\nINFO:tensorflow:local_step=37220 global_step=37220 loss=291.8, 87.9% complete\nINFO:tensorflow:local_step=37230 global_step=37230 loss=19.2, 88.0% complete\nINFO:tensorflow:local_step=37240 global_step=37240 loss=18.7, 88.0% complete\nINFO:tensorflow:local_step=37250 global_step=37250 loss=19.0, 88.0% complete\nINFO:tensorflow:local_step=37260 global_step=37260 loss=14.4, 88.0% complete\nINFO:tensorflow:local_step=37270 global_step=37270 loss=18.1, 88.1% complete\nINFO:tensorflow:local_step=37280 global_step=37280 loss=17.9, 88.1% complete\nINFO:tensorflow:local_step=37290 global_step=37290 loss=17.5, 88.1% complete\nINFO:tensorflow:local_step=37300 global_step=37300 loss=18.9, 88.1% complete\nINFO:tensorflow:local_step=37310 global_step=37310 loss=17.7, 88.2% complete\nINFO:tensorflow:local_step=37320 global_step=37320 loss=19.3, 88.2% complete\nINFO:tensorflow:local_step=37330 global_step=37330 loss=19.0, 88.2% complete\nINFO:tensorflow:local_step=37340 global_step=37340 loss=18.0, 88.2% complete\nINFO:tensorflow:local_step=37350 global_step=37350 loss=15.7, 88.3% complete\nINFO:tensorflow:local_step=37360 global_step=37360 loss=19.3, 88.3% complete\nINFO:tensorflow:local_step=37370 global_step=37370 loss=17.4, 88.3% complete\nINFO:tensorflow:local_step=37380 global_step=37380 loss=18.1, 88.3% complete\nINFO:tensorflow:local_step=37390 global_step=37390 loss=18.3, 88.4% complete\nINFO:tensorflow:local_step=37400 global_step=37400 loss=18.6, 88.4% complete\nINFO:tensorflow:local_step=37410 global_step=37410 loss=19.1, 88.4% complete\nINFO:tensorflow:local_step=37420 global_step=37420 loss=18.5, 88.4% complete\nINFO:tensorflow:local_step=37430 global_step=37430 loss=18.5, 88.4% complete\nINFO:tensorflow:local_step=37440 global_step=37440 loss=18.9, 88.5% complete\nINFO:tensorflow:local_step=37450 global_step=37450 loss=17.4, 88.5% complete\nINFO:tensorflow:local_step=37460 global_step=37460 loss=21.6, 88.5% complete\nINFO:tensorflow:local_step=37470 global_step=37470 loss=15.0, 88.5% complete\nINFO:tensorflow:local_step=37480 global_step=37480 loss=18.6, 88.6% complete\nINFO:tensorflow:local_step=37490 global_step=37490 loss=18.2, 88.6% complete\nINFO:tensorflow:local_step=37500 global_step=37500 loss=17.6, 88.6% complete\nINFO:tensorflow:local_step=37510 global_step=37510 loss=18.9, 88.6% complete\nINFO:tensorflow:local_step=37520 global_step=37520 loss=19.5, 88.7% complete\nINFO:tensorflow:local_step=37530 
global_step=37530 loss=18.7, 88.7% complete\nINFO:tensorflow:local_step=37540 global_step=37540 loss=17.7, 88.7% complete\nINFO:tensorflow:local_step=37550 global_step=37550 loss=17.8, 88.7% complete\nINFO:tensorflow:local_step=37560 global_step=37560 loss=17.8, 88.8% complete\nINFO:tensorflow:local_step=37570 global_step=37570 loss=18.1, 88.8% complete\nINFO:tensorflow:local_step=37580 global_step=37580 loss=17.8, 88.8% complete\nINFO:tensorflow:local_step=37590 global_step=37590 loss=19.1, 88.8% complete\nINFO:tensorflow:local_step=37600 global_step=37600 loss=18.6, 88.8% complete\nINFO:tensorflow:local_step=37610 global_step=37610 loss=19.2, 88.9% complete\nINFO:tensorflow:local_step=37620 global_step=37620 loss=18.3, 88.9% complete\nINFO:tensorflow:local_step=37630 global_step=37630 loss=18.6, 88.9% complete\nINFO:tensorflow:local_step=37640 global_step=37640 loss=18.0, 88.9% complete\nINFO:tensorflow:local_step=37650 global_step=37650 loss=19.7, 89.0% complete\nINFO:tensorflow:local_step=37660 global_step=37660 loss=17.7, 89.0% complete\nINFO:tensorflow:Recording summary at step 37662.\nINFO:tensorflow:global_step/sec: 126.7\nINFO:tensorflow:local_step=37670 global_step=37670 loss=18.9, 89.0% complete\nINFO:tensorflow:local_step=37680 global_step=37680 loss=18.0, 89.0% complete\nINFO:tensorflow:local_step=37690 global_step=37690 loss=18.0, 89.1% complete\nINFO:tensorflow:local_step=37700 global_step=37700 loss=18.1, 89.1% complete\nINFO:tensorflow:local_step=37710 global_step=37710 loss=18.6, 89.1% complete\nINFO:tensorflow:local_step=37720 global_step=37720 loss=18.2, 89.1% complete\nINFO:tensorflow:local_step=37730 global_step=37730 loss=18.4, 89.2% complete\nINFO:tensorflow:local_step=37740 global_step=37740 loss=17.7, 89.2% complete\nINFO:tensorflow:local_step=37750 global_step=37750 loss=19.2, 89.2% complete\nINFO:tensorflow:local_step=37760 global_step=37760 loss=17.4, 89.2% complete\nINFO:tensorflow:local_step=37770 global_step=37770 loss=18.2, 89.2% complete\nINFO:tensorflow:local_step=37780 global_step=37780 loss=17.7, 89.3% complete\nINFO:tensorflow:local_step=37790 global_step=37790 loss=21.2, 89.3% complete\nINFO:tensorflow:local_step=37800 global_step=37800 loss=17.8, 89.3% complete\nINFO:tensorflow:local_step=37810 global_step=37810 loss=18.8, 89.3% complete\nINFO:tensorflow:local_step=37820 global_step=37820 loss=19.3, 89.4% complete\nINFO:tensorflow:local_step=37830 global_step=37830 loss=19.4, 89.4% complete\nINFO:tensorflow:local_step=37840 global_step=37840 loss=15.6, 89.4% complete\nINFO:tensorflow:local_step=37850 global_step=37850 loss=18.0, 89.4% complete\nINFO:tensorflow:local_step=37860 global_step=37860 loss=18.8, 89.5% complete\nINFO:tensorflow:local_step=37870 global_step=37870 loss=17.5, 89.5% complete\nINFO:tensorflow:local_step=37880 global_step=37880 loss=18.4, 89.5% complete\nINFO:tensorflow:local_step=37890 global_step=37890 loss=19.0, 89.5% complete\nINFO:tensorflow:local_step=37900 global_step=37900 loss=18.4, 89.6% complete\nINFO:tensorflow:local_step=37910 global_step=37910 loss=19.8, 89.6% complete\nINFO:tensorflow:local_step=37920 global_step=37920 loss=18.1, 89.6% complete\nINFO:tensorflow:local_step=37930 global_step=37930 loss=19.1, 89.6% complete\nINFO:tensorflow:local_step=37940 global_step=37940 loss=19.4, 89.7% complete\nINFO:tensorflow:local_step=37950 global_step=37950 loss=18.3, 89.7% complete\nINFO:tensorflow:local_step=37960 global_step=37960 loss=19.3, 89.7% complete\nINFO:tensorflow:local_step=37970 global_step=37970 loss=18.3, 
89.7% complete\nINFO:tensorflow:local_step=37980 global_step=37980 loss=18.1, 89.7% complete\nINFO:tensorflow:local_step=37990 global_step=37990 loss=18.7, 89.8% complete\nINFO:tensorflow:local_step=38000 global_step=38000 loss=19.0, 89.8% complete\nINFO:tensorflow:local_step=38010 global_step=38010 loss=17.9, 89.8% complete\nINFO:tensorflow:local_step=38020 global_step=38020 loss=18.8, 89.8% complete\nINFO:tensorflow:local_step=38030 global_step=38030 loss=18.0, 89.9% complete\nINFO:tensorflow:local_step=38040 global_step=38040 loss=18.7, 89.9% complete\nINFO:tensorflow:local_step=38050 global_step=38050 loss=18.6, 89.9% complete\nINFO:tensorflow:local_step=38060 global_step=38060 loss=19.1, 89.9% complete\nINFO:tensorflow:local_step=38070 global_step=38070 loss=18.5, 90.0% complete\nINFO:tensorflow:local_step=38080 global_step=38080 loss=15.5, 90.0% complete\nINFO:tensorflow:local_step=38090 global_step=38090 loss=17.8, 90.0% complete\nINFO:tensorflow:local_step=38100 global_step=38100 loss=17.5, 90.0% complete\nINFO:tensorflow:local_step=38110 global_step=38110 loss=18.6, 90.1% complete\nINFO:tensorflow:local_step=38120 global_step=38120 loss=18.8, 90.1% complete\nINFO:tensorflow:local_step=38130 global_step=38130 loss=17.5, 90.1% complete\nINFO:tensorflow:local_step=38140 global_step=38140 loss=223.9, 90.1% complete\nINFO:tensorflow:local_step=38150 global_step=38150 loss=17.8, 90.1% complete\nINFO:tensorflow:local_step=38160 global_step=38160 loss=18.1, 90.2% complete\nINFO:tensorflow:local_step=38170 global_step=38170 loss=18.2, 90.2% complete\nINFO:tensorflow:local_step=38180 global_step=38180 loss=18.2, 90.2% complete\nINFO:tensorflow:local_step=38190 global_step=38190 loss=18.3, 90.2% complete\nINFO:tensorflow:local_step=38200 global_step=38200 loss=18.5, 90.3% complete\nINFO:tensorflow:local_step=38210 global_step=38210 loss=17.7, 90.3% complete\nINFO:tensorflow:local_step=38220 global_step=38220 loss=18.5, 90.3% complete\nINFO:tensorflow:local_step=38230 global_step=38230 loss=18.1, 90.3% complete\nINFO:tensorflow:local_step=38240 global_step=38240 loss=15.4, 90.4% complete\nINFO:tensorflow:local_step=38250 global_step=38250 loss=16.0, 90.4% complete\nINFO:tensorflow:local_step=38260 global_step=38260 loss=17.2, 90.4% complete\nINFO:tensorflow:local_step=38270 global_step=38270 loss=18.0, 90.4% complete\nINFO:tensorflow:local_step=38280 global_step=38280 loss=17.2, 90.5% complete\nINFO:tensorflow:local_step=38290 global_step=38290 loss=18.2, 90.5% complete\nINFO:tensorflow:local_step=38300 global_step=38300 loss=18.7, 90.5% complete\nINFO:tensorflow:local_step=38310 global_step=38310 loss=18.5, 90.5% complete\nINFO:tensorflow:local_step=38320 global_step=38320 loss=18.1, 90.5% complete\nINFO:tensorflow:local_step=38330 global_step=38330 loss=17.8, 90.6% complete\nINFO:tensorflow:local_step=38340 global_step=38340 loss=18.8, 90.6% complete\nINFO:tensorflow:local_step=38350 global_step=38350 loss=17.9, 90.6% complete\nINFO:tensorflow:local_step=38360 global_step=38360 loss=18.9, 90.6% complete\nINFO:tensorflow:local_step=38370 global_step=38370 loss=17.8, 90.7% complete\nINFO:tensorflow:local_step=38380 global_step=38380 loss=17.9, 90.7% complete\nINFO:tensorflow:local_step=38390 global_step=38390 loss=18.6, 90.7% complete\nINFO:tensorflow:local_step=38400 global_step=38400 loss=18.9, 90.7% complete\nINFO:tensorflow:local_step=38410 global_step=38410 loss=18.4, 90.8% complete\nINFO:tensorflow:local_step=38420 global_step=38420 loss=18.1, 90.8% 
complete\nINFO:tensorflow:local_step=38430 global_step=38430 loss=17.8, 90.8% complete\nINFO:tensorflow:local_step=38440 global_step=38440 loss=17.8, 90.8% complete\nINFO:tensorflow:local_step=38450 global_step=38450 loss=18.2, 90.9% complete\nINFO:tensorflow:local_step=38460 global_step=38460 loss=18.3, 90.9% complete\nINFO:tensorflow:local_step=38470 global_step=38470 loss=17.6, 90.9% complete\nINFO:tensorflow:local_step=38480 global_step=38480 loss=17.6, 90.9% complete\nINFO:tensorflow:local_step=38490 global_step=38490 loss=18.3, 90.9% complete\nINFO:tensorflow:local_step=38500 global_step=38500 loss=19.0, 91.0% complete\nINFO:tensorflow:local_step=38510 global_step=38510 loss=18.0, 91.0% complete\nINFO:tensorflow:local_step=38520 global_step=38520 loss=17.9, 91.0% complete\nINFO:tensorflow:local_step=38530 global_step=38530 loss=223.9, 91.0% complete\nINFO:tensorflow:local_step=38540 global_step=38540 loss=18.1, 91.1% complete\nINFO:tensorflow:local_step=38550 global_step=38550 loss=17.9, 91.1% complete\nINFO:tensorflow:local_step=38560 global_step=38560 loss=18.1, 91.1% complete\nINFO:tensorflow:local_step=38570 global_step=38570 loss=19.2, 91.1% complete\nINFO:tensorflow:local_step=38580 global_step=38580 loss=19.3, 91.2% complete\nINFO:tensorflow:local_step=38590 global_step=38590 loss=21.6, 91.2% complete\nINFO:tensorflow:local_step=38600 global_step=38600 loss=21.0, 91.2% complete\nINFO:tensorflow:local_step=38610 global_step=38610 loss=18.1, 91.2% complete\nINFO:tensorflow:local_step=38620 global_step=38620 loss=17.5, 91.3% complete\nINFO:tensorflow:local_step=38630 global_step=38630 loss=15.3, 91.3% complete\nINFO:tensorflow:local_step=38640 global_step=38640 loss=18.8, 91.3% complete\nINFO:tensorflow:local_step=38650 global_step=38650 loss=17.8, 91.3% complete\nINFO:tensorflow:local_step=38660 global_step=38660 loss=18.9, 91.4% complete\nINFO:tensorflow:local_step=38670 global_step=38670 loss=18.5, 91.4% complete\nINFO:tensorflow:local_step=38680 global_step=38680 loss=20.9, 91.4% complete\nINFO:tensorflow:local_step=38690 global_step=38690 loss=14.7, 91.4% complete\nINFO:tensorflow:local_step=38700 global_step=38700 loss=21.1, 91.4% complete\nINFO:tensorflow:local_step=38710 global_step=38710 loss=18.3, 91.5% complete\nINFO:tensorflow:local_step=38720 global_step=38720 loss=18.0, 91.5% complete\nINFO:tensorflow:local_step=38730 global_step=38730 loss=18.3, 91.5% complete\nINFO:tensorflow:local_step=38740 global_step=38740 loss=18.8, 91.5% complete\nINFO:tensorflow:local_step=38750 global_step=38750 loss=14.9, 91.6% complete\nINFO:tensorflow:local_step=38760 global_step=38760 loss=19.1, 91.6% complete\nINFO:tensorflow:local_step=38770 global_step=38770 loss=18.3, 91.6% complete\nINFO:tensorflow:local_step=38780 global_step=38780 loss=17.8, 91.6% complete\nINFO:tensorflow:local_step=38790 global_step=38790 loss=18.1, 91.7% complete\nINFO:tensorflow:local_step=38800 global_step=38800 loss=17.7, 91.7% complete\nINFO:tensorflow:local_step=38810 global_step=38810 loss=18.8, 91.7% complete\nINFO:tensorflow:local_step=38820 global_step=38820 loss=18.1, 91.7% complete\nINFO:tensorflow:local_step=38830 global_step=38830 loss=18.2, 91.8% complete\nINFO:tensorflow:local_step=38840 global_step=38840 loss=18.4, 91.8% complete\nINFO:tensorflow:local_step=38850 global_step=38850 loss=17.7, 91.8% complete\nINFO:tensorflow:local_step=38860 global_step=38860 loss=18.7, 91.8% complete\nINFO:tensorflow:local_step=38870 global_step=38870 loss=18.9, 91.8% complete\nINFO:tensorflow:local_step=38880 
global_step=38880 loss=18.0, 91.9% complete\nINFO:tensorflow:local_step=38890 global_step=38890 loss=18.9, 91.9% complete\nINFO:tensorflow:local_step=38900 global_step=38900 loss=18.8, 91.9% complete\nINFO:tensorflow:local_step=38910 global_step=38910 loss=18.6, 91.9% complete\nINFO:tensorflow:local_step=38920 global_step=38920 loss=17.8, 92.0% complete\nINFO:tensorflow:local_step=38930 global_step=38930 loss=18.5, 92.0% complete\nINFO:tensorflow:local_step=38940 global_step=38940 loss=19.1, 92.0% complete\nINFO:tensorflow:local_step=38950 global_step=38950 loss=14.6, 92.0% complete\nINFO:tensorflow:local_step=38960 global_step=38960 loss=18.8, 92.1% complete\nINFO:tensorflow:local_step=38970 global_step=38970 loss=18.7, 92.1% complete\nINFO:tensorflow:local_step=38980 global_step=38980 loss=18.2, 92.1% complete\nINFO:tensorflow:local_step=38990 global_step=38990 loss=18.5, 92.1% complete\nINFO:tensorflow:local_step=39000 global_step=39000 loss=19.0, 92.2% complete\nINFO:tensorflow:local_step=39010 global_step=39010 loss=17.8, 92.2% complete\nINFO:tensorflow:local_step=39020 global_step=39020 loss=18.5, 92.2% complete\nINFO:tensorflow:local_step=39030 global_step=39030 loss=17.0, 92.2% complete\nINFO:tensorflow:local_step=39040 global_step=39040 loss=17.8, 92.2% complete\nINFO:tensorflow:local_step=39050 global_step=39050 loss=19.4, 92.3% complete\nINFO:tensorflow:local_step=39060 global_step=39060 loss=18.2, 92.3% complete\nINFO:tensorflow:local_step=39070 global_step=39070 loss=18.1, 92.3% complete\nINFO:tensorflow:local_step=39080 global_step=39080 loss=18.3, 92.3% complete\nINFO:tensorflow:local_step=39090 global_step=39090 loss=18.5, 92.4% complete\nINFO:tensorflow:local_step=39100 global_step=39100 loss=18.0, 92.4% complete\nINFO:tensorflow:local_step=39110 global_step=39110 loss=17.7, 92.4% complete\nINFO:tensorflow:local_step=39120 global_step=39120 loss=19.2, 92.4% complete\nINFO:tensorflow:local_step=39130 global_step=39130 loss=18.6, 92.5% complete\nINFO:tensorflow:local_step=39140 global_step=39140 loss=19.3, 92.5% complete\nINFO:tensorflow:local_step=39150 global_step=39150 loss=18.1, 92.5% complete\nINFO:tensorflow:local_step=39160 global_step=39160 loss=17.8, 92.5% complete\nINFO:tensorflow:local_step=39170 global_step=39170 loss=18.1, 92.6% complete\nINFO:tensorflow:local_step=39180 global_step=39180 loss=324.5, 92.6% complete\nINFO:tensorflow:local_step=39190 global_step=39190 loss=18.9, 92.6% complete\nINFO:tensorflow:local_step=39200 global_step=39200 loss=18.1, 92.6% complete\nINFO:tensorflow:local_step=39210 global_step=39210 loss=18.7, 92.7% complete\nINFO:tensorflow:local_step=39220 global_step=39220 loss=18.9, 92.7% complete\nINFO:tensorflow:local_step=39230 global_step=39230 loss=18.6, 92.7% complete\nINFO:tensorflow:local_step=39240 global_step=39240 loss=20.4, 92.7% complete\nINFO:tensorflow:local_step=39250 global_step=39250 loss=18.3, 92.7% complete\nINFO:tensorflow:local_step=39260 global_step=39260 loss=18.5, 92.8% complete\nINFO:tensorflow:local_step=39270 global_step=39270 loss=17.8, 92.8% complete\nINFO:tensorflow:local_step=39280 global_step=39280 loss=19.0, 92.8% complete\nINFO:tensorflow:local_step=39290 global_step=39290 loss=18.6, 92.8% complete\nINFO:tensorflow:local_step=39300 global_step=39300 loss=17.7, 92.9% complete\nINFO:tensorflow:local_step=39310 global_step=39310 loss=18.7, 92.9% complete\nINFO:tensorflow:local_step=39320 global_step=39320 loss=19.4, 92.9% complete\nINFO:tensorflow:local_step=39330 global_step=39330 loss=17.8, 92.9% 
complete\nINFO:tensorflow:local_step=39340 global_step=39340 loss=18.4, 93.0% complete\nINFO:tensorflow:local_step=39350 global_step=39350 loss=18.2, 93.0% complete\nINFO:tensorflow:local_step=39360 global_step=39360 loss=18.6, 93.0% complete\nINFO:tensorflow:local_step=39370 global_step=39370 loss=18.0, 93.0% complete\nINFO:tensorflow:local_step=39380 global_step=39380 loss=18.1, 93.1% complete\nINFO:tensorflow:local_step=39390 global_step=39390 loss=17.6, 93.1% complete\nINFO:tensorflow:local_step=39400 global_step=39400 loss=18.8, 93.1% complete\nINFO:tensorflow:local_step=39410 global_step=39410 loss=17.9, 93.1% complete\nINFO:tensorflow:local_step=39420 global_step=39420 loss=18.8, 93.1% complete\nINFO:tensorflow:local_step=39430 global_step=39430 loss=19.2, 93.2% complete\nINFO:tensorflow:local_step=39440 global_step=39440 loss=17.5, 93.2% complete\nINFO:tensorflow:local_step=39450 global_step=39450 loss=275.9, 93.2% complete\nINFO:tensorflow:local_step=39460 global_step=39460 loss=18.0, 93.2% complete\nINFO:tensorflow:local_step=39470 global_step=39470 loss=18.2, 93.3% complete\nINFO:tensorflow:local_step=39480 global_step=39480 loss=17.9, 93.3% complete\nINFO:tensorflow:local_step=39490 global_step=39490 loss=18.9, 93.3% complete\nINFO:tensorflow:local_step=39500 global_step=39500 loss=17.6, 93.3% complete\nINFO:tensorflow:local_step=39510 global_step=39510 loss=18.2, 93.4% complete\nINFO:tensorflow:local_step=39520 global_step=39520 loss=14.5, 93.4% complete\nINFO:tensorflow:local_step=39530 global_step=39530 loss=17.8, 93.4% complete\nINFO:tensorflow:local_step=39540 global_step=39540 loss=18.9, 93.4% complete\nINFO:tensorflow:local_step=39550 global_step=39550 loss=17.9, 93.5% complete\nINFO:tensorflow:local_step=39560 global_step=39560 loss=18.9, 93.5% complete\nINFO:tensorflow:local_step=39570 global_step=39570 loss=19.2, 93.5% complete\nINFO:tensorflow:local_step=39580 global_step=39580 loss=18.1, 93.5% complete\nINFO:tensorflow:local_step=39590 global_step=39590 loss=18.0, 93.5% complete\nINFO:tensorflow:local_step=39600 global_step=39600 loss=18.6, 93.6% complete\nINFO:tensorflow:local_step=39610 global_step=39610 loss=18.3, 93.6% complete\nINFO:tensorflow:local_step=39620 global_step=39620 loss=18.8, 93.6% complete\nINFO:tensorflow:local_step=39630 global_step=39630 loss=18.0, 93.6% complete\nINFO:tensorflow:local_step=39640 global_step=39640 loss=17.7, 93.7% complete\nINFO:tensorflow:local_step=39650 global_step=39650 loss=18.5, 93.7% complete\nINFO:tensorflow:local_step=39660 global_step=39660 loss=18.8, 93.7% complete\nINFO:tensorflow:local_step=39670 global_step=39670 loss=17.8, 93.7% complete\nINFO:tensorflow:local_step=39680 global_step=39680 loss=17.8, 93.8% complete\nINFO:tensorflow:local_step=39690 global_step=39690 loss=18.3, 93.8% complete\nINFO:tensorflow:local_step=39700 global_step=39700 loss=18.9, 93.8% complete\nINFO:tensorflow:local_step=39710 global_step=39710 loss=18.7, 93.8% complete\nINFO:tensorflow:local_step=39720 global_step=39720 loss=18.4, 93.9% complete\nINFO:tensorflow:local_step=39730 global_step=39730 loss=19.3, 93.9% complete\nINFO:tensorflow:local_step=39740 global_step=39740 loss=18.3, 93.9% complete\nINFO:tensorflow:local_step=39750 global_step=39750 loss=304.9, 93.9% complete\nINFO:tensorflow:local_step=39760 global_step=39760 loss=18.2, 94.0% complete\nINFO:tensorflow:local_step=39770 global_step=39770 loss=18.5, 94.0% complete\nINFO:tensorflow:local_step=39780 global_step=39780 loss=17.9, 94.0% complete\nINFO:tensorflow:local_step=39790 
global_step=39790 loss=19.1, 94.0% complete\nINFO:tensorflow:local_step=39800 global_step=39800 loss=18.5, 94.0% complete\nINFO:tensorflow:local_step=39810 global_step=39810 loss=19.3, 94.1% complete\nINFO:tensorflow:local_step=39820 global_step=39820 loss=19.6, 94.1% complete\nINFO:tensorflow:local_step=39830 global_step=39830 loss=17.5, 94.1% complete\nINFO:tensorflow:local_step=39840 global_step=39840 loss=267.4, 94.1% complete\nINFO:tensorflow:local_step=39850 global_step=39850 loss=18.1, 94.2% complete\nINFO:tensorflow:local_step=39860 global_step=39860 loss=19.6, 94.2% complete\nINFO:tensorflow:local_step=39870 global_step=39870 loss=18.1, 94.2% complete\nINFO:tensorflow:local_step=39880 global_step=39880 loss=17.9, 94.2% complete\nINFO:tensorflow:local_step=39890 global_step=39890 loss=18.6, 94.3% complete\nINFO:tensorflow:local_step=39900 global_step=39900 loss=18.6, 94.3% complete\nINFO:tensorflow:local_step=39910 global_step=39910 loss=19.3, 94.3% complete\nINFO:tensorflow:local_step=39920 global_step=39920 loss=18.1, 94.3% complete\nINFO:tensorflow:local_step=39930 global_step=39930 loss=15.6, 94.4% complete\nINFO:tensorflow:local_step=39940 global_step=39940 loss=18.7, 94.4% complete\nINFO:tensorflow:local_step=39950 global_step=39950 loss=18.7, 94.4% complete\nINFO:tensorflow:local_step=39960 global_step=39960 loss=19.2, 94.4% complete\nINFO:tensorflow:local_step=39970 global_step=39970 loss=19.4, 94.4% complete\nINFO:tensorflow:local_step=39980 global_step=39980 loss=17.7, 94.5% complete\nINFO:tensorflow:local_step=39990 global_step=39990 loss=18.2, 94.5% complete\nINFO:tensorflow:local_step=40000 global_step=40000 loss=17.8, 94.5% complete\nINFO:tensorflow:local_step=40010 global_step=40010 loss=18.5, 94.5% complete\nINFO:tensorflow:local_step=40020 global_step=40020 loss=18.0, 94.6% complete\nINFO:tensorflow:local_step=40030 global_step=40030 loss=18.4, 94.6% complete\nINFO:tensorflow:local_step=40040 global_step=40040 loss=17.7, 94.6% complete\nINFO:tensorflow:local_step=40050 global_step=40050 loss=19.0, 94.6% complete\nINFO:tensorflow:local_step=40060 global_step=40060 loss=19.2, 94.7% complete\nINFO:tensorflow:local_step=40070 global_step=40070 loss=18.6, 94.7% complete\nINFO:tensorflow:local_step=40080 global_step=40080 loss=18.1, 94.7% complete\nINFO:tensorflow:local_step=40090 global_step=40090 loss=19.0, 94.7% complete\nINFO:tensorflow:local_step=40100 global_step=40100 loss=18.0, 94.8% complete\nINFO:tensorflow:local_step=40110 global_step=40110 loss=18.7, 94.8% complete\nINFO:tensorflow:local_step=40120 global_step=40120 loss=19.2, 94.8% complete\nINFO:tensorflow:local_step=40130 global_step=40130 loss=18.7, 94.8% complete\nINFO:tensorflow:local_step=40140 global_step=40140 loss=18.5, 94.8% complete\nINFO:tensorflow:local_step=40150 global_step=40150 loss=18.4, 94.9% complete\nINFO:tensorflow:local_step=40160 global_step=40160 loss=18.4, 94.9% complete\nINFO:tensorflow:local_step=40170 global_step=40170 loss=19.1, 94.9% complete\nINFO:tensorflow:local_step=40180 global_step=40180 loss=268.1, 94.9% complete\nINFO:tensorflow:local_step=40190 global_step=40190 loss=19.0, 95.0% complete\nINFO:tensorflow:local_step=40200 global_step=40200 loss=17.7, 95.0% complete\nINFO:tensorflow:local_step=40210 global_step=40210 loss=17.8, 95.0% complete\nINFO:tensorflow:local_step=40220 global_step=40220 loss=17.4, 95.0% complete\nINFO:tensorflow:local_step=40230 global_step=40230 loss=17.2, 95.1% complete\nINFO:tensorflow:local_step=40240 global_step=40240 loss=17.5, 95.1% 
complete\nINFO:tensorflow:local_step=40250 global_step=40250 loss=18.7, 95.1% complete\nINFO:tensorflow:local_step=40260 global_step=40260 loss=18.2, 95.1% complete\nINFO:tensorflow:local_step=40270 global_step=40270 loss=18.1, 95.2% complete\nINFO:tensorflow:local_step=40280 global_step=40280 loss=18.2, 95.2% complete\nINFO:tensorflow:local_step=40290 global_step=40290 loss=18.8, 95.2% complete\nINFO:tensorflow:local_step=40300 global_step=40300 loss=18.4, 95.2% complete\nINFO:tensorflow:local_step=40310 global_step=40310 loss=19.1, 95.3% complete\nINFO:tensorflow:local_step=40320 global_step=40320 loss=18.1, 95.3% complete\nINFO:tensorflow:local_step=40330 global_step=40330 loss=18.3, 95.3% complete\nINFO:tensorflow:local_step=40340 global_step=40340 loss=17.8, 95.3% complete\nINFO:tensorflow:local_step=40350 global_step=40350 loss=18.8, 95.3% complete\nINFO:tensorflow:local_step=40360 global_step=40360 loss=18.4, 95.4% complete\nINFO:tensorflow:local_step=40370 global_step=40370 loss=18.0, 95.4% complete\nINFO:tensorflow:local_step=40380 global_step=40380 loss=18.5, 95.4% complete\nINFO:tensorflow:local_step=40390 global_step=40390 loss=17.0, 95.4% complete\nINFO:tensorflow:local_step=40400 global_step=40400 loss=17.9, 95.5% complete\nINFO:tensorflow:local_step=40410 global_step=40410 loss=18.1, 95.5% complete\nINFO:tensorflow:local_step=40420 global_step=40420 loss=18.4, 95.5% complete\nINFO:tensorflow:local_step=40430 global_step=40430 loss=271.8, 95.5% complete\nINFO:tensorflow:local_step=40440 global_step=40440 loss=18.3, 95.6% complete\nINFO:tensorflow:local_step=40450 global_step=40450 loss=17.3, 95.6% complete\nINFO:tensorflow:local_step=40460 global_step=40460 loss=18.9, 95.6% complete\nINFO:tensorflow:local_step=40470 global_step=40470 loss=19.9, 95.6% complete\nINFO:tensorflow:local_step=40480 global_step=40480 loss=18.7, 95.7% complete\nINFO:tensorflow:local_step=40490 global_step=40490 loss=18.8, 95.7% complete\nINFO:tensorflow:local_step=40500 global_step=40500 loss=17.5, 95.7% complete\nINFO:tensorflow:local_step=40510 global_step=40510 loss=17.8, 95.7% complete\nINFO:tensorflow:local_step=40520 global_step=40520 loss=18.8, 95.7% complete\nINFO:tensorflow:local_step=40530 global_step=40530 loss=146.4, 95.8% complete\nINFO:tensorflow:local_step=40540 global_step=40540 loss=18.5, 95.8% complete\nINFO:tensorflow:local_step=40550 global_step=40550 loss=18.4, 95.8% complete\nINFO:tensorflow:local_step=40560 global_step=40560 loss=19.3, 95.8% complete\nINFO:tensorflow:local_step=40570 global_step=40570 loss=19.4, 95.9% complete\nINFO:tensorflow:local_step=40580 global_step=40580 loss=18.5, 95.9% complete\nINFO:tensorflow:local_step=40590 global_step=40590 loss=18.4, 95.9% complete\nINFO:tensorflow:local_step=40600 global_step=40600 loss=18.6, 95.9% complete\nINFO:tensorflow:local_step=40610 global_step=40610 loss=18.5, 96.0% complete\nINFO:tensorflow:local_step=40620 global_step=40620 loss=19.1, 96.0% complete\nINFO:tensorflow:local_step=40630 global_step=40630 loss=18.5, 96.0% complete\nINFO:tensorflow:local_step=40640 global_step=40640 loss=18.0, 96.0% complete\nINFO:tensorflow:local_step=40650 global_step=40650 loss=18.1, 96.1% complete\nINFO:tensorflow:local_step=40660 global_step=40660 loss=18.4, 96.1% complete\nINFO:tensorflow:local_step=40670 global_step=40670 loss=18.3, 96.1% complete\nINFO:tensorflow:local_step=40680 global_step=40680 loss=17.7, 96.1% complete\nINFO:tensorflow:local_step=40690 global_step=40690 loss=19.2, 96.1% complete\nINFO:tensorflow:local_step=40700 
global_step=40700 loss=17.9, 96.2% complete\nINFO:tensorflow:local_step=40710 global_step=40710 loss=17.5, 96.2% complete\nINFO:tensorflow:local_step=40720 global_step=40720 loss=17.6, 96.2% complete\nINFO:tensorflow:local_step=40730 global_step=40730 loss=17.8, 96.2% complete\nINFO:tensorflow:local_step=40740 global_step=40740 loss=19.3, 96.3% complete\nINFO:tensorflow:local_step=40750 global_step=40750 loss=19.3, 96.3% complete\nINFO:tensorflow:local_step=40760 global_step=40760 loss=19.0, 96.3% complete\nINFO:tensorflow:local_step=40770 global_step=40770 loss=17.6, 96.3% complete\nINFO:tensorflow:local_step=40780 global_step=40780 loss=18.1, 96.4% complete\nINFO:tensorflow:local_step=40790 global_step=40790 loss=17.8, 96.4% complete\nINFO:tensorflow:local_step=40800 global_step=40800 loss=18.1, 96.4% complete\nINFO:tensorflow:local_step=40810 global_step=40810 loss=18.9, 96.4% complete\nINFO:tensorflow:local_step=40820 global_step=40820 loss=17.9, 96.5% complete\nINFO:tensorflow:local_step=40830 global_step=40830 loss=18.3, 96.5% complete\nINFO:tensorflow:local_step=40840 global_step=40840 loss=18.5, 96.5% complete\nINFO:tensorflow:local_step=40850 global_step=40850 loss=18.7, 96.5% complete\nINFO:tensorflow:local_step=40860 global_step=40860 loss=17.8, 96.6% complete\nINFO:tensorflow:local_step=40870 global_step=40870 loss=15.3, 96.6% complete\nINFO:tensorflow:local_step=40880 global_step=40880 loss=19.6, 96.6% complete\nINFO:tensorflow:local_step=40890 global_step=40890 loss=19.0, 96.6% complete\nINFO:tensorflow:local_step=40900 global_step=40900 loss=16.7, 96.6% complete\nINFO:tensorflow:local_step=40910 global_step=40910 loss=18.9, 96.7% complete\nINFO:tensorflow:local_step=40920 global_step=40920 loss=18.3, 96.7% complete\nINFO:tensorflow:local_step=40930 global_step=40930 loss=17.9, 96.7% complete\nINFO:tensorflow:local_step=40940 global_step=40940 loss=18.4, 96.7% complete\nINFO:tensorflow:local_step=40950 global_step=40950 loss=18.3, 96.8% complete\nINFO:tensorflow:local_step=40960 global_step=40960 loss=17.7, 96.8% complete\nINFO:tensorflow:local_step=40970 global_step=40970 loss=18.0, 96.8% complete\nINFO:tensorflow:local_step=40980 global_step=40980 loss=18.2, 96.8% complete\nINFO:tensorflow:local_step=40990 global_step=40990 loss=18.5, 96.9% complete\nINFO:tensorflow:local_step=41000 global_step=41000 loss=18.8, 96.9% complete\nINFO:tensorflow:local_step=41010 global_step=41010 loss=18.6, 96.9% complete\nINFO:tensorflow:local_step=41020 global_step=41020 loss=18.7, 96.9% complete\nINFO:tensorflow:local_step=41030 global_step=41030 loss=18.2, 97.0% complete\nINFO:tensorflow:local_step=41040 global_step=41040 loss=17.9, 97.0% complete\nINFO:tensorflow:local_step=41050 global_step=41050 loss=18.2, 97.0% complete\nINFO:tensorflow:local_step=41060 global_step=41060 loss=18.2, 97.0% complete\nINFO:tensorflow:local_step=41070 global_step=41070 loss=17.9, 97.0% complete\nINFO:tensorflow:local_step=41080 global_step=41080 loss=19.2, 97.1% complete\nINFO:tensorflow:local_step=41090 global_step=41090 loss=17.7, 97.1% complete\nINFO:tensorflow:local_step=41100 global_step=41100 loss=18.0, 97.1% complete\nINFO:tensorflow:local_step=41110 global_step=41110 loss=17.9, 97.1% complete\nINFO:tensorflow:local_step=41120 global_step=41120 loss=17.8, 97.2% complete\nINFO:tensorflow:local_step=41130 global_step=41130 loss=19.0, 97.2% complete\nINFO:tensorflow:local_step=41140 global_step=41140 loss=17.6, 97.2% complete\nINFO:tensorflow:local_step=41150 global_step=41150 loss=18.0, 97.2% 
complete\nINFO:tensorflow:local_step=41160 global_step=41160 loss=17.9, 97.3% complete\nINFO:tensorflow:local_step=41170 global_step=41170 loss=18.7, 97.3% complete\nINFO:tensorflow:local_step=41180 global_step=41180 loss=15.4, 97.3% complete\nINFO:tensorflow:local_step=41190 global_step=41190 loss=18.1, 97.3% complete\nINFO:tensorflow:local_step=41200 global_step=41200 loss=18.1, 97.4% complete\nINFO:tensorflow:local_step=41210 global_step=41210 loss=19.3, 97.4% complete\nINFO:tensorflow:local_step=41220 global_step=41220 loss=19.0, 97.4% complete\nINFO:tensorflow:local_step=41230 global_step=41230 loss=260.4, 97.4% complete\nINFO:tensorflow:local_step=41240 global_step=41240 loss=18.8, 97.4% complete\nINFO:tensorflow:local_step=41250 global_step=41250 loss=17.8, 97.5% complete\nINFO:tensorflow:local_step=41260 global_step=41260 loss=18.3, 97.5% complete\nINFO:tensorflow:local_step=41270 global_step=41270 loss=17.3, 97.5% complete\nINFO:tensorflow:local_step=41280 global_step=41280 loss=18.2, 97.5% complete\nINFO:tensorflow:local_step=41290 global_step=41290 loss=19.4, 97.6% complete\nINFO:tensorflow:local_step=41300 global_step=41300 loss=19.2, 97.6% complete\nINFO:tensorflow:local_step=41310 global_step=41310 loss=18.5, 97.6% complete\nINFO:tensorflow:local_step=41320 global_step=41320 loss=247.6, 97.6% complete\nINFO:tensorflow:local_step=41330 global_step=41330 loss=19.8, 97.7% complete\nINFO:tensorflow:local_step=41340 global_step=41340 loss=18.0, 97.7% complete\nINFO:tensorflow:local_step=41350 global_step=41350 loss=17.8, 97.7% complete\nINFO:tensorflow:local_step=41360 global_step=41360 loss=17.8, 97.7% complete\nINFO:tensorflow:local_step=41370 global_step=41370 loss=17.6, 97.8% complete\nINFO:tensorflow:local_step=41380 global_step=41380 loss=18.7, 97.8% complete\nINFO:tensorflow:local_step=41390 global_step=41390 loss=17.5, 97.8% complete\nINFO:tensorflow:local_step=41400 global_step=41400 loss=18.0, 97.8% complete\nINFO:tensorflow:local_step=41410 global_step=41410 loss=19.0, 97.8% complete\nINFO:tensorflow:local_step=41420 global_step=41420 loss=18.6, 97.9% complete\nINFO:tensorflow:local_step=41430 global_step=41430 loss=18.2, 97.9% complete\nINFO:tensorflow:local_step=41440 global_step=41440 loss=18.9, 97.9% complete\nINFO:tensorflow:local_step=41450 global_step=41450 loss=18.4, 97.9% complete\nINFO:tensorflow:local_step=41460 global_step=41460 loss=18.3, 98.0% complete\nINFO:tensorflow:local_step=41470 global_step=41470 loss=18.7, 98.0% complete\nINFO:tensorflow:local_step=41480 global_step=41480 loss=19.0, 98.0% complete\nINFO:tensorflow:local_step=41490 global_step=41490 loss=18.1, 98.0% complete\nINFO:tensorflow:local_step=41500 global_step=41500 loss=19.1, 98.1% complete\nINFO:tensorflow:local_step=41510 global_step=41510 loss=18.5, 98.1% complete\nINFO:tensorflow:local_step=41520 global_step=41520 loss=18.3, 98.1% complete\nINFO:tensorflow:local_step=41530 global_step=41530 loss=18.9, 98.1% complete\nINFO:tensorflow:local_step=41540 global_step=41540 loss=18.3, 98.2% complete\nINFO:tensorflow:local_step=41550 global_step=41550 loss=18.0, 98.2% complete\nINFO:tensorflow:local_step=41560 global_step=41560 loss=18.9, 98.2% complete\nINFO:tensorflow:local_step=41570 global_step=41570 loss=18.1, 98.2% complete\nINFO:tensorflow:local_step=41580 global_step=41580 loss=18.3, 98.3% complete\nINFO:tensorflow:local_step=41590 global_step=41590 loss=273.2, 98.3% complete\nINFO:tensorflow:local_step=41600 global_step=41600 loss=18.5, 98.3% 
complete\nINFO:tensorflow:local_step=41610 global_step=41610 loss=15.3, 98.3% complete\nINFO:tensorflow:local_step=41620 global_step=41620 loss=19.3, 98.3% complete\nINFO:tensorflow:local_step=41630 global_step=41630 loss=18.0, 98.4% complete\nINFO:tensorflow:local_step=41640 global_step=41640 loss=19.5, 98.4% complete\nINFO:tensorflow:local_step=41650 global_step=41650 loss=18.3, 98.4% complete\nINFO:tensorflow:local_step=41660 global_step=41660 loss=17.3, 98.4% complete\nINFO:tensorflow:local_step=41670 global_step=41670 loss=19.3, 98.5% complete\nINFO:tensorflow:local_step=41680 global_step=41680 loss=17.9, 98.5% complete\nINFO:tensorflow:local_step=41690 global_step=41690 loss=18.4, 98.5% complete\nINFO:tensorflow:local_step=41700 global_step=41700 loss=18.0, 98.5% complete\nINFO:tensorflow:local_step=41710 global_step=41710 loss=18.9, 98.6% complete\nINFO:tensorflow:local_step=41720 global_step=41720 loss=17.4, 98.6% complete\nINFO:tensorflow:local_step=41730 global_step=41730 loss=17.9, 98.6% complete\nINFO:tensorflow:local_step=41740 global_step=41740 loss=18.7, 98.6% complete\nINFO:tensorflow:local_step=41750 global_step=41750 loss=18.2, 98.7% complete\nINFO:tensorflow:local_step=41760 global_step=41760 loss=19.3, 98.7% complete\nINFO:tensorflow:local_step=41770 global_step=41770 loss=18.2, 98.7% complete\nINFO:tensorflow:local_step=41780 global_step=41780 loss=18.4, 98.7% complete\nINFO:tensorflow:local_step=41790 global_step=41790 loss=18.5, 98.7% complete\nINFO:tensorflow:local_step=41800 global_step=41800 loss=18.6, 98.8% complete\nINFO:tensorflow:local_step=41810 global_step=41810 loss=18.5, 98.8% complete\nINFO:tensorflow:local_step=41820 global_step=41820 loss=18.5, 98.8% complete\nINFO:tensorflow:local_step=41830 global_step=41830 loss=270.7, 98.8% complete\nINFO:tensorflow:local_step=41840 global_step=41840 loss=18.0, 98.9% complete\nINFO:tensorflow:local_step=41850 global_step=41850 loss=19.1, 98.9% complete\nINFO:tensorflow:local_step=41860 global_step=41860 loss=19.1, 98.9% complete\nINFO:tensorflow:local_step=41870 global_step=41870 loss=240.9, 98.9% complete\nINFO:tensorflow:local_step=41880 global_step=41880 loss=19.4, 99.0% complete\nINFO:tensorflow:local_step=41890 global_step=41890 loss=18.6, 99.0% complete\nINFO:tensorflow:local_step=41900 global_step=41900 loss=17.2, 99.0% complete\nINFO:tensorflow:local_step=41910 global_step=41910 loss=18.1, 99.0% complete\nINFO:tensorflow:local_step=41920 global_step=41920 loss=17.7, 99.1% complete\nINFO:tensorflow:local_step=41930 global_step=41930 loss=19.9, 99.1% complete\nINFO:tensorflow:local_step=41940 global_step=41940 loss=17.6, 99.1% complete\nINFO:tensorflow:local_step=41950 global_step=41950 loss=21.5, 99.1% complete\nINFO:tensorflow:local_step=41960 global_step=41960 loss=18.5, 99.1% complete\nINFO:tensorflow:local_step=41970 global_step=41970 loss=19.3, 99.2% complete\nINFO:tensorflow:local_step=41980 global_step=41980 loss=18.4, 99.2% complete\nINFO:tensorflow:local_step=41990 global_step=41990 loss=17.8, 99.2% complete\nINFO:tensorflow:local_step=42000 global_step=42000 loss=18.1, 99.2% complete\nINFO:tensorflow:local_step=42010 global_step=42010 loss=18.6, 99.3% complete\nINFO:tensorflow:local_step=42020 global_step=42020 loss=17.9, 99.3% complete\nINFO:tensorflow:local_step=42030 global_step=42030 loss=18.3, 99.3% complete\nINFO:tensorflow:local_step=42040 global_step=42040 loss=18.1, 99.3% complete\nINFO:tensorflow:local_step=42050 global_step=42050 loss=17.6, 99.4% complete\nINFO:tensorflow:local_step=42060 
global_step=42060 loss=19.1, 99.4% complete\nINFO:tensorflow:local_step=42070 global_step=42070 loss=17.6, 99.4% complete\nINFO:tensorflow:local_step=42080 global_step=42080 loss=18.6, 99.4% complete\nINFO:tensorflow:local_step=42090 global_step=42090 loss=19.0, 99.5% complete\nINFO:tensorflow:local_step=42100 global_step=42100 loss=18.5, 99.5% complete\nINFO:tensorflow:local_step=42110 global_step=42110 loss=19.0, 99.5% complete\nINFO:tensorflow:local_step=42120 global_step=42120 loss=17.8, 99.5% complete\nINFO:tensorflow:local_step=42130 global_step=42130 loss=18.7, 99.6% complete\nINFO:tensorflow:local_step=42140 global_step=42140 loss=18.7, 99.6% complete\nINFO:tensorflow:local_step=42150 global_step=42150 loss=17.8, 99.6% complete\nINFO:tensorflow:local_step=42160 global_step=42160 loss=15.4, 99.6% complete\nINFO:tensorflow:local_step=42170 global_step=42170 loss=18.5, 99.6% complete\nINFO:tensorflow:local_step=42180 global_step=42180 loss=18.8, 99.7% complete\nINFO:tensorflow:local_step=42190 global_step=42190 loss=18.5, 99.7% complete\nINFO:tensorflow:local_step=42200 global_step=42200 loss=18.0, 99.7% complete\nINFO:tensorflow:local_step=42210 global_step=42210 loss=18.9, 99.7% complete\nINFO:tensorflow:local_step=42220 global_step=42220 loss=18.9, 99.8% complete\nINFO:tensorflow:local_step=42230 global_step=42230 loss=17.8, 99.8% complete\nINFO:tensorflow:local_step=42240 global_step=42240 loss=18.8, 99.8% complete\nINFO:tensorflow:local_step=42250 global_step=42250 loss=17.9, 99.8% complete\nINFO:tensorflow:local_step=42260 global_step=42260 loss=17.9, 99.9% complete\nINFO:tensorflow:local_step=42270 global_step=42270 loss=18.0, 99.9% complete\nINFO:tensorflow:local_step=42280 global_step=42280 loss=21.3, 99.9% complete\nINFO:tensorflow:local_step=42290 global_step=42290 loss=17.8, 99.9% complete\nINFO:tensorflow:local_step=42300 global_step=42300 loss=18.9, 100.0% complete\nINFO:tensorflow:local_step=42310 global_step=42310 loss=18.2, 100.0% complete\nINFO:tensorflow:local_step=42320 global_step=42320 loss=18.7, 100.0% complete\nWARNING:tensorflow:Issue encountered when serializing global_step.\nType is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.\n'Tensor' object has no attribute 'to_proto'\n"
]
],
[
[
"Checking the context of the 'vec' directory. Should contain checkpoints of the model plus tsv files for column and row embeddings.",
"_____no_output_____"
]
],
[
[
"os.listdir(vec_path)\n",
"_____no_output_____"
]
],
[
[
"Converting tsv to bin:",
"_____no_output_____"
]
],
[
[
"!python /content/tutorial/scripts/swivel/text2bin.py --vocab={vec_path}vocab.txt --output={vec_path}vecs.bin \\\n {vec_path}row_embedding.tsv \\\n {vec_path}col_embedding.tsv",
"executing text2bin\nmerging files ['/content/tutorial/lit/vec/row_embedding.tsv', '/content/tutorial/lit/vec/col_embedding.tsv'] into output bin\n"
],
[
"%ls {vec_path}",
"checkpoint\ncol_embedding.tsv\nevents.out.tfevents.1539004459.46972dad0a54\ngraph.pbtxt\nmodel.ckpt-0.data-00000-of-00001\nmodel.ckpt-0.index\nmodel.ckpt-0.meta\nmodel.ckpt-42320.data-00000-of-00001\nmodel.ckpt-42320.index\nmodel.ckpt-42320.meta\nrow_embedding.tsv\nvecs.bin\nvocab.txt\n"
]
],
[
[
"### Read stored binary embeddings and inspect them",
"_____no_output_____"
]
],
[
[
"import importlib.util\n\nspec = importlib.util.spec_from_file_location(\"vecs\", \"/content/tutorial/scripts/swivel/vecs.py\")\nm = importlib.util.module_from_spec(spec)\nspec.loader.exec_module(m)\nshakespeare_vecs = m.Vecs(vec_path + 'vocab.txt', vec_path + 'vecs.bin')",
"Opening vector with expected size 23552 from file /content/tutorial/lit/vec/vocab.txt\nvocab size 23552 (unique 23552)\nread rows\n"
]
],
[
[
"##Basic method to print the k nearest neighbors for a given word",
"_____no_output_____"
]
],
[
[
"def k_neighbors(vec, word, k=10):\n res = vec.neighbors(word)\n if not res:\n print('%s is not in the vocabulary, try e.g. %s' % (word, vecs.random_word_in_vocab()))\n else:\n for word, sim in res[:10]:\n print('%0.4f: %s' % (sim, word))",
"_____no_output_____"
],
[
"k_neighbors(shakespeare_vecs, 'strife')",
"1.0000: strife\n0.4599: tutors\n0.3981: tumultuous\n0.3530: future\n0.3368: daughters’\n0.3229: cease\n0.3018: Nought\n0.2866: strike.\n0.2852: War\n0.2775: nature.\n"
],
[
"k_neighbors(shakespeare_vecs,'youth')",
"1.0000: youth\n0.3436: tall,\n0.3350: vanity,\n0.2945: idleness.\n0.2929: womb;\n0.2847: tall\n0.2823: suffering\n0.2742: stillness\n0.2671: flow'ring\n0.2671: observation\n"
]
],
[
[
"## Load vecsigrafo from UMBC over WordNet",
"_____no_output_____"
]
],
[
[
"%ls ",
"\u001b[0m\u001b[01;34mcoocs\u001b[0m/ shakespeare_complete_works.txt \u001b[01;34mswivel\u001b[0m/ \u001b[01;34mvec\u001b[0m/ wget-log wget-log.1\n"
],
[
"!wget https://zenodo.org/record/1446214/files/vecsigrafo_umbc_tlgs_ls_f_6e_160d_row_embedding.tar.gz\n%ls",
"\nRedirecting output to ‘wget-log.2’.\n\u001b[0m\u001b[01;34mcoocs\u001b[0m/\nshakespeare_complete_works.txt\n\u001b[01;34mswivel\u001b[0m/\n\u001b[01;34mvec\u001b[0m/\nvecsigrafo_umbc_tlgs_ls_f_6e_160d_row_embedding.tar.gz\nwget-log\nwget-log.1\nwget-log.2\n"
],
[
"!tar -xvzf vecsigrafo_umbc_tlgs_ls_f_6e_160d_row_embedding.tar.gz\n!rm vecsigrafo_umbc_tlgs_ls_f_6e_160d_row_embedding.tar.gz\n",
"vecsi_tlgs_wnscd_ls_f_6e_160d/row_embedding.tsv\n"
],
[
"umbc_wn_vec_path = '/content/tutorial/lit/vecsi_tlgs_wnscd_ls_f_6e_160d/'",
"_____no_output_____"
]
],
[
[
"Extracting the vocabulary from the .tsv file:",
"_____no_output_____"
]
],
[
[
"with open(umbc_wn_vec_path + 'vocab.txt', 'w', encoding='utf_8') as f:\n with open(umbc_wn_vec_path + 'row_embedding.tsv', 'r', encoding='utf_8') as vec_lines:\n vocab = [line.split('\\t')[0].strip() for line in vec_lines]\n for word in vocab:\n print(word, file=f)",
"_____no_output_____"
]
],
[
[
"Converting tsv to bin:",
"_____no_output_____"
]
],
[
[
"!python /content/tutorial/scripts/swivel/text2bin.py --vocab={umbc_wn_vec_path}vocab.txt --output={umbc_wn_vec_path}vecs.bin \\\n {umbc_wn_vec_path}row_embedding.tsv ",
"executing text2bin\nmerging files ['/content/tutorial/lit/vecsi_tlgs_wnscd_ls_f_6e_160d/row_embedding.tsv'] into output bin\n"
],
[
"%ls",
"\u001b[0m\u001b[01;34mcoocs\u001b[0m/ \u001b[01;34mvec\u001b[0m/ wget-log.1\nshakespeare_complete_works.txt \u001b[01;34mvecsi_tlgs_wnscd_ls_f_6e_160d\u001b[0m/ wget-log.2\n\u001b[01;34mswivel\u001b[0m/ wget-log\n"
],
[
"umbc_wn_vecs = m.Vecs(umbc_wn_vec_path + 'vocab.txt', umbc_wn_vec_path + 'vecs.bin')",
"Opening vector with expected size 1499136 from file /content/tutorial/lit/vecsi_tlgs_wnscd_ls_f_6e_160d/vocab.txt\nvocab size 1499136 (unique 1499125)\nread rows\n"
],
[
"k_neighbors(umbc_wn_vecs, 'lem_California')",
"1.0000: lem_California\n0.6301: lem_Central Valley\n0.5959: lem_University of California\n0.5542: lem_Southern California\n0.5254: lem_Santa Cruz\n0.5241: lem_Astro Aerospace\n0.5168: lem_San Francisco Bay\n0.5092: lem_San Diego County\n0.5074: lem_Santa Barbara\n0.5069: lem_Santa Rosa\n"
]
],
[
[
"# Add your solution to the proposed exercise here",
"_____no_output_____"
],
[
"Follow the instructions given in the prvious lesson (*Vecsigrafos for curating and interlinking knowledge graphs*) to find correlation between terms in old Enlgish extracted from the Shakespeare corpus and terms in modern English extracted from UMBC. You will need to generate a dictionary relating pairs of lemmas between the two vocabularies and use to produce a pair of translation matrices to transform vectors from one vector space to the other. Then apply the k_neighbors method to identify the correlations.",
"_____no_output_____"
],
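[
"The cell below is a minimal sketch of one possible approach, not a reference solution. The seed dictionary pairs are placeholders, and it assumes the `Vecs` class exposes a `lookup(word)` method returning the row vector for a word (adapt if your copy of `vecs.py` names it differently). A translation matrix is fitted with ordinary least squares; the two spaces may have different dimensionalities, so the matrix is in general rectangular.",
"_____no_output_____"
],
[
"import numpy as np\n\n# Placeholder seed dictionary: (Shakespeare-corpus token, UMBC lemma) pairs.\n# Replace these with real aligned pairs for meaningful results.\nseed_pairs = [('youth', 'lem_youth'), ('strife', 'lem_strife'), ('king', 'lem_king')]\n\n# Stack the corresponding vectors from each space, skipping out-of-vocabulary pairs.\nsrc, tgt, kept = [], [], []\nfor shak_word, umbc_word in seed_pairs:\n    u = shakespeare_vecs.lookup(shak_word)\n    v = umbc_wn_vecs.lookup(umbc_word)\n    if u is not None and v is not None:\n        src.append(np.asarray(u).ravel())\n        tgt.append(np.asarray(v).ravel())\n        kept.append(umbc_word)\nsrc, tgt = np.array(src), np.array(tgt)\n\n# Least-squares translation matrix W mapping the Shakespeare space into the\n# UMBC space; the reverse mapping is fitted the same way with src/tgt swapped.\nW, _, _, _ = np.linalg.lstsq(src, tgt, rcond=None)\n\n# Project a Shakespeare vector and check which seed target it lands closest to.\nprojected = np.asarray(shakespeare_vecs.lookup('strife')).ravel() @ W\nsims = tgt @ projected / (np.linalg.norm(tgt, axis=1) * np.linalg.norm(projected) + 1e-9)\nprint(sorted(zip(kept, sims.round(4)), key=lambda p: -p[1]))",
"_____no_output_____"
],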
[
"# Conclusion",
"_____no_output_____"
],
[
"This notebook proposes the use of Shakespeare's complete works and UMBC to provide the student with embeddings that can be exploited for different operations between the two vector spaces. Particularly, we propose to identify terms and their correlations over such spaces.",
"_____no_output_____"
],
[
"# Acknowledgements",
"_____no_output_____"
],
[
"In memory of Dr. Jack Brandabur, whose passion for Shakespeare and Cervantes inspired this notebook.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0421b9d6644d44fa371f837a0ce026671189fbd | 42,392 | ipynb | Jupyter Notebook | t81_558_class_04_3_regression.ipynb | akramsystems/t81_558_deep_learning | a025888f0f609402746b98e40abc9d1ed15c5e87 | [
"Apache-2.0"
] | 1 | 2020-03-08T12:54:45.000Z | 2020-03-08T12:54:45.000Z | t81_558_class_04_3_regression.ipynb | KaShel06/t81_558_deep_learning | 8beec2fdd305dd7e8937ac9da9e7c59f5fc4bb78 | [
"Apache-2.0"
] | null | null | null | t81_558_class_04_3_regression.ipynb | KaShel06/t81_558_deep_learning | 8beec2fdd305dd7e8937ac9da9e7c59f5fc4bb78 | [
"Apache-2.0"
] | 1 | 2021-07-21T17:58:31.000Z | 2021-07-21T17:58:31.000Z | 74.241681 | 21,308 | 0.760285 | [
[
[
"<a href=\"https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_04_3_regression.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# T81-558: Applications of Deep Neural Networks\n**Module 4: Training for Tabular Data**\n* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)\n* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).",
"_____no_output_____"
],
[
"# Module 4 Material\n\n* Part 4.1: Encoding a Feature Vector for Keras Deep Learning [[Video]](https://www.youtube.com/watch?v=Vxz-gfs9nMQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_1_feature_encode.ipynb)\n* Part 4.2: Keras Multiclass Classification for Deep Neural Networks with ROC and AUC [[Video]](https://www.youtube.com/watch?v=-f3bg9dLMks&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_2_multi_class.ipynb)\n* **Part 4.3: Keras Regression for Deep Neural Networks with RMSE** [[Video]](https://www.youtube.com/watch?v=wNhBUC6X5-E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_3_regression.ipynb)\n* Part 4.4: Backpropagation, Nesterov Momentum, and ADAM Neural Network Training [[Video]](https://www.youtube.com/watch?v=VbDg8aBgpck&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_4_backprop.ipynb)\n* Part 4.5: Neural Network RMSE and Log Loss Error Calculation from Scratch [[Video]](https://www.youtube.com/watch?v=wmQX1t2PHJc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_5_rmse_logloss.ipynb)",
"_____no_output_____"
],
[
"# Google CoLab Instructions\n\nThe following code ensures that Google CoLab is running the correct version of TensorFlow.",
"_____no_output_____"
]
],
[
[
"try:\n %tensorflow_version 2.x\n COLAB = True\n print(\"Note: using Google CoLab\")\nexcept:\n print(\"Note: not using Google CoLab\")\n COLAB = False",
"Note: not using Google CoLab\n"
]
],
[
[
"# Part 4.3: Keras Regression for Deep Neural Networks with RMSE\n\nRegression results are evaluated differently than classification. Consider the following code that trains a neural network for regression on the data set **jh-simple-dataset.csv**. ",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom scipy.stats import zscore\nfrom sklearn.model_selection import train_test_split\nimport matplotlib.pyplot as plt\n\n# Read the data set\ndf = pd.read_csv(\n \"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv\",\n na_values=['NA','?'])\n\n# Generate dummies for job\ndf = pd.concat([df,pd.get_dummies(df['job'],prefix=\"job\")],axis=1)\ndf.drop('job', axis=1, inplace=True)\n\n# Generate dummies for area\ndf = pd.concat([df,pd.get_dummies(df['area'],prefix=\"area\")],axis=1)\ndf.drop('area', axis=1, inplace=True)\n\n# Generate dummies for product\ndf = pd.concat([df,pd.get_dummies(df['product'],prefix=\"product\")],axis=1)\ndf.drop('product', axis=1, inplace=True)\n\n# Missing values for income\nmed = df['income'].median()\ndf['income'] = df['income'].fillna(med)\n\n# Standardize ranges\ndf['income'] = zscore(df['income'])\ndf['aspect'] = zscore(df['aspect'])\ndf['save_rate'] = zscore(df['save_rate'])\ndf['subscriptions'] = zscore(df['subscriptions'])\n\n# Convert to numpy - Classification\nx_columns = df.columns.drop('age').drop('id')\nx = df[x_columns].values\ny = df['age'].values\n\n# Create train/test\nx_train, x_test, y_train, y_test = train_test_split( \n x, y, test_size=0.25, random_state=42)",
"_____no_output_____"
],
[
"from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Activation\nfrom tensorflow.keras.callbacks import EarlyStopping\n\n# Build the neural network\nmodel = Sequential()\nmodel.add(Dense(25, input_dim=x.shape[1], activation='relu')) # Hidden 1\nmodel.add(Dense(10, activation='relu')) # Hidden 2\nmodel.add(Dense(1)) # Output\nmodel.compile(loss='mean_squared_error', optimizer='adam')\nmonitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, \n patience=5, verbose=1, mode='auto', restore_best_weights=True)\nmodel.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor],verbose=2,epochs=1000)\n",
"Train on 1500 samples, validate on 500 samples\nEpoch 1/1000\n1500/1500 - 1s - loss: 1905.4454 - val_loss: 1628.1341\nEpoch 2/1000\n1500/1500 - 0s - loss: 1331.4213 - val_loss: 889.0575\nEpoch 3/1000\n1500/1500 - 0s - loss: 554.8426 - val_loss: 303.7261\nEpoch 4/1000\n1500/1500 - 0s - loss: 276.2087 - val_loss: 241.2495\nEpoch 5/1000\n1500/1500 - 0s - loss: 232.2832 - val_loss: 208.2143\nEpoch 6/1000\n1500/1500 - 0s - loss: 198.5331 - val_loss: 179.5262\nEpoch 7/1000\n1500/1500 - 0s - loss: 169.0791 - val_loss: 154.5270\nEpoch 8/1000\n1500/1500 - 0s - loss: 144.1286 - val_loss: 132.8691\nEpoch 9/1000\n1500/1500 - 0s - loss: 122.9873 - val_loss: 115.0928\nEpoch 10/1000\n1500/1500 - 0s - loss: 104.7249 - val_loss: 98.7375\nEpoch 11/1000\n1500/1500 - 0s - loss: 89.8292 - val_loss: 86.2749\nEpoch 12/1000\n1500/1500 - 0s - loss: 77.3071 - val_loss: 75.0022\nEpoch 13/1000\n1500/1500 - 0s - loss: 67.0604 - val_loss: 66.1396\nEpoch 14/1000\n1500/1500 - 0s - loss: 58.9584 - val_loss: 58.4367\nEpoch 15/1000\n1500/1500 - 0s - loss: 51.2491 - val_loss: 52.7136\nEpoch 16/1000\n1500/1500 - 0s - loss: 45.1765 - val_loss: 46.5179\nEpoch 17/1000\n1500/1500 - 0s - loss: 39.8843 - val_loss: 41.3721\nEpoch 18/1000\n1500/1500 - 0s - loss: 35.1468 - val_loss: 37.2132\nEpoch 19/1000\n1500/1500 - 0s - loss: 31.1755 - val_loss: 33.0697\nEpoch 20/1000\n1500/1500 - 0s - loss: 27.6307 - val_loss: 30.3131\nEpoch 21/1000\n1500/1500 - 0s - loss: 24.8457 - val_loss: 26.9474\nEpoch 22/1000\n1500/1500 - 0s - loss: 22.4056 - val_loss: 24.3656\nEpoch 23/1000\n1500/1500 - 0s - loss: 20.3071 - val_loss: 22.1642\nEpoch 24/1000\n1500/1500 - 0s - loss: 18.5446 - val_loss: 20.4782\nEpoch 25/1000\n1500/1500 - 0s - loss: 17.1571 - val_loss: 18.8670\nEpoch 26/1000\n1500/1500 - 0s - loss: 15.9407 - val_loss: 17.6862\nEpoch 27/1000\n1500/1500 - 0s - loss: 14.9866 - val_loss: 16.5275\nEpoch 28/1000\n1500/1500 - 0s - loss: 14.1251 - val_loss: 15.6342\nEpoch 29/1000\n1500/1500 - 0s - loss: 13.4655 - val_loss: 14.8625\nEpoch 30/1000\n1500/1500 - 0s - loss: 12.8994 - val_loss: 14.2826\nEpoch 31/1000\n1500/1500 - 0s - loss: 12.5566 - val_loss: 13.6121\nEpoch 32/1000\n1500/1500 - 0s - loss: 12.0077 - val_loss: 13.3087\nEpoch 33/1000\n1500/1500 - 0s - loss: 11.5357 - val_loss: 12.6593\nEpoch 34/1000\n1500/1500 - 0s - loss: 11.2365 - val_loss: 12.1849\nEpoch 35/1000\n1500/1500 - 0s - loss: 10.8074 - val_loss: 11.9388\nEpoch 36/1000\n1500/1500 - 0s - loss: 10.5593 - val_loss: 11.4006\nEpoch 37/1000\n1500/1500 - 0s - loss: 10.2093 - val_loss: 10.9751\nEpoch 38/1000\n1500/1500 - 0s - loss: 9.8386 - val_loss: 10.8651\nEpoch 39/1000\n1500/1500 - 0s - loss: 9.5938 - val_loss: 10.5728\nEpoch 40/1000\n1500/1500 - 0s - loss: 9.1488 - val_loss: 9.8661\nEpoch 41/1000\n1500/1500 - 0s - loss: 8.8920 - val_loss: 9.5228\nEpoch 42/1000\n1500/1500 - 0s - loss: 8.5156 - val_loss: 9.1506\nEpoch 43/1000\n1500/1500 - 0s - loss: 8.2628 - val_loss: 8.9486\nEpoch 44/1000\n1500/1500 - 0s - loss: 7.9219 - val_loss: 8.5034\nEpoch 45/1000\n1500/1500 - 0s - loss: 7.7077 - val_loss: 8.0760\nEpoch 46/1000\n1500/1500 - 0s - loss: 7.3165 - val_loss: 7.6620\nEpoch 47/1000\n1500/1500 - 0s - loss: 7.0259 - val_loss: 7.4933\nEpoch 48/1000\n1500/1500 - 0s - loss: 6.7422 - val_loss: 7.0583\nEpoch 49/1000\n1500/1500 - 0s - loss: 6.5163 - val_loss: 6.8024\nEpoch 50/1000\n1500/1500 - 0s - loss: 6.2633 - val_loss: 7.3045\nEpoch 51/1000\n1500/1500 - 0s - loss: 6.0029 - val_loss: 6.2712\nEpoch 52/1000\n1500/1500 - 0s - loss: 5.6791 - val_loss: 5.9342\nEpoch 53/1000\n1500/1500 - 0s - 
loss: 5.4798 - val_loss: 6.0110\nEpoch 54/1000\n1500/1500 - 0s - loss: 5.2115 - val_loss: 5.3928\nEpoch 55/1000\n1500/1500 - 0s - loss: 4.9592 - val_loss: 5.2215\nEpoch 56/1000\n1500/1500 - 0s - loss: 4.7189 - val_loss: 5.0103\nEpoch 57/1000\n1500/1500 - 0s - loss: 4.4683 - val_loss: 4.7098\nEpoch 58/1000\n1500/1500 - 0s - loss: 4.2650 - val_loss: 4.5259\nEpoch 59/1000\n1500/1500 - 0s - loss: 4.0953 - val_loss: 4.4263\nEpoch 60/1000\n1500/1500 - 0s - loss: 3.8027 - val_loss: 4.1103\nEpoch 61/1000\n1500/1500 - 0s - loss: 3.5759 - val_loss: 3.7770\nEpoch 62/1000\n1500/1500 - 0s - loss: 3.3755 - val_loss: 3.5737\nEpoch 63/1000\n1500/1500 - 0s - loss: 3.1781 - val_loss: 3.4833\nEpoch 64/1000\n1500/1500 - 0s - loss: 3.0001 - val_loss: 3.2246\nEpoch 65/1000\n1500/1500 - 0s - loss: 2.7691 - val_loss: 3.1021\nEpoch 66/1000\n1500/1500 - 0s - loss: 2.6227 - val_loss: 2.8215\nEpoch 67/1000\n1500/1500 - 0s - loss: 2.4682 - val_loss: 2.7528\nEpoch 68/1000\n1500/1500 - 0s - loss: 2.3243 - val_loss: 2.5394\nEpoch 69/1000\n1500/1500 - 0s - loss: 2.1664 - val_loss: 2.3886\nEpoch 70/1000\n1500/1500 - 0s - loss: 2.0377 - val_loss: 2.2536\nEpoch 71/1000\n1500/1500 - 0s - loss: 1.8845 - val_loss: 2.2354\nEpoch 72/1000\n1500/1500 - 0s - loss: 1.7931 - val_loss: 2.0831\nEpoch 73/1000\n1500/1500 - 0s - loss: 1.6889 - val_loss: 1.8866\nEpoch 74/1000\n1500/1500 - 0s - loss: 1.5820 - val_loss: 1.7964\nEpoch 75/1000\n1500/1500 - 0s - loss: 1.5085 - val_loss: 1.7138\nEpoch 76/1000\n1500/1500 - 0s - loss: 1.4159 - val_loss: 1.6468\nEpoch 77/1000\n1500/1500 - 0s - loss: 1.3606 - val_loss: 1.5906\nEpoch 78/1000\n1500/1500 - 0s - loss: 1.2652 - val_loss: 1.5063\nEpoch 79/1000\n1500/1500 - 0s - loss: 1.1937 - val_loss: 1.4506\nEpoch 80/1000\n1500/1500 - 0s - loss: 1.1180 - val_loss: 1.4817\nEpoch 81/1000\n1500/1500 - 0s - loss: 1.1412 - val_loss: 1.2800\nEpoch 82/1000\n1500/1500 - 0s - loss: 1.0385 - val_loss: 1.2412\nEpoch 83/1000\n1500/1500 - 0s - loss: 0.9846 - val_loss: 1.1891\nEpoch 84/1000\n1500/1500 - 0s - loss: 0.9937 - val_loss: 1.1322\nEpoch 85/1000\n1500/1500 - 0s - loss: 0.8915 - val_loss: 1.0847\nEpoch 86/1000\n1500/1500 - 0s - loss: 0.8562 - val_loss: 1.1110\nEpoch 87/1000\n1500/1500 - 0s - loss: 0.8468 - val_loss: 1.0686\nEpoch 88/1000\n1500/1500 - 0s - loss: 0.7947 - val_loss: 0.9805\nEpoch 89/1000\n1500/1500 - 0s - loss: 0.7807 - val_loss: 0.9463\nEpoch 90/1000\n1500/1500 - 0s - loss: 0.7502 - val_loss: 0.9965\nEpoch 91/1000\n1500/1500 - 0s - loss: 0.7529 - val_loss: 0.9532\nEpoch 92/1000\n1500/1500 - 0s - loss: 0.6857 - val_loss: 0.8712\nEpoch 93/1000\n1500/1500 - 0s - loss: 0.6717 - val_loss: 0.8498\nEpoch 94/1000\n1500/1500 - 0s - loss: 0.6869 - val_loss: 0.8518\nEpoch 95/1000\n1500/1500 - 0s - loss: 0.6626 - val_loss: 0.8275\nEpoch 96/1000\n1500/1500 - 0s - loss: 0.6308 - val_loss: 0.7850\nEpoch 97/1000\n1500/1500 - 0s - loss: 0.6056 - val_loss: 0.7708\nEpoch 98/1000\n1500/1500 - 0s - loss: 0.5991 - val_loss: 0.7643\nEpoch 99/1000\n1500/1500 - 0s - loss: 0.6102 - val_loss: 0.8104\nEpoch 100/1000\n1500/1500 - 0s - loss: 0.5647 - val_loss: 0.7227\nEpoch 101/1000\n1500/1500 - 0s - loss: 0.5474 - val_loss: 0.7107\nEpoch 102/1000\n1500/1500 - 0s - loss: 0.5395 - val_loss: 0.6847\nEpoch 103/1000\n1500/1500 - 0s - loss: 0.5350 - val_loss: 0.7383\nEpoch 104/1000\n1500/1500 - 0s - loss: 0.5551 - val_loss: 0.6698\nEpoch 105/1000\n1500/1500 - 0s - loss: 0.5032 - val_loss: 0.6520\nEpoch 106/1000\n1500/1500 - 0s - loss: 0.5418 - val_loss: 0.7518\nEpoch 107/1000\n1500/1500 - 0s - loss: 0.4949 - val_loss: 
0.6307\nEpoch 108/1000\n1500/1500 - 0s - loss: 0.5166 - val_loss: 0.6741\nEpoch 109/1000\n1500/1500 - 0s - loss: 0.4992 - val_loss: 0.6195\nEpoch 110/1000\n1500/1500 - 0s - loss: 0.4610 - val_loss: 0.6268\nEpoch 111/1000\n1500/1500 - 0s - loss: 0.4554 - val_loss: 0.5956\nEpoch 112/1000\n1500/1500 - 0s - loss: 0.4704 - val_loss: 0.5977\nEpoch 113/1000\n1500/1500 - 0s - loss: 0.4687 - val_loss: 0.5736\nEpoch 114/1000\n1500/1500 - 0s - loss: 0.4497 - val_loss: 0.5817\nEpoch 115/1000\n1500/1500 - 0s - loss: 0.4326 - val_loss: 0.5833\nEpoch 116/1000\n1500/1500 - 0s - loss: 0.4181 - val_loss: 0.5738\nEpoch 117/1000\n1500/1500 - 0s - loss: 0.4252 - val_loss: 0.5688\nEpoch 118/1000\n1500/1500 - 0s - loss: 0.4675 - val_loss: 0.5680\nEpoch 119/1000\n1500/1500 - 0s - loss: 0.4328 - val_loss: 0.5463\nEpoch 120/1000\n1500/1500 - 0s - loss: 0.4091 - val_loss: 0.5912\nEpoch 121/1000\n1500/1500 - 0s - loss: 0.4047 - val_loss: 0.5459\nEpoch 122/1000\n1500/1500 - 0s - loss: 0.4456 - val_loss: 0.5509\nEpoch 123/1000\n1500/1500 - 0s - loss: 0.4081 - val_loss: 0.5540\nEpoch 124/1000\nRestoring model weights from the end of the best epoch.\n1500/1500 - 0s - loss: 0.4353 - val_loss: 0.5538\nEpoch 00124: early stopping\n"
]
],
[
[
"### Mean Square Error\n\nThe mean square error is the sum of the squared differences between the prediction ($\\hat{y}$) and the expected ($y$). MSE values are not of a particular unit. If an MSE value has decreased for a model, that is good. However, beyond this, there is not much more you can determine. Low MSE values are desired.\n\n$ \\mbox{MSE} = \\frac{1}{n} \\sum_{i=1}^n \\left(\\hat{y}_i - y_i\\right)^2 $\n",
"_____no_output_____"
]
],
[
[
"from sklearn import metrics\n\n# Predict\npred = model.predict(x_test)\n\n# Measure MSE error. \nscore = metrics.mean_squared_error(pred,y_test)\nprint(\"Final score (MSE): {}\".format(score))",
"Final score (MSE): 0.5463447829677607\n"
]
],
[
[
"### Root Mean Square Error\n\nThe root mean square (RMSE) is essentially the square root of the MSE. Because of this, the RMSE error is in the same units as the training data outcome. Low RMSE values are desired.\n\n$ \\mbox{RMSE} = \\sqrt{\\frac{1}{n} \\sum_{i=1}^n \\left(\\hat{y}_i - y_i\\right)^2} $",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\n# Measure RMSE error. RMSE is common for regression.\nscore = np.sqrt(metrics.mean_squared_error(pred,y_test))\nprint(\"Final score (RMSE): {}\".format(score))",
"Final score (RMSE): 0.7391513938076291\n"
]
],
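[
[
"As a quick sanity check, both formulas above can also be computed directly with numpy; the results should match the sklearn values printed in the previous cells, up to floating-point rounding.",
"_____no_output_____"
]
],
[
[
"# Compute MSE and RMSE straight from their definitions.\n# pred has shape (n, 1), so flatten it before subtracting to avoid broadcasting.\ndiff = pred.flatten() - y_test\nmse_manual = np.mean(diff ** 2)\nrmse_manual = np.sqrt(mse_manual)\nprint('Manual MSE: {}'.format(mse_manual))\nprint('Manual RMSE: {}'.format(rmse_manual))",
"_____no_output_____"
]
],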
[
[
"### Lift Chart\n\n\nTo generate a lift chart, perform the following activities:\n\n* Sort the data by expected output. Plot the blue line above.\n* For every point on the x-axis plot the predicted value for that same data point. This is the green line above.\n* The x-axis is just 0 to 100% of the dataset. The expected always starts low and ends high.\n* The y-axis is ranged according to the values predicted.\n\nReading a lift chart:\n\n* The expected and predict lines should be close. Notice where one is above the ot other.\n* The below chart is the most accurate on lower age.",
"_____no_output_____"
]
],
[
[
"# Regression chart.\ndef chart_regression(pred, y, sort=True):\n t = pd.DataFrame({'pred': pred, 'y': y.flatten()})\n if sort:\n t.sort_values(by=['y'], inplace=True)\n plt.plot(t['y'].tolist(), label='expected')\n plt.plot(t['pred'].tolist(), label='prediction')\n plt.ylabel('output')\n plt.legend()\n plt.show()",
"_____no_output_____"
],
[
"\n# Plot the chart\nchart_regression(pred.flatten(),y_test)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0421c74e3b5da72fa506026ce81a848a9f790a2 | 1,874 | ipynb | Jupyter Notebook | ML-week/py_ds/Work with CSV files.ipynb | gopala-kr/a-wild-week-in-ai | 5b6735b60e4ddc0c181cbc55763600aa659f5b49 | [
"MIT"
] | 62 | 2018-08-01T04:06:25.000Z | 2021-11-22T13:22:34.000Z | ML-week/py_ds/Work with CSV files.ipynb | gopala-kr/a-wild-week-in-ai | 5b6735b60e4ddc0c181cbc55763600aa659f5b49 | [
"MIT"
] | null | null | null | ML-week/py_ds/Work with CSV files.ipynb | gopala-kr/a-wild-week-in-ai | 5b6735b60e4ddc0c181cbc55763600aa659f5b49 | [
"MIT"
] | 25 | 2018-09-15T19:12:03.000Z | 2021-05-19T17:01:59.000Z | 17.847619 | 54 | 0.48826 | [
[
[
"import csv",
"_____no_output_____"
],
[
"MonthlySales = []",
"_____no_output_____"
],
[
"with open('data/MonthlySales.csv', 'r') as f:\n reader = csv.DictReader(f)\n for row in reader:\n MonthlySales.append(row)",
"_____no_output_____"
],
[
"for a in MonthlySales:\n print a",
"_____no_output_____"
],
[
"#print keys\nfor a in MonthlySales:\n print a.keys()",
"_____no_output_____"
],
[
"#print keys and values\nfor a in MonthlySales:\n for key, value in a.items():\n print key + \": \", value\n print '\\n'",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d04229e3a31dbc4d5c26fd81c4d8713208ea2212 | 3,653 | ipynb | Jupyter Notebook | 00-BestPractices/Generator.ipynb | zunio/python-recipes | c932db459fae2c9e189b50f45d78d9fb918e7cb9 | [
"Apache-2.0"
] | null | null | null | 00-BestPractices/Generator.ipynb | zunio/python-recipes | c932db459fae2c9e189b50f45d78d9fb918e7cb9 | [
"Apache-2.0"
] | null | null | null | 00-BestPractices/Generator.ipynb | zunio/python-recipes | c932db459fae2c9e189b50f45d78d9fb918e7cb9 | [
"Apache-2.0"
] | null | null | null | 23.567742 | 782 | 0.520394 | [
[
[
"#https://dbader.org/blog/python-generators",
"_____no_output_____"
],
[
"def repeat_n_times(value,max_rep):\n for i in range(max_rep):\n yield value + str(i)",
"_____no_output_____"
],
[
"for a in repeat_n_times('hello',10):\n print(a)",
"hello0\nhello1\nhello2\nhello3\nhello4\nhello5\nhello6\nhello7\nhello8\nhello9\n"
],
[
"#Hard coding\ndef repeat_three_times(value):\n yield value\n yield value\n yield value",
"_____no_output_____"
],
[
"for i in repeat_three_times(\"hi\"):\n print(i)",
"hi\nhi\nhi\n"
],
[
"iterator=repeat_three_times(\"hello\")\nprint(next(iterator))\nprint(next(iterator))\nprint(next(iterator))\nprint(next(iterator))",
"hello\nhello\nhello\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0423d81870d4a545cd2a3b3fb5eef17d350305d | 121,805 | ipynb | Jupyter Notebook | examples/traversals/Untitled.ipynb | jvrana/caldera | a346324e77f20739e00a82f97530dda4906f59dd | [
"MIT"
] | 2 | 2021-12-13T17:52:17.000Z | 2021-12-13T17:52:18.000Z | examples/traversals/Untitled.ipynb | jvrana/caldera | a346324e77f20739e00a82f97530dda4906f59dd | [
"MIT"
] | 4 | 2020-10-06T21:06:15.000Z | 2020-10-10T01:18:23.000Z | examples/traversals/Untitled.ipynb | jvrana/caldera | a346324e77f20739e00a82f97530dda4906f59dd | [
"MIT"
] | null | null | null | 93.123089 | 15,828 | 0.702566 | [
[
[
"# flake8: noqa\n##########################################################\n# Relative Imports\n##########################################################\nimport sys\nfrom os.path import isfile\nfrom os.path import join\n\n\ndef find_pkg(name: str, depth: int):\n if depth <= 0:\n ret = None\n else:\n d = [\"..\"] * depth\n path_parts = d + [name, \"__init__.py\"]\n\n if isfile(join(*path_parts)):\n ret = d\n else:\n ret = find_pkg(name, depth - 1)\n return ret\n\n\ndef find_and_ins_syspath(name: str, depth: int):\n path_parts = find_pkg(name, depth)\n if path_parts is None:\n raise RuntimeError(\"Could not find {}. Try increasing depth.\".format(name))\n path = join(*path_parts)\n if path not in sys.path:\n sys.path.insert(0, path)\n\n\ntry:\n import caldera\nexcept ImportError:\n find_and_ins_syspath(\"caldera\", 3)\n\n##########################################################\n# Main\n##########################################################\n\nimport copy\nimport hydra\nfrom examples.traversals.training import TrainingModule\nfrom examples.traversals.data import DataGenerator, DataConfig\nfrom examples.traversals.configuration import Config\nfrom examples.traversals.configuration.data import Uniform, DiscreteUniform\nfrom typing import TypeVar\nfrom pytorch_lightning import Trainer\nfrom examples.traversals.loggers import logger\nfrom omegaconf import DictConfig, OmegaConf\nfrom rich.panel import Panel\nfrom rich import print\nfrom rich.syntax import Syntax\n\n\nC = TypeVar(\"C\")\n\n\ndef prime_the_model(model: TrainingModule, config: Config):\n logger.info(\"Priming the model with data\")\n config_copy: DataConfig = copy.deepcopy(config.data)\n config_copy.train.num_graphs = 10\n config_copy.eval.num_graphs = 0\n data_copy = DataGenerator(config_copy, progress_bar=False)\n for a, b in data_copy.train_loader():\n model.model.forward(a, 10)\n break\n\n\ndef print_title():\n print(Panel(\"Training Example: [red]Traversal\", title=\"[red]caldera\"))\n\n\ndef print_model(model: TrainingModule):\n print(Panel(\"Network\", expand=False))\n print(model)\n\n\ndef print_yaml(cfg: Config):\n print(Panel(\"Configuration\", expand=False))\n print(Syntax(OmegaConf.to_yaml(cfg), \"yaml\"))\n\n# def config_override(cfg: DictConfig):\n# # defaults\n# cfg.hyperparameters.lr = 1e-3\n# cfg.hyperparameters.train_core_processing_steps = 10\n# cfg.hyperparameters.eval_core_processing_steps = 10\n#\n# cfg.data.train.num_graphs = 5000\n# cfg.data.train.num_nodes = DiscreteUniform(10, 100)\n# cfg.data.train.density = Uniform(0.01, 0.03)\n# cfg.data.train.path_length = DiscreteUniform(5, 10)\n# cfg.data.train.composition_density = Uniform(0.01, 0.02)\n# cfg.data.train.batch_size = 512\n# cfg.data.train.shuffle = False\n#\n# cfg.data.eval.num_graphs = 500\n# cfg.data.eval.num_nodes = DiscreteUniform(10, 100)\n# cfg.data.eval.density = Uniform(0.01, 0.03)\n# cfg.data.eval.path_length = DiscreteUniform(5, 10)\n# cfg.data.eval.composition_density = Uniform(0.01, 0.02)\n# cfg.data.eval.batch_size = \"${data.eval.num_graphs}\"\n# cfg.data.eval.shuffle = False\n\n# @hydra.main(config_path=\"conf\", config_name=\"config\")\n# def main(hydra_cfg: DictConfig):\n\n# print_title()\n\n\n# logger.setLevel(hydra_cfg.log_level)\n# if hydra_cfg.log_level.upper() == 'DEBUG':\n# verbose = True\n# else:\n# verbose = False\n# # really unclear why hydra has so many unclear validation issues with structure configs using ConfigStore\n# # this correctly assigns the correct structured config\n# # and updates from the passed hydra 
config\n# # annoying... but this resolves all these issues\n# cfg = OmegaConf.structured(Config())\n# cfg.update(hydra_cfg)\n\n# # debug\n# if verbose:\n# print_yaml(cfg)\n\n# from pytorch_lightning.loggers import WandbLogger\n# wandb_logger = WandbLogger(project='pytorchlightning')\n\n# # explicitly convert the DictConfig back to Config object\n# # has the added benefit of performing validation upfront\n# # before any expensive training or logging initiates\n# config = Config.from_dict_config(cfg)\n\n# # initialize the training module\n# training_module = TrainingModule(config)\n\n# logger.info(\"Priming the model with data\")\n# prime_the_model(training_module, config)\n# logger.debug(Panel(\"Model\", expand=False))\n\n# if verbose:\n# print_model(training_module)\n\n# logger.info(\"Generating data...\")\n# data = DataGenerator(config.data)\n# data.init()\n\n# logger.info(\"Beginning training...\")\n# trainer = Trainer(gpus=config.gpus, logger=wandb_logger)\n\n# trainer.fit(\n# training_module,\n# train_dataloader=data.train_loader(),\n# val_dataloaders=data.eval_loader(),\n# )\n\n\n# if __name__ == \"__main__\":\n# main()\n",
"_____no_output_____"
],
[
"from examples.traversals.configuration import get_config\n\nconfig = get_config( as_config_class=True)\n\ndata = DataGenerator(config.data)\ndata.init()\n\ntraining_module = TrainingModule(config)\n\nlogger.info(\"Priming the model with data\")\nprime_the_model(training_module, config)",
"/home/justin/anaconda3/envs/caldera/lib/python3.7/site-packages/omegaconf/omegaconf.py:579: UserWarning: update() merge flag is is not specified, defaulting to False.\nFor more details, see https://github.com/omry/omegaconf/issues/367\n stacklevel=1,\n"
],
[
"dir(data)",
"_____no_output_____"
],
[
"from torch import optim\nfrom tqdm.auto import tqdm\nimport torch\nfrom caldera.data import GraphTuple\n\n\ndef mse_tuple(criterion, device, a, b):\n loss = torch.tensor(0.0, dtype=torch.float32, device=device)\n assert len(a) == len(b)\n for i, (_a, _b) in enumerate(zip(a, b)):\n assert _a.shape == _b.shape\n l = criterion(_a, _b)\n loss += l\n return loss\n\n\ndef train(network, loader, cuda: bool = False):\n device = 'cpu'\n if cuda and torch.cuda.is_available():\n device = 'cuda:' + str(torch.cuda.current_device())\n\n network.eval()\n network.to(device)\n input_batch, target_batch = loader.first()\n input_batch = input_batch.detach()\n input_batch.to(device)\n network(input_batch, 1)\n \n optimizer = optim.AdamW(network.parameters(), lr=1e-2)\n loss_func = torch.nn.MSELoss()\n \n losses = []\n for epoch in range(20):\n print(epoch)\n running_loss = 0.\n network.train()\n for input_batch, target_batch in loader:\n optimizer.zero_grad()\n out_batch = network(input_batch, 5)[-1]\n out_tuple = GraphTuple(out_batch.e, out_batch.x, out_batch.g)\n target_tuple = GraphTuple(target_batch.e, target_batch.x, target_batch.g)\n \n loss = mse_tuple(loss_func, device, out_tuple, target_tuple)\n loss.backward()\n running_loss = running_loss + loss.item()\n optimizer.step()\n print(running_loss)\n losses.append(running_loss)\n return losses\n\n\n# loader = DataLoaders.sigmoid_circuit(1000, 10)\ntrain(training_module.model, data.train_loader())\n",
"_____no_output_____"
],
[
"inp, targ = data.eval_loader().first()",
"_____no_output_____"
],
[
"from caldera.transforms.networkx import NetworkxAttachNumpyBool\n\ng = targ.to_networkx_list()[0]\n\nto_bool = NetworkxAttachNumpyBool('node', 'features', 'x')\ngraphs = to_bool(targ.to_networkx_list())\ngraphs[0].nodes(data=True)",
"_____no_output_____"
],
[
"from matplotlib import cm\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport networkx as nx\n%matplotlib inline\n",
"_____no_output_____"
],
[
"def edge_colors(g, key, cmap):\n edgecolors = list()\n edgelist = list(g.edges)\n edgefeat = list()\n for e in edgelist:\n edata = g.edges[e]\n edgefeat.append(edata[key][0].item())\n edgefeat = np.array(edgefeat)\n edgecolors = cmap(edgefeat)\n return edgecolors\n nx.draw_networkx_edges(g, pos=pos, edge_color=edgecolors, arrows=False)\n\ndef node_colors(g, key, cmap):\n nodecolors = list()\n nodelist = list(g.nodes)\n nodefeat = list()\n for n in nodelist:\n ndata = g.nodes[n]\n nodefeat.append(ndata[key][0].item())\n nodefeat = np.array(nodefeat)\n nodecolors = cmap(nodefeat)\n return nodecolors\n nx.draw_networkx_nodes(g, pos=pos, node_size=10, node_color=nodecolors)\n\ndef plot_graph(g, ax, cmap, key='features', seed=1):\n pos = nx.layout.spring_layout(g, seed=seed)\n nx.draw_networkx_edges(g, ax=ax, pos=pos, edge_color=edge_colors(g, key, cmap), arrows=False);\n nx.draw_networkx_nodes(g, ax=ax, pos=pos, node_size=10, node_color=node_colors(g, key, cmap))\n \n \ndef comparison_plot(out_g, expected_g):\n fig, axes = plt.subplots(1, 2, figsize=(5, 2.5))\n axes[0].axis('off')\n axes[1].axis('off')\n \n axes[0].set_title(\"out\")\n plot_graph(out_g, axes[0], cm.plasma)\n \n axes[1].set_title(\"expected\")\n plot_graph(expected_g, axes[1], cm.plasma)\n return fig, axes\n\ndef validate_compare_plot(trainer, plmodel):\n eval_loader = trainer.val_dataloaders[0]\n for x, y in eval_loader:\n break\n plmodel.eval()\n y_hat = plmodel.model.forward(x, 10)[-1]\n y_graphs = y.to_networkx_list()\n y_hat_graphs = y_hat.to_networkx_list()\n\n idx = 0\n yg = y_graphs[idx]\n yhg = y_hat_graphs[idx]\n return comparison_plot(yhg, yg)\n\nfig, axes = validate_compare_plot(trainer, training_module)",
"_____no_output_____"
],
[
"from pytorch_lightning.loggers import WandbLogger\nwandb_logger = WandbLogger(project='pytorchlightning')",
"_____no_output_____"
],
[
"wandb_logger.experiment",
"_____no_output_____"
],
[
"wandb.Image?",
"_____no_output_____"
],
[
"import wandb\nimport io\nfrom PIL import Image\nimport matplotlib.pyplot as plt\n\ndef fig_to_pil(fig):\n buf = io.BytesIO()\n fig.savefig(buf, format='png')\n buf.seek(0)\n im = Image.open(buf)\n# buf.close()\n return im\n\nwandb_logger.experiment.log({'s': [wandb.Image(fig_to_pil(fig))]} )",
"Failed to query for notebook name, you can set it manually with the WANDB_NOTEBOOK_NAME environment variable\n\u001b[34m\u001b[1mwandb\u001b[0m: Wandb version 0.10.4 is available! To upgrade, please run:\n\u001b[34m\u001b[1mwandb\u001b[0m: $ pip install wandb --upgrade\n"
],
[
"wandb_logger.experiment.log",
"_____no_output_____"
],
[
"import io\nfrom PIL import Image\nimport matplotlib.pyplot as plt\n\nbuf = io.BytesIO()\nfig.savefig(buf, format='png')\nbuf.seek(0)\nim = Image.open(buf)\nim.show()\nbuf.close()\n\nstr(buf)",
"_____no_output_____"
],
[
"x.to_networkx_list()[0].nodes(data=True)\n",
"_____no_output_____"
],
[
"def comparison_plot(out_g, expected_g):\n fig, axes = plt.subplots(1, 2, figsize=(5, 2.5))\n axes[0].axis('off')\n axes[1].axis('off')\n \n axes[0].set_title(\"out\")\n plot_graph(out_g, axes[0], cm.plasma)\n \n axes[1].set_title(\"expected\")\n plot_graph(expected_g, axes[1], cm.plasma)\nx, y = data.eval_loader().first()\ny_hat = training_module.model.forward(x, 10)[-1]\ny_graphs = y.to_networkx_list()\ny_hat_graphs = y_hat.to_networkx_list()\n\nidx = 0\nyg = y_graphs[idx]\nyhg = y_hat_graphs[idx]\ncomparison_plot(yhg, yg)",
"_____no_output_____"
],
[
"g = random_graph((100, 150), d=(0.01, 0.03), e=None)\n\n\nannotate_shortest_path(g)\n# nx.draw(g)\npos = nx.layout.spring_layout(g)\nnodelist = list(g.nodes)\nnode_color = []\nfor n in nodelist:\n node_color.append(g.nodes[n]['target'][0])\nedge_list = []\nedge_color = []\nfor n1, n2, edata in g.edges(data=True):\n edge_list.append((n1, n2))\n edge_color.append(edata['target'][0])\nprint(node_color)\nnx.draw_networkx_edges(g, pos=pos, width=0.5, edge_color=edge_color)\nnx.draw_networkx_nodes(g, pos=pos, node_color=node_color, node_size=10)\n",
"_____no_output_____"
],
[
"NetworkxAttachNumpyBool?",
"_____no_output_____"
],
[
"g.nodes(data=True)",
"_____no_output_____"
],
[
"from caldera.transforms.networkx import NetworkxApplyToFeature\n\nNetworkxApplyToFeature('features', edge_func= lambda x: list(x))(g)",
"_____no_output_____"
],
[
"import time\n\nfrom rich.progress import Progress as RichProgress\nfrom contextlib import contextmanager\nfrom dataclasses import dataclass\n\n@dataclass\nclass TaskEvent:\n task_id: int\n name: str\n\n\nclass TaskProgress(object):\n DEFAULT_REFRESH_PER_SECOND = 4.\n\n def __init__(self,\n progress = None,\n task_id: int = None,\n refresh_rate_per_second: int = DEFAULT_REFRESH_PER_SECOND,\n parent = None):\n self.task_id = task_id\n self.children = []\n self.parent = parent\n self.progress = progress or RichProgress()\n self.last_updated = time.time()\n self.refresh_rate_per_second = refresh_rate_per_second\n\n def self_task(self, *args, **kwargs):\n task_id = self.progress.add_task(*args, **kwargs)\n self.task_id = task_id\n\n def add_task(self, *args, **kwargs):\n task_id = self.progress.add_task(*args, **kwargs)\n new_task = self.__class__(self.progress, task_id, self.refresh_rate_per_second, parent=self)\n self.children.append(new_task)\n return new_task\n\n @property\n def _task(self):\n return self.progress.tasks[self.task_id]\n\n def listen(self, event: TaskEvent):\n if event.name == 'refresh':\n completed = sum(t._task.completed for t in self.children)\n total = sum(t._task.total for t in self.children)\n self.update(completed=completed/total, total=1., refresh=True)\n elif event.name == 'finished':\n self.finish()\n\n def emit_up(self, event_name):\n if self.parent:\n self.parent.listen(TaskEvent(task_id=self.task_id, name=event_name))\n\n def emit_down(self, event_name: TaskEvent):\n for child in self.children:\n print(\"sending to child\")\n child.listen(TaskEvent(task_id=self.task_id, name=event_name))\n\n def update(self, *args, **kwargs):\n now = time.time()\n if 'refresh' not in kwargs:\n if now - self.last_updated > 1. / self.refresh_rate_per_second:\n kwargs['refresh'] = True\n else:\n kwargs['refresh'] = False\n \n if kwargs['refresh']:\n self.emit_up('refresh')\n self.last_updated = now\n self.progress.update(self.task_id, *args, **kwargs)\n\n def is_complete(self):\n return self.completed >= self.task.total\n\n def finish(self):\n self.progress.update(self.task_id, completed=self._task.total, refresh=True)\n self.emit_down('finished')\n\n def __enter__(self):\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n self.progress.__exit__(exc_type, exc_val, exc_tb)\n self.finish()\n\n\nwith TaskProgress() as progress:\n progress.self_task('main', total=10)\n bar1 = progress.add_task('bar1', total=10)\n bar2 = progress.add_task('bar2', total=10)\n for _ in range(10):\n bar1.update(advance=1)\n time.sleep(0.1)\n for _ in range(10):\n bar2.update(advance=1)\n time.sleep(0.1)\n",
"_____no_output_____"
],
[
"bar.progress.tasks[0].completed",
"_____no_output_____"
],
[
"import torch\n\n",
"_____no_output_____"
],
[
"target = torch.ones([1, 64], dtype=torch.float32) # 64 classes, batch size = 10\noutput = torch.full([1, 64], 1.5) # A prediction (logit)\n\nprint(target)\nprint(output)\n# pos_weight = torch.ones([64]) # All weights are equal to 1\ncriterion = torch.nn.BCEWithLogitsLoss()\ncriterion(output, target) # -log(sigmoid(1.5))",
"_____no_output_____"
],
[
"from caldera.data import GraphBatch\n\nbatch = GraphBatch.random_batch(2, 5, 4, 3)\ngraphs = batch.to_networkx_list()\n\nimport networkx as nx\n\nnx.draw(graphs[0])\n\nexpected = torch.randn(batch.x.shape)\nx = batch.x\nx = torch.nn.Softmax()(x)\nprint(x.sum(axis=1))\nx, expected\nx = torch.nn.BCELoss()(x, expected)\nx",
"/home/justin/anaconda3/envs/caldera/lib/python3.7/site-packages/ipykernel_launcher.py:12: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.\n if sys.path[0] == '':\n"
],
[
"import torch\n\nx = torch.randn(10, 10)\n\ntorch.stack([x, x]).shape",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0423e6d397a42dc08464a43a0cc8a6cf9427a98 | 140,303 | ipynb | Jupyter Notebook | notebooks/test_plot.ipynb | rajvpatil5/ab-framework | 264d59224fef0085d7f8428da2a954adaaf1ad8b | [
"MIT"
] | 108 | 2018-09-14T18:52:35.000Z | 2022-03-10T12:42:38.000Z | notebooks/test_plot.ipynb | rajvpatil5/ab-framework | 264d59224fef0085d7f8428da2a954adaaf1ad8b | [
"MIT"
] | 10 | 2018-07-22T19:02:06.000Z | 2021-08-23T20:35:10.000Z | notebooks/test_plot.ipynb | rajvpatil5/ab-framework | 264d59224fef0085d7f8428da2a954adaaf1ad8b | [
"MIT"
] | 98 | 2018-09-14T18:52:28.000Z | 2022-03-12T05:43:10.000Z | 840.137725 | 39,774 | 0.945824 | [
[
[
"cd ..",
"/Users/nguyen/projects/ab-framework\n"
],
[
"from src.plot import *",
"_____no_output_____"
]
],
[
[
"## Test zplot",
"_____no_output_____"
]
],
[
[
"zplot()",
"_____no_output_____"
],
[
"zplot(area=0.80, two_tailed=False)",
"_____no_output_____"
],
[
"zplot(area=0.80, two_tailed=False, align_right=True)",
"_____no_output_____"
]
],
[
[
"## Test abplot",
"_____no_output_____"
]
],
[
[
"abplot(n=4000, bcr=0.11, d_hat=0.03, show_alpha=True)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0425b020403795017b710139fd5cb71132d18ae | 3,635 | ipynb | Jupyter Notebook | patch_functions.ipynb | elisawarner/Imaging_Toolkit | ae93220b1e12402c267def66faee52bdb154a4af | [
"MIT"
] | null | null | null | patch_functions.ipynb | elisawarner/Imaging_Toolkit | ae93220b1e12402c267def66faee52bdb154a4af | [
"MIT"
] | 1 | 2021-01-15T23:26:26.000Z | 2021-01-15T23:26:26.000Z | patch_functions.ipynb | elisawarner/Imaging_Toolkit | ae93220b1e12402c267def66faee52bdb154a4af | [
"MIT"
] | null | null | null | 26.727941 | 114 | 0.49216 | [
[
[
"import openslide\nfrom openslide import deepzoom",
"_____no_output_____"
],
[
"def extract_patches(osr_obj, tile_size):\n \"\"\"Description: Uses openslide to extract patches at closest zoom size\n INPUT: an openslide object (the .svs image)\n OUTPUT: a list of tiles as image objects\n \"\"\"\n list_of_tiles = []\n \n t = openslide.deepzoom.DeepZoomGenerator(osr_obj, tile_size=tile_size, overlap=0, limit_bounds=False)\n max_level = t.level_count\n \n addresses = t.level_tiles[max_level-1] # zoomed in to furthest amount\n \n for x_coord in range(addresses[0]):\n for y_coord in range(addresses[1]):\n list_of_tiles.append(t.get_tile(max_level - 1, (x_coord, y_coord)))\n \n return list_of_tiles",
"_____no_output_____"
],
[
"def check_if_white(pixel):\n \"\"\"DESCRIPTION: Checks to see if a given pixel is an off-white or white pixel\n INPUT: 3x1 tuple, where each element is in range [0,255]\n OUTPUT: 1 : is a whitespace\n 0 : is not a whitespace\n \"\"\"\n for x in pixel:\n if x < 230:\n return 0\n return 1",
"_____no_output_____"
],
[
"def determine_quality(img, WHITESPACE_CUTOFF):\n \"\"\"Description: Determines quality of a histology tile image\n INPUT: Tile (Image object)\n OUTPUT: 0 : too much whitespace\n 1 : acceptable image\n \n \"\"\"\n p = img.load()\n img_size = img.size\n \n total = 0\n \n for y in range(img_size[1]):\n for x in range(0,img_size[0],3):\n if check_if_white(p[x,y]):\n total += 1\n\n if total / (img_size[0] * img_size[1] / 3) > WHITESPACE_CUTOFF:\n return 0 # too much whitespace\n else:\n return 1 # acceptable",
"_____no_output_____"
],
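[
"import os\n\n# Illustrative usage sketch -- 'slides/example.svs' is a hypothetical path;\n# point slide_path at a real whole-slide image to run the pipeline end to end.\nslide_path = 'slides/example.svs'\nif os.path.exists(slide_path):\n    slide = openslide.OpenSlide(slide_path)\n    tiles = extract_patches(slide, tile_size=256)\n    # keep tiles whose estimated whitespace fraction is at most 50%\n    good_tiles = [t for t in tiles if determine_quality(t, WHITESPACE_CUTOFF=0.5)]\n    print('kept %d of %d tiles' % (len(good_tiles), len(tiles)))\nelse:\n    print('set slide_path to a real .svs file to run this example')",
"_____no_output_____"
],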
[
"def pad(i, n):\n \"\"\" PADS AN INTEGER\n INPUT: i : [int] number to be padded\n n : [int] total length of padded number\n OUTPUT: str\n \"\"\"\n i = str(i)\n l = len(i)\n \n if l > n:\n raise ValueError('Integer is bigger than padding specifications')\n else:\n return ('0' * (n-l)) + i",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
d042696f204888e084651fc8962b878e4b454caf | 47,421 | ipynb | Jupyter Notebook | experiments/Imputation-TF-ALS.ipynb | shawnwang-tech/transdim | d36f20b1e29920d70fd02083b9c3b26fbdd1e9ec | [
"MIT"
] | 789 | 2018-10-24T06:48:47.000Z | 2022-03-29T10:15:09.000Z | experiments/Imputation-TF-ALS.ipynb | shawnwang-tech/transdim | d36f20b1e29920d70fd02083b9c3b26fbdd1e9ec | [
"MIT"
] | 14 | 2019-05-18T08:32:04.000Z | 2021-12-14T07:11:43.000Z | experiments/Imputation-TF-ALS.ipynb | shawnwang-tech/transdim | d36f20b1e29920d70fd02083b9c3b26fbdd1e9ec | [
"MIT"
] | 221 | 2018-10-30T15:14:21.000Z | 2022-03-27T07:04:17.000Z | 35.952237 | 501 | 0.490985 | [
[
[
"# About this Notebook\n\nIn this notebook, we provide the tensor factorization implementation using an iterative Alternating Least Square (ALS), which is a good starting point for understanding tensor factorization.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom numpy.linalg import inv as inv",
"_____no_output_____"
]
],
[
[
"# Part 1: Matrix Computation Concepts\n\n## 1) Kronecker product\n\n- **Definition**:\n\nGiven two matrices $A\\in\\mathbb{R}^{m_1\\times n_1}$ and $B\\in\\mathbb{R}^{m_2\\times n_2}$, then, the **Kronecker product** between these two matrices is defined as\n\n$$A\\otimes B=\\left[ \\begin{array}{cccc} a_{11}B & a_{12}B & \\cdots & a_{1m_2}B \\\\ a_{21}B & a_{22}B & \\cdots & a_{2m_2}B \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ a_{m_11}B & a_{m_12}B & \\cdots & a_{m_1m_2}B \\\\ \\end{array} \\right]$$\nwhere the symbol $\\otimes$ denotes Kronecker product, and the size of resulted $A\\otimes B$ is $(m_1m_2)\\times (n_1n_2)$ (i.e., $m_1\\times m_2$ columns and $n_1\\times n_2$ rows).\n\n- **Example**:\n\nIf $A=\\left[ \\begin{array}{cc} 1 & 2 \\\\ 3 & 4 \\\\ \\end{array} \\right]$ and $B=\\left[ \\begin{array}{ccc} 5 & 6 & 7\\\\ 8 & 9 & 10 \\\\ \\end{array} \\right]$, then, we have\n\n$$A\\otimes B=\\left[ \\begin{array}{cc} 1\\times \\left[ \\begin{array}{ccc} 5 & 6 & 7\\\\ 8 & 9 & 10\\\\ \\end{array} \\right] & 2\\times \\left[ \\begin{array}{ccc} 5 & 6 & 7\\\\ 8 & 9 & 10\\\\ \\end{array} \\right] \\\\ 3\\times \\left[ \\begin{array}{ccc} 5 & 6 & 7\\\\ 8 & 9 & 10\\\\ \\end{array} \\right] & 4\\times \\left[ \\begin{array}{ccc} 5 & 6 & 7\\\\ 8 & 9 & 10\\\\ \\end{array} \\right] \\\\ \\end{array} \\right]$$\n\n$$=\\left[ \\begin{array}{cccccc} 5 & 6 & 7 & 10 & 12 & 14 \\\\ 8 & 9 & 10 & 16 & 18 & 20 \\\\ 15 & 18 & 21 & 20 & 24 & 28 \\\\ 24 & 27 & 30 & 32 & 36 & 40 \\\\ \\end{array} \\right]\\in\\mathbb{R}^{4\\times 6}.$$\n",
"_____no_output_____"
],
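[
"The worked Kronecker product example above can be verified directly with numpy's built-in `np.kron` (a quick check of the definition):",
"_____no_output_____"
],
[
"import numpy as np\n\nA = np.array([[1, 2], [3, 4]])\nB = np.array([[5, 6, 7], [8, 9, 10]])\n# np.kron implements the Kronecker product defined above; the result is 4 x 6.\nprint(np.kron(A, B))\nprint(np.kron(A, B).shape)",
"_____no_output_____"
],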
[
"## 2) Khatri-Rao product (`kr_prod`)\n\n- **Definition**:\n\nGiven two matrices $A=\\left( \\boldsymbol{a}_1,\\boldsymbol{a}_2,...,\\boldsymbol{a}_r \\right)\\in\\mathbb{R}^{m\\times r}$ and $B=\\left( \\boldsymbol{b}_1,\\boldsymbol{b}_2,...,\\boldsymbol{b}_r \\right)\\in\\mathbb{R}^{n\\times r}$ with same number of columns, then, the **Khatri-Rao product** (or **column-wise Kronecker product**) between $A$ and $B$ is given as follows,\n\n$$A\\odot B=\\left( \\boldsymbol{a}_1\\otimes \\boldsymbol{b}_1,\\boldsymbol{a}_2\\otimes \\boldsymbol{b}_2,...,\\boldsymbol{a}_r\\otimes \\boldsymbol{b}_r \\right)\\in\\mathbb{R}^{(mn)\\times r},$$\nwhere the symbol $\\odot$ denotes Khatri-Rao product, and $\\otimes$ denotes Kronecker product.\n\n- **Example**:\n\nIf $A=\\left[ \\begin{array}{cc} 1 & 2 \\\\ 3 & 4 \\\\ \\end{array} \\right]=\\left( \\boldsymbol{a}_1,\\boldsymbol{a}_2 \\right) $ and $B=\\left[ \\begin{array}{cc} 5 & 6 \\\\ 7 & 8 \\\\ 9 & 10 \\\\ \\end{array} \\right]=\\left( \\boldsymbol{b}_1,\\boldsymbol{b}_2 \\right) $, then, we have\n\n$$A\\odot B=\\left( \\boldsymbol{a}_1\\otimes \\boldsymbol{b}_1,\\boldsymbol{a}_2\\otimes \\boldsymbol{b}_2 \\right) $$\n\n$$=\\left[ \\begin{array}{cc} \\left[ \\begin{array}{c} 1 \\\\ 3 \\\\ \\end{array} \\right]\\otimes \\left[ \\begin{array}{c} 5 \\\\ 7 \\\\ 9 \\\\ \\end{array} \\right] & \\left[ \\begin{array}{c} 2 \\\\ 4 \\\\ \\end{array} \\right]\\otimes \\left[ \\begin{array}{c} 6 \\\\ 8 \\\\ 10 \\\\ \\end{array} \\right] \\\\ \\end{array} \\right]$$\n\n$$=\\left[ \\begin{array}{cc} 5 & 12 \\\\ 7 & 16 \\\\ 9 & 20 \\\\ 15 & 24 \\\\ 21 & 32 \\\\ 27 & 40 \\\\ \\end{array} \\right]\\in\\mathbb{R}^{6\\times 2}.$$",
"_____no_output_____"
]
],
[
[
"def kr_prod(a, b):\n return np.einsum('ir, jr -> ijr', a, b).reshape(a.shape[0] * b.shape[0], -1)",
"_____no_output_____"
],
[
"A = np.array([[1, 2], [3, 4]])\nB = np.array([[5, 6], [7, 8], [9, 10]])\nprint(kr_prod(A, B))",
"[[ 5 12]\n [ 7 16]\n [ 9 20]\n [15 24]\n [21 32]\n [27 40]]\n"
]
],
[
[
"## 3) CP decomposition\n\n### CP Combination (`cp_combination`)\n\n- **Definition**:\n\nThe CP decomposition factorizes a tensor into a sum of outer products of vectors. For example, for a third-order tensor $\\mathcal{Y}\\in\\mathbb{R}^{m\\times n\\times f}$, the CP decomposition can be written as\n\n$$\\hat{\\mathcal{Y}}=\\sum_{s=1}^{r}\\boldsymbol{u}_{s}\\circ\\boldsymbol{v}_{s}\\circ\\boldsymbol{x}_{s},$$\nor element-wise,\n\n$$\\hat{y}_{ijt}=\\sum_{s=1}^{r}u_{is}v_{js}x_{ts},\\forall (i,j,t),$$\nwhere vectors $\\boldsymbol{u}_{s}\\in\\mathbb{R}^{m},\\boldsymbol{v}_{s}\\in\\mathbb{R}^{n},\\boldsymbol{x}_{s}\\in\\mathbb{R}^{f}$ are columns of factor matrices $U\\in\\mathbb{R}^{m\\times r},V\\in\\mathbb{R}^{n\\times r},X\\in\\mathbb{R}^{f\\times r}$, respectively. The symbol $\\circ$ denotes vector outer product.\n\n- **Example**:\n\nGiven matrices $U=\\left[ \\begin{array}{cc} 1 & 2 \\\\ 3 & 4 \\\\ \\end{array} \\right]\\in\\mathbb{R}^{2\\times 2}$, $V=\\left[ \\begin{array}{cc} 1 & 2 \\\\ 3 & 4 \\\\ 5 & 6 \\\\ \\end{array} \\right]\\in\\mathbb{R}^{3\\times 2}$ and $X=\\left[ \\begin{array}{cc} 1 & 5 \\\\ 2 & 6 \\\\ 3 & 7 \\\\ 4 & 8 \\\\ \\end{array} \\right]\\in\\mathbb{R}^{4\\times 2}$, then if $\\hat{\\mathcal{Y}}=\\sum_{s=1}^{r}\\boldsymbol{u}_{s}\\circ\\boldsymbol{v}_{s}\\circ\\boldsymbol{x}_{s}$, then, we have\n\n$$\\hat{Y}_1=\\hat{\\mathcal{Y}}(:,:,1)=\\left[ \\begin{array}{ccc} 31 & 42 & 65 \\\\ 63 & 86 & 135 \\\\ \\end{array} \\right],$$\n$$\\hat{Y}_2=\\hat{\\mathcal{Y}}(:,:,2)=\\left[ \\begin{array}{ccc} 38 & 52 & 82 \\\\ 78 & 108 & 174 \\\\ \\end{array} \\right],$$\n$$\\hat{Y}_3=\\hat{\\mathcal{Y}}(:,:,3)=\\left[ \\begin{array}{ccc} 45 & 62 & 99 \\\\ 93 & 130 & 213 \\\\ \\end{array} \\right],$$\n$$\\hat{Y}_4=\\hat{\\mathcal{Y}}(:,:,4)=\\left[ \\begin{array}{ccc} 52 & 72 & 116 \\\\ 108 & 152 & 252 \\\\ \\end{array} \\right].$$",
"_____no_output_____"
]
],
[
[
"def cp_combine(U, V, X):\n return np.einsum('is, js, ts -> ijt', U, V, X)",
"_____no_output_____"
],
[
"U = np.array([[1, 2], [3, 4]])\nV = np.array([[1, 3], [2, 4], [5, 6]])\nX = np.array([[1, 5], [2, 6], [3, 7], [4, 8]])\nprint(cp_combine(U, V, X))\nprint()\nprint('tensor size:')\nprint(cp_combine(U, V, X).shape)",
"[[[ 31 38 45 52]\n [ 42 52 62 72]\n [ 65 82 99 116]]\n\n [[ 63 78 93 108]\n [ 86 108 130 152]\n [135 174 213 252]]]\n\ntensor size:\n(2, 3, 4)\n"
]
],
[
[
"## 4) Tensor Unfolding (`ten2mat`)\n\nUsing numpy reshape to perform 3rd rank tensor unfold operation. [[**link**](https://stackoverflow.com/questions/49970141/using-numpy-reshape-to-perform-3rd-rank-tensor-unfold-operation)]",
"_____no_output_____"
]
],
[
[
"def ten2mat(tensor, mode):\n return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order = 'F')",
"_____no_output_____"
],
[
"X = np.array([[[1, 2, 3, 4], [3, 4, 5, 6]], \n [[5, 6, 7, 8], [7, 8, 9, 10]], \n [[9, 10, 11, 12], [11, 12, 13, 14]]])\nprint('tensor size:')\nprint(X.shape)\nprint('original tensor:')\nprint(X)\nprint()\nprint('(1) mode-1 tensor unfolding:')\nprint(ten2mat(X, 0))\nprint()\nprint('(2) mode-2 tensor unfolding:')\nprint(ten2mat(X, 1))\nprint()\nprint('(3) mode-3 tensor unfolding:')\nprint(ten2mat(X, 2))",
"tensor size:\n(3, 2, 4)\noriginal tensor:\n[[[ 1 2 3 4]\n [ 3 4 5 6]]\n\n [[ 5 6 7 8]\n [ 7 8 9 10]]\n\n [[ 9 10 11 12]\n [11 12 13 14]]]\n\n(1) mode-1 tensor unfolding:\n[[ 1 3 2 4 3 5 4 6]\n [ 5 7 6 8 7 9 8 10]\n [ 9 11 10 12 11 13 12 14]]\n\n(2) mode-2 tensor unfolding:\n[[ 1 5 9 2 6 10 3 7 11 4 8 12]\n [ 3 7 11 4 8 12 5 9 13 6 10 14]]\n\n(3) mode-3 tensor unfolding:\n[[ 1 5 9 3 7 11]\n [ 2 6 10 4 8 12]\n [ 3 7 11 5 9 13]\n [ 4 8 12 6 10 14]]\n"
]
],
[
[
"# Part 2: Tensor CP Factorization using ALS (TF-ALS)\n\nRegarding CP factorization as a machine learning problem, we could perform a learning task by minimizing the loss function over factor matrices, that is,\n\n$$\\min _{U, V, X} \\sum_{(i, j, t) \\in \\Omega}\\left(y_{i j t}-\\sum_{r=1}^{R}u_{ir}v_{jr}x_{tr}\\right)^{2}.$$\n\nWithin this optimization problem, multiplication among three factor matrices (acted as parameters) makes this problem difficult. Alternatively, we apply the ALS algorithm for CP factorization.\n\nIn particular, the optimization problem for each row $\\boldsymbol{u}_{i}\\in\\mathbb{R}^{R},\\forall i\\in\\left\\{1,2,...,M\\right\\}$ of factor matrix $U\\in\\mathbb{R}^{M\\times R}$ is given by\n\n$$\\min _{\\boldsymbol{u}_{i}} \\sum_{j,t:(i, j, t) \\in \\Omega}\\left[y_{i j t}-\\boldsymbol{u}_{i}^\\top\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\right)\\right]\\left[y_{i j t}-\\boldsymbol{u}_{i}^\\top\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\right)\\right]^\\top.$$\n\nThe least square for this optimization is\n\n$$u_{i} \\Leftarrow\\left(\\sum_{j, t, i, j, t ) \\in \\Omega} \\left(x_{t} \\odot v_{j}\\right)\\left(x_{t} \\odot v_{j}\\right)^{\\top}\\right)^{-1}\\left(\\sum_{j, t :(i, j, t) \\in \\Omega} y_{i j t} \\left(x_{t} \\odot v_{j}\\right)\\right), \\forall i \\in\\{1,2, \\ldots, M\\}.$$\n\nThe alternating least squares for $V\\in\\mathbb{R}^{N\\times R}$ and $X\\in\\mathbb{R}^{T\\times R}$ are\n\n$$\\boldsymbol{v}_{j}\\Leftarrow\\left(\\sum_{i,t:(i,j,t)\\in\\Omega}\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{u}_{i}\\right)\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{u}_{i}\\right)^\\top\\right)^{-1}\\left(\\sum_{i,t:(i,j,t)\\in\\Omega}y_{ijt}\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{u}_{i}\\right)\\right),\\forall j\\in\\left\\{1,2,...,N\\right\\},$$\n\n$$\\boldsymbol{x}_{t}\\Leftarrow\\left(\\sum_{i,j:(i,j,t)\\in\\Omega}\\left(\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)\\left(\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)^\\top\\right)^{-1}\\left(\\sum_{i,j:(i,j,t)\\in\\Omega}y_{ijt}\\left(\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)\\right),\\forall t\\in\\left\\{1,2,...,T\\right\\}.$$\n",
"_____no_output_____"
]
],
[
[
"def CP_ALS(sparse_tensor, rank, maxiter):\n dim1, dim2, dim3 = sparse_tensor.shape\n dim = np.array([dim1, dim2, dim3])\n \n U = 0.1 * np.random.rand(dim1, rank)\n V = 0.1 * np.random.rand(dim2, rank)\n X = 0.1 * np.random.rand(dim3, rank)\n \n pos = np.where(sparse_tensor != 0)\n binary_tensor = np.zeros((dim1, dim2, dim3))\n binary_tensor[pos] = 1\n tensor_hat = np.zeros((dim1, dim2, dim3))\n \n for iters in range(maxiter):\n for order in range(dim.shape[0]):\n if order == 0:\n var1 = kr_prod(X, V).T\n elif order == 1:\n var1 = kr_prod(X, U).T\n else:\n var1 = kr_prod(V, U).T\n var2 = kr_prod(var1, var1)\n var3 = np.matmul(var2, ten2mat(binary_tensor, order).T).reshape([rank, rank, dim[order]])\n var4 = np.matmul(var1, ten2mat(sparse_tensor, order).T)\n for i in range(dim[order]):\n var_Lambda = var3[ :, :, i]\n inv_var_Lambda = inv((var_Lambda + var_Lambda.T)/2 + 10e-12 * np.eye(rank))\n vec = np.matmul(inv_var_Lambda, var4[:, i])\n if order == 0:\n U[i, :] = vec.copy()\n elif order == 1:\n V[i, :] = vec.copy()\n else:\n X[i, :] = vec.copy()\n\n tensor_hat = cp_combine(U, V, X)\n mape = np.sum(np.abs(sparse_tensor[pos] - tensor_hat[pos])/sparse_tensor[pos])/sparse_tensor[pos].shape[0]\n rmse = np.sqrt(np.sum((sparse_tensor[pos] - tensor_hat[pos]) ** 2)/sparse_tensor[pos].shape[0])\n \n if (iters + 1) % 100 == 0:\n print('Iter: {}'.format(iters + 1))\n print('Training MAPE: {:.6}'.format(mape))\n print('Training RMSE: {:.6}'.format(rmse))\n print()\n \n return tensor_hat, U, V, X",
"_____no_output_____"
]
],
[
[
"# Part 3: Data Organization\n\n## 1) Matrix Structure\n\nWe consider a dataset of $m$ discrete time series $\\boldsymbol{y}_{i}\\in\\mathbb{R}^{f},i\\in\\left\\{1,2,...,m\\right\\}$. The time series may have missing elements. We express spatio-temporal dataset as a matrix $Y\\in\\mathbb{R}^{m\\times f}$ with $m$ rows (e.g., locations) and $f$ columns (e.g., discrete time intervals),\n\n$$Y=\\left[ \\begin{array}{cccc} y_{11} & y_{12} & \\cdots & y_{1f} \\\\ y_{21} & y_{22} & \\cdots & y_{2f} \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ y_{m1} & y_{m2} & \\cdots & y_{mf} \\\\ \\end{array} \\right]\\in\\mathbb{R}^{m\\times f}.$$\n\n## 2) Tensor Structure\n\nWe consider a dataset of $m$ discrete time series $\\boldsymbol{y}_{i}\\in\\mathbb{R}^{nf},i\\in\\left\\{1,2,...,m\\right\\}$. The time series may have missing elements. We partition each time series into intervals of predifined length $f$. We express each partitioned time series as a matrix $Y_{i}$ with $n$ rows (e.g., days) and $f$ columns (e.g., discrete time intervals per day),\n\n$$Y_{i}=\\left[ \\begin{array}{cccc} y_{11} & y_{12} & \\cdots & y_{1f} \\\\ y_{21} & y_{22} & \\cdots & y_{2f} \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ y_{n1} & y_{n2} & \\cdots & y_{nf} \\\\ \\end{array} \\right]\\in\\mathbb{R}^{n\\times f},i=1,2,...,m,$$\n\ntherefore, the resulting structure is a tensor $\\mathcal{Y}\\in\\mathbb{R}^{m\\times n\\times f}$.",
"_____no_output_____"
],
[
"**How to transform a data set into something we can use for time series imputation?**\n",
"_____no_output_____"
],
[
"# Part 4: Experiments on Guangzhou Data Set",
"_____no_output_____"
]
],
[
[
"import scipy.io\n\ntensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')\ndense_tensor = tensor['tensor']\nrandom_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')\nrandom_matrix = random_matrix['random_matrix']\nrandom_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')\nrandom_tensor = random_tensor['random_tensor']\n\nmissing_rate = 0.2\n\n# =============================================================================\n### Random missing (RM) scenario:\nbinary_tensor = np.round(random_tensor + 0.5 - missing_rate)\n# =============================================================================\n\n# =============================================================================\n### Non-random missing (NM) scenario:\n# binary_tensor = np.zeros(dense_tensor.shape)\n# for i1 in range(dense_tensor.shape[0]):\n# for i2 in range(dense_tensor.shape[1]):\n# binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)\n# =============================================================================\n\nsparse_tensor = np.multiply(dense_tensor, binary_tensor)",
"_____no_output_____"
]
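,
[
"# A short reshaping sketch (added for illustration; not part of the original experiments),\n# answering the question raised in Part 3: turning a set of time series into tensor form\n# is just a reshape. Here dense_tensor is already (road segment, day, time window).\nimport numpy as np\n\nm, n, f = dense_tensor.shape\ndense_mat_view = dense_tensor.reshape([m, n * f]) # tensor -> matrix of m long time series\nrecovered = dense_mat_view.reshape([m, n, f]) # matrix -> (m, n, f) tensor again\nprint('matrix shape:', dense_mat_view.shape, '| round trip exact:', np.array_equal(recovered, dense_tensor))",
"_____no_output_____"
]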
],
[
[
"**Question**: Given only the partially observed data $\\mathcal{Y}\\in\\mathbb{R}^{m\\times n\\times f}$, how can we impute the unknown missing values?\n\nThe main influential factors for such imputation model are:\n\n- `rank`.\n\n- `maxiter`.\n",
"_____no_output_____"
]
],
[
[
"import time\nstart = time.time()\nrank = 80\nmaxiter = 1000\ntensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)\npos = np.where((dense_tensor != 0) & (sparse_tensor == 0))\nfinal_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]\nfinal_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])\nprint('Final Imputation MAPE: {:.6}'.format(final_mape))\nprint('Final Imputation RMSE: {:.6}'.format(final_rmse))\nprint()\nend = time.time()\nprint('Running time: %d seconds'%(end - start))",
"Iter: 100\nTraining MAPE: 0.0809251\nTraining RMSE: 3.47736\n\nIter: 200\nTraining MAPE: 0.0805399\nTraining RMSE: 3.46261\n\nIter: 300\nTraining MAPE: 0.0803688\nTraining RMSE: 3.45631\n\nIter: 400\nTraining MAPE: 0.0802661\nTraining RMSE: 3.45266\n\nIter: 500\nTraining MAPE: 0.0801768\nTraining RMSE: 3.44986\n\nIter: 600\nTraining MAPE: 0.0800948\nTraining RMSE: 3.44755\n\nIter: 700\nTraining MAPE: 0.0800266\nTraining RMSE: 3.4456\n\nIter: 800\nTraining MAPE: 0.0799675\nTraining RMSE: 3.44365\n\nIter: 900\nTraining MAPE: 0.07992\nTraining RMSE: 3.4419\n\nIter: 1000\nTraining MAPE: 0.079885\nTraining RMSE: 3.44058\n\nFinal Imputation MAPE: 0.0833307\nFinal Imputation RMSE: 3.59283\n\nRunning time: 2908 seconds\n"
]
],
[
[
"**Experiment results** of missing data imputation using TF-ALS:\n\n| scenario |`rank`| `maxiter`| mape | rmse |\n|:----------|-----:|---------:|-----------:|----------:|\n|**20%, RM**| 80 | 1000 | **0.0833** | **3.5928**|\n|**40%, RM**| 80 | 1000 | **0.0837** | **3.6190**|\n|**20%, NM**| 10 | 1000 | **0.1027** | **4.2960**|\n|**40%, NM**| 10 | 1000 | **0.1028** | **4.3274**|\n",
"_____no_output_____"
],
[
"# Part 5: Experiments on Birmingham Data Set",
"_____no_output_____"
]
],
[
[
"import scipy.io\n\ntensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')\ndense_tensor = tensor['tensor']\nrandom_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat')\nrandom_matrix = random_matrix['random_matrix']\nrandom_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')\nrandom_tensor = random_tensor['random_tensor']\n\nmissing_rate = 0.3\n\n# =============================================================================\n### Random missing (RM) scenario:\nbinary_tensor = np.round(random_tensor + 0.5 - missing_rate)\n# =============================================================================\n\n# =============================================================================\n### Non-random missing (NM) scenario:\n# binary_tensor = np.zeros(dense_tensor.shape)\n# for i1 in range(dense_tensor.shape[0]):\n# for i2 in range(dense_tensor.shape[1]):\n# binary_tensor[i1, i2, :] = np.round(random_matrix[i1,i2] + 0.5 - missing_rate)\n# =============================================================================\n\nsparse_tensor = np.multiply(dense_tensor, binary_tensor)",
"_____no_output_____"
],
[
"import time\nstart = time.time()\nrank = 30\nmaxiter = 1000\ntensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)\npos = np.where((dense_tensor != 0) & (sparse_tensor == 0))\nfinal_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]\nfinal_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])\nprint('Final Imputation MAPE: {:.6}'.format(final_mape))\nprint('Final Imputation RMSE: {:.6}'.format(final_rmse))\nprint()\nend = time.time()\nprint('Running time: %d seconds'%(end - start))",
"Iter: 100\nTraining MAPE: 0.0509401\nTraining RMSE: 15.3163\n\nIter: 200\nTraining MAPE: 0.0498774\nTraining RMSE: 14.9599\n\nIter: 300\nTraining MAPE: 0.0490062\nTraining RMSE: 14.768\n\nIter: 400\nTraining MAPE: 0.0481006\nTraining RMSE: 14.6343\n\nIter: 500\nTraining MAPE: 0.0474233\nTraining RMSE: 14.5365\n\nIter: 600\nTraining MAPE: 0.0470442\nTraining RMSE: 14.4642\n\nIter: 700\nTraining MAPE: 0.0469617\nTraining RMSE: 14.4082\n\nIter: 800\nTraining MAPE: 0.0470459\nTraining RMSE: 14.3623\n\nIter: 900\nTraining MAPE: 0.0472333\nTraining RMSE: 14.3235\n\nIter: 1000\nTraining MAPE: 0.047408\nTraining RMSE: 14.2898\n\nFinal Imputation MAPE: 0.0583358\nFinal Imputation RMSE: 18.9148\n\nRunning time: 38 seconds\n"
]
],
[
[
"**Experiment results** of missing data imputation using TF-ALS:\n\n| scenario |`rank`| `maxiter`| mape | rmse |\n|:----------|-----:|---------:|-----------:|-----------:|\n|**10%, RM**| 30 | 1000 | **0.0615** | **18.5005**|\n|**30%, RM**| 30 | 1000 | **0.0583** | **18.9148**|\n|**10%, NM**| 10 | 1000 | **0.1447** | **41.6710**|\n|**30%, NM**| 10 | 1000 | **0.1765** | **63.8465**|\n",
"_____no_output_____"
],
[
"# Part 6: Experiments on Hangzhou Data Set",
"_____no_output_____"
]
],
[
[
"import scipy.io\n\ntensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')\ndense_tensor = tensor['tensor']\nrandom_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')\nrandom_matrix = random_matrix['random_matrix']\nrandom_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')\nrandom_tensor = random_tensor['random_tensor']\n\nmissing_rate = 0.4\n\n# =============================================================================\n### Random missing (RM) scenario:\nbinary_tensor = np.round(random_tensor + 0.5 - missing_rate)\n# =============================================================================\n\n# =============================================================================\n### Non-random missing (NM) scenario:\n# binary_tensor = np.zeros(dense_tensor.shape)\n# for i1 in range(dense_tensor.shape[0]):\n# for i2 in range(dense_tensor.shape[1]):\n# binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)\n# =============================================================================\n\nsparse_tensor = np.multiply(dense_tensor, binary_tensor)",
"_____no_output_____"
],
[
"import time\nstart = time.time()\nrank = 50\nmaxiter = 1000\ntensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)\npos = np.where((dense_tensor != 0) & (sparse_tensor == 0))\nfinal_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]\nfinal_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])\nprint('Final Imputation MAPE: {:.6}'.format(final_mape))\nprint('Final Imputation RMSE: {:.6}'.format(final_rmse))\nprint()\nend = time.time()\nprint('Running time: %d seconds'%(end - start))",
"Iter: 100\nTraining MAPE: 0.176548\nTraining RMSE: 17.0263\n\nIter: 200\nTraining MAPE: 0.174888\nTraining RMSE: 16.8609\n\nIter: 300\nTraining MAPE: 0.175056\nTraining RMSE: 16.7835\n\nIter: 400\nTraining MAPE: 0.174988\nTraining RMSE: 16.7323\n\nIter: 500\nTraining MAPE: 0.175013\nTraining RMSE: 16.6942\n\nIter: 600\nTraining MAPE: 0.174928\nTraining RMSE: 16.6654\n\nIter: 700\nTraining MAPE: 0.174722\nTraining RMSE: 16.6441\n\nIter: 800\nTraining MAPE: 0.174565\nTraining RMSE: 16.6284\n\nIter: 900\nTraining MAPE: 0.174454\nTraining RMSE: 16.6159\n\nIter: 1000\nTraining MAPE: 0.174409\nTraining RMSE: 16.6054\n\nFinal Imputation MAPE: 0.209776\nFinal Imputation RMSE: 100.315\n\nRunning time: 279 seconds\n"
]
],
[
[
"**Experiment results** of missing data imputation using TF-ALS:\n\n| scenario |`rank`| `maxiter`| mape | rmse |\n|:----------|-----:|---------:|-----------:|----------:|\n|**20%, RM**| 50 | 1000 | **0.1991** |**111.303**|\n|**40%, RM**| 50 | 1000 | **0.2098** |**100.315**|\n|**20%, NM**| 5 | 1000 | **0.2837** |**42.6136**|\n|**40%, NM**| 5 | 1000 | **0.2811** |**38.4201**|\n",
"_____no_output_____"
],
[
"# Part 7: Experiments on New York Data Set",
"_____no_output_____"
]
],
[
[
"import scipy.io\n\ntensor = scipy.io.loadmat('../datasets/NYC-data-set/tensor.mat')\ndense_tensor = tensor['tensor']\nrm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/rm_tensor.mat')\nrm_tensor = rm_tensor['rm_tensor']\nnm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/nm_tensor.mat')\nnm_tensor = nm_tensor['nm_tensor']\n\nmissing_rate = 0.1\n\n# =============================================================================\n### Random missing (RM) scenario\n### Set the RM scenario by:\n# binary_tensor = np.round(rm_tensor + 0.5 - missing_rate)\n# =============================================================================\n\n# =============================================================================\n### Non-random missing (NM) scenario\n### Set the NM scenario by:\nbinary_tensor = np.zeros(dense_tensor.shape)\nfor i1 in range(dense_tensor.shape[0]):\n for i2 in range(dense_tensor.shape[1]):\n for i3 in range(61):\n binary_tensor[i1, i2, i3 * 24 : (i3 + 1) * 24] = np.round(nm_tensor[i1, i2, i3] + 0.5 - missing_rate)\n# =============================================================================\n\nsparse_tensor = np.multiply(dense_tensor, binary_tensor)",
"_____no_output_____"
],
[
"import time\nstart = time.time()\nrank = 30\nmaxiter = 1000\ntensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)\npos = np.where((dense_tensor != 0) & (sparse_tensor == 0))\nfinal_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]\nfinal_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])\nprint('Final Imputation MAPE: {:.6}'.format(final_mape))\nprint('Final Imputation RMSE: {:.6}'.format(final_rmse))\nprint()\nend = time.time()\nprint('Running time: %d seconds'%(end - start))",
"Iter: 100\nTraining MAPE: 0.511739\nTraining RMSE: 4.07981\n\nIter: 200\nTraining MAPE: 0.501094\nTraining RMSE: 4.0612\n\nIter: 300\nTraining MAPE: 0.504264\nTraining RMSE: 4.05578\n\nIter: 400\nTraining MAPE: 0.507211\nTraining RMSE: 4.05119\n\nIter: 500\nTraining MAPE: 0.509956\nTraining RMSE: 4.04623\n\nIter: 600\nTraining MAPE: 0.51046\nTraining RMSE: 4.04129\n\nIter: 700\nTraining MAPE: 0.509797\nTraining RMSE: 4.03294\n\nIter: 800\nTraining MAPE: 0.509531\nTraining RMSE: 4.02976\n\nIter: 900\nTraining MAPE: 0.509265\nTraining RMSE: 4.02861\n\nIter: 1000\nTraining MAPE: 0.508873\nTraining RMSE: 4.02796\n\nFinal Imputation MAPE: 0.540363\nFinal Imputation RMSE: 5.66633\n\nRunning time: 742 seconds\n"
]
],
[
[
"**Experiment results** of missing data imputation using TF-ALS:\n\n| scenario |`rank`| `maxiter`| mape | rmse |\n|:----------|-----:|---------:|-----------:|----------:|\n|**10%, RM**| 30 | 1000 | **0.5262** | **6.2444**|\n|**30%, RM**| 30 | 1000 | **0.5488** | **6.8968**|\n|**10%, NM**| 30 | 1000 | **0.5170** | **5.9863**|\n|**30%, NM**| 30 | 100 | **-** | **-**|\n",
"_____no_output_____"
],
[
"# Part 8: Experiments on Seattle Data Set",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\ndense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)\nRM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0)\ndense_mat = dense_mat.values\nRM_mat = RM_mat.values\ndense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288])\nRM_tensor = RM_mat.reshape([RM_mat.shape[0], 28, 288])\n\nmissing_rate = 0.2\n\n# =============================================================================\n### Random missing (RM) scenario\n### Set the RM scenario by:\nbinary_tensor = np.round(RM_tensor + 0.5 - missing_rate)\n# =============================================================================\n\nsparse_tensor = np.multiply(dense_tensor, binary_tensor)",
"_____no_output_____"
],
[
"import time\nstart = time.time()\nrank = 50\nmaxiter = 1000\ntensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)\npos = np.where((dense_tensor != 0) & (sparse_tensor == 0))\nfinal_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]\nfinal_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])\nprint('Final Imputation MAPE: {:.6}'.format(final_mape))\nprint('Final Imputation RMSE: {:.6}'.format(final_rmse))\nprint()\nend = time.time()\nprint('Running time: %d seconds'%(end - start))",
"Iter: 100\nTraining MAPE: 0.0749497\nTraining RMSE: 4.47036\n\nIter: 200\nTraining MAPE: 0.0745197\nTraining RMSE: 4.44713\n\nIter: 300\nTraining MAPE: 0.0741685\nTraining RMSE: 4.43496\n\nIter: 400\nTraining MAPE: 0.0739049\nTraining RMSE: 4.42523\n\nIter: 500\nTraining MAPE: 0.0737243\nTraining RMSE: 4.41692\n\nIter: 600\nTraining MAPE: 0.0735726\nTraining RMSE: 4.40994\n\nIter: 700\nTraining MAPE: 0.073415\nTraining RMSE: 4.4034\n\nIter: 800\nTraining MAPE: 0.0732484\nTraining RMSE: 4.3976\n\nIter: 900\nTraining MAPE: 0.0731012\nTraining RMSE: 4.393\n\nIter: 1000\nTraining MAPE: 0.0729675\nTraining RMSE: 4.38843\n\nFinal Imputation MAPE: 0.0741996\nFinal Imputation RMSE: 4.49292\n\nRunning time: 2594 seconds\n"
],
[
"import pandas as pd\n\ndense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)\nRM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0)\ndense_mat = dense_mat.values\nRM_mat = RM_mat.values\ndense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288])\nRM_tensor = RM_mat.reshape([RM_mat.shape[0], 28, 288])\n\nmissing_rate = 0.4\n\n# =============================================================================\n### Random missing (RM) scenario\n### Set the RM scenario by:\nbinary_tensor = np.round(RM_tensor + 0.5 - missing_rate)\n# =============================================================================\n\nsparse_tensor = np.multiply(dense_tensor, binary_tensor)",
"_____no_output_____"
],
[
"import time\nstart = time.time()\nrank = 50\nmaxiter = 1000\ntensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)\npos = np.where((dense_tensor != 0) & (sparse_tensor == 0))\nfinal_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]\nfinal_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])\nprint('Final Imputation MAPE: {:.6}'.format(final_mape))\nprint('Final Imputation RMSE: {:.6}'.format(final_rmse))\nprint()\nend = time.time()\nprint('Running time: %d seconds'%(end - start))",
"Iter: 100\nTraining MAPE: 0.0747491\nTraining RMSE: 4.4544\n\nIter: 200\nTraining MAPE: 0.074195\nTraining RMSE: 4.42842\n\nIter: 300\nTraining MAPE: 0.0738864\nTraining RMSE: 4.4162\n\nIter: 400\nTraining MAPE: 0.0735955\nTraining RMSE: 4.40699\n\nIter: 500\nTraining MAPE: 0.0733999\nTraining RMSE: 4.40083\n\nIter: 600\nTraining MAPE: 0.0732636\nTraining RMSE: 4.3959\n\nIter: 700\nTraining MAPE: 0.0731835\nTraining RMSE: 4.39241\n\nIter: 800\nTraining MAPE: 0.0731367\nTraining RMSE: 4.3899\n\nIter: 900\nTraining MAPE: 0.0730982\nTraining RMSE: 4.38779\n\nIter: 1000\nTraining MAPE: 0.0730573\nTraining RMSE: 4.38571\n\nFinal Imputation MAPE: 0.0757934\nFinal Imputation RMSE: 4.5574\n\nRunning time: 2706 seconds\n"
],
[
"import pandas as pd\n\ndense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)\nNM_mat = pd.read_csv('../datasets/Seattle-data-set/NM_mat.csv', index_col = 0)\ndense_mat = dense_mat.values\nNM_mat = NM_mat.values\ndense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288])\n\nmissing_rate = 0.2\n\n# =============================================================================\n### Non-random missing (NM) scenario\n### Set the NM scenario by:\nbinary_tensor = np.zeros((dense_mat.shape[0], 28, 288))\nfor i1 in range(binary_tensor.shape[0]):\n for i2 in range(binary_tensor.shape[1]):\n binary_tensor[i1, i2, :] = np.round(NM_mat[i1, i2] + 0.5 - missing_rate)\n# =============================================================================\n\nsparse_tensor = np.multiply(dense_tensor, binary_tensor)",
"_____no_output_____"
],
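[
"# Optional sanity check (an added sketch, not from the original notebook): under the NM\n# scenario whole sensor-days are masked, so the realized missing rate in binary_tensor\n# should be close to the target missing_rate.\nrealized = 1 - binary_tensor.mean()\nprint('target missing rate:', missing_rate, '| realized:', round(float(realized), 4))",
"_____no_output_____"
],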
[
"import time\nstart = time.time()\nrank = 10\nmaxiter = 1000\ntensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)\npos = np.where((dense_tensor != 0) & (sparse_tensor == 0))\nfinal_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]\nfinal_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])\nprint('Final Imputation MAPE: {:.6}'.format(final_mape))\nprint('Final Imputation RMSE: {:.6}'.format(final_rmse))\nprint()\nend = time.time()\nprint('Running time: %d seconds'%(end - start))",
"Iter: 100\nTraining MAPE: 0.100191\nTraining RMSE: 5.60051\n\nIter: 200\nTraining MAPE: 0.0981896\nTraining RMSE: 5.51405\n\nIter: 300\nTraining MAPE: 0.0969386\nTraining RMSE: 5.46377\n\nIter: 400\nTraining MAPE: 0.0967974\nTraining RMSE: 5.45581\n\nIter: 500\nTraining MAPE: 0.0966243\nTraining RMSE: 5.44397\n\nIter: 600\nTraining MAPE: 0.0960368\nTraining RMSE: 5.42168\n\nIter: 700\nTraining MAPE: 0.0958292\nTraining RMSE: 5.41295\n\nIter: 800\nTraining MAPE: 0.0957371\nTraining RMSE: 5.40865\n\nIter: 900\nTraining MAPE: 0.0956582\nTraining RMSE: 5.40568\n\nIter: 1000\nTraining MAPE: 0.095595\nTraining RMSE: 5.40339\n\nFinal Imputation MAPE: 0.0994999\nFinal Imputation RMSE: 5.63311\n\nRunning time: 351 seconds\n"
],
[
"import pandas as pd\n\ndense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)\nNM_mat = pd.read_csv('../datasets/Seattle-data-set/NM_mat.csv', index_col = 0)\ndense_mat = dense_mat.values\nNM_mat = NM_mat.values\ndense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288])\n\nmissing_rate = 0.4\n\n# =============================================================================\n### Non-random missing (NM) scenario\n### Set the NM scenario by:\nbinary_tensor = np.zeros((dense_mat.shape[0], 28, 288))\nfor i1 in range(binary_tensor.shape[0]):\n for i2 in range(binary_tensor.shape[1]):\n binary_tensor[i1, i2, :] = np.round(NM_mat[i1, i2] + 0.5 - missing_rate)\n# =============================================================================\n\nsparse_tensor = np.multiply(dense_tensor, binary_tensor)",
"_____no_output_____"
],
[
"import time\nstart = time.time()\nrank = 10\nmaxiter = 1000\ntensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)\npos = np.where((dense_tensor != 0) & (sparse_tensor == 0))\nfinal_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]\nfinal_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])\nprint('Final Imputation MAPE: {:.6}'.format(final_mape))\nprint('Final Imputation RMSE: {:.6}'.format(final_rmse))\nprint()\nend = time.time()\nprint('Running time: %d seconds'%(end - start))",
"Iter: 100\nTraining MAPE: 0.0996282\nTraining RMSE: 5.55963\n\nIter: 200\nTraining MAPE: 0.0992568\nTraining RMSE: 5.53825\n\nIter: 300\nTraining MAPE: 0.0986723\nTraining RMSE: 5.51806\n\nIter: 400\nTraining MAPE: 0.0967838\nTraining RMSE: 5.46447\n\nIter: 500\nTraining MAPE: 0.0962312\nTraining RMSE: 5.44762\n\nIter: 600\nTraining MAPE: 0.0961017\nTraining RMSE: 5.44322\n\nIter: 700\nTraining MAPE: 0.0959531\nTraining RMSE: 5.43927\n\nIter: 800\nTraining MAPE: 0.0958815\nTraining RMSE: 5.43619\n\nIter: 900\nTraining MAPE: 0.0958781\nTraining RMSE: 5.4344\n\nIter: 1000\nTraining MAPE: 0.0958921\nTraining RMSE: 5.43266\n\nFinal Imputation MAPE: 0.10038\nFinal Imputation RMSE: 5.7034\n\nRunning time: 304 seconds\n"
]
],
[
[
"**Experiment results** of missing data imputation using TF-ALS:\n\n| scenario |`rank`| `maxiter`| mape | rmse |\n|:----------|-----:|---------:|-----------:|----------:|\n|**20%, RM**| 50 | 1000 | **0.0742** |**4.4929**|\n|**40%, RM**| 50 | 1000 | **0.0758** |**4.5574**|\n|**20%, NM**| 10 | 1000 | **0.0995** |**5.6331**|\n|**40%, NM**| 10 | 1000 | **0.1004** |**5.7034**|\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0427bbae880fd2e40d9fb0a4befb5bdb9ec829b | 1,432 | ipynb | Jupyter Notebook | docs/Dashboards/Grabcut_Mesher.ipynb | erdc/AdhUI | f87b05f9e13b4a4d459a2bb576454b8596ca84a7 | [
"BSD-3-Clause"
] | 2 | 2019-07-24T18:27:13.000Z | 2019-09-05T13:03:02.000Z | docs/Dashboards/Grabcut_Mesher.ipynb | erdc/AdhUI | f87b05f9e13b4a4d459a2bb576454b8596ca84a7 | [
"BSD-3-Clause"
] | 7 | 2019-07-16T15:44:04.000Z | 2019-08-29T14:46:42.000Z | docs/Dashboards/Grabcut_Mesher.ipynb | erdc/AdhUI | f87b05f9e13b4a4d459a2bb576454b8596ca84a7 | [
"BSD-3-Clause"
] | 1 | 2021-05-05T19:57:49.000Z | 2021-05-05T19:57:49.000Z | 22.730159 | 69 | 0.560056 | [
[
[
"import panel as pn\nimport holoviews as hv\nimport geoviews as gv\nfrom adhui import CreateMesh, ConceptualModelEditor\nfrom earthsim.annotators import PolyAnnotator\nfrom earthsim.grabcut import GrabCutPanel, SelectRegionPanel\n\nhv.extension('bokeh')",
"_____no_output_____"
],
[
"# create stages for pipeline\nstages = [\n ('Select Region', SelectRegionPanel),\n ('Grabcut', GrabCutPanel),\n ('Path Editor', ConceptualModelEditor),\n ('Mesh', CreateMesh)\n]\n\n# create the pipeline\npipeline = pn.pipeline.Pipeline(stages, debug=True)\n \n# return a display of the pipeline\npipeline.layout.servable()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
d04281e48a4c8fa5de971c0bf2f7b1390f5e1d34 | 1,238 | ipynb | Jupyter Notebook | temp_codes/.ipynb_checkpoints/main_conditional_aug_sup_bs900_concat_3-checkpoint.ipynb | minhtannguyen/ffjord | f3418249eaa4647f4339aea8d814cf2ce33be141 | [
"MIT"
] | null | null | null | temp_codes/.ipynb_checkpoints/main_conditional_aug_sup_bs900_concat_3-checkpoint.ipynb | minhtannguyen/ffjord | f3418249eaa4647f4339aea8d814cf2ce33be141 | [
"MIT"
] | null | null | null | temp_codes/.ipynb_checkpoints/main_conditional_aug_sup_bs900_concat_3-checkpoint.ipynb | minhtannguyen/ffjord | f3418249eaa4647f4339aea8d814cf2ce33be141 | [
"MIT"
] | null | null | null | 23.807692 | 372 | 0.585622 | [
[
[
"import os\nos.environ['CUDA_VISIBLE_DEVICES']='0,1,2,3,4,5'",
"_____no_output_____"
],
[
"%run -p train_cnf_augmented.py --data mnist --dims 64,64,64 --strides 1,1,1,1 --num_blocks 2 --layer_type concat --multiscale True --rademacher True --batch_size 900 --test_batch_size 500 --save ../experiments_published/cnf_published_conditional_bs900_1 --seed 1 --conditional True --controlled_tol False --train_mode sup --concat_size 3 --log_freq 10 --weight_y 0.5",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
d042983de851d168ae9ceabb038fccb6543c9292 | 2,279 | ipynb | Jupyter Notebook | Python_Day3/amrit.ipynb | dchaurangi/pro1 | 0c0c9f53749fb9857f69f5d564718fd3acbec7b0 | [
"MIT"
] | 1 | 2020-01-16T08:54:52.000Z | 2020-01-16T08:54:52.000Z | Python_Day3/amrit.ipynb | dchaurangi/Deloitte-Training | 0c0c9f53749fb9857f69f5d564718fd3acbec7b0 | [
"MIT"
] | null | null | null | Python_Day3/amrit.ipynb | dchaurangi/Deloitte-Training | 0c0c9f53749fb9857f69f5d564718fd3acbec7b0 | [
"MIT"
] | null | null | null | 16.635036 | 54 | 0.454585 | [
[
[
"\n\"\"\"nector is green in colour\nplease go green\"\"\"\n",
"_____no_output_____"
],
[
"print(\"amrit's specail green\")",
"amrit's specail green\n"
],
[
"def green():\n print(\"in the green\")",
"_____no_output_____"
],
[
"\n# __name__ = \"amrit\"",
"_____no_output_____"
],
[
"print(__name__)",
"__main__\n"
],
[
"if __name__ == \"__main__\":\n def peach():\n print('peach is a fruit')",
"_____no_output_____"
],
[
"#peach()",
"peach is a fruit\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d042ac3beb23649a50a784a44858141ff6ffc786 | 92,268 | ipynb | Jupyter Notebook | wgan_experiment/WGAN_experiment.ipynb | kolchinski/humanception-score | da8880eec3be39574718409cfe8ca303f41c64e6 | [
"MIT"
] | null | null | null | wgan_experiment/WGAN_experiment.ipynb | kolchinski/humanception-score | da8880eec3be39574718409cfe8ca303f41c64e6 | [
"MIT"
] | null | null | null | wgan_experiment/WGAN_experiment.ipynb | kolchinski/humanception-score | da8880eec3be39574718409cfe8ca303f41c64e6 | [
"MIT"
] | null | null | null | 45.564444 | 7,248 | 0.528471 | [
[
[
"Let's look at:\nNumber of labels per image (histogram)\nQuality score per image for images with multiple labels (sigmoid?)\n",
"_____no_output_____"
]
],
[
[
"import csv\nfrom itertools import islice\nfrom collections import defaultdict\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport torch\nimport torchvision\nimport numpy as np",
"_____no_output_____"
],
[
"CSV_PATH = 'wgangp_data.csv'",
"_____no_output_____"
],
[
"realness = {}\n# real_votes = defaultdict(int)\n# fake_votes = defaultdict(int)\ntotal_votes = defaultdict(int)\ncorrect_votes = defaultdict(int)\n\n\n\n\nwith open(CSV_PATH) as f:\n dictreader = csv.DictReader(f)\n for line in dictreader:\n img_name = line['img']\n assert(line['realness'] in ('True', 'False'))\n assert(line['correctness'] in ('True', 'False'))\n \n realness[img_name] = line['realness'] == 'True'\n if line['correctness'] == 'True':\n correct_votes[img_name] += 1\n total_votes[img_name] += 1\n \n ",
"_____no_output_____"
],
[
"pdx = pd.read_csv(CSV_PATH)\npdx",
"_____no_output_____"
],
[
"pdx[pdx.groupby('img').count() > 50]\npdx\n#df.img\n# print(df.columns)\n# print(df['img'])",
"_____no_output_____"
],
[
"# How much of the time do people guess \"fake\"? Slightly more than half!\npdx[pdx.correctness != pdx.realness].count()/pdx.count()",
"_____no_output_____"
],
[
"# How much of the time do people guess right? 94.4%\npdx[pdx.correctness].count()/pdx.count()",
"_____no_output_____"
],
[
"#90.3% of the time, real images are correctly labeled as real\npdx[pdx.realness][pdx.correctness].count()/pdx[pdx.realness].count()",
"/Users/alexkolchinski/anaconda3/envs/wgan_experiment/lib/python3.7/site-packages/ipykernel_launcher.py:1: UserWarning: Boolean Series key will be reindexed to match DataFrame index.\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"#98.5% of the time, fake images are correctly labeled as fake\npdx[~pdx.realness][pdx.correctness].count()/pdx[~pdx.realness].count()",
"/Users/alexkolchinski/anaconda3/envs/wgan_experiment/lib/python3.7/site-packages/ipykernel_launcher.py:2: UserWarning: Boolean Series key will be reindexed to match DataFrame index.\n \n"
],
[
"len(total_votes.values())",
"_____no_output_____"
],
[
"img_dict = {img: [realness[img], correct_votes[img], total_votes[img], correct_votes[img]/total_votes[img]] for img in realness }\n# print(img_dict.keys())\n#img_dict['celeba500/005077_crop.jpg']\nplt.hist([v[3] for k,v in img_dict.items() if 'celeb' in k])",
"_____no_output_____"
],
[
"def getVotesDict(img_dict):\n\n votes_dict = defaultdict(int)\n for img in total_votes:\n votes_dict[img_dict[img][2]] += 1\n return votes_dict\n \nvotes_dict = getVotesDict(img_dict)\nfor i in sorted(votes_dict.keys()):\n print(i, votes_dict[i])",
"1 2460\n2 1283\n3 400\n4 95\n5 11\n6 2\n129 2\n130 1\n131 4\n132 4\n133 8\n134 8\n135 10\n136 8\n137 3\n138 2\n"
],
[
"selected_img_dict = {img:value for img, value in img_dict.items() if img_dict[img][2] > 10}\nless_than_50_dict = {img:value for img, value in img_dict.items() if img_dict[img][2] < 10}\nimgs_over_50 = list(selected_img_dict.keys())\n# print(len(selected_img_dict))\n# print(imgs_over_50)",
"_____no_output_____"
],
[
"pdx_50 = pdx[pdx.img.apply(lambda x: x in imgs_over_50)]\nlen(pdx_50)",
"_____no_output_____"
],
[
"pdx_under_50 = pdx[pdx.img.apply(lambda x: x not in imgs_over_50)]\nlen(pdx_under_50)",
"_____no_output_____"
],
[
"len(pdx_under_50[pdx_under_50.img.apply(lambda x: 'wgan' not in x)])",
"_____no_output_____"
],
[
"correctness = sorted([value[3] for key, value in selected_img_dict.items()])\nprint(correctness)\nplt.hist(correctness)\nplt.show()",
"[0.6204379562043796, 0.7164179104477612, 0.7557251908396947, 0.7898550724637681, 0.8059701492537313, 0.8120300751879699, 0.8248175182481752, 0.8283582089552238, 0.837037037037037, 0.8449612403100775, 0.8455882352941176, 0.8712121212121212, 0.875, 0.8787878787878788, 0.8805970149253731, 0.8823529411764706, 0.8897058823529411, 0.8962962962962963, 0.9, 0.9022556390977443, 0.9083969465648855, 0.9097744360902256, 0.9111111111111111, 0.9130434782608695, 0.9197080291970803, 0.924812030075188, 0.924812030075188, 0.9259259259259259, 0.9259259259259259, 0.9312977099236641, 0.937984496124031, 0.9398496240601504, 0.9473684210526315, 0.9481481481481482, 0.9552238805970149, 0.9555555555555556, 0.9558823529411765, 0.9618320610687023, 0.9621212121212122, 0.9701492537313433, 0.9701492537313433, 0.9703703703703703, 0.9703703703703703, 0.9703703703703703, 0.9779411764705882, 0.9779411764705882, 0.9848484848484849, 0.9849624060150376, 0.9852941176470589, 0.9925373134328358]\n"
],
[
"correctness = sorted([value[3] for key, value in less_than_50_dict.items()])\n# print(correctness)\nplt.hist(correctness)\nplt.show()\n\n",
"_____no_output_____"
],
[
"ct = []\n# selected_img = [img in total_votes.keys() if total_votes[img] > 1 ]",
"_____no_output_____"
],
[
"discriminator = torch.load('discriminator.pt', map_location='cpu')\n# torch.load_state_dict('discriminator.pt')",
"_____no_output_____"
],
[
"discriminator(torch.zeros(64,64,3))",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d042acfa3c0d8d2369927977ca941cd06acebed6 | 64,409 | ipynb | Jupyter Notebook | algorithm_exercise/semantic_analysis/semantic_analysis_naive_bayes_algorithm.ipynb | jbaeckn/learning_projects | 801492423eede89842230fd8f89c20204d87e068 | [
"MIT"
] | null | null | null | algorithm_exercise/semantic_analysis/semantic_analysis_naive_bayes_algorithm.ipynb | jbaeckn/learning_projects | 801492423eede89842230fd8f89c20204d87e068 | [
"MIT"
] | null | null | null | algorithm_exercise/semantic_analysis/semantic_analysis_naive_bayes_algorithm.ipynb | jbaeckn/learning_projects | 801492423eede89842230fd8f89c20204d87e068 | [
"MIT"
] | null | null | null | 54.769558 | 29,531 | 0.372215 | [
[
[
"# Naive Bayes Classifier\n\nPredicting positivty/negativity of movie reviews using Naive Bayes algorithm",
"_____no_output_____"
],
[
"## 1. Import Dataset\n\nLabels:\n* 0 : Negative review\n* 1 : Positive review",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport warnings\nwarnings.filterwarnings('ignore')\n\nreviews = pd.read_csv('ratings_train.txt', delimiter='\\t')\nreviews.head(10)",
"_____no_output_____"
],
[
"#divide between negative and positive reviews with more than 30 words in length\nneg = reviews[(reviews.document.str.len() >= 30) & (reviews.label == 0)].sample(3000, random_state=43)\npos = reviews[(reviews.document.str.len() >= 30) & (reviews.label == 1)].sample(3000, random_state=43)",
"_____no_output_____"
],
[
"pos.head()",
"_____no_output_____"
],
[
"#NLP method\nimport re\nimport konlpy\nfrom konlpy.tag import Twitter\n\nokt = Twitter()",
"_____no_output_____"
],
[
"def parse(s):\n s = re.sub(r'[?$.!,-_\\'\\\"(){}~]+', '', s)\n try:\n return okt.nouns(s)\n except:\n return []\n \n#okt.morphs is another option",
"_____no_output_____"
],
[
"neg['parsed_doc'] = neg.document.apply(parse)\npos['parsed_doc'] = pos.document.apply(parse)",
"_____no_output_____"
],
[
"neg.head()",
"_____no_output_____"
],
[
"pos.head()",
"_____no_output_____"
],
[
"# create 5800 training data / 200 test data\nneg_train = neg[:2900]\npos_train = pos[:2900]\nneg_test = neg[2900:]\npos_test = pos[2900:]",
"_____no_output_____"
]
],
[
[
"## 2. Create Corpus",
"_____no_output_____"
]
],
[
[
"neg_corpus = set(neg_train.parsed_doc.sum())\npos_corpus = set(pos_train.parsed_doc.sum())",
"_____no_output_____"
],
[
"corpus = list((neg_corpus).union(pos_corpus))\nprint('corpus length : ', len(corpus))",
"corpus length : 9836\n"
],
[
"corpus[:10]",
"_____no_output_____"
]
],
[
[
"## 3. Create Bag of Words",
"_____no_output_____"
]
],
[
[
"from collections import OrderedDict",
"_____no_output_____"
],
[
"neg_bow_vecs = []\nfor _, doc in neg.parsed_doc.items():\n bow_vecs = OrderedDict()\n for w in corpus:\n if w in doc:\n bow_vecs[w] = 1\n else:\n bow_vecs[w] = 0\n neg_bow_vecs.append(bow_vecs)",
"_____no_output_____"
],
[
"pos_bow_vecs = []\nfor _, doc in pos.parsed_doc.items():\n bow_vecs = OrderedDict()\n for w in corpus:\n if w in doc:\n bow_vecs[w] = 1\n else:\n bow_vecs[w] = 0\n pos_bow_vecs.append(bow_vecs)",
"_____no_output_____"
],
[
"#bag of word vector example\n#this length is equal to the length of the corpus\nneg_bow_vecs[0].values()",
"_____no_output_____"
]
],
[
[
"## 4. Model Training",
"_____no_output_____"
],
[
"$n$ is the dimension of each document, in other words, the length of corpus <br>\n\n$$\\large p(pos|doc) = \\Large \\frac{p(doc|pos) \\cdot p(pos)}{p(doc)}$$\n<br>\n$$\\large p(neg|doc) = \\Large \\frac{p(doc|neg) \\cdot p(neg)}{p(doc)}$$\n\n<br><br>\n**Likelihood functions:** <br><br>\n$p(word_{i}|pos) = \\large \\frac{\\text{the number of positive documents that contain the word}}{\\text{the number of positive documents}}$\n\n$p(word_{i}|neg) = \\large \\frac{\\text{the number of negative documents that contain the word}}{\\text{the number of negative documents}}$",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
],
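[
"# Toy illustration (an added sketch) of the Laplace-smoothed likelihood defined above:\n# with 2,900 positive training documents and a corpus of 9,836 words, a word that\n# appears in 3 positive documents gets p(word|pos) = (3 + 1) / (2900 + 9836).\nword_doc_count, n_pos_docs, corpus_size = 3, 2900, 9836\nprint((word_doc_count + 1) / (n_pos_docs + corpus_size))",
"_____no_output_____"
],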
[
"corpus[:5]",
"_____no_output_____"
],
[
"list(neg_train.parsed_doc.items())[0]",
"_____no_output_____"
],
[
"#this counts how many times a word in corpus appeares in neg_train data\nneg_words_likelihood_cnts = {}\nfor w in corpus:\n cnt = 0\n for _, doc in neg_train.parsed_doc.items():\n if w in doc:\n cnt += 1\n neg_words_likelihood_cnts[w] = cnt",
"_____no_output_____"
],
[
"#this counts how many times a word in corpus appeares in pos_train data : p(neg)\npos_words_likelihood_cnts = {}\nfor w in corpus:\n cnt = 0\n for _, doc in pos_train.parsed_doc.items():\n if w in doc:\n cnt += 1\n pos_words_likelihood_cnts[w] = cnt",
"_____no_output_____"
],
[
"import operator",
"_____no_output_____"
],
[
"sorted(neg_words_likelihood_cnts.items(), key=operator.itemgetter(1), reverse=True)[:10]",
"_____no_output_____"
],
[
"sorted(pos_words_likelihood_cnts.items(), key=operator.itemgetter(1), reverse=True)[:10]",
"_____no_output_____"
]
],
[
[
"## 5. Classifier\n\n* We represent each documents in terms of bag of words. If the size of Corpus is $n$, this means that each bag of word of document is $n-dimensional$\n* When there isn't a word, we use **Laclacian Smoothing**",
"_____no_output_____"
]
],
[
[
"test_data = pd.concat([neg_test, pos_test], axis=0)",
"_____no_output_____"
],
[
"test_data.head()",
"_____no_output_____"
],
[
"def predict(doc):\n pos_prior, neg_prior = 1/2, 1/2 #because we have equal number of pos and neg training documents\n\n # Posterior of pos\n pos_prob = np.log(1)\n for word in corpus:\n if word in doc:\n # the word is in the current document and has appeared in pos documents\n if word in pos_words_likelihood_cnts:\n pos_prob += np.log((pos_words_likelihood_cnts[word] + 1) / (len(pos_train) + len(corpus)))\n else:\n # the word is in the current document, but has never appeared in pos documents : Laplacian\n pos_prob += np.log(1 / (len(pos_train) + len(corpus)))\n else:\n # the word is not in the current document, but has appeared in pos documents \n # we can find the possibility that the word is not in pos\n if word in pos_words_likelihood_cnts:\n pos_prob += \\\n np.log((len(pos_train) - pos_words_likelihood_cnts[word] + 1) / (len(pos_train) + len(corpus)))\n else:\n # the word is not in the current document, and has never appeared in pos documents : Laplacian\n pos_prob += np.log((len(pos_train) + 1) / (len(pos_train) + len(corpus)))\n pos_prob += np.log(pos_prior)\n \n # Posterior of neg\n neg_prob = 1\n for word in corpus:\n if word in doc:\n # 단어가 현재 문장에 존재하고, neg 문장에 나온적이 있는 경우\n if word in neg_words_likelihood_cnts:\n neg_prob += np.log((neg_words_likelihood_cnts[word] + 1) / (len(neg_train) + len(corpus)))\n else:\n # 단어가 현재 문장에 존재하고, neg 문장에서 한 번도 나온적이 없는 경우 : 라플라시안 스무딩\n neg_prob += np.log(1 / (len(neg_train) + len(corpus)))\n else:\n # 단어가 현재 문장에 존재하지 않고, neg 문장에 나온적이 있는 경우 (neg에서 해당단어가 없는 확률을 구할 수 있다.)\n if word in neg_words_likelihood_cnts:\n neg_prob += \\\n np.log((len(neg_train) - neg_words_likelihood_cnts[word] + 1) / (len(neg_train) + len(corpus)))\n else:\n # 단어가 현재 문장에 존재하지 않고, pos 문장에서 단 한 번도 나온적이 없는 경우 : 라플라시안 스무딩\n neg_prob += np.log((len(neg_train) + 1) / (len(neg_train) + len(corpus)))\n neg_prob += np.log(neg_prior)\n \n if pos_prob >= neg_prob:\n return 1\n else:\n return 0",
"_____no_output_____"
],
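[
"# Spot-check the classifier on a single held-out document before scoring the whole test\n# set (an added sketch; the row chosen here is arbitrary).\nsample_doc = test_data.parsed_doc.iloc[0]\nprint(sample_doc, '->', predict(sample_doc))",
"_____no_output_____"
],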
[
"test_data['pred'] = test_data.parsed_doc.apply(predict)",
"_____no_output_____"
],
[
"test_data.head()",
"_____no_output_____"
],
[
"test_data.shape",
"_____no_output_____"
],
[
"sum(test_data.label ^ test_data.pred)",
"_____no_output_____"
]
],
[
[
"There are a total of 200 test documents, and of these 200 tests only 46 were different",
"_____no_output_____"
]
],
[
[
"1 - sum(test_data.label ^ test_data.pred) / len(test_data)",
"_____no_output_____"
]
],
[
[
"We have about 77% accuracy rate, which is relatively high",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d042ba314602a7bd8babe865057ff680fb7b5a69 | 62,381 | ipynb | Jupyter Notebook | 01-Math-Trig-Function/sine-python/01-math-sine-python.ipynb | avantcontra/coding-druid | 1e5e7e6aa75da6cd949c3fae717c0fd1d1fa2155 | [
"MIT"
] | 64 | 2019-07-03T18:25:37.000Z | 2022-03-06T01:24:10.000Z | 01-Math-Trig-Function/sine-python/01-math-sine-python.ipynb | avantcontra/coding-druid | 1e5e7e6aa75da6cd949c3fae717c0fd1d1fa2155 | [
"MIT"
] | null | null | null | 01-Math-Trig-Function/sine-python/01-math-sine-python.ipynb | avantcontra/coding-druid | 1e5e7e6aa75da6cd949c3fae717c0fd1d1fa2155 | [
"MIT"
] | 12 | 2020-02-17T13:38:59.000Z | 2021-11-17T20:48:17.000Z | 122.076321 | 25,232 | 0.874866 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 16, 9\n\nfig, ax = plt.subplots(1,1)\nplt.axis('equal')\n\nax.set_xlim([0, 2*np.pi])\nax.set_ylim([-2.5, 2.5])\n\n# sine function plot\nx = np.linspace(0, 2*np.pi, 100)\ny = np.sin(x)\n\n# x axis path\nax.plot(3*x - 3, 0*y, linewidth=1, color='#dddddd')\n# y axis path\nax.plot(0*x, 2.5*y, linewidth=1, color='#dddddd')\n\n# unit circle path\nax.plot(np.cos(x), np.sin(x), linewidth=1)\n# sine path\nax.plot(x + 1, np.sin(x), linewidth=1)\n\n\n# ------ anim -------\nsineLine, = ax.plot([], [], linewidth=4)\nsineDot, = ax.plot([], [], 'o', color='#ff0000')\n\ncircleLine, = ax.plot([], [],linewidth=4)\ncircleDot, = ax.plot([], [], 'o', color='black')\n\ndef sineAnim(i):\n # sine anim\n sineLine.set_data(x[:i] + 1,y[:i])\n sineDot.set_data(x[i] + 1, y[i])\n # circle anim\n circleLine.set_data(np.cos(x[:i]), np.sin(x[:i]))\n circleDot.set_data(np.cos(x[i]), np.sin(x[i]))\n\nanim = animation.FuncAnimation(fig, sineAnim, frames=len(x), interval=50)\n# -------------\n\nanim.save('sine-py-effect.mp4', writer='ffmpeg')\n\n# plt.show()\n# HTML(anim.to_html5_video())",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d042c369ea9aa7c77b453bc76bcf2341b9def0c2 | 5,032 | ipynb | Jupyter Notebook | examples/01_auditing_a_dataframe.ipynb | TTitcombe/PrivacyPanda | 8c016a2d1c9b358b3cb4b7385fbd6a5fa1deed23 | [
"Apache-2.0"
] | 2 | 2020-02-26T14:26:45.000Z | 2020-03-07T12:32:07.000Z | examples/01_auditing_a_dataframe.ipynb | TTitcombe/PrivacyPanda | 8c016a2d1c9b358b3cb4b7385fbd6a5fa1deed23 | [
"Apache-2.0"
] | 19 | 2020-02-24T17:36:14.000Z | 2020-03-14T11:42:14.000Z | examples/01_auditing_a_dataframe.ipynb | TTitcombe/PrivacyPanda | 8c016a2d1c9b358b3cb4b7385fbd6a5fa1deed23 | [
"Apache-2.0"
] | null | null | null | 29.952381 | 386 | 0.544118 | [
[
[
"# Auditing a dataframe\nIn this notebook, we shall demonstrate how to use `privacypanda` to _audit_ the privacy of your data. `privacypanda` provides a simple function which prints the names of any columns which break privacy. Currently, these are:\n- Addresses\n - E.g. \"10 Downing Street\"; \"221b Baker St\"; \"EC2R 8AH\"\n- Phonenumbers (UK mobile)\n - E.g. \"+447123456789\"\n- Email addresses\n - Ending in \".com\", \".co.uk\", \".org\", \".edu\" (to be expanded soon)",
"_____no_output_____"
]
],
[
[
"%load_ext watermark\n%watermark -n -p pandas,privacypanda -g",
"Sun Mar 08 2020 \n\npandas 1.0.1\nprivacypanda 0.1.0.dev0\nGit hash: 7d1343dc13973da5c265a5a2bcf1915384c3e131\n"
],
[
"import pandas as pd\nimport privacypanda as pp",
"_____no_output_____"
]
],
[
[
"---\n## Firstly, we need data",
"_____no_output_____"
]
],
[
[
"data = pd.DataFrame(\n {\n \"user ID\": [\n 1665,\n 1,\n 5287,\n 42,\n ],\n \"User email\": [\n \"xxxxxxxxxxxxx\",\n \"xxxxxxxx\",\n \"I'm not giving you that\",\n \"[email protected]\",\n ],\n \"User address\": [\n \"AB1 1AB\",\n \"\",\n \"XXX XXX\",\n \"EC2R 8AH\",\n ],\n \"Likes raclette\": [\n 1,\n 0,\n 1,\n 1,\n ],\n }\n)",
"_____no_output_____"
]
],
[
[
"You will notice two things about this dataframe:\n1. _Some_ of the data has already been anonymized, for example by replacing characters with \"x\"s. However, the person who collected this data has not been fastidious with its cleaning as there is still some raw, potentially problematic private information. As the dataset grows, it becomes easier to miss entries with private information\n2. Not all columns expose privacy: \"Likes raclette\" is pretty benign information (but be careful, lots of benign information can be combined to form a unique fingerprint identifying an individual - let's not worry about this at the moment, though), and \"user ID\" is already an anonymized labelling of an individual.",
"_____no_output_____"
],
[
"---\n# Auditing the data's privacy\nAs a data scientist, we want a simple way to tell which columns, if any break privacy. More importantly, _how_ they break privacy determines how we deal with them. For example, emails will likely be superfluous information for analysis and can therefore be removed from the data, but age may be important and so we may wish instead to apply differential privacy to the dataset.\n\nWe can use `privacypanda`'s `report_privacy` function to see which data is problematic.",
"_____no_output_____"
]
],
[
[
"report = pp.report_privacy(data)\nprint(report)",
"User address: ['address']\nUser email: ['email']\n\n"
]
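,
[
"# One possible follow-up (an added sketch): drop the flagged columns. The column names\n# are copied from the printed report above rather than read from a Report attribute,\n# since the Report API is still evolving.\nflagged = ['User address', 'User email']\nclean = data.drop(columns=flagged)\nclean.head()",
"_____no_output_____"
]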
],
[
[
"`report_privacy` returns a `Report` object which stores the privacy issues of each column in the data. \n\nAs `privacypanda` is in active development, \nthis is currently only a simple dictionary of binary \"breaks\"/\"doesn't break\" privacy for each column. \nWe aim to make this information _cell-level_, \ni.e. removing/replacing the information in individual cells in order to protect privacy with less information loss.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d042c628301c193e5b166e59765440bebc8f96a0 | 21,483 | ipynb | Jupyter Notebook | Q8_asgn2.ipynb | sunil-dhaka/census-language-analysis | d70c2ae4c18f603e165bf3adb4b1c06566848e4a | [
"MIT"
] | null | null | null | Q8_asgn2.ipynb | sunil-dhaka/census-language-analysis | d70c2ae4c18f603e165bf3adb4b1c06566848e4a | [
"MIT"
] | null | null | null | Q8_asgn2.ipynb | sunil-dhaka/census-language-analysis | d70c2ae4c18f603e165bf3adb4b1c06566848e4a | [
"MIT"
] | null | null | null | 33.514821 | 204 | 0.372946 | [
[
[
"import pandas as pd\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"## read datafiles\n- C-18 for language population\n- C-13 for particular age-range population from a state",
"_____no_output_____"
]
],
[
[
"c18=pd.read_excel('datasets/C-18.xlsx',skiprows=6,header=None,engine='openpyxl')",
"_____no_output_____"
],
[
"c13=pd.read_excel('datasets/C-13.xls',skiprows=7,header=None)",
"_____no_output_____"
]
],
[
[
"### particular age groups are\n- 5-9\n- 10-14\n- 15-19\n- 20-24\n- 25-29\n- 30-49\n- 50-69\n- 70+\n- Age not stated",
"_____no_output_____"
],
[
"## obtain useful data from C-13 and C-18 for age-groups\n- first get particular state names for identifying specific states\n- get particular age-groups from C-18 file\n- make list of particular age group row/col for a particular states\n- now just simply iterate through each state to get relevent data and store it into a csv file\n - to get total pop of particular age-range I have used C-13 file\n - to get total pop that speaks more than 3 languages from a state in a particular age-range I have used C-18 file ",
"_____no_output_____"
]
],
[
[
"# STATE_NAMES=[list(np.unique(c18.iloc[:,2].values))]\nSTATE_NAMES=[]\nfor state in c18.iloc[:,2].values:\n if not (state in STATE_NAMES):\n STATE_NAMES.append(state)\nAGE_GROUPS=list(c18.iloc[1:10,4].values)\n# although it is a bit of manual work but it is worth the efforts\nAGE_GROUP_RANGES=[list(range(5,10)),list(range(10,15)),list(range(15,20)),list(range(20,25)),list(range(25,30)),list(range(30,50)),list(range(50,70)),list(range(70,100))+['100+'],['Age not stated']]",
"_____no_output_____"
],
[
"useful_data=[]\nfor i,state in enumerate(STATE_NAMES):\n for j,age_grp in enumerate(AGE_GROUPS):\n # this list is to get only the years in the particular age-group\n\n true_false_list=[]\n for single_year_age in c13.iloc[:,4].values:\n if single_year_age in AGE_GROUP_RANGES[j]:\n true_false_list.append(True)\n else:\n true_false_list.append(False)\n\n # here i is the state code\n male_pop=c13[(c13.loc[:,1]==i) & (true_false_list)].iloc[:,6].values.sum()\n female_pop=c13[(c13.loc[:,1]==i) & (true_false_list)].iloc[:,7].values.sum()\n \n # tri\n tri_male=c18[(c18.iloc[:,0]==i) & (c18.iloc[:,4]==age_grp) & (c18.iloc[:,3]=='Total')].iloc[0,9]\n tri_female=c18[(c18.iloc[:,0]==i) & (c18.iloc[:,4]==age_grp) & (c18.iloc[:,3]=='Total')].iloc[0,10]\n\n #bi\n bi_male=c18[(c18.iloc[:,0]==i) & (c18.iloc[:,4]==age_grp) & (c18.iloc[:,3]=='Total')].iloc[0,6] - tri_male\n bi_female=c18[(c18.iloc[:,0]==i) & (c18.iloc[:,4]==age_grp) & (c18.iloc[:,3]=='Total')].iloc[0,7] - tri_female\n\n #uni\n uni_male=male_pop-bi_male-tri_male\n uni_female=female_pop-bi_female-tri_female\n\n item={\n 'state-code':i,\n 'state-name':state,\n 'age-group':age_grp,\n 'age-group-male-pop':male_pop,\n 'age-group-female-pop':female_pop,\n 'tri-male-ratio':tri_male/male_pop,\n 'tri-female-ratio':tri_female/female_pop,\n 'bi-male-ratio':bi_male/male_pop,\n 'bi-female-ratio':bi_female/female_pop,\n 'uni-male-ratio':uni_male/male_pop,\n 'uni-female-ratio':uni_female/female_pop\n }\n\n useful_data.append(item)",
"_____no_output_____"
],
[
"df=pd.DataFrame(useful_data)",
"_____no_output_____"
]
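,
[
"# Quick consistency check (an added sketch): every state/UT contributes one row per age\n# group, and by construction the uni/bi/tri ratios should sum to 1 for each gender.\nprint(df.shape) # expected: (36 states/UTs * 9 age groups, 11 columns)\nratio_sum = df['tri-male-ratio'] + df['bi-male-ratio'] + df['uni-male-ratio']\nprint(ratio_sum.round(6).unique())",
"_____no_output_____"
]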
],
[
[
"## age-analysis \n- get highest ratio age-group for a state and store it into csv file\n- above process can be repeated for all parts of the question",
"_____no_output_____"
]
],
[
[
"tri_list=[]\nbi_list=[]\nuni_list=[]\nfor i in range(36):\n male_values=df[df['state-code']==i].sort_values(by='tri-male-ratio',ascending=False).iloc[0,[2,5]].values\n female_values=df[df['state-code']==i].sort_values(by='tri-male-ratio',ascending=False).iloc[0,[2,6]].values\n tri_item={\n 'state/ut':i,\n 'age-group-males':male_values[0],\n 'ratio-males':male_values[1],\n 'age-group-females':female_values[0],\n 'ratio-females':female_values[1]\n }\n\n tri_list.append(tri_item)\n\n male_values=df[df['state-code']==i].sort_values(by='bi-male-ratio',ascending=False).iloc[0,[2,7]].values\n female_values=df[df['state-code']==i].sort_values(by='bi-male-ratio',ascending=False).iloc[0,[2,8]].values\n bi_item={\n 'state/ut':i,\n 'age-group-males':male_values[0],\n 'ratio-males':male_values[1],\n 'age-group-females':female_values[0],\n 'ratio-females':female_values[1]\n }\n\n bi_list.append(bi_item)\n\n male_values=df[df['state-code']==i].sort_values(by='uni-male-ratio',ascending=False).iloc[0,[2,9]].values\n female_values=df[df['state-code']==i].sort_values(by='uni-male-ratio',ascending=False).iloc[0,[2,10]].values\n uni_item={\n 'state/ut':i,\n 'age-group-males':male_values[0],\n 'ratio-males':male_values[1],\n 'age-group-females':female_values[0],\n 'ratio-females':female_values[1]\n }\n\n uni_list.append(uni_item)",
"_____no_output_____"
]
],
[
[
"- convert into pandas dataframes and store into CSVs",
"_____no_output_____"
]
],
[
[
"tri_df=pd.DataFrame(tri_list)\nbi_df=pd.DataFrame(bi_list)\nuni_df=pd.DataFrame(uni_list)",
"_____no_output_____"
],
[
"tri_df.to_csv('outputs/age-gender-a.csv',index=False)\nbi_df.to_csv('outputs/age-gender-b.csv',index=False)\nuni_df.to_csv('outputs/age-gender-c.csv',index=False)",
"_____no_output_____"
]
],
[
[
"## observations\n\n- in almost all states(and all cases) both highest ratio female and male age-groups are same.\n- interestingly in only one language case for all states '5-9' age group dominates, and it is also quite intutive; since at that early stage in life children only speak their mother toung only ",
"_____no_output_____"
]
],
[
[
"uni_df",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d042dc6162cb7d4cd84560588da211c25f55239f | 14,557 | ipynb | Jupyter Notebook | docs/abm-04.ipynb | sosiristseng/juliabook-abm | acdaadb6e2b593fc00d098566bb460c931f0501e | [
"MIT"
] | null | null | null | docs/abm-04.ipynb | sosiristseng/juliabook-abm | acdaadb6e2b593fc00d098566bb460c931f0501e | [
"MIT"
] | 3 | 2022-03-15T11:29:20.000Z | 2022-03-27T02:41:08.000Z | docs/abm-04.ipynb | sosiristseng/juliabook-abm | acdaadb6e2b593fc00d098566bb460c931f0501e | [
"MIT"
] | null | null | null | 27.362782 | 267 | 0.509926 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d043002ccd9f574e9fee98007e6f15c6e19921f8 | 5,119 | ipynb | Jupyter Notebook | 25-2.1 Neo4J graph example.ipynb | Gressling/examples | 9fa440b1951849b86bd1b19c8a361342e2964807 | [
"MIT"
] | 14 | 2020-11-08T17:19:27.000Z | 2022-03-22T02:57:25.000Z | 25-2.1 Neo4J graph example.ipynb | Gressling/examples | 9fa440b1951849b86bd1b19c8a361342e2964807 | [
"MIT"
] | null | null | null | 25-2.1 Neo4J graph example.ipynb | Gressling/examples | 9fa440b1951849b86bd1b19c8a361342e2964807 | [
"MIT"
] | 10 | 2021-05-29T02:24:45.000Z | 2022-03-30T15:38:56.000Z | 5,119 | 5,119 | 0.753858 | [
[
[
"# Neo4J graph example\n# author: Gressling, T\n# license: MIT License # code: github.com/gressling/examples\n# activity: single example # index: 25-2 ",
"_____no_output_____"
],
[
"# https://gist.github.com/korakot/328aaac51d78e589b4a176228e4bb06f\n# download 3.5.8 or neo4j-enterprise-4.0.0-alpha09mr02-unix\n!curl https://neo4j.com/artifact.php?name=neo4j-community-3.5.8-unix.tar.gz -v -o neo4j.tar.gz\n#!curl https://s3-eu-west-1.amazonaws.com/dist.neo4j.org/neo4j-community-3.5.8-unix.tar.gz?x-amz-security-token=IQoJb3JpZ2luX2VjEBgaCXVzLWVhc3QtMSJIMEYCIQC8JQ87qLW8MutNDC7kLf%2F8lCgTEeFw6XMHe0g6JGiLwQIhALrPjMot9j4eV1EiWsysYUamjICHutsaKG%2Fa%2B05ZJKD%2BKr0DCJD%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEQABoMMTI4OTE2Njc5MzMwIgyekSOUHQOH4V1oebkqkQMSnlGj83iqgQ1X%2Bb9lDsjjfh%2FGggoIuvkn8RO9Ur3Ws24VznIHWrxQTECnTtQfsjhruOUbGJKv%2FlKBy9VU0aLu0zdrNcxeWZedOW09We0xVS4QTBipwW4i0UubWw%2FuDp1vAKPc1wLIq3vuvgflB4sXmTgvkQ%2FcT2%2BoIrvflqmSQ%2Fr9SB9Cqj9iACjxNQZrLs3qv2WgWxUNSsVjJYGXUx1yzx0ckCtxKYZ%2BzVBvqGuG1yULIodkGo4Kfbk5bh7s69gk8N4Gli7cQvYc9ajSFGg5IHXJU7%2BvRWeekX%2F2o7JlCRQogSNlW8kvv7o9ioD6Uj1mkOnR6rMsEv4Xo%2B2buKg7LqaPobmZwyjGnMBvZdndLXq37lAT7%2BP1i%2BVNCC7rak4Aqb3HtFMDZ%2F0nmdAitcKDWG1Am1mnaXiL3s6MZQ88SoU8h5RK0k01M%2FBxU9ZPwfi%2Bm8OAC%2Bgh6QyP9f7CPqSdI%2Fy8BSthxARcwWxl2ZwhHtUu7jqFf601aTu0iof%2FP2tH9oxty4kdH%2BI64qo7JCr9%2BzDx4OT9BTrqAfGlw5dReXwyh%2BSnYxW%2BB42cGs2JDcrFohn6UGdG3BtI%2FsjAFymH0vkCtXfN3irUPPzOoFzx216v%2F4GFfGebIpWqr85hT2f%2F28fck2XPbXiv%2BXSeffQdc8UkSL7dMHbquZ%2BmdOtCNlMhOWpnov5J7aICj9uY4AR60kNUSW3N4nra3CXjNEJWy%2B8ft49e6lnR9iKlVFxdwoXb1YAEx4egpctFcffoiaIEk2GinHjShAQgApOZFgOLe%2FDC9%2BnIwhxL7rSUfP7Ox%2FbEJF%2Br6VNYJddoD6D8xF2OVo%2FxZzv4M6eyw6Squ5r6i4LM7g%3D%3D&AWSAccessKeyId=ASIAR4BAINKRGKUIBRUS&Expires=1605973931&Signature=gzC025ItqNNdXpCJkGsm%2FvQt2WU%3D -o neo4j.tar.gz\n",
"_____no_output_____"
],
[
"# decompress and rename\n!tar -xf neo4j.tar.gz # or --strip-components=1\n!mv neo4j-community-3.5.8 nj",
"_____no_output_____"
],
[
"# disable password, and start server\n!sed -i '/#dbms.security.auth_enabled/s/^#//g' nj/conf/neo4j.conf\n!nj/bin/neo4j start",
"_____no_output_____"
],
[
"# from neo4j import GraphDatabase\n# !pip install py2neo",
"_____no_output_____"
],
[
"from py2neo import Graph\ngraph = Graph(\"bolt://localhost:7687\", auth=(\"neo4j\", \"password\"))\ngraph.delete_all()",
"_____no_output_____"
],
[
"# define the entities of the graph (nodes)\nfrom py2neo import Node",
"_____no_output_____"
],
[
"laboratory = Node(\"Laboratory\", name=\"Laboratory 1\")\nlab1 = Node(\"Person\", name=\"Peter\", employee_ID=2)\nlab2 = Node(\"Person\", name=\"Susan\", employee_ID=4)\nsample1 = Node(\"Sample\", name=\"A-12213\", weight=45.7)\nsample2 = Node(\"Sample\", name=\"B-33443\", weight=48.0)\n\n# shared sample between two experiments\nsample3 = Node(\"Sample\", name=\"AB-33443\", weight=24.3)\nexperiment1 = Node(\"Experiment\", name=\"Screening-45\")\nexperiment2 = Node(\"Experiment\", name=\"Screening/w/Sol\")",
"_____no_output_____"
],
[
"graph.create(laboratory | lab1 | lab2 | sample1 | sample2 | experiment1 |\nexperiment2)",
"_____no_output_____"
],
[
"# Define the relationships of the graph (edges)\nfrom py2neo import Relationship",
"_____no_output_____"
],
[
"graph.create(Relationship(lab1, \"works in\", laboratory))\ngraph.create(Relationship(lab2, \"works in\", laboratory))\ngraph.create(Relationship(lab1, \"performs\", sample1))\ngraph.create(Relationship(lab2, \"performs\", sample2))\ngraph.create(Relationship(lab2, \"performs\", sample3))\ngraph.create(Relationship(sample1, \"partof\", experiment1))\ngraph.create(Relationship(sample2, \"partof\", experiment2))\ngraph.create(Relationship(sample3, \"partof\", experiment2))\ngraph.create(Relationship(sample3, \"partof\", experiment1))",
"_____no_output_____"
],
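[
"# Hedged example (assumption, not in the original notebook): read the graph back with\n# a Cypher query; py2neo's graph.run returns an iterable cursor of records.\nfor record in graph.run(\"MATCH (p:Person)-[r]->(s:Sample) RETURN p.name AS person, s.name AS sample\"):\n print(record['person'], '->', record['sample'])",
"_____no_output_____"
],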
[
"import neo4jupyter\nneo4jupyter.init_notebook_mode()\nneo4jupyter.draw(graph)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0430ce18d767c30ed75cc55228c27e587512339 | 85,980 | ipynb | Jupyter Notebook | site/en/guide/data.ipynb | zyberg2091/docs | eb1a63871415df766002b9ef2ada097f421d1bf9 | [
"Apache-2.0"
] | 7 | 2021-05-08T18:25:43.000Z | 2021-09-30T13:41:26.000Z | site/en/guide/data.ipynb | yanp/docs | 8bf42f95cef824938df10b9c2f2d1fa22f952789 | [
"Apache-2.0"
] | null | null | null | site/en/guide/data.ipynb | yanp/docs | 8bf42f95cef824938df10b9c2f2d1fa22f952789 | [
"Apache-2.0"
] | 2 | 2021-05-08T18:53:53.000Z | 2021-05-08T19:32:30.000Z | 28.978766 | 602 | 0.511793 | [
[
[
"##### Copyright 2018 The TensorFlow Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\"); { display-mode: \"form\" }\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# tf.data: Build TensorFlow input pipelines",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/data\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/data.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/guide/data.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/data.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"The `tf.data` API enables you to build complex input pipelines from simple,\nreusable pieces. For example, the pipeline for an image model might aggregate\ndata from files in a distributed file system, apply random perturbations to each\nimage, and merge randomly selected images into a batch for training. The\npipeline for a text model might involve extracting symbols from raw text data,\nconverting them to embedding identifiers with a lookup table, and batching\ntogether sequences of different lengths. The `tf.data` API makes it possible to\nhandle large amounts of data, read from different data formats, and perform\ncomplex transformations.\n\nThe `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a\nsequence of elements, in which each element consists of one or more components.\nFor example, in an image pipeline, an element might be a single training\nexample, with a pair of tensor components representing the image and its label.\n\nThere are two distinct ways to create a dataset:\n\n* A data **source** constructs a `Dataset` from data stored in memory or in\n one or more files.\n\n* A data **transformation** constructs a dataset from one or more\n `tf.data.Dataset` objects.\n",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf",
"_____no_output_____"
],
[
"import pathlib\nimport os\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\n\nnp.set_printoptions(precision=4)",
"_____no_output_____"
]
],
[
[
"## Basic mechanics\n<a id=\"basic-mechanics\"/>\n\nTo create an input pipeline, you must start with a data *source*. For example,\nto construct a `Dataset` from data in memory, you can use\n`tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`.\nAlternatively, if your input data is stored in a file in the recommended\nTFRecord format, you can use `tf.data.TFRecordDataset()`.\n\nOnce you have a `Dataset` object, you can *transform* it into a new `Dataset` by\nchaining method calls on the `tf.data.Dataset` object. For example, you can\napply per-element transformations such as `Dataset.map()`, and multi-element\ntransformations such as `Dataset.batch()`. See the documentation for\n`tf.data.Dataset` for a complete list of transformations.\n\nThe `Dataset` object is a Python iterable. This makes it possible to consume its\nelements using a for loop:",
"_____no_output_____"
]
],
[
[
"dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])\ndataset",
"_____no_output_____"
],
[
"for elem in dataset:\n print(elem.numpy())",
"_____no_output_____"
]
],
[
[
"Or by explicitly creating a Python iterator using `iter` and consuming its\nelements using `next`:",
"_____no_output_____"
]
],
[
[
"it = iter(dataset)\n\nprint(next(it).numpy())",
"_____no_output_____"
]
],
[
[
"Alternatively, dataset elements can be consumed using the `reduce`\ntransformation, which reduces all elements to produce a single result. The\nfollowing example illustrates how to use the `reduce` transformation to compute\nthe sum of a dataset of integers.",
"_____no_output_____"
]
],
[
[
"print(dataset.reduce(0, lambda state, value: state + value).numpy())",
"_____no_output_____"
]
],
[
[
"<!-- TODO(jsimsa): Talk about `tf.function` support. -->\n\n<a id=\"dataset_structure\"></a>\n### Dataset structure\n\nA dataset contains elements that each have the same (nested) structure and the\nindividual components of the structure can be of any type representable by\n`tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`, `tf.RaggedTensor`,\n`tf.TensorArray`, or `tf.data.Dataset`.\n\nThe `Dataset.element_spec` property allows you to inspect the type of each\nelement component. The property returns a *nested structure* of `tf.TypeSpec`\nobjects, matching the structure of the element, which may be a single component,\na tuple of components, or a nested tuple of components. For example:",
"_____no_output_____"
]
],
[
[
"dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10]))\n\ndataset1.element_spec",
"_____no_output_____"
],
[
"dataset2 = tf.data.Dataset.from_tensor_slices(\n (tf.random.uniform([4]),\n tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)))\n\ndataset2.element_spec",
"_____no_output_____"
],
[
"dataset3 = tf.data.Dataset.zip((dataset1, dataset2))\n\ndataset3.element_spec",
"_____no_output_____"
],
[
"# Dataset containing a sparse tensor.\ndataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4]))\n\ndataset4.element_spec",
"_____no_output_____"
],
[
"# Use value_type to see the type of value represented by the element spec\ndataset4.element_spec.value_type",
"_____no_output_____"
]
],
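[
[
"# Hedged extra example (assumption): a dataset can also hold ragged tensors, as the\n# list of supported component types above notes; the spec is then a RaggedTensorSpec.\ndataset5 = tf.data.Dataset.from_tensor_slices(tf.ragged.constant([[1, 2], [3], [4, 5, 6]]))\n\ndataset5.element_spec",
"_____no_output_____"
]
],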
[
[
"The `Dataset` transformations support datasets of any structure. When using the\n`Dataset.map()`, and `Dataset.filter()` transformations,\nwhich apply a function to each element, the element structure determines the\narguments of the function:",
"_____no_output_____"
]
],
[
[
"dataset1 = tf.data.Dataset.from_tensor_slices(\n tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32))\n\ndataset1",
"_____no_output_____"
],
[
"for z in dataset1:\n print(z.numpy())",
"_____no_output_____"
],
[
"dataset2 = tf.data.Dataset.from_tensor_slices(\n (tf.random.uniform([4]),\n tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)))\n\ndataset2",
"_____no_output_____"
],
[
"dataset3 = tf.data.Dataset.zip((dataset1, dataset2))\n\ndataset3",
"_____no_output_____"
],
[
"for a, (b,c) in dataset3:\n print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c))",
"_____no_output_____"
]
],
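[
[
"# Hedged sketch (not in the original guide): with a tuple-structured dataset, the\n# function passed to Dataset.map receives one argument per component.\nmapped = dataset2.map(lambda scalar, vector: (scalar * 10, vector))\n\nmapped.element_spec",
"_____no_output_____"
]
],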
[
[
"## Reading input data\n",
"_____no_output_____"
],
[
"### Consuming NumPy arrays\n\nSee [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples.\n\nIf all of your input data fits in memory, the simplest way to create a `Dataset`\nfrom them is to convert them to `tf.Tensor` objects and use\n`Dataset.from_tensor_slices()`.",
"_____no_output_____"
]
],
[
[
"train, test = tf.keras.datasets.fashion_mnist.load_data()",
"_____no_output_____"
],
[
"images, labels = train\nimages = images/255\n\ndataset = tf.data.Dataset.from_tensor_slices((images, labels))\ndataset",
"_____no_output_____"
]
],
[
[
"Note: The above code snippet will embed the `features` and `labels` arrays\nin your TensorFlow graph as `tf.constant()` operations. This works well for a\nsmall dataset, but wastes memory---because the contents of the array will be\ncopied multiple times---and can run into the 2GB limit for the `tf.GraphDef`\nprotocol buffer.",
"_____no_output_____"
],
[
"### Consuming Python generators\n\nAnother common data source that can easily be ingested as a `tf.data.Dataset` is the python generator.\n\nCaution: While this is a convienient approach it has limited portability and scalibility. It must run in the same python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).",
"_____no_output_____"
]
],
[
[
"def count(stop):\n i = 0\n while i<stop:\n yield i\n i += 1",
"_____no_output_____"
],
[
"for n in count(5):\n print(n)",
"_____no_output_____"
]
],
[
[
"The `Dataset.from_generator` constructor converts the python generator to a fully functional `tf.data.Dataset`.\n\nThe constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments.\n\nThe `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`.",
"_____no_output_____"
]
],
[
[
"ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), )",
"_____no_output_____"
],
[
"for count_batch in ds_counter.repeat().batch(10).take(10):\n print(count_batch.numpy())",
"_____no_output_____"
]
],
[
[
"The `output_shapes` argument is not *required* but is highly recomended as many tensorflow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`.\n\nIt's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods.\n\nHere is an example generator that demonstrates both aspects, it returns tuples of arrays, where the second array is a vector with unknown length.",
"_____no_output_____"
]
],
[
[
"def gen_series():\n i = 0\n while True:\n size = np.random.randint(0, 10)\n yield i, np.random.normal(size=(size,))\n i += 1",
"_____no_output_____"
],
[
"for i, series in gen_series():\n print(i, \":\", str(series))\n if i > 5:\n break",
"_____no_output_____"
]
],
[
[
"The first output is an `int32` the second is a `float32`.\n\nThe first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)` ",
"_____no_output_____"
]
],
[
[
"ds_series = tf.data.Dataset.from_generator(\n gen_series, \n output_types=(tf.int32, tf.float32), \n output_shapes=((), (None,)))\n\nds_series",
"_____no_output_____"
]
],
[
[
"Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`.",
"_____no_output_____"
]
],
[
[
"ds_series_batch = ds_series.shuffle(20).padded_batch(10)\n\nids, sequence_batch = next(iter(ds_series_batch))\nprint(ids.numpy())\nprint()\nprint(sequence_batch.numpy())",
"_____no_output_____"
]
],
[
[
"For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.\n\nFirst download the data:",
"_____no_output_____"
]
],
[
[
"flowers = tf.keras.utils.get_file(\n 'flower_photos',\n 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',\n untar=True)",
"_____no_output_____"
]
],
[
[
"Create the `image.ImageDataGenerator`",
"_____no_output_____"
]
],
[
[
"img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20)",
"_____no_output_____"
],
[
"images, labels = next(img_gen.flow_from_directory(flowers))",
"_____no_output_____"
],
[
"print(images.dtype, images.shape)\nprint(labels.dtype, labels.shape)",
"_____no_output_____"
],
[
"ds = tf.data.Dataset.from_generator(\n lambda: img_gen.flow_from_directory(flowers), \n output_types=(tf.float32, tf.float32), \n output_shapes=([32,256,256,3], [32,5])\n)\n\nds.element_spec",
"_____no_output_____"
],
[
"for images, label in ds.take(1):\n print('images.shape: ', images.shape)\n print('labels.shape: ', labels.shape)\n",
"_____no_output_____"
]
],
[
[
"### Consuming TFRecord data\n\nSee [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example.\n\nThe `tf.data` API supports a variety of file formats so that you can process\nlarge datasets that do not fit in memory. For example, the TFRecord file format\nis a simple record-oriented binary format that many TensorFlow applications use\nfor training data. The `tf.data.TFRecordDataset` class enables you to\nstream over the contents of one or more TFRecord files as part of an input\npipeline.",
"_____no_output_____"
],
[
"Here is an example using the test file from the French Street Name Signs (FSNS).",
"_____no_output_____"
]
],
[
[
"# Creates a dataset that reads all of the examples from two files.\nfsns_test_file = tf.keras.utils.get_file(\"fsns.tfrec\", \"https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001\")",
"_____no_output_____"
]
],
[
[
"The `filenames` argument to the `TFRecordDataset` initializer can either be a\nstring, a list of strings, or a `tf.Tensor` of strings. Therefore if you have\ntwo sets of files for training and validation purposes, you can create a factory\nmethod that produces the dataset, taking filenames as an input argument:\n",
"_____no_output_____"
]
],
[
[
"dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file])\ndataset",
"_____no_output_____"
]
],
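[
[
"# Hedged sketch (not in the original guide): the factory-method pattern described\n# above, one function that builds a TFRecord dataset for whichever split you pass in.\ndef make_tfrecord_dataset(filenames):\n return tf.data.TFRecordDataset(filenames=filenames)\n\ntrain_dataset = make_tfrecord_dataset([fsns_test_file])\ntrain_dataset",
"_____no_output_____"
]
],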
[
[
"Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected:",
"_____no_output_____"
]
],
[
[
"raw_example = next(iter(dataset))\nparsed = tf.train.Example.FromString(raw_example.numpy())\n\nparsed.features.feature['image/text']",
"_____no_output_____"
]
],
[
[
"### Consuming text data\n\nSee [Loading Text](../tutorials/load_data/text.ipynb) for an end to end example.\n\nMany datasets are distributed as one or more text files. The\n`tf.data.TextLineDataset` provides an easy way to extract lines from one or more\ntext files. Given one or more filenames, a `TextLineDataset` will produce one\nstring-valued element per line of those files.",
"_____no_output_____"
]
],
[
[
"directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'\nfile_names = ['cowper.txt', 'derby.txt', 'butler.txt']\n\nfile_paths = [\n tf.keras.utils.get_file(file_name, directory_url + file_name)\n for file_name in file_names\n]",
"_____no_output_____"
],
[
"dataset = tf.data.TextLineDataset(file_paths)",
"_____no_output_____"
]
],
[
[
"Here are the first few lines of the first file:",
"_____no_output_____"
]
],
[
[
"for line in dataset.take(5):\n print(line.numpy())",
"_____no_output_____"
]
],
[
[
"To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. Here are the first, second and third lines from each translation:",
"_____no_output_____"
]
],
[
[
"files_ds = tf.data.Dataset.from_tensor_slices(file_paths)\nlines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3)\n\nfor i, line in enumerate(lines_ds.take(9)):\n if i % 3 == 0:\n print()\n print(line.numpy())",
"_____no_output_____"
]
],
[
[
"By default, a `TextLineDataset` yields *every* line of each file, which may\nnot be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or\n`Dataset.filter()` transformations. Here, you skip the first line, then filter to\nfind only survivors.",
"_____no_output_____"
]
],
[
[
"titanic_file = tf.keras.utils.get_file(\"train.csv\", \"https://storage.googleapis.com/tf-datasets/titanic/train.csv\")\ntitanic_lines = tf.data.TextLineDataset(titanic_file)",
"_____no_output_____"
],
[
"for line in titanic_lines.take(10):\n print(line.numpy())",
"_____no_output_____"
],
[
"def survived(line):\n return tf.not_equal(tf.strings.substr(line, 0, 1), \"0\")\n\nsurvivors = titanic_lines.skip(1).filter(survived)",
"_____no_output_____"
],
[
"for line in survivors.take(10):\n print(line.numpy())",
"_____no_output_____"
]
],
[
[
"### Consuming CSV data",
"_____no_output_____"
],
[
"See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples. \n\nThe CSV file format is a popular format for storing tabular data in plain text.\n\nFor example:",
"_____no_output_____"
]
],
[
[
"titanic_file = tf.keras.utils.get_file(\"train.csv\", \"https://storage.googleapis.com/tf-datasets/titanic/train.csv\")",
"_____no_output_____"
],
[
"df = pd.read_csv(titanic_file)\ndf.head()",
"_____no_output_____"
]
],
[
[
"If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported:",
"_____no_output_____"
]
],
[
[
"titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df))\n\nfor feature_batch in titanic_slices.take(1):\n for key, value in feature_batch.items():\n print(\" {!r:20s}: {}\".format(key, value))",
"_____no_output_____"
]
],
[
[
"A more scalable approach is to load from disk as necessary. \n\nThe `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).\n\nThe `experimental.make_csv_dataset` function is the high level interface for reading sets of csv files. It supports column type inference and many other features, like batching and shuffling, to make usage simple.",
"_____no_output_____"
]
],
[
[
"titanic_batches = tf.data.experimental.make_csv_dataset(\n titanic_file, batch_size=4,\n label_name=\"survived\")",
"_____no_output_____"
],
[
"for feature_batch, label_batch in titanic_batches.take(1):\n print(\"'survived': {}\".format(label_batch))\n print(\"features:\")\n for key, value in feature_batch.items():\n print(\" {!r:20s}: {}\".format(key, value))",
"_____no_output_____"
]
],
[
[
"You can use the `select_columns` argument if you only need a subset of columns.",
"_____no_output_____"
]
],
[
[
"titanic_batches = tf.data.experimental.make_csv_dataset(\n titanic_file, batch_size=4,\n label_name=\"survived\", select_columns=['class', 'fare', 'survived'])",
"_____no_output_____"
],
[
"for feature_batch, label_batch in titanic_batches.take(1):\n print(\"'survived': {}\".format(label_batch))\n for key, value in feature_batch.items():\n print(\" {!r:20s}: {}\".format(key, value))",
"_____no_output_____"
]
],
[
[
"There is also a lower-level `experimental.CsvDataset` class which provides finer grained control. It does not support column type inference. Instead you must specify the type of each column. ",
"_____no_output_____"
]
],
[
[
"titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string] \ndataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True)\n\nfor line in dataset.take(10):\n print([item.numpy() for item in line])",
"_____no_output_____"
]
],
[
[
"If some columns are empty, this low-level interface allows you to provide default values instead of column types.",
"_____no_output_____"
]
],
[
[
"%%writefile missing.csv\n1,2,3,4\n,2,3,4\n1,,3,4\n1,2,,4\n1,2,3,\n,,,",
"_____no_output_____"
],
[
"# Creates a dataset that reads all of the records from two CSV files, each with\n# four float columns which may have missing values.\n\nrecord_defaults = [999,999,999,999]\ndataset = tf.data.experimental.CsvDataset(\"missing.csv\", record_defaults)\ndataset = dataset.map(lambda *items: tf.stack(items))\ndataset",
"_____no_output_____"
],
[
"for line in dataset:\n print(line.numpy())",
"_____no_output_____"
]
],
[
[
"By default, a `CsvDataset` yields *every* column of *every* line of the file,\nwhich may not be desirable, for example if the file starts with a header line\nthat should be ignored, or if some columns are not required in the input.\nThese lines and fields can be removed with the `header` and `select_cols`\narguments respectively.",
"_____no_output_____"
]
],
[
[
"# Creates a dataset that reads all of the records from two CSV files with\n# headers, extracting float data from columns 2 and 4.\nrecord_defaults = [999, 999] # Only provide defaults for the selected columns\ndataset = tf.data.experimental.CsvDataset(\"missing.csv\", record_defaults, select_cols=[1, 3])\ndataset = dataset.map(lambda *items: tf.stack(items))\ndataset",
"_____no_output_____"
],
[
"for line in dataset:\n print(line.numpy())",
"_____no_output_____"
]
],
[
[
"### Consuming sets of files",
"_____no_output_____"
],
[
"There are many datasets distributed as a set of files, where each file is an example.",
"_____no_output_____"
]
],
[
[
"flowers_root = tf.keras.utils.get_file(\n 'flower_photos',\n 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',\n untar=True)\nflowers_root = pathlib.Path(flowers_root)\n",
"_____no_output_____"
]
],
[
[
"Note: these images are licensed CC-BY, see LICENSE.txt for details.",
"_____no_output_____"
],
[
"The root directory contains a directory for each class:",
"_____no_output_____"
]
],
[
[
"for item in flowers_root.glob(\"*\"):\n print(item.name)",
"_____no_output_____"
]
],
[
[
"The files in each class directory are examples:",
"_____no_output_____"
]
],
[
[
"list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))\n\nfor f in list_ds.take(5):\n print(f.numpy())",
"_____no_output_____"
]
],
[
[
"Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs:",
"_____no_output_____"
]
],
[
[
"def process_path(file_path):\n label = tf.strings.split(file_path, os.sep)[-2]\n return tf.io.read_file(file_path), label\n\nlabeled_ds = list_ds.map(process_path)",
"_____no_output_____"
],
[
"for image_raw, label_text in labeled_ds.take(1):\n print(repr(image_raw.numpy()[:100]))\n print()\n print(label_text.numpy())",
"_____no_output_____"
]
],
[
[
"<!--\nTODO(mrry): Add this section.\n\n### Handling text data with unusual sizes\n-->\n\n## Batching dataset elements\n",
"_____no_output_____"
],
[
"### Simple batching\n\nThe simplest form of batching stacks `n` consecutive elements of a dataset into\na single element. The `Dataset.batch()` transformation does exactly this, with\nthe same constraints as the `tf.stack()` operator, applied to each component\nof the elements: i.e. for each component *i*, all elements must have a tensor\nof the exact same shape.",
"_____no_output_____"
]
],
[
[
"inc_dataset = tf.data.Dataset.range(100)\ndec_dataset = tf.data.Dataset.range(0, -100, -1)\ndataset = tf.data.Dataset.zip((inc_dataset, dec_dataset))\nbatched_dataset = dataset.batch(4)\n\nfor batch in batched_dataset.take(4):\n print([arr.numpy() for arr in batch])",
"_____no_output_____"
]
],
[
[
"While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape:",
"_____no_output_____"
]
],
[
[
"batched_dataset",
"_____no_output_____"
]
],
[
[
"Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation:",
"_____no_output_____"
]
],
[
[
"batched_dataset = dataset.batch(7, drop_remainder=True)\nbatched_dataset",
"_____no_output_____"
]
],
[
[
"### Batching tensors with padding\n\nThe above recipe works for tensors that all have the same size. However, many\nmodels (e.g. sequence models) work with input data that can have varying size\n(e.g. sequences of different lengths). To handle this case, the\n`Dataset.padded_batch` transformation enables you to batch tensors of\ndifferent shape by specifying one or more dimensions in which they may be\npadded.",
"_____no_output_____"
]
],
[
[
"dataset = tf.data.Dataset.range(100)\ndataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x))\ndataset = dataset.padded_batch(4, padded_shapes=(None,))\n\nfor batch in dataset.take(2):\n print(batch.numpy())\n print()\n",
"_____no_output_____"
]
],
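[
[
"# Hedged variation (assumption): padded_batch also accepts a padding_values argument\n# to override the default pad value of 0, as the next section notes; -1 marks the\n# padded positions here. The dtype must match the dataset's int64 elements.\ndataset = tf.data.Dataset.range(100)\ndataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x))\ndataset = dataset.padded_batch(4, padded_shapes=(None,), padding_values=tf.constant(-1, dtype=tf.int64))\n\nfor batch in dataset.take(2):\n print(batch.numpy())\n print()",
"_____no_output_____"
]
],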
[
[
"The `Dataset.padded_batch` transformation allows you to set different padding\nfor each dimension of each component, and it may be variable-length (signified\nby `None` in the example above) or constant-length. It is also possible to\noverride the padding value, which defaults to 0.\n\n<!--\nTODO(mrry): Add this section.\n\n### Dense ragged -> tf.SparseTensor\n-->\n",
"_____no_output_____"
],
[
"## Training workflows\n",
"_____no_output_____"
],
[
"### Processing multiple epochs\n\nThe `tf.data` API offers two main ways to process multiple epochs of the same\ndata.\n\nThe simplest way to iterate over a dataset in multiple epochs is to use the\n`Dataset.repeat()` transformation. First, create a dataset of titanic data:",
"_____no_output_____"
]
],
[
[
"titanic_file = tf.keras.utils.get_file(\"train.csv\", \"https://storage.googleapis.com/tf-datasets/titanic/train.csv\")\ntitanic_lines = tf.data.TextLineDataset(titanic_file)",
"_____no_output_____"
],
[
"def plot_batch_sizes(ds):\n batch_sizes = [batch.shape[0] for batch in ds]\n plt.bar(range(len(batch_sizes)), batch_sizes)\n plt.xlabel('Batch number')\n plt.ylabel('Batch size')",
"_____no_output_____"
]
],
[
[
"Applying the `Dataset.repeat()` transformation with no arguments will repeat\nthe input indefinitely.\n\nThe `Dataset.repeat` transformation concatenates its\narguments without signaling the end of one epoch and the beginning of the next\nepoch. Because of this a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries:",
"_____no_output_____"
]
],
[
[
"titanic_batches = titanic_lines.repeat(3).batch(128)\nplot_batch_sizes(titanic_batches)",
"_____no_output_____"
]
],
[
[
"If you need clear epoch separation, put `Dataset.batch` before the repeat:",
"_____no_output_____"
]
],
[
[
"titanic_batches = titanic_lines.batch(128).repeat(3)\n\nplot_batch_sizes(titanic_batches)",
"_____no_output_____"
]
],
[
[
"If you would like to perform a custom computation (e.g. to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch:",
"_____no_output_____"
]
],
[
[
"epochs = 3\ndataset = titanic_lines.batch(128)\n\nfor epoch in range(epochs):\n for batch in dataset:\n print(batch.shape)\n print(\"End of epoch: \", epoch)",
"_____no_output_____"
]
],
[
[
"### Randomly shuffling input data\n\nThe `Dataset.shuffle()` transformation maintains a fixed-size\nbuffer and chooses the next element uniformly at random from that buffer.\n\nNote: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem.",
"_____no_output_____"
],
[
"Add an index to the dataset so you can see the effect:",
"_____no_output_____"
]
],
[
[
"lines = tf.data.TextLineDataset(titanic_file)\ncounter = tf.data.experimental.Counter()\n\ndataset = tf.data.Dataset.zip((counter, lines))\ndataset = dataset.shuffle(buffer_size=100)\ndataset = dataset.batch(20)\ndataset",
"_____no_output_____"
]
],
[
[
"Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120.",
"_____no_output_____"
]
],
[
[
"n,line_batch = next(iter(dataset))\nprint(n.numpy())",
"_____no_output_____"
]
],
[
[
"As with `Dataset.batch` the order relative to `Dataset.repeat` matters.\n\n`Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. So a shuffle placed before a repeat will show every element of one epoch before moving to the next: ",
"_____no_output_____"
]
],
[
[
"dataset = tf.data.Dataset.zip((counter, lines))\nshuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2)\n\nprint(\"Here are the item ID's near the epoch boundary:\\n\")\nfor n, line_batch in shuffled.skip(60).take(5):\n print(n.numpy())",
"_____no_output_____"
],
[
"shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled]\nplt.plot(shuffle_repeat, label=\"shuffle().repeat()\")\nplt.ylabel(\"Mean item ID\")\nplt.legend()",
"_____no_output_____"
]
],
[
[
"But a repeat before a shuffle mixes the epoch boundaries together:",
"_____no_output_____"
]
],
[
[
"dataset = tf.data.Dataset.zip((counter, lines))\nshuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10)\n\nprint(\"Here are the item ID's near the epoch boundary:\\n\")\nfor n, line_batch in shuffled.skip(55).take(15):\n print(n.numpy())",
"_____no_output_____"
],
[
"repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled]\n\nplt.plot(shuffle_repeat, label=\"shuffle().repeat()\")\nplt.plot(repeat_shuffle, label=\"repeat().shuffle()\")\nplt.ylabel(\"Mean item ID\")\nplt.legend()",
"_____no_output_____"
]
],
[
[
"## Preprocessing data\n\nThe `Dataset.map(f)` transformation produces a new dataset by applying a given\nfunction `f` to each element of the input dataset. It is based on the\n[`map()`](https://en.wikipedia.org/wiki/Map_\\(higher-order_function\\)) function\nthat is commonly applied to lists (and other structures) in functional\nprogramming languages. The function `f` takes the `tf.Tensor` objects that\nrepresent a single element in the input, and returns the `tf.Tensor` objects\nthat will represent a single element in the new dataset. Its implementation uses\nstandard TensorFlow operations to transform one element into another.\n\nThis section covers common examples of how to use `Dataset.map()`.\n",
"_____no_output_____"
],
[
"### Decoding image data and resizing it\n\n<!-- TODO(markdaoust): link to image augmentation when it exists -->\nWhen training a neural network on real-world image data, it is often necessary\nto convert images of different sizes to a common size, so that they may be\nbatched into a fixed size.\n\nRebuild the flower filenames dataset:",
"_____no_output_____"
]
],
[
[
"list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))",
"_____no_output_____"
]
],
[
[
"Write a function that manipulates the dataset elements.",
"_____no_output_____"
]
],
[
[
"# Reads an image from a file, decodes it into a dense tensor, and resizes it\n# to a fixed shape.\ndef parse_image(filename):\n parts = tf.strings.split(filename, os.sep)\n label = parts[-2]\n\n image = tf.io.read_file(filename)\n image = tf.image.decode_jpeg(image)\n image = tf.image.convert_image_dtype(image, tf.float32)\n image = tf.image.resize(image, [128, 128])\n return image, label",
"_____no_output_____"
]
],
[
[
"Test that it works.",
"_____no_output_____"
]
],
[
[
"file_path = next(iter(list_ds))\nimage, label = parse_image(file_path)\n\ndef show(image, label):\n plt.figure()\n plt.imshow(image)\n plt.title(label.numpy().decode('utf-8'))\n plt.axis('off')\n\nshow(image, label)",
"_____no_output_____"
]
],
[
[
"Map it over the dataset.",
"_____no_output_____"
]
],
[
[
"images_ds = list_ds.map(parse_image)\n\nfor image, label in images_ds.take(2):\n show(image, label)",
"_____no_output_____"
]
],
[
[
"### Applying arbitrary Python logic\n\nFor performance reasons, use TensorFlow operations for\npreprocessing your data whenever possible. However, it is sometimes useful to\ncall external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation.",
"_____no_output_____"
],
[
"For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation. \n\nNote: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`.\n\nTo demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead:",
"_____no_output_____"
]
],
[
[
"import scipy.ndimage as ndimage\n\ndef random_rotate_image(image):\n image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False)\n return image",
"_____no_output_____"
],
[
"image, label = next(iter(images_ds))\nimage = random_rotate_image(image)\nshow(image, label)",
"_____no_output_____"
]
],
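[
[
"# Hedged alternative (assumption: the tensorflow-addons package is installed):\n# tfa.image.rotate is graph-compatible, so it can run inside Dataset.map without\n# tf.py_function; the rotation angle is given in radians.\nimport tensorflow_addons as tfa\n\nimage, label = next(iter(images_ds))\nrotated = tfa.image.rotate(image, tf.constant(np.pi/8))\nshow(rotated, label)",
"_____no_output_____"
]
],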
[
[
"To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`, you need to describe the return shapes and types when you apply the function:",
"_____no_output_____"
]
],
[
[
"def tf_random_rotate_image(image, label):\n im_shape = image.shape\n [image,] = tf.py_function(random_rotate_image, [image], [tf.float32])\n image.set_shape(im_shape)\n return image, label",
"_____no_output_____"
],
[
"rot_ds = images_ds.map(tf_random_rotate_image)\n\nfor image, label in rot_ds.take(2):\n show(image, label)",
"_____no_output_____"
]
],
[
[
"### Parsing `tf.Example` protocol buffer messages\n\nMany input pipelines extract `tf.train.Example` protocol buffer messages from a\nTFRecord format. Each `tf.train.Example` record contains one or more \"features\",\nand the input pipeline typically converts these features into tensors.",
"_____no_output_____"
]
],
[
[
"fsns_test_file = tf.keras.utils.get_file(\"fsns.tfrec\", \"https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001\")\ndataset = tf.data.TFRecordDataset(filenames = [fsns_test_file])\ndataset",
"_____no_output_____"
]
],
[
[
"You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data:",
"_____no_output_____"
]
],
[
[
"raw_example = next(iter(dataset))\nparsed = tf.train.Example.FromString(raw_example.numpy())\n\nfeature = parsed.features.feature\nraw_img = feature['image/encoded'].bytes_list.value[0]\nimg = tf.image.decode_png(raw_img)\nplt.imshow(img)\nplt.axis('off')\n_ = plt.title(feature[\"image/text\"].bytes_list.value[0])",
"_____no_output_____"
],
[
"raw_example = next(iter(dataset))",
"_____no_output_____"
],
[
"def tf_parse(eg):\n example = tf.io.parse_example(\n eg[tf.newaxis], {\n 'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string),\n 'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string)\n })\n return example['image/encoded'][0], example['image/text'][0]",
"_____no_output_____"
],
[
"img, txt = tf_parse(raw_example)\nprint(txt.numpy())\nprint(repr(img.numpy()[:20]), \"...\")",
"_____no_output_____"
],
[
"decoded = dataset.map(tf_parse)\ndecoded",
"_____no_output_____"
],
[
"image_batch, text_batch = next(iter(decoded.batch(10)))\nimage_batch.shape",
"_____no_output_____"
]
],
[
[
"<a id=\"time_series_windowing\"></a>\n\n### Time series windowing",
"_____no_output_____"
],
[
"For an end to end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb).",
"_____no_output_____"
],
[
"Time series data is often organized with the time axis intact.\n\nUse a simple `Dataset.range` to demonstrate:",
"_____no_output_____"
]
],
[
[
"range_ds = tf.data.Dataset.range(100000)",
"_____no_output_____"
]
],
[
[
"Typically, models based on this sort of data will want a contiguous time slice. \n\nThe simplest approach would be to batch the data:",
"_____no_output_____"
],
[
"#### Using `batch`",
"_____no_output_____"
]
],
[
[
"batches = range_ds.batch(10, drop_remainder=True)\n\nfor batch in batches.take(5):\n print(batch.numpy())",
"_____no_output_____"
]
],
[
[
"Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other:",
"_____no_output_____"
]
],
[
[
"def dense_1_step(batch):\n # Shift features and labels one step relative to each other.\n return batch[:-1], batch[1:]\n\npredict_dense_1_step = batches.map(dense_1_step)\n\nfor features, label in predict_dense_1_step.take(3):\n print(features.numpy(), \" => \", label.numpy())",
"_____no_output_____"
]
],
[
[
"To predict a whole window instead of a fixed offset you can split the batches into two parts:",
"_____no_output_____"
]
],
[
[
"batches = range_ds.batch(15, drop_remainder=True)\n\ndef label_next_5_steps(batch):\n return (batch[:-5], # Take the first 5 steps\n batch[-5:]) # take the remainder\n\npredict_5_steps = batches.map(label_next_5_steps)\n\nfor features, label in predict_5_steps.take(3):\n print(features.numpy(), \" => \", label.numpy())",
"_____no_output_____"
]
],
[
[
"To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`:",
"_____no_output_____"
]
],
[
[
"feature_length = 10\nlabel_length = 3\n\nfeatures = range_ds.batch(feature_length, drop_remainder=True)\nlabels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length])\n\npredicted_steps = tf.data.Dataset.zip((features, labels))\n\nfor features, label in predicted_steps.take(5):\n print(features.numpy(), \" => \", label.numpy())",
"_____no_output_____"
]
],
[
[
"#### Using `window`",
"_____no_output_____"
],
[
"While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](#dataset_structure) for details.",
"_____no_output_____"
]
],
[
[
"window_size = 5\n\nwindows = range_ds.window(window_size, shift=1)\nfor sub_ds in windows.take(5):\n print(sub_ds)",
"_____no_output_____"
]
],
[
[
"The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset:",
"_____no_output_____"
]
],
[
[
" for x in windows.flat_map(lambda x: x).take(30):\n print(x.numpy(), end=' ')",
"_____no_output_____"
]
],
[
[
"In nearly all cases, you will want to `.batch` the dataset first:",
"_____no_output_____"
]
],
[
[
"def sub_to_batch(sub):\n return sub.batch(window_size, drop_remainder=True)\n\nfor example in windows.flat_map(sub_to_batch).take(5):\n print(example.numpy())",
"_____no_output_____"
]
],
[
[
"Now, you can see that the `shift` argument controls how much each window moves over.\n\nPutting this together you might write this function:",
"_____no_output_____"
]
],
[
[
"def make_window_dataset(ds, window_size=5, shift=1, stride=1):\n windows = ds.window(window_size, shift=shift, stride=stride)\n\n def sub_to_batch(sub):\n return sub.batch(window_size, drop_remainder=True)\n\n windows = windows.flat_map(sub_to_batch)\n return windows\n",
"_____no_output_____"
],
[
"ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3)\n\nfor example in ds.take(10):\n print(example.numpy())",
"_____no_output_____"
]
],
[
[
"Then it's easy to extract labels, as before:",
"_____no_output_____"
]
],
[
[
"dense_labels_ds = ds.map(dense_1_step)\n\nfor inputs,labels in dense_labels_ds.take(3):\n print(inputs.numpy(), \"=>\", labels.numpy())",
"_____no_output_____"
]
],
[
[
"### Resampling\n\nWhen working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.\n\nNote: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.\n",
"_____no_output_____"
]
],
[
[
"zip_path = tf.keras.utils.get_file(\n origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip',\n fname='creditcard.zip',\n extract=True)\n\ncsv_path = zip_path.replace('.zip', '.csv')",
"_____no_output_____"
],
[
"creditcard_ds = tf.data.experimental.make_csv_dataset(\n csv_path, batch_size=1024, label_name=\"Class\",\n # Set the column types: 30 floats and an int.\n column_defaults=[float()]*30+[int()])",
"_____no_output_____"
]
],
[
[
"Now, check the distribution of classes, it is highly skewed:",
"_____no_output_____"
]
],
[
[
"def count(counts, batch):\n features, labels = batch\n class_1 = labels == 1\n class_1 = tf.cast(class_1, tf.int32)\n\n class_0 = labels == 0\n class_0 = tf.cast(class_0, tf.int32)\n\n counts['class_0'] += tf.reduce_sum(class_0)\n counts['class_1'] += tf.reduce_sum(class_1)\n\n return counts",
"_____no_output_____"
],
[
"counts = creditcard_ds.take(10).reduce(\n initial_state={'class_0': 0, 'class_1': 0},\n reduce_func = count)\n\ncounts = np.array([counts['class_0'].numpy(),\n counts['class_1'].numpy()]).astype(np.float32)\n\nfractions = counts/counts.sum()\nprint(fractions)",
"_____no_output_____"
]
],
[
[
"A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow:",
"_____no_output_____"
],
[
"#### Datasets sampling",
"_____no_output_____"
],
[
"One approach to resampling a dataset is to use `sample_from_datasets`. This is more applicable when you have a separate `data.Dataset` for each class.\n\nHere, just use filter to generate them from the credit card fraud data:",
"_____no_output_____"
]
],
[
[
"negative_ds = (\n creditcard_ds\n .unbatch()\n .filter(lambda features, label: label==0)\n .repeat())\npositive_ds = (\n creditcard_ds\n .unbatch()\n .filter(lambda features, label: label==1)\n .repeat())",
"_____no_output_____"
],
[
"for features, label in positive_ds.batch(10).take(1):\n print(label.numpy())",
"_____no_output_____"
]
],
[
[
"To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each:",
"_____no_output_____"
]
],
[
[
"balanced_ds = tf.data.experimental.sample_from_datasets(\n [negative_ds, positive_ds], [0.5, 0.5]).batch(10)",
"_____no_output_____"
]
],
[
[
"Now the dataset produces examples of each class with 50/50 probability:",
"_____no_output_____"
]
],
[
[
"for features, labels in balanced_ds.take(10):\n print(labels.numpy())",
"_____no_output_____"
]
],
[
[
"#### Rejection resampling",
"_____no_output_____"
],
[
"One problem with the above `experimental.sample_from_datasets` approach is that\nit needs a separate `tf.data.Dataset` per class. Using `Dataset.filter`\nworks, but results in all the data being loaded twice.\n\nThe `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance.\n\n`data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing.\n\nThe elements of `creditcard_ds` are already `(features, label)` pairs. So the `class_func` just needs to return those labels:",
"_____no_output_____"
]
],
[
[
"def class_func(features, label):\n return label",
"_____no_output_____"
]
],
[
[
"The resampler also needs a target distribution, and optionally an initial distribution estimate:",
"_____no_output_____"
]
],
[
[
"resampler = tf.data.experimental.rejection_resample(\n class_func, target_dist=[0.5, 0.5], initial_dist=fractions)",
"_____no_output_____"
]
],
[
[
"The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler:",
"_____no_output_____"
]
],
[
[
"resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10)",
"_____no_output_____"
]
],
[
[
"The resampler returns creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels:",
"_____no_output_____"
]
],
[
[
"balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label)",
"_____no_output_____"
]
],
[
[
"Now the dataset produces examples of each class with 50/50 probability:",
"_____no_output_____"
]
],
[
[
"for features, labels in balanced_ds.take(10):\n print(labels.numpy())",
"_____no_output_____"
]
],
[
[
"## Iterator Checkpointing",
"_____no_output_____"
],
[
"Tensorflow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note however that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. \n\nTo include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.",
"_____no_output_____"
]
],
[
[
"range_ds = tf.data.Dataset.range(20)\n\niterator = iter(range_ds)\nckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator)\nmanager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3)\n\nprint([next(iterator).numpy() for _ in range(5)])\n\nsave_path = manager.save()\n\nprint([next(iterator).numpy() for _ in range(5)])\n\nckpt.restore(manager.latest_checkpoint)\n\nprint([next(iterator).numpy() for _ in range(5)])",
"_____no_output_____"
]
],
[
[
"Note: It is not possible to checkpoint an iterator which relies on external state such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state.",
"_____no_output_____"
],
[
"## Using tf.data with tf.keras",
"_____no_output_____"
],
[
"The `tf.keras` API simplifies many aspects of creating and executing machine\nlearning models. Its `.fit()` and `.evaluate()` and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup:",
"_____no_output_____"
]
],
[
[
"train, test = tf.keras.datasets.fashion_mnist.load_data()\n\nimages, labels = train\nimages = images/255.0\nlabels = labels.astype(np.int32)",
"_____no_output_____"
],
[
"fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))\nfmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)\n\nmodel = tf.keras.Sequential([\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(10)\n])\n\nmodel.compile(optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), \n metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
" Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`:",
"_____no_output_____"
]
],
[
[
"model.fit(fmnist_train_ds, epochs=2)",
"_____no_output_____"
]
],
[
[
"If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument:",
"_____no_output_____"
]
],
[
[
"model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20)",
"_____no_output_____"
]
],
[
[
"For evaluation you can pass the number of evaluation steps:",
"_____no_output_____"
]
],
[
[
"loss, accuracy = model.evaluate(fmnist_train_ds)\nprint(\"Loss :\", loss)\nprint(\"Accuracy :\", accuracy)",
"_____no_output_____"
]
],
[
[
"For long datasets, set the number of steps to evaluate:",
"_____no_output_____"
]
],
[
[
"loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10)\nprint(\"Loss :\", loss)\nprint(\"Accuracy :\", accuracy)",
"_____no_output_____"
]
],
[
[
"The labels are not required in when calling `Model.predict`. ",
"_____no_output_____"
]
],
[
[
"predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32)\nresult = model.predict(predict_ds, steps = 10)\nprint(result.shape)",
"_____no_output_____"
]
],
[
[
"But the labels are ignored if you do pass a dataset containing them:",
"_____no_output_____"
]
],
[
[
"result = model.predict(fmnist_train_ds, steps = 10)\nprint(result.shape)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0430fb379b297cc382d8afa7e770c25dd8dd764 | 5,913 | ipynb | Jupyter Notebook | Section 08 - Object Oriented Prog/Lec 72 - Special (Magic) Methods.ipynb | sansjha4900/Udemy-Python-Notes | 9d748006e926f42f5b0161a3c6c9a4a1e7e3ff7f | [
"MIT"
] | null | null | null | Section 08 - Object Oriented Prog/Lec 72 - Special (Magic) Methods.ipynb | sansjha4900/Udemy-Python-Notes | 9d748006e926f42f5b0161a3c6c9a4a1e7e3ff7f | [
"MIT"
] | null | null | null | Section 08 - Object Oriented Prog/Lec 72 - Special (Magic) Methods.ipynb | sansjha4900/Udemy-Python-Notes | 9d748006e926f42f5b0161a3c6c9a4a1e7e3ff7f | [
"MIT"
] | null | null | null | 20.674825 | 292 | 0.471503 | [
[
[
"mylist = [1,2,3]",
"_____no_output_____"
],
[
"len (mylist)",
"_____no_output_____"
],
[
"class Sample:\n pass",
"_____no_output_____"
],
[
"mysample = Sample()",
"_____no_output_____"
],
[
"len(mysample)",
"_____no_output_____"
],
[
"print (mysample)",
"<__main__.Sample object at 0x000001880AD420D0>\n"
]
],
[
[
"#### <b>As print, len, etc donot work directly for class:",
"_____no_output_____"
],
[
"<hr style=\"border:2px solid gray\"> </hr>",
"_____no_output_____"
],
[
"#### Special Methods:",
"_____no_output_____"
]
],
[
[
"class Book():\n \n def __init__ (self,title,author,pages):\n self.title = title\n self.author = author\n self.pages = pages\n \n def __str__ (self):\n return f\"{self.title} by {self.author} of {self.pages} pages.\"\n \n def __len__ (self):\n return self.pages\n \n def __del__ (self):\n print (\"Book is deleted.\")",
"_____no_output_____"
],
[
"mybook = Book(\"Python\", \"Jose\", 200)",
"_____no_output_____"
],
[
"print (mybook)",
"Python by Jose of 200 pages.\n"
],
[
"str (mybook)",
"_____no_output_____"
],
[
"len (mybook)",
"_____no_output_____"
],
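[
"# Hedged aside (not in the original notes): Book defines __str__ but not __repr__,\n# so repr() still falls back to Python's default object representation.\nrepr(mybook)",
"_____no_output_____"
],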
[
"del mybook",
"Book is deleted.\n"
],
[
"mybook",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d04322d2acdde8e545d7f7f6ae3732aef34a008c | 2,518 | ipynb | Jupyter Notebook | images/fedora/aicoe-tensorflow-jupyter-toolbox/AI CoE's TensorFlow Jupyter Notebook.ipynb | goern/toolbox | af58feffc99d3b95c53f9c858d2b8d5abfc1165b | [
"Apache-2.0"
] | null | null | null | images/fedora/aicoe-tensorflow-jupyter-toolbox/AI CoE's TensorFlow Jupyter Notebook.ipynb | goern/toolbox | af58feffc99d3b95c53f9c858d2b8d5abfc1165b | [
"Apache-2.0"
] | 1 | 2020-05-19T07:36:42.000Z | 2020-05-28T10:06:43.000Z | images/fedora/aicoe-tensorflow-jupyter-toolbox/AI CoE's TensorFlow Jupyter Notebook.ipynb | goern/toolbox | af58feffc99d3b95c53f9c858d2b8d5abfc1165b | [
"Apache-2.0"
] | null | null | null | 20.306452 | 198 | 0.508737 | [
[
[
"let's import and check for the version of TensorFlow...",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\n\n\ntf.__version__",
"_____no_output_____"
]
],
[
[
"Try to obtain information of AICoE TensorFlow builds.",
"_____no_output_____"
]
],
[
[
"import os \n\ntry:\n path = os.path.dirname(os.path.dirname(tf.__file__))\n build_info_path = os.path.join(path, 'tensorflow-' + tf.__version__ + '.dist-info', 'build_info.json')\n with open(build_info_path, 'r') as build_info_file:\n build_info = json.load(build_info_file)\n \n print(build_info)\nexcept Exception as e:\n print(e)",
"[Errno 2] No such file or directory: '/home/goern/.local/share/virtualenvs/aicoe-tensorflow-jupyter-toolbox-3i4NnwxE/lib/python3.6/site-packages/tensorflow-2.1.0.dist-info/build_info.json'\n"
]
],
[
[
"... and see if we got some GPU available",
"_____no_output_____"
]
],
[
[
"tf.config.list_physical_devices('GPU')\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0432af59a5b44827061fda1382e7ae7564246e4 | 4,924 | ipynb | Jupyter Notebook | T-RNA/Tarefa-1/Solucao-Tarefa1-RNA-simples.ipynb | EdTonatto/UFFS-2020.2-Inteligencia_Artificial | 5f6a8f0baef2af2c1474e07ce2657159be774d41 | [
"MIT"
] | null | null | null | T-RNA/Tarefa-1/Solucao-Tarefa1-RNA-simples.ipynb | EdTonatto/UFFS-2020.2-Inteligencia_Artificial | 5f6a8f0baef2af2c1474e07ce2657159be774d41 | [
"MIT"
] | null | null | null | T-RNA/Tarefa-1/Solucao-Tarefa1-RNA-simples.ipynb | EdTonatto/UFFS-2020.2-Inteligencia_Artificial | 5f6a8f0baef2af2c1474e07ce2657159be774d41 | [
"MIT"
] | null | null | null | 24.868687 | 225 | 0.459789 | [
[
[
"# Rede Neural Simples\n\n### Implementando uma RNA Simples\n\nO diagrama abaixo mostra uma rede simples. A combinação linear dos pesos, inputs e viés formam o input h, que então é passado pela função de ativação f(h), gerando o output final do perceptron, etiquetado como y.\n <img src='RNA-simples.png' /><br>\n\n<p style=\"text-align:center\"> <i> Diagrama de uma rede neural simples</i> </p>\n \n\nCírculos são unidades, caixas são operações. O que faz as redes neurais possíveis, é que a função de ativação, f(h) pode ser qualquer função, não apenas a função degrau.\n\n<p> Por exemplo, caso f(h)=h, o output será o mesmo que o input. Agora o output da rede é </p>\n\n<p style=\"text-align:center\"> $$h = \\frac 1n\\sum_{i=1}^n(w_i*x_i)+b$$ </p>\n\n<p> Essa equação deveria ser familiar para você, pois é a mesma do modelo de regressão linear!\nOutras funções de ativação comuns são a função logística (também chamada de sigmóide), tanh e a função softmax. Nós iremos trabalhar principalmente com a função sigmóide pelo resto dessa aula:</p>\n\n\n$$f(h) = sigmoid(h)=\\frac 1 {1+e^{-h}}$$\n \n\n",
"_____no_output_____"
],
[
"## Vamos implementar uma RNA de apenas um neurônio!\n\n#### Importando a biblioteca",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
]
],
[
[
"#### Função do cáculo da sigmóide",
"_____no_output_____"
]
],
[
[
"def sigmoid(x):\n return 1/(1+np.exp(-x))\n",
"_____no_output_____"
]
],
[
[
"#### Vetor dos valores de entrada",
"_____no_output_____"
]
],
[
[
"x = np.array([1.66, -0.22])\nb = 0.1\n",
"_____no_output_____"
]
],
[
[
"#### Pesos das ligações sinápticas",
"_____no_output_____"
]
],
[
[
"w = np.array([0.5, -0.3])",
"_____no_output_____"
]
],
[
[
"#### Calcule a combinação linear de entradas e pesos sinápticos",
"_____no_output_____"
]
],
[
[
"h = np.dot(x, w) + b",
"_____no_output_____"
]
],
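[
[
"A quick aside, added here for illustration (it is not part of the original exercise): the introduction notes that f(h) can be any function, not just the step function. The sketch below assumes only NumPy and recreates the inputs locally, then compares a step activation and tanh on the same h.",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\n# Recreate the inputs locally so this illustrative cell runs on its own\nx = np.array([1.66, -0.22])\nw = np.array([0.5, -0.3])\nb = 0.1\nh = np.dot(x, w) + b\n\ndef step(z):\n    # Heaviside step activation: 1 for non-negative input, else 0\n    return np.where(z >= 0, 1.0, 0.0)\n\nprint('step(h) =', step(h))    # hard 0/1 decision\nprint('tanh(h) =', np.tanh(h))  # smooth alternative in (-1, 1)",
"_____no_output_____"
]
],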
[
[
"#### Aplicado a função de ativação do neurônio",
"_____no_output_____"
]
],
[
[
"y = sigmoid(h)\nprint('A Saida da rede eh: ', y)",
"A Saida da rede eh: 0.7302714044131816\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0432f34ad67fb3dc3c4c22e52dea95284da23a9 | 479,009 | ipynb | Jupyter Notebook | MVP.ipynb | Promeos/LADOT-Street-Sweeping-Transition-Pan | eb0d224a7ba910c4bf1db78b9fdb1365de0e6945 | [
"MIT"
] | 1 | 2021-02-05T16:05:02.000Z | 2021-02-05T16:05:02.000Z | MVP.ipynb | Promeos/LADOT-Street-Sweeping-Transition-Plan | eb0d224a7ba910c4bf1db78b9fdb1365de0e6945 | [
"MIT"
] | null | null | null | MVP.ipynb | Promeos/LADOT-Street-Sweeping-Transition-Plan | eb0d224a7ba910c4bf1db78b9fdb1365de0e6945 | [
"MIT"
] | null | null | null | 236.430898 | 87,117 | 0.895023 | [
[
[
"# Communication in Crisis\n\n## Acquire\nData: [Los Angeles Parking Citations](https://www.kaggle.com/cityofLA/los-angeles-parking-citations)<br>\nLoad the dataset and filter for:\n- Citations issued from 2017-01-01 to 2021-04-12.\n- Street Sweeping violations - `Violation Description` == __\"NO PARK/STREET CLEAN\"__\n\nLet's acquire the parking citations data from our file.\n1. Import libraries.\n1. Load the dataset.\n1. Display the shape and first/last 2 rows.\n1. Display general infomation about the dataset - w/ the # of unique values in each column.\n1. Display the number of missing values in each column.\n1. Descriptive statistics for all numeric features.",
"_____no_output_____"
]
],
[
[
"# Import libraries\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom scipy import stats\nimport sys\nimport time\nimport folium.plugins as plugins\nfrom IPython.display import HTML\nimport json\nimport datetime\nimport calplot\nimport folium\nimport math\nsns.set()\n\nfrom tqdm.notebook import tqdm\n\nimport src\n\n# Filter warnings\nfrom warnings import filterwarnings\nfilterwarnings('ignore')",
"_____no_output_____"
],
[
"# Load the data\ndf = src.get_sweep_data(prepared=False)",
"_____no_output_____"
],
[
"# Display the shape and dtypes of each column\nprint(df.shape)\ndf.info()",
"(2318726, 22)\n<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2318726 entries, 0 to 2318725\nData columns (total 22 columns):\n # Column Dtype \n--- ------ ----- \n 0 Ticket number object \n 1 Issue Date object \n 2 Issue time float64\n 3 Meter Id object \n 4 Marked Time float64\n 5 RP State Plate object \n 6 Plate Expiry Date float64\n 7 VIN float64\n 8 Make object \n 9 Body Style object \n 10 Color object \n 11 Location object \n 12 Route object \n 13 Agency float64\n 14 Violation code object \n 15 Violation Description object \n 16 Fine amount float64\n 17 Latitude float64\n 18 Longitude float64\n 19 Agency Description object \n 20 Color Description object \n 21 Body Style Description object \ndtypes: float64(8), object(14)\nmemory usage: 389.2+ MB\n"
],
[
"# Display the first two citations\ndf.head(2)",
"_____no_output_____"
],
[
"# Display the last two citations\ndf.tail(2)",
"_____no_output_____"
],
[
"# Display descriptive statistics of numeric columns\ndf.describe()",
"_____no_output_____"
],
[
"df.hist(figsize=(16, 8), bins=15)\nplt.tight_layout();",
"_____no_output_____"
]
],
[
[
"__Initial findings__\n- `Issue time` and `Marked Time` are quasi-normally distributed. Note: Poisson Distribution\n- It's interesting to see the distribution of our activity on earth follows a normal distribution.\n- Agencies 50+ write the most parking citations.\n- Most fine amounts are less than $100.00\n- There are a few null or invalid license plates.",
"_____no_output_____"
],
[
"# Prepare",
"_____no_output_____"
],
[
"- Remove spaces + capitalization from each column name.\n- Cast `Plate Expiry Date` to datetime data type.\n- Cast `Issue Date` and `Issue Time` to datetime data types.\n- Drop columns missing >=74.42\\% of their values. \n- Drop missing values.\n- Transform Latitude and Longitude columns from NAD1983StatePlaneCaliforniaVFIPS0405 feet projection to EPSG:4326 World Geodetic System 1984: used in GPS [Standard]\n- Filter data for street sweeping citations only.",
"_____no_output_____"
]
],
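[
[
"The projection step above happens inside `src.prepare`; the sketch below is a minimal illustration of what that conversion can look like. It is an addition for clarity, not code taken from `src`: it assumes `pyproj` is installed, that the source CRS is EPSG:2229 (NAD83 / California zone V, US survey feet), and it uses made-up example coordinates.",
"_____no_output_____"
]
],
[
[
"# Hedged sketch of a state-plane-to-WGS84 conversion (assumptions noted above).\nfrom pyproj import Transformer\n\n# always_xy=True keeps (easting, northing) -> (longitude, latitude) ordering explicit\ntransformer = Transformer.from_crs('EPSG:2229', 'EPSG:4326', always_xy=True)\nlon, lat = transformer.transform([6_450_000.0], [1_850_000.0])  # example feet coordinates\nprint(lon, lat)",
"_____no_output_____"
]
],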
[
[
"# Prepare the data using a function stored in prepare.py\ndf_citations = src.get_sweep_data(prepared=True)\n\n# Display the first two rows\ndf_citations.head(2)",
"_____no_output_____"
],
[
"# Check the column data types and non-null counts.\ndf_citations.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2279063 entries, 0 to 2279062\nData columns (total 15 columns):\n # Column Dtype \n--- ------ ----- \n 0 issue_date datetime64[ns]\n 1 issue_time object \n 2 location object \n 3 route object \n 4 agency float64 \n 5 violation_description object \n 6 fine_amount float64 \n 7 latitude float64 \n 8 longitude float64 \n 9 citation_year int64 \n 10 citation_month int64 \n 11 citation_day int64 \n 12 day_of_week object \n 13 citation_hour int64 \n 14 citation_minute int64 \ndtypes: datetime64[ns](1), float64(4), int64(5), object(5)\nmemory usage: 260.8+ MB\n"
]
],
[
[
"# Exploration",
"_____no_output_____"
],
[
"## How much daily revenue is generated from street sweeper citations?\n### Daily Revenue from Street Sweeper Citations\nDaily street sweeper citations increased in 2020.",
"_____no_output_____"
]
],
[
[
"# Daily street sweeping citation revenue\ndaily_revenue = df_citations.groupby('issue_date').fine_amount.sum()\ndaily_revenue.index = pd.to_datetime(daily_revenue.index)",
"_____no_output_____"
],
[
"df_sweep = src.street_sweep(data=df_citations)\ndf_d = src.resample_period(data=df_sweep)\ndf_m = src.resample_period(data=df_sweep, period='M')\ndf_d.head()",
"_____no_output_____"
],
[
"sns.set_context('talk')\n\n# Plot daily revenue from street sweeping citations\ndf_d.revenue.plot(figsize=(14, 7), label='Revenue', color='DodgerBlue')\nplt.axhline(df_d.revenue.mean(skipna=True), color='black', label='Average Revenue')\n\nplt.title(\"Daily Revenue from Street Sweeping Citations\")\nplt.xlabel('')\nplt.ylabel(\"Revenue (in thousand's)\")\n\nplt.xticks(rotation=0, horizontalalignment='center', fontsize=13)\nplt.yticks(range(0, 1_000_000, 200_000), ['$0', '$200', '$400', '$600', '$800',])\nplt.ylim(0, 1_000_000)\n\nplt.legend(loc=2, framealpha=.8);",
"_____no_output_____"
]
],
[
[
"> __Anomaly__: Between March 2020 and October 2020 a Local Emergency was Declared by the Mayor of Los Angeles in response to COVID-19. Street Sweeping was halted to help Angelenos Shelter in Place. _Street Sweeping resumed on 10/15/2020_.",
"_____no_output_____"
],
[
"### Anomaly: Declaration of Local Emergency",
"_____no_output_____"
]
],
[
[
"sns.set_context('talk')\n\n# Plot daily revenue from street sweeping citations\ndf_d.revenue.plot(figsize=(14, 7), label='Revenue', color='DodgerBlue')\nplt.axvspan('2020-03-16', '2020-10-14', color='grey', alpha=.25)\nplt.text('2020-03-29', 890_000, 'Declaration of\\nLocal Emergency', fontsize=11)\n\n\nplt.title(\"Daily Revenue from Street Sweeping Citations\")\nplt.xlabel('')\nplt.ylabel(\"Revenue (in thousand's)\")\n\nplt.xticks(rotation=0, horizontalalignment='center', fontsize=13)\nplt.yticks(range(0, 1_000_000, 200_000), ['$0', '$200', '$400', '$600', '$800',])\nplt.ylim(0, 1_000_000)\n\nplt.legend(loc=2, framealpha=.8);",
"_____no_output_____"
],
[
"sns.set_context('talk')\n\n# Plot daily revenue from street sweeping citations\ndf_d.revenue.plot(figsize=(14, 7), label='Revenue', color='DodgerBlue')\nplt.axhline(df_d.revenue.mean(skipna=True), color='black', label='Average Revenue')\nplt.axvline(datetime.datetime(2020, 10, 15), color='red', linestyle=\"--\", label='October 15, 2020')\n\nplt.title(\"Daily Revenue from Street Sweeping Citations\")\nplt.xlabel('')\nplt.ylabel(\"Revenue (in thousand's)\")\n\nplt.xticks(rotation=0, horizontalalignment='center', fontsize=13)\nplt.yticks(range(0, 1_000_000, 200_000), ['$0', '$200K', '$400K', '$600K', '$800K',])\nplt.ylim(0, 1_000_000)\n\nplt.legend(loc=2, framealpha=.8);",
"_____no_output_____"
]
],
[
[
"## Hypothesis Test\n### General Inquiry\nIs the daily citation revenue after 10/15/2020 significantly greater than average?\n\n### Z-Score\n\n$H_0$: The daily citation revenue after 10/15/2020 is less than or equal to the average daily revenue.\n\n$H_a$: The daily citation revenue after 10/15/2020 is significantly greater than average.",
"_____no_output_____"
]
],
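[
[
"Before the day-by-day z-score analysis below, here is a compact sketch of the corresponding one-sample z-test on the post-10/15/2020 mean. It is an added illustration rather than part of the original analysis; it assumes `df_d`, `np`, and `stats` from above, and the pre/post cutoff dates mirror the Local Emergency timeline.",
"_____no_output_____"
]
],
[
[
"# Hedged sketch: one-sided z-test of the post-10/15/2020 mean daily revenue\n# against the pre-COVID mean. Assumes df_d (daily revenue) exists as above.\npost = df_d.loc[df_d.index >= '2020-10-15', 'revenue'].dropna()\npre = df_d.loc[df_d.index < '2020-03-16', 'revenue'].dropna()\n\nz = (post.mean() - pre.mean()) / (pre.std() / np.sqrt(len(post)))\np = 1 - stats.norm.cdf(z)  # one-sided: is the post-period mean greater?\nprint(f'z = {z:.2f}, one-sided p = {p:.4g}')",
"_____no_output_____"
]
],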
[
[
"confidence_interval = .997\n\n# Directional Test\nalpha = (1 - confidence_interval)/2",
"_____no_output_____"
],
[
"# Data to calculate z-scores using precovid values to calculate the mean and std\ndaily_revenue_precovid = df_d.loc[df_d.index < '2020-03-16']['revenue']\nmean_precovid, std_precovid = daily_revenue_precovid.agg(['mean', 'std']).values",
"_____no_output_____"
],
[
"mean, std = df_d.agg(['mean', 'std']).values\n\n# Calculating Z-Scores using precovid mean and std\nz_scores_precovid = (df_d.revenue - mean_precovid)/std_precovid\nz_scores_precovid.index = pd.to_datetime(z_scores_precovid.index)\n\nsig_zscores_pre_covid = z_scores_precovid[z_scores_precovid>3]\n\n# Calculating Z-Scores using entire data\nz_scores = (df_d.revenue - mean)/std\nz_scores.index = pd.to_datetime(z_scores.index)\n\nsig_zscores = z_scores[z_scores>3]",
"_____no_output_____"
],
[
"sns.set_context('talk')\n\nplt.figure(figsize=(12, 6))\nsns.histplot(data=z_scores_precovid,\n bins=50,\n label='preCOVID z-scores')\n\n\nsns.histplot(data=z_scores,\n bins=50,\n color='orange',\n label='z-scores')\n\nplt.title('Daily citation revenue after 10/15/2020 is significantly greater than average', fontsize=16)\nplt.xlabel('Standard Deviations')\nplt.ylabel('# of Days')\n\nplt.axvline(3, color='Black', linestyle=\"--\", label='3 Standard Deviations')\n\nplt.xticks(np.linspace(-1, 9, 11))\nplt.legend(fontsize=13);",
"_____no_output_____"
],
[
"a = stats.zscore(daily_revenue)\n\nfig, ax = plt.subplots(figsize=(8, 8))\n\nstats.probplot(a, plot=ax)\nplt.xlabel(\"Quantile of Normal Distribution\")\nplt.ylabel(\"z-score\");",
"_____no_output_____"
]
],
[
[
"### p-values",
"_____no_output_____"
]
],
[
[
"p_values_precovid = z_scores_precovid.apply(stats.norm.cdf)\np_values = z_scores_precovid.apply(stats.norm.cdf)\n\nsignificant_dates_precovid = p_values_precovid[(1-p_values_precovid) < alpha]\nsignificant_dates = p_values[(1-p_values) < alpha]",
"_____no_output_____"
],
[
"# The chance of an outcome occuring by random chance\nprint(f'{alpha:0.3%}')",
"0.150%\n"
]
],
[
[
"### Cohen's D",
"_____no_output_____"
]
],
[
[
"fractions = [.1, .2, .5, .7, .9]\ncohen_d = []\n\nfor percentage in fractions:\n cohen_d_trial = []\n \n for i in range(10000):\n sim = daily_revenue.sample(frac=percentage)\n sim_mean = sim.mean()\n\n d = (sim_mean - mean) / (std/math.sqrt(int(len(daily_revenue)*percentage)))\n cohen_d_trial.append(d)\n \n cohen_d.append(np.mean(cohen_d_trial))",
"_____no_output_____"
],
[
"cohen_d",
"_____no_output_____"
],
[
"fractions = [.1, .2, .5, .7, .9]\ncohen_d_precovid = []\n\nfor percentage in fractions:\n cohen_d_trial = []\n \n for i in range(10000):\n sim = daily_revenue_precovid.sample(frac=percentage)\n sim_mean = sim.mean()\n\n d = (sim_mean - mean_precovid) / (std_precovid/math.sqrt(int(len(daily_revenue_precovid)*percentage)))\n cohen_d_trial.append(d)\n \n cohen_d_precovid.append(np.mean(cohen_d_trial))",
"_____no_output_____"
],
[
"cohen_d_precovid",
"_____no_output_____"
]
],
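[
[
"For reference, the quantity computed above divides a mean difference by a standard error, so it behaves like a z-statistic and grows with the sample size; the textbook Cohen's d divides by a standard deviation instead. The sketch below is an added comparison, not part of the original analysis, and it assumes `df_d` and `np` from above with the same pre/post split dates used earlier.",
"_____no_output_____"
]
],
[
[
"# Hedged sketch: conventional Cohen's d = (mean difference) / pooled standard deviation.\npre = df_d.loc[df_d.index < '2020-03-16', 'revenue'].dropna()\npost = df_d.loc[df_d.index >= '2020-10-15', 'revenue'].dropna()\n\npooled_var = (((len(pre) - 1) * pre.var() + (len(post) - 1) * post.var())\n              / (len(pre) + len(post) - 2))\nd = (post.mean() - pre.mean()) / np.sqrt(pooled_var)\nprint(f\"Cohen's d = {d:.2f}\")",
"_____no_output_____"
]
],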
[
[
"### Significant Dates with less than a 0.15% chance of occuring\n\n- All dates that are considered significant occur after 10/15/2020\n- In the two weeks following 10/15/2020 significant events occured on __Tuesday's and Wednesday's__.",
"_____no_output_____"
]
],
[
[
"dates_precovid = set(list(sig_zscores_pre_covid.index))\ndates = set(list(sig_zscores.index))\n\n\ncommon_dates = list(dates.intersection(dates_precovid))\ncommon_dates = pd.to_datetime(common_dates).sort_values()",
"_____no_output_____"
],
[
"sig_zscores",
"_____no_output_____"
],
[
"pd.Series(common_dates.day_name(),\n common_dates)",
"_____no_output_____"
],
[
"np.random.seed(sum(map(ord, 'calplot')))\n\nall_days = pd.date_range('1/1/2020', '12/22/2020', freq='D')\n\nsignificant_events = pd.Series(np.ones_like(len(common_dates)), index=common_dates)\ncalplot.calplot(significant_events, figsize=(18, 12), cmap='coolwarm_r');",
"_____no_output_____"
]
],
[
[
"## Which parts of the city were impacted the most?",
"_____no_output_____"
]
],
[
[
"df_outliers = df_citations.loc[df_citations.issue_date.isin(list(common_dates.astype('str')))]\n\ndf_outliers.reset_index(drop=True, inplace=True)",
"_____no_output_____"
],
[
"print(df_outliers.shape)\ndf_outliers.head()",
"(60532, 15)\n"
],
[
"m = folium.Map(location=[34.0522, -118.2437],\n min_zoom=8,\n max_bounds=True)\n\nmc = plugins.MarkerCluster()\n\nfor index, row in df_outliers.iterrows():\n mc.add_child(\n \n folium.Marker(location=[str(row['latitude']), str(row['longitude'])],\n popup='Cited {} {} at {}'.format(row['day_of_week'],\n row['issue_date'],\n row['issue_time'][:-3]),\n control_scale=True,\n clustered_marker=True\n )\n )\n \n\nm.add_child(mc)",
"_____no_output_____"
]
],
[
[
"Transfering map to Tablaeu",
"_____no_output_____"
],
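[
"A minimal sketch of that hand-off, added for illustration (the filename is hypothetical): export the flagged citations to a CSV that Tableau can ingest.\n\n```python\n# Export the clustered outlier citations for Tableau\ndf_outliers.to_csv('street_sweeping_outliers.csv', index=False)\n```",
"_____no_output_____"
],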
[
"# Conclusions",
"_____no_output_____"
],
[
"# Appendix",
"_____no_output_____"
],
[
"## What time(s) are Street Sweeping citations issued?\n\nMost citations are issued during the hours of 8am, 10am, and 12pm.\n\n### Citation Times",
"_____no_output_____"
]
],
[
[
"# Filter street sweeping data for citations issued between\n# 8 am and 2 pm, 8 and 14 respectively.\n\ndf_citation_times = df_citations.loc[(df_citations.issue_hour >= 8)&(df_citations.issue_hour < 14)]",
"_____no_output_____"
],
[
"sns.set_context('talk')\n\n# Issue Hour Plot\ndf_citation_times.issue_hour.value_counts().sort_index().plot.bar(figsize=(8, 6))\n\n# Axis labels\nplt.title('Most Street Sweeper Citations are Issued at 8am')\nplt.xlabel('Issue Hour (24HR)')\nplt.ylabel('# of Citations (in thousands)')\n\n# Chart Formatting\nplt.xticks(rotation=0)\nplt.yticks(range(100_000, 400_001,100_000), ['100', '200', '300', '400'])\nplt.show()",
"_____no_output_____"
],
[
"sns.set_context('talk')\n\n# Issue Minute Plot\ndf_citation_times.issue_minute.value_counts().sort_index().plot.bar(figsize=(20, 9))\n\n# Axis labels\nplt.title('Most Street Sweeper Citations are Issued in the First 30 Minutes')\nplt.xlabel('Issue Minute')\nplt.ylabel('# of Citations (in thousands)')\n# plt.axvspan(0, 30, facecolor='grey', alpha=0.1)\n\n# Chart Formatting\nplt.xticks(rotation=0)\nplt.yticks(range(5_000, 40_001, 5_000), ['5', '10', '15', '20', '25', '30', '35', '40'])\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Which state has the most Street Sweeping violators?\n### License Plate\nOver 90% of all street sweeping citations are issued to California Residents.",
"_____no_output_____"
]
],
[
[
"sns.set_context('talk')\n\nfig = df_citations.rp_state_plate.value_counts(normalize=True).nlargest(3).plot.bar(figsize=(12, 6))\n\n# Chart labels\nplt.title('California residents receive the most street sweeping citations', fontsize=16)\nplt.xlabel('State')\nplt.ylabel('% of all Citations')\n\n# Tick Formatting\nplt.xticks(rotation=0)\nplt.yticks(np.linspace(0, 1, 11), labels=[f'{i:0.0%}' for i in np.linspace(0, 1, 11)])\nplt.grid(axis='x', alpha=.5)\n\nplt.tight_layout();",
"_____no_output_____"
]
],
[
[
"## Which street has the most Street Sweeping citations?\nThe characteristics of the top 3 streets:\n1. Vehicles are parked bumper to bumper leaving few parking spaces available\n2. Parking spaces have a set time limit",
"_____no_output_____"
]
],
[
[
"df_citations['street_name'] = df_citations.location.str.replace('^[\\d+]{2,}', '').str.strip()",
"_____no_output_____"
],
[
"sns.set_context('talk')\n\n# Removing the street number and white space from the address\ndf_citations.street_name.value_counts().nlargest(3).plot.barh(figsize=(16, 6))\n\n# Chart formatting\nplt.title('Streets with the Most Street Sweeping Citations', fontsize=24)\nplt.xlabel('# of Citations');",
"_____no_output_____"
]
],
[
[
"### __Abbot Kinney Blvd: \"Small Boutiques, No Parking\"__\n> [Abbot Kinney Blvd on Google Maps](https://www.google.com/maps/@33.9923689,-118.4731719,3a,75y,112.99h,91.67t/data=!3m6!1e1!3m4!1sKD3cG40eGmdWxhwqLD1BvA!2e0!7i16384!8i8192)\n\n\n<img src=\"./visuals/abbot.png\" alt=\"Abbot\" style=\"width: 450px;\" align=\"left\"/>",
"_____no_output_____"
],
[
"- Near Venice Beach\n - Small businesses and name brand stores line both sides of the street\n - Little to no parking in this area\n- Residential area inland\n - Multiplex style dwellings with available parking spaces\n - Weekly Street Sweeping on Monday from 7:30 am - 9:30 am",
"_____no_output_____"
],
[
"### __Clinton Street: \"Packed Street\"__",
"_____no_output_____"
],
[
"> [Clinton Street on Google Maps](https://www.google.com/maps/@34.0816611,-118.3306842,3a,75y,70.72h,57.92t/data=!3m9!1e1!3m7!1sdozFgC7Ms3EvaOF4-CeNAg!2e0!7i16384!8i8192!9m2!1b1!2i37)\n\n<img src=\"./visuals/clinton.png\" alt=\"Clinton\" style=\"width: 600px;\" align=\"Left\"/>",
"_____no_output_____"
],
[
"- All parking spaces on the street are filled\n- Residential Area\n - Weekly Street Sweeping on Friday from 8:00 am - 11:00 am",
"_____no_output_____"
],
[
"### __Kelton Ave: \"2 Hour Time Limit\"__\n> [Kelton Ave on Google Maps](https://www.google.com/maps/place/Kelton+Ave,+Los+Angeles,+CA/@34.0475262,-118.437594,3a,49.9y,183.92h,85.26t/data=!3m9!1e1!3m7!1s5VICHNYMVEk9utaV5egFYg!2e0!7i16384!8i8192!9m2!1b1!2i25!4m5!3m4!1s0x80c2bb7efb3a05eb:0xe155071f3fe49df3!8m2!3d34.0542999!4d-118.4434919)",
"_____no_output_____"
],
[
"<img src=\"./visuals/kelton.png\" width=\"600\" height=\"600\" align=\"left\"/>",
"_____no_output_____"
],
[
"- Most parking spaces on this street are available. This is due to the strict 2 hour time limit for parked vehicles without the proper exception permit.\n- Multiplex, Residential Area\n - Weekly Street Sweeping on Thursday from 10:00 am - 1:00 pm\n - Weekly Street Sweeping on Friday from 8:00 am - 10:00 am",
"_____no_output_____"
],
[
"## Which street has the most Street Sweeping citations, given the day of the week?\n\n- __Abbot Kinney Blvd__ is the most cited street on __Monday and Tuesday__\n- __4th Street East__ is the most cited street on __Saturday and Sunday__",
"_____no_output_____"
]
],
[
[
"# Group by the day of the week and street name\ndf_day_street = df_citations.groupby(by=['day_of_week', 'street_name'])\\\n .size()\\\n .sort_values()\\\n .groupby(level=0)\\\n .tail(1)\\\n .reset_index()\\\n .rename(columns={0:'count'})\n\n# Create a new column to sort the values by the day of the\n# week starting with Monday\ndf_day_street['order'] = [5, 6, 4, 3, 0, 2, 1]\n\n# Display the street with the most street sweeping citations\n# given the day of the week.\ndf_day_street.sort_values('order').set_index('order')",
"_____no_output_____"
]
],
[
[
"## Which Agencies issue the most street sweeping citations?\n\nThe Department of Transportation's __Western, Hollywood, and Valley__ subdivisions issue the most street sweeping citations.",
"_____no_output_____"
]
],
[
[
"sns.set_context('talk')\ndf_citations.agency.value_counts().nlargest(5).plot.barh(figsize=(12, 6));\n# plt.axhspan(2.5, 5, facecolor='0.5', alpha=.8)\n\nplt.title('Agencies With the Most Street Sweeper Citations')\nplt.xlabel('# of Citations (in thousands)')\n\nplt.xticks(np.arange(0, 400_001, 100_000), list(np.arange(0, 401, 100)))\nplt.yticks([0, 1, 2, 3, 4], labels=['DOT-WESTERN',\n 'DOT-HOLLYWOOD',\n 'DOT-VALLEY',\n 'DOT-SOUTHERN',\n 'DOT-CENTRAL']);",
"_____no_output_____"
]
],
[
[
"When taking routes into consideration, __\"Western\"__ Subdivision, route 00500, has issued the most street sweeping citations.\n- Is route 00500 larger than other street sweeping routes?",
"_____no_output_____"
]
],
[
[
"top_3_routes = df_citations.groupby(['agency', 'route'])\\\n .size()\\\n .nlargest(3)\\\n .sort_index()\\\n .rename('num_citations')\\\n .reset_index()\\\n .sort_values(by='num_citations', ascending=False)\n\ntop_3_routes.agency = [\"DOT-WESTERN\", \"DOT-SOUTHERN\", \"DOT-CENTRAL\"]",
"_____no_output_____"
],
[
"data = top_3_routes.set_index(['agency', 'route'])\n\ndata.plot(kind='barh', stacked=True, figsize=(12, 6), legend=None)\n\nplt.title(\"Agency-Route ID's with the most Street Sweeping Citations\")\nplt.ylabel('')\nplt.xlabel('# of Citations (in thousands)')\n\nplt.xticks(np.arange(0, 70_001, 10_000), [str(i) for i in np.arange(0, 71, 10)]);",
"_____no_output_____"
],
[
"df_citations['issue_time_num'] = df_citations.issue_time.str.replace(\":00\", '')\ndf_citations['issue_time_num'] = df_citations.issue_time_num.str.replace(':', '').astype(np.int)",
"_____no_output_____"
]
],
[
[
"## What is the weekly distibution of citation times?",
"_____no_output_____"
]
],
[
[
"sns.set_context('talk')\n\nplt.figure(figsize=(13, 12))\n\nsns.boxplot(data=df_citations,\n x=\"day_of_week\",\n y=\"issue_time_num\",\n order=[\"Monday\", \"Tuesday\", \"Wednesday\", \"Thursday\", \"Friday\", \"Saturday\", \"Sunday\"],\n whis=3);\n\nplt.title(\"Distribution Citation Issue Times Throughout the Week\")\nplt.xlabel('')\nplt.ylabel('Issue Time (24HR)')\n\nplt.yticks(np.arange(0, 2401, 200), [str(i) + \":00\" for i in range(0, 25, 2)]);",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0433622eca180a8b0959ca460f2a52da9c59b68 | 15,770 | ipynb | Jupyter Notebook | _posts/python/style/colorscales/colorscales.ipynb | ayulockin/documentation | 57669b410ebbb61fd6bf962a16d33d99ceb162ca | [
"CC-BY-3.0"
] | null | null | null | _posts/python/style/colorscales/colorscales.ipynb | ayulockin/documentation | 57669b410ebbb61fd6bf962a16d33d99ceb162ca | [
"CC-BY-3.0"
] | null | null | null | _posts/python/style/colorscales/colorscales.ipynb | ayulockin/documentation | 57669b410ebbb61fd6bf962a16d33d99ceb162ca | [
"CC-BY-3.0"
] | null | null | null | 30.982318 | 408 | 0.50837 | [
[
[
"#### New to Plotly?\nPlotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).\n<br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).\n<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!",
"_____no_output_____"
],
[
"#### Version Check\nPlotly's python package is updated frequently. Run `pip install plotly --upgrade` to use the latest version.",
"_____no_output_____"
]
],
[
[
"import plotly\nplotly.__version__",
"_____no_output_____"
]
],
[
[
"### Custom Discretized Heatmap Colorscale",
"_____no_output_____"
]
],
[
[
"import plotly.plotly as py\n\npy.iplot([{\n 'z': [\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n ],\n 'type': 'heatmap',\n 'colorscale': [\n # Let first 10% (0.1) of the values have color rgb(0, 0, 0)\n [0, 'rgb(0, 0, 0)'],\n [0.1, 'rgb(0, 0, 0)'],\n\n # Let values between 10-20% of the min and max of z\n # have color rgb(20, 20, 20)\n [0.1, 'rgb(20, 20, 20)'],\n [0.2, 'rgb(20, 20, 20)'],\n\n # Values between 20-30% of the min and max of z\n # have color rgb(40, 40, 40)\n [0.2, 'rgb(40, 40, 40)'],\n [0.3, 'rgb(40, 40, 40)'],\n\n [0.3, 'rgb(60, 60, 60)'],\n [0.4, 'rgb(60, 60, 60)'],\n\n [0.4, 'rgb(80, 80, 80)'],\n [0.5, 'rgb(80, 80, 80)'],\n\n [0.5, 'rgb(100, 100, 100)'],\n [0.6, 'rgb(100, 100, 100)'],\n\n [0.6, 'rgb(120, 120, 120)'],\n [0.7, 'rgb(120, 120, 120)'],\n\n [0.7, 'rgb(140, 140, 140)'],\n [0.8, 'rgb(140, 140, 140)'],\n\n [0.8, 'rgb(160, 160, 160)'],\n [0.9, 'rgb(160, 160, 160)'],\n\n [0.9, 'rgb(180, 180, 180)'],\n [1.0, 'rgb(180, 180, 180)']\n ],\n 'colorbar': {\n 'tick0': 0,\n 'dtick': 1\n }\n}], filename='heatmap-discrete-colorscale')",
"_____no_output_____"
]
],
[
[
"### Colorscale for Scatter Plots",
"_____no_output_____"
]
],
[
[
"import plotly.plotly as py\nimport plotly.graph_objs as go\n\ndata = [\n go.Scatter(\n y=[5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5],\n marker=dict(\n size=16,\n cmax=39,\n cmin=0,\n color=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39],\n colorbar=dict(\n title='Colorbar'\n ),\n colorscale='Viridis'\n ),\n mode='markers')\n]\n\nfig = go.Figure(data=data)\npy.iplot(fig)",
"_____no_output_____"
]
],
[
[
"### Colorscale for Contour Plot",
"_____no_output_____"
]
],
[
[
"import plotly.plotly as py\nimport plotly.graph_objs as go\n\ndata = [\n go.Contour(\n z=[[10, 10.625, 12.5, 15.625, 20],\n [5.625, 6.25, 8.125, 11.25, 15.625],\n [2.5, 3.125, 5., 8.125, 12.5],\n [0.625, 1.25, 3.125, 6.25, 10.625],\n [0, 0.625, 2.5, 5.625, 10]],\n colorscale='Jet',\n )\n]\n\npy.iplot(data, filename='simple-colorscales-colorscale')",
"_____no_output_____"
]
],
[
[
"### Custom Heatmap Colorscale",
"_____no_output_____"
]
],
[
[
"import plotly.plotly as py\nimport plotly.graph_objs as go\n\nimport six.moves.urllib\nimport json\n\nresponse = six.moves.urllib.request.urlopen('https://raw.githubusercontent.com/plotly/datasets/master/custom_heatmap_colorscale.json')\ndataset = json.load(response)\n\ndata = [\n go.Heatmap(\n z=dataset['z'],\n colorscale=[[0.0, 'rgb(165,0,38)'], [0.1111111111111111, 'rgb(215,48,39)'], [0.2222222222222222, 'rgb(244,109,67)'], [0.3333333333333333, 'rgb(253,174,97)'], [0.4444444444444444, 'rgb(254,224,144)'], [0.5555555555555556, 'rgb(224,243,248)'], [0.6666666666666666, 'rgb(171,217,233)'], [0.7777777777777778, 'rgb(116,173,209)'], [0.8888888888888888, 'rgb(69,117,180)'], [1.0, 'rgb(49,54,149)']]\n )\n]\npy.iplot(data, filename='custom-colorscale')",
"_____no_output_____"
]
],
[
[
"### Custom Contour Plot Colorscale",
"_____no_output_____"
]
],
[
[
"import plotly.plotly as py\nimport plotly.graph_objs as go\n\ndata = [\n go.Contour(\n z=[[10, 10.625, 12.5, 15.625, 20],\n [5.625, 6.25, 8.125, 11.25, 15.625],\n [2.5, 3.125, 5., 8.125, 12.5],\n [0.625, 1.25, 3.125, 6.25, 10.625],\n [0, 0.625, 2.5, 5.625, 10]],\n colorscale=[[0, 'rgb(166,206,227)'], [0.25, 'rgb(31,120,180)'], [0.45, 'rgb(178,223,138)'], [0.65, 'rgb(51,160,44)'], [0.85, 'rgb(251,154,153)'], [1, 'rgb(227,26,28)']],\n )\n]\n\npy.iplot(data, filename='colorscales-custom-colorscale')",
"_____no_output_____"
]
],
[
[
"### Custom Colorbar",
"_____no_output_____"
]
],
[
[
"import plotly.plotly as py\nimport plotly.graph_objs as go\n\nimport six.moves.urllib\nimport json\n\nresponse = six.moves.urllib.request.urlopen('https://raw.githubusercontent.com/plotly/datasets/master/custom_heatmap_colorscale.json')\ndataset = json.load(response)\n\ndata = [\n go.Heatmap(\n z=dataset['z'],\n colorscale=[[0.0, 'rgb(165,0,38)'], [0.1111111111111111, 'rgb(215,48,39)'], [0.2222222222222222, 'rgb(244,109,67)'],\n [0.3333333333333333, 'rgb(253,174,97)'], [0.4444444444444444, 'rgb(254,224,144)'], [0.5555555555555556, 'rgb(224,243,248)'],\n [0.6666666666666666, 'rgb(171,217,233)'],[0.7777777777777778, 'rgb(116,173,209)'], [0.8888888888888888, 'rgb(69,117,180)'],\n [1.0, 'rgb(49,54,149)']],\n colorbar = dict(\n title = 'Surface Heat',\n titleside = 'top',\n tickmode = 'array',\n tickvals = [2,50,100],\n ticktext = ['Hot','Mild','Cool'],\n ticks = 'outside'\n )\n )\n]\n\npy.iplot(data, filename='custom-colorscale-colorbar')",
"_____no_output_____"
]
],
[
[
"### Dash Example",
"_____no_output_____"
]
],
[
[
"from IPython.display import IFrame\nIFrame(src= \"https://dash-simple-apps.plotly.host/dash-colorscaleplot/\" ,width=\"100%\" ,height=\"650px\", frameBorder=\"0\")",
"_____no_output_____"
]
],
[
[
"Find the dash app source code [here](https://github.com/plotly/simple-example-chart-apps/tree/master/colorscale)",
"_____no_output_____"
],
[
"### Reference",
"_____no_output_____"
],
[
"See https://plot.ly/python/reference/ for more information and chart attribute options!",
"_____no_output_____"
]
],
[
[
"from IPython.display import display, HTML\n\ndisplay(HTML('<link href=\"//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700\" rel=\"stylesheet\" type=\"text/css\" />'))\ndisplay(HTML('<link rel=\"stylesheet\" type=\"text/css\" href=\"http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css\">'))\n\n! pip install git+https://github.com/plotly/publisher.git --upgrade\nimport publisher\npublisher.publish(\n 'colorscales.ipynb', 'python/colorscales/', 'Colorscales',\n 'How to set colorscales and heatmap colorscales in Python and Plotly. Divergent, sequential, and qualitative colorscales.',\n title = 'Colorscales in Python | Plotly',\n has_thumbnail='true', thumbnail='thumbnail/heatmap_colorscale.jpg', \n language='python', \n page_type='example_index',\n display_as='style_opt', \n order=11,\n ipynb= '~notebook_demo/187')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
d0433bb7c2c708b0d9282576c5ec1b46fced9120 | 477,271 | ipynb | Jupyter Notebook | Clustering Chicago Public Libraries.ipynb | KunyuHe/Clustering-Chicago-Public-Libraries | 6e4bf5a9f2de5e1150f0d5f2a4151879936f1a6b | [
"MIT"
] | null | null | null | Clustering Chicago Public Libraries.ipynb | KunyuHe/Clustering-Chicago-Public-Libraries | 6e4bf5a9f2de5e1150f0d5f2a4151879936f1a6b | [
"MIT"
] | null | null | null | Clustering Chicago Public Libraries.ipynb | KunyuHe/Clustering-Chicago-Public-Libraries | 6e4bf5a9f2de5e1150f0d5f2a4151879936f1a6b | [
"MIT"
] | null | null | null | 1,099.702765 | 371,680 | 0.951053 | [
[
[
"<h1><center>Clustering Chicago Public Libraries by Top 10 Nearby Venues</center></h1>",
"_____no_output_____"
],
[
"<h4><center>Author: Kunyu He</center></h4>\n<h5><center>University of Chicago CAPP'20<h5><center>",
"_____no_output_____"
],
[
"### Executive Summary",
"_____no_output_____"
],
[
"In this notebook, I clustered 80 public libraries in the city of Chicago into 7 clusters, based on the categories of their top ten venues nearby. It would be a nice guide for those who would like to spend their days in these libraries, exploring their surroundings, but become tired of staying in only one or few of them over time.",
"_____no_output_____"
],
[
"The rest of this notebook is organized as follows:\n\n[Data]((https://dataplatform.cloud.ibm.com/data/jupyter2/runtimeenv2/v1/wdpx/service/notebook/conda2x4b4a004761bfc4fb7999c959d3c200ec7/dsxjpy/dded3b111c2248748711f2e0af8908a960fd32ff81d51b803ea16ca6d27c85a032d73d3162155e40276ce416ddb676be350986f93c2c59/container/notebooks/576fdac1-e752-47e2-ba6d-c97780698b85?project=b4a00476-1bfc-4fb7-999c-959d3c200ec7&api=v2&env=a#Data)) section briefly introduces the data source. [Methodology]((https://dataplatform.cloud.ibm.com/data/jupyter2/runtimeenv2/v1/wdpx/service/notebook/conda2x4b4a004761bfc4fb7999c959d3c200ec7/dsxjpy/dded3b111c2248748711f2e0af8908a960fd32ff81d51b803ea16ca6d27c85a032d73d3162155e40276ce416ddb676be350986f93c2c59/container/notebooks/576fdac1-e752-47e2-ba6d-c97780698b85?project=b4a00476-1bfc-4fb7-999c-959d3c200ec7&api=v2&env=a#Methodology)) section briefly introduced the unsupervised learning algorithms used. In the [Imports and Format Parameters]((https://dataplatform.cloud.ibm.com/data/jupyter2/runtimeenv2/v1/wdpx/service/notebook/conda2x4b4a004761bfc4fb7999c959d3c200ec7/dsxjpy/dded3b111c2248748711f2e0af8908a960fd32ff81d51b803ea16ca6d27c85a032d73d3162155e40276ce416ddb676be350986f93c2c59/container/notebooks/576fdac1-e752-47e2-ba6d-c97780698b85?project=b4a00476-1bfc-4fb7-999c-959d3c200ec7&api=v2&env=a#Imports-and-Format-Parameters)) section, I install and import the Python libraries used and set the global constants for future use. [Getting and Cleaning Data]((https://dataplatform.cloud.ibm.com/data/jupyter2/runtimeenv2/v1/wdpx/service/notebook/conda2x4b4a004761bfc4fb7999c959d3c200ec7/dsxjpy/dded3b111c2248748711f2e0af8908a960fd32ff81d51b803ea16ca6d27c85a032d73d3162155e40276ce416ddb676be350986f93c2c59/container/notebooks/576fdac1-e752-47e2-ba6d-c97780698b85?project=b4a00476-1bfc-4fb7-999c-959d3c200ec7&api=v2&env=a#Getting-and-Cleaning-Data)) sections cotains code downloading and cleaning public library and nearby venues data from external sources. I perform dimension reduction, clustering and labelling mainly in the [Data Analysis]((https://dataplatform.cloud.ibm.com/data/jupyter2/runtimeenv2/v1/wdpx/service/notebook/conda2x4b4a004761bfc4fb7999c959d3c200ec7/dsxjpy/dded3b111c2248748711f2e0af8908a960fd32ff81d51b803ea16ca6d27c85a032d73d3162155e40276ce416ddb676be350986f93c2c59/container/notebooks/576fdac1-e752-47e2-ba6d-c97780698b85?project=b4a00476-1bfc-4fb7-999c-959d3c200ec7&api=v2&env=a#Data-Analysis)) section. Finally, resulting folium map is presented in the [Results]((https://dataplatform.cloud.ibm.com/data/jupyter2/runtimeenv2/v1/wdpx/service/notebook/conda2x4b4a004761bfc4fb7999c959d3c200ec7/dsxjpy/dded3b111c2248748711f2e0af8908a960fd32ff81d51b803ea16ca6d27c85a032d73d3162155e40276ce416ddb676be350986f93c2c59/container/notebooks/576fdac1-e752-47e2-ba6d-c97780698b85?project=b4a00476-1bfc-4fb7-999c-959d3c200ec7&api=v2&env=a#Results)) section and [Discussions]((https://dataplatform.cloud.ibm.com/data/jupyter2/runtimeenv2/v1/wdpx/service/notebook/conda2x4b4a004761bfc4fb7999c959d3c200ec7/dsxjpy/dded3b111c2248748711f2e0af8908a960fd32ff81d51b803ea16ca6d27c85a032d73d3162155e40276ce416ddb676be350986f93c2c59/container/notebooks/576fdac1-e752-47e2-ba6d-c97780698b85?project=b4a00476-1bfc-4fb7-999c-959d3c200ec7&api=v2&env=a#Discussions)) section covers caveats and potential improvements.",
"_____no_output_____"
],
[
"### Data",
"_____no_output_____"
],
[
"Information of the public libraries is provided by [Chicago Public Library](https://www.chipublib.org/). You can access the data [here]((https://data.cityofchicago.org/Education/Libraries-Locations-Hours-and-Contact-Information/x8fc-8rcq)).\n\nInformation of the top venues near to (within a range of 500 meters) the public libraries is acquired from [FourSquare API](https://developer.foursquare.com/). You can explore the surroundings of any geographical coordinates of interest with a developer account.",
"_____no_output_____"
],
[
"### Methodology",
"_____no_output_____"
],
[
"The clustering algorithms used include:\n\n* [Principal Component Analysis]((https://en.wikipedia.org/wiki/Principal_component_analysis)) with [Truncated SVD]((http://infolab.stanford.edu/pub/cstr/reports/na/m/86/36/NA-M-86-36.pdf));\n* [KMeans Clustering]((https://en.wikipedia.org/wiki/K-means_clustering));\n* [Hierarchical Clustering]((https://en.wikipedia.org/wiki/Hierarchical_clustering)) with [Ward's Method]((https://en.wikipedia.org/wiki/Ward%27s_method)).\n\nPCA with TSVD is used for reducing the dimension of our feature matrix, which is a [sparse matrix]((https://en.wikipedia.org/wiki/Sparse_matrix)). KMeans and hierarchical clusering are applied to cluster the libraries in terms of their top ten nearby venue categories and the final labels are derived from hierarchical clustering with ward distance.",
"_____no_output_____"
],
[
"### Imports and Format Parameters",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport re\nimport requests\nimport matplotlib.pyplot as plt\n\nfrom matplotlib.font_manager import FontProperties\nfrom pandas.io.json import json_normalize\nfrom sklearn.decomposition import TruncatedSVD\nfrom sklearn.cluster import KMeans\nfrom scipy.cluster.hierarchy import linkage, dendrogram, fcluster",
"_____no_output_____"
]
],
[
[
"For visualization, install [folium](https://github.com/python-visualization/folium) and make an additional import.",
"_____no_output_____"
]
],
[
[
"!conda install --quiet -c conda-forge folium --yes\nimport folium",
"\n\n# All requested packages already installed.\n# packages in environment at /opt/conda/envs/DSX-Python35:\n#\nfolium 0.7.0 py_0 conda-forge\n"
],
[
"%matplotlib inline\n\ntitle = FontProperties()\ntitle.set_family('serif')\ntitle.set_size(16)\ntitle.set_weight('bold')\n\naxis = FontProperties()\naxis.set_family('serif')\naxis.set_size(12)\n\nplt.rcParams['figure.figsize'] = [12, 8]",
"_____no_output_____"
]
],
[
[
"Hard-code the geographical coordinates of the City of Chicago based on [this]((https://www.latlong.net/place/chicago-il-usa-1855.html)) page. Also prepare formatting parameters for folium map markers.",
"_____no_output_____"
]
],
[
[
"LATITUDE, LOGITUDE = 41.881832, -87.623177\n\nICON_COLORS = ['red', 'blue', 'green', 'purple', 'orange', 'beige', 'darked']\n\nHTML = \"\"\"\n <center><h4><b>Library {}</b></h4></center>\n <h5><b>Cluster:</b> {};</h5>\n <h5><b>Hours of operation:</b><br>\n {}</h5>\n <h5><b>Top five venues:</b><br>\n <center>{}<br>\n {}<br>\n {}<br>\n {}<br>\n {}</center></h5>\n \"\"\"",
"_____no_output_____"
]
],
[
[
"### Getting and Cleaning Data",
"_____no_output_____"
],
[
"#### Public Library Data",
"_____no_output_____"
]
],
[
[
"!wget --quiet https://data.cityofchicago.org/api/views/x8fc-8rcq/rows.csv?accessType=DOWNLOAD -O libraries.csv",
"_____no_output_____"
],
[
"lib = pd.read_csv('libraries.csv', usecols=['NAME ', 'HOURS OF OPERATION', 'LOCATION'])\nlib.columns = ['library', 'hours', 'location']\nlib.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 80 entries, 0 to 79\nData columns (total 3 columns):\nlibrary 80 non-null object\nhours 80 non-null object\nlocation 80 non-null object\ndtypes: object(3)\nmemory usage: 2.0+ KB\n"
]
],
[
[
"Notice that locations are stored as strings of tuples. Applying the following function to `lib`, we can convert `location` into two separate columns of latitudes and longitudes of the libraries.",
"_____no_output_____"
]
],
[
[
"def sep_location(row):\n \"\"\"\n Purpose: seperate the string of location in a given row, convert it into a tuple\n of floats, representing latitude and longitude of the library respectively\n \n Inputs:\n row (PandasSeries): a row from the `lib` dataframe\n \n Outputs:\n (tuple): of floats representing latitude and longitude of the library\n \"\"\"\n \n return tuple(float(re.compile('[()]').sub(\"\", coordinate)) for \\\n coordinate in row.location.split(', '))",
"_____no_output_____"
],
[
"lib[['latitude', 'longitude']] = lib.apply(sep_location, axis=1).apply(pd.Series)\nlib.drop('location', axis=1, inplace=True)\nlib.head()",
"_____no_output_____"
]
],
[
[
"Now data on the public libraries is ready for analysis.",
"_____no_output_____"
],
[
"#### Venue Data",
"_____no_output_____"
],
[
"Use sensitive code cell below to enter FourSquare credentials.",
"_____no_output_____"
]
],
[
[
"# The code was removed by Watson Studio for sharing.",
"_____no_output_____"
]
],
[
[
"Get top ten venues near to the libraries and store data into the `venues` dataframe, with radius set to 1000 meters by default. You can update the `VERSION` parameter to get up-to-date venue information.",
"_____no_output_____"
]
],
[
[
"VERSION = '20181206'\nFEATURES = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']\n\ndef get_venues(libraries, latitudes, longitudes, limit=10, radius=1000.0):\n \"\"\"\n Purpose: download nearby venues information through FourSquare API in a dataframe\n \n Inputs:\n libraries (PandasSeries): names of the public libraries\n latitudes (PandasSeries): latitudes of the public libraries\n longitudes (PandasSeries): longitudes of the public libraries\n limit (int): number of top venues to explore, default to 10\n radius (float): range of the circle coverage to define 'nearby', default to 1000.0\n \n Outputs: (DataFrame) \n \"\"\"\n venues_lst = []\n for library, lat, lng in zip(libraries, latitudes, longitudes):\n url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format( \\\n CLIENT_ID, CLIENT_SECRET, VERSION, lat, lng, radius, limit)\n items = requests.get(url).json()[\"response\"]['groups'][0]['items']\n venues_lst.append([(library, lat, lng, \\\n item['venue']['name'], \\\n item['venue']['location']['lat'], item['venue']['location']['lng'], \\\n item['venue']['categories'][0]['name']) for item in items])\n\n venues = pd.DataFrame([item for venues_lst in venues_lst for item in venues_lst])\n venues.columns = ['Library', 'Library Latitude', 'Library Longitude', \\\n 'Venue', 'Venue Latitude', 'Venue Longitude', 'Venue Category']\n \n return venues",
"_____no_output_____"
],
[
"venues = get_venues(lib.library, lib.latitude, lib.longitude)\nvenues.head()",
"_____no_output_____"
]
],
[
[
"Count unique libraries, venues and vanue categories in our `venues` dataframe.",
"_____no_output_____"
]
],
[
[
"print('There are {} unique libraries, {} unique venues and {} unique categories.'.format( \\\n len(venues.Library.unique()), \\\n len(venues.Venue.unique()), \\\n len(venues['Venue Category'].unique())))",
"There are 80 unique libraries, 653 unique venues and 173 unique categories.\n"
]
],
[
[
"Now our `venues` data is also ready for furtehr analysis.",
"_____no_output_____"
],
[
"### Data Analysis",
"_____no_output_____"
],
[
"#### Data Preprocessing",
"_____no_output_____"
],
[
"Apply one-hot encoding to get our feature matrix, group the venues by libraries and calculate the frequency of each venue category around specific library by taking the mean.",
"_____no_output_____"
]
],
[
[
"features = pd.get_dummies(venues['Venue Category'], prefix=\"\", prefix_sep=\"\")\nfeatures.insert(0, 'Library Name', venues.Library)\nX = features.groupby(['Library Name']).mean().iloc[:, 1:]\nX.head()",
"_____no_output_____"
]
],
[
[
"There are too many categories of venues in our features dataframe. Perform PCA to reduce the dimension of our data. Notice here most of the data entries in our feature matrix is zero, which means our data is sparse data, perform dimension reduction with truncated SVD. \n\nFirst, attempt to find the least number of dimensions to keep 85% of the variance and transform the feature matrix.",
"_____no_output_____"
]
],
[
[
"tsvd = TruncatedSVD(n_components=X.shape[1]-1, random_state=0).fit(X)\nleast_n = np.argmax(tsvd.explained_variance_ratio_.cumsum() > 0.85)\nprint(\"In order to keep 85% of total variance, we need to keep at least {} dimensions.\".format(least_n))\n\nX_t = pd.DataFrame(TruncatedSVD(n_components=least_n, random_state=0).fit_transform(X))",
"In order to keep 85% of total variance, we need to keep at least 36 dimensions.\n"
]
],
[
[
"Use KMeans on the transformed data and find the best number of k below.",
"_____no_output_____"
]
],
[
[
"ks = np.arange(1, 51)\ninertias = []\nfor k in ks:\n model = KMeans(n_clusters=k, random_state=0).fit(X_t)\n inertias.append(model.inertia_)\n\nplt.plot(ks, inertias, linewidth=2)\nplt.title(\"Figure 1 KMeans: Finding Best k\", fontproperties=title)\nplt.xlabel('Number of Clusters (k)', fontproperties=axis)\nplt.ylabel('Within-cluster Sum-of-squares', fontproperties=axis)\nplt.xticks(np.arange(1, 51, 2))\nplt.show()",
"_____no_output_____"
]
],
[
[
"It's really hard to decide based on elbow plot, as the downward trend lasts until 50. Alternatively, try Ward Hierachical Clustering method.",
"_____no_output_____"
]
],
[
[
"merging = linkage(X_t, 'ward')\n\nplt.figure(figsize=[20, 10])\ndendrogram(merging,\n leaf_rotation=90,\n leaf_font_size=10,\n distance_sort='descending',\n show_leaf_counts=True)\nplt.axhline(y=0.65, dashes=[6, 2], c='r')\nplt.xlabel('Library Names', fontproperties=axis)\nplt.title(\"Figure 2 Hierachical Clustering with Ward Distance: Cutting at 0.65\", fontproperties=title)\nplt.show() ",
"_____no_output_____"
]
],
[
[
"The result is way better than KMeans. We see six clusters cutting at approximately 0.70. Label the clustered libraries below. Join the labelled library names with `lib` to bind geographical coordinates and hours of operation of the puclic libraries.",
"_____no_output_____"
]
],
[
[
"labels = fcluster(merging, t=0.65, criterion='distance')\ndf = pd.DataFrame(list(zip(X.index.values, labels)))\ndf.columns = ['library', 'cluster']\n\nmerged = pd.merge(lib, df, how='inner', on='library')\nmerged.head()",
"_____no_output_____"
]
],
[
[
"### Results",
"_____no_output_____"
],
[
"Create a `folium.Map` instance `chicago` with initial zoom level of 11.",
"_____no_output_____"
]
],
[
[
"chicago = folium.Map(location=[LATITUDE, LOGITUDE], zoom_start=11)",
"_____no_output_____"
]
],
[
[
"Check the clustered map! Click on the icons to see the name, hours of operation and top five nearby venues of each public library in the city of Chicago!",
"_____no_output_____"
]
],
[
[
"for index, row in merged.iterrows():\n venues_name = venues[venues.Library == row.library].Venue.values\n label = folium.Popup(HTML.format(row.library, row.cluster, row.hours, venues_name[0], venues_name[1], venues_name[2], venues_name[3], venues_name[4]), parse_html=False)\n folium.Marker([row.latitude, row.longitude], popup=label, icon=folium.Icon(color=ICON_COLORS[row.cluster-1], icon='book')).add_to(chicago)\n\nchicago",
"_____no_output_____"
]
],
[
[
"### Discussions",
"_____no_output_____"
],
[
"There might be several caveats in my analysis:\n\n* Libraries are clustered merely according to categories of their surrounding venues. Other characteristics are left out from my considseration;\n* We can see that the resulting venues are not unique, i.e. not every public library has ten distinct venues. This might results from the fact that venues share same names in some cases, or nearby areas of these libraries overlap.\n\nFuture improvements might include:\n\n* Include hyperlinks to venue photos and tips to make it easier for users to check up in advance;\n* Use better algorithms to cluster the libraries.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d04364f4e08bf9a0d8151e969c3d3fa0e98ffdeb | 25,258 | ipynb | Jupyter Notebook | src/ipython/45 Simulation_Test.ipynb | rah/optimal-search | 96d46ae1491b105a1ee21dc75e9297337249d466 | [
"MIT"
] | null | null | null | src/ipython/45 Simulation_Test.ipynb | rah/optimal-search | 96d46ae1491b105a1ee21dc75e9297337249d466 | [
"MIT"
] | 4 | 2018-04-26T01:49:12.000Z | 2022-01-08T08:12:52.000Z | src/ipython/45 Simulation_Test.ipynb | rah/optimal-search | 96d46ae1491b105a1ee21dc75e9297337249d466 | [
"MIT"
] | null | null | null | 126.924623 | 11,200 | 0.886293 | [
[
[
"# Simulation Test",
"_____no_output_____"
],
[
"## Introduction",
"_____no_output_____"
]
],
[
[
"import sys\nimport random\nimport numpy as np\nimport pylab\nfrom scipy import stats",
"_____no_output_____"
],
[
"sys.path.insert(0, '../simulation')\nfrom environment import Environment\nfrom predator import Predator",
"_____no_output_____"
],
[
"params = {\n 'env_size': 1000,\n 'n_patches': 20,\n 'n_trials': 100,\n 'max_moves': 5000,\n 'max_entities_per_patch': 50,\n 'min_entities_per_patch': 5,\n}",
"_____no_output_____"
],
[
"entity_results = []\ncaptured_results = []\n\nfor trial in range(params['n_trials']):\n # Set up the environment\n env = Environment(params['env_size'], params['env_size'], params['n_patches'])\n entities = random.randint(\n params['min_entities_per_patch'],\n params['max_entities_per_patch']\n )\n for patch in env.children:\n patch.create_entities(entities)\n\n pred = Predator()\n pred.xpos = env.length / 2.0\n pred.y_pos = env.width / 2.0\n pred.parent = env\n\n for i in range(params['max_moves']):\n pred.move()\n entity = pred.detect()\n pred.capture(entity)\n\n entity_results.append(entities)\n captured_results.append(len(pred.captured))\n",
"_____no_output_____"
],
[
"x = np.array(entity_results)\ny = np.array(captured_results)\n\nslope, intercept, r_value, p_value, slope_std_error = stats.linregress(x, y)\n\nprint \"Slope, intercept:\", slope, intercept\nprint \"R-squared:\", r_value**2\n\n# Calculate some additional outputs\npredict_y = intercept + slope * x\npred_error = y - predict_y\ndegrees_of_freedom = len(x) - 2\nresidual_std_error = np.sqrt(np.sum(pred_error**2) / degrees_of_freedom)\nprint \"Residual Std Error = \", residual_std_error\n\n# Plotting\npylab.plot(x, y, 'o')\npylab.plot(x, predict_y, 'k-')\npylab.show()\n",
"Slope, intercept: 0.3644392340758112 5.360355607659233\nR-squared: 0.08685303060235423\nResidual Std Error = 14.791107995437939\n"
],
[
"z = np.divide(np.multiply(y, 1.0), np.multiply(x, 1.0))",
"_____no_output_____"
],
[
"pylab.plot(x, z, 'o')",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0436c2104b462c56de791487b59a3e2e38822bc | 260,442 | ipynb | Jupyter Notebook | Elliptical_Visualization.ipynb | mads5/Galaxy-Classifier | 90586fe185d4dde8c29059d94ff7177f28a6fafe | [
"MIT"
] | 3 | 2019-12-19T09:00:41.000Z | 2020-08-31T05:49:44.000Z | Elliptical_Visualization.ipynb | LeadingIndiaAI/Galaxy-classifier | 90586fe185d4dde8c29059d94ff7177f28a6fafe | [
"MIT"
] | null | null | null | Elliptical_Visualization.ipynb | LeadingIndiaAI/Galaxy-classifier | 90586fe185d4dde8c29059d94ff7177f28a6fafe | [
"MIT"
] | 6 | 2019-04-16T23:42:01.000Z | 2022-03-09T23:29:50.000Z | 898.075862 | 160,970 | 0.944894 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport cv2 \nimport keras\nfrom keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D\nfrom keras.models import Sequential\nimport warnings\nwarnings.filterwarnings('ignore')\n%matplotlib inline",
"_____no_output_____"
],
[
"elliptical= cv2.imread(\"./Category0/elliptical/134234.jpg\")",
"_____no_output_____"
],
[
"plt.imshow(elliptical)",
"_____no_output_____"
],
[
"elliptical.shape",
"_____no_output_____"
],
[
"num_classes=3",
"_____no_output_____"
],
[
"model1 = Sequential()\nmodel1.add(Conv2D(3,5,5, activation='tanh', input_shape=elliptical.shape))",
"_____no_output_____"
],
[
"def visualize_ell(model, elliptical):\n fig = plt.gcf()\n ell_batch = np.expand_dims(elliptical,axis=0)\n conv_ell = model.predict(ell_batch)\n conv_ell = np.squeeze(conv_ell, axis=0)\n print (conv_ell.shape)\n plt.imshow(conv_ell)\n fig.savefig('3_filter_1st_layer_visuals_ell.png')",
"_____no_output_____"
],
[
"visualize_irr(model1, elliptical)",
"(420, 420, 3)\n"
],
[
"model2 = Sequential()\nmodel2.add(Conv2D(1,5,5, activation='tanh', input_shape=elliptical.shape))\nmodel2.add(MaxPooling2D(pool_size=(5,5)))",
"_____no_output_____"
],
[
"def nice_galaxy_printer(model, elliptical):\n fig = plt.gcf()\n ell_batch = np.expand_dims(elliptical,axis=0)\n conv_ell2 = model.predict(ell_batch)\n\n conv_ell2 = np.squeeze(conv_ell2, axis=0)\n print (conv_ell2.shape)\n conv_ell2 = conv_ell2.reshape(conv_ell2.shape[:2])\n\n print (conv_ell2.shape)\n plt.imshow(conv_ell2)\n fig.savefig('1_filter_1st_layer_visuals_with_pooling_ell.png')",
"_____no_output_____"
],
[
"nice_galaxy_printer(model2, elliptical)",
"(84, 84, 1)\n(84, 84)\n"
],
[
"model3 = Sequential()\nmodel3.add(Conv2D(3,5,5, activation='tanh', input_shape=elliptical.shape))\nmodel3.add(MaxPooling2D(pool_size=(5,5)))\nmodel3.add(Conv2D(3,5,5, activation='tanh', input_shape=elliptical.shape))\nmodel3.add(MaxPooling2D(pool_size=(5,5)))\n\nvisualize_ell(model3, elliptical)",
"(16, 16, 3)\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0437ff0f73d6aeb3c7a2ad7b74e245e9562b76c | 22,120 | ipynb | Jupyter Notebook | doc/app/evo-hypothesis.ipynb | GavinHuttley/c3test | c5bf7f8252b4f7b75a851e28275536a8c378897a | [
"BSD-3-Clause"
] | null | null | null | doc/app/evo-hypothesis.ipynb | GavinHuttley/c3test | c5bf7f8252b4f7b75a851e28275536a8c378897a | [
"BSD-3-Clause"
] | null | null | null | doc/app/evo-hypothesis.ipynb | GavinHuttley/c3test | c5bf7f8252b4f7b75a851e28275536a8c378897a | [
"BSD-3-Clause"
] | 1 | 2020-05-04T02:44:00.000Z | 2020-05-04T02:44:00.000Z | 33.874426 | 182 | 0.443038 | [
[
[
"# Testing a hypothesis -- non-stationary or time-reversible\n\nWe evaluate whether the GTR model is sufficient for a data set, compared with the GN (non-stationary general nucleotide model).",
"_____no_output_____"
]
],
[
[
"from cogent3.app import io, evo, sample\n\nloader = io.load_aligned(format=\"fasta\", moltype=\"dna\")\naln = loader(\"../data/primate_brca1.fasta\")",
"_____no_output_____"
],
[
"tree = \"../data/primate_brca1.tree\"\nsm_args = dict(optimise_motif_probs=True)\n\nnull = evo.model(\"GTR\", tree=tree, sm_args=sm_args)\nalt = evo.model(\"GN\", tree=tree, sm_args=sm_args)\nhyp = evo.hypothesis(null, alt)\nresult = hyp(aln)\ntype(result)",
"_____no_output_____"
]
],
[
[
"`result` is a `hypothesis_result` object. The `repr()` displays the likelihood ratio test statistic, degrees of freedom and associated p-value>",
"_____no_output_____"
]
],
[
[
"result",
"_____no_output_____"
]
],
[
[
"In this case, we accept the null given the p-value is > 0.05. We still use this object to demonstrate the properties of a `hypothesis_result`.",
"_____no_output_____"
],
[
"## `hypothesis_result` has attributes and keys",
"_____no_output_____"
],
[
"### Accessing the test statistics",
"_____no_output_____"
]
],
[
[
"result.LR, result.df, result.pvalue",
"_____no_output_____"
]
],
[
[
"### The null hypothesis\n\nThis model is accessed via the `null` attribute.",
"_____no_output_____"
]
],
[
[
"result.null",
"_____no_output_____"
],
[
"result.null.lf",
"_____no_output_____"
]
],
[
[
"### The alternate hypothesis",
"_____no_output_____"
]
],
[
[
"result.alt.lf",
"_____no_output_____"
]
],
[
[
"## Saving hypothesis results\n\nYou are advised to save these results as json using the standard json writer, or the db writer.\n\nThis following would write the result into a `tinydb`.\n\n```python\nfrom cogent3.app.io import write_db\n\nwriter = write_db(\"path/to/myresults.tinydb\", create=True, if_exists=\"overwrite\")\nwriter(result)\n```",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
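The p-value shown by `hypothesis_result` follows from the standard likelihood-ratio test: the LR statistic (twice the log-likelihood difference between the GN and GTR fits) is referred to a chi-squared distribution whose degrees of freedom equal the difference in free parameter counts. A small sketch of that check, assuming the `result` object from the cells above:

```python
from scipy.stats import chi2

p_value = chi2.sf(result.LR, result.df)  # should agree with result.pvalue
print(result.LR, result.df, p_value)
```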
d04384c72e1533432fcb79f9979bce9154fe750a | 3,702 | ipynb | Jupyter Notebook | oturum4_ders_notlari.ipynb | gdgkastamonu/Python_Ders_Notlari | 1192851d4a6c12b1e669e5d77ed51221fb7385d8 | [
"Apache-2.0"
] | 1 | 2020-08-20T11:14:35.000Z | 2020-08-20T11:14:35.000Z | oturum4_ders_notlari.ipynb | gdgkastamonu/Python_Ders_Notlari | 1192851d4a6c12b1e669e5d77ed51221fb7385d8 | [
"Apache-2.0"
] | null | null | null | oturum4_ders_notlari.ipynb | gdgkastamonu/Python_Ders_Notlari | 1192851d4a6c12b1e669e5d77ed51221fb7385d8 | [
"Apache-2.0"
] | null | null | null | 29.380952 | 286 | 0.554295 | [
[
[
"cities=list()\ntype (cities)\ndir(cities)\ncities. Append(İstanbul)\ncities.reverse()\ncities. sort ()\ncities. Pop()\ncities. Clear()\nother_list=[\n\"Ercan\", true,false,30.0,1,2[1,2,3,4],(1,2,3,4)\n{\"user\":\"hakan\":\"pasaport\":\"1234\"}\n",
"_____no_output_____"
],
[
"kişiler=[[173,73.7,56],[181,89,8],[176,56,8]]\nfor kişi in kişiler\nprint (kişi[0]\\100)",
"_____no_output_____"
],
[
"people =[[153,73.6,24],[131,98.8,24][166,78.8,89]]\nfor person in people:\nfor bilgi in person :\nprint (bilgi, \"\", end '')\n",
"_____no_output_____"
],
[
"def showdate(date-set)\nprint(len( date-set), \"rows\")\nfor row in date-set :\nfor data in row:\nprint(data, \"\", end =\"\")\nprint show(kişiler)\nprint show(people)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
d0439d81749e4924a392e8c050dd1ee9b6531439 | 711,473 | ipynb | Jupyter Notebook | FPV_ANN/notebooks/load_exist_model.ipynb | uqyge/combustionML | b0052fce732f38af478b26b5b2c0d9c94310c89e | [
"MIT"
] | 21 | 2018-03-01T12:39:15.000Z | 2021-12-01T15:59:03.000Z | FPV_ANN/notebooks/load_exist_model.ipynb | uqyge/combustionML | b0052fce732f38af478b26b5b2c0d9c94310c89e | [
"MIT"
] | 1 | 2018-06-01T16:31:46.000Z | 2018-06-12T09:07:59.000Z | FPV_ANN/notebooks/load_exist_model.ipynb | uqyge/combustionML | b0052fce732f38af478b26b5b2c0d9c94310c89e | [
"MIT"
] | 6 | 2017-09-07T18:57:16.000Z | 2021-08-07T07:46:33.000Z | 26.032675 | 9,632 | 0.427772 | [
[
[
"from keras.models import load_model\nimport pandas as pd",
"Using TensorFlow backend.\n"
],
[
"import keras.backend as K\nfrom keras.callbacks import LearningRateScheduler\nfrom keras.callbacks import Callback\nimport math\nimport numpy as np\n\n\ndef coeff_r2(y_true, y_pred):\n from keras import backend as K\n SS_res = K.sum(K.square( y_true-y_pred ))\n SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )\n return ( 1 - SS_res/(SS_tot + K.epsilon()) )\n",
"_____no_output_____"
],
[
"model = load_model('./FPV_ANN_tabulated_Standard_4Res_500n.H5')\n# model = load_model('../tmp/large_next.h5',custom_objects={'coeff_r2':coeff_r2})\n# model = load_model('../tmp/calc_100_3_3_cbrt.h5', custom_objects={'coeff_r2':coeff_r2})\nmodel.summary()",
"__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 3) 0 \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 500) 2000 input_1[0][0] \n__________________________________________________________________________________________________\nres1a_branch2a (Dense) (None, 500) 250500 dense_1[0][0] \n__________________________________________________________________________________________________\nactivation_1 (Activation) (None, 500) 0 res1a_branch2a[0][0] \n__________________________________________________________________________________________________\ndropout_1 (Dropout) (None, 500) 0 activation_1[0][0] \n__________________________________________________________________________________________________\nres1a_branch2b (Dense) (None, 500) 250500 dropout_1[0][0] \n__________________________________________________________________________________________________\nadd_1 (Add) (None, 500) 0 res1a_branch2b[0][0] \n dense_1[0][0] \n__________________________________________________________________________________________________\nactivation_2 (Activation) (None, 500) 0 add_1[0][0] \n__________________________________________________________________________________________________\ndropout_2 (Dropout) (None, 500) 0 activation_2[0][0] \n__________________________________________________________________________________________________\nres1b_branch2a (Dense) (None, 500) 250500 dropout_2[0][0] \n__________________________________________________________________________________________________\nactivation_3 (Activation) (None, 500) 0 res1b_branch2a[0][0] \n__________________________________________________________________________________________________\ndropout_3 (Dropout) (None, 500) 0 activation_3[0][0] \n__________________________________________________________________________________________________\nres1b_branch2b (Dense) (None, 500) 250500 dropout_3[0][0] \n__________________________________________________________________________________________________\nadd_2 (Add) (None, 500) 0 res1b_branch2b[0][0] \n dropout_2[0][0] \n__________________________________________________________________________________________________\nactivation_4 (Activation) (None, 500) 0 add_2[0][0] \n__________________________________________________________________________________________________\ndropout_4 (Dropout) (None, 500) 0 activation_4[0][0] \n__________________________________________________________________________________________________\nres1c_branch2a (Dense) (None, 500) 250500 dropout_4[0][0] \n__________________________________________________________________________________________________\nactivation_5 (Activation) (None, 500) 0 res1c_branch2a[0][0] \n__________________________________________________________________________________________________\ndropout_5 (Dropout) (None, 500) 0 activation_5[0][0] \n__________________________________________________________________________________________________\nres1c_branch2b (Dense) (None, 500) 250500 dropout_5[0][0] \n__________________________________________________________________________________________________\nadd_3 (Add) (None, 500) 0 res1c_branch2b[0][0] \n dropout_4[0][0] 
\n__________________________________________________________________________________________________\nactivation_6 (Activation) (None, 500) 0 add_3[0][0] \n__________________________________________________________________________________________________\ndropout_6 (Dropout) (None, 500) 0 activation_6[0][0] \n__________________________________________________________________________________________________\nres1d_branch2a (Dense) (None, 500) 250500 dropout_6[0][0] \n__________________________________________________________________________________________________\nactivation_7 (Activation) (None, 500) 0 res1d_branch2a[0][0] \n__________________________________________________________________________________________________\ndropout_7 (Dropout) (None, 500) 0 activation_7[0][0] \n__________________________________________________________________________________________________\nres1d_branch2b (Dense) (None, 500) 250500 dropout_7[0][0] \n__________________________________________________________________________________________________\nadd_4 (Add) (None, 500) 0 res1d_branch2b[0][0] \n dropout_6[0][0] \n__________________________________________________________________________________________________\nactivation_8 (Activation) (None, 500) 0 add_4[0][0] \n__________________________________________________________________________________________________\ndropout_8 (Dropout) (None, 500) 0 activation_8[0][0] \n__________________________________________________________________________________________________\ndense_2 (Dense) (None, 15) 7515 dropout_8[0][0] \n==================================================================================================\nTotal params: 2,013,515\nTrainable params: 2,013,515\nNon-trainable params: 0\n__________________________________________________________________________________________________\n"
],
[
"import numpy as np\nimport pandas as pd\nfrom sklearn.preprocessing import MinMaxScaler, StandardScaler\n\nclass data_scaler(object):\n def __init__(self):\n self.norm = None\n self.norm_1 = None\n self.std = None\n self.case = None\n self.scale = 1\n self.bias = 1e-20\n# self.bias = 1\n\n\n self.switcher = {\n 'min_std': 'min_std',\n 'std2': 'std2',\n 'std_min':'std_min',\n 'min': 'min',\n 'no':'no',\n 'log': 'log',\n 'log_min':'log_min',\n 'log_std':'log_std',\n 'log2': 'log2',\n 'sqrt_std': 'sqrt_std',\n 'cbrt_std': 'cbrt_std',\n 'nrt_std':'nrt_std',\n 'tan': 'tan'\n }\n\n def fit_transform(self, input_data, case):\n self.case = case\n if self.switcher.get(self.case) == 'min_std':\n self.norm = MinMaxScaler()\n self.std = StandardScaler()\n out = self.norm.fit_transform(input_data)\n out = self.std.fit_transform(out)\n\n if self.switcher.get(self.case) == 'std2':\n self.std = StandardScaler()\n out = self.std.fit_transform(input_data)\n\n if self.switcher.get(self.case) == 'std_min':\n self.norm = MinMaxScaler()\n self.std = StandardScaler()\n out = self.std.fit_transform(input_data)\n out = self.norm.fit_transform(out)\n\n if self.switcher.get(self.case) == 'min':\n self.norm = MinMaxScaler()\n out = self.norm.fit_transform(input_data)\n\n if self.switcher.get(self.case) == 'no':\n self.norm = MinMaxScaler()\n self.std = StandardScaler()\n out = input_data\n\n if self.switcher.get(self.case) == 'log_min':\n out = - np.log(np.asarray(input_data / self.scale) + self.bias)\n self.norm = MinMaxScaler()\n out = self.norm.fit_transform(out)\n\n if self.switcher.get(self.case) == 'log_std':\n out = - np.log(np.asarray(input_data / self.scale) + self.bias)\n self.std = StandardScaler()\n out = self.std.fit_transform(out)\n\n if self.switcher.get(self.case) == 'log2':\n self.norm = MinMaxScaler()\n self.std = StandardScaler()\n out = self.norm.fit_transform(input_data)\n out = np.log(np.asarray(out) + self.bias)\n out = self.std.fit_transform(out)\n\n if self.switcher.get(self.case) == 'sqrt_std':\n out = np.sqrt(np.asarray(input_data / self.scale))\n self.std = StandardScaler()\n out = self.std.fit_transform(out)\n\n if self.switcher.get(self.case) == 'cbrt_std':\n out = np.cbrt(np.asarray(input_data / self.scale))\n self.std = StandardScaler()\n out = self.std.fit_transform(out)\n\n if self.switcher.get(self.case) == 'nrt_std':\n out = np.power(np.asarray(input_data / self.scale),1/4)\n self.std = StandardScaler()\n out = self.std.fit_transform(out)\n\n if self.switcher.get(self.case) == 'tan':\n self.norm = MaxAbsScaler()\n self.std = StandardScaler()\n out = self.std.fit_transform(input_data)\n out = self.norm.fit_transform(out)\n out = np.tan(out / (2 * np.pi + self.bias))\n\n return out\n\n def transform(self, input_data):\n if self.switcher.get(self.case) == 'min_std':\n out = self.norm.transform(input_data)\n out = self.std.transform(out)\n\n if self.switcher.get(self.case) == 'std2':\n out = self.std.transform(input_data)\n\n if self.switcher.get(self.case) == 'std_min':\n out = self.std.transform(input_data)\n out = self.norm.transform(out)\n\n if self.switcher.get(self.case) == 'min':\n out = self.norm.transform(input_data)\n\n if self.switcher.get(self.case) == 'no':\n out = input_data\n\n if self.switcher.get(self.case) == 'log_min':\n out = - np.log(np.asarray(input_data / self.scale) + self.bias)\n out = self.norm.transform(out)\n\n if self.switcher.get(self.case) == 'log_std':\n out = - np.log(np.asarray(input_data / self.scale) + self.bias)\n out = self.std.transform(out)\n\n 
if self.switcher.get(self.case) == 'log2':\n out = self.norm.transform(input_data)\n out = np.log(np.asarray(out) + self.bias)\n out = self.std.transform(out)\n\n if self.switcher.get(self.case) == 'sqrt_std':\n out = np.sqrt(np.asarray(input_data / self.scale))\n out = self.std.transform(out)\n\n if self.switcher.get(self.case) == 'cbrt_std':\n out = np.cbrt(np.asarray(input_data / self.scale))\n out = self.std.transform(out)\n\n if self.switcher.get(self.case) == 'nrt_std':\n out = np.power(np.asarray(input_data / self.scale),1/4)\n out = self.std.transform(out)\n\n if self.switcher.get(self.case) == 'tan':\n out = self.std.transform(input_data)\n out = self.norm.transform(out)\n out = np.tan(out / (2 * np.pi + self.bias))\n\n return out\n\n def inverse_transform(self, input_data):\n\n if self.switcher.get(self.case) == 'min_std':\n out = self.std.inverse_transform(input_data)\n out = self.norm.inverse_transform(out)\n\n if self.switcher.get(self.case) == 'std2':\n out = self.std.inverse_transform(input_data)\n\n if self.switcher.get(self.case) == 'std_min':\n out = self.norm.inverse_transform(input_data)\n out = self.std.inverse_transform(out)\n\n if self.switcher.get(self.case) == 'min':\n out = self.norm.inverse_transform(input_data)\n\n if self.switcher.get(self.case) == 'no':\n out = input_data\n\n if self.switcher.get(self.case) == 'log_min':\n out = self.norm.inverse_transform(input_data)\n out = (np.exp(-out) - self.bias) * self.scale\n\n if self.switcher.get(self.case) == 'log_std':\n out = self.std.inverse_transform(input_data)\n out = (np.exp(-out) - self.bias) * self.scale\n\n if self.switcher.get(self.case) == 'log2':\n out = self.std.inverse_transform(input_data)\n out = np.exp(out) - self.bias\n out = self.norm.inverse_transform(out)\n\n if self.switcher.get(self.case) == 'sqrt_std':\n out = self.std.inverse_transform(input_data)\n out = np.power(out,2) * self.scale\n\n if self.switcher.get(self.case) == 'cbrt_std':\n out = self.std.inverse_transform(input_data)\n out = np.power(out,3) * self.scale\n\n if self.switcher.get(self.case) == 'nrt_std':\n out = self.std.inverse_transform(input_data)\n out = np.power(out,4) * self.scale\n\n if self.switcher.get(self.case) == 'tan':\n out = (2 * np.pi + self.bias) * np.arctan(input_data)\n out = self.norm.inverse_transform(out)\n out = self.std.inverse_transform(out)\n\n return out\n\n \n\ndef read_h5_data(fileName, input_features, labels):\n df = pd.read_hdf(fileName)\n# df = df[df['f']<0.45]\n# for i in range(5):\n# pv_101=df[df['pv']==1]\n# pv_101['pv']=pv_101['pv']+0.002*(i+1)\n# df = pd.concat([df,pv_101])\n \n input_df=df[input_features]\n in_scaler = data_scaler()\n input_np = in_scaler.fit_transform(input_df.values,'std2')\n\n label_df=df[labels].clip(0)\n# if 'PVs' in labels:\n# label_df['PVs']=np.log(label_df['PVs']+1)\n out_scaler = data_scaler()\n label_np = out_scaler.fit_transform(label_df.values,'cbrt_std')\n\n return input_np, label_np, df, in_scaler, out_scaler",
"_____no_output_____"
],
[
"# labels = ['CH4','O2','H2O','CO','CO2','T','PVs','psi','mu','alpha']\n# labels = ['T','PVs']\n# labels = ['T','CH4','O2','CO2','CO','H2O','H2','OH','psi']\n# labels = ['CH2OH','HNCO','CH3OH', 'CH2CHO', 'CH2O', 'C3H8', 'HNO', 'NH2', 'HCN']\n\n# labels = np.random.choice(col_labels,20,replace=False).tolist()\n# labels.append('PVs')\n# labels = col_labels\n# labels= ['CH4', 'CH2O', 'CH3O', 'H', 'O2', 'H2', 'O', 'OH', 'H2O', 'HO2', 'H2O2', \n# 'C', 'CH', 'CH2', 'CH2(S)', 'CH3', 'CO', 'CO2', 'HCO', 'CH2OH', 'CH3OH', \n# 'C2H', 'C2H2', 'C2H3', 'C2H4', 'C2H5', 'C2H6', 'HCCO', 'CH2CO', 'HCCOH', \n# 'N', 'NH', 'NH2', 'NH3', 'NNH', 'NO', 'NO2', 'N2O', 'HNO', 'CN', 'HCN', \n# 'H2CN', 'HCNN', 'HCNO', 'HNCO', 'NCO', 'N2', 'AR', 'C3H7', 'C3H8', 'CH2CHO', 'CH3CHO', 'T', 'PVs']\n# labels.remove('AR')\n# labels.remove('N2')\nlabels = ['H2', 'H', 'O', 'O2', 'OH', 'H2O', 'HO2', 'CH3', 'CH4', 'CO', 'CO2', 'CH2O', 'N2', 'T', 'PVs']\n\nprint(labels)\n\ninput_features=['f','zeta','pv']\n\n# read in the data\nx_input, y_label, df, in_scaler, out_scaler = read_h5_data('../data/tables_of_fgm_psi.h5',input_features=input_features, labels = labels)",
"['H2', 'H', 'O', 'O2', 'OH', 'H2O', 'HO2', 'CH3', 'CH4', 'CO', 'CO2', 'CH2O', 'N2', 'T', 'PVs']\n"
],
[
"from sklearn.metrics import r2_score\nfrom sklearn.model_selection import train_test_split\n\nx_train, x_test, y_train, y_test = train_test_split(x_input,y_label, test_size=0.01)\n\nx_test_df = pd.DataFrame(in_scaler.inverse_transform(x_test),columns=input_features)\ny_test_df = pd.DataFrame(out_scaler.inverse_transform(y_test),columns=labels)\n\n\npredict_val = model.predict(x_test,batch_size=1024*8)\n# predict_val = model.predict(x_test,batch_size=1024*8)\npredict_df = pd.DataFrame(out_scaler.inverse_transform(predict_val), columns=labels)\n\ntest_data=pd.concat([x_test_df,y_test_df],axis=1)\npred_data=pd.concat([x_test_df,predict_df],axis=1)\n\n!rm sim_check.h5\ntest_data.to_hdf('sim_check.h5',key='test')\npred_data.to_hdf('sim_check.h5',key='pred')\n\ndf_test=pd.read_hdf('sim_check.h5',key='test')\ndf_pred=pd.read_hdf('sim_check.h5',key='pred')\n\nzeta_level=list(set(df_test['zeta']))\nzeta_level.sort()\n\n\nres_sum=pd.DataFrame()\nr2s=[]\nr2s_i=[]\n\nnames=[]\nmaxs_0=[]\nmaxs_9=[]\n\nfor r2,name in zip(r2_score(df_test,df_pred,multioutput='raw_values'),df_test.columns):\n names.append(name)\n r2s.append(r2)\n maxs_0.append(df_test[df_test['zeta']==zeta_level[0]][name].max())\n maxs_9.append(df_test[df_test['zeta']==zeta_level[8]][name].max())\n for i in zeta_level:\n r2s_i.append(r2_score(df_pred[df_pred['zeta']==i][name],\n df_test[df_test['zeta']==i][name]))\n\nres_sum['name']=names\n# res_sum['max_0']=maxs_0\n# res_sum['max_9']=maxs_9\nres_sum['z_scale']=[m_9/(m_0+1e-20) for m_9,m_0 in zip(maxs_9,maxs_0)]\n# res_sum['r2']=r2s\n\n\ntmp=np.asarray(r2s_i).reshape(-1,10)\nfor idx,z in enumerate(zeta_level):\n res_sum['r2s_'+str(z)]=tmp[:,idx]\n\nres_sum[3:]",
"_____no_output_____"
],
[
"from sklearn.metrics import r2_score\nfrom sklearn.model_selection import train_test_split\n\nx_train, x_test, y_train, y_test = train_test_split(x_input,y_label, test_size=0.01)\n\nx_test_df = pd.DataFrame(in_scaler.inverse_transform(x_test),columns=input_features)\ny_test_df = pd.DataFrame(out_scaler.inverse_transform(y_test),columns=labels)\n\n\npredict_val = student_model.predict(x_test,batch_size=1024*8)\n# predict_val = model.predict(x_test,batch_size=1024*8)\npredict_df = pd.DataFrame(out_scaler.inverse_transform(predict_val), columns=labels)\n\ntest_data=pd.concat([x_test_df,y_test_df],axis=1)\npred_data=pd.concat([x_test_df,predict_df],axis=1)\n\n!rm sim_check.h5\ntest_data.to_hdf('sim_check.h5',key='test')\npred_data.to_hdf('sim_check.h5',key='pred')\n\ndf_test=pd.read_hdf('sim_check.h5',key='test')\ndf_pred=pd.read_hdf('sim_check.h5',key='pred')\n\nzeta_level=list(set(df_test['zeta']))\nzeta_level.sort()\n\n\nres_sum=pd.DataFrame()\nr2s=[]\nr2s_i=[]\n\nnames=[]\nmaxs_0=[]\nmaxs_9=[]\n\nfor r2,name in zip(r2_score(df_test,df_pred,multioutput='raw_values'),df_test.columns):\n names.append(name)\n r2s.append(r2)\n maxs_0.append(df_test[df_test['zeta']==zeta_level[0]][name].max())\n maxs_9.append(df_test[df_test['zeta']==zeta_level[8]][name].max())\n for i in zeta_level:\n r2s_i.append(r2_score(df_pred[df_pred['zeta']==i][name],\n df_test[df_test['zeta']==i][name]))\n\nres_sum['name']=names\n# res_sum['max_0']=maxs_0\n# res_sum['max_9']=maxs_9\nres_sum['z_scale']=[m_9/(m_0+1e-20) for m_9,m_0 in zip(maxs_9,maxs_0)]\n# res_sum['r2']=r2s\n\n\ntmp=np.asarray(r2s_i).reshape(-1,10)\nfor idx,z in enumerate(zeta_level):\n res_sum['r2s_'+str(z)]=tmp[:,idx]\n\nres_sum[3:]",
"_____no_output_____"
],
[
"from sklearn.metrics import r2_score\nfrom sklearn.model_selection import train_test_split\n\nx_train, x_test, y_train, y_test = train_test_split(x_input,y_label, test_size=0.01)\n\nx_test_df = pd.DataFrame(in_scaler.inverse_transform(x_test),columns=input_features)\ny_test_df = pd.DataFrame(out_scaler.inverse_transform(y_test),columns=labels)\n\n\npredict_val = model.predict(x_test,batch_size=1024*8)\n# predict_val = model.predict(x_test,batch_size=1024*8)\npredict_df = pd.DataFrame(out_scaler.inverse_transform(predict_val), columns=labels)\n\ntest_data=pd.concat([x_test_df,y_test_df],axis=1)\npred_data=pd.concat([x_test_df,predict_df],axis=1)\n\n!rm sim_check.h5\ntest_data.to_hdf('sim_check.h5',key='test')\npred_data.to_hdf('sim_check.h5',key='pred')\n\ndf_test=pd.read_hdf('sim_check.h5',key='test')\ndf_pred=pd.read_hdf('sim_check.h5',key='pred')\n\nzeta_level=list(set(df_test['zeta']))\nzeta_level.sort()\n\n\nres_sum=pd.DataFrame()\nr2s=[]\nr2s_i=[]\n\nnames=[]\nmaxs_0=[]\nmaxs_9=[]\n\nfor r2,name in zip(r2_score(df_test,df_pred,multioutput='raw_values'),df_test.columns):\n names.append(name)\n r2s.append(r2)\n maxs_0.append(df_test[df_test['zeta']==zeta_level[0]][name].max())\n maxs_9.append(df_test[df_test['zeta']==zeta_level[8]][name].max())\n for i in zeta_level:\n r2s_i.append(r2_score(df_pred[df_pred['zeta']==i][name],\n df_test[df_test['zeta']==i][name]))\n\nres_sum['name']=names\n# res_sum['max_0']=maxs_0\n# res_sum['max_9']=maxs_9\nres_sum['z_scale']=[m_9/(m_0+1e-20) for m_9,m_0 in zip(maxs_9,maxs_0)]\n# res_sum['r2']=r2s\n\n\ntmp=np.asarray(r2s_i).reshape(-1,10)\nfor idx,z in enumerate(zeta_level):\n res_sum['r2s_'+str(z)]=tmp[:,idx]\n\nres_sum[3:]",
"_____no_output_____"
],
[
"from sklearn.metrics import r2_score\nfrom sklearn.model_selection import train_test_split\n\nx_train, x_test, y_train, y_test = train_test_split(x_input,y_label, test_size=0.01)\n\nx_test_df = pd.DataFrame(in_scaler.inverse_transform(x_test),columns=input_features)\ny_test_df = pd.DataFrame(out_scaler.inverse_transform(y_test),columns=labels)\n\n\npredict_val = model.predict(x_test,batch_size=1024*8)\n# predict_val = model.predict(x_test,batch_size=1024*8)\npredict_df = pd.DataFrame(out_scaler.inverse_transform(predict_val), columns=labels)\n\ntest_data=pd.concat([x_test_df,y_test_df],axis=1)\npred_data=pd.concat([x_test_df,predict_df],axis=1)\n\n!rm sim_check.h5\ntest_data.to_hdf('sim_check.h5',key='test')\npred_data.to_hdf('sim_check.h5',key='pred')\n\ndf_test=pd.read_hdf('sim_check.h5',key='test')\ndf_pred=pd.read_hdf('sim_check.h5',key='pred')\n\nzeta_level=list(set(df_test['zeta']))\nzeta_level.sort()\n\n\nres_sum=pd.DataFrame()\nr2s=[]\nr2s_i=[]\n\nnames=[]\nmaxs_0=[]\nmaxs_9=[]\n\nfor r2,name in zip(r2_score(df_test,df_pred,multioutput='raw_values'),df_test.columns):\n names.append(name)\n r2s.append(r2)\n maxs_0.append(df_test[df_test['zeta']==zeta_level[0]][name].max())\n maxs_9.append(df_test[df_test['zeta']==zeta_level[8]][name].max())\n for i in zeta_level:\n r2s_i.append(r2_score(df_pred[df_pred['zeta']==i][name],\n df_test[df_test['zeta']==i][name]))\n\nres_sum['name']=names\n# res_sum['max_0']=maxs_0\n# res_sum['max_9']=maxs_9\nres_sum['z_scale']=[m_9/(m_0+1e-20) for m_9,m_0 in zip(maxs_9,maxs_0)]\n# res_sum['r2']=r2s\n\n\ntmp=np.asarray(r2s_i).reshape(-1,10)\nfor idx,z in enumerate(zeta_level):\n res_sum['r2s_'+str(z)]=tmp[:,idx]\n\nres_sum[3:]",
"_____no_output_____"
],
[
"#@title import plotly\nimport plotly.plotly as py\nimport numpy as np\nfrom plotly.offline import init_notebook_mode, iplot\n# from plotly.graph_objs import Contours, Histogram2dContour, Marker, Scatter\nimport plotly.graph_objs as go\n\ndef configure_plotly_browser_state():\n import IPython\n display(IPython.core.display.HTML('''\n <script src=\"/static/components/requirejs/require.js\"></script>\n <script>\n requirejs.config({\n paths: {\n base: '/static/base',\n plotly: 'https://cdn.plot.ly/plotly-1.5.1.min.js?noext',\n },\n });\n </script>\n '''))",
"_____no_output_____"
],
[
"#@title Default title text\n# species = np.random.choice(labels)\nspecies = 'HNO' #@param {type:\"string\"}\nz_level = 0 #@param {type:\"integer\"}\n\n# configure_plotly_browser_state()\n# init_notebook_mode(connected=False)\n\nfrom sklearn.metrics import r2_score\n\n\ndf_t=df_test[df_test['zeta']==zeta_level[z_level]].sample(frac=1)\n# df_p=df_pred.loc[df_pred['zeta']==zeta_level[1]].sample(frac=0.1)\ndf_p=df_pred.loc[df_t.index]\n# error=(df_p[species]-df_t[species])\nerror=(df_p[species]-df_t[species])/(df_p[species]+df_t[species])\nr2=round(r2_score(df_p[species],df_t[species]),4)\n\nprint(species,'r2:',r2,'max:',df_t[species].max())\n\nfig_db = {\n 'data': [ \n {'name':'test data from table',\n 'x': df_t['f'],\n 'y': df_t['pv'],\n 'z': df_t[species],\n 'type':'scatter3d', \n 'mode': 'markers',\n 'marker':{\n 'size':1\n }\n },\n {'name':'prediction from neural networks',\n 'x': df_p['f'],\n 'y': df_p['pv'],\n 'z': df_p[species],\n 'type':'scatter3d', \n 'mode': 'markers',\n 'marker':{\n 'size':1\n },\n },\n {'name':'error in difference',\n 'x': df_p['f'],\n 'y': df_p['pv'],\n 'z': error,\n 'type':'scatter3d', \n 'mode': 'markers',\n 'marker':{\n 'size':1\n },\n } \n ],\n 'layout': {\n 'scene':{\n 'xaxis': {'title':'mixture fraction'},\n 'yaxis': {'title':'progress variable'},\n 'zaxis': {'title': species+'_r2:'+str(r2)}\n }\n }\n}\n# iplot(fig_db, filename='multiple-scatter')\niplot(fig_db)\n",
"HNO r2: 0.9998 max: 3.916200000000001e-08\n"
],
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nz=0.22\nsp='HNO'\nplt.plot(df[(df.pv==1)&(df.zeta==z)]['f'],df[(df.pv==0.9)&(df.zeta==z)][sp],'rd')",
"_____no_output_____"
],
[
"from keras.models import Model\nfrom keras.layers import Dense, Input, Dropout\n\n\n\nn_neuron = 100\n# %%\nprint('set up student network')\n# ANN parameters\ndim_input = x_train.shape[1]\ndim_label = y_train.shape[1]\n\nbatch_norm = False\n\n# This returns a tensor\ninputs = Input(shape=(dim_input,),name='input_1')\n\n# a layer instance is callable on a tensor, and returns a tensor\nx = Dense(n_neuron, activation='relu')(inputs)\nx = Dense(n_neuron, activation='relu')(x)\nx = Dense(n_neuron, activation='relu')(x)\n# x = Dropout(0.1)(x)\npredictions = Dense(dim_label, activation='linear', name='output_1')(x)\n\nstudent_model = Model(inputs=inputs, outputs=predictions)\nstudent_model.summary()",
"set up student network\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) (None, 3) 0 \n_________________________________________________________________\ndense_3 (Dense) (None, 100) 400 \n_________________________________________________________________\ndense_4 (Dense) (None, 100) 10100 \n_________________________________________________________________\ndense_5 (Dense) (None, 100) 10100 \n_________________________________________________________________\noutput_1 (Dense) (None, 10) 1010 \n=================================================================\nTotal params: 21,610\nTrainable params: 21,610\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"import keras.backend as K\nfrom keras.callbacks import LearningRateScheduler\nimport math\n\ndef cubic_loss(y_true, y_pred):\n return K.mean(K.square(y_true - y_pred)*K.abs(y_true - y_pred), axis=-1)\n\ndef coeff_r2(y_true, y_pred):\n from keras import backend as K\n SS_res = K.sum(K.square( y_true-y_pred ))\n SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )\n return ( 1 - SS_res/(SS_tot + K.epsilon()) )\n\n \ndef step_decay(epoch):\n initial_lrate = 0.002\n drop = 0.5\n epochs_drop = 1000.0\n lrate = initial_lrate * math.pow(drop,math.floor((1+epoch)/epochs_drop))\n return lrate\n \nlrate = LearningRateScheduler(step_decay)\n\nclass SGDRScheduler(Callback):\n '''Cosine annealing learning rate scheduler with periodic restarts.\n # Usage\n ```python\n schedule = SGDRScheduler(min_lr=1e-5,\n max_lr=1e-2,\n steps_per_epoch=np.ceil(epoch_size/batch_size),\n lr_decay=0.9,\n cycle_length=5,\n mult_factor=1.5)\n model.fit(X_train, Y_train, epochs=100, callbacks=[schedule])\n ```\n # Arguments\n min_lr: The lower bound of the learning rate range for the experiment.\n max_lr: The upper bound of the learning rate range for the experiment.\n steps_per_epoch: Number of mini-batches in the dataset. Calculated as `np.ceil(epoch_size/batch_size)`.\n lr_decay: Reduce the max_lr after the completion of each cycle.\n Ex. To reduce the max_lr by 20% after each cycle, set this value to 0.8.\n cycle_length: Initial number of epochs in a cycle.\n mult_factor: Scale epochs_to_restart after each full cycle completion.\n # References\n Blog post: jeremyjordan.me/nn-learning-rate\n Original paper: http://arxiv.org/abs/1608.03983\n '''\n def __init__(self,\n min_lr,\n max_lr,\n steps_per_epoch,\n lr_decay=1,\n cycle_length=10,\n mult_factor=2):\n\n self.min_lr = min_lr\n self.max_lr = max_lr\n self.lr_decay = lr_decay\n\n self.batch_since_restart = 0\n self.next_restart = cycle_length\n\n self.steps_per_epoch = steps_per_epoch\n\n self.cycle_length = cycle_length\n self.mult_factor = mult_factor\n\n self.history = {}\n\n def clr(self):\n '''Calculate the learning rate.'''\n fraction_to_restart = self.batch_since_restart / (self.steps_per_epoch * self.cycle_length)\n lr = self.min_lr + 0.5 * (self.max_lr - self.min_lr) * (1 + np.cos(fraction_to_restart * np.pi))\n return lr\n\n def on_train_begin(self, logs={}):\n '''Initialize the learning rate to the minimum value at the start of training.'''\n logs = logs or {}\n K.set_value(self.model.optimizer.lr, self.max_lr)\n\n def on_batch_end(self, batch, logs={}):\n '''Record previous batch statistics and update the learning rate.'''\n logs = logs or {}\n self.history.setdefault('lr', []).append(K.get_value(self.model.optimizer.lr))\n for k, v in logs.items():\n self.history.setdefault(k, []).append(v)\n\n self.batch_since_restart += 1\n K.set_value(self.model.optimizer.lr, self.clr())\n\n def on_epoch_end(self, epoch, logs={}):\n '''Check for end of current cycle, apply restarts when necessary.'''\n if epoch + 1 == self.next_restart:\n self.batch_since_restart = 0\n self.cycle_length = np.ceil(self.cycle_length * self.mult_factor)\n self.next_restart += self.cycle_length\n self.max_lr *= self.lr_decay\n self.best_weights = self.model.get_weights()\n\n def on_train_end(self, logs={}):\n '''Set weights to the values from the end of the most recent cycle for best performance.'''\n self.model.set_weights(self.best_weights)",
"_____no_output_____"
],
[
"student_model = load_model('student.h5',custom_objects={'coeff_r2':coeff_r2})",
"_____no_output_____"
],
[
"model.summary()",
"__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 3) 0 \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 900) 3600 input_1[0][0] \n__________________________________________________________________________________________________\nres1a_branch2a (Dense) (None, 900) 810900 dense_1[0][0] \n__________________________________________________________________________________________________\nactivation_1 (Activation) (None, 900) 0 res1a_branch2a[0][0] \n__________________________________________________________________________________________________\nres1a_branch2b (Dense) (None, 900) 810900 activation_1[0][0] \n__________________________________________________________________________________________________\nadd_1 (Add) (None, 900) 0 res1a_branch2b[0][0] \n dense_1[0][0] \n__________________________________________________________________________________________________\nactivation_2 (Activation) (None, 900) 0 add_1[0][0] \n__________________________________________________________________________________________________\nres1b_branch2a (Dense) (None, 900) 810900 activation_2[0][0] \n__________________________________________________________________________________________________\nactivation_3 (Activation) (None, 900) 0 res1b_branch2a[0][0] \n__________________________________________________________________________________________________\nres1b_branch2b (Dense) (None, 900) 810900 activation_3[0][0] \n__________________________________________________________________________________________________\nadd_2 (Add) (None, 900) 0 res1b_branch2b[0][0] \n activation_2[0][0] \n__________________________________________________________________________________________________\nactivation_4 (Activation) (None, 900) 0 add_2[0][0] \n__________________________________________________________________________________________________\ndense_2 (Dense) (None, 100) 90100 activation_4[0][0] \n__________________________________________________________________________________________________\ndense_3 (Dense) (None, 10) 1010 dense_2[0][0] \n==================================================================================================\nTotal params: 3,338,310\nTrainable params: 3,338,310\nNon-trainable params: 0\n__________________________________________________________________________________________________\n"
],
[
"gx,gy,gz=np.mgrid[0:1:600j,0:1:10j,0:1:600j]\ngx=gx.reshape(-1,1)\ngy=gy.reshape(-1,1)\ngz=gz.reshape(-1,1)\ngm=np.hstack([gx,gy,gz])\ngm.shape",
"_____no_output_____"
],
[
"from keras.callbacks import ModelCheckpoint\nfrom keras import optimizers\nbatch_size = 1024*16\nepochs = 2000\nvsplit = 0.1\n\nloss_type='mse'\n\nadam_op = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999,epsilon=1e-8, decay=0.0, amsgrad=False)\n\nstudent_model.compile(loss=loss_type,\n# optimizer=adam_op,\n optimizer='adam',\n metrics=[coeff_r2])\n# model.compile(loss=cubic_loss, optimizer=adam_op, metrics=['accuracy'])\n\n# checkpoint (save the best model based validate loss)\n!mkdir ./tmp\nfilepath = \"./tmp/student_weights.best.cntk.hdf5\"\n\ncheckpoint = ModelCheckpoint(filepath,\n monitor='val_loss',\n verbose=1,\n save_best_only=True,\n mode='min',\n period=20)\n\nepoch_size=x_train.shape[0]\na=0\nbase=2\nclc=2\nfor i in range(5):\n a+=base*clc**(i)\nprint(a)\nepochs,c_len = a,base\nschedule = SGDRScheduler(min_lr=1e-5,max_lr=1e-4,\n steps_per_epoch=np.ceil(epoch_size/batch_size),\n cycle_length=c_len,lr_decay=0.8,mult_factor=2)\n\ncallbacks_list = [checkpoint]\n# callbacks_list = [checkpoint, schedule]\n\nx_train_teacher = in_scaler.transform(gm)\ny_train_teacher = model.predict(x_train_teacher, batch_size=1024*8)\nx_train, x_test, y_train, y_test = train_test_split(x_train_teacher,y_train_teacher, test_size=0.01)\n# fit the model\nhistory = student_model.fit(\n x_train, y_train,\n epochs=epochs,\n batch_size=batch_size,\n validation_split=vsplit,\n verbose=2,\n callbacks=callbacks_list,\n shuffle=True)",
"mkdir: cannot create directory ‘./tmp’: File exists\n62\nTrain on 3207600 samples, validate on 356400 samples\nEpoch 1/62\n - 4s - loss: 1.5372e-04 - coeff_r2: 0.9998 - val_loss: 7.0398e-05 - val_coeff_r2: 0.9999\nEpoch 2/62\n - 2s - loss: 7.1192e-05 - coeff_r2: 0.9999 - val_loss: 6.9974e-05 - val_coeff_r2: 0.9999\nEpoch 3/62\n - 2s - loss: 7.3368e-05 - coeff_r2: 0.9999 - val_loss: 7.0930e-05 - val_coeff_r2: 0.9999\nEpoch 4/62\n - 2s - loss: 7.6598e-05 - coeff_r2: 0.9999 - val_loss: 7.6345e-05 - val_coeff_r2: 0.9999\nEpoch 5/62\n - 2s - loss: 7.8491e-05 - coeff_r2: 0.9999 - val_loss: 8.4268e-05 - val_coeff_r2: 0.9999\nEpoch 6/62\n - 2s - loss: 7.6528e-05 - coeff_r2: 0.9999 - val_loss: 8.0171e-05 - val_coeff_r2: 0.9999\nEpoch 7/62\n - 2s - loss: 7.8521e-05 - coeff_r2: 0.9999 - val_loss: 8.1167e-05 - val_coeff_r2: 0.9999\nEpoch 8/62\n - 2s - loss: 7.9864e-05 - coeff_r2: 0.9999 - val_loss: 8.6915e-05 - val_coeff_r2: 0.9999\nEpoch 9/62\n - 2s - loss: 8.4223e-05 - coeff_r2: 0.9999 - val_loss: 9.6007e-05 - val_coeff_r2: 0.9999\nEpoch 10/62\n - 2s - loss: 8.1266e-05 - coeff_r2: 0.9999 - val_loss: 8.9177e-05 - val_coeff_r2: 0.9999\nEpoch 11/62\n - 2s - loss: 7.9030e-05 - coeff_r2: 0.9999 - val_loss: 7.5455e-05 - val_coeff_r2: 0.9999\nEpoch 12/62\n - 2s - loss: 8.1360e-05 - coeff_r2: 0.9999 - val_loss: 8.6248e-05 - val_coeff_r2: 0.9999\nEpoch 13/62\n - 2s - loss: 7.9479e-05 - coeff_r2: 0.9999 - val_loss: 7.0483e-05 - val_coeff_r2: 0.9999\nEpoch 14/62\n - 2s - loss: 8.0523e-05 - coeff_r2: 0.9999 - val_loss: 7.0475e-05 - val_coeff_r2: 0.9999\nEpoch 15/62\n - 2s - loss: 8.3205e-05 - coeff_r2: 0.9999 - val_loss: 7.8536e-05 - val_coeff_r2: 0.9999\nEpoch 16/62\n - 3s - loss: 7.8259e-05 - coeff_r2: 0.9999 - val_loss: 7.8073e-05 - val_coeff_r2: 0.9999\nEpoch 17/62\n - 3s - loss: 7.9226e-05 - coeff_r2: 0.9999 - val_loss: 9.0970e-05 - val_coeff_r2: 0.9999\nEpoch 18/62\n - 2s - loss: 7.8309e-05 - coeff_r2: 0.9999 - val_loss: 7.5601e-05 - val_coeff_r2: 0.9999\nEpoch 19/62\n - 2s - loss: 7.7620e-05 - coeff_r2: 0.9999 - val_loss: 7.7259e-05 - val_coeff_r2: 0.9999\nEpoch 20/62\n - 2s - loss: 8.2445e-05 - coeff_r2: 0.9999 - val_loss: 7.8942e-05 - val_coeff_r2: 0.9999\n\nEpoch 00020: val_loss improved from inf to 0.00008, saving model to ./tmp/student_weights.best.cntk.hdf5\nEpoch 21/62\n - 3s - loss: 7.9240e-05 - coeff_r2: 0.9999 - val_loss: 8.6459e-05 - val_coeff_r2: 0.9999\nEpoch 22/62\n - 2s - loss: 7.7118e-05 - coeff_r2: 0.9999 - val_loss: 7.7388e-05 - val_coeff_r2: 0.9999\nEpoch 23/62\n - 2s - loss: 7.8313e-05 - coeff_r2: 0.9999 - val_loss: 7.1780e-05 - val_coeff_r2: 0.9999\nEpoch 24/62\n - 3s - loss: 8.6743e-05 - coeff_r2: 0.9999 - val_loss: 7.8492e-05 - val_coeff_r2: 0.9999\nEpoch 25/62\n - 2s - loss: 7.6938e-05 - coeff_r2: 0.9999 - val_loss: 6.8764e-05 - val_coeff_r2: 0.9999\nEpoch 26/62\n - 2s - loss: 7.7853e-05 - coeff_r2: 0.9999 - val_loss: 7.2816e-05 - val_coeff_r2: 0.9999\nEpoch 27/62\n - 2s - loss: 7.7118e-05 - coeff_r2: 0.9999 - val_loss: 6.7090e-05 - val_coeff_r2: 0.9999\nEpoch 28/62\n - 2s - loss: 7.7613e-05 - coeff_r2: 0.9999 - val_loss: 7.3566e-05 - val_coeff_r2: 0.9999\nEpoch 29/62\n - 2s - loss: 8.2237e-05 - coeff_r2: 0.9999 - val_loss: 8.0517e-05 - val_coeff_r2: 0.9999\nEpoch 30/62\n - 2s - loss: 7.5754e-05 - coeff_r2: 0.9999 - val_loss: 7.9571e-05 - val_coeff_r2: 0.9999\nEpoch 31/62\n - 2s - loss: 7.7426e-05 - coeff_r2: 0.9999 - val_loss: 7.5532e-05 - val_coeff_r2: 0.9999\nEpoch 32/62\n - 2s - loss: 7.6448e-05 - coeff_r2: 0.9999 - val_loss: 9.2721e-05 - val_coeff_r2: 0.9999\nEpoch 33/62\n - 
2s - loss: 7.8374e-05 - coeff_r2: 0.9999 - val_loss: 7.0651e-05 - val_coeff_r2: 0.9999\nEpoch 34/62\n - 3s - loss: 8.3776e-05 - coeff_r2: 0.9999 - val_loss: 8.8142e-05 - val_coeff_r2: 0.9999\nEpoch 35/62\n - 2s - loss: 7.5800e-05 - coeff_r2: 0.9999 - val_loss: 6.7747e-05 - val_coeff_r2: 0.9999\nEpoch 36/62\n - 2s - loss: 7.5429e-05 - coeff_r2: 0.9999 - val_loss: 7.0590e-05 - val_coeff_r2: 0.9999\nEpoch 37/62\n - 2s - loss: 7.6417e-05 - coeff_r2: 0.9999 - val_loss: 7.7214e-05 - val_coeff_r2: 0.9999\nEpoch 38/62\n - 2s - loss: 7.7997e-05 - coeff_r2: 0.9999 - val_loss: 9.1887e-05 - val_coeff_r2: 0.9999\nEpoch 39/62\n - 2s - loss: 7.6255e-05 - coeff_r2: 0.9999 - val_loss: 7.5994e-05 - val_coeff_r2: 0.9999\nEpoch 40/62\n - 2s - loss: 7.4470e-05 - coeff_r2: 0.9999 - val_loss: 6.8878e-05 - val_coeff_r2: 0.9999\n\nEpoch 00040: val_loss improved from 0.00008 to 0.00007, saving model to ./tmp/student_weights.best.cntk.hdf5\nEpoch 41/62\n - 3s - loss: 7.6974e-05 - coeff_r2: 0.9999 - val_loss: 7.2147e-05 - val_coeff_r2: 0.9999\nEpoch 42/62\n - 2s - loss: 7.8337e-05 - coeff_r2: 0.9999 - val_loss: 8.5127e-05 - val_coeff_r2: 0.9999\nEpoch 43/62\n - 3s - loss: 7.7425e-05 - coeff_r2: 0.9999 - val_loss: 7.4471e-05 - val_coeff_r2: 0.9999\nEpoch 44/62\n - 2s - loss: 7.7451e-05 - coeff_r2: 0.9999 - val_loss: 7.8081e-05 - val_coeff_r2: 0.9999\nEpoch 45/62\n - 3s - loss: 7.4969e-05 - coeff_r2: 0.9999 - val_loss: 7.0336e-05 - val_coeff_r2: 0.9999\nEpoch 46/62\n - 2s - loss: 8.0301e-05 - coeff_r2: 0.9999 - val_loss: 8.4031e-05 - val_coeff_r2: 0.9999\nEpoch 47/62\n - 2s - loss: 7.4756e-05 - coeff_r2: 0.9999 - val_loss: 7.3626e-05 - val_coeff_r2: 0.9999\nEpoch 48/62\n - 2s - loss: 7.4078e-05 - coeff_r2: 0.9999 - val_loss: 7.2226e-05 - val_coeff_r2: 0.9999\nEpoch 49/62\n - 2s - loss: 7.5466e-05 - coeff_r2: 0.9999 - val_loss: 7.9103e-05 - val_coeff_r2: 0.9999\nEpoch 50/62\n - 3s - loss: 7.9991e-05 - coeff_r2: 0.9999 - val_loss: 7.3817e-05 - val_coeff_r2: 0.9999\nEpoch 51/62\n - 3s - loss: 7.3744e-05 - coeff_r2: 0.9999 - val_loss: 7.4004e-05 - val_coeff_r2: 0.9999\nEpoch 52/62\n - 2s - loss: 7.4640e-05 - coeff_r2: 0.9999 - val_loss: 6.9226e-05 - val_coeff_r2: 0.9999\nEpoch 53/62\n - 2s - loss: 7.1235e-05 - coeff_r2: 0.9999 - val_loss: 6.5585e-05 - val_coeff_r2: 0.9999\nEpoch 54/62\n - 3s - loss: 7.2866e-05 - coeff_r2: 0.9999 - val_loss: 7.6260e-05 - val_coeff_r2: 0.9999\nEpoch 55/62\n - 2s - loss: 7.6232e-05 - coeff_r2: 0.9999 - val_loss: 6.7503e-05 - val_coeff_r2: 0.9999\nEpoch 56/62\n - 2s - loss: 7.4577e-05 - coeff_r2: 0.9999 - val_loss: 8.2046e-05 - val_coeff_r2: 0.9999\nEpoch 57/62\n - 2s - loss: 7.3923e-05 - coeff_r2: 0.9999 - val_loss: 6.7468e-05 - val_coeff_r2: 0.9999\nEpoch 58/62\n - 2s - loss: 7.1492e-05 - coeff_r2: 0.9999 - val_loss: 6.5927e-05 - val_coeff_r2: 0.9999\nEpoch 59/62\n - 2s - loss: 7.5408e-05 - coeff_r2: 0.9999 - val_loss: 7.4007e-05 - val_coeff_r2: 0.9999\nEpoch 60/62\n - 2s - loss: 7.4182e-05 - coeff_r2: 0.9999 - val_loss: 8.7424e-05 - val_coeff_r2: 0.9999\n\nEpoch 00060: val_loss did not improve from 0.00007\nEpoch 61/62\n - 3s - loss: 7.6069e-05 - coeff_r2: 0.9999 - val_loss: 7.1505e-05 - val_coeff_r2: 0.9999\nEpoch 62/62\n - 2s - loss: 7.6596e-05 - coeff_r2: 0.9999 - val_loss: 8.7942e-05 - val_coeff_r2: 0.9999\n"
],
[
"student_model.save('student_100_3.h5')",
"_____no_output_____"
],
[
"n_res = 501\npv_level = 0.996\nf_1 = np.linspace(0,1,n_res)\nz_1 = np.zeros(n_res)\npv_1 = np.ones(n_res)*pv_level\ncase_1 = np.vstack((f_1,z_1,pv_1))\n# case_1 = np.vstack((pv_1,z_1,f_1))\n\ncase_1 = case_1.T\ncase_1.shape",
"_____no_output_____"
],
[
"out=out_scaler.inverse_transform(model.predict(case_1))\nout=pd.DataFrame(out,columns=labels)\nsp='PVs'\nout.head()",
"_____no_output_____"
],
[
"table_val=df[(df.pv==pv_level) & (df.zeta==0)][sp]\ntable_val.shape",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nplt.plot(f_1,table_val)\nplt.show",
"_____no_output_____"
],
[
"plt.plot(f_1,out[sp])\nplt.show",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"pv_101=df[df['pv']==1][df['zeta']==0]",
"/home/eg/anaconda3/envs/my_dev/lib/python3.6/site-packages/ipykernel_launcher.py:1: UserWarning:\n\nBoolean Series key will be reindexed to match DataFrame index.\n\n"
],
[
"pv_101['pv']=pv_101['pv']+0.01",
"_____no_output_____"
],
[
"a=pd.concat([pv_101,pv_101])",
"_____no_output_____"
],
[
"pv_101.shape",
"_____no_output_____"
],
[
"a.shape",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
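The `SGDRScheduler` defined in the notebook above implements warm restarts: within each cycle the learning rate follows half a cosine from `max_lr` down to `min_lr`. A standalone sketch of the per-batch rule its `clr()` method uses:

```python
import numpy as np

def sgdr_lr(batch_since_restart, steps_per_epoch, cycle_length,
            min_lr=1e-5, max_lr=1e-4):
    # fraction of the current cycle completed, in [0, 1]
    frac = batch_since_restart / (steps_per_epoch * cycle_length)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + np.cos(frac * np.pi))

# lr starts at max_lr and anneals to min_lr over one cycle
print(sgdr_lr(0, 100, 2), sgdr_lr(200, 100, 2))  # 1e-4 ... 1e-5
```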
d0439daafaa3e7133444ee2f3bde96fc50be67d0 | 17,084 | ipynb | Jupyter Notebook | Python/data_scientist_workflow.ipynb | tyler-romero/LungCancerDemo | 30f0733719a5bb3c4277fb438cbbe504071ab93c | [
"MIT"
] | null | null | null | Python/data_scientist_workflow.ipynb | tyler-romero/LungCancerDemo | 30f0733719a5bb3c4277fb438cbbe504071ab93c | [
"MIT"
] | null | null | null | Python/data_scientist_workflow.ipynb | tyler-romero/LungCancerDemo | 30f0733719a5bb3c4277fb438cbbe504071ab93c | [
"MIT"
] | null | null | null | 41.770171 | 186 | 0.671447 | [
[
[
"import pandas as pd\nfrom sklearn.decomposition import IncrementalPCA, PCA\n\nfrom lung_cancer.connection_settings import get_connection_string, TABLE_LABELS, TABLE_FEATURES, TABLE_PCA_FEATURES, IMAGES_FOLDER\nfrom lung_cancer.connection_settings import TABLE_PATIENTS, TABLE_TRAIN_ID, MICROSOFTML_MODEL_NAME, TABLE_PREDICTIONS, FASTTREE_MODEL_NAME, TABLE_CLASSIFIERS\nfrom lung_cancer.lung_cancer_utils import compute_features, train_test_split, average_pool, gather_image_paths, insert_model, create_formula, roc\n\nfrom revoscalepy import rx_import, RxSqlServerData, rx_data_step, RxInSqlServer, RxLocalSeq, rx_set_compute_context\nfrom revoscalepy import RxSqlServerData, RxInSqlServer, RxLocalSeq, rx_set_compute_context, rx_data_step\nfrom microsoftml import rx_fast_trees\nfrom microsoftml import rx_predict as ml_predict",
"_____no_output_____"
],
[
"connection_string = get_connection_string()\nsql = RxInSqlServer(connection_string=connection_string)\nlocal = RxLocalSeq()\nrx_set_compute_context(local)",
"DRIVER={ODBC Driver 13 for SQL Server};SERVER=TYLER-LAPTOP\\TYLERSQLSERVER;PORT=21816;DATABASE=lung_cancer_database;UID=demo;PWD=D@tascience\n"
],
[
"print(\"Gathering patients and labels\")\nquery = \"SELECT patient_id, label FROM {}\".format(TABLE_LABELS)\ndata_sql = RxSqlServerData(sql_query=query, connection_string=connection_string)\ndata = rx_import(data_sql)\n\ndata[\"label\"] = data[\"label\"].astype(bool)\nn_patients = 200 # How many patients do we featurize images for?\ndata = data.head(n_patients)\nprint(data.head())",
"Gathering patients and labels\nRows Read: 1393, Total Rows Processed: 1393, Total Chunk Time: 0.008 seconds \n patient_id label\n0 0015ceb851d7251b8f399e39779d1e7d True\n1 0030a160d58723ff36d73f41b170ec21 False\n2 003f41c78e6acfa92430a057ac0b306e False\n3 006b96310a37b36cccb2ab48d10b49a3 True\n4 008464bb8521d09a42985dd8add3d0d2 True\n"
],
[
"data_to_featurize = gather_image_paths(data, IMAGES_FOLDER)\nprint(data_to_featurize.head())",
"Gathered 195 images for patient #0 with id: 0015ceb851d7251b8f399e39779d1e7d\nGathered 265 images for patient #1 with id: 0030a160d58723ff36d73f41b170ec21\nGathered 233 images for patient #2 with id: 003f41c78e6acfa92430a057ac0b306e\nGathered 173 images for patient #3 with id: 006b96310a37b36cccb2ab48d10b49a3\nGathered 146 images for patient #4 with id: 008464bb8521d09a42985dd8add3d0d2\nGathered 171 images for patient #5 with id: 0092c13f9e00a3717fdc940641f00015\nGathered 123 images for patient #6 with id: 00986bebc45e12038ef0ce3e9962b51a\nGathered 134 images for patient #7 with id: 00cba091fa4ad62cc3200a657aeb957e\nGathered 135 images for patient #8 with id: 00edff4f51a893d80dae2d42a7f45ad1\nGathered 191 images for patient #9 with id: 0121c2845f2b7df060945b072b2515d7\nGathered 217 images for patient #10 with id: 013395589c01aa01f8df81d80fb0e2b8\nGathered 231 images for patient #11 with id: 01de8323fa065a8963533c4a86f2f6c1\nGathered 159 images for patient #12 with id: 01e349d34c06410e1da273add27be25c\nGathered 241 images for patient #13 with id: 01f1140c8e951e2a921b61c9a7e782c2\nGathered 175 images for patient #14 with id: 024efb7a1e67dc820eb61cbdaa090166\nGathered 186 images for patient #15 with id: 0257df465d9e4150adef13303433ff1e\nGathered 159 images for patient #16 with id: 0268f3a7a17412178cfb039e71799a80\nGathered 106 images for patient #17 with id: 026be5d5e652b6a7488669d884ebe297\nGathered 205 images for patient #18 with id: 02801e3bbcc6966cb115a962012c35df\nGathered 183 images for patient #19 with id: 028996723faa7840bb57f57e28275e4c\nGathered 147 images for patient #20 with id: 0334c8242ce7ee1a6c1263096e4cc535\nGathered 149 images for patient #21 with id: 03fb0d0fdb187ee1160f09386b28c3f2\nGathered 135 images for patient #22 with id: 03ff23e445787886f8b0cb192b3c154d\nGathered 160 images for patient #23 with id: 043ed6cb6054cc13804a3dca342fa4d0\nGathered 223 images for patient #24 with id: 0482c444ac838adc5aa00d1064c976c1\nGathered 147 images for patient #25 with id: 04a3187ec2ed4198a25033071897bffc\nGathered 145 images for patient #26 with id: 04a52f49cdbfb8b99789b9e93f1ad319\nGathered 151 images for patient #27 with id: 04a8c47583142181728056310759dea1\nGathered 161 images for patient #28 with id: 04cfc5efa4c8c2a8944c8b9fa6cb04d1\nGathered 140 images for patient #29 with id: 04e5d435fa01b0958e3274be73312cac\nGathered 189 images for patient #30 with id: 04fca9fbec0b803326488ade96897f6e\nGathered 155 images for patient #31 with id: 05609fdb8fa0895ac8a9be373144dac7\nGathered 132 images for patient #32 with id: 059d8c14b2256a2ba4e38ac511700203\nGathered 213 images for patient #33 with id: 05a20caf6ab6df4643644c953f06a5eb\nGathered 135 images for patient #34 with id: 064366faa1a83fdcb18b2538f1717290\nGathered 133 images for patient #35 with id: 0679e5fd67b7441b8094494033f3881f\nGathered 135 images for patient #36 with id: 0700108170c91ea2219006e9484999ef\nGathered 177 images for patient #37 with id: 0708c00f6117ed977bbe1b462b56848c\nGathered 171 images for patient #38 with id: 07349deeea878c723317a1ce42cc7e58\nGathered 313 images for patient #39 with id: 07abb7bec548d1c0ccef088ce934e517\nGathered 132 images for patient #40 with id: 07bca4290a2530091ce1d5f200d9d526\nGathered 184 images for patient #41 with id: 07fdb853ff90ce3c6d5c91f619ed714e\nGathered 134 images for patient #42 with id: 080e6a00e69888fd620894f9fd0714b1\nGathered 323 images for patient #43 with id: 081f4a90f24ac33c14b61b97969b7f81\nGathered 114 images for patient #44 with id: 08528b8817429d12b7ce2bf444d264f9\nGathered 
178 images for patient #45 with id: 0852f5124d69d7f8cd35fa31e1364d29\nGathered 168 images for patient #46 with id: 08643d7b9ce18405fb63f63dda258e76\nGathered 179 images for patient #47 with id: 086f95a932c83faed289854083f48831\nGathered 157 images for patient #48 with id: 0890a698c0a6ce5db48b1467011bf8d2\nGathered 284 images for patient #49 with id: 089b8f10743e449a0f64f8f311dd8a46\nGathered 397 images for patient #50 with id: 08acb3440eb23385724d006403feb585\nGathered 142 images for patient #51 with id: 09b1c678fc1009d84a038cd879be4198\nGathered 178 images for patient #52 with id: 09d7c4a3e1076dcfcae2b0a563a28364\nGathered 175 images for patient #53 with id: 09ee522a3b7dbea48aa6d39afe240129\nGathered 120 images for patient #54 with id: 09fdf599084b816247ba38d95b3c9d80\nGathered 128 images for patient #55 with id: 0a099f2549429d29b32f349e95fb2244\nGathered 133 images for patient #56 with id: 0a0c32c9e08cc2ea76a71649de56be6d\nGathered 110 images for patient #57 with id: 0a38e7597ca26f9374f8ea2770ba870d\nGathered 203 images for patient #58 with id: 0acbebb8d463b4b9ca88cf38431aac69\nGathered 280 images for patient #59 with id: 0bd0e3056cbf23a1cb7f0f0b18446068\nGathered 123 images for patient #60 with id: 0c0de3749d4fe175b7a5098b060982a1\nGathered 164 images for patient #61 with id: 0c37613214faddf8701ca41e6d43f56e\nGathered 244 images for patient #62 with id: 0c59313f52304e25d5a7dcf9877633b1\nGathered 136 images for patient #63 with id: 0c60f4b87afcb3e2dfa65abbbf3ef2f9\nGathered 180 images for patient #64 with id: 0c98fcb55e3f36d0c2b6507f62f4c5f1\nGathered 221 images for patient #65 with id: 0c9d8314f9c69840e25febabb1229fa4\nGathered 147 images for patient #66 with id: 0ca943d821204ceb089510f836a367fd\nGathered 435 images for patient #67 with id: 0d06d764d3c07572074d468b4cff954f\nGathered 183 images for patient #68 with id: 0d19f1c627df49eb223771c28548350e\nGathered 126 images for patient #69 with id: 0d2fcf787026fece4e57be167d079383\nGathered 177 images for patient #70 with id: 0d941a3ad6c889ac451caf89c46cb92a\nGathered 171 images for patient #71 with id: 0ddeb08e9c97227853422bd71a2a695e\nGathered 113 images for patient #72 with id: 0de72529c30fe642bc60dcb75c87f6bd\nGathered 152 images for patient #73 with id: 0e7ffa620c6db473b70c8454f215306b\n"
],
[
"featurized_data = compute_features(data_to_featurize, MICROSOFTML_MODEL_NAME, compute_context=sql)\nprint(featurized_data.head())",
"_____no_output_____"
],
[
"pooled_data = average_pool(data, featurized_data)\nprint(pooled_data)\nfeatures_sql = RxSqlServerData(table=TABLE_FEATURES, connection_string=connection_string)\nrx_data_step(input_data=pooled_data, output_file=features_sql, overwrite=True)",
"_____no_output_____"
],
[
"resample = False\nif resample:\n print(\"Performing Train Test Split\")\n p = 80\n train_test_split(TABLE_TRAIN_ID, TABLE_PATIENTS, p, connection_string=connection_string)",
"_____no_output_____"
],
[
"n = min(485, n_patients) # 485 features is the most that can be handled right now\n#pca = IncrementalPCA(n_components=n, whiten=True, batch_size=100)\npca = PCA(n_components=n, whiten=True)\n\ndef apply_pca(dataset, context):\n dataset = pd.DataFrame(dataset)\n feats = dataset.drop([\"label\", \"patient_id\"], axis=1)\n feats = pca.transform(feats)\n feats = pd.DataFrame(data=feats, index=dataset.index.values, columns=[\"pc\" + str(i) for i in range(feats.shape[1])])\n dataset = pd.concat([dataset[[\"label\", \"patient_id\"]], feats], axis=1)\n return dataset\n\nquery = \"SELECT * FROM {} WHERE patient_id IN (SELECT patient_id FROM {})\".format(TABLE_FEATURES, TABLE_TRAIN_ID)\ntrain_data_sql = RxSqlServerData(sql_query=query, connection_string=connection_string)\ntrain_data = rx_import(input_data=train_data_sql)\ntrain_data = train_data.drop([\"label\", \"patient_id\"], axis=1)\npca.fit(train_data)\n\nrx_set_compute_context(local)\npca_features_sql = RxSqlServerData(table=TABLE_PCA_FEATURES, connection_string=connection_string)\nrx_data_step(input_data=features_sql, output_file=pca_features_sql, overwrite=True, transform_function=apply_pca)",
"_____no_output_____"
],
[
"# Point to the SQL table with the training data\ncolumn_info = {'label': {'type': 'integer'}}\nquery = \"SELECT * FROM {} WHERE patient_id IN (SELECT patient_id FROM {})\".format(TABLE_PCA_FEATURES, TABLE_TRAIN_ID)\nprint(query)\n#train_sql = RxSqlServerData(sql_query=query, connection_string=connection_string, column_info=column_info)\ntrain_sql = RxSqlServerData(sql_query=query, connection_string=connection_string)",
"_____no_output_____"
],
[
"formula = create_formula(train_sql)\nprint(\"Formula:\", formula)",
"_____no_output_____"
],
[
"# Fit a classification model\nclassifier = rx_fast_trees(formula=formula,\n data=train_sql,\n num_trees=500,\n method=\"binary\",\n random_seed=5,\n compute_context=sql)\nprint(classifier)",
"_____no_output_____"
],
[
"# Serialize LGBMRegressor model and insert into table\ninsert_model(TABLE_CLASSIFIERS, connection_string, classifier, FASTTREE_MODEL_NAME) # TODO: Do table insertions in sql",
"_____no_output_____"
],
[
"# Point to the SQL table with the testing data\nquery = \"SELECT * FROM {} WHERE patient_id NOT IN (SELECT patient_id FROM {})\".format(TABLE_PCA_FEATURES, TABLE_TRAIN_ID)\nprint(query)\ntest_sql = RxSqlServerData(sql_query=query, connection_string=connection_string)#, column_info=column_info",
"_____no_output_____"
],
[
"# Make predictions on the test data\npredictions = ml_predict(classifier, data=test_sql, extra_vars_to_write=[\"label\", \"patient_id\"])\nprint(predictions.head())",
"_____no_output_____"
],
[
"predictions_sql = RxSqlServerData(table=TABLE_PREDICTIONS, connection_string=connection_string)\nrx_data_step(predictions, predictions_sql, overwrite=True)",
"_____no_output_____"
],
[
"# Evaluate model using ROC\nroc(predictions[\"label\"], predictions[\"Probability\"])",
"_____no_output_____"
],
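[
"# Hedged sketch (added): roc() above is a helper from this solution's utility\n# code; an equivalent check with plain scikit-learn, assuming\n# predictions['Probability'] is the positive-class score, would be:\nfrom sklearn.metrics import roc_auc_score\nprint('AUC:', roc_auc_score(predictions['label'], predictions['Probability']))",
"_____no_output_____"
],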
[
"# Specify patient to make prediction for\nPatientIndex = 9",
"_____no_output_____"
],
[
"# Select patient data\nquery = \"SELECT TOP(1) * FROM {} AS t1 INNER JOIN {} AS t2 ON t1.patient_id = t2.patient_id WHERE t2.idx = {}\".format(TABLE_PCA_FEATURES, TABLE_PATIENTS, PatientIndex)\nprint(query)\npatient_sql = RxSqlServerData(sql_query=query, connection_string=connection_string)",
"_____no_output_____"
],
[
"# Make Prediction on a single patient\npredictions = ml_predict(classifier, data=patient_sql, extra_vars_to_write=[\"label\", \"patient_id\"])\n\n\nprint(\"The probability of cancer for patient {} with patient_id {} is {}%\".format(PatientIndex, predictions[\"patient_id\"].iloc[0], predictions[\"Probability\"].iloc[0]*100))\nif predictions[\"label\"].iloc[0] == 0:\n print(\"Ground Truth: This patient does not have cancer\")\nelse:\n print(\"Ground Truth: This patient does have cancer\")",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0439eddcf884a6d81c59b68bcb7a6761af6260a | 40,702 | ipynb | Jupyter Notebook | NLPFE2.ipynb | AsterLaoWhy/Thinkful | fa5d54d02b8af6a851cc7c2cec826dc8caeb777a | [
"MIT"
] | null | null | null | NLPFE2.ipynb | AsterLaoWhy/Thinkful | fa5d54d02b8af6a851cc7c2cec826dc8caeb777a | [
"MIT"
] | null | null | null | NLPFE2.ipynb | AsterLaoWhy/Thinkful | fa5d54d02b8af6a851cc7c2cec826dc8caeb777a | [
"MIT"
] | null | null | null | 39.825832 | 1,380 | 0.458012 | [
[
[
"import numpy as np\nimport pandas as pd\nimport sklearn\nimport spacy\nimport re\nfrom nltk.corpus import gutenberg\nimport nltk\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nnltk.download('gutenberg')\n!python -m spacy download en",
"[nltk_data] Downloading package gutenberg to\n[nltk_data] C:\\Users\\jonat\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package gutenberg is already up-to-date!\n"
]
],
[
[
"## 1. Converting words or sentences into numeric vectors is fundamental when working with text data. To make sure you are solid on how these vectors work, please generate the tf-idf vectors for the last three sentences of the example we gave at the beginning of this checkpoint. If you are feeling uncertain, have your mentor walk you through it.",
"_____no_output_____"
],
[
"* 4: 1.585, 1, 0, 1, 1.585, 0,0,0,0\n* 5: 0,0,0,0,0, .585, 1, 1.585, 1\n* 6: 0,0,0,0,0,0, 1, 0, 2",
"_____no_output_____"
]
],
[
[
"# utility function for standard text cleaning\ndef text_cleaner(text):\n # visual inspection identifies a form of punctuation spaCy does not\n # recognize: the double dash '--'. Better get rid of it now!\n text = re.sub(r'--',' ',text)\n text = re.sub(\"[\\[].*?[\\]]\", \"\", text)\n text = re.sub(r\"(\\b|\\s+\\-?|^\\-?)(\\d+|\\d*\\.\\d+)\\b\", \" \", text)\n text = ' '.join(text.split())\n return text",
"_____no_output_____"
],
[
"# load and clean the data.\npersuasion = gutenberg.raw('austen-persuasion.txt')\nalice = gutenberg.raw('carroll-alice.txt')\n\n# the chapter indicator is idiosyncratic\npersuasion = re.sub(r'Chapter \\d+', '', persuasion)\nalice = re.sub(r'CHAPTER .*', '', alice)\n \nalice = text_cleaner(alice)\npersuasion = text_cleaner(persuasion)\n# parse the cleaned novels. this can take a bit\nnlp = spacy.load('en_core_web_sm')\nalice_doc = nlp(alice)\npersuasion_doc = nlp(persuasion)",
"_____no_output_____"
],
[
"# group into sentences\nalice_sents = [[sent, \"Carroll\"] for sent in alice_doc.sents]\npersuasion_sents = [[sent, \"Austen\"] for sent in persuasion_doc.sents]\n\n# combine the sentences from the two novels into one data frame\nsentences = pd.DataFrame(alice_sents + persuasion_sents, columns = [\"text\", \"author\"])\nsentences.head()",
"_____no_output_____"
],
[
"# get rid off stop words and punctuation\n# and lemmatize the tokens\nfor i, sentence in enumerate(sentences[\"text\"]):\n sentences.loc[i, \"text\"] = \" \".join(\n [token.lemma_ for token in sentence if not token.is_punct and not token.is_stop])",
"_____no_output_____"
],
[
"from sklearn.feature_extraction.text import TfidfVectorizer\n\nvectorizer = TfidfVectorizer(\n max_df=0.5, min_df=2, use_idf=True, norm=u'l2', smooth_idf=True)\n\n\n# applying the vectorizer\nX = vectorizer.fit_transform(sentences[\"text\"])\n\ntfidf_df = pd.DataFrame(X.toarray(), columns=vectorizer.get_feature_names())\nsentences = pd.concat([tfidf_df, sentences[[\"text\", \"author\"]]], axis=1)\n\n# keep in mind that the log base 2 of 1 is 0,\n# so a tf-idf score of 0 indicates that the word was present once in that sentence.\nsentences.head()",
"_____no_output_____"
],
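[
"# Hedged sketch (added): a by-hand tf-idf computation on a toy corpus, using\n# the log-base-2 idf convention referenced above (idf = log2(N / df)). The\n# three sentences here are placeholders, not the checkpoint's example.\ncorpus = ['the cat sat', 'the cat ran', 'dogs ran home']\nvocab = sorted(set(' '.join(corpus).split()))\nN = len(corpus)\nfor sent in corpus:\n    tokens = sent.split()\n    tfidf = [tokens.count(w) * np.log2(N / sum(w in s.split() for s in corpus)) for w in vocab]\n    print(sent, '->', dict(zip(vocab, np.round(tfidf, 3))))",
"_____no_output_____"
],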
[
"sentences.loc[4]",
"_____no_output_____"
]
],
[
[
"## 2. In the 2-grams example above, we only used 2-grams as our features. This time, use both 1-grams and 2-grams together as your feature set. Run the same models in the example and compare the results.",
"_____no_output_____"
]
],
[
[
"# utility function for standard text cleaning\ndef text_cleaner(text):\n # visual inspection identifies a form of punctuation spaCy does not\n # recognize: the double dash '--'. Better get rid of it now!\n text = re.sub(r'--',' ',text)\n text = re.sub(\"[\\[].*?[\\]]\", \"\", text)\n text = re.sub(r\"(\\b|\\s+\\-?|^\\-?)(\\d+|\\d*\\.\\d+)\\b\", \" \", text)\n text = ' '.join(text.split())\n return text",
"_____no_output_____"
],
[
"# load and clean the data.\npersuasion = gutenberg.raw('austen-persuasion.txt')\nalice = gutenberg.raw('carroll-alice.txt')\n\n# the chapter indicator is idiosyncratic\npersuasion = re.sub(r'Chapter \\d+', '', persuasion)\nalice = re.sub(r'CHAPTER .*', '', alice)\n \nalice = text_cleaner(alice)\npersuasion = text_cleaner(persuasion)",
"_____no_output_____"
],
[
"# parse the cleaned novels. this can take a bit\nnlp = spacy.load('en')\nalice_doc = nlp(alice)\npersuasion_doc = nlp(persuasion)",
"_____no_output_____"
],
[
"# group into sentences\nalice_sents = [[sent, \"Carroll\"] for sent in alice_doc.sents]\npersuasion_sents = [[sent, \"Austen\"] for sent in persuasion_doc.sents]\n\n# combine the sentences from the two novels into one data frame\nsentences = pd.DataFrame(alice_sents + persuasion_sents, columns = [\"text\", \"author\"])\nsentences.head()",
"_____no_output_____"
],
[
"# get rid off stop words and punctuation\n# and lemmatize the tokens\nfor i, sentence in enumerate(sentences[\"text\"]):\n sentences.loc[i, \"text\"] = \" \".join(\n [token.lemma_ for token in sentence if not token.is_punct and not token.is_stop])",
"_____no_output_____"
],
[
"from sklearn.feature_extraction.text import TfidfVectorizer\n\nvectorizer = TfidfVectorizer(\n max_df=0.5, min_df=2, use_idf=True, norm=u'l2', smooth_idf=True, ngram_range=(1,2))\n\n\n# applying the vectorizer\nX = vectorizer.fit_transform(sentences[\"text\"])\n\ntfidf_df = pd.DataFrame(X.toarray(), columns=vectorizer.get_feature_names())\nsentences = pd.concat([tfidf_df, sentences[[\"text\", \"author\"]]], axis=1)\n\n# keep in mind that the log base 2 of 1 is 0,\n# so a tf-idf score of 0 indicates that the word was present once in that sentence.\nsentences.head()",
"_____no_output_____"
],
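[
"# Hedged sketch (added): what ngram_range=(1, 2) does to the feature space,\n# shown on a tiny throwaway corpus rather than the novels.\nfrom sklearn.feature_extraction.text import CountVectorizer\n\ntoy_vec = CountVectorizer(ngram_range=(1, 2))\ntoy_vec.fit(['anne walk home', 'alice walk home'])\nprint(toy_vec.get_feature_names())",
"_____no_output_____"
],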
[
"from sklearn.linear_model import LogisticRegression\nfrom sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier\nfrom sklearn.model_selection import train_test_split\n\nY = sentences['author']\nX = np.array(sentences.drop(['text','author'], 1))\n\n# We split the dataset into train and test sets\nX_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.4, random_state=123)\n\n# Models\nlr = LogisticRegression()\nrfc = RandomForestClassifier()\ngbc = GradientBoostingClassifier()\n\nlr.fit(X_train, y_train)\nrfc.fit(X_train, y_train)\ngbc.fit(X_train, y_train)\n\nprint(\"----------------------Logistic Regression Scores----------------------\")\nprint('Training set score:', lr.score(X_train, y_train))\nprint('\\nTest set score:', lr.score(X_test, y_test))\n\nprint(\"----------------------Random Forest Scores----------------------\")\nprint('Training set score:', rfc.score(X_train, y_train))\nprint('\\nTest set score:', rfc.score(X_test, y_test))\n\nprint(\"----------------------Gradient Boosting Scores----------------------\")\nprint('Training set score:', gbc.score(X_train, y_train))\nprint('\\nTest set score:', gbc.score(X_test, y_test))",
"----------------------Logistic Regression Scores----------------------\nTraining set score: 0.9036488027366021\n\nTest set score: 0.8555555555555555\n----------------------Random Forest Scores----------------------\nTraining set score: 0.9694982896237172\n\nTest set score: 0.8414529914529915\n----------------------Gradient Boosting Scores----------------------\nTraining set score: 0.8246864310148233\n\nTest set score: 0.8102564102564103\n"
]
],
[
[
"As can be seen above, using 1-gram along with 2-gram improved the performances of all of the models.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d043a46c88e59728887c4946e7e826b86f2bcc61 | 17,237 | ipynb | Jupyter Notebook | videoretrieval/Demo notebook GPT-1.ipynb | googleinterns/via-content-understanding | ca12ebe6aa4da16224a8ca86dc45aaaaa7cfda09 | [
"Apache-2.0"
] | 1 | 2020-05-22T14:51:28.000Z | 2020-05-22T14:51:28.000Z | videoretrieval/Demo notebook GPT-1.ipynb | googleinterns/via-content-understanding | ca12ebe6aa4da16224a8ca86dc45aaaaa7cfda09 | [
"Apache-2.0"
] | 4 | 2020-05-31T21:57:44.000Z | 2020-07-23T23:32:52.000Z | videoretrieval/Demo notebook GPT-1.ipynb | googleinterns/via-content-understanding | ca12ebe6aa4da16224a8ca86dc45aaaaa7cfda09 | [
"Apache-2.0"
] | 1 | 2020-05-19T17:28:10.000Z | 2020-05-19T17:28:10.000Z | 24.380481 | 175 | 0.455938 | [
[
[
"# Training Collaborative Experts on MSR-VTT\nThis notebook shows how to download code that trains a Collaborative Experts model with GPT-1 + NetVLAD on the MSR-VTT Dataset.\n",
"_____no_output_____"
],
[
"## Setup\n\n* Download Code and Dependencies\n* Import Modules\n* Download Language Model Weights\n* Download Datasets\n* Generate Encodings for Dataset Captions \n\n",
"_____no_output_____"
],
[
"### Code Downloading and Dependency Downloading\n* Specify tensorflow version\n* Clone repository from Github\n* `cd` into the correct directory\n* Install the requirements\n\n\n",
"_____no_output_____"
]
],
[
[
"%tensorflow_version 2.x",
"_____no_output_____"
],
[
"!git clone https://github.com/googleinterns/via-content-understanding.git",
"_____no_output_____"
],
[
"%cd via-content-understanding/videoretrieval/",
"_____no_output_____"
],
[
"!pip install -r requirements.txt",
"_____no_output_____"
],
[
"!pip install --upgrade tensorflow_addons",
"_____no_output_____"
]
],
[
[
"### Importing Modules",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nimport languagemodels\nimport train.encoder_datasets\nimport train.language_model\nimport experts\nimport datasets\nimport datasets.msrvtt.constants\nimport os\nimport models.components\nimport models.encoder\nimport helper.precomputed_features\nfrom tensorflow_addons.activations import mish \nimport tensorflow_addons as tfa\nimport metrics.loss",
"_____no_output_____"
]
],
[
[
"### Language Model Downloading\n\n* Download GPT-1\n\n",
"_____no_output_____"
]
],
[
[
"gpt_model = languagemodels.OpenAIGPTModel()",
"_____no_output_____"
]
],
[
[
"### Dataset downloading\n\n\n* Downlaod Datasets\n* Download Precomputed Features\n\n",
"_____no_output_____"
]
],
[
[
"datasets.msrvtt_dataset.download_dataset()",
"_____no_output_____"
]
],
[
[
"Note: The system `curl` is more memory efficent than the download function in our codebase, so here `curl` is used rather than the download function in our codebase.",
"_____no_output_____"
]
],
[
[
"url = datasets.msrvtt.constants.features_tar_url\npath = datasets.msrvtt.constants.features_tar_path\nos.system(f\"curl {url} > {path}\") ",
"_____no_output_____"
],
[
"helper.precomputed_features.cache_features(\n datasets.msrvtt_dataset,\n datasets.msrvtt.constants.expert_to_features,\n datasets.msrvtt.constants.features_tar_path,)",
"_____no_output_____"
]
],
[
[
"### Embeddings Generation\n\n* Generate Embeddings for MSR-VTT\n* **Note: this will take 20-30 minutes on a colab, depending on the GPU** ",
"_____no_output_____"
]
],
[
[
"train.language_model.generate_and_cache_contextual_embeddings(\n gpt_model, datasets.msrvtt_dataset)",
"_____no_output_____"
]
],
[
[
"## Training\n\n\n* Build Train Datasets\n* Initialize Models\n* Compile Encoders\n* Fit Model\n* Test Model\n",
"_____no_output_____"
],
[
"### Datasets Generation",
"_____no_output_____"
]
],
[
[
"experts_used = [\n experts.i3d,\n experts.r2p1d,\n experts.resnext,\n experts.senet,\n experts.speech_expert,\n experts.ocr_expert,\n experts.audio_expert,\n experts.densenet,\n experts.face_expert]",
"_____no_output_____"
],
[
"train_ds, valid_ds, test_ds = (\n train.encoder_datasets.generate_encoder_datasets(\n gpt_model, datasets.msrvtt_dataset, experts_used))",
"_____no_output_____"
]
],
[
[
"### Model Initialization",
"_____no_output_____"
]
],
[
[
"class MishLayer(tf.keras.layers.Layer):\n def call(self, inputs):\n return mish(inputs)",
"_____no_output_____"
],
[
"mish(tf.Variable([1.0]))",
"_____no_output_____"
],
[
"text_encoder = models.components.TextEncoder(\n len(experts_used),\n num_netvlad_clusters=28,\n ghost_clusters=1,\n language_model_dimensionality=768,\n encoded_expert_dimensionality=512,\n residual_cls_token=False,\n)",
"_____no_output_____"
],
[
"video_encoder = models.components.VideoEncoder(\n num_experts=len(experts_used),\n experts_use_netvlad=[False, False, False, False, True, True, True, False, False],\n experts_netvlad_shape=[None, None, None, None, 19, 43, 8, None, None],\n expert_aggregated_size=512,\n encoded_expert_dimensionality=512,\n g_mlp_layers=3,\n h_mlp_layers=0,\n make_activation_layer=MishLayer)",
"_____no_output_____"
],
[
"encoder = models.encoder.EncoderForFrozenLanguageModel(\n video_encoder,\n text_encoder,\n 0.0938,\n [1, 5, 10, 50],\n 20)",
"_____no_output_____"
]
],
[
[
"### Encoder Compliation",
"_____no_output_____"
]
],
[
[
"def build_optimizer(lr=0.001):\n learning_rate_scheduler = tf.keras.optimizers.schedules.ExponentialDecay(\n initial_learning_rate=lr,\n decay_steps=101,\n decay_rate=0.95,\n staircase=True)\n\n return tf.keras.optimizers.Adam(learning_rate_scheduler)",
"_____no_output_____"
],
[
"encoder.compile(build_optimizer(0.1), metrics.loss.bidirectional_max_margin_ranking_loss)",
"_____no_output_____"
],
[
"train_ds_prepared = (train_ds\n .shuffle(1000)\n .batch(64, drop_remainder=True)\n .prefetch(tf.data.experimental.AUTOTUNE))",
"_____no_output_____"
],
[
"encoder.video_encoder.trainable = True\nencoder.text_encoder.trainable = True",
"_____no_output_____"
]
],
[
[
"### Model fitting",
"_____no_output_____"
]
],
[
[
"encoder.fit(\n train_ds_prepared,\n epochs=100,\n)",
"_____no_output_____"
]
],
[
[
"### Tests",
"_____no_output_____"
]
],
[
[
"captions_per_video = 20\nnum_videos_upper_bound = 100000 ",
"_____no_output_____"
],
[
"ranks = []\n\nfor caption_index in range(captions_per_video):\n batch = next(iter(test_ds.shard(captions_per_video, caption_index).batch(\n num_videos_upper_bound)))\n video_embeddings, text_embeddings, mixture_weights = encoder.forward_pass(\n batch, training=False)\n \n similarity_matrix = metrics.loss.build_similarity_matrix(\n video_embeddings,\n text_embeddings,\n mixture_weights,\n batch[-1])\n rankings = metrics.rankings.compute_ranks(similarity_matrix)\n ranks += list(rankings.numpy())",
"_____no_output_____"
],
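[
"# Hedged sketch (added): compute_ranks above comes from this repo's metrics\n# module. For a square similarity matrix where entry (i, j) scores caption i\n# against video j and the matching pair sits on the diagonal, the 1-based rank\n# of the correct video can be recovered with plain numpy like this.\nimport numpy as np\n\nsim = np.array([[0.9, 0.2, 0.1], [0.3, 0.8, 0.7], [0.1, 0.6, 0.95]])\norder = np.argsort(-sim, axis=1)\ntoy_ranks = 1 + np.argmax(order == np.arange(len(sim))[:, None], axis=1)\nprint(toy_ranks)  # rank of the diagonal (correct) entry in each row",
"_____no_output_____"
],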
[
"def recall_at_k(ranks, k):\n return len(list(filter(lambda i: i <= k, ranks))) / len(ranks)",
"_____no_output_____"
],
[
"median_rank = sorted(ranks)[len(ranks)//2]",
"_____no_output_____"
],
[
"mean_rank = sum(ranks)/len(ranks)",
"_____no_output_____"
],
[
"print(f\"Median Rank: {median_rank}\")",
"_____no_output_____"
],
[
"print(f\"Mean Rank: {mean_rank}\")",
"_____no_output_____"
],
[
"for k in [1, 5, 10, 50]:\n recall = recall_at_k(ranks, k)\n print(f\"R@{k}: {recall}\")",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d043b2dde50d5696b49b3b12314df88389e7a3b7 | 68,234 | ipynb | Jupyter Notebook | PyBer_ride_data.ipynb | hemsmalli5/PyBer_Analysis | 3e3c5ff56aaffa2ef6c9572b99666343b9c8fb58 | [
"MIT"
] | null | null | null | PyBer_ride_data.ipynb | hemsmalli5/PyBer_Analysis | 3e3c5ff56aaffa2ef6c9572b99666343b9c8fb58 | [
"MIT"
] | null | null | null | PyBer_ride_data.ipynb | hemsmalli5/PyBer_Analysis | 3e3c5ff56aaffa2ef6c9572b99666343b9c8fb58 | [
"MIT"
] | null | null | null | 264.472868 | 21,512 | 0.91428 | [
[
[
"%matplotlib inline\n# Dependencies\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n# Load in csv\npyber_ride_df = pd.read_csv(\"Resources/PyBer_ride_data.csv\")\npyber_ride_df",
"_____no_output_____"
],
[
"pyber_ride_df.plot(x=\"Month\", y=\"Avg. Fare ($USD)\")\nplt.show()",
"_____no_output_____"
],
[
"# Set x-axis and tick locations.\nx_axis = np.arange(len(pyber_ride_df))\ntick_locations = [value for value in x_axis]\n# Plot the data.\npyber_ride_df.plot(x=\"Month\", y=\"Avg. Fare ($USD)\")\nplt.xticks(tick_locations, pyber_ride_df[\"Month\"])\nplt.show()",
"_____no_output_____"
],
[
"pyber_ride_df.plot.bar(x=\"Month\", y=\"Avg. Fare ($USD)\")\nplt.show()",
"_____no_output_____"
],
[
"errors = pyber_ride_df.std()\npyber_ride_df.plot.bar(x=\"Month\", y=\"Avg. Fare ($USD)\", color='skyblue', yerr=errors, capsize=4)\nplt.show()",
"_____no_output_____"
]
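,
[
"# Hedged note (added): pyber_ride_df.std() returns one value per numeric\n# column; selecting the plotted column makes the error-bar intent explicit.\nfare_err = pyber_ride_df['Avg. Fare ($USD)'].std()\nprint(fare_err)",
"_____no_output_____"
]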
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
d043c73d7bb388049105088bea3b26f4cc1aed28 | 9,640 | ipynb | Jupyter Notebook | tp1/Ejercicio 4.ipynb | NicoGallegos/fiuba-simulacion-grupo6 | c904eed4f1e8c07716ba38d44e83449c2ac5acd9 | [
"MIT"
] | null | null | null | tp1/Ejercicio 4.ipynb | NicoGallegos/fiuba-simulacion-grupo6 | c904eed4f1e8c07716ba38d44e83449c2ac5acd9 | [
"MIT"
] | null | null | null | tp1/Ejercicio 4.ipynb | NicoGallegos/fiuba-simulacion-grupo6 | c904eed4f1e8c07716ba38d44e83449c2ac5acd9 | [
"MIT"
] | null | null | null | 59.141104 | 6,548 | 0.794813 | [
[
[
"#### Ejercicio 4",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport math as math\nimport scipy.stats as stats\nimport matplotlib.pyplot as plt\n\ndef normalGenerator(media, desvio,nroMuestras):\n\n c = math.sqrt(2*math.exp(1)/np.pi);\n t = np.random.exponential(scale=1, size=nroMuestras);\n \n p = list();\n for i in t:\n p.append(fx(i)/(c*fy(i)));\n \n z = list();\n for n in range(1,nroMuestras):\n r = np.random.uniform();\n if (r < p[n]):\n r2 = np.random.uniform();\n if (r2 < 0.5):\n z.append(t[n]*desvio+media);\n else:\n z.append(t[n]*-1*desvio+media);\n\n \n return z;\n \ndef fx(x):\n return math.exp(-x**2/2)/math.sqrt(2*np.pi);\n \ndef fy(y):\n return math.exp(-y);\n\nresults= normalGenerator(35,5,100000);\n\n\nplt.hist(results,bins=200);",
"_____no_output_____"
]
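,
[
"# Hedged sanity check (added): compare the rejection-sampled values against\n# the target N(35, 5) with a Kolmogorov-Smirnov test (scipy.stats is already\n# imported above as stats).\nstatistic, pvalue = stats.kstest(results, 'norm', args=(35, 5))\nprint('KS statistic:', statistic, 'p-value:', pvalue)",
"_____no_output_____"
]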
],
[
[
"#### VARIANZA",
"_____no_output_____"
]
],
[
[
"print(np.var(results));",
"25.0067772918\n"
]
],
[
[
"#### MEDIA",
"_____no_output_____"
]
],
[
[
"print(np.mean(results));",
"34.9540678007\n"
]
],
[
[
"#### DESVIACION ESTANDAR",
"_____no_output_____"
]
],
[
[
"print(np.std(results));",
"5.00067768325\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d043c90de015d7053f00d82c16a39b0c25bca1cd | 2,005 | ipynb | Jupyter Notebook | 2.data-structures/exercises/5.french-indonesian-dictionary.ipynb | mdjamina/python-M1TAL | bd84d119d2acc278d48bfa247eaa333930cf62f0 | [
"MIT"
] | 7 | 2020-10-05T17:04:14.000Z | 2021-09-27T08:45:22.000Z | 2.data-structures/exercises/5.french-indonesian-dictionary.ipynb | mdjamina/python-M1TAL | bd84d119d2acc278d48bfa247eaa333930cf62f0 | [
"MIT"
] | null | null | null | 2.data-structures/exercises/5.french-indonesian-dictionary.ipynb | mdjamina/python-M1TAL | bd84d119d2acc278d48bfa247eaa333930cf62f0 | [
"MIT"
] | 7 | 2020-11-21T10:38:52.000Z | 2021-11-02T23:00:27.000Z | 23.869048 | 301 | 0.500249 | [
[
[
"# Un dictionnaire français-indonésien\n\nVous disposez d’un fichier tabulaire dans le répertoire *data* avec une liste de mots en français et, en regard, leur traduction en indonésien. Complétez le programme ci-dessous afin que, pour toute entrée saisie par un utilisateur, il renvoie la traduction en indonésien ou un message d’erreur.",
"_____no_output_____"
]
],
[
[
"#!/usr/bin/env python\n#-*- coding: utf-8 -*-\n\n#\n# Modules\n#\nimport csv\n\n#\n# User functions\n#\ndef load_data(path):\n \"\"\"Loads a data in csv format\n \n path -- path to data\n \"\"\"\n\n lines = []\n\n with open(path) as csvfile:\n reader = csv.reader(csvfile, delimiter='\\t')\n for line in reader:\n lines.append(tuple(line))\n\n return lines\n\ndef main(path_to_data):\n \"\"\"Main function.\n \n path_to_data -- csv file\n \"\"\"\n lines = load_data(path_to_data)\n\n#\n# Main procedure\n#\nif __name__ == \"__main__\":\n \n path_to_data = '../data/french-indonesian.tsv'\n \n main(path_to_data)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
]
] |
d043d23c201e2bbb43270026405e4316ff49b6d9 | 23,374 | ipynb | Jupyter Notebook | Pytorch CNN MNIST digit recognition.ipynb | ajuhz/Udemy-neural-network-boorcamp | 242bebf9c6e8da843db6da62859e29375ecf2a3e | [
"MIT"
] | null | null | null | Pytorch CNN MNIST digit recognition.ipynb | ajuhz/Udemy-neural-network-boorcamp | 242bebf9c6e8da843db6da62859e29375ecf2a3e | [
"MIT"
] | null | null | null | Pytorch CNN MNIST digit recognition.ipynb | ajuhz/Udemy-neural-network-boorcamp | 242bebf9c6e8da843db6da62859e29375ecf2a3e | [
"MIT"
] | null | null | null | 34.272727 | 275 | 0.508813 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d043dfd719b1eecdc3d3cf6d2207e1cda5e337f1 | 395,420 | ipynb | Jupyter Notebook | assignment1/.ipynb_checkpoints/knn-checkpoint.ipynb | billzhao1990/CS231n-Spring-2017 | a87597da2605be9f4ce0b84ed84408b313a3dc72 | [
"MIT"
] | null | null | null | assignment1/.ipynb_checkpoints/knn-checkpoint.ipynb | billzhao1990/CS231n-Spring-2017 | a87597da2605be9f4ce0b84ed84408b313a3dc72 | [
"MIT"
] | null | null | null | assignment1/.ipynb_checkpoints/knn-checkpoint.ipynb | billzhao1990/CS231n-Spring-2017 | a87597da2605be9f4ce0b84ed84408b313a3dc72 | [
"MIT"
] | null | null | null | 692.504378 | 289,948 | 0.931233 | [
[
[
"# k-Nearest Neighbor (kNN) exercise\n\n*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*\n\nThe kNN classifier consists of two stages:\n\n- During training, the classifier takes the training data and simply remembers it\n- During testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples\n- The value of k is cross-validated\n\nIn this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.",
"_____no_output_____"
]
],
[
[
"# Run some setup code for this notebook.\n\nimport random\nimport numpy as np\nfrom cs231n.data_utils import load_CIFAR10\nimport matplotlib.pyplot as plt\n\nfrom __future__ import print_function\n\n# This is a bit of magic to make matplotlib figures appear inline in the notebook\n# rather than in a new window.\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# Some more magic so that the notebook will reload external python modules;\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"# Load the raw CIFAR-10 data.\ncifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\nX_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n\n# As a sanity check, we print out the size of the training and test data.\nprint('Training data shape: ', X_train.shape)\nprint('Training labels shape: ', y_train.shape)\nprint('Test data shape: ', X_test.shape)\nprint('Test labels shape: ', y_test.shape)",
"Training data shape: (50000, 32, 32, 3)\nTraining labels shape: (50000,)\nTest data shape: (10000, 32, 32, 3)\nTest labels shape: (10000,)\n"
],
[
"# Visualize some examples from the dataset.\n# We show a few examples of training images from each class.\nclasses = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\nnum_classes = len(classes)\nsamples_per_class = 7\nfor y, cls in enumerate(classes):\n idxs = np.flatnonzero(y_train == y)\n idxs = np.random.choice(idxs, samples_per_class, replace=False)\n for i, idx in enumerate(idxs):\n plt_idx = i * num_classes + y + 1\n plt.subplot(samples_per_class, num_classes, plt_idx)\n plt.imshow(X_train[idx].astype('uint8'))\n plt.axis('off')\n if i == 0:\n plt.title(cls)\nplt.show()",
"_____no_output_____"
],
[
"# Subsample the data for more efficient code execution in this exercise\nnum_training = 5000\nmask = list(range(num_training))\nX_train = X_train[mask]\ny_train = y_train[mask]\n\nnum_test = 500\nmask = list(range(num_test))\nX_test = X_test[mask]\ny_test = y_test[mask]",
"_____no_output_____"
],
[
"# Reshape the image data into rows\nX_train = np.reshape(X_train, (X_train.shape[0], -1))\nX_test = np.reshape(X_test, (X_test.shape[0], -1))\nprint(X_train.shape, X_test.shape)",
"(5000, 3072) (500, 3072)\n"
],
[
"from cs231n.classifiers import KNearestNeighbor\n\n# Create a kNN classifier instance. \n# Remember that training a kNN classifier is a noop: \n# the Classifier simply remembers the data and does no further processing \nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)",
"_____no_output_____"
]
],
[
[
"We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps: \n\n1. First we must compute the distances between all test examples and all train examples. \n2. Given these distances, for each test example we find the k nearest examples and have them vote for the label\n\nLets begin with computing the distance matrix between all training and test examples. For example, if there are **Ntr** training examples and **Nte** test examples, this stage should result in a **Nte x Ntr** matrix where each element (i,j) is the distance between the i-th test and j-th train example.\n\nFirst, open `cs231n/classifiers/k_nearest_neighbor.py` and implement the function `compute_distances_two_loops` that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.",
"_____no_output_____"
]
],
[
[
"# Open cs231n/classifiers/k_nearest_neighbor.py and implement\n# compute_distances_two_loops.\n\n# Test your implementation:\ndists = classifier.compute_distances_two_loops(X_test)\nprint(dists.shape)",
"(500, 5000)\n"
],
[
"# We can visualize the distance matrix: each row is a single test example and\n# its distances to training examples\nplt.imshow(dists, interpolation='none')\nplt.show()",
"_____no_output_____"
]
],
[
[
"**Inline Question #1:** Notice the structured patterns in the distance matrix, where some rows or columns are visible brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)\n\n- What in the data is the cause behind the distinctly bright rows?\n- What causes the columns?",
"_____no_output_____"
],
[
"**Your Answer**: *fill this in.*\n* The distinctly bright rows indicate that they are all far away from all the training set (outlier)\n* The distinctly bright columns indicate that they are all far away from all the test set",
"_____no_output_____"
]
],
[
[
"# Now implement the function predict_labels and run the code below:\n# We use k = 1 (which is Nearest Neighbor).\ny_test_pred = classifier.predict_labels(dists, k=1)\n\n# Compute and print the fraction of correctly predicted examples\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))",
"_____no_output_____"
]
],
[
[
"You should expect to see approximately `27%` accuracy. Now lets try out a larger `k`, say `k = 5`:",
"_____no_output_____"
]
],
[
[
"y_test_pred = classifier.predict_labels(dists, k=5)\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))",
"_____no_output_____"
]
],
[
[
"You should expect to see a slightly better performance than with `k = 1`.",
"_____no_output_____"
]
],
[
[
"# Now lets speed up distance matrix computation by using partial vectorization\n# with one loop. Implement the function compute_distances_one_loop and run the\n# code below:\ndists_one = classifier.compute_distances_one_loop(X_test)\n\n# To ensure that our vectorized implementation is correct, we make sure that it\n# agrees with the naive implementation. There are many ways to decide whether\n# two matrices are similar; one of the simplest is the Frobenius norm. In case\n# you haven't seen it before, the Frobenius norm of two matrices is the square\n# root of the squared sum of differences of all elements; in other words, reshape\n# the matrices into vectors and compute the Euclidean distance between them.\ndifference = np.linalg.norm(dists - dists_one, ord='fro')\nprint('Difference was: %f' % (difference, ))\nif difference < 0.001:\n print('Good! The distance matrices are the same')\nelse:\n print('Uh-oh! The distance matrices are different')",
"_____no_output_____"
],
[
"# Now implement the fully vectorized version inside compute_distances_no_loops\n# and run the code\ndists_two = classifier.compute_distances_no_loops(X_test)\n\n# check that the distance matrix agrees with the one we computed before:\ndifference = np.linalg.norm(dists - dists_two, ord='fro')\nprint('Difference was: %f' % (difference, ))\nif difference < 0.001:\n print('Good! The distance matrices are the same')\nelse:\n print('Uh-oh! The distance matrices are different')",
"_____no_output_____"
],
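[
"# Hedged sketch (added): the actual no-loop implementation lives in\n# cs231n/classifiers/k_nearest_neighbor.py; the standard fully vectorized\n# trick expands ||x - y||^2 = ||x||^2 - 2*x.y + ||y||^2 and broadcasts.\n# Demonstrated on small slices to keep it cheap.\nXte, Xtr = X_test[:5], X_train[:7]\ndemo = np.sqrt(np.sum(Xte**2, axis=1, keepdims=True)\n               - 2 * Xte.dot(Xtr.T)\n               + np.sum(Xtr**2, axis=1))\nprint(demo.shape)  # (5, 7), matching dists[:5, :7] up to float error",
"_____no_output_____"
],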
[
"# Let's compare how fast the implementations are\ndef time_function(f, *args):\n \"\"\"\n Call a function f with args and return the time (in seconds) that it took to execute.\n \"\"\"\n import time\n tic = time.time()\n f(*args)\n toc = time.time()\n return toc - tic\n\ntwo_loop_time = time_function(classifier.compute_distances_two_loops, X_test)\nprint('Two loop version took %f seconds' % two_loop_time)\n\none_loop_time = time_function(classifier.compute_distances_one_loop, X_test)\nprint('One loop version took %f seconds' % one_loop_time)\n\nno_loop_time = time_function(classifier.compute_distances_no_loops, X_test)\nprint('No loop version took %f seconds' % no_loop_time)\n\n# you should see significantly faster performance with the fully vectorized implementation",
"_____no_output_____"
]
],
[
[
"### Cross-validation\n\nWe have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.",
"_____no_output_____"
]
],
[
[
"num_folds = 5\nk_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]\n\nX_train_folds = []\ny_train_folds = []\n################################################################################\n# TODO: #\n# Split up the training data into folds. After splitting, X_train_folds and #\n# y_train_folds should each be lists of length num_folds, where #\n# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #\n# Hint: Look up the numpy array_split function. #\n################################################################################\n#pass\nX_train_folds = np.array_split(X_train, num_folds)\ny_train_folds = np.array_split(y_train, num_folds)\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# A dictionary holding the accuracies for different values of k that we find\n# when running cross-validation. After running cross-validation,\n# k_to_accuracies[k] should be a list of length num_folds giving the different\n# accuracy values that we found when using that value of k.\nk_to_accuracies = {}\n\n\n\n################################################################################\n# TODO: #\n# Perform k-fold cross validation to find the best value of k. For each #\n# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #\n# where in each case you use all but one of the folds as training data and the #\n# last fold as a validation set. Store the accuracies for all fold and all #\n# values of k in the k_to_accuracies dictionary. #\n################################################################################\n#pass\nfor k in k_choices:\n inner_accuracies = np.zeros(num_folds)\n for i in range(num_folds):\n X_sub_train = np.concatenate(np.delete(X_train_folds, i, axis=0))\n y_sub_train = np.concatenate(np.delete(y_train_folds, i, axis=0))\n print(X_sub_train.shape,y_sub_train.shape)\n \n X_sub_test = X_train_folds[i]\n y_sub_test = y_train_folds[i]\n print(X_sub_test.shape,y_sub_test.shape)\n \n classifier = KNearestNeighbor()\n classifier.train(X_sub_train, y_sub_train)\n \n dists = classifier.compute_distances_no_loops(X_sub_test)\n pred_y = classifier.predict_labels(dists, k)\n num_correct = np.sum(y_sub_test == pred_y)\n inner_accuracies[i] = float(num_correct)/X_test.shape[0]\n \n k_to_accuracies[k] = np.sum(inner_accuracies)/num_folds\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Print out the computed accuracies\nfor k in sorted(k_to_accuracies):\n for accuracy in k_to_accuracies[k]:\n print('k = %d, accuracy = %f' % (k, accuracy))",
"(4000, 3072) (4000,)\n(1000, 3072) (1000,)\n"
],
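[
"# Hedged sketch (added): how np.array_split builds the folds used above,\n# shown on a toy array so the shapes are easy to read.\ntoy_folds = np.array_split(np.arange(10), 5)\nprint(toy_folds)\nprint(np.concatenate(np.delete(toy_folds, 1, axis=0)))  # everything except fold 1",
"_____no_output_____"
],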
[
"X_train_folds = np.array_split(X_train, 5)\nt = np.delete(X_train_folds, 1,axis=0)\n\nprint(X_train_folds)",
"_____no_output_____"
],
[
"# plot the raw observations\nfor k in k_choices:\n accuracies = k_to_accuracies[k]\n plt.scatter([k] * len(accuracies), accuracies)\n\n# plot the trend line with error bars that correspond to standard deviation\naccuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])\naccuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])\nplt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)\nplt.title('Cross-validation on k')\nplt.xlabel('k')\nplt.ylabel('Cross-validation accuracy')\nplt.show()",
"_____no_output_____"
],
[
"# Based on the cross-validation results above, choose the best value for k, \n# retrain the classifier using all the training data, and test it on the test\n# data. You should be able to get above 28% accuracy on the test data.\nbest_k = 1\n\nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)\ny_test_pred = classifier.predict(X_test, k=best_k)\n\n# Compute and display the accuracy\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d043ea4a4b614fecdc0b5bea8b7ce565b0f78f41 | 33,131 | ipynb | Jupyter Notebook | nbs/13_callback.core.ipynb | aquietlife/fastai2 | af18d16aa5f7b7d31388697ab451cbb6b4104e02 | [
"Apache-2.0"
] | null | null | null | nbs/13_callback.core.ipynb | aquietlife/fastai2 | af18d16aa5f7b7d31388697ab451cbb6b4104e02 | [
"Apache-2.0"
] | null | null | null | nbs/13_callback.core.ipynb | aquietlife/fastai2 | af18d16aa5f7b7d31388697ab451cbb6b4104e02 | [
"Apache-2.0"
] | null | null | null | 32.259981 | 378 | 0.582445 | [
[
[
"# default_exp callback.core",
"_____no_output_____"
],
[
"#export\nfrom fastai2.data.all import *\nfrom fastai2.optimizer import *",
"_____no_output_____"
],
[
"from nbdev.showdoc import *",
"_____no_output_____"
],
[
"#export\n_all_ = ['CancelFitException', 'CancelEpochException', 'CancelTrainException', 'CancelValidException', 'CancelBatchException']",
"_____no_output_____"
]
],
[
[
"# Callback\n\n> Basic callbacks for Learner",
"_____no_output_____"
],
[
"## Callback - ",
"_____no_output_____"
]
],
[
[
"#export\n_inner_loop = \"begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch\".split()",
"_____no_output_____"
],
[
"#export\nclass Callback(GetAttr):\n \"Basic class handling tweaks of the training loop by changing a `Learner` in various events\"\n _default,learn,run,run_train,run_valid = 'learn',None,True,True,True\n def __repr__(self): return type(self).__name__\n\n def __call__(self, event_name):\n \"Call `self.{event_name}` if it's defined\"\n _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or\n (self.run_valid and not getattr(self, 'training', False)))\n if self.run and _run: getattr(self, event_name, noop)()\n if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit\n \n def __setattr__(self, name, value):\n if hasattr(self.learn,name):\n warn(f\"You are setting an attribute ({name}) that also exists in the learner. Please be advised that you're not setting it in the learner but in the callback. Use `self.learn.{name}` if you would like to change it in the learner.\")\n super().__setattr__(name, value)\n \n @property\n def name(self):\n \"Name of the `Callback`, camel-cased and with '*Callback*' removed\"\n return class2attr(self, 'Callback')",
"_____no_output_____"
]
],
[
[
"The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:\n- compute the output of the model from the input\n- calculate a loss between this output and the desired target\n- compute the gradients of this loss with respect to all the model parameters\n- update the parameters accordingly\n- zero all the gradients\n\nAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:\n\n- `begin_fit`: called before doing anything, ideal for initial setup.\n- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.\n- `begin_train`: called at the beginning of the training part of an epoch.\n- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).\n- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.\n- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).\n- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).\n- `after_step`: called after the step and before the gradients are zeroed.\n- `after_batch`: called at the end of a batch, for any clean-up before the next one.\n- `after_train`: called at the end of the training phase of an epoch.\n- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.\n- `after_validate`: called at the end of the validation part of an epoch.\n- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.\n- `after_fit`: called at the end of training, for final clean-up.",
"_____no_output_____"
]
],
[
[
"show_doc(Callback.__call__)",
"_____no_output_____"
],
[
"tst_cb = Callback()\ntst_cb.call_me = lambda: print(\"maybe\")\ntest_stdout(lambda: tst_cb(\"call_me\"), \"maybe\")",
"_____no_output_____"
],
[
"show_doc(Callback.__getattr__)",
"_____no_output_____"
]
],
[
[
"This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.",
"_____no_output_____"
]
],
[
[
"mk_class('TstLearner', 'a')\n\nclass TstCallback(Callback):\n def batch_begin(self): print(self.a)\n\nlearn,cb = TstLearner(1),TstCallback()\ncb.learn = learn\ntest_stdout(lambda: cb('batch_begin'), \"1\")",
"_____no_output_____"
]
],
[
[
"Note that it only works to get the value of the attribute, if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2. It also issues a warning that something is probably wrong:",
"_____no_output_____"
]
],
[
[
"learn.a",
"_____no_output_____"
],
[
"class TstCallback(Callback):\n def batch_begin(self): self.a += 1\n\nlearn,cb = TstLearner(1),TstCallback()\ncb.learn = learn\ncb('batch_begin')\ntest_eq(cb.a, 2)\ntest_eq(cb.learn.a, 1)",
"/home/sgugger/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:16: UserWarning: You are setting an attribute (a) that also exists in the learner. Please be advised that you're not setting it in the learner but in the callback. Use `self.learn.a` if you would like to change it in the learner.\n app.launch_new_instance()\n"
]
],
[
[
"A proper version needs to write `self.learn.a = self.a + 1`:",
"_____no_output_____"
]
],
[
[
"class TstCallback(Callback):\n def batch_begin(self): self.learn.a = self.a + 1\n\nlearn,cb = TstLearner(1),TstCallback()\ncb.learn = learn\ncb('batch_begin')\ntest_eq(cb.learn.a, 2)",
"_____no_output_____"
],
[
"show_doc(Callback.name, name='Callback.name')",
"_____no_output_____"
],
[
"test_eq(TstCallback().name, 'tst')\nclass ComplicatedNameCallback(Callback): pass\ntest_eq(ComplicatedNameCallback().name, 'complicated_name')",
"_____no_output_____"
]
],
[
[
"### TrainEvalCallback -",
"_____no_output_____"
]
],
[
[
"#export\nclass TrainEvalCallback(Callback):\n \"`Callback` that tracks the number of iterations done and properly sets training/eval mode\"\n run_valid = False\n def begin_fit(self):\n \"Set the iter and epoch counters to 0, put the model and the right device\"\n self.learn.train_iter,self.learn.pct_train = 0,0.\n self.model.to(self.dls.device)\n\n def after_batch(self):\n \"Update the iter counter (in training mode)\"\n self.learn.pct_train += 1./(self.n_iter*self.n_epoch)\n self.learn.train_iter += 1\n\n def begin_train(self):\n \"Set the model in training mode\"\n self.learn.pct_train=self.epoch/self.n_epoch\n self.model.train()\n self.learn.training=True\n\n def begin_validate(self):\n \"Set the model in validation mode\"\n self.model.eval()\n self.learn.training=False",
"_____no_output_____"
],
[
"show_doc(TrainEvalCallback, title_level=3)",
"_____no_output_____"
]
],
[
[
"This `Callback` is automatically added in every `Learner` at initialization.",
"_____no_output_____"
]
],
[
[
"#hide\n#test of the TrainEvalCallback below in Learner.fit",
"_____no_output_____"
],
[
"show_doc(TrainEvalCallback.begin_fit)",
"_____no_output_____"
],
[
"show_doc(TrainEvalCallback.after_batch)",
"_____no_output_____"
],
[
"show_doc(TrainEvalCallback.begin_train)",
"_____no_output_____"
],
[
"show_doc(TrainEvalCallback.begin_validate)",
"_____no_output_____"
],
[
"# export\ndefaults.callbacks = [TrainEvalCallback]",
"_____no_output_____"
]
],
[
[
"### GatherPredsCallback -",
"_____no_output_____"
]
],
[
[
"#export\n#TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors.\nclass GatherPredsCallback(Callback):\n \"`Callback` that saves the predictions and targets, optionally `with_loss`\"\n def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None, concat_dim=0):\n store_attr(self, \"with_input,with_loss,save_preds,save_targs,concat_dim\")\n\n def begin_batch(self):\n if self.with_input: self.inputs.append((to_detach(self.xb)))\n\n def begin_validate(self):\n \"Initialize containers\"\n self.preds,self.targets = [],[]\n if self.with_input: self.inputs = []\n if self.with_loss: self.losses = []\n\n def after_batch(self):\n \"Save predictions, targets and potentially losses\"\n preds,targs = to_detach(self.pred),to_detach(self.yb)\n if self.save_preds is None: self.preds.append(preds)\n else: (self.save_preds/str(self.iter)).save_array(preds)\n if self.save_targs is None: self.targets.append(targs)\n else: (self.save_targs/str(self.iter)).save_array(targs[0])\n if self.with_loss:\n bs = find_bs(self.yb)\n loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1)\n self.losses.append(to_detach(loss))\n\n def after_validate(self):\n \"Concatenate all recorded tensors\"\n if self.with_input: self.inputs = detuplify(to_concat(self.inputs, dim=self.concat_dim))\n if not self.save_preds: self.preds = detuplify(to_concat(self.preds, dim=self.concat_dim))\n if not self.save_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim))\n if self.with_loss: self.losses = to_concat(self.losses)\n\n def all_tensors(self):\n res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets]\n if self.with_input: res = [self.inputs] + res\n if self.with_loss: res.append(self.losses)\n return res",
"_____no_output_____"
],
[
"show_doc(GatherPredsCallback, title_level=3)",
"_____no_output_____"
],
[
"show_doc(GatherPredsCallback.begin_validate)",
"_____no_output_____"
],
[
"show_doc(GatherPredsCallback.after_batch)",
"_____no_output_____"
],
[
"show_doc(GatherPredsCallback.after_validate)",
"_____no_output_____"
]
],
[
[
"## Callbacks control flow",
"_____no_output_____"
],
[
"It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't aways want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop.\n\nThis is made possible by raising specific exceptions the training loop will look for (and properly catch).",
"_____no_output_____"
]
],
[
[
"#export\n_ex_docs = dict(\n CancelFitException=\"Skip the rest of this batch and go to `after_batch`\",\n CancelEpochException=\"Skip the rest of the training part of the epoch and go to `after_train`\",\n CancelTrainException=\"Skip the rest of the validation part of the epoch and go to `after_validate`\",\n CancelValidException=\"Skip the rest of this epoch and go to `after_epoch`\",\n CancelBatchException=\"Interrupts training and go to `after_fit`\")\n\nfor c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)",
"_____no_output_____"
],
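[
"# Hedged sketch (added): a minimal callback that raises one of these\n# exceptions; here every batch is cancelled right before the forward pass,\n# so training would jump straight to after_cancel_batch / after_batch.\nclass SkipEveryBatchCallback(Callback):\n    def begin_batch(self): raise CancelBatchException",
"_____no_output_____"
],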
[
"show_doc(CancelBatchException, title_level=3)",
"_____no_output_____"
],
[
"show_doc(CancelTrainException, title_level=3)",
"_____no_output_____"
],
[
"show_doc(CancelValidException, title_level=3)",
"_____no_output_____"
],
[
"show_doc(CancelEpochException, title_level=3)",
"_____no_output_____"
],
[
"show_doc(CancelFitException, title_level=3)",
"_____no_output_____"
]
],
[
[
"You can detect one of those exceptions occurred and add code that executes right after with the following events:\n- `after_cancel_batch`: reached imediately after a `CancelBatchException` before proceeding to `after_batch`\n- `after_cancel_train`: reached imediately after a `CancelTrainException` before proceeding to `after_epoch`\n- `after_cancel_valid`: reached imediately after a `CancelValidException` before proceeding to `after_epoch`\n- `after_cancel_epoch`: reached imediately after a `CancelEpochException` before proceeding to `after_epoch`\n- `after_cancel_fit`: reached imediately after a `CancelFitException` before proceeding to `after_fit`",
"_____no_output_____"
]
],
[
[
"# export\n_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \\\n after_backward after_step after_cancel_batch after_batch after_cancel_train \\\n after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \\\n after_epoch after_cancel_fit after_fit')\n\nmk_class('event', **_events.map_dict(),\n doc=\"All possible events as attributes to get tab-completion and typo-proofing\")",
"_____no_output_____"
],
[
"# export\n_all_ = ['event']",
"_____no_output_____"
],
[
"show_doc(event, name='event', title_level=3)",
"_____no_output_____"
],
[
"test_eq(event.after_backward, 'after_backward')",
"_____no_output_____"
]
],
[
[
"Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.",
"_____no_output_____"
],
[
"## Export -",
"_____no_output_____"
]
],
[
[
"#hide\nfrom nbdev.export import notebook2script\nnotebook2script()",
"Converted 00_torch_core.ipynb.\nConverted 01_layers.ipynb.\nConverted 02_data.load.ipynb.\nConverted 03_data.core.ipynb.\nConverted 04_data.external.ipynb.\nConverted 05_data.transforms.ipynb.\nConverted 06_data.block.ipynb.\nConverted 07_vision.core.ipynb.\nConverted 08_vision.data.ipynb.\nConverted 09_vision.augment.ipynb.\nConverted 09b_vision.utils.ipynb.\nConverted 09c_vision.widgets.ipynb.\nConverted 10_tutorial.pets.ipynb.\nConverted 11_vision.models.xresnet.ipynb.\nConverted 12_optimizer.ipynb.\nConverted 13_callback.core.ipynb.\nConverted 13a_learner.ipynb.\nConverted 13b_metrics.ipynb.\nConverted 14_callback.schedule.ipynb.\nConverted 14a_callback.data.ipynb.\nConverted 15_callback.hook.ipynb.\nConverted 15a_vision.models.unet.ipynb.\nConverted 16_callback.progress.ipynb.\nConverted 17_callback.tracker.ipynb.\nConverted 18_callback.fp16.ipynb.\nConverted 19_callback.mixup.ipynb.\nConverted 20_interpret.ipynb.\nConverted 20a_distributed.ipynb.\nConverted 21_vision.learner.ipynb.\nConverted 22_tutorial.imagenette.ipynb.\nConverted 23_tutorial.transfer_learning.ipynb.\nConverted 30_text.core.ipynb.\nConverted 31_text.data.ipynb.\nConverted 32_text.models.awdlstm.ipynb.\nConverted 33_text.models.core.ipynb.\nConverted 34_callback.rnn.ipynb.\nConverted 35_tutorial.wikitext.ipynb.\nConverted 36_text.models.qrnn.ipynb.\nConverted 37_text.learner.ipynb.\nConverted 38_tutorial.ulmfit.ipynb.\nConverted 40_tabular.core.ipynb.\nConverted 41_tabular.data.ipynb.\nConverted 42_tabular.model.ipynb.\nConverted 43_tabular.learner.ipynb.\nConverted 45_collab.ipynb.\nConverted 50_datablock_examples.ipynb.\nConverted 60_medical.imaging.ipynb.\nConverted 65_medical.text.ipynb.\nConverted 70_callback.wandb.ipynb.\nConverted 71_callback.tensorboard.ipynb.\nConverted 72_callback.neptune.ipynb.\nConverted 97_test_utils.ipynb.\nConverted 99_pytorch_doc.ipynb.\nConverted index.ipynb.\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d043f537b5ebcaa7fed777930e1d4db2893ab984 | 90,470 | ipynb | Jupyter Notebook | 23 - Python for Finance/2_Calculating and Comparing Rates of Return in Python/11_Calculating the Rate of Return of Indices (5:03)/Calculating the Return of Indices - Solution_Yahoo_Py3.ipynb | olayinka04/365-data-science-courses | 7d71215432f0ef07fd3def559d793a6f1938d108 | [
"Apache-2.0"
] | null | null | null | 23 - Python for Finance/2_Calculating and Comparing Rates of Return in Python/11_Calculating the Rate of Return of Indices (5:03)/Calculating the Return of Indices - Solution_Yahoo_Py3.ipynb | olayinka04/365-data-science-courses | 7d71215432f0ef07fd3def559d793a6f1938d108 | [
"Apache-2.0"
] | null | null | null | 23 - Python for Finance/2_Calculating and Comparing Rates of Return in Python/11_Calculating the Rate of Return of Indices (5:03)/Calculating the Return of Indices - Solution_Yahoo_Py3.ipynb | olayinka04/365-data-science-courses | 7d71215432f0ef07fd3def559d793a6f1938d108 | [
"Apache-2.0"
] | null | null | null | 210.395349 | 79,352 | 0.896717 | [
[
[
"## Calculating the Return of Indices",
"_____no_output_____"
],
[
"*Suggested Answers follow (usually there are multiple ways to solve a problem in Python).*",
"_____no_output_____"
],
[
"Consider three famous American market indices – Dow Jones, S&P 500, and the Nasdaq for the period of 1st of January 2000 until today.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nfrom pandas_datareader import data as wb\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"tickers = ['^DJI', '^GSPC', '^IXIC']\n\nind_data = pd.DataFrame()\n\nfor t in tickers:\n ind_data[t] = wb.DataReader(t, data_source='yahoo', start='2000-1-1')['Adj Close']",
"_____no_output_____"
],
[
"ind_data.head()",
"_____no_output_____"
],
[
"ind_data.tail()",
"_____no_output_____"
]
],
[
[
"Normalize the data to 100 and plot the results on a graph. ",
"_____no_output_____"
]
],
[
[
"(ind_data / ind_data.iloc[0] * 100).plot(figsize=(15, 6));\nplt.show()",
"_____no_output_____"
]
],
[
[
"How would you explain the common and the different parts of the behavior of the three indices?",
"_____no_output_____"
],
[
"*****",
"_____no_output_____"
],
[
"Obtain the simple returns of the indices.",
"_____no_output_____"
]
],
[
[
"ind_returns = (ind_data / ind_data.shift(1)) - 1\n\nind_returns.tail()",
"_____no_output_____"
]
],
[
[
"Estimate the average annual return of each index.",
"_____no_output_____"
]
],
[
[
"annual_ind_returns = ind_returns.mean() * 250\nannual_ind_returns",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d043fb00958e850464811f1b282b4c21d89d684e | 46,442 | ipynb | Jupyter Notebook | model_1 - DecisionTree.ipynb | RussellMcGrath/machine-learning-challenge | 2e031f463c599e2db484b1906380fd54cf3eff86 | [
"MIT"
] | null | null | null | model_1 - DecisionTree.ipynb | RussellMcGrath/machine-learning-challenge | 2e031f463c599e2db484b1906380fd54cf3eff86 | [
"MIT"
] | null | null | null | model_1 - DecisionTree.ipynb | RussellMcGrath/machine-learning-challenge | 2e031f463c599e2db484b1906380fd54cf3eff86 | [
"MIT"
] | null | null | null | 48.886316 | 1,450 | 0.491861 | [
[
[
"# Update sklearn to prevent version mismatches\n#!pip install sklearn --upgrade",
"_____no_output_____"
],
[
"# install joblib. This will be used to save your model. \n# Restart your kernel after installing \n#!pip install joblib",
"_____no_output_____"
],
[
"import pandas as pd",
"_____no_output_____"
]
],
[
[
"# Read the CSV and Perform Basic Data Cleaning",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(\"resources/exoplanet_data.csv\")\n# Drop the null columns where all values are null\ndf = df.dropna(axis='columns', how='all')\n# Drop the null rows\ndf = df.dropna()\ndf.head()",
"_____no_output_____"
]
],
[
[
"# Select your features (columns)",
"_____no_output_____"
]
],
[
[
"# Set features. This will also be used as your x values.\n#selected_features = df[['names', 'of', 'selected', 'features', 'here']]\nfeature_list = df.columns.to_list()\nfeature_list.remove(\"koi_disposition\")\n\nremoval_list = []\nfor x in feature_list:\n if \"err\" in x:\n removal_list.append(x)\nprint(removal_list)\n \nselected_features = df[feature_list].drop(columns=removal_list)\nselected_features.head()",
"['koi_period_err1', 'koi_period_err2', 'koi_time0bk_err1', 'koi_time0bk_err2', 'koi_impact_err1', 'koi_impact_err2', 'koi_duration_err1', 'koi_duration_err2', 'koi_depth_err1', 'koi_depth_err2', 'koi_prad_err1', 'koi_prad_err2', 'koi_insol_err1', 'koi_insol_err2', 'koi_steff_err1', 'koi_steff_err2', 'koi_slogg_err1', 'koi_slogg_err2', 'koi_srad_err1', 'koi_srad_err2']\n"
]
],
[
[
"# Create a Train Test Split\n\nUse `koi_disposition` for the y values",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(selected_features, df[\"koi_disposition\"], random_state=13)",
"_____no_output_____"
],
[
"X_train.head()",
"_____no_output_____"
]
],
[
[
"# Pre-processing\n\nScale the data using the MinMaxScaler and perform some feature selection",
"_____no_output_____"
]
],
[
[
"# Scale your data\nfrom sklearn.preprocessing import MinMaxScaler\nX_scaler = MinMaxScaler().fit(X_train)\n#y_scaler = MinMaxScaler().fit(y_train)\n\nX_train_scaled = X_scaler.transform(X_train)\nX_test_scaled = X_scaler.transform(X_test)\n#y_train_scaled = y_scaler.transform(y_train)\n#y_test_scaled = y_scaler.transform(y_train)",
"_____no_output_____"
]
],
[
[
"# Train the Model\n\n",
"_____no_output_____"
]
],
[
[
"from sklearn import tree\ndecision_tree_model = tree.DecisionTreeClassifier()\ndecision_tree_model = decision_tree_model.fit(X_train, y_train)\n\nprint(f\"Training Data Score: {decision_tree_model.score(X_train_scaled, y_train)}\")\nprint(f\"Testing Data Score: {decision_tree_model.score(X_test_scaled, y_test)}\")",
"Training Data Score: 0.6055693305359527\nTesting Data Score: 0.5835240274599542\n"
]
],
[
[
"# Hyperparameter Tuning\n\nUse `GridSearchCV` to tune the model's parameters",
"_____no_output_____"
]
],
[
[
"decision_tree_model.get_params()",
"_____no_output_____"
],
[
"# Create the GridSearchCV model\nfrom sklearn.model_selection import GridSearchCV\nparam_grid = {'C': [1, 5, 10, 50],\n 'gamma': [0.0001, 0.0005, 0.001, 0.005]}\ngrid = GridSearchCV(decision_tree_model, param_grid, verbose=3)",
"_____no_output_____"
],
[
"# Train the model with GridSearch\ngrid.fit(X_train,y_train)",
"Fitting 5 folds for each of 16 candidates, totalling 80 fits\n[CV] C=1, gamma=0.0001 ...............................................\n"
],
[
"print(grid.best_params_)\nprint(grid.best_score_)",
"_____no_output_____"
]
],
[
[
"# Save the Model",
"_____no_output_____"
]
],
[
[
"# save your model by updating \"your_name\" with your name\n# and \"your_model\" with your model variable\n# be sure to turn this in to BCS\n# if joblib fails to import, try running the command to install in terminal/git-bash\nimport joblib\nfilename = 'your_name.sav'\njoblib.dump(your_model, filename)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d043fb14ee11b0e332df0f79ccc52c4088b3c21f | 300,804 | ipynb | Jupyter Notebook | docs/notebooks/negative_binomial.ipynb | PsychoinformaticsLab/bambi | 425b7b88f01f093ed131433d8559bcc6e6d23bf8 | [
"MIT"
] | null | null | null | docs/notebooks/negative_binomial.ipynb | PsychoinformaticsLab/bambi | 425b7b88f01f093ed131433d8559bcc6e6d23bf8 | [
"MIT"
] | null | null | null | docs/notebooks/negative_binomial.ipynb | PsychoinformaticsLab/bambi | 425b7b88f01f093ed131433d8559bcc6e6d23bf8 | [
"MIT"
] | null | null | null | 232.281081 | 92,290 | 0.893961 | [
[
[
"# Negative Binomial Regression (Students absence example)",
"_____no_output_____"
],
[
"## Negative binomial distribution review",
"_____no_output_____"
],
[
"I always experience some kind of confusion when looking at the negative binomial distribution after a while of not working with it. There are so many different definitions that I usually need to read everything more than once. The definition I've first learned, and the one I like the most, says as follows: The negative binomial distribution is the distribution of a random variable that is defined as the number of independent Bernoulli trials until the k-th \"success\". In short, we repeat a Bernoulli experiment until we observe k successes and record the number of trials it required.\n\n$$\nY \\sim \\text{NB}(k, p)\n$$\n\nwhere $0 \\le p \\le 1$ is the probability of success in each Bernoulli trial, $k > 0$, usually integer, and $y \\in \\{k, k + 1, \\cdots\\}$\n\nThe probability mass function (pmf) is \n\n$$\np(y | k, p)= \\binom{y - 1}{y-k}(1 -p)^{y - k}p^k\n$$\n\nIf you, like me, find it hard to remember whether $y$ starts at $0$, $1$, or $k$, try to think twice about the definition of the variable. But how? First, recall we aim to have $k$ successes. And success is one of the two possible outcomes of a trial, so the number of trials can never be smaller than the number of successes. Thus, we can be confident to say that $y \\ge k$.",
"_____no_output_____"
],
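[
"*(Added sketch, not part of the original text.)* A quick way to double-check this pmf is to compare it with SciPy's parametrization, which counts failures instead of trials, so the two agree after shifting by $k$:\n\n```python\nfrom math import comb\n\nfrom scipy.stats import nbinom\n\nk, p = 3, 0.5\nfor y in range(k, 10):  # y = number of trials, y >= k\n    manual = comb(y - 1, y - k) * (1 - p) ** (y - k) * p ** k\n    shifted = nbinom.pmf(y - k, k, p)  # SciPy counts failures\n    assert abs(manual - shifted) < 1e-12\n```",
"_____no_output_____"
],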
[
"But this is not the only way of defining the negative binomial distribution, there are plenty of options! One of the most interesting, and the one you see in [PyMC3](https://docs.pymc.io/api/distributions/discrete.html#pymc3.distributions.discrete.NegativeBinomial), the library we use in Bambi for the backend, is as a continuous mixture. The negative binomial distribution describes a Poisson random variable whose rate is also a random variable (not a fixed constant!) following a gamma distribution. Or in other words, conditional on a gamma-distributed variable $\\mu$, the variable $Y$ has a Poisson distribution with mean $\\mu$.\n\nUnder this alternative definition, the pmf is\n\n$$\n\\displaystyle p(y | k, \\alpha) = \\binom{y + \\alpha - 1}{y} \\left(\\frac{\\alpha}{\\mu + \\alpha}\\right)^\\alpha\\left(\\frac{\\mu}{\\mu + \\alpha}\\right)^y\n$$\n\nwhere $\\mu$ is the parameter of the Poisson distribution (the mean, and variance too!) and $\\alpha$ is the rate parameter of the gamma.",
"_____no_output_____"
]
],
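[
[
"*(Added sketch.)* We can simulate this mixture directly and check that it behaves like a negative binomial with mean $\\mu$ and variance $\\mu + \\mu^2 / \\alpha$ (the parameter values below are arbitrary, chosen only for illustration):\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(0)\nmu, alpha = 5.0, 2.0\n# a gamma with shape alpha and scale mu / alpha has mean mu and variance mu**2 / alpha\nlam = rng.gamma(shape=alpha, scale=mu / alpha, size=200_000)\ny = rng.poisson(lam)\nprint(y.mean(), mu)                 # both close to 5.0\nprint(y.var(), mu + mu**2 / alpha)  # both close to 17.5\n```",
"_____no_output_____"
]
],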
[
[
"import arviz as az\nimport bambi as bmb\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nfrom scipy.stats import nbinom",
"_____no_output_____"
],
[
"az.style.use(\"arviz-darkgrid\")",
"_____no_output_____"
],
[
"import warnings\nwarnings.simplefilter(action='ignore', category=FutureWarning)",
"_____no_output_____"
]
],
[
[
"In SciPy, the definition of the negative binomial distribution differs a little from the one in our introduction. They define $Y$ = Number of failures until k successes and then $y$ starts at 0. In the following plot, we have the probability of observing $y$ failures before we see $k=3$ successes. ",
"_____no_output_____"
]
],
[
[
"y = np.arange(0, 30)\nk = 3\np1 = 0.5\np2 = 0.3",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharey=True)\n\nax[0].bar(y, nbinom.pmf(y, k, p1))\nax[0].set_xticks(np.linspace(0, 30, num=11))\nax[0].set_title(f\"k = {k}, p = {p1}\")\n\nax[1].bar(y, nbinom.pmf(y, k, p2))\nax[1].set_xticks(np.linspace(0, 30, num=11))\nax[1].set_title(f\"k = {k}, p = {p2}\")\n\nfig.suptitle(\"Y = Number of failures until k successes\", fontsize=16);",
"_____no_output_____"
]
],
[
[
"For example, when $p=0.5$, the probability of seeing $y=0$ failures before 3 successes (or in other words, the probability of having 3 successes out of 3 trials) is 0.125, and the probability of seeing $y=3$ failures before 3 successes is 0.156.",
"_____no_output_____"
]
],
[
[
"print(nbinom.pmf(y, k, p1)[0])\nprint(nbinom.pmf(y, k, p1)[3])",
"0.12499999999999997\n0.15624999999999992\n"
]
],
[
[
"Finally, if one wants to show this probability mass function as if we are following the first definition of negative binomial distribution we introduced, we just need to shift the whole thing to the right by adding $k$ to the $y$ values.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharey=True)\n\nax[0].bar(y + k, nbinom.pmf(y, k, p1))\nax[0].set_xticks(np.linspace(3, 30, num=10))\nax[0].set_title(f\"k = {k}, p = {p1}\")\n\nax[1].bar(y + k, nbinom.pmf(y, k, p2))\nax[1].set_xticks(np.linspace(3, 30, num=10))\nax[1].set_title(f\"k = {k}, p = {p2}\")\n\nfig.suptitle(\"Y = Number of trials until k successes\", fontsize=16);",
"_____no_output_____"
]
],
[
[
"## Negative binomial in GLM",
"_____no_output_____"
],
[
"The negative binomial distribution belongs to the exponential family, and the canonical link function is \n\n$$\ng(\\mu_i) = \\log\\left(\\frac{\\mu_i}{k + \\mu_i}\\right) = \\log\\left(\\frac{k}{\\mu_i} + 1\\right)\n$$\n\nbut it is difficult to interpret. The log link is usually preferred because of the analogy with Poisson model, and it also tends to give better results.",
"_____no_output_____"
],
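[
"*(Added sketch with made-up coefficient values.)* Under the log link, coefficients act multiplicatively on the mean, which is what makes it easy to interpret:\n\n```python\nimport numpy as np\n\nbeta_voc, beta_math = 1.5, -0.4   # hypothetical coefficients\neta = beta_voc + beta_math * 1.0  # linear predictor at math_std = 1\nprint(np.exp(eta))                # implied mean days of absence\n# adding 1 to math_std multiplies the mean by np.exp(beta_math), about 0.67\n```",
"_____no_output_____"
],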
[
"## Load and explore Students data\n\nThis example is based on this [UCLA example](https://stats.idre.ucla.edu/r/dae/negative-binomial-regression/).\n\nSchool administrators study the attendance behavior of high school juniors at two schools. Predictors of the **number of days of absence** include the **type of program** in which the student is enrolled and a **standardized test in math**. We have attendance data on 314 high school juniors.\n\nThe variables of insterest in the dataset are\n\n* daysabs: The number of days of absence. It is our response variable.\n* progr: The type of program. Can be one of 'General', 'Academic', or 'Vocational'.\n* math: Score in a standardized math test.",
"_____no_output_____"
]
],
[
[
"data = pd.read_stata(\"https://stats.idre.ucla.edu/stat/stata/dae/nb_data.dta\")",
"_____no_output_____"
],
[
"data.head()",
"_____no_output_____"
]
],
[
[
"We assign categories to the values 1, 2, and 3 of our `\"prog\"` variable.",
"_____no_output_____"
]
],
[
[
"data[\"prog\"] = data[\"prog\"].map({1: \"General\", 2: \"Academic\", 3: \"Vocational\"})\ndata.head()",
"_____no_output_____"
]
],
[
[
"The Academic program is the most popular program (167/314) and General is the least popular one (40/314)",
"_____no_output_____"
]
],
[
[
"data[\"prog\"].value_counts()",
"_____no_output_____"
]
],
[
[
"Let's explore the distributions of math score and days of absence for each of the three programs listed above. The vertical lines indicate the mean values.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(3, 2, figsize=(8, 6), sharex=\"col\")\nprograms = list(data[\"prog\"].unique())\nprograms.sort()\n\nfor idx, program in enumerate(programs):\n # Histogram\n ax[idx, 0].hist(data[data[\"prog\"] == program][\"math\"], edgecolor='black', alpha=0.9)\n ax[idx, 0].axvline(data[data[\"prog\"] == program][\"math\"].mean(), color=\"C1\")\n \n # Barplot\n days = data[data[\"prog\"] == program][\"daysabs\"]\n days_mean = days.mean()\n days_counts = days.value_counts()\n values = list(days_counts.index)\n count = days_counts.values\n ax[idx, 1].bar(values, count, edgecolor='black', alpha=0.9)\n ax[idx, 1].axvline(days_mean, color=\"C1\")\n \n # Titles\n ax[idx, 0].set_title(program)\n ax[idx, 1].set_title(program)\n\nplt.setp(ax[-1, 0], xlabel=\"Math score\")\nplt.setp(ax[-1, 1], xlabel=\"Days of absence\");",
"_____no_output_____"
]
],
[
[
"The first impression we have is that the distribution of math scores is not equal for any of the programs. It looks right-skewed for students under the Academic program, left-skewed for students under the Vocational program, and roughly uniform for students in the General program (although there's a drop in the highest values). Clearly those in the Vocational program has the highest mean for the math score.\n \nOn the other hand, the distribution of the days of absence is right-skewed in all cases. Students in the General program present the highest absence mean while the Vocational group is the one who misses fewer classes on average.",
"_____no_output_____"
],
[
"## Models\n\nWe are interested in measuring the association between the type of the program and the math score with the days of absence. It's also of interest to see if the association between math score and days of absence is different in each type of program. \n\nIn order to answer our questions, we are going to fit and compare two models. The first model uses the type of the program and the math score as predictors. The second model also includes the interaction between these two variables. The score in the math test is going to be standardized in both cases to make things easier for the sampler and save some seconds. A good idea to follow along is to run these models without scaling `math` and comparing how long it took to fit.\n\nWe are going to use a negative binomial likelihood to model the days of absence. But let's stop here and think why we use this likelihood. Earlier, we said that the negative binomial distributon arises when our variable represents the number of trials until we got $k$ successes. However, the number of trials is fixed, i.e. the number of school days in a given year is not a random variable. So if we stick to the definition, we could think of the two alternative views for this problem\n\n* Each of the $n$ days is a trial, and we record whether the student is absent ($y=1$) or not ($y=0$). This corresponds to a binary regression setting, where we could think of logistic regression or something alike. A problem here is that we have the sum of $y$ for a student, but not the $n$.\n* The whole school year represents the space where events occur and we count how many absences we see in that space for each student. This gives us a Poisson regression setting (count of an event in a given space or time).\n\nWe also know that when $n$ is large and $p$ is small, the Binomial distribution can be approximated with a Poisson distribution with $\\lambda = n * p$. We don't know exactly $n$ in this scenario, but we know it is around 180, and we do know that $p$ is small because you can't skip classes all the time. So both modeling approaches should give similar results.\n\nBut then, why negative binomial? Can't we just use a Poisson likelihood?\n\nYes, we can. However, using a Poisson likelihood implies that the mean is equal to the variance, and that is usually an unrealistic assumption. If it turns out the variance is either substantially smaller or greater than the mean, the Poisson regression model results in a poor fit. Alternatively, if we use a negative binomial likelihood, the variance is not forced to be equal to the mean, and there's more flexibility to handle a given dataset, and consequently, the fit tends to better.",
"_____no_output_____"
],
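[
"*(Added check, a sketch.)* A quick way to gauge whether the Poisson's equal mean-variance assumption is plausible here is to compare the sample mean and variance of the response:\n\n```python\n# a variance far above the mean suggests overdispersion,\n# which favors the negative binomial over the Poisson\nprint(data[\"daysabs\"].mean(), data[\"daysabs\"].var())\n```",
"_____no_output_____"
],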
[
"### Model 1 \n\n$$\n\\log{Y_i} = \\beta_1 \\text{Academic}_i + \\beta_2 \\text{General}_i + \\beta_3 \\text{Vocational}_i + \\beta_4 \\text{Math_std}_i\n$$\n\n### Model 2\n\n$$\n\\log{Y_i} = \\beta_1 \\text{Academic}_i + \\beta_2 \\text{General}_i + \\beta_3 \\text{Vocational}_i + \\beta_4 \\text{Math_std}_i\n + \\beta_5 \\text{General}_i \\cdot \\text{Math_std}_i + \\beta_6 \\text{Vocational}_i \\cdot \\text{Math_std}_i\n$$\n\nIn both cases we have the following dummy variables\n\n\n$$\\text{Academic}_i = \n\\left\\{ \n \\begin{array}{ll}\n 1 & \\textrm{if student is under Academic program} \\\\\n 0 & \\textrm{other case} \n \\end{array}\n\\right.\n$$\n\n$$\\text{General}_i = \n\\left\\{ \n \\begin{array}{ll}\n 1 & \\textrm{if student is under General program} \\\\\n 0 & \\textrm{other case} \n \\end{array}\n\\right.\n$$\n\n$$\\text{Vocational}_i = \n\\left\\{ \n \\begin{array}{ll}\n 1 & \\textrm{if student is under Vocational program} \\\\\n 0 & \\textrm{other case} \n \\end{array}\n\\right.\n$$\n\nand $Y$ represents the days of absence.\n\nSo, for example, the first model for a student under the Vocational program reduces to\n$$\n\\log{Y_i} = \\beta_3 + \\beta_4 \\text{Math_std}_i\n$$\n\nAnd one last thing to note is we've decided not to inclide an intercept term, that's why you don't see any $\\beta_0$ above. This choice allows us to represent the effect of each program directly with $\\beta_1$, $\\beta_2$, and $\\beta_3$.",
"_____no_output_____"
],
[
"## Model fit\n\nIt's very easy to fit these models with Bambi. We just pass a formula describing the terms in the model and Bambi will know how to handle each of them correctly. The `0` on the right hand side of `~` simply means we don't want to have the intercept term that is added by default. `scale(math)` tells Bambi we want to use standardize `math` before being included in the model. By default, Bambi uses a log link for negative binomial GLMs. We'll stick to this default here.\n\n### Model 1",
"_____no_output_____"
]
],
[
[
"model_additive = bmb.Model(\"daysabs ~ 0 + prog + scale(math)\", data, family=\"negativebinomial\")\nidata_additive = model_additive.fit()",
"Auto-assigning NUTS sampler...\nInitializing NUTS using jitter+adapt_diag...\nMultiprocess sampling (4 chains in 4 jobs)\nNUTS: [prog, scale(math), daysabs_alpha]\n"
]
],
[
[
"### Model 2\n\nFor this second model we just add `prog:scale(math)` to indicate the interaction. A shorthand would be to use `y ~ 0 + prog*scale(math)`, which uses the **full interaction** operator. In other words, it just means we want to include the interaction between `prog` and `scale(math)` as well as their main effects.",
"_____no_output_____"
]
],
[
[
"model_interaction = bmb.Model(\"daysabs ~ 0 + prog + scale(math) + prog:scale(math)\", data, family=\"negativebinomial\")\nidata_interaction = model_interaction.fit()",
"Auto-assigning NUTS sampler...\nInitializing NUTS using jitter+adapt_diag...\nMultiprocess sampling (4 chains in 4 jobs)\nNUTS: [prog, scale(math), prog:scale(math), daysabs_alpha]\n"
]
],
[
[
"## Explore models",
"_____no_output_____"
],
[
"The first thing we do is calling `az.summary()`. Here we pass the `InferenceData` object the `.fit()` returned. This prints information about the marginal posteriors for each parameter in the model as well as convergence diagnostics.",
"_____no_output_____"
]
],
[
[
"az.summary(idata_additive)",
"_____no_output_____"
],
[
"az.summary(idata_interaction)",
"_____no_output_____"
]
],
[
[
"The information in the two tables above can be visualized in a more concise manner using a forest plot. ArviZ provides us with `plot_forest()`. There we simply pass a list containing the `InferenceData` objects of the models we want to compare.",
"_____no_output_____"
]
],
[
[
"az.plot_forest(\n [idata_additive, idata_interaction],\n model_names=[\"Additive\", \"Interaction\"],\n var_names=[\"prog\", \"scale(math)\"],\n combined=True,\n figsize=(8, 4)\n);",
"_____no_output_____"
]
],
[
[
"One of the first things one can note when seeing this plot is the similarity between the marginal posteriors. Maybe one can conclude that the variability of the marginal posterior of `scale(math)` is slightly lower in the model that considers the interaction, but the difference is not significant. \n\nWe can also make conclusions about the association between the program and the math score with the days of absence. First, we see the posterior for the Vocational group is to the left of the posterior for the two other programs, meaning it is associated with fewer absences (as we have seen when first exploring our data). There also seems to be a difference between General and Academic, where we may conclude the students in the General group tend to miss more classes.\n\nIn addition, the marginal posterior for `math` shows negative values in both cases. This means that students with higher math scores tend to miss fewer classes. Below, we see a forest plot with the posteriors for the coefficients of the interaction effects. Both of them overlap with 0, which means the data does not give much evidence to support there is an interaction effect between program and math score (i.e., the association between math and days of absence is similar for all the programs).",
"_____no_output_____"
]
],
[
[
"az.plot_forest(idata_interaction, var_names=[\"prog:scale(math)\"], combined=True, figsize=(8, 4))\nplt.axvline(0);",
"_____no_output_____"
]
],
[
[
"## Plot predicted mean response\n\nWe finish this example showing how we can get predictions for new data and plot the mean response for each program together with confidence intervals.",
"_____no_output_____"
]
],
[
[
"math_score = np.arange(1, 100)\n\n# This function takes a model and an InferenceData object.\n# It returns of length 3 with predictions for each type of program.\ndef predict(model, idata):\n predictions = []\n for program in programs:\n new_data = pd.DataFrame({\"math\": math_score, \"prog\": [program] * len(math_score)})\n new_idata = model.predict(\n idata, \n data=new_data,\n inplace=False\n )\n prediction = new_idata.posterior.stack(sample=[\"chain\", \"draw\"])[\"daysabs_mean\"].values\n predictions.append(prediction)\n \n return predictions",
"_____no_output_____"
],
[
"prediction_additive = predict(model_additive, idata_additive)\nprediction_interaction = predict(model_interaction, idata_interaction)",
"_____no_output_____"
],
[
"mu_additive = [prediction.mean(1) for prediction in prediction_additive]\nmu_interaction = [prediction.mean(1) for prediction in prediction_interaction]",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(1, 2, sharex=True, sharey=True, figsize = (10, 4))\n\nfor idx, program in enumerate(programs):\n ax[0].plot(math_score, mu_additive[idx], label=f\"{program}\", color=f\"C{idx}\", lw=2)\n az.plot_hdi(math_score, prediction_additive[idx].T, color=f\"C{idx}\", ax=ax[0])\n\n ax[1].plot(math_score, mu_interaction[idx], label=f\"{program}\", color=f\"C{idx}\", lw=2)\n az.plot_hdi(math_score, prediction_interaction[idx].T, color=f\"C{idx}\", ax=ax[1])\n\nax[0].set_title(\"Additive\");\nax[1].set_title(\"Interaction\");\nax[0].set_xlabel(\"Math score\")\nax[1].set_xlabel(\"Math score\")\nax[0].set_ylim(0, 25)\nax[0].legend(loc=\"upper right\");",
"_____no_output_____"
]
],
[
[
"As we can see in this plot, the interval for the mean response for the Vocational program does not overlap with the interval for the other two groups, representing the group of students who miss fewer classes. On the right panel we can also see that including interaction terms does not change the slopes significantly because the posterior distributions of these coefficients have a substantial overlap with 0.",
"_____no_output_____"
],
[
"If you've made it to the end of this notebook and you're still curious about what else you can do with these two models, you're invited to use `az.compare()` to compare the fit of the two models. What do you expect before seeing the plot? Why? Is there anything else you could do to improve the fit of the model?\n\nAlso, if you're still curious about what this model would have looked like with the Poisson likelihood, you just need to replace `family=\"negativebinomial\"` with `family=\"poisson\"` and then you're ready to compare results!",
"_____no_output_____"
]
],
[
[
"%load_ext watermark\n%watermark -n -u -v -iv -w",
"Last updated: Wed Jun 01 2022\n\nPython implementation: CPython\nPython version : 3.9.7\nIPython version : 8.3.0\n\nsys : 3.9.7 (default, Sep 16 2021, 13:09:58) \n[GCC 7.5.0]\npandas : 1.4.2\nnumpy : 1.21.5\narviz : 0.12.1\nmatplotlib: 3.5.1\nbambi : 0.7.1\n\nWatermark: 2.3.0\n\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d0440f89cb8d7eed11ce23cd383723bb1b383f11 | 348,821 | ipynb | Jupyter Notebook | P1.ipynb | owennottank/CarND-LaneLines-P1 | 74ebfab2d4f8f5803ef1eb0665be11af626be8d2 | [
"MIT"
] | null | null | null | P1.ipynb | owennottank/CarND-LaneLines-P1 | 74ebfab2d4f8f5803ef1eb0665be11af626be8d2 | [
"MIT"
] | null | null | null | P1.ipynb | owennottank/CarND-LaneLines-P1 | 74ebfab2d4f8f5803ef1eb0665be11af626be8d2 | [
"MIT"
] | null | null | null | 489.230014 | 114,988 | 0.93814 | [
[
[
"# Self-Driving Car Engineer Nanodegree\n\n\n## Project: **Finding Lane Lines on the Road** \n***\nIn this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip \"raw-lines-example.mp4\" (also contained in this repository) to see what the output should look like after using the helper functions below. \n\nOnce you have a result that looks roughly like \"raw-lines-example.mp4\", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.\n\nIn addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.\n\n---\nLet's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the \"play\" button above) to display the image.\n\n**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the \"Kernel\" menu above and selecting \"Restart & Clear Output\".**\n\n---",
"_____no_output_____"
],
[
"**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**\n\n---\n\n<figure>\n <img src=\"examples/line-segments-example.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> \n </figcaption>\n</figure>\n <p></p> \n<figure>\n <img src=\"examples/laneLines_thirdPass.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your goal is to connect/average/extrapolate line segments to get output like this</p> \n </figcaption>\n</figure>",
"_____no_output_____"
],
[
"**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.** ",
"_____no_output_____"
],
[
"## Import Packages",
"_____no_output_____"
]
],
[
[
"#importing some useful packages\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport cv2\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Read in an Image",
"_____no_output_____"
]
],
[
[
"#reading in an image\nimage = mpimg.imread('test_images/solidWhiteRight.jpg')\n\n#printing out some stats and plotting\nprint('This image is:', type(image), 'with dimensions:', image.shape)\nplt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')",
"This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)\n"
]
],
[
[
"## Ideas for Lane Detection Pipeline",
"_____no_output_____"
],
[
"**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**\n\n`cv2.inRange()` for color selection \n`cv2.fillPoly()` for regions selection \n`cv2.line()` to draw lines on an image given endpoints \n`cv2.addWeighted()` to coadd / overlay two images \n`cv2.cvtColor()` to grayscale or change color \n`cv2.imwrite()` to output images to file \n`cv2.bitwise_and()` to apply a mask to an image\n\n**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**",
"_____no_output_____"
],
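[
"*(Added sketch, illustrative only; not required by the project.)* For example, `cv2.inRange()` and `cv2.bitwise_and()` can be combined to keep only the near-white pixels of an image:\n\n```python\nimport cv2\nimport numpy as np\n\nimg = cv2.imread('test_images/solidWhiteRight.jpg')  # note: cv2 loads as BGR\nmask = cv2.inRange(img, np.array([200, 200, 200]), np.array([255, 255, 255]))\nwhite_only = cv2.bitwise_and(img, img, mask=mask)\n```",
"_____no_output_____"
],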
[
"## Helper Functions",
"_____no_output_____"
],
[
"Below are some helper functions to help get you started. They should look familiar from the lesson!",
"_____no_output_____"
]
],
[
[
"import math\n\ndef grayscale(img):\n \"\"\"Applies the Grayscale transform\n This will return an image with only one color channel\n but NOTE: to see the returned image as grayscale\n (assuming your grayscaled image is called 'gray')\n you should call plt.imshow(gray, cmap='gray')\"\"\"\n return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Or use BGR2GRAY if you read an image with cv2.imread()\n # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n \ndef canny(img, low_threshold, high_threshold):\n \"\"\"Applies the Canny transform\"\"\"\n return cv2.Canny(img, low_threshold, high_threshold)\n\ndef gaussian_blur(img, kernel_size):\n \"\"\"Applies a Gaussian Noise kernel\"\"\"\n return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)\n\ndef region_of_interest(img, vertices):\n \"\"\"\n Applies an image mask.\n \n Only keeps the region of the image defined by the polygon\n formed from `vertices`. The rest of the image is set to black.\n `vertices` should be a numpy array of integer points.\n \"\"\"\n #defining a blank mask to start with\n mask = np.zeros_like(img) \n \n #defining a 3 channel or 1 channel color to fill the mask with depending on the input image\n if len(img.shape) > 2:\n channel_count = img.shape[2] # i.e. 3 or 4 depending on your image\n ignore_mask_color = (255,) * channel_count\n else:\n ignore_mask_color = 255\n \n #filling pixels inside the polygon defined by \"vertices\" with the fill color \n cv2.fillPoly(mask, vertices, ignore_mask_color)\n \n #returning the image only where mask pixels are nonzero\n masked_image = cv2.bitwise_and(img, mask)\n return masked_image\n\n\ndef draw_lines(img, lines, color=[255, 0, 0], thickness=2):\n \"\"\"\n NOTE: this is the function you might want to use as a starting point once you want to \n average/extrapolate the line segments you detect to map out the full\n extent of the lane (going from the result shown in raw-lines-example.mp4\n to that shown in P1_example.mp4). \n \n Think about things like separating line segments by their \n slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left\n line vs. the right line. Then, you can average the position of each of \n the lines and extrapolate to the top and bottom of the lane.\n \n This function draws `lines` with `color` and `thickness`. \n Lines are drawn on the image inplace (mutates the image).\n If you want to make the lines semi-transparent, think about combining\n this function with the weighted_img() function below\n \"\"\"\n for line in lines:\n for x1,y1,x2,y2 in line:\n cv2.line(img, (x1, y1), (x2, y2), color, thickness)\n\ndef hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):\n \"\"\"\n `img` should be the output of a Canny transform.\n \n Returns an image with hough lines drawn.\n \"\"\"\n lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)\n line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)\n draw_lines(line_img, lines)\n return line_img\n\n# Python 3 has support for cool math symbols.\n\ndef weighted_img(img, initial_img, α=0.8, β=1., γ=0.):\n \"\"\"\n `img` is the output of the hough_lines(), An image with lines drawn on it.\n Should be a blank image (all black) with lines drawn on it.\n \n `initial_img` should be the image before any processing.\n \n The result image is computed as follows:\n \n initial_img * α + img * β + γ\n NOTE: initial_img and img must be the same shape!\n \"\"\"\n return cv2.addWeighted(initial_img, α, img, β, γ)",
"_____no_output_____"
]
],
[
[
"## Test Images\n\nBuild your pipeline to work on the images in the directory \"test_images\" \n**You should make sure your pipeline works well on these images before you try the videos.**",
"_____no_output_____"
]
],
[
[
"import os\nlist_img = os.listdir(\"test_images/\")\nos.listdir(\"test_images/\")",
"_____no_output_____"
]
],
[
[
"## Build a Lane Finding Pipeline\n\n",
"_____no_output_____"
],
[
"Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.\n\nTry tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.",
"_____no_output_____"
]
],
[
[
"# TODO: Build your pipeline that will draw lane lines on the test_images\n# then save them to the test_images_output directory.\ndef lane_finding(image):\n # 1. convert image to grayscale\n gray = grayscale(image)\n cv2.imwrite('test_images_output/gray.jpg',gray)\n # 2. Gaussian smoothing of gray image\n kernel_size = 5\n gray_blur = gaussian_blur(gray,kernel_size)\n cv2.imwrite('test_images_output/gray_blur.jpg',gray_blur)\n # 3. canny edge detection\n low_threshold = 50\n high_threshold = 110\n edges = canny(gray_blur, low_threshold,high_threshold)\n cv2.imwrite('test_images_output/edges.jpg',edges) \n # 4. region selection (masking)\n imshape = image.shape\n lb = [0,imshape[0]]\n rb = [imshape[1],imshape[0]]\n lu = [400, 350]\n ru = [600, 350]\n #vertices = np.array([[(0,imshape[0]),(400, 350), (600, 350), (imshape[1],imshape[0])]], dtype=np.int32)\n vertices = np.array([[lb,lu, ru, rb]], dtype=np.int32)\n plt.imshow(image)\n x = [lb[0], rb[0], ru[0], lu[0],lb[0]]\n y = [lb[1], rb[1], ru[1], lu[1],lb[1]]\n plt.plot(x, y, 'b--', lw=2)\n plt.savefig('test_images_output/region_interest.jpg')\n masked_edges = region_of_interest(edges, vertices)\n # 5. Hough transform for lane lines\n rho = 1\n theta = np.pi/180\n threshold = 10\n min_line_len = 50\n max_line_gap = 100\n line_image = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap)\n # 6. show lanes in original image\n lane_image = weighted_img(line_image, image, α=0.8, β=1., γ=0.)\n plt.imshow(lane_image)\n return lane_image\n#lane_image = lane_finding(image)\n#plt.imshow(lane_image)\n# output_dir = \"test_images_output/\"\n# for img in list_img:\n# image = mpimg.imread('test_images/'+img)\n# lane_image = lane_finding(image)\n# img_name = output_dir + img\n# status = cv2.imwrite(img_name, cv2.cvtColor(lane_image, cv2.COLOR_RGB2BGR)) \n # caution:\n # 1. destination folder must exist, or image cannot be saved!\n # 2. cv2.imwrite changes RGB channels, which need to be converted, or the saved image has different colors\n # print(\"Image written to file-system : \",status)\n",
"_____no_output_____"
]
],
[
[
"## Test on Videos\n\nYou know what's cooler than drawing lanes over images? Drawing lanes over video!\n\nWe can test our solution on two provided videos:\n\n`solidWhiteRight.mp4`\n\n`solidYellowLeft.mp4`\n\n**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**\n\n**If you get an error that looks like this:**\n```\nNeedDownloadError: Need ffmpeg exe. \nYou can download it by calling: \nimageio.plugins.ffmpeg.download()\n```\n**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**",
"_____no_output_____"
]
],
[
[
"# Import everything needed to edit/save/watch video clips\nfrom moviepy.editor import VideoFileClip\nfrom IPython.display import HTML",
"_____no_output_____"
],
[
"def process_image(image):\n # NOTE: The output you return should be a color image (3 channel) for processing video below\n # TODO: put your pipeline here,\n # you should return the final output (image where lines are drawn on lanes)\n result = lane_finding(image)\n\n return result",
"_____no_output_____"
]
],
[
[
"Let's try the one with the solid white lane on the right first ...",
"_____no_output_____"
]
],
[
[
"white_output = 'test_videos_output/solidWhiteRight.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip1 = VideoFileClip(\"test_videos/solidWhiteRight.mp4\").subclip(0,5)\nclip1 = VideoFileClip(\"test_videos/solidWhiteRight.mp4\")\nwhite_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!\n%time white_clip.write_videofile(white_output, audio=False)",
"\rt: 0%| | 0/221 [00:00<?, ?it/s, now=None]"
]
],
[
[
"Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.",
"_____no_output_____"
]
],
[
[
"HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(white_output))",
"_____no_output_____"
]
],
[
[
"## Improve the draw_lines() function\n\n**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\".**\n\n**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**",
"_____no_output_____"
],
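[
"*(Added sketch: one possible approach, not the official solution.)* The idea is to split the Hough segments by the sign of their slope, average each group, and extrapolate from the bottom of the image to the top of the region of interest (assumed here to be `y = 350`, matching the pipeline above):\n\n```python\nimport cv2\nimport numpy as np\n\ndef draw_lane_lines(img, lines, color=[255, 0, 0], thickness=10, y_top=350):\n    left, right = [], []\n    for line in lines:\n        for x1, y1, x2, y2 in line:\n            if x1 == x2:\n                continue  # skip vertical segments\n            slope = (y2 - y1) / (x2 - x1)\n            fit = np.polyfit((x1, x2), (y1, y2), 1)  # (slope, intercept)\n            (left if slope < 0 else right).append(fit)\n    y_bottom = img.shape[0]\n    for side in (left, right):\n        if not side:\n            continue\n        m, b = np.mean(side, axis=0)\n        # x = (y - b) / m at the bottom of the image and at the ROI top\n        cv2.line(img, (int((y_bottom - b) / m), y_bottom),\n                 (int((y_top - b) / m), y_top), color, thickness)\n```",
"_____no_output_____"
],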
[
"Now for the one with the solid yellow lane on the left. This one's more tricky!",
"_____no_output_____"
]
],
[
[
"yellow_output = 'test_videos_output/solidYellowLeft.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\nclip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)\n##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')\nyellow_clip = clip2.fl_image(process_image)\n%time yellow_clip.write_videofile(yellow_output, audio=False)",
"\rt: 0%| | 0/125 [00:00<?, ?it/s, now=None]"
],
[
"HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(yellow_output))",
"_____no_output_____"
]
],
[
[
"## Writeup and Submission\n\nIf you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.\n",
"_____no_output_____"
],
[
"## Optional Challenge\n\nTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!",
"_____no_output_____"
]
],
[
[
"challenge_output = 'test_videos_output/challenge.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)\nclip3 = VideoFileClip('test_videos/challenge.mp4')\nchallenge_clip = clip3.fl_image(process_image)\n%time challenge_clip.write_videofile(challenge_output, audio=False)",
"_____no_output_____"
],
[
"HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(challenge_output))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
d04415f2ebc17c529653d82f5ec3222ba893a35e | 26,240 | ipynb | Jupyter Notebook | content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb | MahopacHS/spring2019-DavisGrimm | 3aad8151ddd6618d379ec2ae257cc17592c4c826 | [
"MIT"
] | null | null | null | content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb | MahopacHS/spring2019-DavisGrimm | 3aad8151ddd6618d379ec2ae257cc17592c4c826 | [
"MIT"
] | null | null | null | content/lessons/12/Class-Coding-Lab/CCL-Data-Analysis-With-Pandas.ipynb | MahopacHS/spring2019-DavisGrimm | 3aad8151ddd6618d379ec2ae257cc17592c4c826 | [
"MIT"
] | null | null | null | 66.43038 | 3,497 | 0.503887 | [
[
[
"# In-Class Coding Lab: Data Analysis with Pandas\n\nIn this lab, we will perform a data analysis on the **RMS Titanic** passenger list. The RMS Titanic is one of the most famous ocean liners in history. On April 15, 1912 it sank after colliding with an iceberg in the North Atlantic Ocean. To learn more, read here: https://en.wikipedia.org/wiki/RMS_Titanic \n\nOur goal today is to perform a data analysis on a subset of the passenger list. We're looking for insights as to which types of passengers did and didn't survive. Women? Children? 1st Class Passengers? 3rd class? Etc. \n\nI'm sure you've heard the expression often said during emergencies: \"Women and Children first\" Let's explore this data set and find out if that's true!\n\nBefore we begin you should read up on what each of the columns mean in the data dictionary. You can find this information on this page: https://www.kaggle.com/c/titanic/data \n\n\n## Loading the data set\n\nFirst we load the dataset into a Pandas `DataFrame` variable. The `sample(10)` method takes a random sample of 10 passengers from the data set.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\n\n# this turns off warning messages\nimport warnings\nwarnings.filterwarnings('ignore')\n\npassengers = pd.read_csv('CCL-titanic.csv')\npassengers.sample(10)",
"_____no_output_____"
]
],
[
[
"## How many survived?\n\nOne of the first things we should do is figure out how many of the passengers in this data set survived. Let's start with isolating just the `'Survivied'` column into a series:",
"_____no_output_____"
]
],
[
[
"passengers['Survived'].sample(10)",
"_____no_output_____"
]
],
[
[
"There's too many to display so we just display a random sample of 10 passengers. \n\n- 1 means the passenger survivied\n- 0 means the passenger died\n\nWhat we really want is to count the number of survivors and deaths. We do this by querying the `value_counts()` of the `['Survived']` column, which returns a `Series` of counts, like this:",
"_____no_output_____"
]
],
[
[
"passengers['Survived'].value_counts()",
"_____no_output_____"
]
],
[
[
"Only 342 passengers survived, and 549 perished. Let's observe this same data as percentages of the whole. We do this by adding the `normalize=True` named argument to the `value_counts()` method.",
"_____no_output_____"
]
],
[
[
"passengers['Survived'].value_counts(normalize=True)",
"_____no_output_____"
]
],
[
[
"**Just 38% of passengers in this dataset survived.**",
"_____no_output_____"
],
[
"### Now you Try it!\n\n**FIRST** Write a Pandas expression to display counts of males and female passengers using the `Sex` variable:",
"_____no_output_____"
]
],
[
[
"passengers['Sex'].value_counts()",
"_____no_output_____"
]
],
[
[
"**NEXT** Write a Pandas expression to display male /female passenger counts as a percentage of the whole number of passengers in the data set.",
"_____no_output_____"
]
],
[
[
"passengers['Sex'].value_counts(normalize=True)",
"_____no_output_____"
]
],
[
[
"If you got things working, you now know that **35% of passengers were female**.",
"_____no_output_____"
],
[
"## Who survivies? Men or Women?\n\nWe now know that 35% of the passengers were female, and 65% we male. \n\n**The next think to think about is how do survivial rates affect these numbers? **\n\nIf the ratio is about the same for surviviors only, then we can conclude that your **Sex** did not play a role in your survival on the RMS Titanic. \n\nLet's find out.",
"_____no_output_____"
]
],
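[
[
"*(Added sketch: an alternative view, not part of the lab.)* `pd.crosstab` can answer the same question in a single call by normalizing the counts within each `Sex`:\n\n```python\n# each row sums to 1: the fraction of that Sex that died (0) or survived (1)\npd.crosstab(passengers['Sex'], passengers['Survived'], normalize='index')\n```",
"_____no_output_____"
]
],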
[
[
"survivors = passengers[passengers['Survived'] ==1]\nsurvivors['PassengerId'].count()",
"_____no_output_____"
]
],
[
[
"Still **342** like we discovered originally. Now let's check the **Sex** split among survivors only:",
"_____no_output_____"
]
],
[
[
"survivors['Sex'].value_counts()",
"_____no_output_____"
]
],
[
[
"WOW! That is a huge difference! But you probably can't see it easily. Let's represent it in a `DataFrame`, so that it's easier to visualize:",
"_____no_output_____"
]
],
[
[
"sex_all_series = passengers['Sex'].value_counts()\nsex_survivor_series = survivors['Sex'].value_counts()\n\nsex_comparision_df = pd.DataFrame({ 'AllPassengers' : sex_all_series, 'Survivors' : sex_survivor_series })\nsex_comparision_df['SexSurvivialRate'] = sex_comparision_df['Survivors'] / sex_comparision_df['AllPassengers']\nsex_comparision_df",
"_____no_output_____"
]
],
[
[
" **So, females had a 74% survival rate. Much better than the overall rate of 38%**\n \nWe should probably briefly explain the code above. \n\n- The first two lines get a series count of all passengers by Sex (male / female) and count of survivors by sex\n- The third line creates DataFrame. Recall a pandas dataframe is just a dict of series. We have two keys 'AllPassengers' and 'Survivors'\n- The fourth line creates a new column in the dataframe which is just the survivors / all passengers to get the rate of survival for that Sex.\n\n## Feature Engineering: Adults and Children\n\nSometimes the variable we want to analyze is not readily available, but can be created from existing data. This is commonly referred to as **feature engineering**. The name comes from machine learning where we use data called *features* to predict an outcome. \n\nLet's create a new feature called `'AgeCat'` as follows:\n\n- When **Age** <=18 then 'Child'\n- When **Age** >18 then 'Adult'\n\nThis is easy to do in pandas. First we create the column and set all values to `np.nan` which means 'Not a number'. This is Pandas way of saying no value. Then we set the values based on the rules we set for the feature.",
"_____no_output_____"
]
],
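[
[
"*(Added sketch: an equivalent one-liner, not part of the lab.)* `pd.cut` can build the same kind of categorical feature in a single step:\n\n```python\nimport numpy as np\nimport pandas as pd\n\nages = pd.Series([2, 15, 18, 19, 40, np.nan])\n# right-closed bins: (0, 18] -> 'Child', (18, 120] -> 'Adult'; NaN stays NaN\npd.cut(ages, bins=[0, 18, 120], labels=['Child', 'Adult'])\n```",
"_____no_output_____"
]
],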
[
[
"passengers['AgeCat'] = np.nan # Not a number\npassengers['AgeCat'][ passengers['Age'] <=18 ] = 'Child'\npassengers['AgeCat'][ passengers['Age'] > 18 ] = 'Adult'\npassengers.sample(5)",
"_____no_output_____"
]
],
[
[
"Let's get the count and distrubutions of Adults and Children on the passenger list.",
"_____no_output_____"
]
],
[
[
"passengers['AgeCat'].value_counts()",
"_____no_output_____"
]
],
[
[
"And here's the percentage as a whole:",
"_____no_output_____"
]
],
[
[
"passengers['AgeCat'].value_counts(normalize=True)",
"_____no_output_____"
]
],
[
[
"So close to **80%** of the passengers were adults. Once again let's look at the ratio of `AgeCat` for survivors only. If your age has no bearing of survivial, then the rates should be the same. \n\nHere's the counts of Adult / Children among the survivors only:",
"_____no_output_____"
]
],
[
[
"survivors = passengers[passengers['Survived'] ==1]\nsurvivors['AgeCat'].value_counts()",
"_____no_output_____"
]
],
[
[
"### Now You Try it!\n\nCalculate the `AgeCat` survival rate, similar to how we did for the `SexSurvivalRate`. ",
"_____no_output_____"
]
],
[
[
"agecat_all_series = passengers['AgeCat'].value_counts()\nagecat_survivor_series = survivors['AgeCat'].value_counts()\n\n# todo make a data frame, add AgeCatSurvivialRate column, display dataframe \nagecat_comparision_df = pd.DataFrame({ 'AllPassengers' : agecat_all_series, 'Survivors' : agecat_survivor_series })\nagecat_comparision_df['AgeCatSurvivialRate'] = agecat_comparision_df['Survivors'] / agecat_comparision_df['AllPassengers']\nagecat_comparision_df",
"_____no_output_____"
]
],
[
[
"**So, children had a 50% survival rate, better than the overall rate of 38%**\n\n## So, women and children first?\n\nIt looks like the RMS really did have the motto: \"Women and Children First.\"\n\nHere's our insights. We know:\n\n- If you were a passenger, you had a 38% chance of survival.\n- If you were a female passenger, you had a 74% chance of survival.\n- If you were a child passenger, you had a 50% chance of survival. \n\n\n### Now you try it for Passenger Class\n\nRepeat this process for `Pclass` The passenger class variable. Display the survival rates for each passenger class. What does the information tell you about passenger class and survival rates?\n\nI'll give you a hint... \"Money Talks\"\n",
"_____no_output_____"
]
],
[
[
"# todo: repeat the analysis in the previous cell for Pclass \npclass_all_series = passengers['Pclass'].value_counts()\npclass_survivor_series = survivors['Pclass'].value_counts() \npclass_comparision_df = pd.DataFrame({ 'AllPassengers' : pclass_all_series, 'Survivors' : pclass_survivor_series })\npclass_comparision_df['PclassSurvivialRate'] = pclass_comparision_df['Survivors'] / pclass_comparision_df['AllPassengers']\npclass_comparision_df",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0443516d10b5719f0db1f386a7d98e32ed11f8c | 39,176 | ipynb | Jupyter Notebook | Examples/.ipynb_checkpoints/Example-checkpoint.ipynb | microfluidix/HMRF | d6f64ca99537e638d978e474034cabc64c6f047f | [
"MIT"
] | 1 | 2021-11-16T10:40:25.000Z | 2021-11-16T10:40:25.000Z | Examples/.ipynb_checkpoints/Example-checkpoint.ipynb | gronteix/HMRF | 0c7b4b1ea9cd2934b2c9218e9d48b7c63b819a34 | [
"MIT"
] | null | null | null | Examples/.ipynb_checkpoints/Example-checkpoint.ipynb | gronteix/HMRF | 0c7b4b1ea9cd2934b2c9218e9d48b7c63b819a34 | [
"MIT"
] | 1 | 2021-11-16T10:40:59.000Z | 2021-11-16T10:40:59.000Z | 111.931429 | 29,412 | 0.860833 | [
[
[
"import numpy as np\nfrom scipy.spatial import Delaunay\nimport networkx as nx\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nimport pandas\nimport os\n\nimport graphsonchip.graphmaker\nfrom graphsonchip.graphmaker import make_spheroids\nfrom graphsonchip.graphmaker import graph_generation_func\nfrom graphsonchip.graphplotter import graph_plot",
"_____no_output_____"
]
],
[
[
"## Generate small plot",
"_____no_output_____"
]
],
[
[
"cells = make_spheroids.generate_artificial_spheroid(10)['cells']\nspheroid = {}\nspheroid['cells'] = cells\n\nG = graph_generation_func.generate_voronoi_graph(spheroid, dCells = 0.6)\n\nfor ind in G.nodes():\n \n if ind % 2 == 0:\n \n G.add_node(ind, color = 'r')\n \n else:\n \n G.add_node(ind, color = 'b')",
"_____no_output_____"
],
[
"graph_plot.network_plot_3D(G)\n\n#plt.savefig('example_code.pdf')",
"_____no_output_____"
],
[
"path = r'/Users/gustaveronteix/Documents/Projets/Projets Code/3D-Segmentation-Sebastien/data'\n\nspheroid_data = pandas.read_csv(os.path.join(path, 'spheroid_table_3.csv'))\n\nmapper = {\"centroid-0\": \"z\", \"centroid-1\": \"x\", \"centroid-2\": \"y\"}\nspheroid_data = spheroid_data.rename(columns = mapper)",
"_____no_output_____"
],
[
"spheroid = pr.single_spheroid_process(spheroid_data)\n\nG = graph.generate_voronoi_graph(spheroid, zRatio = 1, dCells = 20)",
"_____no_output_____"
],
[
"for ind in G.nodes():\n \n G.add_node(ind, color ='g')\n \npos =nx.get_node_attributes(G,'pos')\n\ngp.network_plot_3D(G, 5)\n\n#plt.savefig('Example_image.pdf')",
"_____no_output_____"
],
[
"path = r'/Volumes/Multicell/Sebastien/Gustave_Jeremie/spheroid_sample_Francoise.csv'\n\nspheroid_data = pandas.read_csv(path)\n\nspheroid = pr.single_spheroid_process(spheroid_data)\n\nG = graph.generate_voronoi_graph(spheroid, zRatio = 2, dCells = 35)",
"_____no_output_____"
],
[
"for ind in G.nodes():\n \n G.add_node(ind, color = 'r')\n \npos =nx.get_node_attributes(G,'pos')\n\ngp.network_plot_3D(G, 20)\n\nplt.savefig('/Volumes/Multicell/Sebastien/Gustave_Jeremie/spheroid_sample_Francoise.pdf', transparent=True)",
"_____no_output_____"
]
],
[
[
"## Batch analyze the data",
"_____no_output_____"
]
],
[
[
"spheroid_path = './utility/spheroid_sample_1.csv'\n\nspheroid_data = pandas.read_csv(spheroid_path)\n\nspheroid = pr.single_spheroid_process(spheroid_data[spheroid_data['area'] > 200])\n\nG = graph.generate_voronoi_graph(spheroid, zRatio = 2, dCells = 35)",
"_____no_output_____"
],
[
"import glob \nfrom collections import defaultdict\n\ndegree_frame_Vor = pandas.DataFrame()\ni = 0\ndegree_frame_Geo = pandas.DataFrame()\nj = 0\n\ndeg_Vor = []\ndeg_Geo = []\n\nfor fname in glob.glob('./utility/*.csv'):\n \n spheroid_data = pandas.read_csv(fname)\n \n spheroid_data['x'] *= 1.25\n spheroid_data['y'] *= 1.25\n spheroid_data['z'] *= 1.25\n spheroid_data = spheroid_data[spheroid_data['area']>200]\n spheroid = pr.single_spheroid_process(spheroid_data)\n\n \n G = generate_voronoi_graph(spheroid, zRatio = 1, dCells = 55)\n degree_sequence = sorted([d for n, d in G.degree()], reverse=True)\n degreeCount = collections.Counter(degree_sequence)\n \n for key in degreeCount.keys():\n \n N_tot = 0\n \n for k in degreeCount.keys():\n N_tot += degreeCount[k]\n \n degree_frame_Vor.loc[i, 'degree'] = key\n degree_frame_Vor.loc[i, 'p'] = degreeCount[key]/N_tot\n degree_frame_Vor.loc[i, 'fname'] = fname\n i += 1\n \n deg_Vor += list(degree_sequence)\n \n G = graph.generate_geometric_graph(spheroid, zRatio = 1, dCells = 26)\n degree_sequence = sorted([d for n, d in G.degree()], reverse=True)\n degreeCount = collections.Counter(degree_sequence)\n \n for key in degreeCount.keys():\n \n N_tot = 0\n \n for k in degreeCount.keys():\n N_tot += degreeCount[k]\n \n degree_frame_Geo.loc[j, 'degree'] = key\n degree_frame_Geo.loc[j, 'p'] = degreeCount[key]/N_tot\n degree_frame_Geo.loc[j, 'fname'] = fname\n j += 1\n \n deg_Geo.append(degreeCount[key])",
"_____no_output_____"
],
[
"indx = degree_frame_Vor.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).mean(axis = 1).index\nmean = degree_frame_Vor.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).mean(axis = 1).values\nstd = degree_frame_Vor.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).std(axis = 1).values\n\nindx_geo = degree_frame_Geo.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).mean(axis = 1).index\nmean_geo = degree_frame_Geo.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).mean(axis = 1).values\nstd_geo = degree_frame_Geo.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).std(axis = 1).values",
"_____no_output_____"
],
[
"import seaborn as sns\n\nsns.set_style('white')\n\nplt.errorbar(indx+0.3, mean, yerr=std, \n marker = 's', linestyle = ' ', color = 'b',\n label = 'Voronoi')\n\nplt.errorbar(indx_geo-0.3, mean_geo, yerr=std_geo, \n marker = 'o', linestyle = ' ', color = 'r',\n label = 'Geometric')",
"_____no_output_____"
],
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import curve_fit\nfrom scipy.special import factorial\nfrom scipy.stats import poisson\n\n# the bins should be of integer width, because poisson is an integer distribution\nbins = np.arange(25)-0.5\nentries, bin_edges, patches = plt.hist(deg_Vor, bins=bins, density=True, label='Data')\n\n# calculate bin centres\nbin_middles = 0.5 * (bin_edges[1:] + bin_edges[:-1])\n\n\ndef fit_function(k, lamb):\n '''poisson function, parameter lamb is the fit parameter'''\n return poisson.pmf(k, lamb)\n\n\n# fit with curve_fit\nparameters, cov_matrix = curve_fit(fit_function, bin_middles, entries)\n\n# plot poisson-deviation with fitted parameter\nx_plot = np.arange(0, 25)\n\nplt.plot(\n x_plot,\n fit_function(x_plot, *parameters),\n marker='o', linestyle='',\n label='Fit result',\n)\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"parameters",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d044422ad5233952e2af1682664e031952994618 | 19,707 | ipynb | Jupyter Notebook | docs/examples/explainer/README.ipynb | outerbounds/tempo | 0878ae32ed6163a1c5115f20167d991a28535364 | [
"Apache-2.0"
] | 75 | 2021-02-15T09:49:02.000Z | 2022-03-31T02:06:38.000Z | docs/examples/explainer/README.ipynb | outerbounds/tempo | 0878ae32ed6163a1c5115f20167d991a28535364 | [
"Apache-2.0"
] | 106 | 2021-02-13T09:25:19.000Z | 2022-03-25T16:18:00.000Z | docs/examples/explainer/README.ipynb | outerbounds/tempo | 0878ae32ed6163a1c5115f20167d991a28535364 | [
"Apache-2.0"
] | 21 | 2021-02-12T17:12:50.000Z | 2022-03-04T02:09:26.000Z | 30.041159 | 211 | 0.528239 | [
[
[
"# Model Explainer Example\n\n\n\nIn this example we will:\n\n * [Describe the project structure](#Project-Structure)\n * [Train some models](#Train-Models)\n * [Create Tempo artifacts](#Create-Tempo-Artifacts)\n * [Run unit tests](#Unit-Tests)\n * [Save python environment for our classifier](#Save-Classifier-Environment)\n * [Test Locally on Docker](#Test-Locally-on-Docker)\n * [Production on Kubernetes via Tempo](#Production-Option-1-(Deploy-to-Kubernetes-with-Tempo))\n * [Prodiuction on Kuebrnetes via GitOps](#Production-Option-2-(Gitops))",
"_____no_output_____"
],
[
"## Prerequisites\n\nThis notebooks needs to be run in the `tempo-examples` conda environment defined below. Create from project root folder:\n\n```bash\nconda env create --name tempo-examples --file conda/tempo-examples.yaml\n```",
"_____no_output_____"
],
[
"## Project Structure",
"_____no_output_____"
]
],
[
[
"!tree -P \"*.py\" -I \"__init__.py|__pycache__\" -L 2",
"\u001b[01;34m.\u001b[00m\r\n├── \u001b[01;34martifacts\u001b[00m\r\n│ ├── \u001b[01;34mexplainer\u001b[00m\r\n│ └── \u001b[01;34mmodel\u001b[00m\r\n├── \u001b[01;34mk8s\u001b[00m\r\n│ └── \u001b[01;34mrbac\u001b[00m\r\n└── \u001b[01;34msrc\u001b[00m\r\n ├── constants.py\r\n ├── data.py\r\n ├── explainer.py\r\n ├── model.py\r\n └── tempo.py\r\n\r\n6 directories, 5 files\r\n"
]
],
[
[
"## Train Models\n\n * This section is where as a data scientist you do your work of training models and creating artfacts.\n * For this example we train sklearn and xgboost classification models for the iris dataset.",
"_____no_output_____"
]
],
[
[
"import os\nimport logging\nimport numpy as np\nimport json\nimport tempo\n\nfrom tempo.utils import logger\n\nfrom src.constants import ARTIFACTS_FOLDER\n\nlogger.setLevel(logging.ERROR)\nlogging.basicConfig(level=logging.ERROR)",
"_____no_output_____"
],
[
"from src.data import AdultData\n\ndata = AdultData()",
"_____no_output_____"
],
[
"from src.model import train_model\n\nadult_model = train_model(ARTIFACTS_FOLDER, data)",
"Train accuracy: 0.9656333333333333\nTest accuracy: 0.854296875\n"
],
[
"from src.explainer import train_explainer\n\ntrain_explainer(ARTIFACTS_FOLDER, data, adult_model)",
"_____no_output_____"
]
],
[
[
"## Create Tempo Artifacts\n",
"_____no_output_____"
]
],
[
[
"from src.tempo import create_explainer, create_adult_model\n\nsklearn_model = create_adult_model()\nExplainer = create_explainer(sklearn_model)\nexplainer = Explainer()",
"_____no_output_____"
],
[
"# %load src/tempo.py\nimport os\n\nimport dill\nimport numpy as np\nfrom alibi.utils.wrappers import ArgmaxTransformer\nfrom src.constants import ARTIFACTS_FOLDER, EXPLAINER_FOLDER, MODEL_FOLDER\n\nfrom tempo.serve.metadata import ModelFramework\nfrom tempo.serve.model import Model\nfrom tempo.serve.pipeline import PipelineModels\nfrom tempo.serve.utils import pipeline, predictmethod\n\n\ndef create_adult_model() -> Model:\n sklearn_model = Model(\n name=\"income-sklearn\",\n platform=ModelFramework.SKLearn,\n local_folder=os.path.join(ARTIFACTS_FOLDER, MODEL_FOLDER),\n uri=\"gs://seldon-models/test/income/model\",\n )\n\n return sklearn_model\n\n\ndef create_explainer(model: Model):\n @pipeline(\n name=\"income-explainer\",\n uri=\"s3://tempo/explainer/pipeline\",\n local_folder=os.path.join(ARTIFACTS_FOLDER, EXPLAINER_FOLDER),\n models=PipelineModels(sklearn=model),\n )\n class ExplainerPipeline(object):\n def __init__(self):\n pipeline = self.get_tempo()\n models_folder = pipeline.details.local_folder\n\n explainer_path = os.path.join(models_folder, \"explainer.dill\")\n with open(explainer_path, \"rb\") as f:\n self.explainer = dill.load(f)\n\n def update_predict_fn(self, x):\n if np.argmax(self.models.sklearn(x).shape) == 0:\n self.explainer.predictor = self.models.sklearn\n self.explainer.samplers[0].predictor = self.models.sklearn\n else:\n self.explainer.predictor = ArgmaxTransformer(self.models.sklearn)\n self.explainer.samplers[0].predictor = ArgmaxTransformer(self.models.sklearn)\n\n @predictmethod\n def explain(self, payload: np.ndarray, parameters: dict) -> str:\n print(\"Explain called with \", parameters)\n self.update_predict_fn(payload)\n explanation = self.explainer.explain(payload, **parameters)\n return explanation.to_json()\n\n # explainer = ExplainerPipeline()\n # return sklearn_model, explainer\n return ExplainerPipeline\n",
"_____no_output_____"
]
],
[
[
"## Save Explainer\n",
"_____no_output_____"
]
],
[
[
"!ls artifacts/explainer/conda.yaml",
"artifacts/explainer/conda.yaml\r\n"
],
[
"tempo.save(Explainer)",
"Collecting packages...\nPacking environment at '/home/clive/anaconda3/envs/tempo-d87b2b65-e7d9-4e82-9c0d-0f83f48c07a3' to '/home/clive/work/mlops/fork-tempo/docs/examples/explainer/artifacts/explainer/environment.tar.gz'\n[########################################] | 100% Completed | 1min 13.1s\n"
]
],
[
[
"## Test Locally on Docker\n\nHere we test our models using production images but running locally on Docker. This allows us to ensure the final production deployed model will behave as expected when deployed.",
"_____no_output_____"
]
],
[
[
"from tempo import deploy_local\nremote_model = deploy_local(explainer)",
"_____no_output_____"
],
[
"r = json.loads(remote_model.predict(payload=data.X_test[0:1], parameters={\"threshold\":0.90}))\nprint(r[\"data\"][\"anchor\"])",
"['Marital Status = Separated', 'Sex = Female']\n"
],
[
"r = json.loads(remote_model.predict(payload=data.X_test[0:1], parameters={\"threshold\":0.99}))\nprint(r[\"data\"][\"anchor\"])",
"['Marital Status = Separated', 'Sex = Female', 'Capital Gain <= 0.00', 'Education = Associates', 'Country = United-States']\n"
],
[
"remote_model.undeploy()",
"_____no_output_____"
]
],
[
[
"## Production Option 1 (Deploy to Kubernetes with Tempo)\n\n * Here we illustrate how to run the final models in \"production\" on Kubernetes by using Tempo to deploy\n \n### Prerequisites\n \nCreate a Kind Kubernetes cluster with Minio and Seldon Core installed using Ansible as described [here](https://tempo.readthedocs.io/en/latest/overview/quickstart.html#kubernetes-cluster-with-seldon-core).",
"_____no_output_____"
]
],
[
[
"!kubectl apply -f k8s/rbac -n production",
"secret/minio-secret configured\r\nserviceaccount/tempo-pipeline unchanged\r\nrole.rbac.authorization.k8s.io/tempo-pipeline unchanged\r\nrolebinding.rbac.authorization.k8s.io/tempo-pipeline-rolebinding unchanged\r\n"
],
[
"from tempo.examples.minio import create_minio_rclone\nimport os\ncreate_minio_rclone(os.getcwd()+\"/rclone-minio.conf\")",
"_____no_output_____"
],
[
"tempo.upload(sklearn_model)\ntempo.upload(explainer)",
"_____no_output_____"
],
[
"from tempo.serve.metadata import SeldonCoreOptions\nruntime_options = SeldonCoreOptions(**{\n \"remote_options\": {\n \"namespace\": \"production\",\n \"authSecretName\": \"minio-secret\"\n }\n })",
"_____no_output_____"
],
[
"from tempo import deploy_remote\nremote_model = deploy_remote(explainer, options=runtime_options)",
"_____no_output_____"
],
[
"r = json.loads(remote_model.predict(payload=data.X_test[0:1], parameters={\"threshold\":0.95}))\nprint(r[\"data\"][\"anchor\"])",
"['Relationship = Unmarried', 'Marital Status = Separated', 'Capital Gain <= 0.00']\n"
],
[
"remote_model.undeploy()",
"_____no_output_____"
]
],
[
[
"## Production Option 2 (Gitops)\n\n * We create yaml to provide to our DevOps team to deploy to a production cluster\n * We add Kustomize patches to modify the base Kubernetes yaml created by Tempo",
"_____no_output_____"
]
],
[
[
"from tempo import manifest\nfrom tempo.serve.metadata import SeldonCoreOptions\nruntime_options = SeldonCoreOptions(**{\n \"remote_options\": {\n \"namespace\": \"production\",\n \"authSecretName\": \"minio-secret\"\n }\n })\nyaml_str = manifest(explainer, options=runtime_options)\nwith open(os.getcwd()+\"/k8s/tempo.yaml\",\"w\") as f:\n f.write(yaml_str)",
"_____no_output_____"
],
[
"!kustomize build k8s",
"apiVersion: machinelearning.seldon.io/v1\r\nkind: SeldonDeployment\r\nmetadata:\r\n annotations:\r\n seldon.io/tempo-description: \"\"\r\n seldon.io/tempo-model: '{\"model_details\": {\"name\": \"income-explainer\", \"local_folder\":\r\n \"/home/clive/work/mlops/fork-tempo/docs/examples/explainer/artifacts/explainer\",\r\n \"uri\": \"s3://tempo/explainer/pipeline\", \"platform\": \"tempo\", \"inputs\": {\"args\":\r\n [{\"ty\": \"numpy.ndarray\", \"name\": \"payload\"}, {\"ty\": \"builtins.dict\", \"name\":\r\n \"parameters\"}]}, \"outputs\": {\"args\": [{\"ty\": \"builtins.str\", \"name\": null}]},\r\n \"description\": \"\"}, \"protocol\": \"tempo.kfserving.protocol.KFServingV2Protocol\",\r\n \"runtime_options\": {\"runtime\": \"tempo.seldon.SeldonKubernetesRuntime\", \"state_options\":\r\n {\"state_type\": \"LOCAL\", \"key_prefix\": \"\", \"host\": \"\", \"port\": \"\"}, \"insights_options\":\r\n {\"worker_endpoint\": \"\", \"batch_size\": 1, \"parallelism\": 1, \"retries\": 3, \"window_time\":\r\n 0, \"mode_type\": \"NONE\", \"in_asyncio\": false}, \"ingress_options\": {\"ingress\":\r\n \"tempo.ingress.istio.IstioIngress\", \"ssl\": false, \"verify_ssl\": true}, \"replicas\":\r\n 1, \"minReplicas\": null, \"maxReplicas\": null, \"authSecretName\": \"minio-secret\",\r\n \"serviceAccountName\": null, \"add_svc_orchestrator\": false, \"namespace\": \"production\"}}'\r\n labels:\r\n seldon.io/tempo: \"true\"\r\n name: income-explainer\r\n namespace: production\r\nspec:\r\n predictors:\r\n - annotations:\r\n seldon.io/no-engine: \"true\"\r\n componentSpecs:\r\n - spec:\r\n containers:\r\n - name: classifier\r\n resources:\r\n limits:\r\n cpu: 1\r\n memory: 1Gi\r\n requests:\r\n cpu: 500m\r\n memory: 500Mi\r\n graph:\r\n envSecretRefName: minio-secret\r\n implementation: TEMPO_SERVER\r\n modelUri: s3://tempo/explainer/pipeline\r\n name: income-explainer\r\n serviceAccountName: tempo-pipeline\r\n type: MODEL\r\n name: default\r\n replicas: 1\r\n protocol: kfserving\r\n---\r\napiVersion: machinelearning.seldon.io/v1\r\nkind: SeldonDeployment\r\nmetadata:\r\n annotations:\r\n seldon.io/tempo-description: \"\"\r\n seldon.io/tempo-model: '{\"model_details\": {\"name\": \"income-sklearn\", \"local_folder\":\r\n \"/home/clive/work/mlops/fork-tempo/docs/examples/explainer/artifacts/model\",\r\n \"uri\": \"gs://seldon-models/test/income/model\", \"platform\": \"sklearn\", \"inputs\":\r\n {\"args\": [{\"ty\": \"numpy.ndarray\", \"name\": null}]}, \"outputs\": {\"args\": [{\"ty\":\r\n \"numpy.ndarray\", \"name\": null}]}, \"description\": \"\"}, \"protocol\": \"tempo.kfserving.protocol.KFServingV2Protocol\",\r\n \"runtime_options\": {\"runtime\": \"tempo.seldon.SeldonKubernetesRuntime\", \"state_options\":\r\n {\"state_type\": \"LOCAL\", \"key_prefix\": \"\", \"host\": \"\", \"port\": \"\"}, \"insights_options\":\r\n {\"worker_endpoint\": \"\", \"batch_size\": 1, \"parallelism\": 1, \"retries\": 3, \"window_time\":\r\n 0, \"mode_type\": \"NONE\", \"in_asyncio\": false}, \"ingress_options\": {\"ingress\":\r\n \"tempo.ingress.istio.IstioIngress\", \"ssl\": false, \"verify_ssl\": true}, \"replicas\":\r\n 1, \"minReplicas\": null, \"maxReplicas\": null, \"authSecretName\": \"minio-secret\",\r\n \"serviceAccountName\": null, \"add_svc_orchestrator\": false, \"namespace\": \"production\"}}'\r\n labels:\r\n seldon.io/tempo: \"true\"\r\n name: income-sklearn\r\n namespace: production\r\nspec:\r\n predictors:\r\n - annotations:\r\n seldon.io/no-engine: \"true\"\r\n graph:\r\n envSecretRefName: minio-secret\r\n 
implementation: SKLEARN_SERVER\r\n modelUri: gs://seldon-models/test/income/model\r\n name: income-sklearn\r\n type: MODEL\r\n name: default\r\n replicas: 1\r\n protocol: kfserving\r\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0445af96cb923014663b98907b86c66fe5933e9 | 38,563 | ipynb | Jupyter Notebook | community/en/nmt.ipynb | thezwick/examples | baa164aab116c4110315bcfd50a572fee1c55ee6 | [
"Apache-2.0"
] | 3 | 2021-02-02T15:56:47.000Z | 2021-04-08T14:05:54.000Z | community/en/nmt.ipynb | thezwick/examples | baa164aab116c4110315bcfd50a572fee1c55ee6 | [
"Apache-2.0"
] | 7 | 2020-11-13T18:56:38.000Z | 2022-03-12T00:37:46.000Z | community/en/nmt.ipynb | thezwick/examples | baa164aab116c4110315bcfd50a572fee1c55ee6 | [
"Apache-2.0"
] | 8 | 2021-05-01T04:50:58.000Z | 2021-05-01T07:57:04.000Z | 39.633094 | 361 | 0.545186 | [
[
[
"##### Copyright 2018 The TensorFlow Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\").\n\n# Neural Machine Translation with Attention\n\n\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/sequences/_nmt.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/sequences/_nmt.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n View source on GitHub</a>\n </td>\n</table>\n",
"_____no_output_____"
],
[
"# This notebook is still under construction! Please come back later.\n\n\nThis notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation using TF 2.0 APIs. This is an advanced example that assumes some knowledge of sequence to sequence models.\n\nAfter training the model in this notebook, you will be able to input a Spanish sentence, such as *\"¿todavia estan en casa?\"*, and return the English translation: *\"are you still at home?\"*\n\nThe translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence has the model's attention while translating:\n\n<img src=\"https://tensorflow.org/images/spanish-english.png\" alt=\"spanish-english attention plot\">\n\nNote: This example takes approximately 10 mintues to run on a single P100 GPU.",
"_____no_output_____"
]
],
[
[
"import collections\nimport io\nimport itertools\nimport os\nimport random\nimport re\nimport time\nimport unicodedata\n\nimport numpy as np\n\nimport tensorflow as tf\nassert tf.__version__.startswith('2')\n\nimport matplotlib.pyplot as plt\n\nprint(tf.__version__)",
"_____no_output_____"
]
],
[
[
"## Download and prepare the dataset\n\nWe'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:\n\n```\nMay I borrow this book?\t¿Puedo tomar prestado este libro?\n```\n\nThere are a variety of languages available, but we'll use the English-Spanish dataset. For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:\n\n1. Clean the sentences by removing special characters.\n1. Add a *start* and *end* token to each sentence.\n1. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).\n1. Pad each sentence to a maximum length.",
"_____no_output_____"
]
],
[
[
"# TODO(brianklee): This preprocessing should ideally be implemented in TF\n# because preprocessing should be exported as part of the SavedModel.\n\n# Converts the unicode file to ascii\n# https://stackoverflow.com/a/518232/2809427\ndef unicode_to_ascii(s):\n return ''.join(c for c in unicodedata.normalize('NFD', s)\n if unicodedata.category(c) != 'Mn')\n\nSTART_TOKEN = u'<start>'\nEND_TOKEN = u'<end>'\n\ndef preprocess_sentence(w):\n # remove accents; lowercase everything\n w = unicode_to_ascii(w.strip()).lower()\n\n # creating a space between a word and the punctuation following it\n # eg: \"he is a boy.\" => \"he is a boy .\"\n # https://stackoverflow.com/a/3645931/3645946\n w = re.sub(r'([?.!,¿])', r' \\1 ', w)\n\n # replacing everything with space except (a-z, '.', '?', '!', ',')\n w = re.sub(r'[^a-z?.!,¿]+', ' ', w)\n\n # adding a start and an end token to the sentence\n # so that the model know when to start and stop predicting.\n w = '<start> ' + w + ' <end>'\n return w\n",
"_____no_output_____"
],
[
"en_sentence = u\"May I borrow this book?\"\nsp_sentence = u\"¿Puedo tomar prestado este libro?\"\nprint(preprocess_sentence(en_sentence))\nprint(preprocess_sentence(sp_sentence))\n",
"_____no_output_____"
]
],
[
[
"Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset (of course, translation quality degrades with less data).\n",
"_____no_output_____"
]
],
[
[
"def load_anki_data(num_examples=None):\n # Download the file\n path_to_zip = tf.keras.utils.get_file(\n 'spa-eng.zip', origin='http://download.tensorflow.org/data/spa-eng.zip',\n extract=True)\n\n path_to_file = os.path.dirname(path_to_zip) + '/spa-eng/spa.txt'\n with io.open(path_to_file, 'rb') as f:\n lines = f.read().decode('utf8').strip().split('\\n')\n\n # Data comes as tab-separated strings; one per line.\n eng_spa_pairs = [[preprocess_sentence(w) for w in line.split('\\t')] for line in lines]\n\n # The translations file is ordered from shortest to longest, so slicing from\n # the front will select the shorter examples. This also speeds up training.\n if num_examples is not None:\n eng_spa_pairs = eng_spa_pairs[:num_examples]\n eng_sentences, spa_sentences = zip(*eng_spa_pairs)\n\n eng_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')\n spa_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')\n eng_tokenizer.fit_on_texts(eng_sentences)\n spa_tokenizer.fit_on_texts(spa_sentences)\n return (eng_spa_pairs, eng_tokenizer, spa_tokenizer)\n",
"_____no_output_____"
],
[
"NUM_EXAMPLES = 30000\nsentence_pairs, english_tokenizer, spanish_tokenizer = load_anki_data(NUM_EXAMPLES)\n",
"_____no_output_____"
],
[
"# Turn our english/spanish pairs into TF Datasets by mapping words -> integers.\n\ndef make_dataset(eng_spa_pairs, eng_tokenizer, spa_tokenizer):\n eng_sentences, spa_sentences = zip(*eng_spa_pairs)\n eng_ints = eng_tokenizer.texts_to_sequences(eng_sentences)\n spa_ints = spa_tokenizer.texts_to_sequences(spa_sentences)\n\n padded_eng_ints = tf.keras.preprocessing.sequence.pad_sequences(\n eng_ints, padding='post')\n padded_spa_ints = tf.keras.preprocessing.sequence.pad_sequences(\n spa_ints, padding='post')\n\n dataset = tf.data.Dataset.from_tensor_slices((padded_eng_ints, padded_spa_ints))\n return dataset\n",
"_____no_output_____"
],
[
"# Train/test split\ntrain_size = int(len(sentence_pairs) * 0.8)\nrandom.shuffle(sentence_pairs)\ntrain_sentence_pairs, test_sentence_pairs = sentence_pairs[:train_size], sentence_pairs[train_size:]\n# Show length\nlen(train_sentence_pairs), len(test_sentence_pairs)\n",
"_____no_output_____"
],
[
"_english, _spanish = train_sentence_pairs[0]\n_eng_ints, _spa_ints = english_tokenizer.texts_to_sequences([_english])[0], spanish_tokenizer.texts_to_sequences([_spanish])[0]\nprint(\"Source language: \")\nprint('\\n'.join('{:4d} ----> {}'.format(i, word) for i, word in zip(_eng_ints, _english.split())))\nprint(\"Target language: \")\nprint('\\n'.join('{:4d} ----> {}'.format(i, word) for i, word in zip(_spa_ints, _spanish.split())))\n",
"_____no_output_____"
],
[
"# Set up datasets\nBATCH_SIZE = 64\n\ntrain_ds = make_dataset(train_sentence_pairs, english_tokenizer, spanish_tokenizer)\ntest_ds = make_dataset(test_sentence_pairs, english_tokenizer, spanish_tokenizer)\ntrain_ds = train_ds.shuffle(len(train_sentence_pairs)).batch(BATCH_SIZE, drop_remainder=True)\ntest_ds = test_ds.batch(BATCH_SIZE, drop_remainder=True)\n",
"_____no_output_____"
],
[
"print(\"Dataset outputs elements with shape ({}, {})\".format(\n *train_ds.output_shapes))",
"_____no_output_____"
]
],
[
[
"## Write the encoder and decoder model\n\nHere, we'll implement an encoder-decoder model with attention. The following diagram shows that each input word is assigned a weight by the attention mechanism which is then used by the decoder to predict the next word in the sentence.\n\n<img src=\"https://www.tensorflow.org/images/seq2seq/attention_mechanism.jpg\" width=\"500\" alt=\"attention mechanism\">\n\nThe input is put through an encoder model which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*. \n",
"_____no_output_____"
]
],
[
[
"ENCODER_SIZE = DECODER_SIZE = 1024\nEMBEDDING_DIM = 256\nMAX_OUTPUT_LENGTH = train_ds.output_shapes[1][1]\n\ndef gru(units):\n return tf.keras.layers.GRU(units,\n return_sequences=True,\n return_state=True,\n recurrent_activation='sigmoid',\n recurrent_initializer='glorot_uniform')",
"_____no_output_____"
],
[
"class Encoder(tf.keras.Model):\n def __init__(self, vocab_size, embedding_dim, encoder_size):\n super(Encoder, self).__init__()\n self.embedding_dim = embedding_dim\n self.encoder_size = encoder_size\n self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)\n self.gru = gru(encoder_size)\n\n def call(self, x, hidden):\n x = self.embedding(x)\n output, state = self.gru(x, initial_state=hidden)\n return output, state\n\n def initial_hidden_state(self, batch_size):\n return tf.zeros((batch_size, self.encoder_size))\n\n",
"_____no_output_____"
]
],
[
[
"\nFor the decoder, we're using *Bahdanau attention*. Here are the equations that are implemented:\n\n<img src=\"https://www.tensorflow.org/images/seq2seq/attention_equation_0.jpg\" alt=\"attention equation 0\" width=\"800\">\n<img src=\"https://www.tensorflow.org/images/seq2seq/attention_equation_1.jpg\" alt=\"attention equation 1\" width=\"800\">\n\nLets decide on notation before writing the simplified form:\n\n* FC = Fully connected (dense) layer\n* EO = Encoder output\n* H = hidden state\n* X = input to the decoder\n\nAnd the pseudo-code:\n\n* `score = FC(tanh(FC(EO) + FC(H)))`\n* `attention weights = softmax(score, axis = 1)`. Softmax by default is applied on the last axis but here we want to apply it on the *1st axis*, since the shape of score is *(batch_size, max_length, hidden_size)*. `Max_length` is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis.\n* `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis as 1.\n* `embedding output` = The input to the decoder X is passed through an embedding layer.\n* `merged vector = concat(embedding output, context vector)`\n* This merged vector is then given to the GRU\n \nThe shapes of all the vectors at each step have been specified in the comments in the code:",
"_____no_output_____"
]
],
[
[
"class BahdanauAttention(tf.keras.Model):\n def __init__(self, units):\n super(BahdanauAttention, self).__init__()\n self.W1 = tf.keras.layers.Dense(units)\n self.W2 = tf.keras.layers.Dense(units)\n self.V = tf.keras.layers.Dense(1)\n \n def call(self, hidden_state, enc_output):\n # enc_output shape = (batch_size, max_length, hidden_size)\n\n # (batch_size, hidden_size) -> (batch_size, 1, hidden_size)\n hidden_with_time = tf.expand_dims(hidden_state, 1)\n \n # score shape == (batch_size, max_length, 1)\n score = self.V(tf.nn.tanh(self.W1(enc_output) + self.W2(hidden_with_time)))\n # attention_weights shape == (batch_size, max_length, 1)\n attention_weights = tf.nn.softmax(score, axis=1)\n\n # context_vector shape after sum = (batch_size, hidden_size)\n context_vector = attention_weights * enc_output\n context_vector = tf.reduce_sum(context_vector, axis=1)\n \n return context_vector, attention_weights\n\n\nclass Decoder(tf.keras.Model):\n def __init__(self, vocab_size, embedding_dim, decoder_size):\n super(Decoder, self).__init__()\n self.vocab_size = vocab_size\n self.embedding_dim = embedding_dim\n self.decoder_size = decoder_size\n self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)\n self.gru = gru(decoder_size)\n self.fc = tf.keras.layers.Dense(vocab_size)\n self.attention = BahdanauAttention(decoder_size)\n\n def call(self, x, hidden, enc_output):\n context_vector, attention_weights = self.attention(hidden, enc_output)\n\n # x shape after passing through embedding == (batch_size, 1, embedding_dim)\n x = self.embedding(x)\n\n # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)\n x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)\n\n # passing the concatenated vector to the GRU\n output, state = self.gru(x)\n\n # output shape == (batch_size, hidden_size)\n output = tf.reshape(output, (-1, output.shape[2]))\n\n # output shape == (batch_size, vocab)\n x = self.fc(output)\n\n return x, state, attention_weights\n",
"_____no_output_____"
]
],
[
[
"## Define a translate function\n\nNow, let's put the encoder and decoder halves together. The encoder step is fairly straightforward; we'll just reuse Keras's dynamic unroll. For the decoder, we have to make some choices about how to feed the decoder RNN. Overall the process goes as follows:\n\n1. Pass the *input* through the *encoder* which return *encoder output* and the *encoder hidden state*.\n2. The encoder output, encoder hidden state and the <START> token is passed to the decoder.\n3. The decoder returns the *predictions* and the *decoder hidden state*.\n4. The encoder output, hidden state and next token is then fed back into the decoder repeatedly. This has two different behaviors under training and inference:\n - during training, we use *teacher forcing*, where the correct next token is fed into the decoder, regardless of what the decoder emitted.\n - during inference, we use `tf.argmax(predictions)` to select the most likely continuation and feed it back into the decoder. Another strategy that yields more robust results is called *beam search*.\n5. Repeat step 4 until either the decoder emits an <END> token, indicating that it's done translating, or we run into a hardcoded length limit. \n",
"_____no_output_____"
]
],
[
[
"class NmtTranslator(tf.keras.Model):\n def __init__(self, encoder, decoder, start_token_id, end_token_id):\n super(NmtTranslator, self).__init__()\n self.encoder = encoder\n self.decoder = decoder\n # (The token_id should match the decoder's language.)\n # Uses start_token_id to initialize the decoder.\n self.start_token_id = tf.constant(start_token_id)\n # Check for sequence completion using this token_id\n self.end_token_id = tf.constant(end_token_id)\n\n\n @tf.function \n def call(self, inp, target=None, max_output_length=MAX_OUTPUT_LENGTH):\n '''Translate an input.\n\n If target is provided, teacher forcing is used to generate the translation.\n '''\n batch_size = inp.shape[0]\n hidden = self.encoder.initial_hidden_state(batch_size)\n\n enc_output, enc_hidden = self.encoder(inp, hidden)\n dec_hidden = enc_hidden\n\n if target is not None:\n output_length = target.shape[1]\n else:\n output_length = max_output_length\n\n predictions_array = tf.TensorArray(tf.float32, size=output_length - 1)\n attention_array = tf.TensorArray(tf.float32, size=output_length - 1)\n # Feed <START> token to start decoder.\n dec_input = tf.cast([self.start_token_id] * batch_size, tf.int32)\n # Keep track of which sequences have emitted an <END> token\n is_done = tf.zeros([batch_size], dtype=tf.bool)\n\n for i in tf.range(output_length - 1):\n dec_input = tf.expand_dims(dec_input, 1)\n predictions, dec_hidden, attention_weights = self.decoder(dec_input, dec_hidden, enc_output)\n predictions = tf.where(is_done, tf.zeros_like(predictions), predictions)\n \n # Write predictions/attention for later visualization.\n predictions_array = predictions_array.write(i, predictions)\n attention_array = attention_array.write(i, attention_weights)\n\n # Decide what to pass into the next iteration of the decoder.\n if target is not None:\n # if target is known, use teacher forcing\n dec_input = target[:, i + 1]\n else:\n # Otherwise, pick the most likely continuation\n dec_input = tf.argmax(predictions, axis=1, output_type=tf.int32)\n\n # Figure out which sentences just completed.\n is_done = tf.logical_or(is_done, tf.equal(dec_input, self.end_token_id))\n # Exit early if all our sentences are done.\n if tf.reduce_all(is_done):\n break\n\n # [time, batch, predictions] -> [batch, time, predictions]\n return tf.transpose(predictions_array.stack(), [1, 0, 2]), tf.transpose(attention_array.stack(), [1, 0, 2, 3])\n \n \n ",
"_____no_output_____"
]
],
[
[
"## Define the loss function\n\nOur loss function is a word-for-word comparison between true answer and model prediction.\n \n real = [<start>, 'This', 'is', 'the', 'correct', 'answer', '.', '<end>', '<oov>']\n pred = ['This', 'is', 'what', 'the', 'model', 'emitted', '.', '<end>']\n\nresults in comparing\n\n This/This, is/is, the/what, correct/the, answer/model, ./emitted, <end>/.\nand ignoring the rest of the prediction.\n",
"_____no_output_____"
]
],
[
[
"def loss_fn(real, pred):\n # The prediction doesn't include the <start> token.\n real = real[:, 1:]\n # Cut down the prediction to the correct shape (We ignore extra words).\n pred = pred[:, :real.shape[1]]\n # If real == <OOV>, then mask out the loss.\n mask = 1 - np.equal(real, 0)\n loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask\n\n # Sum loss over the time dimension, but average it over the batch dimension.\n return tf.reduce_mean(tf.reduce_sum(loss_, axis=1))\n\n",
"_____no_output_____"
]
],
[
[
"## Configure model directory\n\nWe'll use one directory to save all of our relevant artifacts (summary logs, checkpoints, SavedModel exports, etc.)",
"_____no_output_____"
]
],
[
[
"# Where to save checkpoints, tensorboard summaries, etc.\nMODEL_DIR = '/tmp/tensorflow/nmt_attention'\n\ndef apply_clean():\n if tf.io.gfile.exists(MODEL_DIR):\n print('Removing existing model dir: {}'.format(MODEL_DIR))\n tf.io.gfile.rmtree(MODEL_DIR)\n",
"_____no_output_____"
],
[
"# Optional: remove existing data\napply_clean()",
"_____no_output_____"
],
[
"# Summary writers\ntrain_summary_writer = tf.summary.create_file_writer(\n os.path.join(MODEL_DIR, 'summaries', 'train'), flush_millis=10000)\ntest_summary_writer = tf.summary.create_file_writer(\n os.path.join(MODEL_DIR, 'summaries', 'eval'), flush_millis=10000, name='test')\n",
"_____no_output_____"
],
[
"# Set up all stateful objects\nencoder = Encoder(len(english_tokenizer.word_index) + 1, EMBEDDING_DIM, ENCODER_SIZE)\ndecoder = Decoder(len(spanish_tokenizer.word_index) + 1, EMBEDDING_DIM, DECODER_SIZE)\nstart_token_id = spanish_tokenizer.word_index[START_TOKEN]\nend_token_id = spanish_tokenizer.word_index[END_TOKEN]\nmodel = NmtTranslator(encoder, decoder, start_token_id, end_token_id)\n\n# TODO(brianklee): Investigate whether Adam defaults have changed and whether it affects training.\noptimizer = tf.keras.optimizers.Adam(epsilon=1e-8)# tf.keras.optimizers.SGD(learning_rate=0.01)#Adam()\n",
"_____no_output_____"
],
[
"# Checkpoints\ncheckpoint_dir = os.path.join(MODEL_DIR, 'checkpoints')\ncheckpoint_prefix = os.path.join(checkpoint_dir, 'ckpt')\ncheckpoint = tf.train.Checkpoint(\n encoder=encoder, decoder=decoder, optimizer=optimizer)\n# Restore variables on creation if a checkpoint exists.\ncheckpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))",
"_____no_output_____"
],
[
"# SavedModel exports\nexport_path = os.path.join(MODEL_DIR, 'export')",
"_____no_output_____"
]
],
[
[
"# Visualize the model's output\n\nLet's visualize our model's output. (It hasn't been trained yet, so it will output gibberish.)\n\nWe'll use this visualization to check on the model's progress.",
"_____no_output_____"
]
],
[
[
"def plot_attention(attention, sentence, predicted_sentence):\n fig = plt.figure(figsize=(10,10))\n ax = fig.add_subplot(1, 1, 1)\n ax.matshow(attention, cmap='viridis')\n \n fontdict = {'fontsize': 14}\n \n ax.set_xticklabels([''] + sentence.split(), fontdict=fontdict, rotation=90)\n ax.set_yticklabels([''] + predicted_sentence.split(), fontdict=fontdict)\n ax.xaxis.set_major_locator(ticker.MultipleLocator(1))\n ax.yaxis.set_major_locator(ticker.MultipleLocator(1))\n\n plt.show()\n\ndef ints_to_words(tokenizer, ints):\n return ' '.join(tokenizer.index_word[int(i)] if int(i) != 0 else '<OOV>' for i in ints)\n \ndef sentence_to_ints(tokenizer, sentence):\n sentence = preprocess_sentence(sentence)\n return tf.constant(tokenizer.texts_to_sequences([sentence])[0])\n\ndef translate_and_plot_ints(model, english_tokenizer, spanish_tokenizer, ints, target_ints=None):\n \"\"\"Run translation on a sentence and plot an attention matrix.\n \n Sentence should be passed in as list of integers.\n \"\"\"\n ints = tf.expand_dims(ints, 0)\n predictions, attention = model(ints)\n prediction_ids = tf.squeeze(tf.argmax(predictions, axis=-1))\n attention = tf.squeeze(attention)\n sentence = ints_to_words(english_tokenizer, ints[0])\n predicted_sentence = ints_to_words(spanish_tokenizer, prediction_ids)\n print(u'Input: {}'.format(sentence))\n print(u'Predicted translation: {}'.format(predicted_sentence))\n if target_ints is not None:\n print(u'Correct translation: {}'.format(ints_to_words(spanish_tokenizer, target_ints)))\n plot_attention(attention, sentence, predicted_sentence) \n\ndef translate_and_plot_words(model, english_tokenizer, spanish_tokenizer, sentence, target_sentence=None):\n \"\"\"Same as translate_and_plot_ints, but pass in a sentence as a string.\"\"\"\n english_ints = sentence_to_ints(english_tokenizer, sentence)\n spanish_ints = sentence_to_ints(spanish_tokenizer, target_sentence) if target_sentence is not None else None\n translate_and_plot_ints(model, english_tokenizer, spanish_tokenizer, english_ints, target_ints=spanish_ints)\n",
"_____no_output_____"
],
[
"translate_and_plot_words(model, english_tokenizer, spanish_tokenizer, u\"it's really cold here\", u'hace mucho frio aqui')",
"_____no_output_____"
]
],
[
[
"# Train the model\n",
"_____no_output_____"
]
],
[
[
"def train(model, optimizer, dataset):\n \"\"\"Trains model on `dataset` using `optimizer`.\"\"\"\n start = time.time()\n avg_loss = tf.keras.metrics.Mean('loss', dtype=tf.float32)\n for inp, target in dataset:\n with tf.GradientTape() as tape:\n predictions, _ = model(inp, target=target)\n loss = loss_fn(target, predictions)\n\n avg_loss(loss)\n gradients = tape.gradient(loss, model.trainable_variables)\n optimizer.apply_gradients(zip(gradients, model.trainable_variables))\n if tf.equal(optimizer.iterations % 10, 0):\n tf.summary.scalar('loss', avg_loss.result(), step=optimizer.iterations)\n avg_loss.reset_states()\n rate = 10 / (time.time() - start)\n print('Step #%d\\tLoss: %.6f (%.2f steps/sec)' % (optimizer.iterations, loss, rate))\n start = time.time()\n if tf.equal(optimizer.iterations % 100, 0):\n# translate_and_plot_words(model, english_index, spanish_index, u\"it's really cold here.\", u'hace mucho frio aqui.')\n translate_and_plot_ints(model, english_tokenizer, spanish_tokenizer, inp[0], target[0])\n\ndef test(model, dataset, step_num):\n \"\"\"Perform an evaluation of `model` on the examples from `dataset`.\"\"\"\n avg_loss = tf.keras.metrics.Mean('loss', dtype=tf.float32)\n for inp, target in dataset:\n predictions, _ = model(inp)\n loss = loss_fn(target, predictions)\n avg_loss(loss)\n\n print('Model test set loss: {:0.4f}'.format(avg_loss.result()))\n tf.summary.scalar('loss', avg_loss.result(), step=step_num)\n\n\n ",
"_____no_output_____"
],
[
"NUM_TRAIN_EPOCHS = 10\nfor i in range(NUM_TRAIN_EPOCHS):\n start = time.time()\n with train_summary_writer.as_default():\n train(model, optimizer, train_ds)\n end = time.time()\n print('\\nTrain time for epoch #{} ({} total steps): {}'.format(\n i + 1, optimizer.iterations, end - start))\n with test_summary_writer.as_default():\n test(model, test_ds, optimizer.iterations)\n checkpoint.save(checkpoint_prefix)\n\n",
"_____no_output_____"
],
[
"# TODO(brianklee): This seems to be complaining about input shapes not being set?\n# tf.saved_model.save(model, export_path)",
"_____no_output_____"
]
],
[
[
"## Next steps\n\n* [Download a different dataset](http://www.manythings.org/anki/) to experiment with translations, for example, English to German, or English to French.\n* Experiment with training on a larger dataset, or using more epochs\n",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0446049c9a97517f22ffa826d7f2c8dd7ce1f40 | 57,388 | ipynb | Jupyter Notebook | gym_basics_mountain_car_v0.ipynb | PratikSavla/Reinformemt-Learning-Examples | 6e18a02c9cb328bfffd65bf1746175e6e80dbaf3 | [
"MIT"
] | null | null | null | gym_basics_mountain_car_v0.ipynb | PratikSavla/Reinformemt-Learning-Examples | 6e18a02c9cb328bfffd65bf1746175e6e80dbaf3 | [
"MIT"
] | null | null | null | gym_basics_mountain_car_v0.ipynb | PratikSavla/Reinformemt-Learning-Examples | 6e18a02c9cb328bfffd65bf1746175e6e80dbaf3 | [
"MIT"
] | null | null | null | 53.834897 | 8,102 | 0.609535 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# This code creates a virtual display to draw game images on. \n# If you are running locally, just ignore it\nimport os\nif type(os.environ.get(\"DISPLAY\")) is not str or len(os.environ.get(\"DISPLAY\"))==0:\n !bash ../xvfb start\n %env DISPLAY=:1",
"_____no_output_____"
]
],
[
[
"### OpenAI Gym\n\nWe're gonna spend several next weeks learning algorithms that solve decision processes. We are then in need of some interesting decision problems to test our algorithms.\n\nThat's where OpenAI gym comes into play. It's a python library that wraps many classical decision problems including robot control, videogames and board games.\n\nSo here's how it works:",
"_____no_output_____"
]
],
[
[
"import gym\nenv = gym.make(\"MountainCar-v0\")\n\nplt.imshow(env.render('rgb_array'))\nprint(\"Observation space:\", env.observation_space)\nprint(\"Action space:\", env.action_space)",
"Observation space: Box(2,)\nAction space: Discrete(3)\n"
]
],
[
[
"Note: if you're running this on your local machine, you'll see a window pop up with the image above. Don't close it, just alt-tab away.",
"_____no_output_____"
],
[
"### Gym interface\n\nThe three main methods of an environment are\n* __reset()__ - reset environment to initial state, _return first observation_\n* __render()__ - show current environment state (a more colorful version :) )\n* __step(a)__ - commit action __a__ and return (new observation, reward, is done, info)\n * _new observation_ - an observation right after commiting the action __a__\n * _reward_ - a number representing your reward for commiting action __a__\n * _is done_ - True if the MDP has just finished, False if still in progress\n * _info_ - some auxilary stuff about what just happened. Ignore it ~~for now~~.",
"_____no_output_____"
]
],
[
[
"obs0 = env.reset()\nprint(\"initial observation code:\", obs0)\n\n# Note: in MountainCar, observation is just two numbers: car position and velocity",
"initial observation code: [-0.40266439 0. ]\n"
],
[
"print(\"taking action 2 (right)\")\nnew_obs, reward, is_done, _ = env.step(2)\n\nprint(\"new observation code:\", new_obs)\nprint(\"reward:\", reward)\nprint(\"is game over?:\", is_done)\n\n# Note: as you can see, the car has moved to the riht slightly (around 0.0005)",
"taking action 2 (right)\nnew observation code: [ -4.02551631e-01 1.12759220e-04]\nreward: -1.0\nis game over?: False\n"
]
],
[
[
"### Play with it\n\nBelow is the code that drives the car to the right. \n\nHowever, it doesn't reach the flag at the far right due to gravity. \n\n__Your task__ is to fix it. Find a strategy that reaches the flag. \n\nYou're not required to build any sophisticated algorithms for now, feel free to hard-code :)\n\n_Hint: your action at each step should depend either on __t__ or on __s__._",
"_____no_output_____"
]
],
[
[
"\n# create env manually to set time limit. Please don't change this.\nTIME_LIMIT = 250\nenv = gym.wrappers.TimeLimit(gym.envs.classic_control.MountainCarEnv(),\n max_episode_steps=TIME_LIMIT + 1)\ns = env.reset()\nactions = {'left': 0, 'stop': 1, 'right': 2}\n\n# prepare \"display\"\n%matplotlib notebook\nfig = plt.figure()\nax = fig.add_subplot(111)\nfig.show()\n\ndef policy(t):\n if t>50 and t<100:\n return actions['left']\n else:\n return actions['right']\n\n\nfor t in range(TIME_LIMIT):\n \n s, r, done, _ = env.step(policy(t))\n \n #draw game image on display\n ax.clear()\n ax.imshow(env.render('rgb_array'))\n fig.canvas.draw()\n \n if done:\n print(\"Well done!\")\n break\nelse: \n print(\"Time limit exceeded. Try again.\")",
"\u001b[33mWARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.\u001b[0m\n"
]
],
[
[
"### Submit to coursera",
"_____no_output_____"
]
],
[
[
"from submit import submit_interface\nsubmit_interface(policy, \"[email protected]\", \"IT3M0zwksnBtCJXV\")",
"\u001b[33mWARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.\u001b[0m\nSubmitted to Coursera platform. See results on assignment page!\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d04467ae63eb2901adf12e968a5e9e347096026f | 69,291 | ipynb | Jupyter Notebook | 04_BVP_problems.ipynb | lihu8918/numerical-methods-pdes | e3fb4d972174e832d223afb55ff7307d2305572b | [
"CC0-1.0"
] | null | null | null | 04_BVP_problems.ipynb | lihu8918/numerical-methods-pdes | e3fb4d972174e832d223afb55ff7307d2305572b | [
"CC0-1.0"
] | null | null | null | 04_BVP_problems.ipynb | lihu8918/numerical-methods-pdes | e3fb4d972174e832d223afb55ff7307d2305572b | [
"CC0-1.0"
] | 1 | 2021-05-23T08:01:01.000Z | 2021-05-23T08:01:01.000Z | 31.567654 | 509 | 0.51112 | [
[
[
"<table>\n <tr align=left><td><img align=left src=\"https://i.creativecommons.org/l/by/4.0/88x31.png\">\n <td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli</td>\n</table>",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport numpy\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"# Boundary Value Problems: Discretization",
"_____no_output_____"
],
[
"## Model Problems",
"_____no_output_____"
],
[
"The simplest boundary value problem (BVP) we will run into is the one-dimensional version of Poisson's equation\n$$\n u''(x) = f(x).\n$$",
"_____no_output_____"
],
[
"Usually we solve this equation on a finite interval with either Dirichlet or Neumann boundary condtions. Because there are two derivatives in the equation we need two boundary conditions to solve the PDE (really and ODE in this case) uniquely. To start let us consider the following basic problem\n$$\\begin{aligned}\n u''(x) = f(x) ~~~ \\Omega = [a, b] \\\\\n u(a) = \\alpha ~~~ u(b) = \\beta.\n\\end{aligned}$$",
"_____no_output_____"
],
[
"BVPs of this sort are often the result of looking at the steady-state form of a time dependent PDE. For instance, if we were considering the steady-state solution to the heat equation\n$$\n u_t(x,t) = \\kappa u_{xx}(x,t) + \\Psi(x,t) ~~~~ \\Omega = [0, T] \\times [a, b] \\\\\n u(x, 0) = u^0(x) ~~~ u(a, t) = \\alpha(t) ~~~ u(b, t) = \\beta(t)\n$$\nwe would solve the equation where $u_t = 0$ and arrive at\n$$\n u''(x) = - \\Psi / \\kappa,\n$$\na version of Poisson's equation above.",
"_____no_output_____"
],
[
"In higher spatial dimensions the second derivative turns into a Laplacian. Notation varies for this but all these are equivalent statements:\n$$\\begin{aligned}\n \\nabla^2 u(\\vec{x}) &= f(\\vec{x}) \\\\\n \\Delta u(\\vec{x}) &= f(\\vec{x}) \\\\\n \\sum^N_{i=1} u_{x_i x_i} &= f(\\vec{x}).\n\\end{aligned}$$",
"_____no_output_____"
],
[
"## One-Dimensional Discretization\n\nAs a first approach to solving the one-dimensional Poisson's equation let's break up the domain into `m` points, often called a *mesh* or *grid*. Our goal is to approximate the unknown function $u(x)$ as the mesh points $x_i$. First we can relate the number of mesh points `m` to the distance between with\n$$\n \\Delta x = \\frac{1}{m + 1}.\n$$\nThe mesh points $x_i$ can be written as\n$$\n x_i = a + i \\Delta x.\n$$",
"_____no_output_____"
],
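[
"As a quick sketch of this discretization (assuming a unit interval and $m = 10$ purely for illustration):",
"_____no_output_____"
],
[
"import numpy\n\n# Sketch of the mesh construction above (a, b, and m are illustrative values)\na, b, m = 0.0, 1.0, 10\ndelta_x = (b - a) / (m + 1)\nx = a + numpy.arange(1, m + 1) * delta_x # interior mesh points x_1, ..., x_m\nprint(delta_x)\nprint(x)",
"_____no_output_____"
],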
[
"We can let $\\Delta x$ vary and many of the formulas above have only minor modifications but we will leave that for homework. Notationally we will also adopt the notation\n$$\n U_i \\approx u(x_i)\n$$\nso that $U_i$ are the approximate solution at the grid points and retain the lower-case $u$ to denote the true solution.",
"_____no_output_____"
],
[
"To simplify our discussion let's consider the ODE\n$$\n u''(x) = f(x) ~~~ \\Omega = [0, 1] \\\\\n u(0) = \\alpha ~~~ u(1) = \\beta.\n$$",
"_____no_output_____"
],
[
"Applying the 2nd order, centered difference approximation for the 2nd derivative we have the equation\n$$\n D^2 U_i = \\frac{1}{\\Delta x^2} (U_{i+1} - 2 U_i + U_{i-1})\n$$\nso that we end up with the approximate algebraic expression at every grid point of\n$$\n \\frac{1}{\\Delta x^2} (U_{i+1} - 2 U_i + U_{i-1}) = f(x_i) ~~~ i = 1, 2, 3, \\ldots, m.\n$$",
"_____no_output_____"
],
[
"Note at this point that these algebraic equations are coupled as each $U_i$ depends on its neighbors. This means we can write these as system of coupled equations\n$$\n A U = F.\n$$",
"_____no_output_____"
],
[
"#### Write the system of equations\n$$\n \u001f\\frac{1}{\\Delta x^2} (U_{i+1} - 2 U_i + U_{i-1}) = f(x_i) ~~~ i = 1, 2, 3, \\ldots, m.\n$$\n\nNote the boundary conditions!",
"_____no_output_____"
],
[
"$$\n \\frac{1}{\\Delta x^2} \\begin{bmatrix}\n -2 & 1 & & & \\\\\n 1 & -2 & 1 & & \\\\\n & 1 & -2 & 1 & \\\\\n & & 1 & -2 & 1 \\\\\n & & & 1 & -2 \\\\\n \\end{bmatrix} \\begin{bmatrix}\n U_1 \\\\ U_2 \\\\ U_3 \\\\ U_4 \\\\ U_5\n \\end{bmatrix} = \n \\begin{bmatrix}\n f(x_1) - \\frac{\\alpha}{\\Delta x^2} \\\\ f(x_2) \\\\ f(x_3) \\\\ f(x_4) \\\\ f(x_5) - \\frac{\\beta}{\\Delta x^2} \\\\\n \\end{bmatrix}.\n$$",
"_____no_output_____"
],
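[
"Note that $A$ is tridiagonal, so for large $m$ it is wasteful to store it densely. A minimal sketch of the same matrix assembled in sparse form, assuming SciPy is available (the dense version appears in the example below):",
"_____no_output_____"
],
[
"import numpy\nimport scipy.sparse\n\nm = 5\ndelta_x = 1.0 / (m + 1)\n# Tridiagonal matrix with -2 on the main diagonal and 1 on the off-diagonals\nA_sparse = scipy.sparse.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(m, m), format='csr') / delta_x**2\nprint(A_sparse.toarray())",
"_____no_output_____"
],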
[
"#### Example\n\nWant to solve the BVP\n$$\n u_{xx} = e^x, ~~~~ x \\in [0, 1] ~~~~ \\text{with} ~~~~ u(0) = 0.0, \\text{ and } u(1) = 3\n$$\nvia the construction of a linear system of equations.",
"_____no_output_____"
],
[
"$$\\begin{aligned}\n u_{xx} &= e^x \\\\\n u_x &= A + e^x \\\\\n u &= Ax + B + e^x\\\\\n u(0) &= B + 1 = 0 \\Rightarrow B = -1 \\\\\n u(1) &= A - 1 + e^{1} = 3 \\Rightarrow A = 4 - e\\\\ \n ~\\\\\n u(x) &= (4 - e) x - 1 + e^x\n\\end{aligned}$$",
"_____no_output_____"
]
],
[
[
"# Problem setup\na = 0.0\nb = 1.0\nu_a = 0.0\nu_b = 3.0\nf = lambda x: numpy.exp(x)\nu_true = lambda x: (4.0 - numpy.exp(1.0)) * x - 1.0 + numpy.exp(x)\n\n# Descretization\nm = 10\nx_bc = numpy.linspace(a, b, m + 2)\nx = x_bc[1:-1]\ndelta_x = (b - a) / (m + 1)\n\n# Construct matrix A\nA = numpy.zeros((m, m))\ndiagonal = numpy.ones(m) / delta_x**2\nA += numpy.diag(diagonal * -2.0, 0)\nA += numpy.diag(diagonal[:-1], 1)\nA += numpy.diag(diagonal[:-1], -1)\n\n# Construct RHS\nb = f(x)\nb[0] -= u_a / delta_x**2\nb[-1] -= u_b / delta_x**2\n\n# Solve system\nU = numpy.empty(m + 2)\nU[0] = u_a\nU[-1] = u_b\nU[1:-1] = numpy.linalg.solve(A, b)\n\n# Plot result\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\naxes.plot(x_bc, U, 'o', label=\"Computed\")\naxes.plot(x_bc, u_true(x_bc), 'k', label=\"True\")\naxes.set_title(\"Solution to $u_{xx} = e^x$\")\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"u(x)\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Error Analysis\n\nA natural question to ask given our approximation $U_i$ is how close this is to the true solution $u(x)$ at the grid points $x_i$. To address this we will define the error $E$ as\n$$\n E = U - \\hat{U}\n$$\nwhere $U$ is the vector of the approximate solution and $\\hat{U}$ is the vector composed of the $u(x_i)$. ",
"_____no_output_____"
],
[
"This leaves $E$ as a vector still so often we ask the question how does the norm of $E$ behave given a particular $\\Delta x$. For the $\\infty$-norm we would have\n$$\n ||E||_\\infty = \\max_{1 \\leq i \\leq m} |E_i| = \\max_{1 \\leq i \\leq m} |U_i - u(x_i)|\n$$",
"_____no_output_____"
],
[
"If we can show that $||E||_\\infty$ goes to zero as $\\Delta x \\rightarrow 0$ we can then claim that the approximate solution $U_i$ at any of the grid points $E_i \\rightarrow 0$. If we would like to use other norms we often define slightly modified versions of the norms that also contain the grid width $\\Delta x$ where\n$$\\begin{aligned}\n ||E||_1 &= \\Delta x \\sum^m_{i=1} |E_i| \\\\\n ||E||_2 &= \\left( \\Delta x \\sum^m_{i=1} |E_i|^2 \\right )^{1/2}\n\\end{aligned}$$\nThese are referred to as *grid function norms*.\n\nThe $E$ defined above is known as the *global error*. One of our main goals throughout this course is to understand how $E$ behaves given other factors as we defined later.",
"_____no_output_____"
],
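[
"These grid function norms are straightforward to compute. A small helper, as a sketch, where `E` is a vector of grid values and `delta_x` is the grid spacing:",
"_____no_output_____"
],
[
"import numpy\n\ndef grid_norm(E, delta_x, ord=2):\n # Grid function norms: note the delta_x factors on the 1- and 2-norms\n if ord == numpy.inf:\n return numpy.max(numpy.abs(E))\n elif ord == 1:\n return delta_x * numpy.sum(numpy.abs(E))\n elif ord == 2:\n return numpy.sqrt(delta_x * numpy.sum(numpy.abs(E)**2))",
"_____no_output_____"
],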
[
"### Local Truncation Error\n\nThe *local truncation error* (LTE) can be defined by replacing the approximate solution $U_i$ by the approximate solution $u(x_i)$. Since the algebraic equations are an approximation to the original BVP, we do not expect that the true solution will exactly satisfy these equations, this resulting difference is the LTE.",
"_____no_output_____"
],
[
"For our one-dimensional finite difference approximation from above we have\n$$\n \u001f\\frac{1}{\\Delta x^2} (U_{i+1} - 2 U_i + U_{i-1}) = f(x_i).\n$$",
"_____no_output_____"
],
[
"Replacing $U_i$ with $u(x_i)$ in this equation leads to\n$$\n \u001f\\tau_i = \\frac{1}{\\Delta x^2} (u(x_{i+1}) - 2 u(x_i) + u(x_{i-1})) - f(x_i).\n$$",
"_____no_output_____"
],
[
"In this form the LTE is not as useful but if we assume $u(x)$ is smooth we can repalce the $u(x_i)$ with their Taylor series counterparts, similar to what we did for finite differences. The relevant Taylor series are\n$$\n u(x_{i \\pm 1}) = u(x_i) \\pm u'(x_i) \\Delta x + \\frac{1}{2} u''(x_i) \\Delta x^2 \\pm \\frac{1}{6} u'''(x_i) \\Delta x^3 + \\frac{1}{24} u^{(4)}(x_i) \\Delta x^4 + \\mathcal{O}(\\Delta x^5)\n$$",
"_____no_output_____"
],
[
"This leads to an expression for $\\tau_i$ of\n$$\\begin{aligned}\n \u001f\\tau_i &= \\frac{1}{\\Delta x^2} \\left [u''(x_i) \\Delta x^2 + \\frac{1}{12} u^{(4)}(x_i) \\Delta x^4 + \\mathcal{O}(\\Delta x^5) \\right ] - f(x_i) \\\\\n &= u''(x_i) + \\frac{1}{12} u^{(4)}(x_i) \\Delta x^2 + \\mathcal{O}(\\Delta x^4) - f(x_i) \\\\\n &= \\frac{1}{12} u^{(4)}(x_i) \\Delta x^2 + \\mathcal{O}(\\Delta x^4)\n\\end{aligned}$$\nwhere we note that the true solution would satisfy $u''(x) = f(x)$.\n\nAs long as $ u^{(4)}(x_i) $ remains finite (smooth) we know that $\\tau_i \\rightarrow 0$ as $\\Delta x \\rightarrow 0$",
"_____no_output_____"
],
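[
"We can check this estimate numerically on the example above. The sketch below assumes `A`, `b`, `x`, `delta_x`, and `u_true` are still in scope from the earlier code cell; there $u^{(4)}(x) = e^x$, so the LTE should be bounded by roughly $\\frac{\\Delta x^2}{12} e$:",
"_____no_output_____"
],
[
"# LTE sketch: evaluate the difference scheme at the true solution\n# (assumes A, b, x, delta_x, u_true from the example above)\ntau = numpy.dot(A, u_true(x)) - b\nprint(\"max |tau| =\", numpy.max(numpy.abs(tau)))\nprint(\"delta_x^2 / 12 * e =\", delta_x**2 / 12.0 * numpy.exp(1.0))",
"_____no_output_____"
],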
[
"We can also write the vector of LTEs as\n$$\n \\tau = A \\hat{U} - F\n$$\nwhich implies\n$$\n A\\hat{U} = F + \\tau.\n$$",
"_____no_output_____"
],
[
"### Global Error\n\nWhat we really want to bound is the global error $E$. To relate the global error and LTE we can substitute $E = U - \\hat{U}$ into our expression for the LTE to find\n$$\n A E = -\\tau.\n$$\nThis means that the global error is the solution to the system of equations we defined for the approximation except with $\\tau$ as the forcing function rather than $F$!",
"_____no_output_____"
],
[
"This also implies that the global error $E$ can be thought of as an approximation to similar BVP as we started with where\n$$\n e''(x) = -\\tau(x) ~~~ \\Omega = [0, 1] \\\\\n e(0) = 0 ~~~ e(1) = 0.\n$$",
"_____no_output_____"
],
[
"We can solve this ODE directly by integrating twice since to find to leading order\n$$\\begin{aligned}\n e(x) &\\approx -\\frac{1}{12} \\Delta x^2 u''(x) + \\frac{1}{12} \\Delta x^2 (u''(0) + x (u''(1) - u''(0))) \\\\\n &= \\mathcal{O}(\\Delta x^2) \\\\\n &\\rightarrow 0 ~~~ \\text{as} ~~~ \\Delta x \\rightarrow 0.\n\\end{aligned}$$",
"_____no_output_____"
],
[
"### Stability\n\nWe showed that the continuous analog to $E$, $e(x)$, does in fact go to zero as $\\Delta x \\rightarrow 0$ but what about $E$? Instead of showing something based on $e(x)$ let's look back at the original system of equations for the global error\n$$\n A^{\\Delta x} E^{\\Delta x} = - \\tau^{\\Delta x}\n$$\nwhere we now denote a particular realization of the system by the corresponding grid spacing $\\Delta x$. ",
"_____no_output_____"
],
[
"If we could invert $A^{\\Delta x}$ we could compute $E^{\\Delta x}$ directly. Assuming that we can and taking an appropriate norm we find\n$$\\begin{aligned}\n E^{\\Delta x} &= (A^{\\Delta x})^{-1} \\tau^{\\Delta x} \\\\\n ||E^{\\Delta x}|| &= ||(A^{\\Delta x})^{-1} \\tau^{\\Delta x}|| \\\\\n & \\leq ||(A^{\\Delta x})^{-1} ||~|| \\tau^{\\Delta x}||\n\\end{aligned}$$",
"_____no_output_____"
],
[
"We know that $\\tau^{\\Delta x} \\rightarrow 0$ as $\\Delta x \\rightarrow 0$ already for our example so if we can bound the norm of the matrix $(A^{\\Delta x})^{-1}$ by some constant $C$ for sufficiently small $\\Delta x$ we can then write a bound on the global error of\n$$\n ||E^{\\Delta x}|| \\leq C ||\\tau^{\\Delta x}||\n$$\ndemonstrating that $E^{\\Delta x} \\rightarrow 0 $ at least as fast as $\\tau^{\\Delta x} \\rightarrow 0$.",
"_____no_output_____"
],
[
"We can generalize this observation to all linear BVP problems by supposing that we have a finite difference approximation to a linear BVP of the form\n$$\n A^{\\Delta x} U^{\\Delta x} = F^{\\Delta x},\n$$\nwhere $\\Delta x$ is the grid spacing. ",
"_____no_output_____"
],
[
"We say the approximation is *stable* if $(A^{\\Delta x})^{-1}$ exists $\\forall \\Delta x < \\Delta x_0$ and there is a constant $C$ such that\n$$\n ||(A^{\\Delta x})^{-1}|| \\leq C ~~~~ \\forall \\Delta x < \\Delta x_0.\n$$",
"_____no_output_____"
],
[
"### Consistency\n\nA related and important idea for the discretization of any PDE is that it be consistent with the equation we are approximating. If\n$$\n ||\\tau^{\\Delta x}|| \\rightarrow 0 ~~\\text{as}~~ \\Delta x \\rightarrow 0\n$$\nthen we say an approximation is *consistent* with the differential equation.",
"_____no_output_____"
],
[
"### Convergence\n\nWe now have all the pieces to say something about the global error $E$. A method is said to be *convergent* if\n$$\n ||E^{\\Delta x}|| \\rightarrow 0 ~~~ \\text{as} ~~~ \\Delta x \\rightarrow 0.\n$$",
"_____no_output_____"
],
[
"If an approximation is both consistent ($||\\tau^{\\Delta x}|| \\rightarrow 0 ~~\\text{as}~~ \\Delta x \\rightarrow 0$) and stable ($||E^{\\Delta x}|| \\leq C ||\\tau^{\\Delta x}||$) then the approximation is convergent.",
"_____no_output_____"
],
[
"We have only derived this in the case of linear BVPs but in fact these criteria for convergence are often found to be true for any finite difference approximation (and beyond for that matter). This statement of convergence can also often be strengthened to say\n$$\n \\mathcal{O}(\\Delta x^p) ~\\text{LTE}~ + ~\\text{stability} ~ \\Rightarrow \\mathcal{O}(\\Delta x^p) ~\\text{global error}.\n$$",
"_____no_output_____"
],
[
"It turns out the most difficult part of this process is usually the statement regarding stability. In the next section we will see for our simple example how we can prove stability in the 2-norm.",
"_____no_output_____"
],
[
"### Stability in the 2-Norm\n\nRecalling our definition of stability, we need to show that for our previously defined $A$ that\n$$\n (A^{\\Delta x})^{-1}\n$$\nexists and\n$$\n ||(A^{\\Delta x})^{-1}|| \\leq C ~~~ \\forall \\Delta x < \\Delta x_0\n$$\nfor some $C$. ",
"_____no_output_____"
],
[
"We can show that $A$ is in fact invertible but can we bound the norm of the inverse? Recall that the 2-norm of a symmetric matrix is equal to its spectral radius, i.e.\n$$\n ||A||_2 = \\rho(A) = \\max_{1\\leq p \\leq m} |\\lambda_p|.\n$$",
"_____no_output_____"
],
[
"Since the inverse of $A$ is also symmetric the eigenvalues of $A^{-1}$ are the inverses of the eigenvalues of $A$ implying that\n$$\n ||A^{-1}||_2 = \\rho(A^{-1}) = \\max_{1\\leq p \\leq m} \\left| \\frac{1}{\\lambda_p} \\right| = \\frac{1}{\\max_{1\\leq p \\leq m} \\left| \\lambda_p \\right|}.\n$$",
"_____no_output_____"
],
[
"If none of the $\\lambda_p$ of $A$ are zero for sufficiently small $\\Delta x$ and the rest are finite as $\\Delta x \\rightarrow 0$ we have shown the stability of the approximation.",
"_____no_output_____"
],
[
"The eigenvalues of the matrix $A$ from above can be written as\n$$\n \\lambda_p = \\frac{2}{\\Delta x^2} (\\cos(p \\pi \\Delta x) - 1)\n$$\nwith the corresponding eigenvectors $v^p$ \n$$\n v^p_j = \\sin(p \\pi j \\Delta x)\n$$\nas the $j$th component with $j = 1, \\ldots, m$.",
"_____no_output_____"
],
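[
"These formulas are easy to spot check; the sketch below (assuming `numpy` is imported as in the rest of the notebook) rebuilds the $m \\times m$ matrix $A$ and compares `numpy.linalg.eigvalsh` against the analytic expression:",
"_____no_output_____"
],
[
"m = 10\ndelta_x = 1.0 / (m + 1)\nA = (numpy.diag(numpy.ones(m) * -2.0, 0) +\n     numpy.diag(numpy.ones(m - 1), 1) +\n     numpy.diag(numpy.ones(m - 1), -1)) / delta_x**2\n\np = numpy.arange(1, m + 1)\nlam = 2.0 / delta_x**2 * (numpy.cos(p * numpy.pi * delta_x) - 1.0)\n\n# eigvalsh returns the eigenvalues of a symmetric matrix in ascending order\nprint(numpy.allclose(numpy.sort(lam), numpy.linalg.eigvalsh(A)))",
"_____no_output_____"
],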
[
"#### Check that these are in fact the eigenpairs of the matrix $A$\n$$\n \\lambda_p = \\frac{2}{\\Delta x^2} (\\cos(p \\pi \\Delta x) - 1)\n$$\n\n$$\n v^p_j = \\sin(p \\pi j \\Delta x)\n$$",
"_____no_output_____"
],
[
"$$\\begin{aligned}\n (A v^p)_j &= \\frac{1}{\\Delta x^2} (v^p_{j-1} - 2 v^p_j + v^p_{j+1} ) \\\\\n &= \\frac{1}{\\Delta x^2} (\\sin(p \\pi (j-1) \\Delta x) - 2 \\sin(p \\pi j \\Delta x) + \\sin(p \\pi (j+1) \\Delta x) ) \\\\\n &= \\frac{1}{\\Delta x^2} (\\sin(p \\pi j \\Delta x) \\cos(p \\pi \\Delta x) - 2 \\sin(p \\pi j \\Delta x) + \\sin(p \\pi j \\Delta x) \\cos(p \\pi \\Delta x) \\\\\n &= \\lambda_p v^p_j.\n\\end{aligned}$$",
"_____no_output_____"
],
[
"#### Compute the smallest eigenvalue\nIf we can show that the eigenvalues are away from the origin then we know $||A||_2$ will be bounded. In this case the eigenvalues are negative so we need to show that they are always strictly less than zero.\n\n$$\n \\lambda_p = \\frac{2}{\\Delta x^2} (\\cos(p \\pi \\Delta x) - 1)\n$$\nUse a Taylor series to get an idea of how this behaves with respect to $\\Delta x$",
"_____no_output_____"
],
[
"From these expressions we know that smallest eigenvalue is\n$$\\begin{aligned}\n \\lambda_1 &= \\frac{2}{\\Delta x^2} (\\cos(p \\pi \\Delta x) - 1) \\\\\n &= \\frac{2}{\\Delta x^2} \\left (-\\frac{1}{2} p^2 \\pi^2 \\Delta x^2 + \\frac{1}{24} p^4 \\pi^4 \\Delta x^4 + \\mathcal{O}(\\Delta^6) \\right ) \\\\\n &= -p^2 \\pi^2 + \\mathcal{O}(\\Delta x^2).\n\\end{aligned}$$\n\nNote that this also gives us an error bound as this eigenvalue also will also lead to the largest eigenvalue of the inverse matrix. We can therefore say\n$$\n ||E^{\\Delta x}||_2 \\leq ||(A^{\\Delta x})^{-1}||_2 ||\\tau^{\\Delta x}||_2 \\approx \\frac{1}{\\pi^2} ||\\tau^{\\Delta x}||_2.\n$$",
"_____no_output_____"
],
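[
"The claim that $||(A^{\\Delta x})^{-1}||_2 \\approx 1 / \\pi^2$ for small $\\Delta x$ can also be checked directly; a small sketch with the same assumptions as the previous check:",
"_____no_output_____"
],
[
"for m in [10, 20, 40, 80]:\n    delta_x = 1.0 / (m + 1)\n    A = (numpy.diag(numpy.ones(m) * -2.0, 0) +\n         numpy.diag(numpy.ones(m - 1), 1) +\n         numpy.diag(numpy.ones(m - 1), -1)) / delta_x**2\n    print(\"m = %3d, ||A^-1||_2 = %s\" % (m, numpy.linalg.norm(numpy.linalg.inv(A), ord=2)))\n\nprint(\"1 / pi^2 = %s\" % (1.0 / numpy.pi**2))",
"_____no_output_____"
],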
[
"### Stability in the $\\infty$-Norm\n\nThe straight forward approach to show that $||E||_\\infty \\rightarrow 0$ as $\\Delta x \\rightarrow 0$ would be to use the matrix bound\n$$\n ||E||_\\infty \\leq \\frac{1}{\\sqrt{\\Delta x}} ||E||_2.\n$$",
"_____no_output_____"
],
[
"For our example problem we showed that $||E||_2 = \\mathcal{O}(\\Delta x^2)$ so this implies that we at least know that $||E||_\\infty = \\mathcal{O}(\\Delta x^{3/2})$. This is unfortunate as we expect $||E||_\\infty = \\mathcal{O}(\\Delta x^{2})$ due to the discretization. In order to alleviate this problem let's go back and consider our definition of stability but this time consider the $\\infty$-norm.",
"_____no_output_____"
],
[
"It turns out that our matrix $A$ can be seen as a number of discrete approximations to *Green's functions* in each column. This is more broadly applicable later on so we will spend some time reviewing the theory of Green's functions and apply them to our simple example problem.",
"_____no_output_____"
],
[
"### Green's Functions\n\nConsider the BVP with Dirichlet boundary conditions\n$$\n u''(x) = f(x) ~~~~ \\Omega = [0, 1] \\\\\n u(0) = \\alpha ~~~~ u(1) = \\beta.\n$$\nPick a fixed point $\\bar{x} \\in \\Omega$, the Green's function $G(x ; \\bar{x})$ solves the BVP above with\n$$\n f(x) = \\delta(x - \\bar{x})\n$$\nand $\\alpha = \\beta = 0$. You could think of this as the result of a steady-state problem of the heat equation with a point-loss of heat somewhere in the domain.",
"_____no_output_____"
],
[
"To find the Green's function for our particular problem we can integrate just around the point $\\bar{x}$ near the $\\delta$ function source to find\n$$\\begin{aligned}\n \\int^{\\bar{x} + \\epsilon}_{\\bar{x} - \\epsilon} u''(x) dx &= \\int^{\\bar{x} + \\epsilon}_{\\bar{x} - \\epsilon} \\delta(x - \\bar{x}) dx \\\\\n u'(\\bar{x} + \\epsilon) - u'(\\bar{x} - \\epsilon) &= 1\n\\end{aligned}$$\nrecalling that by definition the integral of the $\\delta$ function must be 1 if the interval of integration includes $\\bar{x}$. We see that the jump in the derivative at $\\bar{x}$ from the left and right should be 1.",
"_____no_output_____"
],
[
"After a bit of algebra we can solve for the Green's function for our model BVP as\n$$\n G(x; \\bar{x}) = \\left \\{ \\begin{aligned}\n (\\bar{x} - 1) x & & 0 \\leq x \\leq \\bar{x} \\\\\n \\bar{x} (x - 1) & & \\bar{x} \\leq x \\leq 1\n \\end{aligned} \\right . .\n$$",
"_____no_output_____"
],
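[
"A quick plot of this piecewise formula (a small sketch assuming `numpy` and `plt` are imported as in the rest of the notebook) makes the kink in the derivative at $\\bar{x}$ easy to see:",
"_____no_output_____"
],
[
"def greens_function(x, xbar):\n    # Piecewise linear with a unit jump in the derivative at xbar\n    return numpy.where(x <= xbar, (xbar - 1.0) * x, xbar * (x - 1.0))\n\nx = numpy.linspace(0.0, 1.0, 100)\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\nfor xbar in [0.25, 0.5, 0.75]:\n    axes.plot(x, greens_function(x, xbar), label=\"xbar = %s\" % xbar)\naxes.set_title(\"Green's Functions\")\naxes.set_xlabel(\"x\")\naxes.legend()\nplt.show()",
"_____no_output_____"
],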
[
"One imporant property of linear PDEs (or ODEs) in general is that they exhibit the principle of superposition. The reason we care about this with Green's functions is that if we have a $f(x)$ composed of two $\\delta$ functions, it turns out the solution is the sum of the corresponding two Green's functions. For instance if\n$$\n f(x) = \\delta(x - 0.25) + 2 \\delta(x - 0.5)\n$$\nthen\n$$\n u(x) = G(x ; 0.25) + 2 G(x ; 0.5).\n$$",
"_____no_output_____"
],
[
"This of course can be extended to an infinite number of $\\delta$ functions so that\n$$\n f(x) = \\int^1_0 f(\\bar{x}) \\delta(x - \\bar{x}) d\\bar{x}\n$$\nand therefore\n$$\n u(x) = \\int^1_0 f(\\bar{x}) G(x ; \\bar{x}) d\\bar{x}.\n$$",
"_____no_output_____"
],
[
"To incorporate the effects of boundary conditions we can continue to add Green's functions to the solution to find the general solution of our original BVP as\n$$\n u(x) = \\alpha (1 - x) + \\beta x + \\int^1_0 f(\\bar{x}) G(x ; \\bar{x}) d\\bar{x}.\n$$",
"_____no_output_____"
],
[
"So why did we do all this? Well the Green's function solution representation above can be thought of as a linear operator on the function $f(x)$. Written in perhaps more familiar terms we have\n$$\n \\mathcal{A} u = f ~~~~ u = \\mathcal{A}^{-1} f.\n$$\nWe see now that our linear operator $\\mathcal{A}$ may be the continuous analog to our discrete matrix $A$.",
"_____no_output_____"
],
[
"To proceed we will modify our original matrix $A$ into a slightly different version based on the same discretization. Instead of moving the boundary terms to the right-hand-side of the equation instead we will introduce two new \"unknowns\", called *ghost cells*, that will be placed at the edges of the grid. We will label these $U_0$ and $U_{m+1}$. In reality we know the value of these points, they are the boundary conditions!",
"_____no_output_____"
],
[
"The modified system then looks like\n$$\n A = \\frac{1}{\\Delta x^2} \\begin{bmatrix}\n \\Delta x^2 & 0 \\\\\n 1 & -2 & 1 \\\\\n & 1 & -2 & 1 \\\\\n & & \\ddots & \\ddots & \\ddots \\\\\n & & & 1 & -2 & 1 \\\\\n & & & & 1 & -2 & 1 \\\\\n & & & & & 0 & \\Delta x^2\n \\end{bmatrix} ~~~ U = \\begin{bmatrix}\n U_0 \\\\ U_1 \\\\ \\vdots \\\\ U_m \\\\ U_{m+1}\n \\end{bmatrix}~~~~~ F = \\begin{bmatrix}\n \\alpha \\\\ f(x_1) \\\\ \\vdots \\\\ f(x_{m}) \\\\ \\beta\n \\end{bmatrix} \n$$",
"_____no_output_____"
],
[
"This has the advantage later that we can implement more general boundary conditions and it isolates the algebraic dependence on the boundary conditions. The drawbacks are that the matrix no longer has as simple of a form as before.",
"_____no_output_____"
],
[
"Let's finally turn to the form of the matrix $A^{-1}$. Introducing a bit more notation, let $A_{j}$ denote the $j$th column and $A_{ij}$ denote the $i$th $j$th element of the matrix $A$.\n\nWe know that \n$$\n A A^{-1}_j = e_j\n$$\nwhere $e_j$ is the unit vector with $1$ in the $j$th row ($j$th column of the identity matrix). ",
"_____no_output_____"
],
[
"Note that the above system has some similarities to a discretized version of the Green's function problem. Here $e_j$ represents the $\\delta$ function, $A$ the original operator, and $A^{-1}_j$ the effect that the $j$th $\\delta$ function (corresponding to the $\\bar{x}$) has on the full solution.",
"_____no_output_____"
],
[
"It turns out that we can write down the inverse matrix directly using Green's functions (see LeVeque for the details) but we end up with\n$$\n A^{-1}_{ij} = \\Delta xG(x_i ; x_j) = \\left \\{ \\begin{aligned}\n \\Delta x (x_j - 1) x_i, & & i &= 1,2, \\ldots j \\\\\n \\Delta x (x_i - 1) x_j, & & i &= j, j+1, \\ldots , m\n \\end{aligned} \\right . .\n$$",
"_____no_output_____"
],
[
"We can also write the effective right-hand side of our system as\n$$\n F = \\alpha e_0 + \\beta e_{m+1} + \\sum^m_{j=1} f_j e_j\n$$\nand finally the solution as\n$$\n U = \\alpha A^{-1}_{0} + \\beta A^{-1}_{m+1} + \\sum^m_{j=1} f_j A^{-1}_{j}\n$$\nwhose elements are\n$$\n U_i = \\alpha(1 - x_i) + \\beta x_i + \\Delta x \\sum^m_{j=1} f_j G(x_i ; x_j).\n$$",
"_____no_output_____"
],
[
"Alright, where has all this gotten us? Well, since we now know what the form of $A^{-1}$ is we may be able to get at the $\\infty$-norm of this matrix. Recall that the $\\infty$-norm of a matrix (induced from the $\\infty$-norm) for a vector is\n$$\n || C ||_\\infty = \\max_{0\\leq i \\leq m+1} \\sum^{m+1}_{j=0} |C_{ij}|\n$$",
"_____no_output_____"
],
[
"Note that due to the form of the matrix $A^{-1}$ the first row's sum is \n$$\n \\sum^{m+1}_{j=0} A_{0j}^{-1} = 1\n$$\nas is the last rows $A^{-1}_{m+1}$. We also know that for the other rows $A^{-1}_{i,0} < 1$ and $A^{-1}_{i,m+1} < 1$. ",
"_____no_output_____"
],
[
"The intermediate rows are also all bounded as\n$$\n \\sum^{m+1}_{j=0} |A^{-1}_{ij}| \\leq 1 + 1 + m \\Delta x < 3\n$$\nusing the fact we know that\n$$\n \\Delta x = \\frac{1}{m+1}.\n$$\n\nThis completes our stability wanderings as we can now say definitively that\n$$\n ||A^{-1}||_\\infty < 3 ~~~ \\forall \\Delta x.\n$$",
"_____no_output_____"
],
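[
"This bound is simple to verify numerically; the sketch below assembles the modified (ghost cell) matrix from above and prints $||A^{-1}||_\\infty$ for increasing $m$ (assuming `numpy` as before); the values should all stay below 3:",
"_____no_output_____"
],
[
"for m in [10, 20, 40, 80]:\n    delta_x = 1.0 / (m + 1)\n    A = numpy.zeros((m + 2, m + 2))\n    A += numpy.diag(numpy.ones(m + 2) * -2.0 / delta_x**2, 0)\n    A += numpy.diag(numpy.ones(m + 1) / delta_x**2, 1)\n    A += numpy.diag(numpy.ones(m + 1) / delta_x**2, -1)\n\n    # Boundary rows of the modified system\n    A[0, :] = 0.0\n    A[0, 0] = 1.0\n    A[-1, :] = 0.0\n    A[-1, -1] = 1.0\n\n    A_inv_norm = numpy.linalg.norm(numpy.linalg.inv(A), ord=numpy.inf)\n    print(\"m = %3d, ||A^-1||_inf = %s\" % (m, A_inv_norm))",
"_____no_output_____"
],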
[
"## Neumann Boundary Conditions\n\nAs mentioned before, we can incorporate other types of boundary conditions into our discretization using the modified version of our matrix. Let's try to do this for our original problem but with one side having Neumann boundary conditions:\n$$\n u''(x) = f(x) ~~~ \\Omega = [-1, 1] \\\\\n u(-1) = \\alpha ~~~ u'(1) = \\sigma.\n$$",
"_____no_output_____"
],
[
"**Group Work**\n\n$$\n u''(x) = f(x) ~~~ \\Omega = [-1, 1] \\\\\n u(-1) = \\alpha ~~~ u'(1) = \\sigma.\n$$\n\n$u(x) = -(5 + e) x - (2 + e + e^{-1}) + e^x$\n\nExplore implementing the Neumann boundary condition by\n1. using a one-sided 1st order expression,\n1. using a centered 2nd order expression, and\n1. using a one-sided 2nd order expression",
"_____no_output_____"
]
],
[
[
"def solve_mixed_1st_order_one_sided(m):\n # Problem setup\n a = -1.0\n b = 1.0\n alpha = 3.0\n sigma = -5.0\n f = lambda x: numpy.exp(x)\n\n # Descretization\n x_bc = numpy.linspace(a, b, m + 2)\n x = x_bc[1:-1]\n delta_x = (b - a) / (m + 1)\n\n # Construct matrix A\n A = numpy.zeros((m + 2, m + 2))\n diagonal = numpy.ones(m + 2) / delta_x**2\n A += numpy.diag(diagonal * -2.0, 0)\n A += numpy.diag(diagonal[:-1], 1)\n A += numpy.diag(diagonal[:-1], -1)\n\n # Construct RHS\n b = f(x_bc)\n\n # Boundary conditions\n A[0, 0] = 1.0\n A[0, 1] = 0.0\n A[-1, -1] = 1.0 / (delta_x)\n A[-1, -2] = -1.0 / (delta_x)\n\n b[0] = alpha\n b[-1] = sigma\n\n # Solve system\n U = numpy.linalg.solve(A, b)\n\n return x_bc, U\n\n\nu_true = lambda x: -(5.0 + numpy.exp(1.0)) * x - (2.0 + numpy.exp(1.0) + numpy.exp(-1.0)) + numpy.exp(x)\n\nx_bc, U = solve_mixed_1st_order_one_sided(10)\n \n# Plot result\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\naxes.plot(x_bc, U, 'o', label=\"Computed\")\naxes.plot(x_bc, u_true(x_bc), 'k', label=\"True\")\naxes.set_title(\"Solution to $u_{xx} = e^x$\")\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"u(x)\")\nplt.show()",
"_____no_output_____"
],
[
"def solve_mixed_2nd_order_centered(m):\n # Problem setup\n a = -1.0\n b = 1.0\n alpha = 3.0\n sigma = -5.0\n f = lambda x: numpy.exp(x)\n\n # Descretization\n x_bc = numpy.linspace(a, b, m + 2)\n x = x_bc[1:-1]\n delta_x = (b - a) / (m + 1)\n\n # Construct matrix A\n A = numpy.zeros((m + 2, m + 2))\n diagonal = numpy.ones(m + 2) / delta_x**2\n A += numpy.diag(diagonal * -2.0, 0)\n A += numpy.diag(diagonal[:-1], 1)\n A += numpy.diag(diagonal[:-1], -1)\n\n # Construct RHS\n b = f(x_bc)\n\n # Boundary conditions\n A[0, 0] = 1.0\n A[0, 1] = 0.0\n A[-1, -1] = -1.0 / (delta_x)\n A[-1, -2] = 1.0 / (delta_x)\n\n b[0] = alpha\n b[-1] = delta_x / 2.0 * f(x_bc[-1]) - sigma\n\n # Solve system\n U = numpy.linalg.solve(A, b)\n\n return x_bc, U\n\nx_bc, U = solve_mixed_2nd_order_centered(10)\n \n# Plot result\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\naxes.plot(x_bc, U, 'o', label=\"Computed\")\naxes.plot(x_bc, u_true(x_bc), 'k', label=\"True\")\naxes.set_title(\"Solution to $u_{xx} = e^x$\")\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"u(x)\")\nplt.show()",
"_____no_output_____"
],
[
"def solve_mixed_2nd_order_one_sided(m):\n # Problem setup\n a = -1.0\n b = 1.0\n alpha = 3.0\n sigma = -5.0\n f = lambda x: numpy.exp(x)\n \n # Descretization\n x_bc = numpy.linspace(a, b, m + 2)\n x = x_bc[1:-1]\n delta_x = (b - a) / (m + 1)\n \n # Construct matrix A\n A = numpy.zeros((m + 2, m + 2))\n diagonal = numpy.ones(m + 2) / delta_x**2\n A += numpy.diag(diagonal * -2.0, 0)\n A += numpy.diag(diagonal[:-1], 1)\n A += numpy.diag(diagonal[:-1], -1)\n\n # Construct RHS\n b = f(x_bc)\n\n # Boundary conditions\n A[0, 0] = 1.0\n A[0, 1] = 0.0\n A[-1, -1] = 3.0 / (2.0 * delta_x)\n A[-1, -2] = -4.0 / (2.0 * delta_x)\n A[-1, -3] = 1.0 / (2.0 * delta_x)\n\n b[0] = alpha\n b[-1] = sigma\n\n # Solve system\n U = numpy.linalg.solve(A, b)\n\n return x_bc, U\n\nx_bc, U = solve_mixed_2nd_order_one_sided(10)\n \n# Plot result\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\naxes.plot(x_bc, U, 'o', label=\"Computed\")\naxes.plot(x_bc, u_true(x_bc), 'k', label=\"True\")\naxes.set_title(\"Solution to $u_{xx} = e^x$\")\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"u(x)\")\nplt.show()",
"_____no_output_____"
],
[
"# Problem setup\na = -1.0\nb = 1.0\nalpha = 3.0\nsigma = -5.0\nf = lambda x: numpy.exp(x)\nu_true = lambda x: -(5.0 + numpy.exp(1.0)) * x - (2.0 + numpy.exp(1.0) + numpy.exp(-1.0)) + numpy.exp(x)\n\n# Compute the error as a function of delta_x\nm_range = numpy.arange(10, 200, 20)\ndelta_x = numpy.empty(m_range.shape)\nerror = numpy.empty((m_range.shape[0], 3))\nfor (i, m) in enumerate(m_range):\n \n x = numpy.linspace(a, b, m + 2)\n delta_x[i] = (b - a) / (m + 1)\n\n # Compute solution\n _, U = solve_mixed_1st_order_one_sided(m)\n error[i, 0] = numpy.linalg.norm(U - u_true(x), ord=numpy.infty)\n _, U = solve_mixed_2nd_order_one_sided(m)\n error[i, 1] = numpy.linalg.norm(U - u_true(x), ord=numpy.infty)\n _, U = solve_mixed_2nd_order_centered(m)\n error[i, 2] = numpy.linalg.norm(U - u_true(x), ord=numpy.infty)\n \ntitles = [\"1st Order, One-Sided\", \"2nd Order, Centered\", \"2nd Order, One-Sided\"]\norder_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))\nfor i in xrange(3):\n fig = plt.figure()\n axes = fig.add_subplot(1, 1, 1)\n\n axes.loglog(delta_x, error[:, i], 'ko', label=\"Approx. Derivative\")\n\n axes.loglog(delta_x, order_C(delta_x[0], error[0,i], 1.0) * delta_x**1.0, 'r--', label=\"1st Order\")\n axes.loglog(delta_x, order_C(delta_x[0], error[0,i], 2.0) * delta_x**2.0, 'b--', label=\"2nd Order\")\n axes.legend(loc=4)\n axes.set_title(titles[i])\n axes.set_xlabel(\"$\\Delta x$\")\n axes.set_ylabel(\"$|u(x) - U|$\")\n\n\nplt.show()\n\nU = solve_mixed_1st_order_one_sided(10)\nU = solve_mixed_2nd_order_one_sided(10)\nU = solve_mixed_2nd_order_centered(10)",
"_____no_output_____"
]
],
[
[
"## Existance and Uniqueness\n\nOne question that should be asked before embarking upon a numerical solution to any equation is whether the original is *well-posed*. Well-posedness is defined as a problem that has a unique solution and depends continuously on the input data (inital condition and boundary conditions are examples).",
"_____no_output_____"
],
[
"Consider the BVP we have been exploring but now add strictly Neumann boundary conditions\n$$\n u''(x) = f(x) ~~~ \\Omega = [0, 1] \\\\\n u'(0) = \\sigma_0 ~~~ u'(1) = \\sigma_1.\n$$\nWe can easily discretize this using one of our methods developed above but we run into problems.",
"_____no_output_____"
]
],
[
[
"# Problem setup\na = -1.0\nb = 1.0\nalpha = 3.0\nsigma = -5.0\nf = lambda x: numpy.exp(x)\n\n# Descretization\nm = 50\nx_bc = numpy.linspace(a, b, m + 2)\nx = x_bc[1:-1]\ndelta_x = (b - a) / (m + 1)\n\n# Construct matrix A\nA = numpy.zeros((m + 2, m + 2))\ndiagonal = numpy.ones(m + 2) / delta_x**2\nA += numpy.diag(diagonal * -2.0, 0)\nA += numpy.diag(diagonal[:-1], 1)\nA += numpy.diag(diagonal[:-1], -1)\n\n# Construct RHS\nb = f(x_bc)\n\n# Boundary conditions\nA[0, 0] = -1.0 / delta_x\nA[0, 1] = 1.0 / delta_x\nA[-1, -1] = -1.0 / (delta_x)\nA[-1, -2] = 1.0 / (delta_x)\n\nb[0] = alpha\nb[-1] = delta_x / 2.0 * f(x_bc[-1]) - sigma\n\n# Solve system\nU = numpy.linalg.solve(A, b)",
"_____no_output_____"
]
],
[
[
"We can see why $A$ is singular, the constant vector $e = [1, 1, 1, 1, 1,\\ldots, 1]^T$ is in fact in the null-space of $A$. Our numerical method has actually demonstrated this problem is *ill-posed*! Indeed, since the boundary conditions are only on the derivatives there are an infinite number of solutions to the BVP (this could also occur if there were no solutions).",
"_____no_output_____"
],
[
"Another way to understand why this is the case is to examine this problem again as the steady-state problem originating with the heat equation. Consider the heat equation with $\\sigma_0 = \\sigma_1 = 0$ and $f(x) = 0$. This setup would preserve and heat in the rod as none can escape through the ends of the rod. In fact, the solution to the steady-state problem would simply to redistribute the heat in the rod evenly across the rod based on the initial condition. We then would have a solution\n$$\n u(x) = \\int^1_0 u^0(x) dx = C.\n$$",
"_____no_output_____"
],
[
"The problem comes from the fact that the steady-state problem does not know about this bit of information by itself. This means that the BVP as it stands could pick out any $C$ and it would be a solution.",
"_____no_output_____"
],
[
"The solution is similar if we had the same setup except $f(x) \\neq 0$. Now we are either adding or subtracting heat in the rod. In this case there may not be a steady state at all! You can actually show that if the addition and subtraction of heat exactly cancels we may in fact have a solution if\n$$\n \\int^1_0 f(x) dx = 0\n$$\nwhich leads again to an infinite number of solutions.",
"_____no_output_____"
],
[
"## General Linear Second Order Discretization\n\nLet's now describe a method for solving the equation\n$$\n a(x) u''(x) + b(x) u'(x) + c(x) u(x) = f(x) ~~~~ \\Omega = [a, b] \\\\\n u(a) = \\alpha ~~~~ u(b) = \\beta.\n$$",
"_____no_output_____"
],
[
"Try discretizing this using second order finite differences and write the system for\n$$\n a(x) u''(x) + b(x) u'(x) + c(x) u(x) = f(x) ~~~~ \\Omega = [a, b] \\\\\n u(a) = \\alpha ~~~~ u(b) = \\beta.\n$$",
"_____no_output_____"
],
[
"The general, second order finite difference approximation to the above equation can be written as\n$$\n a_i \\frac{U_{i+1} - 2 U_i + U_{i-1}}{\\Delta x^2} + b_i \\frac{U_{i+1} - U_{i-1}}{2 \\Delta x} + c_i U_i = f_i\n$$\nleading to the matrix entries\n$$\n A_{i,i} = -\\frac{2 a_i}{\\Delta x^2} + c_i\n$$\non the diagonal and\n$$\n A_{i,i\\pm1} = \\frac{a_i}{\\Delta x^2} \\pm \\frac{b_i}{2 \\Delta x}\n$$\non the sub-diagonals. We can take of the boundary conditions by either using the ghost-points approach or by incorporating them into the right hand side evaluation.",
"_____no_output_____"
],
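[
"A sketch of how these entries might be assembled (Dirichlet values folded into the right-hand side; the coefficient functions in the usage line are hypothetical, vectorized callables; assuming `numpy` as before):",
"_____no_output_____"
],
[
"def build_general_system(a, b, c, f, alpha, beta, interval, m):\n    # Interior grid points and spacing\n    delta_x = (interval[1] - interval[0]) / (m + 1)\n    x = numpy.linspace(interval[0] + delta_x, interval[1] - delta_x, m)\n\n    # Centered, second order entries derived above\n    A = numpy.diag(-2.0 * a(x) / delta_x**2 + c(x), 0)\n    A += numpy.diag(a(x)[:-1] / delta_x**2 + b(x)[:-1] / (2.0 * delta_x), 1)\n    A += numpy.diag(a(x)[1:] / delta_x**2 - b(x)[1:] / (2.0 * delta_x), -1)\n\n    # Fold the Dirichlet boundary values into the right-hand side\n    F = f(x)\n    F[0] -= alpha * (a(x)[0] / delta_x**2 - b(x)[0] / (2.0 * delta_x))\n    F[-1] -= beta * (a(x)[-1] / delta_x**2 + b(x)[-1] / (2.0 * delta_x))\n    return x, A, F\n\nx, A, F = build_general_system(lambda x: numpy.ones(x.shape), numpy.cos,\n                               numpy.sin, numpy.exp, 0.0, 1.0, (0.0, 1.0), 50)\nU = numpy.linalg.solve(A, F)",
"_____no_output_____"
],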
[
"### Example:\n\nConsider the steady-state heat conduction problem with a variable $\\kappa(x)$ so that\n$$\n (\\kappa(x) u'(x))' = f(x), ~~~~ \\Omega = [0, 1] \\\\\n u(0) = \\alpha ~~~~ u(1) = \\beta\n$$",
"_____no_output_____"
],
[
"By the chain rule we know\n$$\n \\kappa(x) u''(x) + \\kappa'(x) u'(x) = f(x).\n$$",
"_____no_output_____"
],
[
"It turns out that in this case this approach is not really the best approach to solving the problem. In many cases it is best to discretize the original form of the physics rather than a perhaps equivalent formulation. To demonstrate this let's try to construct a system to solve the original equations\n$$\n (\\kappa(x) u'(x))' = f(x).\n$$",
"_____no_output_____"
],
[
"First we will approximate the expression\n$$\n \\kappa(x) u'(x)\n$$\nbut at the points half-way in between the points $x_i$, i.e. $x_{i + 1/2}$. ",
"_____no_output_____"
],
[
"We also will take this approximation effectively to be $\\Delta x / 2$ and find\n$$\n \\kappa(x_{i+1/2}) u'(x_{i+1/2}) = \\kappa_{i+1/2} \\frac{U_{i+1} - U_i}{\\Delta x}.\n$$",
"_____no_output_____"
],
[
"Now taking this approximation and differencing it with the same difference centered at $x_{i-1/2}$ leads to\n$$\\begin{aligned}\n (\\kappa(x_i) u'(x_i))' &= \\frac{1}{\\Delta x} \\left [ \\kappa_{i+1/2} \\frac{U_{i+1} - U_i}{\\Delta x} - \\kappa_{i-1/2} \\frac{U_{i} - U_{i-1}}{\\Delta x} \\right ] \\\\\n &= \\frac{\\kappa_{i+1/2}U_{i+1} - \\kappa_{i+1/2} U_i -\\kappa_{i-1/2} U_{i} + \\kappa_{i-1/2} U_{i-1}}{\\Delta x^2} \\\\ \n &= \\frac{\\kappa_{i+1/2}U_{i+1} - (\\kappa_{i+1/2} - \\kappa_{i-1/2}) U_i + \\kappa_{i-1/2} U_{i-1}}{\\Delta x^2}\n\\end{aligned}$$",
"_____no_output_____"
],
[
"Note that these formulations are actually equivalent to $\\mathcal{O}(\\Delta x^2)$. The matrix entries are\n$$\\begin{aligned}\n A_{i,i} = -\\frac{\\kappa_{i+1/2} - \\kappa_{i-1/2}}{\\Delta x^2} \\\\\n A_{i,i \\pm 1} = \\frac{\\kappa_{i\\pm 1/2}}{\\Delta x^2}.\n\\end{aligned}$$\nNote that this latter discretization is symmetric. This will have consequences as to how well or quickly we can solve the resulting system of linear equations.",
"_____no_output_____"
],
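[
"A sketch of the conservative assembly with $\\kappa$ evaluated at the half points (the test function $\\kappa(x) = 1 + x$ is hypothetical; assuming `numpy` as before):",
"_____no_output_____"
],
[
"def build_kappa_system(kappa, f, alpha, beta, m):\n    delta_x = 1.0 / (m + 1)\n    x = numpy.linspace(delta_x, 1.0 - delta_x, m)\n\n    # kappa at the half points x_{i +- 1/2}\n    kappa_plus = kappa(x + delta_x / 2.0)\n    kappa_minus = kappa(x - delta_x / 2.0)\n\n    # Symmetric, conservative discretization of (kappa u')'\n    A = numpy.diag(-(kappa_plus + kappa_minus) / delta_x**2, 0)\n    A += numpy.diag(kappa_plus[:-1] / delta_x**2, 1)\n    A += numpy.diag(kappa_minus[1:] / delta_x**2, -1)\n\n    F = f(x)\n    F[0] -= alpha * kappa_minus[0] / delta_x**2\n    F[-1] -= beta * kappa_plus[-1] / delta_x**2\n    return x, A, F\n\nx, A, F = build_kappa_system(lambda x: 1.0 + x, numpy.exp, 0.0, 1.0, 50)\nprint(numpy.allclose(A, A.T))  # the discretization is symmetric",
"_____no_output_____"
],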
[
"## Non-Linear Equations\n\nOur model problem, Poisson's equation, is a linear BVP. How would we approach a non-linear problem? As a new model problem let's consider the non-linear pendulum problem. The physical system is a mass $m$ connected to a rigid, massless rod of length $L$ which is allowed to swing about a point. The angle $\\theta(t)$ is taken with reference to the stable at-rest point with the mass hanging downwards. ",
"_____no_output_____"
],
[
"This system can be described by\n$$\n \\theta''(t) = \\frac{-g}{L} \\sin(\\theta(t)).\n$$\nWe will take $\\frac{g}{L} = 1$ for convenience.",
"_____no_output_____"
],
[
"Looking at the Taylor series of $\\sin$ we can approximate this equation for small $\\theta$ as\n$$\n \\sin(\\theta) \\approx \\theta - \\frac{\\theta^3}{6} + \\mathcal{O}(\\theta^5)\n$$\nso that\n$$\n \\theta'' = -\\theta.\n$$",
"_____no_output_____"
],
[
"We know that this equation has solutions of the form\n$$\n \\theta(t) = C_1 \\cos t + C_2 \\sin t.\n$$\nWe clearly need two boundary conditions to uniquely specify the system which can be a bit awkward given that we usually specify these at two points in the spatial domain. Since we are in time we can specify the initial position of the pendulum $\\theta(0) = \\alpha$ however the second condition would specify where the pendulum would be sometime in the future, say $\\theta(T) = \\beta$. We could also specify another initial condition such as the angular velocity $\\theta'(0) = \\sigma$.",
"_____no_output_____"
]
],
[
[
"# Simple linear pendulum solutions\ndef linear_pendulum(t, alpha=0.01, beta=0.01, T=1.0):\n C_1 = alpha\n C_2 = (beta - alpha * numpy.cos(T)) / numpy.sin(T)\n return C_1 * numpy.cos(t) + C_2 * numpy.sin(t)\n\n\nalpha = [0.1, -0.1, -1.0]\nbeta = [0.1, 0.1, 0.0]\nT = [1.0, 1.0, 1.0]\nt = numpy.linspace(0, 10.0, 100)\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\nfor i in xrange(len(alpha)):\n axes.plot(t, linear_pendulum(t, alpha[i], beta[i], T[i]))\naxes.set_title(\"Solutions to the Linear Pendulum Problem\")\naxes.set_xlabel(\"t\")\naxes.set_ylabel(\"$\\theta$\")\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"But how would we go about handling the fully non-linear problem? First let's discretize using our approach to date with the second order, centered second derivative finite difference approximation to find\n$$\n \\frac{1}{\\Delta t^2}(\\theta_{i+1} - 2 \\theta_i + \\theta_{i-1}) + \\sin (\\theta_i) = 0.\n$$",
"_____no_output_____"
],
[
"The most common approach to solving a non-linear BVP like this (and many non-linear PDEs for that matter) is to use Newton's method. Recall that if we have a non-linear function $G(\\theta)$ and we want to find $\\theta$ such that\n$$\n G(\\theta) = 0\n$$\nwe can expand $G(\\theta)$ in a Taylor series to find\n$$\n G(\\theta^{[k+1]}) = G(\\theta^{[k]}) + G'(\\theta^{[k]}) (\\theta^{[k+1]} - \\theta^{[k]}) + \\mathcal{O}((\\theta^{[k+1]} - \\theta^{[k]})^2)\n$$",
"_____no_output_____"
],
[
"If we want $G(\\theta^{[k+1]}) = 0$ we can set this in the expression above (this is also known as a fixed point iteration) and dropping the higher order terms we can solve for $\\theta^{[k+1]}$ to find\n$$\\begin{aligned}\n 0 &= G(\\theta^{[k]}) + G'(\\theta^{[k]}) (\\theta^{[k+1]} - \\theta^{[k]} )\\\\\n G'(\\theta^{[k]}) \\theta^{[k+1]} &= G'(\\theta^{[k]}) \\theta^{[k]} - G(\\theta^{[k]})\n\\end{aligned}$$",
"_____no_output_____"
],
[
"At this point we need to be careful, if we have a system of equations we cannot simply divide through by $G'(\\theta^{[k]})$ (which is now a matrix) to find our new value $\\theta^{[k+1]}$. Instead we need to invert the matrix $G'(\\theta^{[k]})$. Another way to write this is as an update to the value $\\theta^{[k+1]}$ where\n$$\n \\theta^{[k+1]} = \\theta^{[k]} + \\delta^{[k]}\n$$\nwhere\n$$\n J(\\theta^{[k]}) \\delta^{[k]} = -G(\\theta^{[k]}).\n$$",
"_____no_output_____"
],
[
"Here we have introduced notation for the **Jacobian matrix** whose elements are\n$$\n J_{ij}(\\theta) = \\frac{\\partial}{\\partial \\theta_j} G_i(\\theta).\n$$",
"_____no_output_____"
],
[
"So how do we compute the Jacobian matrix? Since we know the system of equations in this case we can write down in general what the entries of $J$ are.\n$$\n \\frac{1}{\\Delta t^2}(\\theta_{i+1} - 2 \\theta_i + \\theta_{i-1}) + \\sin (\\theta_i) = 0.\n$$",
"_____no_output_____"
],
[
"$$\n J_{ij}(\\theta) = \\left \\{ \\begin{aligned}\n &\\frac{1}{\\Delta t^2} & & j = i - 1, j = i + 1 \\\\\n -&\\frac{2}{\\Delta t^2} + \\cos(\\theta_i) & & j = i \\\\\n &0 & & \\text{otherwise}\n \\end{aligned} \\right .\n$$",
"_____no_output_____"
],
[
"With the Jacobian in hand we can solve the BVP by iterating until some stopping criteria is met (we have converged to our satisfaction).",
"_____no_output_____"
],
[
"### Example\n\nSolve the linear and non-linear pendulum problem with $T=2\\pi$, $\\alpha = \\beta = 0.7$.\n - Does the linear equation have a unique solution\n - Do you expect the original problem to have a unique solution (i.e. does the non-linear problem have a unique solution)?",
"_____no_output_____"
]
],
[
[
"def solve_nonlinear_pendulum(m, alpha, beta, T, max_iterations=100, tolerance=1e-3, verbose=False):\n \n # Discretization\n t_bc = numpy.linspace(0.0, T, m + 2)\n t = t_bc[1:-1]\n delta_t = T / (m + 1)\n diagonal = numpy.ones(t.shape)\n G = numpy.empty(t_bc.shape)\n \n # Initial guess\n theta = 0.7 * numpy.cos(t_bc)\n theta[0] = alpha\n theta[-1] = beta\n \n # Main iteration loop\n success = False\n for num_step in xrange(1, max_iterations):\n \n # Construct Jacobian matrix\n J = numpy.diag(diagonal * -2.0 / delta_t**2 + numpy.cos(theta[1:-1]), 0)\n J += numpy.diag(diagonal[:-1] / delta_t**2, -1)\n J += numpy.diag(diagonal[:-1] / delta_t**2, 1)\n \n # Construct vector G\n G = (theta[:-2] - 2.0 * theta[1:-1] + theta[2:]) / delta_t**2 + numpy.sin(theta[1:-1])\n \n # Take care of BCs\n G[0] = (alpha - 2.0 * theta[1] + theta[2]) / delta_t**2 + numpy.sin(theta[1])\n G[-1] = (theta[-3] - 2.0 * theta[-2] + beta) / delta_t**2 + numpy.sin(theta[-2])\n \n # Solve\n delta = numpy.linalg.solve(J, -G)\n theta[1:-1] += delta\n \n if verbose:\n print \" (%s) Step size: %s\" % (num_step, numpy.linalg.norm(delta))\n \n if numpy.linalg.norm(delta) < tolerance:\n success = True\n break\n \n if not success:\n print numpy.linalg.norm(delta)\n raise ValueError(\"Reached maximum allowed steps before convergence criteria met.\")\n \n return t_bc, theta\n\nt, theta = solve_nonlinear_pendulum(100, 0.7, 0.7, 2.0 * numpy.pi, tolerance=1e-9, verbose=True)\nplt.plot(t, theta)\nplt.show()",
"_____no_output_____"
],
[
"# Linear Problem\nalpha = 0.7\nbeta = 0.7\nT = 2.0 * numpy.pi\nt = numpy.linspace(0, T, 100)\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\naxes.plot(t, linear_pendulum(t, alpha, beta, T), 'r-', label=\"Linear\")\n\n# Non-linear problem\nt, theta = solve_nonlinear_pendulum(100, alpha, beta, T)\naxes.plot(t, theta, 'b-', label=\"Non-Linear\")\n\naxes.set_title(\"Solutions to the Pendulum Problem\")\naxes.set_xlabel(\"t\")\naxes.set_ylabel(\"$\\theta$\")\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Accuracy\n\nNote that there are two different ideas of convergence going on in our non-linear solver above, one is the convergence of the finite difference approximation controlled by $\\Delta x$ and the convergence of the Newton iteration. We expect both to be second order (Newton's method converges quadratically under suitable assumptions). How do these two methods combine to affect the global error though?",
"_____no_output_____"
],
[
"First let's compute the LTE\n$$\\begin{aligned}\n \\tau_{i} &= \\frac{1}{\\Delta t^2} (\\theta(t_{i+1}) - 2 \\theta(t_i) + \\theta(t_{i-1})) + \\sin \\theta(t_i) \\\\\n &= \\frac{1}{\\Delta t^2} \\left (\\theta(t_i) + \\theta'(t_i) \\Delta t + \\frac{1}{2} \\theta''(t_i) \\Delta t^2 + \\frac{1}{6} \\theta'''(t_i) \\Delta t^3 + \\frac{1}{24} \\theta^{(4)}(t_i) \\Delta t^4 - 2 \\theta(t_i) \\right .\\\\\n &~~~~~~~~~~~~~~ \\left . + \\theta(t_i) - \\theta'(t_i) \\Delta t + \\frac{1}{2} \\theta''(t_i) \\Delta t^2 - \\frac{1}{6} \\theta'''(t_i) \\Delta t^3 + \\frac{1}{24} \\theta^{(4)}(t_i) \\Delta t^4 + \\mathcal{O}(\\Delta t^5) \\right) + \\sin \\theta(t_i) \\\\\n &= \\frac{1}{\\Delta t^2} \\left (\\theta''(t_i) \\Delta t^2 + \\frac{1}{12} \\theta^{(4)}(t_i) \\Delta t^4 \\mathcal{O}(\\Delta t^6) \\right) + \\sin \\theta(t_i) \\\\\n &= \\theta''(t_i) + \\sin \\theta(t_i) + \\frac{1}{12} \\theta^{(4)}(t_i) \\Delta t^2 + \\mathcal{O}(\\Delta t^4).\n\\end{aligned}$$",
"_____no_output_____"
],
[
"For Newton's method we can consider the difference of taking a step with the true solution to the BVP $\\hat{\\theta}$ vs. the approximate solution $\\theta$. We can formulate an analogous LTE where\n$$\n G(\\Theta) = 0 ~~~ G(\\hat{\\Theta}) = \\tau.\n$$",
"_____no_output_____"
],
[
"Following our discussion from before we can use these two expressions to find\n$$\n G(\\Theta) - G(\\hat{\\Theta}) = -\\tau\n$$\nand from here we want to derive an expression of the global error $E = \\Theta - \\hat{\\Theta}$. ",
"_____no_output_____"
],
[
"Since $G(\\theta)$ is not linear we will write the above expression as a Taylor series to find\n$$\n G(\\Theta) = G(\\hat{\\Theta}) + J(\\hat{\\Theta}) E + \\mathcal{O}(||E||^2).\n$$",
"_____no_output_____"
],
[
"Using this expression we find\n$$\n J(\\hat{\\Theta}) E = -\\tau + \\mathcal{O}(||E||^2).\n$$\nIgnoring higher order terms then we have a linear expression for $E$ which we can solve.",
"_____no_output_____"
],
[
"This motivates another definition of stability then involving the Jacobian of $G$. The nonlinear difference methods $G(\\Theta) = 0$ is *stable* in some norm $||\\cdot||$ if the matrices $(J^{\\Delta t})^{-1}$ are uniformly bounded in the norm as $\\Delta t \\rightarrow 0$. In other words $\\exists C$ and $\\Delta t^0$ s.t.\n$$\n ||(J^{\\Delta t})^{-1}|| \\leq C ~~~ \\forall \\Delta t < \\Delta t^0.\n$$",
"_____no_output_____"
],
[
"Given this sense of stability and consistency ($||\\tau|| \\rightarrow 0$ as $\\Delta t \\rightarrow 0$) then the method converges as\n$$\n ||E^{\\Delta t}|| \\rightarrow 0 ~~~ \\text{as} ~~~ \\Delta t \\rightarrow 0.\n$$\n\nNote that we are still not guaranteed that Newton's method will converge, say from a bad initial guess, even though we have shown convergence. It can be proven that Newton's method will converge from a sufficiently good initial guess. It also should be noted that although Newton's method may have an error that is to round-off does not imply that the error will follow suit.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d04477f1fe78891265f20735e379988750edd82a | 11,352 | ipynb | Jupyter Notebook | week07/02_many_to_many_classification_exercise.ipynb | modulabs/modu-tensorflow | 41c0e899d1b65075320adae03b2732734665d9fb | [
"Apache-2.0"
] | 14 | 2018-07-07T21:38:14.000Z | 2019-01-27T18:40:22.000Z | week07/02_many_to_many_classification_exercise.ipynb | losskatsu/modu-tensorflow | 41c0e899d1b65075320adae03b2732734665d9fb | [
"Apache-2.0"
] | null | null | null | week07/02_many_to_many_classification_exercise.ipynb | losskatsu/modu-tensorflow | 41c0e899d1b65075320adae03b2732734665d9fb | [
"Apache-2.0"
] | 2 | 2018-07-29T01:01:44.000Z | 2019-02-23T07:22:34.000Z | 24.624729 | 198 | 0.489782 | [
[
[
"# Many to Many Classification\nSimple example for Many to Many Classification (Simple pos tagger) by Recurrent Neural Networks\n\n- Creating the **data pipeline** with `tf.data`\n- Preprocessing word sequences (variable input sequence length) using `padding technique` by `user function (pad_seq)`\n- Using `tf.nn.embedding_lookup` for getting vector of tokens (eg. word, character)\n- Training **many to many classification** with `tf.contrib.seq2seq.sequence_loss`\n- Masking unvalid token with `tf.sequence_mask`\n- Creating the model as **Class**\n- Reference\n - https://github.com/aisolab/sample_code_of_Deep_learning_Basics/blob/master/DLEL/DLEL_12_2_RNN_(toy_example).ipynb",
"_____no_output_____"
],
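[
"Before starting, a minimal sketch of the masking idea used below (the tensor names and shapes here are hypothetical); `tf.contrib.seq2seq.sequence_loss` weights every token position, so the padded positions can be zeroed out with `tf.sequence_mask`:\n\n```python\n# logits  : [batch_size, max_len, n_of_classes]\n# targets : [batch_size, max_len] integer pos indices (0 = '<pad>')\nmasks = tf.sequence_mask(lengths=seq_len, maxlen=max_len, dtype=tf.float32)\nloss = tf.contrib.seq2seq.sequence_loss(logits=logits, targets=targets, weights=masks)\n```",
"_____no_output_____"
],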
[
"### Setup",
"_____no_output_____"
]
],
[
[
"import os, sys\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport string\n%matplotlib inline\n\nslim = tf.contrib.slim\nprint(tf.__version__)",
"1.10.0\n"
]
],
[
[
"### Prepare example data ",
"_____no_output_____"
]
],
[
[
"sentences = [['I', 'feel', 'hungry'],\n ['tensorflow', 'is', 'very', 'difficult'],\n ['tensorflow', 'is', 'a', 'framework', 'for', 'deep', 'learning'],\n ['tensorflow', 'is', 'very', 'fast', 'changing']]\npos = [['pronoun', 'verb', 'adjective'],\n ['noun', 'verb', 'adverb', 'adjective'],\n ['noun', 'verb', 'determiner', 'noun', 'preposition', 'adjective', 'noun'],\n ['noun', 'verb', 'adverb', 'adjective', 'verb']]",
"_____no_output_____"
],
[
"# word dic\nword_list = []\nfor elm in sentences:\n word_list += elm\nword_list = list(set(word_list))\nword_list.sort()\nword_list = ['<pad>'] + word_list\n\nword_dic = {word : idx for idx, word in enumerate(word_list)}\nprint(word_dic)",
"{'<pad>': 0, 'I': 1, 'a': 2, 'changing': 3, 'deep': 4, 'difficult': 5, 'fast': 6, 'feel': 7, 'for': 8, 'framework': 9, 'hungry': 10, 'is': 11, 'learning': 12, 'tensorflow': 13, 'very': 14}\n"
],
[
"# pos dic\npos_list = []\nfor elm in pos:\n pos_list += elm\npos_list = list(set(pos_list))\npos_list.sort()\npos_list = ['<pad>'] + pos_list\nprint(pos_list)\n\npos_dic = {pos : idx for idx, pos in enumerate(pos_list)}\npos_dic",
"['<pad>', 'adjective', 'adverb', 'determiner', 'noun', 'preposition', 'pronoun', 'verb']\n"
],
[
"pos_idx_to_dic = {elm[1] : elm[0] for elm in pos_dic.items()}\npos_idx_to_dic",
"_____no_output_____"
]
],
[
[
"### Create pad_seq function",
"_____no_output_____"
]
],
[
[
"def pad_seq(sequences, max_len, dic):\n seq_len, seq_indices = [], []\n for seq in sequences:\n seq_len.append(len(seq))\n seq_idx = [dic.get(char) for char in seq]\n seq_idx += (max_len - len(seq_idx)) * [dic.get('<pad>')] # 0 is idx of meaningless token \"<pad>\"\n seq_indices.append(seq_idx)\n return seq_len, seq_indices",
"_____no_output_____"
]
],
[
[
"### Pre-process data",
"_____no_output_____"
]
],
[
[
"max_length = 10\nX_length, X_indices = pad_seq(sequences = sentences, max_len = max_length, dic = word_dic)\nprint(X_length, np.shape(X_indices))",
"[3, 4, 7, 5] (4, 10)\n"
],
[
"y = [elm + ['<pad>'] * (max_length - len(elm)) for elm in pos]\ny = [list(map(lambda el : pos_dic.get(el), elm)) for elm in y]\nprint(np.shape(y))",
"(4, 10)\n"
],
[
"y",
"_____no_output_____"
]
],
[
[
"### Define SimPosRNN",
"_____no_output_____"
]
],
[
[
"class SimPosRNN:\n def __init__(self, X_length, X_indices, y, n_of_classes, hidden_dim, max_len, word_dic):\n \n # Data pipeline\n with tf.variable_scope('input_layer'):\n # input layer를 구현해보세요\n # tf.get_variable을 사용하세요\n # tf.nn.embedding_lookup을 사용하세요\n self._X_length = X_length\n self._X_indices = X_indices\n self._y = y\n \n \n # RNN cell (many to many)\n with tf.variable_scope('rnn_cell'):\n # RNN cell을 구현해보세요\n # tf.contrib.rnn.BasicRNNCell을 사용하세요\n # tf.nn.dynamic_rnn을 사용하세요\n # tf.contrib.rnn.OutputProjectionWrapper를 사용하세요\n \n with tf.variable_scope('seq2seq_loss'):\n # tf.sequence_mask를 사용하여 masks를 정의하세요\n # tf.contrib.seq2seq.sequence_loss의 weights argument에 masks를 넣으세요\n \n with tf.variable_scope('prediction'):\n # tf.argmax를 사용하세요\n \n def predict(self, sess, X_length, X_indices):\n # predict instance method를 구현하세요\n\n return sess.run(self._prediction, feed_dict = feed_prediction)",
"_____no_output_____"
]
],
[
[
"### Create a model of SimPosRNN",
"_____no_output_____"
]
],
[
[
"# hyper-parameter#\nlr = .003\nepochs = 100\nbatch_size = 2\ntotal_step = int(np.shape(X_indices)[0] / batch_size)\nprint(total_step)",
"2\n"
],
[
"## create data pipeline with tf.data\n# tf.data를 이용해서 직접 구현해보세요",
"_____no_output_____"
],
[
"# 최종적으로 model은 아래의 코드를 통해서 생성됩니다.\nsim_pos_rnn = SimPosRNN(X_length = X_length_mb, X_indices = X_indices_mb, y = y_mb,\n n_of_classes = 8, hidden_dim = 16, max_len = max_length, word_dic = word_dic)",
"_____no_output_____"
]
],
[
[
"### Creat training op and train model",
"_____no_output_____"
]
],
[
[
"## create training op\nopt = tf.train.AdamOptimizer(learning_rate = lr)\ntraining_op = opt.minimize(loss = sim_pos_rnn.seq2seq_loss)",
"_____no_output_____"
],
[
"sess = tf.Session()\nsess.run(tf.global_variables_initializer())\n\ntr_loss_hist = []\n\nfor epoch in range(epochs):\n avg_tr_loss = 0\n tr_step = 0\n \n sess.run(tr_iterator.initializer)\n try:\n while True:\n # 여기를 직접구현하시면 됩니다.\n\n \n except tf.errors.OutOfRangeError:\n pass\n \n avg_tr_loss /= tr_step\n tr_loss_hist.append(avg_tr_loss)\n if (epoch + 1) % 10 == 0:\n print('epoch : {:3}, tr_loss : {:.3f}'.format(epoch + 1, avg_tr_loss))",
"_____no_output_____"
],
[
"yhat = sim_pos_rnn.predict(sess = sess, X_length = X_length, X_indices = X_indices)\nyhat",
"_____no_output_____"
],
[
"y",
"_____no_output_____"
],
[
"yhat = [list(map(lambda elm : pos_idx_to_dic.get(elm), row)) for row in yhat]\nfor elm in yhat:\n print(elm)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d04485f5da43b6858aa0aabfc5fe72428b09107c | 13,475 | ipynb | Jupyter Notebook | face_recognition_android.ipynb | wang-junjian/face-recognition-services | 54fb33eeb83de36d2080dd8bd08770b56d02f9de | [
"MIT"
] | 1 | 2020-06-01T03:55:01.000Z | 2020-06-01T03:55:01.000Z | face_recognition_android.ipynb | wang-junjian/face-recognition-services | 54fb33eeb83de36d2080dd8bd08770b56d02f9de | [
"MIT"
] | null | null | null | face_recognition_android.ipynb | wang-junjian/face-recognition-services | 54fb33eeb83de36d2080dd8bd08770b56d02f9de | [
"MIT"
] | null | null | null | 38.5 | 230 | 0.581373 | [
[
[
"# Android的人脸识别库(NDK)",
"_____no_output_____"
],
[
"## 创建工程向导\n\n\n\n",
"_____no_output_____"
],
[
"## dlib库源代码添加到工程\n\n* 把dlib目录下的dlib文件夹拷贝到app/src/main/",
"_____no_output_____"
],
[
"## 增加JNI接口\n\n### 创建Java接口类\n在app/src/main/java/com/wangjunjian/facerecognition下创建类FaceRecognition\n```java\npackage com.wangjunjian.facerecognition;\n\nimport android.graphics.Rect;\n\npublic class FaceRecognition {\n static {\n System.loadLibrary(\"face-recognition\");\n }\n\n public native void detect(String filename, Rect rect);\n\n}\n```\n\n### 通过Java接口类输出C++头文件\n打开Terminal窗口,输入命令(**Windows系统下要把:改为;**)\n```bash\ncd app/src/main/\njavah -d jni -classpath /Users/wjj/Library/Android/sdk/platforms/android-21/android.jar:java com.wangjunjian.facerecognition.FaceRecognition\n```\n\n### 参考资料\n* [JNI 无法确定Bitmap的签名](https://blog.csdn.net/wxxgreat/article/details/48030775)\n* [在编辑JNI头文件的时候碰到无法确定Bitmap的签名问题](https://www.jianshu.com/p/b49bdcbfb5ed)",
"_____no_output_____"
],
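[
"With the interface in place, application code might call it like this (a minimal sketch; the image path is hypothetical, and `rect` keeps its default zero values when no face is found):\n\n```java\nFaceRecognition faceRecognition = new FaceRecognition();\nRect rect = new Rect();\nfaceRecognition.detect(\"/sdcard/DCIM/face.jpg\", rect);\n// rect now holds the bounds of the first detected face\n```",
"_____no_output_____"
],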
[
"## 实现人脸检测\n打开app/src/main/cpp/face-recognition.cpp\n```cpp\n#include <jni.h>\n#include <string>\n#include <dlib/image_processing/frontal_face_detector.h>\n#include <dlib/image_io.h>\n\n#include \"jni/com_wangjunjian_facerecognition_FaceRecognition.h\"\n\nusing namespace dlib;\nusing namespace std;\n\nJNIEXPORT void JNICALL Java_com_wangjunjian_facerecognition_FaceRecognition_detect\n (JNIEnv *env, jobject clazz, jstring filename, jobject rect)\n{\n const char* pfilename = env->GetStringUTFChars(filename, JNI_FALSE);\n\n static frontal_face_detector detector = get_frontal_face_detector();\n array2d<unsigned char> img;\n load_image(img, pfilename);\n\n env->ReleaseStringUTFChars(filename, pfilename);\n\n std::vector<rectangle> dets = detector(img, 0);\n\n if (dets.size() > 0)\n {\n rectangle faceRect = dets[0];\n\n jclass rectClass = env->GetObjectClass(rect);\n\n jfieldID fidLeft = env->GetFieldID(rectClass, \"left\", \"I\");\n env->SetIntField(rect, fidLeft, faceRect.left());\n jfieldID fidTop = env->GetFieldID(rectClass, \"top\", \"I\");\n env->SetIntField(rect, fidTop, faceRect.top());\n jfieldID fidRight = env->GetFieldID(rectClass, \"right\", \"I\");\n env->SetIntField(rect, fidRight, faceRect.right());\n jfieldID fidBottom = env->GetFieldID(rectClass, \"bottom\", \"I\");\n env->SetIntField(rect, fidBottom, faceRect.bottom());\n }\n}\n```\n\n### 参考资料\n*[Android使用JNI实现Java与C之间传递数据](https://blog.csdn.net/furongkang/article/details/6857610)",
"_____no_output_____"
],
[
"## 修改 app/CMakeLists.txt\n\n```\n# For more information about using CMake with Android Studio, read the\n# documentation: https://d.android.com/studio/projects/add-native-code.html\n\n# Sets the minimum version of CMake required to build the native library.\n\ncmake_minimum_required(VERSION 3.4.1)\n\n# 设置库输出路径变量\nset(DISTRIBUTION_DIR ${CMAKE_SOURCE_DIR}/../distribution)\n\n# 包含dlib的make信息\ninclude(${CMAKE_SOURCE_DIR}/src/main/dlib/cmake)\n\n# Creates and names a library, sets it as either STATIC\n# or SHARED, and provides the relative paths to its source code.\n# You can define multiple libraries, and CMake builds them for you.\n# Gradle automatically packages shared libraries with your APK.\n\nadd_library( # Sets the name of the library.\n face-recognition\n\n # Sets the library as a shared library.\n SHARED\n\n # Provides a relative path to your source file(s).\n src/main/cpp/face-recognition.cpp )\n\n# 设置每个平台的ABI输出路径\nset_target_properties(face-recognition PROPERTIES\n LIBRARY_OUTPUT_DIRECTORY\n ${DISTRIBUTION_DIR}/libs/${ANDROID_ABI})\n\n# Searches for a specified prebuilt library and stores the path as a\n# variable. Because CMake includes system libraries in the search path by\n# default, you only need to specify the name of the public NDK library\n# you want to add. CMake verifies that the library exists before\n# completing its build.\n\nfind_library( # Sets the name of the path variable.\n log-lib\n\n # Specifies the name of the NDK library that\n # you want CMake to locate.\n log )\n\n# Specifies libraries CMake should link to your target library. You\n# can link multiple libraries, such as libraries you define in this\n# build script, prebuilt third-party libraries, or system libraries.\n\n# 连接dlib和android\ntarget_link_libraries( # Specifies the target library.\n face-recognition\n android\n dlib\n\n # Links the target library to the log library\n # included in the NDK.\n ${log-lib} )\n```\n\n### 参考资料\n* [AndroidStudio用Cmake方式编译NDK代码](https://blog.csdn.net/joe544351900/article/details/53637549)",
"_____no_output_____"
],
[
"## 修改 app/build.gradle\n\n```\n//修改为库\n//apply plugin: 'com.android.application'\napply plugin: 'com.android.library'\n\nandroid {\n compileSdkVersion 26\n defaultConfig {\n //移除应用ID\n //applicationId \"com.wangjunjian.facerecognition\"\n minSdkVersion 21\n targetSdkVersion 26\n versionCode 1\n versionName \"1.0\"\n testInstrumentationRunner \"android.support.test.runner.AndroidJUnitRunner\"\n externalNativeBuild {\n cmake {\n arguments '-DANDROID_PLATFORM=android-21',\n '-DANDROID_TOOLCHAIN=clang', '-DANDROID_STL=c++_static', '-DCMAKE_BUILD_TYPE=Release ..'\n cppFlags \"-frtti -fexceptions -std=c++11 -O3\"\n }\n }\n //要生成的目标平台ABI\n ndk {\n abiFilters 'armeabi-v7a', 'arm64-v8a', 'x86', 'x86_64'\n }\n }\n buildTypes {\n release {\n minifyEnabled false\n proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'\n }\n }\n externalNativeBuild {\n cmake {\n path \"CMakeLists.txt\"\n }\n }\n //JNI库输出路径\n sourceSets {\n main {\n jniLibs.srcDirs = ['../distribution/libs']\n }\n }\n //消除错误 Caused by: com.android.builder.merge.DuplicateRelativeFileException: More than one file was found with OS independent path 'lib/x86/libface-recognition.so'\n packagingOptions {\n pickFirst 'lib/armeabi-v7a/libface-recognition.so'\n pickFirst 'lib/arm64-v8a/libface-recognition.so'\n pickFirst 'lib/x86/libface-recognition.so'\n pickFirst 'lib/x86_64/libface-recognition.so'\n }\n}\n\n//打包jar到指定路径\ntask makeJar(type: Copy) {\n delete 'build/libs/face-recognition.jar'\n from('build/intermediates/packaged-classes/release/')\n into('../distribution/libs/')\n include('classes.jar')\n rename('classes.jar', 'face-recognition.jar')\n}\nmakeJar.dependsOn(build)\n\ndependencies {\n implementation fileTree(dir: 'libs', include: ['*.jar'])\n implementation 'com.android.support:appcompat-v7:26.1.0'\n testImplementation 'junit:junit:4.12'\n androidTestImplementation 'com.android.support.test:runner:1.0.2'\n androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'\n}\n```\n\n### 参考资料\n* [Android NDK samples with Android Studio](https://github.com/googlesamples/android-ndk)\n* [Android studio 将 Module 打包成 Jar 包](https://www.cnblogs.com/xinaixia/p/7660173.html)\n* [could not load library \"libc++_shared.so\" needed by \"libgpg.so\"](https://github.com/playgameservices/play-games-plugin-for-unity/issues/280)\n* [Android NDK cannot load libc++_shared.so, gets \"cannot locate symbol 'rand' reference](https://stackoverflow.com/questions/28504875/android-ndk-cannot-load-libc-shared-so-gets-cannot-locate-symbol-rand-refe)\n* [记录Android-Studio遇到的各种坑](https://blog.csdn.net/u012874222/article/details/50616698)\n* [Gradle flavors for android with custom source sets - what should the gradle files look like?](https://stackoverflow.com/questions/19461145/gradle-flavors-for-android-with-custom-source-sets-what-should-the-gradle-file)\n* [Android Studio 2.2 gradle调用ndk-build](https://www.jianshu.com/p/0e50ae3c4d0d)\n* [Android NDK: How to build for ARM64-v8a with minimumSdkVersion = 19](https://stackoverflow.com/questions/41102128/android-ndk-how-to-build-for-arm64-v8a-with-minimumsdkversion-19)",
"_____no_output_____"
],
[
"## 编译输出开发库\n\n打开Terminal窗口,输入命令\n```bash\n./gradlew makeJar\n```\n\n\n\n### 参考资料\n* [-bash :gradlew command not found](https://blog.csdn.net/yyh352091626/article/details/52343951)",
"_____no_output_____"
],
[
"## 查看jar文档列表\n```bash\njar vtf distribution/libs/face-recognition.jar\n```\n\n### 参考资料\n* [Linux环境下查看jar包的归档目录](https://blog.csdn.net/tanga842428/article/details/55101253)",
"_____no_output_____"
],
[
"## 参考资料\n* [Face Landmarks In Your Android App](http://slides.com/boywang/face-landmarks-in-your-android-app/fullscreen#/)\n* [dlib-android](https://github.com/tzutalin/dlib-android)\n* [深入理解Android(一):Gradle详解](http://www.infoq.com/cn/articles/android-in-depth-gradle/)\n* [Android NDK Gradle3.0 以上最新生成.so之旅](https://blog.csdn.net/xiaozhu0922/article/details/78835144)\n* [Android Studio 手把手教你NDK打包SO库文件,并提供对应API 使用它(赋demo)](https://blog.csdn.net/u011445031/article/details/72884703)\n* [Building dlib for android ndk](https://stackoverflow.com/questions/41331400/building-dlib-for-android-ndk)\n* [使用 Android Studio 写出第一个 NDK 程序(超详细)](https://blog.csdn.net/young_time/article/details/80346631)\n* [Android studio3.0 JNI/NDK开发流程](https://www.jianshu.com/p/a37782b56770)\n* [dlib 18 android编译dlib库,运行matrix_ex demo](https://blog.csdn.net/longji/article/details/78115807)\n* [Android开发——Android Studio下使用Cmake在NDK环境下移植Dlib库](https://blog.csdn.net/u012525096/article/details/78950979)\n* [android编译系统makefile(Android.mk)写法](http://www.cnblogs.com/hesiming/archive/2011/03/15/1984444.html)\n* [dlib-android/jni/jni_detections/jni_pedestrian_det.cpp](https://github.com/tzutalin/dlib-android/blob/master/jni/jni_detections/jni_pedestrian_det.cpp)\n* [Face Detection using MTCNN and TensorFlow in android](http://androidcodehub.com/face-detection-using-mtcnn-tensorflow-android/)",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d044887200d9a2e8b4a959bd80a7503bd8216929 | 43,167 | ipynb | Jupyter Notebook | Step3_SVM-ClassifierTo-LabelVideoData.ipynb | itsMahad/MothAbdominalRestriction | 3f71766261515431d6b63e2dbc8c2d6ebdcb752f | [
"MIT"
] | null | null | null | Step3_SVM-ClassifierTo-LabelVideoData.ipynb | itsMahad/MothAbdominalRestriction | 3f71766261515431d6b63e2dbc8c2d6ebdcb752f | [
"MIT"
] | null | null | null | Step3_SVM-ClassifierTo-LabelVideoData.ipynb | itsMahad/MothAbdominalRestriction | 3f71766261515431d6b63e2dbc8c2d6ebdcb752f | [
"MIT"
] | null | null | null | 119.245856 | 22,132 | 0.87048 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import style\nstyle.use(\"ggplot\")\nfrom sklearn import svm\nimport pandas as pd\nimport os\nimport scipy as sc",
"_____no_output_____"
],
[
"# get the annotated data to build the classifier\ndirec = r'C:\\Users\\Daniellab\\Desktop\\Light_level_videos_second_batch\\Data\\Step3\\Annotation'\nfile = pd.read_csv(direc + '\\Mahad_ManualAnnotation_pooledAllDataTogether.csv')",
"_____no_output_____"
]
],
[
[
"Check the distribution of the true and false trials",
"_____no_output_____"
]
],
[
[
"mu, sigma = 0, 0.1 # mean and standard deviation\ns = np.random.normal(mu, sigma, 1000)\n\nk2_test, p_test = sc.stats.normaltest(s, axis=0, nan_policy='omit')\nprint(\"p = {:g}\".format(p_test))\n\nif p_test < 0.05: # null hypothesis - the distribution is normally distributed; less than alpha - reject null hypothesis\n print('This random distribution is not normally distributed')\nelse:\n print('This random distribution is normally distributed')",
"p = 0.247467\nThis random distribution is normally distributed\n"
],
[
"trueTrials = file.FramesInView[file.TrialStatus == 1]\n\nk2_true, p_true = sc.stats.normaltest(np.log(trueTrials), axis=0, nan_policy='omit')\nprint(\"p = {:g}\".format(p_true))\n\nif p_true < 0.05: # null hypothesis - the distribution is normally distributed; less than alpha - reject null hypothesis\n print('the true trials are not normally distributed')\nelse:\n print('The true trials are normally distributed')",
"p = 0.045632\nthe true trials are not normally distributed\n"
],
[
"falseTrials = file.FramesInView[file.TrialStatus == 0]\n\nk2_false, p_false = sc.stats.normaltest(np.log(falseTrials), axis=0, nan_policy='omit')\nprint(\"p = {:g}\".format(p_false)) \n\nif p_false < 0.05: # null hypothesis - the distribution is normally distributed; less than alpha - reject null hypothesis\n print('the false trials are not normally distributed')\nelse:\n print('The false trials are normally distributed')",
"p = 0.000107942\nthe false trials are not normally distributed\n"
],
[
"x = np.asarray(file.FramesInView)\ny = np.zeros(len(x))\ndata = np.transpose(np.array([x,y]))\nManual_Label = np.asarray(file.TrialStatus)\n\nplt.scatter(data[:,0],data[:,1], c = Manual_Label) #see what the data looks like",
"_____no_output_____"
],
[
"# build the linear classifier\nclf = svm.SVC(kernel = 'linear', C = 1.0)\nclf.fit(data,Manual_Label)",
"_____no_output_____"
],
[
"w = clf.coef_[0]\ny0 = clf.intercept_\n\nnew_line = w[0]*data[:,0] - y0\nnew_line.shape",
"_____no_output_____"
],
[
"# see what the classifier did to the labels - find a way to draw a line along the \"point\" and draw \"margin\"\n\nplt.hist(trueTrials, bins =10**np.linspace(0, 4, 40), color = 'lightyellow', label = 'true trials', zorder=0)\nplt.hist(falseTrials, bins =10**np.linspace(0, 4, 40), color = 'mediumpurple', alpha=0.35, label = 'false trials', zorder=5)\n\nannotation = []\nfor x,_ in data:\n YY = clf.predict([[x,0]])[0]\n annotation.append(YY)\n\nplt.scatter(data[:,0],data[:,1]+10, c = annotation, \n alpha=0.3, edgecolors='none', zorder=10, label = 'post-classification')\n# plt.plot(new_line)\nplt.xscale(\"log\")\nplt.yscale('linear')\nplt.xlabel('Trial length (in frame Number)')\nplt.title('Using a Classifier to indentify true trials')\nplt.legend()\n# plt.savefig(r'C:\\Users\\Daniellab\\Desktop\\Light_level_videos_c-10\\Data\\Step3\\Annotation\\Figuers_3.svg')\nplt.tight_layout()",
"_____no_output_____"
],
[
"# run the predictor for all dataset and annotate them\ndirec = r'C:\\Users\\Daniellab\\Desktop\\Light_level_videos_second_batch\\Data\\Step2_Tanvi_Method'\nnew_path = r'C:\\Users\\Daniellab\\Desktop\\Light_level_videos_second_batch\\Data\\Step3'\n\nfile = [file for file in os.listdir(direc) if file.endswith('.csv')]\n# test = file[0]\n\nfor item in file:\n print(item)\n df = pd.read_csv(direc + '/' + item)\n label = []\n # run the classifer on this\n for xx in df.Frames_In_View:\n YY = clf.predict([[xx,0]])[0]\n label.append(YY)\n\n df1 = pd.DataFrame({'label': label})\n new_df = pd.concat([df, df1], axis = 1)\n# new_df.to_csv(new_path + '/' + item[:-4] + '_labeled.csv')",
"L0.1_c-3_m10_MothInOut.csv\nL0.1_c-3_m12_MothInOut.csv\nL0.1_c-3_m20_MothInOut.csv\nL0.1_c-3_m21_MothInOut.csv\nL0.1_c-3_m22_MothInOut.csv\nL0.1_c-3_m23_MothInOut.csv\nL0.1_c-3_m24_MothInOut.csv\nL0.1_c-3_m25_MothInOut.csv\nL0.1_c-3_m27_MothInOut.csv\nL0.1_c-3_m2_MothInOut.csv\nL0.1_c-3_m32_MothInOut.csv\nL0.1_c-3_m34_MothInOut.csv\nL0.1_c-3_m37_MothInOut.csv\nL0.1_c-3_m38_MothInOut.csv\nL0.1_c-3_m39_MothInOut.csv\nL0.1_c-3_m40_MothInOut.csv\nL0.1_c-3_m41_MothInOut.csv\nL0.1_c-3_m43_MothInOut.csv\nL0.1_c-3_m44_MothInOut.csv\nL0.1_c-3_m45_MothInOut.csv\nL0.1_c-3_m46_MothInOut.csv\nL0.1_c-3_m47_MothInOut.csv\nL0.1_c-3_m48_MothInOut.csv\nL0.1_c-3_m49_MothInOut.csv\nL0.1_c-3_m50_MothInOut.csv\nL0.1_c-3_m54_MothInOut.csv\nL0.1_c-3_m57_MothInOut.csv\nL0.1_c-3_m5_MothInOut.csv\nL0.1_c-3_m8_MothInOut.csv\nL50_c-3_m10_MothInOut.csv\nL50_c-3_m12_MothInOut.csv\nL50_c-3_m13_MothInOut.csv\nL50_c-3_m14_MothInOut.csv\nL50_c-3_m15_MothInOut.csv\nL50_c-3_m21_MothInOut.csv\nL50_c-3_m22_MothInOut.csv\nL50_c-3_m24_MothInOut.csv\nL50_c-3_m25_MothInOut.csv\nL50_c-3_m26_MothInOut.csv\nL50_c-3_m2_MothInOut.csv\nL50_c-3_m30_MothInOut.csv\nL50_c-3_m32_MothInOut.csv\nL50_c-3_m33_MothInOut.csv\nL50_c-3_m34_MothInOut.csv\nL50_c-3_m35_MothInOut.csv\nL50_c-3_m37_MothInOut.csv\nL50_c-3_m38_MothInOut.csv\nL50_c-3_m39_MothInOut.csv\nL50_c-3_m45_MothInOut.csv\nL50_c-3_m49_MothInOut.csv\nL50_c-3_m50_MothInOut.csv\nL50_c-3_m51_MothInOut.csv\nL50_c-3_m58_MothInOut.csv\nL50_c-3_m6_MothInOut.csv\nL50_c-3_m9_MothInOut.csv\n"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0448b89d34860e51ae6dc1f689b72d826ae4ce4 | 3,866 | ipynb | Jupyter Notebook | old/drake_examples/cart_dircol.ipynb | mmolnar0/sgillen_research | 752e09fdf7a996c832e71b0a8296322fe77e9ae3 | [
"MIT"
] | null | null | null | old/drake_examples/cart_dircol.ipynb | mmolnar0/sgillen_research | 752e09fdf7a996c832e71b0a8296322fe77e9ae3 | [
"MIT"
] | null | null | null | old/drake_examples/cart_dircol.ipynb | mmolnar0/sgillen_research | 752e09fdf7a996c832e71b0a8296322fe77e9ae3 | [
"MIT"
] | null | null | null | 32.762712 | 85 | 0.562597 | [
[
[
"from __future__ import print_function\n\nimport pydrake\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom pydrake.all import (DirectCollocation, FloatingBaseType,\n PiecewisePolynomial, RigidBodyTree, RigidBodyPlant,\n SolutionResult)\nfrom underactuated import (FindResource, PlanarRigidBodyVisualizer)\n\nfrom IPython.display import HTML\n\n%matplotlib inline\n\ntree = RigidBodyTree(FindResource(\"cartpole/cartpole.urdf\"),\n FloatingBaseType.kFixed)\nplant = RigidBodyPlant(tree)\ncontext = plant.CreateDefaultContext()\n\ndircol = DirectCollocation(plant, context, num_time_samples=21,\n minimum_timestep=0.1, maximum_timestep=0.4)\n\ndircol.AddEqualTimeIntervalsConstraints()\n\ninitial_state = (0., 0., 0., 0.)\ndircol.AddBoundingBoxConstraint(initial_state, initial_state,\n dircol.initial_state())\n# More elegant version is blocked on drake #8315:\n# dircol.AddLinearConstraint(dircol.initial_state() == initial_state)\n\nfinal_state = (0., math.pi, 0., 0.)\ndircol.AddBoundingBoxConstraint(final_state, final_state,\n dircol.final_state())\n# dircol.AddLinearConstraint(dircol.final_state() == final_state)\n\nR = 10 # Cost on input \"effort\".\nu = dircol.input()\ndircol.AddRunningCost(R*u[0]**2)\n\n# Add a final cost equal to the total duration.\ndircol.AddFinalCost(dircol.time())\n\ninitial_x_trajectory = \\\n PiecewisePolynomial.FirstOrderHold([0., 4.],\n np.column_stack((initial_state,\n final_state)))\n\ndircol.SetInitialTrajectory(PiecewisePolynomial(), initial_x_trajectory)\n\nresult = dircol.Solve()\nassert(result == SolutionResult.kSolutionFound)\n\nx_trajectory = dircol.ReconstructStateTrajectory()\n\n#vis = PlanarRigidBodyVisualizer(tree, xlim=[-2.5, 2.5], ylim=[-1, 2.5])\n#ani = vis.animate(x_trajectory, repeat=True)\n\nu_trajectory = dircol.ReconstructInputTrajectory()\ntimes = np.linspace(u_trajectory.start_time(), u_trajectory.end_time(), 100)\nu_lookup = np.vectorize(u_trajectory.value)\nu_values = u_lookup(times)\n\nplt.figure()\nplt.plot(times, u_values)\nplt.xlabel('time (seconds)')\nplt.ylabel('force (Newtons)')\n\n\nprbv = PlanarRigidBodyVisualizer(tree, xlim=[-2.5, 2.5], ylim=[-1, 2.5])\nani = prbv.animate(x_trajectory, resample=30, repeat=True)\nplt.close(prbv.fig)\nHTML(ani.to_html5_video())\n\n\n#plt.show()\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d04491e4d125affa59af01d4e51f0158ddde1f08 | 49,660 | ipynb | Jupyter Notebook | HGPextreme/examples/motorcycle/func_prediction.ipynb | umbrellagong/HGPextreme | d32c0ffc3e19dd3f5812e886c65000286e3eb905 | [
"MIT"
] | null | null | null | HGPextreme/examples/motorcycle/func_prediction.ipynb | umbrellagong/HGPextreme | d32c0ffc3e19dd3f5812e886c65000286e3eb905 | [
"MIT"
] | null | null | null | HGPextreme/examples/motorcycle/func_prediction.ipynb | umbrellagong/HGPextreme | d32c0ffc3e19dd3f5812e886c65000286e3eb905 | [
"MIT"
] | null | null | null | 297.365269 | 45,960 | 0.933105 | [
[
[
"# Info\n\ncomparison of vhgpr and sgpr",
"_____no_output_____"
]
],
[
[
"import sys\nsys.path.append(\"../../\")",
"_____no_output_____"
],
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.io as sio\nfrom sklearn.gaussian_process import GaussianProcessRegressor\nfrom sklearn.gaussian_process.kernels import WhiteKernel, RBF, ConstantKernel as C\nfrom core import VHGPR",
"_____no_output_____"
],
[
"plt.rcParams.update({'font.size': 16})",
"_____no_output_____"
]
],
[
[
"### data and test points",
"_____no_output_____"
]
],
[
[
"Data = sio.loadmat('motorcycle.mat')\nDX = Data['X']\nDY = Data['y'].flatten()\n\nx = np.atleast_2d(np.linspace(0,60,100)).T # Test points",
"_____no_output_____"
]
],
[
[
"### VHGPR",
"_____no_output_____"
]
],
[
[
"kernelf = C(10.0, (1e-1, 5*1e3)) * RBF(5, (1e-1, 1e2)) # mean kernel\nkernelg = C(10.0, (1e-1, 1e2)) * RBF(5, (1e-1, 1e2)) # variance kernel \nmodel_v = VHGPR(kernelf, kernelg)\nresults_v = model_v.fit(DX, DY).predict(x)",
"_____no_output_____"
]
],
[
[
"### Standard GPR",
"_____no_output_____"
]
],
[
[
"kernel = C(1e1, (1e-1, 1e4)) * RBF(1e1, (1e-1, 1e2)) + WhiteKernel(1e1, (1e-1, 1e4))\nmodel_s = GaussianProcessRegressor(kernel, n_restarts_optimizer = 5)\nresults_s = model_s.fit(DX, DY).predict(x, return_std = True) ",
"_____no_output_____"
]
],
[
[
"### Comparison",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize = (6,4)) \nplt.plot(DX,DY,\"o\")\nplt.plot(x, results_v[0],'r', label='vhgpr')\nplt.plot(x, results_v[0] + 2 * np.sqrt(np.exp(results_v[2])), 'r--')\nplt.plot(x, results_v[0] - 2 * np.sqrt(np.exp(results_v[2])),'r--')\nplt.plot(x, results_s[0],'k', label='sgpr')\nplt.plot(x, results_s[0] + 2* np.sqrt(np.exp(model_s.kernel_.theta[2])),'k--')\nplt.plot(x, results_s[0] - 2* np.sqrt(np.exp(model_s.kernel_.theta[2])),'k--')\nplt.xlim(0,60)\nplt.xlabel('x')\nplt.ylabel('y')\nplt.legend()\nplt.grid()\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d044b78367bce0804a35cd90f0dc693537502293 | 7,600 | ipynb | Jupyter Notebook | notebooks/python/raw/ex_5.ipynb | NeuroLaunch/learntools | 60f8929f3526b7469d3da236a13748fde8584153 | [
"Apache-2.0"
] | 1 | 2019-12-09T04:45:42.000Z | 2019-12-09T04:45:42.000Z | notebooks/python/raw/ex_5.ipynb | NeuroLaunch/learntools | 60f8929f3526b7469d3da236a13748fde8584153 | [
"Apache-2.0"
] | null | null | null | notebooks/python/raw/ex_5.ipynb | NeuroLaunch/learntools | 60f8929f3526b7469d3da236a13748fde8584153 | [
"Apache-2.0"
] | null | null | null | 24.675325 | 338 | 0.546711 | [
[
[
"#$EXERCISE_PREAMBLE$\n\nAs always, run the setup code below before working on the questions (and if you leave this notebook and come back later, remember to run the setup code again).",
"_____no_output_____"
]
],
[
[
"from learntools.core import binder; binder.bind(globals())\nfrom learntools.python.ex5 import *\nprint('Setup complete.')",
"_____no_output_____"
]
],
[
[
"# Exercises",
"_____no_output_____"
],
[
"## 1.\n\nHave you ever felt debugging involved a bit of luck? The following program has a bug. Try to identify the bug and fix it.",
"_____no_output_____"
]
],
[
[
"def has_lucky_number(nums):\n \"\"\"Return whether the given list of numbers is lucky. A lucky list contains\n at least one number divisible by 7.\n \"\"\"\n for num in nums:\n if num % 7 == 0:\n return True\n else:\n return False",
"_____no_output_____"
]
],
[
[
"Try to identify the bug and fix it in the cell below:",
"_____no_output_____"
]
],
[
[
"def has_lucky_number(nums):\n \"\"\"Return whether the given list of numbers is lucky. A lucky list contains\n at least one number divisible by 7.\n \"\"\"\n for num in nums:\n if num % 7 == 0:\n return True\n else:\n return False\n\nq1.check()",
"_____no_output_____"
],
[
"#_COMMENT_IF(PROD)_\nq1.hint()\n#_COMMENT_IF(PROD)_\nq1.solution()",
"_____no_output_____"
]
],
[
[
"## 2.\n\n### a.\nLook at the Python expression below. What do you think we'll get when we run it? When you've made your prediction, uncomment the code and run the cell to see if you were right.",
"_____no_output_____"
]
],
[
[
"#[1, 2, 3, 4] > 2",
"_____no_output_____"
]
],
[
[
"### b\nR and Python have some libraries (like numpy and pandas) compare each element of the list to 2 (i.e. do an 'element-wise' comparison) and give us a list of booleans like `[False, False, True, True]`. \n\nImplement a function that reproduces this behaviour, returning a list of booleans corresponding to whether the corresponding element is greater than n.\n",
"_____no_output_____"
]
],
[
[
"def elementwise_greater_than(L, thresh):\n \"\"\"Return a list with the same length as L, where the value at index i is \n True if L[i] is greater than thresh, and False otherwise.\n \n >>> elementwise_greater_than([1, 2, 3, 4], 2)\n [False, False, True, True]\n \"\"\"\n pass\n\nq2.check()",
"_____no_output_____"
],
[
"#_COMMENT_IF(PROD)_\nq2.solution()",
"_____no_output_____"
]
],
[
[
"## 3.\n\nComplete the body of the function below according to its docstring",
"_____no_output_____"
]
],
[
[
"def menu_is_boring(meals):\n \"\"\"Given a list of meals served over some period of time, return True if the\n same meal has ever been served two days in a row, and False otherwise.\n \"\"\"\n pass\n\nq3.check()",
"_____no_output_____"
],
[
"#_COMMENT_IF(PROD)_\nq3.hint()\n#_COMMENT_IF(PROD)_\nq3.solution()",
"_____no_output_____"
]
],
[
[
"## 4. <span title=\"A bit spicy\" style=\"color: darkgreen \">🌶️</span>\n\nNext to the Blackjack table, the Python Challenge Casino has a slot machine. You can get a result from the slot machine by calling `play_slot_machine()`. The number it returns is your winnings in dollars. Usually it returns 0. But sometimes you'll get lucky and get a big payday. Try running it below:",
"_____no_output_____"
]
],
[
[
"play_slot_machine()",
"_____no_output_____"
]
],
[
[
"By the way, did we mention that each play costs $1? Don't worry, we'll send you the bill later.\n\nOn average, how much money can you expect to gain (or lose) every time you play the machine? The casino keeps it a secret, but you can estimate the average value of each pull using a technique called the **Monte Carlo method**. To estimate the average outcome, we simulate the scenario many times, and return the average result.\n\nComplete the following function to calculate the average value per play of the slot machine.",
"_____no_output_____"
]
],
[
[
"def estimate_average_slot_payout(n_runs):\n \"\"\"Run the slot machine n_runs times and return the average net profit per run.\n Example calls (note that return value is nondeterministic!):\n >>> estimate_average_slot_payout(1)\n -1\n >>> estimate_average_slot_payout(1)\n 0.5\n \"\"\"\n pass",
"_____no_output_____"
]
],
[
[
"When you think you know the expected value per spin, uncomment the line below to see how close you were.",
"_____no_output_____"
]
],
[
[
"#_COMMENT_IF(PROD)_\nq4.solution()",
"_____no_output_____"
]
],
[
[
"#$KEEP_GOING$\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d044c0984d6f3d93a102d6cba9686bb4e9878414 | 19,935 | ipynb | Jupyter Notebook | deep_learning_v2_pytorch/project-bikesharing/Predicting_bike_sharing_data.ipynb | TeoZosa/deep-learning-v2-pytorch | 8e73c26f2ebf49769b798e9ff26bd90d7de69f7d | [
"Apache-2.0"
] | null | null | null | deep_learning_v2_pytorch/project-bikesharing/Predicting_bike_sharing_data.ipynb | TeoZosa/deep-learning-v2-pytorch | 8e73c26f2ebf49769b798e9ff26bd90d7de69f7d | [
"Apache-2.0"
] | 159 | 2021-05-07T21:34:19.000Z | 2022-03-28T13:33:29.000Z | deep_learning_v2_pytorch/project-bikesharing/Predicting_bike_sharing_data.ipynb | TeoZosa/deep-learning-v2-pytorch | 8e73c26f2ebf49769b798e9ff26bd90d7de69f7d | [
"Apache-2.0"
] | null | null | null | 40.518293 | 664 | 0.60607 | [
[
[
"# Your first neural network\n\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.\n\n",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"## Load and prepare the data\n\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!",
"_____no_output_____"
]
],
[
[
"data_path = \"Bike-Sharing-Dataset/hour.csv\"\n\nrides = pd.read_csv(data_path)",
"_____no_output_____"
],
[
"rides.head()",
"_____no_output_____"
]
],
[
[
"## Checking out the data\n\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.\n\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.",
"_____no_output_____"
]
],
[
[
"rides[: 24 * 10].plot(x=\"dteday\", y=\"cnt\")",
"_____no_output_____"
]
],
[
[
"### Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.",
"_____no_output_____"
]
],
[
[
"dummy_fields = [\"season\", \"weathersit\", \"mnth\", \"hr\", \"weekday\"]\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = [\n \"instant\",\n \"dteday\",\n \"season\",\n \"weathersit\",\n \"weekday\",\n \"atemp\",\n \"mnth\",\n \"workingday\",\n \"hr\",\n]\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()",
"_____no_output_____"
]
],
[
[
"### Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\n\nThe scaling factors are saved so we can go backwards when we use the network for predictions.",
"_____no_output_____"
]
],
[
[
"quant_features = [\"casual\", \"registered\", \"cnt\", \"temp\", \"hum\", \"windspeed\"]\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean) / std",
"_____no_output_____"
]
],
[
[
"### Splitting the data into training, testing, and validation sets\n\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.",
"_____no_output_____"
]
],
[
[
"# Save data for approximately the last 21 days\ntest_data = data[-21 * 24 :]\n\n# Now remove the test data from the data set\ndata = data[: -21 * 24]\n\n# Separate the data into features and targets\ntarget_fields = [\"cnt\", \"casual\", \"registered\"]\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = (\n test_data.drop(target_fields, axis=1),\n test_data[target_fields],\n)",
"_____no_output_____"
]
],
[
[
"We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).",
"_____no_output_____"
]
],
[
[
"# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[: -60 * 24], targets[: -60 * 24]\nval_features, val_targets = features[-60 * 24 :], targets[-60 * 24 :]",
"_____no_output_____"
]
],
[
[
"## Time to build the network\n\nBelow you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n\n<img src=\"assets/neural_network.png\" width=300px>\n\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.\n\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.\n\n> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.\n2. Implement the forward pass in the `train` method.\n3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.\n4. Implement the forward pass in the `run` method.\n ",
"_____no_output_____"
]
],
[
[
"#############\n# In the my_answers.py file, fill out the TODO sections as specified\n#############\n\nfrom my_answers import NeuralNetwork",
"_____no_output_____"
],
[
"def MSE(y, Y):\n return np.mean((y - Y) ** 2)",
"_____no_output_____"
]
],
[
[
"## Unit tests\n\nRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you starting trying to train it. These tests must all be successful to pass the project.",
"_____no_output_____"
]
],
[
[
"import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2], [0.4, 0.5], [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3], [-0.1]])\n\n\nclass TestMethods(unittest.TestCase):\n\n ##########\n # Unit tests for data loading\n ##########\n\n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == \"bike-sharing-dataset/hour.csv\")\n\n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n\n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(\n np.all(network.activation_function(0.5) == 1 / (1 + np.exp(-0.5)))\n )\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n network.train(inputs, targets)\n self.assertTrue(\n np.allclose(\n network.weights_hidden_to_output,\n np.array([[0.37275328], [-0.03172939]]),\n )\n )\n self.assertTrue(\n np.allclose(\n network.weights_input_to_hidden,\n np.array(\n [\n [0.10562014, -0.20185996],\n [0.39775194, 0.50074398],\n [-0.29887597, 0.19962801],\n ]\n ),\n )\n )\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)",
"_____no_output_____"
]
],
[
[
"## Training the network\n\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\n\nYou'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\n\n### Choose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.\n\n### Choose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\n\n### Choose the number of hidden nodes\nIn a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data. \n\nTry a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.",
"_____no_output_____"
]
],
[
[
"import sys\n\n####################\n### Set the hyperparameters in you myanswers.py file ###\n####################\n\nfrom my_answers import iterations, learning_rate, hidden_nodes, output_nodes\n\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {\"train\": [], \"validation\": []}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.iloc[batch].values, train_targets.iloc[batch][\"cnt\"]\n\n network.train(X, y)\n\n # Printing out the training progress\n train_loss = MSE(\n np.array(network.run(train_features)).T, train_targets[\"cnt\"].values\n )\n val_loss = MSE(np.array(network.run(val_features)).T, val_targets[\"cnt\"].values)\n sys.stdout.write(\n \"\\rProgress: {:2.1f}\".format(100 * ii / float(iterations))\n + \"% ... Training loss: \"\n + str(train_loss)[:5]\n + \" ... Validation loss: \"\n + str(val_loss)[:5]\n )\n sys.stdout.flush()\n\n losses[\"train\"].append(train_loss)\n losses[\"validation\"].append(val_loss)",
"_____no_output_____"
],
[
"plt.plot(losses[\"train\"], label=\"Training loss\")\nplt.plot(losses[\"validation\"], label=\"Validation loss\")\nplt.legend()\n_ = plt.ylim()",
"_____no_output_____"
]
],
[
[
"## Check out your predictions\n\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(8, 4))\n\nmean, std = scaled_features[\"cnt\"]\npredictions = np.array(network.run(test_features)).T * std + mean\nax.plot(predictions[0], label=\"Prediction\")\nax.plot((test_targets[\"cnt\"] * std + mean).values, label=\"Data\")\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.iloc[test_data.index][\"dteday\"])\ndates = dates.apply(lambda d: d.strftime(\"%b %d\"))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)",
"_____no_output_____"
]
],
[
[
"## OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).\n \nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\n> **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\n#### Your answer below",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d044c3037ab1cffe79605f44227e197923eb5010 | 7,560 | ipynb | Jupyter Notebook | 11_cte4.ipynb | mnrclab/Advanced_SQL_TimeSeries | 395c97f01bf003e5c661c36e1b81589b2341fb17 | [
"Unlicense"
] | null | null | null | 11_cte4.ipynb | mnrclab/Advanced_SQL_TimeSeries | 395c97f01bf003e5c661c36e1b81589b2341fb17 | [
"Unlicense"
] | null | null | null | 11_cte4.ipynb | mnrclab/Advanced_SQL_TimeSeries | 395c97f01bf003e5c661c36e1b81589b2341fb17 | [
"Unlicense"
] | null | null | null | 27.391304 | 53 | 0.303968 | [
[
[
"sql(\n '''\n WITH daily_avg_sales AS\n (SELECT\n DAY(order_date) order_day,\n avg(sales) avg_sales\n FROM \n superstore\n GROUP BY\n order_day\n )\n SELECT order_day, avg_sales\n FROM daily_avg_sales\n ORDER BY order_day;\n '''\n)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d044ceb47938a9434bc7d9a89df1722a948a6d09 | 67,691 | ipynb | Jupyter Notebook | notebooks/introduction_to_rlberry.ipynb | antoine-moulin/rlberry | 676af9d1bb9094a6790a9aa3ff7e67b13584a183 | [
"MIT"
] | null | null | null | notebooks/introduction_to_rlberry.ipynb | antoine-moulin/rlberry | 676af9d1bb9094a6790a9aa3ff7e67b13584a183 | [
"MIT"
] | null | null | null | notebooks/introduction_to_rlberry.ipynb | antoine-moulin/rlberry | 676af9d1bb9094a6790a9aa3ff7e67b13584a183 | [
"MIT"
] | null | null | null | 125.353704 | 12,922 | 0.825767 | [
[
[
"<a href=\"https://colab.research.google.com/github/rlberry-py/rlberry/blob/main/notebooks/introduction_to_rlberry.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Introduction to\n \n\n\n\n",
"_____no_output_____"
],
[
"# Colab setup",
"_____no_output_____"
]
],
[
[
"# install rlberry library\n!git clone https://github.com/rlberry-py/rlberry.git \n!cd rlberry && git pull && pip install -e . > /dev/null 2>&1\n\n# install ffmpeg-python for saving videos\n!pip install ffmpeg-python > /dev/null 2>&1\n\n# install optuna for hyperparameter optimization\n!pip install optuna > /dev/null 2>&1\n\n# packages required to show video\n!pip install pyvirtualdisplay > /dev/null 2>&1\n!apt-get install -y xvfb python-opengl ffmpeg > /dev/null 2>&1\n\nprint(\"\")\nprint(\" ~~~ Libraries installed, please restart the runtime! ~~~ \")\nprint(\"\")\n",
"fatal: destination path 'rlberry' already exists and is not an empty directory.\nremote: Enumerating objects: 9, done.\u001b[K\nremote: Counting objects: 100% (9/9), done.\u001b[K\nremote: Compressing objects: 100% (1/1), done.\u001b[K\nremote: Total 5 (delta 4), reused 5 (delta 4), pack-reused 0\u001b[K\nUnpacking objects: 100% (5/5), done.\nFrom https://github.com/rlberry-py/rlberry\n f2abf19..ea58731 main -> origin/main\nUpdating f2abf19..ea58731\nFast-forward\n rlberry/stats/agent_stats.py | 2 \u001b[32m+\u001b[m\u001b[31m-\u001b[m\n 1 file changed, 1 insertion(+), 1 deletion(-)\n\n ~~~ Libraries installed, please restart the runtime! ~~~ \n\n"
],
[
"# Create directory for saving videos\n!mkdir videos > /dev/null 2>&1\n\n# Initialize display and import function to show videos\nimport rlberry.colab_utils.display_setup\nfrom rlberry.colab_utils.display_setup import show_video",
"_____no_output_____"
]
],
[
[
"# Interacting with a simple environment",
"_____no_output_____"
]
],
[
[
"from rlberry.envs import GridWorld\n\n# A grid world is a simple environment with finite states and actions, on which \n# we can test simple algorithms.\n# -> The reward function can be accessed by: env.R[state, action]\n# -> And the transitions: env.P[state, action, next_state]\nenv = GridWorld(nrows=3, ncols=10,\n reward_at = {(1,1):0.1, (2, 9):1.0},\n walls=((1,4),(2,4), (1,5)),\n success_probability=0.9)\n\n\n# Let's visuzalize a random policy in this environment!\nenv.enable_rendering()\nenv.reset()\nfor tt in range(20):\n action = env.action_space.sample()\n next_state, reward, is_terminal, info = env.step(action)\n\n# save video and clear buffer\nenv.save_video('./videos/gw.mp4', framerate=5)\nenv.clear_render_buffer()\n# show video\nshow_video('./videos/gw.mp4')",
"videos/gw.mp4\n"
]
],
[
[
"# Creating an agent\n\nLet's create an agent that runs value iteration to find a near-optimal policy.\nThis is possible in our GridWorld, because we have access to the transitions `env.P` and the rewards `env.R`.\n\n\nAn Agent must implement at least two methods, **fit()** and **policy()**.\n\nIt can also implement **sample_parameters()** used for hyperparameter optimization with [Optuna](https://optuna.org/).\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom rlberry.agents import Agent\n\nclass ValueIterationAgent(Agent):\n name = 'ValueIterationAgent'\n def __init__(self, env, gamma=0.99, epsilon=1e-5, **kwargs): # it's important to put **kwargs to ensure compatibility with the base class \n \"\"\"\n gamma: discount factor\n episilon: precision of value iteration\n \"\"\"\n Agent.__init__(self, env, **kwargs) # self.env is initialized in the base class\n\n self.gamma = gamma\n self.epsilon = epsilon \n self.Q = None # Q function to be computed in fit()\n \n def fit(self, **kwargs): \n \"\"\"\n Run value iteration.\n \"\"\"\n S, A = env.observation_space.n, env.action_space.n \n Q = np.zeros((S, A))\n V = np.zeros(S)\n\n while True:\n TQ = np.zeros((S, A))\n for ss in range(S):\n for aa in range(A):\n TQ[ss, aa] = env.R[ss, aa] + self.gamma*env.P[ss, aa, :].dot(V)\n V = TQ.max(axis=1)\n\n if np.abs(TQ-Q).max() < self.epsilon:\n break\n Q = TQ\n self.Q = Q\n \n def policy(self, observation, **kwargs):\n return self.Q[observation, :].argmax()\n\n \n @classmethod\n def sample_parameters(cls, trial):\n \"\"\"\n Sample hyperparameters for hyperparam optimization using Optuna (https://optuna.org/)\n \"\"\"\n gamma = trial.suggest_categorical('gamma', [0.1, 0.25, 0.5, 0.75, 0.99])\n return {'gamma':gamma}\n",
"_____no_output_____"
],
[
"# Now, let's fit and test the agent!\nagent = ValueIterationAgent(env)\nagent.fit()\n\n\n# Run agent's policy\nenv.enable_rendering()\nstate = env.reset()\nfor tt in range(20):\n action = agent.policy(state)\n state, reward, is_terminal, info = env.step(action)\n\n# save video and clear buffer\nenv.save_video('./videos/gw.mp4', framerate=5)\nenv.clear_render_buffer()\n# show video\nshow_video('./videos/gw.mp4')\n",
"videos/gw.mp4\n"
]
],
[
[
"# `AgentStats`: A powerfull class for hyperparameter optimization, training and evaluating agents.",
"_____no_output_____"
]
],
[
[
"# Create random agent as a baseline\nclass RandomAgent(Agent):\n name = 'RandomAgent'\n def __init__(self, env, gamma=0.99, epsilon=1e-5, **kwargs): # it's important to put **kwargs to ensure compatibility with the base class \n \"\"\"\n gamma: discount factor\n episilon: precision of value iteration\n \"\"\"\n Agent.__init__(self, env, **kwargs) # self.env is initialized in the base class\n\n def fit(self, **kwargs): \n pass\n \n def policy(self, observation, **kwargs):\n return self.env.action_space.sample()",
"_____no_output_____"
],
[
"from rlberry.stats import AgentStats, compare_policies\n\n# Define parameters\nvi_params = {'gamma':0.1, 'epsilon':1e-3}\n\n# Create AgentStats to fit 4 agents using 1 job\nvi_stats = AgentStats(ValueIterationAgent, env, eval_horizon=20, init_kwargs=vi_params, n_fit=4, n_jobs=1)\nvi_stats.fit()\n\n# Create AgentStats for baseline\nbaseline_stats = AgentStats(RandomAgent, env, eval_horizon=20, n_fit=1)\n\n# Compare policies using 10 Monte Carlo simulations\noutput = compare_policies([vi_stats, baseline_stats], n_sim=10)\n",
"\n Training AgentStats for ValueIterationAgent... \n\n\n ... trained! \n\n\n Training AgentStats for RandomAgent... \n\n"
],
[
"# The value of gamma above makes our VI agent quite bad! Let's optimize it.\nvi_stats.optimize_hyperparams(n_trials=15, timeout=30, n_sim=5, n_fit=1, n_jobs=1, sampler_method='random', pruner_method='none')\n\n# fit with optimized params\nvi_stats.fit()\n\n# ... and see the results\noutput = compare_policies([vi_stats, baseline_stats], n_sim=10)\n",
"\u001b[32m[I 2020-11-22 15:33:24,381]\u001b[0m A new study created in memory with name: no-name-853cb971-a1ac-45a3-8d33-82ccd07b32c1\u001b[0m\n\u001b[32m[I 2020-11-22 15:33:24,406]\u001b[0m Trial 0 finished with value: 0.9 and parameters: {'gamma': 0.25}. Best is trial 0 with value: 0.9.\u001b[0m\n\u001b[32m[I 2020-11-22 15:33:24,427]\u001b[0m Trial 1 finished with value: 0.8799999999999999 and parameters: {'gamma': 0.5}. Best is trial 0 with value: 0.9.\u001b[0m\n\u001b[32m[I 2020-11-22 15:33:24,578]\u001b[0m Trial 2 finished with value: 2.0 and parameters: {'gamma': 0.99}. Best is trial 2 with value: 2.0.\u001b[0m\n\u001b[32m[I 2020-11-22 15:33:24,595]\u001b[0m Trial 3 finished with value: 0.9399999999999998 and parameters: {'gamma': 0.25}. Best is trial 2 with value: 2.0.\u001b[0m\n\u001b[32m[I 2020-11-22 15:33:24,612]\u001b[0m Trial 4 finished with value: 0.96 and parameters: {'gamma': 0.1}. Best is trial 2 with value: 2.0.\u001b[0m\n\u001b[32m[I 2020-11-22 15:33:24,630]\u001b[0m Trial 5 finished with value: 0.8599999999999998 and parameters: {'gamma': 0.5}. Best is trial 2 with value: 2.0.\u001b[0m\n\u001b[32m[I 2020-11-22 15:33:24,650]\u001b[0m Trial 6 finished with value: 0.8599999999999998 and parameters: {'gamma': 0.1}. Best is trial 2 with value: 2.0.\u001b[0m\n\u001b[32m[I 2020-11-22 15:33:24,667]\u001b[0m Trial 7 finished with value: 0.8799999999999999 and parameters: {'gamma': 0.1}. Best is trial 2 with value: 2.0.\u001b[0m\n\u001b[32m[I 2020-11-22 15:33:24,683]\u001b[0m Trial 8 finished with value: 0.9399999999999998 and parameters: {'gamma': 0.25}. Best is trial 2 with value: 2.0.\u001b[0m\n\u001b[32m[I 2020-11-22 15:33:24,704]\u001b[0m Trial 9 finished with value: 1.4200000000000002 and parameters: {'gamma': 0.75}. Best is trial 2 with value: 2.0.\u001b[0m\n\u001b[32m[I 2020-11-22 15:33:24,725]\u001b[0m Trial 10 finished with value: 1.2599999999999998 and parameters: {'gamma': 0.75}. Best is trial 2 with value: 2.0.\u001b[0m\n\u001b[32m[I 2020-11-22 15:33:24,741]\u001b[0m Trial 11 finished with value: 0.78 and parameters: {'gamma': 0.1}. Best is trial 2 with value: 2.0.\u001b[0m\n\u001b[32m[I 2020-11-22 15:33:24,890]\u001b[0m Trial 12 finished with value: 2.02 and parameters: {'gamma': 0.99}. Best is trial 12 with value: 2.02.\u001b[0m\n\u001b[32m[I 2020-11-22 15:33:24,909]\u001b[0m Trial 13 finished with value: 0.96 and parameters: {'gamma': 0.25}. Best is trial 12 with value: 2.02.\u001b[0m\n\u001b[32m[I 2020-11-22 15:33:24,937]\u001b[0m Trial 14 finished with value: 0.8400000000000001 and parameters: {'gamma': 0.75}. Best is trial 12 with value: 2.02.\u001b[0m\n[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.1s remaining: 0.0s\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d044d21828cca7bfc6c1258e0bfbb4fb44eac2f2 | 142,367 | ipynb | Jupyter Notebook | notebooks/run_model.ipynb | clementjumel/master_thesis | 5a39657a212f794690e7c426f60e10ba70d50da9 | [
"Apache-2.0"
] | 2 | 2020-07-08T19:33:52.000Z | 2020-07-18T16:52:59.000Z | notebooks/run_model.ipynb | clementjumel/master_thesis | 5a39657a212f794690e7c426f60e10ba70d50da9 | [
"Apache-2.0"
] | null | null | null | notebooks/run_model.ipynb | clementjumel/master_thesis | 5a39657a212f794690e7c426f60e10ba70d50da9 | [
"Apache-2.0"
] | null | null | null | 26.861698 | 115 | 0.511397 | [
[
[
"# Setup\n### Imports",
"_____no_output_____"
]
],
[
[
"import sys\nsys.path.append('../')\ndel sys\n\n%reload_ext autoreload\n%autoreload 2\n\nfrom toolbox.parsers import standard_parser, add_task_arguments, add_model_arguments\nfrom toolbox.utils import load_task, get_pretrained_model, to_class_name\nimport modeling.models as models",
"_____no_output_____"
]
],
[
[
"### Notebook functions",
"_____no_output_____"
]
],
[
[
"from numpy import argmax, mean\n\ndef run_models(model_names, word2vec, bart, args, train=False):\n args.word2vec = word2vec\n args.bart = bart\n \n pretrained_model = get_pretrained_model(args)\n \n for model_name in model_names:\n args.model = model_name\n print(model_name)\n\n model = getattr(models, to_class_name(args.model))(args=args, pretrained_model=pretrained_model)\n model.play(task=task, args=args)\n \n if train:\n valid_scores = model.valid_scores['average_precision']\n test_scores = model.test_scores['average_precision']\n\n valid_scores = [mean(epoch_scores) for epoch_scores in valid_scores]\n test_scores = [mean(epoch_scores) for epoch_scores in test_scores]\n\n i_max = argmax(valid_scores)\n print(\"max for epoch %i\" % (i_max+1))\n print(\"valid score: %.5f\" % valid_scores[i_max])\n print(\"test score: %.5f\" % test_scores[i_max])",
"_____no_output_____"
]
],
[
[
"### Parameters",
"_____no_output_____"
]
],
[
[
"ap = standard_parser()\nadd_task_arguments(ap)\nadd_model_arguments(ap)\nargs = ap.parse_args([\"-m\", \"\",\n \"--root\", \"..\"])",
"_____no_output_____"
]
],
[
[
"### Load the data",
"_____no_output_____"
]
],
[
[
"task = load_task(args)",
"Task loaded from ../results/modeling_task/context-dependent-same-type_50-25-25_rs24_bs4_cf-v0_tf-v0.pkl.\n\n"
]
],
[
[
"# Basic baselines",
"_____no_output_____"
]
],
[
[
"run_models(model_names=[\"random\",\n \"frequency\"],\n word2vec=False,\n bart=False,\n args=args)",
"random\nEvaluation on the valid loader...\n"
]
],
[
[
"# Basic baselines",
"_____no_output_____"
]
],
[
[
"run_models(model_names=[\"summaries-count\",\n \"summaries-unique-count\",\n \"summaries-overlap\",\n \"activated-summaries\",\n \"context-count\",\n \"context-unique-count\",\n \"summaries-context-count\",\n \"summaries-context-unique-count\",\n \"summaries-context-overlap\"],\n word2vec=False,\n bart=False,\n args=args)",
"summaries-count\nEvaluation on the valid loader...\n"
]
],
[
[
"# Embedding baselines",
"_____no_output_____"
]
],
[
[
"run_models(model_names=[\"summaries-average-embedding\",\n \"summaries-overlap-average-embedding\",\n \"context-average-embedding\",\n \"summaries-context-average-embedding\",\n \"summaries-context-overlap-average-embedding\"],\n word2vec=True,\n bart=False,\n args=args)",
"Word2Vec embedding loaded.\n\nsummaries-average-embedding\nEvaluation on the valid loader...\n"
]
],
[
[
"### Custom classifier",
"_____no_output_____"
]
],
[
[
"run_models(model_names=[\"custom-classifier\"],\n word2vec=True,\n bart=False,\n args=args,\n train=True)",
"Word2Vec embedding loaded.\n\ncustom-classifier\nLearning answers counts...\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |